Krishna had been trading stocks for years. He considered himself methodical, disciplined—the kind of investor who made decisions based on research and reason. But when he fed his entire Zerodha portfolio history into an AI system one afternoon in a cramped workshop room, the machine delivered a verdict that stopped him cold: "Your best calls are all buys. Your worst calls are sells. You consistently exit early, especially during temporary dips. The classic pattern: you sell on fear, and stocks rebound immediately after."
He sat there, staring at the screen. "I didn't know," he said quietly. And then, more slowly: "Now that it mentions it, it makes sense. I never thought of it that way."
This wasn't supposed to happen. This was supposed to be a technical workshop about extracting data from apps and browsers. Nobody expected therapy.
But that's exactly what unfolded over four hours in a room full of engineers, data scientists, and analysts who came to learn about "mining digital exhaust"—and left having been mined themselves.
The Cyclist Who Rides at 2 AM
Here's what most people don't understand about the data we generate: it's not just information. It's confession.
Consider Shreechand. He's a cyclist. He knows he's a cyclist. What he didn't know—what he couldn't have known without feeding years of ride data into a language model that can surface patterns humans can't see—was why he cycles.
The AI's analysis was brutal in its precision: "You don't ride when you're motivated. You ride more when things are slightly out of control. Your highest volume phases correlate with irregular sleep hours, late night and early morning rides, cramped clusters of activities with almost no recovery days."
It continued: "You cycle most when life is least stable. Not when you feel disciplined, but when you feel restless. That's why your best months were not your best phases in life. They were transitions, identity changes, unfinished chapters. You didn't cycle harder because you were inspired. You cycled harder because you were processing something."
And then the kicker: "Motivated people form habits. Restless people form spikes. Your archive shows spikes."
Shreechand leaned back. "I really do 2 AM rides," he admitted.
How does a spreadsheet of timestamps and distances become a psychological portrait? The answer lies in something computer scientists call "embeddings"—mathematical representations that capture the meaning behind data, not just the data itself. But the more interesting answer is simpler: patterns don't lie, even when we lie to ourselves.
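To make "embeddings" slightly less abstract, here is a minimal sketch (illustrative only, not the workshop's actual pipeline) using the open-source sentence-transformers library: short ride summaries become vectors, and cosine similarity shows which rides the model considers alike. The model name and example rides are assumptions.

    # pip install sentence-transformers
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

    # Hypothetical ride summaries derived from timestamps and distances
    rides = [
        "2 AM ride, 40 km, no recovery day before or after",
        "6 PM weekend ride, 25 km, part of a steady weekly routine",
        "3 AM ride, 55 km, third late-night ride this week",
    ]

    vectors = model.encode(rides)            # one vector per ride
    print(util.cos_sim(vectors, vectors))    # pairwise similarity matrix

The two late-night rides land closer to each other than either does to the routine weekend ride: the "restless spike" pattern, visible as geometry.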
The 4 PM Revelation
Anand, the workshop instructor, had his own reckoning. He'd always assumed he learned primarily through reading. Books, articles, documentation. That's what intellectuals do, right?
Then he uploaded sixteen years of YouTube history.
The model found something he'd never noticed: 48% of all his videos were watched between noon and 5 PM. Not evenings, not weekends—the middle of the workday. "Every day at 4 PM, something unusual happens," the AI narrated in its Gladwell-style analysis. "When people are wrapping up work or checking their phones, Anand is deep in YouTube. Not for entertainment, but as part of a carefully orchestrated ritual."
"I did not know that I sit in the office watching videos," Anand said, genuinely surprised. "I did not think I was learning much from videos, actually. Most of my information I assumed came from reading."
The data disagreed. The data had been watching.
The Grazer in a Platform Built for Bingers
Anirudh's YouTube data told a different story. Over 176 days, he'd consumed approximately 1,500 hours of video from thousands of unique creators. But unlike the algorithm's ideal user, he didn't binge. He didn't re-watch. He didn't loyalty-subscribe.
The AI's verdict was almost admiring: "You are a grazer in a platform designed for bingers. YouTube doesn't know what to do with you."
This is the paradox of digital exhaust. The platforms that collect our data are optimized for certain behaviors—addiction, engagement, return visits. When your patterns don't fit their models, you become illegible to them. But you become fascinating to yourself.
What does it mean to be "unoptimizable"? In an age of algorithmic manipulation, perhaps it's a form of freedom.
The Double-Hump Workday
Bhavesh's calendar analysis revealed what he already knew but had never quantified: "You don't work a 9-to-5. You work an 8-to-12 and a 9-to-1."
With team members split across the US and India, his days had evolved into two distinct chunks with a vast empty middle—a "double-hump work pattern that defies biological rhythms," as the AI put it.
Knowing is one thing. Seeing it rendered as a pattern across 2,516 calendar events is another. The data doesn't judge. It just shows you who you've become.
The Technique Behind the Revelations
How did a room full of people extract confessions from their own devices in a single afternoon?
The workshop introduced three "superpowers" that anyone can learn:
First: Chrome DevTools Protocol (CDP). By launching your browser with a remote-debugging flag, you can give AI coding agents access to any website you can see—including ones that require login. The agent writes code, visits pages, scrolls, extracts data, all while you watch. LinkedIn invitations. WhatsApp messages with reactions. Trading histories. If you can see it, you can scrape it.
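In practice the setup is small. A minimal sketch, assuming Chrome's --remote-debugging-port flag and Playwright's CDP connector on the Python side (the port and URL below are placeholders): the script attaches to your already-running, already-logged-in browser instead of launching a fresh one.

    # Start Chrome yourself first, e.g.:
    #   chrome --remote-debugging-port=9222
    # pip install playwright
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        # Attach to the running browser: existing profile, cookies, logins
        browser = p.chromium.connect_over_cdp("http://localhost:9222")
        context = browser.contexts[0]
        page = context.new_page()
        page.goto("https://example.com/your-data-page")  # placeholder URL
        # Extract anything you can see; here, the text of every link
        links = page.eval_on_selector_all(
            "a", "els => els.map(e => e.textContent)"
        )
        print(links[:10])

Because the script rides inside your own session, there are no API keys and no OAuth dance: whatever renders for you renders for the agent.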
Second: Style Transfer. This is the discovery that how you present information matters as much as what information you present. By instructing AI to write "like Malcolm Gladwell" or "like the New York Times graphics team," the same data becomes dramatically more engaging. One participant noted that the first sentence of a Gladwell-style summary hooked him immediately—solving the cognitive-load problem that makes us avoid our own data.
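Under the hood this is nothing more than a system prompt. A minimal sketch with the OpenAI Python client (the model name and prompt wording are illustrative; any capable chat model works):

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    stats = "48% of videos watched between 12:00 and 17:00; daily peak at 4 PM."

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute your preferred model
        messages=[
            {"role": "system",
             "content": "Write like Malcolm Gladwell: open with a hook, "
                        "build toward a counterintuitive insight."},
            {"role": "user",
             "content": f"Narrate what these viewing stats say about me: {stats}"},
        ],
    )
    print(response.choices[0].message.content)

Same data, different instruction, and the output goes from a table you skim to a story you finish.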
Third: Post-Mortems. After every AI interaction, ask: "What did I do wrong? How could I have improved this conversation? What mistakes did you make, and what should I tell you next time?" Then log it. This is how you train yourself to train machines.
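A post-mortem only compounds if it's written down. One way to make the habit mechanical (a sketch; the file name and fields are arbitrary choices): end every session with the same three questions and append the answers to a running log.

    # Append one post-mortem entry per AI session to a JSONL log.
    import json
    from datetime import datetime, timezone

    POST_MORTEM_PROMPT = (
        "What did I do wrong? How could I have improved this conversation? "
        "What mistakes did you make, and what should I tell you next time?"
    )

    def log_post_mortem(topic: str, lessons: str,
                        path: str = "post_mortems.jsonl") -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "topic": topic,
            "prompt": POST_MORTEM_PROMPT,
            "lessons": lessons,  # paste the model's answer here
        }
        with open(path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    # After a session: ask the three questions, then
    # log_post_mortem("scraping a trading history", "<model's answer>")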
