The Vibe Coding Chronicles
January 11, 2026
Technology & Society

The $10 App That Never Was

How a habit-tracking purchase taught one data scientist that the future of software isn't about writing code—it's about talking to machines

By S Anand · 12 min read

A few years ago, Anand spent $10 on a habit-tracking app for his phone. It was a good app—sleek interface, satisfying animations when you checked off your daily meditation, a streak counter that made you feel like you were accomplishing something. He used it religiously for about a year, tracking his yoga practice, his weight, his reading goals. Then, as happens with most of us, he lost the habit of habit-tracking itself. The app gathered digital dust. The $10 evaporated into the ether of forgotten subscriptions.

What makes this story interesting isn't the wasted money—we've all been there. What makes it interesting is what happened on a Saturday afternoon in January 2026, during a live workshop with hundreds of participants watching. In less than three minutes, using nothing but a few sentences typed into a free website he'd never heard of until that morning, Anand built a fully functional habit tracker. It remembered your streaks. It had visualizations. It worked on any device. And it cost exactly nothing.

"This is craziness," he said to the workshop participants, genuinely baffled by what had just happened. "It took less time to build this application than it took me to find that $10 app."

Welcome to the age of vibe coding.

The complete workshop recording: Applied Vibe Coding

The Great Inversion

Here's a question that would have seemed absurd five years ago: Why should you pay for Microsoft Office if you can just build Microsoft Office?

We're not quite there yet—maybe a year away, maybe two. But the trajectory is unmistakable. And it suggests something far more profound than cheaper software. It suggests that we've been thinking about the relationship between humans and computers entirely backwards.

For decades, the orthodoxy went like this: Computers are powerful but stupid. They can do exactly what you tell them, but you have to speak their language—Python, JavaScript, SQL, whatever arcane syntax the machine gods demand. The humans who learned these languages became "programmers," a priestly class who translated between the world of human intention and the world of machine execution.

"Vibe coding is like building apps by talking to a computer instead of typing out thousands of lines of complicated code."

But what if the computers learned our language instead?

That's what vibe coding is, really. When Anand asked Google's AI mode to explain the concept in simple terms—"like I'm 15 years old," he specified, because who has patience for jargon anymore—it gave him a definition that's worth dwelling on: "Vibe coding is like building apps by talking to a computer."

Talking. Not typing thousands of lines of complicated code. Not debugging semicolons at 2 AM. Just... talking.

"Then why am I typing?" Anand asked rhetorically, realizing he should probably just be dictating his instructions. Which he then started doing, using ChatGPT's voice interface to speak his ideas into existence.

Visual summary: the key concepts from the vibe coding workshop, illustrated as a sketchnote

The Imagination Bottleneck

There's a counterintuitive finding buried in the economics research that most people have missed. Starting from the mid-1980s—at least in the United States, and increasingly in other wealthy countries—social skills have provided a higher boost to salary than math or engineering skills. Communication and creativity have been quietly outpacing technical prowess for four decades.

And now, with AI amplifying this trend, something strange is happening. The bottleneck isn't execution anymore. It's imagination.

"What Excel does is one thing," Anand explained to the workshop participants. "Knowing that this is what I should tell Excel to do—that requires a certain kind of genius in itself."

He paused, letting the implication sink in.

"Doing becomes less important than knowing what to do."

The Gladwell Paradox: If anyone can build an app in three minutes, the value shifts from the builder to the thinker. The question isn't "Can you code this?" anymore. It's "Can you imagine it?"

This is why a participant named Shivam could casually suggest building "a website that turns text into music—extract sentiment and turn it into tasteful background music for content, cinema, social media." In the old world, that idea would have required a team of engineers, months of development, significant funding. In the new world, Anand built a working prototype in about fifteen minutes, while simultaneously working on three other applications.
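To make Shivam's idea concrete: the heart of such a tool is a mapping from sentiment to musical parameters. The sketch below is purely illustrative and is not the prototype built in the workshop; it scores a passage with a tiny stand-in lexicon and converts the score into a tempo and a mode.

```python
# Illustrative sketch only: score sentiment with a tiny stand-in lexicon,
# then map the score to musical parameters. A real prototype would use a
# proper sentiment model and an actual music-generation backend.
POSITIVE = {"love", "joy", "hope", "beautiful", "happy", "win"}
NEGATIVE = {"loss", "fear", "sad", "dark", "alone", "fail"}

def sentiment_score(text: str) -> float:
    """Crude polarity in [-1, 1] based on keyword counts."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def music_parameters(score: float) -> dict:
    """Map polarity to tempo and mode; the ranges are arbitrary choices."""
    return {
        "tempo_bpm": int(70 + 50 * (score + 1) / 2),  # 70 (sombre) to 120 (upbeat)
        "mode": "major" if score >= 0 else "minor",
    }

print(music_parameters(sentiment_score("A dark, sad ending full of loss")))
# {'tempo_bpm': 70, 'mode': 'minor'}
```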

The Monopoly Lesson

But here's where things get really interesting. Another participant, Giridhar, suggested building an application to train people on cybersecurity. Anand's brain immediately started remixing the idea:

"When I say gamified, I mean let's create a Monopoly board. The user can roll a dice, then the player will move to one particular place. And then that will open up asking a multiple-choice question about cyber security. Keep in mind that we are focused on online safety in India, so we want to focus on the most common, maybe the 10 latest scams that people are falling prey to..."

He kept going. The training should be specific to age groups in blocks of seven years. It should consider different professions—the top 10 in India's unorganized sector. A stay-at-home mother should be considered a profession too. The questions should be tailored to each combination of age and occupation.

Within minutes, Claude (one of the AI assistants) had built a working game. It had Hindi and English options. It moved pieces around a board. It asked questions about UPI fraud and fake job offers and SBI phishing calls. And the sad part—or maybe the wonderful part—was that it was "engaging enough that I will sit and play this for another five minutes just because I'm curious about what the questions are."
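To see what Claude had to wire up, here is a pared-down sketch of that game loop, not the workshop-generated game itself: roll a die, move around a small board of scam categories, and surface a multiple-choice question for the square you land on. The squares and the single sample question are placeholders.

```python
# Placeholder sketch of the game loop described above; the squares and the
# single sample question are illustrative, not the content Claude generated.
import random

BOARD = ["UPI fraud", "Fake job offer", "Phishing call",
         "OTP sharing", "Lottery scam", "QR-code scam"]

QUESTIONS = {
    "UPI fraud": ("Do you need to enter your UPI PIN to RECEIVE money?",
                  ["Yes", "No"], "No"),
    # ...a real version would hold a pool per square, per age group, per profession
}

def play_turn(position: int) -> int:
    """Roll a die, move the player, and ask the question for that square."""
    roll = random.randint(1, 6)
    position = (position + roll) % len(BOARD)
    topic = BOARD[position]
    print(f"Rolled {roll}, landed on: {topic}")
    if topic in QUESTIONS:
        prompt, options, answer = QUESTIONS[topic]
        print(prompt, options, "->", answer)
    return position

position = 0
for _ in range(3):  # three demo turns
    position = play_turn(position)
```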

🎯 Streakly: A habit tracking app with streak visualizations, built in under 3 minutes. Remembers your progress, works across devices, completely free.
🎮 Cyber Suraksha: A Monopoly-style game teaching cybersecurity awareness to Indians of all ages and professions. Roll dice, answer questions, learn to spot scams.
🎵 Text to Music: Paste any text—a movie script, a poem, a chat message—and generate background music that matches its emotional tone.
Unicode Converter: Remove the telltale signs of AI-generated text—em-dashes, smart quotes, fancy formatting—and convert to plain ASCII (see the sketch after this list).
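That last converter is simple enough to sketch by hand. The version below is a minimal illustration with a deliberately partial mapping table: it swaps common "AI tells" for ASCII equivalents and strips invisible characters such as zero-width spaces.

```python
# Minimal, deliberately partial sketch: normalise the punctuation that tends
# to mark AI-generated text to ASCII and drop invisible characters.
REPLACEMENTS = {
    "\u2014": "-",    # em-dash
    "\u2013": "-",    # en-dash
    "\u2018": "'",    # left single quote
    "\u2019": "'",    # right single quote
    "\u201c": '"',    # left double quote
    "\u201d": '"',    # right double quote
    "\u2026": "...",  # ellipsis
    "\u00a0": " ",    # non-breaking space
}
INVISIBLE = {"\u200b", "\u200c", "\u200d", "\ufeff"}  # zero-width characters, BOM

def to_plain_ascii(text: str) -> str:
    """Replace fancy punctuation and strip invisible characters."""
    return "".join(
        REPLACEMENTS.get(ch, ch)
        for ch in text
        if ch not in INVISIBLE
    )

print(to_plain_ascii("She said \u201chello\u201d \u2014 then left\u2026"))
# She said "hello" - then left...
```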

Applied Vibe Coding

But vibe coding isn't just about building apps. It's about a broader shift in how we interact with data—what Anand calls "vibe analysis."

"What are the top five data sets that almost anyone will be able to access instantly?" he asked his AI assistant during the workshop. "They will readily have it, and the data is large enough that some amount of data analysis is useful and worth doing."

The answer surprised even him. WhatsApp chat histories. Netflix viewing activity. The file inventory on your local computer. Bank statements. Screen time data.

"There is nothing more boring than using somebody else's data," Anand observed. "The most interesting data sets are our own."

So he downloaded his Netflix viewing history—1,222 unique viewing days spanning from January 2019 to January 2026—and asked ChatGPT to analyze it. Not just summarize it. Tell him something mind-blowing.

"If you share what you vibe coded, others will learn. But more importantly, we learn best when we teach."

What emerged was genuinely surprising. Yes, he watched more on Sundays (12.8% lift) and less on Wednesdays. That was predictable. But the AI identified something he called "guilt buffering behavior"—delaying gratification through the work week and overcompensating on weekends. A "disciplined mind blowing off steam."

July 2021 was the peak binge month. COVID's aftermath. May 2020 was second—the deep lockdown era. The AI even fact-checked itself, noting that Sunday pulling ahead of Wednesday "is really chance. Not statistically significant." It found two errors in its own analysis and corrected them.
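For readers who would rather check the arithmetic themselves than take the AI's word for it, here is a rough pandas sketch of the two headline calculations: day-of-week lift and peak binge months. The filename and column names are assumptions based on Netflix's viewing-history export; adjust them to match your own file.

```python
# Rough sketch in pandas. The filename and column names ("Title", "Date")
# follow Netflix's simple viewing-history export and are assumptions.
import pandas as pd

df = pd.read_csv("NetflixViewingHistory.csv")
df["Date"] = pd.to_datetime(df["Date"], errors="coerce").dt.normalize()
days = df.dropna(subset=["Date"]).drop_duplicates(subset="Date")[["Date"]]

# Day-of-week lift: each weekday's share of viewing days vs a uniform 1/7
share = days["Date"].dt.day_name().value_counts(normalize=True)
lift = (share / (1 / 7) - 1) * 100
print(lift.round(1).sort_values(ascending=False))  # e.g. Sunday near +12.8

# Peak binge months, by count of distinct viewing days
per_month = days.groupby(days["Date"].dt.to_period("M")).size()
print(per_month.sort_values(ascending=False).head(3))  # e.g. 2021-07 on top
```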

Then it generated a data story—a scrolling, interactive narrative with visualizations, the kind of thing that would have taken a data visualization team weeks to produce. Using a prompt Anand had refined, drawing on his 20 years of experience in the field, the AI wrote like Malcolm Gladwell and drew like the New York Times data team.


The Lethal Trifecta

Of course, with great power comes great potential for catastrophe. A workshop participant asked about cyber threats in vibe-coded applications. Anand's answer referenced what security researcher Simon Willison calls the "Lethal Trifecta"—a framework for understanding when AI systems become genuinely dangerous.

The trifecta has three components: private data access, the ability to communicate externally, and exposure to untrusted content. Any two of these is fine. All three together is a security nightmare waiting to happen.

The Lethal Trifecta (Simon Willison's framework for AI security risk): private data access, external communication, untrusted content. ⚠️ Pick any two. Never all three.

Consider an AI application that has access to your bank data and can send emails. If you paste in content from a stranger's email—which might contain hidden instructions—that content could potentially extract your private data and send it externally. The attack vector isn't the AI itself; it's the combination of capabilities.

"What constitutes untrusted content is very broad," Anand warned. "If I have downloaded a PDF that one of my colleagues sent me, is that my own content? What if I generated that PDF myself but had done it by copy-pasting from ChatGPT? There are invisible characters that we cannot see that AI can see."

The solution isn't to stop using these tools—the utility is too high, the genie too far out of the bottle. The solution is to understand the risks and design systems accordingly.
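One way to design systems accordingly is to make Willison's rule of thumb explicit in whatever glue code sits around the AI. The snippet below is a toy illustration of the check, not anything shown in the workshop.

```python
# Toy illustration, not from the workshop: model an agent's capabilities as a
# set and refuse any configuration that combines all three trifecta legs.
TRIFECTA = {"private_data", "external_communication", "untrusted_content"}

def is_acceptable(capabilities: set) -> bool:
    """Any two trifecta capabilities may be tolerable; all three are not."""
    return len(TRIFECTA & capabilities) < 3

print(is_acceptable({"private_data", "external_communication"}))  # True
print(is_acceptable(TRIFECTA))                                     # False
```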

Managing Your AI Team

Here's another surprising thing about the workshop: Anand ran multiple AI systems simultaneously, bouncing between Claude, ChatGPT, Gemini, and various vibe coding platforms like Base44. At one point, he admitted to being "definitely confused because we are running so many things in parallel."

This, he suggested, is a skill we all need to develop. "It's like managing a team. Each team member is doing something different. Humans at least take half an hour, two hours, four hours, a few days, they have holidays before they come back and ask us for the next job. These things finish it in 10 minutes, 15 minutes max."

If we're lucky, they take an hour. Which means managing an AI team is actually more stressful than managing human employees. The machines never need coffee breaks.

When things don't work—and they don't work about two-thirds of the time on first attempt, by Anand's estimate—the approach is simple: Try to fix it once or twice, then give up and start over. Sometimes the exact same prompt in a different tab will work. Sometimes the idea is just too complex and needs to be broken down.

"Learn to give up," he advised. "Don't try too hard. Just abandon it, move on, try something else. That is usually higher ROI than getting stuck on something and trying to get it to work."

The AI Management Paradox: Because AI assistants work so fast, managing them effectively is actually more cognitively demanding than managing human teams. You need to develop the skill of "constantly giving it work"—a strange new form of productivity.

The Aha Moment

So what does it all add up to? The habit tracker built in three minutes. The cybersecurity Monopoly game that's actually fun to play. The text-to-music converter. The Netflix data story that revealed patterns invisible to the human eye. The Unicode converter that strips away the telltale signs of AI-generated text.

Here's the counterintuitive thesis: We're not witnessing the automation of programming. We're witnessing the democratization of imagination.

For decades, ideas were cheap and execution was expensive. Everyone had app ideas; few could build them. Now execution is becoming cheap—or at least much cheaper than it was. Which means the bottleneck shifts upstream, to the ideas themselves.

But even that's not quite right. Because as the workshop demonstrated, you can ask the AI for ideas too. "If you don't know what to vibe code, ask ChatGPT. It will give you ideas."

So what's left? What's the irreducible human element in this brave new world of talking to machines?

Maybe it's this: The ability to recognize what matters. To know which of the thousand ideas the AI generates is the one worth pursuing. To understand that your own data—your Netflix history, your weight loss journey, your WhatsApp conversations—contains insights that someone else's Iris dataset never will. To have the judgment to stop when something isn't working and the curiosity to try something unexpected when it is.

That $10 habit-tracking app wasn't a waste. It was an education. It taught Anand that he could stick with something for a year before getting bored. It showed him what features mattered and which were gimmicks. It gave him the vocabulary to describe what he wanted when he finally built his own version in three minutes flat.

The machines learned our language. But we still need to know what to say.

Three Takeaways

1. Practice vibe coding every day for one month. That's what it takes to build the habit. Force yourself, and your brain will get trained to think of vibe coding solutions where you might not have before. Aim for 10 applications in a single session.
2. Learn to give up gracefully. If something fails after one or two attempts, abandon it. Start fresh. The same prompt in a different tab might work. The ROI of persistence is often negative.
3. Share what you vibe code. Not just because others will learn—though they will—but because teaching is how you learn. If you can explain vibe coding to your kids, your neighbors, your grandparents, you've truly understood it.

The workshop ended with dozens of applications built by participants—finance trackers, book recommendation engines, travel planners, a "scroll breaker" to prevent doom scrolling, an app that buries your emotions as fossils in a geological timeline (yes, probably AI-generated, but still). Some worked perfectly. Some didn't. All of them represented something that would have been impossible a few years ago: ordinary people, many without programming backgrounds, building functional software by describing what they wanted.

"The success of this workshop," Anand had said at the beginning, "is when you put a link in the chat window saying 'Here is an application I vibe coded in this workshop.'"

By the end, the chat window was full of links.
