I use AI like an intern
A "chotu". Plumber. Waiter. Secretary. Banker.
The Chotu Principle
In Indian shops and restaurants, there's a ubiquitous figure: the chotu—the young helper, the errand boy, the person you send to fetch things, carry things, sort things out. Anand's central metaphor was disarmingly simple. "I use AI like a chotu," he told the room. "A plumber. A waiter. Sometimes secretary, sometimes banker."
Not a genius. Not an oracle. Not the terrifying superintelligence of science fiction. Just a helpful kid you can send on errands. The framing mattered enormously, because it immediately dissolved the anxiety that fills most corporate AI presentations—the existential dread, the "will it take my job" paralysis. A chotu doesn't take your job. A chotu does the stuff you shouldn't be wasting time on.
Consider the Seoul bathroom saga, which didn't end with the sink. Next to the commode sat a button labeled "Emergency"—unlit, innocuous. Anand pressed it. Lights dimmed. Strange sounds emerged. He consulted ChatGPT again. The response was not encouraging: "You cannot turn it off."
So he called reception. The receptionist spoke Korean. Anand spoke English. Impasse. But then—and this is where the story shifts from comedy to something genuinely remarkable—he activated ChatGPT's Advanced Voice Mode: "Translate everything I say into Korean."
"Phone here, and I'm talking to it saying, 'Please tell this guy I have this thing, I turned on the emergency button but there really is no emergency, is everything okay?'" [Korean sounds]. "Ah." "Thank you." He replies in English.
The room laughed. But the point landed. This wasn't a carefully engineered enterprise solution. This was a person, in a bathroom, in a foreign country, using a $20/month subscription to solve a real problem in real time. The plumber, the translator, the secretary—all in one device, all in one conversation.
The $20 Revelation
If Anand had one commandment, one non-negotiable starting point, it was this: pay for AI.
Use paid AI
Buy any PAID subscription to ChatGPT, Gemini, or Claude and keep it on your phone.
"I have not encountered something with a higher ROI than this," he declared, with the conviction of someone who has tested the claim extensively. The gap between the free and paid tiers isn't a marginal improvement. It's a categorical difference—like the gap between a bicycle and a car. Both have wheels, but one fundamentally changes what's possible.
To prove his point, he pulled up a visualization he'd built from OpenAI's GDPval research—a study where experts designed tasks, then both experts and AI attempted them, and experts judged the results. The resulting treemap was a wake-up call rendered in colored rectangles:
"Software developers—which is a profession I closely associate myself with; I am a coder at the end of the day—it says AI is beating you 70% of the time. Meaning only 30% of the time, the code written by humans—experts, remember… is better."
Seventy percent. Not on trivial tasks, either. On complex ones—cryptography mixers with Web3 front-ends. The kind of work that would have required a team and a timeline not long ago.
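For readers who want to reproduce the idea, here is a minimal sketch of such a treemap in Plotly. Every number below is an invented placeholder except the roughly 70% software-developer figure quoted above; the profession list and task counts are illustrative, not from the study.

```python
# Sketch of a GDPval-style treemap: rectangle size = number of tasks,
# color = how often the AI's output beat the human expert's.
# All figures are illustrative placeholders except software developers
# (~0.70), which is the number quoted in the talk.
import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "profession": ["Software developers", "Personal financial advisors",
                   "Editors", "Lawyers", "Registered nurses"],
    "tasks": [120, 60, 45, 80, 50],                  # hypothetical task counts
    "ai_win_rate": [0.70, 0.58, 0.45, 0.35, 0.25],   # hypothetical except 0.70
})

fig = px.treemap(df, path=["profession"], values="tasks",
                 color="ai_win_rate", color_continuous_scale="RdYlGn_r",
                 title="How often AI beat the human expert (illustrative)")
fig.show()
```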
But here's the twist that made the audience lean forward. Anand didn't respond with despair. He responded like a pragmatist. He scanned the treemap for professions where AI was better than human experts, and he started hiring.
"Personal financial advisors seem to be doing worse than humans. Very good. I had some money." He went straight to ChatGPT. Conducted an interview. Got advice. Followed it. "The single largest financial decision that I made is entirely thanks to ChatGPT." Or Claude. Or whatever. The brand barely matters. The capability is what matters.
The Dangerous Comfort of Underuse
This is where Anand diverged from the standard corporate AI talk. Most speakers hedge. They talk about "responsible adoption" and "measured integration." Anand went the other direction entirely.
Over-use it.
Under-use is riskier!
You will lose skills. Like long division and hunting.
Learn new ones. Have 50 chats / day.
"Won't your brains soften?" he asked, channeling the audience's unspoken anxiety. "Yeah. Won't you stagnate because skills—won't you stop learning? True."
And then the reframe: "I'm sure people would have told the same thing about when people moved to agriculture. Saying, 'Ooh, won't your hunting skills stagnate?' When we moved to calculators, saying, 'Won't your arithmetic skills stagnate?'"
They did stagnate. And nobody cares, because once we started using calculators, we began doing far more complex mathematics. The skill loss was real, and it was worth it.
"The bigger risk is underusing AI, not overusing AI." The room stirred. This wasn't a nuanced both-sides argument. This was a dare. Use it more. Use it recklessly. Have fifty conversations a day. Figure out what skills atrophy and what skills emerge, and bet on the emerging ones.
The Morning Walk Revolution
Here is where the talk took its most surprising turn—toward walking.
Talk to it. Literally
Voice input is incredibly effective,
especially while on walks.
Anand's second commandment: talk to AI. Literally. Out loud. With your voice. Especially while walking.
He was wearing a custom t-shirt with a half-human, half-AI portrait of himself and the title "LLM Psychologist." The entire design—concept, vendor research, ordering—was done during a single morning walk, talking to ChatGPT.
"I told ChatGPT, 'Look, I'd like some personalized t-shirts,'" he explained. The AI guided him through a voice-based procurement workflow: ten questions, asked one by one, while he rambled on a sidewalk. It identified Printo as a vendor, found next-day delivery for 100 rupees extra, priced the shirt at 700 rupees. Custom-made. One hour. One walk.
"These morning walks are becoming very productive lately."
But the t-shirt was just the appetizer. The main course was a slide deck.
He had a session at 9:30 AM. He hadn't prepared. At 8:00 AM, he started his morning walk and told ChatGPT: "I want to create an insightful deck in markdown on how I've been using LLMs in education." But here was the key move—the one that separated casual AI use from expert-level delegation:
"In this conversation, I'd like you to interview me. Ask me questions one by one. Take my inputs. And then give me the slides. You read out the slides one by one so that I can review, tell you if this sounds good. If not, I will correct."
By 9:15, the entire deck on LLMs in education was done. Generated from the voice conversation. Only the images were added in the last fifteen minutes. One walk. One deck. Zero typing.
And then there was the Retraction Watch data story, born not even from a conversation with AI but from a recorded phone call with a colleague named KG. Record the call. Transcribe with Gemini. Pass the transcript to Claude: "Do whatever KG asked you to do." The result was a full investigative data story about academic paper retractions, revealing that the American Society for Biochemistry and Molecular Biology took eight years on average to retract papers, while IEEE managed it in 41 days.
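A rough sketch of that relay, assuming the google-generativeai and anthropic Python SDKs. The model names, the audio file name, and the prompts are stand-ins, not what Anand actually ran:

```python
# Sketch of the relay: recorded call -> Gemini transcript -> Claude acts on it.
# Model names, file names, and prompts below are assumptions.
import anthropic
import google.generativeai as genai

genai.configure(api_key="GEMINI_API_KEY")
audio = genai.upload_file("call_with_kg.mp3")        # the recorded phone call
transcript = genai.GenerativeModel("gemini-1.5-flash").generate_content(
    ["Transcribe this call verbatim, labelling the speakers.", audio]
).text

claude = anthropic.Anthropic(api_key="ANTHROPIC_API_KEY")
story = claude.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=4000,
    messages=[{
        "role": "user",
        "content": f"Here is a call transcript:\n\n{transcript}\n\n"
                   "Do whatever KG asked you to do.",
    }],
)
print(story.content[0].text)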
The insight that arrived almost as an aside—"So the moral of the story is: talk to it. But also, by the way, walk. Walking is also a good thing."—concealed something profound. Voice interaction changes the cognitive dynamic entirely. When you type, you edit as you go. When you talk, you ramble. And rambling, it turns out, gives AI far richer context to work with.
The Deck Problem
Before the talk, Anand had surveyed about thirty Applied Materials employees. The question: "What's one repetitive task in your work that takes 30 or more minutes and you wish could be automated?"
He didn't read the responses himself. He is, as he keeps reminding everyone, not the intern—the AI is. He fed the responses into a clustering tool he'd vibe-coded on a morning walk and let it find the patterns.
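His actual tool wasn't shown, but a minimal version of that clustering step might look like this, assuming sentence-transformers embeddings and k-means; the sample answers and the cluster count are invented:

```python
# Sketch: cluster free-text survey answers to surface the common chores.
# Embedding model, cluster count, and sample answers are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

responses = [
    "Creating weekly project decks for leadership",
    "Making customer-facing PowerPoint reports",
    "Writing up the daily standup notes",
    "Summarizing test results every Friday",
    "Formatting the monthly quality report",
    "Copying data into status slides",
]  # in reality, the ~30 survey answers

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(responses)
labels = KMeans(n_clusters=3, n_init="auto", random_state=0).fit_predict(embeddings)

for label, text in sorted(zip(labels, responses)):
    print(label, text)
```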
The result was equal parts predictable and devastating:
"High-tech company working at the edge of AI, the single most common 30-minute chore is creating decks. We can do better than that."
Creating presentations. Weekly project reports. Customer reports. Daily standups. The engineers who fabricate the future of computing were spending their most precious hours making PowerPoints. The irony was not lost on anyone in the room.
He then fed the same survey results to Gemini with a prompt refined by experience: "Give me a beautiful McKinsey-style slide deck. Make it content-rich. Make sure that people who read it can figure it out by themselves. I want nice icons. I want nice fonts. Use images where applicable and give it to me as an HTML application."
He also generated a sketchnote version—"visually rich, intricately detailed, colorful, funny"—and this led to one of the talk's funniest and most perceptive moments.
Visual sketchnote summary of repetitive tasks
"People tell me, 'Anand, this is so nice… but I can't take this to a business review meeting.'" He paused. "What do you do? What people often miss is that the person at the other end is also a human being like us. They are not worried that they don't like it; they are worried about who they have to show it to."
The audience laughed. Recognition humor. Everyone has been that person—wanting to show something fun but fearing the chain of approvals above them.
The Adoption Trick
And then came the strategic insight, delivered almost casually:
Key Insight
"When something is not substituting an existing one, there is no competition." The trick to enterprise AI adoption is to stay away from competition. Don't replace existing workflows. Create new ones that solve better problems. Over time, people will see which is better. The old workflow will die on its own.
Don't replace the boring weekly report. Add a podcast version generated from the same data. The podcast tool he demonstrated created conversational audio summaries—two AI characters, Alex and Maya, discussing the data in an engaging way.
"At least three CIOs have come and told me, 'Look, what we want is a weekly report, but I don't want to have to sit and read it.'" He let the implication settle. "They can't read the junk that we are producing." We can't read it either, but we produce it. They don't want to read it, but they receive it. "Give it to them as an interesting podcast. Spice up their lives."
The audience laughed again. This was comedy, but it was also insurgency—a quiet call to bypass the bureaucratic immune system by offering something supplementary rather than threatening.
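Anand didn't share the podcast tool's internals, but the shape of such a generator is straightforward. A sketch assuming OpenAI's chat and text-to-speech endpoints, where the voices, model names, and file names are all placeholder choices:

```python
# Sketch: turn a weekly report into a two-host podcast.
# Model names, voices, and file names are placeholder assumptions.
from pathlib import Path

from openai import OpenAI

client = OpenAI()
report = Path("weekly_report.md").read_text()

# Step 1: have an LLM write the dialogue between the two hosts.
script = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content":
               "Turn this weekly report into a lively two-host podcast script "
               "between Alex and Maya. Prefix every line with ALEX: or MAYA:\n\n"
               + report}],
).choices[0].message.content

# Step 2: render each line with a different text-to-speech voice.
for i, line in enumerate(script.splitlines()):
    if ":" not in line:
        continue
    speaker, text = line.split(":", 1)
    if not text.strip():
        continue
    voice = "onyx" if speaker.strip().upper() == "ALEX" else "nova"
    audio = client.audio.speech.create(model="tts-1", voice=voice,
                                       input=text.strip())
    audio.write_to_file(f"segment_{i:03d}.mp3")  # stitch the segments afterwards
```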
The Malcolm Gladwell Book-Reading Method
One of the most delightful sections of the talk concerned how Anand reads books. Remember those photos of bookshop shelves he feeds to AI for cataloging? That was just step one. Here was his reading prompt, used with a live Gemini session:
"Comprehensively and engagingly summarize, compare, and fact check in Malcolm Gladwell's style, ELI15, the following books."
Four decisions packed into one prompt, each revealing years of refined thinking:
Style transfer. "Write in Malcolm Gladwell's style." Because if content is boring, why suffer through it in boring prose? Read it in a style you enjoy. Pick your author. Transform the experience.
ELI15. Not ELI5 (Explain Like I'm 5)—that was too simplistic. ELI15 hits the sweet spot. "That's my mental age," he quipped. "I haven't really grown beyond that way of thinking." The AI understands the gradation perfectly.
Comparison. Don't summarize books in isolation. Compare them against each other. Where do they agree? Where do they contradict?
Fact-checking. This was the bombshell. He'd read Angela Duckworth's Grit and loved it. The AI's fact-check revealed: Grit is approximately 80% identical to the psychological trait called Conscientiousness from the Big Five framework. Not new. And grit helps in mature, stable domains—but in fast-changing ones, "it is called stubbornness and is a problem." Instead, he recommended Range by David Epstein. The Tiger Woods approach vs. the Roger Federer approach: specialization vs. breadth. Where the rules keep changing, breadth wins.
The Verification Superpower
Use it to verify,
not just generate
LLMs hallucinate. But using LLMs to verify is safe.
If there was one section that could change how an entire organization thinks about AI, it was this one.
Everyone knows LLMs hallucinate. It's the first objection in every boardroom. But Anand reframed the problem entirely: "Verification is the ultra-safe method of introducing AI into almost any kind of process."
The logic is elegant. If AI generates something, hallucination is dangerous—you might act on false information. But if AI verifies something, what's the worst case? "At worst, it can waste a little bit of my time saying 'Oh, check this,' 'Oh, it was actually correct.' Big deal! I don't mind."
Don't disrupt your existing process. Just add an AI checker alongside your human checkers. Then watch what happens.
He showed the data to prove it. In a classification problem, one model had a 14% error rate. Double-checking with two models cut that to 3.7%. Triple-checking: 2.2%. Five models: 0.7%. When the models disagreed, the item was routed to manual verification; but instead of checking 100% of items manually, humans now checked only 28%.
"72% saving of effort. 99.3% quality. I'll take that."
The compounding effect was the key insight. Verification isn't just additive—it's multiplicative. Each additional AI checker doesn't just catch more errors; it makes the remaining human effort dramatically more efficient.
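A toy model makes the trade-off easy to see. Assume k checkers that each err independently with probability p, and route any disagreement to a human; real models are correlated and heterogeneous, so Anand's measured numbers won't match this idealization, but the shape is the same: residual error collapses far faster than the human workload grows.

```python
# Toy model of verification-by-ensemble: k checkers, each independently
# wrong with probability p. Unanimous answers are accepted; disagreements
# go to a human. For a binary label, only a unanimously wrong answer slips
# through (with more classes, wrong answers may still disagree).
def ensemble(p: float, k: int) -> tuple[float, float]:
    slip_through = p ** k                      # all k wrong together
    unanimous_right = (1 - p) ** k
    manual_load = 1 - unanimous_right - slip_through
    return slip_through, manual_load

for k in (1, 2, 3, 5):
    err, manual = ensemble(0.14, k)
    print(f"k={k}: residual error {err:.3%}, routed to humans {manual:.0%}")
```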
Based on this approach, an intern—with no domain expertise—found errors in NCERT history textbooks and bugs in open-source Python libraries. The intern submitted a pull request. The maintainer agreed: "Yeah, you're right. This is a bug. But it turns out that the entire file was not required. Thank you."
The Data Story, Not the Dashboard
As the talk neared its conclusion—having already gone well past its allotted time, with the host saying "on public demand, please continue; we'll cut the workshop a little"—Anand made his case for code as AI's greatest superpower.
Use code for analysis
Code is deterministic. LLMs code well.
Don't ask for analysis. Ask for code that analyzes.
"Don't ask for analysis," he urged. "Ask for code that analyzes." The distinction is critical. When you ask an LLM to analyze data directly, it might hallucinate numbers. But when you ask it to write code that runs the analysis, the code is deterministic—if it compiles and runs, there's a good chance the output is correct. If it fails, it fails spectacularly and obviously.
He demonstrated with a weekly-report data story built entirely by telling ChatGPT: "Create sample data files, then write Python code to generate a data story from them." The result was an interactive narrative showing, for instance, that teams with higher AI adoption had significantly faster deployment lead times.
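The distinction is concrete. Below is the kind of deterministic snippet you would ask the model to write instead of asking it for conclusions; the file name, columns, and adoption bands are invented for illustration, not from his demo:

```python
# Sketch of LLM-written analysis code: the numbers come from the data,
# not from the model's memory. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("deployments.csv")  # e.g. team, ai_adoption_pct, lead_time_days

df["adoption_band"] = pd.cut(df["ai_adoption_pct"], bins=[0, 33, 66, 100],
                             labels=["low", "medium", "high"])
summary = df.groupby("adoption_band", observed=True)["lead_time_days"].agg(
    ["count", "median", "mean"])
print(summary)
```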
And then the line that drew knowing laughter:
"Dashboards are for people who don't know what they want, built by people who don't know what they want. Data stories are for people who say, 'I have a question. I don't have time to read useless stuff. Make it interesting. Tell me what I need to do.'"
The Security Question
Of course, the elephant in the room at any semiconductor company: data security. If you're sending proprietary information to OpenAI, aren't you leaking IP?
Private > public models
Use AMAT-approved AI. No IP / data leak / security risk.
Anand's answer was pragmatic and, to the local-model evangelists, borderline heretical: "I think running a local model is an inferior solution."
Why? The best models are at least six months ahead of open-source alternatives, which are themselves months ahead of anything built in-house. Instead, use cloud providers—Google, AWS, Azure—who sign enterprise agreements guaranteeing your data won't be used for training. Same protections as your existing cloud infrastructure. Use your organization's approved AI tools, liberally and without worry.
The Standing Ovation Moment
There was a moment during the talk when Anand, who had been going long, paused and asked the host, somewhat sheepishly, "Am I going crazily over time?"
The host's response: "No no, don't skip anything, this is good. I think everyone will agree."
And then, more revealingly: "One of our meetings is actually interesting!"
The audience laughed. It was the kind of laugh that carries recognition—the admission that most corporate presentations are endured, not enjoyed. Anand had managed the rare trick of making a technology talk feel like a story worth hearing. Not because the technology was particularly novel—most people in the room had heard of ChatGPT. But because he made it personal. Every example was from his own life. Every anecdote was specific. The Seoul bathroom. The 700-rupee t-shirt. The morning walk that produced a slide deck. The unread survey fed directly to a clustering algorithm.