SKETCHNOTE · AI Workshop at LBSNAA
Picture thirty of India's most capable administrators sitting in a training room in Mussoorie, the hill-station where mist rolls in from the Himalayas and the air smells of cedar and distance. These are IAS officers in the middle of their careers — people who run districts, draft policy, manage crises. The date is April 8, 2026. And Anand S, LLM Psychologist at Straive, is about to say something that will make them laugh and then think very hard.
He begins with a confession. Every senior executive he talks to tells him: "We have the most advanced AI in our organisation. We are at the forefront of policy." Then Anand asks what they personally use. The answer, almost invariably: "No, I don't use AI."
"Our team uses advanced AI. We are at the forefront." Then I ask them what they use. "No, I don't use AI." You're not part of the team? There seems to be a bit of a dichotomy — sometimes even within the same person.
— Anand S
This is the paradox he has come to Mussoorie to dissolve. Not with lectures about the future of work, not with warnings about automation, but with one deceptively modest objective:
Use AI daily in novel ways
That's our objective today.
Not to master AI. Not to understand transformers or prompt engineering theory. Just to use it. Today. In a new way. Once. That single bar, Anand knows from experience, is the one that most people need help clearing.
The Toilet in Seoul
He starts with a story. He is standing in a hotel bathroom in Seoul, South Korea. He cannot figure out how to flush the toilet. For ten minutes, he presses, pulls, twists. Nothing. Finally, he does something that would have seemed surreal three years earlier: he takes a photo and sends it to ChatGPT.
"Press it," says ChatGPT. He presses it. The water drains. He is saved.
But the night is not over. Still in the same bathroom, he accidentally presses what turns out to be an emergency alarm. A siren starts. Lights flash. The front desk calls. The caller speaks only Korean. Anand speaks no Korean. They are at an impasse — until he holds his phone between them and says: "Translate everything I say into Korean." For one surreal minute, ChatGPT's voice mode serves as interpreter. The desk clerk says "Ah, okay, no problem." Crisis averted.
ChatGPT hears best.
Gemini speaks best.
Speak in Hinglish, Hindi, Tamil, …
Share your chat.
Voice, Anand argues, is one of the most underused superpowers in the AI toolkit. ChatGPT does the best job of understanding voice — particularly in mixed languages. Gemini speaks with better intonation. And both can handle Hinglish, Hindi, Tamil, and more. So he asks the room: open ChatGPT on your phones. Right now. Say anything.
He demonstrates, speaking rapid Hindi: a question about where to spend an evening in Mussoorie. The app transcribes, responds, suggests. He holds up the screen. People in the room start muttering to their phones. Within two minutes, messages are pinging in the WhatsApp group he has set up for the session. Ramesh. Manish. Robert. The first three chats, already shared.
A big chunk of my discussions with people are: somebody asks me a question, I don't know the answer. I ask ChatGPT, and I send them the answer.
— Anand S
By the time Anand looks up, twenty different conversations are running. Officers are asking about restaurant recommendations, transport policy, medical queries, algorithm complexity. Everything, at once, in three languages.
The second superpower is the camera. Anand is a vegetarian who travels frequently. In Singapore, where restaurant menus rarely label dishes clearly, he photographs the menu and asks: "Which of these should I have if I want vegetarian dishes?" Practical. Immediate. No waiter required.
Then he shows them something odder: he photographed his own palm and asked ChatGPT to perform a palmistry reading. The model — which has, as Anand notes, "read all the books on palmistry, including the pictures" — delivered a careful analysis. "A strong-minded, idea-rich, independent, imaginative, self-directed person whose life is organized less around stability and more around meaningful mental engagement."
And then, unprompted, it began praising its own accuracy. Anand deadpans: "Interesting, but not particularly useful." The room laughs.
An audience member raises his hand. He photographs his elderly mother's medical reports and asks the model to explain them in simple terms. It does, beautifully. Anand nods.
In terms of diagnosis, Claude, ChatGPT, etc. are better than 70% of the doctors. So, totally.
— Anand S
Use Camera
Claude understands better.
ChatGPT is more diligent.
Photograph any document, screen, object, …
Share your chat.
He states it plainly, because it still surprises people: large language models are not just language models. They are vision models. They can see — and often see things we miss.
Now Anand pulls up a chart. It is a map of every major AI model in the world — plotted by cost on the x-axis and capability on the y-axis. The story it tells is almost impossible to believe unless you see it.
In 2023, the best models cost $30 per million tokens — roughly the equivalent of reading all of Harry Potter once. Today, newer models do the same job for two cents. That is not an incremental improvement. That is 1,500 times cheaper.
That's roughly the difference between spending 1.5 lakhs versus 100 rupees. And next year that will become 45 rupees. The year after that, 4.5 rupees.
— Anand S
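The arithmetic behind that quote is easy to verify. A quick sanity check (the prices are the figures cited in the talk, not live API rates):

```python
# Price collapse cited in the talk: $30 per million tokens in 2023
# versus $0.02 per million tokens today. Figures are from the
# article, not current published pricing.
old_price = 30.00   # USD per million tokens, best models in 2023
new_price = 0.02    # USD per million tokens, comparable quality today

ratio = old_price / new_price
print(ratio)  # 1500.0, i.e. "1,500 times cheaper"

# The rupee comparison follows the same ratio:
# 1.5 lakh rupees = 150,000; 150,000 / 1500 = 100 rupees.
print(150_000 / ratio)  # 100.0
```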
And the quality? It has kept pace. In March 2023, the best AI was performing at the level of a high school student. By late 2025, Gemini 2.5 Pro matched a tenured professor on average across benchmarks. By the time of this workshop, some models were reliably outperforming that professor.
Anand points to OpenAI's sector-by-sector study. Accountants, auditors, software developers: AI already outperforms humans on many core tasks. Civil engineering and financial management: humans still lead. But the map is shifting every quarter.
There is no better investment today that I can think of than the $20 that one pays for ChatGPT or Claude. You effectively have a thousand professors sitting in your phone. Now imagination becomes the bottleneck.
— Anand S
Use frontier models
Buy paid versions of ChatGPT, Claude or Gemini.
Always use the best model each offers.
He walks the room through the settings. On ChatGPT: choose Thinking, set to Extended. Yes, it will take ten minutes to answer a question. Yes, it is worth the wait. On Claude: stick to Sonnet. "It is almost as good as the best and pretty fast. It is very rare that you will have to go outside of Sonnet."
This is a one-time change, he says. But it is the difference between talking to a high school student and a professor.
"Claude has style. ChatGPT has rigor. If I want it correct, I go to ChatGPT. If I want it classy, I go to Claude. That's it."
— Anand S, on choosing between models
The Truth Machine
The third exercise of the day is fact-checking — and it is here that the room begins to see what AI can really do for administrators.
Anand had taken a photo of the training programme's own schedule and asked ChatGPT to find errors. After seven minutes and forty-three seconds of extended thinking, it surfaced something small but precise: Prof. K VijayRaghavan's name had been printed as "Vijay Raghavan" — with a space — contrary to his official usage.
Then Anand pulls up that morning's Press Information Bureau release about the PMMY. He copies it into ChatGPT with a single instruction: "Fact check this press release. Cross-check all claims against authoritative sources. Where do the claims align? Where do they diverge? Where is the methodology unclear?"
He does not read the result. This is important. He applies what he calls the Henry Kissinger technique: Kissinger would ask anyone who submitted a report, "Is this your best work?" They would go back and triple-check. Eventually, satisfied, they would return. Only then would he read it. So Anand gives ChatGPT the same treatment: "Double-check all the items you mentioned as diverging. Fact check, did you flag them correctly? Revise if required and tell me any additional mistakes."
The result: the press release had used "disbursed" where the underlying data said "sanctioned." A small distinction, but in policy documents, one that matters.
The beauty of this process is that mistake-spotting is harmless. If it gets it right, good, we have a benefit. If it doesn't get it right, no harm done. Anyway, we are going to double-check.
— Anand S
Fact-check with AI
It's harmless.
Verifying is easy.
Fact check news, policy advice, Dept website, …
Share on chat.
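The two-pass routine Anand describes maps naturally onto a scripted workflow. Below is a minimal sketch of how that conversation could be assembled programmatically; `ask`-style model calls are deliberately left out, and `build_fact_check_conversation` is an illustrative helper, not anything shown in the session. The prompt wording follows the talk.

```python
# A minimal sketch of the two-pass "Kissinger" fact-check flow.
# No model is called here; the point is the conversation structure:
# first pass, model's draft answer, then a forced self-review.

FIRST_PASS = (
    "Fact check this press release. Cross-check all claims against "
    "authoritative sources. Where do the claims align? Where do they "
    "diverge? Where is the methodology unclear?"
)
SECOND_PASS = (
    "Double-check all the items you mentioned as diverging. "
    "Did you flag them correctly? Revise if required and tell me "
    "any additional mistakes."
)

def build_fact_check_conversation(document: str, first_answer: str) -> list[dict]:
    """Assemble the two-pass message history sent to a chat model."""
    return [
        {"role": "user", "content": f"{FIRST_PASS}\n\n{document}"},
        {"role": "assistant", "content": first_answer},
        # Only after the model answers once do we ask it to re-verify,
        # mirroring Kissinger's "Is this your best work?" loop.
        {"role": "user", "content": SECOND_PASS},
    ]

msgs = build_fact_check_conversation("PIB release text goes here", "draft findings")
print([m["role"] for m in msgs])  # ['user', 'assistant', 'user']
```

The design choice worth noting: the re-verification request goes into the same conversation, so the model reviews its own earlier answer rather than starting fresh.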
Someone asks what happens when AI makes a mistake in the fact-check itself. Anand is candid. GPS used to send him to the middle of a driveway in Manipal. He argued with the auto driver: "Google Maps says go here." He paid 150 extra rupees. Eventually GPS improved. His navigation skills worsened. He does not need to know every route anymore. The same principle applies here.
The room immediately tries. Sanjeev uploads Bihar electricity regulation tariff orders and asks the model to find inconsistencies. An audience member discovers a document on forest cover where the AI not only flags figures but suggests a better reference — 45% green cover versus the document's 71%, both defensible depending on methodology. Another fact-checks a news story they read that morning.
By mid-morning, the room has begun to understand one of Anand's central arguments: the difference between AI-as-search-engine and AI-as-analyst. The insight is stated simply, but its implications are profound.
AI as a plain search engine is a waste of money. You have to give it context. And when you give it context, that's when you see the difference in the search results.
— An audience member, articulating the key principle
To demonstrate, Anand exports his IIM Bangalore batch WhatsApp group — a group he rarely reads — and uploads it to ChatGPT. "Summarize the last three months of discussions." The model thinks for two minutes. Then it tells him three things happened: a classmate reached a dance milestone, another had a major professional achievement, and a 25-year reunion is being planned.
But then he asks something more interesting: "Chart the message volume from inception." ChatGPT writes a program, runs it, and produces a graph. There is a massive, unmistakable spike. Something happened in that group. He asks what. The answer: a classmate named P Chidambaram — not that P Chidambaram, Anand hastens to add — began posting long off-topic messages. People started arguing about free speech. Someone was removed. A few members left the group temporarily. It was, as Anand says with barely concealed delight: "like hardcore drama and I missed it!"
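The volume chart the model produced is simple to reproduce by hand. WhatsApp's "export chat" feature emits plain text whose date format varies by phone locale; the sketch below assumes the common `DD/MM/YY, time - Name: message` variant, and the sample lines are invented.

```python
import re
from collections import Counter

# WhatsApp exports look roughly like:
#   "12/03/24, 9:41 pm - Anand: Sharing the deck now"
# The exact date format varies by locale; this regex assumes DD/MM/YY.
LINE = re.compile(r"^(\d{1,2})/(\d{1,2})/(\d{2,4}),")

def monthly_volume(export_text: str) -> Counter:
    """Count messages per (year, month) in a WhatsApp chat export."""
    counts = Counter()
    for line in export_text.splitlines():
        m = LINE.match(line)
        if m:  # continuation lines of multi-line messages won't match
            day, month, year = m.groups()
            counts[(year, month.zfill(2))] += 1
    return counts

sample = """12/03/24, 9:41 pm - A: hello
12/03/24, 9:42 pm - B: hi
which continues on a second line
05/04/24, 8:00 am - C: reunion plans?"""
print(monthly_volume(sample))  # spike months stand out at a glance
```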
Next, Anand turns the group's own attendance list into a contact database. Three photos of the printed attendee sheet, one prompt: "Convert these photos into a list." Then: "Give this to me as a CSV I can upload to my Google contacts." After eight minutes, the list is ready. All thirty IAS officers are now in his phone.
Upload relevant documents.
Chats, reports, analysis, …
Try analyzing YOUR WhatsApp chat.
What did you learn?
This field, Anand notes, used to be called "prompt engineering." Increasingly it is called context engineering. The distinction matters: prompts are what you type; context is what you upload. WhatsApp chats, analyst reports, medical records, policy documents. All of it amplifies what the model can do.
The officers try it with their own groups. Manish uploads two different chats — one from a media professionals group, one from the LBSNAA batch itself. The results are instant, sometimes startling: who the active members are, what themes keep recurring, what drama unfolded while everyone was too busy to read the backlog.
The next section of the workshop is devoted to what Anand considers perhaps the most powerful capability in the current generation of AI: Deep Research.
As a demonstration, he had already used it to research everyone in the room — all thirty IAS officers. The prompt: "What are their most significant contributions that they would be proud to share with their grandchildren someday? Share it like a story, one paragraph each." The model ran 671 searches. For each person, it produced a paragraph.
Then he prompts the room with a question designed to make them sit up:
Use Deep Research
It runs hundreds of searches.
"Deep research every IAS officer who disagreed with a political executive and was right. What happened?"
Claude returns first, faster because it runs searches in parallel. Ashok Khemka — 57 transfers. Durga Shakti Nagpal — suspended for taking on sand mining mafias. Sanjiv Chaturvedi — winner of the Ramon Magsaysay Award, Asia's Nobel Prize, for exposing corruption. TN Seshan, who transformed the Election Commission. U Sagayam, who exposed a granite scam worth thousands of crores.
For a room full of IAS officers, it is not a neutral exercise. These are their predecessors, their colleagues, their cautionary tales and inspiration. In one prompt, the model had assembled decades of institutional memory. The research covers Ashok Khemka (57 transfers; a Robert Vadra land deal cancelled; awards from Transparency International), Durga Shakti Nagpal (66 FIRs against sand miners; suspended; vindicated when the UP Waqf Board itself cleared her), Sanjiv Chaturvedi (zero performance rating from the Health Ministry the same year he won the Ramon Magsaysay Award), and TN Seshan (who transformed India's elections against the will of every major political party).
He shows another product of Deep Research: a comparison matrix of AI policies at 25 universities worldwide. Should AI use be declared? How is privacy handled? The bottom three universities in terms of policy coverage — chosen by the model itself from global options — happen to be Ashoka University, IIT Madras, and SUTD. Three universities where Anand has recently lectured.
He pauses. "I have to clarify: correlation is not causation. I did not have anything to do with this."
Somebody in the room raises a bolder challenge: find every Supreme Court judgment that has flatly contradicted another. Not overruled — actually contradicted. After ten failed attempts on a phone (the model kept erroring), Anand runs it fresh with an emotional prompt: "Be diligent. Give me something truly mind-blowing. Make sure you fact-check yourself and not just give results that appear correct at first glance." The room waits. The result arrives.
"Emotional prompting works." — ask it to be diligent, to be careful, to give you something mind-blowing. It helps.
— Anand S
How Deep Research Works
One question. Hundreds of searches. A synthesised answer. Here is what happens between prompt and response.
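In code, that loop is roughly: draft search queries from the question, fetch each, let the findings spawn follow-up queries, then synthesise. The sketch below is schematic only; `draft_queries`, `search`, and `synthesise` are stubs standing in for model calls and web requests, and the control flow is the only part that reflects how such systems actually run.

```python
# A schematic deep-research loop. All three helpers are stubs; in a
# real system they would be LLM calls and web searches.

def draft_queries(question: str, n: int = 5) -> list[str]:
    # An LLM would decompose the question into search angles.
    return [f"{question} (angle {i})" for i in range(n)]

def search(query: str) -> list[str]:
    # Stand-in for a web search returning snippet texts.
    return [f"snippet for: {query}"]

def synthesise(question: str, evidence: list[str]) -> str:
    # Stand-in for the final LLM call that writes the report.
    return f"Report on '{question}' drawing on {len(evidence)} snippets."

def deep_research(question: str, rounds: int = 2) -> str:
    """One question in, many searches, one synthesised answer out."""
    evidence: list[str] = []
    queries = draft_queries(question)
    for _ in range(rounds):
        new_queries = []
        for q in queries:
            results = search(q)
            evidence.extend(results)
            # Each round can spawn follow-up queries from what was found,
            # which is why real runs reach hundreds of searches.
            new_queries.extend(f"follow-up on {r}" for r in results)
        queries = new_queries
    return synthesise(question, evidence)

print(deep_research("university AI policies"))
```

Claude's speed advantage in the session comes from running the inner `for q in queries` loop in parallel rather than sequentially.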
When the Government Speaks
The afternoon session turns to something unexpected: government briefings. The premise is simple. India's Press Information Bureau publishes videos daily. The Ministry of Housing, the Ministry of External Affairs, the Cabinet secretariat — all of them stream press conferences and briefings. Who has time to watch thirty-four minutes, fifty-eight minutes, ninety minutes?
Anand had uploaded the audio of a Ministry of Housing and Urban Affairs press conference to Gemini. He got a full transcript. He fed it to Claude and said: "Give it to me as a quick presentation." The result: a structured slide deck summarising the entire briefing in five minutes of reading.
Press Conference by the Ministry of Housing and Urban Affairs · 7 Apr 2026
Anand demonstrates a live example with an earlier Cabinet Briefing by Union Minister Ashwini Vaishnaw. From the audio, ChatGPT extracted the transcript and — using internet sources — retrieved the official slides. Then Gemini turned it into a sketchnote:
The room sees the possibilities cascade. Any briefing, any video, any meeting — transcribed, summarised, visualised. And not just for your own use. The same content could be personalised for different audiences: a state official, a foreign ministry contact, a departmental team. One recording. Infinite briefings.
With a transcript, the number of things that you can do is crazy. When you record more and more meetings — we're often on video calls, those are anyway recorded — we practically have an AI-friendly memory of what the interactions have been about.
— Anand S
Anand walks through another application: he had uploaded the audio of the earlier talk by Debjani Ghosh — who had spoken just that morning — and generated both a transcript and a sketchnote from Gemini. "This is a nice sketch of the contents of her talk. Somebody can just glance through and say — oh, she covered that. That's the anecdote she mentioned. It's a one-page mind map."
Analyze Transcripts
AI can transcribe audio.
AI can analyze the transcript.
AI can create presentations.
Share on chat.
The Data Whisperer
It is approaching five o'clock. The room has been at this for hours. Hands are still going up. Anand looks at the clock, then at the room, and makes a decision: skip the planned deep dive on data analysis. Give the summary instead. The room objects. "That is most important," someone says. So he stays.
He had downloaded a random dataset from NITI Aayog's NDAP portal — monthly allocation and distribution data under the National Food Security Act. A 300-kilobyte CSV file, publicly available, rarely analysed. He uploaded it to Claude with a prompt he had refined over ten years of data work: "Find the stories. Hunt for anomalies. What is unusual? What should concern a policymaker?"
Claude wrote a program. It ran the program. It delivered findings that would have taken a team of analysts days to surface.
West Bengal had issued 51.7 million ration cards but was activating only 18% of them monthly — the lowest rate among large states. Goa and Telangana were showing zero physical grain distribution but massive transaction volumes, suggesting a shift to cash transfers. In March, data from eighteen states simply disappeared — a reporting gap that would have gone unnoticed in a standard review. Rajasthan showed 210% of its allocation being distributed. Something was wrong with either the distribution or the data.
Without even bothering to look at the data, if you want to know if there are some problems, give it to an AI model and ask it to find out what's unusual. It will flag off these anomalies. This is better than most data analysts.
— Anand S
What Claude did, Anand explains, was not language intelligence. It was programming intelligence. It wrote code, ran the code, analysed the output, and interpreted the results like a consultant. In one sitting. On a dataset that any government portal publishes.
And seasonal patterns emerged: March consistently had the lowest transaction volumes; September the highest. The cycle tracked harvest seasons and monsoon patterns. The implication: grain allocations are flat, but demand is seasonal. A 10-20% improvement in working capital recovery might be achievable just by rebalancing the timing of allocations.
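Checks of the kind Claude generated are short programs, not deep learning. The sketch below runs two of them on toy rows shaped like the NFSA data; the column names and numbers are illustrative, not the actual NDAP schema, though the thresholds echo the findings above.

```python
# Two anomaly checks of the kind described in the talk, on invented
# rows. Column names are illustrative, not the real NDAP schema.
rows = [
    {"state": "Rajasthan",   "allocated_mt": 100.0, "distributed_mt": 210.0,
     "cards_issued": 4.0e6,  "cards_active": 3.5e6},
    {"state": "West Bengal", "allocated_mt": 300.0, "distributed_mt": 280.0,
     "cards_issued": 51.7e6, "cards_active": 9.3e6},
]

def flag_anomalies(rows: list[dict]) -> list[str]:
    flags = []
    for r in rows:
        # Distributing more than was allocated points at bad data or
        # off-books movement (Rajasthan's 210% in the session).
        if r["distributed_mt"] > r["allocated_mt"]:
            pct = 100 * r["distributed_mt"] / r["allocated_mt"]
            flags.append(f"{r['state']}: {pct:.0f}% of allocation distributed")
        # Very low activation of issued cards (West Bengal's 18%).
        activation = r["cards_active"] / r["cards_issued"]
        if activation < 0.25:
            flags.append(f"{r['state']}: only {activation:.0%} of cards active")
    return flags

for f in flag_anomalies(rows):
    print(f)
```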
The fact that the models do not have this data is your advantage. Because that means you can get insights that others cannot.
— Anand S
Analyze Uploaded Data
It can write and run code.
From UDISE Data: What impacts girl dropouts?
Can you analyze and PREDICT the future?
He leaves them with an assignment: download the UDISE dataset — school enrolment data across India — and ask two questions. What determines the girl child dropout ratio? How can we improve it? Then push the model further: "If I do this, predict how much it will improve by."
With data and a language model combined with the power of computing, you can, on a morning walk, speak with it. Have a conversation. By the end of your morning walk, you will have a ready presentation, data-backed, that will tell you exactly what policy interventions will have what kind of an impact.
— Anand S
The Song They Didn't See Coming
Before questions, before goodbyes, Anand says: "There is one little thing I would like to share."
He had gone to Gemini — which now has a music generation feature — and typed a single instruction: "Create a soulful vote of thanks, with patriotic Indian music playing in the background, naming each of you." He had fed it every name. Every officer in the room.
A song begins to play over the HDMI. Strings. A voice, warm and unhurried:
"The morning sun rises over the secretariat corridors, illuminating the echoes of long nights spent in duty. To Ms. Vatsala Vasudeva, for steady hands in every storm. To Shri Shyamal Misra, for the silent strength of leadership. To Shri Amit Rathore, for the vision that breaks through the haze…"
Name by name, it goes through the room. Every person in the session, described in a line. By the time it reaches "and to Ms. Mugdha Sinha, for the innovation that shapes the future of our service", the room has gone quiet in a particular way — the quiet of people who did not expect to be moved.
🎵
"Thank you for the years of service. May the path forward be clear and the morning sun shine."
The host closed the session: "AI as a tool of problem-solving, solutions, as well as the opportunities. Today we have had the most enjoyed session in AI — interactive, enlightening, and enjoyable. Thank you both."
Anand's contact details were on the screen. His email. His LinkedIn. His WhatsApp. "If nothing else," he said, "we'll get on a call and chat about it."
01
Use Your Voice
ChatGPT hears best; Gemini speaks best. Speak in Hinglish, Hindi, Tamil — the model understands. Voice is faster than typing and unlocks new use cases you would never think to type.
02
Use the Camera
Photograph menus, documents, books, medical reports, screens. The models can see — and often see things humans miss. Claude understands better; ChatGPT is more diligent.
03
Pay for Frontier Models
The $20/month subscription gives you access to intelligence equivalent to a tenured professor. Use ChatGPT with Extended Thinking; use Claude Sonnet. The difference is between a high school student and a professor.
04
Fact-Check Everything
AI finds errors in policy documents, press releases, schedules. Mistake-spotting is harmless — if it's wrong, no harm done; if it's right, you gain. Always ask it to double-check its own findings.
05
Context Engineering
The difference between AI-as-search and AI-as-analyst is what you upload. WhatsApp chats, reports, transcripts, attendee lists — private data you provide becomes the model's biggest advantage.
06
Deep Research for Complexity
For questions requiring hundreds of sources — policy comparisons, research on IAS officers, university AI policies — use Deep Research on ChatGPT or Research on Claude. It runs 600+ searches for you.
07
Transcripts Are Gold
Record meetings, briefings, press conferences. Upload the audio. Get a transcript, a sketchnote, a presentation, a personalised briefing. Any recording becomes searchable, shareable, actionable intelligence.
08
Data + AI = Policy Insight
Upload government datasets. Ask for anomalies. AI writes code, runs analysis, finds the stories — better than most analysts. Your data is your competitive advantage. Public datasets become policy intelligence in minutes.