Straive ACE Fireside Chat — 6 April 2026

Innovation as a Frontier

What happens when the edge keeps moving — and falling behind it is no longer optional

By Anand S · Head of Innovation, Straive · 💬 Transcript
Session Recording · Full fireside chat · ~55 min
AI-generated sketchnote summarising the Innovation as a Frontier talk

Imagine being asked to give a talk on innovation — and responding by asking an AI to give the talk for you. That is exactly what Anand S did.

On the morning of 6 April 2026, Straive's ACE Fireside Chat host El sent Anand a question. Something about leadership, frontier thinking, the meaning of innovation. The kind of earnest, open-ended prompt that usually sends a speaker into a quiet spiral of introspection. Anand, the Head of Innovation at Straive, did what he always does: he turned to Calvin.

Calvin and Hobbes comic strip — last-minute panic
Calvin's approach to essay-writing: the definitive innovation framework. (Calvin and Hobbes by Bill Watterson)

For those unfamiliar with Calvin and Hobbes, Calvin is the six-year-old philosopher-troublemaker whose best thinking happens at the very last minute. It is, Anand confided to the 200-plus participants on the call, his definitive methodology.

"The answer to the question 'how do I think of innovation' is something that, like the answer to most questions, I craft in the perfect mode — last-minute panic."

— Anand S

So rather than muse alone, he fed El's question to Claude — Anthropic's AI — along with an instruction that says everything about how this particular innovator operates:

"Based on what you know about me, and what you can find out from past chat conversations as well as my blog and public GitHub repo — especially talks and data stories — what is the most insightful answer you can provide to this question?"

→ See the Claude chat

The AI synthesised months of Anand's writing, experiments, and conversations and handed him a framework. He then presented that framework — attributing it, with characteristic candour, to the machine. "I'm not sure if this is in fact my framework," he told the audience. "This is what seems to be my framework. But it's really just a guess."

This meta-moment — using AI to understand how you yourself think about AI — turned out to be the perfect opening for a talk about innovation at the frontier. Because that is exactly what frontier innovation feels like: you are not sure what you know until you interrogate it.


The Edge Keeps Moving

The first thing Anand is clear about — truly, unambiguously clear — is that the frontier never holds still.

AI model capability vs cost, March 2023 – June 2025. Every axis is moving. (View video)

In March 2023, there were exactly three models worth talking about: two versions of Claude and GPT-3.5 Turbo. By November 2023, GPT-4 had arrived — roughly as smart as a college junior. By September 2024, o1-preview had reached the level of a master's student. By February 2025, GPT-4.5 could match a postgraduate. By June 2025, Gemini 2.5 Pro was trading blows with a tenured professor.

Remarkable enough. But the cost curve is the real punchline.

"Today you can get someone smarter than a tenured professor at $5 per million tokens. And this cost has been falling roughly at the rate of 10 times every year. 10 times, not 10 percent!"

— Anand S

At that rate, Anand pointed out, tenured-professor-grade AI will cost 50 cents per million tokens in a year, and five cents the year after. The implication is not merely economic. It means that whatever "innovation" meant last year — whatever problems were too expensive, too complex, too impractical to tackle — the list is shrinking at a rate that makes planning difficult and exploration essential.


Three Ways to Live at the Edge

From this restless, ever-shifting landscape, Anand — or rather, the synthesis of Anand and his AI — distilled three principles. Not a framework in the McKinsey sense. More like three orientations for staying oriented when the map keeps redrawing itself.

The Three Innovation Orientations

01

Find New Constraints

Every problem you solve reveals the next. What used to be impossible is now solved — so move past it. The bottleneck always shifts. Your job is to find where it shifted to.

02

Do What's Irrational

If it made no sense before, that's exactly why it might make sense now. Write code without documentation? Impractical. Until AI generates better docs than humans ever did. The irrational is the new rational.

03

Surface What's Invisible

A lot of things we couldn't perceive before, AI now makes legible. Patterns in transcripts, errors in textbooks, accents in foreign languages — the invisible is becoming visible at scale.

These three orientations are not abstractions. Over the next forty minutes, Anand illustrated each one with a story from his own desk.


When the Bottleneck Shifts

A few months before the fireside chat, the Times of India came to Anand with a problem. They publish a daily feature called Hack of the Day — a crisp, useful tech tip, formatted as an illustrated card. Two bottlenecks: finding the content takes journalists time; creating the visual format takes even more.

Times of India Hack of the Day cards
AI-generated Hack of the Day cards — now published in the Times of India. See the full set →

Anand's response was characteristic. He didn't build a product. He had one conversation with ChatGPT — and the conversation started with a question he admits he didn't know how to answer:

"Analyze these 10 'Hack of the Day' images carried in The Times of India. If I had to ask an intern (or an AI agent) to create several such, then what prompt will give me this kind of content in exactly this format?"

→ See the ChatGPT chat

"When I don't know what to ask for, what do I do? I just ask it, because its knowledge is as good as anyone else's."

— Anand S

He then fed the AI a list of past hacks from the Times of India's archive and asked it to find ten new ones. Then asked it to render them as SVGs. The result: 60 to 80 polished cards, ready to publish. Starting late March 2026, some of them appeared in the newspaper.

At which point, something interesting happened. The bottleneck — finding content and making images — had vanished. And a new bottleneck appeared in its place: review time. The editors had always glanced at two or three journalist-written hacks. Now there were sixty. "We don't have that kind of capacity," they said.

"When the bottleneck shifts, the problem changes. We need to keep constantly innovating in different ways to solve that problem."

— Anand S

This is what living on the frontier feels like in practice. You don't solve a problem and rest. You solve a problem and immediately encounter the shape of the next one. The frontier moved.

To understand where the frontier currently sits — across all professions, not just journalism — Anand has a favourite benchmark:

GDPVal: AI vs human benchmark across 800+ professions
GDPVal: across 800+ professions, who does AI beat? Green = AI winning. Software engineers (70% of tasks), sales managers, financial analysts. Accountants and auditors: humans still ahead. Explore the benchmark →

The Hardest Shift: From Knowing to Not-Knowing

Preeti, one of the participants, asked the question that every manager in an AI-era organisation is secretly wrestling with: What mindset shift do senior leaders struggle with most when moving from managing execution to enabling exploration?

Anand did what he always does. He asked AI first. Specifically, he asked Claude to answer in his own voice, drawing on months of recorded conversations and blog posts.

"I was asked: What mindset shifts do senior leaders struggle with most when moving from managing execution to enabling exploration? Research my past chats, writings, research & experiments and answer in my voice citing examples from my experience."

→ See Claude's answer in Anand's voice

Then he gave his own answer anyway — not instead of the AI's answer, but alongside it. The synthesis, he argued, was better than either alone.

Managing execution, he explained, rewards reliability. Nothing goes wrong. The systems work. Enabling exploration rewards something almost opposite: tolerance for uncertainty. It's okay if things go wrong. In fact — and here is where Anand's answer turned genuinely radical — it's not just okay. It's required.

"Failure is not only an option, failure is a necessity."

— Anand S, quoting the principle he gives to his innovation team

The guideline Anand gives his innovation team has the quality of a koan:

"What you normally do in a month, do it in a day. And if you can do it in a day, I'll ask for it in an hour. If you can do it in an hour, I'll ask for ten of these in an hour. In other words, I'm going to keep increasing the threshold until we fail often enough."

— Anand S

He compared this to the mental model of a venture capitalist versus a banker. The banker needs every loan repaid. The VC expects most bets to fail — and structures the portfolio so that doesn't matter. Innovation organisations, Anand argues, need to be run more like VC portfolios. Not every experiment needs to work. The portfolio does.

Claude's synthesis added a dimension Anand found genuinely compelling: the identity problem. Senior leaders rise to their positions by knowing the answers. Exploration requires them to publicly not know — to say "I don't know yet" rather than "here's the plan." That is existentially threatening to people whose status was built on expertise. Unless you can make not-knowing feel like an asset rather than a vulnerability, the culture won't change.


The Smallest Innovation with the Biggest Impact

El asked Anand what small innovation had produced a surprisingly large return. He didn't hesitate.

"Recording all my calls."

It sounds almost too mundane. Everyone has Zoom recordings. What Anand described, though, was a deliberate, structured practice: nearly every conversation since October 2025, transcribed, indexed, stored as a personal knowledge base that only he possesses. No one else has that corpus. No one else's AI can search it.

What does he do with it? Everything.

Each use he listed is an automated prompt running against the transcript corpus. One record button, pressed once. Compounding returns for months.

"It's like gathering a repository of practically everything that I speak and hear and using that as a new knowledge base that only I have. No one else has that information."

— Anand S
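To make the idea concrete, here is a minimal sketch of what such a personal corpus could look like, assuming one plain-text transcript file per call. The `TranscriptCorpus` class, file-naming scheme, and keyword search are illustrative assumptions, not Anand's actual tooling:

```python
import pathlib


class TranscriptCorpus:
    """Illustrative personal knowledge base: one plain-text transcript per
    call, searchable by keyword so matching snippets can be fed to an AI
    along with an automated prompt (summary, sketchnote, and so on)."""

    def __init__(self, root):
        self.root = pathlib.Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def add(self, date: str, title: str, text: str) -> pathlib.Path:
        # e.g. 2026-04-06-ace-fireside.txt
        path = self.root / f"{date}-{title}.txt"
        path.write_text(text, encoding="utf-8")
        return path

    def search(self, keyword: str) -> list[str]:
        # Case-insensitive match over every stored transcript.
        kw = keyword.lower()
        return [p.name for p in sorted(self.root.glob("*.txt"))
                if kw in p.read_text(encoding="utf-8").lower()]

    def context_for(self, keyword: str, limit: int = 3) -> str:
        # Concatenate the first few matches into one context block that an
        # automated prompt can run against.
        parts = [(self.root / name).read_text(encoding="utf-8")
                 for name in self.search(keyword)[:limit]]
        return "\n---\n".join(parts)
```

The point of the design is the asymmetry Anand describes: capturing costs one button press per call, while every downstream prompt reuses the same corpus.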

The output this produced live — during the fireside chat itself — was the sketchnote you saw at the top of this page. Anand pasted the transcript into Gemini, typed a single prompt, and in minutes had a richly illustrated visual summary of the entire talk. He showed a similar sketchnote from an earlier session with Prudential:

Sketchnote from the Verifiable Agents talk
Sketchnote from Anand & Ankor Rai's Prudential session on Verification Architecture for Autonomous AI — generated from the recording transcript.

Creating a Theme Song, Live

One of the more spectacular moments in the session came when Anand decided — in real time, in front of everyone — to create a personalised theme song for the show.

He had recently discovered that Gemini could generate music. Not background loops, not royalty-free elevator jazz, but original compositions with real structure and, if you pushed it, actual lyrics. He decided to demonstrate what happens when you find a capability you didn't know existed and simply try it out.

The process was an elegant demonstration of meta-prompting — using AI to improve the instructions you give AI. First, he typed a rough music brief into ChatGPT:

"Give me a theme music that I can run in a loop for fireside chats. The background music needs to be lively, vibrant, and something that will make people sit up, but with enough contrast... a certain amount of soothing effect as well. Maybe about a minute long."

Then he took that rough prompt to Gemini and said:

"I want to use this prompt to create music using Gemini's new create music capability. Research best practices and improve this prompt."

→ See the meta-prompt refinement chat

Then he fed the improved prompt back into Gemini's music generator, with one addition: "The show will feature Anand who leads innovation at Straive, and the question he'll be posed is this… think of creative lyrics for this and incorporate it."

While the rest of the talk was happening — while questions were being answered, while Anand was demonstrating other tools — Gemini was composing. By the time he circled back, it had generated a track called Where the Paths Thrive. He downloaded it and shared it in the Zoom chat. You can listen to it here:

Live Demo — AI-Generated Theme Music
Where the Paths Thrive
Created by Gemini during this fireside chat session. Lyrics include the session theme.
→ See the Gemini music generation chat

"You're going to love this when I share it with you," he told the audience. He was right.


The Limits of the Frontier — and Why They're Temporary

Shannon, who runs the Learning Aid podcast and interviews AI leaders, asked a question that cut to something real: AI is not representing everybody. Not all languages, not all cultures. She had tried to work with Sanskrit for an Ayurveda project and found the models wanting.

Anand didn't dismiss the concern. He started with his own experience:

"I was asking for jokes in Tamil… and I found that not many models even knew the language. Which is a pity."

— Anand S

And then he showed what happened a year later. He had been watching a Tamil film and came across a frame with script he couldn't read — he speaks Tamil fluently but reads it badly. He took a screenshot, opened Gemini, and typed a message so minimal it barely qualifies as a prompt:

Tamil script on a TV frame, and Gemini's translation
"OCR and translate." That was the entire prompt. See the Gemini chat · Read the blog post →

"OCR and translate." The model not only read the Tamil accurately but provided rich context about what the text meant. In under a year, a not-particularly-mainstream language had gone from poorly supported to genuinely usable.

For clients who needed something more specific — a Dutch-language video with natural-sounding voice-overs — Anand's team built a comparison page testing multiple TTS engines side by side, even though no one on the team spoke a word of Dutch:

Dutch TTS voice comparison page
Side-by-side TTS evaluation in Dutch — ElevenLabs, Gemini 2.5 Pro, and others on the same text, built by a team that speaks no Dutch. Gemini's "Algieba" voice won. Listen and compare →

The winner? Gemini 2.5 Pro's "Algieba" voice — a model that had barely existed months before — nailing the nuance and accent of Dutch in a way even the client's native speakers found remarkable.

His broader answer to Shannon: maybe we just need to wait and keep exploring at the edges to see what's now possible. And the cost of building niche AI is falling fast. He had just downloaded Google Edge Gallery, which runs a 2.5GB model entirely on a phone, no internet required. "That's today," he said. "People who are working on niche areas are going to be able to take their content, put it in." The inclusion gap is real. But its half-life is getting shorter.


Doing What's Irrational

The second of Anand's three orientations — doing what made no sense before — produced some of the most unexpected examples in the talk.

Take the mathematician George Pólya and his book How to Solve It. Pólya proposed heuristics for solving mathematical problems — understand the problem, make a plan, carry it out, review. For sixty years, these were philosophical suggestions. Testing them rigorously on humans was impractical, expensive, irrational.

With AI, testing them became trivial. Anand's team took various models, gave them maths problems, and added one extra line to each prompt — a specific Pólya heuristic, like "solve using the contradiction method: assume the opposite is true and reason until you reach an absurdity." Then measured whether it helped or hurt.

Impact of different Pólya heuristics on AI math performance
Which Pólya heuristics help AI models — and which hurt? Now testable in days. Explore the data story →

Contradiction helps with geometry but hurts number theory. Pre-algebra responds well to most tips. A philosophy that had been untestable for six decades suddenly had experimental evidence behind it. What was once irrational — running hundreds of controlled maths experiments — became an afternoon's project.
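The experiment's core is small enough to sketch. This is an illustrative harness, not the team's actual code: the only difference between conditions is one heuristic line appended to an otherwise identical prompt, and `accuracy` scores the replies. The contradiction heuristic text is quoted from the talk; everything else is an assumption.

```python
# Illustrative harness for testing Pólya-style heuristics on AI models.
# A real run would send each prompt to a chat-completion API and collect
# the replies; only the prompt construction and scoring are shown here.

HEURISTICS = {
    "baseline": "",
    "contradiction": ("Solve using the contradiction method: assume the "
                      "opposite is true and reason until you reach an "
                      "absurdity."),
}


def build_prompt(problem: str, heuristic: str) -> str:
    """Identical prompt in every condition; the heuristic is one extra line."""
    prompt = f"{problem}\nEnd your reply with 'ANSWER: <value>'."
    return prompt + ("\n" + heuristic if heuristic else "")


def accuracy(replies: list[str], expected: list[str]) -> float:
    """Fraction of replies whose final ANSWER matches the expected value."""
    hits = sum(r.rsplit("ANSWER:", 1)[-1].strip() == e
               for r, e in zip(replies, expected))
    return hits / len(expected)
```

Running every (problem, heuristic) pair and grouping accuracy by topic — geometry, number theory, pre-algebra — is what turns a sixty-year-old philosophy into a measurable claim.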

Next: a grade 12 history textbook. One of Anand's colleagues ran it through an AI fact-checker. In the first thirty pages, the model found a genuine factual error: the textbook claimed that only broken or useless objects end up in archaeological records. Cambridge research showed otherwise — intact, functional objects are common in digs, left behind during migrations or placed as religious offerings. The textbook was wrong.

AI fact-checking a grade 12 history textbook
Grade 12 history textbook: 45 verified claims, 1 factual error, 2 precision issues, 2 questionable claims — in the first ~30 pages. See the full analysis →

And then there was Dilbert. Scott Adams passed away in 2025, leaving behind a vast archive of comic strips. Could AI transcribe them accurately enough to make the entire corpus searchable? Anand's colleague tested multiple models on the same strips.

Dilbert comic strip used in the transcription experiment
One of the strips used to benchmark transcription quality. Gemma 3 hallucinated a character that wasn't there. Qwen-VL-32B was nearly perfect. (Dilbert archive)
AI transcription accuracy benchmark for Dilbert comic strips
Benchmarking Gemma 3, Qwen-VL-32B, Gemini 1.5 Flash and others on Dilbert strips. Gemini 1.5 Flash preview: 99.3% accuracy. Full corpus cost: under $20. See the benchmark →

Gemini 1.5 Flash preview hit 99.3% accuracy. The full Dilbert corpus — decades of strips — could be transcribed for under twenty dollars. What was irrational before is now practically free.
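A benchmark like this needs little more than a scoring function. The benchmark's actual metric isn't stated, so as an assumption this sketch uses a character-level similarity ratio from Python's standard `difflib` against a hand-checked reference transcript:

```python
import difflib


def transcription_accuracy(candidate: str, reference: str) -> float:
    """Character-level similarity (0..1) between a model's transcript and a
    hand-checked reference, using SequenceMatcher's ratio."""
    return difflib.SequenceMatcher(None, candidate, reference).ratio()


def rank_models(outputs: dict[str, str], reference: str) -> list[tuple[str, float]]:
    """Score each model's transcript against the reference, best first."""
    scored = [(name, round(transcription_accuracy(text, reference), 3))
              for name, text in outputs.items()]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)
```

Transcribe a handful of strips by hand once, then every new model gets ranked against that reference for free — the same shape as the comparison the talk describes.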

The pattern is the same in every example: an experiment that would have been too expensive, too slow, or too pointless to attempt just a year ago has become a matter of minutes and pocket change. The irrational is becoming rational so fast that the list of "things we haven't tried" is the most valuable asset on the table.


When the Delivery System Isn't Ready

Preeti posed another question: can you share a moment when an innovation initiative failed not because of the idea, but because the delivery system wasn't ready?

Anand's answer was blunt: "This happens all the time."

A financial services client wanted a chatbot to answer questions from financial statements. A reasonable idea; straightforward enough to prototype. But building it properly required AI to write software within their organisation. That wasn't enabled. Nine months later, it still isn't. The innovation is parked in an architectural review committee, waiting.

He described the technical evolution in the space: RAG — the earlier approach — works for specific lookups. Agents, which write and run their own code to solve problems, work for everything else. But agents require permissions that the architectural council hadn't granted.

"If an innovation initiative is not bottlenecked by the delivery system, then it is not innovative. Failure is therefore inevitable in the short run. The question is: is the delivery system able to adapt to innovation quickly enough?"

— Anand S

The innovation team's job, he argued, isn't just to generate ideas. It's to challenge the delivery system at a rate it can absorb — and then help the delivery system catch up. Innovation that never gets deployed isn't innovation. It's a nice story.


Where the Ideas Actually Come From

Near the end of the session, Anand addressed something that often goes unspoken in innovation conversations: how do you come up with the ideas in the first place?

His answer was honest to the point of being disarming. He doesn't get them himself.

A big part of how the innovation team functions is simply asking AI: "How can we innovate?" But AI, left to its own devices, tends toward the conventional — it averages the world's knowledge, which means it regresses to the mean. So you have to push it sideways. Think like someone unusual. Pick a random object and apply its principles to the problem. Use the creativity techniques you'd use with a human — lateral thinking, random associations, de Bono's hats — and force the AI to apply them.

"Questions are in fact the biggest skill. Please ask random questions, and we'll see where we go from there."

— Anand S, to the fireside chat audience

The talk itself was an example of this. Every question from the audience — Vel's frustration with LLM Foundry's UI, Murali's concern about AI errors, Shannon's worry about inclusion — became a live demonstration. Anand didn't have polished answers. He had a process: ask the question, query the AI, synthesise the two, share both. The innovation happened in real time, in public.

He also had a note on AI reliability. When Murali asked what to watch out for:

What should we be alert about when using AI?

It could be wrong. I would not trust it any more than any random person, even if they're an expert, even if they're totally dumb. AI is more like a weird person whom we don't understand — a foreigner with very different cultural values and very different kinds of mistakes. When models change, it's almost like a different person. With these caveats, you can take AI pretty far.

— Anand S, in response to Murali's question

Top Takeaways

What to carry forward from an hour at the frontier

  1. The frontier never holds still. AI capability is doubling roughly every few months; cost is falling 10× per year. Whatever seemed impossible last year may be trivial now. Treat the frontier as a moving target, not a destination.
  2. Innovate by finding the new bottleneck. Every problem you solve reveals the next constraint. The Times of India's bottleneck moved from "finding content" to "reviewing 60 AI-generated cards." Your job is to keep asking: where did the constraint go?
  3. Do what's irrational — it may now be rational. Testing 60-year-old maths heuristics on AI models. Fact-checking school textbooks. Transcribing decades of comics for $20. These were all irrational until recently. Make a list of the irrational and start ticking it off.
  4. Failure is a necessity, not a risk. Run the innovation team like a VC portfolio. What takes a month should be done in a day. What takes a day, in an hour. Keep raising the bar until failure is common enough to be normal — and instructive.
  5. Use AI to understand yourself. Ask AI to answer questions in your voice, drawing on your past conversations and writing. The synthesis of your knowledge and AI's breadth is more insightful than either alone. Record your meetings. Build your corpus.
  6. Meta-prompt your way to better prompts. Don't know how to ask for something? Ask AI how to ask for it. Anand didn't know the right prompt for Hack of the Day cards — so he asked ChatGPT to generate the prompt. Then asked Gemini to improve it. Prompting is a skill you can automate.
  7. Talk to AI 50 times a day. "You'll be surprised how quickly you run out of questions — and how quickly you come up with new questions you never thought you could ask." The frontier doesn't reveal itself to those who visit occasionally.