Sketchnote · AI in SDLC Panel, PyConf Hyderabad 2026
The Panel
Usha Rengaraju — World's first female triple Kaggle Grandmaster. Chief of Research at Exa Protocol. Deep learning, GenAI & probabilistic graphical models. Organised India's first NeuroAI symposium.
Anand S — LLM Psychologist at Straive. Co-founder of Gramener (acquired). Specialist in AI-driven workflows, rapid prototyping, and making interns look like rockstars.
Picture the scene. Hyderabad, March 15, 2026. The second day of PyConf, India's largest Python conference, and track one has just filled up faster than the others. Not because of a framework tutorial or a deep-dive into async Python. Because someone has put three people on a stage and asked them to argue about something every developer in that room is quietly worried about: what happens to us now that AI can write the code?
The moderator, Snehith Allamraju — Director of Data & Analytics at RSM US LLP and part of the PyConf organizing team — starts with a disarming question. Not "will AI take our jobs?" (too obvious) or "is AI reliable?" (too abstract). He asks each panelist: walk me through your day.
What follows is forty-five minutes that nobody in the room quite expected.
Act I: Three People, Three Realities
The first thing you notice is that the three panelists aren't just offering different opinions. They're describing different worlds.
Usha Rengaraju — triple Kaggle Grandmaster, Chief of Research at Exa Protocol — works primarily with pre-seed startups that haven't yet raised their seed round. Greenfield systems. Demos for investors. Speed is everything. For her, AI isn't a productivity multiplier. It's a team replacement.
"Until 2024, I used to have six to seven interns, predominantly hired from IITs — Bombay and Chennai. Because I want less people, six or seven, but the volume of work is very high. I used to spend somewhere around 3 to 3.5 lakhs on intern salaries every month. From March and April 2024 till now, I am saving like 60, 70 lakhs on intern salaries every year."
— Usha Rengaraju
Let that settle. Sixty to seventy lakhs a year. Saved. Not because she's getting less done — because she's getting more done, with no interns. For the kind of work she does — startup prototypes, early-stage demos, greenfield code — AI has entirely replaced a six-person team.
Lakshman Peethani lives in a different world entirely. Director of Technology Solutions at EPAM Systems, he spends his days talking to large enterprise clients about how AI can accelerate product cycles. His world is brownfield: legacy systems, compliance requirements, thousands of engineers, and the kind of institutional inertia that makes a three-day sprint feel like a moon landing.
"We are now leveraging AI to be more competitive and bid for those projects at a much lower price point."
— Lakshman Peethani
For Lakshman, AI is a competitive weapon in an industry where margins are tight and clients are demanding. His team is building agentic solutions — systems where AI agents run feature implementations end-to-end and raise a PR when they're done. Humans review. Humans approve. But the writing? That's the agents' job now.
And then there's Anand S — LLM Psychologist at Straive, former co-founder of Gramener. He doesn't ease you into his answer.
"AI is my SDLC. I've stopped designing, I've stopped gathering requirements, I've stopped coding, I've stopped testing, I've stopped deploying."
The audience laughs — and then realizes he's not joking. Anand's workflow now looks like this: a colleague calls, Anand records it. He sends the recording to Gemini and asks "tell me what he wants." Gemini produces a prompt. The prompt goes to Copilot or Claude. Claude writes the code. Another agent tests it. A third deploys it. A fourth emails the stakeholder. He describes this with the studied nonchalance of someone who has been doing it for eighteen months and finds your surprise slightly quaint.
Even more striking: Anand is hiring more interns, not fewer — the opposite of Usha. His logic is counterintuitive but watertight: experienced developers bring preconceptions, slow down the process, and resist the new way. Interns, fresh out of college, have none of that baggage. He puts Varun — a student from IIT Madras — directly in front of Ankor, Straive's CEO. "Ankor is like, 'This guy is fantastic, how is he doing it?'" Anand grins. "The process is: he records the call, he feeds it to ChatGPT, and gets the result."
₹70L
Annual intern salary savings for Usha's consultancy
3 days
For a 3rd-year student to learn advanced CUDA kernel fusion with AI
0
Lines of code Anand writes manually anymore
4 centuries
Of legal precedent for corporate accountability — none for AI agents
The New Software Development Pipeline
How AI has transformed each stage of SDLC — from voice call to deployed app
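The voice-call-to-deployment workflow Anand describes can be sketched as a simple agent pipeline. Every function below is a hypothetical stand-in: in his actual setup each stage is a call to a real agent (Gemini for intent extraction, Copilot or Claude for coding, separate agents for testing, deployment, and email), and none of these function names come from the panel itself.

```python
# A toy sketch of the voice-to-deployment pipeline described on stage.
# Each stage is a placeholder for a real LLM/agent call.

def transcribe_and_extract(recording: str) -> str:
    """Stage 1: send the call recording to an LLM and ask 'what does he want?'"""
    return f"prompt derived from: {recording}"

def generate_code(prompt: str) -> str:
    """Stage 2: a coding agent turns the prompt into code."""
    return f"code for ({prompt})"

def run_tests(code: str) -> str:
    """Stage 3: a testing agent validates the generated code."""
    return f"tested: {code}"

def deploy(artifact: str) -> str:
    """Stage 4: a deployment agent ships the tested artifact."""
    return f"deployed: {artifact}"

def notify_stakeholder(status: str) -> str:
    """Stage 5: an agent emails the stakeholder the outcome."""
    return f"email sent: {status}"

# The pipeline is plain function composition; the human only reviews.
STAGES = [transcribe_and_extract, generate_code, run_tests,
          deploy, notify_stakeholder]

def run_pipeline(recording: str) -> str:
    artifact = recording
    for stage in STAGES:
        artifact = stage(artifact)
    return artifact

print(run_pipeline("colleague call, 2026-03-15"))
```

The point of the sketch is the shape, not the internals: the human's role collapses from author of every stage to reviewer of the final artifact.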
Act II: The Polyglot Future
Snehith pivots to the question every student and junior developer in the room is holding. Not out loud — nobody wants to ask it out loud. But the moderator asks it for them: will they ever write raw code again?
Lakshman's answer is the one that deserves to be framed and hung in every computer science department in the country.
"It's no longer like, 'I'm a Java developer, I will only write Java code,' or 'I'm a React developer, I will only write React code.' No, you have to be a polyglot developer and you have to do end-to-end. Not necessarily just a developer role, but you need to understand product management, you need to understand refinement of your requirements, development, and even QA as well."
— Lakshman Peethani
This is a genuine inversion. For two decades, software engineering moved toward specialization. You were a frontend developer or a backend developer, a DBA or a DevOps engineer. The tooling, the frameworks, the interview pipelines — everything reinforced the idea that depth was more valuable than breadth. Now Lakshman is describing a reversal: AI handles the depth, humans supply the breadth. You're a generalist again. You're back to doing everything end-to-end — just with better tools.
Usha, whose work spans both bleeding-edge AI research and desktop application development she'd never touched before last year, offers the acceleration story. Give a third-year engineering student access to Claude and ChatGPT Pro, set a three-day deadline, and ask them to learn kernel fusion and advanced PyTorch internals?
"You give a third-year engineering student good access to all cutting-edge tools, three days' time, he will still be able to do it. That's the amount of acceleration which AI brings in."
— Usha Rengaraju
Anand's advice to junior developers, delivered with characteristic compression:
"Use AI like crazy to find out what it does well. If it's doing something well, stop learning that. Nobody is going to hire you for that. Where it breaks, use AI to learn that. That way you'll be building a skill."
— Anand S
There's something almost Zen about it. The edge of AI competence is precisely where human value lives. Not the comfortable middle, where AI is reliable and fluent. The edge — where it hallucinates, where it loses the thread, where it produces something confidently wrong. That's where you want to be. That's where it's still possible to know more than the machine.
💡 Insight
Usha spent four months building a desktop application (she'd never done desktop development before) using Electron and AI assistance. "Three years back, if you asked desktop development, I would have run three kilometers away. But now I'm confident of breaking anything in the SDLC lifecycle."
Act III: The Accountability Problem
Snehith, who has been pulling these threads together with the dexterity of someone who's spent a lot of time managing consultants, now reaches for the question that will define the next decade of enterprise software: who owns the commit?
If an AI agent writes the code, raises the PR, and the code passes review — who's accountable when it breaks in production?
Lakshman gives the expected answer: the developer who delegated to the agent is still responsible. It's a reasonable, defensible position. Accountability chains don't disappear just because you've added an intermediary.
But Anand reaches for history. He always does.
"It's largely about how do we assign responsibility and control. Humans, we know how to do that. Four centuries ago, we learned how to do that for companies. We also do that for gods, rivers, ships. These are all legal precedents where you can actually hold them accountable by law. We haven't yet learned how to do it for agents."
— Anand S
He's not being dramatic. He's being precise. The legal doctrine of corporate personhood took centuries to develop. Rivers in New Zealand have legal personhood. Ships have been held "accountable" through legal constructs for over a thousand years. Every one of these was a creative legal solution to the problem of assigning responsibility to non-human actors.
And then he proposes something remarkable: what if you could create an agency — a limited liability corporation for AI agents?
"We may be able to create something like a limited liability corporation with a pool of agents and still give them a significant chunk of accountability while being able to trust them because they are going to be so bloody cheap."
— Anand S · PyConf Hyderabad 2026
He describes it with characteristic specificity: a pool of five journalist agents for the Times of India. Each agent has a name. Each one earns feedback. Good agents get more compute; bad agents get terminated. "Not that they mind dying — they are just agents." Out of a fixed budget of tokens and compute, you create an ecology — not a factory, not an assembly line, but something more like an evolutionary system. Performance determines survival.
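The ecology Anand sketches — a named pool of agents, a fixed compute budget, feedback deciding who survives — can be written as a short selection loop. The scoring, pool names, and budget numbers below are invented for illustration; the panel described the idea, not an implementation.

```python
import random

# A toy sketch of the agent-pool "ecology": a fixed compute budget is
# redistributed by feedback, and the worst performer is terminated each
# cycle. Random scores stand in for real editorial feedback.

def run_generation(pool: dict, budget: int) -> dict:
    """One cycle: score each agent, drop the worst, reallocate compute."""
    scores = {name: random.random() for name in pool}
    # Terminate the lowest-scoring agent ("not that they mind dying").
    worst = min(scores, key=scores.get)
    survivors = {n: s for n, s in scores.items() if n != worst}
    # Reallocate the same fixed budget proportionally to performance.
    total = sum(survivors.values())
    return {n: round(budget * s / total) for n, s in survivors.items()}

# Five named journalist agents sharing 1000 units of compute.
pool = {f"journalist-{i}": 200 for i in range(1, 6)}
for _ in range(2):  # two feedback cycles: two agents get terminated
    pool = run_generation(pool, budget=1000)
print(pool)
```

Survival is the incentive mechanism: the budget never grows, so one agent's gain is literally another's extinction — Darwin inside an org chart.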
This is somewhere between corporate org-chart and Darwin. And the audience — mostly developers who spend their days thinking about pull requests and sprint reviews — suddenly finds themselves thinking about accountability philosophy.
⚖️ The Accountability Gap
As of 2026, no legal framework holds AI agents directly accountable. The developer who deploys an agent remains responsible. But as agents become more autonomous — and the chain between human instruction and agent action grows longer — this will become untenable. The panel agreed: we're building the accountability frameworks now, empirically, through practice.
Act IV: Safe Spaces for Hallucination
Usha draws the line clearly. She wouldn't hand over complete control. Not even for greenfield. Especially not for anything touching sensitive domains: Nvidia's specialist libraries, banking systems using obscure proprietary languages, anything where the training data thins out and the model starts confabulating with confidence.
"The amount of knowledge which is available externally for the LLM to be trained on is very limited. I find the hallucination rate is so high that if I have to do it from scratch, it would have taken me half the time. Fixing and debugging the code takes... for certain areas it's literally very high."
— Usha Rengaraju
Anand's response is his most elegant: find a domain where hallucinations don't matter. Or better yet — find one where they're a feature.
"I have a continuous publication of comics from news. When it hallucinates, it is a feature. Exactly! So in that case, I can discover 10 things, 20 things that it can do which will be useful tomorrow."
— Anand S
At one point, Snehith mentions they'd discussed having an AI panelist on stage. He'd rejected the idea. "I was afraid that probably AI would take over my job as a moderator." Anand, deadpan: "That's why I stopped inviting it as a co-host — it was stealing the show."
The deeper point is strategic: you can't learn to trust AI by giving it high-stakes tasks and hoping for the best. You learn to trust it by giving it low-stakes tasks where failure is informative rather than catastrophic. Each failure teaches you the contours of competence. Each success builds confidence. Over time, you develop a mental map of where AI is reliable enough for production and where it isn't — yet.
Lakshman echoes this: "Experimenting in the safe areas is the best way to start this adoption." And then adds the kicker: "Whether we like it or not, this is going to be the norm." He's talking to clients who used to accept 30% productivity gains and are now asking for 50%, 80%. The numbers are moving. The question isn't whether to adopt AI in your engineering process. It's how fast.
Coda: The People Problem
Near the end, Snehith cuts to the core of it: what is the biggest blocker to AI in SDLC?
Anand doesn't pause. "It's the people. It's our unwillingness to include that as a part of... I am a coder. If AI is going to do my code, what am I?"
Identity. That's the blocker. Not the tools — the tools work. Not the infrastructure — the infrastructure is there. The blocker is the profound human discomfort of having your professional identity threaten to evaporate. For twenty years, being a developer meant being someone who could write code. And now, suddenly, writing code is the thing you're being asked to stop doing. The ego has to be renegotiated, and nobody told us the ego was part of the job.
Usha closes with something warmer. She says she's charging the same rates she charged in 2023 and 2024. "But the time is running out. I am quite aware of it." She appreciates the acceleration, the new confidence that lets her tackle topics she'd have run from before. And she closes with something that has nothing to do with AI at all — a genuine, warm appreciation for the Python community in Hyderabad, which she calls one of the most active in the world.
"PyCon Hyderabad, the Python community in Hyderabad, is one of the most active Python developer communities in the world, actually. I travel across the globe to give keynotes at PyCons. So I can say hats off to all the organizing committee."
— Usha Rengaraju
It's a reminder that amid all the disruption and the philosophy and the agent LLCs, this panel took place because a group of people got together, volunteered their time, and built a community. And the community showed up.
Anand's last thought is the one that lingers. He says the hardest thing isn't the AI — it's the planning horizon. We're making year-long plans based on the capabilities of today, and today's capabilities are already outdated by next quarter. His solution: extrapolate a year out, put it in a "safe pocket somewhere," and revisit it when the world catches up.
"Having a safe space where hallucinations are not a problem and running with it — I think that is one way of being able to anticipate the future."
— Anand S
Top Takeaways
01
Greenfield vs Brownfield is a real divide
AI delivers transformative results in greenfield (new projects), but brownfield (legacy) requires significant groundwork, workflow redesign, and mindset shifts before it pays off.
02
Intern economics have inverted
One practitioner saves ₹60–70L/year on interns. Another is hiring more. The difference: whether you want AI to replace people, or people to leverage AI. Both strategies work — in different contexts.
03
The polyglot developer era has arrived
Specialization is out. End-to-end ownership is back. You need to understand product, dev, QA, and deployment — because AI handles the deep execution. The human supplies judgment across the whole system.
04
Learn where AI breaks, not where it works
"If it does something well, stop learning that." The only sustainable skill-building strategy is to work at the edges of AI competence — the domains where it hallucinates — and develop expertise there.
05
AI accountability is the next legal frontier
We have 400 years of legal precedent for assigning responsibility to companies, rivers, and ships. We have none for AI agents. The framework that emerges will reshape software engineering as much as any technology.
06
Find your safe hallucination space
Experiment with AI in domains where failure is acceptable — even valuable. Comics that hallucinate are funny. Production banking systems that hallucinate are catastrophic. Build expertise in the gap between them.
07
The real blocker is identity
"I am a coder. If AI does my code, what am I?" The technology is ready. The infrastructure is ready. The barrier is human ego — the need to renegotiate professional identity in a world where coding is no longer the value add.
08
Plan for where you'll be in a year
AI capabilities will make your current plans obsolete in one quarter. Build experimental systems today that will be production-ready when AI catches up. Put those plans in a safe pocket — and revisit.