Second, I always optimize my prompts. This prompt was carefully constructed by searching for "prompt optimizer." Since I'm using OpenAI models, I search for the OpenAI prompt optimizer, and you will eventually find it: in the OpenAI Playground, you can put in a prompt. So let's say the prompt is, "Given two concepts, create a new creative idea." Let's say that is the prompt. Now I click on optimize. What it does is, knowing how a specific model, in this case GPT-5, works, it incorporates prompting best practices, rewrites the prompt, and comes up with a better prompt. Now, this works fairly well if you want to one-shot it, that is, if you're not really sure what to do or how to improve it. But what if you know the specific purpose you have to apply it to? While this is churning, I will show you something else.

A pharma company came to us and said, "We want to build a model that tells patients what they should be doing after a clinical trial test." They said, "We administer drugs, and there is a standard procedure. For example, the procedure says that following the administration of investigational antibody MBX 23, blah, blah, blah." That is what the clinical trial procedure says. Nobody will understand this. What we really want to tell them is: after you receive the study medicine through an IV, we will watch you closely for four hours at the clinic; we will check your blood pressure and your heart rate. So the left side translates into this. They shared about 10 or 11 such examples and said, "Can you convert it?" This is the classic machine learning cycle. What we can do is pass this to an LLM and tell it to automatically generate the prompt: this is the input, this is the output, you generate the prompt. And it said, "You are a medical communicator tasked with transforming blah, blah, blah," and it provides a prompt. Exactly what the prompt optimizer did as well.
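Here is a minimal sketch of that automatic prompt-generation step, assuming the OpenAI Python SDK. The model name, the example text, and the wording of the meta-prompt are placeholders of mine, not what was used in the talk.

```python
# Meta-prompting sketch: give an LLM input/output pairs and ask it to write the prompt.
# Assumes the OpenAI Python SDK; model name and example text are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A couple of (clinical-procedure, patient-friendly) pairs, paraphrased as placeholders.
examples = [
    (
        "Following the administration of investigational antibody MBX 23, "
        "subjects will be monitored per protocol for four hours post-infusion...",
        "After you receive the study medicine through an IV, we will watch you "
        "closely for four hours at the clinic and check your blood pressure and heart rate.",
    ),
    # ... the remaining ~10 example pairs go here
]

# Build the meta-prompt: show the pairs and ask for a reusable prompt.
pairs_text = "\n\n".join(
    f"INPUT:\n{inp}\nEXPECTED OUTPUT:\n{out}" for inp, out in examples
)
meta_prompt = (
    "Here are input/output examples of a transformation.\n"
    "Write a reusable system prompt that would make a model perform this "
    "transformation on new inputs. Return only the prompt.\n\n" + pairs_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": meta_prompt}],
)
generated_prompt = response.choices[0].message.content
print(generated_prompt)
```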
And here, for the earlier one, it's saying: begin with a concise checklist of three to seven bullet points outlining how you will approach the two ideas conceptually. And it gives me a reason: an initial checklist of three to seven bullets, planned first, promotes clearer, structured thinking for complex synthesis tasks. The same think-step-by-step reasoning that we saw earlier. It incorporates these best practices and comes up with a better prompt. Good. So rule number one: always meta-prompt. Well, not always: if it's important, use a meta-prompt; if it is not important, just say what you want.

Second, evaluate it. I can generate the output for this: the first column is the input, the second column is the expected output, and the third column is what this prompt generated. So we are taking the prompt it has given us and generating the output. Now we can check: does it have any extra content? Does it have all the relevant content? What is the embedding similarity between the generated and expected outputs? That way we evaluate the prompt, and it gives us a set of metrics. It has gone through this and said the generated content seems to be introducing extra details in every single case. For instance, here, the generated output has extra details that are not present in the expected output, like the study medicine, the disease condition, etc. So the prompt it generated keeps putting in technical terms, which we don't want in our output. Okay. So now we know that it is failing, which is useful, or at least that it's not working perfectly. And we know that it's particularly failing on this side, but not so much on this side. Good. It's not missing stuff; it's adding too much. Now, rather than manually trying to fix it, let's revise the prompt. We will send this back to the LLM and have it correct the prompt. So now it says, okay, instead of "you are an expert medical communicator", make it a skilled communicator: not expert, just skilled. And it makes a whole series of corrections like this, which we can then re-evaluate. The first time, it got a score of 16.84 out of 30. Next time, maybe it will get something higher. So you can iterate.
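And here is a rough sketch of that evaluate-and-revise loop, again assuming the OpenAI SDK. The evaluator is simplified to embedding similarity only (the one in the talk also checked for extra and missing content and produced a score out of 30), and the model names, the similarity threshold, and the helper functions are all illustrative, not the actual tooling shown.

```python
# Evaluate a candidate prompt against expected outputs, then ask the LLM to revise it.
# Assumes the OpenAI Python SDK; model names and the 0.9 threshold are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Placeholder data: in practice, reuse the example pairs and the prompt
# produced by the meta-prompting step above.
examples = [("clinical protocol text...", "patient-friendly text...")]
prompt = "You are a medical communicator tasked with transforming..."

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [np.array(d.embedding) for d in resp.data]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def evaluate(prompt, examples):
    """Run the candidate prompt on each input and compare with the expected output."""
    rows = []
    for inp, expected in examples:
        out = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{"role": "system", "content": prompt},
                      {"role": "user", "content": inp}],
        ).choices[0].message.content
        e_out, e_exp = embed([out, expected])
        rows.append({"input": inp, "expected": expected, "generated": out,
                     "similarity": cosine(e_out, e_exp)})
    return rows

def revise(prompt, rows):
    """Feed the weakest cases back to the LLM and ask for a corrected prompt."""
    feedback = "\n\n".join(
        f"INPUT:\n{r['input']}\nEXPECTED:\n{r['expected']}\nGENERATED:\n{r['generated']}"
        for r in rows if r["similarity"] < 0.9  # arbitrary threshold
    )
    msg = ("The prompt below tends to add extra detail not present in the expected outputs.\n"
           f"PROMPT:\n{prompt}\n\nFAILING CASES:\n{feedback}\n\n"
           "Rewrite the prompt to fix this. Return only the new prompt.")
    return client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": msg}],
    ).choices[0].message.content

# Iterate: evaluate, revise, re-evaluate.
for _ in range(3):
    rows = evaluate(prompt, examples)
    print("mean similarity:", np.mean([r["similarity"] for r in rows]))
    prompt = revise(prompt, rows)
```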
In other words, all of the engineering that you learned is still applicable here; it's just that the domain has shifted. What we are doing is learning how to use the tools better: better models, better prompts, using the tools themselves and whatever else we know to improve the prompts, and maybe improve the models. The models themselves are being improved by the labs using the models. And that's exactly what we should be doing as well.
I said I'll show you examples of how I'm using it and I showed how I'm ideating with it. And I also said I will show how I code with it. This ideator tool that I showed you was entirely coded by an LLM.
Host Conclusion
Okay, that's Anand. Anything more I have to say? I don't know. Great job. Thank you, thank you, thank you. And actually I want to leave it open to the floor, you know? What did you guys get from this session?
Question: To use LLMs.
Answer: If you are not using them till now, then yes, absolutely. Anything else?
Question: Go as life takes you.
Answer: Go as life takes you. Okay.
Question: Transform yourself.
Answer: That's one thing. That’s for myself, yeah.
Question: Keep thinking about LLMs.
Answer: Keep thinking about LLMs. Okay. In fact, once again, I would say don't think. Let the LLM think for you. That's what he said. Right? Don't think.
Question: Along the same lines: if we use LLMs continuously, our brains will deteriorate.
Answer: If we use calculators continuously, we stop being able to do mental mathematics. If we use machines continuously, our muscles will atrophy. If we stop cultivating food, we will lose our ability to survive when there is no food available to us. If we stop wearing clothes, we will lose the ability to protect ourselves against the weather. We should stop doing all of these if we want to live in a very harsh environment where we don't have this kind of support. I'm not saying that we should not do that. I'm saying that over time, the opportunities we have to live in such environments, and the need for such skills, will keep reducing. We had to study log tables; we did not have the opportunity to use calculators. The current generation has the opportunity to use scientific calculators in exams, but not computers. In my exams, I tell them, "Please use the internet, please use ChatGPT, please use your friends, please use your pets if you want, work in a group, pay somebody to take the exam for you, but get the job done." After having told them all of this, only 50% are copying. The rest are saying, "No, no, I still will not copy. I will do it by myself." And then they come to my company and say, "No, I will discover the wheel by myself. I will not reuse, I will only reinvent." There is a place for originality, all of that. There is also a place for reuse, and standing on the shoulders of technological innovation is not necessarily a bad thing, but it comes with consequences.
Host: Great. Thank you. I don't know if many of you are aware, but Anand is also a professor at IIT Madras, and he turns out a lot of new nerds in the space of artificial intelligence, data science, and data. If you say data, I think that's him. Do check out his work. I think the biggest learning for all of us, and why I really wanted him to be here, is that Anand was not like this two and a half years back. I have known him for years, and I won't reveal my age here, or his, but I have known him for a long time. If you see how he has transformed himself over the last two and a half years, from that old data science, data engineering kind of person to what he is today, you can figure it out when you start following his posts on LinkedIn. He posts his thoughts and learnings almost every day because he wants to share them with the world, with the view, if I'm not wrong, that when more and more people share like this, we learn, we let our LLMs learn more from it, and it's going to help us when we need it. It's a very different thought process. And I think if you are still typing out and creating PPTs yourself, you have a problem. If you are still using Excel and formulae on it, we have a problem. If you're still not able to work out how to do data analysis on the data in front of you, if you're trying to do a lot of things manually, that means we are not being efficient. Our promise to our business is about making ourselves more efficient and making our businesses more efficient. And I think there's a huge opportunity; there's a world of abundance right in front of us. And as he talked about with Moore's Law, it is always compounding. So please reflect back; I think the recording is going to be available. Do take a look at it. I'm sure a lot of you will watch the recording again: what he did, how he did it, how he did the comparisons, how he did the verification of one LLM with another LLM, etc. So please do that in your day-to-day life, and ensure that we all actually embrace this as part of our life and are not afraid of it or staying away from it. That's the last thing we want. And it ties very much into our overall strategy this year of building the AI muscle. Okay? So thank you, Anand. Thanks a lot. That was eye-opening, heart-opening, I don't know what all to say. On behalf of everyone, thanks a lot.
Thank you. Thanks a lot, sir. I also did not know it was so heavy.