[0:00]LangChain is a framework that allows you to build applications on top of LLMs, or large language models. In this crash course video, we are going to go over all the basics of LangChain, and then we will build a restaurant idea generator application using Streamlit, where you can input any cuisine, Indian, Mexican, etcetera, and it will generate a fancy restaurant name along with menu items. First, let us understand what LangChain is and what kind of problem it addresses. When you are using ChatGPT as an application, internally it is making calls to the OpenAI API, which in turn uses an LLM such as GPT-3.5 or GPT-4. In this case, ChatGPT itself is not an LLM; it is an application, whereas GPT-3.5 and GPT-4 are large language models. Now let's say you want to build a restaurant idea generator application where you give it a cuisine and it generates a fancy name, such as Taco Temptation for Mexican, along with menu items. If you give it Indian cuisine, it will say Curry Palace, or Sahara Palace for Arabic, along with menu items. This is the sample application we are going to build, but it is an LLM-based application, and for this we can use the same architecture as ChatGPT: we can directly call the OpenAI API, and here I have provided a screenshot of their main API, so you can call it and get behavior similar to ChatGPT. Internally it will use the GPT-3.5 or GPT-4 model. Once again, the restaurant idea generator is an application, similar to ChatGPT, but internally you are using the OpenAI API and the LLMs. Now, there are a couple of limitations to this approach. And by the way, the reason I'm telling you this is that nowadays there is a big boom in the industry where every business wants to build its own LLM-based application. You would think, why can't they just use ChatGPT? Because ChatGPT has no access to their internal organization data. So people want to build their own applications based on LLMs.
Okay, so there is a clear demand and a clear boom in the industry for this. So why don't businesses use this kind of architecture? Well, there are a couple of things to consider. First of all, calling the OpenAI API has a cost associated with it: roughly $0.002 per 1,000 tokens. You can check the OpenAI pricing page for exact numbers, but there is a cost, and if you are a startup with funding issues and a limited budget, this is going to be a bottleneck for you. Another thing is, you might have noticed that ChatGPT can't answer the latest questions; its knowledge is limited to September 2021, as of this recording. So if you want to incorporate some latest information, say from Google, Wikipedia, or somewhere else, you can't get that here. The other issue is, AtliQ is my own software development and data science company. If I want to know how many employees joined last month, ChatGPT can't answer, because it doesn't have access to my internal organization data. So if you use this kind of architecture for building your application, you will hit some roadblocks, or rather, you will have some limitations. And look, the OpenAI folks are pretty smart; if they wanted, they could address all of this, but their stance is very clear: we will provide foundational APIs, and building a framework is something that other people should do. And that's what happened. See, the OpenAI API alone is not enough to build an LLM application. You need some kind of framework where you can call OpenAI's GPT-3 or GPT-4, or, if you want to save on cost, some open-source model such as Hugging Face's BLOOM. There are so many models out there. Let's say you want to use one of them and don't want to spend money on OpenAI's GPT-3 model; then this framework should provide that plug-and-play support, where you can integrate with any one of these models and your code remains more or less the same.
This framework should also provide integration with Google Search, Wikipedia, or even your own organizational databases, so that the application can pull information from these various sources as well. And this framework is LangChain. That is what it does: it's a framework that allows you to build applications using LLMs. Okay, let's install LangChain now and do some initial setup. Let us first create an account on OpenAI. You can go to the OpenAI website, click on login, and sign up using Google or an individual email. Once your login is created, you will come to a dashboard. So let me just show you: you go to OpenAI, say login, you're logged in, click on API, and then from your account you can go to Manage account and API keys. You will find a key here which will look something like sk-something. That is like a password, so you need to use that key in your code for LangChain. You can also create separate keys for separate projects. I have some client projects going on, YouTube tutorials, so for each of them I have a separate key. In your case, you can just use the one key. By the way, you can generate a new key here as well. So let's say you have that key ready with you.
[5:47]Then you can just import the os module, and with the os module you can create an environment variable holding that particular key. Your key will be sk-something. In my case, I have stored that key in a separate Python file; I don't want to share the key with all of you, that is the reason. That Python file, secret_key.py, has my own internal key; I can have any number of keys in there, and I'm just importing that variable here and setting it. Control-Enter, so that is set. Now, let's go to the terminal and install some modules: you are going to run pip install langchain, that's number one, and the second module you are going to install is pip install openai. Once you have installed those modules, let's import a few important things from LangChain. We are going to import the LLM wrapper called OpenAI.
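The key setup described above can be sketched as follows. The `"sk-your-key-here"` value is a placeholder, and the `secret_key.py` pattern in the comments mirrors what the video does; substitute your own key and file name.

```python
import os

# Placeholder key -- replace with the real "sk-..." key from your
# OpenAI dashboard, and never commit a real key to source control.
os.environ["OPENAI_API_KEY"] = "sk-your-key-here"

# The video instead keeps the real key in a separate secret_key.py:
#   from secret_key import openai_key
#   os.environ["OPENAI_API_KEY"] = openai_key
# LangChain's OpenAI wrapper reads this environment variable automatically.
print(os.environ["OPENAI_API_KEY"].startswith("sk-"))
```

The install step from the terminal is simply `pip install langchain openai`.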
[6:55]Now, we are using OpenAI because, even though I know it costs some money, it is the best one. If you want some other one, just hit Tab and it will show you Hugging Face and all the other types of LLMs that it has available. We are happy with OpenAI for now, and then I will create my OpenAI model. It has a parameter called temperature. What temperature means is how creative you want your model to be. If the temperature is set to, let's say, zero, it means it is very safe; it is not taking any bets. But if it is one, it will take risks; it might generate wrong output, but it is very creative at the same time. I tend to set it to 0.6 or 0.7, things like that. Now you can pass any question to that LLM. So let's say: I want to open a restaurant for Indian food, suggest a fancy name for this. I want some fancy name for it, I'm not able to come up with a restaurant name idea myself, so let's see what it does. I typed the same question into ChatGPT as well: see, I want to open a restaurant for Mexican food, and it told me this. If you say Indian food, it will tell you something else. So we are using essentially the same concept here. Okay, so here it says Maharaja's Palace Cuisine. If you say Italian food, see, the name sounds real, as if it's an Italian restaurant. So we have imported that OpenAI class, which created an LLM, and to the LLM we are just passing a simple text prompt. Now, I don't want to keep changing this same string.
[8:52]So I will now go ahead and create something called a prompt template. From langchain.prompts, you can import PromptTemplate. To that prompt template, you pass input_variables, which will be cuisine, and then the template that you want looks something like this. So I'll just copy-paste here, and what we are doing is replacing that Italian, etcetera, with the cuisine variable. This template is called prompt_template_name; it is for the restaurant name, that's why I'm saying name here. Once that template is created, you can just say prompt_template_name.format, and to that format you can pass cuisine as, let's say, Mexican. And see: I want to open a restaurant for Mexican food. If you say Italian, it will say Italian. This is more like Python string formatting. You would be wondering why we don't just use Python string formatting. Well, let me show you, using something called a chain. We are going to use this concept of a chain in LangChain, and it is one of the most important objects in the LangChain framework; you can figure that out from the name of the framework itself. We are importing LLMChain. LLMChain is essentially a very simple object where you are saying: my LLM is this, whatever you created here, and my prompt is this prompt template. And this is my chain, and on the chain you can say chain.run. Let's say I want to open an American restaurant. See: The All-American Grill & Bar. So now, here I don't have to pass the whole sentence, "I want to open a restaurant for such-and-such"; I just pass the cuisine variable and it will work every time. Mexican, see. So internally it is calling the OpenAI API, and we made that connection via this module here. If you are using Hugging Face, then you'll have to do the Hugging Face setup and it will call Hugging Face. Okay.
Here, I know it's probably not the best example, but the idea is that I wanted to demonstrate this k parameter, and based on your use case, you might see a benefit in using ConversationBufferWindowMemory.



