From the course: Building AI Agents with AutoGen
Configuring LLMs and setting up AutoGen - Autogen Tutorial
At the core of an AI agent is a large language model. LLMs enable agents to converse in natural language and to transform between structured and unstructured text. So in this video, we first need to configure the large language models that will power our AI agents.

First of all, we need to import all the required libraries. I'm importing autogen to create our AI agents, the os module to access directories and files, and the dotenv package, which lets us load environment variables using the load_dotenv() function. All our environment variables, including all of our API keys, are stored in a hidden file called .env. Let's quickly run this. It has returned True, which means all our environment variables are now loaded into the runtime.

The next step is to configure the large language model. That means we need to specify which model we are using and who is providing that model. We are using OpenAI's GPT-4o for this entire course. llm_config is a Python dictionary with two keys: the model, where I specify which model we are using, and the API key. This API key is something you will have to create on your own; the instructions are provided in the lecture notes. os.getenv retrieves the API key stored under the name OPENAI_API_KEY, which is already present in your .env file. Run this, and your llm_config dictionary is ready to be passed on to your AI agents.

But let's say you want to use multiple large language models, for example one from OpenAI and one from another provider, or an open-source large language model. That is also possible. In this case, we create a list of LLM configurations, a list of dictionaries. So here I have config_list, which is a list of dictionaries, where each dictionary represents a model.
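The single-model setup described above can be sketched like this. It assumes a .env file in the working directory containing a line of the form OPENAI_API_KEY=sk-...; the third-party imports are shown as comments so the configuration itself runs with the standard library alone.

```python
import os

# Third-party packages used in the video
# (install with: pip install autogen python-dotenv):
#
# import autogen
# from dotenv import load_dotenv
# load_dotenv()  # returns True once the .env file is found and parsed

# Single-model configuration: the model name plus the API key
# read from the environment (set in your .env file).
llm_config = {
    "model": "gpt-4o",
    "api_key": os.getenv("OPENAI_API_KEY"),
}
```

This dictionary is what gets passed to the agents created later in the course; keeping the key in .env means it never appears in the notebook itself.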
I need to specify the model name again. GPT-4o is the first model I'm using here, along with the API key, which comes from my .env file. The second model is Llama-7B. I have configured it this way because I'm running it on my local system: the base URL points to port 8080 on my machine, where the model is being served locally by a Llama server. So if you want to use multiple LLMs, you can configure them in a config_list like this. Your config_list is now ready, and you can use multiple LLMs in this manner. Throughout the course, though, we'll be using one single LLM, so we'll stick with the llm_config configuration throughout. In the next video, we'll see how to create a basic chatbot and use these LLMs to create a ConversableAgent.
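A minimal sketch of the multi-model config_list described above. The second entry is illustrative: the exact model name and port depend on how you serve the model locally, so adjust base_url to wherever your local server is listening.

```python
import os

config_list = [
    {
        # Hosted model: name plus API key from the .env file.
        "model": "gpt-4o",
        "api_key": os.getenv("OPENAI_API_KEY"),
    },
    {
        # Local open-source model served on port 8080 of this machine.
        "model": "llama-7B",
        "base_url": "http://localhost:8080",
        # Local servers usually don't check the key, but the field
        # is still expected, so a placeholder value is used.
        "api_key": "not-required",
    },
]
```

Agents can then be handed this list instead of a single llm_config, and will fall through the entries in order; for this course only the first, single-model configuration is used.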