From the course: Building a Project with the ChatGPT API

Create chatbots using chat completion - ChatGPT Tutorial

- You will not believe how easy it is to use ChatGPT to add interactive assistance and improve user experiences within your applications. The Chat Completions API passes a list of messages to the model as input and receives a model-generated response as output. The conversation can be single-turn, without any back-and-forth, or multi-turn.

Do you remember when I explained tokenization to you? If you recall, traditional GPT models consume unstructured text, which is represented to the model as a sequence of tokens. This is considered text in and text out: a prompt string is accepted, and a completion to append to the prompt is returned. Unlike traditional GPT models, chat models like GPT-3.5 and GPT-4 are conversation in and message out. That is, the model takes in a series of messages along with metadata. I bet many of you are thinking, "How does this work under the hood?" Well, the messages are still translated to tokens before they are consumed by the model, but our interaction with the model is through a series of messages.

Let me explain using code. You can follow along by accessing the exercise files in GitHub. We'll be using the file 02_01 for this video, and Jupyter Notebooks for the environment. If you haven't used Python or Jupyter Notebooks, I don't expect you to get into the code right now. I'll go more in depth about setting up your environment in chapter 3. You'll also get a chance for some hands-on challenges in a later chapter.

Now, going into our code, here's the format of a basic Chat Completion. You'll notice here on line 4, the first line in the array is the system message. The system message sets the context for the model and establishes the persona for the AI assistant. You'll customize this each time for your specific use case. After the system message, here on line 6, you include a series of messages between you, the user, and the AI assistant.
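The message format described above can be sketched as follows. This is a minimal illustration, assuming the OpenAI Python library (v1+) and an `OPENAI_API_KEY` environment variable; the model name is just one chat-capable choice, not a requirement of the course:

```python
import os

# The first entry in the array is the system message: it sets the
# context and persona for the AI assistant. The user message follows.
messages = [
    {"role": "system", "content": "You are a helpful assistant that acts as a sous chef."},
    {"role": "user", "content": "When should I use Capellini pasta?"},
]

# The actual API call requires a valid key, so it is guarded here.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; any chat model works
        messages=messages,
    )
    print(response.choices[0].message.content)
```

In a multi-turn conversation, you would append the assistant's reply (with `"role": "assistant"`) and the next user message to the same list before calling the API again, so the model sees the full history.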
A sample conversation would look like this. In the system message, you pass in the context: "You are a helpful assistant that acts as a sous chef." The next prompt comes from you, the user, and it says, "When should I use Capellini pasta?" I've already executed this code and printed out the response. You'll notice that the model talks about Capellini pasta, also known as angel hair pasta, and it offers up four suggestions on when to use it: light tomato sauces, classic Italian dishes, light seafood dishes, and Asian-inspired dishes. And just like a great sous chef, it provides tips so that you do not overcook the pasta.

You'll see the model is wordy, and this is a known challenge. Don't forget that we are charged per token on both the input and the output. To fix this, we could change the initial message to teach, or train, the model what we expect back; this is a brief introduction to prompt engineering. Notice here, the system message is exactly the same, but I've tweaked the second prompt just a bit. Now it reads, "Can you tell me when I should use Capellini pasta in 15 words or less?" And notice the response back: "Capellini pasta is best used in delicate, light dishes, like simple sauces or seafood."

This course will use OpenAI's Python library to access the API endpoints. However, you can test them directly from tools like cURL or Postman. Now that you understand how to create chatbots using Chat Completion, let's look at the Text Completion API and explore the differences between the two.
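The prompt-engineering tweak above can be sketched in code. Only the user prompt changes; the `max_tokens` cap shown in the comment is an extra safeguard I'm adding for illustration (it hard-limits billed output tokens) and is not part of the course's example:

```python
# Same system message as before; only the user prompt is tweaked to
# constrain the length of the reply.
concise_messages = [
    {"role": "system", "content": "You are a helpful assistant that acts as a sous chef."},
    {"role": "user", "content": "Can you tell me when I should use Capellini pasta in 15 words or less?"},
]

# With the OpenAI Python library (v1+), the call would look like:
# response = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=concise_messages,
#     max_tokens=40,  # hypothetical hard cap, since billing is per token
# )
```

Constraining length in the prompt shapes *what* the model writes, while `max_tokens` simply truncates the output, so the two are best used together.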
