From the course: Hands-On AI: Build a RAG Model from Scratch with Open Source
Prompt engineering
- [Speaker] We now have nearly all of our necessary ingredients. We have a function that generates a variable called context, which contains the context relevant to the provided query. And of course, we also have a variable that contains the query itself. We now need to feed the LLM the query and the context from the database, while ensuring that the model responds to the user's query based only on the context we provide. We'll do this by constructing a prompt, which will have some hardcoded language that instructs the LLM to do just that. There are two options for how we can customize the prompts that are passed to the LLM. One is directly modifying the query that's being passed to the large language model. The other is going back to the model file and editing some of the parameters there. We're going to use a combination of both, as they serve different purposes. The model file will be used to set static content…
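The first option described above, combining the retrieved context and the user's query into a single prompt with hardcoded instructions, can be sketched roughly as follows. This is a minimal illustration, not the course's exact code: the function name `build_prompt` and the instruction wording are assumptions.

```python
def build_prompt(context: str, query: str) -> str:
    """Combine retrieved context and the user's query into one prompt.

    The hardcoded instruction text below tells the LLM to answer based
    only on the provided context; the exact wording is illustrative.
    """
    return (
        "Answer the question using only the context provided below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Example usage (hypothetical values):
prompt = build_prompt("Paris is the capital of France.",
                      "What is the capital of France?")
print(prompt)
```

The resulting string would then be sent to the LLM in place of the raw query, so the model sees both the instructions and the retrieved context on every request.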