From the course: Hands-On AI: Build a RAG Model from Scratch with Open Source
Vectorizing a query and finding relevant text
- [Narrator] The next task we'll take on is identifying the text relevant to a given query, and this breaks down into a few subtasks. First, we must convert our query into a computer-friendly form for searching across our database. As we've seen before, this means creating a vector embedding from our query using the same embedding model we used to generate our vector database. Once we have the query's embedding, we can compare it against all the other vector embeddings in the database to find the database content most similar to our query. There are many ways to perform such a search, but for our example project, we'll stick to an exact search that identifies the top nearest vectors. We explicitly search for not just the single nearest vector, but a collection of the nearest vectors, because the nearest vector alone does not ensure that it is associated with the content that our…
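The exact-search step described above can be sketched as follows. This is a minimal illustration, not the course's actual code: it assumes the query and database embeddings have already been produced by the same embedding model, and stands in tiny hand-written 3-dimensional vectors for real model output. With vectors normalized to unit length, the dot product equals cosine similarity, so ranking by dot product gives the top nearest vectors.

```python
import numpy as np

def top_k_exact(query_vec, db_vecs, k=3):
    """Exact nearest-vector search: compare the query embedding against
    every database embedding and return the indices of the k most similar."""
    # Normalize so that the dot product is cosine similarity
    q = query_vec / np.linalg.norm(query_vec)
    db = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    sims = db @ q                      # one similarity score per database vector
    idx = np.argsort(-sims)[:k]        # indices of the k highest scores
    return idx, sims[idx]

# Toy 3-dim "embeddings" standing in for real embedding-model output
db = np.array([[1.0, 0.0, 0.0],
               [0.9, 0.1, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])

idx, scores = top_k_exact(query, db, k=2)
```

Note that we ask for the top k matches rather than only the single best one, exactly as the narration suggests: the closest vector is not guaranteed to carry the text that actually answers the query, so retrieving several candidates gives the downstream prompt more context to work with.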
Contents
- Running your LLM from open source (2m 16s)
- Collecting data to generate our corpus (1m 54s)
- What are vector embeddings, and how are they generated? (3m 12s)
- Setting up a database and retrieving vectors and files (2m 53s)
- Vectorizing a query and finding relevant text (2m 48s)
- Prompt engineering and packaging pieces together (3m 17s)