From the course: Prometheus and Grafana: Visualizing Application Performance
The role of context awareness: RAG and MCPs
- [Instructor] By now, you must be wondering how a machine can possibly have all the relevant information about your unique platform setup, business, or even team. Without context, the only information available to an LLM is the data it was trained on. That information clearly won't be enough to solve your local problems accurately. So how can we bring in that context, and how do we do it in Grafana? There are currently two main ways to provide context to LLMs. The first is by providing the ability to retrieve external knowledge before generating a response. This is known as RAG, which stands for Retrieval Augmented Generation. It allows the LLM to obtain either domain-specific or up-to-date information. The second way to provide context is by implementing the Model Context Protocol, commonly referred to as MCP. This protocol provides a framework for LLMs to interact with external systems and data sources. With either of these two mechanisms, you can easily augment the information…
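The RAG flow described above can be sketched in a few lines: retrieve relevant documents first, then prepend them to the prompt before generation. This is a minimal illustration, not a production implementation; the knowledge-base snippets are hypothetical, and the naive keyword-overlap scoring stands in for the vector-embedding search a real RAG pipeline would use.

```python
# Minimal sketch of the RAG pattern: retrieve external knowledge,
# then augment the prompt with it before calling the LLM.
# The documents below are hypothetical examples of local, domain-specific
# context an LLM's training data could not contain.

KNOWLEDGE_BASE = [
    "Our Grafana instance runs at grafana.internal:3000 behind nginx.",
    "Alert rule HighCPU fires when node CPU exceeds 90% for 5 minutes.",
    "Prometheus scrapes application metrics every 15 seconds.",
]


def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Rank documents by shared-word count with the question.

    A real setup would embed documents and query into vectors and
    rank by similarity; keyword overlap keeps the sketch dependency-free.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(question: str) -> str:
    """Augment the user's question with retrieved context."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"


print(build_prompt("How often does Prometheus scrape metrics?"))
```

The augmented prompt is what actually gets sent to the LLM, so the model answers from your local facts rather than from its training data alone. MCP addresses the same gap differently: instead of stuffing retrieved text into the prompt, the model calls out to external tools and data sources through a standard protocol.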
Contents
- Introduction to generative AI (3m 41s)
- How AI can help solve challenges in Grafana (3m 14s)
- Grafana and LLMs: Current features (3m 51s)
- The role of context awareness: RAG and MCPs (4m 14s)
- Lab 1: An LLM assistant for troubleshooting (6m 50s)
- Lab 2: Upgrading the assistant with MCP (5m 38s)