HPE OpsRamp

LLM Observability in OpsRamp

 
lsantiagos01
Occasional Contributor

LLM Observability in OpsRamp

Hello,

Is there any roadmap or recommended approach for LLM observability within OpsRamp?

I’m interested in understanding how the platform can be used or integrated to observe Large Language Model (LLM) workloads, including inference metrics, GPU utilization, response latency, endpoint availability, and vector/embedding behavior.

Does OpsRamp currently provide any native integration, API Poller, or customizable template that could be adapted to monitor AI or LLM inference pipelines, whether deployed locally or via providers like OpenAI, Hugging Face, or Azure AI?

Thank you,
lsantiagos01

2 REPLIES
RaghuIyengar
HPE Pro

Re: LLM Observability in OpsRamp

We already monitor GPUs and other infrastructure components.

Regarding LLM-specific monitoring, I'm not aware of anything as of now.



RaghuIyengar
HPE Pro

Re: LLM Observability in OpsRamp

We also provide the ability to do custom monitoring using RSE (Remote Script Executor): https://docs.opsramp.com/solutions/monitors/agentless-monitors/remote-script-executor/
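For example, here is a minimal sketch of the kind of probe script that could be scheduled through RSE. The endpoint URL, model name, and metric names below are hypothetical, and the output format would need to be mapped to whatever your RSE monitor definition expects:

#!/usr/bin/env python3
# Hypothetical LLM endpoint probe for an RSE-style custom monitor.
# Assumptions: an OpenAI-compatible chat completions endpoint; simple
# name=value output that is mapped to metrics in the monitor definition.
import json
import time
import urllib.request

LLM_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical endpoint
PAYLOAD = {
    "model": "my-model",  # hypothetical model name
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 1,
}

def probe():
    req = urllib.request.Request(
        LLM_URL,
        data=json.dumps(PAYLOAD).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    start = time.time()
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            latency_ms = (time.time() - start) * 1000
            available = 1 if resp.status == 200 else 0
    except Exception:
        latency_ms = -1
        available = 0
    # Emit simple name=value pairs; map these to metrics
    # in the RSE monitor configuration.
    print(f"llm.endpoint.available={available}")
    print(f"llm.inference.latency_ms={latency_ms:.0f}")

if __name__ == "__main__":
    probe()

A similar script could collect GPU utilization (for example by parsing nvidia-smi output) or token throughput, depending on what the inference stack exposes.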

For many public cloud workloads, we already have integrations, for example AWS (including AWS Lex) and Azure (including Azure Machine Learning).

We also provide the ability to integrate using webhooks.
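As an illustration, an external LLM pipeline could push alerts toward a webhook integration with a plain HTTP POST. The URL and payload fields below are placeholders, since the actual schema depends on how the webhook integration is configured:

#!/usr/bin/env python3
# Hypothetical example of pushing an LLM-related alert to a webhook
# integration. WEBHOOK_URL and the payload fields are illustrative only;
# use the URL and field names your integration actually expects.
import json
import urllib.request

WEBHOOK_URL = "https://example.opsramp.com/integrations/llm-webhook"  # placeholder

alert = {
    "subject": "LLM inference latency above threshold",
    "description": "p95 latency 2300 ms on llm-endpoint-01",  # example values
    "severity": "Warning",
}

req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(alert).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print("webhook response status:", resp.status)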

 

 


