4 weeks ago
LLM Observability in OpsRamp
Hello,
Is there any roadmap or recommended approach within OpsRamp for LLM observability?
I’m interested in understanding how the platform can be used or integrated to observe Large Language Model (LLM) workloads, including inference metrics, GPU utilization, response latency, endpoint availability, and vector/embedding behavior.
Does OpsRamp currently provide any native integration, API Poller, or customizable template that could be adapted to monitor AI or LLM inference pipelines, whether deployed locally or via providers like OpenAI, Hugging Face, or Azure AI?
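To make this concrete, the kind of check I have in mind is roughly the sketch below. The endpoint URL, model name, and token are placeholders for whatever inference service is being monitored, not anything OpsRamp-specific:

```python
# Hypothetical probe of an OpenAI-compatible chat endpoint.
# ENDPOINT, HEADERS, and the model name are placeholders, not an OpsRamp API.
import time
import requests

ENDPOINT = "https://example.internal/v1/chat/completions"  # placeholder
HEADERS = {"Authorization": "Bearer <token>"}               # placeholder

payload = {
    "model": "my-llm",                                      # placeholder model
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 8,
}

start = time.monotonic()
try:
    resp = requests.post(ENDPOINT, json=payload, headers=HEADERS, timeout=30)
    latency_ms = (time.monotonic() - start) * 1000
    available = 1 if resp.ok else 0
    usage = resp.json().get("usage", {}) if resp.ok else {}
except requests.RequestException:
    latency_ms, available, usage = None, 0, {}

print({
    "endpoint_available": available,                        # availability
    "response_latency_ms": latency_ms,                      # inference latency
    "completion_tokens": usage.get("completion_tokens"),    # throughput signal
})
```

Essentially I'd like to know whether metrics like these can be fed into OpsRamp through an existing integration or template rather than a fully custom build.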
Thank you,
lsantiagos01
Thursday
Re: LLM Observability in OpsRamp
We already monitor GPUs and other infrastructure components.
Regarding LLM-specific observability, I'm not aware of anything as of now.
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
Thursday
Re: LLM Observability in OpsRamp
We also provide the ability to do custom monitoring using RSE (Remote Script Executor): https://docs.opsramp.com/solutions/monitors/agentless-monitors/remote-script-executor/
For many public cloud workloads, we already have integrations for AWS (including AWS Lex) and Azure (including Azure Machine Learning).
We also provide the ability to integrate using webhooks.
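As a rough illustration only, a custom check for an LLM host run through RSE could look something like the sketch below. The metric names and the plain key=value output are assumptions for this example; the exact script form and output contract an RSE monitor expects are described in the documentation linked above.

```python
# Sketch of a custom check that RSE could execute on a monitored GPU host.
# Metric name and key=value output are illustrative assumptions only.
import subprocess

def gpu_utilization_percent():
    """Read average GPU utilization via nvidia-smi (assumes NVIDIA GPUs)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    values = [int(v) for v in out.stdout.split()]
    return sum(values) / len(values) if values else 0

try:
    print(f"llm.gpu.utilization={gpu_utilization_percent()}")
except (OSError, subprocess.CalledProcessError, ValueError):
    # Report a sentinel value instead of crashing the collector.
    print("llm.gpu.utilization=-1")
```

The same pattern extends to endpoint latency or token-throughput checks against a locally hosted or provider-hosted inference API.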
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]