Datadog LLM Observability Is Now Generally Available to Help Businesses Monitor, Improve and Secure Generative AI Applications
New product helps companies like WHOOP and AppFolio monitor hallucinations, adopt LLMs and release generative AI features with confidence
Organizations across all industries are racing to release generative AI features cost-effectively, but bringing them to production presents real challenges: the complexity of LLM chains, their non-deterministic behavior and the security risks they pose.
Datadog LLM Observability helps customers overcome these challenges so they can confidently deploy and monitor their generative AI applications. The new product provides visibility into each step of the LLM chain, making it easy to identify the root cause of errors and unexpected responses such as hallucinations. Users can also monitor operational metrics like latency and token usage to optimize performance and cost. With out-of-the-box quality and safety evaluations, they can assess the quality of their AI applications—such as topic relevance or toxicity—and gain insights to mitigate security and privacy risks.
"WHOOP Coach is powered by the latest and greatest in LLM AI.
"The Datadog LLM Observability solution helps our team understand, debug and evaluate the usage and performance of our GenAI applications. With it, we are able to address real-world issues, including monitoring response quality to prevent negative interactions and performance degradations, while ensuring we are providing our end users with positive experiences," said
"There's a rush to adopt new LLM-based technologies, but organizations of all sizes and industries are finding it difficult to do so in a way that is both cost effective and doesn't negatively impact the end user experience," said Yrieix Garnier, VP of Product at
LLM Observability helps organizations:
- Evaluate Inference Quality: Visualize the quality and effectiveness of LLM applications' conversations—such as failure to answer—to monitor hallucinations, drift and the overall experience of the apps' end users.
- Identify Root Causes: Quickly pinpoint the root cause of errors and failures in the LLM chain with full visibility into end-to-end traces for each user request.
- Improve Costs and Performance: Efficiently monitor key operational metrics for applications across all major platforms—including OpenAI, Anthropic, Azure OpenAI, Amazon Bedrock, Vertex AI and more—in a unified dashboard to uncover opportunities for performance and cost optimization.
- Protect Against Security Threats: Safeguard applications against prompt hacking and help prevent leaks of sensitive data, such as PII, emails and IP addresses, using built-in security and privacy scanners powered by Datadog Sensitive Data Scanner.
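The last bullet describes the kind of check a sensitive-data scanner performs. As a rough illustration under stated assumptions—the patterns and the `redact_pii` helper are hypothetical, not Datadog Sensitive Data Scanner's implementation—such a scanner matches known PII patterns in an LLM response and redacts them before the text is logged or returned:

```python
import re

# Illustrative PII patterns; a production scanner would use a much larger,
# validated rule set.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace matched PII with placeholders; return the redacted text
    and the kinds of PII that were found."""
    found = []
    for kind, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(kind)
            text = pattern.sub(f"[REDACTED_{kind.upper()}]", text)
    return text, found

redacted, kinds = redact_pii("Contact me at alice@example.com from 10.0.0.1")
```

Running the scan before responses reach logs or end users is what keeps leaked emails and IP addresses out of downstream systems.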
Datadog LLM Observability is generally available now. To learn more, please visit: http://datadoghq.com/product/llm-observability.
Forward-Looking Statements
This press release may include certain "forward-looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, or the Securities Act, and Section 21E of the Securities Exchange Act of 1934, as amended, including statements on the benefits of new products and features. These forward-looking statements reflect our current views about our plans, intentions, expectations, strategies and prospects, which are based on the information currently available to us and on assumptions we have made. Actual results may differ materially from those described in the forward-looking statements and are subject to a variety of assumptions, uncertainties, risks and factors that are beyond our control, including those risks detailed under the caption "Risk Factors" and elsewhere in our filings with the Securities and Exchange Commission.
Contact
press@datadoghq.com
View original content to download multimedia: https://www.prnewswire.com/news-releases/datadog-llm-observability-is-now-generally-available-to-help-businesses-monitor-improve-and-secure-generative-ai-applications-302182343.html
SOURCE Datadog, Inc.