Observability for LLM apps: what to log, privacy-safe telemetry, KPIs
TL;DR Highlight
A 4-layer framework covering what to log and which KPIs to track when monitoring LLM apps in production.
Who Should Read
ML platform engineers and tech leads responsible for LLM app reliability who need a systematic approach to production monitoring beyond basic uptime checks.
Core Mechanics
- LLM production monitoring requires a fundamentally different approach than traditional software monitoring — outputs are probabilistic and failure modes are often subtle
- Proposed 4-layer monitoring framework: (1) Infrastructure layer (latency, throughput, cost), (2) Model layer (output quality, hallucination rate, refusal rate), (3) Application layer (task completion, user satisfaction), (4) Business layer (conversion, retention, ROI)
- Most teams only monitor layer 1 — missing the layers where actual user-facing quality problems appear
- Key LLM-specific KPIs: output length distribution, vocabulary diversity, semantic coherence score, topic drift, and hallucination rate
- Anomaly detection for LLMs should focus on distribution shifts in output space, not just system metrics
- The paper provides specific logging schemas and alert thresholds for each layer
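The four layers above can be captured in a single per-call log record. A minimal sketch, assuming a Python service; all field names are illustrative, not taken from the paper:

```python
from dataclasses import dataclass, field, asdict
from typing import Optional
import time

@dataclass
class LLMCallRecord:
    # Layer 1 (infrastructure): latency and cost per call
    latency_ms: float
    cost_usd: float
    model_version: str
    # Layer 2 (model): output-quality signals, typically filled in by an async scorer
    output_tokens: int
    refusal: bool
    hallucination_score: Optional[float] = None
    # Layer 3 (application): implicit user feedback on this response
    user_edited: bool = False
    user_regenerated: bool = False
    # Layer 4 (business): keys for joining with product analytics later
    session_id: str = ""
    timestamp: float = field(default_factory=time.time)

record = LLMCallRecord(latency_ms=840.0, cost_usd=0.0021,
                       model_version="example-model-v1",
                       output_tokens=212, refusal=False)
print(asdict(record)["cost_usd"])  # -> 0.0021
```

Keeping layers 2-4 as nullable or default fields lets the record be written synchronously at call time and enriched later by async scorers and analytics joins.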
Evidence
- Analysis of production LLM incidents: 72% of quality problems were invisible to infrastructure-only monitoring
- Teams using 4-layer monitoring detected output-quality regressions 3x faster than those relying on infrastructure-only monitoring
- A hallucination-rate increase of >5% was identified as the most predictive leading indicator of user-complaint spikes
How to Apply
- Start with layers 1+2: log structured metadata for every LLM call (prompt length, template ID, response length, latency, cost, model_version) rather than raw prompt/response content, and add an async quality scorer that checks output length, vocabulary diversity, and optionally hallucination signals.
- Set up distribution alerts: track rolling averages of output statistics (mean length, top-k vocabulary overlap with baseline) and alert on >2 standard deviation shifts.
- Add layer 3 instrumentation by logging user actions after AI responses — edits, regenerations, thumbs down — as implicit quality signals without requiring explicit ratings.
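The distribution-alert step above can be sketched as a rolling baseline with a 2-standard-deviation threshold. A sketch only: the window size and minimum sample count are assumptions, not values from the paper:

```python
from collections import deque
import statistics

class OutputLengthMonitor:
    """Alert when an output length drifts >N std devs from a rolling baseline."""

    def __init__(self, baseline_window: int = 500, sigma_threshold: float = 2.0):
        self.baseline = deque(maxlen=baseline_window)
        self.sigma_threshold = sigma_threshold

    def observe(self, output_length: int) -> bool:
        """Record one observation; return True if it should trigger an alert."""
        alert = False
        if len(self.baseline) >= 30:  # need enough samples for a stable estimate
            mean = statistics.fmean(self.baseline)
            stdev = statistics.stdev(self.baseline)
            if stdev > 0 and abs(output_length - mean) > self.sigma_threshold * stdev:
                alert = True
        self.baseline.append(output_length)
        return alert

monitor = OutputLengthMonitor()
for n in [200, 210, 205, 195, 198] * 10:
    monitor.observe(n)       # stable baseline, no alerts fire
print(monitor.observe(900))  # sudden 4x length spike -> True
```

The same pattern extends to any scalar output statistic (vocabulary diversity, refusal rate per window); the key design choice is comparing each observation against a rolling baseline rather than a fixed threshold, so the alert tracks gradual drift in normal behavior.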
Code Example
# OpenTelemetry-based LLM span example (Python)
# llm_client, get_user_segment, calculate_cost, and run_safety_check are
# application-specific helpers assumed to be defined elsewhere.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

trace.set_tracer_provider(TracerProvider())  # register the SDK provider once at startup
tracer = trace.get_tracer("llm-observability")

def call_llm(prompt: str, user_id: str):
    with tracer.start_as_current_span("llm.interaction") as span:
        # Interaction layer: record metadata only, after PII removal
        span.set_attribute("llm.prompt.length", len(prompt))
        span.set_attribute("llm.prompt.template_id", "customer-support-v2")
        span.set_attribute("llm.user.segment", get_user_segment(user_id))  # PII removed
        # span.set_attribute("llm.prompt.content", prompt)  # ❌ storing full content prohibited
        with tracer.start_as_current_span("llm.execution") as exec_span:
            response = llm_client.complete(prompt)
            # Execution layer: track tokens and cost
            exec_span.set_attribute("llm.tokens.input", response.usage.prompt_tokens)
            exec_span.set_attribute("llm.tokens.output", response.usage.completion_tokens)
            exec_span.set_attribute("llm.cost.usd", calculate_cost(response.usage))
        with tracer.start_as_current_span("llm.safety") as safety_span:
            # Safety layer: record only the harmful-content detection result
            safety_score = run_safety_check(response.text)
            safety_span.set_attribute("llm.safety.score", safety_score)
            safety_span.set_attribute("llm.safety.flagged", safety_score < 0.7)
        return response
Terminology
Original Abstract
Large Language Model (LLM) applications increasingly form an integral part of enterprise software architecture, enabling conversational interfaces, intelligent assistants, and autonomous decision-support systems. While these applications provide tremendous flexibility and capability, their probabilistic nature, prompt dependency, and complex orchestration pipelines create new challenges for monitoring and reliability engineering. The traditional approach to observability, relying on logs, metrics, and traces, proves inadequate for measuring the semantic correctness, behavioral consistency, and governance risks associated with LLM applications. This study explores observability in LLM applications from three viewpoints: auditable data selection, privacy-preserving telemetry construction, and meaningful operational key performance indicator (KPI) definition. Following best practices from software observability and MLOps, the study proposes a model-agnostic conceptual framework for LLM observability that covers the interaction layer, execution layer, performance layer, and safety layer. In particular, the study focuses on applying privacy by design, including metadata-centric logging, selective redaction, and controlled access to telemetry data. It further introduces a well-defined set of operational KPIs specific to LLM applications, covering reliability, performance efficiency, output quality, and safety compliance. Together, these components provide a structured basis for detecting faults, managing costs, and ensuring the reliability of LLM applications, easing their deployment at the enterprise level.