RelayPlane vs OpenLLMetry

OpenLLMetry is an Apache 2.0 open-source observability SDK for LLMs built on OpenTelemetry, created by Traceloop. RelayPlane is an MIT-licensed npm proxy that intercepts LLM requests at the network level for real-time cost control and model routing. Here is how they compare for teams who need to govern their LLM spend.

TL;DR

Choose RelayPlane when you want:

  • npm install and running in 30 seconds with no code changes
  • Real-time cost control and model routing in the request path
  • Local SQLite cost tracking with no data leaving your machine
  • OpenAI-compatible drop-in with one baseURL swap, no instrumentation
  • Claude Code and Cursor cost tracking without SDK instrumentation

OpenLLMetry may work for you if you need:

  • Standard OpenTelemetry spans for LLM calls that plug into your existing OTel backend
  • Native SDK instrumentation across Python, JS/TS, Go, Ruby, and Java
  • LLM traces correlated with distributed traces across your broader application
  • Apache 2.0 licensed library with no proprietary components

Feature Comparison

Product type

RelayPlane sits in the critical path of every LLM request: it intercepts, routes, and logs each call transparently. OpenLLMetry is an instrumentation library that adds OpenTelemetry spans to your LLM calls by wrapping them in your application code. OpenLLMetry does not proxy or route requests.

  • RelayPlane: npm-native LLM proxy and gateway (local-first)
  • OpenLLMetry: open-source OpenTelemetry SDK for LLM observability (instrumentation library)
Install method

RelayPlane ships as a standalone npm binary: one command and you are proxying requests. OpenLLMetry requires installing a language-specific SDK and then adding instrumentation calls to your application code before any traces are collected.

  • RelayPlane: npm install -g @relayplane/proxy
  • OpenLLMetry: pip install traceloop-sdk (Python) or npm install @traceloop/node-server-sdk (JS/TS)
Requires application code changes

RelayPlane intercepts requests at the network level via a baseURL swap; no application code changes are required. OpenLLMetry requires you to initialize the SDK and wrap or patch every LLM call in your codebase before any traces are emitted.

No account required

Both RelayPlane and OpenLLMetry can run without a cloud account. RelayPlane runs entirely on localhost. OpenLLMetry is open source and can export traces to any OpenTelemetry-compatible backend you control, including self-hosted options.

Pricing

Both products are entirely free and open source. OpenLLMetry is released under Apache 2.0 by Traceloop and is part of the OpenTelemetry ecosystem. RelayPlane is MIT licensed. Neither product has a paid cloud tier for the core library.

  • RelayPlane: MIT open source, free self-hosted
  • OpenLLMetry: Apache 2.0, free and open source (no paid tiers)
Model routing and fallback

RelayPlane routes requests to different models based on complexity and cost, with automatic fallback on provider failures. OpenLLMetry is an observability library and does not route or proxy requests. It cannot reroute traffic or apply routing policies.
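The routing idea can be sketched in a few lines. This is a minimal illustration of complexity-based routing with fallback, not RelayPlane's actual policy engine; the model names and the token threshold are hypothetical:

```javascript
// Minimal sketch of cost-aware routing with fallback. The model names and
// the complexity threshold are hypothetical, not RelayPlane's built-in policy.
const ROUTES = [
  { model: "small-fast-model", maxPromptTokens: 2000 },        // cheap default
  { model: "large-capable-model", maxPromptTokens: Infinity }, // escalation
];

// Route short prompts to the cheap model, complex ones upward.
function pickModel(promptTokens) {
  return ROUTES.find((r) => promptTokens <= r.maxPromptTokens).model;
}

// Call the chosen model; on provider failure, fall back down the route list.
function routeWithFallback(promptTokens, callModel) {
  const start = ROUTES.findIndex((r) => r.model === pickModel(promptTokens));
  let lastError;
  for (const route of ROUTES.slice(start)) {
    try {
      return callModel(route.model);
    } catch (err) {
      lastError = err; // provider failed; try the next route
    }
  }
  throw lastError;
}
```

A real proxy would also consider latency and provider health, but the shape is the same: the decision happens in the request path, before any provider is called.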

Request interception (proxy mode)

RelayPlane intercepts every LLM request transparently via a baseURL swap. No code changes needed beyond pointing your client at localhost:4100. OpenLLMetry patches your LLM SDK client at the library level inside your application process. It does not function as a network proxy.

Local SQLite cost tracking

RelayPlane logs every request's exact dollar cost in local SQLite with no data leaving your machine. OpenLLMetry emits token usage as OpenTelemetry span attributes and metrics, but cost calculation and storage depend on your chosen OTel backend.
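The per-request cost arithmetic behind that log is simple; RelayPlane persists the rows in SQLite, while this sketch keeps only the calculation. The prices here are hypothetical dollars per million tokens, not real provider rates:

```javascript
// Per-request cost from token usage. Prices are hypothetical dollars per
// million tokens; real prices vary by provider and model.
const PRICES = {
  "small-fast-model": { input: 0.15, output: 0.6 },
  "large-capable-model": { input: 3.0, output: 15.0 },
};

// Exact dollar cost of one request: the figure a proxy can log per call.
function requestCost(model, inputTokens, outputTokens) {
  const p = PRICES[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1e6;
}
```

Because the proxy sees every request and response, it can compute this figure at call time rather than reconstructing it later from span attributes.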

No data leaves your machine

RelayPlane runs entirely on localhost by default with zero external telemetry. OpenLLMetry exports traces to whatever OpenTelemetry endpoint you configure; by default the SDK sends no data until you set an OTLP exporter, and the destination is fully in your control.

  • RelayPlane: yes, localhost-only by default
  • OpenLLMetry: depends on OTel exporter config
Spend governance and budget limits

RelayPlane can enforce spend limits and route away from expensive models when budgets are exceeded. OpenLLMetry tracks token usage as observability data but has no mechanism to block, reroute, or cap spending on live requests.
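In-path spend governance can be as simple as a gate that consults the running total before forwarding each request. A hypothetical sketch; the cap, the downgrade model, and the policy shape are illustrative, not RelayPlane's actual configuration:

```javascript
// Minimal sketch of in-path budget enforcement. The cap, the downgrade
// model, and the policy are hypothetical illustrations.
function makeBudgetGate({ dailyCapUsd, cheapModel }) {
  let spentUsd = 0;
  return {
    // Decide what to do with a request given the spend so far.
    route(requestedModel, estimatedCostUsd) {
      if (spentUsd + estimatedCostUsd > dailyCapUsd) {
        // Over budget: route away from the expensive model.
        return { action: "downgrade", model: cheapModel };
      }
      spentUsd += estimatedCostUsd;
      return { action: "forward", model: requestedModel };
    },
  };
}
```

The point of the sketch is the placement: only a component sitting in the request path can make this decision before the money is spent; an observability SDK sees the cost after the fact.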

OpenAI-compatible drop-in

RelayPlane exposes an OpenAI-compatible endpoint: set OPENAI_BASE_URL=http://localhost:4100 and your existing code works unchanged. OpenLLMetry requires SDK initialization and wraps LLM client libraries at the code level. It is not a network endpoint.
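Assuming a POSIX shell, the drop-in is a single environment variable; the port is the default stated above, and your provider API key handling is unchanged:

```shell
# Point any OpenAI-compatible client at the local proxy.
export OPENAI_BASE_URL=http://localhost:4100

# Existing code works unchanged: the official OpenAI SDKs read
# OPENAI_BASE_URL from the environment, so no source edits are needed.
```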

Works with Claude Code and Cursor

RelayPlane is designed for Claude Code, Cursor, Windsurf, and Aider. Because it is a network proxy, any tool that makes HTTP requests to an LLM can be pointed at it. OpenLLMetry instruments application code and cannot intercept traffic from AI coding assistants at the IDE or CLI level.

  • RelayPlane: yes, proxies any tool that speaks HTTP
  • OpenLLMetry: not applicable (SDK instrumentation, not a proxy)
Language support

OpenLLMetry has native SDK support across Python, JS/TS, Go, Ruby, and Java, making it versatile for polyglot teams who want SDK-level instrumentation. RelayPlane is a language-agnostic HTTP proxy: any language that can make HTTP requests works without a dedicated SDK.

  • RelayPlane: Node.js proxy intercepts any language via HTTP
  • OpenLLMetry: Python, JavaScript/TypeScript, Go, Ruby, Java (native SDKs)
OpenTelemetry native

OpenLLMetry is built on OpenTelemetry and emits standard OTLP spans, metrics, and traces. This means traces from OpenLLMetry can flow into any OTel-compatible backend: Jaeger, Zipkin, Grafana Tempo, Honeycomb, Datadog, and more. RelayPlane does not emit OTel spans.

LLM tracing with spans

OpenLLMetry generates full OpenTelemetry spans for LLM calls including model, prompt, completion, token counts, and latency as standardized span attributes. RelayPlane logs per-request metadata but does not produce OTel-compatible distributed trace trees.
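To make "standardized span attributes" concrete, here is the rough shape of one LLM span's attributes, loosely following the OpenTelemetry GenAI semantic conventions. The exact keys OpenLLMetry emits depend on SDK version, so treat these names as an approximation:

```javascript
// Illustrative attributes on one LLM span, loosely following the
// OpenTelemetry GenAI semantic conventions. Exact keys vary by SDK version.
const llmSpanAttributes = {
  "gen_ai.system": "openai",
  "gen_ai.request.model": "gpt-4o",
  "gen_ai.usage.input_tokens": 812,
  "gen_ai.usage.output_tokens": 143,
};

// A backend can aggregate token attributes like these into usage metrics;
// latency comes from the span's start and end timestamps.
function totalTokens(attrs) {
  return (
    attrs["gen_ai.usage.input_tokens"] + attrs["gen_ai.usage.output_tokens"]
  );
}
```

Because the attribute names are standardized, any OTel-compatible backend can query and aggregate them without vendor-specific parsing.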

  • RelayPlane: basic request logging
  • OpenLLMetry: full OpenTelemetry spans with standardized attributes
Integrates with existing OTel stack

If your team already uses OpenTelemetry for distributed tracing, OpenLLMetry slots directly into your existing pipeline. LLM spans appear alongside your application spans in whatever backend you already use. RelayPlane has no OTel integration.

Open source license

Both products are fully open source. OpenLLMetry is Apache 2.0 licensed by Traceloop and contributed to the OpenTelemetry ecosystem. RelayPlane is MIT licensed. There are no proprietary enterprise tiers for either product's core library.

  • RelayPlane: MIT
  • OpenLLMetry: Apache 2.0

Why Teams Choose RelayPlane When They Need Cost Control, Not Just Observability

1. A network proxy intercepts requests. An SDK instruments code. These solve different problems.

OpenLLMetry and RelayPlane are not direct substitutes. OpenLLMetry is an instrumentation library: you add it to your application code, it patches your LLM SDK clients, and it emits OpenTelemetry spans you can route to any OTel backend. RelayPlane is a network proxy: you change one baseURL and every LLM call flows through it, enabling real-time routing, cost tracking, and spend governance without touching application code. For tools you do not control, like Claude Code or Cursor, SDK instrumentation is simply not an option.

2. npm install in 30 seconds vs wrapping every LLM call in your codebase

npm install -g @relayplane/proxy and you are proxying requests in under 30 seconds with zero code changes. OpenLLMetry requires installing a language-specific SDK, initializing it in your application entry point, and ensuring your LLM client libraries are patched correctly. For teams with existing applications across multiple services or languages, that instrumentation work compounds across every service that makes LLM calls.

3. Built-in cost control, not just cost observation.

OpenLLMetry is excellent at capturing token usage as OpenTelemetry span attributes, which you can aggregate into cost metrics in your OTel backend. But it cannot stop you from spending. RelayPlane tracks cost and can enforce it: budget limits, routing away from expensive models when thresholds are hit, automatic fallback on cost overruns. If you want to observe your LLM spend in your existing OTel stack, OpenLLMetry is a strong choice. If you want to control it in real time, you need a proxy in the request path.

4. No code changes. One baseURL swap.

RelayPlane requires zero application code changes. Set OPENAI_BASE_URL=http://localhost:4100 and your existing OpenAI SDK calls, Claude Code sessions, and Cursor requests are automatically proxied. OpenLLMetry requires you to add SDK initialization to your application and wrap every LLM call. For existing applications you cannot modify, third-party tools, or coding assistants running at the CLI level, RelayPlane works where OpenLLMetry cannot.

OpenLLMetry Solves Observability Problems. RelayPlane Solves Cost Control Problems.

OpenLLMetry is a genuinely useful open-source library for teams who are already invested in the OpenTelemetry ecosystem and want LLM calls to appear as first-class spans alongside their application traces. Its multi-language support, standard OTLP output, and zero-vendor-lock-in design make it a strong choice for platform teams who own an observability stack and want to extend it to cover LLM workloads.

But OpenLLMetry requires code instrumentation in every service that makes LLM calls. It cannot intercept traffic from tools you do not control, cannot route requests, and cannot enforce budget limits. If you want to start tracking LLM costs in Claude Code or Cursor today without touching your codebase, without adding SDK dependencies, and without sending logs to a third-party platform, RelayPlane installs in one command and runs on localhost.

What is OpenLLMetry

Created by: Traceloop (now part of the OpenTelemetry ecosystem)
License: Apache 2.0 (fully open source, no paid tiers)
Languages: Python, JavaScript/TypeScript, Go, Ruby, Java
What it does: instruments LLM SDK calls with OpenTelemetry spans and exports to any OTLP-compatible backend
Requires code changes: yes, SDK initialization and LLM client patching in your application
Does not do: proxy requests, route traffic, enforce budgets, or intercept tools you do not control

Get Running in 30 Seconds

No account. No code changes. No instrumentation:

# Install globally
npm install -g @relayplane/proxy
# Start the proxy
relayplane start
# Point Claude Code at localhost
export OPENAI_BASE_URL=http://localhost:4100

Start controlling LLM costs in one command

No account. No monthly fee. MIT open source. Runs on localhost with Claude Code and Cursor in under 30 seconds.

npm install -g @relayplane/proxy