RelayPlane sits between your OpenClaw agents and the LLM providers they call. It tracks every dollar, shows you exactly where it goes, and gives you the tools to cut waste.
On February 18, Anthropic banned using Max plan tokens in OpenClaw and similar tools. Users who were paying $200/month now face $600–2,000+ in API costs for the same usage.
But the cost crisis started before the ban. The real problem: your agent uses the most expensive model for everything. Code formatting? Opus. Reading a file? Opus. Simple questions? Opus.
Every LLM request flows through RelayPlane. Cost per request, model used, task type, tokens consumed, latency — all tracked automatically. Your first "aha" moment: seeing that 73% of your spend is on the wrong model.
A configurable policy engine uses heuristic classification (token counts, keyword patterns, prompt structure) to label tasks by complexity. You define rules that route simple tasks to cheaper models and complex tasks to capable ones. Default behavior is passthrough — you opt into routing.
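To make that concrete, here's a sketch of what a routing rule set could look like. The shape below is illustrative only, not RelayPlane's actual config schema:

// Illustrative routing rules; not RelayPlane's real config format.
type Complexity = "simple" | "moderate" | "complex";

interface RoutingRule {
  match: Complexity;  // label produced by the heuristic classifier
  model: string;      // model to route matching requests to
}

const rules: RoutingRule[] = [
  { match: "simple",   model: "claude-haiku" },   // formatting, file reads, short Q&A
  { match: "moderate", model: "claude-sonnet" },  // multi-step edits, summarization
  { match: "complex",  model: "claude-opus" },    // architecture, long-horizon reasoning
];

// Passthrough is the default: with no matching rule, the request
// goes to whatever model the agent originally asked for.
function pickModel(label: Complexity, requested: string): string {
  return rules.find(r => r.match === label)?.model ?? requested;
}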
Coming soon: opt in to the collective mesh. Share anonymized routing outcomes with other agents on the network. As the mesh grows, routing gets smarter for everyone — automatically.
npm install -g @relayplane/proxy && relayplane init
Built for OpenClaw. Works with any agent framework.
Point your agent at localhost:4801 via ANTHROPIC_BASE_URL or OPENAI_BASE_URL. No risk — if RelayPlane goes down, your agent keeps working.
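For example, with the OpenAI Node SDK the override is a single constructor option. The "/v1" path below is an assumption about the proxy's endpoint layout:

import OpenAI from "openai";

// Point the SDK at the local proxy instead of api.openai.com.
// The "/v1" suffix is an assumption; your API key is still read
// from OPENAI_API_KEY and forwarded to the real provider.
const client = new OpenAI({ baseURL: "http://localhost:4801/v1" });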
Run relayplane stats in your terminal for a quick cost summary.
Supports: Anthropic · OpenAI · Google Gemini · xAI/Grok · OpenRouter · DeepSeek · Groq · Mistral · Together · Fireworks · Perplexity
We learned this the hard way. Early versions hijacked provider URLs — one crash took everything down for 8 hours. Never again.
RelayPlane uses a circuit breaker architecture. After 3 failures, all traffic bypasses the proxy automatically. Your agent talks directly to the provider. When RelayPlane recovers, traffic resumes. No manual intervention.
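A minimal sketch of that behavior, assuming the failure threshold of 3 described above. This illustrates the logic; it is not RelayPlane's actual code:

// Illustrative circuit breaker: trip after 3 failures, bypass while open.
class CircuitBreaker {
  private failures = 0;
  private open = false;

  async call<T>(viaProxy: () => Promise<T>, direct: () => Promise<T>): Promise<T> {
    if (this.open) return direct();  // breaker open: talk straight to the provider
    try {
      const result = await viaProxy();
      this.failures = 0;             // success resets the counter
      return result;
    } catch (err) {
      if (++this.failures >= 3) this.open = true;  // trip after 3 failures
      return direct();               // fall back to the provider for this call too
    }
  }

  reset() { this.open = false; this.failures = 0; }  // proxy healthy again
}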
When you opt into the mesh, RelayPlane will report anonymized routing outcomes: "task type X + model Y = success/failure." No prompts. No code. No user data. Just operational signals. The data below is projected, not live.
Telemetry is opt-in (disabled by default). When enabled, only anonymous metadata is sent: task type label, token counts, model, latency, cost.
relayplane mesh contribute on

"I was spending $200+/month running an agent swarm and had zero visibility into where the money was going. Turns out 73% of my requests were using Opus for tasks Haiku could handle." — Matt Turley, Continuum
No. RelayPlane uses a circuit breaker architecture. If the proxy fails for any reason, all traffic automatically bypasses it and goes directly to your LLM provider. Your agent doesn't even notice. If RelayPlane can't route, it passes through to your default model. Worst case: you pay what you would have paid anyway. We learned this lesson the hard way and built the safety model first.
Telemetry is disabled by default. When you opt in (relayplane telemetry on), we collect anonymized metadata: task type label, token count, model used, latency, estimated cost, and an anonymous device ID. Your prompts, code, and responses are never collected — they go directly to LLM providers.
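A hypothetical example of what one telemetry record could contain. Field names and model names are illustrative, not the actual wire format:

// Illustrative anonymized telemetry record; no prompts, code, or responses.
const telemetryEvent = {
  taskType: "code_generation",  // heuristic label only, never the prompt itself
  complexity: "simple",
  model: "claude-haiku",        // model name simplified for illustration
  inputTokens: 412,
  outputTokens: 128,
  latencyMs: 930,
  estimatedCostUsd: 0.0009,
  deviceId: "anon-7f3c",        // anonymous device ID
};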
RelayPlane uses heuristic classification (token counts, keyword patterns, code block detection) to label requests by complexity: simple, moderate, or complex. When you enable routing, it maps these labels to models — e.g., simple → Haiku, moderate → Sonnet, complex → Opus. Default behavior is passthrough (no routing). You configure the rules.
Yes. RelayPlane supports any OpenAI-compatible API — Claude, GPT-4o, Gemini, Mistral, open-source models via OpenRouter. The routing engine is model-agnostic.
It depends on your usage pattern, but most users see 40-70% cost reduction. The biggest savings come from routing simple tasks (which are typically 60-70% of all requests) to cheaper models. You'll see exact numbers in your dashboard within the first hour.
Different layer entirely. OpenRouter is a multi-provider gateway: you pick the model, it routes to the cheapest provider for that model. RelayPlane picks the right model for the task. RelayPlane is local-first (your machine, your data). OpenRouter is a cloud service you send all prompts through. They're complementary: you can use OpenRouter as a provider behind RelayPlane. RelayPlane adds cost tracking, task classification, and a local dashboard on top.
LiteLLM is a unified API adapter: call any provider with one SDK. RelayPlane does that and adds configurable task-aware routing, cost tracking, and a dashboard. LiteLLM requires code changes (import litellm). RelayPlane is a proxy — you set ANTHROPIC_BASE_URL to point at it. LiteLLM is a library you integrate. RelayPlane is infrastructure you deploy.
Two-layer pre-flight classification with negligible latency overhead (no extra LLM calls). First, task type: regex pattern matching on the prompt across 9 categories (code generation, summarization, analysis, etc.), under 5ms. Second, complexity: structural signals like code blocks, token count, and multi-step instructions scored as simple, moderate, or complex. Routing rules map task type + complexity to a model tier. Cascade mode starts with the cheapest model and auto-escalates on uncertainty or refusal patterns.
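Here's a minimal sketch of cascade mode under those assumptions. The tier list and the uncertainty check are illustrative stand-ins, not the actual implementation:

// Cascade mode sketch: start cheap, escalate on refusal/uncertainty patterns.
const tiers = ["claude-haiku", "claude-sonnet", "claude-opus"];

function looksUncertain(reply: string): boolean {
  // Crude stand-in for the refusal/uncertainty patterns mentioned above.
  return /i can['’]t|i'?m not sure|unable to/i.test(reply);
}

async function cascade(
  prompt: string,
  callModel: (model: string, prompt: string) => Promise<string>
): Promise<string> {
  let reply = "";
  for (const model of tiers) {
    reply = await callModel(model, prompt);     // cheapest tier first
    if (!looksUncertain(reply)) return reply;   // confident answer: stop escalating
  }
  return reply;  // even the top tier hedged; return its answer anyway
}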
Your prompts and responses go directly to LLM providers — never through RelayPlane servers. Telemetry (anonymous metadata: task type, token counts, model, cost) is disabled by default. Enable it with relayplane telemetry on if you want to help improve routing. MIT licensed, fully auditable.
Nothing. Remove the proxy, your agents talk directly to providers again. No lock-in, no migration, no data hostage. It's MIT licensed — you can fork it and run your own if you want.
RelayPlane automatically retries with a better model. You pay for both calls, but that's still cheaper than always using Opus. Collective failure learning is on the roadmap — for now, the proxy logs these failures so you can adjust your routing rules.
The local proxy works forever — it's MIT licensed software on your machine. Only the mesh network would stop. You'd keep roughly 30% savings from static routing rules alone.
Run relayplane stats or check the dashboard. We show you exactly how much you've saved vs. what you would have spent.
Install RelayPlane. See your costs in 3 minutes.
npm install -g @relayplane/proxy && relayplane init