RelayPlane MCP Server

Give AI coding agents access to multi-step AI workflows with 90%+ context reduction.

The Problem

When AI agents use MCP tools directly, they face two problems:

  1. Tool definitions bloat context — 100k+ tokens for many tools
  2. Intermediate results pass through context — a 50k-token document is copied between steps

The Solution

The RelayPlane MCP Server lets agents orchestrate AI workflows where intermediate results stay in the workflow engine, not your context window.

# Without RelayPlane: the 50k-token transcript flows through context twice
TOOL CALL: gdrive.getDocument() → [50k tokens in context]
TOOL CALL: salesforce.update({ notes: [50k tokens written again] })

# With RelayPlane: the transcript stays in the workflow engine
relay_workflow_run({
  steps: [
    { name: "fetch", mcp: "gdrive:getDocument", params: { id: "abc123" } },
    { name: "save", mcp: "salesforce:update",
      params: { notes: "{{steps.fetch.content}}" },
      depends: ["fetch"] }
  ]
})
# Only the final result enters context

Result: 90%+ context reduction on multi-step pipelines
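The `{{steps.fetch.content}}` reference above is resolved inside the workflow engine, so the fetched document never enters the agent's context. A minimal sketch of that kind of template resolution (the `{{steps.<name>.<field>}}` syntax rules here are an assumption, not the engine's documented grammar):

```javascript
// Sketch of step-output interpolation, assuming a simple
// {{steps.<name>.<field>}} syntax (not the documented grammar).
function interpolate(template, stepResults) {
  return template.replace(
    /\{\{steps\.(\w+)\.(\w+)\}\}/g,
    (_, step, field) => String(stepResults[step]?.[field] ?? "")
  );
}

// The "save" step's params would resolve against the "fetch" result,
// entirely inside the engine:
const resolved = interpolate("{{steps.fetch.content}}", {
  fetch: { content: "full transcript text" },
});
```

Because this substitution happens server-side, only the workflow's final result is returned to the agent.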

Quick Install

claude mcp add relayplane -- npx @relayplane/mcp-server

Available Tools

| Tool | Purpose | Cost |
| --- | --- | --- |
| relay_run | Execute a single AI model call | Provider cost |
| relay_workflow_run | Execute multi-step workflows | Provider cost |
| relay_workflow_validate | Validate DAG structure (no LLM calls) | Free |
| relay_models_list | List available models with pricing | Free |
| relay_skills_list | Discover pre-built workflow patterns | Free |
| relay_runs_list | View recent execution history | Free |
| relay_run_get | Get run details and trace URL | Free |
BYOK Pricing: RelayPlane is BYOK (Bring Your Own Keys), so we do not charge for API usage — "Provider cost" is your own OpenAI, Anthropic, or other provider bill. Budget tracking protects you from runaway provider costs.
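`relay_workflow_validate` checks a workflow's DAG without spending any provider tokens. The exact server-side rules aren't documented here, but a local sketch of the simplest structural checks (unknown and self-referencing `depends` entries) might look like:

```javascript
// Local sketch of the kind of structural checks relay_workflow_validate
// could perform; the actual server-side rule set is an assumption.
function findDagErrors(steps) {
  const names = new Set(steps.map((s) => s.name));
  const errors = [];
  for (const step of steps) {
    for (const dep of step.depends ?? []) {
      if (dep === step.name) {
        errors.push(`${step.name}: step depends on itself`);
      } else if (!names.has(dep)) {
        errors.push(`${step.name}: unknown dependency "${dep}"`);
      }
    }
  }
  return errors;
}

// The fetch → save workflow from above is structurally valid:
const errors = findDagErrors([
  { name: "fetch", mcp: "gdrive:getDocument" },
  { name: "save", mcp: "salesforce:update", depends: ["fetch"] },
]);
```

Validating before running costs nothing, so it is worth calling on any workflow the agent assembles dynamically.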

Supported Models

  • OpenAI: gpt-5.2 (New!), gpt-5.2-pro, gpt-5-mini, gpt-5-nano, gpt-5.1, gpt-4.1, o3, o4-mini
  • Anthropic: claude-opus-4.5, claude-sonnet-4.5, claude-haiku-4.5, claude-opus-4.1, claude-sonnet-4, claude-3.7-sonnet
  • Google: gemini-3-pro, gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite
  • xAI: grok-4, grok-4-fast, grok-3, grok-3-mini
  • Perplexity: sonar-pro, sonar, sonar-reasoning-pro, sonar-deep-research
  • Local: llama3.3, qwen2.5, deepseek-r1, mistral (via Ollama)
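Any model id from the lists above should be usable with `relay_run`. The call below is a hypothetical sketch — the `model` and `prompt` field names are inferred by analogy with the workflow example, not taken from a documented schema:

```javascript
// Hypothetical relay_run arguments; field names are assumptions.
relay_run({
  model: "claude-haiku-4.5", // any id from the supported lists above
  prompt: "Summarize this ticket in one sentence.",
});
```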

Next Steps

  • Installation — Setup for Claude Code, Cursor, and other clients
  • Tools Reference — Complete tool documentation with schemas
  • Skills — Pre-built workflow patterns with context reduction metrics
  • Budget & Limits — Configure safety limits for provider costs