Orchestrate multi-step AI workflows with any model. Run locally with zero latency or deploy to the cloud. Full observability. No gateway tax.
Up and running in 30 seconds with your own provider keys. Zero-config BYOK.
Route between OpenAI, Anthropic, Google, xAI, and Ollama in seconds.
| The Old Way | The RelayPlane Way |
|---|---|
| Different SDK for each provider | ✓ One SDK, every provider |
| Agents generate inconsistent patterns | ✓ Copy-paste examples that just work |
| Magic model resolution breaks silently | ✓ Explicit provider:model — what you write is what runs |
| DIY retry, fallback, caching | ✓ Built-in reliability |
| Observability is an afterthought | ✓ Telemetry from day one |
| Tool calls need separate plumbing | ✓ MCP steps native in workflows |
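"Explicit provider:model" means an identifier is either complete or an error, never silently resolved. A minimal sketch of a parser enforcing that convention (the function name and error text are illustrative, not RelayPlane's actual API):

```typescript
// Parse an explicit "provider:model" identifier into its parts.
// Hypothetical helper; shows the convention, not RelayPlane's internals.
type ModelRef = { provider: string; model: string };

function parseModelRef(ref: string): ModelRef {
  const idx = ref.indexOf(":");
  if (idx <= 0 || idx === ref.length - 1) {
    // No silent fallback: an identifier missing either part is an error.
    throw new Error(`Expected "provider:model", got "${ref}"`);
  }
  // Split on the first colon only, so Ollama-style tags like
  // "ollama:llama3:8b" keep their suffix in the model part.
  return { provider: ref.slice(0, idx), model: ref.slice(idx + 1) };
}

console.log(parseModelRef("openai:gpt-4o"));
// { provider: "openai", model: "gpt-4o" }
```

Splitting on the first colon keeps the model half opaque, which is what lets local-model tags pass through unchanged.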
Pre-built workflows ready to deploy
The RelayPlane MCP Server lets Claude Code, Cursor, and other AI agents orchestrate multi-step workflows with massive context savings.
Intermediate results stay in the workflow engine, not your context window
Run workflows, list models, discover skills, view traces
OpenAI, Anthropic, Google, xAI with unified interface
| Tool | Purpose | Cost |
|---|---|---|
| relay_workflow_run | Execute multi-step workflows | Provider cost |
| relay_run | Single model call | Provider cost |
| relay_workflow_validate | Validate DAG structure | Free |
| relay_skills_list | Discover pre-built patterns | Free |
| relay_models_list | List available models | Free |
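`relay_workflow_validate` checks DAG structure before anything runs, which is why it's free: no provider call is needed. A dependency-free sketch of the two checks involved, unknown dependencies and cycles (the step shape and function name are assumptions, not RelayPlane's schema):

```typescript
// Minimal DAG validity check: every dependency must exist, and the
// dependency graph must be acyclic. Illustrative shape only.
type Step = { id: string; dependsOn?: string[] };

function validateDag(steps: Step[]): string[] {
  const errors: string[] = [];
  const ids = new Set(steps.map((s) => s.id));
  for (const s of steps) {
    for (const d of s.dependsOn ?? []) {
      if (!ids.has(d)) errors.push(`step "${s.id}" depends on unknown step "${d}"`);
    }
  }
  // Cycle detection: DFS, marking nodes "visiting" (on the stack) or "done".
  const state = new Map<string, "visiting" | "done">();
  const byId = new Map(steps.map((s) => [s.id, s]));
  const visit = (id: string): boolean => {
    if (state.get(id) === "done") return true;
    if (state.get(id) === "visiting") return false; // back edge = cycle
    state.set(id, "visiting");
    for (const d of byId.get(id)?.dependsOn ?? []) {
      if (ids.has(d) && !visit(d)) return false;
    }
    state.set(id, "done");
    return true;
  };
  for (const s of steps) {
    if (!visit(s.id)) {
      errors.push("cycle detected");
      break;
    }
  }
  return errors; // empty array means the DAG is valid
}
```

Returning a list of errors rather than throwing on the first one lets a validator report every structural problem in one pass.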
Built on principles that eliminate common AI workflow bottlenecks
Local execution means zero latency between steps. Your workflow runs at the speed of your machine, not your internet connection.
Direct connections to providers eliminate the middleman. No queue, no rate limits beyond the provider's own.
Your API keys talk directly to OpenAI, Anthropic, Google, or xAI. No proxy, no logging, no middleman holding your credentials.
Full TypeScript support with Zod schemas. Catch errors at compile time, not runtime. IntelliSense guides you through every step.
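Zod schemas pair runtime validation with inferred static types. A dependency-free sketch of the same idea using a hand-written type guard, so the snippet runs without Zod installed (the `StepOutput` shape is an assumption for illustration):

```typescript
// A hand-rolled type guard mirroring what a Zod schema provides:
// one check that both validates at runtime and narrows the static type.
type StepOutput = { text: string; tokens: number };

function isStepOutput(value: unknown): value is StepOutput {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.text === "string" && typeof v.tokens === "number";
}

const raw: unknown = JSON.parse('{"text":"hello","tokens":3}');
if (isStepOutput(raw)) {
  // Inside this branch, TypeScript knows raw is StepOutput,
  // so IntelliSense completes .text and .tokens.
  console.log(raw.text.toUpperCase(), raw.tokens);
}
```

With Zod, the guard and the `StepOutput` type would come from a single `z.object(...)` schema instead of being maintained separately.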
No usage fees. No gateway tax. Pay for features, not tokens.
Bring Your Own Keys means we never proxy or store your API keys unless you explicitly opt in to encrypted storage in your dashboard. Security by design, not as an add-on. You keep direct relationships with AI providers like OpenAI, Anthropic, and Google, with complete transparency and no markup.
On the Free plan, keys are passed inline or via environment variables—we never store them. On paid plans (Starter and above), you can optionally store keys in our dashboard, encrypted with AES-256. You can always use inline keys instead of dashboard storage.
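On the Free plan, that typically means exporting provider keys as environment variables before a run. The variable names below follow each provider's own convention, and the values are placeholders, not real keys:

```shell
# Keys stay on your machine; nothing is sent to or stored in a dashboard.
export OPENAI_API_KEY="sk-..."        # placeholder
export ANTHROPIC_API_KEY="sk-ant-..." # placeholder
```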
RelayPlane pricing is simple and transparent. Free gives you 100 runs/month. Starter ($29/month) adds 1,000 runs and cloud telemetry. Pro ($99/month) unlocks webhooks, schedules, and team access. Scale ($399/month) adds high-frequency schedules and priority support. Enterprise gets custom SLAs.
MCP (Model Context Protocol) lets you call external tools as workflow steps. Connect to CRMs, GitHub, Slack, or any MCP-compatible server. Mix AI steps with API calls in a single DAG workflow.
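Mixing step types might look like the sketch below: an AI step feeding an MCP tool step in one DAG. Every field name here (`type`, `server`, `tool`, `dependsOn`) and the Slack example are assumptions for illustration, not RelayPlane's actual workflow schema:

```typescript
// Illustrative only: a two-step DAG where an AI summary is posted to
// Slack via a hypothetical MCP server. Not RelayPlane's real schema.
type WfStep = {
  id: string;
  type: "ai" | "mcp";
  dependsOn?: string[];
  [key: string]: unknown;
};

const workflow: { steps: WfStep[] } = {
  steps: [
    { id: "summarize", type: "ai", model: "anthropic:claude", prompt: "Summarize the ticket" },
    { id: "post", type: "mcp", server: "slack", tool: "post_message", dependsOn: ["summarize"] },
  ],
};

console.log(workflow.steps.map((s) => s.id).join(" -> "));
// summarize -> post
```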
No! MCP is optional. Most workflows only use AI steps. MCP is useful when you need to integrate external tools like CRMs, databases, or APIs into your workflow.
RelayPlane supports OpenAI (including GPT-4o vision), Anthropic (Claude), Google (Gemini), xAI (Grok), and local LLMs via Ollama. All providers use the same unified API. Switch providers with a single line of code.
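One way a unified interface can make provider switching a one-string change is to dispatch on the `provider:` prefix. A toy sketch of that design (the registry, the `run` function, and the stubbed callers are all hypothetical, standing in for real provider SDK calls):

```typescript
// Toy unified dispatch: one call signature, provider chosen by the
// "provider:model" prefix. Stubs stand in for real API calls.
type Caller = (model: string, prompt: string) => string;

const providers: Record<string, Caller> = {
  openai: (m, p) => `[openai ${m}] ${p}`,
  anthropic: (m, p) => `[anthropic ${m}] ${p}`,
};

function run(ref: string, prompt: string): string {
  const [provider, ...rest] = ref.split(":");
  const call = providers[provider];
  if (!call) throw new Error(`unknown provider "${provider}"`);
  return call(rest.join(":"), prompt); // rejoin keeps colons in model tags
}

// Switching providers is a one-string change:
console.log(run("openai:gpt-4o", "hi"));    // [openai gpt-4o] hi
console.log(run("anthropic:claude", "hi")); // [anthropic claude] hi
```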
RelayPlane includes built-in exponential backoff with jitter. Configure retry logic per workflow step. Handle rate limits and transient failures automatically. All error states are tracked and logged for debugging.
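Exponential backoff with jitter is a standard technique; a generic sketch of it below (RelayPlane's internals may differ, and the helper names are illustrative):

```typescript
// Exponential backoff with "full jitter": the delay cap doubles each
// attempt, and the actual wait is a uniform random fraction of that cap,
// which spreads out retries and avoids thundering-herd retry storms.
function backoffDelay(attempt: number, baseMs = 250, capMs = 30_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exp;
}

async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err; // e.g. a 429 rate limit or a transient network failure
      await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
    }
  }
  throw lastErr; // all attempts exhausted: surface the last error
}
```

Per-step retry configuration amounts to choosing `maxAttempts`, `baseMs`, and `capMs` for each step rather than globally.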
No gateway tax. No lock-in. Full control. Join developers building the future of AI workflows.