AIDER INTEGRATION
OPEN SOURCE CLI · SMART ROUTING

RelayPlane
+ Aider
Save 50%

Aider is an AI pair programming tool with 1.9M+ GitHub stars. Set one environment variable to route every aider session through RelayPlane for cost tracking and intelligent model routing.

$ npx @relayplane/proxy --port 4100
AIDER SESSION LOG · ROUTING
[10:04:01] Code search -> haiku
[10:04:03] Add imports -> haiku
[10:04:06] Refactor function -> sonnet
[10:04:10] Write unit tests -> haiku
[10:04:14] Fix failing test -> sonnet
[10:04:18] Architect new feature -> opus
[10:04:22] Update docs -> haiku
[10:04:25] Git commit message -> haiku
✓ Session complete · 8 calls · $0.28 (was $1.84)
ALL OPUS: $15.00/1M
WITH RELAYPLANE: avg $0.85/1M
// ABOUT AIDER

AI Pair Programming in Your Terminal

Aider is an open-source AI coding assistant with 1.9M+ GitHub stars. It runs in your terminal, edits files directly, and commits changes with git. You describe what you want; aider writes the code.

Aider uses the OpenAI API format, so it respects the OPENAI_API_BASE environment variable. Set it to your RelayPlane proxy URL and every aider request routes through RelayPlane for full cost visibility.

TYPICAL AIDER SESSION COSTS
Quick fix (15 calls): $1.80 → $0.90
Debug session (60 calls): $7.20 → $3.60
Feature build (150 calls): $18.00 → $9.00
Monthly (heavy use): $450+ → $225
1,900,000+ GitHub stars
50% average cost reduction
<1ms routing decision latency
// QUICK START

Two Steps to Start Saving

One environment variable. No code changes to aider.

~/projects/my-project
# Step 1: Start the RelayPlane proxy
npx @relayplane/proxy --port 4100
# Step 2: Run aider with OPENAI_API_BASE pointed at the proxy
OPENAI_API_BASE=http://localhost:4100 aider --model gpt-4
✓ All aider LLM calls now route through RelayPlane
# Alternative: use the --openai-api-base flag directly
aider --openai-api-base http://localhost:4100 --model gpt-4
PERSIST THE SETTING IN YOUR SHELL
Add to ~/.bashrc or ~/.zshrc to make it permanent:
export OPENAI_API_BASE=http://localhost:4100
Aider will use the proxy automatically on every run once set.
// HOW IT WORKS

Transparent Proxy Architecture

01

Start the Proxy

npx @relayplane/proxy --port 4100

RelayPlane runs locally. No cloud dependency. No data leaves your machine without opt-in.

02

Point Aider at It

OPENAI_API_BASE=http://localhost:4100

Set the environment variable before running aider. It treats the proxy as a standard OpenAI-compatible endpoint.

03

Requests Route Intelligently

refactor -> sonnet

Every request is analyzed and routed to the optimal model. Costs are tracked per-request in local SQLite.
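The per-request ledger can be pictured as a small SQLite table. A minimal sketch, assuming an illustrative schema (RelayPlane's actual table layout in `~/.relayplane/data.db` may differ):

```python
import sqlite3

# Illustrative schema -- not RelayPlane's actual layout.
conn = sqlite3.connect(":memory:")  # RelayPlane uses ~/.relayplane/data.db
conn.execute("""
    CREATE TABLE requests (
        ts TEXT, task TEXT, model TEXT,
        input_tokens INTEGER, output_tokens INTEGER, cost_usd REAL
    )
""")

# Log two routed requests from an aider session.
conn.execute("INSERT INTO requests VALUES "
             "('10:04:01', 'code search', 'haiku', 1200, 300, 0.002)")
conn.execute("INSERT INTO requests VALUES "
             "('10:04:06', 'refactor', 'sonnet', 2400, 900, 0.021)")

# Per-model cost breakdown, in the spirit of `relayplane stats --breakdown`.
for model, total in conn.execute(
    "SELECT model, SUM(cost_usd) FROM requests GROUP BY model"
):
    print(f"{model}: ${total:.3f}")
```

Because the store is plain SQLite, any local tool can query it without going through the proxy.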

5 PROVIDERS · 50% AVG. SAVINGS · <1MS ROUTING

Five Providers. One Interface.

Route to any of their models seamlessly from aider.

Anthropic
Claude 4.5 (Opus, Sonnet, Haiku)
OpenAI
GPT-5.2, o1, o3
Google
Gemini 2.0, 1.5
xAI
Grok-3, Grok-3-mini
Moonshot
v1-8k to 128k
// ROUTING MODES

Choose Your Strategy

DEFAULT

Smart Routing

relayplane:auto

Infers task type from prompt. Simple edits go to Haiku. Complex refactors go to Sonnet or Opus.

add imports -> haiku
refactor -> sonnet
AGGRESSIVE

Cost Priority

relayplane:cost

Always routes to the cheapest model. Maximum savings; escalates on failure.

everything -> haiku
# escalate on error
CONSERVATIVE

Quality Priority

relayplane:quality

Uses the best available model. Maximum quality; costs are similar to running without optimization.

everything -> opus
# full quality mode
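The three strategies can be pictured as a toy dispatcher. This is a sketch of the idea only: the real `relayplane:auto` classifier's signals are not specified here, and the keyword lists below are invented for illustration.

```python
# Toy heuristic illustrating the routing modes above.
# The real relayplane:auto classifier is more sophisticated than keywords.
SIMPLE = ("import", "typo", "rename", "docs", "commit message")
HARD = ("architect", "design", "new feature")

def route(prompt: str, mode: str = "auto") -> str:
    if mode == "cost":
        return "haiku"      # everything -> cheapest, escalate on error
    if mode == "quality":
        return "opus"       # everything -> best available
    p = prompt.lower()
    if any(k in p for k in HARD):
        return "opus"
    if any(k in p for k in SIMPLE):
        return "haiku"
    return "sonnet"         # mid-complexity default

print(route("add missing imports"))      # haiku
print(route("refactor this function"))   # sonnet
print(route("architect a new feature"))  # opus
```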
// BENEFITS

What You Get

RelayPlane adds intelligent routing and observability to every aider request.

Per-Request Cost Tracking

relayplane stats --days 7

See exactly what each aider session costs

relayplane stats --breakdown

Cost by model, by task type, by hour

Automatic Model Fallback

cascade: haiku -> sonnet -> opus

Try cheap models first, escalate on uncertainty

provider cooldowns: auto

Failing providers are paused, alternatives used
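The cascade-plus-cooldown behavior can be sketched as follows. All names here are hypothetical, and the real escalation criteria are internal to RelayPlane; this only shows the shape of the mechanism.

```python
CASCADE = ["haiku", "sonnet", "opus"]
cooldowns: set[str] = set()  # models paused after a failure

def call_with_fallback(prompt, call):
    """Try cheap models first; escalate when a call fails."""
    last_err = None
    for model in CASCADE:
        if model in cooldowns:
            continue  # skip paused models
        try:
            return model, call(model, prompt)
        except RuntimeError as err:  # simulated provider failure
            cooldowns.add(model)     # pause the failing model
            last_err = err
    raise RuntimeError("all models failed") from last_err

# Simulate haiku being rate limited; the cascade escalates to sonnet.
def fake_call(model, prompt):
    if model == "haiku":
        raise RuntimeError("rate limited")
    return f"{model} answered"

model, answer = call_with_fallback("fix failing test", fake_call)
print(model, "->", answer)
```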

Local-First Privacy

~/.relayplane/data.db

All logs stored locally in SQLite

telemetry: off (default)

No data sent anywhere without opt-in

OpenAI-Compatible

POST /v1/chat/completions

Standard OpenAI endpoint, no aider config changes

streaming: supported

Full SSE streaming passthrough
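Because the proxy exposes the standard OpenAI chat-completions shape, any OpenAI-compatible client, not just aider, can target it. A sketch of the JSON body such a client POSTs to `/v1/chat/completions` (the message contents are illustrative):

```python
import json

# Illustrative OpenAI-format request body; this is what an
# OpenAI-compatible client sends to the proxy at
# http://localhost:4100/v1/chat/completions.
payload = {
    "model": "gpt-4",
    "stream": True,  # the proxy passes SSE streaming straight through
    "messages": [
        {"role": "system", "content": "You are an AI pair programmer."},
        {"role": "user", "content": "Refactor this function."},
    ],
}
print(json.dumps(payload, indent=2))
```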

// TROUBLESHOOTING

Common Issues

Proxy not running?

Start it with verbose mode to see what is happening:

npx @relayplane/proxy --port 4100 -v

Aider not routing through proxy?

Confirm OPENAI_API_BASE is set in your current shell session:

echo $OPENAI_API_BASE
# Should output: http://localhost:4100

Getting authentication errors?

Make sure your API key is set. Aider passes it through the proxy to the upstream provider:

export OPENAI_API_KEY="sk-..."

How do I verify routing is working?

Check the stats endpoint while aider is running:

curl http://localhost:4100/control/stats

Want to bypass the proxy temporarily?

Unset the environment variable for that session or run aider without it:

OPENAI_API_BASE= aider --model gpt-4

Start Saving Today

One environment variable. No cloud dependency. Instant cost reduction in every aider session.

100% OPEN SOURCE · MIT LICENSE · AIDER COMPATIBLE