First Run

Make your first proxied request and see it in the Learning Ledger.

Start the Proxy

# Set at least one provider API key
export ANTHROPIC_API_KEY=your-key

# Start the proxy
npx relayplane start

# Output:
# 🚀 RelayPlane proxy started
# 📍 http://localhost:3001
# 📊 Dashboard: http://localhost:3001/dashboard
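
If you want to confirm the proxy is reachable from code before wiring up a client, here is a minimal TypeScript sketch, assuming the dashboard URL from the startup output responds once the proxy is running:

fetch('http://localhost:3001/dashboard')
  .then((res) => console.log(res.ok ? 'RelayPlane proxy is up' : `Unexpected status: ${res.status}`))
  .catch(() => console.error('Proxy not reachable. Is it running on port 3001?'));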

Make a Request

Use curl or any HTTP client:

curl http://localhost:3001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-RelayPlane-Agent: my-first-agent" \
  -d '{
    "model": "claude-3-5-sonnet",
    "messages": [
      { "role": "user", "content": "Say hello and tell me about yourself" }
    ]
  }'

Or with TypeScript:

import Anthropic from '@anthropic-ai/sdk';

// The SDK picks up ANTHROPIC_API_KEY from the environment; baseURL points it at the local proxy.
const client = new Anthropic({
  baseURL: 'http://localhost:3001/v1',
});

async function main() {
  const response = await client.messages.create({
    model: 'claude-3-5-sonnet',
    max_tokens: 1024,
    messages: [
      { role: 'user', content: 'Say hello and tell me about yourself' }
    ],
  }, {
    // RelayPlane-specific headers tag the run with an agent and session for the Ledger
    headers: {
      'X-RelayPlane-Agent': 'my-first-agent',
      'X-RelayPlane-Session': 'demo-session',
    },
  });

  console.log(response.content);
  console.log('Run ID:', response.id);
}

main();
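
The proxy also accepts the OpenAI-style /v1/chat/completions format used in the curl example above, so an OpenAI-compatible client can be pointed at it too. A minimal sketch, assuming the proxy does not require a client-side API key (the provider key is configured on the proxy), so the apiKey value below is only a placeholder to satisfy the SDK:

import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'http://localhost:3001/v1',
  apiKey: 'relayplane-local', // placeholder; the real provider key lives on the proxy
  defaultHeaders: { 'X-RelayPlane-Agent': 'my-first-agent' },
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: 'claude-3-5-sonnet',
    messages: [
      { role: 'user', content: 'Say hello and tell me about yourself' }
    ],
  });

  console.log(completion.choices[0].message.content);
}

main();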

Check the Response

The response includes RelayPlane metadata:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1707264000,
  "model": "claude-3-5-sonnet",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! I'm Claude, an AI assistant..."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 45,
    "total_tokens": 57
  },
  "relayplane": {
    "run_id": "run_xyz789",
    "latency_ms": 1250,
    "ttft_ms": 350
  }
}
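
If you call the endpoint directly (for example with fetch), the relayplane block can be read straight off the parsed body. A short sketch based on the response shape above:

async function main() {
  const res = await fetch('http://localhost:3001/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-RelayPlane-Agent': 'my-first-agent',
    },
    body: JSON.stringify({
      model: 'claude-3-5-sonnet',
      messages: [{ role: 'user', content: 'Say hello' }],
    }),
  });

  const data = await res.json();
  console.log('Run ID:', data.relayplane.run_id);           // e.g. "run_xyz789"
  console.log('Latency (ms):', data.relayplane.latency_ms); // total latency
  console.log('TTFT (ms):', data.relayplane.ttft_ms);       // time to first token
}

main();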

View in the Ledger

Query the run you just made:

curl http://localhost:3001/v1/runs/run_xyz789

# Returns full run details including:
# - Auth type (api/consumer)
# - Execution mode (interactive/background/scheduled)
# - Policy decisions
# - Routing decisions
# - Cost and token usage
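
The same lookup works from TypeScript; a small sketch, using the run_id returned in the previous response:

async function main() {
  const runId = 'run_xyz789'; // substitute the run_id from your own response
  const res = await fetch(`http://localhost:3001/v1/runs/${runId}`);
  const run = await res.json();
  console.log(run);
}

main();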

Get an Explanation

Every decision is explainable:

curl http://localhost:3001/v1/runs/run_xyz789/explain

{
  "run_id": "run_xyz789",
  "narrative": "Request allowed. Used claude-3-5-sonnet via anthropic. Cost: $0.002. Latency: 1.25s.",
  "timeline": [
    {
      "stage": "auth",
      "outcome": "passed",
      "detail": "API auth verified, agent: my-first-agent"
    },
    {
      "stage": "policy",
      "outcome": "passed",
      "detail": "No policies configured (default: allow all)"
    },
    {
      "stage": "routing",
      "outcome": "selected",
      "detail": "claude-3-5-sonnet (requested model available)"
    },
    {
      "stage": "provider",
      "outcome": "success",
      "detail": "anthropic responded in 1.25s"
    }
  ]
}
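
To use an explanation programmatically, you can fetch the same endpoint and walk the timeline; a sketch based on the response shape above:

async function main() {
  const res = await fetch('http://localhost:3001/v1/runs/run_xyz789/explain');
  const explanation = await res.json();

  console.log(explanation.narrative);
  for (const step of explanation.timeline) {
    console.log(`${step.stage}: ${step.outcome} (${step.detail})`);
  }
}

main();
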
🎉 You've made your first proxied request! Every request is now observable and explainable.

Open the Dashboard

Visit http://localhost:3001/dashboard to see:

  • Runs — All requests with filtering and search
  • Policies — Create and manage governance rules
  • Routing — View and configure model routing
  • Analytics — Cost, latency, and usage metrics

Next Steps