Tools Reference

Complete reference for all 7 RelayPlane MCP Server tools.

Tool                       Purpose                  Cost
relay_run                  Single AI model call     Provider cost
relay_workflow_run         Multi-step workflows     Provider cost
relay_workflow_validate    DAG validation           Free
relay_models_list          List available models    Free
relay_skills_list          Discover skills          Free
relay_runs_list            Recent runs              Free
relay_run_get              Run details + trace      Free

relay_run

Execute a single AI model call. Useful for testing prompts before building full workflows.

Input Schema

{
  model: string,          // "provider:model" format (e.g., "openai:gpt-4o")
  prompt: string,         // The user prompt to send
  systemPrompt?: string,  // Optional system prompt
  schema?: object         // Optional JSON schema for structured output
}

Response

{
  success: boolean,
  output: string | object,
  model: string,
  usage: {
    promptTokens: number,
    completionTokens: number,
    totalTokens: number,
    estimatedProviderCostUsd: number
  },
  durationMs: number,
  runId: string,
  traceUrl: string,
  error?: { code: string, message: string }
}

Example

relay_run({
  model: "openai:gpt-5.2",
  prompt: "Extract the company name from: john@acme.com",
  schema: {
    type: "object",
    properties: { company: { type: "string" } },
    required: ["company"]
  }
})

relay_workflow_run

Execute a multi-step AI workflow. Intermediate results stay in the workflow engine (not your context), providing 90%+ context reduction on complex pipelines.

Input Schema

{
  name: string,             // Workflow name for tracing
  steps: Array<{
    name: string,           // Step identifier
    model?: string,         // "provider:model" format
    prompt?: string,        // Prompt template (supports {{interpolation}})
    systemPrompt?: string,
    depends?: string[],     // Dependencies on other steps
    mcp?: string,           // MCP tool ("server:tool" format)
    params?: object,        // MCP tool parameters
    schema?: object         // JSON schema for structured output
  }>,
  input: object             // Input data (accessible via {{input.field}})
}

Response

{
  success: boolean,
  steps: Record<string, {
    success: boolean,
    output: any,
    durationMs: number,
    usage?: {
      promptTokens: number,
      completionTokens: number,
      estimatedProviderCostUsd: number
    },
    error?: { code: string, message: string }
  }>,
  finalOutput: any,
  totalUsage: {
    totalTokens: number,
    estimatedProviderCostUsd: number
  },
  totalDurationMs: number,
  runId: string,
  traceUrl: string,
  contextReduction: string  // e.g., "94% (saved ~45k tokens)"
}

Example

relay_workflow_run({
  name: "invoice-processor",
  steps: [
    {
      name: "extract",
      model: "openai:gpt-5.2",
      prompt: "Extract invoice data from: {{input.fileContent}}"
    },
    {
      name: "validate",
      model: "anthropic:claude-sonnet-4.5",
      depends: ["extract"],
      prompt: "Verify totals match in: {{steps.extract.output}}"
    },
    {
      name: "summarize",
      model: "openai:gpt-5-nano",
      depends: ["validate"],
      prompt: "Create 2-sentence summary: {{steps.validate.output}}"
    }
  ],
  input: { fileContent: "..." }
})

relay_workflow_validate

Validate workflow structure without making any LLM calls. Free to use. Checks DAG structure (no cycles), dependency references, and model ID format.

Run this tool before relay_workflow_run to catch structural errors without incurring provider costs (see the example at the end of this section).

Input Schema

{
  steps: Array<{
    name: string,
    model?: string,
    prompt?: string,
    depends?: string[],
    mcp?: string,
    params?: object
  }>
}

Response

{
  valid: boolean,
  errors: Array<{
    step: string,
    field: string,
    message: string
  }>,
  warnings: Array<{
    step: string,
    message: string
  }>,
  structure: {
    totalSteps: number,
    executionOrder: string[],
    parallelGroups: string[][]
  }
}

Validates

  • DAG structure (no cycles)
  • Dependency references exist
  • Model IDs are valid format
  • Required fields present

Does NOT Validate

  • Schema compatibility between steps
  • Prompt effectiveness
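
Example

A minimal sketch of a pre-flight validation call, reusing the invoice-processor steps from the relay_workflow_run example above (step names, models, and prompts are illustrative):

relay_workflow_validate({
  steps: [
    {
      name: "extract",
      model: "openai:gpt-5.2",
      prompt: "Extract invoice data from: {{input.fileContent}}"
    },
    {
      name: "validate",
      model: "anthropic:claude-sonnet-4.5",
      depends: ["extract"],
      prompt: "Verify totals match in: {{steps.extract.output}}"
    },
    {
      name: "summarize",
      model: "openai:gpt-5-nano",
      depends: ["validate"],
      prompt: "Create 2-sentence summary: {{steps.validate.output}}"
    }
  ]
})

If valid is true, the same steps array should be ready to pass to relay_workflow_run along with a name and input.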

relay_models_list

List available AI models with their capabilities and pricing. Use it to confirm a model ID is valid before testing a prompt or workflow.

Input Schema

{
  provider?: "openai" | "anthropic" | "google" | "xai"   // Optional filter
}

Response

{
  models: Array<{
    id: string,               // e.g., "openai:gpt-4o"
    provider: string,
    name: string,
    capabilities: string[],   // e.g., ["text", "vision", "function_calling"]
    contextWindow: number,
    inputCostPer1kTokens: number,
    outputCostPer1kTokens: number
  }>
}
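
Example

A sketch of a filtered listing; the provider value must be one of the enum values in the schema above:

relay_models_list({
  provider: "anthropic"
})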

relay_skills_list

List available pre-built workflow skills. Skills are reusable patterns for common tasks with documented context reduction metrics.

Input Schema

{
  category?: "extraction" | "content" | "integration" | "all"
}

Response

{
  skills: Array<{
    name: string,
    category: string,
    description: string,
    models: string[],
    contextReduction: string,   // e.g., "97%"
    usage: string               // Example usage
  }>
}
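
Example

A sketch of browsing a single category; category is optional and must be one of the values in the schema above:

relay_skills_list({
  category: "extraction"
})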

relay_runs_list

List recent workflow runs for debugging and reference.

Input Schema

{
  limit?: number   // Default: 10, max: 50
}

Response

{
  runs: Array<{
    runId: string,
    name: string,
    status: "success" | "error",
    createdAt: string,
    durationMs: number,
    totalCost: number
  }>
}
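
Example

A sketch of pulling the five most recent runs; limit is optional and defaults to 10:

relay_runs_list({
  limit: 5
})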

relay_run_get

Get full details of a specific run including all step outputs and trace URL.

Input Schema

{
  runId: string   // The run ID to retrieve
}

Response

{
  runId: string,
  name: string,
  status: "success" | "error",
  steps: Record<string, {
    output: any,
    durationMs: number,
    usage?: object
  }>,
  finalOutput: any,
  traceUrl: string,
  createdAt: string,
  totalDurationMs: number
}
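
Example

A sketch of fetching one run; the runId placeholder below is hypothetical and would come from a relay_runs_list response:

relay_run_get({
  runId: "<runId from relay_runs_list>"
})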

Next Steps

  • Skills — Pre-built workflow patterns with context reduction metrics
  • Budget & Limits — Configure safety limits for provider costs