Core Concepts
Understanding the fundamentals of RelayPlane workflows - from DAG-based execution to schema validation and error handling.
What is a Workflow?
A RelayPlane workflow is a multi-step AI orchestration system that coordinates multiple AI model calls into a coherent pipeline. Unlike simple API wrappers, workflows let you chain complex operations where each step can depend on outputs from previous steps.
DAG-Based Execution Model
At its core, every RelayPlane workflow is a Directed Acyclic Graph (DAG). Each step in your workflow is a node in the graph, and dependencies between steps form directed edges. This structure ensures that:
- Steps execute in the correct order based on their dependencies
- Independent steps can run in parallel for optimal performance
- Circular dependencies are detected and prevented at build time
- The execution plan is deterministic and reproducible
```typescript
// Simple linear workflow (A -> B -> C)
const linearWorkflow = relay
  .workflow('linear')
  .step('analyze', { systemPrompt: 'Analyze the input' })
  .with('openai:gpt-4o')
  .step('summarize', { systemPrompt: 'Summarize: {{analyze.output}}' })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('analyze')
  .step('format', { systemPrompt: 'Format as markdown: {{summarize.output}}' })
  .with('openai:gpt-4o-mini')
  .depends('summarize')

// Diamond workflow (A -> B, A -> C, B+C -> D)
const diamondWorkflow = relay
  .workflow('diamond')
  .step('extract', { systemPrompt: 'Extract key points' })
  .with('openai:gpt-4o')
  .step('sentiment', { systemPrompt: 'Analyze sentiment: {{extract.output}}' })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('extract')
  .step('entities', { systemPrompt: 'Extract entities: {{extract.output}}' })
  .with('openai:gpt-4o')
  .depends('extract')
  .step('combine', {
    systemPrompt: 'Combine analysis: {{sentiment.output}} + {{entities.output}}'
  })
  .with('openai:gpt-4o')
  .depends('sentiment', 'entities')
```

Local-First Philosophy
RelayPlane embraces a local-first philosophy: your API keys stay on your machine, your data never passes through our servers, and you have complete control over execution. This design provides several key benefits:
- Security - Sensitive data and API keys never leave your environment
- Privacy - No telemetry, no logging of your prompts or outputs
- Control - Choose exactly which models handle each step
- Cost transparency - Direct billing from providers, no markup
Steps and Dependencies
Steps are the fundamental building blocks of a workflow. Each step represents a single AI model invocation with its own configuration, provider selection, and optional dependencies.
Step Definition
Every step requires a unique name and a configuration object. The configuration defines what the AI model should do and how it should behave:
```typescript
// Basic step definition
.step('stepName', {
  // Required: Instructions for the AI model
  systemPrompt: 'Your instructions here',

  // Optional: Structured output schema (Zod)
  schema: z.object({
    result: z.string(),
    confidence: z.number()
  }),

  // Optional: Temperature for response randomness (0-2)
  temperature: 0.7,

  // Optional: Maximum tokens in response
  maxTokens: 1000,

  // Optional: Retry configuration
  retries: 3,
  retryDelay: 1000
})
.with('provider:model') // Required: Specify the AI model
.depends('otherStep')   // Optional: Declare dependencies
```

Dependency Graphs and DAG Construction
When you call .depends(), you're adding edges to the workflow's dependency graph. The engine validates this graph to ensure it forms a valid DAG before execution:
```typescript
// The engine performs these validations:
// 1. Check all referenced steps exist
// 2. Detect circular dependencies (A -> B -> C -> A)
// 3. Verify graph connectivity
// 4. Compute topological sort order

const workflow = relay
  .workflow('validated')
  .step('first', { systemPrompt: 'Start' })
  .with('openai:gpt-4o')
  .step('second', { systemPrompt: 'Continue: {{first.output}}' })
  .with('openai:gpt-4o')
  .depends('first')
  .step('third', { systemPrompt: 'Finish: {{second.output}}' })
  .with('openai:gpt-4o')
  .depends('second')
  // .depends('third') // This would create a cycle!

// DAG validation happens when you call .run()
// If invalid, you'll get a descriptive error
```

Parallel vs Sequential Execution
The execution engine automatically optimizes your workflow by running independent steps in parallel. Steps only wait for their declared dependencies, not all preceding steps:
```typescript
// These steps run in PARALLEL (no dependencies between them)
const parallelWorkflow = relay
  .workflow('parallel')
  .step('sentiment', { systemPrompt: 'Analyze sentiment' })
  .with('openai:gpt-4o')
  .step('entities', { systemPrompt: 'Extract entities' })
  .with('anthropic:claude-sonnet-4-20250514')
  .step('keywords', { systemPrompt: 'Extract keywords' })
  .with('google:gemini-2.0-flash')
  // No .depends() calls = all three run simultaneously!
  .step('combine', {
    systemPrompt: 'Combine: {{sentiment.output}}, {{entities.output}}, {{keywords.output}}'
  })
  .with('openai:gpt-4o')
  .depends('sentiment', 'entities', 'keywords')

// Execution timeline:
// T0: sentiment, entities, keywords start (parallel)
// T1: All three complete
// T2: combine starts and finishes
```

Topological Sorting
Before execution, the engine performs a topological sort to determine the optimal execution order. This algorithm ensures that every step runs only after all its dependencies have completed:
1. Identify all steps with no dependencies (entry points)
2. Execute entry points in parallel
3. As each step completes, check if it unblocks dependent steps
4. Execute newly unblocked steps immediately
5. Continue until all steps complete
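This loop amounts to a repeated in-degree check (Kahn's algorithm), which also catches cycles as a side effect: if no step is ever unblocked but steps remain, the graph is cyclic. A minimal sketch, independent of the SDK's actual scheduler:

```typescript
// Sketch of wave-based scheduling: each wave contains steps whose
// dependencies have all completed, so every wave can run in parallel.
// Illustrative only - not RelayPlane's internal implementation.
function executionWaves(deps: Record<string, string[]>): string[][] {
  const remaining = new Map(
    Object.entries(deps).map(([step, d]) => [step, new Set(d)] as [string, Set<string>])
  )
  const waves: string[][] = []
  while (remaining.size > 0) {
    // Entry points / newly unblocked steps: no unfinished dependencies left
    const ready = [...remaining.keys()].filter(s => remaining.get(s)!.size === 0)
    if (ready.length === 0) throw new Error('Circular dependency detected')
    waves.push(ready)
    for (const done of ready) {
      remaining.delete(done)
      for (const pending of remaining.values()) pending.delete(done)
    }
  }
  return waves
}
```

For the diamond workflow shown earlier, this yields three waves: `['extract']`, then `['sentiment', 'entities']` in parallel, then `['combine']`.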
Provider:Model Selection
Every step in RelayPlane requires an explicit provider:model selection. This deliberate design choice ensures you always know exactly which AI model handles each task.
Why Explicit Selection Matters
Unlike systems that automatically route requests, RelayPlane requires you to specify the model for each step. This provides several important benefits:
- Predictable costs - You know exactly what each run will cost
- Reproducible results - Same model = consistent behavior
- Debugging clarity - Easy to identify which model caused issues
- No vendor lock-in - Switch models per step without changing code
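The strings passed to `.with()` follow a fixed `provider:model` shape. A hypothetical parser to illustrate the format (the SDK parses these specs itself, so this is only a sketch):

```typescript
// Illustrative parser for 'provider:model' spec strings, e.g. 'openai:gpt-4o'.
// Hypothetical helper - not part of the RelayPlane SDK's public API.
function parseModelSpec(spec: string): { provider: string; model: string } {
  const idx = spec.indexOf(':')
  if (idx <= 0 || idx === spec.length - 1) {
    throw new Error(`Invalid model spec '${spec}', expected 'provider:model'`)
  }
  // Split on the first colon only, in case a model name contains colons
  return { provider: spec.slice(0, idx), model: spec.slice(idx + 1) }
}
```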
Cost Optimization Strategies
Strategic model selection can dramatically reduce costs while maintaining quality. Use powerful models for complex reasoning and cheaper models for simpler tasks:
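The workflow below applies this split. To sanity-check the savings, you can estimate each step's input cost from token counts; the per-million-token rates here are illustrative assumptions, not quoted provider pricing:

```typescript
// Rough input-cost estimate. All rates are assumptions for illustration only.
const assumedRatePerMTok: Record<string, number> = {
  'anthropic:claude-sonnet-4-20250514': 3.0,
  'openai:gpt-4o-mini': 0.15,
  'google:gemini-2.0-flash': 0.1
}

function estimateInputCostUSD(
  steps: Array<{ model: string; inputTokens: number }>
): number {
  return steps.reduce(
    (total, s) =>
      total + ((assumedRatePerMTok[s.model] ?? 0) * s.inputTokens) / 1_000_000,
    0
  )
}
```

Under these assumed rates, routing a 100K-token extraction step to `openai:gpt-4o-mini` instead of Claude drops that step's estimate from $0.30 to $0.015 — a ~20x difference on that step alone.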
```typescript
const costOptimizedWorkflow = relay
  .workflow('cost-optimized')

  // Complex reasoning: Use most capable model
  .step('analyze', {
    systemPrompt: 'Perform deep analysis of this legal contract...',
    maxTokens: 2000
  })
  .with('anthropic:claude-sonnet-4-20250514') // $3/1M input tokens

  // Simple extraction: Use cheaper model
  .step('extractDates', {
    systemPrompt: 'Extract all dates from: {{analyze.output}}'
  })
  .with('openai:gpt-4o-mini') // $0.15/1M input tokens
  .depends('analyze')

  // Formatting: Use fastest/cheapest
  .step('format', {
    systemPrompt: 'Format as JSON: {{extractDates.output}}'
  })
  .with('google:gemini-2.0-flash') // Very cost-effective
  .depends('extractDates')

// Result: 10x cost reduction vs using Claude for everything
```

Capability Matching
Different models excel at different tasks. Match model capabilities to your step requirements:
- Vision tasks - Use openai:gpt-4o or anthropic:claude-sonnet-4-20250514
- Long context - Use anthropic:claude-sonnet-4-20250514 (200K) or google:gemini-1.5-pro (1M)
- Code generation - Use anthropic:claude-sonnet-4-20250514 or openai:gpt-4o
- Fast responses - Use openai:gpt-4o-mini or google:gemini-2.0-flash
Schema Validation
RelayPlane integrates with Zod to provide runtime validation and TypeScript type inference for step outputs. This ensures your workflow produces structured, predictable data.
Zod Integration for Typed Outputs
Define output schemas using Zod, and RelayPlane will instruct the AI model to respond in that format and validate the response at runtime:
```typescript
import { z } from 'zod'
import { relay } from '@relayplane/sdk'

// Define your schema
const AnalysisSchema = z.object({
  sentiment: z.enum(['positive', 'negative', 'neutral']),
  confidence: z.number().min(0).max(1),
  keyPhrases: z.array(z.string()),
  summary: z.string().max(200)
})

const workflow = relay
  .workflow('typed-analysis')
  .step('analyze', {
    systemPrompt: 'Analyze the sentiment of this text',
    schema: AnalysisSchema // AI will respond in this format
  })
  .with('openai:gpt-4o')

const result = await workflow.run({
  apiKeys: { openai: process.env.OPENAI_API_KEY },
  input: { text: 'Great product, fast shipping!' }
})

// TypeScript knows the shape of the output!
if (result.success) {
  const analysis = result.steps[0].output
  console.log(analysis.sentiment)  // 'positive' | 'negative' | 'neutral'
  console.log(analysis.confidence) // number
  console.log(analysis.keyPhrases) // string[]
}
```

Type Safety Benefits
Schema validation provides multiple layers of safety:
- Compile-time checks - TypeScript catches schema mismatches during development
- Runtime validation - Invalid AI responses are caught and reported
- IDE support - Full autocomplete and type hints for outputs
- Documentation - Schemas serve as executable documentation
Runtime Validation
When a model responds, RelayPlane parses and validates the response against your schema. If validation fails, the step fails with a descriptive error:
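Conceptually, this validation pass is just a structural check over the raw model output; with Zod it is a `safeParse` call. A dependency-free sketch of the checks a schema like the StrictSchema below implies (real code would simply call `StrictSchema.safeParse(value)`):

```typescript
// Dependency-free sketch of runtime validation for a shape like
// { score: int 1-10, category: 'A'|'B'|'C', timestamp: datetime string }.
// Illustrative only - in practice Zod performs these checks.
function validateStrict(value: {
  score: unknown
  category: unknown
  timestamp: unknown
}): string[] {
  const errors: string[] = []
  if (typeof value.score !== 'number' || !Number.isInteger(value.score)) {
    errors.push('score: Expected integer')
  } else if (value.score < 1 || value.score > 10) {
    errors.push('score: Expected a value between 1 and 10')
  }
  if (!['A', 'B', 'C'].includes(value.category as string)) {
    errors.push("category: Invalid enum value. Expected 'A' | 'B' | 'C'")
  }
  if (
    typeof value.timestamp !== 'string' ||
    Number.isNaN(Date.parse(value.timestamp))
  ) {
    errors.push('timestamp: Invalid datetime string')
  }
  return errors
}
```

(Zod's `z.string().datetime()` is stricter than `Date.parse`; this sketch only illustrates the principle.)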
```typescript
const StrictSchema = z.object({
  score: z.number().int().min(1).max(10),
  category: z.enum(['A', 'B', 'C']),
  timestamp: z.string().datetime()
})

// If the AI responds with:
// { score: 3.5, category: "D", timestamp: "invalid" }
//
// The step fails with validation errors:
// - score: Expected integer, received float
// - category: Invalid enum value. Expected 'A' | 'B' | 'C'
// - timestamp: Invalid datetime string

// You can catch and handle validation failures
const result = await workflow.run(options)
if (!result.success) {
  const failedStep = result.steps.find(s => !s.success)
  if (failedStep?.error?.type === 'VALIDATION_ERROR') {
    console.error('Schema validation failed:', failedStep.error.details)
  }
}
```

Tip: Use .describe() in your Zod schemas to help the AI understand what each field should contain.

Error Handling & Retries
AI workflows can fail for many reasons: rate limits, network issues, invalid responses, or model errors. RelayPlane provides robust error handling and automatic retry mechanisms.
Common Failure Modes
Understanding failure modes helps you build resilient workflows:
- Rate limits (429) - Too many requests to provider API
- Timeout errors - Model took too long to respond
- Context length exceeded - Input too large for model
- Invalid JSON response - Model didn't follow schema
- Authentication errors - Invalid or expired API key
- Provider outages - Temporary service unavailability
Retry Strategies with Exponential Backoff
Configure retries per step with exponential backoff to handle transient failures gracefully:
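The configuration below drives a schedule of the form `delay = retryDelay * retryBackoff ^ (attempt - 1)`. A sketch of that computation, assuming `retryDelay` is the initial delay and `retryBackoff` the per-attempt multiplier:

```typescript
// Computes the wait before each retry attempt, assuming retryDelay is the
// initial delay in ms and retryBackoff multiplies it on every attempt.
// Sketch of the schedule only - not the SDK's retry loop.
function backoffDelays(
  retries: number,
  retryDelay: number,
  retryBackoff: number
): number[] {
  return Array.from({ length: retries }, (_, i) => retryDelay * retryBackoff ** i)
}
```

With `retries: 3, retryDelay: 1000, retryBackoff: 2` this gives `[1000, 2000, 4000]` — the 7-second total shown in the timeline below.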
```typescript
const resilientWorkflow = relay
  .workflow('resilient')
  .step('fetchData', {
    systemPrompt: 'Analyze this data',

    // Retry configuration
    retries: 3,       // Maximum retry attempts
    retryDelay: 1000, // Initial delay in ms
    retryBackoff: 2,  // Multiplier for exponential backoff
    retryOn: [        // Which errors trigger retries
      'RATE_LIMIT',
      'TIMEOUT',
      'NETWORK_ERROR'
    ]
  })
  .with('openai:gpt-4o')

// Retry timeline with exponential backoff:
// Attempt 1: Immediate
// Attempt 2: Wait 1000ms (1s)
// Attempt 3: Wait 2000ms (2s)
// Attempt 4: Wait 4000ms (4s)
// Total max wait: 7 seconds
```

Graceful Degradation
For non-critical steps, you can configure workflows to continue even when individual steps fail:
```typescript
const degradingWorkflow = relay
  .workflow('graceful')

  // Critical step - workflow fails if this fails
  .step('core', {
    systemPrompt: 'Core analysis',
    required: true // Default behavior
  })
  .with('openai:gpt-4o')

  // Optional enrichment - workflow continues if this fails
  .step('enrich', {
    systemPrompt: 'Enrich with additional data: {{core.output}}',
    required: false, // Mark as optional
    retries: 1
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('core')

  // This step runs even if 'enrich' failed
  .step('format', {
    systemPrompt: 'Format results: {{core.output}}',
  })
  .with('openai:gpt-4o-mini')
  .depends('core') // Note: depends on 'core', not 'enrich'

// Handle partial failures
const result = await degradingWorkflow.run(options)
if (result.success) {
  // Core and format succeeded
  const enrichFailed = result.steps.find(s => s.stepName === 'enrich' && !s.success)
  if (enrichFailed) {
    console.log('Enrichment unavailable, using basic results')
  }
}
```

Template Variables
Template variables let you reference previous step outputs, workflow inputs, and context data in your prompts. They use a double-brace syntax that's resolved at runtime.
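Under the hood, resolution is string substitution against a scope object holding inputs, context, and completed step outputs. A minimal sketch (the SDK's real resolver may differ, e.g. in escaping and error handling):

```typescript
// Minimal {{dotted.path}} resolver: walks the scope object along the path
// and substitutes the value. Illustrative only - not the SDK's implementation.
function resolveTemplate(template: string, scope: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_match: string, path: string) => {
    const value = path
      .split('.')
      .reduce<unknown>((obj, key) => (obj as Record<string, unknown> | undefined)?.[key], scope)
    if (value === undefined) return '' // unresolved references become empty
    return Array.isArray(value) ? value.join(', ') : String(value)
  })
}
```

For example, `resolveTemplate('Summarize: {{extract.output}}', { extract: { output: 'three key points' } })` returns `'Summarize: three key points'`.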
Step Output References
Reference outputs from previous steps using the {{stepName.output}} syntax:
```typescript
const workflow = relay
  .workflow('chained')
  .step('extract', {
    systemPrompt: 'Extract key points from the document'
  })
  .with('openai:gpt-4o')
  .step('summarize', {
    // Reference the extract step's output
    systemPrompt: `
      Summarize these key points:
      {{extract.output}}

      Keep it under 100 words.
    `
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('extract')
  .step('translate', {
    // Reference the summarize step's output
    systemPrompt: `
      Translate this summary to Spanish:
      {{summarize.output}}
    `
  })
  .with('openai:gpt-4o')
  .depends('summarize')
```

Input References
Access workflow input data using {{input.field}}:
```typescript
const workflow = relay
  .workflow('personalized')
  .step('greet', {
    // Reference input fields
    systemPrompt: `
      Generate a personalized greeting for {{input.userName}}.
      They work at {{input.company}} as a {{input.role}}.
      Tone: {{input.tone}}
    `
  })
  .with('openai:gpt-4o')

// Provide inputs when running
const result = await workflow.run({
  apiKeys: { openai: process.env.OPENAI_API_KEY },
  input: {
    userName: 'Alice',
    company: 'Acme Corp',
    role: 'Software Engineer',
    tone: 'friendly'
  }
})
```

Context Object Access
For more complex scenarios, access the full context object which includes metadata, timestamps, and run information:
```typescript
const workflow = relay
  .workflow('contextual')
  .step('process', {
    systemPrompt: `
      Process this request:

      Run ID: {{context.runId}}
      Workflow: {{context.workflowName}}
      Timestamp: {{context.timestamp}}

      User Input: {{input.query}}

      Previous Analysis: {{analyze.output}}
    `
  })
  .with('openai:gpt-4o')
  .depends('analyze')

// The context object contains:
// - runId: Unique identifier for this execution
// - workflowName: Name of the workflow
// - timestamp: ISO timestamp of run start
// - input: All input fields
// - steps: Completed step outputs (by name)
```

Nested Object Access
When step outputs are structured objects (via schemas), access nested fields with dot notation:
```typescript
const AnalysisSchema = z.object({
  sentiment: z.object({
    score: z.number(),
    label: z.string()
  }),
  topics: z.array(z.string())
})

const workflow = relay
  .workflow('nested-access')
  .step('analyze', {
    systemPrompt: 'Analyze sentiment and extract topics',
    schema: AnalysisSchema
  })
  .with('openai:gpt-4o')
  .step('report', {
    // Access nested fields
    systemPrompt: `
      Generate a report:

      Sentiment Score: {{analyze.output.sentiment.score}}
      Sentiment Label: {{analyze.output.sentiment.label}}
      Topics: {{analyze.output.topics}}
    `
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('analyze')
```

Next Steps
Now that you understand the core concepts, continue your learning journey:
- Quickstart Guide - Build your first workflow in 5 minutes
- API Reference - Complete API documentation
- Providers - Configure OpenAI, Anthropic, Google, and more
- Invoice Processor - See a production-ready workflow example
- Workflow Templates - Start from pre-built templates