Templates Library
Copy-paste-ready workflow templates to jumpstart your AI automation projects, from simple two-step workflows to complex integration patterns.
Starter Templates
These minimal templates demonstrate the core RelayPlane patterns. Perfect for learning the SDK or as a foundation for more complex workflows.
Simple Two-Step: Extract and Summarize
The most common pattern: extract structured data from text, then summarize it. This template processes raw input and produces a concise summary.
```typescript
import { relay } from '@relayplane/sdk'

const extractSummarize = relay
  .workflow('extract-summarize')
  .step('extract', {
    systemPrompt: `Extract key information from the following text.
Return as JSON with fields: topic, main_points (array), entities (array).

Text: {{input.text}}`
  })
  .with('openai:gpt-4o')
  .step('summarize', {
    systemPrompt: `Based on this extracted data, write a 2-3 sentence summary:
{{extract.output}}`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('extract')

const result = await extractSummarize.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY
  },
  input: {
    text: 'Your raw document text here...'
  }
})
```

Vision + Language Pipeline
Process images with a vision model, then use language models for further analysis. Perfect for document processing, image analysis, or multimodal workflows.
```typescript
import { relay } from '@relayplane/sdk'

const visionPipeline = relay
  .workflow('vision-language-pipeline')
  .step('analyze-image', {
    systemPrompt: `Analyze this image and describe:
1. What objects/elements are present
2. Any text visible in the image
3. Overall context or scene

Image: {{input.imageUrl}}`,
    model: 'gpt-4o' // Vision-capable model
  })
  .with('openai:gpt-4o')
  .step('extract-insights', {
    systemPrompt: `From this image analysis, extract actionable insights:
{{analyze-image.output}}

Return as JSON: { insights: [], recommendations: [], confidence: "high|medium|low" }`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('analyze-image')
  .step('generate-report', {
    systemPrompt: `Create a professional report based on:
Analysis: {{analyze-image.output}}
Insights: {{extract-insights.output}}

Format as markdown with sections: Summary, Findings, Recommendations.`
  })
  .with('openai:gpt-4o')
  .depends('analyze-image', 'extract-insights')

const result = await visionPipeline.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY
  },
  input: {
    imageUrl: 'https://example.com/document.png'
  }
})
```

Multi-Model Comparison
Run the same task across multiple models to compare outputs. Useful for benchmarking, A/B testing, or ensuring consistency across providers.
```typescript
import { relay } from '@relayplane/sdk'

const multiModelComparison = relay
  .workflow('multi-model-comparison')
  .step('openai-response', {
    systemPrompt: `Answer this question concisely: {{input.question}}`
  })
  .with('openai:gpt-4o')
  .step('anthropic-response', {
    systemPrompt: `Answer this question concisely: {{input.question}}`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .step('google-response', {
    systemPrompt: `Answer this question concisely: {{input.question}}`
  })
  .with('google:gemini-1.5-pro')
  .step('compare-results', {
    systemPrompt: `Compare these three AI responses and identify:
1. Common themes
2. Key differences
3. Which provides the most comprehensive answer

OpenAI: {{openai-response.output}}
Anthropic: {{anthropic-response.output}}
Google: {{google-response.output}}`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('openai-response', 'anthropic-response', 'google-response')

const result = await multiModelComparison.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY,
    google: process.env.GOOGLE_API_KEY
  },
  input: {
    question: 'What are the key considerations when designing a microservices architecture?'
  }
})
```

Common Patterns
These patterns solve recurring workflow challenges. Each demonstrates a proven approach to handling specific use cases.
Extract, Validate, Summarize (Data Processing)
A three-step pattern that ensures data quality: extract structured data, validate it against rules, then create a summary. Essential for document processing pipelines.
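Because the validate step in the template below returns its verdict as model-generated text, application code should parse that text defensively before acting on it. A minimal sketch of such a helper (illustrative only, not part of the SDK; the `ValidationResult` shape mirrors the JSON requested in the prompt):

```typescript
// Illustrative helper: parse the validate step's text output into a typed
// result, returning null for malformed or mis-shaped responses.
interface ValidationResult {
  valid: boolean
  errors: string[]
  warnings: string[]
}

function parseValidationResult(raw: string): ValidationResult | null {
  try {
    const data = JSON.parse(raw)
    if (
      typeof data.valid === 'boolean' &&
      Array.isArray(data.errors) &&
      Array.isArray(data.warnings)
    ) {
      return data as ValidationResult
    }
    return null
  } catch {
    return null
  }
}

// Usage: treat an unparseable response as a validation failure.
const parsed = parseValidationResult('{"valid": false, "errors": ["missing date"], "warnings": []}')
```

Feeding each step's output through a guard like this keeps a single malformed model response from silently corrupting the rest of the pipeline.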
```typescript
import { relay } from '@relayplane/sdk'

const dataProcessingPipeline = relay
  .workflow('extract-validate-summarize')
  .step('extract', {
    systemPrompt: `Extract structured data from this document:
{{input.document}}

Return JSON with:
{
  "fields": { ... },
  "metadata": { "confidence": 0-100, "missing_fields": [] }
}`
  })
  .with('openai:gpt-4o')
  .step('validate', {
    systemPrompt: `Validate this extracted data against business rules:
{{extract.output}}

Rules:
- All required fields must be present
- Dates must be in ISO format
- Numeric values must be positive
- Email addresses must be valid format

Return: { "valid": boolean, "errors": [], "warnings": [] }`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('extract')
  .step('summarize', {
    systemPrompt: `Create a processing summary:
Extracted Data: {{extract.output}}
Validation Result: {{validate.output}}

Include: data overview, validation status, any issues found, recommended actions.`
  })
  .with('openai:gpt-4o')
  .depends('extract', 'validate')

const result = await dataProcessingPipeline.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY
  },
  input: {
    document: 'Raw document content...'
  }
})
```

Analyze, Classify, Route (Ticket Routing)
Intelligent routing pattern: analyze incoming content, classify it into categories, then determine the appropriate routing action. Perfect for support systems and triage workflows.
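The routing rules in this pattern live inside a prompt, so many teams also mirror them in plain code as a deterministic guardrail: the model classifies, but the final team assignment is re-derived from the classification. A minimal sketch (the `Classification` shape and team names follow the rules in the template below, but are illustrative):

```typescript
// Illustrative guardrail: re-derive the team from the classification so a
// model hallucination can't route a critical ticket to the wrong queue.
interface Classification {
  category: 'billing' | 'technical' | 'feature' | 'general'
  priority: 'critical' | 'high' | 'medium' | 'low'
}

function routeTicket(c: Classification): string {
  if (c.priority === 'critical') return 'on-call team'
  if (c.category === 'technical' && c.priority === 'high') return 'senior engineers'
  if (c.category === 'billing') return 'finance team'
  if (c.category === 'feature') return 'product team'
  if (c.priority === 'low') return 'batch queue'
  return 'general support'
}
```

Rule order matters: priority overrides category, which is why the critical check comes first.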
```typescript
import { relay } from '@relayplane/sdk'

const ticketRouter = relay
  .workflow('analyze-classify-route')
  .step('analyze', {
    systemPrompt: `Analyze this support ticket:
{{input.ticket}}

Extract:
- Primary issue description
- Urgency indicators (keywords, tone)
- Technical complexity
- Customer sentiment`
  })
  .with('openai:gpt-4o')
  .step('classify', {
    systemPrompt: `Based on this analysis, classify the ticket:
{{analyze.output}}

Categories:
- billing: Payment, subscription, refund issues
- technical: Bugs, errors, integration problems
- feature: Feature requests, suggestions
- general: Questions, feedback, other

Return JSON: {
  "category": "string",
  "subcategory": "string",
  "priority": "critical|high|medium|low",
  "confidence": 0-100
}`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('analyze')
  .step('route', {
    systemPrompt: `Determine routing for this ticket:
Analysis: {{analyze.output}}
Classification: {{classify.output}}

Routing rules:
- critical priority -> on-call team
- technical + high priority -> senior engineers
- billing -> finance team
- feature requests -> product team
- low priority -> queue for batch processing

Return: {
  "team": "string",
  "assignee_type": "specific|pool|queue",
  "sla_hours": number,
  "escalation_path": []
}`
  })
  .with('openai:gpt-4o')
  .depends('analyze', 'classify')

const result = await ticketRouter.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY
  },
  input: {
    ticket: 'Customer ticket content...'
  }
})
```

Generate, Review, Refine (Content Generation)
Self-improving content pattern: generate initial content, review it for quality and issues, then refine based on feedback. Produces higher-quality outputs than single-shot generation.
```typescript
import { relay } from '@relayplane/sdk'

const contentPipeline = relay
  .workflow('generate-review-refine')
  .step('generate', {
    systemPrompt: `Write {{input.contentType}} about: {{input.topic}}

Requirements:
- Target audience: {{input.audience}}
- Tone: {{input.tone}}
- Length: {{input.wordCount}} words approximately`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .step('review', {
    systemPrompt: `Review this content as an expert editor:
{{generate.output}}

Evaluate:
1. Clarity and readability
2. Factual accuracy
3. Tone consistency
4. Structure and flow
5. Grammar and style

Return JSON: {
  "score": 1-10,
  "strengths": [],
  "improvements": [],
  "specific_edits": [{ "original": "", "suggested": "", "reason": "" }]
}`
  })
  .with('openai:gpt-4o')
  .depends('generate')
  .step('refine', {
    systemPrompt: `Refine this content based on editorial feedback:

Original: {{generate.output}}
Review: {{review.output}}

Apply all suggested improvements while maintaining the original voice and intent.
Return the polished final version.`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('generate', 'review')

const result = await contentPipeline.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY
  },
  input: {
    contentType: 'blog post',
    topic: 'Benefits of AI automation in business',
    audience: 'business executives',
    tone: 'professional yet approachable',
    wordCount: 800
  }
})
```

Integration Templates
Templates for integrating RelayPlane workflows into your existing infrastructure. These patterns show how to trigger workflows from various sources.
Webhook-Triggered Workflow
Expose your workflow as an HTTP endpoint. Perfect for integrating with third-party services, Slack commands, or custom applications.
```typescript
import { relay } from '@relayplane/sdk'
import express from 'express'

// Define the workflow
const webhookWorkflow = relay
  .workflow('webhook-processor')
  .step('process', {
    systemPrompt: `Process this webhook payload:
{{input.payload}}

Extract relevant information and determine required actions.`
  })
  .with('openai:gpt-4o')
  .step('respond', {
    systemPrompt: `Based on the processed data:
{{process.output}}

Generate an appropriate response message.`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('process')

// Create Express server
const app = express()
app.use(express.json())

app.post('/webhook', async (req, res) => {
  try {
    const result = await webhookWorkflow.run({
      apiKeys: {
        openai: process.env.OPENAI_API_KEY,
        anthropic: process.env.ANTHROPIC_API_KEY
      },
      input: {
        payload: JSON.stringify(req.body)
      }
    })

    if (result.success) {
      res.json({
        success: true,
        runId: result.runId,
        response: result.steps[1].output
      })
    } else {
      res.status(500).json({
        success: false,
        error: result.error
      })
    }
  } catch (error) {
    res.status(500).json({ error: 'Workflow execution failed' })
  }
})

app.listen(3000, () => {
  console.log('Webhook server running on port 3000')
})
```

Scheduled Batch Processing
Process multiple items on a schedule. Ideal for nightly reports, batch document processing, or periodic data analysis.
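Serializing an unbounded item list into a single prompt can overflow the model's context window, so a common refinement is to split the batch into chunks and run the workflow once per chunk. A small helper for that (illustrative; a sensible chunk size depends on your item size and model):

```typescript
// Illustrative: split a batch into fixed-size chunks so each workflow run
// receives a bounded amount of input.
function chunk<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new Error('chunk size must be positive')
  const chunks: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size))
  }
  return chunks
}
```

In the template below, you would loop over `chunk(items, 25)` (or whatever size fits your data) and pass `JSON.stringify(part)` as `input.items` for each run, then merge the per-chunk reports.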
```typescript
import { relay } from '@relayplane/sdk'
import cron from 'node-cron'

// Define the batch processing workflow
const batchWorkflow = relay
  .workflow('batch-processor')
  .step('fetch-items', {
    systemPrompt: `Analyze and categorize these items:
{{input.items}}

Return structured analysis for each item.`
  })
  .with('openai:gpt-4o')
  .step('aggregate', {
    systemPrompt: `Aggregate the analysis results:
{{fetch-items.output}}

Create a summary report with:
- Total items processed
- Category breakdown
- Key insights
- Recommended actions`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('fetch-items')

// Schedule to run every day at 2 AM
cron.schedule('0 2 * * *', async () => {
  console.log('Starting scheduled batch processing...')

  // Fetch items from your data source
  const items = await fetchItemsFromDatabase()

  const result = await batchWorkflow.run({
    apiKeys: {
      openai: process.env.OPENAI_API_KEY,
      anthropic: process.env.ANTHROPIC_API_KEY
    },
    input: {
      items: JSON.stringify(items)
    }
  })

  if (result.success) {
    // Store results or send notifications
    await saveReport(result.steps[1].output)
    await sendSlackNotification('Batch processing complete')
    console.log(`Batch complete: ${result.runId}`)
  } else {
    await sendAlertNotification('Batch processing failed', result.error)
  }
})

async function fetchItemsFromDatabase() {
  // Your database query logic here
  return []
}

async function saveReport(report: string) {
  // Save to database or file system
}

async function sendSlackNotification(message: string) {
  // Send Slack notification
}

async function sendAlertNotification(title: string, error: unknown) {
  // Send alert notification
}
```

Event-Driven Architecture
Trigger workflows from message queues or event streams. Suitable for high-throughput systems with decoupled components.
```typescript
import { relay } from '@relayplane/sdk'
import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from '@aws-sdk/client-sqs'

// Define the event processing workflow
const eventWorkflow = relay
  .workflow('event-processor')
  .step('parse-event', {
    systemPrompt: `Parse and validate this event:
{{input.event}}

Extract: event_type, timestamp, payload, metadata`
  })
  .with('openai:gpt-4o')
  .step('process-event', {
    systemPrompt: `Process the parsed event:
{{parse-event.output}}

Determine required actions and generate appropriate response.`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('parse-event')
  .step('emit-result', {
    systemPrompt: `Format the processing result for downstream systems:
{{process-event.output}}

Return JSON suitable for publishing to result queue.`
  })
  .with('openai:gpt-4o')
  .depends('process-event')

// SQS consumer
const sqs = new SQSClient({ region: 'us-east-1' })
const QUEUE_URL = process.env.SQS_QUEUE_URL!

async function processMessages() {
  while (true) {
    const command = new ReceiveMessageCommand({
      QueueUrl: QUEUE_URL,
      MaxNumberOfMessages: 10,
      WaitTimeSeconds: 20
    })

    const response = await sqs.send(command)

    if (response.Messages) {
      for (const message of response.Messages) {
        try {
          const result = await eventWorkflow.run({
            apiKeys: {
              openai: process.env.OPENAI_API_KEY,
              anthropic: process.env.ANTHROPIC_API_KEY
            },
            input: {
              event: message.Body
            }
          })

          if (result.success) {
            // Delete processed message
            await sqs.send(new DeleteMessageCommand({
              QueueUrl: QUEUE_URL,
              ReceiptHandle: message.ReceiptHandle
            }))

            // Publish result to output queue
            await publishResult(result.steps[2].output)
          }
        } catch (error) {
          console.error('Failed to process message:', error)
        }
      }
    }
  }
}

async function publishResult(result: string) {
  // Publish to SNS, another SQS queue, or EventBridge
}

processMessages().catch(console.error)
```

Full Examples by Use Case
Explore complete, production-ready workflow examples organized by industry use case. Each example includes detailed documentation, code, and best practices.
Document Processing
Automate document analysis, data extraction, and report generation.
- Invoice Processor - Extract line items, validate totals, generate summaries
- Contract Analyzer - Identify key clauses, risks, and obligations
- Meeting Notes - Transcribe, summarize, and extract action items
- PII Detector - Identify and redact sensitive personal information
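As a taste of what the Invoice Processor validates, here is a deterministic totals check that could follow the extraction step. The `LineItem` shape is illustrative, not the example's actual schema:

```typescript
// Illustrative cross-check: extracted line items should sum to the stated total.
interface LineItem {
  description: string
  quantity: number
  unitPrice: number
}

function totalsMatch(items: LineItem[], statedTotal: number, toleranceCents = 1): boolean {
  const computed = items.reduce((sum, i) => sum + i.quantity * i.unitPrice, 0)
  // Compare in cents to sidestep floating-point drift.
  return Math.abs(Math.round(computed * 100) - Math.round(statedTotal * 100)) <= toleranceCents
}
```

Checks like this catch the most common extraction failure (a dropped or duplicated line item) without another model call.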
Customer Service
Enhance support operations with intelligent automation.
- Ticket Router - Classify and route support tickets automatically
- Support Reply Generator - Draft contextual responses to customer inquiries
- CSAT Analyzer - Analyze customer satisfaction surveys and feedback
- FAQ Generator - Generate FAQs from support ticket patterns
- Churn Risk Analyzer - Identify at-risk customers from interaction patterns
Sales & Marketing
Accelerate sales cycles and scale content production.
- Lead Qualifier - Score and qualify leads based on engagement data
- Proposal Generator - Create customized sales proposals from templates
- Social Post Generator - Generate platform-optimized social media content
- Sales Follow-up - Generate personalized follow-up emails
- Content Pipeline - Multi-stage content creation and review workflow
- SEO Audit - Analyze and optimize content for search engines
Engineering
Streamline development workflows and operations.
- Code Review - Automated code review with actionable feedback
- Log Analyzer - Parse and summarize application logs
- Incident Report - Generate post-mortems from incident data
- PR Changelog - Generate changelogs from pull request history
- Metrics Summary - Summarize and explain performance metrics
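For a flavor of the Log Analyzer, a pre-processing step these workflows often share: tally log levels locally and send only the aggregate (plus a sample of error lines) to the model, so the prompt stays small. A minimal sketch (the log format is an assumption; adjust the pattern to your logger):

```typescript
// Illustrative pre-processing: count occurrences of each log level so the
// model summarizes an aggregate instead of raw logs.
function tallyLevels(lines: string[]): Record<string, number> {
  const counts: Record<string, number> = {}
  for (const line of lines) {
    // Assumes lines like "2024-01-01T00:00:00Z ERROR something broke"
    const match = line.match(/\b(DEBUG|INFO|WARN|ERROR|FATAL)\b/)
    const level = match ? match[1] : 'UNKNOWN'
    counts[level] = (counts[level] ?? 0) + 1
  }
  return counts
}
```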
HR & Operations
Automate people operations and internal workflows.
- Hiring Screen - Initial candidate screening and evaluation
- Employee Feedback - Analyze and summarize employee surveys
- Policy Checker - Verify compliance with internal policies
- Email Digest - Summarize email threads and extract key points
- Data Enrichment - Enrich records with additional context and metadata
Next Steps
Ready to build your own workflows? Here are some resources to help you get started:
- Quickstart Guide - Build your first workflow in 5 minutes
- Core Concepts - Understand DAGs, dependencies, and execution
- API Reference - Complete SDK documentation
- Providers - Configure OpenAI, Anthropic, Google, xAI, and local models