Templates Library

Copy-paste-ready workflow templates to jump-start your AI automation projects. From simple two-step workflows to complex integration patterns.

All templates below are complete and runnable. Simply copy the code, add your API keys, and execute. Each template demonstrates a specific pattern you can adapt for your use case.

Starter Templates

These minimal templates demonstrate the core RelayPlane patterns. Perfect for learning the SDK or as a foundation for more complex workflows.

Simple Two-Step: Extract and Summarize

The most common pattern: extract structured data from text, then summarize it. This template processes raw input and produces a concise summary.

extract-summarize.ts
import { relay } from '@relayplane/sdk'

const extractSummarize = relay
  .workflow('extract-summarize')
  .step('extract', {
    systemPrompt: `Extract key information from the following text.
Return as JSON with fields: topic, main_points (array), entities (array).

Text: {{input.text}}`
  })
  .with('openai:gpt-4o')
  .step('summarize', {
    systemPrompt: `Based on this extracted data, write a 2-3 sentence summary:
{{extract.output}}`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('extract')

const result = await extractSummarize.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY
  },
  input: {
    text: 'Your raw document text here...'
  }
})
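The `{{...}}` placeholders reference workflow inputs (`input.*`) and earlier step outputs (`stepName.output`). As an illustration of how that resolution presumably works (a sketch, not the SDK's actual implementation), a template interpolator can be as simple as:

```typescript
// Hypothetical sketch of {{...}} placeholder resolution -- for illustration
// only, not the RelayPlane implementation.
function interpolate(template: string, context: Record<string, any>): string {
  // Replace each {{path.to.value}} with the value found in the context.
  return template.replace(/\{\{([^}]+)\}\}/g, (_, path: string) => {
    const value = path.trim().split('.').reduce((obj, key) => obj?.[key], context)
    return value === undefined ? '' : String(value)
  })
}

// Example: what the summarize step's prompt looks like after resolution.
const rendered = interpolate(
  'Based on this extracted data, write a 2-3 sentence summary:\n{{extract.output}}',
  { extract: { output: '{"topic":"AI automation"}' } }
)
console.log(rendered)
```

Step names containing hyphens (like `analyze-image` below) resolve the same way, since the path is only split on dots.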

Vision + Language Pipeline

Process images with a vision model, then use language models for further analysis. Perfect for document processing, image analysis, or multimodal workflows.

vision-pipeline.ts
import { relay } from '@relayplane/sdk'

const visionPipeline = relay
  .workflow('vision-language-pipeline')
  .step('analyze-image', {
    systemPrompt: `Analyze this image and describe:
1. What objects/elements are present
2. Any text visible in the image
3. Overall context or scene

Image: {{input.imageUrl}}`
  })
  .with('openai:gpt-4o') // vision-capable model
  .step('extract-insights', {
    systemPrompt: `From this image analysis, extract actionable insights:
{{analyze-image.output}}

Return as JSON: { insights: [], recommendations: [], confidence: "high|medium|low" }`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('analyze-image')
  .step('generate-report', {
    systemPrompt: `Create a professional report based on:
Analysis: {{analyze-image.output}}
Insights: {{extract-insights.output}}

Format as markdown with sections: Summary, Findings, Recommendations.`
  })
  .with('openai:gpt-4o')
  .depends('analyze-image', 'extract-insights')

const result = await visionPipeline.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY
  },
  input: {
    imageUrl: 'https://example.com/document.png'
  }
})

Multi-Model Comparison

Run the same task across multiple models to compare outputs. Useful for benchmarking, A/B testing, or ensuring consistency across providers.

multi-model-comparison.ts
import { relay } from '@relayplane/sdk'

const multiModelComparison = relay
  .workflow('multi-model-comparison')
  .step('openai-response', {
    systemPrompt: `Answer this question concisely: {{input.question}}`
  })
  .with('openai:gpt-4o')
  .step('anthropic-response', {
    systemPrompt: `Answer this question concisely: {{input.question}}`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .step('google-response', {
    systemPrompt: `Answer this question concisely: {{input.question}}`
  })
  .with('google:gemini-1.5-pro')
  .step('compare-results', {
    systemPrompt: `Compare these three AI responses and identify:
1. Common themes
2. Key differences
3. Which provides the most comprehensive answer

OpenAI: {{openai-response.output}}
Anthropic: {{anthropic-response.output}}
Google: {{google-response.output}}`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('openai-response', 'anthropic-response', 'google-response')

const result = await multiModelComparison.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY,
    google: process.env.GOOGLE_API_KEY
  },
  input: {
    question: 'What are the key considerations when designing a microservices architecture?'
  }
})
In the multi-model comparison, the first three steps run in parallel since they have no dependencies. The final comparison step waits for all three to complete.
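That scheduling behavior can be illustrated with a small dependency-wave calculation (a conceptual sketch, not the SDK's internal scheduler): any step whose dependencies are all complete is eligible to run, and eligible steps run together.

```typescript
// Conceptual sketch of dependency-based scheduling -- illustration only,
// not RelayPlane internals.
type DependencyGraph = Record<string, string[]>

function executionWaves(graph: DependencyGraph): string[][] {
  const done = new Set<string>()
  const waves: string[][] = []
  while (done.size < Object.keys(graph).length) {
    // Every step whose dependencies are all complete can run now, in parallel.
    const ready = Object.keys(graph).filter(
      step => !done.has(step) && graph[step].every(dep => done.has(dep))
    )
    if (ready.length === 0) throw new Error('Cycle detected in workflow')
    waves.push(ready)
    ready.forEach(step => done.add(step))
  }
  return waves
}

// The multi-model comparison workflow above yields two waves:
// the three provider steps together, then compare-results.
const waves = executionWaves({
  'openai-response': [],
  'anthropic-response': [],
  'google-response': [],
  'compare-results': ['openai-response', 'anthropic-response', 'google-response']
})
console.log(waves)
```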

Common Patterns

These patterns solve recurring workflow challenges. Each demonstrates a proven approach to handling specific use cases.

Extract, Validate, Summarize (Data Processing)

A three-step pattern that ensures data quality: extract structured data, validate it against rules, then create a summary. Essential for document processing pipelines.

extract-validate-summarize.ts
import { relay } from '@relayplane/sdk'

const dataProcessingPipeline = relay
  .workflow('extract-validate-summarize')
  .step('extract', {
    systemPrompt: `Extract structured data from this document:
{{input.document}}

Return JSON with:
{
  "fields": { ... },
  "metadata": { "confidence": 0-100, "missing_fields": [] }
}`
  })
  .with('openai:gpt-4o')
  .step('validate', {
    systemPrompt: `Validate this extracted data against business rules:
{{extract.output}}

Rules:
- All required fields must be present
- Dates must be in ISO format
- Numeric values must be positive
- Email addresses must be valid format

Return: { "valid": boolean, "errors": [], "warnings": [] }`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('extract')
  .step('summarize', {
    systemPrompt: `Create a processing summary:
Extracted Data: {{extract.output}}
Validation Result: {{validate.output}}

Include: data overview, validation status, any issues found, recommended actions.`
  })
  .with('openai:gpt-4o')
  .depends('extract', 'validate')

const result = await dataProcessingPipeline.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY
  },
  input: {
    document: 'Raw document content...'
  }
})

Analyze, Classify, Route (Ticket Routing)

Intelligent routing pattern: analyze incoming content, classify it into categories, then determine the appropriate routing action. Perfect for support systems and triage workflows.

ticket-router.ts
import { relay } from '@relayplane/sdk'

const ticketRouter = relay
  .workflow('analyze-classify-route')
  .step('analyze', {
    systemPrompt: `Analyze this support ticket:
{{input.ticket}}

Extract:
- Primary issue description
- Urgency indicators (keywords, tone)
- Technical complexity
- Customer sentiment`
  })
  .with('openai:gpt-4o')
  .step('classify', {
    systemPrompt: `Based on this analysis, classify the ticket:
{{analyze.output}}

Categories:
- billing: Payment, subscription, refund issues
- technical: Bugs, errors, integration problems
- feature: Feature requests, suggestions
- general: Questions, feedback, other

Return JSON: {
  "category": "string",
  "subcategory": "string",
  "priority": "critical|high|medium|low",
  "confidence": 0-100
}`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('analyze')
  .step('route', {
    systemPrompt: `Determine routing for this ticket:
Analysis: {{analyze.output}}
Classification: {{classify.output}}

Routing rules:
- critical priority -> on-call team
- technical + high priority -> senior engineers
- billing -> finance team
- feature requests -> product team
- low priority -> queue for batch processing

Return: {
  "team": "string",
  "assignee_type": "specific|pool|queue",
  "sla_hours": number,
  "escalation_path": []
}`
  })
  .with('openai:gpt-4o')
  .depends('analyze', 'classify')

const result = await ticketRouter.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY
  },
  input: {
    ticket: 'Customer ticket content...'
  }
})

Generate, Review, Refine (Content Generation)

Self-improving content pattern: generate initial content, review it for quality and issues, then refine based on feedback. Produces higher-quality outputs than single-shot generation.

generate-review-refine.ts
import { relay } from '@relayplane/sdk'

const contentPipeline = relay
  .workflow('generate-review-refine')
  .step('generate', {
    systemPrompt: `Write {{input.contentType}} about: {{input.topic}}

Requirements:
- Target audience: {{input.audience}}
- Tone: {{input.tone}}
- Length: {{input.wordCount}} words approximately`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .step('review', {
    systemPrompt: `Review this content as an expert editor:
{{generate.output}}

Evaluate:
1. Clarity and readability
2. Factual accuracy
3. Tone consistency
4. Structure and flow
5. Grammar and style

Return JSON: {
  "score": 1-10,
  "strengths": [],
  "improvements": [],
  "specific_edits": [{ "original": "", "suggested": "", "reason": "" }]
}`
  })
  .with('openai:gpt-4o')
  .depends('generate')
  .step('refine', {
    systemPrompt: `Refine this content based on editorial feedback:

Original: {{generate.output}}
Review: {{review.output}}

Apply all suggested improvements while maintaining the original voice and intent.
Return the polished final version.`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('generate', 'review')

const result = await contentPipeline.run({
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY
  },
  input: {
    contentType: 'blog post',
    topic: 'Benefits of AI automation in business',
    audience: 'business executives',
    tone: 'professional yet approachable',
    wordCount: 800
  }
})
The Generate-Review-Refine pattern typically produces markedly higher-quality content than single-shot generation. Consider adding multiple review-refine cycles for critical content.
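One way to run extra cycles is a loop that re-reviews until the editor's score clears a threshold or a cycle cap is hit. This is a sketch around hypothetical `review`/`refine` callbacks (wire them to your own step-execution code); it is not a RelayPlane API.

```typescript
// Sketch of an iterative review-refine loop. The review/refine callbacks and
// the score threshold are hypothetical placeholders -- adapt to your own code.
interface Review { score: number; improvements: string[] }

async function refineUntilGood(
  draft: string,
  review: (text: string) => Promise<Review>,
  refine: (text: string, feedback: Review) => Promise<string>,
  { minScore = 8, maxCycles = 3 } = {}
): Promise<string> {
  let current = draft
  for (let cycle = 0; cycle < maxCycles; cycle++) {
    const feedback = await review(current)
    if (feedback.score >= minScore) break // good enough, stop early
    current = await refine(current, feedback)
  }
  return current
}
```

Each extra cycle adds model calls (and therefore cost and latency), so keep `maxCycles` small and reserve multi-cycle runs for content where quality matters most.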

Integration Templates

Templates for integrating RelayPlane workflows into your existing infrastructure. These patterns show how to trigger workflows from various sources.

Webhook-Triggered Workflow

Expose your workflow as an HTTP endpoint. Perfect for integrating with third-party services, Slack commands, or custom applications.

webhook-server.ts
import { relay } from '@relayplane/sdk'
import express from 'express'

// Define the workflow
const webhookWorkflow = relay
  .workflow('webhook-processor')
  .step('process', {
    systemPrompt: `Process this webhook payload:
{{input.payload}}

Extract relevant information and determine required actions.`
  })
  .with('openai:gpt-4o')
  .step('respond', {
    systemPrompt: `Based on the processed data:
{{process.output}}

Generate an appropriate response message.`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('process')

// Create Express server
const app = express()
app.use(express.json())

app.post('/webhook', async (req, res) => {
  try {
    const result = await webhookWorkflow.run({
      apiKeys: {
        openai: process.env.OPENAI_API_KEY,
        anthropic: process.env.ANTHROPIC_API_KEY
      },
      input: {
        payload: JSON.stringify(req.body)
      }
    })

    if (result.success) {
      res.json({
        success: true,
        runId: result.runId,
        response: result.steps[1].output
      })
    } else {
      res.status(500).json({
        success: false,
        error: result.error
      })
    }
  } catch (error) {
    console.error('Webhook workflow error:', error)
    res.status(500).json({ error: 'Workflow execution failed' })
  }
})

app.listen(3000, () => {
  console.log('Webhook server running on port 3000')
})

Scheduled Batch Processing

Process multiple items on a schedule. Ideal for nightly reports, batch document processing, or periodic data analysis.

scheduled-batch.ts
import { relay } from '@relayplane/sdk'
import cron from 'node-cron'

// Define the batch processing workflow
const batchWorkflow = relay
  .workflow('batch-processor')
  .step('fetch-items', {
    systemPrompt: `Analyze and categorize these items:
{{input.items}}

Return structured analysis for each item.`
  })
  .with('openai:gpt-4o')
  .step('aggregate', {
    systemPrompt: `Aggregate the analysis results:
{{fetch-items.output}}

Create a summary report with:
- Total items processed
- Category breakdown
- Key insights
- Recommended actions`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('fetch-items')

// Schedule to run every day at 2 AM
cron.schedule('0 2 * * *', async () => {
  console.log('Starting scheduled batch processing...')

  // Fetch items from your data source
  const items = await fetchItemsFromDatabase()

  const result = await batchWorkflow.run({
    apiKeys: {
      openai: process.env.OPENAI_API_KEY,
      anthropic: process.env.ANTHROPIC_API_KEY
    },
    input: {
      items: JSON.stringify(items)
    }
  })

  if (result.success) {
    // Store results or send notifications
    await saveReport(result.steps[1].output)
    await sendSlackNotification('Batch processing complete')
    console.log(`Batch complete: ${result.runId}`)
  } else {
    await sendAlertNotification('Batch processing failed', result.error)
  }
})

async function fetchItemsFromDatabase() {
  // Your database query logic here
  return []
}

async function saveReport(report: string) {
  // Save to database or file system
}

async function sendSlackNotification(message: string) {
  // Send Slack notification
}

async function sendAlertNotification(title: string, error: unknown) {
  // Send alert notification
}

Event-Driven Architecture

Trigger workflows from message queues or event streams. Suitable for high-throughput systems with decoupled components.

event-driven.ts
import { relay } from '@relayplane/sdk'
import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from '@aws-sdk/client-sqs'

// Define the event processing workflow
const eventWorkflow = relay
  .workflow('event-processor')
  .step('parse-event', {
    systemPrompt: `Parse and validate this event:
{{input.event}}

Extract: event_type, timestamp, payload, metadata`
  })
  .with('openai:gpt-4o')
  .step('process-event', {
    systemPrompt: `Process the parsed event:
{{parse-event.output}}

Determine required actions and generate appropriate response.`
  })
  .with('anthropic:claude-sonnet-4-20250514')
  .depends('parse-event')
  .step('emit-result', {
    systemPrompt: `Format the processing result for downstream systems:
{{process-event.output}}

Return JSON suitable for publishing to result queue.`
  })
  .with('openai:gpt-4o')
  .depends('process-event')

// SQS consumer
const sqs = new SQSClient({ region: 'us-east-1' })
const QUEUE_URL = process.env.SQS_QUEUE_URL!

async function processMessages() {
  while (true) {
    const command = new ReceiveMessageCommand({
      QueueUrl: QUEUE_URL,
      MaxNumberOfMessages: 10,
      WaitTimeSeconds: 20
    })

    const response = await sqs.send(command)

    if (response.Messages) {
      for (const message of response.Messages) {
        try {
          const result = await eventWorkflow.run({
            apiKeys: {
              openai: process.env.OPENAI_API_KEY,
              anthropic: process.env.ANTHROPIC_API_KEY
            },
            input: {
              event: message.Body
            }
          })

          if (result.success) {
            // Delete processed message
            await sqs.send(new DeleteMessageCommand({
              QueueUrl: QUEUE_URL,
              ReceiptHandle: message.ReceiptHandle
            }))

            // Publish result to output queue
            await publishResult(result.steps[2].output)
          }
        } catch (error) {
          console.error('Failed to process message:', error)
        }
      }
    }
  }
}

async function publishResult(result: string) {
  // Publish to SNS, another SQS queue, or EventBridge
}

processMessages().catch(console.error)
For production event-driven systems, implement proper error handling, dead-letter queues, and idempotency. Consider using workflow checkpointing for long-running processes.
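Because SQS delivers at least once, the same message can reach the consumer twice. A minimal idempotency guard looks like the following sketch; the in-memory `Set` is for illustration only, and a production system would back it with a durable store such as Redis or DynamoDB.

```typescript
// Sketch of an idempotency guard for at-least-once queues. The in-memory Set
// is illustrative only -- use a durable store (Redis, DynamoDB) in production.
class IdempotencyGuard {
  private seen = new Set<string>()

  // Returns true exactly once per message id; repeat deliveries return false.
  claim(messageId: string): boolean {
    if (this.seen.has(messageId)) return false
    this.seen.add(messageId)
    return true
  }
}

// Usage inside the consumer loop: skip duplicates before running the workflow.
const guard = new IdempotencyGuard()
if (guard.claim('msg-123')) {
  // first delivery: run the workflow
} else {
  // duplicate delivery: acknowledge and skip
}
```

Pair this with a dead-letter queue (configured via the source queue's redrive policy) so messages that repeatedly fail are parked for inspection instead of looping forever.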

Full Examples by Use Case

Explore complete, production-ready workflow examples organized by industry use case. Each example includes detailed documentation, code, and best practices.

Document Processing

Automate document analysis, data extraction, and report generation.

Customer Service

Enhance support operations with intelligent automation.

Sales & Marketing

Accelerate sales cycles and scale content production.

Engineering

Streamline development workflows and operations.

HR & Operations

Automate people operations and internal workflows.

Next Steps

Ready to build your own workflows? Here are some resources to help you get started: