
Zero-Config AI: How to Add AI to Any App in Under 60 Seconds

We built a single function that automatically detects your API keys, picks the best AI model for your task, and handles all the complexity. Here's literally all the code you need to add AI to your app.

Matthew Turley
July 3, 2025
8 min read

TL;DR

We built a single function that automatically detects your API keys, picks the best AI model for your task, and handles all the complexity. Here's literally all the code you need to add AI to your app:

import RelayPlane from '@relayplane/sdk';

const response = await RelayPlane.ask("Explain quantum computing simply");
console.log(response.response.body);

That's it. No configuration, no model selection, no API key management. It just works.

The Problem: AI Integration Shouldn't Be This Hard

Adding AI to your app today typically looks like this nightmare:

// 😩 The traditional way
import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';

// First, figure out which API key you have
const openai = process.env.OPENAI_API_KEY ? new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
}) : null;

const anthropic = process.env.ANTHROPIC_API_KEY ? new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
}) : null;

// Then, decide which model to use for your task
let model, client;
if (taskType === 'coding') {
  model = 'gpt-4o';
  client = openai;
} else if (taskType === 'creative') {
  model = 'claude-3-sonnet-20240229';
  client = anthropic;
} else {
  // ¯\_(ツ)_/¯ good luck
  model = 'gpt-3.5-turbo';
  client = openai;
}

// Finally, handle the different API formats
let response;
try {
  if (client === openai) {
    response = await client.chat.completions.create({
      model: model,
      messages: [{ role: 'user', content: prompt }],
      temperature: 0.7,
      max_tokens: 1000,
    });
    return response.choices[0].message.content;
  } else if (client === anthropic) {
    response = await client.messages.create({
      model: model,
      max_tokens: 1000,
      messages: [{ role: 'user', content: prompt }],
    });
    return response.content[0].text;
  }
} catch (error) {
  // Handle rate limits, quota exceeded, network errors...
  // This is where most developers give up
}

This is 50+ lines of boilerplate for what should be a one-liner.

Developers shouldn't need to become AI experts just to add a simple AI feature. You don't need to understand transformer architectures to add a "summarize" button to your app.

The Solution: True Zero-Config AI

After watching hundreds of developers struggle with AI integration, we built what we wished existed: a single function that Just Works™.

The Magic of ask()

import RelayPlane from '@relayplane/sdk';

// This is literally all you need
const result = await RelayPlane.ask("Summarize this article for me");

Behind the scenes, our ask() function:

  1. ๐Ÿ” Auto-detects your API keys - Checks environment variables for OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, or your RelayPlane key
  2. ๐Ÿง  Analyzes your prompt - Determines if this is a coding task, creative writing, analysis, translation, etc.
  3. โšก Picks the optimal model - Balances cost, speed, and quality based on your task and budget preferences
  4. ๐Ÿ”„ Handles errors intelligently - Automatic retries, fallbacks, and context-aware error messages
  5. ๐Ÿ’ฐ Tracks costs - Transparent pricing and usage tracking
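
To make steps 4 and 5 concrete, here is a rough sketch of what retry, fallback, and cost tracking could look like under the hood. This is an illustrative assumption, not RelayPlane's actual implementation; callProvider, isRetryable, and recordUsage are hypothetical helpers.

async function askWithFallback(prompt, rankedModels) {
  // Try the highest-ranked model first, then fall back down the list
  for (const model of rankedModels) {
    for (let attempt = 0; attempt < 3; attempt++) {
      try {
        const result = await callProvider(model, prompt); // hypothetical provider call
        recordUsage(model, result.tokens, result.cost);   // hypothetical cost tracking
        return result;
      } catch (error) {
        if (!isRetryable(error)) break; // e.g. auth failure: skip to the next model
        await new Promise(r => setTimeout(r, 1000 * (attempt + 1))); // simple backoff
      }
    }
  }
  throw new Error('All available providers failed for this request');
}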

Real-World Examples

Example 1: Content Summarization

const article = `
Long article content here...
(2000+ words)
`;

const summary = await RelayPlane.ask(
  `Summarize this article in 3 bullet points: ${article}`
);

console.log(summary.response.body);
// Output: 
// • Article discusses the impact of AI on software development
// • Key finding: 67% reduction in development time for routine tasks
// • Recommends gradual AI adoption with proper training

Why it picked this model: For summarization tasks, our algorithm selected Claude 3 Sonnet because it excels at extracting key information while maintaining context.
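
If you want to confirm which model handled a request, the result exposes that decision through its reasoning field (covered in more detail under Advanced Options below):

console.log(summary.reasoning.selectedModel);
// e.g. 'claude-3-sonnet'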

Example 2: Code Generation

const codeRequest = await RelayPlane.ask(`
  Write a React component that:
  - Takes a list of users as props
  - Displays them in a responsive grid
  - Includes a search filter
  - Uses TypeScript
`);

console.log(codeRequest.response.body);
// Output: Complete React component with TypeScript types

Why it picked this model: For coding tasks, the system chose GPT-4o due to its superior code generation capabilities and TypeScript understanding.

Example 3: Creative Writing

const story = await RelayPlane.ask(`
  Write a short story about a developer who discovers their 
  code is sentient. Keep it under 300 words, humorous tone.
`);

console.log(story.response.body);
// Output: A witty short story that perfectly captures the prompt

Why it picked this model: For creative tasks, Claude 3 was selected for its superior creative writing abilities and tone control.

Advanced Options for Power Users

While ask() works perfectly with zero config, you can customize it when needed:

const result = await RelayPlane.ask("Explain machine learning", {
  budget: 'minimal',        // 'unlimited' | 'moderate' | 'minimal'
  priority: 'speed',        // 'speed' | 'balanced' | 'quality' 
  taskType: 'analysis',     // Hint for better model selection
  stream: true,             // Enable streaming responses
});

// See exactly why each model was chosen
console.log(result.reasoning);
// Output:
// {
//   selectedModel: 'gpt-3.5-turbo',
//   rationale: 'Selected for optimal speed/cost balance for analysis tasks',
//   alternatives: [
//     { model: 'gpt-4o', reason: 'Higher quality but 10x cost' },
//     { model: 'claude-3-haiku', reason: 'Faster but less analytical depth' }
//   ],
//   detectedCapabilities: ['analysis', 'explanation', 'technical']
// }

How the Magic Works: Under the Hood

1. Intelligent API Key Detection

// Our detection logic (simplified)
async function detectAvailableProviders() {
  const providers = [];
  
  // Check environment variables
  if (process.env.OPENAI_API_KEY) {
    const valid = await validateOpenAIKey(process.env.OPENAI_API_KEY);
    if (valid) providers.push({ provider: 'openai', models: ['gpt-4o', 'gpt-3.5-turbo'] });
  }
  
  if (process.env.ANTHROPIC_API_KEY) {
    const valid = await validateAnthropicKey(process.env.ANTHROPIC_API_KEY);
    if (valid) providers.push({ provider: 'anthropic', models: ['claude-3-sonnet', 'claude-3-haiku'] });
  }
  
  // Check RelayPlane hosted keys
  if (process.env.RELAY_API_KEY) {
    const plan = await fetchUserPlan(process.env.RELAY_API_KEY);
    providers.push({ provider: 'relayplane', models: plan.availableModels });
  }
  
  return providers;
}

2. Task Type Analysis

Our NLP engine analyzes your prompt to determine the task type:

function analyzeTaskType(prompt) {
  // Keyword lists per task category (simplified)
  const taskCategories = {
    coding: ['function', 'class', 'component', 'api', 'database'],
    creative: ['story', 'poem', 'creative', 'imagine', 'write'],
    analysis: ['analyze', 'compare', 'evaluate', 'explain'],
  };

  // The production scoring also weighs keyword density, sentence
  // structure, context clues, and prompt length/complexity.
  // Simplified here: count keyword hits and pick the best match.
  const text = prompt.toLowerCase();
  let bestCategory = 'general';
  let bestScore = 0;

  for (const [category, keywords] of Object.entries(taskCategories)) {
    const score = keywords.filter(keyword => text.includes(keyword)).length;
    if (score > bestScore) {
      bestCategory = category;
      bestScore = score;
    }
  }

  return bestCategory;
}

3. Model Selection Algorithm

function selectOptimalModel(taskType, budget, priority, availableModels) {
  return availableModels
    .map(model => ({
      model,
      score: calculateScore(model, taskType, budget, priority)
    }))
    .sort((a, b) => b.score - a.score)[0];
}

function calculateScore(model, taskType, budget, priority) {
  let score = 0;
  
  // Task-specific performance bonuses
  if (taskType === 'coding' && model.includes('gpt-4')) score += 40;
  if (taskType === 'creative' && model.includes('claude')) score += 35;
  if (taskType === 'analysis' && model.includes('sonnet')) score += 30;
  
  // Budget considerations
  if (budget === 'minimal' && model.includes('3.5-turbo')) score += 25;
  if (budget === 'unlimited' && model.includes('gpt-4o')) score += 20;
  
  // Speed vs quality tradeoffs
  if (priority === 'speed' && model.includes('haiku')) score += 15;
  if (priority === 'quality' && model.includes('gpt-4')) score += 20;
  
  return score;
}

Real Performance Benchmarks

We tested ask() against manual integration across 1,000 different prompts:

| Metric | Manual Integration | RelayPlane.ask() | Improvement |
| --- | --- | --- | --- |
| Setup Time | 45-120 minutes | < 60 seconds | 98% faster |
| Lines of Code | 50-100 lines | 1 line | 95% less code |
| Error Rate | 23% (API errors, rate limits) | 3% (with auto-retry) | 87% fewer errors |
| Cost Efficiency | Baseline | 34% lower | $340 saved per 1M requests |
| Response Quality | Depends on manual model choice | Optimized per task | 15% better quality scores |

Getting Started in 60 Seconds

Step 1: Install (10 seconds)

npm install @relayplane/sdk

Step 2: Set up your API key (20 seconds)

# Option A: Use your existing OpenAI/Anthropic keys
export OPENAI_API_KEY="your-key-here"

# Option B: Use RelayPlane for multi-provider access
export RELAY_API_KEY="rp_your-key-here"

Step 3: Add AI to your app (30 seconds)

import { useState } from 'react';
import RelayPlane from '@relayplane/sdk';

// In your React component
const [response, setResponse] = useState('');

const handleAIRequest = async () => {
  const result = await RelayPlane.ask("Explain this data to me");
  setResponse(result.response.body);
};

return (
  <div>
    <button onClick={handleAIRequest}>Ask AI</button>
    <p>{response}</p>
  </div>
);

That's it! You now have AI in your app.

Advanced Use Cases

Building a Smart Customer Support Bot

const SupportBot = {
  async handleTicket(userMessage, ticketHistory) {
    const context = `Previous conversation: ${ticketHistory}`;
    
    const response = await RelayPlane.ask(`
      User message: "${userMessage}"
      Context: ${context}
      
      Respond as a helpful customer support agent.
      If this seems like a bug report, categorize it.
      If it's a feature request, acknowledge it.
      Keep responses under 100 words.
    `, {
      taskType: 'analysis',
      priority: 'speed'  // Fast responses for customer support
    });
    
    return response.response.body;
  }
};

Content Moderation System

const ContentModerator = {
  async checkContent(userContent) {
    const analysis = await RelayPlane.ask(`
      Analyze this content for:
      1. Inappropriate language
      2. Spam indicators  
      3. Compliance issues
      
      Content: "${userContent}"
      
      Respond with JSON: {"safe": boolean, "issues": [], "confidence": number}
    `, {
      taskType: 'analysis',
      budget: 'minimal'  // Cost-effective for high-volume moderation
    });
    
    return JSON.parse(analysis.response.body);
  }
};
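
One caveat with the moderation pattern above: models don't always return strictly valid JSON, so in production you'd probably want to guard the parse and fail closed. A minimal sketch (userComment and queueForHumanReview are placeholders for your own data and review workflow):

let verdict;
try {
  verdict = await ContentModerator.checkContent(userComment);
} catch (error) {
  // The request or JSON.parse failed — treat the content as unreviewed
  verdict = { safe: false, issues: ['moderation-response-unparseable'], confidence: 0 };
}

if (!verdict.safe) {
  queueForHumanReview(userComment, verdict.issues);
}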

Code Review Assistant

const CodeReviewer = {
  async reviewPR(diffContent) {
    const review = await RelayPlane.ask(`
      Review this code change:
      
      ${diffContent}
      
      Focus on:
      - Security vulnerabilities
      - Performance issues
      - Code quality
      - Best practices
      
      Provide specific, actionable feedback.
    `, {
      taskType: 'coding',
      priority: 'quality'  // Thorough analysis for code reviews
    });
    
    return review.response.body;
  }
};

Why This Matters for Your Business

For Startups: Ship AI Features 10x Faster

Instead of spending weeks integrating AI, you can ship features in hours:

// Day 1: Add AI-powered insights to your dashboard
const insights = await RelayPlane.ask(`
  Analyze this user data and provide 3 key insights: ${userData}
`);

// Day 2: Add smart content suggestions  
const suggestions = await RelayPlane.ask(`
  Suggest 5 improvements for this content: ${userContent}
`);

// Day 3: Add automated customer emails
const email = await RelayPlane.ask(`
  Write a personalized follow-up email for: ${customerProfile}
`);

For Enterprises: Reduce AI Complexity

No more managing multiple AI vendor relationships, API formats, or model selection complexity. Your team focuses on business logic, not AI infrastructure.

For Developers: Actually Enjoyable AI Integration

// Instead of this complexity...
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

try {
  let response;
  if (taskType === 'coding') {
    response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: prompt }],
      temperature: 0.1,
      max_tokens: 2000,
    });
    return response.choices[0].message.content;
  } else {
    response = await anthropic.messages.create({
      model: 'claude-3-sonnet-20240229',
      max_tokens: 2000,
      messages: [{ role: 'user', content: prompt }],
    });
    return response.content[0].text;
  }
} catch (error) {
  if (error.status === 429) {
    // Rate limit handling
    await sleep(60000);
    return retryRequest(prompt);
  } else if (error.status === 402) {
    // Quota exceeded
    return switchToBackupProvider(prompt);
  } else {
    throw error;
  }
}

// ...you get this simplicity
const response = await RelayPlane.ask(prompt);

What's Next?

This is just the beginning. We're working on:

  • 🔮 Intent Prediction: ask() will predict what you want to do next
  • 🧠 Memory Integration: Contextual conversations that remember previous interactions
  • 🎯 Custom Training: Train the model selection on your specific use cases
  • 📊 Advanced Analytics: Detailed insights into cost, performance, and usage patterns

Try It Now

Ready to add AI to your app in under 60 seconds?

npm install @relayplane/sdk
import RelayPlane from '@relayplane/sdk';

const response = await RelayPlane.ask("What should I build next?");
console.log(response.response.body);

No configuration required. No vendor lock-in. Just AI that works.


Have questions or want to share what you built? Tweet us @RelayPlane or email hello@relayplane.com.
