ignitionstack.pro implements a multi-provider LLM architecture using the Adapter Pattern. Each provider has a dedicated adapter that normalizes different API interfaces into a unified IAIProvider contract, enabling seamless switching between providers.
```
┌───────────────────────────────────────────────────────────────────────────────┐
│                                Strategy Router                                 │
│                   (Selects best provider based on context)                    │
└───────────────────────────────────────────┬───────────────────────────────────┘
                                            │
     ┌────────────┬────────────┬────────────┼──────────────┬──────────────┐
     ▼            ▼            ▼            ▼              ▼              ▼
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────┐ ┌──────────┐
│  OpenAI  │ │  Gemini  │ │  Ollama  │ │  Ollama  │ │  Anthropic   │ │  Custom  │
│ Adapter  │ │ Adapter  │ │ Adapter  │ │  Remote  │ │   (Ready)    │ │ Adapter  │
└────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └──────┬───────┘ └────┬─────┘
     │            │            │            │              │              │
     ▼            ▼            ▼            ▼              ▼              ▼
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────┐ ┌──────────┐
│  OpenAI  │ │  Google  │ │  Local   │ │  Cloud   │ │  Anthropic   │ │   Your   │
│   API    │ │   API    │ │  Server  │ │  Server  │ │     API      │ │   API    │
└──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────────┘ └──────────┘
```
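Because every adapter satisfies the same IAIProvider contract (shown in full later in this section), calling code never depends on a concrete provider. A minimal sketch of the switch, using the ProviderFactory and chat signatures documented below (the exact ChatOptions shape is an assumption):

```typescript
import { ProviderFactory } from '@/lib/ai/factory/provider-factory'

// Any adapter can stand behind this call; only the factory argument changes.
const provider = await ProviderFactory.create('openai', {
  apiKey: process.env.OPENAI_API_KEY,
})

const response = await provider.chat(
  [{ role: 'user', content: 'Summarize the adapter pattern in one sentence.' }],
  { model: 'gpt-4o-mini' }, // assumed ChatOptions shape
)

// Switching to Gemini is a one-line change:
// const provider = await ProviderFactory.create('gemini', { apiKey: process.env.GOOGLE_AI_API_KEY })
```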
Location: `src/app/lib/ai/providers/openai-adapter.ts`

OpenAI is the default provider, offering the most comprehensive feature set, including vision, function calling, and high-quality embeddings.
| Model | Context | Vision | Tools | Best For |
|---|---|---|---|---|
| gpt-4o | 128K | Yes | Yes | Complex reasoning, multimodal |
| gpt-4o-mini | 128K | Yes | Yes | Cost-effective general use |
| gpt-4-turbo | 128K | Yes | Yes | High-quality generation |
| o1-preview | 128K | No | No | Advanced reasoning |
| o1-mini | 128K | No | No | Fast reasoning tasks |
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| gpt-4o | $2.50 | $10.00 |
| gpt-4o-mini | $0.15 | $0.60 |
| gpt-4-turbo | $10.00 | $30.00 |
| o1-preview | $15.00 | $60.00 |
| o1-mini | $3.00 | $12.00 |
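At these rates, for example, a gpt-4o request with 10,000 prompt tokens and 2,000 completion tokens costs (10,000 / 1M) × $2.50 + (2,000 / 1M) × $10.00 = $0.025 + $0.020 = $0.045, which is the calculation the estimateCost method in the IAIProvider contract below is expected to perform.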
```bash
# .env.local
OPENAI_API_KEY=sk-...
DEFAULT_AI_PROVIDER=openai
```

Embeddings are available through text-embedding-3-small and text-embedding-3-large.
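A quick sketch of generating an embedding through the adapter, using the generateEmbedding signature from the IAIProvider contract shown later in this section:

```typescript
import { ProviderFactory } from '@/lib/ai/factory/provider-factory'

const openai = await ProviderFactory.create('openai', {
  apiKey: process.env.OPENAI_API_KEY,
})

// Returns a plain number[] vector, ready for storage or similarity search.
const vector = await openai.generateEmbedding(
  'Adapters normalize provider APIs.',
  'text-embedding-3-small',
)
```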
Location: `src/app/lib/ai/providers/gemini-adapter.ts`

Google Gemini offers competitive pricing with strong multimodal capabilities and built-in safety filters.
| Model | Context | Vision | Tools | Best For |
|---|---|---|---|---|
| gemini-2.0-flash | 1M | Yes | Yes | Fast, cost-effective |
| gemini-1.5-pro | 2M | Yes | Yes | Long context, complex tasks |
| gemini-1.5-flash | 1M | Yes | Yes | Balanced speed/quality |
| gemini-1.0-pro | 32K | No | Yes | Text-only tasks |
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| gemini-2.0-flash | $0.075 | $0.30 |
| gemini-1.5-pro | $1.25 | $5.00 |
| gemini-1.5-flash | $0.075 | $0.30 |
| gemini-1.0-pro | $0.50 | $1.50 |
```bash
# .env.local
GOOGLE_AI_API_KEY=...
```

Gemini includes built-in safety filters for harassment, hate speech, sexually explicit content, and dangerous content.
Configure thresholds in the adapter settings.
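The adapter's own settings shape isn't reproduced here; as an illustration, the underlying @google/generative-ai SDK expresses per-category thresholds like this:

```typescript
import {
  GoogleGenerativeAI,
  HarmCategory,
  HarmBlockThreshold,
} from '@google/generative-ai'

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_AI_API_KEY!)

// Block only high-probability harassment; other categories keep SDK defaults.
const model = genAI.getGenerativeModel({
  model: 'gemini-1.5-flash',
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
  ],
})
```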
Location: `src/app/lib/ai/providers/ollama-adapter.ts`
Ollama enables self-hosted LLM inference with zero API costs. Perfect for development, privacy-sensitive applications, or air-gapped environments.
| Model | Parameters | Best For |
|---|---|---|
| llama3.1:8b | 8B | Fast local inference |
| llama3.1:70b | 70B | High-quality responses |
| mistral | 7B | Balanced performance |
| codellama | 7B-34B | Code generation |
| nomic-embed-text | - | Local embeddings |
```bash
# .env.local
OLLAMA_BASE_URL=http://localhost:11434
```

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Start the server
ollama serve

# Pull models
ollama pull llama3.1:8b
ollama pull nomic-embed-text

# Verify
ollama list
```
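With the server running, the local adapter is used like any other provider (a sketch, assuming createGlobalProvider also accepts the local 'ollama' provider id and the call shape mirrors the remote example below):

```typescript
import { ProviderFactory } from '@/lib/ai/factory/provider-factory'

// Local inference: no API key and no per-token cost.
const ollama = ProviderFactory.createGlobalProvider('ollama') // assumed provider id

const response = await ollama.chat({
  messages: [{ role: 'user', content: 'Explain embeddings in two sentences.' }],
  model: 'llama3.1:8b',
})
```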
Location: `src/app/lib/ai/providers/ollama-adapter.ts` (same adapter, different config)

Connect to cloud-hosted or centralized Ollama servers with API key authentication. Ideal for sharing GPU resources across teams or deploying on cloud VMs.

```bash
# .env.local
OLLAMA_REMOTE_BASE_URL=https://ollama.yourcompany.com
OLLAMA_REMOTE_API_KEY=your-api-key-here
```

```typescript
import { ProviderFactory } from '@/lib/ai/factory/provider-factory'

// Create remote provider
const ollama = ProviderFactory.createGlobalProvider('ollama_remote')

// List available models
const models = await ollama.listModels()
// ['llama3.2:latest', 'gemma3:12b', 'mistral:7b']

// Chat completion
const response = await ollama.chat({
  messages: [{ role: 'user', content: 'Hello!' }],
  model: 'llama3.2',
})
```

See Ollama Remote for the full API reference and deployment guides.
Location: `src/app/lib/ai/providers/` (structure ready)

Anthropic Claude support is architecturally prepared; add the adapter following the existing pattern.

```bash
# .env.local
ANTHROPIC_API_KEY=sk-ant-...
```

All adapters implement this contract:
```typescript
interface IAIProvider {
  // Core identification
  readonly name: AIProvider
  readonly supportedModels: string[]

  // Capabilities
  supportsVision(model: string): boolean
  supportsTools(model: string): boolean
  supportsStreaming(): boolean

  // Operations
  chat(messages: Message[], options: ChatOptions): Promise<ChatResponse>
  chatStream(messages: Message[], options: ChatOptions): AsyncGenerator<StreamChunk>
  generateEmbedding(text: string, model?: string): Promise<number[]>

  // Cost estimation
  estimateCost(promptTokens: number, completionTokens: number, model: string): number
}
```
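chatStream returns an AsyncGenerator, so consumers can render tokens as they arrive with for await. A usage sketch (the chunk's content field name is an assumption):

```typescript
import { ProviderFactory } from '@/lib/ai/factory/provider-factory'

const provider = await ProviderFactory.create('openai', {
  apiKey: process.env.OPENAI_API_KEY,
})

// Chunks arrive incrementally; `content` is an assumed StreamChunk field.
for await (const chunk of provider.chatStream(
  [{ role: 'user', content: 'Write a haiku about adapters.' }],
  { model: 'gpt-4o-mini' },
)) {
  process.stdout.write(chunk.content ?? '')
}
```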
The ProviderFactory instantiates adapters with the proper configuration:

```typescript
// src/app/lib/ai/factory/provider-factory.ts
import { ProviderFactory } from '@/lib/ai/factory/provider-factory'

// Get a provider instance
const provider = await ProviderFactory.create('openai', {
  apiKey: process.env.OPENAI_API_KEY,
})

// Or with the user's own API key
const userProvider = await ProviderFactory.createWithUserKey(userId, 'gemini')
```

The router intelligently selects providers based on multiple factors:
```typescript
// src/app/lib/ai/router/strategy-router.ts
import { StrategyRouter } from '@/lib/ai/router/strategy-router'

const router = new StrategyRouter()

const decision = await router.route({
  task: 'code', // code, creative, analysis, chat
  preferredProvider: 'openai',
  planTier: 'pro',
  requireVision: false,
  requireTools: true,
})
// Returns: { provider: 'openai', model: 'gpt-4o', score: 87 }
```

| Factor | Weight (Free) | Weight (Pro) | Weight (Enterprise) |
|---|---|---|---|
| Cost | 40% | 25% | 15% |
| Latency | 25% | 35% | 25% |
| Capability | 20% | 25% | 35% |
| Availability | 15% | 15% | 25% |
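Assuming the router combines normalized per-factor scores (0-100) as a weighted sum (the exact aggregation isn't shown here), a Pro-tier candidate scoring 80 on cost, 90 on latency, 95 on capability, and 70 on availability would receive 0.25 × 80 + 0.35 × 90 + 0.25 × 95 + 0.15 × 70 = 85.75.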
To add a custom provider:

1. Create the adapter in `src/app/lib/ai/providers/`:

```typescript
// my-provider-adapter.ts
import { IAIProvider } from '../interfaces/provider'

export class MyProviderAdapter implements IAIProvider {
  readonly name = 'my-provider' as const
  readonly supportedModels = ['model-a', 'model-b']

  // Implement interface methods...
}
```

2. Register it in the ProviderFactory:

```typescript
// provider-factory.ts
case 'my-provider':
  return new MyProviderAdapter(config)
```

3. Add it to the AIProvider union in `src/app/types/ai.ts`:

```typescript
export type AIProvider = 'openai' | 'gemini' | 'ollama' | 'my-provider'
```

Best practices:

- **Use the Router:** Let StrategyRouter pick the best provider instead of hardcoding one.
- **Handle Failures:** The circuit breaker automatically fails over to healthy providers (see the sketch below).
- **Monitor Costs:** Check the `ai_usage_logs` table for token consumption.
- **Secure Keys:** User API keys are encrypted with AES-256-GCM before storage.
- **Test Locally:** Use Ollama during development to avoid API costs.
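The circuit breaker's internals aren't shown in this section; purely as an illustration, a minimal failover loop over IAIProvider instances might look like this (the helper name and import path are hypothetical):

```typescript
// Hypothetical import path mirroring the adapter's '../interfaces/provider'.
import {
  IAIProvider,
  Message,
  ChatOptions,
  ChatResponse,
} from '@/lib/ai/interfaces/provider'

// Illustrative helper: try each provider in order, moving on when a call
// throws. A real circuit breaker additionally tracks failure rates and
// cooldown windows before retrying a tripped provider.
async function chatWithFailover(
  providers: IAIProvider[],
  messages: Message[],
  options: ChatOptions,
): Promise<ChatResponse> {
  let lastError: unknown
  for (const provider of providers) {
    try {
      return await provider.chat(messages, options)
    } catch (err) {
      lastError = err // treat this provider as unhealthy for this request
    }
  }
  throw lastError
}
```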