
LLM Providers

ignitionstack.pro implements a multi-provider LLM architecture using the Adapter Pattern. Each provider has a dedicated adapter that normalizes its API into the unified IAIProvider contract, enabling seamless switching between providers.

Provider Architecture

┌───────────────────────────────────────────────────────────────────────────────┐
│                                Strategy Router                                 │
│                   (Selects best provider based on context)                     │
└──────────────────────────────┬────────────────────────────────────────────────┘
     ┌────────────┬────────────┼────────────┬──────────────┬──────────────┐
     ▼            ▼            ▼            ▼              ▼              ▼
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────┐ ┌──────────┐
│  OpenAI  │ │  Gemini  │ │  Ollama  │ │  Ollama  │ │  Anthropic   │ │  Custom  │
│ Adapter  │ │ Adapter  │ │ Adapter  │ │  Remote  │ │   (Ready)    │ │ Adapter  │
└────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └──────┬───────┘ └────┬─────┘
     │            │            │            │              │              │
     ▼            ▼            ▼            ▼              ▼              ▼
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────┐ ┌──────────┐
│  OpenAI  │ │  Google  │ │  Local   │ │  Cloud   │ │  Anthropic   │ │   Your   │
│   API    │ │   API    │ │  Server  │ │  Server  │ │     API      │ │   API    │
└──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────────┘ └──────────┘
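
Because every adapter satisfies the same IAIProvider contract, calling code never branches on a concrete provider. A minimal sketch of what that buys you (the summarize helper is illustrative, and it assumes ChatOptions accepts a model and ChatResponse exposes content):

import { IAIProvider } from '@/lib/ai/interfaces/provider'

// Works with any adapter — swap in OpenAI, Gemini, or Ollama
// without touching this function.
async function summarize(provider: IAIProvider, text: string): Promise<string> {
  const response = await provider.chat(
    [{ role: 'user', content: `Summarize in one sentence: ${text}` }],
    { model: provider.supportedModels[0] },
  )
  return response.content
}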

Supported Providers

OpenAI

Location: src/app/lib/ai/providers/openai-adapter.ts

OpenAI is the default provider, offering the most comprehensive feature set including vision, function calling, and high-quality embeddings.

Available Models

Model         Context   Vision   Tools   Best For
gpt-4o        128K      Yes      Yes     Complex reasoning, multimodal
gpt-4o-mini   128K      Yes      Yes     Cost-effective general use
gpt-4-turbo   128K      Yes      Yes     High-quality generation
o1-preview    128K      No       No      Advanced reasoning
o1-mini       128K      No       No      Fast reasoning tasks

Pricing (per 1M tokens)

Model         Input    Output
gpt-4o        $2.50    $10.00
gpt-4o-mini   $0.15    $0.60
gpt-4-turbo   $10.00   $30.00
o1-preview    $15.00   $60.00
o1-mini       $3.00    $12.00

Configuration

# .env.local
OPENAI_API_KEY=sk-...
DEFAULT_AI_PROVIDER=openai
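
As a quick smoke test, create the provider through the ProviderFactory (covered below) and send a chat request. A sketch that assumes ChatResponse exposes a content field:

import { ProviderFactory } from '@/lib/ai/factory/provider-factory'

// Create the default provider and chat with a cheap model.
const openai = await ProviderFactory.create('openai', {
  apiKey: process.env.OPENAI_API_KEY,
})

const reply = await openai.chat(
  [{ role: 'user', content: 'Explain the Adapter Pattern in one sentence.' }],
  { model: 'gpt-4o-mini' },
)
console.log(reply.content)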

Google Gemini

Location: src/app/lib/ai/providers/gemini-adapter.ts

Google Gemini offers competitive pricing with strong multimodal capabilities and safety filters.

Available Models

Model              Context   Vision   Tools   Best For
gemini-2.0-flash   1M        Yes      Yes     Fast, cost-effective
gemini-1.5-pro     2M        Yes      Yes     Long context, complex tasks
gemini-1.5-flash   1M        Yes      Yes     Balanced speed/quality
gemini-1.0-pro     32K       No       Yes     Text-only tasks

Pricing (per 1M tokens)

Model              Input    Output
gemini-2.0-flash   $0.075   $0.30
gemini-1.5-pro     $1.25    $5.00
gemini-1.5-flash   $0.075   $0.30
gemini-1.0-pro     $0.50    $1.50
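
These rates are what the adapter's estimateCost method is expected to apply: tokens divided by one million, times the per-1M rate, input and output summed. For gemini-1.5-flash:

// Manual cost check for a gemini-1.5-flash call, using the table above.
const promptTokens = 12_000
const completionTokens = 800

const costUsd =
  (promptTokens / 1_000_000) * 0.075 +   // input: $0.075 per 1M tokens
  (completionTokens / 1_000_000) * 0.3   // output: $0.30 per 1M tokens

console.log(costUsd.toFixed(6))          // 0.001140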

Configuration

# .env.local
GOOGLE_AI_API_KEY=...

Safety Filters

Gemini includes built-in safety filters for:

- Harassment
- Hate speech
- Sexually explicit content
- Dangerous content

Configure thresholds in the adapter settings.
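
If the adapter wraps the official @google/generative-ai SDK (not confirmed here), thresholds take this shape when forwarded. The adapter's own settings surface may differ:

import {
  GoogleGenerativeAI,
  HarmCategory,
  HarmBlockThreshold,
} from '@google/generative-ai'

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_AI_API_KEY!)

// Per-category thresholds; content at or above the threshold is blocked.
const model = genAI.getGenerativeModel({
  model: 'gemini-1.5-flash',
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
      threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
  ],
})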


Ollama (Local)

Location: src/app/lib/ai/providers/ollama-adapter.ts

Ollama enables self-hosted LLM inference with zero API costs. Perfect for development, privacy-sensitive applications, or air-gapped environments.

Available Models

Model              Parameters   Best For
llama3.1:8b        8B           Fast local inference
llama3.1:70b       70B          High-quality responses
mistral            7B           Balanced performance
codellama          7B-34B       Code generation
nomic-embed-text   -            Local embeddings

Configuration

# .env.local
OLLAMA_BASE_URL=http://localhost:11434

Setup

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Start the server
ollama serve

# Pull models
ollama pull llama3.1:8b
ollama pull nomic-embed-text

# Verify
ollama list
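
With nomic-embed-text pulled, local embeddings cost nothing. A sketch using the factory and the generateEmbedding operation from the IAIProvider contract (the baseUrl config key is an assumption):

import { ProviderFactory } from '@/lib/ai/factory/provider-factory'

const ollama = await ProviderFactory.create('ollama', {
  baseUrl: process.env.OLLAMA_BASE_URL,  // assumed config key
})

// Zero-cost local embedding via the shared provider contract.
const vector = await ollama.generateEmbedding(
  'Adapter Pattern for LLM providers',
  'nomic-embed-text',
)
console.log(vector.length)  // nomic-embed-text produces 768-dim vectors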


Ollama Remote

Location: src/app/lib/ai/providers/ollama-adapter.ts (same adapter, different config)

Connect to cloud-hosted or centralized Ollama servers with API key authentication. Ideal for sharing GPU resources across teams or deploying on cloud VMs.

Configuration

# .env.local
OLLAMA_REMOTE_BASE_URL=https://ollama.yourcompany.com
OLLAMA_REMOTE_API_KEY=your-api-key-here

Usage

import { ProviderFactory } from '@/lib/ai/factory/provider-factory'

// Create remote provider
const ollama = ProviderFactory.createGlobalProvider('ollama_remote')

// List available models
const models = await ollama.listModels()
// ['llama3.2:latest', 'gemma3:12b', 'mistral:7b']

// Chat completion
const response = await ollama.chat({
  messages: [{ role: 'user', content: 'Hello!' }],
  model: 'llama3.2',
})

See Ollama Remote for full API reference and deployment guides.


Anthropic (Ready)

Location: src/app/lib/ai/providers/ (structure ready)

Anthropic Claude support is architecturally prepared. Add the adapter following the existing pattern.

# .env.local
ANTHROPIC_API_KEY=sk-ant-...

IAIProvider Interface

All adapters implement this contract:

interface IAIProvider {
  // Core identification
  readonly name: AIProvider
  readonly supportedModels: string[]

  // Capabilities
  supportsVision(model: string): boolean
  supportsTools(model: string): boolean
  supportsStreaming(): boolean

  // Operations
  chat(messages: Message[], options: ChatOptions): Promise<ChatResponse>
  chatStream(messages: Message[], options: ChatOptions): AsyncGenerator<StreamChunk>
  generateEmbedding(text: string, model?: string): Promise<number[]>

  // Cost estimation
  estimateCost(promptTokens: number, completionTokens: number, model: string): number
}
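
Since chatStream returns an AsyncGenerator, consumers iterate it with for await. A sketch that assumes each StreamChunk carries its text delta as content:

// Stream tokens to stdout as they arrive.
async function streamToStdout(provider: IAIProvider): Promise<void> {
  if (!provider.supportsStreaming()) return

  const chunks = provider.chatStream(
    [{ role: 'user', content: 'Write a haiku about adapters.' }],
    { model: provider.supportedModels[0] },
  )
  for await (const chunk of chunks) {
    process.stdout.write(chunk.content)
  }
}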

Provider Factory

The ProviderFactory instantiates adapters with proper configuration:

// src/app/lib/ai/factory/provider-factory.ts
import { ProviderFactory } from '@/lib/ai/factory/provider-factory'

// Get a provider instance
const provider = await ProviderFactory.create('openai', {
  apiKey: process.env.OPENAI_API_KEY,
})

// Or with a user's own API key
const userProvider = await ProviderFactory.createWithUserKey(userId, 'gemini')

Strategy Router

The router intelligently selects providers based on multiple factors:

// src/app/lib/ai/router/strategy-router.ts
const router = new StrategyRouter()

const decision = await router.route({
  task: 'code',               // code, creative, analysis, chat
  preferredProvider: 'openai',
  planTier: 'pro',
  requireVision: false,
  requireTools: true,
})
// Returns: { provider: 'openai', model: 'gpt-4o', score: 87 }
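
The decision plugs straight into the factory. A sketch in which the per-provider credential lookup is elided and the config shape assumed:

// Route, then instantiate whatever the router picked.
const choice = await router.route({ task: 'chat', planTier: 'pro' })
const provider = await ProviderFactory.create(choice.provider, {
  apiKey: lookupApiKey(choice.provider),  // hypothetical helper
})
const response = await provider.chat(
  [{ role: 'user', content: 'Hi!' }],
  { model: choice.model },
)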

Routing Heuristics

Factor         Weight (Free)   Weight (Pro)   Weight (Enterprise)
Cost           40%             25%            15%
Latency        25%             35%            25%
Capability     20%             25%            35%
Availability   15%             15%            25%
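
Each factor is scored, then combined as a weighted sum. With the Pro weights and some hypothetical 0-100 factor scores, the arithmetic matches the example decision above:

// score = Σ weight × factor score (factor scores here are hypothetical)
const proWeights = { cost: 0.25, latency: 0.35, capability: 0.25, availability: 0.15 }
const scores     = { cost: 80,   latency: 90,   capability: 85,   availability: 95 }

const total = (Object.keys(proWeights) as (keyof typeof proWeights)[])
  .reduce((sum, factor) => sum + proWeights[factor] * scores[factor], 0)

console.log(total)  // 87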

Adding a New Provider

  1. Create adapter in src/app/lib/ai/providers/:

// my-provider-adapter.ts
import { IAIProvider } from '../interfaces/provider'

export class MyProviderAdapter implements IAIProvider {
  readonly name = 'my-provider' as const
  readonly supportedModels = ['model-a', 'model-b']

  // Implement interface methods...
}

  2. Register in ProviderFactory:

// provider-factory.ts
case 'my-provider':
  return new MyProviderAdapter(config)

  3. Add types to src/app/types/ai.ts:

export type AIProvider = 'openai' | 'gemini' | 'ollama' | 'my-provider'

  4. Update environment variables and documentation.

Best Practices

  1. Use the Router: Let StrategyRouter pick the best provider instead of hardcoding.

  2. Handle Failures: The circuit breaker automatically fails over to healthy providers.

  3. Monitor Costs: Check ai_usage_logs table for token consumption.

  4. Secure Keys: User API keys are encrypted with AES-256-GCM before storage (see the sketch after this list).

  5. Test Locally: Use Ollama during development to avoid API costs.
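
For reference, AES-256-GCM with Node's built-in crypto module looks like the sketch below — a minimal illustration, not the project's actual encryption helper (key management and storage format are elided):

import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto'

// AES-256-GCM: 32-byte key, fresh 12-byte IV per message,
// auth tag stored alongside the ciphertext for integrity checks.
function encrypt(plaintext: string, key: Buffer) {
  const iv = randomBytes(12)
  const cipher = createCipheriv('aes-256-gcm', key, iv)
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()])
  return { iv, ciphertext, tag: cipher.getAuthTag() }
}

function decrypt(payload: { iv: Buffer; ciphertext: Buffer; tag: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv('aes-256-gcm', key, payload.iv)
  decipher.setAuthTag(payload.tag)
  return Buffer.concat([decipher.update(payload.ciphertext), decipher.final()]).toString('utf8')
}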