
AI Platform

The AI chat experience in ignitionstack.pro is implemented on top of the src/app/lib/ai stack. It orchestrates OpenAI, Google Gemini, and self-hosted Ollama models through a Strategy Router, retrieval augmented generation (RAG), and Server-Sent Events (SSE) API routes. This page also documents the developer workflows stored under /AI/ (prompts, templates, tools) and the companion .cursor / .windsurf rulesets used when pairing with AI copilots.

Provider Matrix

| Provider | Integration | Env vars | Notes |
| --- | --- | --- | --- |
| OpenAI | REST API via openai SDK | OPENAI_API_KEY | Default provider (gpt-4o-mini, gpt-4-turbo-preview). |
| Google Gemini | REST API via @google/generative-ai | GOOGLE_AI_API_KEY | Cost-effective creative/analysis tasks (gemini-pro). |
| Ollama | Local HTTP server | OLLAMA_BASE_URL (default http://localhost:11434) | Streams responses from self-hosted models (e.g., llama2, llama3). |
| Ollama Remote | Remote HTTP server | OLLAMA_REMOTE_BASE_URL, OLLAMA_REMOTE_API_KEY | Cloud-hosted Ollama with API key auth. |
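
All four integrations are wrapped in adapters that share a common shape, which is what lets the strategy router swap providers freely. A minimal sketch of what such an adapter contract could look like (AIProvider, ChatMessage, and streamChat are illustrative names, not the stack's verified exports):

```ts
// Illustrative only: the real adapter contract lives under src/app/lib/ai.
// A unified message shape shared by all providers.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Each provider (OpenAI, Gemini, Ollama, Ollama Remote) would implement the
// same interface, so the router can treat them interchangeably.
interface AIProvider {
  readonly name: string;
  // Yields tokens as they arrive so the route handler can forward them
  // as SSE chunks without buffering the full completion.
  streamChat(messages: ChatMessage[], model: string): AsyncIterable<string>;
  // Lets the router and circuit breaker skip unhealthy providers.
  isHealthy(): Promise<boolean>;
}
```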

Place these keys in .env.local (and .env.production as needed):

```
# AI providers
OPENAI_API_KEY=sk-...
GOOGLE_AI_API_KEY=...
ANTHROPIC_API_KEY=sk-ant-...  # optional Claude support
OLLAMA_BASE_URL=http://localhost:11434

# Ollama Remote (cloud-hosted Ollama server)
OLLAMA_REMOTE_BASE_URL=https://ollama.yourcompany.com
OLLAMA_REMOTE_API_KEY=your-api-key-here

# Feature toggles
DEFAULT_AI_PROVIDER=openai
ENABLE_RAG=true
EMBEDDING_MODEL=text-embedding-3-small
```

Architecture Overview

Key Modules

| Path | Responsibility |
| --- | --- |
| src/app/api/ai/chat/route.ts | Authenticates the user, enforces rate limits, saves messages, streams SSE chunks. |
| src/app/api/ai/upload/route.ts | Accepts documents for RAG (Supabase Storage + embeddings). |
| src/app/api/ai/share/route.ts | Generates shareable conversation links. |
| src/app/lib/ai/router/strategy-router.ts | Chooses a provider based on preference, task type, and provider health. |
| src/app/lib/ai/factory/provider-factory.ts | Builds provider adapters with API keys or OLLAMA_BASE_URL. |
| src/app/lib/ai/circuit-breaker/breaker.ts | Protects against cascading provider failures. |
| src/app/lib/ai/rag/* | RAG service, document processor, embeddings retrieval. |
| src/app/lib/repositories/{conversation,message,document}-repository.ts | Supabase persistence for chat records. |
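
breaker.ts exists to keep one failing provider from dragging down the whole chat experience. As a generic illustration of the pattern (not the repo's actual implementation, whose thresholds and API may differ), a circuit breaker counts consecutive failures, "opens" after a limit, and fails fast until a cooldown elapses:

```ts
// Generic circuit-breaker sketch; the real breaker.ts may differ.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;
  constructor(
    private readonly maxFailures = 5,     // failures before opening
    private readonly cooldownMs = 30_000, // how long to stay open
  ) {}

  async exec<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.maxFailures) {
      // While open, fail fast instead of hammering a broken provider.
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open: provider temporarily disabled");
      }
      // Half-open: allow one trial request; a failure reopens immediately.
      this.failures = this.maxFailures - 1;
    }
    try {
      const result = await fn();
      this.failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}
```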

Request Lifecycle

  1. Auth & Rate LimitgetUser() returns Supabase session; checkAPIRateLimit() throttles per user (20 chat requests/min).
  2. Conversation ManagementConversationRepository.create() seeds metadata (provider/model/system prompt).
  3. Message PersistenceMessageRepository stores user + assistant messages (including attachments) before streaming completes.
  4. RAGRAGService checks conversation flags, fetches embeddings (match_embeddings RPC), augments messages.
  5. RoutingStrategyRouter picks the provider (e.g., code tasks → Claude, creative → OpenAI). ProviderFactory instantiates the adapter with API key or OLLAMA_BASE_URL.
  6. Streaming – Provider-specific adapters produce tokens; route handler serializes them into SSE chunks.
  7. Post-processingToolExecutor runs actions mid-stream; final assistant message is persisted and caches revalidated.
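
To make the streaming step concrete, here is a minimal sketch of how a Next.js App Router handler can serialize provider tokens into SSE chunks. It is deliberately simplified: auth, rate limiting, persistence, RAG, and post-processing are omitted, and streamTokens() is a placeholder for the provider adapter, not the repo's actual API.

```ts
// Minimal SSE sketch; the real src/app/api/ai/chat/route.ts layers the
// lifecycle steps above around this core loop.
export async function POST(req: Request): Promise<Response> {
  const { messages } = await req.json();
  const encoder = new TextEncoder();

  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      // streamTokens() stands in for the provider adapter's token stream.
      for await (const token of streamTokens(messages)) {
        // Each SSE event is framed as "data: <payload>\n\n".
        controller.enqueue(
          encoder.encode(`data: ${JSON.stringify({ token })}\n\n`),
        );
      }
      controller.enqueue(encoder.encode("data: [DONE]\n\n"));
      controller.close();
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}

// Placeholder token source so the sketch type-checks on its own.
async function* streamTokens(_messages: unknown): AsyncGenerator<string> {
  yield "hello";
}
```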

Local Development

```bash
# Start Next.js (includes /api routes)
npm run dev

# Optional: start Ollama locally and pull models
ollama serve
ollama pull llama2
ollama run llama2 "ping"
```

Gemini and OpenAI require valid API keys. Add them to .env.local and restart npm run dev so the new values are picked up in process.env.
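
Because /api/ai/chat streams over a POST request, a browser client reads the SSE body with fetch and a stream reader rather than EventSource (which only supports GET). A hedged client-side sketch, assuming the data: {"token": ...} / [DONE] framing from the server sketch above:

```ts
// Client-side consumption of the SSE stream; the payload shape is an
// assumption matching the server sketch, not a documented contract.
async function streamChat(
  messages: { role: string; content: string }[],
  onToken: (token: string) => void,
) {
  const res = await fetch("/api/ai/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // SSE events are delimited by a blank line; keep the trailing
    // partial event in the buffer for the next chunk.
    const events = buffer.split("\n\n");
    buffer = events.pop() ?? "";
    for (const event of events) {
      if (!event.startsWith("data: ")) continue;
      const payload = event.slice("data: ".length);
      if (payload === "[DONE]") return;
      onToken(JSON.parse(payload).token);
    }
  }
}
```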

Testing

Security & Guardrails

AI Developer Toolkit (/AI folder)

The repository includes /AI/ to streamline AI-assisted development:

| Directory | Description |
| --- | --- |
| AI/context/ | Project overviews, coding standards, architecture summaries to paste into ChatGPT/Claude. |
| AI/prompts/ | Optimized prompts (feature dev, bug fix, code review, performance). Paste these into your AI tool of choice. |
| AI/templates/ | Skeletons for components, API routes, server actions, tests. Combine with prompts for consistent output. |
| AI/guidelines/ | Quality guardrails (performance, security, RLS, i18n). Review when requesting AI changes. |
| AI/tools/ | Helper scripts (context builders, prompt chaining). |
| AI/workflows/ | Step-by-step flows for debugging, deploying, or running tests with AI assistance. |
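
As an illustration of what a helper in AI/tools/ might look like, the sketch below concatenates context files into a single paste-ready bundle. The script itself and the coding-standards.md file name are hypothetical; only AI/context/project-overview.md is documented here.

```ts
// Hypothetical context builder in the spirit of AI/tools/; the actual
// scripts and their names are not documented on this page.
import { readFileSync, writeFileSync } from "node:fs";

const parts = [
  "AI/context/project-overview.md",
  "AI/context/coding-standards.md", // hypothetical file name
];

// Prefix each file with its path so the AI tool can cite its sources.
const bundle = parts
  .map((p) => `<!-- ${p} -->\n${readFileSync(p, "utf8")}`)
  .join("\n\n");

writeFileSync("AI/context/bundle.md", bundle);
console.log(`wrote bundle (${bundle.length} chars)`);
```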

Usage pattern:

  1. Paste AI/context/project-overview.md into your AI tool.
  2. Paste the relevant prompt (e.g., prompts/feature-development.md).
  3. Reference templates (e.g., templates/server-action.md).
  4. Cross-check against guidelines before applying suggestions.

Pair-programming Rulesets

When onboarding new devs or configuring AI copilots (ChatGPT, Claude, Copilot, Cursor, Windsurf), point them to the .cursor / .windsurf rulesets so generated code matches the repo standards (the ActionResult pattern, no direct Supabase access in components, logging via createServiceLogger, etc.).
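
For reference, a minimal sketch of the ActionResult pattern with service logging, assuming field names and a logger API that may differ from the repo's canonical definitions:

```ts
// Illustrative ActionResult shape; the repo's canonical definition
// may use different field names.
type ActionResult<T> =
  | { success: true; data: T }
  | { success: false; error: string };

// createServiceLogger exists in the repo, but this signature is assumed.
declare function createServiceLogger(name: string): {
  info: (msg: string) => void;
  error: (msg: string, err?: unknown) => void;
};

const logger = createServiceLogger("conversation-actions");

// Server actions return ActionResult instead of throwing, so components
// never touch Supabase or raw errors directly.
export async function renameConversation(
  id: string,
  title: string,
): Promise<ActionResult<{ id: string }>> {
  try {
    logger.info(`renaming conversation ${id}`);
    // ...repository call goes here (no direct Supabase in components)...
    return { success: true, data: { id } };
  } catch (err) {
    logger.error("failed to rename conversation", err);
    return { success: false, error: "Could not rename conversation" };
  }
}
```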

Keep this page updated whenever we add new providers, change routing logic, or evolve the AI workflows/templates so everyone (humans and AI agents) works off the same blueprint.