Ignitionstack ships a first-class conversational assistant. This page documents how the code inside src/app/actions/ai and src/app/lib/ai works today so you can safely extend providers, routing rules, or storage models.
| Action | Path | Responsibilities |
|---|---|---|
| sendMessage | src/app/actions/ai/send-message.ts | Orchestrates validation, auth (requireAuth), conversation lifecycle, routing, provider execution, persistence, and usage logging. |
| create-conversation, delete-conversation, update-conversation | src/app/actions/ai/*.ts | CRUD for conversations and stored preferences (system prompt, temperature, provider overrides). |
| save-api-key, delete-api-key | src/app/actions/ai/*.ts | Manage per-user API keys once we allow bring-your-own-provider flows. |
Every AI action returns an ActionResult object so UI callers can branch on success without guesswork.
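The exact type lives in the codebase, but as a rough sketch of the pattern (the field names below are illustrative assumptions, not the real definition), an ActionResult is a discriminated union that callers narrow on:

```typescript
// Hypothetical sketch of an ActionResult-style discriminated union;
// the real type in the codebase may carry different fields.
type ActionResult<T> =
  | { success: true; data: T }
  | { success: false; error: string };

// UI callers branch on `success` instead of wrapping every call in try/catch.
async function handleSend(
  send: (text: string) => Promise<ActionResult<{ reply: string }>>,
) {
  const result = await send("Hello!");
  if (result.success) {
    console.log("Assistant replied:", result.data.reply);
  } else {
    console.error("Action failed:", result.error);
  }
}
```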
The persistence layer is split across three repositories:

- ConversationRepository (src/app/lib/repositories/conversation-repository.ts) – Creates and fetches conversations scoped to the authenticated user, enforcing RLS by calling createClient().
- MessageRepository (src/app/lib/repositories/message-repository.ts) – Persists each user/assistant message pair with metadata such as provider, model, token counts, and costs. Used for both context reconstruction and analytics.
- AIRepository (src/app/lib/repositories/ai-repository.ts) – Stores the provider catalog, quotas, and per-plan entitlements that the router consults.

Repositories map Supabase rows to domain objects through the mappers under src/app/lib/mappers. They never return raw JSON; for example, src/app/lib/mappers/conversation.ts normalizes column names to camelCase fields and attaches friendly enums.
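As an illustration of that mapper layer, a conversation mapper might look roughly like the sketch below; the row and domain shapes are assumptions for the example, not the actual types in src/app/lib/mappers/conversation.ts:

```typescript
// Hypothetical row and domain shapes; the real mapper will differ in detail.
interface ConversationRow {
  id: string;
  user_id: string;
  system_prompt: string | null;
  provider_override: string | null;
  created_at: string;
}

type ProviderOverride = "openai" | "anthropic" | "google" | "none";

interface Conversation {
  id: string;
  userId: string;
  systemPrompt: string | null;
  providerOverride: ProviderOverride;
  createdAt: Date;
}

// Map a raw Supabase row to a domain object: camelCase fields, friendly enums.
function toConversation(row: ConversationRow): Conversation {
  return {
    id: row.id,
    userId: row.user_id,
    systemPrompt: row.system_prompt,
    providerOverride: (row.provider_override ?? "none") as ProviderOverride,
    createdAt: new Date(row.created_at),
  };
}
```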
StrategyRouter (src/app/lib/ai/router/strategy-router.ts) implements the IStrategyRouter interface and scores every available model by:
- Matching MODEL_CAPABILITIES (function calling, streaming, vision) against the requested RequiredCapabilities.
- Ranking models by MODEL_LATENCY and MODEL_COST_PER_1K when the user does not force a provider.
- Loading plan entitlements via AIRepository.fetchActiveModelsForPlan and filtering out providers whose circuit breaker is open.

ProviderFactory (src/app/lib/ai/factory/provider-factory.ts) instantiates SDK-specific providers (OpenAI, Google, Anthropic, Meta, etc.) by reading credentials from env.ts or per-user keys. Each provider implements the shared interface in src/app/lib/ai/interfaces/provider.ts, so the server action only sees provider.chat({ messages, ... }).

CircuitBreaker (src/app/lib/ai/circuit-breaker/breaker.ts) wraps every request. It tracks consecutive failures per {provider, model} pair and prevents hammering a degraded vendor until the cooldown expires. Usage is recorded before the ActionResult is returned, so analytics remains accurate even if the UI disconnects.

src/app/lib/ai/tools exposes the tool catalog that can be injected into providers that support function calling. src/app/lib/ai/rag/* contains embeddings and vector-retrieval helpers; when we flip on RAG, the server action will augment the message list with contextual snippets before hitting a provider.
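Putting the routing pieces together, the selection pass might look roughly like the sketch below; the type names, weights, and breaker callback are illustrative assumptions, not the actual strategy-router.ts code:

```typescript
// Illustrative-only sketch of capability/latency/cost scoring with a
// circuit-breaker filter; names and weights are assumptions.
interface ModelInfo {
  provider: string;
  model: string;
  capabilities: Set<"functionCalling" | "streaming" | "vision">;
  latencyMs: number;        // analogous to MODEL_LATENCY
  costPer1kTokens: number;  // analogous to MODEL_COST_PER_1K
}

interface RequiredCapabilities {
  functionCalling?: boolean;
  streaming?: boolean;
  vision?: boolean;
}

function pickModel(
  candidates: ModelInfo[],
  required: RequiredCapabilities,
  isBreakerOpen: (provider: string, model: string) => boolean,
): ModelInfo | undefined {
  const eligible = candidates.filter((m) => {
    // Skip providers whose circuit breaker is currently open.
    if (isBreakerOpen(m.provider, m.model)) return false;
    // Every requested capability must be supported by the model.
    if (required.functionCalling && !m.capabilities.has("functionCalling")) return false;
    if (required.streaming && !m.capabilities.has("streaming")) return false;
    if (required.vision && !m.capabilities.has("vision")) return false;
    return true;
  });

  // Lower latency and cost score better; the weights here are arbitrary.
  const score = (m: ModelInfo) => m.latencyMs * 0.001 + m.costPer1kTokens * 10;
  return eligible.sort((a, b) => score(a) - score(b))[0];
}
```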
| Table | Repository | Notes |
|---|---|---|
| ai_conversations | ConversationRepository | Stores user ownership, provider defaults, system prompt, and UI metadata. |
| ai_messages | MessageRepository | Captures both user and assistant roles plus token/cost metadata for billing. |
| ai_models / ai_providers | AIRepository | Feature flags per plan tier, latency/cost hints, quotas. |
Supabase migrations for these tables live in supabase/migrations/*. Each migration enables RLS and policies that ensure users can only touch their own conversations/messages.
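To show how the repository layer stays inside those policies, here is a rough sketch of persisting an assistant message with token/cost metadata; the column names and client setup are assumptions, and in the app the client would come from the server-side createClient() helper that carries the user's session so RLS applies automatically:

```typescript
import { createClient } from "@supabase/supabase-js";

// Hypothetical persistence sketch: insert an assistant message with token/cost
// metadata. Column names are assumptions based on the table description above.
async function saveAssistantMessage(
  conversationId: string,
  content: string,
  meta: { provider: string; model: string; inputTokens: number; outputTokens: number; costUsd: number },
) {
  // Standing in for the app's user-scoped createClient() helper.
  const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

  const { error } = await supabase.from("ai_messages").insert({
    conversation_id: conversationId,
    role: "assistant",
    content,
    provider: meta.provider,
    model: meta.model,
    input_tokens: meta.inputTokens,
    output_tokens: meta.outputTokens,
    cost_usd: meta.costUsd,
  });

  if (error) throw new Error(`Failed to persist message: ${error.message}`);
}
```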
When extending the AI stack:

- Add a provider: implement the AIProvider interface under src/app/lib/ai/providers, register it in ProviderFactory, and document its capability/cost/latency in StrategyRouter (a minimal provider sketch appears at the end of this page).
- Tune routing: adjust MODEL_CAPABILITIES, MODEL_LATENCY, or the scoring weights inside strategy-router.ts. Every change should be backed by unit tests in src/app/actions/ai/__tests__ and routing tests under src/app/lib/ai/router/__tests__.
- Validate new inputs: the chat components under src/app/components/chat call the server actions via progressive enhancement, so make sure new parameters are validated in sendMessageSchema (src/app/lib/validations/ai.ts).
- Log consistently: use createLogger({ service: "ai" }) from src/app/lib/logger.ts. Each major step in send-message.ts logs request IDs, provider decisions, latency, and token counts.
- Rate-limit: use src/app/lib/rate-limit.ts to protect /chat endpoints once BYO API keys are enabled.
- Emit analytics: track events via src/app/lib/analytics-events.ts so we can monitor provider mix, conversation drop-off, and plan enforcement.

Keeping this architecture in sync with the actual code makes it far easier to reason about AI regressions. When you touch any of the components above, update this page and link to the relevant ADR under content/architecture.
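To make the first bullet above concrete, here is a minimal sketch of a new provider behind the shared interface; the interface shape, method names, and the registration call are assumptions inferred from the descriptions on this page, not the actual contract in src/app/lib/ai/interfaces/provider.ts:

```typescript
// Hypothetical AIProvider shape inferred from the docs above; the real
// interface in src/app/lib/ai/interfaces/provider.ts may differ.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatResult {
  content: string;
  inputTokens: number;
  outputTokens: number;
}

interface AIProvider {
  chat(params: { messages: ChatMessage[]; model: string }): Promise<ChatResult>;
}

// Example: a new vendor wrapped behind the shared interface.
class AcmeProvider implements AIProvider {
  constructor(private readonly apiKey: string) {}

  async chat({ messages, model }: { messages: ChatMessage[]; model: string }): Promise<ChatResult> {
    // A real implementation would call the vendor SDK here with this.apiKey
    // and the requested model; this stub just echoes the last user message.
    const lastUser = messages.filter((m) => m.role === "user").at(-1);
    return {
      content: `[${model} / key ${this.apiKey ? "set" : "missing"}] ${lastUser?.content ?? ""}`,
      inputTokens: 0,
      outputTokens: 0,
    };
  }
}

// Registration in ProviderFactory would then look roughly like:
// providerFactory.register("acme", (key) => new AcmeProvider(key));
```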