ignitionstack.pro ships with a catalog of opinionated AI agents that live entirely in the repository. Each agent is described in markdown under `AI/agents` and is loaded at runtime through the typed loader in `src/app/lib/ai/agents/agent-loader.ts`. This guide explains how to create or customize agents, how the UI consumes them, and how to validate everything with automated tests.
```
AI/
├── agents/
│   ├── README.md            # Quick reference
│   ├── codebase-expert.md   # Example agent
│   └── ...
└── memory|instructions|...  # Additional AI context artifacts
```

The loader scans every `.md` file in `AI/agents` (except the README), parses its frontmatter, and extracts the system prompt from the `## System Prompt` section. The result is cached with `react.cache` so the UI gets instant access without re-reading the filesystem.

Every agent file follows the same structure:
```markdown
---
id: codebase-expert           # Unique slug
name: Codebase Expert         # UI label
description: Deep knowledge... # Appears in dropdowns
icon: bot                     # Lucide icon name
category: development         # See table below
tags: [code, docs]
---

# Codebase Expert

## Overview
Explain what problems this agent solves.

## System Prompt
[Write the full system prompt here. Use fenced code blocks so the loader can extract it.]

## Usage
- Optional tips, commands, references
```

Frontmatter fields:

| Field | Required | Description |
|---|---|---|
| `id` | ✅ | Unique identifier referenced by the chat UI (`agentId`). |
| `name` | ✅ | Human-friendly title shown in dropdowns. |
| `description` | ✅ | Short summary displayed beside the agent picker. |
| `icon` | ➖ | Lucide icon name (defaults to `bot`). |
| `category` | ➖ | One of `development`, `analysis`, `creative`, `education`, `business` (fallback: `development`). |
| `tags` | ➖ | Array of free-form tags used for filtering/search. |
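Under the hood, the loader's job is mechanical: read the frontmatter, apply the documented defaults, and pull the prompt out of the fenced block. A minimal, self-contained sketch of that logic follows; the hand-rolled frontmatter parsing and every helper name here are illustrative assumptions, not the actual `agent-loader.ts` API:

```typescript
// Sketch of the loading pipeline, simplified for illustration: the real
// loader reads files from AI/agents and memoizes results with react.cache.
// Frontmatter parsing is hand-rolled here; the repo may use a library.

interface AgentMetadata {
  id: string;
  name: string;
  description: string;
  icon: string;     // defaults to "bot"
  category: string; // falls back to "development"
  tags: string[];
}

const CATEGORIES = ["development", "analysis", "creative", "education", "business"];

// Pull `key: value` pairs out of the leading --- block, stripping
// trailing `# comment` annotations like the ones in the template above.
function parseFrontmatter(markdown: string): Record<string, string> {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  const fields: Record<string, string> = {};
  if (!match) return fields;
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    fields[line.slice(0, idx).trim()] = line.slice(idx + 1).split("#")[0].trim();
  }
  return fields;
}

// The prompt must live in a fenced block under "## System Prompt";
// any other format yields null, which the real loader logs.
function extractSystemPrompt(markdown: string): string | null {
  const section = markdown.split(/^## System Prompt\s*$/m)[1];
  if (!section) return null;
  const fenced = section.match(/```[^\n]*\n([\s\S]*?)```/);
  return fenced ? fenced[1].trim() : null;
}

function loadAgent(markdown: string): { meta: AgentMetadata; systemPrompt: string | null } {
  const raw = parseFrontmatter(markdown);
  const meta: AgentMetadata = {
    id: raw.id ?? "",
    name: raw.name ?? "",
    description: raw.description ?? "",
    icon: raw.icon ?? "bot", // documented default
    category: CATEGORIES.includes(raw.category) ? raw.category : "development", // documented fallback
    tags: raw.tags ? raw.tags.replace(/[[\]]/g, "").split(",").map((t) => t.trim()) : [],
  };
  return { meta, systemPrompt: extractSystemPrompt(markdown) };
}
```

Keeping extraction strict (fenced block only) is what lets the loader report a missing prompt instead of silently shipping an empty one.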
Tip: Keep prompts under version control by treating `.md` files like code. Changes go through code review, which simplifies audits.
`getAvailableAgents` (`src/app/server/ai/get-agents.ts`) exposes agent metadata to the UI without loading system prompts. The chat routes `/[locale]/(pages)/chat` and `/chat/[id]` call `getAvailableAgents` to populate the sidebar dropdown. The server action `src/app/actions/ai/create-conversation.ts` accepts an optional `agentId`. When provided, it:
- Resolves the agent via `getAgentById`.
- Overrides the conversation's `systemPrompt` with the agent prompt.
- `ConversationRepository.create` stores the prompt + metadata in Supabase so subsequent LLM calls stay agent-aware.

To add a new agent, create a `.md` file under `AI/agents` and write the `## System Prompt` section with a fenced code block; the loader ignores any other format.

| What to test | Command |
|---|---|
| Loader + parser (frontmatter, prompt extraction) | `npm run test -- src/app/test/unit/ai/agent-loader.test.ts` |
| `createConversation` agent workflow (new in this update) | `npm run test -- src/app/test/unit/ai/create-conversation-action.test.ts` |
The loader tests catch malformed markdown or metadata. The action tests guarantee that selecting an agent in the UI actually injects the agent prompt, updates the title, and logs missing agents correctly.
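The agent workflow those action tests exercise (inject the prompt, update the title, log a missing agent) can be sketched as follows. The signatures, defaults, and injected dependencies are assumptions for illustration, not the action's real shape:

```typescript
// Illustrative sketch of the agentId branch in create-conversation.ts.
// Everything beyond the names mentioned in this guide is hypothetical.

interface Agent {
  id: string;
  name: string;
  systemPrompt: string;
}

interface CreateConversationInput {
  title?: string;
  systemPrompt?: string;
  agentId?: string;
}

async function createConversation(
  input: CreateConversationInput,
  // Stand-ins for getAgentById and ConversationRepository.create:
  getAgentById: (id: string) => Promise<Agent | null>,
  store: (row: { title: string; systemPrompt: string; agentId?: string }) => Promise<string>,
): Promise<string> {
  let { title = "New conversation", systemPrompt = "", agentId } = input;

  if (agentId) {
    const agent = await getAgentById(agentId);
    if (agent) {
      systemPrompt = agent.systemPrompt; // the agent prompt overrides any default
      title = input.title ?? agent.name; // hypothetical: agent name as default title
    } else {
      console.warn(`Agent not found: ${agentId}`); // logged so tests can assert on it
      agentId = undefined;
    }
  }

  // Persist prompt + metadata so subsequent LLM calls stay agent-aware.
  return store({ title, systemPrompt, agentId });
}
```

Injecting `getAgentById` and `store` as parameters is purely to keep the sketch testable in isolation; the real action presumably imports its collaborators directly.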
Before shipping an agent, confirm the file name ends in `.md`, the frontmatter has `id`, `name`, and `description`, and the `category` matches the allowed list. Confirm a `## System Prompt` section exists and the prompt lives inside a fenced block (`` ``` ``). The loader logs "Agent has no system prompt" when extraction fails.

With these guardrails, adding an opinionated Claude/GPT agent is as simple as editing a markdown file, running `npm run test`, and deploying.
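That checklist lends itself to automation. A hypothetical validator is sketched below; the regexes, messages, and the function itself are illustrative, not part of the repo:

```typescript
// Hypothetical pre-ship check for agent files, mirroring the manual
// checklist: .md extension, required frontmatter fields, allowed
// category, and a fenced block under ## System Prompt.

const ALLOWED_CATEGORIES = ["development", "analysis", "creative", "education", "business"];

function validateAgentFile(fileName: string, content: string): string[] {
  const errors: string[] = [];
  if (!fileName.endsWith(".md")) errors.push("file must end in .md");

  // Required frontmatter fields live between the leading --- markers.
  const fm = content.match(/^---\n([\s\S]*?)\n---/)?.[1] ?? "";
  for (const field of ["id", "name", "description"]) {
    if (!new RegExp(`^${field}:`, "m").test(fm)) {
      errors.push(`missing frontmatter field: ${field}`);
    }
  }

  const category = fm.match(/^category:\s*(\S+)/m)?.[1];
  if (category && !ALLOWED_CATEGORIES.includes(category)) {
    errors.push(`unknown category: ${category}`);
  }

  // The prompt must sit in a fenced block under "## System Prompt".
  if (!/^## System Prompt\s*$/m.test(content)) {
    errors.push("missing ## System Prompt section");
  } else if (!/## System Prompt[\s\S]*?```/.test(content)) {
    errors.push("prompt must live in a fenced block");
  }
  return errors;
}
```

A script like this could run in CI next to the unit tests, so a malformed agent file fails the build before it ever reaches the loader.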