Curia is not a single AI assistant — it is a team. A Coordinator agent serves as the unified face of that team, handling every message you or anyone else sends. Behind the scenes, specialist agents (research, expense tracking, scheduling, and more) do focused work and hand their results back to the Coordinator, which synthesizes everything into a single coherent reply. The people who email or message you never know multiple agents were involved.

## Documentation Index
Fetch the complete documentation index at: https://curia.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
## The Coordinator agent
Every inbound message — across every channel — routes to the Coordinator first, without exception. The Coordinator has a persona you configure: a display name, a communication tone, and an email signature. To the outside world, that persona is the only identity anyone sees. The Coordinator decides how to handle each message:

- Handle directly — small talk, acknowledgments, simple questions the Coordinator can answer from memory
- Delegate to a specialist — complex or domain-specific tasks go to the right specialist agent, which returns results to the Coordinator
- Synthesize and respond — the Coordinator assembles the specialist’s work into a reply in its own voice
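Put together, the triage step can be pictured as a small decision function. This is a toy model, not Curia's actual dispatch code: the message patterns and the `@research-analyst` handle are illustrative, and in practice the decision is made by the Coordinator's LLM rather than by regexes.

```typescript
// Toy sketch of Coordinator triage. In Curia the decision is made by the
// Coordinator's LLM; the patterns below only stand in for that judgment.
type TriageAction = "handle_directly" | "delegate";

interface TriageResult {
  action: TriageAction;
  specialist?: string; // internal handle, never shown in outbound messages
}

function triage(message: string): TriageResult {
  // Small talk and acknowledgments are handled directly.
  if (/^(hi|hello|thanks)/i.test(message)) {
    return { action: "handle_directly" };
  }
  // Domain-specific work is delegated to a specialist, whose results the
  // Coordinator later synthesizes into a reply in its own voice.
  if (/research|competitor|market/i.test(message)) {
    return { action: "delegate", specialist: "@research-analyst" };
  }
  return { action: "handle_directly" };
}
```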
A `role: coordinator` field in its YAML tells the dispatch layer to route all messages here. There is exactly one Coordinator per deployment.
## Specialist agents
Specialist agents like `research-analyst` work entirely inside the system. They receive delegated tasks from the Coordinator via the Bullpen or a direct `agent.response` event, do their work using their own pinned skills and memory scopes, and return results. They never communicate with the outside world directly — their internal handles (such as `@research-analyst`) appear only in the audit log and Bullpen threads, never in outbound messages.
## Agent definition in YAML
You define agents in YAML files in the `agents/` directory. No code is required for most agents. The full schema includes:
### Key fields
| Field | Description |
|---|---|
| `model.provider` | LLM provider: `anthropic`, `openai`, or `ollama` |
| `model.model` | Model identifier for the chosen provider |
| `model.fallback` | Optional fallback provider + model if the primary is unavailable |
| `pinned_skills` | Skills always available to this agent in every task |
| `allow_discovery` | When `true`, the agent can search the skill registry for capabilities not in its pinned list |
| `memory.scopes` | Named memory scopes the agent can read from and write to |
| `schedule` | Cron expressions for recurring tasks the agent runs automatically |
| `error_budget.max_turns` | Maximum LLM round-trips per task execution — hard cap |
| `error_budget.max_cost_usd` | Maximum dollar spend per task execution — hard cap |
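As a concrete reference, a definition using these fields might look like the following. The agent name, model identifiers, skill names, and budget values are all illustrative examples, not canonical Curia defaults.

```yaml
# Illustrative agent definition in agents/; values are examples only.
name: research-analyst
model:
  provider: anthropic
  model: claude-sonnet-4        # assumed identifier, check your provider
  fallback:
    provider: openai
    model: gpt-4o
pinned_skills:
  - web-search
  - summarize
allow_discovery: true
memory:
  scopes:
    - research-notes
schedule:
  - "0 9 * * 1"                 # every Monday at 09:00
error_budget:
  max_turns: 25
  max_cost_usd: 2.50
```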
## Model configuration and fallback
Each agent specifies its own LLM provider and model. If the primary provider is unavailable, Curia automatically switches to the configured fallback — no manual intervention required.

## Error budgets
Every agent task has hard caps that prevent runaway behavior — infinite loops, surprise LLM bills, or agents that spin forever on a stuck task.

- `max_turns` — the maximum number of LLM round-trips for a single task execution. When exceeded, the task stops and reports an error.
- `max_cost_usd` — the maximum estimated dollar spend for a single task. Tracked across every LLM call in the task; exceeded tasks are halted immediately.
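The two caps can be pictured as a guard evaluated between LLM calls. This is a sketch under assumed type names, not Curia's execution-layer code:

```typescript
// Sketch of hard-cap enforcement between LLM round-trips. Field names
// mirror the YAML (max_turns, max_cost_usd) but the types are assumed.
interface ErrorBudget {
  maxTurns: number;
  maxCostUsd: number;
}

interface TaskState {
  turns: number;
  costUsd: number;   // estimated spend accumulated across all LLM calls
  halted?: string;   // reason, if a cap was exceeded
}

function checkBudget(state: TaskState, budget: ErrorBudget): TaskState {
  if (state.turns > budget.maxTurns) {
    return { ...state, halted: "max_turns exceeded" };
  }
  if (state.costUsd > budget.maxCostUsd) {
    return { ...state, halted: "max_cost_usd exceeded" };
  }
  return state; // within budget, the task loop continues
}
```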
## Skill discovery
When `allow_discovery: true` is set, the agent receives the built-in skill-registry tool in addition to its pinned skills. If the LLM determines it needs a capability not in its pinned list, it queries the registry for a matching skill.
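The registry tool's exact interface isn't shown on this page, so the following is an assumed shape: a keyword search over skill entries that each carry a name, description, and sensitivity level.

```typescript
// Assumed shape of a skill-registry lookup; the real tool's API may differ.
interface SkillEntry {
  name: string;
  description: string;
  sensitivity: "normal" | "elevated";
}

// A tiny in-memory registry standing in for the real one.
const registry: SkillEntry[] = [
  { name: "send-invoice", description: "Create and email an invoice", sensitivity: "elevated" },
  { name: "currency-convert", description: "Convert amounts between currencies", sensitivity: "normal" },
];

function searchRegistry(query: string): SkillEntry[] {
  const q = query.toLowerCase();
  return registry.filter(
    (s) => s.name.includes(q) || s.description.toLowerCase().includes(q)
  );
}
```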
Skills tagged `sensitivity: elevated` — such as payment processing or bulk deletion — require human approval the first time an agent attempts to use them. Normal-sensitivity skills discovered via the registry are auto-approved.

## TypeScript handlers
For agents that need custom logic beyond what YAML configuration supports, you can add a TypeScript handler as an escape hatch. The YAML configuration stays exactly the same; you add a `handler` field pointing to your TypeScript file.
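A handler might look like the following. The `Task` and `HandlerResult` shapes are assumptions for illustration; Curia's actual handler signature may differ.

```typescript
// Hypothetical module referenced from a `handler:` field in the agent's
// YAML. The input/output types are assumed, not Curia's published API.
interface Task {
  input: string;
  conversationId: string;
}

interface HandlerResult {
  output: string;
}

export async function handle(task: Task): Promise<HandlerResult> {
  // Custom logic that plain YAML configuration can't express, e.g.
  // normalizing the input before it reaches the agent's LLM.
  const normalized = task.input.trim().toLowerCase();
  return { output: `processed: ${normalized}` };
}
```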
## The Bullpen
When specialist agents need to coordinate, they do so in the Bullpen — a structured, threaded discussion space that functions like an internal team channel. Any agent can open a thread addressed to other agents. The Coordinator checks for pending Bullpen threads on every task, ensuring nothing gets missed even if an agent only activates on inbound messages. Every Bullpen exchange is:

- Logged — every message is written to the audit log with full causal tracing
- Visible — you can observe all threads via the dashboard or HTTP API SSE stream
- Interruptible — you can send a message to any thread via any channel to redirect, clarify, or stop a discussion
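These guarantees can be modeled with a toy thread type in which posting a message always writes to both the thread and an audit log. The names here are illustrative, not Curia's internal structures.

```typescript
// Toy Bullpen model: every post is both visible on the thread and
// appended to the audit log, mirroring the guarantees listed above.
interface BullpenMessage {
  from: string;    // internal handle, e.g. "@research-analyst"
  to: string[];
  body: string;
}

const auditLog: BullpenMessage[] = [];

class BullpenThread {
  messages: BullpenMessage[] = [];

  post(msg: BullpenMessage): void {
    this.messages.push(msg); // visible via the dashboard / SSE observers
    auditLog.push(msg);      // logged for causal tracing
  }
}
```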
## Agent lifecycle
When a message arrives, here is what happens inside Curia:

1. **Message received** — The channel adapter publishes a normalized `inbound.message` event to the bus. The dispatch layer receives it and routes it to the Coordinator.
2. **Context loaded** — The Coordinator loads its system prompt, relevant entity memory and knowledge graph context, and any pending Bullpen threads. Conversation history for this `conversation_id` is loaded from working memory.
3. **LLM called** — The Coordinator calls its configured LLM. If the LLM decides to invoke a skill or delegate to a specialist, those requests are dispatched via the bus.
4. **Skills and delegation** — Skill results flow back through the execution layer (sanitized and validated). Specialist agents complete their work and return results to the Coordinator.
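The first routing step is simple to state in code: whatever the channel, the dispatch layer hands the event to the one agent whose role is `coordinator`. The event and agent shapes below are assumptions for illustration.

```typescript
// Sketch of the dispatch step: every inbound event routes to the single
// coordinator. Types and names are illustrative.
interface InboundMessage {
  channel: string;         // e.g. "email", "slack"
  conversationId: string;
  body: string;
}

interface Agent {
  name: string;
  role: "coordinator" | "specialist";
}

function dispatch(event: InboundMessage, agents: Agent[]): Agent {
  const coordinator = agents.find((a) => a.role === "coordinator");
  if (!coordinator) {
    throw new Error("deployment must define exactly one coordinator");
  }
  return coordinator;
}
```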
- **Skills** — Learn how agents interact with the outside world through local and MCP skills.
- **Memory** — Understand how Curia remembers people, decisions, and facts across restarts.