Getting Started
Your current tools track what humans are doing. But AI is now writing code, generating tests, and making architectural suggestions — with no visibility, no governance, and no quality gates.
The blind spot in every team using AI
Who approved the AI-generated code? No record.
Were the AI's tests reviewed before merge? No enforcement.
What did the team learn from AI-assisted work? No capture mechanism.
What will this AI-proposed change cost in infra? Open a spreadsheet.
What Elixium does differently
Elixium is the first project management platform built for human+AI teams. Every story on your board is a governed execution boundary where AI agents can operate — but only within structured, testable, and auditable limits.
AI proposes, humans decide
AI agents read and write to your board via MCP. They create stories, propose tests, estimate costs, and submit code. But only humans can accept work, approve tests, and ship to production.
Every story is a hypothesis
Work produces outcomes, not just deliverables. The Learning Loop captures what the team expected, what happened, and what to do next. Your board becomes an organizational learning system.
Platform work is product work
Infrastructure decisions and platform capabilities show up on the same board as features. ADRs with cost/benefit analysis make platform work visible and defensible to leadership.
What this replaces
- Board with Current, Backlog, Icebox, and Done lanes. AI agents read and write stories directly.
- Team Decisions and ADRs live on the board, attached to the stories that created them.
- Epics with hypotheses, success metrics, and AI-powered prioritization and dependency analysis.
- Infrastructure-aware cost estimation per story, rolled up per epic, using real cloud pricing.
- Team Decisions: searchable institutional memory that persists across all team members' AI sessions.
Jira to Elixium translation
| What you know | Elixium |
|---|---|
| Epic | Epic |
| Sprint | Current Iteration |
| Backlog | Backlog lane |
| Story / Task | Story |
| Story Points | Points |
| Sprint Review | Learning capture |
| Definition of Ready | DoR |
| Definition of Done | DoD |
Concepts that don't exist in Jira
Learning Loop
Every story follows: Discover → Create → Implement → Deliver → Accept → Learn. The "Learn" step captures outcomes and feeds them back into future work.
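The six-stage loop can be sketched as a small state machine. This is an illustration only: the `Story` class, its fields, and the enforcement logic are hypothetical, not Elixium's actual data model; only the stage names come from the loop above.

```python
# Sketch of the six-stage Learning Loop. Stages advance in order, and
# the final "Learn" step cannot complete without a captured outcome.
STAGES = ["Discover", "Create", "Implement", "Deliver", "Accept", "Learn"]

class Story:
    def __init__(self, title):
        self.title = title
        self.stage = STAGES[0]
        self.learning = None          # filled in at the "Learn" step

    def advance(self, learning=None):
        i = STAGES.index(self.stage)
        if i == len(STAGES) - 1:
            raise ValueError("story already completed the loop")
        self.stage = STAGES[i + 1]
        if self.stage == "Learn":
            if learning is None:
                raise ValueError("the Learn step requires a captured outcome")
            self.learning = learning

s = Story("Add checkout retries")
for _ in range(4):                    # Discover -> ... -> Accept
    s.advance()
s.advance(learning="Expected 5% fewer failures; saw 9%. Extend to refunds.")
```

The point of the sketch is the last line: finishing a story and recording what was learned are the same action, so the outcome cannot be skipped.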
Team Decisions
Searchable institutional memory. When someone says "didn’t we decide this?", the answer is one search away. Persists across all team members’ AI sessions.
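Conceptually, that search is simple: record once, match later by keyword. A minimal sketch, assuming an illustrative record shape (the field names are not Elixium's actual schema):

```python
# Minimal searchable decision log: a decision recorded once is
# findable by any keyword in its summary or tags.
decisions = []

def record_decision(summary, category, tags):
    decisions.append({"summary": summary, "category": category, "tags": tags})

def search_decisions(query):
    q = query.lower()
    return [d for d in decisions
            if q in d["summary"].lower() or q in " ".join(d["tags"]).lower()]

record_decision("Use SQS FIFO for order processing",
                category="architecture", tags=["messaging", "payments"])

hits = search_decisions("messaging")   # one match: the SQS FIFO decision
```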
Architecture Decision Records
Structured documents attached to stories: context, decision, alternatives considered, and consequences with cost analysis. AI drafts them; humans approve.
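The shape of an ADR can be sketched as a record with those four parts. The class below is illustrative, not Elixium's data model; the alternatives and costs reuse the messaging example from later in this guide:

```python
from dataclasses import dataclass

# Illustrative ADR record: context, decision, costed alternatives,
# and consequences. AI drafts it; approved_by stays empty until a
# human signs off.
@dataclass
class ADR:
    context: str
    decision: str
    alternatives: dict        # option name -> estimated monthly cost (USD)
    consequences: str
    approved_by: str = ""     # humans approve; AI only drafts

adr = ADR(
    context="Order processing needs a durable message queue",
    decision="SQS FIFO",
    alternatives={"Kafka": 800, "SQS FIFO": 200, "Redis Streams": 150},
    consequences="Managed queue; accepts AWS lock-in for low ops burden",
)
monthly_saving_vs_kafka = adr.alternatives["Kafka"] - adr.alternatives[adr.decision]
```

Because alternatives carry costs, the cost/benefit comparison falls out of the record itself instead of living in a separate spreadsheet.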
Hypotheses
Testable assumptions created before work begins. Tracked with confidence scores that update as evidence accumulates.
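One way to picture a confidence score that updates as evidence accumulates is a toy tracker where each observation nudges the score toward 1 (supporting) or 0 (contradicting). The fixed 0.2 weight is an illustrative choice, not Elixium's actual scoring model:

```python
# Toy hypothesis tracker: confidence starts at 0.5 and moves toward
# the evidence. Weight 0.2 is an arbitrary illustrative learning rate.
class Hypothesis:
    def __init__(self, statement, confidence=0.5):
        self.statement = statement
        self.confidence = confidence

    def add_evidence(self, supports, weight=0.2):
        target = 1.0 if supports else 0.0
        self.confidence += weight * (target - self.confidence)

h = Hypothesis("Caching search results will cut p95 latency by 30%")
h.add_evidence(supports=True)    # 0.5 -> 0.6
h.add_evidence(supports=True)    # 0.6 -> 0.68
h.add_evidence(supports=False)   # 0.68 -> 0.544
```

Mixed evidence leaves the score mid-range, which is the desired behavior: the hypothesis stays open rather than flipping to proven or refuted.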
Infrastructure Profiles
Your board knows your cloud provider, regions, compliance frameworks, and existing services. Powers accurate cost estimation and deployment-aware AI suggestions.
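To see why the profile matters for estimation, consider a sketch where it knows which services the team already runs, so only genuinely new services add spend. The profile fields and prices below are illustrative placeholders, not real cloud rates:

```python
# Sketch: a team's infrastructure profile drives per-story cost
# estimates. All prices are made-up placeholders, not cloud pricing.
PROFILE = {
    "provider": "aws",
    "region": "eu-west-1",
    "existing_services": {"sqs", "rds"},
}

MONTHLY_PRICE = {                 # (provider, service) -> USD/month
    ("aws", "lambda"): 25,
    ("aws", "sqs"): 200,
    ("aws", "elasticache"): 150,
}

def estimate_story_cost(required_services):
    # Services the team already runs add no new spend.
    new = [s for s in required_services
           if s not in PROFILE["existing_services"]]
    return sum(MONTHLY_PRICE[(PROFILE["provider"], s)] for s in new)

cost = estimate_story_cost(["sqs", "lambda", "elasticache"])  # sqs is free
```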
MCP Integration
Model Context Protocol lets AI agents interact with your board programmatically. Your AI assistant reads acceptance criteria, proposes tests, and submits work for review.
Your first 10 minutes
Pick your role. Each path shows what you'll do first and why it matters.
Engineers
What you'll notice first: Your AI coding assistant can see the board.
```text
# The TDD cycle
start_story → propose_test_plan → [human approves] → implement → submit_for_review → [human accepts] → record_learning
```
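A minimal sketch of those human gates, in Python for illustration: the bracketed approval steps are modeled as transitions that reject an AI actor. The names `approve_test_plan` and `accept_work` are hypothetical stand-ins for the two human steps; the surrounding step names come from the cycle above.

```python
# Governed TDD pipeline sketch: AI may run most steps, but the two
# approval gates require a human actor. Enforcement is illustrative.
HUMAN_ONLY = {"approve_test_plan", "accept_work"}

def run_step(step, actor):
    if step in HUMAN_ONLY and actor != "human":
        raise PermissionError(f"{step} requires a human")
    return f"{actor}:{step}"

log = [run_step("start_story", "ai"),
       run_step("propose_test_plan", "ai"),
       run_step("approve_test_plan", "human"),   # gate 1
       run_step("implement", "ai"),
       run_step("submit_for_review", "ai"),
       run_step("accept_work", "human"),         # gate 2
       run_step("record_learning", "human")]
```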
Product Managers
What you'll notice first: Your board tells you what AI is actually doing.
Designers
What you'll notice first: Design work is visible on the same board as engineering.
Platform Engineers
What you'll notice first: Your work finally has the same visibility as feature work.
Three workflows that show the difference
Real scenarios comparing how teams work today vs. with Elixium.
1. Ship a feature with AI governance
The old way (Jira + AI)
PM writes a ticket. Engineer uses Copilot to implement it. Code gets pushed. Reviewer skims the PR. It ships. Nobody knows if the AI-generated code was tested properly, whether it met the original intent, or what it cost.
The Elixium way
| Step | Who | What happens |
|---|---|---|
| 1 | PM | Creates story with hypothesis, acceptance criteria, and success metric |
| 2 | AI | Estimates infrastructure cost impact before work begins |
| 3 | Engineer | Starts governed TDD workflow |
| 4 | AI | Proposes test plan based on acceptance criteria |
| 5 | Human | Reviews proposed tests — do they cover the intent? Approves or requests changes. |
| 6 | AI + Human | AI writes implementation to pass approved tests. Engineer reviews and adjusts. |
| 7 | AI | Submits for review — story moves to "finished" with linked PR |
| 8 | Human | PR review against acceptance criteria |
| 9 | Human | Only a human moves the story to Done. The governance boundary. |
| 10 | Team | Records learning — what did we expect? What happened? What’s next? |
2. Make and record a platform decision
The old way
Three engineers debate in a Slack thread. Someone summarizes in Confluence. Six months later, a new engineer asks "why did we choose Kafka?" Nobody can find the page. The debate restarts.
The Elixium way
1. Platform engineer creates a platform story: "Evaluate message queue for order processing."
2. AI drafts an ADR with context, alternatives (Kafka $800/mo vs. SQS FIFO $200/mo vs. Redis Streams $150/mo), and consequences.
3. Team reviews and approves the ADR on the story.
4. Engineer records the decision with category "architecture" and tags ["messaging", "payments"].
5. Six months later, a new engineer's AI agent calls prepare_implementation on a payments story. The SQS FIFO decision surfaces automatically. No Slack archaeology needed.
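The surfacing step can be pictured as a tag intersection: when work starts on a story, any prior decision sharing a tag with it is returned. The record format and matching rule below are illustrative, not the real tool contract:

```python
# Sketch of automatic decision surfacing: a story tagged "payments"
# pulls back every recorded decision that shares a tag with it.
decisions = [
    {"summary": "Use SQS FIFO for order processing",
     "tags": {"messaging", "payments"}},
    {"summary": "Standardize on Terraform for provisioning",
     "tags": {"infrastructure"}},
]

def surface_decisions(story_tags):
    story_tags = set(story_tags)
    return [d for d in decisions if d["tags"] & story_tags]

relevant = surface_decisions(["payments", "refunds"])
```

No one has to remember that the decision exists; the overlap in tags is enough to bring it back.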
3. Run an iteration where AI and humans share the board
Planning
- PM reviews the Current lane — 3 stories in progress, 2 ready to pull
- AI agents get full context via get_iteration_context
- prioritize_epic analyzes dependencies: "Story B should start before D"
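The kind of dependency analysis behind a suggestion like "Story B should start before D" can be sketched as a topological sort over a story dependency graph. The graph below is invented for illustration:

```python
from graphlib import TopologicalSorter

# Illustrative dependency analysis: each key depends on the stories
# in its set, so a valid work order lists prerequisites first.
deps = {
    "D": {"B"},      # D needs B's API in place first
    "C": {"A"},
    "B": set(),
    "A": set(),
}
order = list(TopologicalSorter(deps).static_order())
```

Any valid ordering places B before D and A before C, which is exactly the shape of the suggestion quoted above.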
Execution
- Engineers and AI agents work stories through TDD
- AI searches team decisions before architectural choices
- New decisions are recorded — every future AI session knows them
Review
- Delivered stories get human review: 3 accepted, 1 rejected
- Team records learnings: "AI misses concurrency edge cases in payment flows"
- That learning informs the next iteration. The team gets smarter.
Connect your AI agent
Elixium works with any MCP-compatible AI tool. Setup takes under 5 minutes.
Add the following to your MCP configuration:

```json
{
  "mcpServers": {
    "elixium": {
      "command": "npx",
      "args": ["@elixium.ai/mcp-server"],
      "env": {
        "ELIXIUM_API_KEY": "your-api-key",
        "ELIXIUM_API_URL": "https://your-workspace.elixium.ai",
        "ELIXIUM_BOARD_SLUG": "main"
      }
    }
  }
}
```

Where to find your API key: Settings → API Keys in your workspace. For detailed setup instructions, see the IDE Setup guide.
Deployment options
| Mode | Best for | Infrastructure | Price |
|---|---|---|---|
| SaaS | Startups, agencies, most teams | Managed infrastructure. Zero maintenance. Start in minutes. | $12/user/month |
| Self-Hosted | Enterprise, compliance, data sovereignty | Docker or Kubernetes. Your infrastructure, your data. | $499/year flat |
| Air-Gapped | Government, defense, restricted networks | Fully disconnected. No external API calls. Kubernetes. | Coming soon |
All three modes run the same codebase. If it works offline, it works everywhere.
What to do next
1. Create an epic with a real hypothesis. Not "Q2 Goals": try "We believe that [specific capability] will [measurable outcome]."
2. Connect your AI agent and watch it read the acceptance criteria when starting work.
3. Record a team decision. Think of something your team decided recently that isn't written down anywhere. Now it's searchable.
4. Run one story through the full cycle: start → test plan → approve → implement → review → accept → learn.
5. At the end of the week, ask: what did the team learn?
If you can answer that from your board, Elixium is working.
