NEW TO ELIXIUM?

Getting Started

Your current tools track what humans are doing. But AI is now writing code, generating tests, and making architectural suggestions — with no visibility, no governance, and no quality gates.


The blind spot in every team using AI

Who approved the AI-generated code?

No record.

Were the AI's tests reviewed before merge?

No enforcement.

What did the team learn from AI-assisted work?

No capture mechanism.

What will this AI-proposed change cost in infra?

Open a spreadsheet.

What Elixium does differently

Elixium is the first project management platform built for human+AI teams. Every story on your board is a governed execution boundary where AI agents can operate — but only within structured, testable, and auditable limits.

AI proposes, humans decide

AI agents read and write to your board via MCP. They create stories, propose tests, estimate costs, and submit code. But only humans can accept work, approve tests, and ship to production.
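The "AI proposes, humans decide" boundary can be sketched as a small state machine in which an AI actor can advance a story but only a human actor can accept it into Done. This is a minimal illustration; the state names follow this guide, but the enforcement code and data shapes are hypothetical, not Elixium's API.

```python
# Illustrative sketch of the governance boundary: AI agents may move a
# story through working states, but only a human can move it to "done".

ALLOWED = {
    "ai":    {("backlog", "started"), ("started", "finished")},
    "human": {("backlog", "started"), ("started", "finished"),
              ("finished", "done"), ("finished", "started")},  # accept or reject
}

def transition(story: dict, to_state: str, actor: str) -> dict:
    """Apply a state change only if this actor is allowed to make it."""
    move = (story["state"], to_state)
    if move not in ALLOWED.get(actor, set()):
        raise PermissionError(f"{actor} may not move {move[0]} -> {move[1]}")
    return {**story, "state": to_state}

story = {"id": "S-1", "state": "started"}
story = transition(story, "finished", "ai")   # AI submits work for review
story = transition(story, "done", "human")    # only a human ships it
```

An AI agent attempting `transition(story, "done", "ai")` raises `PermissionError`: that rejection is the governance boundary.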

Every story is a hypothesis

Work produces outcomes, not just deliverables. The Learning Loop captures what the team expected, what happened, and what to do next. Your board becomes an organizational learning system.

Platform work is product work

Infrastructure decisions and platform capabilities show up on the same board as features. ADRs with cost/benefit analysis make platform work visible and defensible to leadership.

What this replaces

Jira (story tracking, sprints, backlog)
→ Board with Current, Backlog, Icebox, Done lanes. AI agents read/write stories directly.

Confluence (decisions, architecture docs, wikis)
→ Team Decisions and ADRs live on the board, attached to the stories that created them.

Miro / whiteboards (sprint planning, brainstorming)
→ Epics with hypotheses, success metrics, and AI-powered prioritization and dependency analysis.

Spreadsheets (cost estimation, capacity planning)
→ Infrastructure-aware cost estimation per story, rolled up per epic. Real cloud pricing.

Slack threads ("Didn't we already decide this?")
→ Team Decisions: searchable institutional memory that persists across all team members' AI sessions.
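The per-story cost estimation that replaces spreadsheets rolls up per epic. A minimal sketch of that rollup, with made-up story data and prices for illustration (not Elixium's data model):

```python
# Hypothetical per-story cost estimates rolled up per epic.
from collections import defaultdict

stories = [
    {"epic": "checkout-v2", "title": "Payment retry queue", "monthly_usd": 200},
    {"epic": "checkout-v2", "title": "Fraud-check worker",  "monthly_usd": 45},
    {"epic": "search",      "title": "Search cluster",      "monthly_usd": 820},
]

def rollup(stories: list[dict]) -> dict:
    """Sum each story's monthly infrastructure estimate by epic."""
    totals = defaultdict(float)
    for s in stories:
        totals[s["epic"]] += s["monthly_usd"]
    return dict(totals)

print(rollup(stories))  # {'checkout-v2': 245.0, 'search': 820.0}
```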

Jira to Elixium translation

What you know → Elixium

Epic → Epic
Sprint → Current Iteration
Backlog → Backlog lane
Story / Task → Story
Story Points → Points
Sprint Review → Learning capture
Definition of Ready → DoR
Definition of Done → DoD

Concepts that don't exist in Jira

Learning Loop

Every story follows: Discover → Create → Implement → Deliver → Accept → Learn. The "Learn" step captures outcomes and feeds them back into future work.

Team Decisions

Searchable institutional memory. When someone says "didn’t we decide this?", the answer is one search away. Persists across all team members’ AI sessions.

Architecture Decision Records

Structured documents attached to stories: context, decision, alternatives considered, and consequences with cost analysis. AI drafts them; humans approve.

Hypotheses

Testable assumptions created before work begins. Tracked with confidence scores that update as evidence accumulates.
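One simple way a confidence score could move as evidence accumulates is a weighted nudge toward 1.0 or 0.0. This is purely illustrative; Elixium's actual scoring model is not documented here and the weight is an assumption.

```python
# Illustrative only: nudge a hypothesis confidence score as evidence arrives.
def update_confidence(confidence: float, supports: bool, weight: float = 0.2) -> float:
    """Move confidence toward 1.0 on supporting evidence, toward 0.0 otherwise."""
    target = 1.0 if supports else 0.0
    return confidence + weight * (target - confidence)

c = 0.5  # "We believe simplified onboarding will lift completion 60% -> 80%"
c = update_confidence(c, supports=True)   # completion up in cohort A
c = update_confidence(c, supports=True)   # up again in cohort B
c = update_confidence(c, supports=False)  # flat in cohort C
print(round(c, 3))
```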

Infrastructure Profiles

Your board knows your cloud provider, regions, compliance frameworks, and existing services. Powers accurate cost estimation and deployment-aware AI suggestions.

MCP Integration

Model Context Protocol lets AI agents interact with your board programmatically. Your AI assistant reads acceptance criteria, proposes tests, and submits work for review.


Your first 10 minutes

Pick your role. Each path shows what you'll do first and why it matters.

Engineers

What you'll notice first: Your AI coding assistant can see the board.

1. Open your board. Stories in the Current lane are your active iteration. Pick one or get assigned one.
2. Start a story. Click "Start" (or your AI agent calls start_story). This sets the story to "started" and begins the test-first cycle.
3. Write tests first. With TDD enabled, your AI agent proposes a test plan. You review and approve it. The tests should fail; that's the point.
4. Implement. Write code (with AI help) to make the tests pass. The build gate catches TypeScript, lint, and SSR issues before push.
5. Submit for review. Your AI agent calls submit_for_review. The story moves to "finished", ready for human review with a linked PR.
6. A human accepts or rejects. Only humans move stories to Done. This is the governance boundary.
7. Capture what you learned. Record the outcome: "We thought X, but Y happened." Takes 30 seconds, compounds over months.

The TDD cycle

start_story → propose_test_plan → [human approves] → implement → submit_for_review → [human accepts] → record_learning

Product Managers

What you'll notice first: Your board tells you what AI is actually doing.

1. Create an epic with a hypothesis. Not "Build feature X"; instead: "We believe that [capability] will [outcome] as measured by [metric]."
2. Write stories with clear acceptance criteria. Use Given/When/Then format. This is what AI agents read to understand scope. Vague criteria produce vague implementations.
3. Review AI-proposed test plans. When an engineer's AI agent proposes tests, you can see whether they cover your intent. This is your quality gate, before implementation.
4. Track outcomes, not just velocity. The Learning Loop shows which hypotheses are validating and which aren't. Double down on what's working, pivot on what isn't.
5. Use cost estimation in prioritization. Before committing, see infrastructure cost impact. "This feature adds $2k/month" changes the conversation.
6. Search team decisions before meetings. Before re-debating something, search existing decisions. The answer might already exist from a previous sprint.
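Since vague criteria produce vague implementations, a team could lint its acceptance criteria for the Given/When/Then shape before an AI agent picks up the story. This is a hypothetical check, not an Elixium feature:

```python
# Hypothetical lint: does an acceptance criterion follow Given/When/Then?
import re

PATTERN = re.compile(r"^given .+?, when .+?, then .+", re.IGNORECASE)

def is_testable(criterion: str) -> bool:
    """True if the criterion matches the Given/When/Then shape."""
    return bool(PATTERN.match(criterion.strip()))

assert is_testable("Given a user on mobile, when they tap the nav menu, "
                   "then the drawer slides in from the left.")
assert not is_testable("Make the menu better")  # too vague for an AI agent to scope
```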

Designers

What you'll notice first: Design work is visible on the same board as engineering.

1. Stories have types. Use "feature" stories for user-facing work that needs design. Acceptance criteria describe the user experience, not just technical behavior.
2. Acceptance criteria are your spec. Given/When/Then works for design: "Given a user on mobile, when they tap the nav menu, then the drawer slides in from the left."
3. Review before acceptance. Stories in the "delivered" state are waiting for human review. Verify the implementation matches design intent.
4. ADRs capture design decisions too. When you choose one interaction pattern over another, record it. This prevents re-litigation in future sprints.
5. Hypotheses apply to UX. "We believe simplifying onboarding to 3 steps will increase completion from 60% to 80%." Track it, measure it, learn.

Platform Engineers

What you'll notice first: Your work finally has the same visibility as feature work.

1. Use the "platform" story type. Platform stories get a distinct icon and support ADRs. This signals to leadership that this is strategic infrastructure, not invisible maintenance.
2. Attach ADRs to every significant decision. When you choose Postgres over DynamoDB, the ADR captures context, alternatives with cost estimates, and consequences.
3. Show cost impact. "This migration saves $9k/month but requires 238 eng-days" is a conversation leadership can engage with.
4. Record architectural decisions. "All new services use gRPC": future AI sessions for every team member surface these automatically.
5. Define your infrastructure profile. Set your cloud provider, regions, and compliance frameworks. This feeds into cost estimation and makes AI suggestions infrastructure-aware.
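The "$9k/month vs. 238 eng-days" framing invites a back-of-envelope payback calculation. The $800/eng-day loaded rate below is an assumption for illustration, not a number from Elixium:

```python
# Back-of-envelope payback: migration saves $9k/month, costs 238 eng-days.
# The $800/day loaded engineering rate is an assumed figure.
def payback_months(monthly_savings: float, eng_days: float, day_rate: float = 800.0) -> float:
    """Months until cumulative savings cover the one-time engineering cost."""
    return (eng_days * day_rate) / monthly_savings

months = payback_months(monthly_savings=9_000, eng_days=238)
print(f"Breaks even after {months:.1f} months")
```

At those assumptions the migration pays for itself in under two years, which is exactly the kind of defensible framing an ADR's cost/benefit section is meant to carry.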

Three workflows that show the difference

Real scenarios comparing how teams work today vs. with Elixium.

1Ship a feature with AI governance

The old way (Jira + AI)

PM writes a ticket. Engineer uses Copilot to implement it. Code gets pushed. Reviewer skims the PR. It ships. Nobody knows if the AI-generated code was tested properly, whether it met the original intent, or what it cost.

The Elixium way

Step | Who | What happens
1 | PM | Creates story with hypothesis, acceptance criteria, and success metric
2 | AI | Estimates infrastructure cost impact before work begins
3 | Engineer | Starts governed TDD workflow
4 | AI | Proposes test plan based on acceptance criteria
5 | Human | Reviews proposed tests: do they cover the intent? Approves or requests changes.
6 | AI + Human | AI writes implementation to pass approved tests. Engineer reviews and adjusts.
7 | AI | Submits for review; story moves to "finished" with linked PR
8 | Human | Reviews the PR against acceptance criteria
9 | Human | Only a human moves the story to Done. The governance boundary.
10 | Team | Records learning: what did we expect? What happened? What's next?

2Make and record a platform decision

The old way

Three engineers debate in a Slack thread. Someone summarizes in Confluence. Six months later, a new engineer asks "why did we choose Kafka?" Nobody can find the page. The debate restarts.

The Elixium way

1. Platform engineer creates a platform story: "Evaluate message queue for order processing"
2. AI drafts an ADR with context, alternatives (Kafka $800/mo vs SQS FIFO $200/mo vs Redis Streams $150/mo), and consequences
3. Team reviews and approves the ADR on the story
4. Engineer records the decision with category "architecture" and tags ["messaging", "payments"]
5. Six months later, a new engineer's AI agent calls prepare_implementation on a payments story. The SQS FIFO decision surfaces automatically. No Slack archaeology needed.
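The way a payments story surfaces the earlier SQS FIFO decision can be sketched as a tag-overlap lookup. The data and lookup function below are illustrative, not Elixium's actual search API:

```python
# Sketch of tag-based decision surfacing: a story's tags pull in any
# recorded decision that shares at least one tag.
decisions = [
    {"title": "Use SQS FIFO for order processing",
     "category": "architecture", "tags": {"messaging", "payments"}},
    {"title": "All new services use gRPC",
     "category": "architecture", "tags": {"services", "rpc"}},
]

def relevant_decisions(story_tags: set[str], decisions: list[dict]) -> list[dict]:
    """Return decisions sharing at least one tag with the story."""
    return [d for d in decisions if d["tags"] & story_tags]

hits = relevant_decisions({"payments"}, decisions)
print([d["title"] for d in hits])  # surfaces the SQS FIFO decision
```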

3Run an iteration where AI and humans share the board

Monday

Planning

  • PM reviews the Current lane — 3 stories in progress, 2 ready to pull
  • AI agents get full context via get_iteration_context
  • prioritize_epic analyzes dependencies: "Story B should start before D"
Tue–Thu

Execution

  • Engineers and AI agents work stories through TDD
  • AI searches team decisions before architectural choices
  • New decisions are recorded — every future AI session knows them
Friday

Review

  • Delivered stories get human review: 3 accepted, 1 rejected
  • Team records learnings: "AI misses concurrency edge cases in payment flows"
  • That learning informs the next iteration. The team gets smarter.

Connect your AI agent

Elixium works with any MCP-compatible AI tool. Setup takes under 5 minutes.

Supported clients: Claude Code, Claude Desktop, Cursor, Windsurf, and VS Code + Copilot.

// Add to your MCP configuration

{
  "mcpServers": {
    "elixium": {
      "command": "npx",
      "args": ["@elixium.ai/mcp-server"],
      "env": {
        "ELIXIUM_API_KEY": "your-api-key",
        "ELIXIUM_API_URL": "https://your-workspace.elixium.ai",
        "ELIXIUM_BOARD_SLUG": "main"
      }
    }
  }
}

Where to find your API key: Settings → API Keys in your workspace. For detailed setup instructions, see the IDE Setup guide.


Deployment options

All three modes run the same codebase. If it works offline, it works everywhere.


What to do next

1Create your first epic

With a real hypothesis. Not "Q2 Goals" — try "We believe that [specific capability] will [measurable outcome]."

2Write one story with Given/When/Then

Connect your AI agent and watch it read the criteria when starting work.

3Record your first team decision

Think of something your team decided recently that isn’t written down anywhere. Now it’s searchable.

4Enable TDD workflow

Run one story through the full cycle: start → test plan → approve → implement → review → accept → learn.

At the end of the week, ask: What did the team learn?
If you can answer that from your board, Elixium is working.