
Agent Stories

Extending User Stories to capture autonomous AI behavior, conditional autonomy, and collaborative intelligence.

Philosophy

An Agent Story extends the User Story paradigm to capture autonomous and semi-autonomous AI behavior. Where User Stories focus on human intent ("As a user, I want..."), Agent Stories must capture emergent behavior, conditional autonomy, and collaborative intelligence.

Agent Stories recognize that modern AI agents operate through stages and state transitions, leverage logical reasoning defined by prompts, maintain memory for learning and context, utilize tools via MCP connections, and respond to triggers ranging from messages to scheduled events.

The format follows a principle of progressive disclosure: the core story remains simple and readable, while structured annotations capture complexity only where it exists.

The Core Format

Every Agent Story starts with this human-readable core. Additional annotations are added only where complexity exists.

AGENT STORY: [ID]

As [Agent Role],
triggered by [Event],
I [Action/Goal],
so that [Outcome/Value].

Autonomy: [Full | Supervised | Collaborative | Directed]
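A filled-in example, using a hypothetical agent and ID:

AGENT STORY: AS-001

As a Log Monitoring Agent,
triggered by an error-rate alert from the observability pipeline,
I correlate recent deployments with the failing services and open an incident with a summary,
so that on-call engineers begin triage with context rather than raw logs.

Autonomy: Supervised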

Key Concepts

Triggers

Events that activate agents: messages (A2A), resource changes, schedules (cron), cascades, or manual activation.
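Each trigger type slots into the "triggered by" clause of the core format; the phrasings below are illustrative, not prescribed:

triggered by an A2A message from the Billing Agent,
triggered by a change to a watched contracts resource,
triggered by a cron schedule (weekdays at 06:00),
triggered by the completion of an upstream agent (cascade),
triggered by manual activation from the operations console,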

Behavior Models

Workflow (predictable stages), Adaptive (runtime decisions), or Hybrid (structured flexibility).

Skills & Reasoning

Composable competencies with proficiency levels, quality bars, and acquisition types (built-in, learned, delegated).
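A sketch of how a skill annotation could look; this section does not define the annotation syntax, so the field names here are hypothetical:

SKILL: contract-review
  Proficiency: expert
  Quality bar: no missed termination clauses on the evaluation set
  Acquisition: learned (refined from reviewer feedback)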

Human Collaboration

In-the-loop (every decision), On-the-loop (monitoring), or Out-of-loop (fully autonomous).

Memory & Learning

Working memory (ephemeral), persistent stores (knowledge base, vector, relational), and learning signals (feedback, reinforcement).
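Likewise, a hypothetical memory annotation sketch (field names are illustrative only):

MEMORY:
  Working: current ticket thread (ephemeral, discarded after resolution)
  Persistent: product knowledge base, embeddings store (vector), orders database (relational)
  Learning: thumbs-up/down feedback on draft replies (reinforcement signal)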

Agent Collaboration

Supervisor (coordinates), Worker (executes), or Peer (collaborates) roles with defined communication patterns.
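For example, a supervisor/worker pair can be written as two related core-format stories (roles and IDs below are hypothetical):

AGENT STORY: AS-010

As a Pipeline Supervisor Agent,
triggered by a nightly cron schedule,
I dispatch extraction tasks to Worker Agents and verify their results before publishing,
so that the nightly dataset ships complete and validated.

Autonomy: Full

AGENT STORY: AS-011

As an Extraction Worker Agent,
triggered by an A2A message from the Pipeline Supervisor Agent,
I extract and normalize records from my assigned source,
so that the supervisor can assemble the full dataset.

Autonomy: Full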

Autonomy Levels

Level         | Human Role                        | Agent Authority
Full          | None during execution             | Complete decision authority
Supervised    | Monitors, intervenes on exception | Executes within guardrails
Collaborative | Active participant in decisions   | Proposes, human confirms
Directed      | Initiates and guides each step    | Executes specific instructions
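The same hypothetical task, written at two different autonomy levels:

AGENT STORY: AS-020

As a Refund Agent,
triggered by a refund request message,
I approve and issue refunds under $50 without review,
so that routine refunds clear within minutes.

Autonomy: Full

AGENT STORY: AS-021

As a Refund Agent,
triggered by a refund request message,
I draft a refund decision and submit it for approval,
so that the agent prepares the work while a human keeps final authority.

Autonomy: Collaborative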

When to Use Agent Stories

Ideal For

  • Background processing systems
  • Monitoring and alerting agents
  • Data pipeline automation
  • Scheduled task execution
  • Event-driven workflows
  • Multi-agent coordination

Consider HAP Plan Instead For

  • Deep human-agent partnership evolution
  • Trust-building over time
  • Dynamic responsibility allocation
  • Emergent pair capabilities
  • Future capability roadmaps

Agent Stories focus on what the agent does today. For planning how capabilities evolve over time and how trust develops between humans and agents, use the HAP Plan.
