The primary requirements artifact for designing collaborative human-agent systems.
The HAP Plan (Human-Agent Pair Plan) is designed for systems where humans and AI agents work together as collaborative partners. Unlike User Stories or Agent Stories, the HAP Plan treats the pair itself as the unit of analysis—capturing not just what each party does, but how they work together, build trust, and evolve over time.
This artifact recognizes that in collaborative systems, capabilities are emergent: the pair can accomplish things neither the human nor the agent could do alone. It also acknowledges that the relationship changes—early interactions require more human oversight, while mature partnerships allow for greater agent autonomy.
The HAP Plan introduces concepts like confidence gates (milestones that unlock reduced oversight), guardrails (non-negotiable boundaries), and feedback loops (mechanisms for continuous improvement). It's the artifact of choice when designing AI assistants, copilots, collaborative tools, and any system where human judgment and agent capability must work in harmony.
A HAP Plan is organized around the following sections:

- **Shared Objective:** The outcome the human-agent pair is trying to achieve together, framed as a joint goal rather than separate tasks.
- **Initial Role Split:** The starting division of responsibilities: what the human handles versus what the agent handles at the outset.
- **Joint Success Metrics:** Metrics that measure pair success, not individual performance; these are joint accountability indicators.
- **Shared Environment:** The systems, data sources, and interfaces the pair uses: the shared workspace for collaboration.
- **Confidence Gates:** Measurable milestones that trigger reduced human oversight, defined by accuracy, override frequency, or domain-specific metrics.
- **Trust Handoffs:** The conditions under which the human "lets go" of specific tasks. These capture the human's decision to trust, not just the agent's competence.
- **Hard Guardrails:** Non-negotiable boundaries: actions the agent must never take, regardless of confidence or autonomy level.
- **Soft Guardrails:** Contextual boundaries that may relax over time; they require human approval initially but can be delegated later.
- **Feedback Loops:** How the pair learns and improves: mechanisms for human corrections, agent suggestions, and mutual adaptation.
- **Emergent Capabilities:** Anticipated new abilities that may emerge, and how the pair will handle skills neither party foresaw at the start.
- **Health Monitoring:** How to detect when human skills atrophy or agent performance degrades, so the pair's dynamics stay healthy.
- **Maturity Indicators:** How to measure relationship maturity: the signals that the pair is ready for the next level of collaboration.
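To make the template concrete, the sketch below models these sections as data structures. This is a minimal illustration, assuming a Python representation; the class and field names (`HAPPlan`, `ConfidenceGate`, `Guardrail`, and so on) are hypothetical, not part of any published schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class GuardrailKind(Enum):
    HARD = "hard"  # non-negotiable: the agent must never cross this boundary
    SOFT = "soft"  # contextual: needs human approval now, may be delegated later


@dataclass
class Guardrail:
    description: str
    kind: GuardrailKind


@dataclass
class ConfidenceGate:
    """A measurable milestone that, once met, unlocks reduced oversight."""
    name: str
    metric: str              # e.g. "accuracy" or "override_rate"
    threshold: float         # value the metric must reach (or stay below)
    higher_is_better: bool   # True for accuracy, False for override rate
    unlocks: str             # the autonomy change this gate grants


@dataclass
class HAPPlan:
    shared_objective: str                      # the pair's joint goal
    initial_role_split: dict[str, list[str]]   # {"human": [...], "agent": [...]}
    joint_success_metrics: list[str]
    shared_environment: list[str]              # systems, data sources, interfaces
    confidence_gates: list[ConfidenceGate]
    trust_handoffs: list[str]                  # conditions for the human to "let go"
    guardrails: list[Guardrail]                # hard and soft boundaries
    feedback_loops: list[str]
    emergent_capabilities: list[str]
    health_monitoring: list[str]               # atrophy / degradation checks
    maturity_indicators: list[str] = field(default_factory=list)
```

In practice the plan would more likely live as a document or configuration file than as code; typing it this way simply makes the point that every section is explicit and reviewable.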
This HAP Plan captures elements that neither User Stories nor Agent Stories can express: the evolving relationship between analyst and agent, the trust-building milestones that unlock autonomy, and the mechanisms that ensure the human remains capable and accountable even as the agent takes on more work. The pair is designed to grow together, with clear gates preventing premature automation and feedback loops ensuring continuous improvement.
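As one way such gates could be checked in practice, here is a hedged sketch that evaluates confidence gates against observed pair metrics, reusing the `ConfidenceGate` class from the sketch above. The metric names and thresholds are illustrative assumptions, not values prescribed by the HAP Plan format.

```python
def evaluate_gate(gate: ConfidenceGate, observed: dict[str, float]) -> bool:
    """Return True if the observed pair metrics satisfy this confidence gate."""
    value = observed.get(gate.metric)
    if value is None:
        return False  # no data yet, so oversight stays where it is
    return value >= gate.threshold if gate.higher_is_better else value <= gate.threshold


# Hypothetical example: unlock unsupervised triage once accuracy holds above 95%
# and the human overrides fewer than 5% of the agent's decisions.
gates = [
    ConfidenceGate("triage_accuracy", "accuracy", 0.95, True,
                   unlocks="agent triages without pre-approval"),
    ConfidenceGate("low_override_rate", "override_rate", 0.05, False,
                   unlocks="agent triages without pre-approval"),
]
observed = {"accuracy": 0.97, "override_rate": 0.03}
if all(evaluate_gate(g, observed) for g in gates):
    print("Gate cleared: reduce oversight for triage.")
```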
Key strengths of the HAP Plan:

- **Trust evolution:** Captures how trust builds over time, not just a static task allocation.
- **Graduated autonomy:** Defines concrete gates that unlock increased autonomy based on demonstrated performance.
- **Emergent capability management:** Anticipates and manages abilities that arise from the collaboration itself.
- **Balanced accountability:** Balances agent independence with human responsibility and oversight.
- **Skill preservation:** Ensures critical human capabilities don't atrophy as the agent takes on more work.
- **Continuous improvement:** Built-in feedback loops and metrics drive ongoing improvement.
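Degradation detection can be just as mechanical. Below is a minimal sketch, assuming the pair logs a rolling window of interaction metrics; the function name, metric keys, and thresholds are hypothetical.

```python
def check_pair_health(window: list[dict[str, float]],
                      min_human_review_rate: float = 0.10,
                      max_agent_error_rate: float = 0.08) -> list[str]:
    """Scan a rolling window of interaction metrics for unhealthy pair dynamics."""
    review_rate = sum(m["human_review_rate"] for m in window) / len(window)
    error_rate = sum(m["agent_error_rate"] for m in window) / len(window)
    warnings = []
    if review_rate < min_human_review_rate:
        warnings.append("Human review rate is falling: risk of skill atrophy.")
    if error_rate > max_agent_error_rate:
        warnings.append("Agent error rate is rising: consider restoring oversight.")
    return warnings
```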
| Aspect | User Stories | Agent Stories | HAP Plan |
|---|---|---|---|
| Focus | Human goals | Agent capabilities | Pair relationship |
| Autonomy | None | Full | Graduated |
| Evolution | Static | Capability-based | Trust-based |
| Human Role | Decision-maker | Supervisor | Partner |
| Best For | Traditional apps | Background automation | Collaborative AI |