Core Definitions

The vocabulary of human-AI collaboration.

What is Agent Analysis?

Agent Analysis is a new discipline for understanding and designing systems where humans and AI agents work together as collaborative pairs. As AI capabilities rapidly advance, traditional user-centered design falls short--we can no longer treat AI as just another tool in the user's hands.

The shift is fundamental: instead of analyzing individual users, we must analyze the relationship between humans and their AI agents. This relationship evolves over time as trust builds, capabilities expand, and responsibilities shift dynamically between partners.

Agent Analysis provides the frameworks, vocabulary, and artifacts needed to design these collaborative systems thoughtfully--ensuring that as AI grows more capable, humans remain empowered, accountable, and in control of what matters most.

HAP (Human-Agent Pair)

The fundamental unit of analysis: a collaborative entity consisting of a human and an AI agent. The pair exhibits emergent capabilities that neither member possesses alone.

Confidence Gate

A milestone triggering reduced human oversight. Defined by measurable criteria (accuracy, override frequency) indicating sufficient agent skill.
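Because a Confidence Gate is defined by measurable criteria, it can be made concrete as a small check over the agent's track record. The sketch below is illustrative, not a prescribed implementation; the specific thresholds and metric names (`min_accuracy`, `max_override_rate`) are assumptions, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class ConfidenceGate:
    """A milestone gating reduced oversight on measurable criteria."""
    min_accuracy: float       # minimum task accuracy over a review window
    max_override_rate: float  # maximum fraction of actions the human overrode

    def passed(self, accuracy: float, override_rate: float) -> bool:
        """True when the agent's track record clears both thresholds."""
        return (accuracy >= self.min_accuracy
                and override_rate <= self.max_override_rate)

# Example: relax human review once accuracy is high and overrides are rare.
gate = ConfidenceGate(min_accuracy=0.95, max_override_rate=0.02)
print(gate.passed(accuracy=0.97, override_rate=0.01))  # True
print(gate.passed(accuracy=0.90, override_rate=0.01))  # False
```

The point of encoding the gate this way is that the decision to reduce oversight becomes auditable: the criteria are explicit numbers, not a feeling that the agent "seems ready."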

Human Judgment

Taste, accountability, relationships, and contextual knowledge. The irreplaceable contribution the human retains even as the agent assumes more tasks.

Delegation Trigger

The conditions under which oversight is reduced. Focuses on the human's specific decision to "let go," distinct from the agent's raw competence.

Capability Expansion

New agent abilities that don't map to existing workflows. Skills the agent discovers tomorrow that neither party thought to ask for today.

Guardrails

Non-negotiable constraints. Safeguards that define what the agent must not do, regardless of its confidence level or autonomy.
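The defining property of a guardrail is that it ignores confidence and autonomy entirely. A minimal sketch of that idea, with hypothetical action names chosen purely for illustration:

```python
# Guardrails: hard constraints checked before any action, regardless of
# how much autonomy the agent has earned. These action names are examples.
FORBIDDEN_ACTIONS = {"delete_production_data", "send_payment", "sign_contract"}

def allowed(action: str, agent_confidence: float) -> bool:
    """A guardrail check deliberately ignores confidence: forbidden is forbidden."""
    return action not in FORBIDDEN_ACTIONS

print(allowed("draft_email", agent_confidence=0.50))   # True
print(allowed("send_payment", agent_confidence=0.99))  # False
```

Note that `agent_confidence` is accepted but never consulted: unlike a Confidence Gate, a guardrail is not a threshold the agent can cross by performing well.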