Agent Governance Pipeline System
This directory contains the authoritative pipeline implementation for the AI Agent Governance System.
Architecture Reference
See /opt/agent-governance/docs/ARCHITECTURE.md for the full system design.
Directory Structure
pipeline/
├── core.py # AUTHORITATIVE: Core definitions, enums, constants
├── pipeline.py # Pipeline DSL parser and executor
├── README.md # This file
├── schemas/
│ └── pipeline.schema.json # JSON Schema for pipeline validation
├── templates/
│ ├── default.yaml # Generic observer-tier agent
│ ├── terraform.yaml # Infrastructure specialist (T1)
│ ├── ansible.yaml # Configuration management (T1)
│ └── code-review.yaml # Code review specialist (T0)
└── examples/
├── infrastructure-deploy.yaml
└── multi-agent-analysis.yaml
Core Module (core.py)
All code should import pipeline definitions from pipeline.core to ensure consistency.
Agent Lifecycle Phases
The official agent lifecycle follows these phases in order:
BOOTSTRAP → PREFLIGHT → PLAN → EXECUTE → VERIFY → PACKAGE → REPORT → EXIT
| Phase | Description | Output Type |
|---|---|---|
| BOOTSTRAP | Agent initialization and authentication | Alpha |
| PREFLIGHT | Pre-execution validation (sandbox, inventory, deps) | Alpha |
| PLAN | Generate and validate execution plan | Beta |
| EXECUTE | Perform the planned actions | Beta |
| VERIFY | Validate execution results | Gamma |
| PACKAGE | Bundle artifacts and evidence | Gamma |
| REPORT | Generate completion report | Gamma |
| EXIT | Clean shutdown and resource release | Gamma |
| REVOKED | Agent was revoked (terminal state) | - |
Importing Core Definitions
from pipeline.core import (
# Enums
AgentPhase,
AgentStatus,
OutputType,
ChaosCondition,
StageType,
StageStatus,
# Data classes
AgentOutput,
ClarifiedPlan,
ErrorBudget,
StageResult,
PipelineContext,
# Constants
AGENT_PHASE_NAMES,
AGENT_PHASES_ORDERED,
PHASE_OUTPUT_TYPES,
DEFAULT_REDIS_HOST,
DEFAULT_REDIS_PORT,
DEFAULT_REDIS_PASSWORD,
DEFAULT_LEDGER_PATH,
# Key patterns
RedisKeys,
# Utilities
get_output_type_for_phase,
is_terminal_phase,
next_phase,
)
Output Types (Alpha/Beta/Gamma)
Agents produce outputs at checkpoints, classified as:
- Alpha: Initial/draft outputs (plans, analysis)
- Beta: Refined outputs (validated plans, partial results)
- Gamma: Final outputs (completed work, verified results)
DragonflyDB Key Patterns
Use RedisKeys class for consistent key naming:
from pipeline.core import RedisKeys
# Agent keys
agent_state = RedisKeys.agent_state("agent-001") # "agent:agent-001:state"
agent_lock = RedisKeys.agent_lock("agent-001") # "agent:agent-001:lock"
# Project keys
project_agents = RedisKeys.project_agents("proj-001") # "project:proj-001:agents"
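Given the key formats shown in the comments above, a RedisKeys-style class can be sketched as follows. This is an assumption about its shape based only on those examples; the authoritative class lives in pipeline.core:

```python
class RedisKeys:
    """Sketch of a key-pattern helper matching the formats shown above."""

    @staticmethod
    def agent_state(agent_id: str) -> str:
        # e.g. "agent:agent-001:state"
        return f"agent:{agent_id}:state"

    @staticmethod
    def agent_lock(agent_id: str) -> str:
        # e.g. "agent:agent-001:lock"
        return f"agent:{agent_id}:lock"

    @staticmethod
    def project_agents(project_id: str) -> str:
        # e.g. "project:proj-001:agents"
        return f"project:{project_id}:agents"
```

Centralizing key construction this way prevents typo'd key strings from silently creating orphaned DragonflyDB entries.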
Pipeline DSL (pipeline.py)
The pipeline DSL supports four stage types:
Stage Types
- agent: Execute an agent task
- gate: Approval/consensus checkpoint
- parallel: Concurrent execution of branches
- condition: Conditional branching (if/then/else)
Example Pipeline
name: example-pipeline
version: "1.0"
timeout: 30m
stages:
- name: plan
type: agent
template: default
config:
tier: 1
timeout: 10m
- name: review
type: gate
requires: [plan]
config:
gate_type: approval
approvers: ["team-lead"]
timeout: 30m
- name: execute
type: agent
requires: [review]
template: terraform
config:
tier: 2
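The requires lists in the example above form a dependency graph between stages. A hypothetical sketch (not the real pipeline.py executor) of resolving a valid execution order with the standard library:

```python
# Sketch only: the `requires` lists above map each stage to its predecessors.
# graphlib.TopologicalSorter yields an order that respects those dependencies
# and raises CycleError if stages require each other circularly.
from graphlib import TopologicalSorter

stages = {
    "plan": [],             # no dependencies
    "review": ["plan"],     # requires: [plan]
    "execute": ["review"],  # requires: [review]
}
order = list(TopologicalSorter(stages).static_order())
```

For this example the only valid order is plan, then review, then execute; a real executor could additionally run independent stages (or parallel branches) concurrently.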
Running a Pipeline
# Validate a pipeline
python pipeline/pipeline.py validate pipeline/examples/infrastructure-deploy.yaml
# Run a pipeline
python pipeline/pipeline.py run pipeline/examples/infrastructure-deploy.yaml \
--input environment=staging
# List available templates
python pipeline/pipeline.py list
Chaos Testing Integration
The chaos test framework (tests/multi-agent-chaos/) imports from pipeline.core to ensure consistency:
# In tests/multi-agent-chaos/orchestrator.py
from pipeline.core import (
AgentPhase,
OutputType,
ChaosCondition,
AGENT_PHASE_NAMES,
RedisKeys,
)
Running Chaos Tests
python tests/multi-agent-chaos/orchestrator.py
The chaos test:
- Spawns multiple real agents (Python, Bun, Diagnostic)
- Injects chaos conditions (lock loss, error spikes, etc.)
- Tracks Alpha/Beta/Gamma outputs
- Triggers plan clarification when the error threshold is crossed
- Verifies unified objective reached via DragonflyDB readiness checks
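The error-threshold trigger above can be illustrated with a toy budget tracker. The name ErrorBudget is borrowed from the pipeline.core import list, but the fields and methods here are purely illustrative assumptions, not its real API:

```python
from dataclasses import dataclass

# Toy sketch: trip plan clarification once the observed error rate
# exceeds a configured threshold. Illustrative only.
@dataclass
class ErrorBudget:
    threshold: float   # maximum tolerated error rate (0.0 - 1.0)
    errors: int = 0
    total: int = 0

    def record(self, ok: bool) -> None:
        """Record one agent action outcome."""
        self.total += 1
        if not ok:
            self.errors += 1

    @property
    def exceeded(self) -> bool:
        """True once the error rate crosses the threshold."""
        return self.total > 0 and self.errors / self.total > self.threshold

budget = ErrorBudget(threshold=0.2)
for ok in [True, True, False, False]:
    budget.record(ok)
# budget.exceeded is now True (2/4 = 0.5 > 0.2)
```

In the chaos tests, crossing the budget is what triggers a ClarifiedPlan rather than letting agents continue on a failing plan.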
Files Changed for Unification
The following files were aligned with the architecture spec:
| File | Change |
|---|---|
| pipeline/core.py | NEW: Authoritative definitions |
| tests/multi-agent-chaos/orchestrator.py | Updated to import from pipeline.core |
| pipeline/README.md | NEW: This documentation |
Consistency Checklist
When adding new pipeline-related code:
- Import from pipeline.core - never define enums/constants locally
- Use official phase names - AGENT_PHASE_NAMES or AGENT_PHASES_ORDERED
- Use RedisKeys class - for consistent DragonflyDB key naming
- Follow output types - PHASE_OUTPUT_TYPES maps phases to Alpha/Beta/Gamma
- Include PACKAGE phase - often forgotten, but required for artifact bundling
Version History
- v1.0 (2026-01-23): Initial unified pipeline consolidation