feat: re-organize skills

Author: Timothy
Date: 2026-01-21 20:17:07 -08:00
parent 17fcd3f774
commit f44c1314f9
6 changed files with 2207 additions and 914 deletions
@@ -0,0 +1,458 @@
---
name: agent-workflow
description: Complete workflow for building, implementing, and testing goal-driven agents. Orchestrates building-agents-* and testing-agent skills. Use when starting a new agent project, unsure which skill to use, or need end-to-end guidance.
license: Apache-2.0
metadata:
author: hive
version: "2.0"
type: workflow-orchestrator
orchestrates:
- building-agents-core
- building-agents-construction
- building-agents-patterns
- testing-agent
---
# Agent Development Workflow
Complete Standard Operating Procedure (SOP) for building production-ready goal-driven agents.
## Overview
This workflow orchestrates specialized skills to take you from initial concept to production-ready agent:
1. **Understand Concepts** (5-10 min) → `/building-agents-core` (optional)
2. **Build Structure** (15-30 min) → `/building-agents-construction`
3. **Optimize Design** (10-15 min) → `/building-agents-patterns` (optional)
4. **Test & Validate** (20-40 min) → `/testing-agent`
## When to Use This Workflow
Use this meta-skill when:
- Starting a new agent from scratch
- Unclear which skill to use first
- Need end-to-end guidance for agent development
- Want consistent, repeatable agent builds
**Skip this workflow** if:
- You only need to test an existing agent → use `/testing-agent` directly
- You know exactly which phase you're in → use specific skill directly
## Quick Decision Tree
```
"Need to understand agent concepts" → building-agents-core
"Build a new agent" → building-agents-construction
"Optimize my agent design" → building-agents-patterns
"Test my agent" → testing-agent
"Not sure what I need" → Read phases below, then decide
"Agent has structure but needs implementation" → See agent directory STATUS.md
```
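The tree above can be sketched as a simple lookup. This is a toy illustration, not an official routing API; the trigger phrases are assumptions.

```python
# Toy router for the decision tree above (phrases are illustrative).
ROUTES = {
    "understand agent concepts": "building-agents-core",
    "build a new agent": "building-agents-construction",
    "optimize my agent design": "building-agents-patterns",
    "test my agent": "testing-agent",
}

def route(request: str) -> str:
    """Return the first skill whose trigger phrase appears in the request."""
    lowered = request.lower()
    for phrase, skill in ROUTES.items():
        if phrase in lowered:
            return skill
    return "agent-workflow (read the phases below, then decide)"

print(route("Please test my agent"))  # testing-agent
```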
## Phase 0: Understand Concepts (Optional)
**Duration**: 5-10 minutes
**Skill**: `/building-agents-core`
**Input**: Questions about agent architecture
### When to Use
- First time building an agent
- Need to understand node types, edges, goals
- Want to validate tool availability
- Learning about pause/resume architecture
### What This Phase Provides
- Architecture overview (Python packages, not JSON)
- Core concepts (Goal, Node, Edge, Pause/Resume)
- Tool discovery and validation procedures
- Workflow overview
**Skip this phase** if you already understand agent fundamentals.
## Phase 1: Build Agent Structure
**Duration**: 15-30 minutes
**Skill**: `/building-agents-construction`
**Input**: User requirements ("Build an agent that...")
### What This Phase Does
Creates the complete agent architecture:
- Package structure (`exports/agent_name/`)
- Goal with success criteria and constraints
- Workflow graph (nodes and edges)
- Node specifications
- CLI interface
- Documentation
### Process
1. **Create package** - Directory structure with skeleton files
2. **Define goal** - Success criteria and constraints written to agent.py
3. **Design nodes** - Each node approved and written incrementally
4. **Connect edges** - Workflow graph with conditional routing
5. **Finalize** - Agent class, exports, and documentation
### Outputs
- ✅ `exports/agent_name/` package created
- ✅ Goal defined in agent.py
- ✅ 5-10 nodes specified in nodes/__init__.py
- ✅ 8-15 edges connecting workflow
- ✅ Validated structure (passes `python -m agent_name validate`)
- ✅ README.md with usage instructions
- ✅ CLI commands (info, validate, run, shell)
### Success Criteria
You're ready for Phase 2 when:
- Agent structure validates without errors
- All nodes and edges are defined
- CLI commands work (info, validate)
- You see: "Agent complete: exports/agent_name/"
### Common Outputs
The building-agents-construction skill produces:
```
exports/agent_name/
├── __init__.py (package exports)
├── __main__.py (CLI interface)
├── agent.py (goal, graph, agent class)
├── nodes/__init__.py (node specifications)
├── config.py (configuration)
├── implementations.py (may be created for Python functions)
└── README.md (documentation)
```
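The skeleton above can be created in a few lines. This is a hedged sketch using a temporary directory, not the construction skill's exact procedure (which also writes file contents, not just empty files):

```python
import tempfile
from pathlib import Path

def create_skeleton(agent_name: str, root: str) -> Path:
    """Create the empty files the construction skill writes first."""
    pkg = Path(root) / agent_name
    (pkg / "nodes").mkdir(parents=True, exist_ok=True)
    for name in ("__init__.py", "__main__.py", "agent.py", "config.py"):
        (pkg / name).touch()
    (pkg / "nodes" / "__init__.py").touch()
    return pkg

with tempfile.TemporaryDirectory() as tmp:
    pkg = create_skeleton("demo_agent", root=tmp)
    print(sorted(p.name for p in pkg.iterdir()))
    # ['__init__.py', '__main__.py', 'agent.py', 'config.py', 'nodes']
```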
### Next Steps
**If structure complete and validated:**
→ Check `exports/agent_name/STATUS.md` or `IMPLEMENTATION_GUIDE.md`
→ These files explain implementation options
→ You may need to add Python functions or MCP tools (not covered by current skills)
**If want to optimize design:**
→ Proceed to Phase 1.5 (building-agents-patterns)
**If ready to test:**
→ Proceed to Phase 2
## Phase 1.5: Optimize Design (Optional)
**Duration**: 10-15 minutes
**Skill**: `/building-agents-patterns`
**Input**: Completed agent structure
### When to Use
- Want to add pause/resume functionality
- Need error handling patterns
- Want to optimize performance
- Need examples of complex routing
- Want best practices guidance
### What This Phase Provides
- Practical examples and patterns
- Pause/resume architecture
- Error handling strategies
- Anti-patterns to avoid
- Performance optimization techniques
**Skip this phase** if your agent design is straightforward.
## Phase 2: Test & Validate
**Duration**: 20-40 minutes
**Skill**: `/testing-agent`
**Input**: Working agent from Phase 1
### What This Phase Does
Creates comprehensive test suite:
- Constraint tests (verify hard requirements)
- Success criteria tests (measure goal achievement)
- Edge case tests (handle failures gracefully)
- Integration tests (end-to-end workflows)
### Process
1. **Analyze agent** - Read goal, constraints, success criteria
2. **Generate tests** - Create pytest files in `exports/agent_name/tests/`
3. **User approval** - Review and approve each test
4. **Run evaluation** - Execute tests and collect results
5. **Debug failures** - Identify and fix issues
6. **Iterate** - Repeat until all tests pass
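A constraint test from step 2 might look like the following sketch. The copy step here is a stand-in; the real generated tests exercise the agent package itself:

```python
def copy_file(src_bytes: bytes) -> bytes:
    """Stand-in for the agent's copy step: must not mutate the original."""
    return bytes(src_bytes)

def test_preserves_originals():
    original = b"report.pdf contents"
    copied = copy_file(original)
    assert copied == original                  # copy succeeded
    assert original == b"report.pdf contents"  # original untouched

test_preserves_originals()
print("constraint test passed")
```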
### Outputs
- ✅ Test files in `exports/agent_name/tests/`
- ✅ Test report with pass/fail metrics
- ✅ Coverage of all success criteria
- ✅ Coverage of all constraints
- ✅ Edge case handling verified
### Success Criteria
You're done when:
- All tests pass
- All success criteria validated
- All constraints verified
- Agent handles edge cases
- Test coverage is comprehensive
### Next Steps
**Agent ready for:**
- Production deployment
- Integration into larger systems
- Documentation and handoff
- Continuous monitoring
## Phase Transitions
### From Phase 1 to Phase 2
**Trigger signals:**
- "Agent complete: exports/..."
- Structure validation passes
- README indicates implementation complete
**Before proceeding:**
- Verify agent can be imported: `from exports.agent_name import default_agent`
- Check if implementation is needed (see STATUS.md or IMPLEMENTATION_GUIDE.md)
- Confirm agent executes without import errors
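The import check can be done without executing agent code via `importlib.util.find_spec`; substitute your actual package name for the placeholder:

```python
import importlib.util

def importable(package: str) -> bool:
    """True if the package can be located for import (no code is executed)."""
    return importlib.util.find_spec(package) is not None

print(importable("json"))  # stdlib module, so True
```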
### Skipping Phases
**When to skip Phase 1:**
- Agent structure already exists
- Only need to add tests
- Modifying existing agent
**When to skip Phase 2:**
- Prototyping or exploring
- Agent not production-bound
- Manual testing sufficient
## Common Patterns
### Pattern 1: Complete New Build (Simple)
```
User: "Build an agent that monitors files"
→ Use /building-agents-construction
→ Agent structure created
→ Use /testing-agent
→ Tests created and passing
→ Done: Production-ready agent
```
### Pattern 1b: Complete New Build (With Learning)
```
User: "Build an agent (first time)"
→ Use /building-agents-core (understand concepts)
→ Use /building-agents-construction (build structure)
→ Use /building-agents-patterns (optimize design)
→ Use /testing-agent (validate)
→ Done: Production-ready agent
```
### Pattern 2: Test Existing Agent
```
User: "Test my agent at exports/my_agent"
→ Skip Phase 1
→ Use /testing-agent directly
→ Tests created
→ Done: Validated agent
```
### Pattern 3: Iterative Development
```
User: "Build an agent"
→ Use /building-agents-construction (Phase 1)
→ Implementation needed (see STATUS.md)
→ [User implements functions]
→ Use /testing-agent (Phase 2)
→ Tests reveal bugs
→ [Fix bugs manually]
→ Re-run tests
→ Done: Working agent
```
### Pattern 4: Complex Agent with Patterns
```
User: "Build an agent with multi-turn conversations"
→ Use /building-agents-core (learn pause/resume)
→ Use /building-agents-construction (build structure)
→ Use /building-agents-patterns (implement pause/resume pattern)
→ Use /testing-agent (validate conversation flows)
→ Done: Complex conversational agent
```
## Skill Dependencies
```
agent-workflow (meta-skill)
├── building-agents-core (foundational)
│ ├── Architecture concepts
│ ├── Node/Edge/Goal definitions
│ ├── Tool discovery procedures
│ └── Workflow overview
├── building-agents-construction (procedural)
│ ├── Creates package structure
│ ├── Defines goal
│ ├── Adds nodes incrementally
│ ├── Connects edges
│ ├── Finalizes agent class
│ └── Requires: building-agents-core
├── building-agents-patterns (reference)
│ ├── Best practices
│ ├── Pause/resume patterns
│ ├── Error handling
│ ├── Anti-patterns
│ └── Performance optimization
└── testing-agent
├── Reads agent goal
├── Generates tests
├── Runs evaluation
└── Reports results
```
## Troubleshooting
### "Agent structure won't validate"
- Check node IDs match between nodes/__init__.py and agent.py
- Verify all edges reference valid node IDs
- Ensure entry_node exists in nodes list
- Run: `PYTHONPATH=core:exports python -m agent_name validate`
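The checks `validate` performs can be reproduced with plain dicts. This sketch mirrors the structural rules described above but is not the framework's implementation:

```python
def validate_structure(nodes, edges, entry_node):
    """Check that edges reference real node IDs and the entry node exists."""
    node_ids = {n["id"] for n in nodes}
    errors = []
    for e in edges:
        if e["source"] not in node_ids:
            errors.append(f"Edge {e['id']}: source '{e['source']}' not found")
        if e["target"] not in node_ids:
            errors.append(f"Edge {e['id']}: target '{e['target']}' not found")
    if entry_node not in node_ids:
        errors.append(f"Entry node '{entry_node}' not found")
    return {"valid": not errors, "errors": errors}

nodes = [{"id": "analyze"}, {"id": "report"}]
edges = [{"id": "a-to-r", "source": "analyze", "target": "report"}]
print(validate_structure(nodes, edges, "analyze"))
# {'valid': True, 'errors': []}
```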
### "Agent has structure but won't run"
- Check for STATUS.md or IMPLEMENTATION_GUIDE.md in agent directory
- Implementation may be needed (Python functions or MCP tools)
- This is expected - building-agents-construction creates structure, not implementation
- See implementation guide for completion options
### "Tests are failing"
- Review test output for specific failures
- Check agent goal and success criteria
- Verify constraints are met
- Use `/testing-agent` to debug and iterate
- Fix agent code and re-run tests
### "Not sure which phase I'm in"
Run these checks:
```bash
# Check if agent structure exists
ls exports/my_agent/agent.py
# Check if it validates
PYTHONPATH=core:exports python -m my_agent validate
# Check if tests exist
ls exports/my_agent/tests/
# If structure exists and validates → Phase 2 (testing)
# If structure doesn't exist → Phase 1 (building)
# If tests exist but failing → Debug phase
```
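The same checks as a small helper; the paths follow the conventional layout shown above, so adjust if yours differs:

```python
from pathlib import Path

def current_phase(agent_dir: str) -> str:
    """Mirror the manual checks above: structure first, then tests."""
    root = Path(agent_dir)
    if not (root / "agent.py").exists():
        return "Phase 1 (building)"
    if not (root / "tests").is_dir():
        return "Phase 2 (testing)"
    return "Debug phase (run the tests)"

print(current_phase("exports/my_agent"))
```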
## Best Practices
### For Phase 1 (Building)
1. **Start with clear requirements** - Know what the agent should do
2. **Define success criteria early** - Measurable goals drive design
3. **Keep nodes focused** - One responsibility per node
4. **Use descriptive names** - Node IDs should explain purpose
5. **Validate incrementally** - Check structure after each major addition
### For Phase 2 (Testing)
1. **Test constraints first** - Hard requirements must pass
2. **Mock external dependencies** - Use mock mode for LLMs/APIs
3. **Cover edge cases** - Test failures, not just success paths
4. **Iterate quickly** - Fix one test at a time
5. **Document test patterns** - Future tests follow same structure
### General Workflow
1. **Use version control** - Git commit after each phase
2. **Document decisions** - Update README with changes
3. **Keep iterations small** - Build → Test → Fix → Repeat
4. **Preserve working states** - Tag successful iterations
5. **Learn from failures** - Failed tests reveal design issues
## Exit Criteria
You're done with the workflow when:
✅ Agent structure validates
✅ All tests pass
✅ Success criteria met
✅ Constraints verified
✅ Documentation complete
✅ Agent ready for deployment
## Additional Resources
- **building-agents-core**: See `.claude/skills/building-agents-core/SKILL.md`
- **building-agents-construction**: See `.claude/skills/building-agents-construction/SKILL.md`
- **building-agents-patterns**: See `.claude/skills/building-agents-patterns/SKILL.md`
- **testing-agent**: See `.claude/skills/testing-agent/SKILL.md`
- **Agent framework docs**: See `core/README.md`
- **Example agents**: See `exports/` directory
## Summary
This workflow provides a proven path from concept to production-ready agent:
1. **Learn** with `/building-agents-core` → Understand fundamentals (optional)
2. **Build** with `/building-agents-construction` → Get validated structure
3. **Optimize** with `/building-agents-patterns` → Apply best practices (optional)
4. **Test** with `/testing-agent` → Get verified functionality
The workflow is **flexible** - skip phases as needed, iterate freely, and adapt to your specific requirements. The goal is **production-ready agents** built with **consistent, repeatable processes**.
## Skill Selection Guide
**Choose building-agents-core when:**
- First time building agents
- Need to understand architecture
- Validating tool availability
- Learning about node types and edges
**Choose building-agents-construction when:**
- Actually building an agent
- Have clear requirements
- Ready to write code
- Want step-by-step guidance
**Choose building-agents-patterns when:**
- Agent structure complete
- Need advanced patterns
- Implementing pause/resume
- Optimizing performance
- Want best practices
**Choose testing-agent when:**
- Agent structure complete
- Ready to validate functionality
- Need comprehensive test coverage
- Debugging agent behavior
@@ -0,0 +1,199 @@
# Example: File Monitor Agent
This example shows the complete agent-workflow in action for building a file monitoring agent.
## Initial Request
```
User: "Build an agent that monitors ~/Downloads and copies new files to ~/Documents"
```
## Phase 1: Building (20 minutes)
### Step 1: Create Structure
Agent invokes `/building-agents-construction` skill and:
1. Creates `exports/file_monitor_agent/` package
2. Writes skeleton files (__init__.py, __main__.py, agent.py, etc.)
**Output**: Package structure visible immediately
### Step 2: Define Goal
```python
goal = Goal(
id="file-monitor-copy",
name="Automated File Monitor & Copy",
success_criteria=[
# 100% detection rate
# 100% copy success
# 100% conflict resolution
# >99% uptime
],
constraints=[
# Preserve originals
# Handle errors gracefully
# Track state
# Respect permissions
]
)
```
**Output**: Goal written to agent.py
### Step 3: Design Nodes
7 nodes approved and written incrementally:
1. `initialize-state` - Set up tracking
2. `list-downloads` - Scan directory
3. `identify-new-files` - Find new files
4. `check-for-new-files` - Router
5. `copy-files` - Copy with conflict resolution
6. `update-state` - Mark as processed
7. `wait-interval` - Sleep between cycles
**Output**: All nodes in nodes/__init__.py
### Step 4: Connect Edges
8 edges connecting the workflow loop:
```
initialize → list → identify → check
check ── new files ──→ copy → update → wait
check ── no files ───→ wait
wait → list   (loop)
```
**Output**: Edges written to agent.py
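The loop above can be written as plain data (not the framework's `EdgeSpec`) with a quick reachability check from the entry node, which is essentially what structure validation verifies:

```python
# Plain-data sketch of the workflow loop above.
edges = [
    ("initialize-state", "list-downloads"),
    ("list-downloads", "identify-new-files"),
    ("identify-new-files", "check-for-new-files"),
    ("check-for-new-files", "copy-files"),     # new files found
    ("check-for-new-files", "wait-interval"),  # nothing new
    ("copy-files", "update-state"),
    ("update-state", "wait-interval"),
    ("wait-interval", "list-downloads"),       # loop
]

def reachable(entry, edges):
    """All nodes reachable from the entry via directed edges."""
    seen, stack = set(), [entry]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(t for s, t in edges if s == node)
    return seen

print(len(reachable("initialize-state", edges)))  # 7 - every node is reachable
```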
### Step 5: Finalize
```bash
$ PYTHONPATH=core:exports python -m file_monitor_agent validate
✓ Agent is valid
$ PYTHONPATH=core:exports python -m file_monitor_agent info
Agent: File Monitor & Copy Agent
Nodes: 7
Edges: 8
```
**Phase 1 Complete**: Structure validated ✅
### Status After Phase 1
```
exports/file_monitor_agent/
├── __init__.py ✅ (exports)
├── __main__.py ✅ (CLI)
├── agent.py ✅ (goal, graph, agent class)
├── nodes/__init__.py ✅ (7 nodes)
├── config.py ✅ (configuration)
├── implementations.py ✅ (Python functions)
├── README.md ✅ (documentation)
├── IMPLEMENTATION_GUIDE.md ✅ (next steps)
└── STATUS.md ✅ (current state)
```
**Note**: Implementation gap exists - data flow needs connection (covered in STATUS.md)
## Phase 2: Testing (25 minutes)
### Step 1: Analyze Agent
Agent invokes `/testing-agent` skill and:
1. Reads goal from `exports/file_monitor_agent/agent.py`
2. Identifies 4 success criteria to test
3. Identifies 4 constraints to verify
4. Plans test coverage
### Step 2: Generate Tests
Creates test files:
```
exports/file_monitor_agent/tests/
├── conftest.py (fixtures)
├── test_constraints.py (4 constraint tests)
├── test_success_criteria.py (4 success tests)
└── test_edge_cases.py (error handling)
```
Tests approved incrementally by user.
### Step 3: Run Tests
```bash
$ PYTHONPATH=core:exports pytest exports/file_monitor_agent/tests/
test_constraints.py::test_preserves_originals PASSED
test_constraints.py::test_handles_errors PASSED
test_constraints.py::test_tracks_state PASSED
test_constraints.py::test_respects_permissions PASSED
test_success_criteria.py::test_detects_all_files PASSED
test_success_criteria.py::test_copies_all_files PASSED
test_success_criteria.py::test_resolves_conflicts PASSED
test_success_criteria.py::test_continuous_run PASSED
test_edge_cases.py::test_empty_directory PASSED
test_edge_cases.py::test_permission_denied PASSED
test_edge_cases.py::test_disk_full PASSED
test_edge_cases.py::test_large_files PASSED
========================== 12 passed in 3.42s ==========================
```
**Phase 2 Complete**: All tests pass ✅
## Final Output
**Production-Ready Agent:**
```bash
# Run the agent
./RUN_AGENT.sh
# Or manually
PYTHONPATH=core:exports:aden-tools/src python -m file_monitor_agent run
```
**Capabilities:**
- Monitors ~/Downloads continuously
- Copies new files to ~/Documents
- Resolves conflicts with timestamps
- Handles errors gracefully
- Tracks processed files
- Runs as background service
**Total Time**: ~45 minutes from concept to production
## Key Learnings
1. **Incremental building** - Files written immediately, visible throughout
2. **Validation early** - Structure validated before moving to implementation
3. **Test-driven** - Tests reveal real behavior
4. **Documentation included** - README, STATUS, and guides auto-generated
5. **Repeatable process** - Same workflow for any agent type
## Variations
**For simpler agents:**
- Fewer nodes (3-5 instead of 7)
- Simpler workflow (linear instead of looping)
- Faster build time (10-15 minutes)
**For complex agents:**
- More nodes (10-15+)
- Multiple subgraphs
- Pause/resume points for human-in-the-loop
- Longer build time (45-60 minutes)
The workflow scales to your needs!
@@ -0,0 +1,693 @@
---
name: building-agents-construction
description: Step-by-step guide for building goal-driven agents. Creates package structure, defines goals, adds nodes, connects edges, and finalizes agent class. Use when actively building an agent.
license: Apache-2.0
metadata:
author: hive
version: "1.0"
type: procedural
part_of: building-agents
requires: building-agents-core
---
# Building Agents - Construction Process
Step-by-step guide for building goal-driven agent packages.
**Prerequisites:** Read `building-agents-core` for fundamental concepts.
## Step-by-Step Guide
### Step 1: Create Package Structure
When user requests an agent, **immediately create the package**:
```python
# 1. Create directory
agent_name = "technical_research_agent" # snake_case
package_path = f"exports/{agent_name}"
Bash(f"mkdir -p {package_path}/nodes")
# 2. Write skeleton files
Write(
file_path=f"{package_path}/__init__.py",
content='''"""
Agent package - will be populated as build progresses.
"""
'''
)
Write(
file_path=f"{package_path}/nodes/__init__.py",
content='''"""Node definitions."""
from framework.graph import NodeSpec
# Nodes will be added here as they are approved
__all__ = []
'''
)
Write(
file_path=f"{package_path}/agent.py",
content='''"""Agent graph construction."""
from framework.graph import EdgeSpec, EdgeCondition, Goal, SuccessCriterion, Constraint
from framework.graph.edge import GraphSpec
from framework.graph.executor import GraphExecutor
from framework.runtime import Runtime
from framework.llm.anthropic import AnthropicProvider
from framework.runner.tool_registry import ToolRegistry
from aden_tools.credentials import CredentialManager
# Goal will be added when defined
# Nodes will be imported from .nodes
# Edges will be added when approved
# Agent class will be created when graph is complete
'''
)
Write(
file_path=f"{package_path}/config.py",
content='''"""Runtime configuration."""
from dataclasses import dataclass
@dataclass
class RuntimeConfig:
model: str = "claude-sonnet-4-5-20250929"
temperature: float = 0.7
max_tokens: int = 4096
default_config = RuntimeConfig()
# Metadata will be added when goal is set
'''
)
Write(
file_path=f"{package_path}/__main__.py",
content=CLI_TEMPLATE # Full CLI template (see below)
)
```
**Show user:**
```
✅ Package created: exports/technical_research_agent/
📁 Files created:
- __init__.py (skeleton)
- __main__.py (CLI ready)
- agent.py (skeleton)
- nodes/__init__.py (empty)
- config.py (skeleton)
You can open these files now and watch them grow as we build!
```
### Step 2: Define Goal
Propose goal, get approval, **write immediately**:
```python
# After user approves goal...
goal_code = f'''
goal = Goal(
id="{goal_id}",
name="{name}",
description="{description}",
success_criteria=[
SuccessCriterion(
id="{sc.id}",
description="{sc.description}",
metric="{sc.metric}",
target="{sc.target}",
weight={sc.weight},
),
# ... more criteria
],
constraints=[
Constraint(
id="{c.id}",
description="{c.description}",
constraint_type="{c.constraint_type}",
category="{c.category}",
),
# ... more constraints
],
)
'''
# Append to agent.py
Read(f"{package_path}/agent.py") # Must read first
Edit(
file_path=f"{package_path}/agent.py",
old_string="# Goal will be added when defined",
new_string=f"# Goal definition\n{goal_code}"
)
# Write metadata to config.py
metadata_code = f'''
@dataclass
class AgentMetadata:
name: str = "{name}"
version: str = "1.0.0"
description: str = "{description}"
metadata = AgentMetadata()
'''
Read(f"{package_path}/config.py")
Edit(
file_path=f"{package_path}/config.py",
old_string="# Metadata will be added when goal is set",
new_string=f"# Agent metadata\n{metadata_code}"
)
```
**Show user:**
```
✅ Goal written to agent.py
✅ Metadata written to config.py
Open exports/technical_research_agent/agent.py to see the goal!
```
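Before templated code like the goal above is written into agent.py, a cheap sanity check is compiling the rendered source. `goal_code` here is an illustrative stand-in for the rendered template:

```python
import ast

goal_code = 'goal = {"id": "file-monitor-copy", "name": "File Monitor"}'
ast.parse(goal_code)  # raises SyntaxError if the template rendered badly
print("template renders to valid Python")
```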
### Step 3: Add Nodes (Incremental)
**⚠️ IMPORTANT:** Before adding any node with tools, you MUST:
1. Call `mcp__agent-builder__list_mcp_tools()` to discover available tools
2. Verify each tool exists in the response
3. If a tool doesn't exist, inform the user and ask how to proceed
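The availability check itself is a simple set difference; `available_tools` below is a hypothetical stand-in for the `list_mcp_tools()` response:

```python
# Stand-in for the response from mcp__agent-builder__list_mcp_tools()
available_tools = {"web_search", "read_file", "write_file"}
requested_tools = ["web_search", "summarize_pdf"]

missing = [t for t in requested_tools if t not in available_tools]
if missing:
    print(f"Tools not available: {missing} - ask the user how to proceed")
```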
For each node, **write immediately after approval**:
```python
# After user approves node...
node_code = f'''
{node_id.replace('-', '_')}_node = NodeSpec(
id="{node_id}",
name="{name}",
description="{description}",
node_type="{node_type}",
input_keys={input_keys},
output_keys={output_keys},
system_prompt="""\\
{system_prompt}
""",
tools={tools},
max_retries={max_retries},
)
'''
# Append node and refresh __all__ in one edit. Anchoring on the current
# __all__ line ("__all__ = []" for the first node) keeps a valid anchor
# across repeated additions.
all_node_names = [n.replace('-', '_') + '_node' for n in approved_nodes]
current_exports = (
    f"__all__ = {all_node_names[:-1]}"
    if len(all_node_names) > 1
    else "__all__ = []"
)
Read(f"{package_path}/nodes/__init__.py")
Edit(
    file_path=f"{package_path}/nodes/__init__.py",
    old_string=current_exports,
    new_string=f"{node_code}\n__all__ = {all_node_names}"
)
```
**Show user after each node:**
```
✅ Added analyze_request_node to nodes/__init__.py
📊 Progress: 1/6 nodes added
Open exports/technical_research_agent/nodes/__init__.py to see it!
```
**Repeat for each node.** User watches the file grow.
#### Optional: Validate Node with MCP Tools
After writing a node, you can optionally use MCP tools for validation:
```python
# Node is already written to file. Now validate it:
mcp__agent-builder__test_node(
node_id="analyze-request",
test_input='{"query": "test query"}',
mock_llm_response='{"analysis": "mock output"}'
)
# Returns validation result showing node behavior
# This is OPTIONAL - for bookkeeping/validation only
# The node already exists in the file!
```
**Key Point:** The node was written to `nodes/__init__.py` FIRST. The MCP tool is just for validation.
### Step 4: Connect Edges
After all nodes approved, add edges:
```python
# Generate edges code
edges_code = "edges = [\n"
for edge in approved_edges:
edges_code += f''' EdgeSpec(
id="{edge.id}",
source="{edge.source}",
target="{edge.target}",
condition=EdgeCondition.{edge.condition.upper()},
'''
if edge.condition_expr:
edges_code += f' condition_expr="{edge.condition_expr}",\n'
edges_code += f' priority={edge.priority},\n'
edges_code += ' ),\n'
edges_code += "]\n"
# Write to agent.py
Read(f"{package_path}/agent.py")
Edit(
file_path=f"{package_path}/agent.py",
old_string="# Edges will be added when approved",
new_string=f"# Edge definitions\n{edges_code}"
)
# Write entry points and terminal nodes
graph_config = f'''
# Graph configuration
entry_node = "{entry_node_id}"
entry_points = {entry_points}
pause_nodes = {pause_nodes}
terminal_nodes = {terminal_nodes}
# Collect all nodes
nodes = [
{', '.join(node_names)},
]
'''
Edit(
file_path=f"{package_path}/agent.py",
old_string="# Agent class will be created when graph is complete",
new_string=graph_config
)
```
**Show user:**
```
✅ Edges written to agent.py
✅ Graph configuration added
5 edges connecting 6 nodes
```
#### Optional: Validate Graph Structure
After writing edges, optionally validate with MCP tools:
```python
# Edges already written to agent.py. Now validate structure:
mcp__agent-builder__validate_graph()
# Returns: unreachable nodes, missing connections, etc.
# This is OPTIONAL - for validation only
```
### Step 5: Finalize Agent Class
Write the agent class:
```python
agent_class_code = f'''
class {agent_class_name}:
"""
{agent_description}
"""
def __init__(self, config=None):
self.config = config or default_config
self.goal = goal
self.nodes = nodes
self.edges = edges
self.entry_node = entry_node
self.entry_points = entry_points
self.pause_nodes = pause_nodes
self.terminal_nodes = terminal_nodes
self.executor = None
def _create_executor(self, mock_mode=False):
"""Create executor instance."""
import tempfile
from pathlib import Path
storage_path = Path(tempfile.gettempdir()) / "{agent_name}"
storage_path.mkdir(parents=True, exist_ok=True)
runtime = Runtime(storage_path=storage_path)
tool_registry = ToolRegistry()
llm = None
if not mock_mode:
creds = CredentialManager()
if creds.is_available("anthropic"):
api_key = creds.get("anthropic")
llm = AnthropicProvider(api_key=api_key, model=self.config.model)
graph = GraphSpec(
id="{agent_name}-graph",
goal_id=self.goal.id,
version="1.0.0",
entry_node=self.entry_node,
entry_points=self.entry_points,
terminal_nodes=self.terminal_nodes,
pause_nodes=self.pause_nodes,
nodes=self.nodes,
edges=self.edges,
default_model=self.config.model,
max_tokens=self.config.max_tokens,
)
self.executor = GraphExecutor(
runtime=runtime,
llm=llm,
tools=list(tool_registry.get_tools().values()),
tool_executor=tool_registry.get_executor(),
)
self.graph = graph
return self.executor
async def run(self, context: dict, mock_mode=False, session_state=None):
"""Run the agent."""
executor = self._create_executor(mock_mode=mock_mode)
result = await executor.execute(
graph=self.graph,
goal=self.goal,
input_data=context,
session_state=session_state,
)
return result
def info(self):
"""Get agent information."""
return {{
"name": metadata.name,
"version": metadata.version,
"description": metadata.description,
"goal": {{
"name": self.goal.name,
"description": self.goal.description,
}},
"nodes": [n.id for n in self.nodes],
"edges": [e.id for e in self.edges],
"entry_node": self.entry_node,
"pause_nodes": self.pause_nodes,
"terminal_nodes": self.terminal_nodes,
}}
def validate(self):
"""Validate agent structure."""
errors = []
warnings = []
node_ids = {{node.id for node in self.nodes}}
for edge in self.edges:
if edge.source not in node_ids:
errors.append(f"Edge {{edge.id}}: source '{{edge.source}}' not found")
if edge.target not in node_ids:
errors.append(f"Edge {{edge.id}}: target '{{edge.target}}' not found")
if self.entry_node not in node_ids:
errors.append(f"Entry node '{{self.entry_node}}' not found")
return {{
"valid": len(errors) == 0,
"errors": errors,
"warnings": warnings,
}}
# Create default instance
default_agent = {agent_class_name}()
'''
# Append agent class after the nodes list (end of the graph configuration)
Read(f"{package_path}/agent.py")
nodes_block = f"nodes = [\n    {', '.join(node_names)},\n]"
Edit(
    file_path=f"{package_path}/agent.py",
    old_string=nodes_block,
    new_string=f"{nodes_block}\n{agent_class_code}"
)
# Finalize __init__.py exports
init_content = f'''"""
{agent_description}
"""
from .agent import {agent_class_name}, default_agent, goal, nodes, edges
from .config import RuntimeConfig, AgentMetadata, default_config, metadata
__version__ = "1.0.0"
__all__ = [
"{agent_class_name}",
"default_agent",
"goal",
"nodes",
"edges",
"RuntimeConfig",
"AgentMetadata",
"default_config",
"metadata",
]
'''
# Overwrite __init__.py wholesale with the final exports
Write(
    file_path=f"{package_path}/__init__.py",
    content=init_content
)
# Write README
readme_content = f'''# {agent_name.replace('_', ' ').title()}
{agent_description}
## Usage
```bash
# Show agent info
python -m {agent_name} info
# Validate structure
python -m {agent_name} validate
# Run agent
python -m {agent_name} run --input '{{"key": "value"}}'
# Interactive shell
python -m {agent_name} shell
```
## As Python Module
```python
from {agent_name} import default_agent
result = await default_agent.run({{"key": "value"}})
```
## Structure
- `agent.py` - Goal, edges, graph construction
- `nodes/__init__.py` - Node definitions
- `config.py` - Runtime configuration
- `__main__.py` - CLI interface
'''
Write(
file_path=f"{package_path}/README.md",
content=readme_content
)
```
**Show user:**
```
✅ Agent class written to agent.py
✅ Package exports finalized in __init__.py
✅ README.md generated
🎉 Agent complete: exports/technical_research_agent/
Commands:
python -m technical_research_agent info
python -m technical_research_agent validate
python -m technical_research_agent run --input '{"topic": "..."}'
```
## CLI Template
```python
CLI_TEMPLATE = '''"""
CLI entry point for agent.
"""
import asyncio
import json
import sys
import click
from .agent import default_agent
@click.group()
@click.version_option(version="1.0.0")
def cli():
"""Agent CLI."""
pass
@cli.command()
@click.option("--input", "-i", "input_json", type=str, required=True)
@click.option("--mock", is_flag=True, help="Run in mock mode")
@click.option("--quiet", "-q", is_flag=True, help="Only output result JSON")
def run(input_json, mock, quiet):
"""Execute the agent."""
try:
context = json.loads(input_json)
except json.JSONDecodeError as e:
click.echo(f"Error parsing input JSON: {e}", err=True)
sys.exit(1)
if not quiet:
click.echo(f"Running agent with input: {json.dumps(context)}")
result = asyncio.run(default_agent.run(context, mock_mode=mock))
output_data = {
"success": result.success,
"steps_executed": result.steps_executed,
"output": result.output,
}
if result.error:
output_data["error"] = result.error
if result.paused_at:
output_data["paused_at"] = result.paused_at
click.echo(json.dumps(output_data, indent=2, default=str))
sys.exit(0 if result.success else 1)
@cli.command()
@click.option("--json", "output_json", is_flag=True)
def info(output_json):
"""Show agent information."""
info_data = default_agent.info()
if output_json:
click.echo(json.dumps(info_data, indent=2))
else:
click.echo(f"Agent: {info_data['name']}")
click.echo(f"Description: {info_data['description']}")
click.echo(f"Nodes: {len(info_data['nodes'])}")
click.echo(f"Edges: {len(info_data['edges'])}")
@cli.command()
def validate():
"""Validate agent structure."""
validation = default_agent.validate()
if validation["valid"]:
click.echo("✓ Agent is valid")
else:
click.echo("✗ Agent has errors:")
for error in validation["errors"]:
click.echo(f" ERROR: {error}")
sys.exit(0 if validation["valid"] else 1)
@cli.command()
def shell():
"""Interactive agent session."""
click.echo("Interactive mode - enter JSON input:")
# ... implementation
if __name__ == "__main__":
cli()
'''
```
## Testing During Build
After nodes are added:
```bash
# Test individual node
PYTHONPATH=core:exports python -c "
from my_agent.nodes import analyze_request_node
print(analyze_request_node.id)
print(analyze_request_node.input_keys)
"
# Validate current state
PYTHONPATH=core:exports python -m my_agent validate
# Show info
PYTHONPATH=core:exports python -m my_agent info
```
## Approval Pattern
Use AskUserQuestion for all approvals:
```python
response = AskUserQuestion(
questions=[{
"question": "Do you approve this [component]?",
"header": "Approve",
"options": [
{
"label": "✓ Approve (Recommended)",
"description": "Component looks good, proceed"
},
{
"label": "✗ Reject & Modify",
"description": "Need to make changes"
},
{
"label": "⏸ Pause & Review",
"description": "Need more time to review"
}
],
        "multiSelect": False
}]
)
```
## Next Steps
After completing construction:
**If agent structure complete:**
- Validate: `python -m agent_name validate`
- Test basic execution: `python -m agent_name info`
- Proceed to testing-agent skill for comprehensive tests
**If implementation needed:**
- Check for STATUS.md or IMPLEMENTATION_GUIDE.md in agent directory
- May need Python functions or MCP tool integration
## Related Skills
- **building-agents-core** - Fundamental concepts
- **building-agents-patterns** - Best practices and examples
- **testing-agent** - Test and validate completed agents
- **agent-workflow** - Complete workflow orchestrator
@@ -0,0 +1,303 @@
---
name: building-agents-core
description: Core concepts for goal-driven agents - architecture, node types, tool discovery, and workflow overview. Use when starting agent development or need to understand agent fundamentals.
license: Apache-2.0
metadata:
author: hive
version: "1.0"
type: foundational
part_of: building-agents
---
# Building Agents - Core Concepts
Foundational knowledge for building goal-driven agents as Python packages.
## Architecture: Python Services (Not JSON Configs)
Agents are built as Python packages:
```
exports/my_agent/
├── __init__.py # Package exports
├── __main__.py # CLI (run, info, validate, shell)
├── agent.py # Graph construction (goal, edges, agent class)
├── nodes/__init__.py # Node definitions (NodeSpec)
├── config.py # Runtime config
└── README.md # Documentation
```
**Key Principle: Agent is visible and editable during build**
- ✅ Files created immediately as components are approved
- ✅ User can watch files grow in their editor
- ✅ No session state - just direct file writes
- ✅ No "export" step - agent is ready when build completes
## Core Concepts
### Goal
Success criteria and constraints (written to agent.py)
```python
goal = Goal(
id="research-goal",
name="Technical Research Agent",
description="Research technical topics thoroughly",
success_criteria=[
SuccessCriterion(
id="completeness",
description="Cover all aspects of topic",
metric="coverage_score",
target=">=0.9",
weight=0.4,
),
# ... more criteria
],
constraints=[
Constraint(
id="accuracy",
description="All information must be verified",
constraint_type="hard",
category="quality",
),
# ... more constraints
],
)
```
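The weights suggest an overall goal score is a weighted combination of per-criterion scores. How the framework actually evaluates each metric is framework-specific; this sketch assumes each criterion has already been scored in [0, 1]:

```python
# Sketch: combining per-criterion scores into a weighted goal score.
# Assumes criteria are pre-scored in [0, 1]; the framework's real
# evaluation of metrics like "coverage_score" may differ.
def weighted_goal_score(criteria, scores):
    """criteria: list of (criterion_id, weight); scores: dict of criterion_id -> float."""
    total_weight = sum(w for _, w in criteria)
    if total_weight == 0:
        return 0.0
    return sum(scores.get(cid, 0.0) * w for cid, w in criteria) / total_weight

criteria = [("completeness", 0.4), ("accuracy", 0.6)]
print(round(weighted_goal_score(criteria, {"completeness": 1.0, "accuracy": 0.5}), 3))  # 0.7
```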
### Node
Unit of work (written to nodes/__init__.py)
**Node Types:**
- `llm_generate` - Text generation, parsing
- `llm_tool_use` - Actions requiring tools
- `router` - Conditional branching
- `function` - Deterministic operations
```python
search_node = NodeSpec(
id="search-web",
name="Search Web",
description="Search for information online",
node_type="llm_tool_use",
input_keys=["query"],
output_keys=["search_results"],
system_prompt="Search the web for: {query}",
tools=["web_search"],
max_retries=3,
)
```
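The `{query}` placeholder in the system prompt is filled from the node's inputs. Assuming `str.format`-style substitution of `input_keys` (the framework's actual templating may differ), the rendering step looks roughly like:

```python
# Sketch: substituting input_keys into a node's system_prompt.
# Assumes str.format-style placeholders; illustrative only.
def render_prompt(system_prompt, input_keys, context):
    missing = [k for k in input_keys if k not in context]
    if missing:
        raise KeyError(f"Missing input keys: {missing}")
    return system_prompt.format(**{k: context[k] for k in input_keys})

print(render_prompt("Search the web for: {query}", ["query"], {"query": "rust async"}))
# Search the web for: rust async
```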
### Edge
Connection between nodes (written to agent.py)
**Edge Conditions:**
- `on_success` - Proceed if node succeeds
- `on_failure` - Handle errors
- `always` - Always proceed
- `conditional` - Based on expression
```python
EdgeSpec(
id="search-to-analyze",
source="search-web",
target="analyze-results",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
)
```
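Conceptually, the executor resolves a node's outgoing edges by condition and priority. This is a paraphrase of the semantics listed above, not the real `GraphExecutor` (whose resolution rules and expression evaluation may differ):

```python
# Sketch: picking the next node from a source node's outgoing edges.
# Illustrative only - uses plain dicts and eval() for condition_expr.
def next_target(edges, source, succeeded, context):
    candidates = [e for e in edges if e["source"] == source]
    for e in sorted(candidates, key=lambda e: e.get("priority", 0)):
        cond = e["condition"]
        if cond == "always":
            return e["target"]
        if cond == "on_success" and succeeded:
            return e["target"]
        if cond == "on_failure" and not succeeded:
            return e["target"]
        if cond == "conditional" and eval(e["condition_expr"], {}, dict(context)):
            return e["target"]
    return None  # no edge fired

edges = [
    {"source": "search-web", "target": "analyze-results", "condition": "on_success", "priority": 1},
    {"source": "search-web", "target": "report-error", "condition": "on_failure", "priority": 1},
]
print(next_target(edges, "search-web", succeeded=True, context={}))  # analyze-results
```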
### Pause/Resume
Multi-turn conversations
- **Pause nodes** - Stop execution, wait for user input
- **Resume entry points** - Continue from pause with user's response
```python
# Example pause/resume configuration
pause_nodes = ["request-clarification"]
entry_points = {
"start": "analyze-request",
"request-clarification_resume": "process-clarification"
}
```
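Following the `<pause_node>_resume` naming convention shown above, resolving where execution continues after a pause is a dictionary lookup. A minimal sketch (the framework may resolve this internally):

```python
# Sketch: resolving the resume entry point for a paused node, using the
# "<pause_node>_resume" key convention from the config above.
def resume_node(entry_points, paused_at):
    key = f"{paused_at}_resume"
    if key not in entry_points:
        raise KeyError(f"No resume entry point for pause node '{paused_at}'")
    return entry_points[key]

entry_points = {
    "start": "analyze-request",
    "request-clarification_resume": "process-clarification",
}
print(resume_node(entry_points, "request-clarification"))  # process-clarification
```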
## Tool Discovery & Validation
**CRITICAL:** Before adding a node with tools, you MUST verify the tools exist.
Tools are provided by MCP servers. Never assume a tool exists - always discover dynamically.
### Step 1: Register MCP Server (if not already done)
```python
mcp__agent-builder__add_mcp_server(
name="aden-tools",
transport="stdio",
command="python",
args='["mcp_server.py", "--stdio"]',
cwd="../aden-tools"
)
```
### Step 2: Discover Available Tools
```python
# List all tools from all registered servers
mcp__agent-builder__list_mcp_tools()
# Or list tools from a specific server
mcp__agent-builder__list_mcp_tools(server_name="aden-tools")
```
This returns available tools with their descriptions and parameters:
```json
{
"success": true,
"tools_by_server": {
"aden-tools": [
{
"name": "web_search",
"description": "Search the web...",
"parameters": ["query"]
},
{
"name": "web_scrape",
"description": "Scrape a URL...",
"parameters": ["url"]
}
]
},
"total_tools": 14
}
```
### Step 3: Validate Before Adding Nodes
Before writing a node with `tools=[...]`:
1. Call `list_mcp_tools()` to get available tools
2. Check each tool in your node exists in the response
3. If a tool doesn't exist:
- **DO NOT proceed** with the node
- Inform the user: "The tool 'X' is not available. Available tools are: ..."
- Ask if they want to use an alternative or proceed without the tool
### Tool Validation Anti-Patterns
- ❌ **Never assume a tool exists** - always call `list_mcp_tools()` first
- ❌ **Never write a node with unverified tools** - validate before writing
- ❌ **Never silently drop tools** - if a tool doesn't exist, inform the user
- ❌ **Never guess tool names** - use exact names from discovery response
### Example Validation Flow
```python
# 1. User requests: "Add a node that searches the web"
# 2. Discover available tools
tools_response = mcp__agent-builder__list_mcp_tools()
# 3. Check if web_search exists
available = [t["name"] for tools in tools_response["tools_by_server"].values() for t in tools]
if "web_search" not in available:
# Inform user and ask how to proceed
print("'web_search' not available. Available tools:", available)
else:
# Proceed with node creation
# ...
```
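The membership check above can be factored into a small reusable helper. The `tools_by_server` shape is taken from the `list_mcp_tools()` response shown earlier:

```python
# Sketch: report which requested tools are missing from the discovery
# response, so the user can be informed before any node is written.
def missing_tools(requested, tools_response):
    available = {
        t["name"]
        for tools in tools_response["tools_by_server"].values()
        for t in tools
    }
    return [name for name in requested if name not in available]

response = {"tools_by_server": {"aden-tools": [{"name": "web_search"}, {"name": "web_scrape"}]}}
print(missing_tools(["web_search", "send_email"], response))  # ['send_email']
```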
## Workflow Overview: Incremental File Construction
```
1. CREATE PACKAGE → mkdir + write skeletons
2. DEFINE GOAL → Write to agent.py + config.py
3. FOR EACH NODE:
- Propose design
- User approves
- Write to nodes/__init__.py IMMEDIATELY ← FILE WRITTEN
- (Optional) Validate with test_node ← MCP VALIDATION
- User can open file and see it
4. CONNECT EDGES → Update agent.py ← FILE WRITTEN
- (Optional) Validate with validate_graph ← MCP VALIDATION
5. FINALIZE → Write agent class to agent.py ← FILE WRITTEN
6. DONE - Agent ready at exports/my_agent/
```
**Files written immediately. MCP tools optional for validation/testing bookkeeping.**
### The Key Difference
**OLD (Bad):**
```
MCP add_node → Session State → MCP add_node → Session State → ...
MCP export_graph
Files appear
```
**NEW (Good):**
```
Write node to file → (Optional: MCP test_node) → Write node to file → ...
↓ ↓
File visible File visible
immediately immediately
```
**Bottom line:** Use Write/Edit for construction, MCP for validation if needed.
## When to Use This Skill
Use building-agents-core when:
- Starting a new agent project and need to understand fundamentals
- Need to understand agent architecture before building
- Want to validate tool availability before proceeding
- Learning about node types, edges, and graph execution
**Next Steps:**
- Ready to build? → Use `building-agents-construction` skill
- Need patterns and examples? → Use `building-agents-patterns` skill
## MCP Tools for Validation
After writing files, optionally use MCP tools for validation:
**test_node** - Validate node configuration with mock inputs
```python
mcp__agent-builder__test_node(
node_id="search-web",
test_input='{"query": "test query"}',
mock_llm_response='{"results": "mock output"}'
)
```
**validate_graph** - Check graph structure
```python
mcp__agent-builder__validate_graph()
# Returns: unreachable nodes, missing connections, etc.
```
**create_session** - Track session state for bookkeeping
```python
mcp__agent-builder__create_session(session_name="my-build")
```
**Key Point:** Files are written FIRST. MCP tools are for validation only.
## Related Skills
- **building-agents-construction** - Step-by-step building process
- **building-agents-patterns** - Best practices and examples
- **agent-workflow** - Complete workflow orchestrator
- **testing-agent** - Test and validate completed agents
@@ -0,0 +1,497 @@
---
name: building-agents-patterns
description: Best practices, patterns, and examples for building goal-driven agents. Includes pause/resume architecture, hybrid workflows, anti-patterns, and handoff to testing. Use when optimizing agent design.
license: Apache-2.0
metadata:
author: hive
version: "1.0"
type: reference
part_of: building-agents
---
# Building Agents - Patterns & Best Practices
Design patterns, examples, and best practices for building robust goal-driven agents.
**Prerequisites:** Complete agent structure using `building-agents-construction`.
## Practical Example: Hybrid Workflow
How to build a node using both direct file writes and optional MCP validation:
```python
# 1. WRITE TO FILE FIRST (Primary - makes it visible)
node_code = '''
search_node = NodeSpec(
id="search-web",
node_type="llm_tool_use",
input_keys=["query"],
output_keys=["search_results"],
system_prompt="Search the web for: {query}",
tools=["web_search"],
)
'''
Edit(
file_path="exports/research_agent/nodes/__init__.py",
old_string="# Nodes will be added here",
new_string=node_code
)
print("✅ Added search_node to nodes/__init__.py")
print("📁 Open exports/research_agent/nodes/__init__.py to see it!")
# 2. OPTIONALLY VALIDATE WITH MCP (Secondary - bookkeeping)
validation = mcp__agent-builder__test_node(
node_id="search-web",
test_input='{"query": "python tutorials"}',
mock_llm_response='{"search_results": [...mock results...]}'
)
print(f"✓ Validation: {validation['success']}")
```
**User experience:**
- Immediately sees node in their editor (from step 1)
- Gets validation feedback (from step 2)
- Can edit the file directly if needed
This combines visibility (files) with validation (MCP tools).
## Pause/Resume Architecture
For agents needing multi-turn conversations with user interaction:
### Basic Pause/Resume Flow
```python
# Define pause nodes - execution stops at these nodes
pause_nodes = ["request-clarification", "await-approval"]
# Define entry points - where to resume from each pause
entry_points = {
"start": "analyze-request", # Initial entry
"request-clarification_resume": "process-clarification", # Resume from clarification
"await-approval_resume": "execute-action", # Resume from approval
}
```
### Example: Multi-Turn Research Agent
```python
# Nodes
nodes = [
NodeSpec(id="analyze-request", ...),
NodeSpec(id="request-clarification", ...), # PAUSE NODE
NodeSpec(id="process-clarification", ...),
NodeSpec(id="generate-results", ...),
NodeSpec(id="await-approval", ...), # PAUSE NODE
NodeSpec(id="execute-action", ...),
]
# Edges with resume flows
edges = [
EdgeSpec(
id="analyze-to-clarify",
source="analyze-request",
target="request-clarification",
condition=EdgeCondition.CONDITIONAL,
condition_expr="needs_clarification == true",
),
# When resumed, goes to process-clarification
EdgeSpec(
id="clarify-to-process",
source="request-clarification",
target="process-clarification",
condition=EdgeCondition.ALWAYS,
),
EdgeSpec(
id="results-to-approval",
source="generate-results",
target="await-approval",
condition=EdgeCondition.ALWAYS,
),
# When resumed, goes to execute-action
EdgeSpec(
id="approval-to-execute",
source="await-approval",
target="execute-action",
condition=EdgeCondition.ALWAYS,
),
]
# Configuration
pause_nodes = ["request-clarification", "await-approval"]
entry_points = {
"start": "analyze-request",
"request-clarification_resume": "process-clarification",
"await-approval_resume": "execute-action",
}
```
### Running Pause/Resume Agents
```python
# Initial run - will pause at first pause node
result1 = await agent.run(
context={"query": "research topic"},
session_state=None
)
# Check if paused
if result1.paused_at:
print(f"Paused at: {result1.paused_at}")
# Resume with user input
result2 = await agent.run(
context={"user_response": "clarification details"},
session_state=result1.session_state # Pass previous state
)
```
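The check-and-resume steps above generalize into a driver loop. This is a sketch assuming the result object exposes `paused_at` and `session_state` as shown; the attribute names mirror the examples, not a confirmed framework API:

```python
# Sketch: drive a pause/resume agent to completion, collecting user input
# at each pause. Assumes result.paused_at is None when finished.
import asyncio

async def run_until_done(agent, context, get_user_input):
    result = await agent.run(context, session_state=None)
    while result.paused_at:
        reply = get_user_input(result.paused_at)  # e.g. prompt the user
        result = await agent.run(
            {"user_response": reply},
            session_state=result.session_state,  # carry state across turns
        )
    return result
```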
## Anti-Patterns
### What NOT to Do
**Don't rely on `export_graph`** - Write files immediately, not at end
```python
# BAD: Building in session state, exporting at end
mcp__agent-builder__add_node(...)
mcp__agent-builder__add_node(...)
mcp__agent-builder__export_graph() # Files appear only now
# GOOD: Writing files immediately
Write(file_path="...", content=node_code) # File visible now
Write(file_path="...", content=node_code) # File visible now
```
**Don't hide code in session** - Write to files as components approved
```python
# BAD: Accumulating changes invisibly
session.add_component(component1)
session.add_component(component2)
# User can't see anything yet
# GOOD: Incremental visibility
Edit(file_path="...", ...) # User sees change 1
Edit(file_path="...", ...) # User sees change 2
```
**Don't wait to write files** - Agent visible from first step
```python
# BAD: Building everything before writing
design_all_nodes()
design_all_edges()
write_everything_at_once()
# GOOD: Write as you go
write_package_structure() # Visible
write_goal() # Visible
write_node_1() # Visible
write_node_2() # Visible
```
**Don't batch everything** - Write incrementally
```python
# BAD: Batching all nodes
nodes = [design_node_1(), design_node_2(), ...]
write_all_nodes(nodes)
# GOOD: One at a time with user feedback
write_node_1() # User approves
write_node_2() # User approves
write_node_3() # User approves
```
### MCP Tools - Correct Usage
**MCP tools OK for:**
- ✅ `test_node` - Validate node configuration with mock inputs
- ✅ `validate_graph` - Check graph structure
- ✅ `create_session` - Track session state for bookkeeping
- ✅ Other validation tools
**Just don't:** Use MCP as the primary construction method or rely on export_graph
## Best Practices
### 1. Show Progress After Each Write
```python
# After writing a node
print("✅ Added analyze_request_node to nodes/__init__.py")
print("📊 Progress: 1/6 nodes added")
print("📁 Open exports/my_agent/nodes/__init__.py to see it!")
```
### 2. Let User Open Files During Build
```python
# Encourage file inspection
print("✅ Goal written to agent.py")
print("")
print("💡 Tip: Open exports/my_agent/agent.py in your editor to see the goal!")
```
### 3. Write Incrementally - One Component at a Time
```python
# Good flow
write_package_structure()
show_user("Package created")
write_goal()
show_user("Goal written")
for node in nodes:
get_approval(node)
write_node(node)
show_user(f"Node {node.id} written")
```
### 4. Test As You Build
```python
# After adding several nodes
print("💡 You can test current state with:")
print(" PYTHONPATH=core:exports python -m my_agent validate")
print(" PYTHONPATH=core:exports python -m my_agent info")
```
### 5. Keep User Informed
```python
# Clear status updates
print("🔨 Creating package structure...")
print("✅ Package created: exports/my_agent/")
print("")
print("📝 Next: Define agent goal")
```
## Continuous Monitoring Agents
For agents that run continuously without terminal nodes:
```python
# No terminal nodes - loops forever
terminal_nodes = []
# Workflow loops back to start
edges = [
EdgeSpec(id="monitor-to-check", source="monitor", target="check-condition"),
EdgeSpec(id="check-to-wait", source="check-condition", target="wait"),
EdgeSpec(id="wait-to-monitor", source="wait", target="monitor"), # Loop
]
# Entry node only
entry_node = "monitor"
entry_points = {"start": "monitor"}
pause_nodes = []
```
**Example: File Monitor**
```python
nodes = [
NodeSpec(id="list-files", ...),
NodeSpec(id="check-new-files", node_type="router", ...),
NodeSpec(id="process-files", ...),
NodeSpec(id="wait-interval", node_type="function", ...),
]
edges = [
EdgeSpec(id="list-to-check", source="list-files", target="check-new-files"),
EdgeSpec(
id="check-to-process",
source="check-new-files",
target="process-files",
condition=EdgeCondition.CONDITIONAL,
condition_expr="new_files_count > 0",
),
EdgeSpec(
id="check-to-wait",
source="check-new-files",
target="wait-interval",
condition=EdgeCondition.CONDITIONAL,
condition_expr="new_files_count == 0",
),
EdgeSpec(id="process-to-wait", source="process-files", target="wait-interval"),
EdgeSpec(id="wait-to-list", source="wait-interval", target="list-files"), # Loop back
]
terminal_nodes = [] # No terminal - runs forever
```
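The monitor's control flow, condensed into plain Python so the loop logic is easy to reason about. In the real agent each step is a graph node; here the file lister and wait are injected, and `max_cycles` bounds the otherwise-endless loop for demonstration:

```python
# Sketch of the file-monitor loop: list, detect new files, process, wait.
# The real agent loops via edges with no terminal node; max_cycles is
# only here to make the sketch finite.
def monitor_loop(list_files, process, wait, max_cycles):
    seen = set()
    for _ in range(max_cycles):
        new = [f for f in list_files() if f not in seen]
        if new:                     # "check-new-files" router
            process(new)            # "process-files" node
            seen.update(new)
        wait()                      # "wait-interval" node
    return seen

batches = iter([["a.txt"], ["a.txt", "b.txt"], []])
processed = []
monitor_loop(lambda: next(batches, []), processed.append, lambda: None, 3)
print(processed)  # [['a.txt'], ['b.txt']]
```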
## Complex Routing Patterns
### Multi-Condition Router
```python
router_node = NodeSpec(
id="decision-router",
node_type="router",
input_keys=["analysis_result"],
output_keys=["decision"],
system_prompt="""
Based on the analysis result, decide the next action:
- If confidence > 0.9: route to "execute"
- If 0.5 <= confidence <= 0.9: route to "review"
- If confidence < 0.5: route to "clarify"
Return: {"decision": "execute|review|clarify"}
""",
)
# Edges for each route
edges = [
EdgeSpec(
id="router-to-execute",
source="decision-router",
target="execute-action",
condition=EdgeCondition.CONDITIONAL,
condition_expr="decision == 'execute'",
priority=1,
),
EdgeSpec(
id="router-to-review",
source="decision-router",
target="human-review",
condition=EdgeCondition.CONDITIONAL,
condition_expr="decision == 'review'",
priority=2,
),
EdgeSpec(
id="router-to-clarify",
source="decision-router",
target="request-clarification",
condition=EdgeCondition.CONDITIONAL,
condition_expr="decision == 'clarify'",
priority=3,
),
]
```
## Error Handling Patterns
### Graceful Failure with Fallback
```python
# Primary node with error handling
nodes = [
NodeSpec(id="api-call", max_retries=3, ...),
NodeSpec(id="fallback-cache", ...),
NodeSpec(id="report-error", ...),
]
edges = [
# Success path
EdgeSpec(
id="api-success",
source="api-call",
target="process-results",
condition=EdgeCondition.ON_SUCCESS,
),
# Fallback on failure
EdgeSpec(
id="api-to-fallback",
source="api-call",
target="fallback-cache",
condition=EdgeCondition.ON_FAILURE,
priority=1,
),
# Report if fallback also fails
EdgeSpec(
id="fallback-to-error",
source="fallback-cache",
target="report-error",
condition=EdgeCondition.ON_FAILURE,
priority=1,
),
]
```
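The behavior those edges encode - retry the primary up to `max_retries`, then fall back, then report - condensed into plain Python as a sketch (the executor handles this via `ON_FAILURE` edges, not a wrapper function):

```python
# Sketch: retry-then-fallback, mirroring the api-call -> fallback-cache
# -> report-error edge chain above.
def call_with_fallback(primary, fallback, max_retries=3):
    last_error = None
    for _ in range(max_retries):
        try:
            return primary()
        except Exception as exc:  # broad catch is fine for a sketch
            last_error = exc
    try:
        return fallback()
    except Exception:
        # corresponds to the report-error node
        raise RuntimeError("primary and fallback both failed") from last_error

attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    raise ConnectionError("api down")

print(call_with_fallback(flaky, lambda: "cached result"))  # cached result
print(attempts["n"])  # 3
```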
## Performance Optimization
### Parallel Node Execution
```python
# Use multiple edges from same source for parallel execution
edges = [
EdgeSpec(
id="start-to-search1",
source="start",
target="search-source-1",
condition=EdgeCondition.ALWAYS,
),
EdgeSpec(
id="start-to-search2",
source="start",
target="search-source-2",
condition=EdgeCondition.ALWAYS,
),
EdgeSpec(
id="start-to-search3",
source="start",
target="search-source-3",
condition=EdgeCondition.ALWAYS,
),
# Converge results
EdgeSpec(
id="search1-to-merge",
source="search-source-1",
target="merge-results",
),
EdgeSpec(
id="search2-to-merge",
source="search-source-2",
target="merge-results",
),
EdgeSpec(
id="search3-to-merge",
source="search-source-3",
target="merge-results",
),
]
```
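The fan-out/fan-in shape those edges describe, expressed as plain `asyncio`. Whether the executor actually schedules the three branches concurrently is framework-specific; this only shows the intended data flow:

```python
# Sketch: run several search sources concurrently and merge their results,
# mirroring the start -> search-source-N -> merge-results edges above.
import asyncio

async def fan_out_merge(sources):
    results = await asyncio.gather(*(s() for s in sources))  # preserves order
    merged = []
    for r in results:
        merged.extend(r)
    return merged

async def demo():
    async def source(name):
        await asyncio.sleep(0)  # stand-in for real I/O
        return [f"{name}-hit"]
    return await fan_out_merge([lambda n=n: source(n) for n in ("s1", "s2", "s3")])

print(asyncio.run(demo()))  # ['s1-hit', 's2-hit', 's3-hit']
```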
## Handoff to Testing
When agent is complete, transition to testing phase:
```python
print("""
✅ Agent complete: exports/my_agent/
Next steps:
1. Switch to testing-agent skill
2. Generate and approve tests
3. Run evaluation
4. Debug any failures
Command: "Test the agent at exports/my_agent/"
""")
```
### Pre-Testing Checklist
Before handing off to testing-agent:
- [ ] Agent structure validates: `python -m agent_name validate`
- [ ] All nodes defined in nodes/__init__.py
- [ ] All edges connect valid nodes
- [ ] Entry node specified
- [ ] Agent can be imported: `from exports.agent_name import default_agent`
- [ ] README.md with usage instructions
- [ ] CLI commands work (info, validate)
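The command-based items on this checklist can be automated. A minimal sketch with the command runner injected so the logic is testable; in practice `run` could shell out via `subprocess.run`:

```python
# Sketch: run the checklist's shell commands and report pass/fail.
# The runner maps a command string to an exit code (0 = pass).
def run_checklist(checks, run):
    """checks: dict of name -> command; run: command -> exit code."""
    results = {name: run(cmd) == 0 for name, cmd in checks.items()}
    return results, all(results.values())

checks = {
    "validate": "python -m agent_name validate",
    "info": "python -m agent_name info",
}
results, ok = run_checklist(checks, run=lambda cmd: 0)  # stub runner: all pass
print(ok)  # True
```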
## Related Skills
- **building-agents-core** - Fundamental concepts
- **building-agents-construction** - Step-by-step building
- **testing-agent** - Test and validate agents
- **agent-workflow** - Complete workflow orchestrator
---
**Remember: Agent is actively constructed, visible the whole time. No hidden state. No surprise exports. Just transparent, incremental file building.**
@@ -1,939 +1,82 @@
---
name: building-agents
description: DEPRECATED - Split into building-agents-core, building-agents-construction, and building-agents-patterns. Use agent-workflow for complete guidance.
license: Apache-2.0
metadata:
  author: hive
  version: "1.0"
  status: deprecated
  replaced_by:
    - building-agents-core
    - building-agents-construction
    - building-agents-patterns
---
# DEPRECATED: Building Agents Skill
**This skill has been split into three focused skills for better organization.**
## Use These Instead
### 1. `/building-agents-core` - Foundational Concepts
- Architecture overview (Python packages)
- Core concepts (Goal, Node, Edge, Pause/Resume)
- Tool discovery and validation procedures
- When to use: Learning fundamentals, first-time agent builders
### 2. `/building-agents-construction` - Building Process
- Step-by-step agent construction
- Package structure, goal definition, nodes, edges
- CLI interface and finalization
- When to use: Actually building an agent
### 3. `/building-agents-patterns` - Best Practices
- Design patterns and examples
- Pause/resume architecture
- Error handling and performance
- Anti-patterns to avoid
- When to use: Optimizing design, advanced features
## Complete Workflow
Use the **`/agent-workflow`** meta-skill for end-to-end guidance:
```
Phase 0: Understand → /building-agents-core (optional)
Phase 1: Build → /building-agents-construction
Phase 1.5: Optimize → /building-agents-patterns (optional)
Phase 2: Test → /testing-agent
```
## Why Split?
- **Reduced context** - Each skill is under 650 lines (vs. the original 939 lines)
- **Better organization** - Clear separation of concerns
- **Faster loading** - Load only what you need
- **Easier maintenance** - Update one aspect independently
## Quick Decision Guide
**"I'm new to agents"** → Start with `/building-agents-core`
**"Build an agent now"** → Use `/building-agents-construction`
**"Optimize my design"** → Use `/building-agents-patterns`
**"Complete workflow"** → Use `/agent-workflow`
**"Test my agent"** → Use `/testing-agent`
## Migration
All functionality from the original building-agents skill is preserved across the three new skills. The content has been reorganized, not removed.
**Original file**: Available as `SKILL.md.bak` for reference
## Step-by-Step Guide
### Step 1: Create Package Structure
When user requests an agent, **immediately create the package**:
```python
# 1. Create directory
agent_name = "technical_research_agent" # snake_case
package_path = f"exports/{agent_name}"
Bash(f"mkdir -p {package_path}/nodes")
# 2. Write skeleton files
Write(
file_path=f"{package_path}/__init__.py",
content='''"""
Agent package - will be populated as build progresses.
"""
'''
)
Write(
file_path=f"{package_path}/nodes/__init__.py",
content='''"""Node definitions."""
from framework.graph import NodeSpec
# Nodes will be added here as they are approved
__all__ = []
'''
)
Write(
file_path=f"{package_path}/agent.py",
content='''"""Agent graph construction."""
from framework.graph import EdgeSpec, EdgeCondition, Goal, SuccessCriterion, Constraint
from framework.graph.edge import GraphSpec
from framework.graph.executor import GraphExecutor
from framework.runtime import Runtime
from framework.llm.anthropic import AnthropicProvider
from framework.runner.tool_registry import ToolRegistry
from aden_tools.credentials import CredentialManager
# Goal will be added when defined
# Nodes will be imported from .nodes
# Edges will be added when approved
# Agent class will be created when graph is complete
'''
)
Write(
file_path=f"{package_path}/config.py",
content='''"""Runtime configuration."""
from dataclasses import dataclass
@dataclass
class RuntimeConfig:
model: str = "claude-sonnet-4-5-20250929"
temperature: float = 0.7
max_tokens: int = 4096
default_config = RuntimeConfig()
# Metadata will be added when goal is set
'''
)
Write(
file_path=f"{package_path}/__main__.py",
content=CLI_TEMPLATE # Full CLI template (see below)
)
```
**Show user:**
```
✅ Package created: exports/technical_research_agent/
📁 Files created:
- __init__.py (skeleton)
- __main__.py (CLI ready)
- agent.py (skeleton)
- nodes/__init__.py (empty)
- config.py (skeleton)
You can open these files now and watch them grow as we build!
```
### Step 2: Define Goal
Propose goal, get approval, **write immediately**:
```python
# After user approves goal...
goal_code = f'''
goal = Goal(
id="{goal_id}",
name="{name}",
description="{description}",
success_criteria=[
SuccessCriterion(
id="{sc.id}",
description="{sc.description}",
metric="{sc.metric}",
target="{sc.target}",
weight={sc.weight},
),
# ... more criteria
],
constraints=[
Constraint(
id="{c.id}",
description="{c.description}",
constraint_type="{c.constraint_type}",
category="{c.category}",
),
# ... more constraints
],
)
'''
# Append to agent.py
Read(f"{package_path}/agent.py") # Must read first
Edit(
file_path=f"{package_path}/agent.py",
old_string="# Goal will be added when defined",
new_string=f"# Goal definition\n{goal_code}"
)
# Write metadata to config.py
metadata_code = f'''
@dataclass
class AgentMetadata:
name: str = "{name}"
version: str = "1.0.0"
description: str = "{description}"
metadata = AgentMetadata()
'''
Read(f"{package_path}/config.py")
Edit(
file_path=f"{package_path}/config.py",
old_string="# Metadata will be added when goal is set",
new_string=f"# Agent metadata\n{metadata_code}"
)
```
**Show user:**
```
✅ Goal written to agent.py
✅ Metadata written to config.py
Open exports/technical_research_agent/agent.py to see the goal!
```
### Step 3: Add Nodes (Incremental)
**⚠️ IMPORTANT:** Before adding any node with tools, you MUST:
1. Call `mcp__agent-builder__list_mcp_tools()` to discover available tools
2. Verify each tool exists in the response
3. If a tool doesn't exist, inform the user and ask how to proceed
For each node, **write immediately after approval**:
```python
# After user approves node...
node_code = f'''
{node_id.replace('-', '_')}_node = NodeSpec(
id="{node_id}",
name="{name}",
description="{description}",
node_type="{node_type}",
input_keys={input_keys},
output_keys={output_keys},
system_prompt="""\\
{system_prompt}
""",
tools={tools},
max_retries={max_retries},
)
'''
# Append to nodes/__init__.py
Read(f"{package_path}/nodes/__init__.py")
Edit(
file_path=f"{package_path}/nodes/__init__.py",
old_string="__all__ = []",
new_string=f"{node_code}\n__all__ = []"
)
# Update __all__ exports
all_node_names = [n.replace('-', '_') + '_node' for n in approved_nodes]
all_exports = f"__all__ = {all_node_names}"
Edit(
file_path=f"{package_path}/nodes/__init__.py",
old_string="__all__ = []",
new_string=all_exports
)
```
**Show user after each node:**
```
✅ Added analyze_request_node to nodes/__init__.py
📊 Progress: 1/6 nodes added
Open exports/technical_research_agent/nodes/__init__.py to see it!
```
**Repeat for each node.** User watches the file grow.
#### Optional: Validate Node with MCP Tools
After writing a node, you can optionally use MCP tools for validation:
```python
# Node is already written to file. Now validate it:
mcp__agent-builder__test_node(
node_id="analyze-request",
test_input='{"query": "test query"}',
mock_llm_response='{"analysis": "mock output"}'
)
# Returns validation result showing node behavior
# This is OPTIONAL - for bookkeeping/validation only
# The node already exists in the file!
```
**Key Point:** The node was written to `nodes/__init__.py` FIRST. The MCP tool is just for validation.
### Step 4: Connect Edges
After all nodes approved, add edges:
```python
# Generate edges code
edges_code = "edges = [\n"
for edge in approved_edges:
edges_code += f''' EdgeSpec(
id="{edge.id}",
source="{edge.source}",
target="{edge.target}",
condition=EdgeCondition.{edge.condition.upper()},
'''
if edge.condition_expr:
edges_code += f' condition_expr="{edge.condition_expr}",\n'
edges_code += f' priority={edge.priority},\n'
edges_code += ' ),\n'
edges_code += "]\n"
# Write to agent.py
Read(f"{package_path}/agent.py")
Edit(
file_path=f"{package_path}/agent.py",
old_string="# Edges will be added when approved",
new_string=f"# Edge definitions\n{edges_code}"
)
# Write entry points and terminal nodes
graph_config = f'''
# Graph configuration
entry_node = "{entry_node_id}"
entry_points = {entry_points}
pause_nodes = {pause_nodes}
terminal_nodes = {terminal_nodes}
# Collect all nodes
nodes = [
{', '.join(node_names)},
]
'''
Edit(
file_path=f"{package_path}/agent.py",
old_string="# Agent class will be created when graph is complete",
new_string=graph_config
)
```
**Show user:**
```
✅ Edges written to agent.py
✅ Graph configuration added
5 edges connecting 6 nodes
```
#### Optional: Validate Graph Structure
After writing edges, optionally validate with MCP tools:
```python
# Edges already written to agent.py. Now validate structure:
mcp__agent-builder__validate_graph()
# Returns: unreachable nodes, missing connections, etc.
# This is OPTIONAL - for validation only
```
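A minimal sketch of the kind of reachability check `validate_graph()` performs (an assumption; the actual tool checks more than this):

```python
from collections import deque

def unreachable_nodes(entry, nodes, edges):
    """BFS from the entry node; any node never visited is unreachable."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    seen, queue = {entry}, deque([entry])
    while queue:
        for nxt in adj.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return [n for n in nodes if n not in seen]

unreachable_nodes("a", ["a", "b", "c"], [("a", "b")])
# → ["c"]
```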
### Step 5: Finalize Agent Class
Write the agent class:
```python
agent_class_code = f'''
class {agent_class_name}:
"""
{agent_description}
"""
def __init__(self, config=None):
self.config = config or default_config
self.goal = goal
self.nodes = nodes
self.edges = edges
self.entry_node = entry_node
self.entry_points = entry_points
self.pause_nodes = pause_nodes
self.terminal_nodes = terminal_nodes
self.executor = None
def _create_executor(self, mock_mode=False):
"""Create executor instance."""
import tempfile
from pathlib import Path
storage_path = Path(tempfile.gettempdir()) / "{agent_name}"
storage_path.mkdir(parents=True, exist_ok=True)
runtime = Runtime(storage_path=storage_path)
tool_registry = ToolRegistry()
llm = None
if not mock_mode:
creds = CredentialManager()
if creds.is_available("anthropic"):
api_key = creds.get("anthropic")
llm = AnthropicProvider(api_key=api_key, model=self.config.model)
graph = GraphSpec(
id="{agent_name}-graph",
goal_id=self.goal.id,
version="1.0.0",
entry_node=self.entry_node,
entry_points=self.entry_points,
terminal_nodes=self.terminal_nodes,
pause_nodes=self.pause_nodes,
nodes=self.nodes,
edges=self.edges,
default_model=self.config.model,
max_tokens=self.config.max_tokens,
)
self.executor = GraphExecutor(
runtime=runtime,
llm=llm,
tools=list(tool_registry.get_tools().values()),
tool_executor=tool_registry.get_executor(),
)
self.graph = graph
return self.executor
async def run(self, context: dict, mock_mode=False, session_state=None):
"""Run the agent."""
executor = self._create_executor(mock_mode=mock_mode)
result = await executor.execute(
graph=self.graph,
goal=self.goal,
input_data=context,
session_state=session_state,
)
return result
def info(self):
"""Get agent information."""
return {{
"name": metadata.name,
"version": metadata.version,
"description": metadata.description,
"goal": {{
"name": self.goal.name,
"description": self.goal.description,
}},
"nodes": [n.id for n in self.nodes],
"edges": [e.id for e in self.edges],
"entry_node": self.entry_node,
"pause_nodes": self.pause_nodes,
"terminal_nodes": self.terminal_nodes,
}}
def validate(self):
"""Validate agent structure."""
errors = []
warnings = []
node_ids = {{node.id for node in self.nodes}}
for edge in self.edges:
if edge.source not in node_ids:
errors.append(f"Edge {{edge.id}}: source '{{edge.source}}' not found")
if edge.target not in node_ids:
errors.append(f"Edge {{edge.id}}: target '{{edge.target}}' not found")
if self.entry_node not in node_ids:
errors.append(f"Entry node '{{self.entry_node}}' not found")
return {{
"valid": len(errors) == 0,
"errors": errors,
"warnings": warnings,
}}
# Create default instance
default_agent = {agent_class_name}()
'''
# Append the agent class AFTER the nodes list. (Anchoring on "nodes = [" alone
# would inject the class inside the list and break the file.)
Read(f"{package_path}/agent.py")
nodes_list = f"nodes = [\n    {', '.join(node_names)},\n]"
Edit(
    file_path=f"{package_path}/agent.py",
    old_string=nodes_list,
    new_string=f"{nodes_list}\n{agent_class_code}"
)
# Finalize __init__.py exports
init_content = f'''"""
{agent_description}
"""
from .agent import {agent_class_name}, default_agent, goal, nodes, edges
from .config import RuntimeConfig, AgentMetadata, default_config, metadata
__version__ = "1.0.0"
__all__ = [
"{agent_class_name}",
"default_agent",
"goal",
"nodes",
"edges",
"RuntimeConfig",
"AgentMetadata",
"default_config",
"metadata",
]
'''
# Overwrite __init__.py with the final exports. (An Edit on old_string='"""'
# with replace_all=True would clobber every docstring delimiter in the file.)
Write(
    file_path=f"{package_path}/__init__.py",
    content=init_content
)
# Write README. Build the code fences programmatically so literal backticks
# don't need invalid "\`" escapes (and don't terminate this documentation block).
fence = "`" * 3
readme_content = f'''# {agent_name.replace('_', ' ').title()}

{agent_description}

## Usage

{fence}bash
# Show agent info
python -m {agent_name} info

# Validate structure
python -m {agent_name} validate

# Run agent
python -m {agent_name} run --input '{{"key": "value"}}'

# Interactive shell
python -m {agent_name} shell
{fence}

## As Python Module

{fence}python
from {agent_name} import default_agent

result = await default_agent.run({{"key": "value"}})
{fence}

## Structure

- `agent.py` - Goal, edges, graph construction
- `nodes/__init__.py` - Node definitions
- `config.py` - Runtime configuration
- `__main__.py` - CLI interface
'''
Write(
file_path=f"{package_path}/README.md",
content=readme_content
)
```
**Show user:**
```
✅ Agent class written to agent.py
✅ Package exports finalized in __init__.py
✅ README.md generated
🎉 Agent complete: exports/technical_research_agent/
Commands:
python -m technical_research_agent info
python -m technical_research_agent validate
python -m technical_research_agent run --input '{"topic": "..."}'
```
## CLI Template
```python
CLI_TEMPLATE = '''"""
CLI entry point for agent.
"""
import asyncio
import json
import sys
import click
from .agent import default_agent
@click.group()
@click.version_option(version="1.0.0")
def cli():
"""Agent CLI."""
pass
@cli.command()
@click.option("--input", "-i", "input_json", type=str, required=True)
@click.option("--mock", is_flag=True, help="Run in mock mode")
@click.option("--quiet", "-q", is_flag=True, help="Only output result JSON")
def run(input_json, mock, quiet):
"""Execute the agent."""
try:
context = json.loads(input_json)
except json.JSONDecodeError as e:
click.echo(f"Error parsing input JSON: {e}", err=True)
sys.exit(1)
if not quiet:
click.echo(f"Running agent with input: {json.dumps(context)}")
result = asyncio.run(default_agent.run(context, mock_mode=mock))
output_data = {
"success": result.success,
"steps_executed": result.steps_executed,
"output": result.output,
}
if result.error:
output_data["error"] = result.error
if result.paused_at:
output_data["paused_at"] = result.paused_at
click.echo(json.dumps(output_data, indent=2, default=str))
sys.exit(0 if result.success else 1)
@cli.command()
@click.option("--json", "output_json", is_flag=True)
def info(output_json):
"""Show agent information."""
info_data = default_agent.info()
if output_json:
click.echo(json.dumps(info_data, indent=2))
else:
click.echo(f"Agent: {info_data['name']}")
click.echo(f"Description: {info_data['description']}")
click.echo(f"Nodes: {len(info_data['nodes'])}")
click.echo(f"Edges: {len(info_data['edges'])}")
@cli.command()
def validate():
"""Validate agent structure."""
validation = default_agent.validate()
if validation["valid"]:
click.echo("✓ Agent is valid")
else:
click.echo("✗ Agent has errors:")
for error in validation["errors"]:
click.echo(f" ERROR: {error}")
sys.exit(0 if validation["valid"] else 1)
@cli.command()
def shell():
"""Interactive agent session."""
click.echo("Interactive mode - enter JSON input:")
# ... implementation
if __name__ == "__main__":
cli()
'''
```
## Testing During Build
After nodes are added:
```bash
# Test individual node
python -c "
from exports.my_agent.nodes import analyze_request_node
print(analyze_request_node.id)
print(analyze_request_node.input_keys)
"
# Validate current state
PYTHONPATH=core:exports python -m my_agent validate
# Show info
PYTHONPATH=core:exports python -m my_agent info
```
## Approval Pattern
Use AskUserQuestion for all approvals:
```python
response = AskUserQuestion(
questions=[{
"question": "Do you approve this [component]?",
"header": "Approve",
"options": [
{
"label": "✓ Approve (Recommended)",
"description": "Component looks good, proceed"
},
{
"label": "✗ Reject & Modify",
"description": "Need to make changes"
},
{
"label": "⏸ Pause & Review",
"description": "Need more time to review"
}
],
"multiSelect": False
}]
)
```
## Pause/Resume Architecture
For agents needing multi-turn conversations:
1. **Pause node**: Execution stops, waits for user input
2. **Resume entry point**: Continues from pause with user's response
```python
# Example pause/resume flow
pause_nodes = ["request-clarification"]
entry_points = {
"start": "analyze-request",
"request-clarification_resume": "process-clarification"
}
```
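The `<node>_resume` naming convention above can be resolved with a one-line lookup (this helper is illustrative, not part of the framework):

```python
def resume_entry(entry_points, paused_at):
    """Map a paused node id to the node that handles the user's response.

    Falls back to the normal start node if no resume entry is registered.
    """
    return entry_points.get(f"{paused_at}_resume", entry_points["start"])

entry_points = {
    "start": "analyze-request",
    "request-clarification_resume": "process-clarification",
}
resume_entry(entry_points, "request-clarification")
# → "process-clarification"
```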
## Practical Example: Hybrid Workflow
Here's how to build a node using both approaches:
```python
# 1. WRITE TO FILE FIRST (Primary - makes it visible)
node_code = '''
search_node = NodeSpec(
id="search-web",
node_type="llm_tool_use",
input_keys=["query"],
output_keys=["search_results"],
system_prompt="Search the web for: {query}",
tools=["web_search"],
)
'''
Edit(
file_path="exports/research_agent/nodes/__init__.py",
old_string="# Nodes will be added here",
new_string=node_code
)
print("✅ Added search_node to nodes/__init__.py")
print("📁 Open exports/research_agent/nodes/__init__.py to see it!")
# 2. OPTIONALLY VALIDATE WITH MCP (Secondary - bookkeeping)
validation = mcp__agent-builder__test_node(
node_id="search-web",
test_input='{"query": "python tutorials"}',
mock_llm_response='{"search_results": [...mock results...]}'
)
print(f"✓ Validation: {validation['success']}")
```
**User experience:**
- Immediately sees node in their editor (from step 1)
- Gets validation feedback (from step 2)
- Can edit the file directly if needed
This combines visibility (files) with validation (MCP tools).
## Anti-Patterns

- ❌ **Don't rely on `export_graph`** - Write files immediately, not at the end
- ❌ **Don't hide code in session** - Write to files as components are approved
- ❌ **Don't wait to write files** - Agent is visible from the first step
- ❌ **Don't batch everything** - Write incrementally

**MCP tools OK for:**

- ✅ `test_node` - Validate node configuration with mock inputs
- ✅ `validate_graph` - Check graph structure
- ✅ `create_session` - Track session state for bookkeeping
- ✅ Other validation tools

**Just don't:** Use MCP as the primary construction method or rely on `export_graph`
## Best Practices

- **Show progress** after each file write
- **Let user open files** during build
- **Write incrementally** - one component at a time
- **Test as you build** - validate after each addition
- **Keep user informed** - show file paths and diffs
## Handoff to testing-agent
When agent is complete:
```
✅ Agent complete: exports/my_agent/
Next steps:
1. Switch to testing-agent skill
2. Generate and approve tests
3. Run evaluation
4. Debug any failures
Command: "Test the agent at exports/my_agent/"
```
**See also**: [DEPRECATED.md](./DEPRECATED.md) for a detailed migration guide
---
**Remember: Agent is actively constructed, visible the whole time. No hidden state. No surprise exports. Just transparent, incremental file building.**
**Deprecated**: 2026-01-21
**Replaced by**: building-agents-core, building-agents-construction, building-agents-patterns
**Orchestrator**: agent-workflow