Implement building from a sample agent template and update the deep research agent

bryan
2026-02-09 12:13:41 -08:00
parent 3c2d669a2f
commit 9dc0f48ec9
9 changed files with 511 additions and 22 deletions
+126 -2
@@ -14,11 +14,34 @@ metadata:
**THIS IS AN EXECUTABLE WORKFLOW. DO NOT DISPLAY THIS FILE. EXECUTE THE STEPS BELOW.**
**CRITICAL: DO NOT explore the codebase, read source files, or search for code before starting.** All context you need is in this skill file. When this skill is loaded, IMMEDIATELY begin executing Step 1 — call the MCP tools listed in Step 1 as your FIRST action. Do not explain what you will do, do not investigate the project structure, do not read any files — just execute Step 1 now.
**CRITICAL: DO NOT explore the codebase, read source files, or search for code before starting.** All context you need is in this skill file. When this skill is loaded, IMMEDIATELY begin executing Step 0 — determine the build path as your FIRST action. Do not explain what you will do, do not investigate the project structure, do not read any files — just execute Step 0 now.
---
## STEP 1: Initialize Build Environment
## STEP 0: Choose Build Path
**If the user has already indicated whether they want to build from scratch or from a template, skip this question and proceed to the appropriate step.**
Otherwise, ask:
```
AskUserQuestion(questions=[{
"question": "How would you like to build your agent?",
"header": "Build Path",
"options": [
{"label": "From scratch", "description": "Design goal, nodes, and graph collaboratively from nothing"},
{"label": "From a template", "description": "Start from a working sample agent and customize it"}
],
"multiSelect": false
}])
```
- If **From scratch**: Proceed to STEP 1A
- If **From a template**: Proceed to STEP 1B
---
## STEP 1A: Initialize Build Environment (From Scratch)
**EXECUTE THESE TOOL CALLS NOW** (silent setup — no user interaction needed):
@@ -59,8 +82,101 @@ mkdir -p exports/AGENT_NAME/nodes
---
## STEP 1B: Initialize Build Environment (From Template)
**EXECUTE THESE STEPS NOW:**
### 1B.1: Discover available templates
List the template directories and read each template's `agent.json` to get its name and description:
```bash
ls examples/templates/
```
For each directory found, read `examples/templates/TEMPLATE_DIR/agent.json` with the Read tool and extract:
- `agent.name` — the template's display name
- `agent.description` — what the template does
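A minimal sketch of this discovery step, assuming each template directory holds an `agent.json` shaped like the one added in this commit (the helper name is hypothetical):
```python
import json
from pathlib import Path

# Hypothetical helper mirroring the discovery step above: scan each
# template directory and pull the display name and description out of
# its agent.json.
def discover_templates(root: str = "examples/templates") -> list[dict]:
    templates = []
    for agent_json in sorted(Path(root).glob("*/agent.json")):
        data = json.loads(agent_json.read_text())
        agent = data.get("agent", {})
        templates.append({
            "dir": agent_json.parent.name,                # TEMPLATE_DIR
            "name": agent.get("name", ""),                # agent.name
            "description": agent.get("description", ""),  # agent.description
        })
    return templates
```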
### 1B.2: Present templates to user
Show the user a table of available templates:
> **Available Templates:**
>
> | # | Template | Description |
> |---|----------|-------------|
> | 1 | [name from agent.json] | [description from agent.json] |
> | 2 | ... | ... |
Then ask the user to pick a template and provide a name for their new agent:
```
AskUserQuestion(questions=[{
"question": "Which template would you like to start from?",
"header": "Template",
"options": [
{"label": "[template 1 name]", "description": "[template 1 description]"},
{"label": "[template 2 name]", "description": "[template 2 description]"},
...
],
"multiSelect": false
}, {
"question": "What should the new agent be named? (snake_case)",
"header": "Agent Name",
"options": [
{"label": "Use template name", "description": "Keep the original template name as-is"},
{"label": "Custom name", "description": "I'll provide a new snake_case name"}
],
"multiSelect": false
}])
```
### 1B.3: Copy template to exports
```bash
cp -r examples/templates/TEMPLATE_DIR exports/NEW_AGENT_NAME
```
### 1B.4: Create session and register MCP (same as STEP 1A)
```
mcp__agent-builder__add_mcp_server(
name="hive-tools",
transport="stdio",
command="uv",
args='["run", "python", "mcp_server.py", "--stdio"]',
cwd="tools",
description="Hive tools MCP server"
)
```
```
mcp__agent-builder__create_session(name="NEW_AGENT_NAME")
```
```
mcp__agent-builder__list_mcp_tools()
```
### 1B.5: Load template into builder session
Import the entire agent definition in one call:
```
mcp__agent-builder__import_from_export(agent_json_path="exports/NEW_AGENT_NAME/agent.json")
```
This reads the agent.json and populates the builder session with the goal, all nodes, and all edges.
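For the deep research template added in this commit, a successful import should report a summary along these lines (values illustrative, matching the tool's return shape and the template metadata):
```
{"success": true, "goal": "Rigorous Interactive Research", "nodes_count": 4, "edges_count": 4}
```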
**THEN immediately proceed to STEP 2.**
---
## STEP 2: Define Goal Together with User
**If starting from a template**, the goal is already loaded in the builder session. Present the existing goal to the user using the format below and ask for approval. Skip the collaborative drafting questions — go straight to presenting and asking "Do you approve this goal, or would you like to modify it?"
**DO NOT propose a complete goal on your own.** Instead, collaborate with the user to define it.
**START by asking the user to help shape the goal:**
@@ -122,6 +238,8 @@ AskUserQuestion(questions=[{
## STEP 3: Design Conceptual Nodes
**If starting from a template**, the nodes are already loaded in the builder session. Present the existing nodes using the table format below and ask for approval. Skip the design phase.
**BEFORE designing nodes**, review the available tools from Step 1. Nodes can ONLY use tools that exist.
**DESIGN the workflow** as a series of nodes. For each node, determine:
@@ -180,6 +298,8 @@ AskUserQuestion(questions=[{
## STEP 4: Design Full Graph and Review
**If starting from a template**, the edges are already loaded in the builder session. Render the existing graph as ASCII art and present it to the user for approval. Skip the edge design phase.
**DETERMINE the edges** connecting the approved nodes. For each edge:
- edge_id (kebab-case)
@@ -297,8 +417,12 @@ AskUserQuestion(questions=[{
**NOW — and only now — write the actual code.** The user has approved the goal, nodes, and graph.
**If starting from a template**, the copied files will be overwritten with the approved design. The Python files must use the NEW agent name (class name, module references, storage paths, metadata), not the original template name.
### 5a: Register nodes and edges with MCP
**If starting from a template and no modifications were made in Steps 2-4**, the nodes and edges are already registered. Skip to validation (`mcp__agent-builder__validate_graph()`). If modifications were made, re-register the changed nodes/edges (the MCP tools handle duplicates by overwriting).
**FOR EACH approved node**, call:
```
+16 -2
@@ -20,7 +20,7 @@ metadata:
**THIS IS AN EXECUTABLE WORKFLOW. DO NOT explore the codebase or read source files. ROUTE to the correct skill IMMEDIATELY.**
When this skill is loaded, determine what the user needs and invoke the appropriate skill NOW:
- **User wants to build an agent** → Invoke `/hive-create` immediately
- **User wants to build an agent** (from scratch or from a template) → Invoke `/hive-create` immediately
- **User wants to test an agent** → Invoke `/hive-test` immediately
- **User wants to learn concepts** → Invoke `/hive-concepts` immediately
- **User wants patterns/optimization** → Invoke `/hive-patterns` immediately
@@ -97,7 +97,7 @@ Use this meta-skill when:
**Duration**: 15-30 minutes
**Skill**: `/hive-create`
**Input**: User requirements ("Build an agent that...")
**Input**: User requirements ("Build an agent that...") or a template to start from
### What This Phase Does
@@ -289,6 +289,19 @@ User: "Build an agent (first time)"
→ Done: Production-ready agent
```
### Pattern 1c: Build from Template
```
User: "Build an agent based on the deep research template"
→ Use /hive-create
→ Select "From a template" path
→ Pick template, name new agent
→ Review/modify goal, nodes, graph
→ Agent exported with customizations
→ Use /hive-test
→ Done: Customized agent
```
### Pattern 2: Test Existing Agent
```
@@ -492,6 +505,7 @@ The workflow is **flexible** - skip phases as needed, iterate freely, and adapt
- Have clear requirements
- Ready to write code
- Want step-by-step guidance
- Want to start from an existing template and customize it
**Choose hive-patterns when:**
- Agent structure complete
+1
@@ -1,4 +1,5 @@
exports/
docs/
.agent-builder-sessions/
.pytest_cache/
**/__pycache__/
@@ -1856,6 +1856,82 @@ def export_graph() -> str:
)
@mcp.tool()
def import_from_export(
agent_json_path: Annotated[str, "Path to the agent.json file to import"],
) -> str:
"""
Import an agent definition from an exported agent.json file into the current build session.
Reads the agent.json, parses goal/nodes/edges, and populates the current session.
This is the reverse of export_graph().
Args:
agent_json_path: Path to the agent.json file to import
Returns:
JSON summary of what was imported (goal name, node count, edge count)
"""
session = get_session()
path = Path(agent_json_path)
if not path.exists():
return json.dumps({"success": False, "error": f"File not found: {agent_json_path}"})
try:
data = json.loads(path.read_text())
except json.JSONDecodeError as e:
return json.dumps({"success": False, "error": f"Invalid JSON: {e}"})
# Parse goal (same pattern as BuildSession.from_dict lines 88-99)
goal_data = data.get("goal")
if goal_data:
session.goal = Goal(
id=goal_data["id"],
name=goal_data["name"],
description=goal_data["description"],
success_criteria=[
SuccessCriterion(**sc) for sc in goal_data.get("success_criteria", [])
],
constraints=[Constraint(**c) for c in goal_data.get("constraints", [])],
)
# Parse nodes (same pattern as BuildSession.from_dict line 102)
graph_data = data.get("graph", {})
nodes_data = graph_data.get("nodes", [])
session.nodes = [NodeSpec(**n) for n in nodes_data]
# Parse edges (same pattern as BuildSession.from_dict lines 105-118)
edges_data = graph_data.get("edges", [])
session.edges = []
for e in edges_data:
condition_str = e.get("condition")
if isinstance(condition_str, str):
condition_map = {
"always": EdgeCondition.ALWAYS,
"on_success": EdgeCondition.ON_SUCCESS,
"on_failure": EdgeCondition.ON_FAILURE,
"conditional": EdgeCondition.CONDITIONAL,
"llm_decide": EdgeCondition.LLM_DECIDE,
}
e["condition"] = condition_map.get(condition_str, EdgeCondition.ON_SUCCESS)
session.edges.append(EdgeSpec(**e))
# Persist updated session
_save_session(session)
return json.dumps(
{
"success": True,
"goal": session.goal.name if session.goal else None,
"nodes_count": len(session.nodes),
"edges_count": len(session.edges),
"node_ids": [n.id for n in session.nodes],
"edge_ids": [e.id for e in session.edges],
}
)
@mcp.tool()
def get_session_status() -> str:
"""Get the current status of the build session."""
@@ -207,17 +207,8 @@ async def _interactive_shell(verbose=False):
if result.success:
output = result.output
if "report_content" in output:
click.echo("\n--- Report ---\n")
click.echo(output["report_content"])
click.echo("\n")
if "references" in output:
click.echo("--- References ---\n")
for ref in output.get("references", []):
click.echo(
f" [{ref.get('number', '?')}] {ref.get('title', '')} - {ref.get('url', '')}"
)
click.echo("\n")
status = output.get("delivery_status", "unknown")
click.echo(f"\nResearch complete (status: {status})\n")
else:
click.echo(f"\nResearch failed: {result.error}\n")
@@ -0,0 +1,276 @@
{
"agent": {
"id": "deep_research_agent",
"name": "Deep Research Agent",
"version": "1.0.0",
"description": "Interactive research agent that rigorously investigates topics through multi-source search, quality evaluation, and synthesis - with TUI conversation at key checkpoints for user guidance and feedback."
},
"graph": {
"id": "deep-research-agent-graph",
"goal_id": "rigorous-interactive-research",
"version": "1.0.0",
"entry_node": "intake",
"entry_points": {
"start": "intake"
},
"pause_nodes": [],
"terminal_nodes": [
"report"
],
"nodes": [
{
"id": "intake",
"name": "Research Intake",
"description": "Discuss the research topic with the user, clarify scope, and confirm direction",
"node_type": "event_loop",
"input_keys": [
"topic"
],
"output_keys": [
"research_brief"
],
"nullable_output_keys": [],
"input_schema": {},
"output_schema": {},
"system_prompt": "You are a research intake specialist. The user wants to research a topic.\nHave a brief conversation to clarify what they need.\n\n**STEP 1 \u2014 Read and respond (text only, NO tool calls):**\n1. Read the topic provided\n2. If it's vague, ask 1-2 clarifying questions (scope, angle, depth)\n3. If it's already clear, confirm your understanding and ask the user to confirm\n\nKeep it short. Don't over-ask.\n\nAfter your message, call ask_user() to wait for the user's response.\n\n**STEP 2 \u2014 After the user confirms, call set_output:**\n- set_output(\"research_brief\", \"A clear paragraph describing exactly what to research, what questions to answer, what scope to cover, and how deep to go.\")",
"tools": [],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 1,
"output_model": null,
"max_validation_retries": 2,
"client_facing": true
},
{
"id": "research",
"name": "Research",
"description": "Search the web, fetch source content, and compile findings",
"node_type": "event_loop",
"input_keys": [
"research_brief",
"feedback"
],
"output_keys": [
"findings",
"sources",
"gaps"
],
"nullable_output_keys": [
"feedback"
],
"input_schema": {},
"output_schema": {},
"system_prompt": "You are a research agent. Given a research brief, find and analyze sources.\n\nIf feedback is provided, this is a follow-up round \u2014 focus on the gaps identified.\n\nWork in phases:\n1. **Search**: Use web_search with 3-5 diverse queries covering different angles.\n Prioritize authoritative sources (.edu, .gov, established publications).\n2. **Fetch**: Use web_scrape on the most promising URLs (aim for 5-8 sources).\n Skip URLs that fail. Extract the substantive content.\n3. **Analyze**: Review what you've collected. Identify key findings, themes,\n and any contradictions between sources.\n\nImportant:\n- Work in batches of 3-4 tool calls at a time to manage context\n- After each batch, assess whether you have enough material\n- Prefer quality over quantity \u2014 5 good sources beat 15 thin ones\n- Track which URL each finding comes from (you'll need citations later)\n\nWhen done, use set_output:\n- set_output(\"findings\", \"Structured summary: key findings with source URLs for each claim. Include themes, contradictions, and confidence levels.\")\n- set_output(\"sources\", [{\"url\": \"...\", \"title\": \"...\", \"summary\": \"...\"}])\n- set_output(\"gaps\", \"What aspects of the research brief are NOT well-covered yet, if any.\")",
"tools": [
"web_search",
"web_scrape",
"load_data",
"save_data",
"list_data_files"
],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 3,
"output_model": null,
"max_validation_retries": 2,
"client_facing": false
},
{
"id": "review",
"name": "Review Findings",
"description": "Present findings to user and decide whether to research more or write the report",
"node_type": "event_loop",
"input_keys": [
"findings",
"sources",
"gaps",
"research_brief"
],
"output_keys": [
"needs_more_research",
"feedback"
],
"nullable_output_keys": [],
"input_schema": {},
"output_schema": {},
"system_prompt": "Present the research findings to the user clearly and concisely.\n\n**STEP 1 \u2014 Present (your first message, text only, NO tool calls):**\n1. **Summary** (2-3 sentences of what was found)\n2. **Key Findings** (bulleted, with confidence levels)\n3. **Sources Used** (count and quality assessment)\n4. **Gaps** (what's still unclear or under-covered)\n\nEnd by asking: Are they satisfied, or do they want deeper research? Should we proceed to writing the final report?\n\nAfter your presentation, call ask_user() to wait for the user's response.\n\n**STEP 2 \u2014 After the user responds, call set_output:**\n- set_output(\"needs_more_research\", \"true\") \u2014 if they want more\n- set_output(\"needs_more_research\", \"false\") \u2014 if they're satisfied\n- set_output(\"feedback\", \"What the user wants explored further, or empty string\")",
"tools": [],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 3,
"output_model": null,
"max_validation_retries": 2,
"client_facing": true
},
{
"id": "report",
"name": "Write & Deliver Report",
"description": "Write a cited HTML report from the findings and present it to the user",
"node_type": "event_loop",
"input_keys": [
"findings",
"sources",
"research_brief"
],
"output_keys": [
"delivery_status"
],
"nullable_output_keys": [],
"input_schema": {},
"output_schema": {},
"system_prompt": "Write a comprehensive research report as an HTML file and present it to the user.\n\n**STEP 1 \u2014 Write the HTML report (tool calls, NO text to user yet):**\n\n1. Compose a complete, self-contained HTML document with embedded CSS styling.\n Use a clean, readable design: max-width container, pleasant typography,\n numbered citation links, a table of contents, and a references section.\n\n Report structure inside the HTML:\n - Title & date\n - Executive Summary (2-3 paragraphs)\n - Table of Contents\n - Findings (organized by theme, with [n] citation links)\n - Analysis (synthesis, implications, areas of debate)\n - Conclusion (key takeaways, confidence assessment)\n - References (numbered list with clickable URLs)\n\n Requirements:\n - Every factual claim must cite its source with [n] notation\n - Be objective \u2014 present multiple viewpoints where sources disagree\n - Distinguish well-supported conclusions from speculation\n - Answer the original research questions from the brief\n\n2. Save the HTML file:\n save_data(filename=\"report.html\", data=<your_html>)\n\n3. Get the clickable link:\n serve_file_to_user(filename=\"report.html\", label=\"Research Report\")\n\n**STEP 2 \u2014 Present the link to the user (text only, NO tool calls):**\n\nTell the user the report is ready and include the file:// URI from\nserve_file_to_user so they can click it to open. Give a brief summary\nof what the report covers. Ask if they have questions.\n\nAfter presenting the link, call ask_user() to wait for the user's response.\n\n**STEP 3 \u2014 After the user responds:**\n- Answer follow-up questions from the research material\n- Call ask_user() again if they might have more questions\n- When the user is satisfied: set_output(\"delivery_status\", \"completed\")",
"tools": [
"save_data",
"serve_file_to_user",
"load_data",
"list_data_files"
],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 1,
"output_model": null,
"max_validation_retries": 2,
"client_facing": true
}
],
"edges": [
{
"id": "intake-to-research",
"source": "intake",
"target": "research",
"condition": "on_success",
"condition_expr": null,
"priority": 1,
"input_mapping": {}
},
{
"id": "research-to-review",
"source": "research",
"target": "review",
"condition": "on_success",
"condition_expr": null,
"priority": 1,
"input_mapping": {}
},
{
"id": "review-to-research-feedback",
"source": "review",
"target": "research",
"condition": "conditional",
"condition_expr": "str(needs_more_research).lower() == 'true'",
"priority": 2,
"input_mapping": {}
},
{
"id": "review-to-report",
"source": "review",
"target": "report",
"condition": "conditional",
"condition_expr": "str(needs_more_research).lower() != 'true'",
"priority": 1,
"input_mapping": {}
}
],
"max_steps": 100,
"max_retries_per_node": 3,
"description": "Interactive research agent that rigorously investigates topics through multi-source search, quality evaluation, and synthesis - with TUI conversation at key checkpoints for user guidance and feedback.",
"created_at": "2026-02-06T00:00:00.000000"
},
"goal": {
"id": "rigorous-interactive-research",
"name": "Rigorous Interactive Research",
"description": "Research any topic by searching diverse sources, analyzing findings, and producing a cited report \u2014 with user checkpoints to guide direction.",
"status": "draft",
"success_criteria": [
{
"id": "source-diversity",
"description": "Use multiple diverse, authoritative sources",
"metric": "source_count",
"target": ">=5",
"weight": 0.25,
"met": false
},
{
"id": "citation-coverage",
"description": "Every factual claim in the report cites its source",
"metric": "citation_coverage",
"target": "100%",
"weight": 0.25,
"met": false
},
{
"id": "user-satisfaction",
"description": "User reviews findings before report generation",
"metric": "user_approval",
"target": "true",
"weight": 0.25,
"met": false
},
{
"id": "report-completeness",
"description": "Final report answers the original research questions",
"metric": "question_coverage",
"target": "90%",
"weight": 0.25,
"met": false
}
],
"constraints": [
{
"id": "no-hallucination",
"description": "Only include information found in fetched sources",
"constraint_type": "quality",
"category": "accuracy",
"check": ""
},
{
"id": "source-attribution",
"description": "Every claim must cite its source with a numbered reference",
"constraint_type": "quality",
"category": "accuracy",
"check": ""
},
{
"id": "user-checkpoint",
"description": "Present findings to the user before writing the final report",
"constraint_type": "functional",
"category": "interaction",
"check": ""
}
],
"context": {},
"required_capabilities": [],
"input_schema": {},
"output_schema": {},
"version": "1.0.0",
"parent_version": null,
"evolution_reason": null,
"created_at": "2026-02-06 00:00:00.000000",
"updated_at": "2026-02-06 00:00:00.000000"
},
"required_tools": [
"list_data_files",
"load_data",
"save_data",
"serve_file_to_user",
"web_scrape",
"web_search"
],
"metadata": {
"created_at": "2026-02-06T00:00:00.000000",
"node_count": 4,
"edge_count": 4
}
}
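A quick sanity check for a template file like the one above (a sketch only; the path is an assumption, and `validate_graph` remains the authoritative check):
```python
import json
from pathlib import Path

data = json.loads(Path("examples/templates/deep_research/agent.json").read_text())  # path assumed
graph, goal = data["graph"], data["goal"]

# Every edge must reference nodes that exist.
node_ids = {n["id"] for n in graph["nodes"]}
for edge in graph["edges"]:
    assert edge["source"] in node_ids and edge["target"] in node_ids, edge["id"]

# Success criterion weights should sum to 1.0 (four criteria at 0.25 each here).
assert abs(sum(c["weight"] for c in goal["success_criteria"]) - 1.0) < 1e-9

# Tools used by nodes must all appear in required_tools.
node_tools = {t for n in graph["nodes"] for t in n["tools"]}
assert node_tools <= set(data["required_tools"])
```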
@@ -102,23 +102,23 @@ edges = [
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
# review -> research (feedback loop)
# review -> research (feedback loop, checked first)
EdgeSpec(
id="review-to-research-feedback",
source="review",
target="research",
condition=EdgeCondition.CONDITIONAL,
condition_expr="needs_more_research == True",
priority=1,
condition_expr="str(needs_more_research).lower() == 'true'",
priority=2,
),
# review -> report (user satisfied)
# review -> report (complementary condition — proceed to report when no more research needed)
EdgeSpec(
id="review-to-report",
source="review",
target="report",
condition=EdgeCondition.CONDITIONAL,
condition_expr="needs_more_research == False",
priority=2,
condition_expr="str(needs_more_research).lower() != 'true'",
priority=1,
),
]
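The switch from `needs_more_research == True` to `str(needs_more_research).lower() == 'true'` matters because the review node's prompt sets `needs_more_research` to the strings "true"/"false", so the boolean comparison never matched. A minimal sketch of how such a `condition_expr` might be evaluated against node outputs (the framework's real evaluator is not shown in this diff; everything here is illustrative):
```python
# Illustrative only: the framework's actual condition evaluator is not
# part of this diff. Shows why the string-safe expression matches while
# the old boolean comparison would not.
def edge_fires(condition_expr: str, outputs: dict) -> bool:
    env = {"str": str, **outputs}
    return bool(eval(condition_expr, {"__builtins__": {}}, env))

outputs = {"needs_more_research": "true"}  # set_output stores the string "true"
assert edge_fires("str(needs_more_research).lower() == 'true'", outputs)
assert not edge_fires("str(needs_more_research).lower() != 'true'", outputs)
assert not edge_fires("needs_more_research == True", outputs)  # old expr: never fires
```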
@@ -23,6 +23,8 @@ Have a brief conversation to clarify what they need.
Keep it short. Don't over-ask.
After your message, call ask_user() to wait for the user's response.
**STEP 2 — After the user confirms, call set_output:**
- set_output("research_brief", "A clear paragraph describing exactly what to research, \
what questions to answer, what scope to cover, and how deep to go.")
@@ -93,6 +95,8 @@ Present the research findings to the user clearly and concisely.
End by asking: Are they satisfied, or do they want deeper research? \
Should we proceed to writing the final report?
After your presentation, call ask_user() to wait for the user's response.
**STEP 2 — After the user responds, call set_output:**
- set_output("needs_more_research", "true") if they want more
- set_output("needs_more_research", "false") if they're satisfied
@@ -147,8 +151,11 @@ Tell the user the report is ready and include the file:// URI from
serve_file_to_user so they can click it to open. Give a brief summary
of what the report covers. Ask if they have questions.
After presenting the link, call ask_user() to wait for the user's response.
**STEP 3 — After the user responds:**
- Answer follow-up questions from the research material
- Call ask_user() again if they might have more questions
- When the user is satisfied: set_output("delivery_status", "completed")
""",
tools=["save_data", "serve_file_to_user", "load_data", "list_data_files"],
+1 -1
@@ -754,7 +754,7 @@ wheels = [
[[package]]
name = "framework"
version = "0.1.0"
version = "0.4.2"
source = { editable = "core" }
dependencies = [
{ name = "anthropic" },