Compare commits
106 Commits
@@ -0,0 +1,78 @@
name: Standard Bounty
description: A bounty task for general framework contributions (not integration-specific)
title: "[Bounty]: "
labels: []
body:
  - type: markdown
    attributes:
      value: |
        ## Standard Bounty

        This issue is part of the [Bounty Program](../../docs/bounty-program/README.md).
        **Claim this bounty** by commenting below — a maintainer will assign you within 24 hours.

  - type: dropdown
    id: bounty-size
    attributes:
      label: Bounty Size
      options:
        - "Small (10 pts)"
        - "Medium (30 pts)"
        - "Large (75 pts)"
        - "Extreme (150 pts)"
    validations:
      required: true

  - type: dropdown
    id: difficulty
    attributes:
      label: Difficulty
      options:
        - Easy
        - Medium
        - Hard
    validations:
      required: true

  - type: textarea
    id: description
    attributes:
      label: Description
      description: What needs to be done to complete this bounty.
      placeholder: |
        Describe the specific task, including:
        - What the contributor needs to do
        - Links to relevant files in the repo
        - Any context or motivation for the change
    validations:
      required: true

  - type: textarea
    id: acceptance-criteria
    attributes:
      label: Acceptance Criteria
      description: What "done" looks like. The PR must meet all criteria.
      placeholder: |
        - [ ] Criterion 1
        - [ ] Criterion 2
        - [ ] CI passes
    validations:
      required: true

  - type: textarea
    id: relevant-files
    attributes:
      label: Relevant Files
      description: Links to files or directories related to this bounty.
      placeholder: |
        - `path/to/file.py`
        - `path/to/directory/`

  - type: textarea
    id: resources
    attributes:
      label: Resources
      description: Links to docs, issues, or external references that will help.
      placeholder: |
        - Related issue: #XXXX
        - Docs: https://...
@@ -1,17 +1,149 @@
# Release Notes

## v0.7.1

**Release Date:** March 13, 2026
**Tag:** v0.7.1

### Chrome-Native Browser Control

v0.7.1 replaces Playwright with direct Chrome DevTools Protocol (CDP) integration. The GCU now launches the user's system Chrome via `open -n` on macOS, connects over CDP, and manages the browser lifecycle end-to-end -- no extra browser binary required.
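As a rough illustration of the launch setup described above -- the flags `--remote-debugging-port` and `--no-startup-window` are named in these notes, `--user-data-dir` is Chrome's standard profile flag, and the function names here are hypothetical, not the real `chrome_launcher.py` API:

```python
def chrome_cdp_command(chrome_path: str, port: int, user_data_dir: str) -> list[str]:
    """Build a Chrome command line for CDP control (illustrative flags only)."""
    return [
        chrome_path,
        f"--remote-debugging-port={port}",   # expose the CDP endpoint
        f"--user-data-dir={user_data_dir}",  # isolated profile; user's tabs untouched
        "--no-startup-window",               # no empty window until a page is needed
    ]


def cdp_version_url(port: int) -> str:
    # Chrome serves CDP metadata (including the WebSocket debugger URL) over HTTP here.
    return f"http://127.0.0.1:{port}/json/version"
```

A client would launch Chrome with these flags, fetch the version URL to discover the WebSocket endpoint, and then drive the browser over CDP.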
---

### Highlights

#### System Chrome via CDP

The entire GCU browser stack has been rewritten:

- **Chrome finder & launcher** -- New `chrome_finder.py` discovers installed Chrome and `chrome_launcher.py` manages process lifecycle with `--remote-debugging-port`
- **Coexists with the user's browser** -- `open -n` on macOS launches a separate Chrome instance so the user's tabs stay untouched
- **Dynamic viewport sizing** -- The viewport auto-sizes to the available display area, suppressing Chrome warning bars
- **Orphan cleanup** -- Chrome processes are killed on GCU server shutdown to prevent leaks
- **`--no-startup-window`** -- Chrome launches without a visible window by default until a page is needed

#### Per-Subagent Browser Isolation

Each GCU subagent gets its own Chrome user-data directory, preventing cookie/session cross-contamination:

- Unique browser profiles injected per subagent
- Profiles cleaned up after top-level GCU node execution
- Tab origin and age metadata tracked per subagent
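The per-subagent isolation above boils down to handing each subagent a fresh `--user-data-dir` and removing it afterwards. A minimal sketch of that pattern -- the class and method names are assumptions for illustration, not the framework's API:

```python
import shutil
import tempfile
from pathlib import Path


class ProfileManager:
    """Hand each subagent its own Chrome user-data directory, then clean up."""

    def __init__(self) -> None:
        self._profiles: dict[str, Path] = {}

    def profile_for(self, subagent_id: str) -> Path:
        # One isolated profile per subagent: no shared cookies or sessions.
        if subagent_id not in self._profiles:
            self._profiles[subagent_id] = Path(
                tempfile.mkdtemp(prefix=f"gcu-{subagent_id}-")
            )
        return self._profiles[subagent_id]

    def cleanup(self) -> None:
        # Called after the top-level GCU node finishes.
        for path in self._profiles.values():
            shutil.rmtree(path, ignore_errors=True)
        self._profiles.clear()
```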
#### Dummy Agent Testing Framework

A comprehensive test suite for validating agent graph patterns without LLM calls:

- 8 test modules covering echo, pipeline, branch, parallel merge, retry, feedback loop, worker, and GCU subagent patterns
- Shared fixtures and a `run_all.py` runner for CI integration
- Subagent lifecycle tests
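The simplest of the patterns above, the echo test, might look something like this -- a hypothetical sketch of the idea (a node that needs no LLM), not the actual test module:

```python
def make_echo_node():
    """A dummy node: returns its input payload unchanged, no LLM involved."""
    def run(payload: dict) -> dict:
        return dict(payload)
    return run


def test_echo_roundtrip():
    # The graph machinery can be exercised end-to-end with deterministic nodes.
    node = make_echo_node()
    out = node({"message": "hello"})
    assert out == {"message": "hello"}
```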
---

### What's New

#### GCU Browser

- **Switch from Playwright to system Chrome via CDP** -- Direct CDP connection replaces the Playwright dependency. (@bryanadenhq)
- **Chrome finder and launcher modules** -- `chrome_finder.py` and `chrome_launcher.py` for cross-platform Chrome discovery and process management. (@bryanadenhq)
- **Dynamic viewport sizing** -- Auto-size viewport and suppress Chrome warning bar. (@bryanadenhq)
- **Per-subagent browser profile isolation** -- Unique user-data directories per subagent with cleanup. (@bryanadenhq)
- **Tab origin/age metadata** -- Track which subagent opened each tab and when. (@bryanadenhq)
- **`browser_close_all` tool** -- Bulk tab cleanup for agents managing many pages. (@bryanadenhq)
- **Auto-track popup pages** -- Popups are automatically captured and tracked. (@bryanadenhq)
- **Auto-snapshot from browser interactions** -- Browser interaction tools return screenshots automatically. (@bryanadenhq)
- **Kill orphaned Chrome processes** -- GCU server shutdown cleans up lingering Chrome instances. (@bryanadenhq)
- **`--no-startup-window` Chrome flag** -- Prevent an empty window on launch. (@bryanadenhq)
- **Launch Chrome via `open -n` on macOS** -- Coexist with the user's running browser. (@bryanadenhq)

#### Framework & Runtime

- **Session resume fix for new agents** -- Correctly resume sessions when a new agent is loaded. (@bryanadenhq)
- **Queen upsert fix** -- Prevent duplicate queen entries on session restore. (@bryanadenhq)
- **Anchor worker monitoring to the queen's session ID on cold-restore** -- Worker monitors reconnect to the correct queen after restart. (@bryanadenhq)
- **Update meta.json when loading workers** -- Worker metadata stays in sync with runtime state. (@RichardTang-Aden)
- **Generate worker MCP file correctly** -- Fix MCP config generation for spawned workers. (@RichardTang-Aden)
- **Share event bus so tool events are visible to the parent** -- Tool execution events propagate up to parent graphs. (@bryanadenhq)
- **Subagent activity tracking in queen status** -- Queen instructions include live subagent status. (@bryanadenhq)
- **GCU system prompt updates** -- Auto-snapshots, batching, popup tracking, and close_all guidance. (@bryanadenhq)

#### Frontend

- **Loading spinner in draft panel** -- Shows a spinner during the planning phase instead of a blank panel. (@bryanadenhq)
- **Fix credential modal errors** -- Modal no longer swallows errors; the banner stays visible. (@bryanadenhq)
- **Fix credentials_required loop** -- Stop clearing the flag on modal close to prevent infinite re-prompting. (@bryanadenhq)
- **Fix "Add tab" dropdown overflow** -- Dropdown is no longer hidden when many agents are open. (@prasoonmhwr)

#### Testing

- **Dummy agent test framework** -- 8 test modules (echo, pipeline, branch, parallel merge, retry, feedback loop, worker, GCU subagent) with shared fixtures and CI runner. (@bryanadenhq)
- **Subagent lifecycle tests** -- Validate subagent spawn and completion flows. (@bryanadenhq)

#### Documentation & Infrastructure

- **MCP integration PRD** -- Product requirements for the MCP server registry. (@TimothyZhang7)
- **Skills registry PRD** -- Product requirements for the skill registry system. (@bryanadenhq)
- **Bounty program updates** -- Standard bounty issue template and updated contributor guide. (@bryanadenhq)
- **Windows quickstart** -- Add default context limit for PowerShell setup. (@bryanadenhq)
- **Remove deprecated files** -- Clean up `setup_mcp.py`, `verify_mcp.py`, `antigravity-setup.md`, and `setup-antigravity-mcp.sh`. (@bryanadenhq)

---

### Bug Fixes

- Fix credential modal swallowing errors and banner staying open
- Stop clearing `credentials_required` on modal close to prevent an infinite loop
- Share event bus so tool events are visible to the parent graph
- Use lazy %-formatting in the subagent completion log to avoid an f-string in the logger
- Anchor worker monitoring to the queen's session ID on cold-restore
- Update meta.json when loading workers
- Generate worker MCP file correctly
- Fix "Add tab" dropdown partially hidden when creating multiple agents

---

### Community Contributors

- **Prasoon Mahawar** (@prasoonmhwr) -- Fix UI overflow on agent tab dropdown
- **Richard Tang** (@RichardTang-Aden) -- Worker MCP generation and meta.json fixes

---

### Upgrading

```bash
git pull origin main
uv sync
```

The Playwright dependency is no longer required for GCU browser operations. Chrome must be installed on the host system.

---
## v0.7.0

**Release Date:** March 5, 2026
**Tag:** v0.7.0

Session management refactor release.

---

## v0.5.1

**Release Date:** February 18, 2026
**Tag:** v0.5.1

### The Hive Gets a Brain

v0.5.1 is our most ambitious release yet. Hive agents can now **build other agents** -- the new Hive Coder meta-agent writes, tests, and fixes agent packages from natural language. The runtime grows multi-graph support so one session can orchestrate multiple agents simultaneously. The TUI gets a complete overhaul with an in-app agent picker, live streaming, and seamless escalation to the Coder. And we're now provider-agnostic: Claude Code subscriptions, OpenAI-compatible endpoints, and any LiteLLM-supported model work out of the box.

---

### Highlights

#### Hive Coder -- The Agent That Builds Agents

A native meta-agent that lives inside the framework at `core/framework/agents/hive_coder/`. Give it a natural-language specification and it produces a complete agent package -- goal definition, node prompts, edge routing, MCP tool wiring, tests, and all boilerplate files.

@@ -30,7 +162,7 @@ The Coder ships with:

- **Coder Tools MCP server** -- file I/O, fuzzy-match editing, git snapshots, and sandboxed shell execution (`tools/coder_tools_server.py`)
- **Test generation** -- structural tests for forever-alive agents that don't hang on `runner.run()`

#### Multi-Graph Agent Runtime

`AgentRuntime` now supports loading, managing, and switching between multiple agent graphs within a single session. Six new lifecycle tools give agents (and the TUI) full control:

@@ -44,7 +176,7 @@ await runtime.add_graph("exports/deep_research_agent")

The Hive Coder uses multi-graph internally -- when you escalate from a worker agent, the Coder loads as a separate graph while the worker stays alive in the background.
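To make the multi-graph idea concrete, here is a toy sketch of the bookkeeping an `add_graph`/`remove_graph` pair implies -- a deliberately simplified model, not the real `AgentRuntime` implementation; all field and method internals are assumptions:

```python
class MiniRuntime:
    """Toy sketch of multi-graph bookkeeping; not the real AgentRuntime API."""

    def __init__(self) -> None:
        self.graphs: dict[str, str] = {}   # graph_id -> source path
        self.active: str | None = None

    def add_graph(self, path: str) -> str:
        # Derive a graph id from the export directory name.
        graph_id = path.rstrip("/").split("/")[-1]
        self.graphs[graph_id] = path
        self.active = self.active or graph_id
        return graph_id

    def switch(self, graph_id: str) -> None:
        if graph_id not in self.graphs:
            raise KeyError(graph_id)
        self.active = graph_id

    def remove_graph(self, graph_id: str) -> None:
        # Unloading the active graph falls back to any remaining one.
        self.graphs.pop(graph_id, None)
        if self.active == graph_id:
            self.active = next(iter(self.graphs), None)
```

This mirrors the escalation flow described above: the Coder is loaded as a second graph and switched to, while the worker's graph stays registered in the background.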
#### TUI Revamp

The Terminal UI gets a ground-up rebuild with five major additions:

@@ -54,7 +186,7 @@ The Terminal UI gets a ground-up rebuild with five major additions:

- **PDF attachments** -- `/attach` and `/detach` commands with native OS file dialog (macOS, Linux, Windows)
- **Multi-graph commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>` for managing agent graphs in-session

#### Provider-Agnostic LLM Support

Hive is no longer Anthropic-only. v0.5.1 adds first-class support for:

@@ -66,9 +198,9 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

---

### What's New

#### Architecture & Runtime

- **Hive Coder meta-agent** -- Natural-language agent builder with reference docs, guardian watchdog, and `hive code` CLI command. (@TimothyZhang7)
- **Multi-graph agent sessions** -- `add_graph`/`remove_graph` on AgentRuntime with 6 lifecycle tools (`load_agent`, `unload_agent`, `start_agent`, `restart_agent`, `list_agents`, `get_user_presence`). (@TimothyZhang7)

@@ -79,7 +211,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

- **Pre-start confirmation prompt** -- Interactive prompt before agent execution allowing credential updates or abort. (@RichardTang-Aden)
- **Event bus multi-graph support** -- `graph_id` on events, `filter_graph` on subscriptions, `ESCALATION_REQUESTED` event type, `exclude_own_graph` filter. (@TimothyZhang7)
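The graph-aware subscription idea in the last bullet can be sketched in a few lines -- this is an illustrative model only; the real event bus's types and handler signatures are assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class Event:
    type: str
    graph_id: str


@dataclass
class EventBus:
    """Sketch of graph-aware subscriptions, not the framework's event bus."""
    subscribers: list = field(default_factory=list)

    def subscribe(self, handler: Callable, filter_graph: Optional[str] = None) -> None:
        # filter_graph=None receives events from every graph.
        self.subscribers.append((handler, filter_graph))

    def publish(self, event: Event) -> None:
        for handler, filter_graph in self.subscribers:
            if filter_graph is None or filter_graph == event.graph_id:
                handler(event)
```

With this shape, a TUI-wide listener subscribes with no filter, while a per-graph monitor passes `filter_graph` to see only its own graph's events.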
#### TUI Improvements

- **In-app agent picker** (Ctrl+A) -- Tabbed modal for browsing agents with metadata badges (nodes, tools, sessions, tags). (@TimothyZhang7)
- **Runtime-optional TUI startup** -- Launches without a pre-loaded agent, shows agent picker on startup. (@TimothyZhang7)

@@ -89,7 +221,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

- **Multi-graph TUI commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>`. (@TimothyZhang7)
- **Agent Guardian watchdog** -- Event-driven monitor that catches secondary agent failures and triggers automatic remediation, with `--no-guardian` CLI flag. (@TimothyZhang7)

#### New Tool Integrations

| Tool | Description | Contributor |
| --- | --- | --- |

@@ -99,7 +231,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

| **Google Docs** | Document creation, reading, and editing with OAuth credential support | @haliaeetusvocifer |
| **Gmail enhancements** | Expanded mail operations for inbox management | @bryanadenhq |

#### Infrastructure

- **Default node type → `event_loop`** -- `NodeSpec.node_type` defaults to `"event_loop"` instead of `"llm_tool_use"`. (@TimothyZhang7)
- **Default `max_node_visits` → 0 (unlimited)** -- Nodes default to unlimited visits, reducing friction for feedback loops and forever-alive agents. (@TimothyZhang7)

@@ -112,7 +244,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

---

### Bug Fixes

- Flush WIP accumulator outputs on cancel/failure so edge conditions see correct values on resume
- Stall detection state preserved across resume (no more resets on checkpoint restore)

@@ -125,13 +257,13 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

- Fix email agent version conflicts (@RichardTang-Aden)
- Fix coder tool timeouts (120s for tests, 300s cap for commands)

### Documentation

- Clarify installation and prevent root pip install misuse (@paarths-collab)

---

### Agent Updates

- **Email Inbox Management** -- Consolidate `gmail_inbox_guardian` and `inbox_management` into a single unified agent with updated prompts and config. (@RichardTang-Aden, @bryanadenhq)
- **Job Hunter** -- Updated node prompts, config, and agent metadata; added PDF resume selection. (@bryanadenhq)

@@ -141,7 +273,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

---

### Breaking Changes

- **Deprecated node types raise `RuntimeError`** -- `llm_tool_use`, `llm_generate`, `function`, `router`, `human_input` now fail instead of warning. Migrate to `event_loop`.
- **`NodeSpec.node_type` defaults to `"event_loop"`** (was `"llm_tool_use"`)
@@ -150,7 +282,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

---

### Community Contributors

A huge thank you to everyone who contributed to this release:

@@ -165,14 +297,14 @@ A huge thank you to everyone who contributed to this release:

---

### Upgrading

```bash
git pull origin main
uv sync
```

#### Migration Guide

If your agents use deprecated node types, update them:
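The concrete migration snippet is elided by the diff, but the gist follows from the breaking changes listed above. A minimal illustrative sketch -- the dict-based spec shape and function name here are assumptions, only the `node_type` field and the deprecated type names come from the notes:

```python
# Deprecated node types per the v0.5.1 breaking changes.
DEPRECATED_NODE_TYPES = {"llm_tool_use", "llm_generate", "function", "router", "human_input"}


def migrate_node_spec(spec: dict) -> dict:
    """Return a copy of a node spec rewritten to the event_loop node type."""
    migrated = dict(spec)
    if migrated.get("node_type") in DEPRECATED_NODE_TYPES:
        migrated["node_type"] = "event_loop"
    return migrated
```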
@@ -196,12 +328,3 @@

```bash
hive code
# Or from TUI -- press Ctrl+E to escalate
hive tui
```

---

## What's Next

- **Agent-to-agent communication** -- one agent's output triggers another agent's entry point
- **Cost visibility** -- detailed runtime log of LLM costs per node and per session
- **Persistent webhook subscriptions** -- survive agent restarts without re-registering
- **Remote agent deployment** -- run agents as long-lived services with HTTP APIs
@@ -121,9 +121,15 @@ uv sync

6. Make your changes
7. Run checks and tests:

   ```bash
   make check  # Lint and format checks
   make test   # Core tests
   ```

   On Windows (no make), run directly:

   ```powershell
   uv run ruff check core/ tools/
   uv run ruff format --check core/ tools/
   uv run pytest core/tests/
   ```

8. Commit your changes following our commit conventions
9. Push to your fork and submit a Pull Request

@@ -222,8 +228,7 @@ else: # linux

- **Node.js 18+** (optional, for frontend development)

> **Windows Users:**
> Native Windows is supported. Use `.\quickstart.ps1` for setup and `.\hive.ps1` to run (PowerShell 5.1+). Disable "App Execution Aliases" in Windows settings to avoid Python path conflicts. WSL is also an option but not required.

> **Tip:** Installing Claude Code skills is optional for running existing agents, but required if you plan to **build new agents**.
@@ -27,7 +27,7 @@

  <img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
  <img src="https://img.shields.io/badge/Headless-Development-purple?style=flat-square" alt="Headless" />
  <img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
  <img src="https://img.shields.io/badge/Browser-Use-red?style=flat-square" alt="Browser Use" />
</p>
<p align="center">
  <img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />

@@ -37,7 +37,7 @@

## Overview

Generate a swarm of worker agents with a coding agent (queen) that controls them. Define your goal through conversation with the hive queen, and the framework generates a node graph with dynamically created connection code. When things break, the framework captures failure data, evolves the agent through the coding agent, and redeploys. Built-in human-in-the-loop nodes, browser use, credential management, and real-time monitoring give you control without sacrificing adaptability.

Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.

@@ -45,7 +45,7 @@ Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and

## Who Is Hive For?

Hive is designed for developers and teams who want to build many **autonomous AI agents** quickly, without manually wiring complex workflows.

Hive is a good fit if you:

@@ -84,7 +84,7 @@ Use Hive when you need:

- An LLM provider that powers the agents
- **ripgrep (optional, recommended on Windows):** The `search_files` tool uses ripgrep for faster file search. If not installed, a Python fallback is used. On Windows: `winget install BurntSushi.ripgrep` or `scoop install ripgrep`
> **Windows Users:** Native Windows is supported via `quickstart.ps1` and `hive.ps1`. Run these in PowerShell 5.1+. WSL is also an option but not required.

### Installation

@@ -115,11 +115,9 @@ This sets up:

> **Tip:** To reopen the dashboard later, run `hive open` from the project directory.

<img width="2500" height="1214" alt="home-screen" src="https://github.com/user-attachments/assets/134d897f-5e75-4874-b00b-e0505f6b45c4" />

### Build Your First Agent

Type the agent you want to build in the home input box. The queen will ask you questions and work out a solution with you.

<img width="2500" height="1214" alt="Image" src="https://github.com/user-attachments/assets/1ce19141-a78b-46f5-8d64-dbf987e048f4" />

@@ -131,7 +129,7 @@ Click "Try a sample agent" and check the templates. You can run a template direc

Now you can run an agent by selecting it (either an existing agent or an example agent). Click the Run button at the top left, or ask the queen agent to run it for you.

<img width="2549" height="1174" alt="Screenshot 2026-03-12 at 9 27 36 PM" src="https://github.com/user-attachments/assets/7c7d30fa-9ceb-4c23-95af-b1caa405547d" />

## Features

@@ -143,7 +141,6 @@ Now you can run an agent by selecting the agent (either an existing agent or exa

- **SDK-Wrapped Nodes** - Every node gets shared memory, local RLM memory, monitoring, tools, and LLM access out of the box
- **[Human-in-the-Loop](docs/key_concepts/graph.md#human-in-the-loop)** - Intervention nodes that pause execution for human input with configurable timeouts and escalation
- **Real-time Observability** - WebSocket streaming for live monitoring of agent execution, decisions, and node-to-node communication

## Integration

@@ -392,10 +389,6 @@ Hive generates your entire agent system from natural language goals using a codi

Yes, Hive is fully open-source under the Apache License 2.0. We actively encourage community contributions and collaboration.

**Q: Does Hive support human-in-the-loop workflows?**

Yes, Hive fully supports [human-in-the-loop](docs/key_concepts/graph.md#human-in-the-loop) workflows through intervention nodes that pause execution for human input. These include configurable timeouts and escalation policies, allowing seamless collaboration between human experts and AI agents.

@@ -420,6 +413,16 @@ Visit [docs.adenhq.com](https://docs.adenhq.com/) for complete guides, API refer

Contributions are welcome! Fork the repository, create your feature branch, implement your changes, and submit a pull request. See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.

## Star History

<a href="https://star-history.com/#aden-hive/hive&Date">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date&theme=dark" />
    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date" />
    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date" />
  </picture>
</a>

---

<p align="center">
@@ -62,6 +62,12 @@ _SHARED_TOOLS = [
    "get_agent_checkpoint",
]

# Episodic memory tools — available in every queen phase.
_QUEEN_MEMORY_TOOLS = [
    "write_to_diary",
    "recall_diary",
]

# Queen phase-specific tool sets.

# Planning phase: read-only exploration + design, no write tools.
@@ -84,16 +90,19 @@ _QUEEN_PLANNING_TOOLS = [
    "initialize_and_build_agent",
    # Load existing agent (after user confirms)
    "load_built_agent",
] + _QUEEN_MEMORY_TOOLS

# Building phase: full coding + agent construction tools.
_QUEEN_BUILDING_TOOLS = (
    _SHARED_TOOLS
    + [
        "load_built_agent",
        "list_credentials",
        "replan_agent",
        "save_agent_draft",  # Re-draft during building → auto-dissolves + updates flowchart
    ]
    + _QUEEN_MEMORY_TOOLS
)

# Staging phase: agent loaded but not yet running — inspect, configure, launch.
_QUEEN_STAGING_TOOLS = [
@@ -114,7 +123,7 @@ _QUEEN_STAGING_TOOLS = [
    "set_trigger",
    "remove_trigger",
    "list_triggers",
] + _QUEEN_MEMORY_TOOLS

# Running phase: worker is executing — monitor and control.
_QUEEN_RUNNING_TOOLS = [
@@ -135,12 +144,11 @@ _QUEEN_RUNNING_TOOLS = [
    # Monitoring
    "get_worker_health_summary",
    "notify_operator",
    # Trigger management
    "set_trigger",
    "remove_trigger",
    "list_triggers",
] + _QUEEN_MEMORY_TOOLS
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
@@ -858,6 +866,11 @@ You keep a diary. Use write_to_diary() when something worth remembering \
|
||||
happens: a pipeline went live, the user shared something important, a goal \
|
||||
was reached or abandoned. Write in first person, as you actually experienced \
|
||||
it. One or two paragraphs is enough.
|
||||
|
||||
Use recall_diary() to look up past diary entries when the user asks about \
|
||||
previous sessions ("what happened yesterday?", "what did we work on last \
|
||||
week?") or when you need past context to make a decision. You can filter by \
|
||||
keyword and control how far back to search.
|
||||
"""
|
||||
|
||||
_queen_behavior_always = _queen_behavior_always + _queen_memory_instructions
|
||||
@@ -1035,6 +1048,19 @@ You wake up when:
 If the user asks for progress, call get_worker_status() ONCE and report. \
 If the summary mentions issues, follow up with get_worker_status(focus="issues").
+
+## Subagent delegations (browser automation, GCU)
+
+When the worker delegates to a subagent (e.g., GCU browser automation), expect it \
+to take 2-5 minutes. During this time:
+- Progress will show 0% — this is NORMAL. The subagent only calls set_output at the end.
+- Check get_worker_status(focus="full") for "subagent_activity" — this shows the \
+subagent's latest reasoning text and confirms it is making real progress.
+- Do NOT conclude the subagent is stuck just because progress is 0% or because \
+you see repeated browser_click/browser_snapshot calls — that is the expected \
+pattern for web scraping.
+- Only intervene if: the subagent has been running for 5+ minutes with no new \
+subagent_activity updates, OR the judge escalates.

 ## Handling worker termination ([WORKER_TERMINAL])

 When you receive a `[WORKER_TERMINAL]` event, the worker has finished:
@@ -1063,19 +1089,30 @@ IMPORTANT: Only auto-handle if the user has NOT explicitly told you how to handl
 escalations. If the user gave you instructions (e.g., "just retry on errors", \
 "skip any auth issues"), follow those instructions instead.

+CRITICAL — escalation relay protocol:
+When an escalation requires user input (auth blocks, human review), the worker \
+or its subagent is BLOCKED and waiting for your response. You MUST follow this \
+exact two-step sequence:
+Step 1: call ask_user() to get the user's answer.
+Step 2: call inject_worker_message() with the user's answer IMMEDIATELY after.
+If you skip Step 2, the worker/subagent stays blocked FOREVER and the task hangs. \
+NEVER respond to the user without also calling inject_worker_message() to unblock \
+the worker. Even if the user says "skip" or "cancel", you must still relay that \
+decision via inject_worker_message() so the worker can clean up.
+
 **Auth blocks / credential issues:**
 - ALWAYS ask the user (unless user explicitly told you how to handle this).
 - The worker cannot proceed without valid credentials.
 - Explain which credential is missing or invalid.
-- Use ask_user to get guidance: "Provide credentials", "Skip this task", "Stop and edit agent"
-- Use inject_worker_message() to relay user decisions back to the worker.
+- Step 1: ask_user for guidance — "Provide credentials", "Skip this task", "Stop and edit agent"
+- Step 2: inject_worker_message() with the user's response to unblock the worker.

 **Need human review / approval:**
 - ALWAYS ask the user (unless user explicitly told you how to handle this).
 - The worker is explicitly requesting human judgment.
 - Present the context clearly (what decision is needed, what are the options).
-- Use ask_user with the actual decision options.
-- Use inject_worker_message() to relay user decisions back to the worker.
+- Step 1: ask_user with the actual decision options.
+- Step 2: inject_worker_message() with the user's decision to unblock the worker.

 **Errors / unexpected failures:**
 - Explain what went wrong in plain terms.
@@ -1083,6 +1120,7 @@ escalations. If the user gave you instructions (e.g., "just retry on errors", \
 - Or offer: "Diagnose the issue" → use stop_worker_and_plan() to investigate first.
 - Or offer: "Retry as-is", "Skip this task", "Abort run"
 - (Skip asking if user explicitly told you to auto-retry or auto-skip errors.)
+- If the escalation had wait_for_response: inject_worker_message() with the decision.

 **Informational / progress updates:**
 - Acknowledge briefly and let the worker continue.

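The mandatory two-step relay above can be sketched as a single helper that makes Step 2 impossible to skip. `ask_user`, `inject_worker_message`, and `handle_escalation` here are hypothetical stubs, not the framework's real tool implementations:

```python
import asyncio

# Hypothetical stubs: the real ask_user()/inject_worker_message() are
# framework tools exposed to the queen, not plain Python functions.
RELAYED: list[str] = []

async def ask_user(prompt: str) -> str:
    return "Skip this task"  # canned user answer for the sketch

async def inject_worker_message(message: str) -> None:
    RELAYED.append(message)  # unblocks the (simulated) worker

async def handle_escalation(question: str) -> str:
    answer = await ask_user(question)    # Step 1: get the user's answer
    await inject_worker_message(answer)  # Step 2: always relay, even "skip"/"cancel"
    return answer

answer = asyncio.run(handle_escalation("Auth blocked. How should I proceed?"))
print(answer)  # Skip this task
```

Pairing the two calls in one code path is exactly what the prompt is asking the LLM to do by convention.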
@@ -50,6 +50,23 @@ def read_episodic_memory(d: date | None = None) -> str:
     return path.read_text(encoding="utf-8").strip() if path.exists() else ""


+def _find_recent_episodic(lookback: int = 7) -> tuple[date, str] | None:
+    """Find the most recent non-empty episodic memory within *lookback* days."""
+    from datetime import timedelta
+
+    today = date.today()
+    for offset in range(lookback):
+        d = today - timedelta(days=offset)
+        content = read_episodic_memory(d)
+        if content:
+            return d, content
+    return None
+
+
+# Budget (in characters) for episodic memory in the system prompt.
+_EPISODIC_CHAR_BUDGET = 6_000
+
+
 def format_for_injection() -> str:
     """Format cross-session memory for system prompt injection.

@@ -57,7 +74,7 @@ def format_for_injection() -> str:
     session with only the seed template).
     """
     semantic = read_semantic_memory()
-    episodic = read_episodic_memory()
+    recent = _find_recent_episodic()

     # Suppress injection if semantic is still just the seed template
     if semantic and semantic.startswith("# My Understanding of the User\n\n*No sessions"):
@@ -66,9 +83,18 @@ def format_for_injection() -> str:
     parts: list[str] = []
     if semantic:
         parts.append(semantic)
-    if episodic:
-        today_str = date.today().strftime("%B %-d, %Y")
-        parts.append(f"## Today — {today_str}\n\n{episodic}")
+
+    if recent:
+        d, content = recent
+        # Trim oversized episodic entries to keep the prompt manageable
+        if len(content) > _EPISODIC_CHAR_BUDGET:
+            content = content[:_EPISODIC_CHAR_BUDGET] + "\n\n…(truncated)"
+        today = date.today()
+        if d == today:
+            label = f"## Today — {d.strftime('%B %-d, %Y')}"
+        else:
+            label = f"## {d.strftime('%B %-d, %Y')}"
+        parts.append(f"{label}\n\n{content}")

     if not parts:
         return ""
@@ -100,7 +126,8 @@ def append_episodic_entry(content: str) -> None:
     """
     ep_path = episodic_memory_path()
     ep_path.parent.mkdir(parents=True, exist_ok=True)
-    today_str = date.today().strftime("%B %-d, %Y")
+    today = date.today()
+    today_str = f"{today.strftime('%B')} {today.day}, {today.year}"
     timestamp = datetime.now().strftime("%H:%M")
     if not ep_path.exists():
         header = f"# {today_str}\n\n"
@@ -299,7 +326,8 @@ async def consolidate_queen_memory(

     existing_semantic = read_semantic_memory()
     today_journal = read_episodic_memory()
-    today_str = date.today().strftime("%B %-d, %Y")
+    today = date.today()
+    today_str = f"{today.strftime('%B')} {today.day}, {today.year}"
     adapt_path = session_dir / "data" / "adapt.md"

     user_msg = (
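The swap from `strftime("%B %-d, %Y")` to manual formatting in the hunks above is presumably about portability: `%-d` (day without zero padding) is a glibc extension that is not part of the C standard and fails on Windows (which spells it `%#d`). A minimal check of the portable form:

```python
from datetime import date

d = date(2024, 3, 7)

# strftime("%-d") works on Linux/macOS but not on Windows.
# Assembling the string from date attributes is portable everywhere.
portable = f"{d.strftime('%B')} {d.day}, {d.year}"
print(portable)  # March 7, 2024
```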
@@ -27,7 +27,9 @@
 ## GCU Errors
 15. **Manually wiring browser tools on event_loop nodes** — Use `node_type="gcu"` which auto-includes browser tools. Do NOT manually list browser tool names.
 16. **Using GCU nodes as regular graph nodes** — GCU nodes are subagents only. They must ONLY appear in `sub_agents=["gcu-node-id"]` and be invoked via `delegate_to_sub_agent()`. Never connect via edges or use as entry/terminal nodes.
+17. **Reusing the same GCU node ID for parallel tasks** — Each concurrent browser task needs a distinct GCU node ID (e.g. `gcu-site-a`, `gcu-site-b`). Two `delegate_to_sub_agent` calls with the same `agent_id` share a browser profile and will interfere with each other's pages.
+18. **Passing `profile=` in GCU tool calls** — Profile isolation for parallel subagents is automatic. The framework injects a unique profile per subagent via an asyncio `ContextVar`. Hardcoding `profile="default"` in a GCU system prompt breaks this isolation.

 ## Worker Agent Errors
-17. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
-18. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.
+19. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
+20. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.
@@ -109,6 +109,45 @@ Key rules to bake into GCU node prompts:
 - Keep tool calls per turn ≤10
 - Tab isolation: when browser is already running, use `browser_open(background=true)` and pass `target_id` to every call

+## Multiple Concurrent GCU Subagents
+
+When a task can be parallelized across multiple sites or profiles, declare a distinct GCU
+node for each and invoke them all in the same LLM turn. The framework batches all
+`delegate_to_sub_agent` calls made in one turn and runs them with `asyncio.gather`, so
+they execute concurrently — not sequentially.
+
+**Each GCU subagent automatically gets its own isolated browser context** — no `profile=`
+argument is needed in tool calls. The framework derives a unique profile from the subagent's
+node ID and instance counter and injects it via an asyncio `ContextVar` before the subagent
+runs.
+
+### Example: three sites in parallel
+
+```python
+# Three distinct GCU nodes
+gcu_site_a = NodeSpec(id="gcu-site-a", node_type="gcu", ...)
+gcu_site_b = NodeSpec(id="gcu-site-b", node_type="gcu", ...)
+gcu_site_c = NodeSpec(id="gcu-site-c", node_type="gcu", ...)
+
+orchestrator = NodeSpec(
+    id="orchestrator",
+    node_type="event_loop",
+    sub_agents=["gcu-site-a", "gcu-site-b", "gcu-site-c"],
+    system_prompt="""\
+Call all three subagents in a single response to run them in parallel:
+delegate_to_sub_agent(agent_id="gcu-site-a", task="Scrape prices from site A")
+delegate_to_sub_agent(agent_id="gcu-site-b", task="Scrape prices from site B")
+delegate_to_sub_agent(agent_id="gcu-site-c", task="Scrape prices from site C")
+""",
+)
+```
+
+**Rules:**
+- Use distinct node IDs for each concurrent task — sharing an ID shares the browser context.
+- The GCU node prompts do not need to mention `profile=`; isolation is automatic.
+- Cleanup is automatic at session end, but GCU nodes can call `browser_stop()` explicitly
+  if they want to release resources mid-run.

 ## GCU Anti-Patterns

 - Using `browser_screenshot` to read text (use `browser_snapshot`)
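The automatic isolation documented above rests on a standard asyncio behavior: each task spawned by `asyncio.gather` runs in a copy of the current `contextvars` context, so a `ContextVar` set inside one task is invisible to its siblings. A minimal sketch with toy names (not the framework's API):

```python
import asyncio
import contextvars

# Each gather()ed coroutine is wrapped in a Task that captures a COPY of
# the current context, so profile.set() in one task never leaks to another.
profile: contextvars.ContextVar[str] = contextvars.ContextVar("profile", default="none")

async def subagent(name: str) -> str:
    profile.set(name)       # visible only within this task's context copy
    await asyncio.sleep(0)  # yield so the tasks interleave
    return profile.get()

async def main() -> list[str]:
    return await asyncio.gather(subagent("gcu-site-a"), subagent("gcu-site-b"))

print(asyncio.run(main()))  # ['gcu-site-a', 'gcu-site-b']
```

Because `gather` preserves argument order, each "subagent" reads back exactly the profile it set, with no cross-talk.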
@@ -0,0 +1,286 @@
"""Worker per-run digest (run diary).

Storage layout:
    ~/.hive/agents/{agent_name}/runs/{run_id}/digest.md

Each completed or failed worker run gets one digest file. The queen reads
these via get_worker_status(focus='diary') before digging into live runtime
logs — the diary is a cheap, persistent record that survives across sessions.
"""

from __future__ import annotations

import logging
import traceback
from collections import Counter
from datetime import datetime
from pathlib import Path
from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    from framework.runtime.event_bus import AgentEvent, EventBus

logger = logging.getLogger(__name__)


_DIGEST_SYSTEM = """\
You maintain run digests for a worker agent.
A run digest is a concise, factual record of a single task execution.

Write 3-6 sentences covering:
- What the worker was asked to do (the task/goal)
- What approach it took and what tools it used
- What the outcome was (success, partial, or failure — and why if relevant)
- Any notable issues, retries, or escalations to the queen

Write in third person past tense. Be direct and specific.
Omit routine tool invocations unless the result matters.
Output only the digest prose — no headings, no code fences.
"""


def _worker_runs_dir(agent_name: str) -> Path:
    return Path.home() / ".hive" / "agents" / agent_name / "runs"


def digest_path(agent_name: str, run_id: str) -> Path:
    return _worker_runs_dir(agent_name) / run_id / "digest.md"


def _collect_run_events(bus: "EventBus", run_id: str, limit: int = 2000) -> list["AgentEvent"]:
    """Collect all events belonging to *run_id* from the bus history.

    Strategy: find the EXECUTION_STARTED event that carries ``run_id``,
    extract its ``execution_id``, then query the bus by that execution_id.
    This works because TOOL_CALL_*, EDGE_TRAVERSED, NODE_STALLED etc. carry
    execution_id but not run_id.

    Falls back to a full-scan run_id filter when EXECUTION_STARTED is not
    found (e.g. bus was rotated).
    """
    from framework.runtime.event_bus import EventType

    # Pass 1: find execution_id via EXECUTION_STARTED with matching run_id
    started = bus.get_history(event_type=EventType.EXECUTION_STARTED, limit=limit)
    exec_id: str | None = None
    for e in started:
        if getattr(e, "run_id", None) == run_id and e.execution_id:
            exec_id = e.execution_id
            break

    if exec_id:
        return bus.get_history(execution_id=exec_id, limit=limit)

    # Fallback: scan all events and match by run_id attribute
    return [e for e in bus.get_history(limit=limit) if getattr(e, "run_id", None) == run_id]


def _build_run_context(
    events: list["AgentEvent"],
    outcome_event: "AgentEvent | None",
) -> str:
    """Assemble a plain-text run context string for the digest LLM call."""
    from framework.runtime.event_bus import EventType

    # Reverse so events are in chronological order
    events_chron = list(reversed(events))

    lines: list[str] = []

    # Task input from EXECUTION_STARTED
    started = [e for e in events_chron if e.type == EventType.EXECUTION_STARTED]
    if started:
        inp = started[0].data.get("input", {})
        if inp:
            lines.append(f"Task input: {str(inp)[:400]}")

    # Duration (elapsed so far if no outcome yet)
    ref_ts = outcome_event.timestamp if outcome_event else datetime.utcnow()
    if started:
        elapsed = (ref_ts - started[0].timestamp).total_seconds()
        m, s = divmod(int(elapsed), 60)
        lines.append(f"Duration so far: {m}m {s}s" if m else f"Duration so far: {s}s")

    # Outcome
    if outcome_event is None:
        lines.append("Status: still running (mid-run snapshot)")
    elif outcome_event.type == EventType.EXECUTION_COMPLETED:
        out = outcome_event.data.get("output", {})
        lines.append(f"Outcome: completed. Output: {str(out)[:300]}" if out else "Outcome: completed.")
    else:
        err = outcome_event.data.get("error", "")
        lines.append(f"Outcome: failed. Error: {str(err)[:300]}" if err else "Outcome: failed.")

    # Node path (edge traversals)
    edges = [e for e in events_chron if e.type == EventType.EDGE_TRAVERSED]
    if edges:
        parts = [f"{e.data.get('source_node','?')}->{e.data.get('target_node','?')}" for e in edges[-20:]]
        lines.append(f"Node path: {', '.join(parts)}")

    # Tools used
    tool_events = [e for e in events_chron if e.type == EventType.TOOL_CALL_COMPLETED]
    if tool_events:
        names = [e.data.get("tool_name", "?") for e in tool_events]
        counts = Counter(names)
        summary = ", ".join(
            f"{name}×{n}" if n > 1 else name for name, n in counts.most_common()
        )
        lines.append(f"Tools used: {summary}")
        # Note any tool errors
        errors = [e for e in tool_events if e.data.get("is_error")]
        if errors:
            err_names = Counter(e.data.get("tool_name", "?") for e in errors)
            lines.append(f"Tool errors: {dict(err_names)}")

    # Issues
    issue_map = {
        EventType.NODE_STALLED: "stall",
        EventType.NODE_TOOL_DOOM_LOOP: "doom loop",
        EventType.CONSTRAINT_VIOLATION: "constraint violation",
        EventType.NODE_RETRY: "retry",
    }
    issue_parts: list[str] = []
    for evt_type, label in issue_map.items():
        n = sum(1 for e in events_chron if e.type == evt_type)
        if n:
            issue_parts.append(f"{n} {label}(s)")
    if issue_parts:
        lines.append(f"Issues: {', '.join(issue_parts)}")

    # Escalations to queen
    escalations = [e for e in events_chron if e.type == EventType.ESCALATION_REQUESTED]
    if escalations:
        lines.append(f"Escalations to queen: {len(escalations)}")

    # Final LLM output snippet (last LLM_TEXT_DELTA snapshot)
    text_events = [
        e for e in reversed(events_chron) if e.type == EventType.LLM_TEXT_DELTA
    ]
    if text_events:
        snapshot = text_events[0].data.get("snapshot", "") or ""
        if snapshot:
            lines.append(f"Final LLM output: {snapshot[-400:].strip()}")

    return "\n".join(lines)


async def consolidate_worker_run(
    agent_name: str,
    run_id: str,
    outcome_event: "AgentEvent | None",
    bus: "EventBus",
    llm: Any,
) -> None:
    """Write (or overwrite) the digest for a worker run.

    Called fire-and-forget either:
    - After EXECUTION_COMPLETED / EXECUTION_FAILED (outcome_event set, final write)
    - Periodically during a run on a cooldown timer (outcome_event=None, mid-run snapshot)

    The digest file is always overwritten so each call produces the freshest view.
    The final completion/failure call supersedes any mid-run snapshot.

    Args:
        agent_name: Worker agent directory name (determines storage path).
        run_id: The run ID.
        outcome_event: EXECUTION_COMPLETED or EXECUTION_FAILED event, or None for
            a mid-run snapshot.
        bus: The session EventBus (shared queen + worker).
        llm: LLMProvider with an acomplete() method.
    """
    try:
        events = _collect_run_events(bus, run_id)
        run_context = _build_run_context(events, outcome_event)
        if not run_context:
            logger.debug("worker_memory: no events for run %s, skipping digest", run_id)
            return

        is_final = outcome_event is not None
        logger.info(
            "worker_memory: generating %s digest for run %s ...",
            "final" if is_final else "mid-run",
            run_id,
        )

        from framework.agents.queen.config import default_config

        resp = await llm.acomplete(
            messages=[{"role": "user", "content": run_context}],
            system=_DIGEST_SYSTEM,
            max_tokens=min(default_config.max_tokens, 512),
        )
        digest_text = (resp.content or "").strip()
        if not digest_text:
            logger.warning("worker_memory: LLM returned empty digest for run %s", run_id)
            return

        path = digest_path(agent_name, run_id)
        path.parent.mkdir(parents=True, exist_ok=True)

        from framework.runtime.event_bus import EventType

        ts = (outcome_event.timestamp if outcome_event else datetime.utcnow()).strftime(
            "%Y-%m-%d %H:%M"
        )
        if outcome_event is None:
            status = "running"
        elif outcome_event.type == EventType.EXECUTION_COMPLETED:
            status = "completed"
        else:
            status = "failed"

        path.write_text(
            f"# {run_id}\n\n**{ts}** | {status}\n\n{digest_text}\n",
            encoding="utf-8",
        )
        logger.info(
            "worker_memory: %s digest written for run %s (%d chars)",
            status,
            run_id,
            len(digest_text),
        )

    except Exception:
        tb = traceback.format_exc()
        logger.exception("worker_memory: digest failed for run %s", run_id)
        # Persist the error so it's findable without log access
        error_path = _worker_runs_dir(agent_name) / run_id / "digest_error.txt"
        try:
            error_path.parent.mkdir(parents=True, exist_ok=True)
            error_path.write_text(
                f"run_id: {run_id}\ntime: {datetime.now().isoformat()}\n\n{tb}",
                encoding="utf-8",
            )
        except Exception:
            pass


def read_recent_digests(agent_name: str, max_runs: int = 5) -> list[tuple[str, str]]:
    """Return recent run digests as [(run_id, content), ...], newest first.

    Args:
        agent_name: Worker agent directory name.
        max_runs: Maximum number of digests to return.

    Returns:
        List of (run_id, digest_content) tuples, ordered newest first.
    """
    runs_dir = _worker_runs_dir(agent_name)
    if not runs_dir.exists():
        return []

    digest_files = sorted(
        runs_dir.glob("*/digest.md"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )[:max_runs]

    result: list[tuple[str, str]] = []
    for f in digest_files:
        try:
            content = f.read_text(encoding="utf-8").strip()
            if content:
                result.append((f.parent.name, content))
        except OSError:
            continue
    return result
@@ -121,6 +121,14 @@ def get_gcu_enabled() -> bool:
     return get_hive_config().get("gcu_enabled", True)


+def get_gcu_viewport_scale() -> float:
+    """Return GCU viewport scale factor (0.1-1.0), default 0.8."""
+    scale = get_hive_config().get("gcu_viewport_scale", 0.8)
+    if isinstance(scale, (int, float)) and 0.1 <= scale <= 1.0:
+        return float(scale)
+    return 0.8
+
+
 def get_api_base() -> str | None:
     """Return the api_base URL for OpenAI-compatible endpoints, if configured."""
     llm = get_hive_config().get("llm", {})
@@ -142,13 +142,17 @@ def save_aden_api_key(key: str) -> None:
     os.environ[ADEN_ENV_VAR] = key


-def delete_aden_api_key() -> None:
-    """Remove ADEN_API_KEY from the encrypted store and ``os.environ``."""
+def delete_aden_api_key() -> bool:
+    """Remove ADEN_API_KEY from the encrypted store and ``os.environ``.
+
+    Returns True if the key existed and was deleted, False otherwise.
+    """
+    deleted = False
     try:
         from .storage import EncryptedFileStorage

         storage = EncryptedFileStorage()
-        storage.delete(ADEN_CREDENTIAL_ID)
+        deleted = storage.delete(ADEN_CREDENTIAL_ID)
     except (FileNotFoundError, PermissionError) as e:
         logger.debug("Could not delete %s from encrypted store: %s", ADEN_CREDENTIAL_ID, e)
     except Exception:
@@ -157,8 +161,8 @@ def delete_aden_api_key() -> None:
             ADEN_CREDENTIAL_ID,
             exc_info=True,
         )

     os.environ.pop(ADEN_ENV_VAR, None)
+    return deleted


 # ---------------------------------------------------------------------------
@@ -225,6 +225,12 @@ class LoopConfig:
     cf_grace_turns: int = 1
     tool_doom_loop_enabled: bool = True

+    # --- Per-tool-call timeout ---
+    # Maximum seconds a single tool call may take before being killed.
+    # Prevents hung MCP servers (especially browser/GCU tools) from
+    # blocking the entire event loop indefinitely. 0 = no timeout.
+    tool_call_timeout_seconds: float = 60.0
+
     # --- Lifecycle hooks ---
     # Hooks are async callables keyed by event name. Supported events:
     #   "session_start" — fires once after the first user message is added,
@@ -473,6 +479,8 @@ class EventLoopNode(NodeProtocol):
                 focus_prompt=ctx.node_spec.system_prompt,
                 narrative=ctx.narrative or None,
                 accounts_prompt=ctx.accounts_prompt or None,
+                skills_catalog_prompt=ctx.skills_catalog_prompt or None,
+                protocols_prompt=ctx.protocols_prompt or None,
             )
             if conversation.system_prompt != _current_prompt:
                 conversation.update_system_prompt(_current_prompt)
@@ -494,6 +502,20 @@ class EventLoopNode(NodeProtocol):
         if ctx.accounts_prompt:
             system_prompt = f"{system_prompt}\n\n{ctx.accounts_prompt}"

+        # Append skill catalog and operational protocols
+        if ctx.skills_catalog_prompt:
+            system_prompt = f"{system_prompt}\n\n{ctx.skills_catalog_prompt}"
+            logger.info(
+                "[%s] Injected skills catalog (%d chars)",
+                node_id, len(ctx.skills_catalog_prompt),
+            )
+        if ctx.protocols_prompt:
+            system_prompt = f"{system_prompt}\n\n{ctx.protocols_prompt}"
+            logger.info(
+                "[%s] Injected operational protocols (%d chars)",
+                node_id, len(ctx.protocols_prompt),
+            )
+
         # Inject agent working memory (adapt.md).
         # If it doesn't exist yet, seed it with available context.
         if self._config.spillover_dir:
@@ -575,10 +597,26 @@ class EventLoopNode(NodeProtocol):
         # - Node has sub_agents defined
         # - We are NOT in subagent mode (prevents nested delegation)
         if not ctx.is_subagent_mode:
-            sub_agents = getattr(ctx.node_spec, "sub_agents", [])
-            delegate_tool = self._build_delegate_tool(sub_agents, ctx.node_registry)
-            if delegate_tool:
-                tools.append(delegate_tool)
+            sub_agents = getattr(ctx.node_spec, "sub_agents", None) or []
+            if sub_agents:
+                delegate_tool = self._build_delegate_tool(sub_agents, ctx.node_registry)
+                if delegate_tool:
+                    tools.append(delegate_tool)
+                    logger.info(
+                        "[%s] delegate_to_sub_agent injected (sub_agents=%s)",
+                        node_id,
+                        sub_agents,
+                    )
+                else:
+                    logger.error(
+                        "[%s] _build_delegate_tool returned None for sub_agents=%s",
+                        node_id,
+                        sub_agents,
+                    )
+        else:
+            logger.debug(
+                "[%s] Skipped delegate tool (is_subagent_mode=True)", node_id
+            )

         # Add report_to_parent tool for sub-agents with a report callback
         if ctx.is_subagent_mode and ctx.report_callback is not None:
@@ -1920,6 +1958,11 @@ class EventLoopNode(NodeProtocol):
         # Accumulate ALL tool calls across inner iterations for L3 logging.
         # Unlike real_tool_results (reset each inner iteration), this persists.
         logged_tool_calls: list[dict] = []
+        # Counter for LLM calls within a single iteration. Each pass through
+        # the inner tool loop starts a fresh LLM stream whose snapshot resets
+        # to "". Without this, all calls share the same message ID on the
+        # frontend and the second call's text silently replaces the first.
+        inner_turn = 0

         # Inner tool loop: stream may produce tool calls requiring re-invocation
         while True:
@@ -1960,6 +2003,7 @@ class EventLoopNode(NodeProtocol):
             async def _do_stream(
                 _msgs: list = messages,  # noqa: B006
                 _tc: list[ToolCallEvent] = tool_calls,  # noqa: B006
+                inner_turn: int = inner_turn,
             ) -> None:
                 nonlocal accumulated_text, _stream_error
                 async for event in ctx.llm.stream(
@@ -1978,6 +2022,7 @@ class EventLoopNode(NodeProtocol):
                         ctx,
                         execution_id,
                         iteration=iteration,
+                        inner_turn=inner_turn,
                     )

                 elif isinstance(event, ToolCallEvent):
@@ -2206,6 +2251,7 @@ class EventLoopNode(NodeProtocol):
                     ctx=ctx,
                     execution_id=execution_id,
                     iteration=iteration,
+                    inner_turn=inner_turn,
                 )

                 result = ToolResult(
@@ -2659,6 +2705,7 @@ class EventLoopNode(NodeProtocol):
             )

             # Tool calls processed -- loop back to stream with updated conversation
+            inner_turn += 1

         # -------------------------------------------------------------------
         # Synthetic tools: set_output, ask_user, escalate
@@ -3331,7 +3378,14 @@ class EventLoopNode(NodeProtocol):
         return False, ""

     async def _execute_tool(self, tc: ToolCallEvent) -> ToolResult:
-        """Execute a tool call, handling both sync and async executors."""
+        """Execute a tool call, handling both sync and async executors.
+
+        Applies ``tool_call_timeout_seconds`` from LoopConfig to prevent
+        hung MCP servers from blocking the event loop indefinitely.
+        The initial executor call is offloaded to a thread pool so that
+        sync executors (MCP STDIO tools that block on ``future.result()``)
+        don't freeze the event loop.
+        """
         if self._tool_executor is None:
             return ToolResult(
                 tool_use_id=tc.tool_use_id,
@@ -3339,9 +3393,39 @@ class EventLoopNode(NodeProtocol):
                 is_error=True,
             )
         tool_use = ToolUse(id=tc.tool_use_id, name=tc.tool_name, input=tc.tool_input)
-        result = self._tool_executor(tool_use)
-        if asyncio.iscoroutine(result) or asyncio.isfuture(result):
-            result = await result
+        timeout = self._config.tool_call_timeout_seconds
+
+        async def _run() -> ToolResult:
+            # Offload the executor call to a thread. Sync MCP executors
+            # block on future.result() — running in a thread keeps the
+            # event loop free so asyncio.wait_for can fire the timeout.
+            loop = asyncio.get_running_loop()
+            result = await loop.run_in_executor(
+                None, self._tool_executor, tool_use
+            )
+            # Async executors return a coroutine — await it on the loop
+            if asyncio.iscoroutine(result) or asyncio.isfuture(result):
+                result = await result
+            return result
+
+        try:
+            if timeout > 0:
+                result = await asyncio.wait_for(_run(), timeout=timeout)
+            else:
+                result = await _run()
+        except TimeoutError:
+            logger.warning(
+                "Tool '%s' timed out after %.0fs", tc.tool_name, timeout
+            )
+            return ToolResult(
+                tool_use_id=tc.tool_use_id,
+                content=(
+                    f"Tool '{tc.tool_name}' timed out after {timeout:.0f}s. "
+                    "The operation took too long and was cancelled. "
+                    "Try a simpler request or a different approach."
+                ),
+                is_error=True,
+            )
         return result

     def _record_learning(self, key: str, value: Any) -> None:
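The timeout pattern added in this hunk (thread offload plus `asyncio.wait_for`) can be reduced to a self-contained sketch. `blocking_tool` and `run_with_timeout` are illustrative names; one caveat worth stating is that the timed-out worker thread keeps running to completion, the timeout only stops the caller from waiting on it. (On Python < 3.11, `asyncio.TimeoutError` is not the built-in `TimeoutError`, hence the tuple in the `except`.)

```python
import asyncio
import time

def blocking_tool() -> str:
    time.sleep(0.3)  # simulates a hung sync MCP executor
    return "done"

async def run_with_timeout(timeout: float) -> str:
    loop = asyncio.get_running_loop()
    try:
        # run_in_executor keeps the event loop free, so wait_for can
        # actually fire; calling blocking_tool() inline would block it.
        return await asyncio.wait_for(
            loop.run_in_executor(None, blocking_tool), timeout=timeout
        )
    except (TimeoutError, asyncio.TimeoutError):
        # The worker thread is NOT killed; we just stop waiting for it.
        return "timed out"

print(asyncio.run(run_with_timeout(0.05)))  # timed out
print(asyncio.run(run_with_timeout(1.0)))   # done
```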
@@ -4344,6 +4428,7 @@ class EventLoopNode(NodeProtocol):
         ctx: NodeContext,
         execution_id: str = "",
         iteration: int | None = None,
+        inner_turn: int = 0,
     ) -> None:
         if self._event_bus:
             if ctx.node_spec.client_facing:
@@ -4354,6 +4439,7 @@ class EventLoopNode(NodeProtocol):
                     snapshot=snapshot,
                     execution_id=execution_id,
                     iteration=iteration,
+                    inner_turn=inner_turn,
                 )
             else:
                 await self._event_bus.emit_llm_text_delta(
@@ -4362,6 +4448,7 @@ class EventLoopNode(NodeProtocol):
                     content=content,
                     snapshot=snapshot,
                     execution_id=execution_id,
+                    inner_turn=inner_turn,
                 )

     async def _publish_tool_started(
@@ -4591,11 +4678,21 @@ class EventLoopNode(NodeProtocol):
         subagent_tool_names = set(subagent_spec.tools or [])
         tool_source = ctx.all_tools if ctx.all_tools else ctx.available_tools

-        subagent_tools = [
-            t
-            for t in tool_source
-            if t.name in subagent_tool_names and t.name != "delegate_to_sub_agent"
-        ]
+        # GCU auto-population: GCU nodes declare tools=[] because the runner
+        # auto-populates them at setup time. But that expansion doesn't reach
+        # subagents invoked via delegate_to_sub_agent — the subagent spec still
+        # has the original empty list. When a GCU subagent has no declared
+        # tools, include all catalog tools so browser tools are available.
+        if subagent_spec.node_type == "gcu" and not subagent_tool_names:
+            subagent_tools = [
+                t for t in tool_source if t.name != "delegate_to_sub_agent"
+            ]
+        else:
+            subagent_tools = [
+                t
+                for t in tool_source
+                if t.name in subagent_tool_names and t.name != "delegate_to_sub_agent"
+            ]

         missing = subagent_tool_names - {t.name for t in subagent_tools}
         if missing:
@@ -4679,7 +4776,7 @@ class EventLoopNode(NodeProtocol):
         )

         subagent_node = EventLoopNode(
-            event_bus=None,  # Subagents don't emit events to parent's bus
+            event_bus=self._event_bus,  # Subagent events visible to Queen via shared bus
             judge=SubagentJudge(task=task, max_iterations=max_iter),
             config=LoopConfig(
                 max_iterations=max_iter,  # Tighter budget
@@ -4694,25 +4791,42 @@ class EventLoopNode(NodeProtocol):
|
||||
conversation_store=subagent_conv_store,
|
||||
)
|
||||
|
||||
# Inject a unique GCU browser profile for this subagent so that
|
||||
# concurrent GCU subagents (run via asyncio.gather) each get their own
|
||||
# isolated BrowserContext. asyncio.gather copies the current context
|
||||
# for each coroutine, so the reset token is safe to call in finally.
|
||||
_profile_token = None
|
||||
try:
|
||||
from gcu.browser.session import set_active_profile as _set_gcu_profile
|
||||
|
||||
_profile_token = _set_gcu_profile(f"{agent_id}-{subagent_instance}")
|
||||
except ImportError:
|
||||
pass # GCU tools not installed; no-op
|
||||
|
||||
try:
|
||||
logger.info("🚀 Starting subagent '%s' execution...", agent_id)
|
||||
start_time = time.time()
|
||||
result = await subagent_node.execute(subagent_ctx)
|
||||
latency_ms = int((time.time() - start_time) * 1000)
|
||||
|
||||
separator = "-" * 60
|
||||
logger.info(
|
||||
"\n" + "-" * 60 + "\n"
|
||||
"\n%s\n"
|
||||
"✅ SUBAGENT '%s' COMPLETED\n"
|
||||
"-" * 60 + "\n"
|
||||
"%s\n"
|
||||
"Success: %s\n"
|
||||
"Latency: %dms\n"
|
||||
"Tokens used: %s\n"
|
||||
"Output keys: %s\n" + "-" * 60,
|
||||
"Output keys: %s\n"
|
||||
"%s",
|
||||
separator,
|
||||
agent_id,
|
||||
separator,
|
||||
result.success,
|
||||
latency_ms,
|
||||
result.tokens_used,
|
||||
list(result.output.keys()) if result.output else [],
|
||||
separator,
|
||||
)
|
||||
|
||||
result_json = {
|
||||
@@ -4758,3 +4872,29 @@ class EventLoopNode(NodeProtocol):
|
||||
content=json.dumps(result_json, indent=2),
|
||||
is_error=True,
|
||||
)
|
||||
finally:
|
||||
# Restore the GCU profile context that was set before this subagent ran.
|
||||
if _profile_token is not None:
|
||||
from gcu.browser.session import _active_profile as _gcu_profile_var
|
||||
|
||||
_gcu_profile_var.reset(_profile_token)
|
||||
|
||||
# Stop the browser session for this subagent's profile so tabs are
|
||||
# closed immediately rather than accumulating until server shutdown.
|
||||
if self._tool_executor is not None:
|
||||
_subagent_profile = f"{agent_id}-{subagent_instance}"
|
||||
try:
|
||||
_stop_use = ToolUse(
|
||||
id="gcu-cleanup",
|
||||
name="browser_stop",
|
||||
input={"profile": _subagent_profile},
|
||||
)
|
||||
_stop_result = self._tool_executor(_stop_use)
|
||||
if asyncio.iscoroutine(_stop_result) or asyncio.isfuture(_stop_result):
|
||||
await _stop_result
|
||||
except Exception as _gcu_exc:
|
||||
logger.warning(
|
||||
"GCU browser_stop failed for profile %r: %s",
|
||||
_subagent_profile,
|
||||
_gcu_exc,
|
||||
)
|
||||
|
||||
@@ -27,11 +27,14 @@ from framework.graph.node import (
    SharedMemory,
)
from framework.graph.validator import OutputValidator
from framework.llm.provider import LLMProvider, Tool
from framework.llm.provider import LLMProvider, Tool, ToolUse
from framework.observability import set_trace_context
from framework.runtime.core import Runtime
from framework.schemas.checkpoint import Checkpoint
from framework.storage.checkpoint_store import CheckpointStore
from framework.utils.io import atomic_write

logger = logging.getLogger(__name__)


def _default_max_context_tokens() -> int:
@@ -149,6 +152,8 @@ class GraphExecutor:
        dynamic_tools_provider: Callable | None = None,
        dynamic_prompt_provider: Callable | None = None,
        iteration_metadata_provider: Callable | None = None,
        skills_catalog_prompt: str = "",
        protocols_prompt: str = "",
    ):
        """
        Initialize the executor.
@@ -174,6 +179,8 @@ class GraphExecutor:
                tool list (for mode switching)
            dynamic_prompt_provider: Optional callback returning current
                system prompt (for phase switching)
            skills_catalog_prompt: Available skills catalog for system prompt
            protocols_prompt: Default skill operational protocols for system prompt
        """
        self.runtime = runtime
        self.llm = llm
@@ -195,6 +202,19 @@ class GraphExecutor:
        self.dynamic_tools_provider = dynamic_tools_provider
        self.dynamic_prompt_provider = dynamic_prompt_provider
        self.iteration_metadata_provider = iteration_metadata_provider
        self.skills_catalog_prompt = skills_catalog_prompt
        self.protocols_prompt = protocols_prompt

        if protocols_prompt:
            self.logger.info(
                "GraphExecutor[%s] received protocols_prompt (%d chars)",
                stream_id, len(protocols_prompt),
            )
        else:
            self.logger.warning(
                "GraphExecutor[%s] received EMPTY protocols_prompt",
                stream_id,
            )

        # Parallel execution settings
        self.enable_parallel_execution = enable_parallel_execution
@@ -224,11 +244,11 @@ class GraphExecutor:
        """
        if not self._storage_path:
            return
        state_path = self._storage_path / "state.json"
        try:
            import json as _json
            from datetime import datetime

            state_path = self._storage_path / "state.json"
            if state_path.exists():
                state_data = _json.loads(state_path.read_text(encoding="utf-8"))
            else:
@@ -251,9 +271,14 @@ class GraphExecutor:
            state_data["memory"] = memory_snapshot
            state_data["memory_keys"] = list(memory_snapshot.keys())

            state_path.write_text(_json.dumps(state_data, indent=2), encoding="utf-8")
            with atomic_write(state_path, encoding="utf-8") as f:
                _json.dump(state_data, f, indent=2)
        except Exception:
            pass  # Best-effort — never block execution
            logger.warning(
                "Failed to persist progress state to %s",
                state_path,
                exc_info=True,
            )
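The hunk above swaps a plain `write_text` for the project's `atomic_write` helper so a crash mid-write can never leave a truncated `state.json`. The helper itself isn't shown in the diff; a minimal sketch of what such a context manager typically does — write to a temp file in the same directory, then `os.replace()` into place — is below (an assumption about the implementation, not the project's actual code):

```python
import json
import os
import tempfile
from contextlib import contextmanager
from pathlib import Path


@contextmanager
def atomic_write(path, encoding="utf-8"):
    """Write to a temp file in the target directory, then os.replace() it
    into place so readers never observe a half-written state file."""
    path = Path(path)
    # Temp file must live on the same filesystem for the rename to be atomic.
    fd, tmp_name = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding=encoding) as f:
            yield f
        os.replace(tmp_name, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp_name)  # discard the partial temp file on failure
        raise


# Usage mirroring the diff above:
state_path = Path(tempfile.mkdtemp()) / "state.json"
with atomic_write(state_path) as f:
    json.dump({"memory_keys": []}, f, indent=2)
print(state_path.read_text())
```

The key design choice is the rename: `os.replace` either fully succeeds or leaves the old file untouched, so concurrent readers of `state.json` always see a complete JSON document.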
    def _validate_tools(self, graph: GraphSpec) -> list[str]:
        """
@@ -415,6 +440,14 @@ class GraphExecutor:
        )
        return s1 + "\n\n" + s2

    def _get_runtime_log_session_id(self) -> str:
        """Return the session-backed execution ID for runtime logging, if any."""
        if not self._storage_path:
            return ""
        if self._storage_path.parent.name != "sessions":
            return ""
        return self._storage_path.name

    async def execute(
        self,
        graph: GraphSpec,
@@ -708,10 +741,7 @@ class GraphExecutor:
        )

        if self.runtime_logger:
            # Extract session_id from storage_path if available (for unified sessions)
            session_id = ""
            if self._storage_path and self._storage_path.name.startswith("session_"):
                session_id = self._storage_path.name
            session_id = self._get_runtime_log_session_id()
            self.runtime_logger.start_run(goal_id=goal.id, session_id=session_id)

        self.logger.info(f"🚀 Starting execution: {goal.name}")
@@ -937,6 +967,33 @@ class GraphExecutor:
            self.logger.info("   Executing...")
            result = await node_impl.execute(ctx)

            # GCU tab cleanup: stop the browser profile after a top-level GCU node
            # finishes so tabs don't accumulate. Mirrors the subagent cleanup in
            # EventLoopNode._execute_subagent().
            if node_spec.node_type == "gcu" and self.tool_executor is not None:
                try:
                    from gcu.browser.session import (
                        _active_profile as _gcu_profile_var,
                    )

                    _gcu_profile = _gcu_profile_var.get()
                    _stop_use = ToolUse(
                        id="gcu-cleanup",
                        name="browser_stop",
                        input={"profile": _gcu_profile},
                    )
                    _stop_result = self.tool_executor(_stop_use)
                    if asyncio.iscoroutine(_stop_result) or asyncio.isfuture(_stop_result):
                        await _stop_result
                except ImportError:
                    pass  # GCU not installed
                except Exception as _gcu_exc:
                    logger.warning(
                        "GCU browser_stop failed for profile %r: %s",
                        _gcu_profile,
                        _gcu_exc,
                    )

            # Emit node-completed event (skip event_loop nodes)
            if self._event_bus and node_spec.node_type != "event_loop":
                await self._event_bus.emit_node_loop_completed(
@@ -1765,10 +1822,29 @@ class GraphExecutor:
        if node_spec.tools:
            available_tools = [t for t in self.tools if t.name in node_spec.tools]

        # Create scoped memory view
        # Create scoped memory view.
        # When permissions are restricted (non-empty key lists), auto-include
        # _-prefixed keys used by default skill protocols so agents can read/write
        # operational state (e.g. _working_notes, _batch_ledger) regardless of
        # what the node declares. When key lists are empty (unrestricted), leave
        # unchanged — empty means "allow all".
        read_keys = list(node_spec.input_keys)
        write_keys = list(node_spec.output_keys)
        if read_keys or write_keys:
            from framework.skills.defaults import SHARED_MEMORY_KEYS as _skill_keys

            # Also include any _-prefixed keys already written to memory
            existing_underscore = [k for k in memory._data if k.startswith("_")]
            extra_keys = set(_skill_keys) | set(existing_underscore)
            for k in extra_keys:
                if k not in read_keys:
                    read_keys.append(k)
                if k not in write_keys:
                    write_keys.append(k)

        scoped_memory = memory.with_permissions(
            read_keys=node_spec.input_keys,
            write_keys=node_spec.output_keys,
            read_keys=read_keys,
            write_keys=write_keys,
        )

        # Build per-node accounts prompt (filtered to this node's tools)
@@ -1812,6 +1888,8 @@ class GraphExecutor:
            dynamic_tools_provider=self.dynamic_tools_provider,
            dynamic_prompt_provider=self.dynamic_prompt_provider,
            iteration_metadata_provider=self.iteration_metadata_provider,
            skills_catalog_prompt=self.skills_catalog_prompt,
            protocols_prompt=self.protocols_prompt,
        )

    VALID_NODE_TYPES = {
@@ -2052,6 +2130,10 @@ class GraphExecutor:
                edge=edge,
            )

        # Track which branch wrote which key for memory conflict detection
        fanout_written_keys: dict[str, str] = {}  # key -> branch_id that wrote it
        fanout_keys_lock = asyncio.Lock()

        self.logger.info(f"  ⑂ Fan-out: executing {len(branches)} branches in parallel")
        for branch in branches.values():
            target_spec = graph.get_node(branch.node_id)
@@ -2143,8 +2225,31 @@ class GraphExecutor:
                )

                if result.success:
                    # Write outputs to shared memory using async write
                    # Write outputs to shared memory with conflict detection
                    conflict_strategy = self._parallel_config.memory_conflict_strategy
                    for key, value in result.output.items():
                        async with fanout_keys_lock:
                            prior_branch = fanout_written_keys.get(key)
                            if prior_branch and prior_branch != branch.branch_id:
                                if conflict_strategy == "error":
                                    raise RuntimeError(
                                        f"Memory conflict: key '{key}' already written "
                                        f"by branch '{prior_branch}', "
                                        f"conflicting write from '{branch.branch_id}'"
                                    )
                                elif conflict_strategy == "first_wins":
                                    self.logger.debug(
                                        f"  ⚠ Skipping write to '{key}' "
                                        f"(first_wins: already set by {prior_branch})"
                                    )
                                    continue
                                else:
                                    # last_wins (default): write and log
                                    self.logger.debug(
                                        f"  ⚠ Key '{key}' overwritten "
                                        f"(last_wins: {prior_branch} -> {branch.branch_id})"
                                    )
                            fanout_written_keys[key] = branch.branch_id
                            await memory.write_async(key, value)

                branch.result = result
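The conflict-detection hunk above implements three strategies behind a lock: `error` raises on a second writer, `first_wins` keeps the original value, and `last_wins` (the default) overwrites. A self-contained sketch of that decision table, with illustrative names in place of the framework's memory and branch types:

```python
import asyncio


async def write_with_strategy(memory, written_by, lock, branch_id, key, value,
                              strategy="last_wins"):
    """Write key=value on behalf of branch_id, resolving cross-branch
    conflicts according to the chosen strategy."""
    async with lock:
        prior = written_by.get(key)
        if prior and prior != branch_id:
            if strategy == "error":
                raise RuntimeError(
                    f"Memory conflict on '{key}': {prior} vs {branch_id}"
                )
            if strategy == "first_wins":
                return False  # keep the first branch's value
            # last_wins: fall through and overwrite
        written_by[key] = branch_id
        memory[key] = value
        return True


async def demo(strategy):
    memory, written_by, lock = {}, {}, asyncio.Lock()
    await write_with_strategy(memory, written_by, lock, "branch-a", "k", 1, strategy)
    await write_with_strategy(memory, written_by, lock, "branch-b", "k", 2, strategy)
    return memory["k"]


print(asyncio.run(demo("first_wins")))  # → 1
print(asyncio.run(demo("last_wins")))   # → 2
```

Holding the lock across both the lookup and the write is what makes the check race-free: without it, two branches could both see no prior writer and both proceed.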
@@ -2191,9 +2296,11 @@ class GraphExecutor:

            return branch, e

        # Execute all branches concurrently
        tasks = [execute_single_branch(b) for b in branches.values()]
        results = await asyncio.gather(*tasks, return_exceptions=False)
        # Execute all branches concurrently with per-branch timeout
        timeout = self._parallel_config.branch_timeout_seconds
        branch_list = list(branches.values())
        tasks = [asyncio.wait_for(execute_single_branch(b), timeout=timeout) for b in branch_list]
        results = await asyncio.gather(*tasks, return_exceptions=True)

        # Process results
        total_tokens = 0
@@ -2201,17 +2308,33 @@ class GraphExecutor:
        branch_results: dict[str, NodeResult] = {}
        failed_branches: list[ParallelBranch] = []

        for branch, result in results:
            path.append(branch.node_id)
        for i, result in enumerate(results):
            branch = branch_list[i]

            if isinstance(result, Exception):
            if isinstance(result, asyncio.TimeoutError):
                # Branch timed out
                branch.status = "timed_out"
                branch.error = f"Branch timed out after {timeout}s"
                self.logger.warning(
                    f"  ⏱ Branch {graph.get_node(branch.node_id).name}: "
                    f"timed out after {timeout}s"
                )
                path.append(branch.node_id)
                failed_branches.append(branch)
            elif result is None or not result.success:
            elif isinstance(result, Exception):
                path.append(branch.node_id)
                failed_branches.append(branch)
            else:
                total_tokens += result.tokens_used
                total_latency += result.latency_ms
                branch_results[branch.branch_id] = result
                returned_branch, node_result = result
                path.append(returned_branch.node_id)
                if node_result is None or isinstance(node_result, Exception):
                    failed_branches.append(returned_branch)
                elif not node_result.success:
                    failed_branches.append(returned_branch)
                else:
                    total_tokens += node_result.tokens_used
                    total_latency += node_result.latency_ms
                    branch_results[returned_branch.branch_id] = node_result

        # Handle failures based on config
        if failed_branches:
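The pattern introduced above — wrap each branch in `asyncio.wait_for` and gather with `return_exceptions=True` — turns a timeout or crash in one branch into an exception object in the results list instead of an exception that cancels the whole fan-out. A minimal runnable sketch of that behavior (branch names and delays are illustrative):

```python
import asyncio


async def branch(name, delay):
    await asyncio.sleep(delay)
    return name


async def run_branches(timeout=0.05):
    # Per-branch timeout: wait_for raises TimeoutError for slow branches.
    tasks = [
        asyncio.wait_for(branch("fast", 0.0), timeout=timeout),
        asyncio.wait_for(branch("slow", 1.0), timeout=timeout),
    ]
    # return_exceptions=True: failures come back as values, siblings survive.
    results = await asyncio.gather(*tasks, return_exceptions=True)
    return [
        r if not isinstance(r, Exception) else type(r).__name__
        for r in results
    ]


print(asyncio.run(run_branches()))  # → ['fast', 'TimeoutError']
```

Because `gather` preserves task order, indexing the results list by position (as the diff does with `branch_list[i]`) is a reliable way to map each outcome back to its branch.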
@@ -37,24 +37,42 @@ Follow these rules for reliable, efficient browser interaction.
## Reading Pages
- ALWAYS prefer `browser_snapshot` over `browser_get_text("body")`
  — it returns a compact ~1-5 KB accessibility tree vs 100+ KB of raw HTML.
- Use `browser_snapshot_aria` when you need full ARIA properties
  for detailed element inspection.
- Interaction tools (`browser_click`, `browser_type`, `browser_fill`,
  `browser_scroll`, etc.) return a page snapshot automatically in their
  result. Use it to decide your next action — do NOT call
  `browser_snapshot` separately after every action.
  Only call `browser_snapshot` when you need a fresh view without
  performing an action, or after setting `auto_snapshot=false`.
- Do NOT use `browser_screenshot` for reading text content
  — it produces huge base64 images with no searchable text.
- Only fall back to `browser_get_text` for extracting specific
  small elements by CSS selector.

## Navigation & Waiting
- Always call `browser_wait` after navigation actions
  (`browser_open`, `browser_navigate`, `browser_click` on links)
  to let the page load.
- `browser_navigate` and `browser_open` already wait for the page to
  load (`domcontentloaded`). Do NOT call `browser_wait` with no
  arguments after navigation — it wastes time.
  Only use `browser_wait` when you need a *specific element* or *text*
  to appear (pass `selector` or `text`).
- NEVER re-navigate to the same URL after scrolling
  — this resets your scroll position and loses loaded content.

## Scrolling
- Use large scroll amounts (~2000) when loading more content
  — sites like Twitter and LinkedIn lazy-load content as you page.
- After scrolling, take a new `browser_snapshot` to see updated content.
- The scroll result includes a snapshot automatically — no need to call
  `browser_snapshot` separately.

## Batching Actions
- You can call multiple tools in a single turn — they execute in parallel.
  ALWAYS batch independent actions together. Examples:
  - Fill multiple form fields in one turn.
  - Navigate + snapshot in one turn.
  - Click + scroll if targeting different elements.
- When batching, set `auto_snapshot=false` on all but the last action
  to avoid redundant snapshots.
- Aim for at least 3-5 tool calls per turn. One tool call per turn is
  wasteful.

## Error Recovery
- If a tool fails, retry once with the same approach.
@@ -65,11 +83,33 @@ Follow these rules for reliable, efficient browser interaction.
  then `browser_start`, then retry.

## Tab Management
- Use `browser_tabs` to list open tabs when managing multiple pages.
- Pass `target_id` to tools when operating on a specific tab.
- Open background tabs with `browser_open(url=..., background=true)`
  to avoid losing your current context.
- Close tabs you no longer need with `browser_close` to free resources.

**Close tabs as soon as you are done with them** — not only at the end of the task.
After reading or extracting data from a tab, close it immediately.

**Decision rules:**
- Finished reading/extracting from a tab? → `browser_close(target_id=...)`
- Completed a multi-tab workflow? → `browser_close_finished()` to clean up all your tabs
- More than 3 tabs open? → stop and close finished ones before opening more
- Popup appeared that you didn't need? → close it immediately

**Origin awareness:** `browser_tabs` returns an `origin` field for each tab:
- `"agent"` — you opened it; you own it; close it when done
- `"popup"` — opened by a link or script; close after extracting what you need
- `"startup"` or `"user"` — leave these alone unless the task requires it

**Cleanup tools:**
- `browser_close(target_id=...)` — close one specific tab
- `browser_close_finished()` — close all your agent/popup tabs (safe: leaves startup/user tabs)
- `browser_close_all()` — close everything except the active tab (use only for full reset)

**Multi-tab workflow pattern:**
1. Open background tabs with `browser_open(url=..., background=true)` to stay on your current tab
2. Process each tab and close it with `browser_close` when done
3. When the full workflow completes, call `browser_close_finished()` to confirm cleanup
4. Check `browser_tabs` at any point — it shows `origin` and `age_seconds` per tab

Never accumulate tabs. Treat every tab you open as a resource you must free.

## Login & Auth Walls
- If you see a "Log in" or "Sign up" prompt instead of expected
@@ -565,6 +565,10 @@ class NodeContext:
    # staging / running) without restarting the conversation.
    dynamic_prompt_provider: Any = None  # Callable[[], str] | None

    # Skill system prompts — injected by the skill discovery pipeline
    skills_catalog_prompt: str = ""  # Available skills XML catalog
    protocols_prompt: str = ""  # Default skill operational protocols

    # Per-iteration metadata provider — when set, EventLoopNode merges
    # the returned dict into node_loop_iteration event data. Used by
    # the queen to record the current phase per iteration.
@@ -140,14 +140,18 @@ def compose_system_prompt(
    focus_prompt: str | None,
    narrative: str | None = None,
    accounts_prompt: str | None = None,
    skills_catalog_prompt: str | None = None,
    protocols_prompt: str | None = None,
) -> str:
    """Compose the three-layer system prompt.
    """Compose the multi-layer system prompt.

    Args:
        identity_prompt: Layer 1 — static agent identity (from GraphSpec).
        focus_prompt: Layer 3 — per-node focus directive (from NodeSpec.system_prompt).
        narrative: Layer 2 — auto-generated from conversation state.
        accounts_prompt: Connected accounts block (sits between identity and narrative).
        skills_catalog_prompt: Available skills catalog XML (Agent Skills standard).
        protocols_prompt: Default skill operational protocols section.

    Returns:
        Composed system prompt with all layers present, plus current datetime.
@@ -162,6 +166,14 @@ def compose_system_prompt(
    if accounts_prompt:
        parts.append(f"\n{accounts_prompt}")

    # Skills catalog (discovered skills available for activation)
    if skills_catalog_prompt:
        parts.append(f"\n{skills_catalog_prompt}")

    # Operational protocols (default skill behavioral guidance)
    if protocols_prompt:
        parts.append(f"\n{protocols_prompt}")

    # Layer 2: Narrative (what's happened so far)
    if narrative:
        parts.append(f"\n--- Context (what has happened so far) ---\n{narrative}")
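The hunk above extends the layered prompt by appending optional sections between identity and narrative, each included only when non-empty. A runnable sketch of that stacking pattern — the signature and layer names are simplified stand-ins, not the framework's exact API:

```python
def compose_system_prompt(identity, focus=None, narrative=None,
                          accounts=None, skills_catalog=None, protocols=None):
    """Stack prompt layers: identity first, then optional blocks
    (accounts, skills catalog, protocols), then narrative, then focus."""
    parts = [identity]
    for optional in (accounts, skills_catalog, protocols):
        if optional:  # empty string or None → layer omitted entirely
            parts.append(f"\n{optional}")
    if narrative:
        parts.append(f"\n--- Context (what has happened so far) ---\n{narrative}")
    if focus:
        parts.append(f"\n{focus}")
    return "\n".join(parts)


prompt = compose_system_prompt(
    "You are a helpful agent.",
    focus="Focus: summarize the inbox.",
    skills_catalog="<skills><skill name='email'/></skills>",
)
print(prompt)
```

Guarding each append with a truthiness check is what lets callers pass `""` for layers they don't use without leaving blank sections in the final prompt.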
@@ -45,6 +45,12 @@ def _patch_litellm_anthropic_oauth() -> None:
        from litellm.llms.anthropic.common_utils import AnthropicModelInfo
        from litellm.types.llms.anthropic import ANTHROPIC_OAUTH_TOKEN_PREFIX
    except ImportError:
        logger.warning(
            "Could not apply litellm Anthropic OAuth patch — litellm internals may have "
            "changed. Anthropic OAuth tokens (Claude Code subscriptions) may fail with 401. "
            "See BerriAI/litellm#19618. Current litellm version: %s",
            getattr(litellm, "__version__", "unknown"),
        )
        return

    original = AnthropicModelInfo.validate_environment
@@ -86,10 +92,12 @@ def _patch_litellm_metadata_nonetype() -> None:
    """
    import functools

    patched_count = 0
    for fn_name in ("completion", "acompletion", "responses", "aresponses"):
        original = getattr(litellm, fn_name, None)
        if original is None:
            continue
        patched_count += 1
        if asyncio.iscoroutinefunction(original):

            @functools.wraps(original)
@@ -109,6 +117,14 @@ def _patch_litellm_metadata_nonetype() -> None:

        setattr(litellm, fn_name, _sync_wrapper)

    if patched_count == 0:
        logger.warning(
            "Could not apply litellm metadata=None patch — none of the expected entry "
            "points (completion, acompletion, responses, aresponses) were found. "
            "metadata=None TypeError may occur. Current litellm version: %s",
            getattr(litellm, "__version__", "unknown"),
        )


if litellm is not None:
    _patch_litellm_anthropic_oauth()
@@ -150,6 +166,10 @@ EMPTY_STREAM_RETRY_DELAY = 1.0  # seconds
# Directory for dumping failed requests
FAILED_REQUESTS_DIR = Path.home() / ".hive" / "failed_requests"

# Maximum number of dump files to retain in ~/.hive/failed_requests/.
# Older files are pruned automatically to prevent unbounded disk growth.
MAX_FAILED_REQUEST_DUMPS = 50


def _estimate_tokens(model: str, messages: list[dict]) -> tuple[int, str]:
    """Estimate token count for messages. Returns (token_count, method)."""
@@ -166,6 +186,24 @@ def _estimate_tokens(model: str, messages: list[dict]) -> tuple[int, str]:
    return total_chars // 4, "estimate"


def _prune_failed_request_dumps(max_files: int = MAX_FAILED_REQUEST_DUMPS) -> None:
    """Remove oldest dump files when the count exceeds *max_files*.

    Best-effort: never raises — a pruning failure must not break retry logic.
    """
    try:
        all_dumps = sorted(
            FAILED_REQUESTS_DIR.glob("*.json"),
            key=lambda f: f.stat().st_mtime,
        )
        excess = len(all_dumps) - max_files
        if excess > 0:
            for old_file in all_dumps[:excess]:
                old_file.unlink(missing_ok=True)
    except Exception:
        pass  # Best-effort — never block the caller


def _dump_failed_request(
    model: str,
    kwargs: dict[str, Any],
@@ -197,6 +235,9 @@ def _dump_failed_request(
    with open(filepath, "w", encoding="utf-8") as f:
        json.dump(dump_data, f, indent=2, default=str)

    # Prune old dumps to prevent unbounded disk growth
    _prune_failed_request_dumps()

    return str(filepath)
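The metadata patch above wraps each litellm entry point while preserving its sync/async nature and its metadata (`functools.wraps`). A self-contained sketch of that monkeypatch shape, applied to a hypothetical stand-in module rather than litellm itself (the stand-in functions are illustrative only):

```python
import asyncio
import functools
import types

# Stand-in "library": iterating metadata=None would raise TypeError.
fake_lib = types.SimpleNamespace()


def completion(*, metadata):
    return list(metadata)  # TypeError if metadata is None


async def acompletion(*, metadata):
    return list(metadata)


fake_lib.completion = completion
fake_lib.acompletion = acompletion


def patch_metadata_none(lib):
    """Replace each entry point with a wrapper that normalizes metadata=None."""
    for fn_name in ("completion", "acompletion"):
        original = getattr(lib, fn_name, None)
        if original is None:
            continue  # entry point absent in this version
        if asyncio.iscoroutinefunction(original):
            @functools.wraps(original)
            async def _async_wrapper(*args, _orig=original, **kwargs):
                if kwargs.get("metadata") is None:
                    kwargs["metadata"] = {}  # avoid TypeError downstream
                return await _orig(*args, **kwargs)
            setattr(lib, fn_name, _async_wrapper)
        else:
            @functools.wraps(original)
            def _sync_wrapper(*args, _orig=original, **kwargs):
                if kwargs.get("metadata") is None:
                    kwargs["metadata"] = {}
                return _orig(*args, **kwargs)
            setattr(lib, fn_name, _sync_wrapper)


patch_metadata_none(fake_lib)
print(fake_lib.completion(metadata=None))  # → []
```

The `_orig=original` default argument is the standard trick for binding the loop variable at definition time; without it every wrapper would close over the last `original` seen.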
@@ -83,18 +83,18 @@ configure_logging(level="INFO", format="auto")
- Compact single-line format (easy to stream/parse)
- All trace context fields included automatically

### Human-Readable Format (Development)
### Human-Readable Format (Development / Terminal)

```
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] Starting agent execution
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] Processing input data [node_id:input-processor]
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] LLM call completed [latency_ms:1250] [tokens_used:450]
[INFO ] [agent:sales-agent] Starting agent execution
[INFO ] [agent:sales-agent] Processing input data [node_id:input-processor]
[INFO ] [agent:sales-agent] LLM call completed [latency_ms:1250] [tokens_used:450]
```

**Features:**
- Color-coded log levels
- Shortened IDs for readability (first 8 chars)
- Context prefix shows trace correlation
- Terminal output omits trace_id and execution_id for readability
- For full traceability (e.g. debugging), use `ENV=production` to get JSON file logs with trace_id and execution_id

## Trace Context Fields
@@ -4,8 +4,9 @@ Structured logging with automatic trace context propagation.
Key Features:
- Zero developer friction: Standard logger.info() calls get automatic context
- ContextVar-based propagation: Thread-safe and async-safe
- Dual output modes: JSON for production, human-readable for development
- Correlation IDs: trace_id follows entire request flow automatically
- Dual output modes: JSON for production (full trace_id/execution_id), human-readable for terminal
- Terminal omits trace_id/execution_id for readability
- Use ENV=production for file logs with full traceability

Architecture:
    Runtime.start_run() → Generates trace_id, sets context once
@@ -101,10 +102,11 @@ class StructuredFormatter(logging.Formatter):

class HumanReadableFormatter(logging.Formatter):
    """
    Human-readable formatter for development.
    Human-readable formatter for development (terminal output).

    Provides colorized logs with trace context for local debugging.
    Includes trace_id prefix for correlation - AUTOMATIC!
    Provides colorized logs for local debugging. Omits trace_id and execution_id
    from the terminal for readability; use ENV=production (JSON file logs) when
    traceability is needed.
    """

    COLORS = {
@@ -118,18 +120,11 @@ class HumanReadableFormatter(logging.Formatter):

    def format(self, record: logging.LogRecord) -> str:
        """Format log record as human-readable string."""
        # Get trace context - AUTOMATIC!
        # Get trace context; omit trace_id and execution_id in terminal for readability
        context = trace_context.get() or {}
        trace_id = context.get("trace_id", "")
        execution_id = context.get("execution_id", "")
        agent_id = context.get("agent_id", "")

        # Build context prefix
        prefix_parts = []
        if trace_id:
            prefix_parts.append(f"trace:{trace_id[:8]}")
        if execution_id:
            prefix_parts.append(f"exec:{execution_id[-8:]}")
        if agent_id:
            prefix_parts.append(f"agent:{agent_id}")
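The formatter change above drops the trace/exec prefix parts from terminal output while keeping the agent prefix. A minimal sketch of that kind of context-prefixing `logging.Formatter`, using a plain dict as an illustrative stand-in for the module's `trace_context` ContextVar:

```python
import logging

# Illustrative stand-in for the ContextVar-based trace context.
current_context = {"agent_id": "sales-agent"}


class PrefixFormatter(logging.Formatter):
    """Prepend an [agent:...] prefix to each line; trace_id and
    execution_id stay out of the terminal and belong in JSON file logs."""

    def format(self, record: logging.LogRecord) -> str:
        agent_id = current_context.get("agent_id", "")
        prefix = f"[agent:{agent_id}] " if agent_id else ""
        return f"[{record.levelname:<5}] {prefix}{record.getMessage()}"


handler = logging.StreamHandler()
handler.setFormatter(PrefixFormatter())
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("Starting agent execution")
# → [INFO ] [agent:sales-agent] Starting agent execution
```

Because the formatter reads the context at format time, every `log.info()` call picks up the prefix with no changes at the call sites — the "zero developer friction" property the module docstring advertises.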
@@ -959,6 +959,10 @@ class AgentRunner:

        graph = GraphSpec(**graph_kwargs)

        # Read skill configuration from agent module
        agent_default_skills = getattr(agent_module, "default_skills", None)
        agent_skills = getattr(agent_module, "skills", None)

        # Read runtime config (webhook settings, etc.) if defined
        agent_runtime_config = getattr(agent_module, "runtime_config", None)

@@ -970,7 +974,7 @@ class AgentRunner:
        configure_fn = getattr(agent_module, "configure_for_account", None)
        list_accts_fn = getattr(agent_module, "list_connected_accounts", None)

        return cls(
        runner = cls(
            agent_path=agent_path,
            graph=graph,
            goal=goal,
@@ -986,6 +990,10 @@ class AgentRunner:
            list_accounts=list_accts_fn,
            credential_store=credential_store,
        )
        # Stash skill config for use in _setup()
        runner._agent_default_skills = agent_default_skills
        runner._agent_skills = agent_skills
        return runner

        # Fallback: load from agent.json (legacy JSON-based agents)
        agent_json_path = agent_path / "agent.json"
@@ -1003,7 +1011,7 @@ class AgentRunner:
        except json.JSONDecodeError as exc:
            raise ValueError(f"Invalid JSON in agent export file: {agent_json_path}") from exc

        return cls(
        runner = cls(
            agent_path=agent_path,
            graph=graph,
            goal=goal,
@@ -1014,6 +1022,9 @@ class AgentRunner:
            skip_credential_validation=skip_credential_validation or False,
            credential_store=credential_store,
        )
        runner._agent_default_skills = None
        runner._agent_skills = None
        return runner

    def register_tool(
        self,
@@ -1323,6 +1334,19 @@ class AgentRunner:
        except Exception:
            pass  # Best-effort — agent works without account info

        # Skill configuration — the runtime handles discovery, loading, and
        # prompt rasterization. The runner just builds the config.
        from framework.skills.config import SkillsConfig
        from framework.skills.manager import SkillsManagerConfig

        skills_manager_config = SkillsManagerConfig(
            skills_config=SkillsConfig.from_agent_vars(
                default_skills=getattr(self, "_agent_default_skills", None),
                skills=getattr(self, "_agent_skills", None),
            ),
            project_root=self.agent_path,
        )

        self._setup_agent_runtime(
            tools,
            tool_executor,
@@ -1330,6 +1354,7 @@ class AgentRunner:
            accounts_data=accounts_data,
            tool_provider_map=tool_provider_map,
            event_bus=event_bus,
            skills_manager_config=skills_manager_config,
        )

    def _get_api_key_env_var(self, model: str) -> str | None:
@@ -1425,6 +1450,7 @@ class AgentRunner:
        accounts_data: list[dict] | None = None,
        tool_provider_map: dict[str, str] | None = None,
        event_bus=None,
        skills_manager_config=None,
    ) -> None:
        """Set up multi-entry-point execution using AgentRuntime."""
        entry_points = []
@@ -1484,26 +1510,37 @@ class AgentRunner:
            accounts_data=accounts_data,
            tool_provider_map=tool_provider_map,
            event_bus=event_bus,
            skills_manager_config=skills_manager_config,
        )

        # Pass intro_message through for TUI display
        self._agent_runtime.intro_message = self.intro_message

    # ------------------------------------------------------------------
    # Execution modes
    #
    # run()               – One-shot, blocking execution for worker agents
    #                       (headless CLI via ``hive run``). Validates, runs
    #                       the graph to completion, and returns the result.
    #
    # start() / trigger() – Long-lived runtime for the frontend (queen).
    #                       start() boots the runtime; trigger() sends
    #                       non-blocking execution requests. Used by the
    #                       server session manager and API routes.
    # ------------------------------------------------------------------

    async def run(
        self,
        input_data: dict | None = None,
        session_state: dict | None = None,
        entry_point_id: str | None = None,
    ) -> ExecutionResult:
        """
        Execute the agent with given input data.
        """One-shot execution for worker agents (headless CLI).

        Validates credentials before execution. If any required credentials
        are missing, returns an error result with instructions on how to
        provide them.
        Validates credentials, runs the graph to completion, and returns
        the result. Used by ``hive run`` and programmatic callers.

        For single-entry-point agents, this is the standard execution path.
        For multi-entry-point agents, you can optionally specify which entry point to use.
        For the frontend (queen), use start() + trigger() instead.

        Args:
            input_data: Input data for the agent (e.g., {"lead_id": "123"})
@@ -1629,7 +1666,12 @@ class AgentRunner:
    # === Runtime API ===

    async def start(self) -> None:
        """Start the agent runtime."""
        """Boot the agent runtime for the frontend (queen).

        Pair with trigger() to send execution requests. Used by the
        server session manager. For headless worker agents, use run()
        instead.
        """
        if self._agent_runtime is None:
            self._setup()

@@ -1646,10 +1688,10 @@ class AgentRunner:
        input_data: dict[str, Any],
        correlation_id: str | None = None,
    ) -> str:
        """
|
||||
Trigger execution at a specific entry point (non-blocking).
|
||||
"""Send a non-blocking execution request to a running runtime.
|
||||
|
||||
Returns execution ID for tracking.
|
||||
Used by the server API routes after start(). For headless
|
||||
worker agents, use run() instead.
|
||||
|
||||
Args:
|
||||
entry_point_id: Which entry point to trigger
|
||||
|
||||
@@ -29,6 +29,7 @@ if TYPE_CHECKING:
from framework.graph.edge import GraphSpec
from framework.graph.goal import Goal
from framework.llm.provider import LLMProvider, Tool
from framework.skills.manager import SkillsManagerConfig

logger = logging.getLogger(__name__)

@@ -132,6 +133,10 @@ class AgentRuntime:
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
event_bus: "EventBus | None" = None,
skills_manager_config: "SkillsManagerConfig | None" = None,
# Deprecated — pass skills_manager_config instead.
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
):
"""
Initialize agent runtime.
@@ -153,7 +158,13 @@ class AgentRuntime:
event_bus: Optional external EventBus. If provided, the runtime shares
this bus instead of creating its own. Used by SessionManager to
share a single bus between queen, worker, and judge.
skills_manager_config: Skill configuration — the runtime owns
discovery, loading, and prompt rendering internally.
skills_catalog_prompt: Deprecated. Pre-rendered skills catalog.
protocols_prompt: Deprecated. Pre-rendered operational protocols.
"""
from framework.skills.manager import SkillsManager

self.graph = graph
self.goal = goal
self._config = config or AgentRuntimeConfig()
@@ -161,6 +172,29 @@ class AgentRuntime:
self._checkpoint_config = checkpoint_config
self.accounts_prompt = accounts_prompt

# --- Skill lifecycle: runtime owns the SkillsManager ---
if skills_manager_config is not None:
# New path: config-driven, runtime handles loading
self._skills_manager = SkillsManager(skills_manager_config)
self._skills_manager.load()
elif skills_catalog_prompt or protocols_prompt:
# Legacy path: caller passed pre-rendered strings
import warnings

warnings.warn(
"Passing pre-rendered skills_catalog_prompt/protocols_prompt "
"is deprecated. Pass skills_manager_config instead.",
DeprecationWarning,
stacklevel=2,
)
self._skills_manager = SkillsManager.from_precomputed(
skills_catalog_prompt, protocols_prompt
)
else:
# Bare constructor: auto-load defaults
self._skills_manager = SkillsManager()
self._skills_manager.load()

# Primary graph identity
self._graph_id: str = graph_id or "primary"

@@ -216,6 +250,18 @@ class AgentRuntime:
# Optional greeting shown to user on TUI load (set by AgentRunner)
self.intro_message: str = ""

# ------------------------------------------------------------------
# Skill prompt accessors (read by ExecutionStream constructors)
# ------------------------------------------------------------------

@property
def skills_catalog_prompt(self) -> str:
return self._skills_manager.skills_catalog_prompt

@property
def protocols_prompt(self) -> str:
return self._skills_manager.protocols_prompt

def register_entry_point(self, spec: EntryPointSpec) -> None:
"""
Register a named entry point for the agent.
@@ -293,6 +339,8 @@ class AgentRuntime:
accounts_prompt=self._accounts_prompt,
accounts_data=self._accounts_data,
tool_provider_map=self._tool_provider_map,
skills_catalog_prompt=self.skills_catalog_prompt,
protocols_prompt=self.protocols_prompt,
)
await stream.start()
self._streams[ep_id] = stream
@@ -393,18 +441,24 @@ class AgentRuntime:

tc = spec.trigger_config
cron_expr = tc.get("cron")
interval = tc.get("interval_minutes")
_raw_interval = tc.get("interval_minutes")
interval = float(_raw_interval) if _raw_interval is not None else None
run_immediately = tc.get("run_immediately", False)

if cron_expr:
# Cron expression mode — takes priority over interval_minutes
try:
from croniter import croniter
except ImportError as e:
raise RuntimeError(
"croniter is required for cron-based entry points. "
"Install it with: uv pip install croniter"
) from e

# Validate the expression upfront
try:
if not croniter.is_valid(cron_expr):
raise ValueError(f"Invalid cron expression: {cron_expr}")
except (ImportError, ValueError) as e:
except ValueError as e:
logger.warning(
"Entry point '%s' has invalid cron config: %s",
ep_id,
@@ -544,7 +598,7 @@ class AgentRuntime:
ep_id,
cron_expr,
run_immediately,
idle_timeout=tc.get("idle_timeout_seconds", 300),
idle_timeout=float(tc.get("idle_timeout_seconds", 300)),
)()
)
self._timer_tasks.append(task)
@@ -674,7 +728,7 @@ class AgentRuntime:
ep_id,
interval,
run_immediately,
idle_timeout=tc.get("idle_timeout_seconds", 300),
idle_timeout=float(tc.get("idle_timeout_seconds", 300)),
)()
)
self._timer_tasks.append(task)
@@ -921,6 +975,8 @@ class AgentRuntime:
accounts_prompt=self._accounts_prompt,
accounts_data=self._accounts_data,
tool_provider_map=self._tool_provider_map,
skills_catalog_prompt=self.skills_catalog_prompt,
protocols_prompt=self.protocols_prompt,
)
if self._running:
await stream.start()
@@ -999,7 +1055,8 @@ class AgentRuntime:
if spec.trigger_type != "timer":
continue
tc = spec.trigger_config
interval = tc.get("interval_minutes")
_raw_interval = tc.get("interval_minutes")
interval = float(_raw_interval) if _raw_interval is not None else None
run_immediately = tc.get("run_immediately", False)

if interval and interval > 0 and self._running:
@@ -1144,7 +1201,7 @@ class AgentRuntime:
ep_id,
interval,
run_immediately,
idle_timeout=tc.get("idle_timeout_seconds", 300),
idle_timeout=float(tc.get("idle_timeout_seconds", 300)),
)()
)
timer_tasks.append(task)
@@ -1699,6 +1756,10 @@ def create_agent_runtime(
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
event_bus: "EventBus | None" = None,
skills_manager_config: "SkillsManagerConfig | None" = None,
# Deprecated — pass skills_manager_config instead.
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
) -> AgentRuntime:
"""
Create and configure an AgentRuntime with entry points.
@@ -1725,6 +1786,10 @@ def create_agent_runtime(
accounts_data: Raw account data for per-node prompt generation.
tool_provider_map: Tool name to provider name mapping for account routing.
event_bus: Optional external EventBus to share with other components.
skills_manager_config: Skill configuration — the runtime owns
discovery, loading, and prompt rendering internally.
skills_catalog_prompt: Deprecated. Pre-rendered skills catalog.
protocols_prompt: Deprecated. Pre-rendered operational protocols.

Returns:
Configured AgentRuntime (not yet started)
@@ -1751,6 +1816,9 @@ def create_agent_runtime(
accounts_data=accounts_data,
tool_provider_map=tool_provider_map,
event_bus=event_bus,
skills_manager_config=skills_manager_config,
skills_catalog_prompt=skills_catalog_prompt,
protocols_prompt=protocols_prompt,
)

for spec in entry_points:

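Editor's note: the three-branch skill loading in `AgentRuntime.__init__` above follows a config-over-legacy-over-default precedence, with a `DeprecationWarning` on the legacy path. A minimal standalone sketch of the same pattern, using hypothetical names (`SkillSource`, `select_skill_source`) that are not part of this framework:

```python
import warnings


class SkillSource:
    """Minimal stand-in for a skills manager: just holds two prompt strings."""

    def __init__(self, catalog: str = "", protocols: str = ""):
        self.catalog = catalog
        self.protocols = protocols


def select_skill_source(config=None, catalog_prompt="", protocols_prompt=""):
    """Config wins; pre-rendered strings still work but warn; else defaults."""
    if config is not None:
        # New path: config-driven construction.
        return SkillSource(**config)
    if catalog_prompt or protocols_prompt:
        # Legacy path: caller passed pre-rendered strings.
        warnings.warn(
            "Pre-rendered prompts are deprecated; pass a config instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return SkillSource(catalog_prompt, protocols_prompt)
    # Bare call: fall back to defaults.
    return SkillSource()


# Legacy callers keep working, but the deprecation is surfaced.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    legacy = select_skill_source(catalog_prompt="catalog text")
```

The key design choice mirrored here is that old call sites stay functional during the migration window instead of breaking outright.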
@@ -262,7 +262,7 @@ class EventBus:
self._session_log: IO[str] | None = None
self._session_log_iteration_offset: int = 0
# Accumulator for client_output_delta snapshots — flushed on llm_turn_complete.
# Key: (stream_id, node_id, execution_id, iteration) → latest AgentEvent
# Key: (stream_id, node_id, execution_id, iteration, inner_turn) → latest AgentEvent
self._pending_output_snapshots: dict[tuple, AgentEvent] = {}

def set_session_log(self, path: Path, *, iteration_offset: int = 0) -> None:
@@ -328,6 +328,7 @@ class EventBus:
event.node_id,
event.execution_id,
event.data.get("iteration"),
event.data.get("inner_turn", 0),
)
self._pending_output_snapshots[key] = event
return
@@ -361,7 +362,7 @@ class EventBus:
to_flush: list[tuple] = []
for key, _evt in self._pending_output_snapshots.items():
if stream_id is not None:
k_stream, k_node, k_exec, _ = key
k_stream, k_node, k_exec, _, _ = key
if k_stream != stream_id or k_node != node_id or k_exec != execution_id:
continue
to_flush.append(key)
@@ -749,6 +750,7 @@ class EventBus:
content: str,
snapshot: str,
execution_id: str | None = None,
inner_turn: int = 0,
) -> None:
"""Emit LLM text delta event."""
await self.publish(
@@ -757,7 +759,7 @@ class EventBus:
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={"content": content, "snapshot": snapshot},
data={"content": content, "snapshot": snapshot, "inner_turn": inner_turn},
)
)

@@ -873,9 +875,10 @@ class EventBus:
snapshot: str,
execution_id: str | None = None,
iteration: int | None = None,
inner_turn: int = 0,
) -> None:
"""Emit client output delta event (client_facing=True nodes)."""
data: dict = {"content": content, "snapshot": snapshot}
data: dict = {"content": content, "snapshot": snapshot, "inner_turn": inner_turn}
if iteration is not None:
data["iteration"] = iteration
await self.publish(

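Editor's note: the EventBus change above widens the snapshot accumulator key with `inner_turn`, so deltas from different inner turns of the same iteration are no longer coalesced into one snapshot. The pattern reduces to a latest-value dictionary keyed by a tuple, with a prefix-matched flush; a self-contained sketch with illustrative names:

```python
# Keep only the latest snapshot per (stream, node, execution, iteration, inner_turn),
# then flush every pending entry matching a (stream, node, execution) prefix.
pending: dict[tuple, str] = {}


def record(stream_id, node_id, exec_id, iteration, inner_turn, snapshot):
    # Later snapshots with the same full key overwrite earlier ones.
    pending[(stream_id, node_id, exec_id, iteration, inner_turn)] = snapshot


def flush(stream_id, node_id, exec_id):
    flushed = []
    for key in list(pending):
        k_stream, k_node, k_exec, _, _ = key
        if (k_stream, k_node, k_exec) == (stream_id, node_id, exec_id):
            flushed.append(pending.pop(key))
    return flushed


record("s1", "n1", "e1", 0, 0, "first draft")
record("s1", "n1", "e1", 0, 0, "final draft")   # same key: overwrites
record("s1", "n1", "e1", 0, 1, "turn-2 draft")  # distinct inner_turn: kept separately
record("s2", "n1", "e1", 0, 0, "other stream")  # different stream: untouched by flush
result = flush("s1", "n1", "e1")
```

Without the extra tuple element, the second inner turn's draft would have overwritten the first turn's final snapshot, which is the coalescing bug the diff addresses.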
@@ -186,6 +186,8 @@ class ExecutionStream:
accounts_prompt: str = "",
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
):
"""
Initialize execution stream.
@@ -209,6 +211,8 @@ class ExecutionStream:
accounts_prompt: Connected accounts block for system prompt injection
accounts_data: Raw account data for per-node prompt generation
tool_provider_map: Tool name to provider name mapping for account routing
skills_catalog_prompt: Available skills catalog for system prompt
protocols_prompt: Default skill operational protocols for system prompt
"""
self.stream_id = stream_id
self.entry_spec = entry_spec
@@ -230,6 +234,20 @@ class ExecutionStream:
self._accounts_prompt = accounts_prompt
self._accounts_data = accounts_data
self._tool_provider_map = tool_provider_map
self._skills_catalog_prompt = skills_catalog_prompt
self._protocols_prompt = protocols_prompt

_es_logger = logging.getLogger(__name__)
if protocols_prompt:
_es_logger.info(
"ExecutionStream[%s] received protocols_prompt (%d chars)",
stream_id, len(protocols_prompt),
)
else:
_es_logger.warning(
"ExecutionStream[%s] received EMPTY protocols_prompt",
stream_id,
)

# Create stream-scoped runtime
self._runtime = StreamRuntime(
@@ -675,6 +693,8 @@ class ExecutionStream:
accounts_prompt=self._accounts_prompt,
accounts_data=self._accounts_data,
tool_provider_map=self._tool_provider_map,
skills_catalog_prompt=self._skills_catalog_prompt,
protocols_prompt=self._protocols_prompt,
)
# Track executor so inject_input() can reach EventLoopNode instances
self._active_executors[execution_id] = executor

@@ -47,25 +47,34 @@ class RuntimeLogStore:
self._base_path = base_path
# Note: _runs_dir is determined per-run_id by _get_run_dir()

def _session_logs_dir(self, run_id: str) -> Path:
"""Return the unified session-backed logs directory for a run ID."""
is_runtime_logs = self._base_path.name == "runtime_logs"
root = self._base_path.parent if is_runtime_logs else self._base_path
return root / "sessions" / run_id / "logs"

def _legacy_run_dir(self, run_id: str) -> Path:
"""Return the deprecated standalone runs directory for a run ID."""
return self._base_path / "runs" / run_id

def _get_run_dir(self, run_id: str) -> Path:
"""Determine run directory path based on run_id format.

- New format (session_*): {storage_root}/sessions/{run_id}/logs/
- Session-backed runs: {storage_root}/sessions/{run_id}/logs/
- Old format (anything else): {base_path}/runs/{run_id}/ (deprecated)
"""
if run_id.startswith("session_"):
is_runtime_logs = self._base_path.name == "runtime_logs"
root = self._base_path.parent if is_runtime_logs else self._base_path
return root / "sessions" / run_id / "logs"
session_run_dir = self._session_logs_dir(run_id)
if session_run_dir.exists() or run_id.startswith("session_"):
return session_run_dir
import warnings

warnings.warn(
f"Reading logs from deprecated location for run_id={run_id}. "
"New sessions use unified storage at sessions/session_*/logs/",
"New sessions use unified storage at sessions/<session_id>/logs/",
DeprecationWarning,
stacklevel=3,
)
return self._base_path / "runs" / run_id
return self._legacy_run_dir(run_id)

# -------------------------------------------------------------------
# Incremental write (sync — called from locked sections)
@@ -76,6 +85,10 @@ class RuntimeLogStore:
run_dir = self._get_run_dir(run_id)
run_dir.mkdir(parents=True, exist_ok=True)

def ensure_session_run_dir(self, run_id: str) -> None:
"""Create the unified session-backed log directory immediately."""
self._session_logs_dir(run_id).mkdir(parents=True, exist_ok=True)

def append_step(self, run_id: str, step: NodeStepLog) -> None:
"""Append one JSONL line to tool_logs.jsonl. Sync."""
path = self._get_run_dir(run_id) / "tool_logs.jsonl"
@@ -200,17 +213,17 @@ class RuntimeLogStore:
run_ids = []

# Scan new location: base_path/sessions/{session_id}/logs/
# Determine the correct base path for sessions
is_runtime_logs = self._base_path.name == "runtime_logs"
root = self._base_path.parent if is_runtime_logs else self._base_path
sessions_dir = root / "sessions"

if sessions_dir.exists():
for session_dir in sessions_dir.iterdir():
if session_dir.is_dir() and session_dir.name.startswith("session_"):
logs_dir = session_dir / "logs"
if logs_dir.exists() and logs_dir.is_dir():
run_ids.append(session_dir.name)
if not session_dir.is_dir():
continue
logs_dir = session_dir / "logs"
if logs_dir.exists() and logs_dir.is_dir():
run_ids.append(session_dir.name)

# Scan old location: base_path/runs/ (deprecated)
old_runs_dir = self._base_path / "runs"

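Editor's note: after this change, `_get_run_dir` resolves the unified sessions layout not only for `session_*` IDs but for any custom ID whose sessions directory already exists, warning only on the true legacy fallback. A self-contained sketch of that resolution order (function name and layout are illustrative, not the store's actual API):

```python
import tempfile
import warnings
from pathlib import Path


def get_run_dir(base: Path, run_id: str) -> Path:
    """Prefer the unified sessions layout; fall back to legacy runs/ with a warning."""
    session_dir = base / "sessions" / run_id / "logs"
    # Custom IDs count as session-backed once their directory exists on disk.
    if session_dir.exists() or run_id.startswith("session_"):
        return session_dir
    warnings.warn(
        f"Reading logs from deprecated location for run_id={run_id}",
        DeprecationWarning,
        stacklevel=2,
    )
    return base / "runs" / run_id


base = Path(tempfile.mkdtemp())

# A custom-ID run whose sessions/ directory already exists resolves to the new layout.
(base / "sessions" / "my-custom-run" / "logs").mkdir(parents=True)
new_style = get_run_dir(base, "my-custom-run")

# An unknown ID still falls back to the deprecated runs/ layout.
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    old_style = get_run_dir(base, "20240101T000000_abcd1234")
```

The `exists()` probe is what lets `my-custom-session`-style IDs (exercised by the new tests below) share the unified storage without renaming.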
@@ -66,15 +66,16 @@ class RuntimeLogger:
"""
if session_id:
self._run_id = session_id
self._store.ensure_session_run_dir(self._run_id)
else:
ts = datetime.now(UTC).strftime("%Y%m%dT%H%M%S")
short_uuid = uuid.uuid4().hex[:8]
self._run_id = f"{ts}_{short_uuid}"
self._store.ensure_run_dir(self._run_id)

self._goal_id = goal_id
self._started_at = datetime.now(UTC).isoformat()
self._logged_node_ids = set()
self._store.ensure_run_dir(self._run_id)
return self._run_id

def log_step(

@@ -0,0 +1,29 @@
"""Tests for custom session-backed runtime logging paths."""

from pathlib import Path
from unittest.mock import MagicMock

from framework.graph.executor import GraphExecutor
from framework.runtime.runtime_log_store import RuntimeLogStore
from framework.runtime.runtime_logger import RuntimeLogger


def test_graph_executor_uses_custom_session_dir_name_for_runtime_logs():
executor = GraphExecutor(
runtime=MagicMock(),
storage_path=Path("/tmp/test-agent/sessions/my-custom-session"),
)

assert executor._get_runtime_log_session_id() == "my-custom-session"


def test_runtime_logger_creates_session_log_dir_for_custom_session_id(tmp_path):
base = tmp_path / ".hive" / "agents" / "test_agent"
base.mkdir(parents=True)
store = RuntimeLogStore(base)
logger = RuntimeLogger(store=store, agent_id="test-agent")

run_id = logger.start_run(goal_id="goal-1", session_id="my-custom-session")

assert run_id == "my-custom-session"
assert (base / "sessions" / "my-custom-session" / "logs").is_dir()
@@ -132,6 +132,7 @@ async def create_queen(
session.worker_path,
stream_id="queen",
worker_graph_id=session.worker_runtime._graph_id,
default_session_id=session.id,
)

queen_tools = list(queen_registry.get_tools().values())
@@ -215,6 +216,16 @@ async def create_queen(
+ worker_identity
)

# ---- Default skill protocols -------------------------------------
try:
from framework.skills.manager import SkillsManager

_queen_skills_mgr = SkillsManager()
_queen_skills_mgr.load()
phase_state.protocols_prompt = _queen_skills_mgr.protocols_prompt
except Exception:
logger.debug("Queen skill loading failed (non-fatal)", exc_info=True)

# ---- Persona hook ------------------------------------------------
_session_llm = session.llm
_session_event_bus = session.event_bus

@@ -103,7 +103,9 @@ async def handle_delete_credential(request: web.Request) -> web.Response:
if credential_id == "aden_api_key":
from framework.credentials.key_storage import delete_aden_api_key

delete_aden_api_key()
deleted = delete_aden_api_key()
if not deleted:
return web.json_response({"error": "Credential 'aden_api_key' not found"}, status=404)
return web.json_response({"deleted": True})

store = _get_store(request)
@@ -178,7 +180,10 @@ async def handle_check_agent(request: web.Request) -> web.Response:
)
except Exception as e:
logger.exception(f"Error checking agent credentials: {e}")
return web.json_response({"error": str(e)}, status=500)
return web.json_response(
{"error": "Internal server error while checking credentials"},
status=500,
)


def _status_to_dict(c) -> dict:

@@ -492,12 +492,14 @@ async def handle_list_worker_sessions(request: web.Request) -> web.Response:

sessions = []
for d in sorted(sess_dir.iterdir(), reverse=True):
if not d.is_dir() or not d.name.startswith("session_"):
if not d.is_dir():
continue
state_path = d / "state.json"
if not d.name.startswith("session_") and not state_path.exists():
continue

entry: dict = {"session_id": d.name}

state_path = d / "state.json"
if state_path.exists():
try:
state = json.loads(state_path.read_text(encoding="utf-8"))

@@ -47,6 +47,8 @@ class Session:
worker_handoff_sub: str | None = None
# Memory consolidation subscription (fires on CONTEXT_COMPACTED)
memory_consolidation_sub: str | None = None
# Worker run digest subscription (fires on EXECUTION_COMPLETED / EXECUTION_FAILED)
worker_digest_sub: str | None = None
# Trigger definitions loaded from agent's triggers.json (available but inactive)
available_triggers: dict[str, TriggerDefinition] = field(default_factory=dict)
# Active trigger tracking (IDs currently firing + their asyncio tasks)
@@ -297,6 +299,9 @@ class SessionManager:
session.worker_runtime = runtime
session.worker_info = info

# Subscribe to execution completion for per-run digest generation
self._subscribe_worker_digest(session)

async with self._lock:
self._loading.discard(session.id)

@@ -427,6 +432,26 @@ class SessionManager:
if agent_path.name != "queen" and session.worker_runtime:
await self._notify_queen_worker_loaded(session)

# Update meta.json so cold-restore can discover this session by agent_path
storage_session_id = session.queen_resume_from or session.id
meta_path = Path.home() / ".hive" / "queen" / "session" / storage_session_id / "meta.json"
try:
_agent_name = (
session.worker_info.name
if session.worker_info
else str(agent_path.name).replace("_", " ").title()
)
existing_meta = {}
if meta_path.exists():
existing_meta = json.loads(meta_path.read_text(encoding="utf-8"))
existing_meta["agent_name"] = _agent_name
existing_meta["agent_path"] = (
str(session.worker_path) if session.worker_path else str(agent_path)
)
meta_path.write_text(json.dumps(existing_meta), encoding="utf-8")
except OSError:
pass

# Restore previously active triggers from persisted session state
if session.available_triggers and session.worker_runtime:
try:
@@ -506,6 +531,13 @@ class SessionManager:
await self._emit_trigger_events(session, "removed", session.available_triggers)
session.available_triggers.clear()

if session.worker_digest_sub is not None:
try:
session.event_bus.unsubscribe(session.worker_digest_sub)
except Exception:
pass
session.worker_digest_sub = None

worker_id = session.worker_id
session.worker_id = None
session.worker_path = None
@@ -543,6 +575,13 @@ class SessionManager:
pass
session.worker_handoff_sub = None

if session.worker_digest_sub is not None:
try:
session.event_bus.unsubscribe(session.worker_digest_sub)
except Exception:
pass
session.worker_digest_sub = None

# Stop queen and memory consolidation subscription
if session.memory_consolidation_sub is not None:
try:
@@ -627,6 +666,123 @@ class SessionManager:
else:
logger.warning("Worker handoff received but queen node not ready")

def _subscribe_worker_digest(self, session: Session) -> None:
"""Subscribe to worker events to write per-run digests.

Two triggers:
- NODE_LOOP_ITERATION: write a mid-run snapshot, throttled to at most
once every _DIGEST_COOLDOWN seconds per execution.
- EXECUTION_COMPLETED / EXECUTION_FAILED: always write the final digest,
bypassing the cooldown.
"""
import time as _time

from framework.runtime.event_bus import EventType as _ET

_DIGEST_COOLDOWN = 300.0 # seconds between mid-run snapshots

if session.worker_digest_sub is not None:
try:
session.event_bus.unsubscribe(session.worker_digest_sub)
except Exception:
pass
session.worker_digest_sub = None

agent_name = session.worker_path.name if session.worker_path else None
if not agent_name:
return

_agent_name = agent_name
_llm = session.llm
_bus = session.event_bus
# per-execution_id monotonic timestamp of last mid-run digest
_last_digest: dict[str, float] = {}

def _resolve_run_id(exec_id: str) -> str | None:
"""Look up the run_id for a given execution_id via EXECUTION_STARTED history."""
for e in _bus.get_history(event_type=_ET.EXECUTION_STARTED, limit=200):
if e.execution_id == exec_id and getattr(e, "run_id", None):
return e.run_id
return None

async def _inject_digest_to_queen(run_id: str) -> None:
"""Read the written digest and push it into the queen's conversation."""
from framework.agents.worker_memory import digest_path

try:
content = digest_path(_agent_name, run_id).read_text(encoding="utf-8").strip()
except OSError:
return
if not content:
return
executor = session.queen_executor
if executor is None:
return
node = executor.node_registry.get("queen")
if node is None or not hasattr(node, "inject_event"):
return
await node.inject_event(f"[WORKER_DIGEST]\n{content}")

async def _consolidate_and_notify(run_id: str, outcome_event: Any) -> None:
"""Write the digest then push it to the queen."""
from framework.agents.worker_memory import consolidate_worker_run

await consolidate_worker_run(_agent_name, run_id, outcome_event, _bus, _llm)
await _inject_digest_to_queen(run_id)

async def _on_worker_event(event: Any) -> None:
if event.stream_id == "queen":
return

exec_id = event.execution_id

if event.type == _ET.EXECUTION_STARTED:
# New run on this execution_id — reset cooldown so the first
# iteration always produces a mid-run snapshot.
if exec_id:
_last_digest.pop(exec_id, None)

elif event.type in (
_ET.EXECUTION_COMPLETED,
_ET.EXECUTION_FAILED,
_ET.EXECUTION_PAUSED,
):
# Final digest — always fire, ignore cooldown.
# EXECUTION_PAUSED covers cancellation (queen re-triggering the
# worker cancels the previous execution, emitting paused).
run_id = getattr(event, "run_id", None) or _resolve_run_id(exec_id)
if run_id:
asyncio.create_task(
_consolidate_and_notify(run_id, event),
name=f"worker-digest-final-{run_id}",
)

elif event.type == _ET.NODE_LOOP_ITERATION:
# Mid-run snapshot — respect 300 s cooldown per execution.
if not exec_id:
return
now = _time.monotonic()
if now - _last_digest.get(exec_id, 0.0) < _DIGEST_COOLDOWN:
return
run_id = _resolve_run_id(exec_id)
if run_id:
_last_digest[exec_id] = now
asyncio.create_task(
_consolidate_and_notify(run_id, None),
name=f"worker-digest-{run_id}",
)

session.worker_digest_sub = session.event_bus.subscribe(
event_types=[
_ET.EXECUTION_STARTED,
_ET.NODE_LOOP_ITERATION,
_ET.EXECUTION_COMPLETED,
_ET.EXECUTION_FAILED,
_ET.EXECUTION_PAUSED,
],
handler=_on_worker_event,
)

def _subscribe_worker_handoffs(self, session: Session, executor: Any) -> None:
"""Subscribe queen to worker/subagent escalation handoff events."""
from framework.runtime.event_bus import EventType as _ET

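Editor's note: the digest subscription above throttles mid-run snapshots with a per-execution monotonic timestamp, resets the window on EXECUTION_STARTED, and bypasses it entirely for final digests. That throttle can be sketched as a small standalone gate (class name hypothetical, not part of the framework):

```python
import time


class CooldownGate:
    """Allow at most one action per key every `cooldown` seconds (monotonic clock)."""

    def __init__(self, cooldown: float):
        self.cooldown = cooldown
        self._last: dict[str, float] = {}

    def reset(self, key: str) -> None:
        # New run on this key: the next check always passes.
        self._last.pop(key, None)

    def ready(self, key: str) -> bool:
        now = time.monotonic()
        last = self._last.get(key)
        if last is not None and now - last < self.cooldown:
            return False
        self._last[key] = now
        return True


gate = CooldownGate(cooldown=300.0)
first = gate.ready("exec-1")   # first snapshot after a reset window fires
second = gate.ready("exec-1")  # within the cooldown: suppressed
gate.reset("exec-1")           # EXECUTION_STARTED equivalent: reset the window
third = gate.ready("exec-1")   # fires again
```

Using `time.monotonic()` rather than wall-clock time keeps the throttle immune to system clock adjustments, which matters for long-lived server sessions.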
@@ -210,11 +210,8 @@ def tmp_agent_dir(tmp_path, monkeypatch):
return tmp_path, agent_name, base


@pytest.fixture
def sample_session(tmp_agent_dir):
"""Create a sample session with state.json, checkpoints, and conversations."""
tmp_path, agent_name, base = tmp_agent_dir
session_id = "session_20260220_120000_abc12345"
def _write_sample_session(base: Path, session_id: str):
"""Create a sample worker session on disk."""
session_dir = base / "sessions" / session_id

# state.json
@@ -295,6 +292,20 @@ def sample_session(tmp_agent_dir):
return session_id, session_dir, state


@pytest.fixture
def sample_session(tmp_agent_dir):
"""Create a sample session with state.json, checkpoints, and conversations."""
_tmp_path, _agent_name, base = tmp_agent_dir
return _write_sample_session(base, "session_20260220_120000_abc12345")


@pytest.fixture
def custom_id_session(tmp_agent_dir):
"""Create a sample session that uses a custom non-session_* ID."""
_tmp_path, _agent_name, base = tmp_agent_dir
return _write_sample_session(base, "my-custom-session")


def _make_app_with_session(session):
"""Create an aiohttp app with a pre-loaded session."""
app = create_app()
@@ -799,6 +810,22 @@ class TestWorkerSessions:
assert data["sessions"][0]["status"] == "paused"
assert data["sessions"][0]["steps"] == 5

@pytest.mark.asyncio
async def test_list_sessions_includes_custom_id(self, custom_id_session, tmp_agent_dir):
session_id, session_dir, state = custom_id_session
tmp_path, agent_name, base = tmp_agent_dir

session = _make_session(tmp_dir=tmp_path / ".hive" / "agents" / agent_name)
app = _make_app_with_session(session)

async with TestClient(TestServer(app)) as client:
resp = await client.get("/api/sessions/test_agent/worker-sessions")
assert resp.status == 200
data = await resp.json()
assert len(data["sessions"]) == 1
assert data["sessions"][0]["session_id"] == session_id
assert data["sessions"][0]["status"] == "paused"

@pytest.mark.asyncio
async def test_list_sessions_empty(self, tmp_agent_dir):
tmp_path, agent_name, base = tmp_agent_dir
@@ -1316,6 +1343,28 @@ class TestLogs:
assert len(data["logs"]) >= 1
assert data["logs"][0]["run_id"] == session_id

@pytest.mark.asyncio
async def test_logs_list_summaries_with_custom_id(self, custom_id_session, tmp_agent_dir):
session_id, session_dir, state = custom_id_session
tmp_path, agent_name, base = tmp_agent_dir

from framework.runtime.runtime_log_store import RuntimeLogStore

log_store = RuntimeLogStore(base)
session = _make_session(
tmp_dir=tmp_path / ".hive" / "agents" / agent_name,
log_store=log_store,
)
app = _make_app_with_session(session)

async with TestClient(TestServer(app)) as client:
resp = await client.get("/api/sessions/test_agent/logs")
assert resp.status == 200
data = await resp.json()
assert "logs" in data
assert len(data["logs"]) >= 1
assert data["logs"][0]["run_id"] == session_id

@pytest.mark.asyncio
async def test_logs_session_summary(self, sample_session, tmp_agent_dir):
session_id, session_dir, state = sample_session

@@ -0,0 +1,26 @@
"""Hive Agent Skills — discovery, parsing, and injection of SKILL.md packages.

Implements the open Agent Skills standard (agentskills.io) for portable
skill discovery and activation, plus built-in default skills for runtime
operational discipline.
"""

from framework.skills.catalog import SkillCatalog
from framework.skills.config import DefaultSkillConfig, SkillsConfig
from framework.skills.defaults import DefaultSkillManager
from framework.skills.discovery import DiscoveryConfig, SkillDiscovery
from framework.skills.manager import SkillsManager, SkillsManagerConfig
from framework.skills.parser import ParsedSkill, parse_skill_md

__all__ = [
    "DefaultSkillConfig",
    "DefaultSkillManager",
    "DiscoveryConfig",
    "ParsedSkill",
    "SkillCatalog",
    "SkillDiscovery",
    "SkillsConfig",
    "SkillsManager",
    "SkillsManagerConfig",
    "parse_skill_md",
]
@@ -0,0 +1,24 @@
---
name: hive.batch-ledger
description: Track per-item status when processing collections to prevent skipped or duplicated items.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Batch Progress Ledger

When processing a collection of items, maintain a batch ledger in `_batch_ledger`.

Initialize when you identify the batch:
- `_batch_total`: total item count
- `_batch_ledger`: JSON with per-item status

Per-item statuses: pending → in_progress → completed|failed|skipped

- Set `in_progress` BEFORE processing
- Set final status AFTER processing with a 1-line result_summary
- Include error reason for failed/skipped items
- Update aggregate counts after each item
- NEVER remove items from the ledger
- If resuming, skip items already marked completed
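The skill mandates the status progression but leaves the ledger's exact shape open. A minimal sketch of what a conforming `_batch_ledger` could look like — the field names `result_summary` and `error` follow the protocol text, but the overall structure here is an illustrative assumption, not a schema the skill defines:

```python
# Hypothetical ledger for a 3-item batch; statuses follow
# pending -> in_progress -> completed|failed|skipped.
ledger = {
    "item-1": {"status": "completed", "result_summary": "uploaded 12 rows"},
    "item-2": {"status": "failed", "error": "HTTP 503 from upstream"},
    "item-3": {"status": "pending"},
}

# Aggregate counts, updated after each item per the protocol.
counts: dict[str, int] = {}
for entry in ledger.values():
    counts[entry["status"]] = counts.get(entry["status"], 0) + 1

# On resume, items already marked completed are skipped.
resume_queue = [name for name, entry in ledger.items() if entry["status"] != "completed"]
```

Because items are never removed, the ledger doubles as an audit trail: counts and the resume queue can always be recomputed from it.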
@@ -0,0 +1,22 @@
---
name: hive.context-preservation
description: Proactively preserve critical information before automatic context pruning destroys it.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Context Preservation

You operate under a finite context window. Important information WILL be pruned.

Save-As-You-Go: After any tool call producing information you'll need later,
immediately extract key data into `_working_notes` or `_preserved_data`.
Do NOT rely on referring back to old tool results.

What to extract: URLs and key snippets (not full pages), relevant API fields
(not raw JSON), specific lines/values (not entire files), analysis results
(not raw data).

Before transitioning to the next phase/node, write a handoff summary to
`_handoff_context` with everything the next phase needs to know.
@@ -0,0 +1,18 @@
---
name: hive.error-recovery
description: Follow a structured recovery protocol when tool calls fail instead of blindly retrying or giving up.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Error Recovery

When a tool call fails:

1. Diagnose — record error in notes, classify as transient or structural
2. Decide — transient: retry once. Structural fixable: fix and retry.
   Structural unfixable: record as failed, move to next item.
   Blocking all progress: record escalation note.
3. Adapt — if same tool failed 3+ times, stop using it and find alternative.
   Update plan in notes. Never silently drop the failed item.
@@ -0,0 +1,27 @@
---
name: hive.note-taking
description: Maintain structured working notes throughout execution to prevent information loss during context pruning.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Structured Note-Taking

Maintain structured working notes in shared memory key `_working_notes`.
Update at these checkpoints:

- After completing each discrete subtask or batch item
- After receiving new information that changes your plan
- Before any tool call that will produce substantial output

Structure:

### Objective — restate the goal
### Current Plan — numbered steps, mark completed with ✓
### Key Decisions — decisions made and WHY
### Working Data — intermediate results, extracted values
### Open Questions — uncertainties to verify
### Blockers — anything preventing progress

Update incrementally — do not rewrite from scratch each time.
@@ -0,0 +1,20 @@
---
name: hive.quality-monitor
description: Periodically self-assess output quality to catch degradation before the judge does.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Quality Self-Assessment

Every 5 iterations, self-assess:

1. On-task? Still working toward the stated objective?
2. Thorough? Cutting corners compared to earlier?
3. Non-repetitive? Producing new value or rehashing?
4. Consistent? Does the latest output contradict earlier decisions?
5. Complete? Tracking all items, or silently dropped some?

If degrading: write assessment to `_quality_log`, re-read `_working_notes`,
change approach explicitly. If acceptable: brief note in `_quality_log`.
@@ -0,0 +1,17 @@
---
name: hive.task-decomposition
description: Decompose complex tasks into explicit subtasks before diving in.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Task Decomposition

Before starting a complex task:

1. Decompose — break into numbered subtasks in `_working_notes` Current Plan
2. Estimate — relative effort per subtask (small/medium/large)
3. Execute — work through in order, mark ✓ when complete
4. Budget — if running low on iterations, prioritize by impact
5. Verify — before declaring done, every subtask must be ✓, skipped (with reason), or blocked
@@ -0,0 +1,109 @@
"""Skill catalog — in-memory index with system prompt generation.

Builds the XML catalog injected into the system prompt for model-driven
skill activation per the Agent Skills standard.
"""

from __future__ import annotations

import logging
from xml.sax.saxutils import escape

from framework.skills.parser import ParsedSkill

logger = logging.getLogger(__name__)

_BEHAVIORAL_INSTRUCTION = (
    "The following skills provide specialized instructions for specific tasks.\n"
    "When a task matches a skill's description, read the SKILL.md at the listed\n"
    "location to load the full instructions before proceeding.\n"
    "When a skill references relative paths, resolve them against the skill's\n"
    "directory (the parent of SKILL.md) and use absolute paths in tool calls."
)


class SkillCatalog:
    """In-memory catalog of discovered skills."""

    def __init__(self, skills: list[ParsedSkill] | None = None):
        self._skills: dict[str, ParsedSkill] = {}
        self._activated: set[str] = set()
        if skills:
            for skill in skills:
                self.add(skill)

    def add(self, skill: ParsedSkill) -> None:
        """Add a skill to the catalog."""
        self._skills[skill.name] = skill

    def get(self, name: str) -> ParsedSkill | None:
        """Look up a skill by name."""
        return self._skills.get(name)

    def mark_activated(self, name: str) -> None:
        """Mark a skill as activated in the current session."""
        self._activated.add(name)

    def is_activated(self, name: str) -> bool:
        """Check if a skill has been activated."""
        return name in self._activated

    @property
    def skill_count(self) -> int:
        return len(self._skills)

    @property
    def allowlisted_dirs(self) -> list[str]:
        """All skill base directories for file access allowlisting."""
        return [skill.base_dir for skill in self._skills.values()]

    def to_prompt(self) -> str:
        """Generate the catalog prompt for system prompt injection.

        Returns empty string if no community/user skills are discovered
        (default skills are handled separately by DefaultSkillManager).
        """
        # Filter out framework-scope skills (default skills) — they're
        # injected via the protocols prompt, not the catalog
        community_skills = [
            s for s in self._skills.values() if s.source_scope != "framework"
        ]

        if not community_skills:
            return ""

        lines = ["<available_skills>"]
        for skill in sorted(community_skills, key=lambda s: s.name):
            lines.append("  <skill>")
            lines.append(f"    <name>{escape(skill.name)}</name>")
            lines.append(f"    <description>{escape(skill.description)}</description>")
            lines.append(f"    <location>{escape(skill.location)}</location>")
            lines.append("  </skill>")
        lines.append("</available_skills>")

        xml_block = "\n".join(lines)
        return f"{_BEHAVIORAL_INSTRUCTION}\n\n{xml_block}"

    def build_pre_activated_prompt(self, skill_names: list[str]) -> str:
        """Build prompt content for pre-activated skills.

        Pre-activated skills get their full SKILL.md body loaded into
        the system prompt at startup (tier 2), bypassing model-driven
        activation.

        Returns empty string if no skills match.
        """
        parts: list[str] = []

        for name in skill_names:
            skill = self.get(name)
            if skill is None:
                logger.warning("Pre-activated skill '%s' not found in catalog", name)
                continue
            if self.is_activated(name):
                continue  # Already activated, skip duplicate

            self.mark_activated(name)
            parts.append(f"--- Pre-Activated Skill: {skill.name} ---\n{skill.body}")

        return "\n\n".join(parts)
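To illustrate why `to_prompt` runs every field through `xml.sax.saxutils.escape`, here is a self-contained sketch of the same rendering loop. The `Skill` namedtuple is a stand-in for `ParsedSkill` (only the three fields the catalog renders), and the sample skill is invented; the escaping behavior shown is standard library behavior:

```python
from collections import namedtuple
from xml.sax.saxutils import escape

# Stand-in for ParsedSkill with just the rendered fields.
Skill = namedtuple("Skill", "name description location")

skills = [
    Skill("code-review", "Review diffs & flag <unsafe> patterns", "/skills/code-review/SKILL.md"),
]

lines = ["<available_skills>"]
for s in sorted(skills, key=lambda s: s.name):
    lines.append("  <skill>")
    lines.append(f"    <name>{escape(s.name)}</name>")
    lines.append(f"    <description>{escape(s.description)}</description>")
    lines.append(f"    <location>{escape(s.location)}</location>")
    lines.append("  </skill>")
lines.append("</available_skills>")

xml_block = "\n".join(lines)
```

Without the escaping, a skill description containing `&` or `<` would corrupt the XML block injected into the system prompt; with it, `&` becomes `&amp;` and `<unsafe>` becomes `&lt;unsafe&gt;`.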
@@ -0,0 +1,99 @@
"""Skill configuration dataclasses.

Handles agent-level skill configuration from module-level variables
(``default_skills`` and ``skills``).
"""

from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any


@dataclass
class DefaultSkillConfig:
    """Configuration for a single default skill."""

    enabled: bool = True
    overrides: dict[str, Any] = field(default_factory=dict)

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> DefaultSkillConfig:
        enabled = data.get("enabled", True)
        overrides = {k: v for k, v in data.items() if k != "enabled"}
        return cls(enabled=enabled, overrides=overrides)


@dataclass
class SkillsConfig:
    """Agent-level skill configuration.

    Built from module-level variables in agent.py::

        # Pre-activated community skills
        skills = ["deep-research", "code-review"]

        # Default skill configuration
        default_skills = {
            "hive.note-taking": {"enabled": True},
            "hive.batch-ledger": {"enabled": True, "checkpoint_every_n": 10},
            "hive.quality-monitor": {"enabled": False},
        }
    """

    # Per-default-skill config, keyed by skill name (e.g. "hive.note-taking")
    default_skills: dict[str, DefaultSkillConfig] = field(default_factory=dict)

    # Pre-activated community skills (by name)
    skills: list[str] = field(default_factory=list)

    # Master switch: disable all default skills at once
    all_defaults_disabled: bool = False

    def is_default_enabled(self, skill_name: str) -> bool:
        """Check if a specific default skill is enabled."""
        if self.all_defaults_disabled:
            return False
        config = self.default_skills.get(skill_name)
        if config is None:
            return True  # enabled by default
        return config.enabled

    def get_default_overrides(self, skill_name: str) -> dict[str, Any]:
        """Get skill-specific configuration overrides."""
        config = self.default_skills.get(skill_name)
        if config is None:
            return {}
        return config.overrides

    @classmethod
    def from_agent_vars(
        cls,
        default_skills: dict[str, Any] | None = None,
        skills: list[str] | None = None,
    ) -> SkillsConfig:
        """Build config from agent module-level variables.

        Args:
            default_skills: Dict from agent module (e.g. ``{"hive.note-taking": {"enabled": True}}``)
            skills: List of pre-activated skill names from agent module
        """
        all_disabled = False
        parsed_defaults: dict[str, DefaultSkillConfig] = {}

        if default_skills:
            for name, config_dict in default_skills.items():
                if name == "_all":
                    if isinstance(config_dict, dict) and not config_dict.get("enabled", True):
                        all_disabled = True
                    continue
                if isinstance(config_dict, dict):
                    parsed_defaults[name] = DefaultSkillConfig.from_dict(config_dict)
                elif isinstance(config_dict, bool):
                    parsed_defaults[name] = DefaultSkillConfig(enabled=config_dict)

        return cls(
            default_skills=parsed_defaults,
            skills=list(skills or []),
            all_defaults_disabled=all_disabled,
        )
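The enable/disable semantics above combine three rules: a `"_all"` entry acts as a master switch, a bare bool is shorthand for `{"enabled": ...}`, and unlisted skills default to enabled. A self-contained restatement of that resolution logic (inlined here rather than importing the framework module, so the behavior can be checked in isolation):

```python
from typing import Any


def is_default_enabled(skill_name: str, default_skills: dict[str, Any]) -> bool:
    """Inline restatement of the SkillsConfig enable/disable rules."""
    # "_all" master switch: {"enabled": False} disables every default skill.
    all_cfg = default_skills.get("_all")
    if isinstance(all_cfg, dict) and not all_cfg.get("enabled", True):
        return False
    cfg = default_skills.get(skill_name)
    if cfg is None:
        return True  # unlisted skills are enabled by default
    if isinstance(cfg, bool):
        return cfg  # bare bool shorthand
    return cfg.get("enabled", True)


agent_vars = {
    "hive.quality-monitor": False,  # bool shorthand
    "hive.batch-ledger": {"enabled": True, "checkpoint_every_n": 10},
}
```

With this, `hive.quality-monitor` resolves to disabled, the unlisted `hive.note-taking` stays enabled, and `{"_all": {"enabled": False}}` disables everything regardless of per-skill entries.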
@@ -0,0 +1,151 @@
"""DefaultSkillManager — load, configure, and inject built-in default skills.

Default skills are SKILL.md packages shipped with the framework that provide
runtime operational protocols (note-taking, batch tracking, error recovery, etc.).
"""

from __future__ import annotations

import logging
from pathlib import Path

from framework.skills.config import SkillsConfig
from framework.skills.parser import ParsedSkill, parse_skill_md

logger = logging.getLogger(__name__)

# Default skills directory relative to this module
_DEFAULT_SKILLS_DIR = Path(__file__).parent / "_default_skills"

# Ordered list of default skills (name → directory)
SKILL_REGISTRY: dict[str, str] = {
    "hive.note-taking": "note-taking",
    "hive.batch-ledger": "batch-ledger",
    "hive.context-preservation": "context-preservation",
    "hive.quality-monitor": "quality-monitor",
    "hive.error-recovery": "error-recovery",
    "hive.task-decomposition": "task-decomposition",
}

# All shared memory keys used by default skills (for permission auto-inclusion)
SHARED_MEMORY_KEYS: list[str] = [
    # note-taking
    "_working_notes",
    "_notes_updated_at",
    # batch-ledger
    "_batch_ledger",
    "_batch_total",
    "_batch_completed",
    "_batch_failed",
    # context-preservation
    "_handoff_context",
    "_preserved_data",
    # quality-monitor
    "_quality_log",
    "_quality_degradation_count",
    # error-recovery
    "_error_log",
    "_failed_tools",
    "_escalation_needed",
    # task-decomposition
    "_subtasks",
    "_iteration_budget_remaining",
]


class DefaultSkillManager:
    """Manages loading, configuration, and prompt generation for default skills."""

    def __init__(self, config: SkillsConfig | None = None):
        self._config = config or SkillsConfig()
        self._skills: dict[str, ParsedSkill] = {}
        self._loaded = False

    def load(self) -> None:
        """Load all enabled default skill SKILL.md files."""
        if self._loaded:
            return

        for skill_name, dir_name in SKILL_REGISTRY.items():
            if not self._config.is_default_enabled(skill_name):
                logger.info("Default skill '%s' disabled by config", skill_name)
                continue

            skill_path = _DEFAULT_SKILLS_DIR / dir_name / "SKILL.md"
            if not skill_path.is_file():
                logger.error("Default skill SKILL.md not found: %s", skill_path)
                continue

            parsed = parse_skill_md(skill_path, source_scope="framework")
            if parsed is None:
                logger.error("Failed to parse default skill: %s", skill_path)
                continue

            self._skills[skill_name] = parsed

        self._loaded = True

    def build_protocols_prompt(self) -> str:
        """Build the combined operational protocols section.

        Extracts protocol sections from all enabled default skills and
        combines them into a single ``## Operational Protocols`` block
        for system prompt injection.

        Returns empty string if all defaults are disabled.
        """
        if not self._skills:
            return ""

        parts: list[str] = ["## Operational Protocols\n"]

        for skill_name in SKILL_REGISTRY:
            skill = self._skills.get(skill_name)
            if skill is None:
                continue
            # Use the full body — each SKILL.md contains exactly one protocol section
            parts.append(skill.body)

        if len(parts) <= 1:
            return ""

        combined = "\n\n".join(parts)

        # Token budget warning (approximate: 1 token ≈ 4 chars)
        approx_tokens = len(combined) // 4
        if approx_tokens > 2000:
            logger.warning(
                "Default skill protocols exceed 2000 token budget "
                "(~%d tokens, %d chars). Consider trimming.",
                approx_tokens,
                len(combined),
            )

        return combined

    def log_active_skills(self) -> None:
        """Log which default skills are active and their configuration."""
        if not self._skills:
            logger.info("Default skills: all disabled")
            return

        active = []
        for skill_name in SKILL_REGISTRY:
            if skill_name in self._skills:
                overrides = self._config.get_default_overrides(skill_name)
                if overrides:
                    active.append(f"{skill_name} ({overrides})")
                else:
                    active.append(skill_name)

        logger.info("Default skills active: %s", ", ".join(active))

    @property
    def active_skill_names(self) -> list[str]:
        """Names of all currently active default skills."""
        return list(self._skills.keys())

    @property
    def active_skills(self) -> dict[str, ParsedSkill]:
        """All active default skills keyed by name."""
        return dict(self._skills)
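The token budget check in `build_protocols_prompt` uses the rough heuristic of one token per four characters. A tiny standalone illustration of the same arithmetic (the protocol text here is synthetic padding, chosen only to land just over the 2000-token threshold):

```python
def approx_tokens(text: str) -> int:
    # Same heuristic as build_protocols_prompt: roughly 4 chars per token.
    return len(text) // 4


# Synthetic stand-in for a combined protocols block: header plus two
# padded "protocol bodies", joined the same way (with "\n\n").
protocols = "\n\n".join(["## Operational Protocols", "x" * 3_900, "y" * 4_200])

tokens = approx_tokens(protocols)       # 8128 chars // 4 = 2032
over_budget = tokens > 2000             # triggers the logger.warning branch
```

The heuristic is deliberately cheap — it avoids pulling in a tokenizer just to emit a warning, at the cost of being off by a factor that depends on the model's actual tokenization.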
@@ -0,0 +1,182 @@
"""Skill discovery — scan standard directories for SKILL.md files.

Implements the Agent Skills standard discovery paths plus Hive-specific
locations. Resolves name collisions deterministically.
"""

from __future__ import annotations

import logging
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any

from framework.skills.parser import ParsedSkill, parse_skill_md

logger = logging.getLogger(__name__)

# Directories to skip during scanning
_SKIP_DIRS = frozenset({
    ".git",
    "node_modules",
    "__pycache__",
    ".venv",
    "venv",
    ".mypy_cache",
    ".pytest_cache",
    ".ruff_cache",
})

# Scope priority (higher = takes precedence)
_SCOPE_PRIORITY = {
    "framework": 0,
    "user": 1,
    "project": 2,
}

# Within the same scope, Hive-specific paths override cross-client paths.
# We encode this by scanning cross-client first, then Hive-specific (later wins).


@dataclass
class DiscoveryConfig:
    """Configuration for skill discovery."""

    project_root: Path | None = None
    skip_user_scope: bool = False
    skip_framework_scope: bool = False
    max_depth: int = 4
    max_dirs: int = 2000


class SkillDiscovery:
    """Scans standard directories for SKILL.md files and resolves collisions."""

    def __init__(self, config: DiscoveryConfig | None = None):
        self._config = config or DiscoveryConfig()

    def discover(self) -> list[ParsedSkill]:
        """Scan all scopes and return deduplicated skill list.

        Scanning order (lowest to highest precedence):
        1. Framework defaults
        2. User cross-client (~/.agents/skills/)
        3. User Hive-specific (~/.hive/skills/)
        4. Project cross-client (<project>/.agents/skills/)
        5. Project Hive-specific (<project>/.hive/skills/)

        Later entries override earlier ones on name collision.
        """
        all_skills: list[ParsedSkill] = []

        # Framework scope (lowest precedence)
        if not self._config.skip_framework_scope:
            framework_dir = Path(__file__).parent / "_default_skills"
            if framework_dir.is_dir():
                all_skills.extend(self._scan_scope(framework_dir, "framework"))

        # User scope
        if not self._config.skip_user_scope:
            home = Path.home()

            # Cross-client (lower precedence within user scope)
            user_agents = home / ".agents" / "skills"
            if user_agents.is_dir():
                all_skills.extend(self._scan_scope(user_agents, "user"))

            # Hive-specific (higher precedence within user scope)
            user_hive = home / ".hive" / "skills"
            if user_hive.is_dir():
                all_skills.extend(self._scan_scope(user_hive, "user"))

        # Project scope (highest precedence)
        if self._config.project_root:
            root = self._config.project_root

            # Cross-client
            project_agents = root / ".agents" / "skills"
            if project_agents.is_dir():
                all_skills.extend(self._scan_scope(project_agents, "project"))

            # Hive-specific
            project_hive = root / ".hive" / "skills"
            if project_hive.is_dir():
                all_skills.extend(self._scan_scope(project_hive, "project"))

        resolved = self._resolve_collisions(all_skills)

        logger.info(
            "Skill discovery: found %d skills (%d after dedup) across all scopes",
            len(all_skills),
            len(resolved),
        )
        return resolved

    def _scan_scope(self, root: Path, scope: str) -> list[ParsedSkill]:
        """Scan a single directory for skill directories containing SKILL.md."""
        skills: list[ParsedSkill] = []
        dirs_scanned = 0

        for skill_md in self._find_skill_files(root, depth=0):
            if dirs_scanned >= self._config.max_dirs:
                logger.warning(
                    "Hit max directory limit (%d) scanning %s",
                    self._config.max_dirs,
                    root,
                )
                break

            parsed = parse_skill_md(skill_md, source_scope=scope)
            if parsed is not None:
                skills.append(parsed)
            dirs_scanned += 1

        return skills

    def _find_skill_files(self, directory: Path, depth: int) -> list[Path]:
        """Recursively find SKILL.md files up to max_depth."""
        if depth > self._config.max_depth:
            return []

        results: list[Path] = []

        try:
            entries = sorted(directory.iterdir())
        except OSError:
            return []

        for entry in entries:
            if not entry.is_dir():
                continue
            if entry.name in _SKIP_DIRS:
                continue

            skill_md = entry / "SKILL.md"
            if skill_md.is_file():
                results.append(skill_md)
            else:
                # Recurse into subdirectories
                results.extend(self._find_skill_files(entry, depth + 1))

        return results

    def _resolve_collisions(self, skills: list[ParsedSkill]) -> list[ParsedSkill]:
        """Resolve name collisions deterministically.

        Later entries in the list override earlier ones (because we scan
        from lowest to highest precedence). On collision, log a warning.
        """
        seen: dict[str, ParsedSkill] = {}

        for skill in skills:
            if skill.name in seen:
                existing = seen[skill.name]
                logger.warning(
                    "Skill name collision: '%s' from %s overrides %s",
                    skill.name,
                    skill.location,
                    existing.location,
                )
            seen[skill.name] = skill

        return list(seen.values())
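The "later wins" collision rule is simple but easy to get backwards; a minimal standalone demonstration of the same dict-overwrite pattern `_resolve_collisions` uses (skill names and scopes here are made up for illustration):

```python
# (name, scope) pairs in scan order: lowest to highest precedence,
# mirroring framework -> user -> project scanning.
scanned = [
    ("deep-research", "framework"),
    ("deep-research", "user"),
    ("deep-research", "project"),
    ("code-review", "user"),
]

resolved: dict[str, str] = {}
for name, scope in scanned:
    # Later entries overwrite earlier ones, so the highest-precedence
    # scope wins; a real implementation would log the collision here.
    resolved[name] = scope
```

Because Python dicts preserve insertion order and assignment replaces values in place, the final mapping keeps one entry per name with the last-scanned (highest-precedence) scope winning.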
@@ -0,0 +1,172 @@
|
||||
"""Unified skill lifecycle manager.
|
||||
|
||||
``SkillsManager`` is the single facade that owns skill discovery, loading,
|
||||
and prompt renderation. The runtime creates one at startup and downstream
|
||||
layers read the cached prompt strings.
|
||||
|
||||
Typical usage — **config-driven** (runner passes configuration)::
|
||||
|
||||
config = SkillsManagerConfig(
|
||||
skills_config=SkillsConfig.from_agent_vars(...),
|
||||
project_root=agent_path,
|
||||
)
|
||||
mgr = SkillsManager(config)
|
||||
mgr.load()
|
||||
print(mgr.protocols_prompt) # default skill protocols
|
||||
print(mgr.skills_catalog_prompt) # community skills XML
|
||||
|
||||
Typical usage — **bare** (exported agents, SDK users)::
|
||||
|
||||
mgr = SkillsManager() # default config
|
||||
mgr.load() # loads all 6 default skills, no community discovery
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from dataclasses import dataclass, field
|
||||
from pathlib import Path
|
||||
|
||||
from framework.skills.config import SkillsConfig
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@dataclass
|
||||
class SkillsManagerConfig:
|
||||
"""Everything the runtime needs to configure skills.
|
||||
|
||||
Attributes:
|
||||
skills_config: Per-skill enable/disable and overrides.
|
||||
project_root: Agent directory for community skill discovery.
|
||||
When ``None``, community discovery is skipped.
|
||||
skip_community_discovery: Explicitly skip community scanning
|
||||
even when ``project_root`` is set.
|
||||
"""
|
||||
|
||||
skills_config: SkillsConfig = field(default_factory=SkillsConfig)
|
||||
project_root: Path | None = None
|
||||
skip_community_discovery: bool = False
|
||||
|
||||
|
||||
class SkillsManager:
|
||||
"""Unified skill lifecycle: discovery → loading → prompt renderation.
|
||||
|
||||
The runtime creates one instance during init and owns it for the
|
||||
lifetime of the process. Downstream layers (``ExecutionStream``,
|
||||
``GraphExecutor``, ``NodeContext``, ``EventLoopNode``) receive the
|
||||
cached prompt strings via property accessors.
|
||||
"""
|
||||
|
||||
def __init__(self, config: SkillsManagerConfig | None = None) -> None:
|
||||
self._config = config or SkillsManagerConfig()
|
||||
self._loaded = False
|
||||
self._catalog_prompt: str = ""
|
||||
self._protocols_prompt: str = ""
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Factory for backwards-compat bridge
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
@classmethod
|
||||
def from_precomputed(
|
||||
cls,
|
||||
skills_catalog_prompt: str = "",
|
||||
protocols_prompt: str = "",
|
||||
) -> SkillsManager:
|
||||
"""Wrap pre-rendered prompt strings (legacy callers).
|
||||
|
||||
Returns a manager that skips discovery/loading and just returns
|
||||
the provided strings. Used by the deprecation bridge in
|
||||
``AgentRuntime`` when callers pass raw prompt strings.
|
||||
"""
|
||||
mgr = cls.__new__(cls)
|
||||
mgr._config = SkillsManagerConfig()
|
||||
mgr._loaded = True # skip load()
|
||||
mgr._catalog_prompt = skills_catalog_prompt
|
||||
mgr._protocols_prompt = protocols_prompt
|
||||
return mgr
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Lifecycle
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
def load(self) -> None:
    """Discover, load, and cache skill prompts. Idempotent."""
    if self._loaded:
        return
    self._loaded = True

    try:
        self._do_load()
    except Exception:
        logger.warning("Skill system init failed (non-fatal)", exc_info=True)

def _do_load(self) -> None:
    """Internal load — may raise; caller catches."""
    from framework.skills.catalog import SkillCatalog
    from framework.skills.defaults import DefaultSkillManager
    from framework.skills.discovery import DiscoveryConfig, SkillDiscovery

    skills_config = self._config.skills_config

    # 1. Community skill discovery (when project_root is available)
    catalog_prompt = ""
    if (
        self._config.project_root is not None
        and not self._config.skip_community_discovery
    ):
        discovery = SkillDiscovery(
            DiscoveryConfig(project_root=self._config.project_root)
        )
        discovered = discovery.discover()
        catalog = SkillCatalog(discovered)
        catalog_prompt = catalog.to_prompt()

        # Pre-activated community skills
        if skills_config.skills:
            pre_activated = catalog.build_pre_activated_prompt(
                skills_config.skills
            )
            if pre_activated:
                if catalog_prompt:
                    catalog_prompt = f"{catalog_prompt}\n\n{pre_activated}"
                else:
                    catalog_prompt = pre_activated

    # 2. Default skills (always loaded unless explicitly disabled)
    default_mgr = DefaultSkillManager(config=skills_config)
    default_mgr.load()
    default_mgr.log_active_skills()
    protocols_prompt = default_mgr.build_protocols_prompt()

    # 3. Cache
    self._catalog_prompt = catalog_prompt
    self._protocols_prompt = protocols_prompt

    if protocols_prompt:
        logger.info(
            "Skill system ready: protocols=%d chars, catalog=%d chars",
            len(protocols_prompt),
            len(catalog_prompt),
        )
    else:
        logger.warning("Skill system produced empty protocols_prompt")

# ------------------------------------------------------------------
# Prompt accessors (consumed by downstream layers)
# ------------------------------------------------------------------

@property
def skills_catalog_prompt(self) -> str:
    """Community skills XML catalog for system prompt injection."""
    return self._catalog_prompt

@property
def protocols_prompt(self) -> str:
    """Default skill operational protocols for system prompt injection."""
    return self._protocols_prompt

@property
def is_loaded(self) -> bool:
    return self._loaded
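The two cached prompts are exposed through the properties above for downstream prompt assembly. A minimal sketch of how a consumer might stitch them into a system prompt; the function name and ordering here are illustrative assumptions, not the framework's actual API:

```python
def assemble_system_prompt(base: str, catalog_prompt: str, protocols_prompt: str) -> str:
    """Join the base prompt with whichever skill sections are non-empty.

    Hypothetical helper: mirrors the pattern of skipping empty sections and
    separating the rest with blank lines.
    """
    parts = [p for p in (base, protocols_prompt, catalog_prompt) if p]
    return "\n\n".join(parts)
```

Empty sections drop out cleanly, so a build with no community skills still produces a valid prompt.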
@@ -0,0 +1,160 @@
"""SKILL.md parser — extracts YAML frontmatter and markdown body.

Parses SKILL.md files per the Agent Skills standard (agentskills.io/specification).
Lenient validation: warns on non-critical issues, skips only on missing description
or completely unparseable YAML.
"""

from __future__ import annotations

import logging
import re
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any

logger = logging.getLogger(__name__)

# Maximum name length before a warning is logged
_MAX_NAME_LENGTH = 64


@dataclass
class ParsedSkill:
    """In-memory representation of a parsed SKILL.md file."""

    name: str
    description: str
    location: str  # absolute path to SKILL.md
    base_dir: str  # parent directory of SKILL.md
    source_scope: str  # "project", "user", or "framework"
    body: str  # markdown body after closing ---

    # Optional frontmatter fields
    license: str | None = None
    compatibility: list[str] | None = None
    metadata: dict[str, Any] | None = None
    allowed_tools: list[str] | None = None


def _try_fix_yaml(raw: str) -> str:
    """Attempt to fix common YAML issues (unquoted colon values).

    Some SKILL.md files written for other clients may contain unquoted
    values with colons, e.g. ``description: Use for: research tasks``.
    This wraps such values in quotes as a best-effort fixup.
    """
    lines = raw.split("\n")
    fixed = []
    for line in lines:
        # Match "key: value" where value contains an unquoted colon
        m = re.match(r"^(\s*\w[\w-]*:\s*)(.+)$", line)
        if m:
            key_part, value_part = m.group(1), m.group(2)
            # If value contains a colon and isn't already quoted
            if ":" in value_part and not (
                value_part.startswith('"') or value_part.startswith("'")
            ):
                value_part = f'"{value_part}"'
            fixed.append(f"{key_part}{value_part}")
        else:
            fixed.append(line)
    return "\n".join(fixed)


def parse_skill_md(path: Path, source_scope: str = "project") -> ParsedSkill | None:
    """Parse a SKILL.md file into a ParsedSkill record.

    Args:
        path: Absolute path to the SKILL.md file.
        source_scope: One of "project", "user", or "framework".

    Returns:
        ParsedSkill on success, None if the file is unparseable or
        missing required fields (description).
    """
    try:
        content = path.read_text(encoding="utf-8")
    except OSError as exc:
        logger.error("Failed to read %s: %s", path, exc)
        return None

    if not content.strip():
        logger.error("Empty SKILL.md: %s", path)
        return None

    # Split on --- delimiters (first two occurrences)
    parts = content.split("---", 2)
    if len(parts) < 3:
        logger.error("SKILL.md missing YAML frontmatter delimiters (---): %s", path)
        return None

    # parts[0] is content before first --- (should be empty or whitespace)
    # parts[1] is the YAML frontmatter
    # parts[2] is the markdown body
    raw_yaml = parts[1].strip()
    body = parts[2].strip()

    if not raw_yaml:
        logger.error("Empty YAML frontmatter in %s", path)
        return None

    # Parse YAML
    import yaml

    frontmatter: dict[str, Any] | None = None
    try:
        frontmatter = yaml.safe_load(raw_yaml)
    except yaml.YAMLError:
        # Fallback: try fixing unquoted colon values
        try:
            fixed = _try_fix_yaml(raw_yaml)
            frontmatter = yaml.safe_load(fixed)
            logger.warning("Fixed YAML parse issues in %s (unquoted colons)", path)
        except yaml.YAMLError as exc:
            logger.error("Unparseable YAML in %s: %s", path, exc)
            return None

    if not isinstance(frontmatter, dict):
        logger.error("YAML frontmatter is not a mapping in %s", path)
        return None

    # Required: description
    description = frontmatter.get("description")
    if not description or not str(description).strip():
        logger.error("Missing or empty 'description' in %s — skipping skill", path)
        return None

    # Required: name (fallback to parent directory name)
    name = frontmatter.get("name")
    parent_dir_name = path.parent.name
    if not name or not str(name).strip():
        name = parent_dir_name
        logger.warning("Missing 'name' in %s — using directory name '%s'", path, name)
    else:
        name = str(name).strip()

    # Lenient warnings
    if len(name) > _MAX_NAME_LENGTH:
        logger.warning("Skill name exceeds %d chars in %s: '%s'", _MAX_NAME_LENGTH, path, name)

    if name != parent_dir_name and not name.endswith(f".{parent_dir_name}"):
        logger.warning(
            "Skill name '%s' doesn't match parent directory '%s' in %s",
            name,
            parent_dir_name,
            path,
        )

    return ParsedSkill(
        name=name,
        description=str(description).strip(),
        location=str(path.resolve()),
        base_dir=str(path.parent.resolve()),
        source_scope=source_scope,
        body=body,
        license=frontmatter.get("license"),
        compatibility=frontmatter.get("compatibility"),
        metadata=frontmatter.get("metadata"),
        allowed_tools=frontmatter.get("allowed-tools"),
    )
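The frontmatter split and the colon fixup can be exercised without PyYAML. This sketch reproduces the `---` split and the `_try_fix_yaml` regex logic above on a sample SKILL.md string (the sample skill name is made up; the quoting is slightly condensed from the original):

```python
import re

def try_fix_yaml(raw: str) -> str:
    # Same best-effort fixup as _try_fix_yaml above: quote values with bare colons.
    fixed = []
    for line in raw.split("\n"):
        m = re.match(r"^(\s*\w[\w-]*:\s*)(.+)$", line)
        if m:
            key_part, value_part = m.group(1), m.group(2)
            if ":" in value_part and not value_part.startswith(('"', "'")):
                value_part = f'"{value_part}"'
            fixed.append(f"{key_part}{value_part}")
        else:
            fixed.append(line)
    return "\n".join(fixed)

sample = "---\nname: web-research\ndescription: Use for: research tasks\n---\nBody text."
parts = sample.split("---", 2)       # ["", frontmatter, body] when delimiters exist
raw_yaml, body = parts[1].strip(), parts[2].strip()
print(try_fix_yaml(raw_yaml))
# name: web-research
# description: "Use for: research tasks"
```

The bare-colon value would make `yaml.safe_load` raise a `YAMLError`; after the fixup it parses as a plain quoted string.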
@@ -40,18 +40,31 @@ class LLMJudge:

     def _get_fallback_provider(self) -> LLMProvider | None:
         """
-        Auto-detects available API keys and returns the appropriate provider.
-        Priority: OpenAI -> Anthropic.
+        Auto-detects available API keys and returns an appropriate provider.
+        Uses LiteLLM for OpenAI (framework has no framework.llm.openai module).
+        Priority:
+            1. OpenAI-compatible models via LiteLLM (OPENAI_API_KEY)
+            2. Anthropic via AnthropicProvider (ANTHROPIC_API_KEY)
         """
+        # OpenAI: use LiteLLM (the framework's standard multi-provider integration)
         if os.environ.get("OPENAI_API_KEY"):
-            from framework.llm.openai import OpenAIProvider
+            try:
+                from framework.llm.litellm import LiteLLMProvider

-            return OpenAIProvider(model="gpt-4o-mini")
+                return LiteLLMProvider(model="gpt-4o-mini")
+            except ImportError:
+                # LiteLLM is optional; fall through to Anthropic/None
+                pass

+        # Anthropic via dedicated provider (wraps LiteLLM internally)
         if os.environ.get("ANTHROPIC_API_KEY"):
-            from framework.llm.anthropic import AnthropicProvider
+            try:
+                from framework.llm.anthropic import AnthropicProvider

-            return AnthropicProvider(model="claude-3-haiku-20240307")
+                return AnthropicProvider(model="claude-haiku-4-5-20251001")
+            except Exception:
+                # If AnthropicProvider cannot be constructed, treat as no fallback
+                return None

         return None
@@ -77,11 +90,16 @@ SUMMARY TO EVALUATE:
 Respond with JSON: {{"passes": true/false, "explanation": "..."}}"""

         try:
+            # Compute fallback provider once so we do not create multiple instances
+            fallback_provider = self._get_fallback_provider()
+
             # 1. Use injected provider
             if self._provider:
                 active_provider = self._provider
-            # 2. Check if _get_client was MOCKED (legacy tests) or use Agnostic Fallback
-            elif hasattr(self._get_client, "return_value") or not self._get_fallback_provider():
+            # 2. Legacy path: anthropic client mocked in tests takes precedence,
+            #    or no fallback provider is available.
+            elif hasattr(self._get_client, "return_value") or fallback_provider is None:
                 # Use legacy Anthropic client (e.g. when tests mock _get_client, or no env keys set)
                 client = self._get_client()
                 response = client.messages.create(
                     model="claude-haiku-4-5-20251001",
@@ -90,7 +108,8 @@ Respond with JSON: {{"passes": true/false, "explanation": "..."}}"""
                 )
                 return self._parse_json_result(response.content[0].text.strip())
             else:
-                active_provider = self._get_fallback_provider()
+                # Use env-based fallback (LiteLLM or AnthropicProvider)
+                active_provider = fallback_provider

             response = active_provider.complete(
                 messages=[{"role": "user", "content": prompt}],
@@ -36,6 +36,7 @@ from __future__ import annotations
 import asyncio
 import json
 import logging
+import time
 from dataclasses import dataclass, field
 from datetime import UTC, datetime
 from pathlib import Path
@@ -108,6 +109,9 @@ class QueenPhaseState:
     prompt_staging: str = ""
     prompt_running: str = ""

+    # Default skill operational protocols — appended to every phase prompt
+    protocols_prompt: str = ""
+
     def get_current_tools(self) -> list:
         """Return tools for the current phase."""
         if self.phase == "planning":
@@ -132,7 +136,12 @@ class QueenPhaseState:
         from framework.agents.queen.queen_memory import format_for_injection

         memory = format_for_injection()
-        return base + ("\n\n" + memory if memory else "")
+        parts = [base]
+        if self.protocols_prompt:
+            parts.append(self.protocols_prompt)
+        if memory:
+            parts.append(memory)
+        return "\n\n".join(parts)
     async def _emit_phase_event(self) -> None:
         """Publish a QUEEN_PHASE_CHANGED event so the frontend updates the tag."""
@@ -451,10 +460,11 @@ async def _start_trigger_timer(session: Any, trigger_id: str, tdef: Any) -> None
     else:
         await asyncio.sleep(float(interval_minutes) * 60)

-    # Record next fire time for introspection
+    # Record next fire time for introspection (monotonic, matches routes)
     fire_times = getattr(session, "trigger_next_fire", None)
     if fire_times is not None:
-        fire_times[trigger_id] = datetime.now(tz=UTC).isoformat()
+        _next_delay = float(interval_minutes) * 60 if interval_minutes else 60
+        fire_times[trigger_id] = time.monotonic() + _next_delay

     # Gate on worker being loaded
     if getattr(session, "worker_runtime", None) is None:
@@ -2699,6 +2709,30 @@ def register_queen_lifecycle_tools(
         """Get the session's event bus for querying history."""
         return getattr(session, "event_bus", None)

+    def _get_worker_name() -> str | None:
+        """Return the worker agent directory name, used for diary lookups."""
+        p = getattr(session, "worker_path", None)
+        return p.name if p else None
+
+    def _format_diary(max_runs: int) -> str:
+        """Read recent run digests from disk — no EventBus required."""
+        agent_name = _get_worker_name()
+        if not agent_name:
+            return "No worker loaded — diary unavailable."
+        from framework.agents.worker_memory import read_recent_digests
+
+        entries = read_recent_digests(agent_name, max_runs)
+        if not entries:
+            return (
+                f"No run digests for '{agent_name}' yet. "
+                "Digests are written at the end of each completed run."
+            )
+        lines = [f"Worker '{agent_name}' — {len(entries)} recent run digest(s):", ""]
+        for _run_id, content in entries:
+            lines.append(content)
+            lines.append("")
+        return "\n".join(lines).rstrip()
+
+    # Tiered cooldowns: summary is free, detail has short cooldown, full keeps 30s
+    _COOLDOWN_FULL = 30.0
+    _COOLDOWN_DETAIL = 10.0
@@ -2853,6 +2887,16 @@ def register_queen_lifecycle_tools(
         else:
             parts.append("No issues detected")

+        # Latest subagent progress (if any delegation is in flight)
+        bus = _get_event_bus()
+        if bus:
+            sa_reports = bus.get_history(event_type=EventType.SUBAGENT_REPORT, limit=1)
+            if sa_reports:
+                latest = sa_reports[0]
+                sa_msg = str(latest.data.get("message", ""))[:200]
+                ago = _format_time_ago(latest.timestamp)
+                parts.append(f"Latest subagent update ({ago}): {sa_msg}")
+
         return ". ".join(parts) + "."

     def _format_activity(bus: EventBus, preamble: dict[str, Any], last_n: int) -> str:
@@ -2980,6 +3024,10 @@ def register_queen_lifecycle_tools(
                 duration = evt.data.get("duration_s")
                 dur_str = f", {duration:.1f}s" if duration else ""
                 lines.append(f"  {name} ({node}) — {status}{dur_str}")
+                result_text = evt.data.get("result", "")
+                if result_text:
+                    preview = str(result_text)[:300].replace("\n", " ")
+                    lines.append(f"    Result: {preview}")
         else:
             lines.append("No recent tool calls.")
@@ -3146,15 +3194,19 @@ def register_queen_lifecycle_tools(
                 for evt in running
             ]
         if tool_completed:
-            result["recent_tool_calls"] = [
-                {
+            recent_calls = []
+            for evt in tool_completed[:last_n]:
+                entry: dict[str, Any] = {
                     "tool": evt.data.get("tool_name"),
                     "error": bool(evt.data.get("is_error")),
                     "node": evt.node_id,
                     "time": evt.timestamp.isoformat(),
                 }
-                for evt in tool_completed[:last_n]
-            ]
+                result_text = evt.data.get("result", "")
+                if result_text:
+                    entry["result_preview"] = str(result_text)[:300]
+                recent_calls.append(entry)
+            result["recent_tool_calls"] = recent_calls

         # Node transitions
         edges = bus.get_history(event_type=EventType.EDGE_TRAVERSED, limit=last_n)
@@ -3207,6 +3259,18 @@ def register_queen_lifecycle_tools(
         if issues:
             result["issues"] = issues

+        # Subagent activity (in-flight progress from delegated subagents)
+        sa_reports = bus.get_history(event_type=EventType.SUBAGENT_REPORT, limit=last_n)
+        if sa_reports:
+            result["subagent_activity"] = [
+                {
+                    "subagent": evt.data.get("subagent_id"),
+                    "message": str(evt.data.get("message", ""))[:300],
+                    "time": evt.timestamp.isoformat(),
+                }
+                for evt in sa_reports[:last_n]
+            ]
+
         # Constraint violations
         violations = bus.get_history(event_type=EventType.CONSTRAINT_VIOLATION, limit=5)
         if violations:
@@ -3271,16 +3335,17 @@ def register_queen_lifecycle_tools(
         import time as _time

         # --- Tiered cooldown ---
+        # diary is free (file reads only), summary is free, detail has 10s, full has 30s
         now = _time.monotonic()
         if focus == "full":
             cooldown = _COOLDOWN_FULL
             tier = "full"
-        elif focus is not None:
+        elif focus == "diary" or focus is None:
+            cooldown = 0.0
+            tier = focus or "summary"
+        else:
             cooldown = _COOLDOWN_DETAIL
             tier = "detail"
-        else:
-            cooldown = 0.0
-            tier = "summary"

         elapsed_since = now - _status_last_called.get(tier, 0.0)
         if elapsed_since < cooldown:
@@ -3296,6 +3361,10 @@ def register_queen_lifecycle_tools(
             )
         _status_last_called[tier] = now

+        # --- Diary: pure file reads, no runtime required ---
+        if focus == "diary":
+            return _format_diary(last_n)
+
         # --- Runtime check ---
         runtime = _get_runtime()
         if runtime is None:
@@ -3345,7 +3414,7 @@ def register_queen_lifecycle_tools(
             else:
                 return (
                     f"Unknown focus '{focus}'. "
-                    "Valid options: activity, memory, tools, issues, progress, full."
+                    "Valid options: diary, activity, memory, tools, issues, progress, full."
                 )
         except Exception as exc:
             logger.exception("get_worker_status error")
@@ -3356,6 +3425,8 @@ def register_queen_lifecycle_tools(
         description=(
             "Check on the worker. Returns a brief prose summary by default. "
             "Use 'focus' to drill into specifics:\n"
+            "- diary: persistent run digests from past executions — read this first "
+            "before digging into live runtime logs\n"
             "- activity: current node, transitions, latest LLM output\n"
             "- memory: worker's accumulated knowledge and state\n"
             "- tools: running and recent tool calls\n"
@@ -3368,8 +3439,11 @@ def register_queen_lifecycle_tools(
             "properties": {
                 "focus": {
                     "type": "string",
-                    "enum": ["activity", "memory", "tools", "issues", "progress", "full"],
-                    "description": ("Aspect to inspect. Omit for a brief summary."),
+                    "enum": ["diary", "activity", "memory", "tools", "issues", "progress", "full"],
+                    "description": (
+                        "Aspect to inspect. Omit for a brief summary. "
+                        "Use 'diary' to read persistent run history before checking live logs."
+                    ),
                 },
                 "last_n": {
                     "type": "integer",
@@ -1,8 +1,9 @@
-"""Tool for the queen to write to her episodic memory.
+"""Tools for the queen to read and write episodic memory.

 The queen can consciously record significant moments during a session — like
-writing in a diary. Semantic memory (MEMORY.md) is updated automatically at
-session end and is never written by the queen directly.
+writing in a diary — and recall past diary entries when needed. Semantic
+memory (MEMORY.md) is updated automatically at session end and is never
+written by the queen directly.
 """

 from __future__ import annotations
@@ -33,6 +34,67 @@ def write_to_diary(entry: str) -> str:
     return "Diary entry recorded."


+def recall_diary(query: str = "", days_back: int = 7) -> str:
+    """Search recent diary entries (episodic memory).
+
+    Use this when the user asks about what happened in the past — "what did we
+    do yesterday?", "what happened last week?", "remind me about the pipeline
+    issue", etc. Also use it proactively when you need context from recent
+    sessions to answer a question or make a decision.
+
+    Args:
+        query: Optional keyword or phrase to filter entries. If empty, all
+            recent entries are returned.
+        days_back: How many days to look back (1–30). Defaults to 7.
+    """
+    from datetime import date, timedelta
+
+    from framework.agents.queen.queen_memory import read_episodic_memory
+
+    days_back = max(1, min(days_back, 30))
+    today = date.today()
+    results: list[str] = []
+    total_chars = 0
+    char_budget = 12_000
+
+    for offset in range(days_back):
+        d = today - timedelta(days=offset)
+        content = read_episodic_memory(d)
+        if not content:
+            continue
+        # If a query is given, only include entries that mention it
+        if query:
+            # Check each section (split by ###) for relevance
+            sections = content.split("### ")
+            matched = [s for s in sections if query.lower() in s.lower()]
+            if not matched:
+                continue
+            content = "### ".join(matched)
+        label = d.strftime("%B %-d, %Y")
+        if d == today:
+            label = f"Today — {label}"
+        entry = f"## {label}\n\n{content}"
+        if total_chars + len(entry) > char_budget:
+            remaining = char_budget - total_chars
+            if remaining > 200:
+                # Fit a partial entry within budget
+                trimmed = content[: remaining - 100] + "\n\n…(truncated)"
+                results.append(f"## {label}\n\n{trimmed}")
+            else:
+                results.append(f"## {label}\n\n(truncated — hit size limit)")
+            break
+        results.append(entry)
+        total_chars += len(entry)

+    if not results:
+        if query:
+            return f"No diary entries matching '{query}' in the last {days_back} days."
+        return f"No diary entries found in the last {days_back} days."
+
+    return "\n\n---\n\n".join(results)
+
+
 def register_queen_memory_tools(registry: ToolRegistry) -> None:
-    """Register the episodic memory tool into the queen's tool registry."""
+    """Register the episodic memory tools into the queen's tool registry."""
     registry.register_function(write_to_diary)
+    registry.register_function(recall_diary)
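The char-budget loop in `recall_diary` is a greedy packing pattern: append whole entries until the budget would be exceeded, then trim the last entry if there is meaningful room left. A simplified, file-free sketch of the same idea (it omits the "hit size limit" placeholder of the original):

```python
def pack_entries(entries: list[str], char_budget: int = 12_000) -> list[str]:
    """Greedy packing like recall_diary: stop at the budget, trim the last entry if room."""
    packed: list[str] = []
    total = 0
    for entry in entries:
        if total + len(entry) > char_budget:
            remaining = char_budget - total
            if remaining > 200:
                # Leave headroom for the truncation marker, as the original does.
                packed.append(entry[: remaining - 100] + "…(truncated)")
            break
        packed.append(entry)
        total += len(entry)
    return packed
```

The `remaining > 200` guard avoids emitting uselessly tiny fragments when the budget is nearly spent.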
@@ -44,6 +44,7 @@ def register_worker_monitoring_tools(
     storage_path: Path,
     stream_id: str = "monitoring",
     worker_graph_id: str | None = None,
+    default_session_id: str | None = None,
 ) -> int:
     """Register worker monitoring tools bound to *event_bus* and *storage_path*.

@@ -55,6 +56,12 @@ def register_worker_monitoring_tools(
         stream_id: Stream ID used when emitting events.
         worker_graph_id: The primary worker graph's ID. Included in health summary
             so the judge can populate ticket identity fields accurately.
+        default_session_id: When set, ``get_worker_health_summary`` uses this
+            session ID as the default instead of auto-discovering
+            the most-recent-by-mtime session. Callers should pass
+            the queen's own session ID so that after a cold-restore
+            the monitoring tool reads the correct worker session
+            rather than a stale orphaned one.

     Returns:
         Number of tools registered.
@@ -97,23 +104,29 @@ def register_worker_monitoring_tools(
         if not sessions_dir.exists():
             return json.dumps({"error": "No sessions found — worker has not started yet"})

-        candidates = [
-            d for d in sessions_dir.iterdir() if d.is_dir() and (d / "state.json").exists()
-        ]
-        if not candidates:
-            return json.dumps({"error": "No sessions found — worker has not started yet"})
+        # Prefer the queen's own session ID (set at registration time) over
+        # mtime-based discovery, which can pick a stale orphaned session after
+        # a cold-restore when a newer-but-empty session directory exists.
+        if default_session_id and (sessions_dir / default_session_id).is_dir():
+            session_id = default_session_id
+        else:
+            candidates = [
+                d for d in sessions_dir.iterdir() if d.is_dir() and (d / "state.json").exists()
+            ]
+            if not candidates:
+                return json.dumps({"error": "No sessions found — worker has not started yet"})

-        def _sort_key(d: Path):
-            try:
-                state = json.loads((d / "state.json").read_text(encoding="utf-8"))
-                # in_progress/running sorts before completed/failed
-                priority = 0 if state.get("status", "") in ("in_progress", "running") else 1
-                return (priority, -d.stat().st_mtime)
-            except Exception:
-                return (2, 0)
+            def _sort_key(d: Path):
+                try:
+                    state = json.loads((d / "state.json").read_text(encoding="utf-8"))
+                    # in_progress/running sorts before completed/failed
+                    priority = 0 if state.get("status", "") in ("in_progress", "running") else 1
+                    return (priority, -d.stat().st_mtime)
+                except Exception:
+                    return (2, 0)

-        candidates.sort(key=_sort_key)
-        session_id = candidates[0].name
+            candidates.sort(key=_sort_key)
+            session_id = candidates[0].name

         # Resolve log paths
         session_dir = storage_path / "sessions" / session_id
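The `_sort_key` ordering above ranks sessions by a (priority, negated mtime) tuple: live sessions first, then newest first, with unreadable state files last. A sketch with in-memory dicts in place of reading `state.json` from disk:

```python
def sort_key(state: dict, mtime: float) -> tuple[int, float]:
    """Same ordering as _sort_key: live sessions first, then newest first."""
    priority = 0 if state.get("status", "") in ("in_progress", "running") else 1
    return (priority, -mtime)

sessions = [
    ({"status": "completed"}, 300.0),
    ({"status": "running"}, 100.0),
    ({"status": "completed"}, 200.0),
]
sessions.sort(key=lambda s: sort_key(*s))
# The running session wins despite being oldest; completed ones follow newest-first.
```

Negating mtime lets a single ascending sort express "newest first" inside each priority bucket.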
@@ -126,8 +126,13 @@ export default function CredentialsModal({
         // No real path — no credentials to show
         setRows([]);
       }
-    } catch {
-      // Backend unavailable — fall back to legacy props or empty
+    } catch (err) {
+      // Surface the error so the modal shows a meaningful message
+      const message =
+        err instanceof Error ? err.message : "Failed to check credentials";
+      setError(message);
+
+      // Fall back to legacy props or empty rows
       if (legacyCredentials) {
         setRows(legacyCredentials.map(c => ({
           ...c,
@@ -289,11 +294,18 @@ export default function CredentialsModal({
       {/* Status banner */}
       {!loading && (
         <div className={`mx-5 mt-4 px-3 py-2.5 rounded-lg border text-xs font-medium flex items-center gap-2 ${
-          allRequiredMet
-            ? "bg-emerald-500/10 border-emerald-500/20 text-emerald-600"
-            : "bg-destructive/5 border-destructive/20 text-destructive"
+          error && rows.length === 0
+            ? "bg-destructive/5 border-destructive/20 text-destructive"
+            : allRequiredMet
+              ? "bg-emerald-500/10 border-emerald-500/20 text-emerald-600"
+              : "bg-destructive/5 border-destructive/20 text-destructive"
         }`}>
-          {allRequiredMet ? (
+          {error && rows.length === 0 ? (
+            <>
+              <AlertCircle className="w-3.5 h-3.5 flex-shrink-0" />
+              <span className="break-words">Failed to check credentials: {error}</span>
+            </>
+          ) : allRequiredMet ? (
             <>
               <Shield className="w-3.5 h-3.5" />
               {rows.length === 0
@@ -73,7 +73,7 @@ function useDraftChromeColors() {
 type DraftNodeStatus = "pending" | "running" | "complete" | "error";

 interface DraftGraphProps {
-  draft: DraftGraphData;
+  draft: DraftGraphData | null;
   onNodeClick?: (node: DraftNode) => void;
   /** Runtime node ID → list of original draft node IDs (post-dissolution mapping). */
   flowchartMap?: Record<string, string[]>;
@@ -83,6 +83,8 @@ interface DraftGraphProps {
   onRuntimeNodeClick?: (runtimeNodeId: string) => void;
   /** True while the queen is building the agent from the draft. */
   building?: boolean;
+  /** True while the queen is designing the draft (no draft yet). Shows a spinner. */
+  loading?: boolean;
   /** Called when the user clicks Run. */
   onRun?: () => void;
   /** Called when the user clicks Pause. */
@@ -355,7 +357,7 @@ function Tooltip({ node, style }: { node: DraftNode; style: React.CSSProperties
   );
 }

-export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNodes, onRuntimeNodeClick, building, onRun, onPause, runState = "idle" }: DraftGraphProps) {
+export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNodes, onRuntimeNodeClick, building, loading, onRun, onPause, runState = "idle" }: DraftGraphProps) {
   const [hoveredNode, setHoveredNode] = useState<string | null>(null);
   const [mousePos, setMousePos] = useState<{ x: number; y: number } | null>(null);
   const containerRef = useRef<HTMLDivElement>(null);
@@ -463,7 +465,8 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo

   const hasStatusOverlay = Object.keys(nodeStatuses).length > 0;

-  const { nodes, edges } = draft;
+  const nodes = draft?.nodes ?? [];
+  const edges = draft?.edges ?? [];

   const idxMap = useMemo(
     () => Object.fromEntries(nodes.map((n, i) => [n.id, i])),
@@ -656,25 +659,6 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
     return { layers, nodeW, firstColX, nodeXPositions, backEdgeOverflow, maxContentRight };
   }, [nodes, forwardEdges, backEdges.length, containerW, flowchartMap, idxMap]);

-  if (nodes.length === 0) {
-    return (
-      <div className="flex flex-col h-full">
-        <div className="px-4 pt-4 pb-2">
-          <p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">
-            Draft
-          </p>
-        </div>
-        <div className="flex-1 flex items-center justify-center px-4">
-          <p className="text-xs text-muted-foreground/60 text-center italic">
-            No draft graph yet.
-            <br />
-            Describe your workflow to get started.
-          </p>
-        </div>
-      </div>
-    );
-  }
-
   const { layers, nodeW, nodeXPositions, backEdgeOverflow, maxContentRight } = layout;

   const maxLayer = nodes.length > 0 ? Math.max(...layers) : 0;
@@ -982,6 +966,31 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
     );
   };

+  if (loading || !draft || nodes.length === 0) {
+    return (
+      <div className="flex flex-col h-full">
+        <div className="px-4 pt-3 pb-1.5 flex items-center gap-2">
+          <p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">Draft</p>
+          <span className="text-[9px] font-mono font-medium rounded px-1 py-0.5 leading-none border text-amber-500/60 border-amber-500/20">planning</span>
+        </div>
+        <div className="flex-1 flex flex-col items-center justify-center gap-3">
+          {loading || !draft ? (
+            <>
+              <Loader2 className="w-5 h-5 animate-spin text-muted-foreground/40" />
+              <p className="text-xs text-muted-foreground/50">Designing flowchart…</p>
+            </>
+          ) : (
+            <p className="text-xs text-muted-foreground/60 text-center italic">
+              No draft graph yet.
+              <br />
+              Describe your workflow to get started.
+            </p>
+          )}
+        </div>
+      </div>
+    );
+  }
+
   return (
     <div className="flex flex-col h-full">
       {/* Header */}
@@ -196,6 +196,102 @@ describe("sseEventToChatMessage", () => {
    );
  });

  it("different inner_turn values produce different message IDs", () => {
    const e1 = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "first response", iteration: 0, inner_turn: 0 },
    });
    const e2 = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "after tool call", iteration: 0, inner_turn: 1 },
    });
    const r1 = sseEventToChatMessage(e1, "t");
    const r2 = sseEventToChatMessage(e2, "t");
    expect(r1!.id).not.toBe(r2!.id);
  });

  it("same inner_turn produces same ID (streaming upsert within one LLM call)", () => {
    const e1 = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "partial", iteration: 0, inner_turn: 1 },
    });
    const e2 = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "partial response", iteration: 0, inner_turn: 1 },
    });
    expect(sseEventToChatMessage(e1, "t")!.id).toBe(
      sseEventToChatMessage(e2, "t")!.id,
    );
  });

  it("absent inner_turn produces same ID as inner_turn=0 (backward compat)", () => {
    const withField = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "hello", iteration: 2, inner_turn: 0 },
    });
    const withoutField = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "hello", iteration: 2 },
    });
    expect(sseEventToChatMessage(withField, "t")!.id).toBe(
      sseEventToChatMessage(withoutField, "t")!.id,
    );
  });

  it("inner_turn=0 produces no suffix (matches old ID format)", () => {
    const event = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "hello", iteration: 3, inner_turn: 0 },
    });
    const result = sseEventToChatMessage(event, "t");
    expect(result!.id).toBe("stream-exec-1-3-queen");
  });

  it("inner_turn>0 adds -t suffix to ID", () => {
    const event = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "hello", iteration: 3, inner_turn: 2 },
    });
    const result = sseEventToChatMessage(event, "t");
    expect(result!.id).toBe("stream-exec-1-3-t2-queen");
  });

  it("llm_text_delta also uses inner_turn for distinct IDs", () => {
    const e1 = makeEvent({
      type: "llm_text_delta",
      node_id: "research",
      execution_id: "exec-1",
      data: { snapshot: "first", inner_turn: 0 },
    });
    const e2 = makeEvent({
      type: "llm_text_delta",
      node_id: "research",
      execution_id: "exec-1",
      data: { snapshot: "second", inner_turn: 1 },
    });
    const r1 = sseEventToChatMessage(e1, "t");
    const r2 = sseEventToChatMessage(e2, "t");
    expect(r1!.id).not.toBe(r2!.id);
    expect(r1!.id).toBe("stream-exec-1-research");
    expect(r2!.id).toBe("stream-exec-1-t1-research");
  });

  it("uses timestamp fallback when both turnId and execution_id are null", () => {
    const event = makeEvent({
      type: "client_output_delta",
@@ -56,10 +56,15 @@ export function sseEventToChatMessage(
  const iterTid = iter != null ? String(iter) : tid;
  const iterIdKey = eid && iterTid ? `${eid}-${iterTid}` : eid || iterTid || `t-${Date.now()}`;

  // Distinguish multiple LLM calls within the same iteration (inner tool loop).
  // inner_turn=0 (or absent) produces no suffix for backward compat.
  const innerTurn = event.data?.inner_turn as number | undefined;
  const innerSuffix = innerTurn != null && innerTurn > 0 ? `-t${innerTurn}` : "";

  const snapshot = (event.data?.snapshot as string) || (event.data?.content as string) || "";
  if (!snapshot) return null;
  return {
    id: `stream-${iterIdKey}-${event.node_id}`,
    id: `stream-${iterIdKey}${innerSuffix}-${event.node_id}`,
    agent: agentDisplayName || event.node_id || "Agent",
    agentColor: "",
    content: snapshot,
@@ -91,10 +96,13 @@ export function sseEventToChatMessage(
    }

    case "llm_text_delta": {
      const llmInnerTurn = event.data?.inner_turn as number | undefined;
      const llmInnerSuffix = llmInnerTurn != null && llmInnerTurn > 0 ? `-t${llmInnerTurn}` : "";

      const snapshot = (event.data?.snapshot as string) || (event.data?.content as string) || "";
      if (!snapshot) return null;
      return {
        id: `stream-${idKey}-${event.node_id}`,
        id: `stream-${idKey}${llmInnerSuffix}-${event.node_id}`,
        agent: event.node_id || "Agent",
        agentColor: "",
        content: snapshot,
@@ -113,7 +113,13 @@ function NewTabPopover({ open, onClose, anchorRef, discoverAgents, onFromScratch
  useEffect(() => {
    if (open && anchorRef.current) {
      const rect = anchorRef.current.getBoundingClientRect();
      setPos({ top: rect.bottom + 4, left: rect.left });
      const POPUP_WIDTH = 240; // w-60 = 15rem = 240px
      const overflows = rect.left + POPUP_WIDTH > window.innerWidth - 8;
      console.log("Anchor rect:", rect, "Overflows:", overflows);
      setPos({
        top: rect.bottom + 4,
        left: overflows ? rect.right - POPUP_WIDTH : rect.left,
      });
    }
  }, [open, anchorRef]);
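The overflow clamp in this hunk is a one-line rule: left-align the popup with the anchor unless it would cross the right viewport edge, then right-align it with the anchor instead. A minimal language-neutral sketch (hypothetical helper name; constants 240 and 8 come from the diff):

```python
def popover_left(anchor_left: float, anchor_right: float,
                 viewport_width: float, popup_width: float = 240,
                 margin: float = 8) -> float:
    """Return the popup's left coordinate, clamped against the right edge."""
    overflows = anchor_left + popup_width > viewport_width - margin
    return anchor_right - popup_width if overflows else anchor_left

print(popover_left(100, 140, 1280))   # fits: 100
print(popover_left(1200, 1240, 1280)) # overflows: 1240 - 240 = 1000
```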
@@ -1578,6 +1584,16 @@ export default function Workspace() {
  const chatMsg = sseEventToChatMessage(event, agentType, displayName, currentTurn);
  if (isQueen) console.log('[QUEEN] chatMsg:', chatMsg?.id, chatMsg?.content?.slice(0, 50), 'turn:', currentTurn);
  if (chatMsg && !suppressQueenMessages) {
    // Queen emits multiple client_output_delta / llm_text_delta snapshots
    // across iterations and inner tool-loop turns. Build a stable ID that
    // groups streaming deltas for the *same* output (same execution +
    // iteration + inner_turn) into one bubble, while keeping distinct
    // outputs as separate bubbles so earlier text isn't overwritten.
    if (isQueen && (event.type === "client_output_delta" || event.type === "llm_text_delta") && event.execution_id) {
      const iter = event.data?.iteration ?? 0;
      const inner = event.data?.inner_turn ?? 0;
      chatMsg.id = `queen-stream-${event.execution_id}-${iter}-${inner}`;
    }
    if (isQueen) {
      chatMsg.role = role;
      chatMsg.phase = queenPhaseRef.current[agentType] as ChatMessage["phase"];
@@ -2764,7 +2780,6 @@ export default function Workspace() {

  const activeWorkerLabel = activeAgentState?.displayName || formatAgentDisplayName(baseAgentType(activeWorker));

  return (
    <div className="flex flex-col h-screen bg-background overflow-hidden">
      <TopBar
@@ -2813,10 +2828,10 @@ export default function Workspace() {
        <div className="flex flex-1 min-h-0">

          {/* ── Pipeline graph + chat ──────────────────────────────────── */}
          <div className={`${((activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building") && activeAgentState?.draftGraph) || activeAgentState?.originalDraft ? "w-[500px] min-w-[400px]" : "w-[300px] min-w-[240px]"} bg-card/30 flex flex-col border-r border-border/30 transition-[width] duration-200`}>
          <div className={`${activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building" || activeAgentState?.originalDraft ? "w-[500px] min-w-[400px]" : "w-[300px] min-w-[240px]"} bg-card/30 flex flex-col border-r border-border/30 transition-[width] duration-200`}>
            <div className="flex-1 min-h-0">
              {(activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building") && activeAgentState?.draftGraph ? (
                <DraftGraph draft={activeAgentState.draftGraph} building={activeAgentState?.queenBuilding} onRun={handleRun} onPause={handlePause} runState={activeAgentState?.workerRunState ?? "idle"} />
              {activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building" ? (
                <DraftGraph draft={activeAgentState?.draftGraph ?? null} loading={!activeAgentState?.draftGraph} building={activeAgentState?.queenBuilding} onRun={handleRun} onPause={handlePause} runState={activeAgentState?.workerRunState ?? "idle"} />
              ) : activeAgentState?.originalDraft ? (
                <DraftGraph
                  draft={activeAgentState.originalDraft}
@@ -3089,7 +3104,15 @@ export default function Workspace() {
          agentLabel={activeWorkerLabel}
          agentPath={credentialAgentPath || activeAgentState?.agentPath || (!activeWorker.startsWith("new-agent") ? activeWorker : undefined)}
          open={credentialsOpen}
          onClose={() => { setCredentialsOpen(false); setCredentialAgentPath(null); setDismissedBanner(null); }}
          onClose={() => {
            setCredentialsOpen(false);
            setCredentialAgentPath(null);
            // Keep credentials_required error set — clearing it here triggers
            // the auto-load effect which retries session creation immediately,
            // causing an infinite modal loop when credentials are still missing.
            // The error is only cleared in onCredentialChange (below) when the
            // user actually saves valid credentials.
          }}
          credentials={activeSession?.credentials || []}
          onCredentialChange={() => {
            // Clear credential error so the auto-load effect retries session creation
+2 -1
@@ -1,6 +1,6 @@
[project]
name = "framework"
version = "0.5.1"
version = "0.7.1"
description = "Goal-driven agent runtime with Builder-friendly observability"
readme = "README.md"
requires-python = ">=3.11"
@@ -11,6 +11,7 @@ dependencies = [
    "litellm>=1.81.0",
    "mcp>=1.0.0",
    "fastmcp>=2.0.0",
    "croniter>=1.4.0",
    "tools",
]
@@ -1,140 +0,0 @@
#!/usr/bin/env python3
"""
Setup script for Aden Hive Framework MCP Server

This script installs the framework and configures the MCP server.
"""

import json
import logging
import subprocess
import sys
from pathlib import Path

logger = logging.getLogger(__name__)


def setup_logger():
    """Configure logger for CLI usage with colored output."""
    if not logger.handlers:
        handler = logging.StreamHandler(sys.stdout)
        formatter = logging.Formatter("%(message)s")
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)


class Colors:
    """ANSI color codes for terminal output."""

    GREEN = "\033[0;32m"
    YELLOW = "\033[1;33m"
    RED = "\033[0;31m"
    BLUE = "\033[0;34m"
    NC = "\033[0m"  # No Color


def log_step(message: str):
    """Log a colored step message."""
    logger.info(f"{Colors.YELLOW}{message}{Colors.NC}")


def log_success(message: str):
    """Log a success message."""
    logger.info(f"{Colors.GREEN}✓ {message}{Colors.NC}")


def log_error(message: str):
    """Log an error message."""
    logger.error(f"{Colors.RED}✗ {message}{Colors.NC}")


def run_command(cmd: list, error_msg: str) -> bool:
    """Run a command and return success status."""
    try:
        subprocess.run(
            cmd,
            check=True,
            capture_output=True,
            text=True,
            encoding="utf-8",
        )
        return True
    except subprocess.CalledProcessError as e:
        log_error(error_msg)
        logger.error(f"Error output: {e.stderr}")
        return False


def main():
    """Main setup function."""
    setup_logger()
    logger.info("=== Aden Hive Framework MCP Server Setup ===")
    logger.info("")

    # Get script directory
    script_dir = Path(__file__).parent.absolute()

    # Step 1: Install framework package
    log_step("Step 1: Installing framework package...")
    if not run_command(
        [sys.executable, "-m", "pip", "install", "-e", str(script_dir)],
        "Failed to install framework package",
    ):
        sys.exit(1)
    log_success("Framework package installed")
    logger.info("")

    # Step 2: Install MCP dependencies
    log_step("Step 2: Installing MCP dependencies...")
    if not run_command(
        [sys.executable, "-m", "pip", "install", "mcp", "fastmcp"],
        "Failed to install MCP dependencies",
    ):
        sys.exit(1)
    log_success("MCP dependencies installed")
    logger.info("")

    # Step 3: Verify MCP configuration
    log_step("Step 3: Verifying MCP server configuration...")
    mcp_config_path = script_dir / ".mcp.json"

    if mcp_config_path.exists():
        log_success("MCP configuration found at .mcp.json")
        logger.info("Configuration:")
        with open(mcp_config_path, encoding="utf-8") as f:
            config = json.load(f)
        logger.info(json.dumps(config, indent=2))
    else:
        log_success("No .mcp.json needed (MCP servers configured at repo root)")
    logger.info("")

    # Step 4: Test framework import
    log_step("Step 4: Testing framework import...")
    try:
        subprocess.run(
            [sys.executable, "-c", "import framework; print('OK')"],
            check=True,
            capture_output=True,
            text=True,
            encoding="utf-8",
        )
        log_success("Framework module verified")
    except subprocess.CalledProcessError as e:
        log_error("Failed to import framework module")
        logger.error(f"Error: {e.stderr}")
        sys.exit(1)
    logger.info("")

    # Success summary
    logger.info(f"{Colors.GREEN}=== Setup Complete ==={Colors.NC}")
    logger.info("")
    logger.info("The framework is now ready to use!")
    logger.info("")
    logger.info(f"{Colors.BLUE}MCP Configuration location:{Colors.NC}")
    logger.info(f"  {mcp_config_path}")
    logger.info("")


if __name__ == "__main__":
    main()
@@ -0,0 +1,44 @@
# Dummy Agent Tests (Level 2)

End-to-end tests that run real LLM calls against deterministic graph structures. Not part of CI — run manually to verify the executor works with real providers.

## Quick Start

```bash
cd core
uv run python tests/dummy_agents/run_all.py
```

The script detects available credentials and prompts you to pick a provider. You need at least one of:

- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GEMINI_API_KEY`
- `ZAI_API_KEY`
- Claude Code / Codex / Kimi subscription

## Verbose Mode

Show live LLM logs (tool calls, judge verdicts, node traversal):

```bash
uv run python tests/dummy_agents/run_all.py --verbose
```

## What's Tested

| Agent | Tests | What it covers |
|-------|-------|----------------|
| echo | 2 | Single-node lifecycle, basic set_output |
| pipeline | 4 | Multi-node traversal, input_mapping, conversation modes |
| branch | 3 | Conditional edges, LLM-driven routing |
| parallel_merge | 4 | Fan-out/fan-in, failure strategies |
| retry | 4 | Retry mechanics, exhaustion, ON_FAILURE edges |
| feedback_loop | 3 | Feedback cycles, max_node_visits |
| worker | 4 | Real MCP tools (example_tool, get_current_time, save_data/load_data) |

## Notes

- Tests are **auto-skipped** in regular `pytest` runs (no LLM configured)
- Worker tests start the `hive-tools` MCP server as a subprocess
- Typical runtime: ~1-3 min depending on provider
@@ -0,0 +1,3 @@
# Level 2: Dummy Agent Tests
# End-to-end graph execution tests with real LLM calls.
# NOT part of regular CI — run manually with: uv run python tests/dummy_agents/run_all.py
@@ -0,0 +1,140 @@
"""Shared fixtures for dummy agent end-to-end tests.

These tests use real LLM providers — they are NOT part of regular CI.
Run via: cd core && uv run python tests/dummy_agents/run_all.py
"""

from __future__ import annotations

from pathlib import Path

import pytest

from framework.graph.executor import GraphExecutor, ParallelExecutionConfig
from framework.graph.goal import Goal
from framework.llm.litellm import LiteLLMProvider
from framework.runtime.core import Runtime

# ── module-level state set by run_all.py ─────────────────────────────

_selected_model: str | None = None
_selected_api_key: str | None = None
_selected_extra_headers: dict[str, str] | None = None
_selected_api_base: str | None = None


def set_llm_selection(
    model: str,
    api_key: str,
    extra_headers: dict[str, str] | None = None,
    api_base: str | None = None,
) -> None:
    """Called by run_all.py after user selects a provider."""
    global _selected_model, _selected_api_key, _selected_extra_headers, _selected_api_base
    _selected_model = model
    _selected_api_key = api_key
    _selected_extra_headers = extra_headers
    _selected_api_base = api_base


# ── collection hook: skip entire directory when not configured ───────


def pytest_collection_modifyitems(config, items):
    """Skip all dummy_agents tests when no LLM is configured.

    This prevents these tests from running in regular CI. They only run
    when launched via run_all.py (which calls set_llm_selection first).
    """
    if _selected_model is not None:
        return  # LLM configured, run normally

    skip = pytest.mark.skip(
        reason="Dummy agent tests require a real LLM. "
        "Run via: cd core && uv run python tests/dummy_agents/run_all.py"
    )
    for item in items:
        if "dummy_agents" in str(item.fspath):
            item.add_marker(skip)


# ── fixtures ─────────────────────────────────────────────────────────


@pytest.fixture(scope="session")
def llm_provider():
    """Real LLM provider using the user-selected model."""
    if _selected_model is None or _selected_api_key is None:
        pytest.skip("No LLM selected — run via run_all.py")
    kwargs = {"model": _selected_model, "api_key": _selected_api_key}
    if _selected_extra_headers:
        kwargs["extra_headers"] = _selected_extra_headers
    if _selected_api_base:
        kwargs["api_base"] = _selected_api_base
    return LiteLLMProvider(**kwargs)


@pytest.fixture(scope="session")
def tool_registry():
    """Load hive-tools MCP server and return a ToolRegistry with real tools.

    Session-scoped so the MCP server is started once and reused across tests.
    """
    from framework.runner.tool_registry import ToolRegistry

    registry = ToolRegistry()
    # Resolve the tools directory relative to the repo root
    repo_root = Path(__file__).resolve().parents[3]  # core/tests/dummy_agents -> repo root
    tools_dir = repo_root / "tools"

    mcp_config = {
        "name": "hive-tools",
        "transport": "stdio",
        "command": "uv",
        "args": ["run", "python", "mcp_server.py", "--stdio"],
        "cwd": str(tools_dir),
        "description": "Hive tools MCP server",
    }
    registry.register_mcp_server(mcp_config)
    yield registry
    registry.cleanup()


@pytest.fixture
def runtime(tmp_path):
    """Real Runtime backed by a temp directory."""
    return Runtime(storage_path=tmp_path / "runtime")


@pytest.fixture
def goal():
    return Goal(id="dummy", name="Dummy Agent Test", description="Level 2 end-to-end testing")


def make_executor(
    runtime: Runtime,
    llm: LiteLLMProvider,
    *,
    enable_parallel: bool = True,
    parallel_config: ParallelExecutionConfig | None = None,
    loop_config: dict | None = None,
    tool_registry=None,
    storage_path: Path | None = None,
) -> GraphExecutor:
    """Factory that creates a GraphExecutor with a real LLM."""
    tools = []
    tool_executor = None
    if tool_registry is not None:
        tools = list(tool_registry.get_tools().values())
        tool_executor = tool_registry.get_executor()

    return GraphExecutor(
        runtime=runtime,
        llm=llm,
        tools=tools,
        tool_executor=tool_executor,
        enable_parallel_execution=enable_parallel,
        parallel_config=parallel_config,
        loop_config=loop_config or {"max_iterations": 10},
        storage_path=storage_path,
    )
@@ -0,0 +1,64 @@
"""Minimal helper nodes for deterministic control-flow tests.

Most tests use real EventLoopNode with real LLM calls. These helpers
exist only for tests that need predictable failure/success patterns
(retry, feedback loop, parallel failure modes).
"""

from __future__ import annotations

from framework.graph.node import NodeContext, NodeProtocol, NodeResult


class SuccessNode(NodeProtocol):
    """Always succeeds with configurable output dict."""

    def __init__(self, output: dict | None = None):
        self._output = output or {"status": "ok"}
        self.executed = False
        self.execute_count = 0

    async def execute(self, ctx: NodeContext) -> NodeResult:
        self.executed = True
        self.execute_count += 1
        return NodeResult(success=True, output=self._output, tokens_used=1, latency_ms=1)


class FailNode(NodeProtocol):
    """Always fails with configurable error."""

    def __init__(self, error: str = "node failed"):
        self._error = error
        self.attempt_count = 0

    async def execute(self, ctx: NodeContext) -> NodeResult:
        self.attempt_count += 1
        return NodeResult(success=False, error=self._error)


class FlakyNode(NodeProtocol):
    """Fails N times then succeeds. For retry tests."""

    def __init__(self, fail_times: int = 2, output: dict | None = None):
        self.fail_times = fail_times
        self._output = output or {"status": "recovered"}
        self.attempt_count = 0

    async def execute(self, ctx: NodeContext) -> NodeResult:
        self.attempt_count += 1
        if self.attempt_count <= self.fail_times:
            return NodeResult(success=False, error=f"fail #{self.attempt_count}")
        return NodeResult(success=True, output=self._output, tokens_used=1, latency_ms=1)


class StatefulNode(NodeProtocol):
    """Returns different outputs on successive calls. For feedback loop tests."""

    def __init__(self, outputs: list[NodeResult]):
        self._outputs = outputs
        self.call_count = 0

    async def execute(self, ctx: NodeContext) -> NodeResult:
        idx = min(self.call_count, len(self._outputs) - 1)
        self.call_count += 1
        return self._outputs[idx]
@@ -0,0 +1,359 @@
#!/usr/bin/env python3
"""Runner for Level 2 dummy agent tests with interactive LLM provider selection.

This is NOT part of regular CI. It makes real LLM API calls.

Usage:
    cd core && uv run python tests/dummy_agents/run_all.py
    cd core && uv run python tests/dummy_agents/run_all.py --verbose
"""

from __future__ import annotations

import os
import sys
import time
import xml.etree.ElementTree as ET
from pathlib import Path
from tempfile import NamedTemporaryFile

TESTS_DIR = Path(__file__).parent

# ── provider registry ────────────────────────────────────────────────

# (env_var, display_name, default_model) — models match quickstart.sh defaults
API_KEY_PROVIDERS = [
    ("ANTHROPIC_API_KEY", "Anthropic (Claude)", "claude-sonnet-4-20250514"),
    ("OPENAI_API_KEY", "OpenAI", "gpt-5-mini"),
    ("GEMINI_API_KEY", "Google Gemini", "gemini/gemini-3-flash-preview"),
    ("ZAI_API_KEY", "ZAI (GLM)", "openai/glm-5"),
    ("GROQ_API_KEY", "Groq", "moonshotai/kimi-k2-instruct-0905"),
    ("MISTRAL_API_KEY", "Mistral", "mistral-large-latest"),
    ("CEREBRAS_API_KEY", "Cerebras", "cerebras/zai-glm-4.7"),
    ("TOGETHER_API_KEY", "Together AI", "together_ai/meta-llama/Llama-3.3-70B-Instruct-Turbo"),
    ("DEEPSEEK_API_KEY", "DeepSeek", "deepseek-chat"),
    ("MINIMAX_API_KEY", "MiniMax", "MiniMax-M2.5"),
]


def _detect_claude_code_token() -> str | None:
    """Check if Claude Code subscription credentials are available."""
    try:
        from framework.runner.runner import get_claude_code_token

        return get_claude_code_token()
    except Exception:
        return None


def _detect_codex_token() -> str | None:
    """Check if Codex subscription credentials are available."""
    try:
        from framework.runner.runner import get_codex_token

        return get_codex_token()
    except Exception:
        return None


def _detect_kimi_code_token() -> str | None:
    """Check if Kimi Code subscription credentials are available."""
    try:
        from framework.runner.runner import get_kimi_code_token

        return get_kimi_code_token()
    except Exception:
        return None


def detect_available() -> list[dict]:
    """Detect all available LLM providers with valid credentials.

    Returns list of dicts: {name, model, api_key, source}
    """
    available = []

    # Subscription-based providers
    token = _detect_claude_code_token()
    if token:
        available.append(
            {
                "name": "Claude Code (subscription)",
                "model": "claude-sonnet-4-20250514",
                "api_key": token,
                "source": "claude_code_sub",
                "extra_headers": {"authorization": f"Bearer {token}"},
            }
        )

    token = _detect_codex_token()
    if token:
        available.append(
            {
                "name": "Codex (subscription)",
                "model": "gpt-5-mini",
                "api_key": token,
                "source": "codex_sub",
            }
        )

    token = _detect_kimi_code_token()
    if token:
        available.append(
            {
                "name": "Kimi Code (subscription)",
                "model": "moonshotai/kimi-k2-instruct-0905",
                "api_key": token,
                "source": "kimi_sub",
            }
        )

    # API key providers (env vars)
    for env_var, name, default_model in API_KEY_PROVIDERS:
        key = os.environ.get(env_var)
        if key:
            entry = {
                "name": f"{name} (${env_var})",
                "model": default_model,
                "api_key": key,
                "source": env_var,
            }
            # ZAI requires an api_base (OpenAI-compatible endpoint)
            if env_var == "ZAI_API_KEY":
                entry["api_base"] = "https://api.z.ai/api/coding/paas/v4"
            available.append(entry)

    return available


def prompt_provider_selection() -> dict:
    """Interactive prompt to select an LLM provider. Returns the chosen provider dict."""
    available = detect_available()

    if not available:
        print("\n  No LLM credentials detected.")
        print("  Set an API key environment variable, e.g.:")
        print("    export ANTHROPIC_API_KEY=sk-...")
        print("    export OPENAI_API_KEY=sk-...")
        print("  Or authenticate with Claude Code: claude")
        sys.exit(1)

    if len(available) == 1:
        choice = available[0]
        print(f"\n  Using: {choice['name']} ({choice['model']})")
        return choice

    print("\n  Available LLM providers:\n")
    for i, p in enumerate(available, 1):
        print(f"    {i}) {p['name']}  [{p['model']}]")

    print()
    while True:
        try:
            raw = input(f"  Select provider [1-{len(available)}]: ").strip()
            idx = int(raw) - 1
            if 0 <= idx < len(available):
                choice = available[idx]
                print(f"\n  Using: {choice['name']} ({choice['model']})\n")
                return choice
        except (ValueError, EOFError):
            pass
        print(f"  Please enter a number between 1 and {len(available)}")


# ── test runner ──────────────────────────────────────────────────────


def parse_junit_xml(xml_path: str) -> dict[str, dict]:
    """Parse JUnit XML and group results by agent (test file)."""
    tree = ET.parse(xml_path)
    root = tree.getroot()
    agents: dict[str, dict] = {}

    for testsuite in root.iter("testsuite"):
        for testcase in testsuite.iter("testcase"):
            classname = testcase.get("classname", "")
            parts = classname.split(".")
            agent_name = "unknown"
            for part in parts:
                if part.startswith("test_"):
                    agent_name = part[5:]
                    break

            if agent_name not in agents:
                agents[agent_name] = {
                    "total": 0,
                    "passed": 0,
                    "failed": 0,
                    "time": 0.0,
                    "tests": [],
                }

            agents[agent_name]["total"] += 1
            test_time = float(testcase.get("time", "0"))
            agents[agent_name]["time"] += test_time

            failures = testcase.findall("failure")
            errors = testcase.findall("error")
            test_name = testcase.get("name", "")

            if failures or errors:
                agents[agent_name]["failed"] += 1
                # Extract failure reason from the first failure/error element
                fail_el = (failures or errors)[0]
                reason = fail_el.get("message", "") or ""
                # Also grab the text body for more detail
                body = fail_el.text or ""
                # Build a concise reason: prefer message, fall back to first line of body
                if not reason and body:
                    reason = body.strip().split("\n")[0]
                agents[agent_name]["tests"].append((test_name, "FAIL", reason))
            else:
                agents[agent_name]["passed"] += 1
                agents[agent_name]["tests"].append((test_name, "PASS", ""))

    return agents

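The grouping key in `parse_junit_xml` is the first dotted component of the JUnit classname that starts with `test_`, with that prefix stripped. Distilled into a standalone helper (hypothetical name, for illustration):

```python
def agent_from_classname(classname: str) -> str:
    """Map a JUnit classname to an agent name, mirroring the loop above.

    e.g. "tests.dummy_agents.test_retry.TestRetry" -> "retry"
    """
    for part in classname.split("."):
        if part.startswith("test_"):
            return part[len("test_"):]  # strip the "test_" prefix
    return "unknown"

print(agent_from_classname("tests.dummy_agents.test_retry.TestRetry"))  # retry
print(agent_from_classname("no.matching.component"))                    # unknown
```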
|
||||
|
||||
def print_table(agents: dict[str, dict], total_time: float, verbose: bool = False) -> None:
|
||||
"""Print summary table."""
|
||||
col_agent = 20
|
||||
col_tests = 6
|
||||
col_passed = 8
|
||||
col_time = 12
|
||||
|
||||
def sep(char: str = "═") -> str:
|
||||
return (
|
||||
f"╠{char * (col_agent + 2)}╬{char * (col_tests + 2)}"
|
||||
f"╬{char * (col_passed + 2)}╬{char * (col_time + 2)}╣"
|
||||
)
|
||||
|
||||
header = (
|
||||
f"║ {'Agent':<{col_agent}} ║ {'Tests':>{col_tests}} "
|
||||
f"║ {'Passed':>{col_passed}} ║ {'Time (s)':>{col_time}} ║"
|
||||
)
|
||||
top = (
|
||||
f"╔{'═' * (col_agent + 2)}╦{'═' * (col_tests + 2)}"
|
||||
f"╦{'═' * (col_passed + 2)}╦{'═' * (col_time + 2)}╗"
|
||||
)
|
||||
    bottom = (
        f"╚{'═' * (col_agent + 2)}╩{'═' * (col_tests + 2)}"
        f"╩{'═' * (col_passed + 2)}╩{'═' * (col_time + 2)}╝"
    )

    print()
    print(top)
    print(header)
    print(sep())

    total_tests = 0
    total_passed = 0

    for agent_name in sorted(agents.keys()):
        data = agents[agent_name]
        total_tests += data["total"]
        total_passed += data["passed"]
        marker = " " if data["failed"] == 0 else "!"
        row = (
            f"║{marker}{agent_name:<{col_agent + 1}} ║ {data['total']:>{col_tests}} "
            f"║ {data['passed']:>{col_passed}} ║ {data['time']:>{col_time}.2f} ║"
        )
        print(row)

        if verbose:
            for test_name, status, reason in data["tests"]:
                icon = " ✓" if status == "PASS" else " ✗"
                print(
                    f"║ {icon} {test_name:<{col_agent - 2}}"
                    f"║{'':>{col_tests + 2}}║{'':>{col_passed + 2}}║{'':>{col_time + 2}}║"
                )
                if status == "FAIL" and reason:
                    # Print failure reason wrapped to fit, indented under the test
                    reason_short = reason[:120] + ("..." if len(reason) > 120 else "")
                    print(f"║ {reason_short}")
            print("║")

    print(sep())
    all_pass = total_passed == total_tests
    status = "ALL PASS" if all_pass else f"{total_tests - total_passed} FAILED"
    totals = (
        f"║ {status:<{col_agent}} ║ {total_tests:>{col_tests}} "
        f"║ {total_passed:>{col_passed}} ║ {total_time:>{col_time}.2f} ║"
    )
    print(totals)
    print(bottom)

    # Always print failure details if any tests failed
    if not all_pass:
        print("\n Failure Details:")
        print(" " + "─" * 70)
        for agent_name in sorted(agents.keys()):
            for test_name, status, reason in agents[agent_name]["tests"]:
                if status == "FAIL":
                    print(f"\n ✗ {agent_name}::{test_name}")
                    if reason:
                        # Wrap long reasons
                        for i in range(0, len(reason), 100):
                            print(f"    {reason[i : i + 100]}")
        print()


def main() -> int:
    verbose = "--verbose" in sys.argv or "-v" in sys.argv

    print("\n ╔═══════════════════════════════════════╗")
    print(" ║   Level 2: Dummy Agent Tests (E2E)    ║")
    print(" ╚═══════════════════════════════════════╝")

    # Step 1: detect credentials and let user pick
    provider = prompt_provider_selection()

    # Step 2: inject selection into conftest module state
    from tests.dummy_agents.conftest import set_llm_selection

    set_llm_selection(
        model=provider["model"],
        api_key=provider["api_key"],
        extra_headers=provider.get("extra_headers"),
        api_base=provider.get("api_base"),
    )

    # Step 3: run pytest
    with NamedTemporaryFile(suffix=".xml", delete=False) as tmp:
        xml_path = tmp.name

    start = time.time()
    import pytest as _pytest

    pytest_args = [
        str(TESTS_DIR),
        f"--junitxml={xml_path}",
        "--tb=short",
        "--override-ini=asyncio_mode=auto",
        "--log-cli-level=INFO",  # Stream logs live to terminal
        "-v",
    ]
    if not verbose:
        # In non-verbose mode, only show warnings and above
        pytest_args[pytest_args.index("--log-cli-level=INFO")] = "--log-cli-level=WARNING"
        pytest_args.remove("-v")
        pytest_args.append("-q")

    exit_code = _pytest.main(pytest_args)
    elapsed = time.time() - start

    # Step 4: print summary
    try:
        agents = parse_junit_xml(xml_path)
        print_table(agents, elapsed, verbose=verbose)
    except Exception as e:
        print(f"\n Could not parse results: {e}")

    # Clean up
    Path(xml_path).unlink(missing_ok=True)

    return exit_code


if __name__ == "__main__":
    sys.exit(main())
@@ -0,0 +1,132 @@
"""Branch agent: LLM classifies input, conditional edges route to different paths.

Tests conditional edge evaluation with real LLM output.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeSpec

from .conftest import make_executor

SET_OUTPUT_INSTRUCTION = (
    "You MUST call the set_output tool to provide your answer. "
    "Do not just write text — call set_output with the correct key and value."
)


def _build_branch_graph() -> GraphSpec:
    return GraphSpec(
        id="branch-graph",
        goal_id="dummy",
        entry_node="classify",
        entry_points={"start": "classify"},
        terminal_nodes=["positive", "negative"],
        conversation_mode="continuous",
        nodes=[
            NodeSpec(
                id="classify",
                name="Classify",
                description="Classifies input sentiment",
                node_type="event_loop",
                input_keys=["text"],
                output_keys=["score", "label"],
                system_prompt=(
                    "You are a sentiment classifier. Read the 'text' input and determine "
                    "if the sentiment is positive or negative.\n\n"
                    "You MUST call set_output TWICE:\n"
                    "1. set_output(key='score', value='<number>') — a score between 0.0 "
                    "and 1.0 where >0.5 means positive\n"
                    "2. set_output(key='label', value='positive') or "
                    "set_output(key='label', value='negative')\n\n" + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="positive",
                name="Positive Handler",
                description="Handles positive sentiment",
                node_type="event_loop",
                output_keys=["result"],
                system_prompt=(
                    "The input was classified as positive. Call set_output with "
                    "key='result' and a brief one-sentence acknowledgment. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="negative",
                name="Negative Handler",
                description="Handles negative sentiment",
                node_type="event_loop",
                output_keys=["result"],
                system_prompt=(
                    "The input was classified as negative. Call set_output with "
                    "key='result' and a brief one-sentence acknowledgment. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
        ],
        edges=[
            EdgeSpec(
                id="classify-to-positive",
                source="classify",
                target="positive",
                condition=EdgeCondition.CONDITIONAL,
                condition_expr="output.get('label') == 'positive'",
                priority=1,
            ),
            EdgeSpec(
                id="classify-to-negative",
                source="classify",
                target="negative",
                condition=EdgeCondition.CONDITIONAL,
                condition_expr="output.get('label') == 'negative'",
                priority=0,
            ),
        ],
        memory_keys=["text", "score", "label", "result"],
    )


@pytest.mark.asyncio
async def test_branch_positive_path(runtime, goal, llm_provider):
    graph = _build_branch_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(
        graph, goal, {"text": "I love this product, it's amazing!"}, validate_graph=False
    )

    assert result.success
    assert result.path == ["classify", "positive"]


@pytest.mark.asyncio
async def test_branch_negative_path(runtime, goal, llm_provider):
    graph = _build_branch_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(
        graph, goal, {"text": "This is terrible and broken, I hate it."}, validate_graph=False
    )

    assert result.success
    assert result.path == ["classify", "negative"]


@pytest.mark.asyncio
async def test_branch_two_nodes_traversed(runtime, goal, llm_provider):
    """Regardless of which branch, exactly 2 nodes should execute."""
    graph = _build_branch_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(
        graph, goal, {"text": "The weather is nice today."}, validate_graph=False
    )

    assert result.success
    assert result.steps_executed == 2
    assert len(result.path) == 2
@@ -0,0 +1,66 @@
"""Echo agent: single-node worker that echoes input to output.

Tests basic node lifecycle with a real LLM call — simplest possible worker.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import GraphSpec
from framework.graph.node import NodeSpec

from .conftest import make_executor


def _build_echo_graph() -> GraphSpec:
    return GraphSpec(
        id="echo-graph",
        goal_id="dummy",
        entry_node="echo",
        entry_points={"start": "echo"},
        terminal_nodes=["echo"],
        nodes=[
            NodeSpec(
                id="echo",
                name="Echo",
                description="Echoes input to output",
                node_type="event_loop",
                input_keys=["input"],
                output_keys=["output"],
                system_prompt=(
                    "You are an echo node. Your ONLY job is to read the 'input' value "
                    "provided in the user message, then immediately call the set_output "
                    "tool with key='output' and value set to the EXACT same string. "
                    "Do not add any text or explanation. Just call set_output."
                ),
            ),
        ],
        edges=[],
        memory_keys=["input", "output"],
        conversation_mode="continuous",
    )


@pytest.mark.asyncio
async def test_echo_basic(runtime, goal, llm_provider):
    graph = _build_echo_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"input": "hello"}, validate_graph=False)

    assert result.success
    assert result.output.get("output") is not None
    assert result.path == ["echo"]
    assert result.steps_executed == 1


@pytest.mark.asyncio
async def test_echo_empty_input(runtime, goal, llm_provider):
    graph = _build_echo_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"input": ""}, validate_graph=False)

    assert result.success
    assert "output" in result.output
@@ -0,0 +1,144 @@
"""Feedback loop agent: draft/review cycle with max_node_visits limit.

Uses StatefulNode for review to control loop iterations deterministically.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeResult, NodeSpec

from .conftest import make_executor
from .nodes import StatefulNode, SuccessNode


def _build_feedback_graph(max_visits: int = 3) -> GraphSpec:
    return GraphSpec(
        id="feedback-graph",
        goal_id="dummy",
        entry_node="draft",
        terminal_nodes=["done"],
        nodes=[
            NodeSpec(
                id="draft",
                name="Draft",
                description="Produces a draft",
                node_type="event_loop",
                output_keys=["draft_output"],
                max_node_visits=max_visits,
            ),
            NodeSpec(
                id="review",
                name="Review",
                description="Reviews the draft",
                node_type="event_loop",
                input_keys=["draft_output"],
                output_keys=["approved"],
            ),
            NodeSpec(
                id="done",
                name="Done",
                description="Final node",
                node_type="event_loop",
                output_keys=["final"],
            ),
        ],
        edges=[
            EdgeSpec(
                id="draft-to-review",
                source="draft",
                target="review",
                condition=EdgeCondition.ON_SUCCESS,
            ),
            EdgeSpec(
                id="review-to-draft",
                source="review",
                target="draft",
                condition=EdgeCondition.CONDITIONAL,
                condition_expr="output.get('approved') == False",
                priority=1,
            ),
            EdgeSpec(
                id="review-to-done",
                source="review",
                target="done",
                condition=EdgeCondition.CONDITIONAL,
                condition_expr="output.get('approved') == True",
                priority=0,
            ),
        ],
        memory_keys=["draft_output", "approved", "final"],
    )


@pytest.mark.asyncio
async def test_feedback_loop_terminates(runtime, goal, llm_provider):
    """Loop should terminate: draft visits are capped, review eventually approves."""
    graph = _build_feedback_graph(max_visits=3)
    executor = make_executor(runtime, llm_provider)
    executor.register_node("draft", SuccessNode(output={"draft_output": "v1"}))
    executor.register_node(
        "review",
        StatefulNode(
            [
                NodeResult(success=True, output={"approved": False}),
                NodeResult(success=True, output={"approved": False}),
                NodeResult(success=True, output={"approved": True}),
            ]
        ),
    )
    executor.register_node("done", SuccessNode(output={"final": "done"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert result.success
    assert result.node_visit_counts.get("draft", 0) == 3
    assert "done" in result.path


@pytest.mark.asyncio
async def test_feedback_loop_visit_counts(runtime, goal, llm_provider):
    graph = _build_feedback_graph(max_visits=3)
    executor = make_executor(runtime, llm_provider)
    executor.register_node("draft", SuccessNode(output={"draft_output": "v1"}))
    executor.register_node(
        "review",
        StatefulNode(
            [
                NodeResult(success=True, output={"approved": False}),
                NodeResult(success=True, output={"approved": True}),
            ]
        ),
    )
    executor.register_node("done", SuccessNode(output={"final": "done"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert result.success
    assert result.node_visit_counts.get("draft", 0) == 2
    assert result.node_visit_counts.get("review", 0) == 2


@pytest.mark.asyncio
async def test_feedback_loop_early_exit(runtime, goal, llm_provider):
    """Review approves on first iteration — loop exits before max."""
    graph = _build_feedback_graph(max_visits=5)
    executor = make_executor(runtime, llm_provider)
    executor.register_node("draft", SuccessNode(output={"draft_output": "perfect"}))
    executor.register_node(
        "review",
        StatefulNode(
            [
                NodeResult(success=True, output={"approved": True}),
            ]
        ),
    )
    executor.register_node("done", SuccessNode(output={"final": "done"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert result.success
    assert result.node_visit_counts.get("draft", 0) == 1
    assert "done" in result.path
@@ -0,0 +1,179 @@
"""GCU subagent test: parent event_loop delegates to a GCU subagent.

Tests the subagent delegation pattern where a parent node uses
delegate_to_sub_agent to invoke a GCU (browser) node for a task.
The GCU node has access to browser tools via the GCU MCP server.

Note: This test requires the GCU MCP server (gcu.server) to be available.
If not installed, the test is skipped.
"""

from __future__ import annotations

from pathlib import Path

import pytest

from framework.graph.edge import GraphSpec
from framework.graph.goal import Goal
from framework.graph.node import NodeSpec

from .conftest import make_executor


def _has_gcu_server() -> bool:
    """Check if the GCU MCP server module is available."""
    try:
        import gcu.server  # noqa: F401

        return True
    except ImportError:
        return False


def _build_gcu_subagent_graph() -> GraphSpec:
    """Parent event_loop node with a GCU subagent for browser tasks.

    Structure:
    - parent (event_loop): orchestrator that decides when to delegate
    - browser_worker (gcu): subagent with browser tools
    - parent delegates to browser_worker via delegate_to_sub_agent tool
    - browser_worker is NOT connected by edges (validation rule)
    """
    return GraphSpec(
        id="gcu-subagent-graph",
        goal_id="gcu-test",
        entry_node="parent",
        entry_points={"start": "parent"},
        terminal_nodes=["parent"],
        nodes=[
            NodeSpec(
                id="parent",
                name="Orchestrator",
                description="Orchestrates browser tasks via subagent delegation",
                node_type="event_loop",
                input_keys=["task"],
                output_keys=["result"],
                sub_agents=["browser_worker"],
                system_prompt=(
                    "You are an orchestrator. You have a browser subagent called "
                    "'browser_worker' available via delegate_to_sub_agent.\n\n"
                    "Read the 'task' input and delegate the browser work to "
                    "the browser_worker subagent. When the subagent completes, "
                    "summarize the result and call set_output with key='result'."
                ),
            ),
            NodeSpec(
                id="browser_worker",
                name="Browser Worker",
                description="GCU browser subagent for web tasks",
                node_type="gcu",
                output_keys=["browser_result"],
                system_prompt=(
                    "You are a browser worker subagent. Complete the delegated "
                    "browser task using available browser tools. "
                    "When done, call set_output with key='browser_result' and "
                    "the information you found."
                ),
            ),
        ],
        edges=[],  # GCU subagents must NOT be connected by edges
        memory_keys=["task", "result", "browser_result"],
        conversation_mode="continuous",
    )


def _gcu_goal() -> Goal:
    return Goal(
        id="gcu-test",
        name="GCU Subagent Test",
        description="Test browser subagent delegation",
    )


@pytest.mark.asyncio
@pytest.mark.skipif(not _has_gcu_server(), reason="GCU server not installed")
async def test_gcu_subagent_delegation(runtime, llm_provider, tool_registry, tmp_path):
    """Parent delegates a simple browser task to GCU subagent."""
    # Register GCU MCP server tools
    from framework.graph.gcu import GCU_MCP_SERVER_CONFIG

    repo_root = Path(__file__).resolve().parents[3]
    gcu_config = dict(GCU_MCP_SERVER_CONFIG)
    gcu_config["cwd"] = str(repo_root / "tools")
    tool_registry.register_mcp_server(gcu_config)

    # Expand GCU node tools (mirrors what runner._setup does)
    graph = _build_gcu_subagent_graph()
    gcu_tool_names = tool_registry.get_server_tool_names("gcu-tools")
    if gcu_tool_names:
        for node in graph.nodes:
            if node.node_type == "gcu":
                existing = set(node.tools)
                for tool_name in sorted(gcu_tool_names):
                    if tool_name not in existing:
                        node.tools.append(tool_name)

    executor = make_executor(
        runtime,
        llm_provider,
        tool_registry=tool_registry,
        storage_path=tmp_path / "storage",
    )

    result = await executor.execute(
        graph,
        _gcu_goal(),
        {"task": "Use the browser to navigate to https://example.com and report the page title."},
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None


@pytest.mark.asyncio
@pytest.mark.skipif(not _has_gcu_server(), reason="GCU server not installed")
async def test_gcu_subagent_returns_data(runtime, llm_provider, tool_registry, tmp_path):
    """Verify the parent receives structured data from the GCU subagent."""
    from framework.graph.gcu import GCU_MCP_SERVER_CONFIG

    repo_root = Path(__file__).resolve().parents[3]
    gcu_config = dict(GCU_MCP_SERVER_CONFIG)
    gcu_config["cwd"] = str(repo_root / "tools")
    # Only register if not already registered
    if not tool_registry.get_server_tool_names("gcu-tools"):
        tool_registry.register_mcp_server(gcu_config)

    graph = _build_gcu_subagent_graph()
    gcu_tool_names = tool_registry.get_server_tool_names("gcu-tools")
    if gcu_tool_names:
        for node in graph.nodes:
            if node.node_type == "gcu":
                existing = set(node.tools)
                for tool_name in sorted(gcu_tool_names):
                    if tool_name not in existing:
                        node.tools.append(tool_name)

    executor = make_executor(
        runtime,
        llm_provider,
        tool_registry=tool_registry,
        storage_path=tmp_path / "storage",
    )

    result = await executor.execute(
        graph,
        _gcu_goal(),
        {
            "task": "Use the browser to visit https://example.com and report "
            "what domain the page is on."
        },
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None
    # The result should contain something from the browser
    result_text = str(result.output["result"]).lower()
    assert "example" in result_text
@@ -0,0 +1,166 @@
"""Parallel merge agent: fan-out to two branches, fan-in to merge node.

Tests parallel execution with real LLM at each branch.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.executor import ParallelExecutionConfig
from framework.graph.node import NodeSpec

from .conftest import make_executor
from .nodes import FailNode

SET_OUTPUT_INSTRUCTION = (
    "You MUST call the set_output tool to provide your answer. "
    "Do not just write text — call set_output with the correct key and value."
)


def _build_parallel_graph() -> GraphSpec:
    return GraphSpec(
        id="parallel-graph",
        goal_id="dummy",
        entry_node="split",
        entry_points={"start": "split"},
        terminal_nodes=["merge"],
        conversation_mode="continuous",
        nodes=[
            NodeSpec(
                id="split",
                name="Split",
                description="Entry point that triggers parallel branches",
                node_type="event_loop",
                input_keys=["topic"],
                output_keys=["split_done"],
                system_prompt=(
                    "You are a dispatcher. Read the 'topic' input, then immediately "
                    "call set_output with key='split_done' and value='true'. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="analyze_a",
                name="Analyze Pros",
                description="Analyzes positive aspects",
                node_type="event_loop",
                output_keys=["result_a"],
                system_prompt=(
                    "Analyze the positive aspects of the topic. Then call set_output "
                    "with key='result_a' and a brief one-sentence analysis. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="analyze_b",
                name="Analyze Cons",
                description="Analyzes negative aspects",
                node_type="event_loop",
                output_keys=["result_b"],
                system_prompt=(
                    "Analyze the negative aspects of the topic. Then call set_output "
                    "with key='result_b' and a brief one-sentence analysis. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="merge",
                name="Merge",
                description="Combines both analyses",
                node_type="event_loop",
                input_keys=["result_a", "result_b"],
                output_keys=["merged"],
                system_prompt=(
                    "Read 'result_a' and 'result_b' from the input, combine them into "
                    "a one-sentence summary, then call set_output with key='merged' "
                    "and the summary. " + SET_OUTPUT_INSTRUCTION
                ),
            ),
        ],
        edges=[
            EdgeSpec(
                id="split-to-a",
                source="split",
                target="analyze_a",
                condition=EdgeCondition.ON_SUCCESS,
            ),
            EdgeSpec(
                id="split-to-b",
                source="split",
                target="analyze_b",
                condition=EdgeCondition.ON_SUCCESS,
            ),
            EdgeSpec(
                id="a-to-merge",
                source="analyze_a",
                target="merge",
                condition=EdgeCondition.ON_SUCCESS,
            ),
            EdgeSpec(
                id="b-to-merge",
                source="analyze_b",
                target="merge",
                condition=EdgeCondition.ON_SUCCESS,
            ),
        ],
        memory_keys=["topic", "split_done", "result_a", "result_b", "merged"],
    )


@pytest.mark.asyncio
async def test_parallel_both_succeed(runtime, goal, llm_provider):
    graph = _build_parallel_graph()
    config = ParallelExecutionConfig(on_branch_failure="fail_all")
    executor = make_executor(runtime, llm_provider, parallel_config=config)

    result = await executor.execute(graph, goal, {"topic": "remote work"}, validate_graph=False)

    assert result.success
    assert "split" in result.path
    assert "merge" in result.path
    assert result.output.get("merged") is not None


@pytest.mark.asyncio
async def test_parallel_branch_failure_fail_all(runtime, goal, llm_provider):
    """One branch fails with fail_all -> execution fails."""
    graph = _build_parallel_graph()
    config = ParallelExecutionConfig(on_branch_failure="fail_all")
    executor = make_executor(runtime, llm_provider, parallel_config=config)
    executor.register_node("analyze_b", FailNode(error="branch B failed"))

    result = await executor.execute(graph, goal, {"topic": "remote work"}, validate_graph=False)

    assert not result.success


@pytest.mark.asyncio
async def test_parallel_branch_failure_continue_others(runtime, goal, llm_provider):
    """One branch fails with continue_others -> surviving branch completes."""
    graph = _build_parallel_graph()
    config = ParallelExecutionConfig(on_branch_failure="continue_others")
    executor = make_executor(runtime, llm_provider, parallel_config=config)
    executor.register_node("analyze_b", FailNode(error="branch B failed"))

    result = await executor.execute(graph, goal, {"topic": "remote work"}, validate_graph=False)

    # With continue_others, execution can proceed past failed branches
    assert result.output.get("merged") is not None or result.output.get("result_a") is not None


@pytest.mark.asyncio
async def test_parallel_disjoint_output_keys(runtime, goal, llm_provider):
    """Verify both branches write to separate memory keys without conflicts."""
    graph = _build_parallel_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(
        graph, goal, {"topic": "artificial intelligence"}, validate_graph=False
    )

    assert result.success
    assert result.output.get("result_a") is not None
    assert result.output.get("result_b") is not None
@@ -0,0 +1,134 @@
"""Pipeline agent: linear 3-node chain with real LLM at each step.

Tests input_mapping, conversation modes, and multi-node traversal.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeSpec

from .conftest import make_executor

SET_OUTPUT_INSTRUCTION = (
    "You MUST call the set_output tool to provide your answer. "
    "Do not just write text — call set_output with the correct key and value."
)


def _build_pipeline_graph(conversation_mode: str = "continuous") -> GraphSpec:
    return GraphSpec(
        id="pipeline-graph",
        goal_id="dummy",
        entry_node="intake",
        entry_points={"start": "intake"},
        terminal_nodes=["output"],
        conversation_mode=conversation_mode,
        nodes=[
            NodeSpec(
                id="intake",
                name="Intake",
                description="Captures raw input and passes it along",
                node_type="event_loop",
                input_keys=["raw"],
                output_keys=["captured"],
                system_prompt=(
                    "You are the intake node. Read the 'raw' input value from the user "
                    "message, then call set_output with key='captured' and the same value. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="transform",
                name="Transform",
                description="Uppercases the input value",
                node_type="event_loop",
                input_keys=["value"],
                output_keys=["transformed"],
                system_prompt=(
                    "You are a transform node. Read the 'value' input from the user "
                    "message, convert it to UPPERCASE, then call set_output with "
                    "key='transformed' and the uppercased value. " + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="output",
                name="Output",
                description="Formats final result",
                node_type="event_loop",
                input_keys=["value"],
                output_keys=["result"],
                system_prompt=(
                    "You are the output node. Read the 'value' input from the user "
                    "message, prefix it with 'Result: ', then call set_output with "
                    "key='result' and the prefixed value. " + SET_OUTPUT_INSTRUCTION
                ),
            ),
        ],
        edges=[
            EdgeSpec(
                id="intake-to-transform",
                source="intake",
                target="transform",
                condition=EdgeCondition.ON_SUCCESS,
                input_mapping={"value": "captured"},
            ),
            EdgeSpec(
                id="transform-to-output",
                source="transform",
                target="output",
                condition=EdgeCondition.ON_SUCCESS,
                input_mapping={"value": "transformed"},
            ),
        ],
        memory_keys=["raw", "captured", "value", "transformed", "result"],
    )


@pytest.mark.asyncio
async def test_pipeline_linear_traversal(runtime, goal, llm_provider):
    graph = _build_pipeline_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"raw": "hello"}, validate_graph=False)

    assert result.success
    assert result.path == ["intake", "transform", "output"]
    assert result.steps_executed == 3


@pytest.mark.asyncio
async def test_pipeline_input_mapping(runtime, goal, llm_provider):
    """Verify input_mapping wires source output keys to target input keys."""
    graph = _build_pipeline_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"raw": "test value"}, validate_graph=False)

    assert result.success
    assert result.steps_executed == 3
    assert result.output.get("result") is not None


@pytest.mark.asyncio
async def test_pipeline_continuous_conversation(runtime, goal, llm_provider):
    graph = _build_pipeline_graph(conversation_mode="continuous")
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"raw": "data"}, validate_graph=False)

    assert result.success
    assert len(result.path) == 3


@pytest.mark.asyncio
async def test_pipeline_isolated_conversation(runtime, goal, llm_provider):
    graph = _build_pipeline_graph(conversation_mode="isolated")
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"raw": "data"}, validate_graph=False)

    assert result.success
    assert len(result.path) == 3
@@ -0,0 +1,131 @@
"""Retry agent: flaky node with retry limit and failure edges.

Uses deterministic FlakyNode (not LLM) since we need controlled failure patterns.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeSpec

from .conftest import make_executor
from .nodes import FlakyNode, SuccessNode


def _build_retry_graph(max_retries: int = 3, with_failure_edge: bool = False) -> GraphSpec:
    nodes = [
        NodeSpec(
            id="flaky",
            name="Flaky",
            description="Fails then succeeds",
            node_type="event_loop",
            output_keys=["status"],
            max_retries=max_retries,
        ),
        NodeSpec(
            id="done",
            name="Done",
            description="Terminal success node",
            node_type="event_loop",
            output_keys=["final"],
        ),
    ]
    edges = [
        EdgeSpec(
            id="flaky-to-done",
            source="flaky",
            target="done",
            condition=EdgeCondition.ON_SUCCESS,
        ),
    ]
    terminal_nodes = ["done"]

    if with_failure_edge:
        nodes.append(
            NodeSpec(
                id="error_handler",
                name="Error Handler",
                description="Handles exhausted retries",
                node_type="event_loop",
                output_keys=["error_handled"],
            )
        )
        edges.append(
            EdgeSpec(
                id="flaky-to-error",
                source="flaky",
                target="error_handler",
                condition=EdgeCondition.ON_FAILURE,
            )
        )
        terminal_nodes.append("error_handler")

    return GraphSpec(
        id="retry-graph",
        goal_id="dummy",
        entry_node="flaky",
        terminal_nodes=terminal_nodes,
        nodes=nodes,
        edges=edges,
        memory_keys=["status", "final", "error_handled"],
    )


@pytest.mark.asyncio
async def test_retry_succeeds_within_limit(runtime, goal, llm_provider):
    graph = _build_retry_graph(max_retries=3)
    flaky = FlakyNode(fail_times=2, output={"status": "recovered"})
    executor = make_executor(runtime, llm_provider)
    executor.register_node("flaky", flaky)
    executor.register_node("done", SuccessNode(output={"final": "complete"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert result.success
    assert result.total_retries >= 2
    assert flaky.attempt_count == 3  # 2 failures + 1 success


@pytest.mark.asyncio
async def test_retry_exhaustion(runtime, goal, llm_provider):
    graph = _build_retry_graph(max_retries=3)
    flaky = FlakyNode(fail_times=10, output={"status": "recovered"})
    executor = make_executor(runtime, llm_provider)
    executor.register_node("flaky", flaky)
    executor.register_node("done", SuccessNode(output={"final": "complete"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert not result.success


@pytest.mark.asyncio
async def test_retry_with_on_failure_edge(runtime, goal, llm_provider):
    graph = _build_retry_graph(max_retries=2, with_failure_edge=True)
    flaky = FlakyNode(fail_times=10)
    error_handler = SuccessNode(output={"error_handled": True})
    executor = make_executor(runtime, llm_provider)
    executor.register_node("flaky", flaky)
    executor.register_node("done", SuccessNode(output={"final": "complete"}))
    executor.register_node("error_handler", error_handler)

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert "error_handler" in result.path
    assert error_handler.executed


@pytest.mark.asyncio
async def test_retry_tracking(runtime, goal, llm_provider):
    graph = _build_retry_graph(max_retries=3)
    flaky = FlakyNode(fail_times=2)
    executor = make_executor(runtime, llm_provider)
    executor.register_node("flaky", flaky)
    executor.register_node("done", SuccessNode(output={"final": "complete"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert result.success
    assert result.retry_details.get("flaky", 0) >= 2
@@ -0,0 +1,139 @@
"""Worker agent: single-node event loop with real MCP tools.

Tests the core worker pattern — a single EventLoopNode that uses real
hive-tools (example_tool, get_current_time, save_data/load_data) to
accomplish tasks, matching how real agents are structured.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import GraphSpec
from framework.graph.goal import Goal
from framework.graph.node import NodeSpec

from .conftest import make_executor


def _build_worker_graph(tools: list[str]) -> GraphSpec:
    """Single-node worker agent with MCP tools — matches real agent structure."""
    return GraphSpec(
        id="worker-graph",
        goal_id="worker-goal",
        entry_node="worker",
        entry_points={"start": "worker"},
        terminal_nodes=["worker"],
        nodes=[
            NodeSpec(
                id="worker",
                name="Worker",
                description="General-purpose worker with tools",
                node_type="event_loop",
                input_keys=["task"],
                output_keys=["result"],
                tools=tools,
                system_prompt=(
                    "You are a worker agent with access to tools. "
                    "Read the 'task' input and complete it using the available tools. "
                    "When done, call set_output with key='result' and the final answer."
                ),
            ),
        ],
        edges=[],
        memory_keys=["task", "result"],
        conversation_mode="continuous",
    )


def _worker_goal() -> Goal:
    return Goal(
        id="worker-goal",
        name="Worker Agent",
        description="Complete a task using available tools",
    )


@pytest.mark.asyncio
async def test_worker_example_tool(runtime, llm_provider, tool_registry):
    """Worker uses example_tool to process text."""
    graph = _build_worker_graph(tools=["example_tool"])
    executor = make_executor(runtime, llm_provider, tool_registry=tool_registry)

    result = await executor.execute(
        graph,
        _worker_goal(),
        {"task": "Use the example_tool to process the message 'hello world' with uppercase=true"},
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None


@pytest.mark.asyncio
async def test_worker_time_tool(runtime, llm_provider, tool_registry):
    """Worker uses get_current_time to check the current time."""
    graph = _build_worker_graph(tools=["get_current_time"])
    executor = make_executor(runtime, llm_provider, tool_registry=tool_registry)

    result = await executor.execute(
        graph,
        _worker_goal(),
        {
            "task": "Use get_current_time to find the current time in UTC, "
            "and report the day of the week as the result"
        },
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None


@pytest.mark.asyncio
async def test_worker_data_tools(runtime, llm_provider, tool_registry, tmp_path):
    """Worker uses save_data and load_data to store and retrieve data."""
    graph = _build_worker_graph(tools=["save_data", "load_data"])
    executor = make_executor(
        runtime,
        llm_provider,
        tool_registry=tool_registry,
        storage_path=tmp_path / "storage",
    )

    result = await executor.execute(
        graph,
        _worker_goal(),
        {
            "task": f"Use save_data to save the text 'test payload' to a file called "
            f"'test.txt' in the data_dir '{tmp_path}/data'. "
            f"Then use load_data to read it back from the same data_dir. "
            f"Report what you loaded as the result."
        },
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None


@pytest.mark.asyncio
async def test_worker_multi_tool(runtime, llm_provider, tool_registry):
    """Worker uses multiple tools in sequence."""
    graph = _build_worker_graph(tools=["example_tool", "get_current_time"])
    executor = make_executor(runtime, llm_provider, tool_registry=tool_registry)

    result = await executor.execute(
        graph,
        _worker_goal(),
        {
            "task": "First use get_current_time to find the current day of the week. "
            "Then use example_tool to process that day name with uppercase=true. "
            "Report the uppercased day name as the result."
        },
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None
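The worker graphs above all share one invariant: a single node that is both entry and terminal, with no edges. A hypothetical checker over plain dicts (not the real `GraphSpec`) makes that shape explicit:

```python
# Hypothetical helper, not part of the framework: verifies the single-node
# worker shape built by _build_worker_graph (one node, entry == terminal,
# no edges). Uses plain dicts rather than GraphSpec for illustration.
def is_single_node_worker(graph: dict) -> bool:
    nodes = graph["nodes"]
    return (
        len(nodes) == 1
        and graph["entry_node"] == nodes[0]["id"]
        and graph["terminal_nodes"] == [nodes[0]["id"]]
        and not graph["edges"]
    )
```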
@@ -0,0 +1,190 @@
"""Tests for default skills — parsing, token budget, and configuration."""

from pathlib import Path

import pytest

from framework.skills.config import DefaultSkillConfig, SkillsConfig
from framework.skills.defaults import (
    SKILL_REGISTRY,
    SHARED_MEMORY_KEYS,
    DefaultSkillManager,
)
from framework.skills.parser import parse_skill_md


_DEFAULT_SKILLS_DIR = Path(__file__).resolve().parent.parent / "framework" / "skills" / "_default_skills"


class TestDefaultSkillFiles:
    """Verify all 6 built-in SKILL.md files parse correctly."""

    def test_all_six_skills_exist(self):
        assert len(SKILL_REGISTRY) == 6

    @pytest.mark.parametrize("skill_name,dir_name", list(SKILL_REGISTRY.items()))
    def test_skill_parses(self, skill_name, dir_name):
        path = _DEFAULT_SKILLS_DIR / dir_name / "SKILL.md"
        assert path.is_file(), f"Missing SKILL.md at {path}"

        parsed = parse_skill_md(path, source_scope="framework")
        assert parsed is not None, f"Failed to parse {path}"
        assert parsed.name == skill_name
        assert parsed.description
        assert parsed.body
        assert parsed.source_scope == "framework"

    def test_combined_token_budget(self):
        """All default skill bodies combined should be under 2000 tokens (~8000 chars)."""
        total_chars = 0
        for dir_name in SKILL_REGISTRY.values():
            path = _DEFAULT_SKILLS_DIR / dir_name / "SKILL.md"
            parsed = parse_skill_md(path, source_scope="framework")
            assert parsed is not None
            total_chars += len(parsed.body)

        approx_tokens = total_chars // 4
        assert approx_tokens < 2000, (
            f"Combined default skill bodies are ~{approx_tokens} tokens "
            f"({total_chars} chars), exceeding the 2000 token budget"
        )

    def test_shared_memory_keys_all_prefixed(self):
        """All shared memory keys must start with underscore."""
        for key in SHARED_MEMORY_KEYS:
            assert key.startswith("_"), f"Shared memory key missing _ prefix: {key}"


class TestDefaultSkillManager:
    def test_load_all_defaults(self):
        manager = DefaultSkillManager()
        manager.load()

        assert len(manager.active_skill_names) == 6
        for name in SKILL_REGISTRY:
            assert name in manager.active_skill_names

    def test_load_idempotent(self):
        manager = DefaultSkillManager()
        manager.load()
        first_skills = dict(manager.active_skills)
        manager.load()
        assert manager.active_skills == first_skills

    def test_build_protocols_prompt(self):
        manager = DefaultSkillManager()
        manager.load()
        prompt = manager.build_protocols_prompt()

        assert prompt.startswith("## Operational Protocols")
        # Should contain content from each active skill
        for name in SKILL_REGISTRY:
            skill = manager.active_skills[name]
            # At least some of the body should appear
            assert skill.body[:20] in prompt

    def test_protocols_prompt_empty_when_all_disabled(self):
        config = SkillsConfig(all_defaults_disabled=True)
        manager = DefaultSkillManager(config)
        manager.load()

        assert manager.build_protocols_prompt() == ""
        assert manager.active_skill_names == []

    def test_disable_single_skill(self):
        config = SkillsConfig.from_agent_vars(
            default_skills={"hive.quality-monitor": {"enabled": False}}
        )
        manager = DefaultSkillManager(config)
        manager.load()

        assert "hive.quality-monitor" not in manager.active_skill_names
        assert len(manager.active_skill_names) == 5

    def test_disable_all_via_convention(self):
        config = SkillsConfig.from_agent_vars(
            default_skills={"_all": {"enabled": False}}
        )
        manager = DefaultSkillManager(config)
        manager.load()

        assert manager.active_skill_names == []

    def test_log_active_skills(self, caplog):
        import logging

        with caplog.at_level(logging.INFO, logger="framework.skills.defaults"):
            manager = DefaultSkillManager()
            manager.load()
            manager.log_active_skills()

        assert "Default skills active:" in caplog.text

    def test_log_all_disabled(self, caplog):
        import logging

        config = SkillsConfig(all_defaults_disabled=True)
        with caplog.at_level(logging.INFO, logger="framework.skills.defaults"):
            manager = DefaultSkillManager(config)
            manager.load()
            manager.log_active_skills()

        assert "all disabled" in caplog.text


class TestSkillsConfig:
    def test_default_is_enabled(self):
        config = SkillsConfig()
        assert config.is_default_enabled("hive.note-taking") is True

    def test_explicit_disable(self):
        config = SkillsConfig(
            default_skills={"hive.note-taking": DefaultSkillConfig(enabled=False)}
        )
        assert config.is_default_enabled("hive.note-taking") is False
        assert config.is_default_enabled("hive.batch-ledger") is True

    def test_all_disabled_flag(self):
        config = SkillsConfig(all_defaults_disabled=True)
        assert config.is_default_enabled("hive.note-taking") is False
        assert config.is_default_enabled("anything") is False

    def test_from_agent_vars_basic(self):
        config = SkillsConfig.from_agent_vars(
            default_skills={
                "hive.note-taking": {"enabled": True},
                "hive.quality-monitor": {"enabled": False},
            },
            skills=["deep-research"],
        )
        assert config.is_default_enabled("hive.note-taking") is True
        assert config.is_default_enabled("hive.quality-monitor") is False
        assert config.skills == ["deep-research"]

    def test_from_agent_vars_bool_shorthand(self):
        config = SkillsConfig.from_agent_vars(
            default_skills={"hive.note-taking": False}
        )
        assert config.is_default_enabled("hive.note-taking") is False

    def test_from_agent_vars_all_disabled(self):
        config = SkillsConfig.from_agent_vars(
            default_skills={"_all": {"enabled": False}}
        )
        assert config.all_defaults_disabled is True

    def test_get_default_overrides(self):
        config = SkillsConfig.from_agent_vars(
            default_skills={
                "hive.batch-ledger": {"enabled": True, "checkpoint_every_n": 10},
            }
        )
        overrides = config.get_default_overrides("hive.batch-ledger")
        assert overrides == {"checkpoint_every_n": 10}

    def test_get_default_overrides_empty(self):
        config = SkillsConfig()
        assert config.get_default_overrides("hive.note-taking") == {}

    def test_from_agent_vars_none_inputs(self):
        config = SkillsConfig.from_agent_vars(default_skills=None, skills=None)
        assert config.skills == []
        assert config.default_skills == {}
        assert config.all_defaults_disabled is False
@@ -12,6 +12,7 @@ Covers:
- Single-edge paths unaffected
"""

import asyncio
from unittest.mock import MagicMock

import pytest
@@ -77,6 +78,19 @@ class TimingNode(NodeProtocol):
        )


class SlowNode(NodeProtocol):
    """Sleeps before returning -- used for timeout testing."""

    def __init__(self, delay: float = 10.0):
        self.delay = delay
        self.executed = False

    async def execute(self, ctx: NodeContext) -> NodeResult:
        await asyncio.sleep(self.delay)
        self.executed = True
        return NodeResult(success=True, output={"result": "slow"}, tokens_used=1, latency_ms=1)


# --- Fixtures ---
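`SlowNode` exists to trip the per-branch timeout paths exercised below. A sketch of the assumed mechanism, with `asyncio.wait_for` standing in for whatever the executor really does with `branch_timeout_seconds` (cancellation bookkeeping and the failure policies are omitted):

```python
import asyncio


# Sketch under assumptions: wraps each branch coroutine in wait_for and
# records a timeout as a branch failure instead of an unhandled exception.
async def run_branch(coro, timeout: float):
    try:
        return ("ok", await asyncio.wait_for(coro, timeout))
    except asyncio.TimeoutError:
        return ("timeout", None)


async def demo():
    async def fast():
        return "b2_out"

    async def slow():
        await asyncio.sleep(10)  # well past the branch timeout

    return await asyncio.gather(
        run_branch(fast(), 0.5),
        run_branch(slow(), 0.05),
    )
```

`asyncio.run(demo())` returns `[("ok", "b2_out"), ("timeout", None)]`: the fast branch completes while the slow one is cancelled, which is the behavior `continue_others` tolerates and `fail_all` turns into an overall failure.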
@@ -492,3 +506,186 @@ async def test_parallel_disabled_uses_sequential(runtime, goal):
    # Only one branch should have executed (sequential follows first edge)
    executed_count = sum([b1_impl.executed, b2_impl.executed])
    assert executed_count == 1


# === 12. Branch timeout cancels slow branch ===


@pytest.mark.asyncio
async def test_branch_timeout_cancels_slow_branch(runtime, goal):
    """A branch exceeding branch_timeout_seconds should be cancelled."""
    b1 = NodeSpec(
        id="b1", name="B1", description="slow", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="fast", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(branch_timeout_seconds=0.1, on_branch_failure="fail_all")
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    executor.register_node("b1", SlowNode(delay=10.0))
    executor.register_node("b2", SuccessNode({"b2_out": "ok"}))

    result = await executor.execute(graph, goal, {})

    # fail_all: one branch timed out → execution fails
    assert not result.success
    assert "failed" in result.error.lower()


# === 13. Branch timeout with continue_others ===


@pytest.mark.asyncio
async def test_branch_timeout_with_continue_others(runtime, goal):
    """continue_others should let fast branches finish even when one times out."""
    b1 = NodeSpec(
        id="b1", name="B1", description="slow", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="fast", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(
        branch_timeout_seconds=0.1, on_branch_failure="continue_others"
    )
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    executor.register_node("b1", SlowNode(delay=10.0))
    b2_impl = SuccessNode({"b2_out": "ok"})
    executor.register_node("b2", b2_impl)

    await executor.execute(graph, goal, {})

    # continue_others tolerates the timeout
    assert b2_impl.executed


# === 14. Branch timeout with fail_all (explicit) ===


@pytest.mark.asyncio
async def test_branch_timeout_with_fail_all(runtime, goal):
    """fail_all should propagate timeout as execution failure."""
    b1 = NodeSpec(
        id="b1", name="B1", description="slow", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="also slow", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(branch_timeout_seconds=0.1, on_branch_failure="fail_all")
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    executor.register_node("b1", SlowNode(delay=10.0))
    executor.register_node("b2", SlowNode(delay=10.0))

    result = await executor.execute(graph, goal, {})

    assert not result.success


# === 15. Memory conflict: last_wins ===


@pytest.mark.asyncio
async def test_memory_conflict_last_wins(runtime, goal):
    """last_wins should allow both branches to write the same key without error."""
    # Use distinct output_keys in spec (to pass graph validation) but have
    # the node impl write a shared key at runtime — this is the scenario
    # memory_conflict_strategy is designed to handle.
    b1 = NodeSpec(
        id="b1", name="B1", description="b1", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="b2", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(memory_conflict_strategy="last_wins")
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    # Both impls write "shared_key" — triggers conflict detection at runtime
    executor.register_node("b1", SuccessNode({"shared_key": "from_b1", "b1_out": "ok"}))
    executor.register_node("b2", SuccessNode({"shared_key": "from_b2", "b2_out": "ok"}))

    result = await executor.execute(graph, goal, {})

    assert result.success
    # The key should exist with one of the two values
    assert result.output.get("shared_key") in ("from_b1", "from_b2")


# === 16. Memory conflict: first_wins ===


@pytest.mark.asyncio
async def test_memory_conflict_first_wins(runtime, goal):
    """first_wins should keep the first branch's value and skip later writes."""
    b1 = NodeSpec(
        id="b1", name="B1", description="b1", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="b2", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(memory_conflict_strategy="first_wins")
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    executor.register_node("b1", SuccessNode({"shared_key": "from_b1", "b1_out": "ok"}))
    executor.register_node("b2", SuccessNode({"shared_key": "from_b2", "b2_out": "ok"}))

    result = await executor.execute(graph, goal, {})

    assert result.success


# === 17. Memory conflict: error raises ===


@pytest.mark.asyncio
async def test_memory_conflict_error_raises(runtime, goal):
    """error strategy should fail when two branches write the same key."""
    b1 = NodeSpec(
        id="b1", name="B1", description="b1", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="b2", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(memory_conflict_strategy="error")
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    executor.register_node("b1", SuccessNode({"shared_key": "from_b1", "b1_out": "ok"}))
    executor.register_node("b2", SuccessNode({"shared_key": "from_b2", "b2_out": "ok"}))

    result = await executor.execute(graph, goal, {})

    assert not result.success
    # The conflict RuntimeError is caught inside execute_single_branch,
    # which causes the branch to fail. fail_all then raises its own error.
    assert "failed" in result.error.lower()
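Tests 15 through 17 above pin down three conflict strategies. A sketch of the assumed merge semantics (the real handling lives inside the parallel branch executor and also covers conflict logging and branch ordering):

```python
# Assumed semantics of memory_conflict_strategy, inferred from the tests;
# not the framework's actual merge code.
def merge_branch_outputs(branch_outputs: list[dict], strategy: str = "last_wins") -> dict:
    merged: dict = {}
    for output in branch_outputs:
        for key, value in output.items():
            if key in merged:
                if strategy == "error":
                    raise RuntimeError(f"memory conflict on key {key!r}")
                if strategy == "first_wins":
                    continue  # keep the earlier branch's value
            merged[key] = value  # last_wins: later branches overwrite
    return merged
```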
@@ -3,12 +3,16 @@ Tests for core GraphExecutor execution paths.
Focused on minimal success and failure scenarios.
"""

import json
import logging

import pytest

from framework.graph.edge import GraphSpec
from framework.graph.executor import GraphExecutor
from framework.graph.goal import Goal
from framework.graph.node import NodeResult, NodeSpec
from framework.utils.io import atomic_write


# ---- Dummy runtime (no real logging) ----
@@ -25,6 +29,14 @@ class DummyRuntime:
        pass


class DummyMemory:
    def __init__(self, data):
        self._data = data

    def read_all(self):
        return self._data


# ---- Fake node that always succeeds ----
class SuccessNode:
    def validate_input(self, ctx):
@@ -245,3 +257,61 @@ async def test_executor_no_events_without_event_bus():
    result = await executor.execute(graph=graph, goal=goal)

    assert result.success is True


def test_write_progress_uses_atomic_write_and_updates_state(tmp_path, monkeypatch):
    runtime = DummyRuntime()
    executor = GraphExecutor(runtime=runtime, storage_path=tmp_path)
    state_path = tmp_path / "state.json"
    state_path.write_text(json.dumps({"entry_point": "primary"}), encoding="utf-8")
    memory = DummyMemory({"foo": "bar"})

    called = {}

    def recording_atomic_write(path, *args, **kwargs):
        called["path"] = path
        return atomic_write(path, *args, **kwargs)

    monkeypatch.setattr("framework.graph.executor.atomic_write", recording_atomic_write)

    executor._write_progress(
        current_node="node-b",
        path=["node-a", "node-b"],
        memory=memory,
        node_visit_counts={"node-a": 1, "node-b": 1},
    )

    state = json.loads(state_path.read_text(encoding="utf-8"))
    assert called["path"] == state_path
    assert state["entry_point"] == "primary"
    assert state["progress"]["current_node"] == "node-b"
    assert state["progress"]["path"] == ["node-a", "node-b"]
    assert state["progress"]["node_visit_counts"] == {"node-a": 1, "node-b": 1}
    assert state["progress"]["steps_executed"] == 2
    assert state["memory"] == {"foo": "bar"}
    assert state["memory_keys"] == ["foo"]
    assert "updated_at" in state["timestamps"]


def test_write_progress_logs_warning_on_atomic_write_failure(tmp_path, monkeypatch, caplog):
    runtime = DummyRuntime()
    executor = GraphExecutor(runtime=runtime, storage_path=tmp_path)
    state_path = tmp_path / "state.json"
    state_path.write_text(json.dumps({"entry_point": "primary"}), encoding="utf-8")
    memory = DummyMemory({"foo": "bar"})

    def failing_atomic_write(*args, **kwargs):
        raise OSError("disk full")

    monkeypatch.setattr("framework.graph.executor.atomic_write", failing_atomic_write)

    with caplog.at_level(logging.WARNING):
        executor._write_progress(
            current_node="node-b",
            path=["node-a", "node-b"],
            memory=memory,
            node_visit_counts={"node-a": 1, "node-b": 1},
        )

    assert "Failed to persist progress state to" in caplog.text
    assert str(state_path) in caplog.text
@@ -338,6 +338,69 @@ class TestLLMJudgeBackwardCompatibility:
        assert call_kwargs["model"] == "claude-haiku-4-5-20251001"
        assert call_kwargs["max_tokens"] == 500

    def test_openai_fallback_uses_litellm_provider(self, monkeypatch):
        """When OPENAI_API_KEY is set, evaluate() should use a LiteLLM-based provider."""
        # Force the OpenAI fallback path (no injected provider, no Anthropic key)
        monkeypatch.setenv("OPENAI_API_KEY", "sk-test-openai")
        monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)

        # Stub LiteLLMProvider so we don't call the real API; record what judge passes through
        captured_calls: list[dict] = []

        class DummyProvider:
            def __init__(self, model: str = "gpt-4o-mini"):
                self.model = model

            def complete(
                self,
                messages,
                system="",
                tools=None,
                max_tokens=1024,
                response_format=None,
                json_mode=False,
                max_retries=None,
            ):
                captured_calls.append(
                    {
                        "messages": messages,
                        "system": system,
                        "max_tokens": max_tokens,
                        "json_mode": json_mode,
                        "model": self.model,
                    }
                )

                class _Resp:
                    def __init__(self, content: str):
                        self.content = content

                # Minimal response object with a content attribute
                return _Resp('{"passes": true, "explanation": "OK"}')

        monkeypatch.setattr(
            "framework.llm.litellm.LiteLLMProvider",
            DummyProvider,
        )

        judge = LLMJudge()
        result = judge.evaluate(
            constraint="no-hallucination",
            source_document="The sky is blue.",
            summary="The sky is blue.",
            criteria="Summary must only contain facts from source",
        )

        # Judge should have used our stub once and returned the stub's JSON result
        assert result["passes"] is True
        assert result["explanation"] == "OK"
        assert len(captured_calls) == 1

        call = captured_calls[0]
        assert call["model"] == "gpt-4o-mini"
        assert call["max_tokens"] == 500
        assert call["json_mode"] is True


# ============================================================================
# LLMJudge Integration Pattern Tests
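The test above forces the OpenAI fallback by setting `OPENAI_API_KEY` and clearing `ANTHROPIC_API_KEY`. A hypothetical sketch of that selection order (names and return values here are illustrative only, not the judge's real API):

```python
import os


# Assumed fallback order, inferred from the test setup: an injected provider
# wins, then ANTHROPIC_API_KEY, then OPENAI_API_KEY via LiteLLM.
def pick_provider(injected=None, env=None):
    env = env if env is not None else os.environ
    if injected is not None:
        return injected
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("OPENAI_API_KEY"):
        return "litellm"
    raise RuntimeError("no LLM provider available")
```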
@@ -0,0 +1,172 @@
"""Tests for the skill catalog and prompt generation."""

import pytest

from framework.skills.catalog import SkillCatalog
from framework.skills.parser import ParsedSkill


def _make_skill(
    name: str = "my-skill",
    description: str = "A test skill.",
    source_scope: str = "project",
    body: str = "Instructions here.",
    location: str = "/tmp/skills/my-skill/SKILL.md",
    base_dir: str = "/tmp/skills/my-skill",
) -> ParsedSkill:
    return ParsedSkill(
        name=name,
        description=description,
        location=location,
        base_dir=base_dir,
        source_scope=source_scope,
        body=body,
    )


class TestSkillCatalog:
    def test_add_and_get(self):
        catalog = SkillCatalog()
        skill = _make_skill()
        catalog.add(skill)

        assert catalog.get("my-skill") is skill
        assert catalog.get("nonexistent") is None
        assert catalog.skill_count == 1

    def test_init_with_skills_list(self):
        skills = [_make_skill("a", "Skill A"), _make_skill("b", "Skill B")]
        catalog = SkillCatalog(skills)

        assert catalog.skill_count == 2
        assert catalog.get("a") is not None
        assert catalog.get("b") is not None

    def test_activation_tracking(self):
        catalog = SkillCatalog([_make_skill()])
        assert not catalog.is_activated("my-skill")

        catalog.mark_activated("my-skill")
        assert catalog.is_activated("my-skill")

    def test_allowlisted_dirs(self):
        skills = [
            _make_skill("a", base_dir="/skills/a"),
            _make_skill("b", base_dir="/skills/b"),
        ]
        catalog = SkillCatalog(skills)
        dirs = catalog.allowlisted_dirs

        assert "/skills/a" in dirs
        assert "/skills/b" in dirs

    def test_to_prompt_empty_catalog(self):
        catalog = SkillCatalog()
        assert catalog.to_prompt() == ""

    def test_to_prompt_framework_only(self):
        """Framework-scope skills should NOT appear in the catalog prompt."""
        catalog = SkillCatalog([_make_skill(source_scope="framework")])
        assert catalog.to_prompt() == ""

    def test_to_prompt_xml_generation(self):
        skills = [
            _make_skill("alpha", "Alpha skill", "project", location="/p/alpha/SKILL.md"),
            _make_skill("beta", "Beta skill", "user", location="/u/beta/SKILL.md"),
        ]
        catalog = SkillCatalog(skills)
        prompt = catalog.to_prompt()

        assert "<available_skills>" in prompt
        assert "</available_skills>" in prompt
        assert "<name>alpha</name>" in prompt
        assert "<name>beta</name>" in prompt
        assert "<description>Alpha skill</description>" in prompt
        assert "<location>/p/alpha/SKILL.md</location>" in prompt

    def test_to_prompt_sorted_by_name(self):
        skills = [
            _make_skill("zebra", "Z skill", "project"),
            _make_skill("alpha", "A skill", "project"),
        ]
        catalog = SkillCatalog(skills)
        prompt = catalog.to_prompt()

        alpha_pos = prompt.index("alpha")
        zebra_pos = prompt.index("zebra")
        assert alpha_pos < zebra_pos

    def test_to_prompt_xml_escaping(self):
        skill = _make_skill("test", 'Has <special> & "chars"', "project")
        catalog = SkillCatalog([skill])
        prompt = catalog.to_prompt()

        # Raw angle brackets and ampersands must come out XML-escaped
        assert "&lt;special&gt;" in prompt
        assert "&amp;" in prompt

    def test_to_prompt_excludes_framework_includes_others(self):
        """Mixed scopes: only framework skills are excluded from catalog."""
        skills = [
            _make_skill("proj", "Project skill", "project"),
            _make_skill("usr", "User skill", "user"),
            _make_skill("fw", "Framework skill", "framework"),
        ]
        catalog = SkillCatalog(skills)
        prompt = catalog.to_prompt()

        assert "<name>proj</name>" in prompt
        assert "<name>usr</name>" in prompt
        assert "fw" not in prompt

    def test_to_prompt_contains_behavioral_instruction(self):
        catalog = SkillCatalog([_make_skill(source_scope="project")])
        prompt = catalog.to_prompt()

        assert "When a task matches a skill's description" in prompt
        assert "SKILL.md" in prompt

    def test_build_pre_activated_prompt(self):
        skill = _make_skill("research", body="## Deep Research\nDo thorough research.")
        catalog = SkillCatalog([skill])
        prompt = catalog.build_pre_activated_prompt(["research"])

        assert "Pre-Activated Skill: research" in prompt
        assert "## Deep Research" in prompt
        assert catalog.is_activated("research")

    def test_build_pre_activated_skips_already_activated(self):
        skill = _make_skill("research", body="Research body")
        catalog = SkillCatalog([skill])
        catalog.mark_activated("research")

        prompt = catalog.build_pre_activated_prompt(["research"])
        assert prompt == ""

    def test_build_pre_activated_missing_skill(self):
|
||||
catalog = SkillCatalog()
|
||||
prompt = catalog.build_pre_activated_prompt(["nonexistent"])
|
||||
assert prompt == ""
|
||||
|
||||
def test_build_pre_activated_multiple(self):
|
||||
skills = [
|
||||
_make_skill("a", body="Body A"),
|
||||
_make_skill("b", body="Body B"),
|
||||
]
|
||||
catalog = SkillCatalog(skills)
|
||||
prompt = catalog.build_pre_activated_prompt(["a", "b"])
|
||||
|
||||
assert "Pre-Activated Skill: a" in prompt
|
||||
assert "Body A" in prompt
|
||||
assert "Pre-Activated Skill: b" in prompt
|
||||
assert "Body B" in prompt
|
||||
assert catalog.is_activated("a")
|
||||
assert catalog.is_activated("b")
|
||||
|
||||
def test_duplicate_add_overwrites(self):
|
||||
"""Adding a skill with the same name replaces the previous one."""
|
||||
catalog = SkillCatalog()
|
||||
catalog.add(_make_skill("x", "First"))
|
||||
catalog.add(_make_skill("x", "Second"))
|
||||
|
||||
assert catalog.skill_count == 1
|
||||
assert catalog.get("x").description == "Second"
|
||||
@@ -0,0 +1,145 @@
"""Tests for skill discovery."""

import pytest
from pathlib import Path

from framework.skills.discovery import SkillDiscovery, DiscoveryConfig


def _write_skill(base: Path, name: str, description: str = "A test skill.") -> Path:
    """Create a minimal skill directory with SKILL.md."""
    skill_dir = base / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    (skill_dir / "SKILL.md").write_text(
        f"---\nname: {name}\ndescription: {description}\n---\n\nInstructions.\n",
        encoding="utf-8",
    )
    return skill_dir


class TestSkillDiscovery:
    def test_discover_project_skills(self, tmp_path):
        # Create project-level skills
        agents_skills = tmp_path / ".agents" / "skills"
        _write_skill(agents_skills, "skill-a")
        _write_skill(agents_skills, "skill-b")

        discovery = SkillDiscovery(DiscoveryConfig(
            project_root=tmp_path,
            skip_user_scope=True,
            skip_framework_scope=True,
        ))
        skills = discovery.discover()

        names = {s.name for s in skills}
        assert "skill-a" in names
        assert "skill-b" in names
        assert all(s.source_scope == "project" for s in skills)

    def test_hive_skills_path(self, tmp_path):
        hive_skills = tmp_path / ".hive" / "skills"
        _write_skill(hive_skills, "hive-skill")

        discovery = SkillDiscovery(DiscoveryConfig(
            project_root=tmp_path,
            skip_user_scope=True,
            skip_framework_scope=True,
        ))
        skills = discovery.discover()

        assert len(skills) == 1
        assert skills[0].name == "hive-skill"

    def test_collision_project_overrides_user(self, tmp_path, monkeypatch):
        # User-level skill
        user_skills = tmp_path / "home" / ".agents" / "skills"
        _write_skill(user_skills, "shared-skill", "User version")

        # Project-level skill with same name
        project_skills = tmp_path / "project" / ".agents" / "skills"
        _write_skill(project_skills, "shared-skill", "Project version")

        monkeypatch.setattr(Path, "home", lambda: tmp_path / "home")

        discovery = SkillDiscovery(DiscoveryConfig(
            project_root=tmp_path / "project",
            skip_framework_scope=True,
        ))
        skills = discovery.discover()

        matching = [s for s in skills if s.name == "shared-skill"]
        assert len(matching) == 1
        assert matching[0].description == "Project version"

    def test_collision_hive_overrides_agents(self, tmp_path):
        # Cross-client path
        agents_skills = tmp_path / ".agents" / "skills"
        _write_skill(agents_skills, "override-test", "Agents version")

        # Hive-specific path (higher precedence)
        hive_skills = tmp_path / ".hive" / "skills"
        _write_skill(hive_skills, "override-test", "Hive version")

        discovery = SkillDiscovery(DiscoveryConfig(
            project_root=tmp_path,
            skip_user_scope=True,
            skip_framework_scope=True,
        ))
        skills = discovery.discover()

        matching = [s for s in skills if s.name == "override-test"]
        assert len(matching) == 1
        assert matching[0].description == "Hive version"

    def test_skips_git_and_node_modules(self, tmp_path):
        skills_dir = tmp_path / ".agents" / "skills"
        _write_skill(skills_dir / ".git", "git-skill")
        _write_skill(skills_dir / "node_modules", "npm-skill")
        _write_skill(skills_dir, "real-skill")

        discovery = SkillDiscovery(DiscoveryConfig(
            project_root=tmp_path,
            skip_user_scope=True,
            skip_framework_scope=True,
        ))
        skills = discovery.discover()

        names = {s.name for s in skills}
        assert "real-skill" in names
        assert "git-skill" not in names
        assert "npm-skill" not in names

    def test_empty_scan(self, tmp_path):
        discovery = SkillDiscovery(DiscoveryConfig(
            project_root=tmp_path,
            skip_user_scope=True,
            skip_framework_scope=True,
        ))
        skills = discovery.discover()
        assert skills == []

    def test_framework_scope_loads_defaults(self):
        """Framework scope should find the built-in default skills."""
        discovery = SkillDiscovery(DiscoveryConfig(
            skip_user_scope=True,
        ))
        skills = discovery.discover()

        framework_skills = [s for s in skills if s.source_scope == "framework"]
        names = {s.name for s in framework_skills}
        assert "hive.note-taking" in names
        assert "hive.batch-ledger" in names

    def test_max_depth_limit(self, tmp_path):
        # Create a skill nested beyond max_depth
        deep = tmp_path / ".agents" / "skills" / "a" / "b" / "c" / "d" / "e"
        _write_skill(deep, "too-deep")

        discovery = SkillDiscovery(DiscoveryConfig(
            project_root=tmp_path,
            skip_user_scope=True,
            skip_framework_scope=True,
            max_depth=2,
        ))
        skills = discovery.discover()
        assert not any(s.name == "too-deep" for s in skills)
@@ -0,0 +1,218 @@
"""Integration tests for the skill system — prompt composition and backward compatibility."""

import pytest

from framework.graph.prompt_composer import compose_system_prompt
from framework.skills.catalog import SkillCatalog
from framework.skills.config import SkillsConfig
from framework.skills.defaults import DefaultSkillManager
from framework.skills.discovery import DiscoveryConfig, SkillDiscovery
from framework.skills.parser import ParsedSkill


def _make_skill(
    name: str = "test-skill",
    description: str = "A test skill.",
    source_scope: str = "project",
    body: str = "Skill instructions.",
    location: str = "/tmp/skills/test-skill/SKILL.md",
    base_dir: str = "/tmp/skills/test-skill",
) -> ParsedSkill:
    return ParsedSkill(
        name=name,
        description=description,
        location=location,
        base_dir=base_dir,
        source_scope=source_scope,
        body=body,
    )


class TestPromptComposition:
    """Test that skill prompts integrate correctly with compose_system_prompt."""

    def test_backward_compat_no_skill_params(self):
        """compose_system_prompt works without skill params (backward compat)."""
        prompt = compose_system_prompt(
            identity_prompt="You are a helpful agent.",
            focus_prompt="Focus on the task.",
        )
        assert "You are a helpful agent." in prompt
        assert "Focus on the task." in prompt
        assert "Current date and time" in prompt

    def test_skills_catalog_in_prompt(self):
        catalog = SkillCatalog([_make_skill(source_scope="project")])
        catalog_prompt = catalog.to_prompt()

        prompt = compose_system_prompt(
            identity_prompt="You are an agent.",
            focus_prompt=None,
            skills_catalog_prompt=catalog_prompt,
        )
        assert "<available_skills>" in prompt
        assert "<name>test-skill</name>" in prompt

    def test_protocols_in_prompt(self):
        manager = DefaultSkillManager()
        manager.load()
        protocols_prompt = manager.build_protocols_prompt()

        prompt = compose_system_prompt(
            identity_prompt="You are an agent.",
            focus_prompt=None,
            protocols_prompt=protocols_prompt,
        )
        assert "## Operational Protocols" in prompt

    def test_full_prompt_ordering(self):
        """Verify the three-layer onion ordering with all sections present."""
        catalog = SkillCatalog([_make_skill(source_scope="project")])

        prompt = compose_system_prompt(
            identity_prompt="IDENTITY_SECTION",
            focus_prompt="FOCUS_SECTION",
            narrative="NARRATIVE_SECTION",
            accounts_prompt="ACCOUNTS_SECTION",
            skills_catalog_prompt=catalog.to_prompt(),
            protocols_prompt="PROTOCOLS_SECTION",
        )

        identity_pos = prompt.index("IDENTITY_SECTION")
        accounts_pos = prompt.index("ACCOUNTS_SECTION")
        skills_pos = prompt.index("available_skills")
        protocols_pos = prompt.index("PROTOCOLS_SECTION")
        narrative_pos = prompt.index("NARRATIVE_SECTION")
        focus_pos = prompt.index("FOCUS_SECTION")

        # Identity → Accounts → Skills → Protocols → Narrative → Focus
        assert identity_pos < accounts_pos
        assert accounts_pos < skills_pos
        assert skills_pos < protocols_pos
        assert protocols_pos < narrative_pos
        assert narrative_pos < focus_pos

    def test_none_skill_prompts_excluded(self):
        """None values for skill prompts should not add content."""
        prompt = compose_system_prompt(
            identity_prompt="Hello",
            focus_prompt=None,
            skills_catalog_prompt=None,
            protocols_prompt=None,
        )
        assert "available_skills" not in prompt
        assert "Operational Protocols" not in prompt

    def test_empty_skill_prompts_excluded(self):
        """Empty string skill prompts should not add content."""
        prompt = compose_system_prompt(
            identity_prompt="Hello",
            focus_prompt=None,
            skills_catalog_prompt="",
            protocols_prompt="",
        )
        assert "available_skills" not in prompt
        assert "Operational Protocols" not in prompt


class TestEndToEndPipeline:
    """Test the full discovery → catalog → prompt pipeline."""

    def test_discovery_to_catalog_to_prompt(self, tmp_path):
        # Create a project skill
        skill_dir = tmp_path / ".agents" / "skills" / "my-tool"
        skill_dir.mkdir(parents=True)
        (skill_dir / "SKILL.md").write_text(
            "---\nname: my-tool\ndescription: Tool for testing.\n---\n\n"
            "## Usage\nUse this tool when testing.\n",
            encoding="utf-8",
        )

        # Discovery
        discovery = SkillDiscovery(DiscoveryConfig(
            project_root=tmp_path,
            skip_user_scope=True,
            skip_framework_scope=True,
        ))
        skills = discovery.discover()
        assert len(skills) == 1

        # Catalog
        catalog = SkillCatalog(skills)
        assert catalog.skill_count == 1

        # Prompt generation
        prompt = catalog.to_prompt()
        assert "<name>my-tool</name>" in prompt
        assert "<description>Tool for testing.</description>" in prompt

        # Pre-activation
        activated = catalog.build_pre_activated_prompt(["my-tool"])
        assert "## Usage" in activated
        assert catalog.is_activated("my-tool")

    def test_defaults_plus_community_skills(self, tmp_path):
        """Default skills and community skills produce separate prompt sections."""
        # Create a community skill
        skill_dir = tmp_path / ".agents" / "skills" / "community-skill"
        skill_dir.mkdir(parents=True)
        (skill_dir / "SKILL.md").write_text(
            "---\nname: community-skill\ndescription: A community skill.\n---\n\nDo stuff.\n",
            encoding="utf-8",
        )

        # Discover community skills
        discovery = SkillDiscovery(DiscoveryConfig(
            project_root=tmp_path,
            skip_user_scope=True,
            skip_framework_scope=True,
        ))
        community_skills = discovery.discover()
        catalog = SkillCatalog(community_skills)
        catalog_prompt = catalog.to_prompt()

        # Load default skills
        manager = DefaultSkillManager()
        manager.load()
        protocols_prompt = manager.build_protocols_prompt()

        # Compose
        prompt = compose_system_prompt(
            identity_prompt="Agent identity.",
            focus_prompt=None,
            skills_catalog_prompt=catalog_prompt,
            protocols_prompt=protocols_prompt,
        )

        # Both sections present
        assert "<available_skills>" in prompt
        assert "<name>community-skill</name>" in prompt
        assert "## Operational Protocols" in prompt

    def test_config_disables_defaults_keeps_community(self, tmp_path):
        """Disabling all defaults should still allow community skills."""
        skill_dir = tmp_path / ".agents" / "skills" / "still-here"
        skill_dir.mkdir(parents=True)
        (skill_dir / "SKILL.md").write_text(
            "---\nname: still-here\ndescription: Survives config.\n---\n\nBody.\n",
            encoding="utf-8",
        )

        # Community skills
        discovery = SkillDiscovery(DiscoveryConfig(
            project_root=tmp_path,
            skip_user_scope=True,
            skip_framework_scope=True,
        ))
        catalog = SkillCatalog(discovery.discover())

        # Disabled defaults
        config = SkillsConfig(all_defaults_disabled=True)
        manager = DefaultSkillManager(config)
        manager.load()

        catalog_prompt = catalog.to_prompt()
        protocols_prompt = manager.build_protocols_prompt()

        assert "<name>still-here</name>" in catalog_prompt
        assert protocols_prompt == ""
@@ -0,0 +1,180 @@
"""Tests for SKILL.md parser."""

import pytest
from pathlib import Path

from framework.skills.parser import parse_skill_md, ParsedSkill


@pytest.fixture
def tmp_skill(tmp_path):
    """Helper to create a SKILL.md file and return its path."""
    def _create(content: str, dir_name: str = "my-skill") -> Path:
        skill_dir = tmp_path / dir_name
        skill_dir.mkdir(parents=True, exist_ok=True)
        skill_md = skill_dir / "SKILL.md"
        skill_md.write_text(content, encoding="utf-8")
        return skill_md
    return _create


class TestParseSkillMd:
    def test_happy_path(self, tmp_skill):
        content = """---
name: my-skill
description: A test skill for unit testing.
license: MIT
---

## Instructions

Do the thing.
"""
        result = parse_skill_md(tmp_skill(content), source_scope="project")
        assert result is not None
        assert result.name == "my-skill"
        assert result.description == "A test skill for unit testing."
        assert result.license == "MIT"
        assert result.source_scope == "project"
        assert "Do the thing." in result.body

    def test_missing_description_returns_none(self, tmp_skill):
        content = """---
name: no-desc
---

Body here.
"""
        result = parse_skill_md(tmp_skill(content, "no-desc"))
        assert result is None

    def test_missing_name_uses_directory(self, tmp_skill):
        content = """---
description: Skill without a name field.
---

Body.
"""
        result = parse_skill_md(tmp_skill(content, "fallback-dir"))
        assert result is not None
        assert result.name == "fallback-dir"

    def test_empty_file_returns_none(self, tmp_skill):
        result = parse_skill_md(tmp_skill("", "empty"))
        assert result is None

    def test_no_frontmatter_delimiters_returns_none(self, tmp_skill):
        content = "Just plain text without YAML frontmatter."
        result = parse_skill_md(tmp_skill(content, "no-yaml"))
        assert result is None

    def test_unparseable_yaml_returns_none(self, tmp_skill):
        content = """---
name: [invalid yaml
- broken: {{
---

Body.
"""
        result = parse_skill_md(tmp_skill(content, "bad-yaml"))
        assert result is None

    def test_unquoted_colon_fixup(self, tmp_skill):
        content = """---
name: colon-test
description: Use for: research tasks
---

Body.
"""
        result = parse_skill_md(tmp_skill(content, "colon-test"))
        assert result is not None
        assert "research tasks" in result.description

    def test_long_name_warns_but_loads(self, tmp_skill):
        long_name = "a" * 100
        content = f"""---
name: {long_name}
description: A skill with an excessively long name.
---

Body.
"""
        result = parse_skill_md(tmp_skill(content, "long-name"))
        assert result is not None
        assert result.name == long_name

    def test_name_mismatch_warns_but_loads(self, tmp_skill):
        content = """---
name: different-name
description: Name doesn't match directory.
---

Body.
"""
        result = parse_skill_md(tmp_skill(content, "actual-dir"))
        assert result is not None
        assert result.name == "different-name"

    def test_optional_fields(self, tmp_skill):
        content = """---
name: full-skill
description: Skill with all optional fields.
license: Apache-2.0
compatibility:
  - claude-code
  - cursor
metadata:
  author: tester
  version: "1.0"
allowed-tools:
  - web_search
  - read_file
---

Instructions here.
"""
        result = parse_skill_md(tmp_skill(content, "full-skill"))
        assert result is not None
        assert result.license == "Apache-2.0"
        assert result.compatibility == ["claude-code", "cursor"]
        assert result.metadata == {"author": "tester", "version": "1.0"}
        assert result.allowed_tools == ["web_search", "read_file"]

    def test_body_extraction(self, tmp_skill):
        content = """---
name: body-test
description: Test body extraction.
---

## Step 1

Do this first.

## Step 2

Then do this.
"""
        result = parse_skill_md(tmp_skill(content, "body-test"))
        assert result is not None
        assert "## Step 1" in result.body
        assert "## Step 2" in result.body
        assert "Do this first." in result.body

    def test_location_is_absolute(self, tmp_skill):
        content = """---
name: abs-path
description: Check absolute path.
---

Body.
"""
        path = tmp_skill(content, "abs-path")
        result = parse_skill_md(path)
        assert result is not None
        assert Path(result.location).is_absolute()
        assert Path(result.base_dir).is_absolute()

    def test_nonexistent_file_returns_none(self, tmp_path):
        result = parse_skill_md(tmp_path / "nonexistent" / "SKILL.md")
        assert result is None
@@ -299,6 +299,66 @@ class TestSubagentExecution:
        assert "metadata" in result_data
        assert result_data["metadata"]["agent_id"] == "researcher"

    @pytest.mark.asyncio
    async def test_gcu_subagent_auto_populates_tools_from_catalog(self, runtime):
        """GCU subagent with tools=[] should receive all catalog tools (auto-populate).

        GCU nodes declare tools=[] because the runner expands them at setup time.
        But _execute_subagent filters by subagent_spec.tools, which is still empty.
        The fix: when the subagent is a GCU with no declared tools, include all catalog tools.
        """
        gcu_spec = NodeSpec(
            id="browser_worker",
            name="Browser Worker",
            description="GCU browser subagent",
            node_type="gcu",
            output_keys=["result"],
            tools=[],  # Empty — expects auto-population
        )

        parent_spec = NodeSpec(
            id="parent",
            name="Parent",
            description="Orchestrator",
            node_type="event_loop",
            output_keys=["result"],
            sub_agents=["browser_worker"],
        )

        spy_llm = MockStreamingLLM(
            [set_output_scenario("result", "scraped"), text_finish_scenario()]
        )

        browser_tool = Tool(name="browser_snapshot", description="Snapshot")

        node = EventLoopNode(config=LoopConfig(max_iterations=5))
        memory = SharedMemory()
        scoped = memory.with_permissions(read_keys=[], write_keys=["result"])

        ctx = NodeContext(
            runtime=runtime,
            node_id="parent",
            node_spec=parent_spec,
            memory=scoped,
            input_data={},
            llm=spy_llm,
            available_tools=[],
            all_tools=[browser_tool],
            goal_context="",
            goal=None,
            node_registry={"browser_worker": gcu_spec},
        )

        result = await node._execute_subagent(ctx, "browser_worker", "Scrape example.com")
        assert result.is_error is False

        # Verify subagent LLM received browser tools from catalog
        assert spy_llm.stream_calls, "LLM should have been called"
        first_call_tools = spy_llm.stream_calls[0]["tools"]
        tool_names = {t.name for t in first_call_tools} if first_call_tools else set()
        assert "browser_snapshot" in tool_names
        assert "delegate_to_sub_agent" not in tool_names


# ---------------------------------------------------------------------------
# Tests for nested subagent prevention
@@ -601,6 +661,63 @@ class TestReportToParentExecution:
        # Metadata should include report_count
        assert result_data["metadata"]["report_count"] == 1

    @pytest.mark.asyncio
    async def test_subagent_tool_events_visible_on_shared_bus(
        self, runtime, parent_node_spec, subagent_node_spec
    ):
        """Subagent internal tool calls should emit TOOL_CALL events on the shared bus."""
        bus = EventBus()
        tool_events = []

        async def handler(event):
            tool_events.append(event)

        bus.subscribe(
            event_types=[EventType.TOOL_CALL_STARTED, EventType.TOOL_CALL_COMPLETED],
            handler=handler,
        )

        subagent_llm = MockStreamingLLM(
            [
                set_output_scenario("findings", "Results"),
                text_finish_scenario(),
            ]
        )

        node = EventLoopNode(
            event_bus=bus,
            config=LoopConfig(max_iterations=10),
        )

        memory = SharedMemory()
        scoped = memory.with_permissions(read_keys=[], write_keys=["result"])

        ctx = NodeContext(
            runtime=runtime,
            node_id="parent",
            node_spec=parent_node_spec,
            memory=scoped,
            input_data={},
            llm=subagent_llm,
            available_tools=[],
            goal_context="",
            goal=None,
            node_registry={"researcher": subagent_node_spec},
        )

        result = await node._execute_subagent(ctx, "researcher", "Do research")
        assert result.is_error is False

        # Subagent tool calls should appear on the shared bus
        started = [e for e in tool_events if e.type == EventType.TOOL_CALL_STARTED]
        completed = [e for e in tool_events if e.type == EventType.TOOL_CALL_COMPLETED]
        assert len(started) >= 1, "Expected at least one TOOL_CALL_STARTED from subagent"
        assert len(completed) >= 1, "Expected at least one TOOL_CALL_COMPLETED from subagent"

        # Events should have the namespaced subagent node_id
        for evt in started + completed:
            assert "subagent" in evt.node_id, f"Expected namespaced node_id, got: {evt.node_id}"

    @pytest.mark.asyncio
    async def test_event_bus_receives_subagent_report(
        self, runtime, parent_node_spec, subagent_node_spec
@@ -1,171 +0,0 @@
#!/usr/bin/env python3
"""
Verification script for Aden Hive Framework MCP Server

This script checks if the MCP server is properly installed and configured.
"""

import json
import logging
import subprocess
import sys
from pathlib import Path

logger = logging.getLogger(__name__)


def setup_logger():
    """Configure logger for CLI usage."""
    if not logger.handlers:
        handler = logging.StreamHandler(sys.stdout)
        formatter = logging.Formatter("%(message)s")
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    logger.setLevel(logging.INFO)


class Colors:
    GREEN = "\033[0;32m"
    YELLOW = "\033[1;33m"
    RED = "\033[0;31m"
    BLUE = "\033[0;34m"
    NC = "\033[0m"


def check(description: str) -> bool:
    """Print check description and return a context manager for result."""
    logger.info(f"Checking {description}... ", extra={"end": ""})
    sys.stdout.flush()
    return True


def success(msg: str = "OK"):
    """Log success message."""
    logger.info(f"{Colors.GREEN}✓ {msg}{Colors.NC}")


def warning(msg: str):
    """Log warning message."""
    logger.warning(f"{Colors.YELLOW}⚠ {msg}{Colors.NC}")


def error(msg: str):
    """Log error message."""
    logger.error(f"{Colors.RED}✗ {msg}{Colors.NC}")


def main():
    """Run verification checks."""
    setup_logger()
    logger.info("=== MCP Server Verification ===")
    logger.info("")

    script_dir = Path(__file__).parent.absolute()
    all_checks_passed = True

    # Check 1: Framework package installed
    check("framework package installation")
    try:
        result = subprocess.run(
            [sys.executable, "-c", "import framework; print(framework.__file__)"],
            capture_output=True,
            text=True,
            check=True,
            encoding="utf-8",
        )
        framework_path = result.stdout.strip()
        success(f"installed at {framework_path}")
    except subprocess.CalledProcessError:
        error("framework package not found")
        logger.info(f"  Run: uv pip install -e {script_dir}")
        all_checks_passed = False

    # Check 2: MCP dependencies
    check("MCP dependencies")
    missing_deps = []
    for dep in ["mcp", "fastmcp"]:
        try:
            subprocess.run(
                [sys.executable, "-c", f"import {dep}"],
                capture_output=True,
                check=True,
                encoding="utf-8",
            )
        except subprocess.CalledProcessError:
            missing_deps.append(dep)

    if missing_deps:
        error(f"missing: {', '.join(missing_deps)}")
        logger.info(f"  Run: uv pip install {' '.join(missing_deps)}")
        all_checks_passed = False
    else:
        success("all installed")

    # Check 3: MCP configuration file
    check("MCP configuration file")
    mcp_config = script_dir / ".mcp.json"
    if mcp_config.exists():
        try:
            with open(mcp_config, encoding="utf-8") as f:
                config = json.load(f)

            if "mcpServers" in config:
                success("found and valid")
                for name, server_config in config.get("mcpServers", {}).items():
                    logger.info(f"  Server: {name}")
                    logger.info(f"    Command: {server_config.get('command')}")
                    logger.info(f"    Args: {' '.join(server_config.get('args', []))}")
            else:
                warning("exists but missing mcpServers config")
                all_checks_passed = False
        except json.JSONDecodeError:
            error("invalid JSON format")
            all_checks_passed = False
    else:
        warning("not found (optional)")
        logger.info(f"  Location would be: {mcp_config}")

    # Check 4: Framework modules
    check("core framework modules")
    modules_to_check = [
        "framework.runtime.core",
        "framework.graph.executor",
        "framework.graph.node",
        "framework.builder.query",
        "framework.llm",
    ]

    failed_modules = []
    for module in modules_to_check:
        try:
            subprocess.run(
                [sys.executable, "-c", f"import {module}"],
                capture_output=True,
                check=True,
                encoding="utf-8",
            )
        except subprocess.CalledProcessError:
            failed_modules.append(module)

    if failed_modules:
        error(f"failed to import: {', '.join(failed_modules)}")
        all_checks_passed = False
    else:
        success(f"all {len(modules_to_check)} modules OK")

    logger.info("")
    logger.info("=" * 40)
    if all_checks_passed:
        logger.info(f"{Colors.GREEN}✓ All checks passed!{Colors.NC}")
        logger.info("")
        logger.info("Your framework is ready to use.")
    else:
        logger.info(f"{Colors.RED}✗ Some checks failed{Colors.NC}")
        logger.info("")
        logger.info("To fix issues, run:")
        logger.info(f"  uv run python {script_dir / 'setup_mcp.py'}")
        logger.info("")


if __name__ == "__main__":
    main()
@@ -1,201 +0,0 @@
# Antigravity IDE Setup

Use the Hive agent framework (MCP servers and skills) inside [Antigravity IDE](https://antigravity.google/) (Google’s AI IDE).

---

## Quick start (3 steps)

**Repo root** = the folder that contains `core/`, `tools/`, and `.agent/` (where you cloned the project).

1. **Open a terminal** and go to the hive repo root (e.g. `cd ~/hive`).
2. **Run the setup script** (use `./` so the script runs from this repo; don't use `/scripts/...`):
   ```bash
   ./scripts/setup-antigravity-mcp.sh
   ```
3. **Restart Antigravity IDE.** You should see **coder-tools** and **tools** as available MCP servers.

> **Important:** Always restart/refresh Antigravity IDE after running the setup script or making any changes to MCP configuration. The IDE only loads MCP servers on startup.

Done. For details, prerequisites, and troubleshooting, read on.
---
|
||||
|
||||
## What you get after setup
|
||||
|
||||
- **coder-tools** – Create and manage agents (scaffolding via `initialize_and_build_agent`, file I/O, tool discovery).
|
||||
- **tools** – File operations, web search, and other agent tools.
|
||||
- **Documentation** – Guided docs for building and testing agents.
|
||||
|
||||
---
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [Antigravity IDE](https://antigravity.google/) installed.
|
||||
- **Python 3.11+** and project dependencies. If you haven’t set up the repo yet, from repo root run:
|
||||
```bash
|
||||
./quickstart.sh
|
||||
```
|
||||
- **MCP server dependencies** (one-time). From repo root:
|
||||
```bash
|
||||
cd core && ./setup_mcp.sh
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Full setup (step by step)
|
||||
|
||||
### Step 1: Install MCP dependencies (one-time)
|
||||
|
||||
From the **repo root**:
|
||||
|
||||
```bash
|
||||
cd core
|
||||
./setup_mcp.sh
|
||||
```
|
||||
|
||||
This installs the framework and MCP packages and checks that the server can start.
|
||||
|
||||
### Step 2: Register MCP servers with Antigravity
|
||||
|
||||
Antigravity reads MCP config from your **user config file** (`~/.gemini/antigravity/mcp_config.json`), not from the project. The easiest way is to run the setup script from the **hive repo folder**:
|
||||
|
||||
```bash
|
||||
./scripts/setup-antigravity-mcp.sh
|
||||
```
|
||||
|
||||
The script finds the repo root, writes `~/.gemini/antigravity/mcp_config.json` with the right paths, and you don't edit any paths by hand.
|
||||
|
||||
> **Important:** Always restart/refresh Antigravity IDE after running the setup script. MCP servers are only loaded on IDE startup.
|
||||
|
||||
The **coder-tools** and **tools** servers should show up after restart.

**Using Claude Code instead?** Run:

```bash
./scripts/setup-antigravity-mcp.sh --claude
```

That writes `~/.claude/mcp.json` as well.

**Prefer to do it manually?** See [Manual MCP config](#manual-mcp-config-template) below. You’ll create `~/.gemini/antigravity/mcp_config.json` (or `~/.claude/mcp.json`) with absolute paths to your repo’s `core` and `tools` folders.

### Step 3: Use MCP tools + docs

Use the `coder-tools` and `tools` MCP servers in Antigravity, and use docs in `docs/` for workflow guidance.

---

## What’s in the repo (`.agent/`)

```
.agent/
├── mcp_config.json # Template for MCP servers (coder-tools, tools)
```

The **setup script** writes your **user** config (`~/.gemini/antigravity/mcp_config.json`) using paths from **this repo**. The file in `.agent/` is the template; Antigravity itself uses the file in your home directory.

---

## Troubleshooting

**MCP servers don’t connect**

- Run the setup script again from the hive repo root: `./scripts/setup-antigravity-mcp.sh`, then restart Antigravity.
- Make sure Python and deps are installed: from repo root run `./quickstart.sh`.
- Check that the servers can start: from repo root run
  `cd tools && uv run coder_tools_server.py --stdio` (Ctrl+C to stop), and in another terminal
  `cd tools && uv run mcp_server.py --stdio` (Ctrl+C to stop).
  If those fail, fix the errors first (e.g. install deps with `uv sync`).

**"Module not found" or import errors**

- Open the **repo root** as the project in the IDE (the folder that has `core/` and `tools/`).
- If you edited `~/.gemini/antigravity/mcp_config.json` by hand, make sure `--directory` paths are **absolute** (e.g. `/Users/you/hive/core` and `/Users/you/hive/tools`).

**MCP tools don’t show up in the UI**

- Antigravity may need a restart. Use the files in `docs/` as documentation; the MCP tools (`coder-tools`, `tools`) are the required integration point.

---

## Verification prompt (optional)

Paste this into Antigravity to check that MCP is set up. It doesn’t use your machine’s paths; anyone can use it.

```
Check the Hive + Antigravity integration:

1. MCP: List available MCP servers/tools. Confirm that "coder-tools" and "tools" (or equivalent) are connected. If not, tell the user to run ./scripts/setup-antigravity-mcp.sh from the hive repo root, then restart Antigravity (see docs/antigravity-setup.md).

2. Docs: Confirm that the project has `docs/` with setup/developer guides for the workflow.

3. Result: Reply with PASS (MCP OK), PARTIAL (some MCP tools missing), or FAIL (MCP unavailable), and one line on what to fix if not PASS.
```

If you get **PARTIAL** (e.g. MCP not connected), run `./scripts/setup-antigravity-mcp.sh` from the repo root and restart Antigravity.

---

## Manual MCP config template

Use this only if you don’t want to run the setup script. Replace `/path/to/hive` with your actual repo root (e.g. the output of `pwd` when you’re in the hive folder).

Save as `~/.gemini/antigravity/mcp_config.json` (Antigravity) or `~/.claude/mcp.json` (Claude Code), then **restart the IDE** to load the new configuration.

```json
{
  "mcpServers": {
    "coder-tools": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/hive/tools", "coder_tools_server.py", "--stdio"],
      "disabled": false
    },
    "tools": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/hive/tools", "mcp_server.py", "--stdio"],
      "disabled": false
    }
  }
}
```

Make sure `uv` is installed and available in your PATH. Note: Use `--directory` in args instead of `cwd` for Antigravity compatibility.

---

## Verify from the command line (optional)

From the **repo root**:

**Check that config exists**

```bash
test -f .agent/mcp_config.json && echo "OK: mcp_config.json" || echo "MISSING"
```

**Check that the config is valid JSON**

```bash
python3 -c "import json; json.load(open('.agent/mcp_config.json')); print('OK: valid JSON')"
```

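If you want a stronger check than plain JSON parsing, you can also verify the `mcpServers` shape the IDE expects. The `validate_mcp_config` helper below is illustrative (not part of the repo); it runs against an inline sample, and you can point it at `~/.gemini/antigravity/mcp_config.json` by reading that file instead:

```python
import json

# Inline sample mirroring the manual template above; paths are placeholders.
SAMPLE = """
{
  "mcpServers": {
    "coder-tools": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/hive/tools", "coder_tools_server.py", "--stdio"]
    },
    "tools": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/hive/tools", "mcp_server.py", "--stdio"]
    }
  }
}
"""


def validate_mcp_config(text: str) -> list[str]:
    """Return the configured server names, or raise with a readable error."""
    config = json.loads(text)  # raises json.JSONDecodeError on malformed JSON
    servers = config.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        raise ValueError("missing or empty 'mcpServers' section")
    for name, entry in servers.items():
        if "command" not in entry:
            raise ValueError(f"server '{name}' has no 'command'")
    return sorted(servers)


print(validate_mcp_config(SAMPLE))  # → ['coder-tools', 'tools']
```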
**Test that MCP servers start** (two terminals)

```bash
# Terminal 1
cd tools && uv run coder_tools_server.py --stdio

# Terminal 2
cd tools && uv run mcp_server.py --stdio
```

If both start without errors, the config is fine.

---

## See also

- [Cursor IDE support](../README.md#cursor-ide-support) – Same MCP servers and skills for Cursor
- [MCP Integration Guide](../core/MCP_INTEGRATION_GUIDE.md) – How the framework MCP works
- [Environment setup](../ENVIRONMENT_SETUP.md) – Repo and Python setup
@@ -1,6 +1,6 @@
# Integration Bounty Program
# Bounty Program

Earn XP, Discord roles, and money by testing, documenting, and building integrations for the Aden agent framework.
Earn XP, Discord roles, and money by contributing to the Aden agent framework — from quick fixes to major features, plus integration testing and development.

## Why Contribute?

@@ -33,6 +33,10 @@ Lurkr auto-assigns the first two roles. Core Contributor requires sustained, qua

## Bounty Types

### Integration Bounties

Focused on the tool ecosystem — testing, documenting, and building integrations.

| Type | Label | Points | What You Do |
| --------------------- | ----------------- | ------ | -------------------------------------------------------------------------- |
| **Test a tool** | `bounty:test` | 20 | Test with a real API key, submit a report with logs |
@@ -42,6 +46,47 @@ Lurkr auto-assigns the first two roles. Core Contributor requires sustained, qua

Promoting a tool from unverified to verified is the final step — submit a PR moving it from `_register_unverified()` to `_register_verified()` after the [promotion checklist](promotion-checklist.md) is complete.

### Standard Bounties

General contributions to the framework, docs, tests, and infrastructure — not tied to a specific integration.

| Size | Label | Points | Scope |
| ------------ | ------------------ | ------ | ---------------------------------------------------------------------------------- |
| **Small** | `bounty:small` | 10 | Typo fixes, broken links, error message improvements, confirm/reproduce bug reports |
| **Medium** | `bounty:medium` | 30 | Bug fixes, new or improved unit tests, how-to guides, CLI UX improvements |
| **Large** | `bounty:large` | 75 | New features, performance optimizations with benchmarks, architecture docs |
| **Extreme** | `bounty:extreme` | 150 | Major subsystem work, security audits, cross-cutting refactors, new core capabilities |

#### Examples by size

**Small (10 pts):**
- Fix typos or broken links in documentation
- Improve an error message to include actionable guidance
- Add missing type annotations to a module
- Reproduce and confirm an open bug report with environment details
- Fix linting or CI warnings

**Medium (30 pts):**
- Fix a non-critical bug with a regression test
- Write a how-to guide or tutorial for a common workflow
- Add or significantly improve test coverage for a core module
- Improve CLI help text, argument validation, or UX
- Add structured logging or observability to a module

**Large (75 pts):**
- Implement a new user-facing feature end to end
- Performance optimization with before/after benchmarks
- Build a new CLI command or subcommand
- Write comprehensive architecture documentation for a subsystem
- Add a new credential adapter type

**Extreme (150 pts):**
- Design and implement a major subsystem (e.g., plugin system, caching layer)
- Security audit of a core module with findings and fixes
- Major refactor of core architecture (must have maintainer pre-approval)
- Build a complete example application or reference implementation
- End-to-end testing framework for agent workflows

## Quality Gates

- **PRs** must be merged by a maintainer (not self-merged)
@@ -52,12 +97,28 @@ Promoting a tool from unverified to verified is the final step — submit a PR m

## Labels

### Integration bounty labels

| Label | Color | Meaning |
| ------------------- | ------------------ | --------------------------------------- |
| `bounty:test` | `#1D76DB` (blue) | Test a tool with a real API key |
| `bounty:docs` | `#FBCA04` (yellow) | Write or improve documentation |
| `bounty:code` | `#D93F0B` (orange) | Health checker, bug fix, or improvement |
| `bounty:new-tool` | `#6F42C1` (purple) | Build a new integration from scratch |

### Standard bounty labels

| Label | Color | Meaning |
| ------------------- | ------------------ | -------------------------------------------------- |
| `bounty:small` | `#C2E0C6` (green) | Quick fix — typos, links, error messages |
| `bounty:medium` | `#0E8A16` (green) | Bug fix, tests, guides, CLI improvements |
| `bounty:large` | `#B60205` (red) | New feature, perf work, architecture docs |
| `bounty:extreme` | `#000000` (black) | Major subsystem, security audit, core refactor |

### Difficulty labels

| Label | Color | Meaning |
| ------------------- | ------------------ | --------------------------------------- |
| `difficulty:easy` | `#BFD4F2` | Good first contribution |
| `difficulty:medium` | `#D4C5F9` | Requires some familiarity |
| `difficulty:hard` | `#F9D0C4` | Significant effort or expertise needed |

@@ -1,6 +1,6 @@
# Contributor Guide — Integration Bounty Program
# Contributor Guide — Bounty Program

Earn XP, Discord roles, and eventually real money by testing and building integrations for the Aden agent framework.
Earn XP, Discord roles, and eventually real money by contributing to the Aden agent framework — from quick fixes to major features and integration work.

## Getting Started

@@ -30,7 +30,13 @@ XP comes from GitHub bounties (auto-pushed on PR merge) and Discord activity in

## Bounty Types

### Test a Tool (20 pts)
There are two categories: **integration bounties** (tool-specific) and **standard bounties** (general contributions).

---

### Integration Bounties

#### Test a Tool (20 pts)

Test an unverified tool with a real API key and report what happens.

@@ -41,7 +47,7 @@ Test an unverified tool with a real API key and report what happens.

Report both successes and failures. Finding bugs is valuable.

### Write Docs (20 pts)
#### Write Docs (20 pts)

Write a README for a tool that's missing one.

@@ -52,7 +58,7 @@ Write a README for a tool that's missing one.

Function names and API URLs must match reality — no AI hallucinations.

### Code Contribution (30 pts)
#### Code Contribution (30 pts)

Add a health checker, fix a bug, or improve an integration.

@@ -66,7 +72,7 @@ Add a health checker, fix a bug, or improve an integration.
1. Find a bug during testing, file an issue
2. Fix it in a PR with a test covering the bug

### New Integration (75 pts)
#### New Integration (75 pts)

Build a complete integration from scratch.

@@ -77,6 +83,60 @@ Build a complete integration from scratch.

Expect multiple review rounds.

---

### Standard Bounties

General contributions to the framework — not tied to a specific integration. Sized by effort and impact.

#### Small (10 pts)

Quick, focused fixes. Great for first-time contributors.

- Fix typos or broken links in documentation
- Improve an error message to include actionable guidance
- Add missing type annotations to a module
- Reproduce and confirm a bug report with environment details
- Fix linting or CI warnings

**How:** Open a PR with the fix. Tag with `bounty:small`.

#### Medium (30 pts)

Meaningful improvements that require reading and understanding existing code.

- Fix a non-critical bug with a regression test
- Write a how-to guide or tutorial
- Add or significantly improve test coverage for a core module
- Improve CLI help text, argument validation, or UX
- Add structured logging or observability to a module

**How:** Claim the issue first. Submit a PR with tests where applicable. Tag with `bounty:medium`.

#### Large (75 pts)

Significant work that adds real capability or improves the project substantially.

- Implement a new user-facing feature end to end
- Performance optimization with before/after benchmarks
- Build a new CLI command or subcommand
- Write comprehensive architecture documentation for a subsystem
- Add a new credential adapter type

**How:** Claim the issue and discuss your approach in the issue before starting. Submit a PR. Tag with `bounty:large`.

#### Extreme (150 pts)

Major contributions that shape the project's direction. Requires maintainer pre-approval.

- Design and implement a major subsystem (e.g., plugin system, caching layer)
- Security audit of a core module with findings and fixes
- Major refactor of core architecture
- Build a complete example application or reference implementation
- End-to-end testing framework for agent workflows

**How:** Comment on the issue with a design proposal. Wait for maintainer approval before starting work. Tag with `bounty:extreme`.

## Rules

1. **Claim before you start** — comment on the issue, wait for assignment

@@ -27,7 +27,7 @@ When someone comments "I'd like to work on this":
5. Merge — the GitHub Action auto-awards XP and posts to Discord
6. Close the linked bounty issue

### Quality Gates
### Quality Gates — Integration Bounties

**`bounty:docs`:**
- [ ] Follows the [tool README template](templates/tool-readme-template.md)
@@ -51,6 +51,31 @@ When someone comments "I'd like to work on this":
- [ ] `make check && make test` passes
- [ ] Registered in `_register_unverified()` (not verified)

### Quality Gates — Standard Bounties

**`bounty:small`:**
- [ ] Change is correct and doesn't introduce regressions
- [ ] CI passes
- [ ] Scope matches "small" — not padded into a bigger change

**`bounty:medium`:**
- [ ] CI passes
- [ ] Bug fixes include a regression test
- [ ] Docs/guides are accurate and follow existing style
- [ ] Not AI-generated without verification

**`bounty:large`:**
- [ ] Design was discussed in the issue before implementation
- [ ] CI passes, new tests cover the change
- [ ] Benchmarks included for performance work (before/after)
- [ ] Architecture docs reviewed by a second maintainer

**`bounty:extreme`:**
- [ ] Maintainer pre-approved the design proposal before work began
- [ ] CI passes, comprehensive test coverage
- [ ] Documentation updated to reflect the change
- [ ] Reviewed by at least two maintainers

### Rejecting Submissions

1. Leave specific, constructive feedback
@@ -78,6 +103,8 @@ If a Core Contributor is inactive 8+ weeks, reach out privately first, then remo

Post dollar values in `#bounty-payouts` (Core Contributors only):

### Integration bounties

| Bounty Type | Dollar Range |
|-------------|-------------|
| `bounty:test` | $10–30 |
@@ -85,6 +112,15 @@ Post dollar values in `#bounty-payouts` (Core Contributors only):
| `bounty:code` | $20–50 |
| `bounty:new-tool` | $50–150 |

### Standard bounties

| Bounty Type | Dollar Range |
|-------------|-------------|
| `bounty:small` | $5–15 |
| `bounty:medium` | $20–50 |
| `bounty:large` | $50–150 |
| `bounty:extreme` | $150–500 |

**Payout:** PR merged → verify quality → record in `#bounty-payouts` → process payment.

XP is always awarded regardless of budget. Money is a bonus layer.

@@ -14,7 +14,7 @@ Complete setup from zero to running. Estimated time: 30 minutes.
./scripts/setup-bounty-labels.sh
```

This creates 7 labels: 4 bounty types (`bounty:test`, `bounty:docs`, `bounty:code`, `bounty:new-tool`) and 3 difficulty levels (`difficulty:easy`, `difficulty:medium`, `difficulty:hard`).
This creates 11 labels: 4 integration bounty types (`bounty:test`, `bounty:docs`, `bounty:code`, `bounty:new-tool`), 4 standard bounty sizes (`bounty:small`, `bounty:medium`, `bounty:large`, `bounty:extreme`), and 3 difficulty levels (`difficulty:easy`, `difficulty:medium`, `difficulty:hard`).

## Step 2: Create Discord Channels (3 min)

@@ -102,10 +102,6 @@ The repository includes a `.claude/settings.json` hook that automatically runs `

The `.cursorrules` file at the repo root tells Cursor's AI the project's style rules (line length, import order, quote style, etc.) so generated code follows convention.

### Antigravity IDE

Antigravity IDE (Google's AI-powered IDE) is supported via `.antigravity/mcp_config.json`. See [antigravity-setup.md](antigravity-setup.md) for setup and troubleshooting.

### Codex CLI

Codex CLI (OpenAI, v0.101.0+) is supported via `.codex/config.toml` (MCP server config). This file is tracked in git. Run `codex` in the repo root to use the configured MCP tools. See the [Codex CLI section in the README](../README.md#codex-cli) for details.

@@ -10,8 +10,7 @@ Complete setup guide for building and running goal-driven agents with the Aden A
```

> **Note for Windows Users:**
> Running the setup script on native Windows shells (PowerShell / Git Bash) may sometimes fail due to Python App Execution Aliases.
> It is **strongly recommended to use WSL (Windows Subsystem for Linux)** for a smoother setup experience.
> Native Windows is supported via `quickstart.ps1`. Run it in PowerShell 5.1+. Disable "App Execution Aliases" in Windows settings to avoid Python path conflicts.

This will:

@@ -25,13 +24,19 @@ This will:

## Windows Setup

Windows users should use **WSL (Windows Subsystem for Linux)** to set up and run agents.
Native Windows is supported. Run the PowerShell quickstart:

1. [Install WSL 2](https://learn.microsoft.com/en-us/windows/wsl/install) if you haven't already:
```powershell
.\quickstart.ps1
```

Alternatively, you can use WSL:

1. [Install WSL 2](https://learn.microsoft.com/en-us/windows/wsl/install):
```powershell
wsl --install
```
2. Open your WSL terminal, clone the repo, and run the quickstart script:
2. Open your WSL terminal, clone the repo, and run:
```bash
./quickstart.sh
```
@@ -93,7 +98,7 @@ uv run python -c "import litellm; print('✓ litellm OK')"
```

> **Windows Tip:**
> On Windows, if the verification commands fail, ensure you are running them in **WSL** or after **disabling Python App Execution Aliases** in Windows Settings → Apps → App Execution Aliases.
> If the verification commands fail on Windows, disable "App Execution Aliases" in Windows Settings → Apps → App Execution Aliases.

## Requirements

@@ -108,7 +113,7 @@ uv run python -c "import litellm; print('✓ litellm OK')"
- pip (latest version)
- 2GB+ RAM
- Internet connection (for LLM API calls)
- For Windows users: WSL 2 is recommended for full compatibility.
- For Windows users: PowerShell 5.1+ (native) or WSL 2.

### API Keys

@@ -13,6 +13,8 @@ This guide will help you set up the Aden Agent Framework and build your first ag

The fastest way to get started:

**Linux / macOS:**

```bash
# 1. Clone the repository
git clone https://github.com/adenhq/hive.git
@@ -25,6 +27,22 @@ cd hive
uv run python -c "import framework; import aden_tools; print('✓ Setup complete')"
```

**Windows (PowerShell):**

```powershell
# 1. Clone the repository
git clone https://github.com/adenhq/hive.git
cd hive

# 2. Run automated setup
.\quickstart.ps1

# 3. Verify installation (optional, quickstart.ps1 already verifies)
uv run python -c "import framework; import aden_tools; print('Setup complete')"
```

> **Note:** On Windows, running `.\quickstart.ps1` requires PowerShell 5.1+. If you see a "running scripts is disabled" error, run `Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass` first. Alternatively, use WSL — see [environment-setup.md](./environment-setup.md) for details.

## Building Your First Agent

Agents are not included by default in a fresh clone.

@@ -0,0 +1,580 @@
# MCP Server Registry — Product & Business Requirements Document

**Status**: Draft v2
**Last updated**: 2026-03-13
**Authors**: Timothy
**Reviewers**: Platform, Product, OSS/Community, Security

---

## 1. Executive Summary

This document proposes an **MCP Server Registry** system that enables open-source contributors and Hive users to discover, publish, install, and manage MCP (Model Context Protocol) servers for use with Hive agents.

Today, MCP server configuration is static, duplicated across agents, and limited to servers that Hive spawns as subprocesses. This makes it impractical for users who run their own MCP servers on the same host, and impossible for the community to contribute standalone MCP integrations without modifying Hive internals.

The registry consists of three components:
1. **A public GitHub repository** (`hive-mcp-registry`) — a curated index where contributors submit MCP server entries via pull request
2. **Local registry tooling** — CLI commands and a `~/.hive/mcp_registry/` directory for installing, managing, and connecting to MCP servers
3. **Framework integration** — changes to Hive's `ToolRegistry`, `MCPClient`, and agent runner so agents can flexibly select which registry servers they need

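For concreteness, the local tooling in component 2 could lay out `~/.hive/mcp_registry/` roughly like this (illustrative only; the layout is not fixed by this document):

```
~/.hive/mcp_registry/
├── index.json          # cached copy of the public registry index
├── installed.json      # servers installed on this machine, with versions
└── servers/
    └── jira/
        ├── manifest.json   # manifest fetched from hive-mcp-registry
        └── .env            # credentials collected at install time
```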
---

## 2. Problem Statement

### 2.1 Current State

- Each Hive agent has a static `mcp_servers.json` file that hardcodes MCP server connection details.
- All 150+ tools live in a single monolithic `mcp_server.py` — contributors add tools to this one server.
- There is no mechanism for standalone MCP servers (e.g., a Jira MCP, a Notion MCP, or a custom database MCP) to be discovered or used by Hive agents.
- Each agent spawns its own MCP subprocess — no connection sharing across agents.
- Only `stdio` and basic `http` transports are supported. No unix sockets, no SSE, no reconnection.
- External MCP servers already running on the host cannot be easily registered.

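To make the duplication concrete, a per-agent `mcp_servers.json` today looks roughly like this, copied into every agent directory (shape based on the existing config format; paths are placeholders):

```json
{
  "mcpServers": {
    "tools": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/hive/tools", "mcp_server.py", "--stdio"]
    }
  }
}
```

Every agent repeats this block, and any change to the server command has to be made in every copy.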
### 2.2 Who Is Affected
|
||||
|
||||
| Persona | Pain Point |
|
||||
|---|---|
|
||||
| **OSS contributor** | Wants to publish a standalone MCP server for the Hive ecosystem but has no pathway to do so without modifying Hive core |
|
||||
| **Self-hosted user** | Runs multiple MCP servers on the same host (Slack, GitHub, database tools) and wants Hive agents to discover them |
|
||||
| **Agent builder** | Copies the same `mcp_servers.json` boilerplate across every agent; no way to say "use whatever the user has installed" |
|
||||
| **Platform team** | Cannot manage MCP servers centrally; each agent manages its own connections independently |
|
||||
|
||||
### 2.3 Impact of Not Solving
|
||||
|
||||
- The Hive MCP ecosystem remains closed — growth depends entirely on tools being added to the monolithic server.
|
||||
- Users with existing MCP infrastructure (from Claude Desktop, Cursor, or other MCP-compatible tools) cannot leverage it with Hive.
|
||||
- Resource waste from duplicate subprocess spawning across agents.
|
||||
- No path to community-contributed integrations beyond the core tool set.
|
||||
|
||||
---
|
||||
|
||||
## 3. Goals & Success Criteria
|
||||
|
||||
### 3.1 Primary Goals
|
||||
|
||||
| # | Goal | Metric |
|
||||
|---|---|---|
|
||||
| G1 | A contributor can register a new MCP server in under 5 minutes | Time from fork to PR submission |
|
||||
| G2 | A user can install and use a registry MCP server in under 2 minutes | Time from `hive mcp install X` to first tool call |
|
||||
| G3 | Agents can dynamically select MCP servers by name or tag without hardcoding configs | Agents use `mcp_registry.json` selectors instead of full server configs |
|
||||
| G4 | Multiple agents share MCP connections instead of duplicating them | One subprocess/connection per unique server, not per agent |
|
||||
| G5 | External MCP servers already running on the host can be registered with a single command | `hive mcp add --name X --url http://...` works end-to-end |
|
||||
| G6 | Zero breaking changes to existing agent configurations | All current `mcp_servers.json` files continue to work unchanged |
|
||||
|
||||
### 3.2 Developer Success Goals
|
||||
|
||||
| # | Goal | Metric |
|
||||
|---|---|---|
|
||||
| G7 | First-install success rate exceeds 90% | Successful `hive mcp install` / total attempts (tracked via CLI telemetry opt-in) |
|
||||
| G8 | First-tool-call success rate exceeds 85% after install | Successful tool invocation within 5 minutes of install |
|
||||
| G9 | Users can self-diagnose and resolve config/auth issues without filing support tickets | Median time from error to resolution <5 minutes; support ticket volume per server <1/month |
|
||||
| G10 | Registry entries remain healthy over time | % of entries passing automated health validation at 30/60/90 days |
|
||||
| G11 | Server upgrades do not silently break agents | Zero undetected tool-signature changes on upgrade |
|
||||
|
||||
### 3.3 Non-Goals (Explicit Exclusions)
|
||||
|
||||
- **Billing or monetization** — the registry is free and open-source.
|
||||
- **Hosting MCP servers** — the registry only stores metadata; actual servers are installed/run by users.
|
||||
- **Replacing `mcp_servers.json`** — the static config remains for backward compatibility and offline use.
|
||||
- **Runtime agent-to-agent MCP sharing** — this is about discovery and connection, not inter-agent protocol.
|
||||
- **Decomposing the monolithic `mcp_server.py`** — this is a future phase, not part of the initial build.
|
||||
|
||||
---
|
||||
|
||||
## 4. User Stories
|
||||
|
||||
### 4.1 Contributor: Publishing an MCP Server
|
||||
|
||||
> As an OSS contributor who has built a Jira MCP server, I want to register it in a public registry so that any Hive user can install and use it without modifying Hive code.
|
||||
|
||||
**Acceptance criteria:**
|
||||
- `hive mcp init` scaffolds a manifest with my server's details pre-filled from introspection.
|
||||
- `hive mcp validate ./manifest.json` passes locally before I open a PR.
|
||||
- `hive mcp test ./manifest.json` starts my server, lists tools, calls a health check, and reports pass/fail.
|
||||
- CI validates my manifest automatically (schema, naming, required fields, package existence).
|
||||
- After merge, the server appears in `hive mcp search` for all users.
|
||||
|
||||
### 4.2 User: Installing an MCP Server from the Registry

> As a Hive user, I want to install a community MCP server and have my agents use it immediately.

**Acceptance criteria:**
- `hive mcp install jira` fetches the manifest and configures the server locally.
- If credentials are required, the CLI prompts me: "Jira requires JIRA_API_TOKEN (get one at https://...). Enter value:"
- `hive mcp health jira` confirms the server is reachable and tools are discoverable.
- My queen agent (with `auto_discover: true`) automatically picks up the new server's tools.
- `hive mcp info jira` shows trust tier, last health check, installed version, and loaded tools.

### 4.3 User: Registering a Local/Running MCP Server

> As a user running a custom database MCP server on `localhost:9090`, I want Hive agents to use it without publishing it to any public registry.

**Acceptance criteria:**
- `hive mcp add --name my-db --transport http --url http://localhost:9090` registers it.
- The server appears in `hive mcp list` and is available to agents that include it.
- If the server goes down, Hive logs a warning with actionable next steps and retries on the next tool call.

### 4.4 Agent Builder: Selecting MCP Servers for a Worker

> As an agent builder, I want my worker agent to use specific MCP servers (e.g., Slack + Jira) without hardcoding connection details.

**Acceptance criteria:**
- I create `mcp_registry.json` in my agent directory with `{"include": ["slack", "jira"]}`.
- At runtime, the agent automatically connects to whatever Slack and Jira servers the user has installed.
- If a requested server isn't installed, startup logs explain: "Server 'jira' requested by mcp_registry.json but not installed. Run: hive mcp install jira"

### 4.5 Queen: Auto-Discovering Available MCP Servers

> As the queen agent, I want access to installed MCP servers so I can delegate tasks that require any tool.

**Acceptance criteria:**
- Queen's `mcp_registry.json` uses `{"profile": "all"}` to load all enabled servers.
- Startup logs list every loaded server and its tool count: "Loaded 3 registry servers: jira (4 tools), slack (6 tools), my-db (2 tools)"
- If tool names collide across servers, the resolution is deterministic and logged.
- Queen respects a configurable max tool budget to avoid prompt overload.

### 4.6 User: Diagnosing a Broken MCP Server

> As a user whose agent suddenly can't call Jira tools, I want to quickly find and fix the problem.

**Acceptance criteria:**
- `hive mcp doctor` checks all installed servers and reports: connection status, credential validity, tool discovery result, last error.
- `hive mcp doctor jira` gives detailed diagnostics: "jira: UNHEALTHY. Transport: stdio. Error: Process exited with code 1. Stderr: 'JIRA_API_TOKEN not set'. Fix: hive mcp config jira --set JIRA_API_TOKEN=your-token"
- `hive mcp inspect jira` shows the resolved config, override chain, and which agents include it.
- `hive mcp why-not jira --agent exports/my-agent` explains why a server was or was not loaded for an agent.

---

## 5. Requirements

### 5.1 Functional Requirements

#### 5.1.1 Registry Repository

| ID | Requirement | Priority |
|---|---|---|
| FR-1 | The registry is a public GitHub repo with a defined directory structure for server entries | P0 |
| FR-2 | Each server entry is a `manifest.json` file conforming to a JSON Schema | P0 |
| FR-3 | CI validates manifests on every PR (schema, naming, uniqueness, required fields) | P0 |
| FR-4 | A flat index (`registry_index.json`) is auto-generated on merge for client consumption | P0 |
| FR-5 | A `_template/` directory provides a starter manifest + README for contributors | P0 |
| FR-6 | `CONTRIBUTING.md` documents the 5-minute submission process with annotated examples for each transport type (stdio, http, unix, sse) | P0 |
| FR-7 | CI checks that `install.pip` packages exist on PyPI (if specified) | P1 |
| FR-8 | Tags follow a controlled taxonomy with new tags requiring maintainer approval | P1 |
| FR-9 | Canonical example manifests are provided for each transport type in `registry/_examples/` | P0 |

#### 5.1.2 Manifest Schema

The manifest has a **portable base layer** (framework-agnostic, usable by any MCP client) and an optional **hive extension block** (Hive-specific ergonomics).

| ID | Requirement | Priority |
|---|---|---|
| FR-10 | Manifest base includes: name, display_name, version, description, author, repository, license | P0 |
| FR-11 | Manifest declares supported transports (stdio, http, unix, sse) with default | P0 |
| FR-12 | Manifest includes install instructions (pip package name, docker image, npm package) | P0 |
| FR-13 | Manifest lists tool names and descriptions (for pre-connect filtering) | P0 |
| FR-14 | Manifest declares credential requirements (env_var, description, help_url, required flag) | P0 |
| FR-15 | Manifest includes tags and categories for discovery | P1 |
| FR-16 | Manifest supports template variables (`{port}`, `{socket_path}`, `{name}`) in commands | P1 |
| FR-17 | Manifest includes `hive` extension block for Hive-specific metadata (see 5.1.8) | P1 |

#### 5.1.3 Manifest Trust & Quality Metadata

| ID | Requirement | Priority |
|---|---|---|
| FR-80 | Manifest includes `status` field: `official`, `verified`, or `community` | P0 |
| FR-81 | Manifest includes `maintainer` contact (email or GitHub handle) | P0 |
| FR-82 | Manifest includes `docs_url` pointing to server documentation | P1 |
| FR-83 | Manifest includes `example_agent_url` linking to an example agent using this server | P2 |
| FR-84 | Manifest includes `supported_os` list (e.g., `["linux", "macos", "windows"]`) | P1 |
| FR-85 | Manifest includes `deprecated` boolean and `deprecated_by` field for superseded entries | P1 |
| FR-86 | Registry index includes `last_validated_at` timestamp per entry (from automated CI health runs) | P1 |

#### 5.1.4 Local Registry

| ID | Requirement | Priority |
|---|---|---|
| FR-20 | `~/.hive/mcp_registry/installed.json` tracks all installed/registered servers | P0 |
| FR-21 | Servers can be sourced from the remote registry (`"source": "registry"`) or local (`"source": "local"`) | P0 |
| FR-22 | Each installed server has: transport preference, enabled/disabled state, and env/header overrides | P0 |
| FR-23 | The remote registry index is cached locally with configurable refresh interval | P1 |
| FR-24 | Each installed server tracks operational state: `last_health_check_at`, `last_health_status`, `last_error`, `last_used_at`, `resolved_package_version` | P1 |
| FR-25 | Each installed server supports `pinned: true` to prevent auto-update and `auto_update: true` for automatic version tracking | P1 |

#### 5.1.5 CLI Commands — Management

| ID | Requirement | Priority |
|---|---|---|
| FR-30 | `hive mcp install <name> [--version X]` — install from registry, optionally pin version | P0 |
| FR-31 | `hive mcp add --name X --transport T --url U` — register a local server | P0 |
| FR-32 | `hive mcp add --from manifest.json` — register from a manifest file | P1 |
| FR-33 | `hive mcp remove <name>` — uninstall/unregister | P0 |
| FR-34 | `hive mcp list` — list installed servers with status, health, and trust tier | P0 |
| FR-35 | `hive mcp list --available` — list all servers in remote registry | P1 |
| FR-36 | `hive mcp search <query>` — search by name/tag/description/tool-name | P1 |
| FR-37 | `hive mcp enable/disable <name>` — toggle without removing | P0 |
| FR-38 | `hive mcp health [name]` — check server reachability and tool discovery | P1 |
| FR-39 | `hive mcp update [name]` — refresh index cache or update a specific server | P1 |
| FR-40 | `hive mcp config <name> --set KEY=VAL` — set credential/env overrides | P0 |
| FR-41 | `hive mcp info <name>` — show full details: trust tier, version, tools, health, which agents use it | P0 |

#### 5.1.6 CLI Commands — Contributor Tooling

| ID | Requirement | Priority |
|---|---|---|
| FR-42 | `hive mcp init [--server-url URL]` — scaffold a manifest; if URL provided, introspects server to pre-fill tools list | P0 |
| FR-43 | `hive mcp validate <path>` — validate a manifest against the JSON Schema locally | P0 |
| FR-44 | `hive mcp test <path>` — start the server per manifest config, list tools, run health check, report pass/fail | P1 |

#### 5.1.7 CLI Commands — Diagnostics

| ID | Requirement | Priority |
|---|---|---|
| FR-45 | `hive mcp doctor [name]` — check all or one server: connection, credentials, tool discovery, last error; output actionable fix suggestions | P0 |
| FR-46 | `hive mcp inspect <name>` — show resolved config including override chain, transport details, and which agents include/exclude this server | P1 |
| FR-47 | `hive mcp why-not <name> --agent <path>` — explain why a server was or was not loaded for a specific agent's `mcp_registry.json` | P1 |

#### 5.1.8 Hive Extension Block in Manifest

The optional `hive` block in the manifest carries Hive-specific metadata that doesn't belong in the portable base:

| ID | Requirement | Priority |
|---|---|---|
| FR-90 | `hive.min_version` — minimum Hive version required | P1 |
| FR-91 | `hive.max_version` — maximum compatible Hive version (optional, for deprecation) | P2 |
| FR-92 | `hive.example_agent` — path or URL to an example agent using this server | P2 |
| FR-93 | `hive.profiles` — list of profile tags this server belongs to (e.g., `["core", "productivity", "developer"]`) | P1 |
| FR-94 | `hive.tool_namespace` — optional prefix for tool names to avoid collisions (e.g., `jira_`) | P1 |

#### 5.1.9 Agent Selection

| ID | Requirement | Priority |
|---|---|---|
| FR-50 | Agents can declare MCP server preferences in `mcp_registry.json` | P0 |
| FR-51 | Selection supports an explicit `include` list, `tags` matching, and an `exclude` list | P0 |
| FR-52 | `profile` field loads servers matching a named profile (e.g., `"all"`, `"core"`, `"productivity"`) | P0 |
| FR-53 | If `mcp_registry.json` does not exist, no registry servers are loaded (backward compatible) | P0 |
| FR-54 | Missing requested servers produce warnings with actionable install instructions, not errors | P0 |
| FR-55 | Agent startup logs a summary of loaded/skipped registry servers with reasons | P0 |
| FR-56 | `max_tools` field caps total tools loaded from registry servers (prevents prompt overload) | P1 |

#### 5.1.10 Tool Resolution & Namespacing

| ID | Requirement | Priority |
|---|---|---|
| FR-100 | When multiple servers expose a tool with the same name, the first server in include-order wins (deterministic) | P0 |
| FR-101 | Tool collisions are logged at startup: "Tool 'search' from 'brave-search' shadowed by 'google-search' (loaded first)" | P0 |
| FR-102 | If a server declares `hive.tool_namespace`, its tools are prefixed: `jira_create_issue` instead of `create_issue` | P1 |
| FR-103 | `hive mcp inspect <name>` shows which tools are active vs shadowed | P1 |

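
The first-wins rule and namespacing can be sketched in a few lines of Python. This is illustrative only, not Hive's actual implementation:

```python
# Sketch of FR-100..FR-103: deterministic first-wins resolution with optional
# hive.tool_namespace prefixing. Names here are illustrative.

def resolve_tools(servers):
    """servers: [(server_name, namespace_or_None, [tool_names])] in load order."""
    active = {}    # tool name -> owning server
    shadowed = []  # (tool, shadowed_server, winning_server), logged at startup
    for server, namespace, tools in servers:
        for tool in tools:
            name = f"{namespace}_{tool}" if namespace else tool
            if name in active:
                shadowed.append((name, server, active[name]))  # first server wins
            else:
                active[name] = server
    return active, shadowed

active, shadowed = resolve_tools([
    ("google-search", None, ["search"]),   # loaded first: wins "search"
    ("brave-search", None, ["search"]),    # shadowed on "search"
    ("jira", "jira", ["create_issue"]),    # namespaced: jira_create_issue
])
```

Because resolution depends only on load order, repeated startups with the same `mcp_registry.json` always produce the same active tool set.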
#### 5.1.11 Connection Management

| ID | Requirement | Priority |
|---|---|---|
| FR-60 | A process-level connection manager shares MCP connections across agents | P1 |
| FR-61 | Connections are reference-counted — disconnected when no agent uses them | P1 |
| FR-62 | HTTP/unix/SSE connections retry once on failure before raising an error | P1 |

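
A minimal thread-safe sketch of the reference-counting behavior in FR-60/FR-61; the `connect` factory and internal attribute names are hypothetical stand-ins for real `MCPClient` setup:

```python
# Sketch of FR-60/FR-61: a process-level, thread-safe (NFR-3), reference-counted
# connection pool. The `connect` factory stands in for real MCPClient setup.
import threading

class MCPConnectionManager:
    def __init__(self, connect):
        self._connect = connect          # factory: server name -> client
        self._lock = threading.Lock()
        self._conns = {}                 # name -> (client, refcount)

    def acquire(self, name):
        with self._lock:
            client, rc = self._conns.get(name, (None, 0))
            if client is None:
                client = self._connect(name)   # first user opens the connection
            self._conns[name] = (client, rc + 1)
            return client

    def release(self, name):
        with self._lock:
            client, rc = self._conns[name]
            if rc <= 1:
                del self._conns[name]          # last user gone: drop the connection
            else:
                self._conns[name] = (client, rc - 1)
```

Two agents acquiring `jira` get the same client object; the connection is dropped only when both have released it.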
#### 5.1.12 Transport Extensions

| ID | Requirement | Priority |
|---|---|---|
| FR-70 | `MCPClient` supports unix socket transport via `httpx` UDS | P1 |
| FR-71 | `MCPClient` supports SSE transport via the official MCP Python SDK | P1 |
| FR-72 | `MCPServerConfig` includes `socket_path` field for unix transport | P1 |

### 5.2 Version Compatibility & Upgrade Safety

| ID | Requirement | Priority |
|---|---|---|
| VC-1 | Manifest includes `version` (semver) for the registry entry and `mcp_protocol_version` for the MCP spec | P0 |
| VC-2 | Manifest `hive` block includes optional `min_version` / `max_version` constraints | P1 |
| VC-3 | `hive mcp install` installs latest by default; `--version X` pins a specific version | P0 |
| VC-4 | `installed.json` records `resolved_package_version` (actual pip/npm version installed) | P1 |
| VC-5 | `hive mcp update <name>` compares old and new tool lists; warns if tools were removed or signatures changed | P1 |
| VC-6 | Agents can pin a resolved server version in `mcp_registry.json` via `"versions": {"jira": "1.2.0"}` | P2 |
| VC-7 | If a pinned version is no longer available, the agent logs an error with rollback instructions | P2 |
| VC-8 | `hive mcp update --dry-run` shows what would change without applying | P1 |
| VC-9 | Tool names and parameter schemas from the manifest constitute a compatibility contract; breaking changes require a major version bump | P1 |

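
The VC-5 comparison reduces to a set diff over the manifests' `tools` arrays. A sketch (the helper name is illustrative):

```python
# Sketch of the VC-5 check: diff the `tools` arrays of the old and new
# manifests before applying an update.

def diff_tools(old_tools, new_tools):
    old = {t["name"] for t in old_tools}
    new = {t["name"] for t in new_tools}
    return {"removed": sorted(old - new), "added": sorted(new - old)}

report = diff_tools(
    [{"name": "jira_search"}, {"name": "jira_create_issue"}],
    [{"name": "jira_search"}, {"name": "jira_list_boards"}],
)
# a non-empty "removed" list is a breaking change: warn the user, and per
# VC-9 the entry needs a major version bump
```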
### 5.3 Failure Handling & Diagnostics

| ID | Requirement | Priority |
|---|---|---|
| DX-1 | All MCP errors use structured error codes (e.g., `MCP_INSTALL_FAILED`, `MCP_AUTH_MISSING`, `MCP_CONNECT_TIMEOUT`, `MCP_TOOL_NOT_FOUND`, `MCP_PROTOCOL_MISMATCH`) | P0 |
| DX-2 | Every error message includes: what failed, why, and a suggested fix command | P0 |
| DX-3 | `hive mcp doctor` checks: connection, credentials (are required env vars set?), tool discovery, protocol version compatibility, Hive version compatibility | P0 |
| DX-4 | Agent startup emits a structured log line per registry server: `{server, status, tools_loaded, skipped_reason}` | P0 |
| DX-5 | Failed tool calls from registry servers include the server name and transport in the error context | P1 |
| DX-6 | `hive mcp doctor` output is machine-parseable (JSON with `--json` flag) for CI/automation | P2 |

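
A sketch of what a DX-1/DX-2-style structured error could look like; the class and its fields are illustrative, and only the error codes come from the table above:

```python
# Sketch of DX-1/DX-2: every error carries a structured code, a cause, and a
# suggested fix command. Class and field names are illustrative.

class MCPError(Exception):
    def __init__(self, code, what, why, fix):
        self.code, self.what, self.why, self.fix = code, what, why, fix
        super().__init__(f"[{code}] {what}: {why}. Fix: {fix}")

err = MCPError(
    "MCP_AUTH_MISSING",
    "jira server failed to start",
    "JIRA_API_TOKEN is not set",
    "hive mcp config jira --set JIRA_API_TOKEN=your-token",
)
```

Keeping the code, cause, and fix as separate fields is what makes the `--json` output of DX-6 trivial to produce.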
### 5.4 Non-Functional Requirements

| ID | Requirement | Priority |
|---|---|---|
| NFR-1 | Registry index fetch must complete in <5s on typical internet connections | P1 |
| NFR-2 | Installing a server from registry must not require a Hive restart | P0 |
| NFR-3 | Connection manager must be thread-safe (multiple agents in same process) | P0 |
| NFR-4 | All new code must have unit test coverage | P0 |
| NFR-5 | Registry repo CI must run in <60s | P1 |
| NFR-6 | Manifest base schema must be framework-agnostic (usable by non-Hive MCP clients); Hive-specific fields live in the `hive` extension block | P1 |
| NFR-7 | `hive mcp install` prints a security notice on first use: "Registry servers run code on your machine. Only install servers you trust." | P0 |

---

## 6. Architecture Overview

```
┌────────────────────────────────────┐
│     hive-mcp-registry (GitHub)     │
│                                    │
│  registry/servers/jira/manifest    │
│  registry/servers/slack/manifest   │
│  ...                               │
│  registry_index.json (auto-built)  │
└────────────────┬───────────────────┘
                 │ hive mcp update
                 │ (fetches index)
                 ▼
┌─────────────────────────────────────────────────────────────────────┐
│                        ~/.hive/mcp_registry/                        │
│                                                                     │
│  installed.json          config.json        cache/                  │
│  (jira, slack,           (preferences)      registry_index.json     │
│   my-custom-db)                             (cached remote)         │
└─────────────────────────────┬───────────────────────────────────────┘
                              │
              ┌───────────────┼───────────────────┐
              │               │                   │
              ▼               ▼                   ▼
       ┌─────────────┐  ┌─────────────┐    ┌──────────────┐
       │ Queen Agent │  │Worker Agent │    │ hive mcp CLI │
       │             │  │             │    │              │
       │ mcp_registry│  │mcp_registry │    │ install      │
       │ .json:      │  │.json:       │    │ add / remove │
       │ profile: all│  │include:     │    │ doctor       │
       │             │  │ [jira]      │    │ init / test  │
       └──────┬──────┘  └──────┬──────┘    └──────────────┘
              │                │
              ▼                ▼
      ┌──────────────────────────────────┐
      │       MCPConnectionManager       │
      │       (process singleton)        │
      │                                  │
      │ jira  → MCPClient (stdio, rc=2)  │
      │ slack → MCPClient (http, rc=1)   │
      │ my-db → MCPClient (unix, rc=1)   │
      └──────────────────────────────────┘
            │            │              │
            ▼            ▼              ▼
       ┌──────────┐  ┌────────┐   ┌────────────┐
       │ Jira MCP │  │Slack   │   │ Custom DB  │
       │ (stdio)  │  │MCP     │   │ MCP (unix  │
       │          │  │(http)  │   │ socket)    │
       └──────────┘  └────────┘   └────────────┘
```

### Component Responsibilities

| Component | Responsibility |
|---|---|
| **hive-mcp-registry** (GitHub repo) | Curated index of MCP server manifests; CI validates PRs; automated health checks |
| **~/.hive/mcp_registry/** | Local state: installed servers, cached index, user config, operational telemetry |
| **MCPRegistry** (Python module) | Core logic: install, remove, search, resolve for agent, doctor |
| **MCPConnectionManager** | Process-level connection pool with refcounting |
| **MCPClient** (extended) | Adds unix socket, SSE transports; retry on failure |
| **ToolRegistry** (extended) | New `load_registry_servers()` method with collision handling |
| **AgentRunner** (extended) | Loads `mcp_registry.json` alongside `mcp_servers.json`; logs resolution summary |
| **hive mcp CLI** | User-facing commands for management, diagnostics, and contributor tooling |

---

## 7. Data Models

### 7.1 Registry Manifest (`manifest.json`)

```json
{
  "$schema": "https://raw.githubusercontent.com/aden-hive/hive-mcp-registry/main/schema/manifest.schema.json",

  "name": "jira",
  "display_name": "Jira MCP Server",
  "version": "1.2.0",
  "description": "Interact with Jira issues, boards, and sprints",
  "author": {"name": "Jane Contributor", "github": "janedev", "url": "https://github.com/janedev"},
  "maintainer": {"github": "janedev", "email": "jane@example.com"},
  "repository": "https://github.com/janedev/jira-mcp-server",
  "license": "MIT",
  "status": "community",
  "docs_url": "https://github.com/janedev/jira-mcp-server/blob/main/README.md",
  "supported_os": ["linux", "macos", "windows"],
  "deprecated": false,

  "transport": {"supported": ["stdio", "http"], "default": "stdio"},
  "install": {"pip": "jira-mcp-server", "docker": "ghcr.io/janedev/jira-mcp-server:latest", "npm": null},

  "stdio": {"command": "uvx", "args": ["jira-mcp-server", "--stdio"]},
  "http": {"default_port": 4010, "health_path": "/health", "command": "uvx", "args": ["jira-mcp-server", "--http", "--port", "{port}"]},
  "unix": {"socket_template": "/tmp/mcp-{name}.sock", "command": "uvx", "args": ["jira-mcp-server", "--unix", "{socket_path}"]},

  "tools": [
    {"name": "jira_create_issue", "description": "Create a new Jira issue"},
    {"name": "jira_search", "description": "Search Jira issues with JQL"},
    {"name": "jira_update_issue", "description": "Update an existing issue"},
    {"name": "jira_list_boards", "description": "List all Jira boards"}
  ],

  "credentials": [
    {"id": "jira_api_token", "env_var": "JIRA_API_TOKEN", "description": "Jira API token", "help_url": "https://id.atlassian.com/manage-profile/security/api-tokens", "required": true},
    {"id": "jira_domain", "env_var": "JIRA_DOMAIN", "description": "Your Jira domain (e.g., mycompany.atlassian.net)", "required": true}
  ],

  "tags": ["project-management", "atlassian", "issue-tracking"],
  "categories": ["productivity"],
  "mcp_protocol_version": "2024-11-05",

  "hive": {
    "min_version": "0.5.0",
    "max_version": null,
    "profiles": ["productivity", "developer"],
    "tool_namespace": "jira",
    "example_agent": "https://github.com/janedev/jira-mcp-server/tree/main/examples/hive-agent"
  }
}
```

**Schema layering**:
- Everything outside `hive` is the **portable base** — usable by any MCP client.
- The `hive` block carries Hive-specific compatibility, profiles, namespacing, and examples.

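
As a rough illustration of what `hive mcp validate` checks, here is a hand-rolled validator for a small subset of the required base fields; the real command validates against the full JSON Schema referenced by `$schema`:

```python
# Hand-rolled stand-in for `hive mcp validate`, covering only a few of the
# required base fields (FR-10, FR-80, FR-81). Illustrative, not the real CLI.

REQUIRED_FIELDS = ["name", "display_name", "version", "description",
                   "author", "repository", "license", "status", "maintainer"]
VALID_STATUS = {"official", "verified", "community"}

def validate_manifest(manifest):
    errors = [f"missing required field: {field}"
              for field in REQUIRED_FIELDS if field not in manifest]
    status = manifest.get("status")
    if status is not None and status not in VALID_STATUS:
        errors.append(f"invalid status: {status!r}")
    return errors  # empty list means the subset of checks passed
```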
### 7.2 Agent Selection (`mcp_registry.json`)

```json
{
  "include": ["jira", "slack"],
  "tags": ["crm"],
  "exclude": ["github"],
  "profile": "productivity",
  "max_tools": 50,
  "versions": {
    "jira": "1.2.0"
  }
}
```

**Selection precedence** (deterministic):
1. `profile` expands to a set of server names (union with `include` + `tags` matches).
2. `include` adds explicit servers.
3. `tags` adds servers whose tags overlap.
4. `exclude` removes from the final set (always wins).
5. Servers are loaded in `include`-order first, then alphabetically for tag/profile matches.
6. Tool collisions resolved by load order: first server wins.

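
The precedence steps above can be sketched as a small resolver. Field names match `mcp_registry.json`; the function itself is illustrative, not Hive's shipped code:

```python
# Illustrative resolver for the 6-step precedence above.

def select_servers(config, installed):
    """config: parsed mcp_registry.json; installed: {name: manifest metadata}."""
    include = config.get("include", [])
    tags = set(config.get("tags", []))
    exclude = set(config.get("exclude", []))
    profile = config.get("profile")

    matched = set(include)                          # steps 1-3: union
    for name, meta in installed.items():
        if profile == "all" or (profile and profile in meta.get("profiles", [])):
            matched.add(name)
        if tags & set(meta.get("tags", [])):
            matched.add(name)
    matched -= exclude                              # step 4: exclude always wins

    ordered = [n for n in include if n in matched]  # step 5: include-order first,
    ordered += sorted(matched - set(ordered))       # then alphabetical
    return [n for n in ordered if n in installed]   # missing servers only warn (FR-54)
```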
### 7.3 Installed Server Entry (`installed.json` → `servers.<name>`)

```json
{
  "source": "registry",
  "manifest_version": "1.2.0",
  "manifest": {},
  "installed_at": "2026-03-13T10:00:00Z",
  "installed_by": "hive mcp install",
  "transport": "stdio",
  "enabled": true,
  "pinned": false,
  "auto_update": false,
  "resolved_package_version": "1.2.0",
  "overrides": {"env": {"JIRA_DOMAIN": "mycompany.atlassian.net"}, "headers": {}},
  "last_health_check_at": "2026-03-13T12:00:00Z",
  "last_health_status": "healthy",
  "last_error": null,
  "last_used_at": "2026-03-13T11:30:00Z",
  "last_validated_with_hive_version": "0.6.0"
}
```

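
For illustration, a health check might persist the operational fields shown above like this; the helper and its signature are hypothetical:

```python
# Hypothetical sketch of how `hive mcp health` could persist the FR-24
# operational fields into installed.json.
import json
from datetime import datetime, timezone

def record_health(path, server, healthy, error=None):
    with open(path) as f:
        state = json.load(f)
    entry = state["servers"][server]
    entry["last_health_check_at"] = datetime.now(timezone.utc).isoformat()
    entry["last_health_status"] = "healthy" if healthy else "unhealthy"
    entry["last_error"] = error
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
```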
---

## 8. Risks & Mitigations

| Risk | Impact | Likelihood | Mitigation |
|---|---|---|---|
| Low contributor adoption — nobody submits servers | Registry is empty, no value delivered | Medium | Seed with 5-10 popular MCP servers; `hive mcp init` makes submission trivial; canonical examples for every transport |
| High support burden from low-quality entries | Users install broken servers, file tickets against Hive | Medium | Trust tiers (official/verified/community); automated health checks in registry CI; `hive mcp doctor` for self-service debugging; quality gates beyond schema validation |
| Malicious MCP server in registry | User installs server that exfiltrates data or executes harmful code | Low | Maintainer review on all PRs; security notice on first install; servers run in user's trust boundary; verified tier requires code audit |
| Breaking changes to manifest schema | Existing manifests become invalid | Low | Schema versioning with `$schema` URL; CI validates backward compatibility; migration scripts |
| Server upgrades silently break agents | Tool signatures change, agents fail at runtime | Medium | `hive mcp update` diffs tool lists and warns on breaking changes; version pinning in `mcp_registry.json`; `--dry-run` flag |
| Connection manager concurrency bugs | Tool calls fail or deadlock under load | Medium | Thorough unit tests; reuse existing thread-safety patterns from `MCPClient._stdio_call_lock` |
| Registry index URL becomes unavailable | Users can't install new servers | Low | Local cache with TTL; fallback to last-known-good index; registry is a static file (cheap to host/mirror) |
| Name squatting in registry | Bad actors claim popular names | Low | Maintainer review on all PRs; naming guidelines in CONTRIBUTING.md |
| Auto-discover overloads agents with too many tools | Prompt bloat, confused tool selection, slower responses | Medium | `max_tools` cap in `mcp_registry.json`; profiles instead of blanket auto-discover; startup log shows tool count |
| Tool name collisions across servers | Wrong server handles a tool call | Medium | Deterministic load-order resolution; startup collision logging; optional tool namespacing via `hive.tool_namespace` |

---

## 9. Backward Compatibility

This system is **fully additive**:

- Existing `mcp_servers.json` files continue to work unchanged.
- Agents without `mcp_registry.json` load zero registry servers.
- The `MCPConnectionManager` is only used for registry-sourced connections; existing direct `MCPClient` usage is untouched.
- New CLI commands (`hive mcp ...`) don't conflict with existing commands.
- No existing files are modified in a breaking way.
- `mcp_servers.json` tools always take precedence over registry tools (they load first).

---

## 10. Documentation & Examples Strategy

Documentation is a first-class deliverable, not an afterthought. The following are required for launch:

| Doc | Audience | Deliverable |
|---|---|---|
| "Publish your first MCP server" | Contributors | Step-by-step guide from zero to merged registry entry, with screenshots |
| "Install and use your first registry server" | Users | Guide from `hive mcp install` to agent tool call |
| "Migration from mcp_servers.json" | Existing users | How to move static configs to registry-based selection |
| "Troubleshooting MCP servers" | Users | Common errors, `doctor` output examples, fix recipes |
| Manifest cookbook | Contributors | Annotated examples for stdio, http, unix, sse, multi-credential, no-credential |
| Example agents | Agent builders | 2-3 sample agents using `mcp_registry.json` with different selection strategies |

---

## 11. Phased Delivery

| Phase | Scope | Depends On |
|---|---|---|
| **Phase 1: Foundation** | MCPClient transport extensions (unix, SSE, retry); MCPConnectionManager; MCPRegistry module; CLI management commands; ToolRegistry `load_registry_servers()` with collision handling; AgentRunner `mcp_registry.json` loading with startup logging; structured error codes | -- |
| **Phase 2: Developer Tooling** | `hive mcp init`, `validate`, `test` (contributor flow); `doctor`, `inspect`, `why-not` (diagnostics); version pinning and `update --dry-run` | Phase 1 |
| **Phase 3: Registry Repo** | Create `hive-mcp-registry` GitHub repo with schema, validation CI, template, examples, CONTRIBUTING.md; seed with reference entries for built-in servers; automated health check CI | Phase 1 |
| **Phase 4: Docs & Launch** | All documentation deliverables from section 10; example agents; announcement | Phase 2, 3 |
| **Phase 5: Community Growth** | Trust tier promotion process; curated starter packs; popular/trending signals in registry | Phase 4 |
| **Phase 6: Monolith Decomposition** (future) | Extract tool groups from `mcp_server.py` into standalone servers; each becomes a registry entry | Phase 5 |

---

## 12. Open Questions

| # | Question | Owner | Status |
|---|---|---|---|
| Q1 | Should the registry repo live under `aden-hive` org or a new `hive-mcp` org? | Platform team | Open |
| Q2 | Should `hive mcp install` auto-prompt for required credentials interactively? | UX | Open |
| Q3 | Should the connection manager have a configurable max concurrent connections limit? | Engineering | Open |
| Q4 | Should we support a `docker` transport (Hive manages container lifecycle)? | Engineering | Open |
| Q5 | What is the process for promoting a `community` entry to `verified`? (e.g., code audit, usage threshold, maintainer SLA) | Platform + Security | Open |
| Q6 | Should the registry support private/enterprise indexes (e.g., `hive mcp config --index-url https://internal/...`)? | Platform | Open |
| Q7 | Should `hive mcp doctor` report telemetry (opt-in) to help identify systemic issues? | Product + Privacy | Open |
| Q8 | How should we handle MCP servers that require OAuth flows (not just static API keys)? | Engineering | Open |

---

## 13. Stakeholder Sign-Off

| Role | Name | Status |
|---|---|---|
| Engineering Lead | | Pending |
| Product | | Pending |
| OSS / Community | | Pending |
| Security | | Pending |
| Developer Experience | | Pending |

---

# Skill Registry — Product & Business Requirements Document

**Status**: Draft v1
**Last updated**: 2026-03-13
**Authors**: Timothy
**Reviewers**: Platform, Product, OSS/Community, Developer Experience

---

## 1. Executive Summary

This document proposes a **Skill System** for Hive — a portable implementation of the open [Agent Skills](https://agentskills.io) standard — combined with a community registry and a set of built-in default skills that give every worker agent runtime resiliency out of the box.

### 1.1 The Agent Skills Standard

Agent Skills is an open format, originally developed by Anthropic, for giving agents new capabilities and expertise. It has been adopted by 30+ products including Claude Code, Cursor, VS Code, GitHub Copilot, Gemini CLI, OpenHands, Goose, Roo Code, OpenAI Codex, and more.

A skill is a directory containing a `SKILL.md` file — YAML frontmatter (name, description) plus markdown instructions — optionally accompanied by scripts, reference docs, and assets. Agents discover skills at startup, load only the name and description into context (progressive disclosure tier 1), and activate the full instructions on demand when the task matches (tier 2). Supporting files are loaded only when the instructions reference them (tier 3).

```
my-skill/
├── SKILL.md        # Required: metadata + instructions
├── scripts/        # Optional: executable code
├── references/     # Optional: documentation
├── assets/         # Optional: templates, resources
└── evals/          # Optional: test cases and assertions
```

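
For reference, a minimal `SKILL.md` in the format described above might look like this; the content is illustrative, and the frontmatter carries the name and description that agents load at tier 1:

```markdown
---
name: deep-research
description: Multi-source research with citation tracking. Use when a task
  requires synthesizing information from several web sources.
---

# Deep Research

1. Break the question into focused sub-queries.
2. Search each sub-query; record source URLs in `references/sources.md`.
3. Synthesize findings, citing a source for every claim.
```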
### 1.2 What Hive Adds

Hive implements the Agent Skills standard faithfully — no forks, no proprietary extensions to the `SKILL.md` format. A skill written for Claude Code, Cursor, or any other compatible product works in Hive with zero changes, and vice versa.

On top of the standard, Hive adds two things:

1. **Default skills** — Six built-in skills shipped with the Hive framework that every worker agent loads automatically. These encode runtime operational discipline: structured note-taking, batch progress tracking, context preservation, quality self-assessment, error recovery protocols, and task decomposition. They are the "muscle memory" that makes agents reliable by default.

2. **Community registry** (`hive-skill-registry`) — A curated GitHub repository where contributors submit skill packages via pull request. Skills in the registry are standard Agent Skills packages. Includes CI validation, trust tiers, starter packs, and bounty program integration.

### 1.3 Abstraction Hierarchy
|
||||
|
||||
| Layer | What it is | Example |
|
||||
| ----------------- | ------------------------------------------------------- | ------------------------------------------------- |
|
||||
| **Tool** | A single function call via MCP | `web_search`, `gmail_send`, `jira_create_issue` |
|
||||
| **Skill** | A `SKILL.md` with instructions, scripts, and references | "Deep Research", "Code Review", "Data Analysis" |
|
||||
| **Default Skill** | A built-in skill for runtime resiliency | "Structured Note-Taking", "Batch Progress Ledger" |
|
||||
| **Agent** | A complete goal-driven worker composed of skills | "Sales Outreach Agent", "Support Triage Agent" |
|
||||
|
||||
---

## 2. Problem Statement

### 2.1 Current State

- Worker agents have no skill system. There is no mechanism to discover, load, or follow reusable procedural instructions on demand.
- The 12 example templates in `examples/templates/` are copy-paste only — they cannot be composed, imported, versioned, or discovered at runtime.
- Agent builders must either hand-write all prompts and tool orchestration from scratch, or copy patterns from other agents manually.
- Skills written for Claude Code, Cursor, and other Agent Skills-compatible products do not work in Hive. Users who adopt Hive lose access to the growing ecosystem of community skills.
- Worker agents have no standardized operational discipline. The framework provides mechanical safeguards (stall detection, doom-loop fingerprinting, checkpoint/resume), but there is no cognitive protocol for how an agent should take structured notes when processing a 50-item batch, when to proactively save data before context pruning, or how to self-assess quality degradation. Each agent author either reinvents these patterns in their system prompts or — more commonly — skips them entirely.
- When a community member builds a battle-tested skill (research pattern, triage workflow, outreach playbook), there is no pathway to share it, no discovery mechanism, no versioning, and no quality signals.

### 2.2 Who Is Affected

| Persona | Pain Point |
| --- | --- |
| **OSS contributor** | Built a great skill for another Agent Skills-compatible product; wants it to work in Hive too, or wants to share a Hive skill with the wider ecosystem |
| **Agent builder (beginner)** | Overwhelmed by framework concepts; wants to install a "deep research" skill and use it without understanding graph internals |
| **Agent builder (advanced)** | Copies the same prompt patterns and tool orchestration across agents; wants reusable, version-pinned building blocks |
| **Platform team** | Cannot codify best practices as reusable runtime primitives; every quality improvement is a docs change, not a skill update |
| **Enterprise user** | Wants an internal skill library so teams share proven patterns; needs cross-product compatibility |

### 2.3 Impact of Not Solving

- Hive is incompatible with the Agent Skills ecosystem — a growing open standard adopted by 30+ products. Users choosing Hive lose access to community skills; contributors targeting the ecosystem skip Hive.
- Agent quality depends entirely on individual author skill. No mechanism to propagate proven patterns.
- Worker agents are unreliable during long-running or batch processing sessions — no built-in operational discipline.
- The self-improvement loop's output (better prompts, better patterns) stays locked in individual deployments with no pathway to contribute back.

---

## 3. Goals & Success Criteria

### 3.1 Primary Goals

| # | Goal | Metric |
| --- | --- | --- |
| G1 | Any `SKILL.md` from the Agent Skills ecosystem works in Hive with zero modifications | Compatibility test suite against `github.com/anthropics/skills` example skills |
| G2 | A Hive skill works in Claude Code, Cursor, and other compatible products with zero modifications | Cross-product verification on 5+ skills |
| G3 | A user can install and use a community skill in under 2 minutes | Time from `hive skill install X` to skill activating in a session |
| G4 | A contributor can publish a skill in under 10 minutes | Time from `hive skill init` to PR submission |
| G5 | Default skills measurably improve agent reliability on batch processing tasks | A/B comparison: agents with default skills vs. without on 10+ batch scenarios |
| G6 | Zero breaking changes to existing agent configurations | All current agents continue to work unchanged |

### 3.2 Community & Ecosystem Goals

| # | Goal | Metric |
| --- | --- | --- |
| G7 | Registry has 100+ community skills within 30 days of launch | Skill count in registry |
| G8 | All registry skills are portable Agent Skills packages — usable in any compatible product | 100% of registry entries conform to the standard |
| G9 | Bounty program integrates with skill contributions | Skill submissions tracked in bounty-tracker |
| G10 | Contributors receive attribution when their skills are used | Skill metadata includes author; agent logs credit loaded skills |
| G11 | Existing skills from `github.com/anthropics/skills` are installable via `hive skill install` | All example skills pass validation and activate correctly |

### 3.3 Non-Goals (Explicit Exclusions)

- **Forking or extending the Agent Skills standard** — Hive implements the spec faithfully. No proprietary sidecar files, no Hive-specific schema extensions.
- **Runtime skill marketplace** — no billing, licensing, or monetization. The registry is free and open-source.
- **Hosting skill execution** — the registry stores packages; execution happens locally.
- **AI-generated skills** — automatic skill generation from natural language is a future phase.
- **Graph-level skill composition** — skills are instruction-following units, not graph fragments. Agents compose skills by activating multiple skills and following their combined instructions.

---

## 4. Agent Skills Standard — Implementation Spec

This section defines how Hive implements the open Agent Skills standard. The specification at [agentskills.io/specification](https://agentskills.io/specification) is authoritative; this section describes Hive's conforming implementation.

### 4.1 Skill Discovery

At session startup, Hive scans for skill directories containing a `SKILL.md` file. Both cross-client and Hive-specific locations are scanned:

| Scope | Path | Purpose |
| --- | --- | --- |
| Project | `<project>/.agents/skills/` | Cross-client interoperability (standard convention) |
| Project | `<project>/.hive/skills/` | Hive-specific project skills |
| User | `~/.agents/skills/` | Cross-client user-level skills |
| User | `~/.hive/skills/` | Hive-specific user-level skills |
| Framework | `<hive-install>/skills/defaults/` | Built-in default skills |

**Precedence** (deterministic): Project-level skills override user-level skills. Within the same scope, `.hive/skills/` overrides `.agents/skills/`. Framework-level default skills have lowest precedence and can be overridden at any scope.
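
The precedence rule above amounts to a last-wins merge over scan roots ordered from lowest to highest precedence. A minimal sketch, assuming the paths from the table; the function and constant names here are illustrative, not part of the Hive API, and the scan is simplified to one directory level (the spec allows up to four):

```python
from pathlib import Path

# Roots in ascending precedence: a skill found in a later root replaces a
# same-named skill from an earlier root. Paths follow the discovery table.
SCAN_ROOTS = [
    ("framework", Path("/opt/hive/skills/defaults")),  # lowest precedence
    ("user",      Path.home() / ".agents" / "skills"),
    ("user",      Path.home() / ".hive" / "skills"),
    ("project",   Path(".agents") / "skills"),
    ("project",   Path(".hive") / "skills"),           # highest precedence
]

def resolve_skills(roots):
    """Return {skill_name: (scope, skill_dir)} applying last-wins precedence."""
    resolved = {}
    for scope, root in roots:
        if not root.is_dir():
            continue
        # One skill per subdirectory containing a SKILL.md (depth simplified).
        for skill_md in root.glob("*/SKILL.md"):
            resolved[skill_md.parent.name] = (scope, skill_md.parent)
    return resolved
```

Because later roots simply overwrite earlier entries in the dict, the collision behavior (project over user, `.hive/` over `.agents/`) falls out of the ordering alone.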

**Scanning rules:**

- Skip `.git/`, `node_modules/`, `__pycache__/`, `.venv/` directories
- Max depth: 4 levels from the skills root
- Max directories: 2000 per scope
- Respect `.gitignore` in project scope

**Trust:** Project-level skills from untrusted repositories (not marked trusted by the user) require explicit user consent before loading.

### 4.2 `SKILL.md` Parsing

Each discovered `SKILL.md` is parsed per the standard:

1. Extract YAML frontmatter between `---` delimiters
2. Parse required fields: `name`, `description`
3. Parse optional fields: `license`, `compatibility`, `metadata`, `allowed-tools`
4. Everything after the closing `---` is the skill's markdown body (instructions)

**Validation (lenient):**

- Name doesn't match parent directory → warn, load anyway
- Name exceeds 64 characters → warn, load anyway
- Description missing or empty → skip the skill, log error
- YAML unparseable → try wrapping unquoted colon values in quotes as fallback; if still fails, skip and log
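
The parsing and validation steps above can be sketched as follows. This is a deliberately naive frontmatter reader, not a full YAML parser: it splits each line on the first colon only, which incidentally tolerates unquoted colons in values (the fallback case described above), and it returns `None` exactly when the lenient rules say to skip. A real implementation would use a YAML library first and fall back to something like this:

```python
import re

def parse_skill_md(text, path):
    """Parse a SKILL.md: frontmatter between '---' delimiters, then the
    markdown body. Lenient: returns None only when the description is
    missing or the frontmatter is unrecoverable."""
    m = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not m:
        return None  # no recognizable frontmatter block
    front, body = m.group(1), m.group(2)
    fields = {}
    for line in front.splitlines():
        if ":" in line:
            # Split on the FIRST colon only, so unquoted colons in the
            # value (e.g. "description: Research: fast") survive.
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip().strip('"')
    if not fields.get("description"):
        return None  # spec: skip the skill, log an error
    return {
        "name": fields.get("name", ""),  # missing/mismatched name: warn, load anyway
        "description": fields["description"],
        "location": path,
        "body": body,
    }
```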

**In-memory record per skill:**

| Field | Source |
| --- | --- |
| `name` | Frontmatter |
| `description` | Frontmatter |
| `location` | Absolute path to `SKILL.md` |
| `base_dir` | Parent directory of `SKILL.md` |
| `source_scope` | `project`, `user`, or `framework` |

### 4.3 Progressive Disclosure

Hive implements the standard three-tier loading model:

| Tier | What's loaded | When | Token cost |
| --- | --- | --- | --- |
| **1. Catalog** | Name + description per skill | Session start | ~50-100 tokens per skill |
| **2. Instructions** | Full `SKILL.md` body | When skill is activated | <5000 tokens recommended |
| **3. Resources** | Scripts, references, assets | When instructions reference them | Varies |

**Catalog disclosure**: At session start, all discovered skill names and descriptions are injected into the system prompt:

```xml
<available_skills>
  <skill>
    <name>deep-research</name>
    <description>Multi-step web research with source verification. Use when the task requires gathering and synthesizing information from multiple sources.</description>
    <location>/home/user/.hive/skills/deep-research/SKILL.md</location>
  </skill>
  ...
</available_skills>
```

**Behavioral instruction** injected alongside the catalog:

```
The following skills provide specialized instructions for specific tasks.
When a task matches a skill's description, read the SKILL.md at the listed
location to load the full instructions before proceeding.
When a skill references relative paths, resolve them against the skill's
directory (the parent of SKILL.md) and use absolute paths in tool calls.
```
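
Rendering the tier-1 catalog block from discovered skill records is mechanical; the one detail worth getting right is escaping, since descriptions are free text that may contain `&` or `<`. A minimal sketch (the function name is illustrative, not the Hive API):

```python
from xml.sax.saxutils import escape

def build_catalog(skills):
    """Render the <available_skills> block from discovered skill records.
    Tier 1 only: name + description + location, nothing else."""
    parts = ["<available_skills>"]
    for s in skills:
        parts.append(
            "  <skill>\n"
            f"    <name>{escape(s['name'])}</name>\n"
            f"    <description>{escape(s['description'])}</description>\n"
            f"    <location>{escape(s['location'])}</location>\n"
            "  </skill>"
        )
    parts.append("</available_skills>")
    return "\n".join(parts)
```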

### 4.4 Skill Activation

Skills are activated via two mechanisms:

**Model-driven**: The agent reads the skill catalog, decides a skill is relevant, and reads the `SKILL.md` file using its file-read tool. No special infrastructure needed — the agent's standard file-reading capability is sufficient.

**User-driven**: Users can activate skills explicitly via `@skill-name` mention syntax or via agent configuration that pre-activates specific skills for every session.

**What happens on activation:**

1. The full `SKILL.md` body is loaded into context
2. Bundled resources (scripts, references) are listed but NOT eagerly loaded
3. The skill directory is allowlisted for file access (no permission prompts for bundled files)
4. Activation is logged: `{skill_name, scope, timestamp}`

**Deduplication**: If a skill is already active in the current session, re-activation is skipped.
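
The four activation steps plus deduplication can be sketched as session state. The class and attribute names here are illustrative, not the Hive API:

```python
import os
from datetime import datetime, timezone

class SkillSession:
    """Sketch of activation: load the body once, allowlist the skill
    directory, log the activation, and deduplicate re-activation."""

    def __init__(self):
        self.active = {}               # skill name -> full SKILL.md body (tier 2)
        self.allowlisted_dirs = set()  # bundled files readable without prompts
        self.activation_log = []

    def activate(self, name, skill_md_path, scope="project"):
        if name in self.active:        # dedup: already active this session
            return False
        with open(skill_md_path, encoding="utf-8") as f:
            self.active[name] = f.read()                          # step 1
        self.allowlisted_dirs.add(os.path.dirname(skill_md_path))  # step 3
        self.activation_log.append({                               # step 4
            "skill_name": name,
            "scope": scope,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        return True
```

Step 2 (listing bundled resources without loading them) is omitted here; it would enumerate `scripts/`, `references/`, and `assets/` entries by name only.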

**Context protection**: Activated skill content is exempt from context pruning/compaction — skill instructions are durable behavioral guidance that must persist for the session duration.

### 4.5 Skill Execution

The agent follows the instructions in `SKILL.md`. It can:

- Execute bundled scripts from `scripts/`
- Read reference materials from `references/`
- Use assets from `assets/`
- Call any MCP tools available in the agent's tool registry

This is identical to how skills work in Claude Code, Cursor, or any other Agent Skills-compatible product.

### 4.6 Pre-Activated Skills

Agents can declare skills that should be activated at session start — bypassing model-driven activation. This is useful for skills that an agent always needs (e.g., a coding standards skill for a code review agent).

**In agent config (`agent.json`):**

```json
{
  "skills": ["deep-research", "code-review"]
}
```

**In Python:**

```python
agent = Agent(
    name="my-agent",
    skills=["deep-research", "code-review"],
)
```

Pre-activated skills have their full `SKILL.md` body loaded into context at session start (tier 2), skipping the catalog-only tier 1 phase.

---

## 5. Default Skills

Default skills are **built-in skills shipped with the Hive framework** that every worker agent loads automatically. They use the Agent Skills format (`SKILL.md`) but live in the framework's install directory and serve as runtime operational protocols.

### 5.1 Why Default Skills

The framework provides mechanical safeguards: stall detection via n-gram similarity, doom-loop fingerprinting, checkpoint/resume, token budget pruning, and max iteration limits. But these are reactive — they trigger after something has gone wrong.

Default skills encode **proactive cognitive protocols**: how to take structured notes so you don't lose track of a 50-item batch, when to pause and summarize before you hit context limits, how to self-assess whether your output quality is degrading. They are the operational habits that experienced agent builders already encode in their system prompts — standardized so every agent benefits.

### 5.2 Integration Model

Default skills differ from community skills in how they integrate:

| Aspect | Default Skills | Community Skills |
| --- | --- | --- |
| Loaded by | Framework automatically | Agent decides at runtime (or pre-activated in config) |
| Integration | System prompt injection + shared memory hooks | Instruction-following (standard Agent Skills) |
| Graph impact | No dedicated nodes — woven into existing nodes | None (just context) |
| Overridable | Yes (disable, configure, or replace) | N/A |

Default skills integrate at four injection points in the `EventLoopNode`:

1. **System prompt injection** (before first LLM call): Default skill protocols are appended to the node's system prompt
2. **Iteration boundary callbacks** (between iterations): Quality check, notes staleness warning, budget tracking
3. **Node completion hooks** (when node finishes): Batch completeness check, handoff summary
4. **Phase transition hooks** (on edge traversal): Context carry-over, notes persistence
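
The four injection points above can be modeled as callback registries that a node invokes at the corresponding lifecycle moments. A minimal sketch, assuming nothing about the real `EventLoopNode` interface; all names here are illustrative:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DefaultSkillHooks:
    """Illustrative registry for the four injection points. A hypothetical
    EventLoopNode would consult these at the matching lifecycle moments."""
    system_prompt_sections: List[str] = field(default_factory=list)    # point 1
    on_iteration_boundary: List[Callable] = field(default_factory=list)  # point 2
    on_node_completion: List[Callable] = field(default_factory=list)     # point 3
    on_phase_transition: List[Callable] = field(default_factory=list)    # point 4

    def build_system_prompt(self, base_prompt: str) -> str:
        # Injection point 1: append enabled protocols before the first LLM call.
        if not self.system_prompt_sections:
            return base_prompt
        return (base_prompt + "\n\n## Operational Protocols\n\n"
                + "\n\n".join(self.system_prompt_sections))
```

The `## Operational Protocols` section header matches requirement DS-8; the callback lists for points 2-4 would be invoked by the node between iterations, at completion, and on edge traversal respectively.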

### 5.3 Default Skill Catalog

Six default skills ship with Hive:

#### 5.3.1 Structured Note-Taking (`hive.note-taking`)

**Purpose:** Maintain a structured working document throughout execution so the agent never loses track of what it knows, what it's decided, and what's pending.

**Problem:** Without structured notes, agents processing long sessions rely entirely on conversation history. When context is pruned (automatically at 60% token usage), intermediate reasoning is lost. Agents repeat work, contradict earlier decisions, or silently drop items.

**Protocol (injected into system prompt):**

```markdown
## Operational Protocol: Structured Note-Taking

Maintain structured working notes in shared memory key `_working_notes`.
Update at these checkpoints:

- After completing each discrete subtask or batch item
- After receiving new information that changes your plan
- Before any tool call that will produce substantial output

Structure:

### Objective — restate the goal
### Current Plan — numbered steps, mark completed with ✓
### Key Decisions — decisions made and WHY
### Working Data — intermediate results, extracted values
### Open Questions — uncertainties to verify
### Blockers — anything preventing progress

Update incrementally — do not rewrite from scratch each time.
```

**Shared memory:** `_working_notes` (string), `_notes_updated_at` (timestamp)

**Config:** `enabled` (default true), `update_frequency` (default `per_subtask`), `max_notes_length` (default 4000 chars)

---

#### 5.3.2 Batch Progress Ledger (`hive.batch-ledger`)

**Purpose:** When processing a collection of items, maintain a structured ledger tracking each item's status so no item is skipped, duplicated, or silently dropped.

**Problem:** Agents processing batches lose track of which items they've handled, especially after context compaction or checkpoint resume. Without a ledger, agents re-process items (waste) or skip items (data loss).

**Protocol (injected into system prompt):**

```markdown
## Operational Protocol: Batch Progress Ledger

When processing a collection of items, maintain a batch ledger in `_batch_ledger`.

Initialize when you identify the batch:

- `_batch_total`: total item count
- `_batch_ledger`: JSON with per-item status

Per-item statuses: pending → in_progress → completed|failed|skipped

- Set `in_progress` BEFORE processing
- Set final status AFTER processing with 1-line result_summary
- Include error reason for failed/skipped items
- Update aggregate counts after each item
- NEVER remove items from the ledger
- If resuming, skip items already marked completed
```

**Shared memory:** `_batch_ledger` (dict), `_batch_total` (int), `_batch_completed` (int), `_batch_failed` (int), `_batch_skipped` (int)
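
The protocol does not prescribe an exact shape for `_batch_ledger`; one plausible layout (illustrative, not specified by the skill) that supports the resume and completeness rules above:

```json
{
  "item-001": { "status": "completed", "result_summary": "Sent intro email" },
  "item-002": { "status": "failed", "error": "invalid address", "result_summary": "Send rejected" },
  "item-003": { "status": "in_progress" },
  "item-004": { "status": "pending" }
}
```

Keeping every item present with an explicit status is what makes the completion check below possible: finished-state counts can be compared against `_batch_total` with no item silently absent.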

**Config:** `enabled` (default true), `auto_detect_batch` (default true), `checkpoint_every_n` (default 5)

**Completion check:** At node completion, if `_batch_completed + _batch_failed + _batch_skipped < _batch_total`, emit warning.

---

#### 5.3.3 Context Preservation (`hive.context-preservation`)

**Purpose:** Proactively preserve critical information before automatic context pruning destroys it.

**Problem:** The framework's `prune_old_tool_results()` at 60% token usage removes content indiscriminately. Agents that don't proactively save important data into working notes lose it permanently.

**Protocol (injected into system prompt):**

```markdown
## Operational Protocol: Context Preservation

You operate under a finite context window. Important information WILL be pruned.

Save-As-You-Go: After any tool call producing information you'll need later,
immediately extract key data into `_working_notes` or `_preserved_data`.
Do NOT rely on referring back to old tool results.

What to extract: URLs and key snippets (not full pages), relevant API fields
(not raw JSON), specific lines/values (not entire files), analysis results
(not raw data).

Before transitioning to the next phase/node, write a handoff summary to
`_handoff_context` with everything the next phase needs to know.
```

**Shared memory:** `_handoff_context` (string), `_preserved_data` (dict)

**Config:** `enabled` (default true), `warn_at_usage_ratio` (default 0.45), `require_handoff` (default true)

---

#### 5.3.4 Quality Self-Assessment (`hive.quality-monitor`)

**Purpose:** Periodically prompt the agent to self-evaluate output quality, catching degradation before the judge does.

**Problem:** The judge system evaluates at node completion — once per node, not during execution. An agent can degrade gradually over many iterations without detection until the node completes.

**Protocol (injected into system prompt):**

```markdown
## Operational Protocol: Quality Self-Assessment

Every 5 iterations, self-assess:

1. On-task? Still working toward the stated objective?
2. Thorough? Cutting corners compared to earlier?
3. Non-repetitive? Producing new value or rehashing?
4. Consistent? Does the latest output contradict earlier decisions?
5. Complete? Tracking all items, or silently dropped some?

If degrading: write assessment to `_quality_log`, re-read `_working_notes`,
change approach explicitly. If acceptable: brief note in `_quality_log`.
```

**Shared memory:** `_quality_log` (list), `_quality_degradation_count` (int)

**Config:** `enabled` (default true), `assessment_interval` (default 5), `degradation_threshold` (default 3)

---

#### 5.3.5 Error Recovery Protocol (`hive.error-recovery`)

**Purpose:** When a tool call fails or returns unexpected results, follow a structured recovery protocol instead of blindly retrying or giving up.

**Problem:** The framework retries transient errors automatically. But non-transient failures (wrong input, business logic error, missing resource) are handed back to the agent with no guidance. Agents often retry the same call or abandon the task.

**Protocol (injected into system prompt):**

```markdown
## Operational Protocol: Error Recovery

When a tool call fails:

1. Diagnose — record error in notes, classify as transient or structural
2. Decide — transient: retry once. Structural fixable: fix and retry.
   Structural unfixable: record as failed, move to next item.
   Blocking all progress: record escalation note.
3. Adapt — if same tool failed 3+ times, stop using it and find alternative.
   Update plan in notes. Never silently drop the failed item.
```

**Shared memory:** `_error_log` (list), `_failed_tools` (dict), `_escalation_needed` (bool)

**Config:** `enabled` (default true), `max_retries_per_tool` (default 3), `escalation_on_block` (default true)

---

#### 5.3.6 Task Decomposition (`hive.task-decomposition`)

**Purpose:** Decompose complex tasks into explicit subtasks before diving in. Maintain the decomposition as a living checklist.

**Problem:** Agents facing complex tasks start executing immediately without planning, leading to incomplete coverage and iteration budget exhaustion on the first sub-problem.

**Protocol (injected into system prompt):**

```markdown
## Operational Protocol: Task Decomposition

Before starting a complex task:

1. Decompose — break into numbered subtasks in `_working_notes` Current Plan
2. Estimate — relative effort per subtask (small/medium/large)
3. Execute — work through in order, mark ✓ when complete
4. Budget — if running low on iterations, prioritize by impact
5. Verify — before declaring done, every subtask must be ✓, skipped (with reason), or blocked
```

**Shared memory:** `_subtasks` (list), `_iteration_budget_remaining` (int)

**Config:** `enabled` (default true), `decomposition_threshold` (default `auto`), `budget_awareness` (default true)

---

### 5.4 Default Skill Configuration

Agents configure default skills via `default_skills` in their agent definition:

**Declarative (`agent.json`):**

```json
{
  "default_skills": {
    "hive.note-taking": { "enabled": true },
    "hive.batch-ledger": { "enabled": true, "checkpoint_every_n": 10 },
    "hive.context-preservation": {
      "enabled": true,
      "warn_at_usage_ratio": 0.4
    },
    "hive.quality-monitor": { "enabled": false },
    "hive.error-recovery": { "enabled": true },
    "hive.task-decomposition": { "enabled": true }
  }
}
```

**Disable all:** `"default_skills": {"_all": {"enabled": false}}`

### 5.5 Prompt Budget

All default skill protocols combined must total under **2000 tokens** to minimize impact on the agent's domain reasoning budget. Protocols are terse operational checklists, not verbose documentation.

### 5.6 Shared Memory Convention

All default skill shared memory keys use the `_` prefix (`_working_notes`, `_batch_ledger`, etc.) to avoid collisions with domain-level keys. These keys are:

- Visible to the agent (for self-reference)
- Visible to the judge (for evaluation context)
- Excluded from the agent's declared output contract (operational, not domain output)
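
The prefix convention makes the operational/domain split a one-line filter, e.g. when assembling the agent's declared output contract. A minimal sketch (the function name is illustrative):

```python
def split_memory(shared_memory: dict):
    """Separate operational keys ('_' prefix convention) from domain
    output keys, e.g. before emitting the declared output contract."""
    operational = {k: v for k, v in shared_memory.items() if k.startswith("_")}
    domain = {k: v for k, v in shared_memory.items() if not k.startswith("_")}
    return operational, domain
```

Domain skills and agent authors simply avoid leading underscores in their own keys, and the two namespaces never collide.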

---

## 6. Community Registry

### 6.1 Registry Repository

A public GitHub repository (`hive-skill-registry`) serves as the curated community index. Every entry is a standard Agent Skills package — portable to any compatible product.

```
hive-skill-registry/
├── registry/
│   ├── skills/
│   │   ├── deep-research/
│   │   │   ├── SKILL.md
│   │   │   ├── scripts/
│   │   │   ├── references/
│   │   │   ├── evals/
│   │   │   └── README.md
│   │   ├── email-triage/
│   │   └── ...
│   ├── packs/
│   │   ├── research-pack.json
│   │   └── ...
│   └── _template/
├── skill_index.json (auto-generated)
├── CONTRIBUTING.md
└── README.md
```

### 6.2 Trust Tiers

| Tier | Meaning | Requirements |
| --- | --- | --- |
| `official` | Maintained by Hive team | Internal review |
| `verified` | Audited community contribution | Code audit, maintainer SLA, test coverage |
| `community` | Community-submitted | Passes CI validation, maintainer review on PR |

### 6.3 Registry Index

The registry auto-generates a `skill_index.json` on merge for client consumption. Each entry looks like:

```json
{
  "name": "deep-research",
  "description": "Multi-step web research with source verification...",
  "status": "verified",
  "author": { "name": "Alex Researcher", "github": "alexr" },
  "maintainer": { "github": "alexr" },
  "version": "1.2.0",
  "license": "MIT",
  "tags": ["research", "web", "synthesis"],
  "categories": ["knowledge-work"],
  "install_count": 342,
  "last_validated_at": "2026-03-13T10:00:00Z",
  "deprecated": false
}
```

### 6.4 Starter Packs

Themed collections of skills that work well together:

```json
{
  "name": "research-pack",
  "display_name": "Research & Analysis Pack",
  "description": "Skills for research-heavy agents",
  "skills": [
    { "name": "deep-research", "version": ">=1.0.0" },
    { "name": "synthesis", "version": ">=1.0.0" },
    { "name": "executive-summary", "version": ">=1.0.0" }
  ]
}
```

### 6.5 Evaluation Framework

Skills in the registry can include an `evals/` directory following the Agent Skills evaluation pattern:

```json
{
  "skill_name": "deep-research",
  "evals": [
    {
      "id": 1,
      "prompt": "Research the current state of quantum computing and summarize the top 3 breakthroughs from the past year.",
      "expected_output": "A structured summary with 3 breakthroughs, each with source citations.",
      "assertions": [
        "Output includes at least 3 distinct breakthroughs",
        "Each breakthrough has at least one source URL",
        "Sources are from the past 12 months"
      ]
    }
  ]
}
```

CI runs these evals on submitted skills to validate quality.
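
A CI job would first validate the shape of the eval file before attempting to score anything. A minimal sketch of that shape check, assuming the JSON structure shown above; actually judging agent output against the natural-language assertions would be done by a separate LLM-based step, which is out of scope here:

```python
import json

def load_evals(path):
    """Shape-check an evals file per the structure above: a skill_name
    plus a list of evals, each with an integer id, a prompt, and a
    non-empty assertions list."""
    with open(path, encoding="utf-8") as f:
        doc = json.load(f)
    assert isinstance(doc.get("skill_name"), str) and doc["skill_name"], \
        "skill_name is required"
    for ev in doc.get("evals", []):
        assert isinstance(ev.get("id"), int), "each eval needs an integer id"
        assert ev.get("prompt"), f"eval {ev.get('id')}: prompt is required"
        assert isinstance(ev.get("assertions"), list) and ev["assertions"], \
            f"eval {ev.get('id')}: at least one assertion is required"
    return doc
```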

### 6.6 Bounty Integration

| Contribution | Points |
| --- | --- |
| New skill | 75 |
| Skill improvement PR | 30 |
| Skill tests/evals | 20 |
| Skill docs | 20 |

---
|
||||
|
||||
## 7. Requirements
|
||||
|
||||
### 7.1 Functional Requirements — Agent Skills Standard
|
||||
|
||||
| ID | Requirement | Priority |
|
||||
| ----- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- |
| AS-1 | Discover skills by scanning `.agents/skills/` and `.hive/skills/` at project and user scopes | P0 |
| AS-2 | Parse `SKILL.md` YAML frontmatter per the Agent Skills spec: `name`, `description` (required), `license`, `compatibility`, `metadata`, `allowed-tools` (optional) | P0 |
| AS-3 | Lenient validation: warn on non-critical issues, skip only on missing description or unparseable YAML | P0 |
| AS-4 | Progressive disclosure tier 1: skill catalog (name + description + location) injected into system prompt at session start | P0 |
| AS-5 | Progressive disclosure tier 2: full `SKILL.md` body loaded into context when agent or user activates a skill | P0 |
| AS-6 | Progressive disclosure tier 3: scripts, references, and assets loaded on demand when instructions reference them | P0 |
| AS-7 | Model-driven activation: agent reads `SKILL.md` via file-read tool when it decides a skill is relevant | P0 |
| AS-8 | User-driven activation: `@skill-name` mention syntax intercepted by harness | P1 |
| AS-9 | Skill directories allowlisted for file access — no permission prompts for bundled resources | P0 |
| AS-10 | Activated skill content protected from context pruning/compaction | P0 |
| AS-11 | Duplicate activations in the same session deduplicated | P1 |
| AS-12 | Name collisions resolved deterministically: project overrides user, `.hive/` overrides `.agents/`, log warning | P0 |
| AS-13 | Trust gating: project-level skills from untrusted repos require user consent | P1 |
| AS-14 | Compatibility with `github.com/anthropics/skills` example skills — all pass validation and activate correctly | P0 |
| AS-15 | Cross-client YAML compatibility: handle unquoted colon values via automatic fixup | P1 |
| AS-16 | Pre-activated skills via `skills` list in agent config (`agent.json` and Python API) | P0 |
| AS-17 | Subagent delegation: optionally run a skill's instructions in an isolated sub-session | P2 |

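The AS-12 precedence rules can be expressed as a small deterministic resolver. The sketch below is illustrative, not Hive's actual implementation; the `Skill` shape and the `PRECEDENCE` table are assumptions that encode "project overrides user, and `.hive/` overrides `.agents/` within a scope":

```python
from dataclasses import dataclass

# Assumed precedence encoding of AS-12: higher value wins.
PRECEDENCE = {
    ("user", ".agents"): 0,
    ("user", ".hive"): 1,
    ("project", ".agents"): 2,
    ("project", ".hive"): 3,
}

@dataclass
class Skill:
    name: str
    scope: str  # "project" | "user"
    root: str   # ".hive" | ".agents"

def resolve_collisions(skills: list[Skill]) -> tuple[dict[str, Skill], list[str]]:
    """Pick exactly one skill per name and log a warning for each loser."""
    winners: dict[str, Skill] = {}
    warnings: list[str] = []
    for s in skills:
        cur = winners.get(s.name)
        if cur is None or PRECEDENCE[(s.scope, s.root)] > PRECEDENCE[(cur.scope, cur.root)]:
            if cur is not None:
                warnings.append(f"skill '{s.name}': {s.scope}/{s.root} overrides {cur.scope}/{cur.root}")
            winners[s.name] = s
        else:
            warnings.append(f"skill '{s.name}': {cur.scope}/{cur.root} overrides {s.scope}/{s.root}")
    return winners, warnings
```

Because the ordering is a total order over (scope, root) pairs, the outcome is independent of scan order, which is what makes the resolution deterministic.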
### 7.2 Functional Requirements — Default Skills

| ID | Requirement | Priority |
| ----- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- |
| DS-1 | Ship 6 default skills: `hive.note-taking`, `hive.batch-ledger`, `hive.context-preservation`, `hive.quality-monitor`, `hive.error-recovery`, `hive.task-decomposition` | P0 |
| DS-2 | Default skills are valid Agent Skills packages (`SKILL.md` format) in the framework install directory | P0 |
| DS-3 | All default skills loaded automatically for every worker agent unless explicitly disabled | P0 |
| DS-4 | Default skills integrate via system prompt injection — no additional graph nodes | P0 |
| DS-5 | Default skills use `_`-prefixed shared memory keys to avoid domain collisions | P0 |
| DS-6 | Each default skill independently configurable via `default_skills` in agent config | P0 |
| DS-7 | All defaults disableable at once: `{"_all": {"enabled": false}}` | P0 |
| DS-8 | Default skill protocols appended in a `## Operational Protocols` system prompt section | P0 |
| DS-9 | Iteration boundary callbacks for quality check and notes staleness | P0 |
| DS-10 | Node completion hooks for batch completeness and handoff write | P0 |
| DS-11 | Phase transition hooks for context carry-over and notes persistence | P1 |
| DS-12 | `hive.batch-ledger` auto-detects batch scenarios via heuristic | P1 |
| DS-13 | `hive.context-preservation` warns at 0.45 token usage (before 0.6 framework prune) | P0 |
| DS-14 | Combined default skill prompts total under 2000 tokens | P0 |
| DS-15 | Agent startup logs active default skills and config | P0 |

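DS-6 and DS-7 compose into a simple resolution rule. The sketch below is an assumption about semantics, not the shipped code: in particular, it assumes a per-skill `enabled` flag overrides the `_all` switch, so a user can disable everything and re-enable a single default.

```python
DEFAULT_SKILLS = [
    "hive.note-taking", "hive.batch-ledger", "hive.context-preservation",
    "hive.quality-monitor", "hive.error-recovery", "hive.task-decomposition",
]

def resolve_default_skills(config: dict) -> list[str]:
    """Return the default skills active under a `default_skills` config.

    `{"_all": {"enabled": False}}` disables everything (DS-7); a per-skill
    entry overrides `_all` for that skill (assumed DS-6 semantics).
    """
    cfg = config.get("default_skills", {})
    all_enabled = cfg.get("_all", {}).get("enabled", True)
    return [
        name for name in DEFAULT_SKILLS
        if cfg.get(name, {}).get("enabled", all_enabled)
    ]
```

With no config at all, every worker agent gets all six defaults, which matches DS-3.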
### 7.3 Functional Requirements — CLI

| ID | Requirement | Priority |
| ------ | ------------------------------------------------------------------------------------------------- | -------- |
| CLI-1 | `hive skill list` — list discovered skills (all scopes) with source and status | P0 |
| CLI-2 | `hive skill install <name> [--version X]` — install from registry to `~/.hive/skills/` | P0 |
| CLI-3 | `hive skill install --pack <name>` — install a starter pack | P1 |
| CLI-4 | `hive skill remove <name>` — uninstall | P0 |
| CLI-5 | `hive skill search <query>` — search registry by name, tag, description | P1 |
| CLI-6 | `hive skill info <name>` — show details: description, author, scripts, references | P0 |
| CLI-7 | `hive skill init [--name X]` — scaffold a skill directory with `SKILL.md` template | P0 |
| CLI-8 | `hive skill validate <path>` — validate `SKILL.md` against the Agent Skills spec | P0 |
| CLI-9 | `hive skill test <path> [--input <json>]` — run skill in isolation, execute evals if present | P1 |
| CLI-10 | `hive skill doctor [name]` — check health: SKILL.md parseable, scripts executable, deps available | P0 |
| CLI-11 | `hive skill doctor --defaults` — check all default skills operational | P1 |
| CLI-12 | `hive skill fork <name> [--name new-name]` — create local editable copy of a registry skill | P1 |
| CLI-13 | `hive skill update [name]` — update registry cache or specific skill | P1 |

### 7.4 Functional Requirements — Registry

| ID | Requirement | Priority |
| ------ | ------------------------------------------------------------------------------------------------ | -------- |
| REG-1 | Public GitHub repo with defined directory structure | P0 |
| REG-2 | CI validates `SKILL.md` on every PR using `skills-ref validate` | P0 |
| REG-3 | Flat index (`skill_index.json`) auto-generated on merge | P0 |
| REG-4 | `_template/` directory with starter skill for contributors | P0 |
| REG-5 | `CONTRIBUTING.md` with step-by-step submission guide | P0 |
| REG-6 | CI runs skill evals when `evals/` directory is present | P1 |
| REG-7 | Trust tiers: `official`, `verified`, `community` | P0 |
| REG-8 | Tags follow controlled taxonomy | P1 |
| REG-9 | Seed with 10+ skills: extract from existing templates + port from `github.com/anthropics/skills` | P0 |
| REG-10 | Starter pack definitions in `registry/packs/` | P1 |

### 7.5 Failure Handling & Diagnostics

| ID | Requirement | Priority |
| ---- | ----------------------------------------------------------------------------------------- | -------- |
| DX-1 | Structured error codes: `SKILL_NOT_FOUND`, `SKILL_PARSE_ERROR`, `SKILL_ACTIVATION_FAILED` | P0 |
| DX-2 | Every error includes: what failed, why, and suggested fix | P0 |
| DX-3 | Agent startup logs per-skill summary: `{name, scope, status}` | P0 |
| DX-4 | `hive skill doctor` machine-parseable with `--json` flag | P2 |

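DX-1 and DX-2 combine naturally into a single structured error type. This is a hypothetical sketch of the shape such errors could take; the field names are assumptions, not the framework's API:

```python
from dataclasses import dataclass

@dataclass
class SkillError(Exception):
    """Structured skill error: machine-readable code (DX-1) plus the
    what/why/fix triple every error must carry (DX-2)."""
    code: str  # e.g. SKILL_NOT_FOUND, SKILL_PARSE_ERROR, SKILL_ACTIVATION_FAILED
    what: str  # what failed
    why: str   # why it failed
    fix: str   # suggested fix

    def __str__(self) -> str:
        return f"[{self.code}] {self.what}: {self.why}. Fix: {self.fix}"
```

Keeping `code` separate from the human-readable fields is what makes DX-4's `--json` output cheap: the same object serializes for machines and formats for terminals.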
### 7.6 Non-Functional Requirements

| ID | Requirement | Priority |
| ----- | ---------------------------------------------------------------------------- | -------- |
| NFR-1 | Skill discovery (scanning + parsing) completes in <500ms for up to 50 skills | P1 |
| NFR-2 | Installing a skill does not require a Hive restart | P0 |
| NFR-3 | All new code has unit test coverage | P0 |
| NFR-4 | Registry CI runs in <120s | P1 |
| NFR-5 | `hive skill install` prints security notice on first use | P0 |
| NFR-6 | Skills loaded at runtime are read-only — modifications require forking | P0 |

---

## 8. Architecture Overview

```
┌───────────────────────────────────────┐
│ hive-skill-registry (GitHub)          │
│                                       │
│ registry/skills/deep-research/        │
│   ├── SKILL.md                        │
│   ├── scripts/                        │
│   └── evals/                          │
│ registry/packs/research-pack.json     │
│ skill_index.json (auto-built)         │
└──────────────┬────────────────────────┘
               │ hive skill install
               ▼
┌──────────────────────────────────────────────────────────────────────┐
│ Skill Sources                                                        │
│                                                                      │
│ ~/.hive/skills/          .agents/skills/         <hive>/skills/      │
│ (user, Hive-specific)    (project, cross-        defaults/           │
│                          client portable)        (framework built-   │
│                                                  in defaults)        │
└──────────────────────┬───────────────────────────────────────────────┘
                       │
                       ▼
              ┌────────────────────┐
              │ SkillDiscovery     │
              │                    │
              │ scan() → catalog   │
              │ parse SKILL.md     │
              │ resolve collisions │
              └────────┬───────────┘
                       │
           ┌───────────┴───────────┐
           │                       │
           ▼                       ▼
┌──────────────────┐   ┌───────────────────────┐
│ Community Skills │   │ Default Skills        │
│                  │   │                       │
│ Catalog injected │   │ DefaultSkillManager   │
│ into system      │   │ • prompt injection    │
│ prompt (tier 1)  │   │ • iteration hooks     │
│                  │   │ • completion hooks    │
│ Activated on     │   │ • transition hooks    │
│ demand (tier 2)  │   │                       │
│                  │   │ Always active         │
│ Agent follows    │   │ (unless disabled)     │
│ SKILL.md         │   │                       │
│ instructions     │   │ Protocols woven into  │
│                  │   │ existing node prompts │
└──────────────────┘   └───────────────────────┘
           │                       │
           └───────────┬───────────┘
                       │
                       ▼
              ┌────────────────────┐
              │ EventLoopNode      │
              │                    │
              │ System prompt =    │
              │   agent prompt     │
              │   + node prompt    │
              │   + default skill  │
              │     protocols      │
              │   + activated      │
              │     skill          │
              │     instructions   │
              │                    │
              │ Same iteration     │
              │ loop, tools,       │
              │ judges             │
              └────────────────────┘
```

### Component Responsibilities

| Component | Responsibility |
| -------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| **SkillDiscovery** | Scan skill directories, parse `SKILL.md`, resolve collisions, build catalog |
| **SkillCatalog** | In-memory index of discovered skills; injected into system prompt at session start |
| **DefaultSkillManager** | Load, configure, and inject the 6 built-in default skills; manage prompt injection and hook registration |
| **EventLoopNode** (extended) | New hook points for default skills: iteration callbacks, completion hooks. Appends default protocols and activated skill content to system prompt. |
| **AgentRunner** (extended) | Resolve `skills` (pre-activation) and `default_skills` config; trigger discovery; log skill summary at startup |
| **hive skill CLI** | User-facing commands for install, search, validate, test, doctor |
| **hive-skill-registry** (GitHub) | Community-curated skill packages; CI validation; trust tiers; starter packs |

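The SkillCatalog responsibility (tier-1 injection) amounts to rendering a compact block into the system prompt. A minimal sketch, assuming a plain-dict skill record; the exact heading text and activation hint are illustrative, not the shipped format:

```python
def render_catalog(skills: list[dict]) -> str:
    """Render the tier-1 skill catalog for system prompt injection:
    name, description, and location only, keeping per-skill cost small
    (~100 tokens each, per the risk table's progressive-disclosure note)."""
    lines = ["## Available Skills", ""]
    for s in skills:
        lines.append(f"- `{s['name']}` ({s['path']}): {s['description']}")
    lines += ["", "To activate a skill, read its SKILL.md with the file-read tool."]
    return "\n".join(lines)
```

Because the catalog carries only name, description, and path, the full `SKILL.md` body (tier 2) is paid for only when the model or user actually activates a skill.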
---

## 9. Risks & Mitigations

| Risk | Impact | Likelihood | Mitigation |
| ----------------------------------------------------- | -------------------------------------------------------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Agent Skills spec evolves in breaking ways | Hive implementation falls out of sync | Low | Standard is backed by Anthropic and adopted by 30+ products; changes are conservative. Track spec repo; participate in governance. |
| Low community adoption — nobody submits skills | Registry empty, no value | Medium | Seed with 10+ skills from existing templates + ported from `github.com/anthropics/skills`; bounty program; `hive skill init` trivializes creation |
| Prompt injection via malicious skill instructions | Skill manipulates agent behavior | Medium | Trust gating for project-level skills; maintainer review on registry PRs; `verified` tier requires audit; security notice on install |
| Default skill prompts bloat system prompt | Reduced token budget for reasoning | Medium | Hard cap of 2000 tokens total; individually disableable; terse checklist format |
| Default skills create rigid behavior for simple tasks | Agent follows batch protocol on trivial single-item task | Medium | `auto_detect_batch` heuristic; `task_decomposition` threshold defaults to `auto`; all defaults individually disableable |
| Context window consumed by too many active skills | Multiple skills + default skills exhaust context | Medium | Progressive disclosure limits base cost (~100 tokens/skill); skills activated one-at-a-time on demand; skill body recommended <5000 tokens; default skills capped at 2000 tokens |
| Skill quality inconsistent across registry | Users install ineffective skills | Medium | Trust tiers; eval framework in CI; `hive skill test`; community signals (install count); `deprecated` flag |

---

## 10. Backward Compatibility

This system is **fully additive**:

- Existing agents without skills continue to work unchanged.
- Default skills are loaded automatically but are behaviorally non-breaking: they add operational instructions to system prompts but do not change graph structure, tool availability, or output contracts.
- Default skills can be fully disabled via `"default_skills": {"_all": {"enabled": false}}`.
- Agents without a `skills` list load zero community skills (model may still activate from catalog).
- The `GraphExecutor` is unchanged — no new execution model.
- Existing `tools.py`, `mcp_servers.json`, and `mcp_registry.json` work alongside skills.
- Skills from the Agent Skills ecosystem (Claude Code, Cursor, etc.) work without modification.

---

## 11. Interaction with MCP Registry

Skills and MCP servers are complementary:

| Concern | MCP Registry | Skill System |
| -------------- | ------------------------------------------ | ----------------------------------------------- |
| What it shares | Tool infrastructure (servers, connections) | Agent behavior (instructions, prompts, scripts) |
| Format | Manifest JSON (Hive-specific) | `SKILL.md` (open standard) |
| Granularity | Atomic tool functions | Multi-step behavioral patterns |

**Integration:** Skills reference tools by name in their `SKILL.md` instructions; the agent resolves them via the normal tool registry. If a skill requires a tool that isn't available, the agent will encounter an error at execution time — `hive skill doctor` can pre-check this.

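The doctor-style pre-check described above can be approximated by scanning a skill's instructions for tool names and diffing them against the registry. This is a rough sketch under a strong assumption: it treats any backtick-quoted identifier in `SKILL.md` as a potential tool reference, a convention the spec does not mandate.

```python
import re

def precheck_tool_refs(skill_md: str, available_tools: set[str]) -> list[str]:
    """Flag backtick-quoted identifiers referenced in a SKILL.md body that
    the tool registry does not provide. Heuristic only: the backtick
    convention is an assumption, so treat hits as warnings, not errors."""
    referenced = set(re.findall(r"`([a-z_][a-z0-9_]*)`", skill_md))
    return sorted(t for t in referenced if t not in available_tools)
```

Running this at `hive skill doctor` time would surface missing tools before an agent hits the error mid-session.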
---

## 12. Documentation & Examples Strategy

| Doc | Audience | Deliverable |
| -------------------------------------- | ----------------- | ------------------------------------------------------------------------------ |
| "Install and use your first skill" | Users | From `hive skill search` to skill activating in a session |
| "Write your first skill" | Contributors | Step-by-step: `hive skill init` → write SKILL.md → validate → submit PR |
| "Port a skill from Claude Code/Cursor" | Contributors | Usually just install it — guide explains verification |
| "Default skills reference" | All users | All 6 defaults: purpose, config, shared memory keys, tuning |
| "Tuning default skills" | Advanced builders | When to disable vs. configure; per-agent overrides; measuring impact |
| Skill cookbook | Contributors | Annotated examples: research, triage, draft, review, outreach, data extraction |
| "Evaluating skill quality" | Contributors | Setting up evals, writing assertions, iterating with the eval-driven loop |
| Starter pack guide | Users | Finding, installing, and customizing starter packs |

---

## 13. Phased Delivery

| Phase | Scope | Depends On |
| --------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| **Phase 0: Default Skills** | Implement 6 default skills as `SKILL.md` packages; `DefaultSkillManager` with system prompt injection, iteration callbacks, node completion hooks, phase transition hooks; `DefaultSkillConfig` in Python API and `agent.json`; `_`-prefixed shared memory convention; startup logging | — |
| **Phase 1: Agent Skills Standard** | `SkillDiscovery` scanning `.agents/skills/` and `.hive/skills/`; `SKILL.md` parsing with lenient validation; progressive disclosure (catalog injection, activation, resource loading); model-driven and user-driven activation; context protection; deduplication; pre-activated skills config; compatibility tests against `github.com/anthropics/skills` | — |
| **Phase 2: CLI & Contributor Tooling** | `hive skill init`, `validate`, `test`, `fork`; `hive skill doctor`; `hive skill install/remove/list/search/info/update`; version pinning; `skills-ref` integration for validation | Phase 1 |
| **Phase 3: Registry Repo** | Create `hive-skill-registry` GitHub repo; CI validation using `skills-ref`; `_template/`; `CONTRIBUTING.md`; seed with 10+ skills (extracted from templates + ported from anthropics/skills); eval CI | Phase 1 |
| **Phase 4: Docs & Launch** | All documentation from section 12; example agents using skills; announcement; bounty program integration | Phase 2, 3 |
| **Phase 5: Community Growth** | Trust tier promotion process; starter packs; community signals (install counts); monthly skill spotlight; eval-driven quality ranking | Phase 4 |
| **Phase 6: Advanced Features** (future) | Subagent delegation for skill execution; skill-level telemetry; AI-assisted skill creation | Phase 5 |

Phase 0 and Phase 1 can proceed in parallel: default skills depend only on the prompt injection pipeline, while the Agent Skills standard work depends only on discovery, parsing, and activation, so the two tracks share no blocking dependency.

---

## 14. Open Questions

| # | Question | Owner | Status |
| --- | -------------------------------------------------------------------------------------------------------------------------------------- | ------------------- | ------ |
| Q1 | Should the registry repo live under `aden-hive` org or a shared `agentskills` org? | Platform | Open |
| Q2 | Should default skill protocols be adaptive (e.g., `hive.batch-ledger` adjusts checkpoint frequency based on item size)? | Engineering | Open |
| Q3 | Should default skills be tunable per-node (not just per-agent)? | Engineering | Open |
| Q4 | How should default skill protocols interact with existing `adapt.md` working memory? Should `_working_notes` replace or supplement it? | Engineering | Open |
| Q5 | Should `hive.quality-monitor` self-assessments feed into judge decisions (auto-trigger RETRY on self-reported degradation)? | Engineering | Open |
| Q6 | What is the right combined token budget for default skill prompts? 2000 tokens proposed — configurable or fixed? | Engineering | Open |
| Q7 | Should Hive support subagent delegation for skill execution (run skill in isolated session, return summary)? | Engineering | Open |
| Q8 | Should Hive also scan `.claude/skills/` for pragmatic compatibility with Claude Code's native skill location? | Engineering | Open |
| Q9 | What is the process for promoting a `community` skill to `verified`? | Platform + Security | Open |
| Q10 | Should the registry support private/enterprise skill indexes (`hive skill config --index-url`)? | Platform | Open |
| Q11 | Should `hive skill test` use the official `skills-ref` library or a Hive-native implementation? | Engineering | Open |
| Q12 | How should skill-level telemetry (activation counts, eval pass rates) be collected without compromising privacy? | Product + Privacy | Open |

---

## 15. Stakeholder Sign-Off

| Role | Name | Status |
| -------------------- | ---- | ------- |
| Engineering Lead | | Pending |
| Product | | Pending |
| OSS / Community | | Pending |
| Security | | Pending |
| Developer Experience | | Pending |

@@ -10,6 +10,9 @@
 
 $ErrorActionPreference = "Stop"
 $ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Definition
+$UvHelperPath = Join-Path $ScriptDir "scripts\uv-discovery.ps1"
+
+. $UvHelperPath
 
 # ── Validate project directory ──────────────────────────────────────
 
@@ -30,16 +33,12 @@ if (-not (Test-Path (Join-Path $ScriptDir ".venv"))) {
 
 # ── Ensure uv is available ──────────────────────────────────────────
 
-if (-not (Get-Command uv -ErrorAction SilentlyContinue)) {
-    # Check default install location before giving up
-    $uvExe = Join-Path $env:USERPROFILE ".local\bin\uv.exe"
-    if (Test-Path $uvExe) {
-        $env:Path = (Split-Path $uvExe) + ";" + $env:Path
-    } else {
-        Write-Error "uv is not installed. Run .\quickstart.ps1 first."
-        exit 1
-    }
+$uvInfo = Get-WorkingUvInfo
+if (-not $uvInfo) {
+    Write-Error "uv is not installed or is not runnable. Run .\quickstart.ps1 first."
+    exit 1
 }
+$uvExe = $uvInfo.Path
 
 # ── Load environment variables from Windows Registry ────────────────
 # Windows stores User-level env vars in the registry. New terminal
@@ -80,4 +79,4 @@ if (-not $env:HIVE_CREDENTIAL_KEY) {
 # ── Run the Hive CLI ────────────────────────────────────────────────
 # PYTHONUTF8=1: use UTF-8 for default encoding (fixes charmap decode errors on Windows)
 $env:PYTHONUTF8 = "1"
-& uv run hive @args
+& $uvExe run hive @args
+89 -98
@@ -6,7 +6,7 @@
 .DESCRIPTION
 An interactive setup wizard that:
 1. Installs Python dependencies via uv
-2. Installs Playwright browser for web scraping
+2. Checks for Chrome/Edge browser for web automation
 3. Helps configure LLM API keys
 4. Verifies everything works
 
@@ -18,6 +18,10 @@
 # Use "Continue" so stderr from external tools (uv, python) does not
 # terminate the script. Errors are handled via $LASTEXITCODE checks.
 $ErrorActionPreference = "Continue"
+$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Definition
+$UvHelperPath = Join-Path $ScriptDir "scripts\uv-discovery.ps1"
+
+. $UvHelperPath
 
 # ============================================================
 # Colors / helpers
@@ -95,7 +99,6 @@ function Prompt-Choice {
     }
 }
 
-
 # ============================================================
 # Windows Defender Exclusion Functions
 # ============================================================
@@ -276,9 +279,6 @@ function Add-DefenderExclusions {
     }
 }
 
-# Get the directory where this script lives
-$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Definition
-
 # ============================================================
 # Banner
 # ============================================================
@@ -352,10 +352,10 @@
 # Check / install uv
 # ============================================================
 
-$uvCmd = Get-Command uv -ErrorAction SilentlyContinue
+$uvInfo = Get-WorkingUvInfo
 
 # If uv not in PATH, check if it exists in default location
-if (-not $uvCmd) {
+if (-not $uvInfo) {
     $uvDir = Join-Path $env:USERPROFILE ".local\bin"
     $uvExePath = Join-Path $uvDir "uv.exe"
 
@@ -371,16 +371,16 @@ if (-not $uvCmd) {
 
     # Refresh PATH for current session
     $env:Path = [System.Environment]::GetEnvironmentVariable("Path", "User") + ";" + [System.Environment]::GetEnvironmentVariable("Path", "Machine")
-    $uvCmd = Get-Command uv -ErrorAction SilentlyContinue
+    $uvInfo = Get-WorkingUvInfo
 
-    if ($uvCmd) {
+    if ($uvInfo) {
         Write-Ok "uv is now in PATH"
     }
 }
 
 # If still not found, install it
-if (-not $uvCmd) {
+if (-not $uvInfo) {
     Write-Warn "uv not found. Installing..."
     try {
         # Official uv installer for Windows
@@ -397,13 +397,13 @@ if (-not $uvCmd) {
 
         # Refresh PATH for current session
         $env:Path = [System.Environment]::GetEnvironmentVariable("Path", "User") + ";" + [System.Environment]::GetEnvironmentVariable("Path", "Machine")
-        $uvCmd = Get-Command uv -ErrorAction SilentlyContinue
+        $uvInfo = Get-WorkingUvInfo
     } catch {
        Write-Color -Text "Error: uv installation failed" -Color Red
        Write-Host "Please install uv manually from https://astral.sh/uv/"
        exit 1
    }
-    if (-not $uvCmd) {
+    if (-not $uvInfo) {
        Write-Color -Text "Error: uv not found after installation" -Color Red
        Write-Host "Please close and reopen PowerShell, then run this script again."
        Write-Host "Or install uv manually from https://astral.sh/uv/"
@@ -412,8 +412,8 @@
     Write-Ok "uv installed successfully"
 }
 
-$uvVersion = & uv --version
-Write-Ok "uv detected: $uvVersion"
+$UvCmd = $uvInfo.Path
+Write-Ok "uv detected: $($uvInfo.Version)"
 Write-Host ""
 
 # Check for Node.js (needed for frontend dashboard)
@@ -503,7 +503,7 @@ try {
     if (Test-Path "pyproject.toml") {
         Write-Host "  Installing workspace packages... " -NoNewline
 
-        $syncOutput = & uv sync 2>&1
+        $syncOutput = & $UvCmd sync 2>&1
         $syncExitCode = $LASTEXITCODE
 
         if ($syncExitCode -eq 0) {
@@ -518,22 +518,14 @@ try {
         exit 1
     }
 
-    # Install Playwright browser
-    Write-Host "  Installing Playwright browser... " -NoNewline
-    $null = & uv run python -c "import playwright" 2>&1
-    $importExitCode = $LASTEXITCODE
-    if ($importExitCode -eq 0) {
-        $null = & uv run python -m playwright install chromium 2>&1
-        $playwrightExitCode = $LASTEXITCODE
-
-        if ($playwrightExitCode -eq 0) {
-            Write-Ok "ok"
-        } else {
-            Write-Warn "skipped (install manually: uv run python -m playwright install chromium)"
-        }
+    # Keep browser setup scoped to detecting the system browser used by GCU.
+    Write-Host "  Checking for Chrome/Edge browser... " -NoNewline
+    $null = & $UvCmd run python -c "from gcu.browser.chrome_finder import find_chrome; assert find_chrome()" 2>&1
+    $chromeCheckExit = $LASTEXITCODE
+    if ($chromeCheckExit -eq 0) {
+        Write-Ok "ok"
     } else {
-
-        Write-Warn "skipped"
+        Write-Warn "not found - install Chrome or Edge for browser tools"
     }
 } finally {
     Pop-Location
@@ -728,7 +720,7 @@ $imports = @(
 $modulesToCheck = @("framework", "aden_tools", "litellm")
 
 try {
-    $checkOutput = & uv run python scripts/check_requirements.py @modulesToCheck 2>&1 | Out-String
+    $checkOutput = & $UvCmd run python scripts/check_requirements.py @modulesToCheck 2>&1 | Out-String
     $resultJson = $null
 
     # Try to parse JSON result
@@ -772,14 +764,6 @@ if ($importErrors -gt 0) {
 }
 Write-Host ""
 
-# ============================================================
-# Step 4: Verify Claude Code Skills
-# ============================================================
-
-Write-Step -Number "4" -Text "Step 4: Verifying Claude Code skills..."
-
-# (skills check is informational only, shown in final verification)
-
 # ============================================================
 # Provider / model data
 # ============================================================
@@ -810,26 +794,26 @@ $DefaultModels = @{
 # Model choices: array of hashtables per provider
 $ModelChoices = @{
     anthropic = @(
-        @{ Id = "claude-haiku-4-5-20251001"; Label = "Haiku 4.5 - Fast + cheap (recommended)"; MaxTokens = 8192 },
-        @{ Id = "claude-sonnet-4-20250514"; Label = "Sonnet 4 - Fast + capable"; MaxTokens = 8192 },
-        @{ Id = "claude-sonnet-4-5-20250929"; Label = "Sonnet 4.5 - Best balance"; MaxTokens = 16384 },
-        @{ Id = "claude-opus-4-6"; Label = "Opus 4.6 - Most capable"; MaxTokens = 32768 }
+        @{ Id = "claude-haiku-4-5-20251001"; Label = "Haiku 4.5 - Fast + cheap (recommended)"; MaxTokens = 8192; MaxContextTokens = 180000 },
+        @{ Id = "claude-sonnet-4-20250514"; Label = "Sonnet 4 - Fast + capable"; MaxTokens = 8192; MaxContextTokens = 180000 },
+        @{ Id = "claude-sonnet-4-5-20250929"; Label = "Sonnet 4.5 - Best balance"; MaxTokens = 16384; MaxContextTokens = 180000 },
+        @{ Id = "claude-opus-4-6"; Label = "Opus 4.6 - Most capable"; MaxTokens = 32768; MaxContextTokens = 180000 }
     )
     openai = @(
-        @{ Id = "gpt-5-mini"; Label = "GPT-5 Mini - Fast + cheap (recommended)"; MaxTokens = 16384 },
-        @{ Id = "gpt-5.2"; Label = "GPT-5.2 - Most capable"; MaxTokens = 16384 }
+        @{ Id = "gpt-5-mini"; Label = "GPT-5 Mini - Fast + cheap (recommended)"; MaxTokens = 16384; MaxContextTokens = 120000 },
+        @{ Id = "gpt-5.2"; Label = "GPT-5.2 - Most capable"; MaxTokens = 16384; MaxContextTokens = 120000 }
    )
     gemini = @(
-        @{ Id = "gemini-3-flash-preview"; Label = "Gemini 3 Flash - Fast (recommended)"; MaxTokens = 8192 },
-        @{ Id = "gemini-3.1-pro-preview"; Label = "Gemini 3.1 Pro - Best quality"; MaxTokens = 8192 }
+        @{ Id = "gemini-3-flash-preview"; Label = "Gemini 3 Flash - Fast (recommended)"; MaxTokens = 8192; MaxContextTokens = 900000 },
+        @{ Id = "gemini-3.1-pro-preview"; Label = "Gemini 3.1 Pro - Best quality"; MaxTokens = 8192; MaxContextTokens = 900000 }
    )
     groq = @(
-        @{ Id = "moonshotai/kimi-k2-instruct-0905"; Label = "Kimi K2 - Best quality (recommended)"; MaxTokens = 8192 },
-        @{ Id = "openai/gpt-oss-120b"; Label = "GPT-OSS 120B - Fast reasoning"; MaxTokens = 8192 }
+        @{ Id = "moonshotai/kimi-k2-instruct-0905"; Label = "Kimi K2 - Best quality (recommended)"; MaxTokens = 8192; MaxContextTokens = 120000 },
+        @{ Id = "openai/gpt-oss-120b"; Label = "GPT-OSS 120B - Fast reasoning"; MaxTokens = 8192; MaxContextTokens = 120000 }
    )
     cerebras = @(
-        @{ Id = "zai-glm-4.7"; Label = "ZAI-GLM 4.7 - Best quality (recommended)"; MaxTokens = 8192 },
-        @{ Id = "qwen3-235b-a22b-instruct-2507"; Label = "Qwen3 235B - Frontier reasoning"; MaxTokens = 8192 }
+        @{ Id = "zai-glm-4.7"; Label = "ZAI-GLM 4.7 - Best quality (recommended)"; MaxTokens = 8192; MaxContextTokens = 120000 },
+        @{ Id = "qwen3-235b-a22b-instruct-2507"; Label = "Qwen3 235B - Frontier reasoning"; MaxTokens = 8192; MaxContextTokens = 120000 }
    )
 }
 
@@ -838,10 +822,10 @@ function Get-ModelSelection {
 
     $choices = $ModelChoices[$ProviderId]
     if (-not $choices -or $choices.Count -eq 0) {
-        return @{ Model = $DefaultModels[$ProviderId]; MaxTokens = 8192 }
+        return @{ Model = $DefaultModels[$ProviderId]; MaxTokens = 8192; MaxContextTokens = 120000 }
    }
     if ($choices.Count -eq 1) {
-        return @{ Model = $choices[0].Id; MaxTokens = $choices[0].MaxTokens }
+        return @{ Model = $choices[0].Id; MaxTokens = $choices[0].MaxTokens; MaxContextTokens = $choices[0].MaxContextTokens }
    }
 
     # Find default index from previous model (if same provider)
@@ -874,7 +858,7 @@ function Get-ModelSelection {
             $sel = $choices[$num - 1]
             Write-Host ""
             Write-Ok "Model: $($sel.Id)"
-            return @{ Model = $sel.Id; MaxTokens = $sel.MaxTokens }
+            return @{ Model = $sel.Id; MaxTokens = $sel.MaxTokens; MaxContextTokens = $sel.MaxContextTokens }
         }
     }
     Write-Color -Text "Invalid choice. Please enter 1-$($choices.Count)" -Color Red
@@ -891,11 +875,12 @@ Write-Step -Number "" -Text "Configuring LLM provider..."
 $HiveConfigDir = Join-Path $env:USERPROFILE ".hive"
 $HiveConfigFile = Join-Path $HiveConfigDir "configuration.json"
 
-$SelectedProviderId = ""
-$SelectedEnvVar = ""
-$SelectedModel = ""
-$SelectedMaxTokens = 8192
-$SubscriptionMode = ""
+$SelectedProviderId = ""
+$SelectedEnvVar = ""
+$SelectedModel = ""
+$SelectedMaxTokens = 8192
+$SelectedMaxContextTokens = 120000
+$SubscriptionMode = ""
 
 # ── Credential detection (silent — just set flags) ───────────
 $ClaudeCredDetected = $false
@@ -1071,20 +1056,22 @@ switch ($num) {
             Write-Host ""
             exit 1
         }
-        $SubscriptionMode = "claude_code"
-        $SelectedProviderId = "anthropic"
-        $SelectedModel = "claude-opus-4-6"
-        $SelectedMaxTokens = 32768
+        $SubscriptionMode = "claude_code"
+        $SelectedProviderId = "anthropic"
+        $SelectedModel = "claude-opus-4-6"
+        $SelectedMaxTokens = 32768
+        $SelectedMaxContextTokens = 180000
         Write-Host ""
         Write-Ok "Using Claude Code subscription"
     }
     2 {
         # ZAI Code Subscription
-        $SubscriptionMode = "zai_code"
-        $SelectedProviderId = "openai"
-        $SelectedEnvVar = "ZAI_API_KEY"
-        $SelectedModel = "glm-5"
-        $SelectedMaxTokens = 32768
+        $SubscriptionMode = "zai_code"
+        $SelectedProviderId = "openai"
+        $SelectedEnvVar = "ZAI_API_KEY"
+        $SelectedModel = "glm-5"
+        $SelectedMaxTokens = 32768
+        $SelectedMaxContextTokens = 120000
         Write-Host ""
         Write-Ok "Using ZAI Code subscription"
         Write-Color -Text "  Model: glm-5 | API: api.z.ai" -Color DarkGray
@@ -1096,7 +1083,7 @@ switch ($num) {
         Write-Warn "Codex credentials not found. Starting OAuth login..."
         Write-Host ""
         try {
-            & uv run python (Join-Path $ScriptDir "core\codex_oauth.py") 2>&1
+            & $UvCmd run python (Join-Path $ScriptDir "core\codex_oauth.py") 2>&1
             if ($LASTEXITCODE -eq 0) {
                 $CodexCredDetected = $true
             } else {
@@ -1113,21 +1100,23 @@ switch ($num) {
         }
     }
     if ($CodexCredDetected) {
-        $SubscriptionMode = "codex"
-        $SelectedProviderId = "openai"
-        $SelectedModel = "gpt-5.3-codex"
-        $SelectedMaxTokens = 16384
+        $SubscriptionMode = "codex"
+        $SelectedProviderId = "openai"
+        $SelectedModel = "gpt-5.3-codex"
+        $SelectedMaxTokens = 16384
+        $SelectedMaxContextTokens = 120000
         Write-Host ""
         Write-Ok "Using OpenAI Codex subscription"
     }
     }
     4 {
         # Kimi Code Subscription
-        $SubscriptionMode = "kimi_code"
-        $SelectedProviderId = "kimi"
-        $SelectedEnvVar = "KIMI_API_KEY"
-        $SelectedModel = "kimi-k2.5"
-        $SelectedMaxTokens = 32768
+        $SubscriptionMode = "kimi_code"
+        $SelectedProviderId = "kimi"
+        $SelectedEnvVar = "KIMI_API_KEY"
+        $SelectedModel = "kimi-k2.5"
+        $SelectedMaxTokens = 32768
+        $SelectedMaxContextTokens = 120000
|
||||
Write-Host ""
|
||||
Write-Ok "Using Kimi Code subscription"
|
||||
Write-Color -Text " Model: kimi-k2.5 | API: api.kimi.com/coding" -Color DarkGray
|
||||
@@ -1167,7 +1156,7 @@ switch ($num) {
|
||||
# Health check the new key
|
||||
Write-Host " Verifying API key... " -NoNewline
|
||||
try {
|
||||
$hcResult = & uv run python (Join-Path $ScriptDir "scripts/check_llm_key.py") $SelectedProviderId $apiKey 2>$null
|
||||
$hcResult = & $UvCmd run python (Join-Path $ScriptDir "scripts/check_llm_key.py") $SelectedProviderId $apiKey 2>$null
|
||||
$hcJson = $hcResult | ConvertFrom-Json
|
||||
if ($hcJson.valid -eq $true) {
|
||||
Write-Color -Text "ok" -Color Green
|
||||
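The health check above shells out to `scripts/check_llm_key.py` and parses its stdout with `ConvertFrom-Json`, branching on a `valid` field. That script's internals are not part of this diff; the following is only a minimal Python sketch of the contract the installer relies on, with a placeholder probe and assumed extra output fields:

```python
import json

def check_llm_key(provider: str, api_key: str, base_url: str = "") -> dict:
    """Hypothetical stand-in for scripts/check_llm_key.py: probe the
    provider with the key and return a JSON-serializable verdict."""
    # A real implementation would make a cheap authenticated API call here;
    # this length check is a placeholder, not the script's actual logic.
    valid = bool(api_key) and len(api_key) > 8
    return {"valid": valid, "provider": provider, "base_url": base_url}

# The PowerShell side does: $hcJson = $hcResult | ConvertFrom-Json
# and then branches on $hcJson.valid, so only the "valid" field matters.
print(json.dumps(check_llm_key("openai", "sk-example-key-123")))
```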
@@ -1242,7 +1231,7 @@ if ($SubscriptionMode -eq "zai_code") {
 # Health check the new key
 Write-Host " Verifying ZAI API key... " -NoNewline
 try {
-$hcResult = & uv run python (Join-Path $ScriptDir "scripts/check_llm_key.py") "zai" $apiKey "https://api.z.ai/api/coding/paas/v4" 2>$null
+$hcResult = & $UvCmd run python (Join-Path $ScriptDir "scripts/check_llm_key.py") "zai" $apiKey "https://api.z.ai/api/coding/paas/v4" 2>$null
 $hcJson = $hcResult | ConvertFrom-Json
 if ($hcJson.valid -eq $true) {
 Write-Color -Text "ok" -Color Green
@@ -1310,7 +1299,7 @@ if ($SubscriptionMode -eq "kimi_code") {
 # Health check the new key
 Write-Host " Verifying Kimi API key... " -NoNewline
 try {
-$hcResult = & uv run python (Join-Path $ScriptDir "scripts/check_llm_key.py") "kimi" $apiKey "https://api.kimi.com/coding" 2>$null
+$hcResult = & $UvCmd run python (Join-Path $ScriptDir "scripts/check_llm_key.py") "kimi" $apiKey "https://api.kimi.com/coding" 2>$null
 $hcJson = $hcResult | ConvertFrom-Json
 if ($hcJson.valid -eq $true) {
 Write-Color -Text "ok" -Color Green
@@ -1349,8 +1338,9 @@ if ($SubscriptionMode -eq "kimi_code") {
 # Prompt for model if not already selected (manual provider path)
 if ($SelectedProviderId -and -not $SelectedModel) {
 $modelSel = Get-ModelSelection $SelectedProviderId
-$SelectedModel = $modelSel.Model
-$SelectedMaxTokens = $modelSel.MaxTokens
+$SelectedModel = $modelSel.Model
+$SelectedMaxTokens = $modelSel.MaxTokens
+$SelectedMaxContextTokens = $modelSel.MaxContextTokens
 }

 # Save configuration
@@ -1367,9 +1357,10 @@ if ($SelectedProviderId) {

 $config = @{
 llm = @{
-provider = $SelectedProviderId
-model = $SelectedModel
-max_tokens = $SelectedMaxTokens
+provider = $SelectedProviderId
+model = $SelectedModel
+max_tokens = $SelectedMaxTokens
+max_context_tokens = $SelectedMaxContextTokens
 }
 created_at = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss+00:00")
 }
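The `$config` hashtable above is serialized to `~/.hive/configuration.json`. Sketched in Python, the resulting JSON shape looks like this; the field names come from the diff, while the concrete values are illustrative:

```python
import json
from datetime import datetime, timezone

config = {
    "llm": {
        "provider": "anthropic",       # $SelectedProviderId
        "model": "claude-opus-4-6",    # $SelectedModel
        "max_tokens": 32768,           # $SelectedMaxTokens
        "max_context_tokens": 180000,  # new field added by this change
    },
    # Mirrors (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss+00:00")
    "created_at": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S+00:00"),
}
print(json.dumps(config, indent=2))
```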
@@ -1395,7 +1386,7 @@ if ($SelectedProviderId) {
 Write-Host ""

 # ============================================================
-# Step 5b: Browser Automation (GCU) — always enabled
+# Browser Automation (GCU) — always enabled
 # ============================================================

 Write-Host ""
@@ -1420,10 +1411,10 @@ if (Test-Path $HiveConfigFile) {
 Write-Host ""

 # ============================================================
-# Step 6: Initialize Credential Store
+# Step 4: Initialize Credential Store
 # ============================================================

-Write-Step -Number "5" -Text "Step 5: Initializing credential store..."
+Write-Step -Number "4" -Text "Step 4: Initializing credential store..."
 Write-Color -Text "The credential store encrypts API keys and secrets for your agents." -Color DarkGray
 Write-Host ""

@@ -1460,7 +1451,7 @@ if ($credKey) {
 } else {
 Write-Host " Generating encryption key... " -NoNewline
 try {
-$generatedKey = & uv run python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())" 2>$null
+$generatedKey = & $UvCmd run python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())" 2>$null
 if ($LASTEXITCODE -eq 0 -and $generatedKey) {
 Write-Ok "ok"
 $generatedKey = $generatedKey.Trim()
@@ -1509,7 +1500,7 @@ if ($credKey) {
 Write-Ok "Credential store initialized at ~/.hive/credentials/"

 Write-Host " Verifying credential store... " -NoNewline
-$verifyOut = & uv run python -c "from framework.credentials.storage import EncryptedFileStorage; storage = EncryptedFileStorage(); print('ok')" 2>$null
+$verifyOut = & $UvCmd run python -c "from framework.credentials.storage import EncryptedFileStorage; storage = EncryptedFileStorage(); print('ok')" 2>$null
 if ($verifyOut -match "ok") {
 Write-Ok "ok"
 } else {
@@ -1519,10 +1510,10 @@ if ($credKey) {
 Write-Host ""

 # ============================================================
-# Step 6: Verify Setup
+# Step 5: Verify Setup
 # ============================================================

-Write-Step -Number "6" -Text "Step 6: Verifying installation..."
+Write-Step -Number "5" -Text "Step 5: Verifying installation..."

 $verifyErrors = 0

@@ -1530,7 +1521,7 @@ $verifyErrors = 0
 $verifyModules = @("framework", "aden_tools")

 try {
-$verifyOutput = & uv run python scripts/check_requirements.py @verifyModules 2>&1 | Out-String
+$verifyOutput = & $UvCmd run python scripts/check_requirements.py @verifyModules 2>&1 | Out-String
 $verifyJson = $null

 try {
@@ -1540,7 +1531,7 @@ try {
 # Fall back to basic checks if JSON parsing fails
 foreach ($mod in $verifyModules) {
 Write-Host " $([char]0x2B21) $mod... " -NoNewline
-$null = & uv run python -c "import $mod" 2>&1
+$null = & $UvCmd run python -c "import $mod" 2>&1
 if ($LASTEXITCODE -eq 0) { Write-Ok "ok" }
 else { Write-Fail "failed"; $verifyErrors++ }
 }
@@ -1560,7 +1551,7 @@ try {
 }

 Write-Host " $([char]0x2B21) litellm... " -NoNewline
-$null = & uv run python -c "import litellm" 2>&1
+$null = & $UvCmd run python -c "import litellm" 2>&1
 if ($LASTEXITCODE -eq 0) { Write-Ok "ok" } else { Write-Warn "skipped" }

 Write-Host " $([char]0x2B21) MCP config... " -NoNewline
@@ -1626,10 +1617,10 @@ if ($verifyErrors -gt 0) {
 }

 # ============================================================
-# Step 7: Install hive CLI wrapper
+# Step 6: Install hive CLI wrapper
 # ============================================================

-Write-Step -Number "7" -Text "Step 7: Installing hive CLI..."
+Write-Step -Number "6" -Text "Step 6: Installing hive CLI..."

 # Verify hive.ps1 wrapper exists in project root
 $hivePs1Path = Join-Path $ScriptDir "hive.ps1"
+14 −32
@@ -4,7 +4,7 @@
 #
 # An interactive setup wizard that:
 # 1. Installs Python dependencies
-# 2. Installs Playwright browser for web scraping
+# 2. Checks for Chrome/Edge browser for web automation
 # 3. Helps configure LLM API keys
 # 4. Verifies everything works
 #
@@ -253,16 +253,12 @@ else
 exit 1
 fi

-# Install Playwright browser
-echo -n " Installing Playwright browser... "
-if uv run python -c "import playwright" > /dev/null 2>&1; then
-if uv run python -m playwright install chromium > /dev/null 2>&1; then
-echo -e "${GREEN}ok${NC}"
-else
-echo -e "${YELLOW}⏭${NC}"
-fi
+# Check for Chrome/Edge (required for GCU browser tools)
+echo -n " Checking for Chrome/Edge browser... "
+if uv run python -c "from gcu.browser.chrome_finder import find_chrome; assert find_chrome()" > /dev/null 2>&1; then
+echo -e "${GREEN}ok${NC}"
+else
+echo -e "${YELLOW}⏭${NC}"
+echo -e "${YELLOW}not found — install Chrome or Edge for browser tools${NC}"
+fi

 cd "$SCRIPT_DIR"
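The new check imports `find_chrome` from `gcu.browser.chrome_finder`, whose implementation is not shown in this diff. Its observable contract (return a usable Chrome/Edge binary, or a falsy value so the `assert` fails) can be sketched like this; the candidate list is an assumption, and the real module may also search OS-specific install paths:

```python
import shutil
from typing import Optional

# Assumed executable names, most common first; not the module's actual list.
_CANDIDATES = [
    "google-chrome-stable", "google-chrome", "chromium",
    "chromium-browser", "msedge", "microsoft-edge",
]

def find_chrome() -> Optional[str]:
    """Return the path of the first Chrome/Edge-like binary on PATH, else None."""
    for name in _CANDIDATES:
        path = shutil.which(name)
        if path:
            return path
    return None

# The bash probe then reduces to: assert find_chrome()
print(find_chrome())
```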
@@ -304,18 +300,11 @@ if [ "$NODE_AVAILABLE" = true ]; then
 echo ""
 fi

-# ============================================================
-# Step 3: Configure LLM API Key
-# ============================================================
-
-echo -e "${YELLOW}⬢${NC} ${BLUE}${BOLD}Step 3: Configuring LLM provider...${NC}"
-echo ""
-
 # ============================================================
 # Step 3: Verify Python Imports
 # ============================================================

-echo -e "${BLUE}Step 3: Verifying Python imports...${NC}"
+echo -e "${YELLOW}⬢${NC} ${BLUE}${BOLD}Step 3: Verifying Python imports...${NC}"
 echo ""

 IMPORT_ERRORS=0
@@ -371,13 +360,6 @@ fi

 echo ""

-# ============================================================
-# Step 4: Verify Claude Code Skills
-# ============================================================
-
-echo -e "${BLUE}Step 4: Verifying Claude Code skills...${NC}"
-echo ""
-
 # Provider configuration - use associative arrays (Bash 4+) or indexed arrays (Bash 3.2)
 if [ "$USE_ASSOC_ARRAYS" = true ]; then
 # Bash 4+ - use associative arrays (cleaner and more efficient)
@@ -1338,7 +1320,7 @@ fi
 echo ""

 # ============================================================
-# Step 4b: Browser Automation (GCU) — always enabled
+# Browser Automation (GCU) — always enabled
 # ============================================================

 echo -e "${GREEN}⬢${NC} Browser automation enabled"
@@ -1366,10 +1348,10 @@ fi
 echo ""

 # ============================================================
-# Step 5: Initialize Credential Store
+# Step 4: Initialize Credential Store
 # ============================================================

-echo -e "${YELLOW}⬢${NC} ${BLUE}${BOLD}Step 5: Initializing credential store...${NC}"
+echo -e "${YELLOW}⬢${NC} ${BLUE}${BOLD}Step 4: Initializing credential store...${NC}"
 echo ""
 echo -e "${DIM}The credential store encrypts API keys and secrets for your agents.${NC}"
 echo ""
@@ -1436,10 +1418,10 @@ fi
 echo ""

 # ============================================================
-# Step 6: Verify Setup
+# Step 5: Verify Setup
 # ============================================================

-echo -e "${YELLOW}⬢${NC} ${BLUE}${BOLD}Step 6: Verifying installation...${NC}"
+echo -e "${YELLOW}⬢${NC} ${BLUE}${BOLD}Step 5: Verifying installation...${NC}"
 echo ""

 ERRORS=0
@@ -1500,10 +1482,10 @@ if [ $ERRORS -gt 0 ]; then
 fi

 # ============================================================
-# Step 7: Install hive CLI globally
+# Step 6: Install hive CLI globally
 # ============================================================

-echo -e "${YELLOW}⬢${NC} ${BLUE}${BOLD}Step 7: Installing hive CLI...${NC}"
+echo -e "${YELLOW}⬢${NC} ${BLUE}${BOLD}Step 6: Installing hive CLI...${NC}"
 echo ""

 # Ensure ~/.local/bin exists and is in PATH
@@ -68,10 +68,16 @@ interface LeaderboardEntry {
 // ---------------------------------------------------------------------------

 const POINTS: Record<string, number> = {
+// Integration bounties
 "bounty:test": 20,
 "bounty:docs": 20,
 "bounty:code": 30,
 "bounty:new-tool": 75,
+// Standard bounties
+"bounty:small": 10,
+"bounty:medium": 30,
+"bounty:large": 75,
+"bounty:extreme": 150,
 };
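The expanded `POINTS` record drives leaderboard scoring. The same table, mirrored in Python with a small scoring helper for illustration (the values are copied from the diff; `score_labels` is a hypothetical helper, not code from the bot):

```python
POINTS = {
    # Integration bounties
    "bounty:test": 20,
    "bounty:docs": 20,
    "bounty:code": 30,
    "bounty:new-tool": 75,
    # Standard bounties
    "bounty:small": 10,
    "bounty:medium": 30,
    "bounty:large": 75,
    "bounty:extreme": 150,
}

def score_labels(labels):
    """Sum the point values of any bounty labels on an issue; unknown labels score 0."""
    return sum(POINTS.get(label, 0) for label in labels)

print(score_labels(["bounty:new-tool", "difficulty:easy"]))  # → 75
```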

 // ---------------------------------------------------------------------------
@@ -276,6 +282,10 @@ function formatBountyNotification(bounty: BountyResult): string {
 docs: "\u{1F4DD}",
 code: "\u{1F527}",
 "new-tool": "\u{2B50}",
+small: "\u{1F4A1}",
+medium: "\u{1F6E0}",
+large: "\u{1F680}",
+extreme: "\u{1F525}",
 };

 const emoji = typeEmoji[bounty.bountyType] ?? "\u{1F3AF}";
@@ -301,7 +311,7 @@ function formatLeaderboard(entries: LeaderboardEntry[]): string {

 const medals = ["\u{1F947}", "\u{1F948}", "\u{1F949}"];

-let msg = "**\u{1F3C6} Integration Bounty Leaderboard**\n\n";
+let msg = "**\u{1F3C6} Bounty Leaderboard**\n\n";

 for (let i = 0; i < top10.length; i++) {
 const entry = top10[i];
@@ -1,61 +0,0 @@
-#!/usr/bin/env bash
-#
-# setup-antigravity-mcp.sh - Write Antigravity/Claude MCP config with auto-detected paths
-#
-# Run from anywhere inside the hive repo. Generates ~/.gemini/antigravity/mcp_config.json
-# based on .agent/mcp_config.json template, with absolute paths so the IDE can
-# connect to tools MCP servers without manual path editing.
-#
-set -e
-
-# Find repo root
-REPO_ROOT=""
-if git rev-parse --show-toplevel &>/dev/null; then
-REPO_ROOT="$(git rev-parse --show-toplevel)"
-elif [ -f ".agent/mcp_config.json" ]; then
-REPO_ROOT="$(pwd)"
-else
-d="$(pwd)"
-while [ -n "$d" ] && [ "$d" != "/" ]; do
-[ -f "$d/.agent/mcp_config.json" ] && REPO_ROOT="$d" && break
-d="$(dirname "$d")"
-done
-fi
-
-if [ -z "$REPO_ROOT" ] || [ ! -d "$REPO_ROOT/core" ] || [ ! -d "$REPO_ROOT/tools" ]; then
-echo "Error: Run this script from inside the hive repo (could not find repo root with core/ and tools/)." >&2
-exit 1
-fi
-
-TEMPLATE="$REPO_ROOT/.agent/mcp_config.json"
-if [ ! -f "$TEMPLATE" ]; then
-echo "Error: Template not found at $TEMPLATE" >&2
-exit 1
-fi
-
-CORE_DIR="$(cd "$REPO_ROOT/core" && pwd)"
-TOOLS_DIR="$(cd "$REPO_ROOT/tools" && pwd)"
-
-mkdir -p "$HOME/.gemini/antigravity"
-
-# Generate config from template with absolute paths
-# Replace relative "core" and "tools" with absolute paths in --directory args
-sed -e "s|\"--directory\", \"core\"|\"--directory\", \"$CORE_DIR\"|g" \
--e "s|\"--directory\", \"tools\"|\"--directory\", \"$TOOLS_DIR\"|g" \
-"$TEMPLATE" > "$HOME/.gemini/antigravity/mcp_config.json"
-
-echo "Wrote $HOME/.gemini/antigravity/mcp_config.json (from $TEMPLATE)"
-echo " core -> $CORE_DIR"
-echo " tools -> $TOOLS_DIR"
-
-if [ "$1" = "--claude" ]; then
-mkdir -p "$HOME/.claude"
-cp "$HOME/.gemini/antigravity/mcp_config.json" "$HOME/.claude/mcp.json"
-echo "Wrote $HOME/.claude/mcp.json"
-fi
-
-echo ""
-echo "Next: Restart Antigravity IDE so it loads the MCP config."
-echo " Then open this repo; tools should appear."
-echo ""
-echo "For Claude Code, run: $0 --claude"
@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
-# Creates GitHub labels for the Integration Bounty Program.
+# Creates GitHub labels for the Bounty Program.
 # Usage: ./scripts/setup-bounty-labels.sh [owner/repo]
 # Requires: gh CLI authenticated

@@ -9,12 +9,18 @@ REPO="${1:-adenhq/hive}"

 echo "Setting up bounty labels for $REPO..."

-# Bounty type labels
+# Integration bounty labels
 gh label create "bounty:test" --repo "$REPO" --color "1D76DB" --description "Bounty: test a tool with real API key (20 pts)" --force
 gh label create "bounty:docs" --repo "$REPO" --color "FBCA04" --description "Bounty: write or improve documentation (20 pts)" --force
 gh label create "bounty:code" --repo "$REPO" --color "D93F0B" --description "Bounty: health checker, bug fix, or improvement (30 pts)" --force
 gh label create "bounty:new-tool" --repo "$REPO" --color "6F42C1" --description "Bounty: build a new integration from scratch (75 pts)" --force
+
+# Standard bounty labels
+gh label create "bounty:small" --repo "$REPO" --color "C2E0C6" --description "Bounty: quick fix — typos, links, error messages (10 pts)" --force
+gh label create "bounty:medium" --repo "$REPO" --color "0E8A16" --description "Bounty: bug fix, tests, guides, CLI improvements (30 pts)" --force
+gh label create "bounty:large" --repo "$REPO" --color "B60205" --description "Bounty: new feature, perf work, architecture docs (75 pts)" --force
+gh label create "bounty:extreme" --repo "$REPO" --color "000000" --description "Bounty: major subsystem, security audit, core refactor (150 pts)" --force

 # Difficulty labels
 gh label create "difficulty:easy" --repo "$REPO" --color "BFD4F2" --description "Good first contribution" --force
 gh label create "difficulty:medium" --repo "$REPO" --color "D4C5F9" --description "Requires some familiarity" --force
@@ -0,0 +1,44 @@
+function Get-WorkingUvInfo {
+    <#
+    .SYNOPSIS
+    Find a runnable uv executable, not just a PATH entry named "uv"
+    .OUTPUTS
+    Hashtable with Path and Version, or $null if no working uv is found
+    #>
+    # pyenv-win can expose a uv shim that exists on PATH but fails at runtime.
+    # Verify each candidate with `uv --version` before trusting it.
+    $candidates = @()
+
+    $commands = @(Get-Command uv -All -ErrorAction SilentlyContinue)
+    foreach ($cmd in $commands) {
+        if ($cmd.Source) {
+            $candidates += $cmd.Source
+        } elseif ($cmd.Definition) {
+            $candidates += $cmd.Definition
+        } elseif ($cmd.Name) {
+            $candidates += $cmd.Name
+        }
+    }
+
+    $defaultUvExe = Join-Path $env:USERPROFILE ".local\bin\uv.exe"
+    if (Test-Path $defaultUvExe) {
+        $candidates += $defaultUvExe
+    }
+
+    foreach ($candidate in ($candidates | Where-Object { $_ } | Select-Object -Unique)) {
+        try {
+            $versionOutput = & $candidate --version 2>$null
+            $version = ($versionOutput | Out-String).Trim()
+            if ($LASTEXITCODE -eq 0 -and -not [string]::IsNullOrWhiteSpace($version)) {
+                return @{
+                    Path = $candidate
+                    Version = $version
+                }
+            }
+        } catch {
+            # Try the next candidate.
+        }
+    }
+
+    return $null
+}
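The core trick in Get-WorkingUvInfo, not trusting a PATH hit until `--version` actually succeeds, translates to any language. A Python sketch of the same verify-before-trust probe (the function name and candidate handling are illustrative, not part of the repo):

```python
import shutil
import subprocess
import sys
from typing import Optional

def find_working_executable(candidates) -> Optional[str]:
    """Return the first candidate that both resolves and runs `--version`
    successfully, mirroring Get-WorkingUvInfo's loop over uv candidates."""
    for candidate in candidates:
        path = shutil.which(candidate) or candidate
        try:
            result = subprocess.run(
                [path, "--version"], capture_output=True, text=True, timeout=10
            )
        except (OSError, subprocess.TimeoutExpired):
            continue  # broken shim (e.g. a stale pyenv-win entry): try the next one
        if result.returncode == 0 and result.stdout.strip():
            return path
    return None

# Probing the current interpreter always succeeds, so this prints its path:
print(find_working_executable([sys.executable]))
```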
+8 −2
@@ -14,8 +14,14 @@ COPY mcp_server.py ./
 # Install package with all dependencies
 RUN pip install --no-cache-dir -e .

-# Install Playwright Chromium browser and system dependencies
-RUN playwright install chromium --with-deps
+# Install Google Chrome (stable) — used by GCU browser tools via CDP
+RUN apt-get update && apt-get install -y wget gnupg \
+    && mkdir -p /etc/apt/keyrings \
+    && wget -q -O /etc/apt/keyrings/google-chrome.asc https://dl.google.com/linux/linux_signing_key.pub \
+    && echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/google-chrome.asc] http://dl.google.com/linux/chrome/deb/ stable main" \
+    > /etc/apt/sources.list.d/google-chrome.list \
+    && apt-get update && apt-get install -y google-chrome-stable \
+    && apt-get clean && rm -rf /var/lib/apt/lists/*

 # Create non-root user for security
 RUN useradd -m -u 1001 appuser
+37 −15
@@ -25,6 +25,12 @@ from pathlib import Path

 logger = logging.getLogger(__name__)

+_TOOLS_SRC = Path(__file__).resolve().parent / "src"
+if _TOOLS_SRC.is_dir():
+    tools_src = str(_TOOLS_SRC)
+    if tools_src not in sys.path:
+        sys.path.insert(0, tools_src)
+

 def setup_logger():
 if not logger.handlers:
@@ -52,6 +58,12 @@ if "--stdio" in sys.argv:

 from fastmcp import FastMCP # noqa: E402

+# Import command sanitizer — shared module in aden_tools
+from aden_tools.tools.file_system_toolkits.command_sanitizer import ( # noqa: E402
+    CommandBlockedError,
+    validate_command,
+)
+
 mcp = FastMCP("coder-tools")

 PROJECT_ROOT: str = ""
@@ -208,6 +220,8 @@ def run_command(command: str, cwd: str = "", timeout: int = 120) -> str:

 PYTHONPATH is automatically set to include core/ and exports/.
 Output is truncated at 30K chars with a notice.
+Commands still execute with shell=True, so the sanitizer blocks
+explicit nested shell executables but cannot remove shell parsing.

 Args:
 command: Shell command to execute
@@ -222,6 +236,11 @@ def run_command(command: str, cwd: str = "", timeout: int = 120) -> str:

 try:
 command = _translate_command_for_windows(command)
+# Validate command against safety blocklist before execution
+try:
+    validate_command(command)
+except CommandBlockedError as e:
+    return f"Error: {e}"
 start = time.monotonic()
 result = subprocess.run(
 command,
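`validate_command` and `CommandBlockedError` come from aden_tools' command_sanitizer module, whose implementation is not shown in this diff. A minimal sketch of the contract `run_command` relies on, with an assumed blocklist (the real sanitizer's rules are certainly more involved):

```python
class CommandBlockedError(Exception):
    """Raised when a command matches the safety blocklist."""

# Assumed blocklist for illustration only.
_BLOCKED_EXECUTABLES = {"bash", "sh", "zsh", "powershell", "pwsh", "cmd"}

def validate_command(command: str) -> None:
    """Reject commands whose first token explicitly invokes a nested shell.
    As the docstring in the diff notes, with shell=True the outer shell
    still parses the string, so only explicit nested shells are caught."""
    first_token = command.strip().split()[0] if command.strip() else ""
    if first_token.lower() in _BLOCKED_EXECUTABLES:
        raise CommandBlockedError(f"Command blocked: nested shell '{first_token}'")

validate_command("echo hello")          # passes silently
try:
    validate_command("bash -c 'ls'")
except CommandBlockedError as e:
    print(f"Error: {e}")                # mirrors run_command's error string
```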
@@ -2257,21 +2276,24 @@ if __name__ == "__main__":
 )

 # -- mcp_servers.json --
-_write(
-    "mcp_servers.json",
-    json.dumps(
-        {
-            "hive-tools": {
-                "transport": "stdio",
-                "command": "uv",
-                "args": ["run", "python", "mcp_server.py", "--stdio"],
-                "cwd": "../../tools",
-                "description": "Hive tools MCP server",
-            }
-        },
-        indent=2,
-    ),
-)
+mcp_config: dict = {
+    "hive-tools": {
+        "transport": "stdio",
+        "command": "uv",
+        "args": ["run", "python", "mcp_server.py", "--stdio"],
+        "cwd": "../../tools",
+        "description": "Hive tools MCP server",
+    },
+    "gcu-tools": {
+        "transport": "stdio",
+        "command": "uv",
+        "args": ["run", "python", "-m", "gcu.server", "--stdio"],
+        "cwd": "../../tools",
+        "description": "GCU browser automation tools",
+    },
+}
+
+_write("mcp_servers.json", json.dumps(mcp_config, indent=2))
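The refactor lifts the inline dict into a named `mcp_config` so a second server entry can be added. The generated `mcp_servers.json` can be sanity-checked along these lines; server names and fields are copied from the diff, while the round-trip check itself is illustrative:

```python
import json

mcp_config = {
    "hive-tools": {
        "transport": "stdio",
        "command": "uv",
        "args": ["run", "python", "mcp_server.py", "--stdio"],
        "cwd": "../../tools",
        "description": "Hive tools MCP server",
    },
    "gcu-tools": {
        "transport": "stdio",
        "command": "uv",
        "args": ["run", "python", "-m", "gcu.server", "--stdio"],
        "cwd": "../../tools",
        "description": "GCU browser automation tools",
    },
}

serialized = json.dumps(mcp_config, indent=2)
# Round-trip and check each server entry carries the fields MCP clients need.
for name, server in json.loads(serialized).items():
    assert server["transport"] == "stdio"
    assert {"command", "args", "cwd"} <= set(server)
print(sorted(json.loads(serialized)))  # → ['gcu-tools', 'hive-tools']
```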

 # -- tests/conftest.py --
 _write(

@@ -1,7 +1,7 @@
 """
 HuggingFace credentials.

-Contains credentials for HuggingFace Hub API access.
+Contains credentials for HuggingFace Hub API and Inference API access.
 """

 from .base import CredentialSpec
@@ -16,11 +16,16 @@ HUGGINGFACE_CREDENTIALS = {
 "huggingface_get_dataset",
 "huggingface_search_spaces",
 "huggingface_whoami",
+"huggingface_run_inference",
+"huggingface_run_embedding",
+"huggingface_list_inference_endpoints",
 ],
 required=True,
 startup_required=False,
 help_url="https://huggingface.co/settings/tokens",
-description="HuggingFace API token for Hub access (models, datasets, spaces)",
+description=(
+    "HuggingFace API token for Hub access (models, datasets, spaces) and Inference API"
+),
 direct_api_key_supported=True,
 api_key_instructions="""To get a HuggingFace token:
 1. Go to https://huggingface.co/settings/tokens

@@ -14,10 +14,15 @@ NOTION_CREDENTIALS = {
 "notion_search",
 "notion_get_page",
 "notion_create_page",
-"notion_update_page",
 "notion_query_database",
 "notion_get_database",
+"notion_update_page",
+"notion_archive_page",
+"notion_create_database",
+"notion_update_database",
+"notion_get_block_children",
+"notion_get_block",
 "notion_update_block",
 "notion_delete_block",
 "notion_append_blocks",
 ],
 required=True,

@@ -67,7 +67,7 @@ SLACK_CREDENTIALS = {
 help_url="https://api.slack.com/apps",
 description="Slack Bot Token (starts with xoxb-)",
 # Auth method support
-aden_supported=True,
+aden_supported=False,
 aden_provider_name="slack",
 direct_api_key_supported=True,
 api_key_instructions="""To get a Slack Bot Token:
Some files were not shown because too many files have changed in this diff.