Compare commits


116 Commits

Author SHA1 Message Date
Richard Tang ff01c1fd99 chore: release v0.7.1 — Chrome-native GCU, browser isolation, dummy agent tests
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-13 20:39:46 -07:00
RichardTang-Aden 421b25fdb7 Merge pull request #6313 from prasoonmhwr/bugFix/add_tab_ui
bugFix: micro-fix add tab UI
2026-03-13 20:29:30 -07:00
Richard Tang 795c3c33e2 docs: readme update 2026-03-13 20:26:44 -07:00
RichardTang-Aden 97821f4d80 Merge pull request #6346 from aden-hive/fix/session-resume-new-agent
fix: save json path for the new agent update meta.json when loaded worker
2026-03-13 20:19:48 -07:00
RichardTang-Aden 505e1e30fd Merge branch 'main' into fix/session-resume-new-agent 2026-03-13 20:19:36 -07:00
Timothy 3fb2b285fb chore: add star history widget 2026-03-13 20:17:35 -07:00
RichardTang-Aden a76109840c Merge pull request #6345 from aden-hive/feat/gcu-updates
feat: GCU browser cleanup, draft loading state, and inner_turn message fix
2026-03-13 20:16:38 -07:00
RichardTang-Aden 39212350ba Merge pull request #6342 from aden-hive/ci/level-2-dummy-agent-testing
Add Level 2 dummy agent end-to-end tests
2026-03-13 19:42:34 -07:00
Richard Tang f3399fe95b chore: ruff lint 2026-03-13 19:39:44 -07:00
Richard Tang d02e1155ed feat: dummy agent tests 2026-03-13 19:39:14 -07:00
bryan 7ede3ba171 feat: queen upsert fix 2026-03-13 19:34:26 -07:00
Richard Tang 2272491cf5 chore: remove dead code 2026-03-13 18:10:43 -07:00
RichardTang-Aden bb38cb974f Merge pull request #6333 from aden-hive/fix/new-agent-resume
Fix: new agent resume and GCU browser improvements
2026-03-13 17:20:49 -07:00
bryan 635d2976f4 feat: show loading spinner in draft panel during planning phase 2026-03-13 16:40:33 -07:00
bryan 4e1525880d feat: clean up browser profile after top-level GCU node execution 2026-03-13 16:40:20 -07:00
Richard Tang b80559df68 chore: ruff lint 2026-03-13 16:38:50 -07:00
RichardTang-Aden 08d93ef90a Merge pull request #6331 from RichardTang-Aden/main
fix: generate worker mcp.json correctly in initialize_agent_package
2026-03-13 15:35:18 -07:00
Richard Tang 22bf035522 chore: fix lint 2026-03-13 15:35:01 -07:00
Richard Tang 15944a42ab fix: generate worker mcp file correctly 2026-03-13 15:30:28 -07:00
Richard Tang 8440ec70ba chore: document the difference between runner mode run() and start() 2026-03-13 15:28:18 -07:00
Timothy eacf2520cf chore: skills prd 2026-03-13 15:22:09 -07:00
Richard Tang def4f62a51 fix: update meta.json when loaded worker 2026-03-13 14:05:57 -07:00
bryan b0c5bcd210 chore: update tab management guidelines and add concurrent subagent patterns 2026-03-13 14:04:40 -07:00
bryan 2fe1343343 feat: inject unique browser profile per GCU subagent 2026-03-13 14:03:21 -07:00
bryan de0dcff50f feat: add tab origin/age metadata and per-subagent profile isolation 2026-03-13 14:02:15 -07:00
Richard Tang 20427e213a fix: update meta.json when loaded worker 2026-03-13 13:52:15 -07:00
bryan 1fb5c6337a fix: anchor worker monitoring to queen's session ID on cold-restore 2026-03-13 12:50:50 -07:00
Timothy @aden 1e74f194a1 Update authors in MCP Server Registry document 2026-03-13 12:15:50 -07:00
Timothy 08157d2bd6 chore(docs): bounty program - standard 2026-03-13 12:10:21 -07:00
Timothy ef036257a9 docs(mcp): MCP integration PRD 2026-03-13 11:56:33 -07:00
Timothy 16ce984c74 chore: add default context limit on windows quickstart 2026-03-13 10:04:49 -07:00
bryan 1e8b5b96eb Merge branch 'main' into feat/gcu-updates 2026-03-13 09:26:06 -07:00
Prasoon Mahawar 094ba89f19 Merge branch 'main' of https://github.com/prasoonmhwr/hive into bugFix/add_tab_ui 2026-03-13 18:59:44 +05:30
Prasoon Mahawar 7008c9f310 bugFix: UI overflow issue when creating multiple agents – “Add tab” dropdown partially hidden 2026-03-13 18:58:38 +05:30
Prasoon Mahawar 94d7cbacc2 Revert "bugFix: Clipboard write in SystemPromptTab lacks error handling and may show false Copied feedback"
This reverts commit bddc2b413a.
2026-03-13 18:55:52 +05:30
Prasoon Mahawar bddc2b413a bugFix: Clipboard write in SystemPromptTab lacks error handling and may show false Copied feedback 2026-03-13 18:23:36 +05:30
RichardTang-Aden 52b1a3f472 Merge pull request #6282 from aden-hive/feat/refactor-session
Refactor session lifecycle with flowchart planning and triggers
2026-03-12 21:15:10 -07:00
Richard Tang 079e00c8f7 Merge remote-tracking branch 'origin/main' into feat/refactor-session 2026-03-12 21:13:15 -07:00
Richard Tang 60bba38941 chore: ruff lint 2026-03-12 21:01:47 -07:00
Richard Tang ea8e7b11c6 Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 20:54:08 -07:00
Richard Tang 3dc2b25b01 fix: adding the trigger helpers 2026-03-12 20:53:45 -07:00
bryan 543b90b34f chore: tooltip update 2026-03-12 20:50:39 -07:00
Richard Tang 2ad78ec8a2 Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 20:48:09 -07:00
Timothy 412658e9f2 fix: remove subagent shapes 2026-03-12 20:46:09 -07:00
Richard Tang 9bfddec322 fix: missing _FLOWCHART_TYPES reference 2026-03-12 20:43:03 -07:00
Timothy bbd9c10169 fix: decision node cannot have subagents 2026-03-12 20:36:04 -07:00
Richard Tang 51fdc4ddde fix: always new session for new agent 2026-03-12 20:34:42 -07:00
Richard Tang 04685d33ca fix: solve the problem from merge conflict 2026-03-12 20:28:25 -07:00
Richard Tang 729a0e0cec fix: resolve merge conflict 2026-03-12 20:23:58 -07:00
bryan 2bcb0cacee added pause/run button 2026-03-12 20:15:25 -07:00
Timothy 44bf191f53 fix: no orphaned node by bfs 2026-03-12 20:04:00 -07:00
Richard Tang 993b31f19b Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 20:00:45 -07:00
Richard Tang 41b3b9619f Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feature/flowchart-linked-experimental 2026-03-12 19:45:45 -07:00
Richard Tang 2a4fe4020c feat: force the planning agent to ask questions 2026-03-12 19:45:07 -07:00
Ishan Chaurasia 9d1f268078 fix(server): honor session_id in one-step session creation (#6233)
Align POST /api/sessions behavior across queen-only and one-step worker creation so callers can rely on deterministic session IDs. Add a regression test covering the forwarded session_id contract.

Made-with: Cursor
2026-03-13 10:43:12 +08:00
bryan 2185e127b1 style: coder tools formatting and template quote fixes 2026-03-12 19:39:53 -07:00
bryan 99ed885fd0 fix: add cached_tokens to finish event test assertion 2026-03-12 19:39:53 -07:00
bryan d8a390a685 feat: flowchart rendering in DraftGraph with node shapes and layout 2026-03-12 19:39:53 -07:00
bryan f50cf1735b feat: CSS variable theming for agent graph components 2026-03-12 19:39:53 -07:00
bryan 04eb57f54e feat: auto-load worker on cold restore when queen resumes 2026-03-12 19:39:53 -07:00
bryan 7378408eb8 feat: add flowchart type system and draft-to-graph dissolution 2026-03-12 19:39:53 -07:00
bryan cf05420417 style: formatting and import cleanup across framework modules 2026-03-12 19:38:55 -07:00
Timothy f5ed4c7d43 fix: validate orphaned gcu node 2026-03-12 19:38:44 -07:00
Timothy 5547432b6e fix: queen defaults to global max context tokens 2026-03-12 19:29:14 -07:00
Ishan Chaurasia 336557d7c7 fix: pass browser_wait text as data (#6235)
Pass browser_wait text through Playwright's function argument channel so quoted and multiline strings do not break the generated wait expression. Add a regression test covering text that previously would have been interpolated unsafely.

Made-with: Cursor
2026-03-13 10:08:16 +08:00
Timothy 87c172227c fix: mandate flowchart topology correction 2026-03-12 19:03:46 -07:00
Richard Tang c2c4929de8 feat: remove the phase in the label 2026-03-12 18:55:24 -07:00
Timothy a978338738 fix: allow replanning 2026-03-12 18:54:01 -07:00
Timothy 8eb59b1f66 fix: mandate usage of ask tools and change pending behavior 2026-03-12 18:34:15 -07:00
Richard Tang f9d5f95936 Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 18:32:26 -07:00
Timothy 651e99ffe3 Merge branch 'feature/multiple-asks' into feature/flowchart-linked-experimental 2026-03-12 17:57:11 -07:00
Richard Tang c01cd528d2 feat: planning phase prompt improvements 2026-03-12 17:44:06 -07:00
bryan 2434c86cdf docs: clarify two-step escalation relay protocol in queen prompt 2026-03-12 16:50:17 -07:00
bryan c4a5e621aa docs: update GCU prompt with popup tracking and close_all guidance 2026-03-12 16:50:06 -07:00
bryan 0f5b83d86a feat: add browser_close_all tool for bulk tab cleanup 2026-03-12 16:49:55 -07:00
bryan b5aadcd51e feat: auto-track popup pages and improve session startup logging 2026-03-12 16:49:46 -07:00
bryan 290d2f6823 feat: add --no-startup-window to Chrome launch flags 2026-03-12 16:49:36 -07:00
Richard Tang 944567dc31 chore: ruff lint 2026-03-12 16:23:13 -07:00
Richard Tang 674cf05601 feat: track the number of runs 2026-03-12 15:19:13 -07:00
Richard Tang 6fa71fa27d feat: track queen phase by message 2026-03-12 14:58:35 -07:00
Richard Tang 8c7065ad37 refactor: remove the parts conversion logic 2026-03-12 14:36:27 -07:00
Richard Tang a18ed5bbe6 feat: restore queen phase 2026-03-12 14:29:01 -07:00
bryan 9f3339650d chore: linter update 2026-03-12 14:27:17 -07:00
bryan d5e5d3e83d feat: add subagent activity tracking to queen status and instructions 2026-03-12 14:26:49 -07:00
bryan 5ea27dda09 refactor: update GCU system prompt for auto-snapshots and batching 2026-03-12 14:26:38 -07:00
bryan 6f9066ef20 feat: return auto-snapshot from browser interaction tools 2026-03-12 14:26:24 -07:00
bryan c37185732a feat: kill orphaned Chrome processes on GCU server shutdown 2026-03-12 14:26:05 -07:00
bryan 0c900fb50e refactor: clean session startup and add page lifecycle management 2026-03-12 14:25:16 -07:00
bryan 4d3ac28878 feat: launch Chrome on macOS via open -n to coexist with user's browser 2026-03-12 14:24:55 -07:00
bryan 270c1f8c50 fix: use lazy %-formatting in subagent completion log to avoid f-string in logger 2026-03-12 14:24:30 -07:00
bryan 3d0859d06a fix: stop clearing credentials_required on modal close to prevent infinite loop 2026-03-12 14:24:14 -07:00
Richard Tang ed3d4bfe33 feat: resume cold session from event logs 2026-03-12 14:07:57 -07:00
Richard Tang 596ce9878d feat: unique run id 2026-03-12 11:09:36 -07:00
bryan ffe47c0f71 fix: credential modal eating errors, banner stays open 2026-03-12 09:41:53 -07:00
bryan bf4652db4b fix: share event bus so tool events are visible to parent 2026-03-12 08:41:34 -07:00
bryan 2acd526b71 feat: dynamic viewport sizing and suppress Chrome warning bar 2026-03-12 08:40:49 -07:00
bryan df71834e4b refactor: switch from Playwright browser to system Chrome via CDP 2026-03-12 08:39:43 -07:00
Richard Tang 726016d24a fix: remove the duplicated session logic 2026-03-11 17:11:03 -07:00
Richard Tang 4895cea08a chore: lint and micro-fix 2026-03-11 16:55:29 -07:00
Richard Tang c9723a3ff2 feat(wip): always resume the previous session 2026-03-11 16:48:31 -07:00
Richard Tang 6cb73a6fea refactor: remove the remaining old trigger format and change the trigger format in examples to the latest format 2026-03-11 16:13:37 -07:00
Richard Tang 0c7f43f595 refactor: remove reference of the unused session judge 2026-03-11 16:01:00 -07:00
Richard Tang ea5cfcc5d6 refactor: remove the unused session judge 2026-03-11 15:57:19 -07:00
Richard Tang 34e85019c3 feat: stop supporting the old scheduler 2026-03-11 15:54:48 -07:00
Richard Tang c979dba958 fix: reference error from the rename 2026-03-11 14:33:42 -07:00
Richard Tang b4caa045e1 Merge remote-tracking branch 'origin/main' into feat/agent-trigger 2026-03-11 14:32:36 -07:00
bryan cba0ec110f fix: linter update 2026-03-08 19:37:57 -07:00
bryan 0256e0c944 Merge branch 'main' into feat/agent-trigger 2026-03-08 19:28:36 -07:00
bryan 4d9d0362a0 fixes to make the timer trigger properly 2026-03-08 18:44:42 -07:00
bryan f474d0bc8e Merge branch 'main' into feat/agent-trigger 2026-03-08 16:59:14 -07:00
bryan 6a0681b9aa feat: fixing phase 4, continuing to test 2026-03-08 16:52:00 -07:00
bryan c7e634851b feat: phase 4 of trigger plan 2026-03-06 19:21:32 -08:00
bryan cdb7155960 feat: phase 3 of trigger plan 2026-03-06 18:07:26 -08:00
bryan 3f7790c26a feat: phase 2 of trigger plan 2026-03-06 17:22:57 -08:00
bryan 5676b115f4 Merge branch 'feat/queen-responsibility' into feat/agent-trigger 2026-03-06 16:58:06 -08:00
bryan 61c59d57e8 feat: phase 1 of trigger plan 2026-03-06 15:11:36 -08:00
111 changed files with 9744 additions and 3052 deletions
@@ -0,0 +1,78 @@
name: Standard Bounty
description: A bounty task for general framework contributions (not integration-specific)
title: "[Bounty]: "
labels: []
body:
  - type: markdown
    attributes:
      value: |
        ## Standard Bounty
        This issue is part of the [Bounty Program](../../docs/bounty-program/README.md).
        **Claim this bounty** by commenting below — a maintainer will assign you within 24 hours.
  - type: dropdown
    id: bounty-size
    attributes:
      label: Bounty Size
      options:
        - "Small (10 pts)"
        - "Medium (30 pts)"
        - "Large (75 pts)"
        - "Extreme (150 pts)"
    validations:
      required: true
  - type: dropdown
    id: difficulty
    attributes:
      label: Difficulty
      options:
        - Easy
        - Medium
        - Hard
    validations:
      required: true
  - type: textarea
    id: description
    attributes:
      label: Description
      description: What needs to be done to complete this bounty.
      placeholder: |
        Describe the specific task, including:
        - What the contributor needs to do
        - Links to relevant files in the repo
        - Any context or motivation for the change
    validations:
      required: true
  - type: textarea
    id: acceptance-criteria
    attributes:
      label: Acceptance Criteria
      description: What "done" looks like. The PR must meet all criteria.
      placeholder: |
        - [ ] Criterion 1
        - [ ] Criterion 2
        - [ ] CI passes
    validations:
      required: true
  - type: textarea
    id: relevant-files
    attributes:
      label: Relevant Files
      description: Links to files or directories related to this bounty.
      placeholder: |
        - `path/to/file.py`
        - `path/to/directory/`
  - type: textarea
    id: resources
    attributes:
      label: Resources
      description: Links to docs, issues, or external references that will help.
      placeholder: |
        - Related issue: #XXXX
        - Docs: https://...
+150 -27
@@ -1,17 +1,149 @@
# Release Notes
## v0.7.1
**Release Date:** March 13, 2026
**Tag:** v0.7.1
### Chrome-Native Browser Control
v0.7.1 replaces Playwright with direct Chrome DevTools Protocol (CDP) integration. The GCU now launches the user's system Chrome via `open -n` on macOS, connects over CDP, and manages browser lifecycle end-to-end -- no extra browser binary required.
---
### Highlights
#### System Chrome via CDP
The entire GCU browser stack has been rewritten:
- **Chrome finder & launcher** -- New `chrome_finder.py` discovers installed Chrome and `chrome_launcher.py` manages process lifecycle with `--remote-debugging-port`
- **Coexist with user's browser** -- `open -n` on macOS launches a separate Chrome instance so the user's tabs stay untouched
- **Dynamic viewport sizing** -- Viewport auto-sizes to the available display area, suppressing Chrome warning bars
- **Orphan cleanup** -- Chrome processes are killed on GCU server shutdown to prevent leaks
- **`--no-startup-window`** -- Chrome launches with no initial blank window; a window appears only when a page is needed
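To make the launch flow concrete, here is a minimal sketch of how a launcher along these lines might assemble its command. The function name `build_chrome_command` and its defaults are illustrative assumptions, not the actual `chrome_launcher.py` API; only the Chrome flags and the macOS `open -n ... --args` pattern come from the notes above.

```python
import shutil
import sys

def build_chrome_command(user_data_dir: str, debug_port: int = 9222) -> list[str]:
    """Assemble a Chrome invocation that exposes a CDP endpoint (illustrative)."""
    flags = [
        f"--remote-debugging-port={debug_port}",  # expose CDP on localhost
        f"--user-data-dir={user_data_dir}",       # isolated profile directory
        "--no-startup-window",                    # suppress the initial blank window
        "--no-first-run",                         # skip first-run prompts
    ]
    if sys.platform == "darwin":
        # `open -n` starts a separate Chrome instance alongside the user's;
        # everything after `--args` is forwarded to the Chrome binary.
        return ["open", "-n", "-a", "Google Chrome", "--args", *flags]
    # Elsewhere, launch a discovered binary directly.
    chrome = shutil.which("google-chrome") or shutil.which("chromium") or "chrome"
    return [chrome, *flags]

cmd = build_chrome_command("/tmp/hive-profile", 9333)
```

The resulting list can be handed to `subprocess.Popen`, after which a CDP client connects to `http://localhost:<debug_port>`.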
#### Per-Subagent Browser Isolation
Each GCU subagent gets its own Chrome user-data directory, preventing cookie/session cross-contamination:
- Unique browser profiles injected per subagent
- Profiles cleaned up after top-level GCU node execution
- Tab origin and age metadata tracked per subagent
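The isolation scheme above can be sketched as a small profile pool: one fresh user-data directory per subagent, torn down after the top-level GCU node finishes. The `ProfilePool` class and its method names are hypothetical; the framework's real implementation likely differs.

```python
import shutil
import tempfile
from pathlib import Path

class ProfilePool:
    """Hand each subagent its own Chrome user-data directory (illustrative)."""

    def __init__(self) -> None:
        self._profiles: dict[str, Path] = {}

    def acquire(self, subagent_id: str) -> Path:
        # One fresh directory per subagent, so cookies and sessions never mix.
        if subagent_id not in self._profiles:
            self._profiles[subagent_id] = Path(
                tempfile.mkdtemp(prefix=f"gcu-{subagent_id}-")
            )
        return self._profiles[subagent_id]

    def cleanup(self) -> None:
        # Invoked after top-level GCU node execution completes.
        for path in self._profiles.values():
            shutil.rmtree(path, ignore_errors=True)
        self._profiles.clear()
```

Repeated `acquire` calls for the same subagent return the same directory, so a subagent keeps its session across tool calls while remaining invisible to its siblings.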
#### Dummy Agent Testing Framework
A comprehensive test suite for validating agent graph patterns without LLM calls:
- 8 test modules covering echo, pipeline, branch, parallel merge, retry, feedback loop, worker, and GCU subagent patterns
- Shared fixtures and a `run_all.py` runner for CI integration
- Subagent lifecycle tests
---
### What's New
#### GCU Browser
- **Switch from Playwright to system Chrome via CDP** -- Direct CDP connection replaces Playwright dependency. (@bryanadenhq)
- **Chrome finder and launcher modules** -- `chrome_finder.py` and `chrome_launcher.py` for cross-platform Chrome discovery and process management. (@bryanadenhq)
- **Dynamic viewport sizing** -- Auto-size viewport and suppress Chrome warning bar. (@bryanadenhq)
- **Per-subagent browser profile isolation** -- Unique user-data directories per subagent with cleanup. (@bryanadenhq)
- **Tab origin/age metadata** -- Track which subagent opened each tab and when. (@bryanadenhq)
- **`browser_close_all` tool** -- Bulk tab cleanup for agents managing many pages. (@bryanadenhq)
- **Auto-track popup pages** -- Popups are automatically captured and tracked. (@bryanadenhq)
- **Auto-snapshot from browser interactions** -- Browser interaction tools return screenshots automatically. (@bryanadenhq)
- **Kill orphaned Chrome processes** -- GCU server shutdown cleans up lingering Chrome instances. (@bryanadenhq)
- **`--no-startup-window` Chrome flag** -- Prevent empty window on launch. (@bryanadenhq)
- **Launch Chrome via `open -n` on macOS** -- Coexist with the user's running browser. (@bryanadenhq)
#### Framework & Runtime
- **Session resume fix for new agents** -- Correctly resume sessions when a new agent is loaded. (@bryanadenhq)
- **Queen upsert fix** -- Prevent duplicate queen entries on session restore. (@bryanadenhq)
- **Anchor worker monitoring to queen's session ID on cold-restore** -- Worker monitors reconnect to the correct queen after restart. (@bryanadenhq)
- **Update meta.json when loading workers** -- Worker metadata stays in sync with runtime state. (@RichardTang-Aden)
- **Generate worker MCP file correctly** -- Fix MCP config generation for spawned workers. (@RichardTang-Aden)
- **Share event bus so tool events are visible to parent** -- Tool execution events propagate up to parent graphs. (@bryanadenhq)
- **Subagent activity tracking in queen status** -- Queen instructions include live subagent status. (@bryanadenhq)
- **GCU system prompt updates** -- Auto-snapshots, batching, popup tracking, and close_all guidance. (@bryanadenhq)
#### Frontend
- **Loading spinner in draft panel** -- Shows spinner during planning phase instead of blank panel. (@bryanadenhq)
- **Fix credential modal errors** -- Modal no longer eats errors; banner stays visible. (@bryanadenhq)
- **Fix credentials_required loop** -- Stop clearing the flag on modal close to prevent infinite re-prompting. (@bryanadenhq)
- **Fix "Add tab" dropdown overflow** -- Dropdown no longer hidden when many agents are open. (@prasoonmhwr)
#### Testing
- **Dummy agent test framework** -- 8 test modules (echo, pipeline, branch, parallel merge, retry, feedback loop, worker, GCU subagent) with shared fixtures and CI runner. (@bryanadenhq)
- **Subagent lifecycle tests** -- Validate subagent spawn and completion flows. (@bryanadenhq)
#### Documentation & Infrastructure
- **MCP integration PRD** -- Product requirements for MCP server registry. (@TimothyZhang7)
- **Skills registry PRD** -- Product requirements for skill registry system. (@bryanadenhq)
- **Bounty program updates** -- Standard bounty issue template and updated contributor guide. (@bryanadenhq)
- **Windows quickstart** -- Add default context limit for PowerShell setup. (@bryanadenhq)
- **Remove deprecated files** -- Clean up `setup_mcp.py`, `verify_mcp.py`, `antigravity-setup.md`, and `setup-antigravity-mcp.sh`. (@bryanadenhq)
---
### Bug Fixes
- Fix credential modal eating errors and banner staying open
- Stop clearing `credentials_required` on modal close to prevent infinite loop
- Share event bus so tool events are visible to parent graph
- Use lazy %-formatting in subagent completion log to avoid f-string in logger
- Anchor worker monitoring to queen's session ID on cold-restore
- Update meta.json when loading workers
- Generate worker MCP file correctly
- Fix "Add tab" dropdown partially hidden when creating multiple agents
---
### Community Contributors
- **Prasoon Mahawar** (@prasoonmhwr) -- Fix UI overflow on agent tab dropdown
- **Richard Tang** (@RichardTang-Aden) -- Worker MCP generation and meta.json fixes
---
### Upgrading
```bash
git pull origin main
uv sync
```
The Playwright dependency is no longer required for GCU browser operations. Chrome must be installed on the host system.
---
## v0.7.0
**Release Date:** March 5, 2026
**Tag:** v0.7.0
Session management refactor release.
---
## v0.5.1
**Release Date:** February 18, 2026
**Tag:** v0.5.1
-## The Hive Gets a Brain
+### The Hive Gets a Brain
v0.5.1 is our most ambitious release yet. Hive agents can now **build other agents** -- the new Hive Coder meta-agent writes, tests, and fixes agent packages from natural language. The runtime grows multi-graph support so one session can orchestrate multiple agents simultaneously. The TUI gets a complete overhaul with an in-app agent picker, live streaming, and seamless escalation to the Coder. And we're now provider-agnostic: Claude Code subscriptions, OpenAI-compatible endpoints, and any LiteLLM-supported model work out of the box.
---
-## Highlights
+### Highlights
-### Hive Coder -- The Agent That Builds Agents
+#### Hive Coder -- The Agent That Builds Agents
A native meta-agent that lives inside the framework at `core/framework/agents/hive_coder/`. Give it a natural-language specification and it produces a complete agent package -- goal definition, node prompts, edge routing, MCP tool wiring, tests, and all boilerplate files.
@@ -30,7 +162,7 @@ The Coder ships with:
- **Coder Tools MCP server** -- file I/O, fuzzy-match editing, git snapshots, and sandboxed shell execution (`tools/coder_tools_server.py`)
- **Test generation** -- structural tests for forever-alive agents that don't hang on `runner.run()`
-### Multi-Graph Agent Runtime
+#### Multi-Graph Agent Runtime
`AgentRuntime` now supports loading, managing, and switching between multiple agent graphs within a single session. Six new lifecycle tools give agents (and the TUI) full control:
@@ -44,7 +176,7 @@ await runtime.add_graph("exports/deep_research_agent")
The Hive Coder uses multi-graph internally -- when you escalate from a worker agent, the Coder loads as a separate graph while the worker stays alive in the background.
-### TUI Revamp
+#### TUI Revamp
The Terminal UI gets a ground-up rebuild with five major additions:
@@ -54,7 +186,7 @@ The Terminal UI gets a ground-up rebuild with five major additions:
- **PDF attachments** -- `/attach` and `/detach` commands with native OS file dialog (macOS, Linux, Windows)
- **Multi-graph commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>` for managing agent graphs in-session
-### Provider-Agnostic LLM Support
+#### Provider-Agnostic LLM Support
Hive is no longer Anthropic-only. v0.5.1 adds first-class support for:
@@ -66,9 +198,9 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
-## What's New
+### What's New
-### Architecture & Runtime
+#### Architecture & Runtime
- **Hive Coder meta-agent** -- Natural-language agent builder with reference docs, guardian watchdog, and `hive code` CLI command. (@TimothyZhang7)
- **Multi-graph agent sessions** -- `add_graph`/`remove_graph` on AgentRuntime with 6 lifecycle tools (`load_agent`, `unload_agent`, `start_agent`, `restart_agent`, `list_agents`, `get_user_presence`). (@TimothyZhang7)
@@ -79,7 +211,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
- **Pre-start confirmation prompt** -- Interactive prompt before agent execution allowing credential updates or abort. (@RichardTang-Aden)
- **Event bus multi-graph support** -- `graph_id` on events, `filter_graph` on subscriptions, `ESCALATION_REQUESTED` event type, `exclude_own_graph` filter. (@TimothyZhang7)
-### TUI Improvements
+#### TUI Improvements
- **In-app agent picker** (Ctrl+A) -- Tabbed modal for browsing agents with metadata badges (nodes, tools, sessions, tags). (@TimothyZhang7)
- **Runtime-optional TUI startup** -- Launches without a pre-loaded agent, shows agent picker on startup. (@TimothyZhang7)
@@ -89,7 +221,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
- **Multi-graph TUI commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>`. (@TimothyZhang7)
- **Agent Guardian watchdog** -- Event-driven monitor that catches secondary agent failures and triggers automatic remediation, with `--no-guardian` CLI flag. (@TimothyZhang7)
-### New Tool Integrations
+#### New Tool Integrations
| Tool | Description | Contributor |
| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
@@ -99,7 +231,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
| **Google Docs** | Document creation, reading, and editing with OAuth credential support | @haliaeetusvocifer |
| **Gmail enhancements** | Expanded mail operations for inbox management | @bryanadenhq |
-### Infrastructure
+#### Infrastructure
- **Default node type → `event_loop`** -- `NodeSpec.node_type` defaults to `"event_loop"` instead of `"llm_tool_use"`. (@TimothyZhang7)
- **Default `max_node_visits` → 0 (unlimited)** -- Nodes default to unlimited visits, reducing friction for feedback loops and forever-alive agents. (@TimothyZhang7)
@@ -112,7 +244,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
-## Bug Fixes
+### Bug Fixes
- Flush WIP accumulator outputs on cancel/failure so edge conditions see correct values on resume
- Stall detection state preserved across resume (no more resets on checkpoint restore)
@@ -125,13 +257,13 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
- Fix email agent version conflicts (@RichardTang-Aden)
- Fix coder tool timeouts (120s for tests, 300s cap for commands)
-## Documentation
+### Documentation
- Clarify installation and prevent root pip install misuse (@paarths-collab)
---
-## Agent Updates
+### Agent Updates
- **Email Inbox Management** -- Consolidate `gmail_inbox_guardian` and `inbox_management` into a single unified agent with updated prompts and config. (@RichardTang-Aden, @bryanadenhq)
- **Job Hunter** -- Updated node prompts, config, and agent metadata; added PDF resume selection. (@bryanadenhq)
@@ -141,7 +273,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
-## Breaking Changes
+### Breaking Changes
- **Deprecated node types raise `RuntimeError`** -- `llm_tool_use`, `llm_generate`, `function`, `router`, `human_input` now fail instead of warning. Migrate to `event_loop`.
- **`NodeSpec.node_type` defaults to `"event_loop"`** (was `"llm_tool_use"`)
@@ -150,7 +282,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
-## Community Contributors
+### Community Contributors
A huge thank you to everyone who contributed to this release:
@@ -165,14 +297,14 @@ A huge thank you to everyone who contributed to this release:
---
-## Upgrading
+### Upgrading
```bash
git pull origin main
uv sync
```
-### Migration Guide
+#### Migration Guide
If your agents use deprecated node types, update them:
@@ -196,12 +328,3 @@ hive code
# Or from TUI -- press Ctrl+E to escalate
hive tui
```
----
-## What's Next
-- **Agent-to-agent communication** -- one agent's output triggers another agent's entry point
-- **Cost visibility** -- detailed runtime log of LLM costs per node and per session
-- **Persistent webhook subscriptions** -- survive agent restarts without re-registering
-- **Remote agent deployment** -- run agents as long-lived services with HTTP APIs
+10 -10
@@ -5,20 +5,20 @@ help: ## Show this help
awk 'BEGIN {FS = ":.*?## "}; {printf " \033[36m%-15s\033[0m %s\n", $$1, $$2}'
lint: ## Run ruff linter and formatter (with auto-fix)
-cd core && ruff check --fix .
-cd tools && ruff check --fix .
-cd core && ruff format .
-cd tools && ruff format .
+cd core && uv run ruff check --fix .
+cd tools && uv run ruff check --fix .
+cd core && uv run ruff format .
+cd tools && uv run ruff format .
format: ## Run ruff formatter
-cd core && ruff format .
-cd tools && ruff format .
+cd core && uv run ruff format .
+cd tools && uv run ruff format .
check: ## Run all checks without modifying files (CI-safe)
-cd core && ruff check .
-cd tools && ruff check .
-cd core && ruff format --check .
-cd tools && ruff format --check .
+cd core && uv run ruff check .
+cd tools && uv run ruff check .
+cd core && uv run ruff format --check .
+cd tools && uv run ruff format --check .
test: ## Run all tests (core + tools, excludes live)
cd core && uv run python -m pytest tests/ -v
+13 -8
@@ -27,7 +27,7 @@
<img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
<img src="https://img.shields.io/badge/Headless-Development-purple?style=flat-square" alt="Headless" />
<img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
-<img src="https://img.shields.io/badge/Production--Ready-red?style=flat-square" alt="Production" />
+<img src="https://img.shields.io/badge/Browser-Use-red?style=flat-square" alt="Browser Use" />
</p>
<p align="center">
<img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />
@@ -37,7 +37,7 @@
## Overview
-Build autonomous, reliable, self-improving AI agents without hardcoding workflows. Define your goal through conversation with hive coding agent(queen), and the framework generates a node graph with dynamically created connection code. When things break, the framework captures failure data, evolves the agent through the coding agent, and redeploys. Built-in human-in-the-loop nodes, credential management, and real-time monitoring give you control without sacrificing adaptability.
+Generate a swarm of worker agents with a coding agent(queen) that control them. Define your goal through conversation with hive queen, and the framework generates a node graph with dynamically created connection code. When things break, the framework captures failure data, evolves the agent through the coding agent, and redeploys. Built-in human-in-the-loop nodes, browser use, credential management, and real-time monitoring give you control without sacrificing adaptability.
Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.
@@ -45,7 +45,7 @@ Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and
## Who Is Hive For?
-Hive is designed for developers and teams who want to build **production-grade AI agents** without manually wiring complex workflows.
+Hive is designed for developers and teams who want to build many **autonomous AI agents** fast without manually wiring complex workflows.
Hive is a good fit if you:
@@ -143,7 +143,6 @@ Now you can run an agent by selecting the agent (either an existing agent or exa
- **SDK-Wrapped Nodes** - Every node gets shared memory, local RLM memory, monitoring, tools, and LLM access out of the box
- **[Human-in-the-Loop](docs/key_concepts/graph.md#human-in-the-loop)** - Intervention nodes that pause execution for human input with configurable timeouts and escalation
- **Real-time Observability** - WebSocket streaming for live monitoring of agent execution, decisions, and node-to-node communication
-- **Production-Ready** - Self-hostable, built for scale and reliability
## Integration
@@ -392,10 +391,6 @@ Hive generates your entire agent system from natural language goals using a codi
Yes, Hive is fully open-source under the Apache License 2.0. We actively encourage community contributions and collaboration.
-**Q: Can Hive handle complex, production-scale use cases?**
-Yes. Hive is explicitly designed for production environments with features like automatic failure recovery, real-time observability, cost controls, and horizontal scaling support. The framework handles both simple automations and complex multi-agent workflows.
**Q: Does Hive support human-in-the-loop workflows?**
Yes, Hive fully supports [human-in-the-loop](docs/key_concepts/graph.md#human-in-the-loop) workflows through intervention nodes that pause execution for human input. These include configurable timeouts and escalation policies, allowing seamless collaboration between human experts and AI agents.
@@ -420,6 +415,16 @@ Visit [docs.adenhq.com](https://docs.adenhq.com/) for complete guides, API refer
Contributions are welcome! Fork the repository, create your feature branch, implement your changes, and submit a pull request. See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
## Star History
<a href="https://star-history.com/#aden-hive/hive&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date" />
</picture>
</a>
---
<p align="center">
@@ -1,8 +1,6 @@
"""CLI entry point for Credential Tester agent."""
import asyncio
import logging
import sys
import click
@@ -16,6 +16,7 @@ class AgentEntry:
description: str
category: str
session_count: int = 0
run_count: int = 0
node_count: int = 0
tool_count: int = 0
tags: list[str] = field(default_factory=list)
@@ -52,6 +53,31 @@ def _count_sessions(agent_name: str) -> int:
return sum(1 for d in sessions_dir.iterdir() if d.is_dir() and d.name.startswith("session_"))
def _count_runs(agent_name: str) -> int:
"""Count unique run_ids across all sessions for an agent."""
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if not sessions_dir.exists():
return 0
run_ids: set[str] = set()
for session_dir in sessions_dir.iterdir():
if not session_dir.is_dir() or not session_dir.name.startswith("session_"):
continue
# runs.jsonl lives inside workspace subdirectories
for runs_file in session_dir.rglob("runs.jsonl"):
try:
for line in runs_file.read_text(encoding="utf-8").splitlines():
line = line.strip()
if not line:
continue
record = json.loads(line)
rid = record.get("run_id")
if rid:
run_ids.add(rid)
except Exception:
continue
return len(run_ids)
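To illustrate the dedup behavior, here is a runnable sketch of how `_count_runs` treats `runs.jsonl` lines; only the `run_id` field matters to the counter, and the sample record values are invented:

```python
import json

# Hypothetical runs.jsonl lines; only "run_id" is meaningful to the
# counter, the other fields are invented for illustration.
sample_lines = [
    '{"run_id": "run_a", "status": "ok"}',
    '{"run_id": "run_a", "status": "retried"}',  # same run logged twice
    '',                                          # blank lines are skipped
    '{"run_id": "run_b"}',
]

run_ids: set[str] = set()
for line in sample_lines:
    line = line.strip()
    if not line:
        continue
    rid = json.loads(line).get("run_id")
    if rid:
        run_ids.add(rid)

print(len(run_ids))  # → 2: run_a appears twice but counts once
```

The set-based dedup mirrors the production code: a run that spans multiple log entries (or multiple sessions) is still counted exactly once.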
def _extract_agent_stats(agent_path: Path) -> tuple[int, int, list[str]]:
"""Extract node count, tool count, and tags from an agent directory.
@@ -139,6 +165,7 @@ def discover_agents() -> dict[str, list[AgentEntry]]:
description=desc,
category=category,
session_count=_count_sessions(path.name),
run_count=_count_runs(path.name),
node_count=node_count,
tool_count=tool_count,
tags=tags,
@@ -14,8 +14,7 @@ queen_goal = Goal(
id="queen-manager",
name="Queen Manager",
description=(
"Manage the worker agent lifecycle and serve as the user's primary "
"interactive interface. Triage health escalations from the judge."
"Manage the worker agent lifecycle and serve as the user's primary interactive interface."
),
success_criteria=[],
constraints=[],
@@ -110,6 +110,10 @@ _QUEEN_STAGING_TOOLS = [
"stop_worker_and_edit",
"stop_worker_and_plan",
"write_to_diary", # Episodic memory — available in all phases
# Trigger management
"set_trigger",
"remove_trigger",
"list_triggers",
]
# Running phase: worker is executing — monitor and control.
@@ -126,11 +130,16 @@ _QUEEN_RUNNING_TOOLS = [
"stop_worker_and_edit",
"stop_worker_and_plan",
"get_worker_status",
"run_agent_with_input",
"inject_worker_message",
# Monitoring
"get_worker_health_summary",
"notify_operator",
"write_to_diary", # Episodic memory — available in all phases
# Trigger management
"set_trigger",
"remove_trigger",
"list_triggers",
]
@@ -173,12 +182,8 @@ search_files, or list_directory — those are YOUR tools, not theirs.
)
_planning_knowledge = """\
**A responsible engineer doesn't jump into building. First, \
understand the problem and be transparent about what the framework can and cannot do.**
Use the user's selection (or their custom description if they chose "Other") \
as context when shaping the goal below. If the user already described \
what they want before this step, skip the question and proceed directly.
**Be responsible: understand the problem by asking practical qualifying questions, \
and be transparent about what the framework can and cannot do.**
# Core Mandates (Planning)
- **DO NOT propose a complete goal on your own.** Instead, \
@@ -194,10 +199,12 @@ Before designing any agent, discover tools progressively — start compact, dril
what you need. ONLY use tools from this list in your node definitions. \
NEVER guess or fabricate tool names from memory.
list_agent_tools() # Step 1: provider summary (counts + credential status)
list_agent_tools(group="google", output_schema="summary") # Step 2: service breakdown within a provider
list_agent_tools(group="google", service="gmail") # Step 3: tool names for one service
list_agent_tools(group="google", service="gmail", output_schema="full") # Step 4: full detail for specific tools
list_agent_tools() # Step 1: provider summary
list_agent_tools(group="google", output_schema="summary") # Step 2: service breakdown
list_agent_tools(group="google", service="gmail") # Step 3: tool names
list_agent_tools( # Step 4: full detail
group="google", service="gmail", output_schema="full"
)
Step 1 is MANDATORY. Returns provider names, tool counts, and credential availability; very compact. \
Step 2 breaks a provider into services (e.g. google gmail/calendar/sheets/drive). Only do this \
@@ -208,30 +215,13 @@ Use credentials="available" at any step to filter to tools whose credentials are
# Discovery & Design Workflow
## 1: Fast Discovery (3-6 Turns)
## 1: Discovery (3-6 Turns)
**The core principle**: Discovery should feel like progress, not paperwork. \
The stakeholder should walk away feeling like you understood them faster \
than anyone else would have.
**Communication style**: Be concise. Say less. Mean more. Impatient stakeholders \
don't want a wall of text — they want to know you get it. Every sentence you say \
should either move the conversation forward or prove you understood something. \
If it does neither, cut it.
**Rules for Asking Questions: Respect Their Time.** Every question must earn its place by:
1. **Preventing a costly wrong turn**: you're about to build the wrong thing
2. **Unlocking a shortcut**: their answer lets you simplify the design
3. **Surfacing a dealbreaker**: there's a constraint that changes everything
4. **Providing options**: offer options with your questions if possible, \
but always allow the user to type something beyond the options.
If a question doesn't do one of these, don't ask it. Make an assumption, state it, and move on.
---
### 1.1: Let Them Talk, But Listen Like a Solution Architect
Ask questions to help the user bridge the goal and the solution. \
When the stakeholder describes what they want, mentally construct:
- **The pain**: What about today's situation is broken, slow, or missing?
@@ -242,57 +232,6 @@ When the stakeholder describes what they want, mentally construct:
---
### 1.2: Use Domain Knowledge to Fill In the Blanks
You have broad knowledge of how systems work. Use it aggressively.
If they say "I need a research agent," you already know it probably involves: \
search, summarization, source tracking, and iteration. Don't ask about each — \
use them as your starting mental model and let their specifics override your defaults.
If they say "I need to monitor files and alert me," you know this probably involves: \
watch patterns, triggers, notifications, and state tracking.
---
### 1.3: Play Back a Proposed Model (Not a List of Questions)
After listening, present a **concrete picture** of what you think they need. \
Make it specific enough that they can spot what's wrong. \
Use an ASCII diagram to show the user.
**Pattern: "Here's what I heard — tell me where I'm off"**
> "OK here's how I'm picturing this: [User type] needs to [core action]. \
Right now they're [current painful workflow]. \
What you want is [proposed solution that replaces the pain].
> The way I'd structure this: [key entities] connected by [key relationships], \
with the main flow being [trigger → steps → outcome].
> For the MVP, I'd focus on [the one thing that delivers the most value] \
and hold off on [things that can wait].
> Before I start: [1-2 specific questions you genuinely can't infer]."
---
### 1.4: Ask Only What You Cannot Infer
Your questions should be **narrow, specific, and consequential**. \
Never ask what you could answer yourself.
**Good questions** (high-stakes, can't infer):
- "Who's the primary user — you or your end customers?"
- "Is this replacing a spreadsheet, or is there literally nothing today?"
- "Does this need to integrate with anything, or standalone?"
- "Is there existing data to migrate, or starting fresh?"
**Bad questions** (low-stakes, inferable):
- "What should happen if there's an error?" *(handle gracefully, obviously)*
- "Should it have search?" *(if there's a list, yes)*
- "How should we handle permissions?" *(follow standard patterns)*
- "What tools should I use?" *(your call, not theirs)*
---
## 2: Capability Assessment & Gap Analysis
**After the user responds, assess fit and gaps together.** Be honest and specific. \
@@ -329,52 +268,10 @@ Example:
configured yet. Do you have a Google service account or OAuth credentials \
you can set up? If not, I can use CSV file output instead."
## 3: Design Graph and Create Draft
## 3: Design the flowchart
Act like an experienced AI solution architect. Design the agent architecture:
- Goal: id, name, description, 3-5 success criteria, 2-4 constraints
- Nodes: **3-6 nodes** (HARD RULE: never fewer than 3, never more than 6). \
2 nodes is ALWAYS wrong: it means you under-decomposed the task. \
Use as many nodes as the use case requires, but don't create nodes without \
tools → merge them into nodes that do real work.
- Edges: on_success for linear, conditional for routing
- Lifecycle: ALWAYS have terminal_nodes
**MERGE nodes when:**
- Node has NO tools (pure LLM reasoning) → merge into predecessor/successor
- Node sets only 1 trivial output → collapse into predecessor
**SEPARATE nodes when:**
- Fundamentally different tool sets (e.g., search vs. write vs. validate)
- Fan-out parallelism (parallel branches MUST be separate)
- Different failure/retry semantics (e.g., gather can retry, transform cannot)
- Distinct phases of work (e.g., research, transform, validate, deliver)
- A node would need more than ~5 tools → split by responsibility
**Typical patterns (queen manages all user interaction):**
- 3 nodes: `gather → work → review`
- 4 nodes: `gather → analyze → transform → review`
- 5 nodes: `gather → research → transform → validate → deliver`
- WRONG: 2 nodes where everything is crammed into one giant node
- WRONG: 7 nodes where half have no tools and just do LLM reasoning
Read reference agents before designing:
list_agents()
read_file("exports/deep_research_agent/agent.py")
read_file("exports/deep_research_agent/nodes/__init__.py")
**IMPORTANT: Call save_agent_draft() early and often.** \
The flowchart is a live collaboration artifact, not a final deliverable. \
Call save_agent_draft() as soon as you have a rough shape, even before \
all details are finalized. Then **update it interactively** as the \
conversation progresses:
- After the user gives feedback ("add a validation step", "split that node") \
→ immediately call save_agent_draft() with the updated graph so they see \
the change reflected in the visualizer.
- After you refine your understanding of requirements → update the draft.
- When the user asks "what about X?" and it changes the design → update.
- Don't wait until everything is perfect — iterate visually with the user.
Act like an experienced AI solution architect. Design the agent architecture \
in the flowchart
The flowchart is the shared canvas. Every structural change should be \
visible to the user immediately. The draft captures business logic \
@@ -411,16 +308,15 @@ with a unique color. You can override auto-detection by setting \
- **offpage_connector** (dark grey, pentagon): Cross-page link
**Domain-specific:**
- **browser** (dark indigo, hexagon): GCU browser automation
- **subagent** (dark teal, subroutine): Planning-only sub-agent delegation \
(dissolved into parent's sub_agents at build time)
- **browser** (dark indigo, hexagon): GCU browser automation / sub-agent \
delegation. At build time, browser nodes are dissolved into the parent \
node's sub_agents list. Use for any GCU or sub-agent leaf node.
Auto-detection works well for most cases: first node → start, nodes with \
no outgoing edges → terminal, nodes with multiple conditional outgoing \
edges → decision, GCU nodes → browser, nodes mentioning "database" → \
database, nodes mentioning "report/document" → document, etc. Set \
flowchart_type explicitly only when auto-detection would be wrong. \
Note: `subagent` is never auto-detected; you must set it explicitly.
flowchart_type explicitly only when auto-detection would be wrong.
## Decision Nodes — Planning-Only Conditional Branching
@@ -469,11 +365,11 @@ sub-agent nodes are **dissolved** into their parent node:
- At runtime, the parent node can invoke the sub-agent via `delegate_to_sub_agent`
**Rules for sub-agent nodes (INCLUDING GCU nodes):**
- Set `flowchart_type: "subagent"` explicitly (never auto-detected)
- GCU nodes are auto-detected as `flowchart_type: "browser"` (hexagon)
- Connect from the managing parent node to the sub-agent node
- Sub-agent nodes must be **leaf nodes** with NO outgoing edges to other nodes
- The sub-agent node's ID must match a real node ID in the runtime graph \
(the node it represents will be invokable as a sub-agent)
- At build time, browser/GCU nodes are dissolved into the parent's \
`sub_agents` list, just like decision nodes are dissolved into criteria
**CRITICAL: GCU nodes (`node_type: "gcu"`) are ALWAYS sub-agents.** \
They MUST NOT appear in the linear flow. NEVER chain GCU nodes \
@@ -481,50 +377,23 @@ sequentially (A → gcu1 → gcu2 → B is WRONG). Instead, attach them \
as leaves to the parent that orchestrates them:
```
WRONG: intake → gcu_find_prospect → gcu_scan_mutuals → check_results
WRONG: decision_node → gcu_node (as a yes/no branch)
RIGHT: intake (sub_agents: [gcu_find, gcu_scan]) → check_results
```
The parent node delegates to its GCU sub-agents and collects results. \
The main flow continues from the parent, not from the GCU node.
The main flow continues from the parent, not from the GCU node. \
GCU nodes MUST NOT be children of decision nodes: decision nodes \
dissolve at build time, which would leave the GCU as a dangling \
workflow step.
**How to show delegation in the flowchart:**
```
research → (deep_searcher)    ← subagent node, leaf
research → (deep_searcher)    ← browser/GCU node, leaf
research → [Enough results?]  ← decision node
```
After dissolution: `research` node gets `sub_agents: ["deep_searcher"]` \
and `success_criteria: "Enough results?"`.
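As a rough sketch of that dissolution step (the function, dict shape, and field names below are illustrative assumptions, not the framework's real code-gen API), folding planning-only leaves into their parent could look like:

```python
# Illustrative sketch of draft dissolution; NodeSpec is modeled as a
# plain dict and all names are hypothetical, not the real pipeline.
def dissolve(nodes: list[dict], edges: list[tuple[str, str]]):
    """Fold browser/GCU and decision leaf nodes into their parents."""
    out = {n["id"]: dict(n) for n in nodes}
    kept_edges = []
    for src, dst in edges:
        leaf = out.get(dst)
        if leaf and leaf.get("flowchart_type") == "browser":
            # browser/GCU leaf → parent's sub_agents list
            out[src].setdefault("sub_agents", []).append(dst)
            del out[dst]
        elif leaf and leaf.get("flowchart_type") == "decision":
            # decision leaf → parent's success criteria
            out[src]["success_criteria"] = leaf.get("decision_clause", "")
            del out[dst]
        else:
            kept_edges.append((src, dst))
    return list(out.values()), kept_edges

nodes = [
    {"id": "research"},
    {"id": "deep_searcher", "flowchart_type": "browser"},
    {"id": "enough", "flowchart_type": "decision",
     "decision_clause": "Enough results?"},
]
edges = [("research", "deep_searcher"), ("research", "enough")]
dissolved, kept = dissolve(nodes, edges)
# dissolved is just the "research" node, now carrying
# sub_agents=["deep_searcher"] and success_criteria="Enough results?"
```

The key property the sketch demonstrates: after dissolution, no edge points at a planning-only node, so the runtime graph contains only real workflow steps.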
After calling save_agent_draft(), also present an ASCII graph in your message \
alongside a brief summary of each node's purpose. The user sees both the \
interactive visualizer AND your textual explanation.
```
gather
  subagent: gcu_search
  input: user_request
  tools: load_data,
         save_data
    |
    | on_success
    v
work
  subagent: gcu_interact
  tools: load_data,
         save_data
    |
    | on_success
    v
review
  tools: save_data,
         serve_file_to_user
    |
    | on_failure
    v
  back to gather
```
If the worker agent starts from some initial input, that is okay. \
The queen (you) owns intake: you gather user requirements, then call \
`run_agent_with_input(task)` with a structured task description. \
@@ -636,8 +505,8 @@ nodes/__init__.py
- Goal description, success criteria values, constraint values, edge \
definitions, identity_prompt in agent.py
- CLI options in __main__.py
- For async entry points (timers/webhooks), add AsyncEntryPointSpec \
and AgentRuntimeConfig to agent.py
- For triggers (timers/webhooks), add entries to triggers.json in the \
agent's export directory
Do NOT modify or rewrite:
- Import statements at top of agent.py (they are correct)
@@ -672,12 +541,15 @@ _package_builder_knowledge = _shared_building_knowledge + _planning_knowledge +
_queen_identity_planning = """\
You are an experienced, responsible and curious Solution Architect. \
"Queen" is the internal alias. \
You ask smart questions to guide the user to the solution. \
You are in the PLANNING phase; your job is to either: \
(a) understand what the user wants and design a new agent, or \
(b) diagnose issues with an existing agent, discuss a fix plan with the user, \
then transition to building to implement. \
You have read-only tools for exploration but no write/edit tools. \
Focus on conversation, research, and design.\
Focus on conversation, research, and design. \
You MUST use ask_user / ask_user_multiple tools for ALL questions; \
never ask questions in plain text without calling the tool.\
"""
_queen_identity_building = """\
@@ -735,11 +607,12 @@ document, database, subprocess, etc.) with unique shapes and colors. Set \
flowchart_type on a node to override. Nodes need only an id. \
Use decision nodes (flowchart_type: "decision", with decision_clause and \
labeled yes/no edges) to make conditional branching explicit. \
Use subagent nodes (flowchart_type: "subagent") as leaf nodes connected \
to a parent to show sub-agent delegation visually.
GCU/sub-agent nodes (node_type: "gcu") are auto-detected as browser \
hexagons; connect them as leaf nodes to their parent.
- confirm_and_build() Record user confirmation of the draft. Dissolves \
planning-only nodes (decision → predecessor criteria; subagent → predecessor \
sub_agents list). Call this ONLY after the user explicitly approves via ask_user.
planning-only nodes (decision → predecessor criteria; browser/GCU → \
predecessor sub_agents list). Call this ONLY after the user explicitly \
approves via ask_user.
- initialize_and_build_agent(agent_name?, nodes?) Scaffold the agent package \
and transition to BUILDING phase. For new agents, this REQUIRES \
save_agent_draft() + confirm_and_build() first. The draft metadata is used to \
@@ -773,13 +646,14 @@ list_agent_checkpoints, get_agent_checkpoint
- load_built_agent(agent_path) Load the agent and switch to STAGING phase
- list_credentials(credential_id?) List authorized credentials
- save_agent_draft(...) **Re-draft the flowchart during building.** When \
called during building, planning-only nodes (decision, subagent) are \
called during building, planning-only nodes (decision, browser/GCU) are \
dissolved automatically; no re-confirmation needed. The user sees the \
updated flowchart immediately. Use this when you make structural changes \
(add/remove nodes, change edges) so the flowchart stays in sync.
- replan_agent() Switch back to PLANNING phase. The previous draft is \
restored (with decision/subagent nodes intact) so you can edit it. Use \
when the user requests a major redesign that needs their approval.
restored (with decision/browser nodes intact) so you can edit it. Use \
when the user wants to change integrations, swap tools, rethink the \
flow, or discuss any design changes before you build them.
When you finish building an agent, call load_built_agent(path) to stage it.
"""
@@ -795,6 +669,9 @@ The agent is loaded and ready to run. You can inspect it and launch it:
- stop_worker_and_plan() Go to PLANNING phase to discuss changes with the user \
first (DEFAULT for most modification requests)
- stop_worker_and_edit() Go to BUILDING phase for immediate, specific fixes
- set_trigger(trigger_id, trigger_type?, trigger_config?) Activate a trigger (timer)
- remove_trigger(trigger_id) Deactivate a trigger
- list_triggers() List all triggers and their active/inactive status
You do NOT have write tools. To modify the agent, prefer \
stop_worker_and_plan() unless the user gave a specific instruction.
@@ -817,6 +694,15 @@ with the user first (DEFAULT for most modification requests)
You do NOT have write tools. To modify the agent, prefer \
stop_worker_and_plan() unless the user gave a specific instruction. \
To just stop without modifying, call stop_worker().
- stop_worker_and_edit() Stop the worker and switch back to BUILDING phase
- set_trigger(trigger_id, trigger_type?, trigger_config?) Activate a trigger (timer)
- remove_trigger(trigger_id) Deactivate a trigger
- list_triggers() List all triggers and their active/inactive status
You do NOT have write tools or agent construction tools. \
If you need to modify the agent, call stop_worker_and_edit() to switch back \
to BUILDING phase. To stop the worker and ask the user what to do next, call \
stop_worker() to return to STAGING phase.
"""
# -- Behavior shared across all phases --
@@ -833,7 +719,8 @@ input unless you call one of these tools. You MUST call it as the LAST \
action in your response.
NEVER end a response with a question in text without calling ask_user. \
NEVER rely on the user seeing your text and replying; call ask_user.
NEVER rely on the user seeing your text and replying; call ask_user. \
NEVER list options as text bullets; the tool renders interactive buttons.
**When you have 2+ questions**, use ask_user_multiple instead of ask_user. \
This renders all questions at once so the user answers in one interaction \
@@ -847,21 +734,36 @@ appearing. Keep your text to a brief context/intro sentence only.
Always provide 2-4 short options that cover the most likely answers. \
The user can always type a custom response.
### WRONG — never do this:
```
I need a few details:
- Documentation Source: Where should the agent look?
- Trigger: Should the agent poll or get a URL?
- Review Channel: Slack, Email, or Sheets?
Which of these would you like to define first?
1. Documentation source
2. Trigger
3. Review channel
```
This lists questions as plain text with NO tool call: the user has no \
interactive widget, and the system doesn't know you're waiting for input.
### RIGHT — always do this:
Write a brief intro (1-2 sentences), then call the tool:
- ask_user_multiple(questions=[
{"id": "docs", "prompt": "Where should the agent find answers?",
"options": ["GitHub repo", "Documentation website", "Internal wiki"]},
{"id": "trigger", "prompt": "How should questions be discovered?",
"options": ["Poll search automatically", "I provide a URL"]},
{"id": "review", "prompt": "Where to send drafted responses?",
"options": ["Slack", "Email", "Google Sheets"]}
])
Examples (single question):
- ask_user("What do you need?",
["Build a new agent", "Run the loaded worker", "Help with code"])
- ask_user("Ready to proceed?",
["Yes, go ahead", "Let me change something"])
Example (multiple questions → ALWAYS use ask_user_multiple):
- ask_user_multiple(questions=[
{"id": "goal", "prompt": "What should this agent do?"},
{"id": "tools", "prompt": "Which integrations?",
"options": ["Slack", "Gmail", "Google Sheets"]},
{"id": "schedule", "prompt": "How often should it run?",
"options": ["On demand", "Every hour", "Daily"]}
])
## Greeting
When the user greets you, respond concisely (under 10 lines) with worker \
@@ -986,10 +888,30 @@ flowchart immediately.
- **Minor changes** (add a node, rename, adjust edges): call \
save_agent_draft() with the updated graph and keep building.
- **Major redesign** (user requests fundamental restructuring): call \
replan_agent() to go back to planning. The previous draft is restored \
so you can edit it with the user rather than starting from scratch. \
After they approve, confirm_and_build() → continue building.
- **User wants to discuss, redesign, or change integrations/tools**: call \
replan_agent(). The previous draft is restored so you can edit it with \
the user. After they approve, confirm_and_build() → continue building.
**When to call replan_agent():** Changing which tools or integrations a \
node uses, swapping data sources, rethinking the flow, or any time the \
user says "replan", "go back", "let's redesign", "change the approach", \
"use a different tool/API", etc. Do NOT stay in building to handle these \
switch to planning so the user can review and approve the new design.
## CRITICAL — Graph topology errors require replanning, not code edits
If you discover that the agent graph has structural problems (GCU nodes \
in the linear flow, missing edges, wrong node connections, incorrect \
sub-agent assignments), you MUST call replan_agent() and fix the draft. \
Do NOT attempt to fix topology by editing agent.py directly. The graph \
structure is defined by the draft → dissolution → code-gen pipeline. \
Editing code to rewire nodes bypasses the flowchart and creates drift \
between what the user sees and what the code does.
**WRONG:** "Let me fix agent.py to remove GCU nodes from edges..."
**RIGHT:** Call replan_agent(), fix the draft with save_agent_draft(), \
get user approval, then confirm_and_build(); the corrected code is \
generated automatically.
"""
# -- STAGING phase behavior --
@@ -1067,6 +989,33 @@ Use stop_worker_and_edit() only when:
- The user gave a specific, concrete instruction ("add save_data to the gather node")
- You already discussed the fix in a previous planning session
- The change is trivial and unambiguous (rename, toggle a flag)
## Trigger Management
Use list_triggers() to see available triggers from the loaded worker.
Use set_trigger(trigger_id) to activate a timer. Once active, triggers \
fire periodically and inject [TRIGGER: ...] messages so you can decide \
whether to call run_agent_with_input(task).
### When the user says "Enable trigger <id>" (or clicks Enable in the UI):
1. Call get_worker_status(focus="memory") to check if the worker has \
saved configuration (rules, preferences, settings from a prior run).
2. If memory contains saved config: compose a task string from it \
(e.g. "Process inbox emails using saved rules") and call \
set_trigger(trigger_id, task="...") immediately. Tell the user the \
trigger is now active and what schedule it uses. Do NOT ask them to \
provide the task; you derive it from memory.
3. If memory is empty (no prior run): tell the user the agent needs to \
run once first so its configuration can be saved. Offer to run it now. \
Once the worker finishes, enable the trigger.
4. If the user just provided config this session (rules/task context \
already in conversation): use that directly, no memory lookup needed. \
Enable the trigger immediately.
Never ask "what should the task be?" when enabling a trigger for an \
agent with a clear purpose. The task string is a brief description of \
what the worker does, derived from its saved state or your current context.
"""
# -- RUNNING phase behavior --
@@ -1081,12 +1030,24 @@ NOT ask the user directly.
You wake up when:
- The user explicitly addresses you
- A worker escalation arrives (`[WORKER_ESCALATION_REQUEST]`)
- An escalation ticket arrives from the judge
- The worker finishes (`[WORKER_TERMINAL]`)
If the user asks for progress, call get_worker_status() ONCE and report. \
If the summary mentions issues, follow up with get_worker_status(focus="issues").
## Subagent delegations (browser automation, GCU)
When the worker delegates to a subagent (e.g., GCU browser automation), expect it \
to take 2-5 minutes. During this time:
- Progress will show 0%; this is NORMAL. The subagent only calls set_output at the end.
- Check get_worker_status(focus="full") for "subagent_activity"; this shows the \
subagent's latest reasoning text and confirms it is making real progress.
- Do NOT conclude the subagent is stuck just because progress is 0% or because \
you see repeated browser_click/browser_snapshot calls; that is the expected \
pattern for web scraping.
- Only intervene if: the subagent has been running for 5+ minutes with no new \
subagent_activity updates, OR the judge escalates.
## Handling worker termination ([WORKER_TERMINAL])
When you receive a `[WORKER_TERMINAL]` event, the worker has finished:
@@ -1115,19 +1076,30 @@ IMPORTANT: Only auto-handle if the user has NOT explicitly told you how to handl
escalations. If the user gave you instructions (e.g., "just retry on errors", \
"skip any auth issues"), follow those instructions instead.
CRITICAL escalation relay protocol:
When an escalation requires user input (auth blocks, human review), the worker \
or its subagent is BLOCKED and waiting for your response. You MUST follow this \
exact two-step sequence:
Step 1: call ask_user() to get the user's answer.
Step 2: call inject_worker_message() with the user's answer IMMEDIATELY after.
If you skip Step 2, the worker/subagent stays blocked FOREVER and the task hangs. \
NEVER respond to the user without also calling inject_worker_message() to unblock \
the worker. Even if the user says "skip" or "cancel", you must still relay that \
decision via inject_worker_message() so the worker can clean up.
**Auth blocks / credential issues:**
- ALWAYS ask the user (unless user explicitly told you how to handle this).
- The worker cannot proceed without valid credentials.
- Explain which credential is missing or invalid.
- Use ask_user to get guidance: "Provide credentials", "Skip this task", "Stop and edit agent"
- Use inject_worker_message() to relay user decisions back to the worker.
- Step 1: ask_user for guidance: "Provide credentials", "Skip this task", "Stop and edit agent"
- Step 2: inject_worker_message() with the user's response to unblock the worker.
**Need human review / approval:**
- ALWAYS ask the user (unless user explicitly told you how to handle this).
- The worker is explicitly requesting human judgment.
- Present the context clearly (what decision is needed, what are the options).
- Use ask_user with the actual decision options.
- Use inject_worker_message() to relay user decisions back to the worker.
- Step 1: ask_user with the actual decision options.
- Step 2: inject_worker_message() with the user's decision to unblock the worker.
**Errors / unexpected failures:**
- Explain what went wrong in plain terms.
@@ -1135,6 +1107,7 @@ escalations. If the user gave you instructions (e.g., "just retry on errors", \
- Or offer: "Diagnose the issue" use stop_worker_and_plan() to investigate first.
- Or offer: "Retry as-is", "Skip this task", "Abort run"
- (Skip asking if user explicitly told you to auto-retry or auto-skip errors.)
- If the escalation had wait_for_response: inject_worker_message() with the decision.
**Informational / progress updates:**
- Acknowledge briefly and let the worker continue.
@@ -1159,6 +1132,21 @@ When the user asks to fix, change, modify, or update the loaded worker \
**Default: use stop_worker_and_plan().** Most modification requests need \
discussion first. Only use stop_worker_and_edit() when the user gave a \
specific, unambiguous instruction or you already agreed on the fix.
## Trigger Handling
You will receive [TRIGGER: ...] messages when a scheduled timer fires. \
These are framework-level signals, not user messages.
Rules:
- Check get_worker_status() before calling run_agent_with_input(task). If the worker \
is already RUNNING, decide: skip this trigger, or note it for after completion.
- When multiple [TRIGGER] messages arrive at once, read them all before acting. \
Batch your response; do not call run_agent_with_input() once per trigger.
- If a trigger fires but the task no longer makes sense (e.g., user changed \
config since last run), skip it and inform the user.
- Never disable a trigger without telling the user. Use remove_trigger() only \
when explicitly asked or when the trigger is clearly obsolete.
"""
# -- Backward-compatible composed versions (used by queen_node.system_prompt default) --
@@ -1222,8 +1210,8 @@ ticket_triage_node = NodeSpec(
id="ticket_triage",
name="Ticket Triage",
description=(
"Queen's triage node. Receives an EscalationTicket from the Health Judge "
"via event-driven entry point and decides: dismiss or notify the operator."
"Queen's triage node. Receives an EscalationTicket via event-driven "
"entry point and decides: dismiss or notify the operator."
),
node_type="event_loop",
client_facing=True, # Operator can chat with queen once connected (Ctrl+Q)
@@ -1237,8 +1225,8 @@ ticket_triage_node = NodeSpec(
),
tools=["notify_operator"],
system_prompt="""\
You are the Queen. The Worker Health Judge has escalated a worker \
issue to you. The ticket is in your memory under key "ticket". Read it carefully.
You are the Queen. A worker health issue has been escalated to you. \
The ticket is in your memory under key "ticket". Read it carefully.
## Dismiss criteria — do NOT call notify_operator:
- severity is "low" AND steps_since_last_accept < 8
@@ -1277,7 +1265,7 @@ queen_node = NodeSpec(
description=(
"User's primary interactive interface with full coding capability. "
"Can build agents directly or delegate to the worker. Manages the "
"worker agent lifecycle and triages health escalations from the judge."
"worker agent lifecycle."
),
node_type="event_loop",
client_facing=True,
@@ -27,7 +27,9 @@
## GCU Errors
15. **Manually wiring browser tools on event_loop nodes** — Use `node_type="gcu"` which auto-includes browser tools. Do NOT manually list browser tool names.
16. **Using GCU nodes as regular graph nodes** — GCU nodes are subagents only. They must ONLY appear in `sub_agents=["gcu-node-id"]` and be invoked via `delegate_to_sub_agent()`. Never connect via edges or use as entry/terminal nodes.
17. **Reusing the same GCU node ID for parallel tasks** — Each concurrent browser task needs a distinct GCU node ID (e.g. `gcu-site-a`, `gcu-site-b`). Two `delegate_to_sub_agent` calls with the same `agent_id` share a browser profile and will interfere with each other's pages.
18. **Passing `profile=` in GCU tool calls** — Profile isolation for parallel subagents is automatic. The framework injects a unique profile per subagent via an asyncio `ContextVar`. Hardcoding `profile="default"` in a GCU system prompt breaks this isolation.
## Worker Agent Errors
17. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
18. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.
19. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
20. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.
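The rule about synthetic tools can be enforced with a small lint-style check before export. A sketch only: the SYNTHETIC_TOOLS set here is illustrative, and the framework may inject a different set at runtime.

```python
# Hedged sketch of the rule above: synthetic framework tools are auto-injected
# at runtime and must not appear in a NodeSpec tools=[] list.
SYNTHETIC_TOOLS = {"set_output", "ask_user", "ask_user_multiple", "escalate"}

def invalid_node_tools(tools: list[str]) -> list[str]:
    """Return entries that should be removed from tools=[] before export."""
    return [t for t in tools if t in SYNTHETIC_TOOLS]
```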
@@ -332,81 +332,46 @@ class MyAgent:
default_agent = MyAgent()
```
## agent.py — Async Entry Points Variant
## triggers.json — Timer and Webhook Triggers
When an agent needs timers, webhooks, or event-driven triggers, add
`async_entry_points` and optionally `runtime_config` as module-level variables.
These are IN ADDITION to the standard variables above.
When an agent needs timers, webhooks, or event-driven triggers, create a
`triggers.json` file in the agent's directory (alongside `agent.py`).
The queen loads these at session start and the user can manage them via
the `set_trigger` / `remove_trigger` tools at runtime.
```python
# Additional imports for async entry points
from framework.graph.edge import GraphSpec, AsyncEntryPointSpec
from framework.runtime.agent_runtime import (
AgentRuntime, AgentRuntimeConfig, create_agent_runtime,
)
# ... (goal, nodes, edges, entry_node, entry_points, etc. as above) ...
# Async entry points — event-driven triggers
async_entry_points = [
# Timer with cron: daily at 9am
AsyncEntryPointSpec(
id="daily-check",
name="Daily Check",
entry_node="process-node",
trigger_type="timer",
trigger_config={"cron": "0 9 * * *"},
isolation_level="shared",
max_concurrent=1,
),
# Timer with fixed interval: every 20 minutes
AsyncEntryPointSpec(
id="scheduled-check",
name="Scheduled Check",
entry_node="process-node",
trigger_type="timer",
trigger_config={"interval_minutes": 20, "run_immediately": False},
isolation_level="shared",
max_concurrent=1,
),
# Event: reacts to webhook events
AsyncEntryPointSpec(
id="webhook-event",
name="Webhook Event Handler",
entry_node="process-node",
trigger_type="event",
trigger_config={"event_types": ["webhook_received"]},
isolation_level="shared",
max_concurrent=10,
),
```json
[
{
"id": "daily-check",
"name": "Daily Check",
"trigger_type": "timer",
"trigger_config": {"cron": "0 9 * * *"},
"task": "Run the daily check process"
},
{
"id": "scheduled-check",
"name": "Scheduled Check",
"trigger_type": "timer",
"trigger_config": {"interval_minutes": 20},
"task": "Run the scheduled check"
},
{
"id": "webhook-event",
"name": "Webhook Event Handler",
"trigger_type": "webhook",
"trigger_config": {"event_types": ["webhook_received"]},
"task": "Process incoming webhook event"
}
]
# Webhook server config (only needed if using webhooks)
runtime_config = AgentRuntimeConfig(
webhook_host="127.0.0.1",
webhook_port=8080,
webhook_routes=[
{
"source_id": "my-source",
"path": "/webhooks/my-source",
"methods": ["POST"],
},
],
)
```
**Key rules for async entry points:**
- `async_entry_points` is a list of `AsyncEntryPointSpec` (NOT `EntryPointSpec`)
- `runtime_config` is `AgentRuntimeConfig` (NOT `RuntimeConfig` from config.py)
- Valid trigger_types: `timer`, `event`, `webhook`, `manual`, `api`
- Valid isolation_levels: `isolated`, `shared`, `synchronized`
**Key rules for triggers.json:**
- Valid trigger_types: `timer`, `webhook`
- Timer trigger_config (cron): `{"cron": "0 9 * * *"}` — standard 5-field cron expression
- Timer trigger_config (interval): `{"interval_minutes": float, "run_immediately": bool}`
- Event trigger_config: `{"event_types": ["webhook_received"], "filter_stream": "...", "filter_node": "..."}`
- Use `isolation_level="shared"` for async entry points that need to read
the primary session's memory (e.g., user-configured rules)
- The `_build_graph()` method passes `async_entry_points` to GraphSpec
- Reference: `exports/gmail_inbox_guardian/agent.py`
- Timer trigger_config (interval): `{"interval_minutes": float}`
- Each trigger must have a unique `id`
- The `task` field describes what the worker should do when the trigger fires
- Triggers are persisted back to `triggers.json` when modified via queen tools
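The key rules above can be enforced when loading the file. A minimal sketch, assuming the rules as stated; the framework's actual loader may validate differently:

```python
import json

def load_triggers(raw: str) -> list[dict]:
    """Parse triggers.json content and enforce the key rules above."""
    triggers = json.loads(raw)
    seen: set[str] = set()
    for t in triggers:
        if t["trigger_type"] not in {"timer", "webhook"}:
            raise ValueError(f"invalid trigger_type: {t['trigger_type']!r}")
        if t["id"] in seen:  # each trigger must have a unique id
            raise ValueError(f"duplicate trigger id: {t['id']!r}")
        seen.add(t["id"])
        cfg = t.get("trigger_config", {})
        if t["trigger_type"] == "timer" and not ({"cron", "interval_minutes"} & cfg.keys()):
            raise ValueError(f"timer {t['id']!r} needs cron or interval_minutes")
    return triggers
```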
## __init__.py
@@ -453,21 +418,6 @@ __all__ = [
]
```
**If the agent uses async entry points**, also import and export:
```python
from .agent import (
...,
async_entry_points,
runtime_config, # Only if using webhooks
)
__all__ = [
...,
"async_entry_points",
"runtime_config",
]
```
## __main__.py
```python
@@ -31,8 +31,7 @@ module-level variables via `getattr()`:
| `conversation_mode` | no | not passed | Isolated mode (no context carryover) |
| `identity_prompt` | no | not passed | No agent-level identity |
| `loop_config` | no | `{}` | No iteration limits |
| `async_entry_points` | no | `[]` | No async triggers (timers, webhooks, events) |
| `runtime_config` | no | `None` | No webhook server |
| `triggers.json` (file) | no | not present | No triggers (timers, webhooks) |
**CRITICAL:** `__init__.py` MUST import and re-export ALL of these from
`agent.py`. Missing exports silently fall back to defaults, causing
@@ -257,44 +256,28 @@ Multiple ON_SUCCESS edges from same source → parallel execution via asyncio.ga
Judge is the SOLE acceptance mechanism — no ad-hoc framework gating.
## Async Entry Points (Webhooks, Timers, Events)
## Triggers (Timers, Webhooks)
For agents that react to external events, use `AsyncEntryPointSpec`:
For agents that react to external events, create a `triggers.json` file
in the agent's export directory:
```python
from framework.graph.edge import AsyncEntryPointSpec
from framework.runtime.agent_runtime import AgentRuntimeConfig
# Timer trigger (cron or interval)
async_entry_points = [
AsyncEntryPointSpec(
id="daily-check",
name="Daily Check",
entry_node="process",
trigger_type="timer",
trigger_config={"cron": "0 9 * * *"}, # daily at 9am
isolation_level="shared",
)
```json
[
{
"id": "daily-check",
"name": "Daily Check",
"trigger_type": "timer",
"trigger_config": {"cron": "0 9 * * *"},
"task": "Run the daily check process"
}
]
# Webhook server (optional)
runtime_config = AgentRuntimeConfig(
webhook_host="127.0.0.1",
webhook_port=8080,
webhook_routes=[{"source_id": "gmail", "path": "/webhooks/gmail", "methods": ["POST"]}],
)
```
### Key Fields
- `trigger_type`: `"timer"`, `"event"`, `"webhook"`, `"manual"`
- `trigger_type`: `"timer"` or `"webhook"`
- `trigger_config`: `{"cron": "0 9 * * *"}` or `{"interval_minutes": 20}`
- `isolation_level`: `"shared"` (recommended), `"isolated"`, `"synchronized"`
- `event_types`: For event triggers, e.g., `["webhook_received"]`
### Exports Required
Both `async_entry_points` and `runtime_config` must be exported from `__init__.py`.
See `exports/gmail_inbox_guardian/agent.py` for complete example.
- `task`: describes what the worker should do when the trigger fires
- Triggers can also be created/removed at runtime via `set_trigger` / `remove_trigger` queen tools
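The 5-field cron expressions shown above (minute, hour, day-of-month, month, day-of-week) can be structurally checked before registering a trigger. A loose sketch: real schedulers accept richer syntax (names like `MON`, `@daily` shortcuts), which this deliberately ignores.

```python
# Bounds for the five cron fields: minute, hour, day-of-month, month, day-of-week.
_CRON_BOUNDS = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]

def is_valid_cron(expr: str) -> bool:
    """Loosely validate the common forms: *, N, N-M, comma lists, */step."""
    fields = expr.split()
    if len(fields) != len(_CRON_BOUNDS):
        return False
    for field, (lo, hi) in zip(fields, _CRON_BOUNDS):
        for part in field.split(","):
            part = part.split("/")[0]  # strip step suffix: */5 -> *
            if part == "*":
                continue
            ends = part.split("-")     # range: 1-5 -> ["1", "5"]
            if not all(p.isdigit() and lo <= int(p) <= hi for p in ends):
                return False
    return True
```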
## Tool Discovery
@@ -109,6 +109,45 @@ Key rules to bake into GCU node prompts:
- Keep tool calls per turn ≤10
- Tab isolation: when browser is already running, use `browser_open(background=true)` and pass `target_id` to every call
## Multiple Concurrent GCU Subagents
When a task can be parallelized across multiple sites or profiles, declare a distinct GCU
node for each and invoke them all in the same LLM turn. The framework batches all
`delegate_to_sub_agent` calls made in one turn and runs them with `asyncio.gather`, so
they execute concurrently — not sequentially.
**Each GCU subagent automatically gets its own isolated browser context** — no `profile=`
argument is needed in tool calls. The framework derives a unique profile from the subagent's
node ID and instance counter and injects it via an asyncio `ContextVar` before the subagent
runs.
### Example: three sites in parallel
```python
# Three distinct GCU nodes
gcu_site_a = NodeSpec(id="gcu-site-a", node_type="gcu", ...)
gcu_site_b = NodeSpec(id="gcu-site-b", node_type="gcu", ...)
gcu_site_c = NodeSpec(id="gcu-site-c", node_type="gcu", ...)
orchestrator = NodeSpec(
id="orchestrator",
node_type="event_loop",
sub_agents=["gcu-site-a", "gcu-site-b", "gcu-site-c"],
system_prompt="""\
Call all three subagents in a single response to run them in parallel:
delegate_to_sub_agent(agent_id="gcu-site-a", task="Scrape prices from site A")
delegate_to_sub_agent(agent_id="gcu-site-b", task="Scrape prices from site B")
delegate_to_sub_agent(agent_id="gcu-site-c", task="Scrape prices from site C")
""",
)
```
**Rules:**
- Use distinct node IDs for each concurrent task — sharing an ID shares the browser context.
- The GCU node prompts do not need to mention `profile=`; isolation is automatic.
- Cleanup is automatic at session end, but GCU nodes can call `browser_stop()` explicitly
if they want to release resources mid-run.
## GCU Anti-Patterns
- Using `browser_screenshot` to read text (use `browser_snapshot`)
@@ -1,8 +1,8 @@
"""Queen's ticket receiver entry point.
When the Worker Health Judge emits a WORKER_ESCALATION_TICKET event on the
shared EventBus, this entry point fires and routes to the ``ticket_triage``
node, where the Queen deliberates and decides whether to notify the operator.
When a WORKER_ESCALATION_TICKET event is emitted on the shared EventBus,
this entry point fires and routes to the ``ticket_triage`` node, where the
Queen deliberates and decides whether to notify the operator.
Isolation level is ``isolated``: the queen's triage memory is kept separate
from the worker's shared memory. Each ticket triage runs in its own context.
@@ -121,6 +121,14 @@ def get_gcu_enabled() -> bool:
return get_hive_config().get("gcu_enabled", True)
def get_gcu_viewport_scale() -> float:
"""Return GCU viewport scale factor (0.1-1.0), default 0.8."""
scale = get_hive_config().get("gcu_viewport_scale", 0.8)
if isinstance(scale, (int, float)) and 0.1 <= scale <= 1.0:
return float(scale)
return 0.8
def get_api_base() -> str | None:
"""Return the api_base URL for OpenAI-compatible endpoints, if configured."""
llm = get_hive_config().get("llm", {})
@@ -322,7 +322,11 @@ class AsyncEntryPointSpec(BaseModel):
id: str = Field(description="Unique identifier for this entry point")
name: str = Field(description="Human-readable name")
entry_node: str = Field(description="Node ID to start execution from")
entry_node: str = Field(
default="",
description="Deprecated: Node ID to start execution from. "
"Triggers are graph-level; worker always enters at GraphSpec.entry_node.",
)
trigger_type: str = Field(
default="manual",
description="How this entry point is triggered: webhook, api, timer, event, manual",
@@ -331,6 +335,10 @@ class AsyncEntryPointSpec(BaseModel):
default_factory=dict,
description="Trigger-specific configuration (e.g., webhook URL, timer interval)",
)
task: str = Field(
default="",
description="Worker task string when this trigger fires autonomously",
)
isolation_level: str = Field(
default="shared", description="State isolation: isolated, shared, or synchronized"
)
@@ -368,28 +376,8 @@ class GraphSpec(BaseModel):
edges=[...],
)
For multi-entry-point agents (concurrent streams):
GraphSpec(
id="support-agent-graph",
goal_id="support-001",
entry_node="process-webhook", # Default entry
async_entry_points=[
AsyncEntryPointSpec(
id="webhook",
name="Zendesk Webhook",
entry_node="process-webhook",
trigger_type="webhook",
),
AsyncEntryPointSpec(
id="api",
name="API Handler",
entry_node="process-request",
trigger_type="api",
),
],
nodes=[...],
edges=[...],
)
Triggers (timer, webhook, event) are now defined in ``triggers.json``
alongside the agent directory, not embedded in the graph spec.
"""
id: str
@@ -402,12 +390,6 @@ class GraphSpec(BaseModel):
default_factory=dict,
description="Named entry points for resuming execution. Format: {name: node_id}",
)
async_entry_points: list[AsyncEntryPointSpec] = Field(
default_factory=list,
description=(
"Asynchronous entry points for concurrent execution streams (used with AgentRuntime)"
),
)
terminal_nodes: list[str] = Field(
default_factory=list, description="IDs of nodes that end execution"
)
@@ -486,17 +468,6 @@ class GraphSpec(BaseModel):
return node
return None
def has_async_entry_points(self) -> bool:
"""Check if this graph uses async entry points (multi-stream execution)."""
return len(self.async_entry_points) > 0
def get_async_entry_point(self, entry_point_id: str) -> AsyncEntryPointSpec | None:
"""Get an async entry point by ID."""
for ep in self.async_entry_points:
if ep.id == entry_point_id:
return ep
return None
def get_outgoing_edges(self, node_id: str) -> list[EdgeSpec]:
"""Get all edges leaving a node, sorted by priority."""
edges = [e for e in self.edges if e.source == node_id]
@@ -587,37 +558,6 @@ class GraphSpec(BaseModel):
if not self.get_node(self.entry_node):
errors.append(f"Entry node '{self.entry_node}' not found")
# Check async entry points
seen_entry_ids = set()
for entry_point in self.async_entry_points:
# Check for duplicate IDs
if entry_point.id in seen_entry_ids:
errors.append(f"Duplicate async entry point ID: '{entry_point.id}'")
seen_entry_ids.add(entry_point.id)
# Check entry node exists
if not self.get_node(entry_point.entry_node):
errors.append(
f"Async entry point '{entry_point.id}' references "
f"missing node '{entry_point.entry_node}'"
)
# Validate isolation level
valid_isolation = {"isolated", "shared", "synchronized"}
if entry_point.isolation_level not in valid_isolation:
errors.append(
f"Async entry point '{entry_point.id}' has invalid isolation_level "
f"'{entry_point.isolation_level}'. Valid: {valid_isolation}"
)
# Validate trigger type
valid_triggers = {"webhook", "api", "timer", "event", "manual"}
if entry_point.trigger_type not in valid_triggers:
errors.append(
f"Async entry point '{entry_point.id}' has invalid trigger_type "
f"'{entry_point.trigger_type}'. Valid: {valid_triggers}"
)
# Check terminal nodes exist
for term in self.terminal_nodes:
if not self.get_node(term):
@@ -646,10 +586,6 @@ class GraphSpec(BaseModel):
for entry_point_node in self.entry_points.values():
to_visit.append(entry_point_node)
# Add all async entry points as valid starting points
for async_entry in self.async_entry_points:
to_visit.append(async_entry.entry_node)
# Traverse from all entry points
while to_visit:
current = to_visit.pop()
@@ -666,18 +602,10 @@ class GraphSpec(BaseModel):
for sub_agent_id in sub_agents:
reachable.add(sub_agent_id)
# Build set of async entry point nodes for quick lookup
async_entry_nodes = {ep.entry_node for ep in self.async_entry_points}
for node in self.nodes:
if node.id not in reachable:
# Skip if node is a pause node, entry point target, or async entry
# (pause/resume architecture and async entry points make reachable)
if (
node.id in self.pause_nodes
or node.id in self.entry_points.values()
or node.id in async_entry_nodes
):
# Skip if node is a pause node or entry point target
if node.id in self.pause_nodes or node.id in self.entry_points.values():
continue
errors.append(f"Node '{node.id}' is unreachable from entry")
@@ -36,6 +36,21 @@ from framework.runtime.llm_debug_logger import log_llm_turn
logger = logging.getLogger(__name__)
@dataclass
class TriggerEvent:
"""A framework-level trigger signal (timer tick or webhook hit).
Triggers are queued separately from user messages / external events
and drained atomically so the LLM sees all pending triggers at once.
"""
trigger_type: str # "timer" | "webhook"
source_id: str # entry point ID or webhook route ID
payload: dict[str, Any] = field(default_factory=dict)
timestamp: float = field(default_factory=time.time)
# Pattern for detecting context-window-exceeded errors across LLM providers.
_CONTEXT_TOO_LARGE_RE = re.compile(
r"context.{0,20}(length|window|limit|size)|"
@@ -346,6 +361,7 @@ class EventLoopNode(NodeProtocol):
self._tool_executor = tool_executor
self._conversation_store = conversation_store
self._injection_queue: asyncio.Queue[tuple[str, bool]] = asyncio.Queue()
self._trigger_queue: asyncio.Queue[TriggerEvent] = asyncio.Queue()
# Client-facing input blocking state
self._input_ready = asyncio.Event()
self._awaiting_input = False
@@ -631,6 +647,8 @@ class EventLoopNode(NodeProtocol):
# 6b. Drain injection queue
await self._drain_injection_queue(conversation)
# 6b1. Drain trigger queue (framework-level signals)
await self._drain_trigger_queue(conversation)
# 6b2. Dynamic tool refresh (mode switching)
if ctx.dynamic_tools_provider is not None:
@@ -656,8 +674,20 @@ class EventLoopNode(NodeProtocol):
conversation.update_system_prompt(_new_prompt)
logger.info("[%s] Dynamic prompt updated (phase switch)", node_id)
# 6c. Publish iteration event
await self._publish_iteration(stream_id, node_id, iteration, execution_id)
# 6c. Publish iteration event (with per-iteration metadata when available)
_iter_meta = None
if ctx.iteration_metadata_provider is not None:
try:
_iter_meta = ctx.iteration_metadata_provider()
except Exception:
pass
await self._publish_iteration(
stream_id,
node_id,
iteration,
execution_id,
extra_data=_iter_meta,
)
# 6d. Pre-turn compaction check (tiered)
_compacted_this_iter = False
@@ -1062,8 +1092,12 @@ class EventLoopNode(NodeProtocol):
mcp_tool_calls = [
tc
for tc in logged_tool_calls
if tc.get("tool_name") not in (
"set_output", "ask_user", "ask_user_multiple", "escalate",
if tc.get("tool_name")
not in (
"set_output",
"ask_user",
"ask_user_multiple",
"escalate",
)
]
if mcp_tool_calls:
@@ -1262,9 +1296,24 @@ class EventLoopNode(NodeProtocol):
multi_qs = getattr(self, "_pending_multi_questions", None)
self._pending_multi_questions = None
got_input = await self._await_user_input(
ctx, prompt=_cf_prompt, options=ask_user_options,
ctx,
prompt=_cf_prompt,
options=ask_user_options,
questions=multi_qs,
)
# Emit deferred tool_call_completed for ask_user / ask_user_multiple
deferred = getattr(self, "_deferred_tool_complete", None)
if deferred:
self._deferred_tool_complete = None
await self._publish_tool_completed(
deferred["stream_id"],
deferred["node_id"],
deferred["tool_use_id"],
deferred["tool_name"],
deferred["content"],
deferred["is_error"],
deferred["execution_id"],
)
logger.info("[%s] iter=%d: unblocked, got_input=%s", node_id, iteration, got_input)
if not got_input:
await self._publish_loop_completed(
@@ -1719,6 +1768,15 @@ class EventLoopNode(NodeProtocol):
await self._injection_queue.put((content, is_client_input))
self._input_ready.set()
async def inject_trigger(self, trigger: TriggerEvent) -> None:
"""Inject a framework-level trigger into the running queen loop.
Triggers are queued separately from user messages and drained
atomically via _drain_trigger_queue().
"""
await self._trigger_queue.put(trigger)
self._input_ready.set()
def signal_shutdown(self) -> None:
"""Signal the node to exit its loop cleanly.
@@ -1769,9 +1827,9 @@ class EventLoopNode(NodeProtocol):
Returns True if input arrived, False if shutdown was signaled.
"""
# If messages arrived while the LLM was processing, skip blocking
# entirely — the next _drain_injection_queue() will pick them up.
if not self._injection_queue.empty():
# If messages or triggers arrived while the LLM was processing, skip
# blocking — the next drain pass will pick them up.
if not self._injection_queue.empty() or not self._trigger_queue.empty():
return True
# Clear BEFORE emitting so that synchronous handlers (e.g. the
@@ -1862,6 +1920,11 @@ class EventLoopNode(NodeProtocol):
# Accumulate ALL tool calls across inner iterations for L3 logging.
# Unlike real_tool_results (reset each inner iteration), this persists.
logged_tool_calls: list[dict] = []
# Counter for LLM calls within a single iteration. Each pass through
# the inner tool loop starts a fresh LLM stream whose snapshot resets
# to "". Without this, all calls share the same message ID on the
# frontend and the second call's text silently replaces the first.
inner_turn = 0
# Inner tool loop: stream may produce tool calls requiring re-invocation
while True:
@@ -1902,6 +1965,7 @@ class EventLoopNode(NodeProtocol):
async def _do_stream(
_msgs: list = messages, # noqa: B006
_tc: list[ToolCallEvent] = tool_calls, # noqa: B006
inner_turn: int = inner_turn,
) -> None:
nonlocal accumulated_text, _stream_error
async for event in ctx.llm.stream(
@@ -1920,6 +1984,7 @@ class EventLoopNode(NodeProtocol):
ctx,
execution_id,
iteration=iteration,
inner_turn=inner_turn,
)
elif isinstance(event, ToolCallEvent):
@@ -2148,6 +2213,7 @@ class EventLoopNode(NodeProtocol):
ctx=ctx,
execution_id=execution_id,
iteration=iteration,
inner_turn=inner_turn,
)
result = ToolResult(
@@ -2180,7 +2246,7 @@ class EventLoopNode(NodeProtocol):
for i, q in enumerate(raw_questions):
if not isinstance(q, dict):
continue
qid = str(q.get("id", f"q{i+1}"))
qid = str(q.get("id", f"q{i + 1}"))
prompt = str(q.get("prompt", ""))
opts = q.get("options", None)
if isinstance(opts, list):
@@ -2189,11 +2255,13 @@ class EventLoopNode(NodeProtocol):
opts = None
else:
opts = None
questions.append({
"id": qid,
"prompt": prompt,
**({"options": opts} if opts else {}),
})
questions.append(
{
"id": qid,
"prompt": prompt,
**({"options": opts} if opts else {}),
}
)
# Store as multi-question prompt/options for
# the event emission path
@@ -2477,15 +2545,27 @@ class EventLoopNode(NodeProtocol):
content=result.content,
is_error=result.is_error,
)
await self._publish_tool_completed(
stream_id,
node_id,
tc.tool_use_id,
tc.tool_name,
result.content,
result.is_error,
execution_id,
)
if tc.tool_name in ("ask_user", "ask_user_multiple"):
# Defer tool_call_completed until after user responds
self._deferred_tool_complete = {
"stream_id": stream_id,
"node_id": node_id,
"tool_use_id": tc.tool_use_id,
"tool_name": tc.tool_name,
"content": result.content,
"is_error": result.is_error,
"execution_id": execution_id,
}
else:
await self._publish_tool_completed(
stream_id,
node_id,
tc.tool_use_id,
tc.tool_name,
result.content,
result.is_error,
execution_id,
)
# If the limit was hit, add error results for every remaining
# tool call so the conversation stays consistent. Without this,
@@ -2587,6 +2667,7 @@ class EventLoopNode(NodeProtocol):
)
# Tool calls processed -- loop back to stream with updated conversation
inner_turn += 1
# -------------------------------------------------------------------
# Synthetic tools: set_output, ask_user, escalate
@@ -2685,8 +2766,7 @@ class EventLoopNode(NodeProtocol):
"id": {
"type": "string",
"description": (
"Short identifier for this question "
"(used in the response)."
"Short identifier for this question (used in the response)."
),
},
"prompt": {
@@ -4015,6 +4095,34 @@ class EventLoopNode(NodeProtocol):
break
return count
async def _drain_trigger_queue(self, conversation: NodeConversation) -> int:
"""Drain all pending trigger events as a single batched user message.
Multiple triggers are merged so the LLM sees them atomically and can
reason about all pending triggers before acting.
"""
triggers: list[TriggerEvent] = []
while not self._trigger_queue.empty():
try:
triggers.append(self._trigger_queue.get_nowait())
except asyncio.QueueEmpty:
break
if not triggers:
return 0
parts: list[str] = []
for t in triggers:
task = t.payload.get("task", "")
task_line = f"\nTask: {task}" if task else ""
payload_str = json.dumps(t.payload, default=str)
parts.append(f"[TRIGGER: {t.trigger_type}/{t.source_id}]{task_line}\n{payload_str}")
combined = "\n\n".join(parts)
logger.info("[drain] %d trigger(s): %s", len(triggers), combined[:200])
await conversation.add_user_message(combined)
return len(triggers)
async def _check_pause(
self,
ctx: NodeContext,
@@ -4149,7 +4257,12 @@ class EventLoopNode(NodeProtocol):
await conversation.add_user_message(result.inject)
async def _publish_iteration(
self, stream_id: str, node_id: str, iteration: int, execution_id: str = ""
self,
stream_id: str,
node_id: str,
iteration: int,
execution_id: str = "",
extra_data: dict | None = None,
) -> None:
if self._event_bus:
await self._event_bus.emit_node_loop_iteration(
@@ -4157,6 +4270,7 @@ class EventLoopNode(NodeProtocol):
node_id=node_id,
iteration=iteration,
execution_id=execution_id,
extra_data=extra_data,
)
async def _publish_llm_turn_complete(
@@ -4239,6 +4353,7 @@ class EventLoopNode(NodeProtocol):
ctx: NodeContext,
execution_id: str = "",
iteration: int | None = None,
inner_turn: int = 0,
) -> None:
if self._event_bus:
if ctx.node_spec.client_facing:
@@ -4249,6 +4364,7 @@ class EventLoopNode(NodeProtocol):
snapshot=snapshot,
execution_id=execution_id,
iteration=iteration,
inner_turn=inner_turn,
)
else:
await self._event_bus.emit_llm_text_delta(
@@ -4257,6 +4373,7 @@ class EventLoopNode(NodeProtocol):
content=content,
snapshot=snapshot,
execution_id=execution_id,
inner_turn=inner_turn,
)
async def _publish_tool_started(
@@ -4574,7 +4691,7 @@ class EventLoopNode(NodeProtocol):
)
subagent_node = EventLoopNode(
event_bus=None, # Subagents don't emit events to parent's bus
event_bus=self._event_bus, # Subagent events visible to Queen via shared bus
judge=SubagentJudge(task=task, max_iterations=max_iter),
config=LoopConfig(
max_iterations=max_iter, # Tighter budget
@@ -4589,25 +4706,42 @@ class EventLoopNode(NodeProtocol):
conversation_store=subagent_conv_store,
)
# Inject a unique GCU browser profile for this subagent so that
# concurrent GCU subagents (run via asyncio.gather) each get their own
# isolated BrowserContext. asyncio.gather copies the current context
# for each coroutine, so the reset token is safe to call in finally.
_profile_token = None
try:
from gcu.browser.session import set_active_profile as _set_gcu_profile
_profile_token = _set_gcu_profile(f"{agent_id}-{subagent_instance}")
except ImportError:
pass # GCU tools not installed; no-op
try:
logger.info("🚀 Starting subagent '%s' execution...", agent_id)
start_time = time.time()
result = await subagent_node.execute(subagent_ctx)
latency_ms = int((time.time() - start_time) * 1000)
separator = "-" * 60
logger.info(
"\n" + "-" * 60 + "\n"
"\n%s\n"
"✅ SUBAGENT '%s' COMPLETED\n"
"-" * 60 + "\n"
"%s\n"
"Success: %s\n"
"Latency: %dms\n"
"Tokens used: %s\n"
"Output keys: %s\n" + "-" * 60,
"Output keys: %s\n"
"%s",
separator,
agent_id,
separator,
result.success,
latency_ms,
result.tokens_used,
list(result.output.keys()) if result.output else [],
separator,
)
result_json = {
@@ -4653,3 +4787,29 @@ class EventLoopNode(NodeProtocol):
content=json.dumps(result_json, indent=2),
is_error=True,
)
finally:
# Restore the GCU profile context that was set before this subagent ran.
if _profile_token is not None:
from gcu.browser.session import _active_profile as _gcu_profile_var
_gcu_profile_var.reset(_profile_token)
# Stop the browser session for this subagent's profile so tabs are
# closed immediately rather than accumulating until server shutdown.
if self._tool_executor is not None:
_subagent_profile = f"{agent_id}-{subagent_instance}"
try:
_stop_use = ToolUse(
id="gcu-cleanup",
name="browser_stop",
input={"profile": _subagent_profile},
)
_stop_result = self._tool_executor(_stop_use)
if asyncio.iscoroutine(_stop_result) or asyncio.isfuture(_stop_result):
await _stop_result
except Exception as _gcu_exc:
logger.warning(
"GCU browser_stop failed for profile %r: %s",
_subagent_profile,
_gcu_exc,
)
@@ -27,12 +27,24 @@ from framework.graph.node import (
SharedMemory,
)
from framework.graph.validator import OutputValidator
from framework.llm.provider import LLMProvider, Tool
from framework.llm.provider import LLMProvider, Tool, ToolUse
from framework.observability import set_trace_context
from framework.runtime.core import Runtime
from framework.schemas.checkpoint import Checkpoint
from framework.storage.checkpoint_store import CheckpointStore
logger = logging.getLogger(__name__)
def _default_max_context_tokens() -> int:
"""Resolve max_context_tokens from global config, falling back to 32000."""
try:
from framework.config import get_max_context_tokens
return get_max_context_tokens()
except Exception:
return 32_000
@dataclass
class ExecutionResult:
@@ -138,6 +150,7 @@ class GraphExecutor:
tool_provider_map: dict[str, str] | None = None,
dynamic_tools_provider: Callable | None = None,
dynamic_prompt_provider: Callable | None = None,
iteration_metadata_provider: Callable | None = None,
):
"""
Initialize the executor.
@@ -183,6 +196,7 @@ class GraphExecutor:
self.tool_provider_map = tool_provider_map
self.dynamic_tools_provider = dynamic_tools_provider
self.dynamic_prompt_provider = dynamic_prompt_provider
self.iteration_metadata_provider = iteration_metadata_provider
# Parallel execution settings
self.enable_parallel_execution = enable_parallel_execution
@@ -925,6 +939,33 @@ class GraphExecutor:
self.logger.info(" Executing...")
result = await node_impl.execute(ctx)
# GCU tab cleanup: stop the browser profile after a top-level GCU node
# finishes so tabs don't accumulate. Mirrors the subagent cleanup in
# EventLoopNode._execute_subagent().
if node_spec.node_type == "gcu" and self.tool_executor is not None:
try:
from gcu.browser.session import (
_active_profile as _gcu_profile_var,
)
_gcu_profile = _gcu_profile_var.get()
_stop_use = ToolUse(
id="gcu-cleanup",
name="browser_stop",
input={"profile": _gcu_profile},
)
_stop_result = self.tool_executor(_stop_use)
if asyncio.iscoroutine(_stop_result) or asyncio.isfuture(_stop_result):
await _stop_result
except ImportError:
pass # GCU not installed
except Exception as _gcu_exc:
logger.warning(
"GCU browser_stop failed for profile %r: %s",
_gcu_profile,
_gcu_exc,
)
# Emit node-completed event (skip event_loop nodes)
if self._event_bus and node_spec.node_type != "event_loop":
await self._event_bus.emit_node_loop_completed(
@@ -1799,6 +1840,7 @@ class GraphExecutor:
shared_node_registry=self.node_registry, # For subagent escalation routing
dynamic_tools_provider=self.dynamic_tools_provider,
dynamic_prompt_provider=self.dynamic_prompt_provider,
iteration_metadata_provider=self.iteration_metadata_provider,
)
VALID_NODE_TYPES = {
@@ -1872,7 +1914,7 @@ class GraphExecutor:
max_tool_calls_per_turn=lc.get("max_tool_calls_per_turn", 30),
tool_call_overflow_margin=lc.get("tool_call_overflow_margin", 0.5),
stall_detection_threshold=lc.get("stall_detection_threshold", 3),
max_context_tokens=lc.get("max_context_tokens", 32000),
max_context_tokens=lc.get("max_context_tokens", _default_max_context_tokens()),
max_tool_result_chars=lc.get("max_tool_result_chars", 30_000),
spillover_dir=spillover,
hooks=lc.get("hooks", {}),
+51 -11
@@ -37,24 +37,42 @@ Follow these rules for reliable, efficient browser interaction.
## Reading Pages
- ALWAYS prefer `browser_snapshot` over `browser_get_text("body")`:
it returns a compact ~1-5 KB accessibility tree vs 100+ KB of raw HTML.
- Use `browser_snapshot_aria` when you need full ARIA properties
for detailed element inspection.
- Interaction tools (`browser_click`, `browser_type`, `browser_fill`,
`browser_scroll`, etc.) return a page snapshot automatically in their
result. Use it to decide your next action; do NOT call
`browser_snapshot` separately after every action.
Only call `browser_snapshot` when you need a fresh view without
performing an action, or after setting `auto_snapshot=false`.
- Do NOT use `browser_screenshot` for reading text content:
it produces huge base64 images with no searchable text.
- Only fall back to `browser_get_text` for extracting specific
small elements by CSS selector.
## Navigation & Waiting
- Always call `browser_wait` after navigation actions
(`browser_open`, `browser_navigate`, `browser_click` on links)
to let the page load.
- `browser_navigate` and `browser_open` already wait for the page to
load (`domcontentloaded`). Do NOT call `browser_wait` with no
arguments after navigation; it wastes time.
Only use `browser_wait` when you need a *specific element* or *text*
to appear (pass `selector` or `text`).
- NEVER re-navigate to the same URL after scrolling:
this resets your scroll position and loses loaded content.
## Scrolling
- Use large scroll amounts (~2000) when loading more content;
sites like Twitter and LinkedIn use lazy loading for pagination.
- After scrolling, take a new `browser_snapshot` to see updated content.
- The scroll result includes a snapshot automatically; no need to call
`browser_snapshot` separately.
## Batching Actions
- You can call multiple tools in a single turn; they execute in parallel.
ALWAYS batch independent actions together. Examples:
- Fill multiple form fields in one turn.
- Navigate + snapshot in one turn.
- Click + scroll if targeting different elements.
- When batching, set `auto_snapshot=false` on all but the last action
to avoid redundant snapshots.
- Aim for at least 3-5 tool calls per turn. One tool call per turn is
wasteful.
## Error Recovery
- If a tool fails, retry once with the same approach.
@@ -65,11 +83,33 @@ Follow these rules for reliable, efficient browser interaction.
then `browser_start`, then retry.
## Tab Management
- Use `browser_tabs` to list open tabs when managing multiple pages.
- Pass `target_id` to tools when operating on a specific tab.
- Open background tabs with `browser_open(url=..., background=true)`
to avoid losing your current context.
- Close tabs you no longer need with `browser_close` to free resources.
**Close tabs as soon as you are done with them**, not only at the end of the task.
After reading or extracting data from a tab, close it immediately.
**Decision rules:**
- Finished reading/extracting from a tab? → `browser_close(target_id=...)`
- Completed a multi-tab workflow? → `browser_close_finished()` to clean up all your tabs
- More than 3 tabs open? → stop and close finished ones before opening more
- Popup appeared that you didn't need? → close it immediately
**Origin awareness:** `browser_tabs` returns an `origin` field for each tab:
- `"agent"` you opened it; you own it; close it when done
- `"popup"` opened by a link or script; close after extracting what you need
- `"startup"` or `"user"` leave these alone unless the task requires it
**Cleanup tools:**
- `browser_close(target_id=...)`: close one specific tab
- `browser_close_finished()`: close all your agent/popup tabs (safe: leaves startup/user tabs)
- `browser_close_all()`: close everything except the active tab (use only for a full reset)
**Multi-tab workflow pattern:**
1. Open background tabs with `browser_open(url=..., background=true)` to stay on current tab
2. Process each tab and close it with `browser_close` when done
3. When the full workflow completes, call `browser_close_finished()` to confirm cleanup
4. Check `browser_tabs` at any point; it shows `origin` and `age_seconds` per tab
Never accumulate tabs. Treat every tab you open as a resource you must free.
## Login & Auth Walls
- If you see a "Log in" or "Sign up" prompt instead of expected
+5
@@ -565,6 +565,11 @@ class NodeContext:
# staging / running) without restarting the conversation.
dynamic_prompt_provider: Any = None # Callable[[], str] | None
# Per-iteration metadata provider — when set, EventLoopNode merges
# the returned dict into node_loop_iteration event data. Used by
# the queen to record the current phase per iteration.
iteration_metadata_provider: Any = None # Callable[[], dict] | None
@dataclass
class NodeResult:
+13 -2
@@ -122,11 +122,21 @@ MINIMAX_API_BASE = "https://api.minimax.io/v1"
# Providers that accept cache_control on message content blocks.
# Anthropic: native ephemeral caching. MiniMax & Z-AI/GLM: pass-through to their APIs.
# (OpenAI caches automatically server-side; Groq/Gemini/etc. strip the header.)
_CACHE_CONTROL_PREFIXES = ("anthropic/", "claude-", "minimax/", "minimax-", "MiniMax-", "zai-glm", "glm-")
_CACHE_CONTROL_PREFIXES = (
"anthropic/",
"claude-",
"minimax/",
"minimax-",
"MiniMax-",
"zai-glm",
"glm-",
)
def _model_supports_cache_control(model: str) -> bool:
return any(model.startswith(p) for p in _CACHE_CONTROL_PREFIXES)
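Restated as a standalone snippet (same logic as the diff, with the leading underscores dropped for illustration):

```python
# Model-name prefixes whose providers accept cache_control on
# message content blocks (from the diff above).
CACHE_CONTROL_PREFIXES = (
    "anthropic/", "claude-", "minimax/", "minimax-",
    "MiniMax-", "zai-glm", "glm-",
)

def model_supports_cache_control(model: str) -> bool:
    # True when the model string begins with any caching-capable prefix.
    return any(model.startswith(p) for p in CACHE_CONTROL_PREFIXES)
```

Note the check is case-sensitive, which is why both `minimax-` and `MiniMax-` appear in the tuple.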
# Kimi For Coding uses an Anthropic-compatible endpoint (no /v1 suffix).
# Claude Code integration uses this format; the /v1 OpenAI-compatible endpoint
# enforces a coding-agent whitelist that blocks unknown User-Agents.
@@ -1066,7 +1076,8 @@ class LiteLLMProvider(LLMProvider):
else getattr(usage, "cache_read_input_tokens", 0) or 0
)
logger.debug(
"[tokens] finish-chunk usage: input=%d output=%d cached=%d model=%s",
"[tokens] finish-chunk usage: "
"input=%d output=%d cached=%d model=%s",
input_tokens,
output_tokens,
cached_tokens,
+1 -33
@@ -1,33 +1 @@
"""Framework-level worker monitoring package.
Provides the Worker Health Judge: a reusable secondary graph that attaches to
any worker agent runtime and monitors its execution health via periodic log
inspection. Emits structured EscalationTickets when degradation is detected.
Usage::
from framework.monitoring import HEALTH_JUDGE_ENTRY_POINT, judge_goal, judge_graph
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools
# Register tools bound to the worker runtime's EventBus
monitoring_registry = ToolRegistry()
register_worker_monitoring_tools(monitoring_registry, worker_runtime._event_bus, storage_path)
# Load judge as secondary graph on the worker runtime
await worker_runtime.add_graph(
graph_id="judge",
graph=judge_graph,
goal=judge_goal,
entry_points={"health_check": HEALTH_JUDGE_ENTRY_POINT},
storage_subpath="graphs/judge",
)
"""
from .judge import HEALTH_JUDGE_ENTRY_POINT, judge_goal, judge_graph, judge_node
__all__ = [
"HEALTH_JUDGE_ENTRY_POINT",
"judge_goal",
"judge_graph",
"judge_node",
]
"""Framework-level worker monitoring package."""
-258
@@ -1,258 +0,0 @@
"""Worker Health Judge — framework-level reusable monitoring graph.
Attaches to any worker agent runtime as a secondary graph. Fires on a
2-minute timer, reads the worker's session logs via ``get_worker_health_summary``,
accumulates observations in a continuous conversation context, and emits a
structured ``EscalationTicket`` when it detects a degradation pattern.
Usage::
from framework.monitoring import judge_graph, judge_goal, HEALTH_JUDGE_ENTRY_POINT
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools
# Register tools bound to the worker runtime's event bus
monitoring_registry = ToolRegistry()
register_worker_monitoring_tools(
monitoring_registry, worker_runtime._event_bus, storage_path
)
monitoring_tools = list(monitoring_registry.get_tools().values())
monitoring_executor = monitoring_registry.get_executor()
# Load judge as secondary graph on the worker runtime
await worker_runtime.add_graph(
graph_id="judge",
graph=judge_graph,
goal=judge_goal,
entry_points={"health_check": HEALTH_JUDGE_ENTRY_POINT},
storage_subpath="graphs/judge",
)
Design:
- ``isolation_level="isolated"``: the judge has its own memory, not
polluting the worker's shared memory namespace.
- ``conversation_mode="continuous"``: the judge's conversation carries
across timer ticks. The conversation IS the judge's memory. It tracks
trends by referring to its own prior messages ("Last check I saw 47
steps; now 52; 5 new steps, 3 RETRY").
- No shared memory keys. No external state files.
"""
from __future__ import annotations
from framework.graph import Constraint, Goal, NodeSpec, SuccessCriterion
from framework.graph.edge import AsyncEntryPointSpec, GraphSpec
# ---------------------------------------------------------------------------
# Goal
# ---------------------------------------------------------------------------
judge_goal = Goal(
id="worker-health-monitor",
name="Worker Health Monitor",
description=(
"Periodically assess the health of the worker agent by reading its "
"execution logs. Detect degradation patterns (excessive retries, "
"stalls, doom loops) and emit structured EscalationTickets when the "
"worker needs attention."
),
success_criteria=[
SuccessCriterion(
id="accurate-detection",
description="Only escalates genuine degradation, not normal retry cycles",
metric="false_positive_rate",
target="low",
weight=0.5,
),
SuccessCriterion(
id="timely-detection",
description="Detects genuine stalls within 2 timer ticks (≤4 minutes)",
metric="detection_latency_minutes",
target="<=4",
weight=0.5,
),
],
constraints=[
Constraint(
id="conservative-escalation",
description=(
"Do not escalate on a single bad verdict or a brief stall. "
"Require clear patterns (10+ consecutive bad verdicts or 4+ minute stall) "
"before creating a ticket."
),
constraint_type="hard",
category="quality",
),
Constraint(
id="complete-ticket",
description=(
"Every EscalationTicket must have all required fields filled. "
"Do not emit partial or placeholder tickets."
),
constraint_type="hard",
category="correctness",
),
],
)
# ---------------------------------------------------------------------------
# Node
# ---------------------------------------------------------------------------
judge_node = NodeSpec(
id="judge",
name="Worker Health Judge",
description=(
"Autonomous health monitor for worker agents. Reads execution logs "
"on each timer tick, compares to prior observations (via conversation "
"history), and emits a structured EscalationTicket when a genuine "
"degradation pattern is detected."
),
node_type="event_loop",
client_facing=False, # Autonomous monitor, not interactive
max_node_visits=0, # Unbounded — runs on every timer tick
input_keys=[],
output_keys=["health_verdict"],
nullable_output_keys=["health_verdict"],
success_criteria=(
"A clear health verdict is produced each check: either 'healthy' with "
"a brief observation, or a complete EscalationTicket is emitted via "
"emit_escalation_ticket and health_verdict describes the issue."
),
tools=[
"get_worker_health_summary",
"emit_escalation_ticket",
],
system_prompt="""\
You are the Worker Health Judge. You run every 2 minutes alongside a worker \
agent to monitor its execution health.
# Your Role
You observe the worker's iteration patterns over time and escalate only when \
you see genuine degradation, not normal retry cycles. Your conversation history \
IS your memory. On each check, refer to your previous observations to track trends.
# Check Procedure
On each timer tick (every 2 minutes):
## Step 1: Read health snapshot
Call get_worker_health_summary() with no arguments to auto-discover the active \
session. This returns:
- worker_agent_id: the worker's agent name — use this for ticket identity fields
- worker_graph_id: the worker's primary graph ID — use this for ticket identity fields
- session_id: the session being monitored; use this for worker_session_id in tickets
- total_steps: how many log steps have been recorded
- recent_verdicts: list of recent ACCEPT/RETRY/CONTINUE verdicts
- steps_since_last_accept: consecutive non-ACCEPT steps
- stall_minutes: wall-clock since last step (null if active)
- evidence_snippet: recent LLM output
## Step 2: Compare to prior check
Look at your conversation history. What was total_steps last time?
- If total_steps is UNCHANGED from prior check AND prior check was also unchanged:
STALL confirmed (worker has produced no new iterations in 4+ minutes).
Escalate with severity="high" or "critical" depending on stall duration.
- If total_steps increased: worker is making progress. Examine verdicts.
## Step 3: Analyze verdict pattern
- Healthy: Mix of ACCEPT and RETRY, steps_since_last_accept < 5. No action.
- Warning: steps_since_last_accept is 5-9. Note it, no escalation yet.
- Degraded: steps_since_last_accept >= 10. Examine evidence_snippet.
- If evidence shows the agent is making real progress (complex reasoning,
exploring solutions, productive tool use): may be a hard problem. Note it.
- If evidence shows a loop (same error, same tool call, no new information):
Escalate with severity="medium" or "high".
- Critical: steps_since_last_accept >= 20, OR stall_minutes >= 4.
Escalate with severity="critical".
## Step 4: Decide
### If healthy:
set_output("health_verdict", "healthy: <brief observation>")
Done.
### If escalating:
Build an EscalationTicket JSON string with ALL required fields:
{
"worker_agent_id": "<worker_agent_id from get_worker_health_summary>",
"worker_session_id": "<session_id from get_worker_health_summary>",
"worker_node_id": "<worker_graph_id from get_worker_health_summary>",
"worker_graph_id": "<worker_graph_id from get_worker_health_summary>",
"severity": "<low|medium|high|critical>",
"cause": "<what you observed — concrete, specific>",
"judge_reasoning": "<why you decided to escalate, not just dismiss>",
"suggested_action": "<what you recommend: restart, human review, etc.>",
"recent_verdicts": [<list from get_worker_health_summary>],
"total_steps_checked": <int>,
"steps_since_last_accept": <int>,
"stall_minutes": <float or null>,
"evidence_snippet": "<from get_worker_health_summary>"
}
Call: emit_escalation_ticket(ticket_json=<the JSON string above>)
Then: set_output("health_verdict", "escalated: <one-line summary>")
# Severity Guide
- low: Mild concern, worth noting. 5-9 consecutive bad verdicts.
- medium: Clear degradation pattern. 10-15 bad verdicts or brief stall (1-2 min).
- high: Serious issue. 15+ bad verdicts or stall 2-4 minutes or clear doom loop.
- critical: Worker is definitively stuck. 20+ bad verdicts or stall > 4 minutes.
# Conservative Bias
You MUST resist the urge to escalate prematurely. Worker agents naturally retry.
A node may legitimately need 5-8 retries before succeeding. Do not escalate unless:
1. The pattern is clear and sustained across your observation window, AND
2. The evidence shows no genuine progress
One missed escalation is less costly than two false alarms. The Queen will filter \
further. But do not be passive: genuine stalls and doom loops must be caught.
# Rules
- Never escalate on the FIRST check unless stall_minutes > 4
- Always call get_worker_health_summary FIRST before deciding anything
- All ticket fields are REQUIRED; do not submit partial tickets
- After any emit_escalation_ticket call, always set_output to complete the check
""",
)
# ---------------------------------------------------------------------------
# Entry Point
# ---------------------------------------------------------------------------
HEALTH_JUDGE_ENTRY_POINT = AsyncEntryPointSpec(
id="health_check",
name="Worker Health Check",
entry_node="judge",
trigger_type="timer",
trigger_config={
"interval_minutes": 2,
"run_immediately": True, # Fire immediately to establish a baseline
},
isolation_level="isolated", # Own memory namespace, not polluting worker's
)
# ---------------------------------------------------------------------------
# Graph
# ---------------------------------------------------------------------------
judge_graph = GraphSpec(
id="judge-graph",
goal_id=judge_goal.id,
version="1.0.0",
entry_node="judge",
entry_points={"health_check": "judge"},
terminal_nodes=["judge"], # Judge node can terminate after each check
pause_nodes=[],
nodes=[judge_node],
edges=[],
conversation_mode="continuous", # Conversation persists across timer ticks
async_entry_points=[HEALTH_JUDGE_ENTRY_POINT],
loop_config={
"max_iterations": 10, # One check shouldn't take many turns
"max_tool_calls_per_turn": 3, # get_summary + optionally emit_ticket
"max_context_tokens": 16000, # Compact — judge only needs recent context
},
)
+7 -21
@@ -243,12 +243,8 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
action="store_true",
help="Open dashboard in browser after server starts",
)
serve_parser.add_argument(
"--verbose", "-v", action="store_true", help="Enable INFO log level"
)
serve_parser.add_argument(
"--debug", action="store_true", help="Enable DEBUG log level"
)
serve_parser.add_argument("--verbose", "-v", action="store_true", help="Enable INFO log level")
serve_parser.add_argument("--debug", action="store_true", help="Enable DEBUG log level")
serve_parser.set_defaults(func=cmd_serve)
# open command (serve + auto-open browser)
@@ -286,12 +282,8 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
default=None,
help="LLM model for preloaded agents",
)
open_parser.add_argument(
"--verbose", "-v", action="store_true", help="Enable INFO log level"
)
open_parser.add_argument(
"--debug", action="store_true", help="Enable DEBUG log level"
)
open_parser.add_argument("--verbose", "-v", action="store_true", help="Enable INFO log level")
open_parser.add_argument("--debug", action="store_true", help="Enable DEBUG log level")
open_parser.set_defaults(func=cmd_open)
@@ -387,12 +379,10 @@ def _prompt_before_start(agent_path: str, runner, model: str | None = None):
def cmd_run(args: argparse.Namespace) -> int:
"""Run an exported agent."""
import logging
from framework.credentials.models import CredentialError
from framework.runner import AgentRunner
from framework.observability import configure_logging
from framework.runner import AgentRunner
# Set logging level (quiet by default for cleaner output)
if args.quiet:
@@ -932,12 +922,10 @@ def _format_natural_language_to_json(
def cmd_shell(args: argparse.Namespace) -> int:
"""Start an interactive agent session."""
import logging
from framework.credentials.models import CredentialError
from framework.runner import AgentRunner
from framework.observability import configure_logging
from framework.runner import AgentRunner
configure_logging(level="INFO")
@@ -1637,15 +1625,13 @@ def _build_frontend() -> bool:
def cmd_serve(args: argparse.Namespace) -> int:
"""Start the HTTP API server."""
import logging
from aiohttp import web
_build_frontend()
from framework.server.app import create_app
from framework.observability import configure_logging
from framework.server.app import create_app
if getattr(args, "debug", False):
configure_logging(level="DEBUG")
+28 -77
@@ -16,7 +16,6 @@ from framework.credentials.validation import (
from framework.graph import Goal
from framework.graph.edge import (
DEFAULT_MAX_TOKENS,
AsyncEntryPointSpec,
EdgeCondition,
EdgeSpec,
GraphSpec,
@@ -570,9 +569,6 @@ class AgentInfo:
constraints: list[dict]
required_tools: list[str]
has_tools_module: bool
# Multi-entry-point support
async_entry_points: list[dict] = field(default_factory=list)
is_multi_entry_point: bool = False
@dataclass
@@ -630,22 +626,6 @@ def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
)
edges.append(edge)
# Build AsyncEntryPointSpec objects for multi-entry-point support
async_entry_points = []
for aep_data in graph_data.get("async_entry_points", []):
async_entry_points.append(
AsyncEntryPointSpec(
id=aep_data["id"],
name=aep_data.get("name", aep_data["id"]),
entry_node=aep_data["entry_node"],
trigger_type=aep_data.get("trigger_type", "manual"),
trigger_config=aep_data.get("trigger_config", {}),
isolation_level=aep_data.get("isolation_level", "shared"),
priority=aep_data.get("priority", 0),
max_concurrent=aep_data.get("max_concurrent", 10),
)
)
# Build GraphSpec
graph = GraphSpec(
id=graph_data.get("id", "agent-graph"),
@@ -653,7 +633,6 @@ def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
version=graph_data.get("version", "1.0.0"),
entry_node=graph_data.get("entry_node", ""),
entry_points=graph_data.get("entry_points", {}), # Support pause/resume architecture
async_entry_points=async_entry_points, # Support multi-entry-point agents
terminal_nodes=graph_data.get("terminal_nodes", []),
pause_nodes=graph_data.get("pause_nodes", []), # Support pause/resume architecture
nodes=nodes,
@@ -805,8 +784,6 @@ class AgentRunner:
# AgentRuntime — unified execution path for all agents
self._agent_runtime: AgentRuntime | None = None
self._uses_async_entry_points = self.graph.has_async_entry_points()
# Pre-load validation: structural checks + credentials.
# Fails fast with actionable guidance — no MCP noise on screen.
run_preload_validation(
@@ -927,7 +904,8 @@ class AgentRunner:
if agent_config and hasattr(agent_config, "max_tokens"):
max_tokens = agent_config.max_tokens
logger.info(
"Agent default_config overrides max_tokens: %d (configuration.json value ignored)",
"Agent default_config overrides max_tokens: %d "
"(configuration.json value ignored)",
max_tokens,
)
else:
@@ -964,7 +942,6 @@ class AgentRunner:
"version": "1.0.0",
"entry_node": getattr(agent_module, "entry_node", nodes[0].id),
"entry_points": getattr(agent_module, "entry_points", {}),
"async_entry_points": getattr(agent_module, "async_entry_points", []),
"terminal_nodes": getattr(agent_module, "terminal_nodes", []),
"pause_nodes": getattr(agent_module, "pause_nodes", []),
"nodes": nodes,
@@ -1450,21 +1427,7 @@ class AgentRunner:
event_bus=None,
) -> None:
"""Set up multi-entry-point execution using AgentRuntime."""
# Convert AsyncEntryPointSpec to EntryPointSpec for AgentRuntime
entry_points = []
for async_ep in self.graph.async_entry_points:
ep = EntryPointSpec(
id=async_ep.id,
name=async_ep.name,
entry_node=async_ep.entry_node,
trigger_type=async_ep.trigger_type,
trigger_config=async_ep.trigger_config,
isolation_level=async_ep.isolation_level,
priority=async_ep.priority,
max_concurrent=async_ep.max_concurrent,
max_resurrections=async_ep.max_resurrections,
)
entry_points.append(ep)
# Always create a primary entry point for the graph's entry node.
# For multi-entry-point agents this ensures the primary path (e.g.
@@ -1526,21 +1489,31 @@ class AgentRunner:
# Pass intro_message through for TUI display
self._agent_runtime.intro_message = self.intro_message
# ------------------------------------------------------------------
# Execution modes
#
# run(): One-shot, blocking execution for worker agents
# (headless CLI via ``hive run``). Validates, runs
# the graph to completion, and returns the result.
#
# start() / trigger(): Long-lived runtime for the frontend (queen).
# start() boots the runtime; trigger() sends
# non-blocking execution requests. Used by the
# server session manager and API routes.
# ------------------------------------------------------------------
async def run(
self,
input_data: dict | None = None,
session_state: dict | None = None,
entry_point_id: str | None = None,
) -> ExecutionResult:
"""
Execute the agent with given input data.
"""One-shot execution for worker agents (headless CLI).
Validates credentials before execution. If any required credentials
are missing, returns an error result with instructions on how to
provide them.
Validates credentials, runs the graph to completion, and returns
the result. Used by ``hive run`` and programmatic callers.
For single-entry-point agents, this is the standard execution path.
For multi-entry-point agents, you can optionally specify which entry point to use.
For the frontend (queen), use start() + trigger() instead.
Args:
input_data: Input data for the agent (e.g., {"lead_id": "123"})
@@ -1666,7 +1639,12 @@ class AgentRunner:
# === Runtime API ===
async def start(self) -> None:
"""Start the agent runtime."""
"""Boot the agent runtime for the frontend (queen).
Pair with trigger() to send execution requests. Used by the
server session manager. For headless worker agents, use run()
instead.
"""
if self._agent_runtime is None:
self._setup()
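The two modes described in the docstrings can be illustrated with a toy stand-in (a hypothetical `MiniRunner`, not the real `AgentRunner` API):

```python
import asyncio
import uuid

class MiniRunner:
    """Toy stand-in for the two execution modes (hypothetical)."""

    def __init__(self) -> None:
        self._started = False
        self._requests: dict[str, tuple[str, dict]] = {}

    async def run(self, input_data: dict) -> dict:
        # One-shot mode: execute to completion and return the result.
        return {"status": "success", "echo": input_data}

    async def start(self) -> None:
        # Boot the long-lived runtime.
        self._started = True

    async def trigger(self, entry_point_id: str, input_data: dict) -> str:
        # Non-blocking mode: record the request, return an execution ID.
        if not self._started:
            raise RuntimeError("call start() before trigger()")
        run_id = uuid.uuid4().hex[:12]
        self._requests[run_id] = (entry_point_id, input_data)
        return run_id

async def demo() -> tuple[dict, str]:
    runner = MiniRunner()
    one_shot = await runner.run({"lead_id": "123"})   # headless worker path
    await runner.start()                              # queen/frontend path
    exec_id = await runner.trigger("chat", {"msg": "hi"})
    return one_shot, exec_id
```

The 12-character hex execution ID mirrors the `uuid.uuid4().hex[:12]` run-ID scheme visible elsewhere in this diff.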
@@ -1683,10 +1661,10 @@ class AgentRunner:
input_data: dict[str, Any],
correlation_id: str | None = None,
) -> str:
"""
Trigger execution at a specific entry point (non-blocking).
"""Send a non-blocking execution request to a running runtime.
Returns execution ID for tracking.
Used by the server API routes after start(). For headless
worker agents, use run() instead.
Args:
entry_point_id: Which entry point to trigger
@@ -1771,19 +1749,6 @@ class AgentRunner:
for edge in self.graph.edges
]
# Build async entry points info
async_entry_points_info = [
{
"id": ep.id,
"name": ep.name,
"entry_node": ep.entry_node,
"trigger_type": ep.trigger_type,
"isolation_level": ep.isolation_level,
"max_concurrent": ep.max_concurrent,
}
for ep in self.graph.async_entry_points
]
return AgentInfo(
name=self.graph.id,
description=self.graph.description,
@@ -1810,8 +1775,6 @@ class AgentRunner:
],
required_tools=sorted(required_tools),
has_tools_module=(self.agent_path / "tools.py").exists(),
async_entry_points=async_entry_points_info,
is_multi_entry_point=self._uses_async_entry_points,
)
def validate(self) -> ValidationResult:
@@ -2126,18 +2089,6 @@ Respond with JSON only:
trigger_type="manual",
isolation_level="shared",
)
for aep in runner.graph.async_entry_points:
entry_points[aep.id] = EntryPointSpec(
id=aep.id,
name=aep.name,
entry_node=aep.entry_node,
trigger_type=aep.trigger_type,
trigger_config=aep.trigger_config,
isolation_level=aep.isolation_level,
priority=aep.priority,
max_concurrent=aep.max_concurrent,
)
await runtime.add_graph(
graph_id=gid,
graph=runner.graph,
+2 -2
@@ -454,11 +454,11 @@ An agent has requested handoff to the Hive Coder (via the `escalate` synthetic t
## Worker Health Monitoring
These events form the **judge → queen → operator** escalation pipeline.
These events form the **queen → operator** escalation pipeline.
### `worker_escalation_ticket`
The Worker Health Judge has detected a degradation pattern and is escalating to the Queen.
A worker degradation pattern has been detected and is being escalated to the Queen.
| Data Field | Type | Description |
| ---------- | ------ | ------------------------------------ |
+5 -3
@@ -8,6 +8,7 @@ while preserving the goal-driven approach.
import asyncio
import logging
import time
import uuid
from collections.abc import Callable
from dataclasses import dataclass, field
from datetime import datetime
@@ -822,7 +823,8 @@ class AgentRuntime:
if stream is None:
raise ValueError(f"Entry point '{entry_point_id}' not found")
return await stream.execute(input_data, correlation_id, session_state)
run_id = uuid.uuid4().hex[:12]
return await stream.execute(input_data, correlation_id, session_state, run_id=run_id)
async def trigger_and_wait(
self,
@@ -1359,8 +1361,8 @@ class AgentRuntime:
allowed_keys = set(entry_node.input_keys)
# Search primary graph's streams for an active session.
# Skip isolated streams (e.g. health judge) — they have their own
# session directories and must never be used as a shared session.
# Skip isolated streams — they have their own session directories
# and must never be used as a shared session.
all_streams: list[tuple[str, ExecutionStream]] = []
for _gid, reg in self._graphs.items():
for ep_id, stream in reg.streams.items():
+5 -5
@@ -1,4 +1,4 @@
"""EscalationTicket — structured schema for worker health judge escalations."""
"""EscalationTicket — structured schema for worker health escalations."""
from __future__ import annotations
@@ -10,10 +10,10 @@ from pydantic import BaseModel, Field
class EscalationTicket(BaseModel):
"""Structured escalation report emitted by the Worker Health Judge.
"""Structured escalation report for worker health monitoring.
The judge must fill every field before calling emit_escalation_ticket.
Pydantic validation rejects partial tickets, preventing impulsive escalation.
All fields must be filled before calling emit_escalation_ticket.
Pydantic validation rejects partial tickets.
"""
ticket_id: str = Field(default_factory=lambda: str(uuid4()))
@@ -25,7 +25,7 @@ class EscalationTicket(BaseModel):
worker_node_id: str
worker_graph_id: str
# Problem characterization (filled by judge via LLM deliberation)
# Problem characterization
severity: Literal["low", "medium", "high", "critical"]
cause: str # Human-readable: "Node has produced 18 RETRY verdicts..."
judge_reasoning: str # Judge's own deliberation chain
+175 -7
@@ -97,6 +97,7 @@ class EventType(StrEnum):
# Client I/O (client_facing=True nodes only)
CLIENT_OUTPUT_DELTA = "client_output_delta"
CLIENT_INPUT_REQUESTED = "client_input_requested"
CLIENT_INPUT_RECEIVED = "client_input_received"
# Internal node observability (client_facing=False nodes)
NODE_INTERNAL_OUTPUT = "node_internal_output"
@@ -104,7 +105,7 @@ class EventType(StrEnum):
NODE_STALLED = "node_stalled"
NODE_TOOL_DOOM_LOOP = "node_tool_doom_loop"
# Judge decisions
# Judge decisions (implicit judge in event loop nodes)
JUDGE_VERDICT = "judge_verdict"
# Output tracking
@@ -126,7 +127,7 @@ class EventType(StrEnum):
# Escalation (agent requests handoff to queen)
ESCALATION_REQUESTED = "escalation_requested"
# Worker health monitoring (judge → queen → operator)
# Worker health monitoring
WORKER_ESCALATION_TICKET = "worker_escalation_ticket"
QUEEN_INTERVENTION_REQUESTED = "queen_intervention_requested"
@@ -152,6 +153,13 @@ class EventType(StrEnum):
# Subagent reports (one-way progress updates from sub-agents)
SUBAGENT_REPORT = "subagent_report"
# Trigger lifecycle (queen-level triggers / heartbeats)
TRIGGER_AVAILABLE = "trigger_available"
TRIGGER_ACTIVATED = "trigger_activated"
TRIGGER_DEACTIVATED = "trigger_deactivated"
TRIGGER_FIRED = "trigger_fired"
TRIGGER_REMOVED = "trigger_removed"
@dataclass
class AgentEvent:
@@ -165,10 +173,11 @@ class AgentEvent:
timestamp: datetime = field(default_factory=datetime.now)
correlation_id: str | None = None # For tracking related events
graph_id: str | None = None # Which graph emitted this event (multi-graph sessions)
run_id: str | None = None # Unique ID per trigger() invocation — used for run dividers
def to_dict(self) -> dict:
"""Convert to dictionary for serialization."""
return {
d = {
"type": self.type.value,
"stream_id": self.stream_id,
"node_id": self.node_id,
@@ -178,6 +187,9 @@ class AgentEvent:
"correlation_id": self.correlation_id,
"graph_id": self.graph_id,
}
if self.run_id is not None:
d["run_id"] = self.run_id
return d
# Type for event handlers
@@ -246,6 +258,128 @@ class EventBus:
self._semaphore = asyncio.Semaphore(max_concurrent_handlers)
self._subscription_counter = 0
self._lock = asyncio.Lock()
# Per-session persistent event log (always-on, survives restarts)
self._session_log: IO[str] | None = None
self._session_log_iteration_offset: int = 0
# Accumulator for client_output_delta snapshots — flushed on llm_turn_complete.
# Key: (stream_id, node_id, execution_id, iteration, inner_turn) → latest AgentEvent
self._pending_output_snapshots: dict[tuple, AgentEvent] = {}
def set_session_log(self, path: Path, *, iteration_offset: int = 0) -> None:
"""Enable per-session event persistence to a JSONL file.
Called once when the queen starts so that all events survive server
restarts and can be replayed to reconstruct the frontend state.
``iteration_offset`` is added to the ``iteration`` field in logged
events so that cold-resumed sessions produce monotonically increasing
iteration values, preventing frontend message ID collisions between

the original run and resumed runs.
"""
if self._session_log is not None:
try:
self._session_log.close()
except Exception:
pass
path.parent.mkdir(parents=True, exist_ok=True)
self._session_log = open(path, "a", encoding="utf-8") # noqa: SIM115
self._session_log_iteration_offset = iteration_offset
logger.info("Session event log → %s (iteration_offset=%d)", path, iteration_offset)
def close_session_log(self) -> None:
"""Close the per-session event log file."""
# Flush any pending output snapshots before closing
self._flush_pending_snapshots()
if self._session_log is not None:
try:
self._session_log.close()
except Exception:
pass
self._session_log = None
# Event types that are high-frequency streaming deltas — accumulated rather
# than written individually to the session log.
_STREAMING_DELTA_TYPES = frozenset(
{
EventType.CLIENT_OUTPUT_DELTA,
EventType.LLM_TEXT_DELTA,
EventType.LLM_REASONING_DELTA,
}
)
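The accumulate-then-flush behavior can be sketched independently of the bus. The toy coalescer below mirrors the logic in `_write_session_log_event` under simplified keys; it is not the production class:

```python
class DeltaCoalescer:
    """Keep only the latest snapshot per (stream, node, turn) key."""

    def __init__(self) -> None:
        self.pending: dict[tuple, dict] = {}
        self.written: list[dict] = []

    def on_event(self, event: dict) -> None:
        if event["type"] in {"client_output_delta", "llm_text_delta"}:
            key = (event["stream_id"], event["node_id"], event.get("inner_turn", 0))
            # Later snapshots supersede earlier ones for the same key.
            self.pending[key] = event
            return
        if event["type"] == "llm_turn_complete":
            # Flush consolidated snapshots before the turn-complete marker.
            for key in [k for k in self.pending if k[0] == event["stream_id"]]:
                self.written.append(self.pending.pop(key))
        self.written.append(event)
```

Two deltas for the same turn collapse into one written event, followed by the turn-complete marker itself.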
def _write_session_log_event(self, event: AgentEvent) -> None:
"""Write an event to the per-session log with streaming coalescing.
Streaming deltas (client_output_delta, llm_text_delta, llm_reasoning_delta) are accumulated
in memory. When llm_turn_complete fires, any pending snapshots for that
(stream_id, node_id, execution_id) are flushed as single consolidated
events before the turn-complete event itself is written.
Note: iteration offset is already applied in publish() before this is
called, so events here already have correct iteration values.
"""
if self._session_log is None:
return
if event.type in self._STREAMING_DELTA_TYPES:
# Accumulate — keep only the latest event (which carries the full snapshot)
key = (
event.stream_id,
event.node_id,
event.execution_id,
event.data.get("iteration"),
event.data.get("inner_turn", 0),
)
self._pending_output_snapshots[key] = event
return
# On turn-complete, flush accumulated snapshots for this stream first
if event.type == EventType.LLM_TURN_COMPLETE:
self._flush_pending_snapshots(
stream_id=event.stream_id,
node_id=event.node_id,
execution_id=event.execution_id,
)
line = json.dumps(event.to_dict(), default=str)
self._session_log.write(line + "\n")
self._session_log.flush()
def _flush_pending_snapshots(
self,
stream_id: str | None = None,
node_id: str | None = None,
execution_id: str | None = None,
) -> None:
"""Flush accumulated streaming snapshots to the session log.
When called with filters, only matching entries are flushed.
When called without filters (e.g. on close), everything is flushed.
"""
if self._session_log is None or not self._pending_output_snapshots:
return
to_flush: list[tuple] = []
for key, _evt in self._pending_output_snapshots.items():
if stream_id is not None:
k_stream, k_node, k_exec, _, _ = key
if k_stream != stream_id or k_node != node_id or k_exec != execution_id:
continue
to_flush.append(key)
for key in to_flush:
evt = self._pending_output_snapshots.pop(key)
try:
line = json.dumps(evt.to_dict(), default=str)
self._session_log.write(line + "\n")
except Exception:
pass
if to_flush:
try:
self._session_log.flush()
except Exception:
pass
def subscribe(
self,
@@ -311,6 +445,19 @@ class EventBus:
Args:
event: Event to publish
"""
# Apply iteration offset at the source so ALL consumers (SSE subscribers,
# event history, session log) see the same monotonically increasing
# iteration values. Without this, live SSE would use raw iterations
# while events.jsonl would use offset iterations, causing ID collisions
# on the frontend when replaying after cold resume.
if (
self._session_log_iteration_offset
and isinstance(event.data, dict)
and "iteration" in event.data
):
offset = self._session_log_iteration_offset
event.data = {**event.data, "iteration": event.data["iteration"] + offset}
# Add to history
async with self._lock:
self._event_history.append(event)
@@ -331,6 +478,15 @@ class EventBus:
except Exception:
pass # never break event delivery
# Per-session persistent log (always-on when set_session_log was called).
# Streaming deltas are coalesced: client_output_delta, llm_text_delta, and
# llm_reasoning_delta are accumulated and flushed as a single snapshot event on llm_turn_complete.
if self._session_log is not None:
try:
self._write_session_log_event(event)
except Exception:
pass # never break event delivery
# Find matching subscriptions
matching_handlers: list[EventHandler] = []
@@ -391,6 +547,7 @@ class EventBus:
execution_id: str,
input_data: dict[str, Any] | None = None,
correlation_id: str | None = None,
run_id: str | None = None,
) -> None:
"""Emit execution started event."""
await self.publish(
@@ -400,6 +557,7 @@ class EventBus:
execution_id=execution_id,
data={"input": input_data or {}},
correlation_id=correlation_id,
run_id=run_id,
)
)
@@ -409,6 +567,7 @@ class EventBus:
execution_id: str,
output: dict[str, Any] | None = None,
correlation_id: str | None = None,
run_id: str | None = None,
) -> None:
"""Emit execution completed event."""
await self.publish(
@@ -418,6 +577,7 @@ class EventBus:
execution_id=execution_id,
data={"output": output or {}},
correlation_id=correlation_id,
run_id=run_id,
)
)
@@ -427,6 +587,7 @@ class EventBus:
execution_id: str,
error: str,
correlation_id: str | None = None,
run_id: str | None = None,
) -> None:
"""Emit execution failed event."""
await self.publish(
@@ -436,6 +597,7 @@ class EventBus:
execution_id=execution_id,
data={"error": error},
correlation_id=correlation_id,
run_id=run_id,
)
)
@@ -527,15 +689,19 @@ class EventBus:
node_id: str,
iteration: int,
execution_id: str | None = None,
extra_data: dict[str, Any] | None = None,
) -> None:
"""Emit node loop iteration event."""
data: dict[str, Any] = {"iteration": iteration}
if extra_data:
data.update(extra_data)
await self.publish(
AgentEvent(
type=EventType.NODE_LOOP_ITERATION,
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={"iteration": iteration},
data=data,
)
)
@@ -584,6 +750,7 @@ class EventBus:
content: str,
snapshot: str,
execution_id: str | None = None,
inner_turn: int = 0,
) -> None:
"""Emit LLM text delta event."""
await self.publish(
@@ -592,7 +759,7 @@ class EventBus:
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={"content": content, "snapshot": snapshot},
data={"content": content, "snapshot": snapshot, "inner_turn": inner_turn},
)
)
@@ -708,9 +875,10 @@ class EventBus:
snapshot: str,
execution_id: str | None = None,
iteration: int | None = None,
inner_turn: int = 0,
) -> None:
"""Emit client output delta event (client_facing=True nodes)."""
data: dict = {"content": content, "snapshot": snapshot}
data: dict = {"content": content, "snapshot": snapshot, "inner_turn": inner_turn}
if iteration is not None:
data["iteration"] = iteration
await self.publish(
@@ -1009,7 +1177,7 @@ class EventBus:
ticket: dict,
execution_id: str | None = None,
) -> None:
"""Emitted by health judge when worker shows a degradation pattern."""
"""Emitted when worker shows a degradation pattern."""
await self.publish(
AgentEvent(
type=EventType.WORKER_ESCALATION_TICKET,
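With the offset applied once in `publish()`, a cold-resumed session continues the iteration sequence instead of restarting it, so SSE subscribers, event history, and events.jsonl all agree. The arithmetic reduces to a pure helper (a sketch, assuming the offset is derived from the last iteration of the original run):

```python
def apply_iteration_offset(data: dict, offset: int) -> dict:
    # Applied at the source so every consumer sees identical,
    # monotonically increasing iteration values.
    if offset and "iteration" in data:
        return {**data, "iteration": data["iteration"] + offset}
    return data
```

For example, with offset 5 a resumed run's raw iterations 0, 1, 2 become 5, 6, 7, avoiding ID collisions with the original run's 0..4.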
+81 -3
@@ -127,6 +127,7 @@ class ExecutionContext:
input_data: dict[str, Any]
isolation_level: IsolationLevel
session_state: dict[str, Any] | None = None # For resuming from pause
run_id: str | None = None # Unique ID per trigger() invocation
started_at: datetime = field(default_factory=datetime.now)
completed_at: datetime | None = None
status: str = "pending" # pending, running, completed, failed, paused
@@ -425,11 +426,36 @@ class ExecutionStream:
return True
return False
async def inject_trigger(
self,
node_id: str,
trigger: Any,
) -> bool:
"""Inject a trigger event into a running queen EventLoopNode.
Searches active executors for a node matching ``node_id`` and calls
its ``inject_trigger()`` method to wake the queen.
Args:
node_id: The queen EventLoopNode ID.
trigger: A ``TriggerEvent`` instance (typed as Any to avoid
circular imports with graph layer).
Returns True if the trigger was delivered, False otherwise.
"""
for executor in self._active_executors.values():
node = executor.node_registry.get(node_id)
if node is not None and hasattr(node, "inject_trigger"):
await node.inject_trigger(trigger)
return True
return False
async def execute(
self,
input_data: dict[str, Any],
correlation_id: str | None = None,
session_state: dict[str, Any] | None = None,
run_id: str | None = None,
) -> str:
"""
Queue an execution and return its ID.
@@ -440,6 +466,7 @@ class ExecutionStream:
input_data: Input data for this execution
correlation_id: Optional ID to correlate related executions
session_state: Optional session state to resume from (with paused_at, memory)
run_id: Unique ID for this trigger invocation (for run dividers)
Returns:
Execution ID for tracking
@@ -500,6 +527,7 @@ class ExecutionStream:
input_data=input_data,
isolation_level=self.entry_spec.get_isolation_level(),
session_state=session_state,
run_id=run_id,
)
async with self._lock:
@@ -575,7 +603,9 @@ class ExecutionStream:
execution_id=execution_id,
input_data=ctx.input_data,
correlation_id=ctx.correlation_id,
run_id=ctx.run_id,
)
self._write_run_event(execution_id, ctx.run_id, "run_started")
# Create execution-scoped memory
self._state_manager.create_memory(
@@ -740,6 +770,7 @@ class ExecutionStream:
execution_id=execution_id,
output=result.output,
correlation_id=ctx.correlation_id,
run_id=ctx.run_id,
)
elif result.paused_at:
# The executor returns paused_at on CancelledError but
@@ -757,8 +788,22 @@ class ExecutionStream:
execution_id=execution_id,
error=result.error or "Unknown error",
correlation_id=ctx.correlation_id,
run_id=ctx.run_id,
)
# Write run event for historical restoration
if result.success:
self._write_run_event(execution_id, ctx.run_id, "run_completed")
elif result.paused_at:
self._write_run_event(execution_id, ctx.run_id, "run_paused")
else:
self._write_run_event(
execution_id,
ctx.run_id,
"run_failed",
{"error": result.error or "Unknown error"},
)
logger.debug(f"Execution {execution_id} completed: success={result.success}")
except asyncio.CancelledError:
@@ -818,8 +863,10 @@ class ExecutionStream:
execution_id=execution_id,
error=cancel_reason,
correlation_id=ctx.correlation_id,
run_id=ctx.run_id,
)
self._write_run_event(execution_id, ctx.run_id, "run_cancelled")
# Don't re-raise - we've handled it and saved state
except Exception as e:
@@ -856,7 +903,9 @@ class ExecutionStream:
execution_id=execution_id,
error=str(e),
correlation_id=ctx.correlation_id,
run_id=ctx.run_id,
)
self._write_run_event(execution_id, ctx.run_id, "run_failed", {"error": str(e)})
finally:
# Clean up state
@@ -872,6 +921,36 @@ class ExecutionStream:
self._completion_events.pop(execution_id, None)
self._execution_tasks.pop(execution_id, None)
def _write_run_event(
self,
execution_id: str,
run_id: str | None,
event: str,
extra: dict[str, Any] | None = None,
) -> None:
"""Append a run lifecycle event to runs.jsonl for historical restoration."""
if not self._session_store or not run_id:
return
import json as _json
session_dir = self._session_store.get_session_path(execution_id)
runs_file = session_dir / "runs.jsonl"
now = datetime.now()
record = {
"run_id": run_id,
"event": event,
"timestamp": now.isoformat(),
"created_at": now.timestamp(),
}
if extra:
record.update(extra)
try:
runs_file.parent.mkdir(parents=True, exist_ok=True)
with open(runs_file, "a", encoding="utf-8") as f:
f.write(_json.dumps(record) + "\n")
except OSError:
pass # Non-critical — don't break execution
async def _write_session_state(
self,
execution_id: str,
@@ -978,8 +1057,8 @@ class ExecutionStream:
def _create_modified_graph(self) -> "GraphSpec":
"""Create a graph with the entry point overridden.
Preserves the original graph's entry_points and async_entry_points
so that validation correctly considers ALL entry nodes reachable.
Preserves the original graph's entry_points so that validation
correctly considers ALL entry nodes reachable.
Each stream only executes from its own entry_node, but the full
graph must validate with all entry points accounted for.
"""
@@ -1004,7 +1083,6 @@ class ExecutionStream:
version=self.graph.version,
entry_node=self.entry_spec.entry_node, # Use our entry point
entry_points=merged_entry_points,
async_entry_points=self.graph.async_entry_points,
terminal_nodes=self.graph.terminal_nodes,
pause_nodes=self.graph.pause_nodes,
nodes=self.graph.nodes,
@@ -17,7 +17,7 @@ from pathlib import Path
import pytest
from framework.graph import Goal
from framework.graph.edge import AsyncEntryPointSpec, EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.goal import Constraint, SuccessCriterion
from framework.graph.node import NodeSpec
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
@@ -101,30 +101,12 @@ def sample_graph():
),
]
async_entry_points = [
AsyncEntryPointSpec(
id="webhook",
name="Webhook Handler",
entry_node="process-webhook",
trigger_type="webhook",
isolation_level="shared",
),
AsyncEntryPointSpec(
id="api",
name="API Handler",
entry_node="process-api",
trigger_type="api",
isolation_level="shared",
),
]
return GraphSpec(
id="test-graph",
goal_id="test-goal",
version="1.0.0",
entry_node="process-webhook",
entry_points={"start": "process-webhook"},
async_entry_points=async_entry_points,
terminal_nodes=["complete"],
pause_nodes=[],
nodes=nodes,
@@ -504,108 +486,6 @@ class TestAgentRuntime:
# === GraphSpec Validation Tests ===
class TestGraphSpecValidation:
"""Tests for GraphSpec with async_entry_points."""
def test_has_async_entry_points(self, sample_graph):
"""Test checking for async entry points."""
assert sample_graph.has_async_entry_points() is True
# Graph without async entry points
simple_graph = GraphSpec(
id="simple",
goal_id="goal",
entry_node="start",
nodes=[],
edges=[],
)
assert simple_graph.has_async_entry_points() is False
def test_get_async_entry_point(self, sample_graph):
"""Test getting async entry point by ID."""
ep = sample_graph.get_async_entry_point("webhook")
assert ep is not None
assert ep.id == "webhook"
assert ep.entry_node == "process-webhook"
ep_not_found = sample_graph.get_async_entry_point("nonexistent")
assert ep_not_found is None
def test_validate_async_entry_points(self):
"""Test validation catches async entry point errors."""
nodes = [
NodeSpec(
id="valid-node",
name="Valid Node",
description="A valid node",
node_type="event_loop",
input_keys=[],
output_keys=[],
),
]
# Invalid entry node
graph = GraphSpec(
id="test",
goal_id="goal",
entry_node="valid-node",
async_entry_points=[
AsyncEntryPointSpec(
id="invalid",
name="Invalid",
entry_node="nonexistent-node",
trigger_type="webhook",
),
],
nodes=nodes,
edges=[],
)
errors = graph.validate()["errors"]
assert any("nonexistent-node" in e for e in errors)
# Invalid isolation level
graph2 = GraphSpec(
id="test",
goal_id="goal",
entry_node="valid-node",
async_entry_points=[
AsyncEntryPointSpec(
id="bad-isolation",
name="Bad Isolation",
entry_node="valid-node",
trigger_type="webhook",
isolation_level="invalid",
),
],
nodes=nodes,
edges=[],
)
errors2 = graph2.validate()["errors"]
assert any("isolation_level" in e for e in errors2)
# Invalid trigger type
graph3 = GraphSpec(
id="test",
goal_id="goal",
entry_node="valid-node",
async_entry_points=[
AsyncEntryPointSpec(
id="bad-trigger",
name="Bad Trigger",
entry_node="valid-node",
trigger_type="invalid_trigger",
),
],
nodes=nodes,
edges=[],
)
errors3 = graph3.validate()["errors"]
assert any("trigger_type" in e for e in errors3)
# === Integration Tests ===
@@ -483,7 +483,6 @@ class TestEventDrivenEntryPoints:
version="1.0.0",
entry_node="process-event",
entry_points={"start": "process-event"},
async_entry_points=[],
terminal_nodes=[],
pause_nodes=[],
nodes=nodes,
+22
@@ -0,0 +1,22 @@
"""Trigger definitions for queen-level heartbeats (timers, webhooks)."""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Any
@dataclass
class TriggerDefinition:
"""A registered trigger that can be activated on the queen runtime.
Trigger *definitions* come from the worker's ``triggers.json``.
Activation state is per-session (persisted in ``SessionState.active_triggers``).
"""
id: str
trigger_type: str # "timer" | "webhook"
trigger_config: dict[str, Any] = field(default_factory=dict)
description: str = ""
task: str = ""
active: bool = False
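For illustration, a timer trigger as it might appear after loading triggers.json. The dataclass is restated so the sketch is self-contained, and the `interval_seconds` config key is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class TriggerDefinition:
    id: str
    trigger_type: str  # "timer" | "webhook"
    trigger_config: dict[str, Any] = field(default_factory=dict)
    description: str = ""
    task: str = ""
    active: bool = False


heartbeat = TriggerDefinition(
    id="daily-report",
    trigger_type="timer",
    trigger_config={"interval_seconds": 86400},  # hypothetical config key
    description="Daily status report",
    task="Summarize yesterday's runs",
)
```

Definitions start inactive; activation state lives per-session in `SessionState.active_triggers`.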
+7
@@ -144,6 +144,13 @@ class SessionState(BaseModel):
checkpoint_enabled: bool = False
latest_checkpoint_id: str | None = None
# Trigger activation state (IDs of triggers the queen/user turned on)
active_triggers: list[str] = Field(default_factory=list)
# Per-trigger task strings (user overrides, keyed by trigger ID)
trigger_tasks: dict[str, str] = Field(default_factory=dict)
# True after first successful worker execution (gates trigger delivery on restart)
worker_configured: bool = Field(default=False)
model_config = {"extra": "allow"}
@computed_field
+23
@@ -94,6 +94,29 @@ def sessions_dir(session: Session) -> Path:
return Path.home() / ".hive" / "agents" / agent_name / "sessions"
def cold_sessions_dir(session_id: str) -> Path | None:
"""Resolve the worker sessions directory from disk for a cold/stopped session.
Reads agent_path from the queen session's meta.json to find the agent name,
then returns ~/.hive/agents/{agent_name}/sessions/.
Returns None if meta.json is missing or has no agent_path.
"""
import json
meta_path = Path.home() / ".hive" / "queen" / "session" / session_id / "meta.json"
if not meta_path.exists():
return None
try:
meta = json.loads(meta_path.read_text(encoding="utf-8"))
agent_path = meta.get("agent_path")
if not agent_path:
return None
agent_name = Path(agent_path).name
return Path.home() / ".hive" / "agents" / agent_name / "sessions"
except (json.JSONDecodeError, OSError):
return None
# Allowed CORS origins (localhost on any port)
_CORS_ORIGINS = {"http://localhost", "http://127.0.0.1"}
+28 -1
@@ -90,6 +90,28 @@ async def create_queen(
phase_state = QueenPhaseState(phase=initial_phase, event_bus=session.event_bus)
session.phase_state = phase_state
# ---- Track ask rounds during planning ----------------------------
# Increment planning_ask_rounds each time the queen requests user
# input (ask_user or ask_user_multiple) while in the planning phase.
async def _track_planning_asks(event: AgentEvent) -> None:
if phase_state.phase != "planning":
return
# Only count explicit ask_user / ask_user_multiple calls, not
# auto-block (text-only turns emit CLIENT_INPUT_REQUESTED with
# an empty prompt and no options/questions).
data = event.data or {}
has_prompt = bool(data.get("prompt"))
has_questions = bool(data.get("questions"))
has_options = bool(data.get("options"))
if has_prompt or has_questions or has_options:
phase_state.planning_ask_rounds += 1
session.event_bus.subscribe(
[EventType.CLIENT_INPUT_REQUESTED],
_track_planning_asks,
filter_stream="queen",
)
# ---- Lifecycle tools (always registered) --------------------------
register_queen_lifecycle_tools(
queen_registry,
@@ -110,6 +132,7 @@ async def create_queen(
session.worker_path,
stream_id="queen",
worker_graph_id=session.worker_runtime._graph_id,
default_session_id=session.id,
)
queen_tools = list(queen_registry.get_tools().values())
@@ -149,7 +172,8 @@ async def create_queen(
worker_identity = (
"\n\n# Worker Profile\n"
"No worker agent loaded. You are operating independently.\n"
"Handle all tasks directly using your coding tools."
"Design or build the agent to solve the user's problem "
"according to your current phase."
)
_planning_body = (
@@ -252,6 +276,7 @@ async def create_queen(
execution_id=session.id,
dynamic_tools_provider=phase_state.get_current_tools,
dynamic_prompt_provider=phase_state.get_current_prompt,
iteration_metadata_provider=lambda: {"phase": phase_state.phase},
)
session.queen_executor = executor
@@ -269,6 +294,8 @@ async def create_queen(
return
if phase_state.phase == "running":
if event.type == EventType.EXECUTION_COMPLETED:
# Mark worker as configured after first successful run
session.worker_configured = True
output = event.data.get("output", {})
output_summary = ""
if output:
+8
@@ -15,6 +15,7 @@ logger = logging.getLogger(__name__)
DEFAULT_EVENT_TYPES = [
EventType.CLIENT_OUTPUT_DELTA,
EventType.CLIENT_INPUT_REQUESTED,
EventType.CLIENT_INPUT_RECEIVED,
EventType.LLM_TEXT_DELTA,
EventType.TOOL_CALL_STARTED,
EventType.TOOL_CALL_COMPLETED,
@@ -40,6 +41,11 @@ DEFAULT_EVENT_TYPES = [
EventType.CREDENTIALS_REQUIRED,
EventType.SUBAGENT_REPORT,
EventType.QUEEN_PHASE_CHANGED,
EventType.TRIGGER_AVAILABLE,
EventType.TRIGGER_ACTIVATED,
EventType.TRIGGER_DEACTIVATED,
EventType.TRIGGER_FIRED,
EventType.TRIGGER_REMOVED,
EventType.DRAFT_GRAPH_UPDATED,
]
@@ -90,6 +96,7 @@ async def handle_events(request: web.Request) -> web.StreamResponse:
"execution_failed",
"execution_paused",
"client_input_requested",
"client_input_received",
"node_loop_iteration",
"node_loop_started",
"credentials_required",
@@ -143,6 +150,7 @@ async def handle_events(request: web.Request) -> web.StreamResponse:
EventType.CLIENT_OUTPUT_DELTA.value,
EventType.EXECUTION_STARTED.value,
EventType.CLIENT_INPUT_REQUESTED.value,
EventType.CLIENT_INPUT_RECEIVED.value,
}
event_type_values = {et.value for et in event_types}
replay_types = _REPLAY_TYPES & event_type_values
+12
@@ -125,6 +125,18 @@ async def handle_chat(request: web.Request) -> web.Response:
node = queen_executor.node_registry.get("queen")
if node is not None and hasattr(node, "inject_event"):
await node.inject_event(message, is_client_input=True)
# Publish to EventBus so the session event log captures user messages
from framework.runtime.event_bus import AgentEvent, EventType
await session.event_bus.publish(
AgentEvent(
type=EventType.CLIENT_INPUT_RECEIVED,
stream_id="queen",
node_id="queen",
execution_id=session.id,
data={"content": message},
)
)
return web.json_response(
{
"status": "queen",
+27 -8
@@ -2,6 +2,7 @@
import json
import logging
import time
from aiohttp import web
@@ -116,6 +117,20 @@ async def handle_list_nodes(request: web.Request) -> web.Response:
}
for ep in reg.entry_points.values()
]
# Append triggers from triggers.json (stored on session)
for t in getattr(session, "available_triggers", {}).values():
entry = {
"id": t.id,
"name": t.description or t.id,
"entry_node": graph.entry_node,
"trigger_type": t.trigger_type,
"trigger_config": t.trigger_config,
"task": t.task,
}
mono = getattr(session, "trigger_next_fire", {}).get(t.id)
if mono is not None:
entry["next_fire_in"] = max(0.0, mono - time.monotonic())
entry_points.append(entry)
return web.json_response(
{
"nodes": nodes,
@@ -261,10 +276,12 @@ async def handle_flowchart_map(request: web.Request) -> web.Response:
# Fast path: already in memory
if phase_state is not None and phase_state.original_draft_graph is not None:
return web.json_response({
"map": phase_state.flowchart_map,
"original_draft": phase_state.original_draft_graph,
})
return web.json_response(
{
"map": phase_state.flowchart_map,
"original_draft": phase_state.original_draft_graph,
}
)
# Try loading from flowchart.json in the agent folder
worker_path = getattr(session, "worker_path", None)
@@ -281,10 +298,12 @@ async def handle_flowchart_map(request: web.Request) -> web.Response:
if phase_state is not None and original_draft:
phase_state.original_draft_graph = original_draft
phase_state.flowchart_map = fmap
return web.json_response({
"map": fmap,
"original_draft": original_draft,
})
return web.json_response(
{
"map": fmap,
"original_draft": original_draft,
}
)
except Exception:
logger.warning("Failed to read flowchart.json from %s", worker_path)
+217 -49
@@ -9,8 +9,10 @@ Session-primary routes:
- DELETE /api/sessions/{session_id}/worker unload worker from session
- GET /api/sessions/{session_id}/stats runtime statistics
- GET /api/sessions/{session_id}/entry-points list entry points
- PATCH /api/sessions/{session_id}/triggers/{id} update trigger task
- GET /api/sessions/{session_id}/graphs list graph IDs
- GET /api/sessions/{session_id}/queen-messages queen conversation history
- GET /api/sessions/{session_id}/events/history persisted eventbus log (for replay)
Worker session browsing (persisted execution runs on disk):
- GET /api/sessions/{session_id}/worker-sessions list
@@ -31,6 +33,7 @@ from pathlib import Path
from aiohttp import web
from framework.server.app import (
cold_sessions_dir,
resolve_session,
safe_path_segment,
sessions_dir,
@@ -140,6 +143,7 @@ async def handle_create_session(request: web.Request) -> web.Response:
session = await manager.create_session_with_worker(
agent_path,
agent_id=agent_id,
session_id=session_id,
model=model,
initial_prompt=initial_prompt,
queen_resume_from=queen_resume_from,
@@ -228,6 +232,22 @@ async def handle_get_live_session(request: web.Request) -> web.Response:
}
for ep in rt.get_entry_points()
]
# Append triggers from triggers.json (stored on session)
runner = getattr(session, "runner", None)
graph_entry = runner.graph.entry_node if runner else ""
for t in getattr(session, "available_triggers", {}).values():
entry = {
"id": t.id,
"name": t.description or t.id,
"entry_node": graph_entry,
"trigger_type": t.trigger_type,
"trigger_config": t.trigger_config,
"task": t.task,
}
mono = getattr(session, "trigger_next_fire", {}).get(t.id)
if mono is not None:
entry["next_fire_in"] = max(0.0, mono - time.monotonic())
data["entry_points"].append(entry)
data["graphs"] = session.worker_runtime.list_graphs()
return web.json_response(data)
@@ -351,23 +371,84 @@ async def handle_session_entry_points(request: web.Request) -> web.Response:
rt = session.worker_runtime
eps = rt.get_entry_points() if rt else []
entry_points = [
{
"id": ep.id,
"name": ep.name,
"entry_node": ep.entry_node,
"trigger_type": ep.trigger_type,
"trigger_config": ep.trigger_config,
**(
{"next_fire_in": nf}
if rt and (nf := rt.get_timer_next_fire_in(ep.id)) is not None
else {}
),
}
for ep in eps
]
# Append triggers from triggers.json (stored on session)
runner = getattr(session, "runner", None)
graph_entry = runner.graph.entry_node if runner else ""
for t in getattr(session, "available_triggers", {}).values():
entry = {
"id": t.id,
"name": t.description or t.id,
"entry_node": graph_entry,
"trigger_type": t.trigger_type,
"trigger_config": t.trigger_config,
"task": t.task,
}
mono = getattr(session, "trigger_next_fire", {}).get(t.id)
if mono is not None:
entry["next_fire_in"] = max(0.0, mono - time.monotonic())
entry_points.append(entry)
return web.json_response({"entry_points": entry_points})
async def handle_update_trigger_task(request: web.Request) -> web.Response:
"""PATCH /api/sessions/{session_id}/triggers/{trigger_id} — update trigger task."""
session, err = resolve_session(request)
if err:
return err
trigger_id = request.match_info["trigger_id"]
available = getattr(session, "available_triggers", {})
tdef = available.get(trigger_id)
if tdef is None:
return web.json_response(
{"error": f"Trigger '{trigger_id}' not found"},
status=404,
)
try:
body = await request.json()
except Exception:
return web.json_response({"error": "Invalid JSON body"}, status=400)
task = body.get("task")
if task is None:
return web.json_response({"error": "Missing 'task' field"}, status=400)
if not isinstance(task, str):
return web.json_response({"error": "'task' must be a string"}, status=400)
tdef.task = task
# Persist to session state and agent definition
from framework.tools.queen_lifecycle_tools import (
_persist_active_triggers,
_save_trigger_to_agent,
)
if trigger_id in getattr(session, "active_trigger_ids", set()):
session_id = request.match_info["session_id"]
await _persist_active_triggers(session, session_id)
_save_trigger_to_agent(session, trigger_id, tdef)
return web.json_response(
{
"entry_points": [
{
"id": ep.id,
"name": ep.name,
"entry_node": ep.entry_node,
"trigger_type": ep.trigger_type,
"trigger_config": ep.trigger_config,
**(
{"next_fire_in": nf}
if rt and (nf := rt.get_timer_next_fire_in(ep.id)) is not None
else {}
),
}
for ep in eps
]
"trigger_id": trigger_id,
"task": tdef.task,
}
)
@@ -397,12 +478,15 @@ async def handle_list_worker_sessions(request: web.Request) -> web.Response:
"""List worker sessions on disk."""
session, err = resolve_session(request)
if err:
return err
if not session.worker_path:
return web.json_response({"sessions": []})
sess_dir = sessions_dir(session)
# Fall back to cold session lookup from disk
sid = request.match_info["session_id"]
sess_dir = cold_sessions_dir(sid)
if sess_dir is None:
return err
else:
if not session.worker_path:
return web.json_response({"sessions": []})
sess_dir = sessions_dir(session)
if not sess_dir.exists():
return web.json_response({"sessions": []})
@@ -564,48 +648,85 @@ async def handle_messages(request: web.Request) -> web.Response:
"""Get messages for a worker session."""
session, err = resolve_session(request)
if err:
return err
if not session.worker_path:
return web.json_response({"error": "No worker loaded"}, status=503)
# Fall back to cold session lookup from disk
sid = request.match_info["session_id"]
sess_dir = cold_sessions_dir(sid)
if sess_dir is None:
return err
else:
if not session.worker_path:
return web.json_response({"error": "No worker loaded"}, status=503)
sess_dir = sessions_dir(session)
ws_id = request.match_info.get("ws_id") or request.match_info.get("session_id", "")
ws_id = safe_path_segment(ws_id)
convs_dir = sessions_dir(session) / ws_id / "conversations"
convs_dir = sess_dir / ws_id / "conversations"
if not convs_dir.exists():
return web.json_response({"messages": []})
filter_node = request.query.get("node_id")
all_messages = []
for node_dir in convs_dir.iterdir():
if not node_dir.is_dir():
continue
if filter_node and node_dir.name != filter_node:
continue
parts_dir = node_dir / "parts"
def _collect_msg_parts(parts_dir: Path, node_id: str) -> None:
if not parts_dir.exists():
continue
return
for part_file in sorted(parts_dir.iterdir()):
if part_file.suffix != ".json":
continue
try:
part = json.loads(part_file.read_text(encoding="utf-8"))
part["_node_id"] = node_dir.name
part["_node_id"] = node_id
part.setdefault("created_at", part_file.stat().st_mtime)
all_messages.append(part)
except (json.JSONDecodeError, OSError):
continue
# Flat layout: conversations/parts/*.json
if not filter_node:
_collect_msg_parts(convs_dir / "parts", "worker")
# Node-based layout: conversations/<node_id>/parts/*.json
for node_dir in convs_dir.iterdir():
if not node_dir.is_dir() or node_dir.name == "parts":
continue
if filter_node and node_dir.name != filter_node:
continue
_collect_msg_parts(node_dir / "parts", node_dir.name)
# Merge run lifecycle markers from runs.jsonl (for historical dividers)
runs_file = sess_dir / ws_id / "runs.jsonl"
if runs_file.exists():
try:
for line in runs_file.read_text(encoding="utf-8").splitlines():
line = line.strip()
if not line:
continue
try:
record = json.loads(line)
all_messages.append(
{
"seq": -1,
"role": "system",
"content": "",
"_node_id": "_run_marker",
"is_run_marker": True,
"run_id": record.get("run_id"),
"run_event": record.get("event"),
"created_at": record.get("created_at", 0),
}
)
except json.JSONDecodeError:
continue
except OSError:
pass
all_messages.sort(key=lambda m: m.get("created_at", m.get("seq", 0)))
client_only = request.query.get("client_only", "").lower() in ("true", "1")
if client_only:
client_facing_nodes: set[str] = set()
if session and session.runner and hasattr(session.runner, "graph"):
for node in session.runner.graph.nodes:
if node.client_facing:
client_facing_nodes.add(node.id)
@@ -614,12 +735,15 @@ async def handle_messages(request: web.Request) -> web.Response:
all_messages = [
m
for m in all_messages
if m.get("is_run_marker")
or (
not m.get("is_transition_marker")
and m["role"] != "tool"
and not (m["role"] == "assistant" and m.get("tool_calls"))
and (
(m["role"] == "user" and m.get("is_client_input"))
or (m["role"] == "assistant" and m.get("_node_id") in client_facing_nodes)
)
)
]
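The `client_only` predicate in that comprehension can be factored out as a standalone function for clarity. A minimal sketch under the same message-dict shape; the `is_client_visible` name is this sketch's own:

```python
def is_client_visible(m: dict, client_facing_nodes: set[str]) -> bool:
    # Run lifecycle markers always pass through (rendered as dividers).
    if m.get("is_run_marker"):
        return True
    # Drop transition markers, tool results, and assistant tool-call turns.
    if m.get("is_transition_marker") or m["role"] == "tool":
        return False
    if m["role"] == "assistant" and m.get("tool_calls"):
        return False
    # Keep explicit client inputs and output from client-facing nodes.
    return (m["role"] == "user" and bool(m.get("is_client_input"))) or (
        m["role"] == "assistant" and m.get("_node_id") in client_facing_nodes
    )
```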
@@ -640,18 +764,16 @@ async def handle_queen_messages(request: web.Request) -> web.Response:
return web.json_response({"messages": [], "session_id": session_id})
all_messages: list[dict] = []
def _read_parts(parts_dir: Path, node_id: str) -> None:
if not parts_dir.exists():
return
for part_file in sorted(parts_dir.iterdir()):
if part_file.suffix != ".json":
continue
try:
part = json.loads(part_file.read_text(encoding="utf-8"))
part["_node_id"] = node_id
# Use file mtime as created_at so frontend can order
# queen and worker messages chronologically.
part.setdefault("created_at", part_file.stat().st_mtime)
@@ -659,6 +781,15 @@ async def handle_queen_messages(request: web.Request) -> web.Response:
except (json.JSONDecodeError, OSError):
continue
# Flat layout: conversations/parts/*.json
_read_parts(convs_dir / "parts", "queen")
# Node-based layout: conversations/<node_id>/parts/*.json
for node_dir in convs_dir.iterdir():
if not node_dir.is_dir() or node_dir.name == "parts":
continue
_read_parts(node_dir / "parts", node_dir.name)
all_messages.sort(key=lambda m: m.get("created_at", m.get("seq", 0)))
# Filter to client-facing messages only
@@ -673,6 +804,38 @@ async def handle_queen_messages(request: web.Request) -> web.Response:
return web.json_response({"messages": all_messages, "session_id": session_id})
async def handle_session_events_history(request: web.Request) -> web.Response:
"""GET /api/sessions/{session_id}/events/history — persisted eventbus log.
Reads ``events.jsonl`` from the session directory on disk so it works for
both live sessions and cold (post-server-restart) sessions. The frontend
replays these events through ``sseEventToChatMessage`` to fully reconstruct
the UI state on resume.
"""
session_id = request.match_info["session_id"]
queen_dir = Path.home() / ".hive" / "queen" / "session" / session_id
events_path = queen_dir / "events.jsonl"
if not events_path.exists():
return web.json_response({"events": [], "session_id": session_id})
events: list[dict] = []
try:
with open(events_path, encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line:
continue
try:
events.append(json.loads(line))
except json.JSONDecodeError:
continue
except OSError:
return web.json_response({"events": [], "session_id": session_id})
return web.json_response({"events": events, "session_id": session_id})
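The endpoint above reads `events.jsonl` line by line, skipping blanks and malformed lines so a partially written log never fails the request. A minimal sketch of that tolerant-read pattern (the `read_jsonl` helper name is this sketch's assumption):

```python
import json
from pathlib import Path


def read_jsonl(path: Path) -> list[dict]:
    """Read a JSONL file, skipping blank and malformed lines."""
    records: list[dict] = []
    try:
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                try:
                    records.append(json.loads(line))
                except json.JSONDecodeError:
                    continue  # tolerate torn or partial writes
    except OSError:
        return []  # missing or unreadable file -> empty history
    return records
```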
async def handle_session_history(request: web.Request) -> web.Response:
"""GET /api/sessions/history — all queen sessions on disk (live + cold).
@@ -746,6 +909,7 @@ async def handle_discover(request: web.Request) -> web.Response:
"description": entry.description,
"category": entry.category,
"session_count": entry.session_count,
"run_count": entry.run_count,
"node_count": entry.node_count,
"tool_count": entry.tool_count,
"tags": entry.tags,
@@ -783,8 +947,12 @@ def register_routes(app: web.Application) -> None:
# Session info
app.router.add_get("/api/sessions/{session_id}/stats", handle_session_stats)
app.router.add_get("/api/sessions/{session_id}/entry-points", handle_session_entry_points)
app.router.add_patch(
"/api/sessions/{session_id}/triggers/{trigger_id}", handle_update_trigger_task
)
app.router.add_get("/api/sessions/{session_id}/graphs", handle_session_graphs)
app.router.add_get("/api/sessions/{session_id}/queen-messages", handle_queen_messages)
app.router.add_get("/api/sessions/{session_id}/events/history", handle_session_events_history)
# Worker session browsing (session-primary)
app.router.add_get("/api/sessions/{session_id}/worker-sessions", handle_list_worker_sessions)
@@ -7,7 +7,6 @@ Architecture:
- Session owns EventBus + LLM, shared with queen and worker
- Queen is always present once a session starts
- Worker is optional, loaded into an existing session
- Judge is active only when a worker is loaded
"""
import asyncio
@@ -15,11 +14,13 @@ import json
import logging
import time
import uuid
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Any
from framework.runtime.triggers import TriggerDefinition
logger = logging.getLogger(__name__)
@@ -42,12 +43,23 @@ class Session:
worker_info: Any | None = None # AgentInfo
# Queen phase state (building/staging/running)
phase_state: Any = None # QueenPhaseState
# Judge (active when worker is loaded)
judge_task: asyncio.Task | None = None
escalation_sub: str | None = None
# Worker handoff subscription
worker_handoff_sub: str | None = None
# Memory consolidation subscription (fires on CONTEXT_COMPACTED)
memory_consolidation_sub: str | None = None
# Trigger definitions loaded from agent's triggers.json (available but inactive)
available_triggers: dict[str, TriggerDefinition] = field(default_factory=dict)
# Active trigger tracking (IDs currently firing + their asyncio tasks)
active_trigger_ids: set[str] = field(default_factory=set)
active_timer_tasks: dict[str, asyncio.Task] = field(default_factory=dict)
# Queen-owned webhook server (lazy singleton, created on first webhook trigger activation)
queen_webhook_server: Any = None
# EventBus subscription IDs for active webhook triggers (trigger_id -> sub_id)
active_webhook_subs: dict[str, str] = field(default_factory=dict)
# True after first successful worker execution (gates trigger delivery)
worker_configured: bool = False
# Monotonic timestamps for next trigger fire (mirrors AgentRuntime._timer_next_fire)
trigger_next_fire: dict[str, float] = field(default_factory=dict)
# Session directory resumption:
# When set, _start_queen writes queen conversations to this existing session's
# directory instead of creating a new one. This lets cold-restores accumulate
@@ -130,7 +142,9 @@ class SessionManager:
to that existing session's directory instead of creating a new one.
This preserves full conversation history across server restarts.
"""
# Reuse the original session ID when cold-restoring
resolved_session_id = queen_resume_from or session_id
session = await self._create_session_core(session_id=resolved_session_id, model=model)
session.queen_resume_from = queen_resume_from
# Start queen immediately (queen-only, no worker tools yet)
@@ -147,22 +161,28 @@ class SessionManager:
self,
agent_path: str | Path,
agent_id: str | None = None,
session_id: str | None = None,
model: str | None = None,
initial_prompt: str | None = None,
queen_resume_from: str | None = None,
) -> Session:
"""Create a session and load a worker in one step.
When ``queen_resume_from`` is set the session reuses the original session
ID so the frontend sees a single continuous session. The queen writes
conversation messages to that existing directory, preserving full history.
"""
from framework.tools.queen_lifecycle_tools import build_worker_profile
agent_path = Path(agent_path)
resolved_worker_id = agent_id or agent_path.name
# Reuse the original session ID when cold-restoring so the frontend
# sees one continuous session instead of a new one each time.
session = await self._create_session_core(
session_id=queen_resume_from,
model=model,
)
session.queen_resume_from = queen_resume_from
try:
# Load worker FIRST (before queen) so queen gets full tools
@@ -202,8 +222,8 @@ class SessionManager:
) -> None:
"""Load a worker agent into a session (core logic).
Sets up the runner, runtime, and session fields. Does NOT notify
the queen; callers handle that step.
"""
from framework.runner import AgentRunner
@@ -242,6 +262,25 @@ class SessionManager:
runtime = runner._agent_runtime
# Load triggers from the agent's triggers.json definition file.
from framework.tools.queen_lifecycle_tools import _read_agent_triggers_json
for tdata in _read_agent_triggers_json(agent_path):
tid = tdata.get("id", "")
ttype = tdata.get("trigger_type", "")
if tid and ttype in ("timer", "webhook"):
session.available_triggers[tid] = TriggerDefinition(
id=tid,
trigger_type=ttype,
trigger_config=tdata.get("trigger_config", {}),
description=tdata.get("name", tid),
task=tdata.get("task", ""),
)
logger.info("Loaded trigger '%s' (%s) from triggers.json", tid, ttype)
if session.available_triggers:
await self._emit_trigger_events(session, "available", session.available_triggers)
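Only `timer` and `webhook` entries carrying a non-empty `id` become `TriggerDefinition`s. A sketch of that filtering, assuming `triggers.json` is a list of objects with the fields read above (`id`, `trigger_type`, `trigger_config`, `name`, `task`); the `select_triggers` name is this sketch's own:

```python
def select_triggers(tdata_list: list[dict]) -> dict[str, dict]:
    """Keep only timer/webhook triggers that carry an id."""
    out: dict[str, dict] = {}
    for tdata in tdata_list:
        tid = tdata.get("id", "")
        ttype = tdata.get("trigger_type", "")
        if tid and ttype in ("timer", "webhook"):
            out[tid] = {
                "id": tid,
                "trigger_type": ttype,
                "trigger_config": tdata.get("trigger_config", {}),
                "description": tdata.get("name", tid),  # fall back to the id
                "task": tdata.get("task", ""),
            }
    return out
```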
# Start runtime on event loop
if runtime and not runtime.is_running:
await runtime.start()
@@ -369,7 +408,7 @@ class SessionManager:
) -> Session:
"""Load a worker agent into an existing session (with running queen).
Starts the worker runtime and notifies the queen.
"""
agent_path = Path(agent_path)
@@ -385,11 +424,68 @@ class SessionManager:
)
# Notify queen about the loaded worker (skip for queen itself).
# Health judge disabled for simplicity.
if agent_path.name != "queen" and session.worker_runtime:
# await self._start_judge(session, session.runner._storage_path)
await self._notify_queen_worker_loaded(session)
# Update meta.json so cold-restore can discover this session by agent_path
storage_session_id = session.queen_resume_from or session.id
meta_path = Path.home() / ".hive" / "queen" / "session" / storage_session_id / "meta.json"
try:
_agent_name = (
session.worker_info.name
if session.worker_info
else str(agent_path.name).replace("_", " ").title()
)
existing_meta = {}
if meta_path.exists():
existing_meta = json.loads(meta_path.read_text(encoding="utf-8"))
existing_meta["agent_name"] = _agent_name
existing_meta["agent_path"] = (
str(session.worker_path) if session.worker_path else str(agent_path)
)
meta_path.write_text(json.dumps(existing_meta), encoding="utf-8")
except OSError:
pass
# Restore previously active triggers from persisted session state
if session.available_triggers and session.worker_runtime:
try:
store = session.worker_runtime._session_store
state = await store.read_state(session_id)
if state and state.active_triggers:
from framework.tools.queen_lifecycle_tools import (
_start_trigger_timer,
_start_trigger_webhook,
)
saved_tasks = getattr(state, "trigger_tasks", {}) or {}
for tid in state.active_triggers:
tdef = session.available_triggers.get(tid)
if tdef:
# Restore user-configured task override
saved_task = saved_tasks.get(tid, "")
if saved_task:
tdef.task = saved_task
tdef.active = True
session.active_trigger_ids.add(tid)
if tdef.trigger_type == "timer":
await _start_trigger_timer(session, tid, tdef)
logger.info("Restored trigger timer '%s'", tid)
elif tdef.trigger_type == "webhook":
await _start_trigger_webhook(session, tid, tdef)
logger.info("Restored webhook trigger '%s'", tid)
else:
logger.warning(
"Saved trigger '%s' not found in worker entry points, skipping",
tid,
)
# Restore worker_configured flag
if state and getattr(state, "worker_configured", False):
session.worker_configured = True
except Exception as e:
logger.warning("Failed to restore active triggers: %s", e)
# Emit SSE event so the frontend can update UI
await self._emit_worker_loaded(session)
@@ -403,9 +499,6 @@ class SessionManager:
if session.worker_runtime is None:
return False
# Stop judge + escalation
self._stop_judge(session)
# Cleanup worker
if session.runner:
try:
@@ -413,6 +506,26 @@ class SessionManager:
except Exception as e:
logger.error("Error cleaning up worker '%s': %s", session.worker_id, e)
# Cancel active trigger timers
for tid, task in session.active_timer_tasks.items():
task.cancel()
logger.info("Cancelled trigger timer '%s' on unload", tid)
session.active_timer_tasks.clear()
# Unsubscribe webhook handlers (server stays alive — queen-owned)
for sub_id in session.active_webhook_subs.values():
try:
session.event_bus.unsubscribe(sub_id)
except Exception:
pass
session.active_webhook_subs.clear()
session.active_trigger_ids.clear()
# Clean up triggers
if session.available_triggers:
await self._emit_trigger_events(session, "removed", session.available_triggers)
session.available_triggers.clear()
worker_id = session.worker_id
session.worker_id = None
session.worker_path = None
@@ -443,8 +556,6 @@ class SessionManager:
_storage_id = getattr(session, "queen_resume_from", None) or session_id
_session_dir = Path.home() / ".hive" / "queen" / "session" / _storage_id
# Stop judge
self._stop_judge(session)
if session.worker_handoff_sub is not None:
try:
session.event_bus.unsubscribe(session.worker_handoff_sub)
@@ -464,6 +575,25 @@ class SessionManager:
session.queen_task = None
session.queen_executor = None
# Cancel active trigger timers
for task in session.active_timer_tasks.values():
task.cancel()
session.active_timer_tasks.clear()
# Unsubscribe webhook handlers and stop queen webhook server
for sub_id in session.active_webhook_subs.values():
try:
session.event_bus.unsubscribe(sub_id)
except Exception:
pass
session.active_webhook_subs.clear()
if session.queen_webhook_server is not None:
try:
await session.queen_webhook_server.stop()
except Exception:
logger.error("Error stopping queen webhook server", exc_info=True)
session.queen_webhook_server = None
# Cleanup worker
if session.runner:
try:
@@ -482,6 +612,9 @@ class SessionManager:
name=f"queen-memory-consolidation-{session_id}",
)
# Close per-session event log
session.event_bus.close_session_log()
logger.info("Session '%s' stopped", session_id)
return True
@@ -491,7 +624,7 @@ class SessionManager:
async def _handle_worker_handoff(self, session: Session, executor: Any, event: Any) -> None:
"""Route worker escalation events into the queen conversation."""
if event.stream_id == "queen":
return
reason = str(event.data.get("reason", "")).strip()
@@ -580,6 +713,39 @@ class SessionManager:
except OSError:
pass
# Enable per-session event persistence so that all eventbus events
# survive server restarts and can be replayed on cold-session resume.
# Scan the existing event log to find the max iteration ever written,
# then use max+1 as offset so resumed sessions produce monotonically
# increasing iteration values — preventing frontend message ID collisions.
iteration_offset = 0
events_path = queen_dir / "events.jsonl"
try:
if events_path.exists():
max_iter = -1
with open(events_path, encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line:
continue
try:
evt = json.loads(line)
it = evt.get("data", {}).get("iteration")
if isinstance(it, int) and it > max_iter:
max_iter = it
except (json.JSONDecodeError, TypeError):
continue
if max_iter >= 0:
iteration_offset = max_iter + 1
logger.info(
"Session '%s' resuming with iteration_offset=%d (from events.jsonl max)",
session.id,
iteration_offset,
)
except OSError:
pass
session.event_bus.set_session_log(events_path, iteration_offset=iteration_offset)
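The offset scan above reduces to: find the maximum `iteration` ever written, then continue from max+1 so resumed sessions emit monotonically increasing values. A minimal sketch over in-memory log lines (the `iteration_offset` helper name is this sketch's own):

```python
import json


def iteration_offset(lines: list[str]) -> int:
    """Max iteration found in event-log lines, plus one (0 if none found)."""
    max_iter = -1
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            evt = json.loads(line)
            it = evt.get("data", {}).get("iteration")
            if isinstance(it, int) and it > max_iter:
                max_iter = it
        except (json.JSONDecodeError, TypeError, AttributeError):
            continue  # tolerate garbage or non-dict lines
    return max_iter + 1 if max_iter >= 0 else 0
```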
session.queen_task = await create_queen(
session=session,
session_manager=self,
@@ -588,6 +754,22 @@ class SessionManager:
initial_prompt=initial_prompt,
)
# Auto-load worker on cold restore — the queen's conversation expects
# the agent to be loaded, but the new session has no worker.
if session.queen_resume_from and not session.worker_runtime:
meta_path = queen_dir / "meta.json"
if meta_path.exists():
try:
_meta = json.loads(meta_path.read_text(encoding="utf-8"))
_agent_path = _meta.get("agent_path")
if _agent_path and Path(_agent_path).exists():
await self.load_worker(session.id, _agent_path)
if session.phase_state:
await session.phase_state.switch_to_staging(source="auto")
logger.info("Cold restore: auto-loaded worker from %s", _agent_path)
except Exception:
logger.warning("Cold restore: failed to auto-load worker", exc_info=True)
# Memory consolidation — triggered by context compaction events.
# Compaction is a natural signal that "enough has happened to be worth remembering".
_consolidation_llm = session.llm
@@ -607,116 +789,6 @@ class SessionManager:
handler=_on_compaction,
)
# ------------------------------------------------------------------
# Judge startup / teardown
# ------------------------------------------------------------------
async def _start_judge(
self,
session: Session,
worker_storage_path: str | Path,
) -> None:
"""Start the health judge for a session's worker."""
from framework.graph.executor import GraphExecutor
from framework.monitoring import judge_goal, judge_graph
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.core import Runtime
from framework.runtime.event_bus import EventType as _ET
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools
worker_storage_path = Path(worker_storage_path)
try:
# Monitoring tools
monitoring_registry = ToolRegistry()
register_worker_monitoring_tools(
monitoring_registry,
session.event_bus,
worker_storage_path,
worker_graph_id=session.worker_runtime._graph_id,
)
hive_home = Path.home() / ".hive"
judge_dir = hive_home / "judge" / "session" / session.id
judge_dir.mkdir(parents=True, exist_ok=True)
judge_runtime = Runtime(hive_home / "judge")
monitoring_tools = list(monitoring_registry.get_tools().values())
monitoring_executor = monitoring_registry.get_executor()
async def _judge_loop():
interval = 300 # 5 minutes between checks
# Wait before the first check — let the worker actually do something
await asyncio.sleep(interval)
while True:
try:
executor = GraphExecutor(
runtime=judge_runtime,
llm=session.llm,
tools=monitoring_tools,
tool_executor=monitoring_executor,
event_bus=session.event_bus,
stream_id="judge",
storage_path=judge_dir,
loop_config=judge_graph.loop_config,
)
await executor.execute(
graph=judge_graph,
goal=judge_goal,
input_data={
"event": {"source": "timer", "reason": "scheduled"},
},
session_state={"resume_session_id": session.id},
)
except Exception:
logger.error("Health judge tick failed", exc_info=True)
await asyncio.sleep(interval)
session.judge_task = asyncio.create_task(_judge_loop())
# Escalation: judge → queen
async def _on_escalation(event):
ticket = event.data.get("ticket", {})
executor = session.queen_executor
if executor is None:
logger.warning("Escalation received but queen executor is None")
return
node = executor.node_registry.get("queen")
if node is not None and hasattr(node, "inject_event"):
msg = "[ESCALATION TICKET from Health Judge]\n" + json.dumps(
ticket, indent=2, ensure_ascii=False
)
await node.inject_event(msg)
else:
logger.warning("Escalation received but queen node not ready")
session.escalation_sub = session.event_bus.subscribe(
event_types=[_ET.WORKER_ESCALATION_TICKET],
handler=_on_escalation,
)
logger.info("Judge started for session '%s'", session.id)
except Exception as e:
logger.error(
"Failed to start judge for session '%s': %s",
session.id,
e,
exc_info=True,
)
def _stop_judge(self, session: Session) -> None:
"""Cancel judge task and unsubscribe escalation events."""
if session.judge_task is not None:
session.judge_task.cancel()
session.judge_task = None
if session.escalation_sub is not None:
try:
session.event_bus.unsubscribe(session.escalation_sub)
except Exception:
pass
session.escalation_sub = None
# ------------------------------------------------------------------
# Queen notifications
# ------------------------------------------------------------------
@@ -733,7 +805,22 @@ class SessionManager:
return
profile = build_worker_profile(session.worker_runtime, agent_path=session.worker_path)
# Append available trigger info so the queen knows what's schedulable
trigger_lines = ""
if session.available_triggers:
parts = []
for t in session.available_triggers.values():
cfg = t.trigger_config
detail = cfg.get("cron") or f"every {cfg.get('interval_minutes', '?')} min"
task_info = f' -> task: "{t.task}"' if t.task else " (no task configured)"
parts.append(f" - {t.id} ({t.trigger_type}: {detail}){task_info}")
trigger_lines = (
"\n\nAvailable triggers (inactive — use set_trigger to activate):\n"
+ "\n".join(parts)
)
await node.inject_event(f"[SYSTEM] Worker loaded.{profile}{trigger_lines}")
async def _emit_worker_loaded(self, session: Session) -> None:
"""Publish a WORKER_LOADED event so the frontend can update."""
@@ -765,9 +852,35 @@ class SessionManager:
await node.inject_event(
"[SYSTEM] Worker unloaded. You are now operating independently. "
"Design or build the agent to solve the user's problem "
"according to your current phase."
)
async def _emit_trigger_events(
self,
session: Session,
kind: str,
triggers: dict[str, TriggerDefinition],
) -> None:
"""Emit TRIGGER_AVAILABLE or TRIGGER_REMOVED events for each trigger."""
from framework.runtime.event_bus import AgentEvent, EventType
event_type = (
EventType.TRIGGER_AVAILABLE if kind == "available" else EventType.TRIGGER_REMOVED
)
for t in triggers.values():
await session.event_bus.publish(
AgentEvent(
type=event_type,
stream_id="queen",
data={
"trigger_id": t.id,
"trigger_type": t.trigger_type,
"trigger_config": t.trigger_config,
},
)
)
async def revive_queen(self, session: Session, initial_prompt: str | None = None) -> None:
"""Revive a dead queen executor on an existing session.
@@ -839,13 +952,19 @@ class SessionManager:
# Check whether any message part files are actually present
has_messages = False
try:
# Flat layout: conversations/parts/*.json
flat_parts = convs_dir / "parts"
if flat_parts.exists() and any(f.suffix == ".json" for f in flat_parts.iterdir()):
has_messages = True
else:
# Node-based layout: conversations/<node_id>/parts/*.json
for node_dir in convs_dir.iterdir():
if not node_dir.is_dir() or node_dir.name == "parts":
continue
parts_dir = node_dir / "parts"
if parts_dir.exists() and any(f.suffix == ".json" for f in parts_dir.iterdir()):
has_messages = True
break
except OSError:
pass
@@ -922,21 +1041,27 @@ class SessionManager:
if convs_dir.exists():
try:
all_parts: list[dict] = []
def _collect_parts(parts_dir: Path, _dest: list[dict] = all_parts) -> None:
if not parts_dir.exists():
return
for part_file in sorted(parts_dir.iterdir()):
if part_file.suffix != ".json":
continue
try:
part = json.loads(part_file.read_text(encoding="utf-8"))
part.setdefault("created_at", part_file.stat().st_mtime)
_dest.append(part)
except (json.JSONDecodeError, OSError):
continue
# Flat layout: conversations/parts/*.json
_collect_parts(convs_dir / "parts")
# Node-based layout: conversations/<node_id>/parts/*.json
for node_dir in convs_dir.iterdir():
if not node_dir.is_dir() or node_dir.name == "parts":
continue
_collect_parts(node_dir / "parts")
# Filter to client-facing messages only
client_msgs = [
p
@@ -16,6 +16,9 @@ from aiohttp.test_utils import TestClient, TestServer
from framework.server.app import create_app
from framework.server.session_manager import Session
REPO_ROOT = Path(__file__).resolve().parents[4]
EXAMPLE_AGENT_PATH = REPO_ROOT / "examples" / "templates" / "deep_research_agent"
# ---------------------------------------------------------------------------
# Mock helpers
# ---------------------------------------------------------------------------
@@ -347,6 +350,35 @@ class TestHealth:
class TestSessionCRUD:
@pytest.mark.asyncio
async def test_create_session_with_worker_forwards_session_id(self):
app = create_app()
manager = app["manager"]
manager.create_session_with_worker = AsyncMock(
return_value=_make_session(agent_id="my-custom-session")
)
async with TestClient(TestServer(app)) as client:
resp = await client.post(
"/api/sessions",
json={
"session_id": "my-custom-session",
"agent_path": str(EXAMPLE_AGENT_PATH),
},
)
data = await resp.json()
assert resp.status == 201
assert data["session_id"] == "my-custom-session"
manager.create_session_with_worker.assert_awaited_once_with(
str(EXAMPLE_AGENT_PATH.resolve()),
agent_id=None,
session_id="my-custom-session",
model=None,
initial_prompt=None,
queen_resume_from=None,
)
@pytest.mark.asyncio
async def test_list_sessions_empty(self):
app = create_app()
@@ -78,19 +78,6 @@ def register_graph_tools(registry: ToolRegistry, runtime: AgentRuntime) -> int:
isolation_level="shared",
)
# Async entry points
for aep in runner.graph.async_entry_points:
entry_points[aep.id] = EntryPointSpec(
id=aep.id,
name=aep.name,
entry_node=aep.entry_node,
trigger_type=aep.trigger_type,
trigger_config=aep.trigger_config,
isolation_level=aep.isolation_level,
priority=aep.priority,
max_concurrent=aep.max_concurrent,
)
await runtime.add_graph(
graph_id=graph_id,
graph=runner.graph,
@@ -1,20 +1,17 @@
"""Worker monitoring tools for Queen triage agents.
Three tools are registered by ``register_worker_monitoring_tools()``:
- ``get_worker_health_summary`` reads the worker's session log files and
returns a compact health snapshot (recent verdicts, step count, timing).
session_id is optional: if omitted, the most recent active session is
auto-discovered from storage.
- ``emit_escalation_ticket`` validates and publishes an EscalationTicket
to the shared EventBus as a WORKER_ESCALATION_TICKET event.
Used by the Health Judge when it decides to escalate.
- ``notify_operator`` emits a QUEEN_INTERVENTION_REQUESTED event so the TUI
can surface a non-disruptive operator notification.
Used by the Queen's ticket_triage_node when it decides to intervene.
Usage::
@@ -45,8 +42,9 @@ def register_worker_monitoring_tools(
registry: ToolRegistry,
event_bus: EventBus,
storage_path: Path,
stream_id: str = "monitoring",
worker_graph_id: str | None = None,
default_session_id: str | None = None,
) -> int:
"""Register worker monitoring tools bound to *event_bus* and *storage_path*.
@@ -55,9 +53,15 @@ def register_worker_monitoring_tools(
event_bus: The shared EventBus for the worker runtime.
storage_path: Root storage path of the worker runtime
(e.g. ``~/.hive/agents/{name}``).
stream_id: Stream ID used when emitting events.
worker_graph_id: The primary worker graph's ID. Included in health summary
so the judge can populate ticket identity fields accurately.
default_session_id: When set, ``get_worker_health_summary`` uses this
session ID as the default instead of auto-discovering
the most-recent-by-mtime session. Callers should pass
the queen's own session ID so that after a cold-restore
the monitoring tool reads the correct worker session
rather than a stale orphaned one.
Returns:
Number of tools registered.
@@ -65,7 +69,7 @@ def register_worker_monitoring_tools(
from framework.llm.provider import Tool
storage_path = Path(storage_path)
# Derive agent identity from storage path for ticket fields.
# storage_path is ~/.hive/agents/{agent_name} — the name is the last component.
_worker_agent_id: str = storage_path.name
_worker_graph_id: str = worker_graph_id or storage_path.name
@@ -100,23 +104,29 @@ def register_worker_monitoring_tools(
if not sessions_dir.exists():
return json.dumps({"error": "No sessions found — worker has not started yet"})
# Prefer the queen's own session ID (set at registration time) over
# mtime-based discovery, which can pick a stale orphaned session after
# a cold-restore when a newer-but-empty session directory exists.
if default_session_id and (sessions_dir / default_session_id).is_dir():
session_id = default_session_id
else:
candidates = [
d for d in sessions_dir.iterdir() if d.is_dir() and (d / "state.json").exists()
]
if not candidates:
return json.dumps({"error": "No sessions found — worker has not started yet"})
def _sort_key(d: Path):
try:
state = json.loads((d / "state.json").read_text(encoding="utf-8"))
# in_progress/running sorts before completed/failed
priority = 0 if state.get("status", "") in ("in_progress", "running") else 1
return (priority, -d.stat().st_mtime)
except Exception:
return (2, 0)
candidates.sort(key=_sort_key)
session_id = candidates[0].name
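The `_sort_key` above prefers sessions whose `state.json` reports `in_progress`/`running`, then falls back to newest mtime; unreadable state sorts last. A self-contained sketch of that tuple-key ordering over plain `(session_id, status, mtime)` triples instead of paths (the `pick_session` name is this sketch's own):

```python
def pick_session(candidates: list[tuple[str, str, float]]) -> str:
    """candidates: (session_id, status, mtime). Prefer active, then newest."""

    def sort_key(c: tuple[str, str, float]):
        _sid, status, mtime = c
        # in_progress/running sorts before completed/failed; ties break
        # on most-recent mtime (negated so bigger mtime sorts first).
        priority = 0 if status in ("in_progress", "running") else 1
        return (priority, -mtime)

    return sorted(candidates, key=sort_key)[0][0]
```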
# Resolve log paths
session_dir = storage_path / "sessions" / session_id
@@ -201,10 +211,9 @@ def register_worker_monitoring_tools(
description=(
"Read the worker agent's execution logs and return a compact health snapshot. "
"Returns worker_agent_id and worker_graph_id (use these for ticket identity fields), "
"recent verdicts, step count, time since last step, and "
"a snippet of the most recent LLM output. "
"session_id is optional — omit it to auto-discover the most recent active session."
),
parameters={
"type": "object",
@@ -241,8 +250,7 @@ def register_worker_monitoring_tools(
"""Validate and publish an EscalationTicket to the shared EventBus.
ticket_json must be a JSON string containing all required EscalationTicket
fields. The ticket is validated before publishing.
Returns a confirmation JSON with the ticket_id on success, or an error.
"""
@@ -257,7 +265,7 @@ def register_worker_monitoring_tools(
try:
await event_bus.emit_worker_escalation_ticket(
stream_id=stream_id,
node_id="monitoring",
ticket=ticket.model_dump(),
)
logger.info(
@@ -280,7 +288,6 @@ def register_worker_monitoring_tools(
name="emit_escalation_ticket",
description=(
"Validate and publish a structured EscalationTicket to the shared EventBus. "
"The Queen's ticket_receiver entry point will fire and triage the ticket. "
"ticket_json must be a JSON string with all required EscalationTicket fields: "
"worker_agent_id, worker_session_id, worker_node_id, worker_graph_id, "
"severity (low/medium/high/critical), cause, judge_reasoning, suggested_action, "
+5
@@ -38,4 +38,9 @@ export const api = {
body: body ? JSON.stringify(body) : undefined,
}),
delete: <T>(path: string) => request<T>(path, { method: "DELETE" }),
patch: <T>(path: string, body?: unknown) =>
request<T>(path, {
method: "PATCH",
body: body ? JSON.stringify(body) : undefined,
}),
};
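As a side note on the `patch` helper above (a hypothetical standalone sketch, not code from this diff): the `body ? JSON.stringify(body) : undefined` guard means an omitted body produces no request payload at all, rather than the literal string `"undefined"`. `buildInit` and `Init` below are illustrative names.

```typescript
// Hypothetical sketch of the body-handling rule used by the patch helper.
interface Init {
  method: string;
  body?: string;
}

function buildInit(method: string, body?: unknown): Init {
  // Falsy bodies (undefined, null, 0, "") send no payload at all.
  return { method, body: body ? JSON.stringify(body) : undefined };
}
```

One caveat of the truthiness check is that legitimate falsy payloads such as `0` or `""` are also dropped; a stricter guard would compare against `undefined`.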
+10 -12
@@ -1,11 +1,11 @@
import { api } from "./client";
import type {
AgentEvent,
LiveSession,
LiveSessionDetail,
SessionSummary,
SessionDetail,
Checkpoint,
Message,
EntryPoint,
} from "./types";
@@ -64,12 +64,18 @@ export const sessionsApi = {
`/sessions/${sessionId}/entry-points`,
),
updateTriggerTask: (sessionId: string, triggerId: string, task: string) =>
api.patch<{ trigger_id: string; task: string }>(
`/sessions/${sessionId}/triggers/${triggerId}`,
{ task },
),
graphs: (sessionId: string) =>
api.get<{ graphs: string[] }>(`/sessions/${sessionId}/graphs`),
/** Get queen conversation history for a session (works for cold/post-restart sessions too). */
queenMessages: (sessionId: string) =>
api.get<{ messages: Message[]; session_id: string }>(`/sessions/${sessionId}/queen-messages`),
/** Get persisted eventbus log for a session (works for cold sessions — used for full UI replay). */
eventsHistory: (sessionId: string) =>
api.get<{ events: AgentEvent[]; session_id: string }>(`/sessions/${sessionId}/events/history`),
/** List all queen sessions on disk — live + cold (post-restart). */
history: () =>
@@ -105,12 +111,4 @@ export const sessionsApi = {
api.post<{ execution_id: string }>(
`/sessions/${sessionId}/worker-sessions/${wsId}/checkpoints/${checkpointId}/restore`,
),
messages: (sessionId: string, wsId: string, nodeId?: string) => {
const params = new URLSearchParams({ client_only: "true" });
if (nodeId) params.set("node_id", nodeId);
return api.get<{ messages: Message[] }>(
`/sessions/${sessionId}/worker-sessions/${wsId}/messages?${params}`,
);
},
};
+11 -1
@@ -31,6 +31,8 @@ export interface EntryPoint {
entry_node: string;
trigger_type: string;
trigger_config?: Record<string, unknown>;
/** Worker task string when this trigger fires autonomously. */
task?: string;
/** Seconds until the next timer fire (only present for timer entry points). */
next_fire_in?: number;
}
@@ -41,6 +43,7 @@ export interface DiscoverEntry {
description: string;
category: string;
session_count: number;
run_count: number;
node_count: number;
tool_count: number;
tags: string[];
@@ -311,6 +314,7 @@ export type EventTypeName =
| "tool_call_completed"
| "client_output_delta"
| "client_input_requested"
| "client_input_received"
| "node_internal_output"
| "node_input_blocked"
| "node_stalled"
@@ -328,7 +332,12 @@ export type EventTypeName =
| "queen_phase_changed"
| "subagent_report"
| "draft_graph_updated"
| "flowchart_map_updated";
| "flowchart_map_updated"
| "trigger_available"
| "trigger_activated"
| "trigger_deactivated"
| "trigger_fired"
| "trigger_removed";
export interface AgentEvent {
type: EventTypeName;
@@ -339,4 +348,5 @@ export interface AgentEvent {
timestamp: string;
correlation_id: string | null;
graph_id: string | null;
run_id?: string | null;
}
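The five new trigger lifecycle names can be narrowed with a small type guard. This is an illustrative sketch, assuming callers only have the raw `type` string; nothing like it exists in the diff itself.

```typescript
// Illustrative type guard for the trigger lifecycle events added above.
const TRIGGER_EVENTS = [
  "trigger_available",
  "trigger_activated",
  "trigger_deactivated",
  "trigger_fired",
  "trigger_removed",
] as const;

type TriggerEventName = (typeof TRIGGER_EVENTS)[number];

function isTriggerEvent(type: string): type is TriggerEventName {
  return (TRIGGER_EVENTS as readonly string[]).includes(type);
}
```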
+228 -84
@@ -1,4 +1,4 @@
- import { memo, useMemo, useState, useRef } from "react";
+ import { memo, useMemo, useState, useRef, useEffect, useCallback } from "react";
import { Play, Pause, Loader2, CheckCircle2 } from "lucide-react";
export type NodeStatus = "running" | "complete" | "pending" | "error" | "looping";
@@ -20,7 +20,7 @@ export interface GraphNode {
edgeLabels?: Record<string, string>;
}
- type RunState = "idle" | "deploying" | "running";
+ export type RunState = "idle" | "deploying" | "running";
interface AgentGraphProps {
nodes: GraphNode[];
@@ -35,7 +35,7 @@ interface AgentGraphProps {
}
// --- Extracted RunButton so hover state survives parent re-renders ---
- interface RunButtonProps {
+ export interface RunButtonProps {
runState: RunState;
disabled: boolean;
onRun: () => void;
@@ -43,7 +43,7 @@ interface RunButtonProps {
btnRef: React.Ref<HTMLButtonElement>;
}
- const RunButton = memo(function RunButton({ runState, disabled, onRun, onPause, btnRef }: RunButtonProps) {
+ export const RunButton = memo(function RunButton({ runState, disabled, onRun, onPause, btnRef }: RunButtonProps) {
const [hovered, setHovered] = useState(false);
const showPause = runState === "running" && hovered;
@@ -89,46 +89,94 @@ const MARGIN_RIGHT = 50; // space for back-edge arcs
const SVG_BASE_W = 320;
const GAP_X = 12;
// Unified amber/gold palette
const statusColors: Record<NodeStatus, { dot: string; bg: string; border: string; glow: string }> = {
running: {
dot: "hsl(45,95%,58%)",
bg: "hsl(45,95%,58%,0.08)",
border: "hsl(45,95%,58%,0.5)",
glow: "hsl(45,95%,58%,0.15)",
},
looping: {
dot: "hsl(38,90%,55%)",
bg: "hsl(38,90%,55%,0.08)",
border: "hsl(38,90%,55%,0.5)",
glow: "hsl(38,90%,55%,0.15)",
},
complete: {
dot: "hsl(43,70%,45%)",
bg: "hsl(43,70%,45%,0.05)",
border: "hsl(43,70%,45%,0.25)",
glow: "none",
},
pending: {
dot: "hsl(35,15%,28%)",
bg: "hsl(35,10%,12%)",
border: "hsl(35,10%,20%)",
glow: "none",
},
error: {
dot: "hsl(0,65%,55%)",
bg: "hsl(0,65%,55%,0.06)",
border: "hsl(0,65%,55%,0.3)",
glow: "hsl(0,65%,55%,0.1)",
},
};
// Read a CSS custom property value (space-separated HSL components)
function cssVar(name: string): string {
return getComputedStyle(document.documentElement).getPropertyValue(name).trim();
}
// Trigger node palette — cool blue-gray, visually distinct from amber execution nodes
const triggerColors = {
bg: "hsl(210,25%,14%)",
border: "hsl(210,30%,30%)",
text: "hsl(210,30%,65%)",
icon: "hsl(210,40%,55%)",
type StatusColorSet = Record<NodeStatus, { dot: string; bg: string; border: string; glow: string }>;
type TriggerColorSet = { bg: string; border: string; text: string; icon: string };
function buildStatusColors(): StatusColorSet {
const running = cssVar("--node-running") || "45 95% 58%";
const looping = cssVar("--node-looping") || "38 90% 55%";
const complete = cssVar("--node-complete") || "43 70% 45%";
const pending = cssVar("--node-pending") || "35 15% 28%";
const pendingBg = cssVar("--node-pending-bg") || "35 10% 12%";
const pendingBorder = cssVar("--node-pending-border") || "35 10% 20%";
const error = cssVar("--node-error") || "0 65% 55%";
return {
running: {
dot: `hsl(${running})`,
bg: `hsl(${running} / 0.08)`,
border: `hsl(${running} / 0.5)`,
glow: `hsl(${running} / 0.15)`,
},
looping: {
dot: `hsl(${looping})`,
bg: `hsl(${looping} / 0.08)`,
border: `hsl(${looping} / 0.5)`,
glow: `hsl(${looping} / 0.15)`,
},
complete: {
dot: `hsl(${complete})`,
bg: `hsl(${complete} / 0.05)`,
border: `hsl(${complete} / 0.25)`,
glow: "none",
},
pending: {
dot: `hsl(${pending})`,
bg: `hsl(${pendingBg})`,
border: `hsl(${pendingBorder})`,
glow: "none",
},
error: {
dot: `hsl(${error})`,
bg: `hsl(${error} / 0.06)`,
border: `hsl(${error} / 0.3)`,
glow: `hsl(${error} / 0.1)`,
},
};
}
function buildTriggerColors(): TriggerColorSet {
const bg = cssVar("--trigger-bg") || "210 25% 14%";
const border = cssVar("--trigger-border") || "210 30% 30%";
const text = cssVar("--trigger-text") || "210 30% 65%";
const icon = cssVar("--trigger-icon") || "210 40% 55%";
return {
bg: `hsl(${bg})`,
border: `hsl(${border})`,
text: `hsl(${text})`,
icon: `hsl(${icon})`,
};
}
/** Hook that reads node/trigger colors from CSS vars and updates on theme changes. */
function useThemeColors() {
const [statusColors, setStatusColors] = useState<StatusColorSet>(buildStatusColors);
const [triggerColors, setTriggerColors] = useState<TriggerColorSet>(buildTriggerColors);
useEffect(() => {
const rebuild = () => {
setStatusColors(buildStatusColors());
setTriggerColors(buildTriggerColors());
};
const obs = new MutationObserver(rebuild);
obs.observe(document.documentElement, { attributes: true, attributeFilter: ["class", "style"] });
return () => obs.disconnect();
}, []);
return { statusColors, triggerColors };
}
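The fallback pattern inside `buildStatusColors` (read a CSS variable, fall back to a hard-coded HSL triple, then wrap in `hsl()`) can be factored as a pure helper. A sketch with `hslFromVar` as an invented name, not present in the diff:

```typescript
// Sketch of the fallback-and-wrap pattern used by buildStatusColors.
// The CSS variable value is a space-separated HSL triple; when the
// variable is unset (empty string), a hard-coded default is used.
function hslFromVar(raw: string, fallback: string, alpha?: number): string {
  const triple = raw.trim() || fallback;
  return alpha != null ? `hsl(${triple} / ${alpha})` : `hsl(${triple})`;
}
```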
// Active trigger — brighter, more saturated blue
const activeTriggerColors = {
bg: "hsl(210,30%,18%)",
border: "hsl(210,50%,50%)",
text: "hsl(210,40%,75%)",
icon: "hsl(210,60%,65%)",
};
const triggerIcons: Record<string, string> = {
@@ -146,10 +194,96 @@ function truncateLabel(label: string, availablePx: number, fontSize: number): st
return label.slice(0, Math.max(maxChars - 1, 1)) + "\u2026";
}
// ─── Pan & Zoom wrapper ───
function PanZoomSvg({ svgW, svgH, className, children }: { svgW: number; svgH: number; className?: string; children: React.ReactNode }) {
const [zoom, setZoom] = useState(1);
const [pan, setPan] = useState({ x: 0, y: 0 });
const [dragging, setDragging] = useState(false);
const dragStart = useRef({ x: 0, y: 0, panX: 0, panY: 0 });
const MIN_ZOOM = 0.4;
const MAX_ZOOM = 3;
const handleWheel = useCallback((e: React.WheelEvent) => {
e.preventDefault();
const delta = e.deltaY > 0 ? 0.9 : 1.1;
setZoom(z => Math.min(MAX_ZOOM, Math.max(MIN_ZOOM, z * delta)));
}, []);
const handleMouseDown = useCallback((e: React.MouseEvent) => {
if (e.button !== 0) return;
setDragging(true);
dragStart.current = { x: e.clientX, y: e.clientY, panX: pan.x, panY: pan.y };
}, [pan]);
const handleMouseMove = useCallback((e: React.MouseEvent) => {
if (!dragging) return;
setPan({
x: dragStart.current.panX + (e.clientX - dragStart.current.x),
y: dragStart.current.panY + (e.clientY - dragStart.current.y),
});
}, [dragging]);
const handleMouseUp = useCallback(() => setDragging(false), []);
const resetView = useCallback(() => {
setZoom(1);
setPan({ x: 0, y: 0 });
}, []);
return (
<div className="flex-1 relative overflow-hidden px-1 pb-5">
<div
onWheel={handleWheel}
onMouseDown={handleMouseDown}
onMouseMove={handleMouseMove}
onMouseUp={handleMouseUp}
onMouseLeave={handleMouseUp}
className="w-full h-full"
style={{ cursor: dragging ? "grabbing" : "grab" }}
>
<svg
width="100%"
viewBox={`0 0 ${svgW} ${svgH}`}
preserveAspectRatio="xMidYMin meet"
className={`select-none ${className || ""}`}
style={{
fontFamily: "'Inter', system-ui, sans-serif",
transform: `translate(${pan.x}px, ${pan.y}px) scale(${zoom})`,
transformOrigin: "center top",
}}
>
{children}
</svg>
</div>
{/* Zoom controls */}
<div className="absolute bottom-7 right-3 flex items-center gap-1 bg-card/80 backdrop-blur-sm border border-border/40 rounded-lg p-0.5 shadow-sm">
<button
onClick={() => setZoom(z => Math.min(MAX_ZOOM, z * 1.2))}
className="w-6 h-6 flex items-center justify-center rounded text-muted-foreground hover:text-foreground hover:bg-muted/60 transition-colors text-xs font-bold"
aria-label="Zoom in"
>+</button>
<button
onClick={resetView}
className="px-1.5 h-6 flex items-center justify-center rounded text-[10px] font-mono text-muted-foreground hover:text-foreground hover:bg-muted/60 transition-colors"
aria-label="Reset zoom"
>{Math.round(zoom * 100)}%</button>
<button
onClick={() => setZoom(z => Math.max(MIN_ZOOM, z * 0.8))}
className="w-6 h-6 flex items-center justify-center rounded text-muted-foreground hover:text-foreground hover:bg-muted/60 transition-colors text-xs font-bold"
aria-label="Zoom out"
>{"\u2212"}</button>
</div>
</div>
);
}
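The wheel handler in `PanZoomSvg` reduces to a small pure function: multiplicative zoom steps clamped to a fixed range. A standalone sketch (`nextZoom` is an illustrative name):

```typescript
// Pure sketch of PanZoomSvg's wheel-zoom math: multiplicative steps
// clamped to [0.4, 3].
const MIN_ZOOM = 0.4;
const MAX_ZOOM = 3;

function nextZoom(current: number, deltaY: number): number {
  const factor = deltaY > 0 ? 0.9 : 1.1; // wheel down shrinks, wheel up grows
  return Math.min(MAX_ZOOM, Math.max(MIN_ZOOM, current * factor));
}
```

Multiplicative steps keep zooming symmetric in feel at any scale, unlike fixed additive increments.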
export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, onPause, version, runState: externalRunState, building, queenPhase }: AgentGraphProps) {
const [localRunState, setLocalRunState] = useState<RunState>("idle");
const runState = externalRunState ?? localRunState;
const runBtnRef = useRef<HTMLButtonElement>(null);
const { statusColors, triggerColors } = useThemeColors();
const handleRun = () => {
if (runState !== "idle") return;
@@ -344,18 +478,21 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
let d: string;
if (skipsLayers && hasCollision(fromLayer, toLayer, from.x, to.x)) {
- // Route around intermediate nodes: curve to the left
+ // Route around intermediate nodes: orthogonal detour to the left
const detourX = Math.min(from.x, to.x) - nodeW * 0.4;
- d = `M ${startX} ${y1} C ${startX} ${y1 + 20}, ${detourX} ${y1 + 20}, ${detourX} ${midY} S ${toCenterX} ${y2 - 20} ${toCenterX} ${y2}`;
+ d = `M ${startX} ${y1} L ${startX} ${midY} L ${detourX} ${midY} L ${detourX} ${y2 - 10} L ${toCenterX} ${y2 - 10} L ${toCenterX} ${y2}`;
} else if (Math.abs(startX - toCenterX) < 2) {
// Straight vertical line when aligned
d = `M ${startX} ${y1} L ${toCenterX} ${y2}`;
} else {
- // Standard bezier: from source bottom to target top
- d = `M ${startX} ${y1} C ${startX} ${midY}, ${toCenterX} ${midY}, ${toCenterX} ${y2}`;
+ // Orthogonal: down, across, down
+ d = `M ${startX} ${y1} L ${startX} ${midY} L ${toCenterX} ${midY} L ${toCenterX} ${y2}`;
}
const fromNode = nodes[edge.fromIdx];
const isActive = fromNode.status === "complete" || fromNode.status === "running" || fromNode.status === "looping";
const strokeColor = isActive ? "hsl(43,70%,45%,0.35)" : "hsl(35,10%,20%)";
const arrowColor = isActive ? "hsl(43,70%,45%,0.5)" : "hsl(35,10%,22%)";
const strokeColor = isActive ? statusColors.complete.border : statusColors.pending.border;
const arrowColor = isActive ? statusColors.complete.dot : statusColors.pending.border;
return (
<g key={`fwd-${i}`}>
@@ -368,7 +505,7 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
<text
x={(startX + toCenterX) / 2 + 8}
y={midY - 2}
fill="hsl(35,15%,40%)"
fill={statusColors.pending.dot}
fontSize={9}
fontStyle="italic"
>
@@ -394,9 +531,9 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
const fromNode = nodes[edge.fromIdx];
const isActive = fromNode.status === "complete" || fromNode.status === "running" || fromNode.status === "looping";
const color = isActive ? "hsl(38,80%,50%,0.3)" : "hsl(35,10%,20%)";
const color = isActive ? statusColors.looping.border : statusColors.pending.border;
- // Bezier curve with rounded corners
+ // Bezier curve with rounded corners (kept as curves for back edges)
const path = `M ${startX} ${startY} C ${startX + r} ${startY}, ${curveX} ${startY}, ${curveX} ${startY - r} L ${curveX} ${endY + r} C ${curveX} ${endY}, ${endX + r} ${endY}, ${endX + 6} ${endY}`;
return (
@@ -404,7 +541,7 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
<path d={path} fill="none" stroke={color} strokeWidth={1.5} strokeDasharray="4 3" />
<polygon
points={`${endX + 6},${endY - 3} ${endX + 6},${endY + 3} ${endX},${endY}`}
fill={isActive ? "hsl(38,80%,50%,0.45)" : "hsl(35,10%,22%)"}
fill={isActive ? statusColors.looping.dot : statusColors.pending.border}
/>
</g>
);
@@ -417,10 +554,12 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
const triggerAvailW = nodeW - 38;
const triggerDisplayLabel = truncateLabel(node.label, triggerAvailW, triggerFontSize);
const nextFireIn = node.triggerConfig?.next_fire_in as number | undefined;
const isActive = node.status === "running" || node.status === "complete";
const colors = isActive ? activeTriggerColors : triggerColors;
// Format countdown for display below node
let countdownLabel: string | null = null;
- if (nextFireIn != null && nextFireIn > 0) {
+ if (isActive && nextFireIn != null && nextFireIn > 0) {
const h = Math.floor(nextFireIn / 3600);
const m = Math.floor((nextFireIn % 3600) / 60);
const s = Math.floor(nextFireIn % 60);
@@ -429,24 +568,28 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
: `next in ${m}m ${String(s).padStart(2, "0")}s`;
}
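The countdown logic above reduces to a pure formatter. A sketch: the `h > 0` branch is cut off by the hunk boundary in this diff, so its exact wording here is an assumption.

```typescript
// Pure sketch of the countdown label computed above.
function formatCountdown(nextFireIn: number): string {
  const h = Math.floor(nextFireIn / 3600);
  const m = Math.floor((nextFireIn % 3600) / 60);
  const s = Math.floor(nextFireIn % 60);
  return h > 0
    ? `next in ${h}h ${m}m` // assumed wording for the hours case
    : `next in ${m}m ${String(s).padStart(2, "0")}s`;
}
```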
// Status label below countdown
const statusLabel = isActive ? "active" : "inactive";
const statusColor = isActive ? "hsl(140,40%,50%)" : "hsl(210,20%,40%)";
return (
<g key={node.id} onClick={() => onNodeClick?.(node)} style={{ cursor: onNodeClick ? "pointer" : "default" }}>
<title>{node.label}</title>
- {/* Pill-shaped background with dashed border */}
+ {/* Pill-shaped background — solid border when active, dashed when inactive */}
<rect
x={pos.x} y={pos.y}
width={nodeW} height={NODE_H}
rx={NODE_H / 2}
- fill={triggerColors.bg}
- stroke={triggerColors.border}
- strokeWidth={1}
- strokeDasharray="4 2"
+ fill={colors.bg}
+ stroke={colors.border}
+ strokeWidth={isActive ? 1.5 : 1}
+ strokeDasharray={isActive ? undefined : "4 2"}
/>
{/* Trigger type icon */}
<text
x={pos.x + 18} y={pos.y + NODE_H / 2}
- fill={triggerColors.icon} fontSize={13}
+ fill={colors.icon} fontSize={13}
textAnchor="middle" dominantBaseline="middle"
>
{icon}
@@ -455,7 +598,7 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
{/* Label */}
<text
x={pos.x + 32} y={pos.y + NODE_H / 2}
- fill={triggerColors.text}
+ fill={colors.text}
fontSize={triggerFontSize}
fontWeight={500}
dominantBaseline="middle"
@@ -468,12 +611,21 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
{countdownLabel && (
<text
x={pos.x + nodeW / 2} y={pos.y + NODE_H + 13}
fill="hsl(210,30%,50%)" fontSize={9.5}
fill={triggerColors.text} fontSize={9.5}
textAnchor="middle" fontStyle="italic" opacity={0.7}
>
{countdownLabel}
</text>
)}
{/* Status label */}
<text
x={pos.x + nodeW / 2} y={pos.y + NODE_H + (countdownLabel ? 25 : 13)}
fill={statusColor} fontSize={9}
textAnchor="middle" opacity={0.8}
>
{statusLabel}
</text>
</g>
);
};
@@ -543,7 +695,7 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
{/* Label -- truncated with ellipsis for narrow nodes */}
<text
x={pos.x + 32} y={pos.y + NODE_H / 2}
fill={isActive ? "hsl(45,90%,85%)" : isDone ? "hsl(40,20%,75%)" : "hsl(35,10%,45%)"}
fill={isActive ? statusColors.running.dot : isDone ? statusColors.complete.dot : statusColors.pending.dot}
fontSize={fontSize}
fontWeight={isActive ? 600 : isDone ? 500 : 400}
dominantBaseline="middle"
@@ -556,7 +708,7 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
{node.statusLabel && isActive && (
<text
x={pos.x + nodeW + 10} y={pos.y + NODE_H / 2}
fill="hsl(45,80%,60%)" fontSize={10.5} fontStyle="italic"
fill={statusColors.running.dot} fontSize={10.5} fontStyle="italic"
dominantBaseline="middle" opacity={0.8}
>
{node.statusLabel}
@@ -600,27 +752,19 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
</div>
{/* Graph */}
<div className="flex-1 overflow-y-auto overflow-x-hidden px-3 pb-5 relative">
<svg
width={svgWidth}
height={svgHeight}
viewBox={`0 0 ${svgWidth} ${svgHeight}`}
className={`select-none${building ? " opacity-30" : ""}`}
style={{ fontFamily: "'Inter', system-ui, sans-serif" }}
>
{forwardEdges.map((e, i) => renderForwardEdge(e, i))}
{backEdges.map((e, i) => renderBackEdge(e, i))}
{nodes.map((n, i) => renderNode(n, i))}
</svg>
{building && (
<div className="absolute inset-0 flex items-center justify-center">
<div className="flex flex-col items-center gap-3">
<Loader2 className="w-6 h-6 animate-spin text-primary/60" />
<p className="text-xs text-muted-foreground/80">Rebuilding agent...</p>
</div>
<PanZoomSvg svgW={svgWidth} svgH={svgHeight} className={building ? "opacity-30" : ""}>
{forwardEdges.map((e, i) => renderForwardEdge(e, i))}
{backEdges.map((e, i) => renderBackEdge(e, i))}
{nodes.map((n, i) => renderNode(n, i))}
</PanZoomSvg>
{building && (
<div className="absolute inset-0 flex items-center justify-center">
<div className="flex flex-col items-center gap-3">
<Loader2 className="w-6 h-6 animate-spin text-primary/60" />
<p className="text-xs text-muted-foreground/80">Rebuilding agent...</p>
</div>
)}
</div>
</div>
)}
</div>
);
}
+23 -9
@@ -10,12 +10,14 @@ export interface ChatMessage {
agentColor: string;
content: string;
timestamp: string;
type?: "system" | "agent" | "user" | "tool_status" | "worker_input_request";
type?: "system" | "agent" | "user" | "tool_status" | "worker_input_request" | "run_divider";
role?: "queen" | "worker";
/** Which worker thread this message belongs to (worker agent name) */
thread?: string;
/** Epoch ms when this message was first created — used for ordering queen/worker interleaving */
createdAt?: number;
/** Queen phase active when this message was created */
phase?: "planning" | "building" | "staging" | "running";
}
interface ChatPanelProps {
@@ -154,6 +156,18 @@ const MessageBubble = memo(function MessageBubble({ msg, queenPhase }: { msg: Ch
const isQueen = msg.role === "queen";
const color = getColor(msg.agent, msg.role);
if (msg.type === "run_divider") {
return (
<div className="flex items-center gap-3 py-2 my-1">
<div className="flex-1 h-px bg-border/60" />
<span className="text-[10px] text-muted-foreground font-medium uppercase tracking-wider">
{msg.content}
</span>
<div className="flex-1 h-px bg-border/60" />
</div>
);
}
if (msg.type === "system") {
return (
<div className="flex justify-center py-1">
@@ -205,13 +219,13 @@ const MessageBubble = memo(function MessageBubble({ msg, queenPhase }: { msg: Ch
}`}
>
{isQueen
? queenPhase === "running"
? "running phase"
: queenPhase === "staging"
? "staging phase"
: queenPhase === "planning"
? "planning phase"
: "building phase"
? ((msg.phase ?? queenPhase) === "running"
? "running"
: (msg.phase ?? queenPhase) === "staging"
? "staging"
: (msg.phase ?? queenPhase) === "planning"
? "planning"
: "building")
: "Worker"}
</span>
</div>
@@ -225,7 +239,7 @@ const MessageBubble = memo(function MessageBubble({ msg, queenPhase }: { msg: Ch
</div>
</div>
);
- }, (prev, next) => prev.msg.id === next.msg.id && prev.msg.content === next.msg.content && prev.queenPhase === next.queenPhase);
+ }, (prev, next) => prev.msg.id === next.msg.id && prev.msg.content === next.msg.content && prev.msg.phase === next.msg.phase && prev.queenPhase === next.queenPhase);
export default function ChatPanel({ messages, onSend, isWaiting, isWorkerWaiting, isBusy, activeThread, disabled, onCancel, pendingQuestion, pendingOptions, pendingQuestions, onQuestionSubmit, onMultiQuestionSubmit, onQuestionDismiss, queenPhase }: ChatPanelProps) {
const [input, setInput] = useState("");
@@ -126,8 +126,13 @@ export default function CredentialsModal({
// No real path — no credentials to show
setRows([]);
}
- } catch {
- // Backend unavailable — fall back to legacy props or empty
+ } catch (err) {
+ // Surface the error so the modal shows a meaningful message
+ const message =
+ err instanceof Error ? err.message : "Failed to check credentials";
+ setError(message);
+ // Fall back to legacy props or empty rows
if (legacyCredentials) {
setRows(legacyCredentials.map(c => ({
...c,
@@ -289,11 +294,18 @@ export default function CredentialsModal({
{/* Status banner */}
{!loading && (
<div className={`mx-5 mt-4 px-3 py-2.5 rounded-lg border text-xs font-medium flex items-center gap-2 ${
- allRequiredMet
- ? "bg-emerald-500/10 border-emerald-500/20 text-emerald-600"
- : "bg-destructive/5 border-destructive/20 text-destructive"
+ error && rows.length === 0
+ ? "bg-destructive/5 border-destructive/20 text-destructive"
+ : allRequiredMet
+ ? "bg-emerald-500/10 border-emerald-500/20 text-emerald-600"
+ : "bg-destructive/5 border-destructive/20 text-destructive"
}`}>
- {allRequiredMet ? (
+ {error && rows.length === 0 ? (
+ <>
+ <AlertCircle className="w-3.5 h-3.5 flex-shrink-0" />
+ <span className="break-words">Failed to check credentials: {error}</span>
+ </>
+ ) : allRequiredMet ? (
<>
<Shield className="w-3.5 h-3.5" />
{rows.length === 0
+476 -128
@@ -1,11 +1,79 @@
- import { useEffect, useMemo, useRef, useState } from "react";
+ import { useEffect, useMemo, useRef, useState, useCallback } from "react";
import { Loader2 } from "lucide-react";
import type { DraftGraph as DraftGraphData, DraftNode } from "@/api/types";
- import type { GraphNode } from "./AgentGraph";
+ import { RunButton } from "./AgentGraph";
+ import type { GraphNode, RunState } from "./AgentGraph";
// Read a CSS custom property value (space-separated HSL components)
function cssVar(name: string): string {
return getComputedStyle(document.documentElement).getPropertyValue(name).trim();
}
interface DraftChromeColors {
edge: string;
edgeArrow: string;
edgeLabel: string;
backEdge: string;
groupFill: string;
groupStroke: string;
chromeText: string;
chromeTextDim: string;
nodeText: string;
nodeTextHover: string;
statusRunning: string;
statusComplete: string;
statusError: string;
}
function buildDraftChromeColors(): DraftChromeColors {
const edge = cssVar("--draft-edge") || "220 10% 30%";
const edgeArrow = cssVar("--draft-edge-arrow") || "220 10% 35%";
const edgeLabel = cssVar("--draft-edge-label") || "220 10% 45%";
const backEdge = cssVar("--draft-back-edge") || "220 10% 25%";
const groupFill = cssVar("--draft-group-fill") || "220 15% 18%";
const groupStroke = cssVar("--draft-group-stroke") || "220 10% 40%";
const chromeText = cssVar("--draft-chrome-text") || "220 10% 50%";
const chromeTextDim = cssVar("--draft-chrome-text-dim") || "220 10% 55%";
const nodeText = cssVar("--draft-node-text") || "0 0% 78%";
const nodeTextHover = cssVar("--draft-node-text-hover") || "0 0% 92%";
const running = cssVar("--node-running") || "45 95% 58%";
const complete = cssVar("--node-complete") || "43 70% 45%";
const error = cssVar("--node-error") || "0 65% 55%";
return {
edge: `hsl(${edge})`,
edgeArrow: `hsl(${edgeArrow})`,
edgeLabel: `hsl(${edgeLabel})`,
backEdge: `hsl(${backEdge})`,
groupFill: `hsl(${groupFill})`,
groupStroke: `hsl(${groupStroke})`,
chromeText: `hsl(${chromeText})`,
chromeTextDim: `hsl(${chromeTextDim})`,
nodeText: `hsl(${nodeText})`,
nodeTextHover: `hsl(${nodeTextHover})`,
statusRunning: `hsl(${running})`,
statusComplete: `hsl(${complete})`,
statusError: `hsl(${error})`,
};
}
function useDraftChromeColors() {
const [colors, setColors] = useState<DraftChromeColors>(buildDraftChromeColors);
useEffect(() => {
const rebuild = () => setColors(buildDraftChromeColors());
const obs = new MutationObserver(rebuild);
obs.observe(document.documentElement, { attributes: true, attributeFilter: ["class", "style"] });
return () => obs.disconnect();
}, []);
return colors;
}
type DraftNodeStatus = "pending" | "running" | "complete" | "error";
interface DraftGraphProps {
- draft: DraftGraphData;
+ draft: DraftGraphData | null;
onNodeClick?: (node: DraftNode) => void;
/** Runtime node ID → list of original draft node IDs (post-dissolution mapping). */
flowchartMap?: Record<string, string[]>;
@@ -13,6 +81,16 @@ interface DraftGraphProps {
runtimeNodes?: GraphNode[];
/** Called when a draft node is clicked in overlay mode — receives the runtime node ID. */
onRuntimeNodeClick?: (runtimeNodeId: string) => void;
/** True while the queen is building the agent from the draft. */
building?: boolean;
/** True while the queen is designing the draft (no draft yet). Shows a spinner. */
loading?: boolean;
/** Called when the user clicks Run. */
onRun?: () => void;
/** Called when the user clicks Pause. */
onPause?: () => void;
/** Current run state — drives the RunButton appearance. */
runState?: RunState;
}
// Layout constants — tuned for a ~500px panel (484px after px-2 padding)
@@ -21,6 +99,11 @@ const GAP_Y = 48;
const TOP_Y = 28;
const MARGIN_X = 16;
const GAP_X = 16;
const GROUP_GAP_COLS = 1; // extra column spacing between different groups
function formatNodeId(id: string): string {
return id.split("-").map(w => w.charAt(0).toUpperCase() + w.slice(1)).join(" ");
}
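`formatNodeId` above simply title-cases a kebab-case id. Shown standalone with a usage example (the function body is copied verbatim from the diff):

```typescript
// Copy of formatNodeId above, shown standalone.
function formatNodeId(id: string): string {
  return id.split("-").map(w => w.charAt(0).toUpperCase() + w.slice(1)).join(" ");
}

// formatNodeId("fetch-data") → "Fetch Data"
```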
function truncateLabel(label: string, availablePx: number, fontSize: number): string {
const avgCharW = fontSize * 0.58;
@@ -29,6 +112,7 @@ function truncateLabel(label: string, availablePx: number, fontSize: number): st
return label.slice(0, Math.max(maxChars - 1, 1)) + "\u2026";
}
- /** Return the bounding-rect corner radius for a given flowchart shape. */
/**
* Render an ISO 5807 flowchart shape as an SVG element.
*/
@@ -256,7 +340,6 @@ function FlowchartShape({
function Tooltip({ node, style }: { node: DraftNode; style: React.CSSProperties }) {
const lines: string[] = [];
if (node.description) lines.push(node.description);
- if (node.tools.length > 0) lines.push(`Tools: ${node.tools.join(", ")}`);
if (node.success_criteria) lines.push(`Criteria: ${node.success_criteria}`);
if (lines.length === 0) return null;
@@ -274,10 +357,64 @@ function Tooltip({ node, style }: { node: DraftNode; style: React.CSSProperties
);
}
- export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNodes, onRuntimeNodeClick }: DraftGraphProps) {
+ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNodes, onRuntimeNodeClick, building, loading, onRun, onPause, runState = "idle" }: DraftGraphProps) {
const [hoveredNode, setHoveredNode] = useState<string | null>(null);
const [mousePos, setMousePos] = useState<{ x: number; y: number } | null>(null);
const containerRef = useRef<HTMLDivElement>(null);
const runBtnRef = useRef<HTMLButtonElement>(null);
const [containerW, setContainerW] = useState(484);
const chrome = useDraftChromeColors();
// Shift-to-pin tooltip
const shiftHeld = useRef(false);
useEffect(() => {
const onKeyDown = (e: KeyboardEvent) => { if (e.key === "Shift") shiftHeld.current = true; };
const onKeyUp = (e: KeyboardEvent) => {
if (e.key === "Shift") {
shiftHeld.current = false;
setHoveredNode(null);
setMousePos(null);
}
};
window.addEventListener("keydown", onKeyDown);
window.addEventListener("keyup", onKeyUp);
return () => { window.removeEventListener("keydown", onKeyDown); window.removeEventListener("keyup", onKeyUp); };
}, []);
// Pan & Zoom state
const [zoom, setZoom] = useState(1);
const [pan, setPan] = useState({ x: 0, y: 0 });
const [dragging, setDragging] = useState(false);
const dragStart = useRef({ x: 0, y: 0, panX: 0, panY: 0 });
const MIN_ZOOM = 0.4;
const MAX_ZOOM = 3;
const handleWheel = useCallback((e: React.WheelEvent) => {
e.preventDefault();
const delta = e.deltaY > 0 ? 0.9 : 1.1;
setZoom(z => Math.min(MAX_ZOOM, Math.max(MIN_ZOOM, z * delta)));
}, []);
const handleMouseDown = useCallback((e: React.MouseEvent) => {
if (e.button !== 0) return;
setDragging(true);
dragStart.current = { x: e.clientX, y: e.clientY, panX: pan.x, panY: pan.y };
}, [pan]);
const handleMouseMove = useCallback((e: React.MouseEvent) => {
if (!dragging) return;
setPan({
x: dragStart.current.panX + (e.clientX - dragStart.current.x),
y: dragStart.current.panY + (e.clientY - dragStart.current.y),
});
}, [dragging]);
const handleMouseUp = useCallback(() => setDragging(false), []);
const resetView = useCallback(() => {
setZoom(1);
setPan({ x: 0, y: 0 });
}, []);
// Measure actual container width so layout fills it exactly
useEffect(() => {
@@ -328,7 +465,8 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
const hasStatusOverlay = Object.keys(nodeStatuses).length > 0;
- const { nodes, edges } = draft;
+ const nodes = draft?.nodes ?? [];
+ const edges = draft?.edges ?? [];
const idxMap = useMemo(
() => Object.fromEntries(nodes.map((n, i) => [n.id, i])),
@@ -402,11 +540,12 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
maxCols = Math.max(maxCols, group.length);
});
- // Compute node width
- const backEdgeMargin = backEdges.length > 0 ? 30 + backEdges.length * 14 : 8;
- const totalMargin = MARGIN_X * 2 + backEdgeMargin;
+ // Compute node width — keep back-edge overflow out of node sizing so nodes
+ // get full width. The viewBox is expanded later to fit back-edge curves.
+ const totalMargin = MARGIN_X * 2 + 8;
const availW = containerW - totalMargin;
const nodeW = Math.min(360, Math.floor((availW - (maxCols - 1) * GAP_X) / maxCols));
const backEdgeOverflow = backEdges.length > 0 ? 20 + (backEdges.length - 1) * 14 + 14 : 0;
// Parent-aware column placement using fractional positions.
// Instead of snapping to a fixed grid, nodes inherit positions from parents
@@ -414,6 +553,17 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
const colPos = new Array(nodes.length).fill(0); // fractional column positions
const maxLayer = Math.max(...layers);
// Map each draft node index to its runtime group ID for group-aware spacing
const nodeGroup = new Map<number, string>();
if (flowchartMap) {
for (const [runtimeId, draftIds] of Object.entries(flowchartMap)) {
for (const did of draftIds) {
const idx = idxMap[did];
if (idx !== undefined) nodeGroup.set(idx, runtimeId);
}
}
}
// Process layers top-down
for (let layer = 0; layer <= maxLayer; layer++) {
const group = layerGroups.get(layer) || [];
@@ -460,7 +610,9 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
ideals.sort((a, b) => a.pos - b.pos);
// Ensure minimum spacing of 1 column between nodes in the same layer
// (wider gap between nodes from different groups to prevent box overlap)
const assigned: number[] = [];
const assignedIdxs: number[] = [];
for (const item of ideals) {
let pos = item.pos;
// Clamp to valid range
@@ -468,9 +620,17 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
// Push right if overlapping previous
if (assigned.length > 0) {
const prev = assigned[assigned.length - 1];
- if (pos < prev + 1) pos = prev + 1;
const prevIdx = assignedIdxs[assignedIdxs.length - 1];
let minGap = 1;
const curGroup = nodeGroup.get(item.idx);
const prevGroup = nodeGroup.get(prevIdx);
if (curGroup !== prevGroup && (curGroup || prevGroup)) {
minGap = 1 + GROUP_GAP_COLS;
}
if (pos < prev + minGap) pos = prev + minGap;
}
assigned.push(pos);
assignedIdxs.push(item.idx);
colPos[item.idx] = pos;
}
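The clamp-then-push-right placement in this loop can be sketched as a standalone helper (function name and signature are hypothetical, not from this component):

```typescript
// Push-right spacing: given desired positions in ascending order and a
// per-item minimum gap to its predecessor, shift each position right just
// far enough to satisfy the gap. Mirrors the same-layer column assignment,
// where nodes from different runtime groups get a wider minimum gap.
function pushRight(ideals: number[], minGap: (i: number) => number): number[] {
  const out: number[] = [];
  for (let i = 0; i < ideals.length; i++) {
    let pos = ideals[i];
    if (out.length > 0) {
      const prev = out[out.length - 1];
      const gap = minGap(i);
      if (pos < prev + gap) pos = prev + gap;
    }
    out.push(pos);
  }
  return out;
}
```

Because items are processed in sorted order, a shift only ever propagates rightward, so one pass suffices.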
@@ -494,46 +654,152 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
const nodeXPositions = colPos.map((c: number) => firstColX + (c - usedMin) * colSpacing);
- return { layers, nodeW, firstColX, nodeXPositions };
- }, [nodes, forwardEdges, backEdges.length, containerW]);
const maxContentRight = Math.max(containerW, ...nodeXPositions.map(x => x + nodeW));
- if (nodes.length === 0) {
- return (
- <div className="flex flex-col h-full">
- <div className="px-4 pt-4 pb-2">
- <p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">
- Draft
- </p>
- </div>
- <div className="flex-1 flex items-center justify-center px-4">
- <p className="text-xs text-muted-foreground/60 text-center italic">
- No draft graph yet.
- <br />
- Describe your workflow to get started.
- </p>
- </div>
- </div>
- );
- }
return { layers, nodeW, firstColX, nodeXPositions, backEdgeOverflow, maxContentRight };
}, [nodes, forwardEdges, backEdges.length, containerW, flowchartMap, idxMap]);
- const { layers, nodeW, nodeXPositions } = layout;
const { layers, nodeW, nodeXPositions, backEdgeOverflow, maxContentRight } = layout;
const maxLayer = nodes.length > 0 ? Math.max(...layers) : 0;
// Group-box collision resolution: compute per-node Y offsets so that group
// bounding boxes (dashed rectangles) never overlap. Handles both same-layer
// groups (sub-row splitting) and adjacent-layer groups (inter-box gap).
const { nodeYOffset, totalExtraY, groupBoxMaxX } = useMemo(() => {
const offsets = new Array(nodes.length).fill(0);
if (!flowchartMap || !Object.keys(flowchartMap).length) {
return { nodeYOffset: offsets, totalExtraY: 0, groupBoxMaxX: 0 };
}
const PAD = 7;
const LABEL_H = 14;
const MIN_GROUP_GAP = 16;
const SUB_ROW_GAP = NODE_H + 24; // spacing for same-layer sub-rows
// Build node index → group ID
const nodeToGroup = new Map<number, string>();
for (const [runtimeId, draftIds] of Object.entries(flowchartMap)) {
for (const did of draftIds) {
const idx = idxMap[did];
if (idx !== undefined) nodeToGroup.set(idx, runtimeId);
}
}
// Step 1: Same-layer sub-row splitting — when multiple groups share a layer,
// assign per-node offsets to separate them into sub-rows.
const layerGroupMap = new Map<number, Map<string, number[]>>();
nodes.forEach((_, i) => {
const group = nodeToGroup.get(i);
if (!group) return;
const layer = layers[i];
if (!layerGroupMap.has(layer)) layerGroupMap.set(layer, new Map());
const lg = layerGroupMap.get(layer)!;
if (!lg.has(group)) lg.set(group, []);
lg.get(group)!.push(i);
});
// Per-node sub-row offset and per-layer extra height from sub-rows
const layerSubRowExtra = new Array(maxLayer + 1).fill(0);
for (let L = 0; L <= maxLayer; L++) {
const groups = layerGroupMap.get(L);
if (!groups || groups.size <= 1) continue;
let subIdx = 0;
for (const [, nodeIndices] of groups) {
for (const idx of nodeIndices) {
offsets[idx] = subIdx * SUB_ROW_GAP;
}
subIdx++;
}
layerSubRowExtra[L] = (groups.size - 1) * SUB_ROW_GAP;
}
// Cumulative sub-row shift: layers after a split layer are pushed down
const subRowCumShift = new Array(maxLayer + 1).fill(0);
let subCum = 0;
for (let L = 0; L <= maxLayer; L++) {
subRowCumShift[L] = subCum;
subCum += layerSubRowExtra[L];
}
// Add cumulative sub-row shift to each node's offset
for (let i = 0; i < nodes.length; i++) {
offsets[i] += subRowCumShift[layers[i]];
}
// Step 2: Compute group bounding boxes using sub-row-adjusted positions
type GroupBox = { runtimeId: string; minLayer: number; maxLayer: number; minY: number; maxY: number; maxX: number };
const boxes: GroupBox[] = [];
for (const [runtimeId, draftIds] of Object.entries(flowchartMap)) {
const indices = draftIds.map(id => idxMap[id]).filter((idx): idx is number => idx !== undefined);
if (indices.length === 0) continue;
const memberLayers = indices.map(i => layers[i]);
const ys = indices.map(i => TOP_Y + layers[i] * (NODE_H + GAP_Y) + offsets[i]);
const xs = indices.map(i => nodeXPositions[i]);
boxes.push({
runtimeId,
minLayer: Math.min(...memberLayers),
maxLayer: Math.max(...memberLayers),
minY: Math.min(...ys) - PAD - LABEL_H,
maxY: Math.max(...ys) + NODE_H + PAD,
maxX: Math.max(...xs.map(x => x + nodeW)) + PAD,
});
}
boxes.sort((a, b) => a.minY - b.minY || a.minLayer - b.minLayer);
// Step 3: Resolve remaining overlaps between adjacent group boxes
// by pushing lower boxes down. Track shifts per-group so they apply
// only to that group's nodes.
const groupShift = new Map<string, number>();
for (let i = 1; i < boxes.length; i++) {
const prev = boxes[i - 1];
const curr = boxes[i];
const prevShift = groupShift.get(prev.runtimeId) ?? 0;
const currShift = groupShift.get(curr.runtimeId) ?? 0;
const prevBottom = prev.maxY + prevShift;
const currTop = curr.minY + currShift;
const overlap = prevBottom + MIN_GROUP_GAP - currTop;
if (overlap > 0) {
groupShift.set(curr.runtimeId, currShift + overlap);
}
}
// Apply group shifts to node offsets
let maxShift = 0;
for (let i = 0; i < nodes.length; i++) {
const group = nodeToGroup.get(i);
if (group) {
const shift = groupShift.get(group) ?? 0;
offsets[i] += shift;
maxShift = Math.max(maxShift, offsets[i]);
}
}
// Also shift ungrouped nodes by their layer's cumulative sub-row shift
// (they already have it from the subRowCumShift step above)
const totalExtra = subCum + Math.max(0, ...Array.from(groupShift.values()));
const maxGroupX = boxes.length > 0 ? Math.max(...boxes.map(b => b.maxX)) : 0;
return { nodeYOffset: offsets, totalExtraY: totalExtra, groupBoxMaxX: maxGroupX };
}, [nodes, maxLayer, flowchartMap, idxMap, layers, nodeXPositions, nodeW]);
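Step 3's push-down pass can be isolated as a small helper (a sketch with hypothetical names, assuming boxes are pre-sorted by top edge as in the memo above):

```typescript
// Resolve vertical overlaps between boxes sorted by minY: each box is
// shifted down only as far as needed to clear the previous box plus a
// fixed gap, and the shift is tracked per box ID.
type Box = { id: string; minY: number; maxY: number };

function resolveOverlaps(boxes: Box[], gap: number): Map<string, number> {
  const shift = new Map<string, number>();
  for (let i = 1; i < boxes.length; i++) {
    const prev = boxes[i - 1];
    const curr = boxes[i];
    const prevBottom = prev.maxY + (shift.get(prev.id) ?? 0);
    const currTop = curr.minY + (shift.get(curr.id) ?? 0);
    const overlap = prevBottom + gap - currTop;
    if (overlap > 0) shift.set(curr.id, (shift.get(curr.id) ?? 0) + overlap);
  }
  return shift;
}
```

Shifts accumulate down the sorted order, so a cascade of touching boxes resolves in a single forward pass.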
const nodePos = (i: number) => ({
x: nodeXPositions[i],
- y: TOP_Y + layers[i] * (NODE_H + GAP_Y),
y: TOP_Y + layers[i] * (NODE_H + GAP_Y) + nodeYOffset[i],
});
- const maxLayer = Math.max(...layers);
- const svgHeight = TOP_Y + (maxLayer + 1) * NODE_H + maxLayer * GAP_Y + 16;
const svgHeight = TOP_Y + (maxLayer + 1) * NODE_H + maxLayer * GAP_Y + totalExtraY + 16;
- // Compute group areas for multi-node runtime groups
// Compute group areas for runtime node boundaries on the draft
const groupAreas = useMemo(() => {
if (!flowchartMap || !runtimeNodes?.length) return [];
const groups: { runtimeId: string; label: string; draftIds: string[] }[] = [];
for (const [runtimeId, draftIds] of Object.entries(flowchartMap)) {
- if (draftIds.length < 2) continue;
- const rn = runtimeNodes.find(n => n.id === runtimeId);
- groups.push({ runtimeId, label: rn?.label ?? runtimeId, draftIds });
groups.push({ runtimeId, label: formatNodeId(runtimeId), draftIds });
}
return groups;
}, [flowchartMap, runtimeNodes]);
@@ -551,10 +817,7 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
const legendH = usedTypes.length * 18 + 20;
const totalH = svgHeight + legendH;
// Find hovered node for tooltip positioning
const hoveredNodeData = hoveredNode ? nodes.find(n => n.id === hoveredNode) : null;
- const hoveredIdx = hoveredNode ? idxMap[hoveredNode] : -1;
- const hoveredPos = hoveredIdx >= 0 ? nodePos(hoveredIdx) : null;
const renderEdge = (edge: typeof forwardEdges[number], i: number) => {
const from = nodePos(edge.fromIdx);
@@ -572,20 +835,23 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
}
const midY = (y1 + y2) / 2;
- const d = `M ${startX} ${y1} C ${startX} ${midY}, ${toCenterX} ${midY}, ${toCenterX} ${y2}`;
// Orthogonal routing: straight when aligned, L-shape when offset
const d = Math.abs(startX - toCenterX) < 2
? `M ${startX} ${y1} L ${toCenterX} ${y2}`
: `M ${startX} ${y1} L ${startX} ${midY} L ${toCenterX} ${midY} L ${toCenterX} ${y2}`;
return (
<g key={`fwd-${i}`}>
- <path d={d} fill="none" stroke="hsl(220,10%,30%)" strokeWidth={1.2} />
<path d={d} fill="none" stroke={chrome.edge} strokeWidth={1.2} />
<polygon
points={`${toCenterX - 3},${y2 - 5} ${toCenterX + 3},${y2 - 5} ${toCenterX},${y2 - 1}`}
- fill="hsl(220,10%,35%)"
fill={chrome.edgeArrow}
/>
{edge.label && (
<text
x={(startX + toCenterX) / 2}
y={midY - 3}
- fill="hsl(220,10%,45%)"
fill={chrome.edgeLabel}
fontSize={9}
fontStyle="italic"
textAnchor="middle"
@@ -613,27 +879,25 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
return (
<g key={`back-${i}`}>
- <path d={path} fill="none" stroke="hsl(220,10%,25%)" strokeWidth={1.2} strokeDasharray="4 3" />
<path d={path} fill="none" stroke={chrome.backEdge} strokeWidth={1.2} strokeDasharray="4 3" />
<polygon
points={`${endX + 5},${endY - 2.5} ${endX + 5},${endY + 2.5} ${endX},${endY}`}
- fill="hsl(220,10%,30%)"
fill={chrome.edge}
/>
</g>
);
};
const STATUS_COLORS: Record<DraftNodeStatus, string> = {
- running: "#F59E0B", // amber
- complete: "#22C55E", // green
- error: "#EF4444", // red
- pending: "", // no overlay
running: chrome.statusRunning,
complete: chrome.statusComplete,
error: chrome.statusError,
pending: "",
};
const renderNode = (node: DraftNode, i: number) => {
const pos = nodePos(i);
const isHovered = hoveredNode === node.id;
const status = nodeStatuses[node.id] as DraftNodeStatus | undefined;
const statusColor = status ? STATUS_COLORS[status] : "";
const fontSize = 13;
const labelAvailW = nodeW - 28;
const displayLabel = truncateLabel(node.name, labelAvailW, fontSize);
@@ -655,30 +919,15 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
onNodeClick?.(node);
}
}}
- onMouseEnter={() => setHoveredNode(node.id)}
- onMouseLeave={() => setHoveredNode(null)}
onMouseEnter={(e) => {
if (shiftHeld.current && hoveredNode) return;
setHoveredNode(node.id);
const rect = containerRef.current?.getBoundingClientRect();
if (rect) setMousePos({ x: e.clientX - rect.left, y: e.clientY - rect.top });
}}
onMouseLeave={() => { if (!shiftHeld.current) { setHoveredNode(null); setMousePos(null); } }}
style={{ cursor: "pointer" }}
>
<title>{`${node.name}\n${node.flowchart_type}`}</title>
- {/* Status glow ring (runtime overlay) */}
- {hasStatusOverlay && statusColor && (
- <rect
- x={pos.x - 3}
- y={pos.y - 3}
- width={nodeW + 6}
- height={NODE_H + 6}
- rx={8}
- fill="none"
- stroke={statusColor}
- strokeWidth={2}
- opacity={status === "running" ? 0.8 : 0.6}
- >
- {status === "running" && (
- <animate attributeName="opacity" values="0.4;0.9;0.4" dur="1.5s" repeatCount="indefinite" />
- )}
- </rect>
- )}
<FlowchartShape
shape={node.flowchart_shape}
@@ -693,7 +942,7 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
<text
x={textX}
y={textY - 5}
- fill={isHovered ? "hsl(0,0%,92%)" : "hsl(0,0%,78%)"}
fill={isHovered ? chrome.nodeTextHover : chrome.nodeText}
fontSize={fontSize}
fontWeight={500}
textAnchor="middle"
@@ -705,7 +954,7 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
<text
x={textX}
y={textY + 11}
- fill="hsl(220,10%,50%)"
fill={chrome.chromeText}
fontSize={9.5}
textAnchor="middle"
dominantBaseline="middle"
@@ -713,91 +962,152 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
{descLabel}
</text>
{/* Status dot indicator */}
{hasStatusOverlay && statusColor && (
<circle
cx={pos.x + nodeW - 6}
cy={pos.y + 6}
r={4}
fill={statusColor}
>
{status === "running" && (
<animate attributeName="r" values="3;5;3" dur="1s" repeatCount="indefinite" />
)}
</circle>
)}
</g>
);
};
if (loading || !draft || nodes.length === 0) {
return (
<div className="flex flex-col h-full">
<div className="px-4 pt-3 pb-1.5 flex items-center gap-2">
<p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">Draft</p>
<span className="text-[9px] font-mono font-medium rounded px-1 py-0.5 leading-none border text-amber-500/60 border-amber-500/20">planning</span>
</div>
<div className="flex-1 flex flex-col items-center justify-center gap-3">
{loading || !draft ? (
<>
<Loader2 className="w-5 h-5 animate-spin text-muted-foreground/40" />
<p className="text-xs text-muted-foreground/50">Designing flowchart</p>
</>
) : (
<p className="text-xs text-muted-foreground/60 text-center italic">
No draft graph yet.
<br />
Describe your workflow to get started.
</p>
)}
</div>
</div>
);
}
return (
<div className="flex flex-col h-full">
{/* Header */}
- <div className="px-4 pt-3 pb-1.5 flex items-center gap-2">
- <p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">
- {hasStatusOverlay ? "Flowchart" : "Draft"}
- </p>
- <span className={`text-[9px] font-mono font-medium rounded px-1 py-0.5 leading-none border ${hasStatusOverlay ? "text-emerald-500/60 border-emerald-500/20" : "text-amber-500/60 border-amber-500/20"}`}>
- {hasStatusOverlay ? "live" : "planning"}
- </span>
- </div>
- {/* Agent name + goal */}
- <div className="px-4 pb-2.5 border-b border-border/20">
- <p className="text-[11px] font-medium text-foreground/80 truncate">
- {draft.agent_name}
- </p>
- {draft.goal && (
- <p className="text-[10px] text-muted-foreground/60 mt-0.5 line-clamp-2 leading-snug">
- {draft.goal}
<div className="px-4 pt-3 pb-1.5 flex items-center justify-between">
<div className="flex items-center gap-2">
<p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">
{hasStatusOverlay ? "Flowchart" : "Draft"}
</p>
{building ? (
<span className="text-[9px] font-mono font-medium rounded px-1 py-0.5 leading-none border text-primary/60 border-primary/20 flex items-center gap-1">
<Loader2 className="w-2.5 h-2.5 animate-spin" />
building
</span>
) : (
<span className={`text-[9px] font-mono font-medium rounded px-1 py-0.5 leading-none border ${hasStatusOverlay ? "text-emerald-500/60 border-emerald-500/20" : "text-amber-500/60 border-amber-500/20"}`}>
{hasStatusOverlay ? "live" : "planning"}
</span>
)}
</div>
{onRun && (
<RunButton runState={runState} disabled={draft.nodes.length === 0} onRun={onRun} onPause={onPause ?? (() => {})} btnRef={runBtnRef} />
)}
</div>
{/* Graph */}
- <div ref={containerRef} className="flex-1 overflow-y-auto overflow-x-hidden px-2 pb-2 relative">
<div ref={containerRef} className="flex-1 overflow-hidden px-2 pb-2 relative">
<div
onWheel={handleWheel}
onMouseDown={handleMouseDown}
onMouseMove={handleMouseMove}
onMouseUp={handleMouseUp}
onMouseLeave={handleMouseUp}
className={`w-full h-full${building ? " opacity-30" : ""}`}
style={{ cursor: dragging ? "grabbing" : "grab" }}
>
<svg
width="100%"
- viewBox={`0 0 ${containerW} ${totalH}`}
viewBox={`0 0 ${Math.max((maxContentRight ?? 0), groupBoxMaxX) + (backEdgeOverflow ?? 0)} ${totalH}`}
preserveAspectRatio="xMidYMin meet"
className="select-none"
- style={{ fontFamily: "'Inter', system-ui, sans-serif" }}
style={{
fontFamily: "'Inter', system-ui, sans-serif",
transform: `translate(${pan.x}px, ${pan.y}px) scale(${zoom})`,
transformOrigin: "center top",
}}
>
{/* Group areas — dashed boxes behind multi-node runtime groups */}
{groupAreas.map((group) => {
const memberIndices = group.draftIds
.map(id => idxMap[id])
.filter((idx): idx is number => idx !== undefined);
- if (memberIndices.length < 2) return null;
if (memberIndices.length === 0) return null;
const positions = memberIndices.map(i => nodePos(i));
- const pad = 10;
const pad = 7;
const minX = Math.min(...positions.map(p => p.x)) - pad;
const minY = Math.min(...positions.map(p => p.y)) - pad - 14; // extra space for label
const maxX = Math.max(...positions.map(p => p.x + nodeW)) + pad;
const maxY = Math.max(...positions.map(p => p.y + NODE_H)) + pad;
// Runtime status for this group
const runtimeNode = runtimeNodes?.find(rn => rn.id === group.runtimeId);
const groupStatus: DraftNodeStatus | undefined = runtimeNode
? (runtimeNode.status === "running" || runtimeNode.status === "looping" ? "running"
: runtimeNode.status === "complete" ? "complete"
: runtimeNode.status === "error" ? "error" : "pending")
: undefined;
const groupStatusColor = groupStatus ? STATUS_COLORS[groupStatus] : "";
return (
<g key={`group-${group.runtimeId}`}>
{/* Status glow around group boundary */}
{(groupStatus === "running" || groupStatus === "error") && groupStatusColor && (
<rect
x={minX - 3}
y={minY - 3}
width={maxX - minX + 6}
height={maxY - minY + 6}
rx={10}
fill="none"
stroke={groupStatusColor}
strokeWidth={2}
opacity={groupStatus === "running" ? 0.8 : 0.6}
>
{groupStatus === "running" && (
<animate attributeName="opacity" values="0.4;0.9;0.4" dur="1.5s" repeatCount="indefinite" />
)}
</rect>
)}
<rect
x={minX}
y={minY}
width={maxX - minX}
height={maxY - minY}
rx={8}
- fill="hsl(220,15%,18%)"
fill={chrome.groupFill}
fillOpacity={0.35}
- stroke="hsl(220,10%,40%)"
stroke={chrome.groupStroke}
strokeWidth={1}
strokeDasharray="5 3"
/>
<text
x={minX + 8}
y={minY + 11}
- fill="hsl(220,10%,50%)"
fill={chrome.chromeText}
fontSize={9}
fontWeight={500}
>
{truncateLabel(group.label, maxX - minX - 16, 9)}
</text>
{/* Status dot on group boundary */}
{hasStatusOverlay && (groupStatus === "running" || groupStatus === "error") && groupStatusColor && (
<circle cx={maxX - 6} cy={minY + 6} r={4} fill={groupStatusColor}>
{groupStatus === "running" && (
<animate attributeName="r" values="3;5;3" dur="1s" repeatCount="indefinite" />
)}
</circle>
)}
</g>
);
})}
@@ -808,7 +1118,7 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
{/* Legend */}
<g transform={`translate(${MARGIN_X}, ${svgHeight + 4})`}>
- <text fill="hsl(220,10%,40%)" fontSize={9} fontWeight={600} y={4}>
<text fill={chrome.groupStroke} fontSize={9} fontWeight={600} y={4}>
LEGEND
</text>
{usedTypes.map(([type, meta], i) => (
@@ -822,26 +1132,64 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
color={meta.color}
selected={false}
/>
- <text x={22} y={9} fill="hsl(220,10%,55%)" fontSize={9.5}>
<text x={22} y={9} fill={chrome.chromeTextDim} fontSize={9.5}>
{type.replace(/_/g, " ")}
</text>
</g>
))}
</g>
</svg>
</div>
{building && (
<div className="absolute inset-0 flex items-center justify-center">
<div className="flex flex-col items-center gap-3">
<Loader2 className="w-6 h-6 animate-spin text-primary/60" />
<p className="text-xs text-muted-foreground/80">Building agent...</p>
</div>
</div>
)}
{/* Zoom controls */}
<div className="absolute bottom-3 right-3 flex items-center gap-1 bg-card/80 backdrop-blur-sm border border-border/40 rounded-lg p-0.5 shadow-sm">
<button
onClick={() => setZoom(z => Math.min(MAX_ZOOM, z * 1.2))}
className="w-6 h-6 flex items-center justify-center rounded text-muted-foreground hover:text-foreground hover:bg-muted/60 transition-colors text-xs font-bold"
aria-label="Zoom in"
>+</button>
<button
onClick={resetView}
className="px-1.5 h-6 flex items-center justify-center rounded text-[10px] font-mono text-muted-foreground hover:text-foreground hover:bg-muted/60 transition-colors"
aria-label="Reset zoom"
>{Math.round(zoom * 100)}%</button>
<button
onClick={() => setZoom(z => Math.max(MIN_ZOOM, z * 0.8))}
className="w-6 h-6 flex items-center justify-center rounded text-muted-foreground hover:text-foreground hover:bg-muted/60 transition-colors text-xs font-bold"
aria-label="Zoom out"
>{"\u2212"}</button>
</div>
{/* HTML tooltip — rendered outside SVG so it's not clipped */}
- {hoveredNodeData && hoveredPos && (
- <Tooltip
- node={hoveredNodeData}
- style={{
- left: 8,
- right: 8,
- // Position below the hovered node, scaled to container width
- top: `calc(${((hoveredPos.y + NODE_H + 4) / totalH) * 100}%)`,
- }}
- />
- )}
{hoveredNodeData && mousePos && (() => {
const TOOLTIP_W = 260;
const OFFSET = 12;
const rect = containerRef.current?.getBoundingClientRect();
const cw = rect?.width ?? 0;
const ch = rect?.height ?? 0;
const flipX = mousePos.x + OFFSET + TOOLTIP_W > cw;
const flipY = mousePos.y + 16 + 60 > ch;
return (
<Tooltip
node={hoveredNodeData}
style={{
left: flipX ? undefined : mousePos.x + OFFSET,
right: flipX ? (cw - mousePos.x + OFFSET) : undefined,
top: flipY ? undefined : mousePos.y + 16,
bottom: flipY ? (ch - mousePos.y + 16) : undefined,
}}
/>
);
})()}
</div>
</div>
);
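The cursor-following tooltip's flip logic above can be factored into a pure helper (a sketch; the 260px width and 60px estimated height mirror the constants in the component, while the function name is hypothetical):

```typescript
// Anchor a tooltip to the bottom-right of the cursor, flipping to the
// opposite side of whichever container edge it would overflow.
type Anchor = { left?: number; right?: number; top?: number; bottom?: number };

function anchorTooltip(
  mouse: { x: number; y: number },
  container: { width: number; height: number },
  tooltipW = 260,
  offset = 12,
  estH = 60,
): Anchor {
  const flipX = mouse.x + offset + tooltipW > container.width;
  const flipY = mouse.y + 16 + estH > container.height;
  return {
    left: flipX ? undefined : mouse.x + offset,
    right: flipX ? container.width - mouse.x + offset : undefined,
    top: flipY ? undefined : mouse.y + 16,
    bottom: flipY ? container.height - mouse.y + 16 : undefined,
  };
}
```

Setting only one of `left`/`right` (and `top`/`bottom`) keeps the tooltip's own size out of the math: when flipped, CSS measures from the opposite edge, so the estimated height only matters for the flip decision itself.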
@@ -299,13 +299,13 @@ function SubagentsTab({ subAgentIds, allNodeSpecs, subagentReports }: { subAgent
);
}
- type Tab = "overview" | "tools" | "logs" | "prompt" | "subagents";
type Tab = "overview" | "breakdown" | "tools" | "logs" | "subagents";
const tabs: { id: Tab; label: string; Icon: React.FC<{ className?: string }> }[] = [
{ id: "overview", label: "Overview", Icon: ({ className }) => <GitBranch className={className} /> },
{ id: "breakdown", label: "Breakdown", Icon: ({ className }) => <BookOpen className={className} /> },
{ id: "tools", label: "Tools", Icon: ({ className }) => <Wrench className={className} /> },
{ id: "logs", label: "Logs", Icon: ({ className }) => <Terminal className={className} /> },
- { id: "prompt", label: "Prompt", Icon: ({ className }) => <BookOpen className={className} /> },
{ id: "subagents", label: "Subagents", Icon: ({ className }) => <Bot className={className} /> },
];
@@ -331,7 +331,7 @@ export default function NodeDetailPanel({ node, nodeSpec, allNodeSpecs, subagent
// Fetch real criteria when Overview tab is active and session is loaded
useEffect(() => {
- if (activeTab === "overview" && sessionId && graphId && node) {
if (activeTab === "breakdown" && sessionId && graphId && node) {
graphsApi.nodeCriteria(sessionId, graphId, node.id, workerSessionId || undefined)
.then(r => setRealCriteria(r))
.catch(() => setRealCriteria(null));
@@ -410,6 +410,10 @@ export default function NodeDetailPanel({ node, nodeSpec, allNodeSpecs, subagent
{/* Tab content */}
<div className="flex-1 overflow-auto px-4 py-4 flex flex-col gap-3">
{activeTab === "overview" && (
<SystemPromptTab systemPrompt={nodeSpec?.system_prompt} />
)}
{activeTab === "breakdown" && (
<>
<p className="text-[10px] font-medium text-muted-foreground uppercase tracking-wider">Action Plan</p>
{actionPlan ? (
@@ -489,10 +493,6 @@ export default function NodeDetailPanel({ node, nodeSpec, allNodeSpecs, subagent
<LogsTab nodeId={node.id} isActive={isActive} sessionId={sessionId} graphId={graphId} workerSessionId={workerSessionId} nodeLogs={nodeLogs} />
)}
- {activeTab === "prompt" && (
- <SystemPromptTab systemPrompt={nodeSpec?.system_prompt} />
- )}
{activeTab === "subagents" && nodeSpec?.sub_agents && (
<SubagentsTab
subAgentIds={nodeSpec.sub_agents}
@@ -1,8 +1,8 @@
import { useState, useCallback } from "react";
import { useNavigate } from "react-router-dom";
import { Crown, X } from "lucide-react";
- import { loadPersistedTabs, savePersistedTabs, TAB_STORAGE_KEY, type PersistedTabState } from "@/lib/tab-persistence";
import { sessionsApi } from "@/api/sessions";
import { loadPersistedTabs, savePersistedTabs, TAB_STORAGE_KEY, type PersistedTabState } from "@/lib/tab-persistence";
export interface TopBarTab {
agentType: string;
@@ -51,10 +51,10 @@ export default function TopBar({ tabs: tabsProp, onTabClick, onCloseTab, canClos
onCloseTab(agentType);
return;
}
- // Kill the backend session (queen/judge/worker) even outside workspace
// Kill the backend session (queen/worker) even outside workspace
sessionsApi.list()
.then(({ sessions }) => {
- const match = sessions.find(s => s.agent_path === agentType);
const match = sessions.find(s => s.agent_path.endsWith(agentType));
if (match) return sessionsApi.stop(match.session_id);
})
.catch(() => {}); // fire-and-forget
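The suffix match in this chain can be sketched in isolation (a sketch with hypothetical names; `agent_path` may be an absolute path, which is why the new code matches by suffix rather than equality before stopping the session):

```typescript
// Find the session whose agent_path ends with the given agent type,
// so both "queen" and "/agents/queen" resolve to the same session.
type SessionInfo = { session_id: string; agent_path: string };

function findSessionToStop(
  sessions: SessionInfo[],
  agentType: string,
): string | undefined {
  return sessions.find(s => s.agent_path.endsWith(agentType))?.session_id;
}
```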
@@ -72,6 +72,33 @@
--border: 240 3.7% 15.9%;
--input: 240 3.7% 15.9%;
--ring: 45 93% 47%;
/* Agent graph node status colors */
--node-running: 45 95% 58%;
--node-looping: 38 90% 55%;
--node-complete: 43 70% 45%;
--node-pending: 35 15% 28%;
--node-pending-bg: 35 10% 12%;
--node-pending-border: 35 10% 20%;
--node-error: 0 65% 55%;
/* Agent graph trigger node colors */
--trigger-bg: 210 25% 14%;
--trigger-border: 210 30% 30%;
--trigger-text: 210 30% 65%;
--trigger-icon: 210 40% 55%;
/* Draft graph chrome colors */
--draft-edge: 220 10% 30%;
--draft-edge-arrow: 220 10% 35%;
--draft-edge-label: 220 10% 45%;
--draft-back-edge: 220 10% 25%;
--draft-group-fill: 220 15% 18%;
--draft-group-stroke: 220 10% 40%;
--draft-chrome-text: 220 10% 50%;
--draft-chrome-text-dim: 220 10% 55%;
--draft-node-text: 0 0% 78%;
--draft-node-text-hover: 0 0% 92%;
}
}
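One plausible way to surface these variables to the SVG `fill`/`stroke` attributes seen earlier is a `chrome` lookup of `hsl(var(...))` strings (an assumption for illustration — the repo's actual helper is not shown in this diff):

```typescript
// Map each --draft-* custom property to an hsl(var(...)) color string so
// SVG attributes resolve against the active theme at paint time.
const chrome = {
  edge: "hsl(var(--draft-edge))",
  edgeArrow: "hsl(var(--draft-edge-arrow))",
  edgeLabel: "hsl(var(--draft-edge-label))",
  backEdge: "hsl(var(--draft-back-edge))",
  groupFill: "hsl(var(--draft-group-fill))",
  groupStroke: "hsl(var(--draft-group-stroke))",
  chromeText: "hsl(var(--draft-chrome-text))",
  chromeTextDim: "hsl(var(--draft-chrome-text-dim))",
  nodeText: "hsl(var(--draft-node-text))",
  nodeTextHover: "hsl(var(--draft-node-text-hover))",
} as const;
```

Because the variables hold bare HSL components (e.g. `220 10% 30%`), wrapping them in `hsl(var(...))` is what turns them into usable colors.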
@@ -1,60 +1,6 @@
import { describe, it, expect } from "vitest";
- import { backendMessageToChatMessage, sseEventToChatMessage, formatAgentDisplayName } from "./chat-helpers";
- import type { AgentEvent, Message } from "@/api/types";
- // ---------------------------------------------------------------------------
- // backendMessageToChatMessage
- // ---------------------------------------------------------------------------
- describe("backendMessageToChatMessage", () => {
- it("converts a user message", () => {
- const msg: Message = { seq: 1, role: "user", content: "hello", _node_id: "chat" };
- const result = backendMessageToChatMessage(msg, "inbox-management");
- expect(result.type).toBe("user");
- expect(result.agent).toBe("You");
- expect(result.role).toBeUndefined();
- expect(result.content).toBe("hello");
- expect(result.thread).toBe("inbox-management");
- });
- it("converts an assistant message with node_id as agent", () => {
- const msg: Message = { seq: 2, role: "assistant", content: "hi", _node_id: "intake" };
- const result = backendMessageToChatMessage(msg, "inbox-management");
- expect(result.agent).toBe("intake");
- expect(result.role).toBe("worker");
- expect(result.type).toBeUndefined();
- });
- it("defaults agent to 'Agent' when _node_id is empty", () => {
- const msg: Message = { seq: 3, role: "assistant", content: "ok", _node_id: "" };
- const result = backendMessageToChatMessage(msg, "inbox-management");
- expect(result.agent).toBe("Agent");
- });
- it("produces deterministic ID from seq", () => {
- const msg: Message = { seq: 42, role: "user", content: "test", _node_id: "x" };
- const result = backendMessageToChatMessage(msg, "thread");
- expect(result.id).toBe("backend-42");
- });
- it("passes through the thread parameter", () => {
- const msg: Message = { seq: 1, role: "user", content: "hi", _node_id: "x" };
- const result = backendMessageToChatMessage(msg, "my-thread");
- expect(result.thread).toBe("my-thread");
- });
- it("uses agentDisplayName instead of node_id when provided", () => {
- const msg: Message = { seq: 2, role: "assistant", content: "hi", _node_id: "intake" };
- const result = backendMessageToChatMessage(msg, "thread", "Competitive Intel Agent");
- expect(result.agent).toBe("Competitive Intel Agent");
- });
- it("still shows 'You' for user messages even when agentDisplayName is provided", () => {
- const msg: Message = { seq: 1, role: "user", content: "hello", _node_id: "chat" };
- const result = backendMessageToChatMessage(msg, "thread", "My Agent");
- expect(result.agent).toBe("You");
- });
- });
import { sseEventToChatMessage, formatAgentDisplayName } from "./chat-helpers";
import type { AgentEvent } from "@/api/types";
// ---------------------------------------------------------------------------
// sseEventToChatMessage
@@ -250,6 +196,102 @@ describe("sseEventToChatMessage", () => {
);
});
it("different inner_turn values produce different message IDs", () => {
const e1 = makeEvent({
type: "client_output_delta",
node_id: "queen",
execution_id: "exec-1",
data: { snapshot: "first response", iteration: 0, inner_turn: 0 },
});
const e2 = makeEvent({
type: "client_output_delta",
node_id: "queen",
execution_id: "exec-1",
data: { snapshot: "after tool call", iteration: 0, inner_turn: 1 },
});
const r1 = sseEventToChatMessage(e1, "t");
const r2 = sseEventToChatMessage(e2, "t");
expect(r1!.id).not.toBe(r2!.id);
});
it("same inner_turn produces same ID (streaming upsert within one LLM call)", () => {
const e1 = makeEvent({
type: "client_output_delta",
node_id: "queen",
execution_id: "exec-1",
data: { snapshot: "partial", iteration: 0, inner_turn: 1 },
});
const e2 = makeEvent({
type: "client_output_delta",
node_id: "queen",
execution_id: "exec-1",
data: { snapshot: "partial response", iteration: 0, inner_turn: 1 },
});
expect(sseEventToChatMessage(e1, "t")!.id).toBe(
sseEventToChatMessage(e2, "t")!.id,
);
});
it("absent inner_turn produces same ID as inner_turn=0 (backward compat)", () => {
const withField = makeEvent({
type: "client_output_delta",
node_id: "queen",
execution_id: "exec-1",
data: { snapshot: "hello", iteration: 2, inner_turn: 0 },
});
const withoutField = makeEvent({
type: "client_output_delta",
node_id: "queen",
execution_id: "exec-1",
data: { snapshot: "hello", iteration: 2 },
});
expect(sseEventToChatMessage(withField, "t")!.id).toBe(
sseEventToChatMessage(withoutField, "t")!.id,
);
});
it("inner_turn=0 produces no suffix (matches old ID format)", () => {
const event = makeEvent({
type: "client_output_delta",
node_id: "queen",
execution_id: "exec-1",
data: { snapshot: "hello", iteration: 3, inner_turn: 0 },
});
const result = sseEventToChatMessage(event, "t");
expect(result!.id).toBe("stream-exec-1-3-queen");
});
it("inner_turn>0 adds -t suffix to ID", () => {
const event = makeEvent({
type: "client_output_delta",
node_id: "queen",
execution_id: "exec-1",
data: { snapshot: "hello", iteration: 3, inner_turn: 2 },
});
const result = sseEventToChatMessage(event, "t");
expect(result!.id).toBe("stream-exec-1-3-t2-queen");
});
it("llm_text_delta also uses inner_turn for distinct IDs", () => {
const e1 = makeEvent({
type: "llm_text_delta",
node_id: "research",
execution_id: "exec-1",
data: { snapshot: "first", inner_turn: 0 },
});
const e2 = makeEvent({
type: "llm_text_delta",
node_id: "research",
execution_id: "exec-1",
data: { snapshot: "second", inner_turn: 1 },
});
const r1 = sseEventToChatMessage(e1, "t");
const r2 = sseEventToChatMessage(e2, "t");
expect(r1!.id).not.toBe(r2!.id);
expect(r1!.id).toBe("stream-exec-1-research");
expect(r2!.id).toBe("stream-exec-1-t1-research");
});
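The ID scheme these tests pin down can be summarized as a pure function (hypothetical name; shown only to restate the suffix rules the assertions encode):

```typescript
// stream-<execution_id>[-<iteration>][-t<inner_turn>]-<node_id>
// inner_turn of 0 or absent adds no suffix, keeping pre-existing IDs stable
// so streaming snapshots within one LLM call upsert the same message.
function streamMessageId(
  executionId: string,
  nodeId: string,
  iteration?: number | null,
  innerTurn?: number | null,
): string {
  const parts = ["stream", executionId];
  if (iteration != null) parts.push(String(iteration));
  if (innerTurn != null && innerTurn > 0) parts.push(`t${innerTurn}`);
  parts.push(nodeId);
  return parts.join("-");
}
```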
it("uses timestamp fallback when both turnId and execution_id are null", () => {
const event = makeEvent({
type: "client_output_delta",
@@ -261,25 +303,36 @@ describe("sseEventToChatMessage", () => {
expect(result!.id).toMatch(/^stream-t-\d+-chat$/);
});
- it("converts client_input_requested with prompt to message", () => {
it("returns null for client_input_requested (handled in workspace.tsx)", () => {
const event = makeEvent({
type: "client_input_requested",
node_id: "chat",
execution_id: "abc",
data: { prompt: "What next?" },
});
- const result = sseEventToChatMessage(event, "t");
- expect(result).not.toBeNull();
- expect(result!.content).toBe("What next?");
- expect(result!.role).toBe("worker");
expect(sseEventToChatMessage(event, "t")).toBeNull();
});
- it("returns null for client_input_requested without prompt", () => {
it("converts client_input_received to user message", () => {
const event = makeEvent({
- type: "client_input_requested",
- node_id: "chat",
type: "client_input_received",
node_id: "queen",
execution_id: "abc",
- data: { prompt: "" },
data: { content: "do the thing" },
});
const result = sseEventToChatMessage(event, "t");
expect(result).not.toBeNull();
expect(result!.agent).toBe("You");
expect(result!.type).toBe("user");
expect(result!.content).toBe("do the thing");
});
it("returns null for client_input_received with empty content", () => {
const event = makeEvent({
type: "client_input_received",
node_id: "queen",
execution_id: "abc",
data: { content: "" },
});
expect(sseEventToChatMessage(event, "t")).toBeNull();
});
+49 -30
@@ -1,10 +1,10 @@
/**
* Pure functions for converting backend messages and SSE events into ChatMessage objects.
* Pure functions for converting SSE events into ChatMessage objects.
* No React dependencies — just JSON in, object out.
*/
import type { ChatMessage } from "@/components/ChatPanel";
import type { AgentEvent, Message } from "@/api/types";
import type { AgentEvent } from "@/api/types";
/**
* Derive a human-readable display name from a raw agent identifier.
@@ -27,32 +27,6 @@ export function formatAgentDisplayName(raw: string): string {
.trim();
}
/**
* Convert a backend Message (from sessionsApi.messages()) into a ChatMessage.
* When agentDisplayName is provided, it is used as the sender for all agent
* messages instead of the raw node_id.
*/
export function backendMessageToChatMessage(
msg: Message,
thread: string,
agentDisplayName?: string,
): ChatMessage {
// Use file-mtime created_at (epoch seconds → ms) for cross-conversation
// ordering; fall back to seq for backwards compatibility.
const createdAt = msg.created_at ? msg.created_at * 1000 : msg.seq;
return {
id: `backend-${msg._node_id}-${msg.seq}`,
agent: msg.role === "user" ? "You" : agentDisplayName || msg._node_id || "Agent",
agentColor: "",
content: msg.content,
timestamp: "",
type: msg.role === "user" ? "user" : undefined,
role: msg.role === "user" ? undefined : "worker",
thread,
createdAt,
};
}
/**
* Convert an SSE AgentEvent into a ChatMessage, or null if the event
* doesn't produce a visible chat message.
@@ -82,10 +56,15 @@ export function sseEventToChatMessage(
const iterTid = iter != null ? String(iter) : tid;
const iterIdKey = eid && iterTid ? `${eid}-${iterTid}` : eid || iterTid || `t-${Date.now()}`;
// Distinguish multiple LLM calls within the same iteration (inner tool loop).
// inner_turn=0 (or absent) produces no suffix for backward compat.
const innerTurn = event.data?.inner_turn as number | undefined;
const innerSuffix = innerTurn != null && innerTurn > 0 ? `-t${innerTurn}` : "";
const snapshot = (event.data?.snapshot as string) || (event.data?.content as string) || "";
if (!snapshot) return null;
return {
id: `stream-${iterIdKey}-${event.node_id}`,
id: `stream-${iterIdKey}${innerSuffix}-${event.node_id}`,
agent: agentDisplayName || event.node_id || "Agent",
agentColor: "",
content: snapshot,
@@ -101,11 +80,29 @@ export function sseEventToChatMessage(
// create a worker_input_request message and set awaitingInput state.
return null;
case "client_input_received": {
const userContent = (event.data?.content as string) || "";
if (!userContent) return null;
return {
id: `user-input-${event.timestamp}`,
agent: "You",
agentColor: "",
content: userContent,
timestamp: "",
type: "user",
thread,
createdAt,
};
}
case "llm_text_delta": {
const llmInnerTurn = event.data?.inner_turn as number | undefined;
const llmInnerSuffix = llmInnerTurn != null && llmInnerTurn > 0 ? `-t${llmInnerTurn}` : "";
const snapshot = (event.data?.snapshot as string) || (event.data?.content as string) || "";
if (!snapshot) return null;
return {
id: `stream-${idKey}-${event.node_id}`,
id: `stream-${idKey}${llmInnerSuffix}-${event.node_id}`,
agent: event.node_id || "Agent",
agentColor: "",
content: snapshot,
@@ -148,3 +145,25 @@ export function sseEventToChatMessage(
return null;
}
}
type QueenPhase = "planning" | "building" | "staging" | "running";
const VALID_PHASES = new Set<string>(["planning", "building", "staging", "running"]);
/**
* Scan an array of persisted events and return the last queen phase seen,
* or null if no phase event exists. Reads both `queen_phase_changed` events
* and the per-iteration `phase` metadata on `node_loop_iteration` events.
*/
export function extractLastPhase(events: AgentEvent[]): QueenPhase | null {
let last: QueenPhase | null = null;
for (const evt of events) {
const phase =
evt.type === "queen_phase_changed" ? (evt.data?.phase as string) :
evt.type === "node_loop_iteration" ? (evt.data?.phase as string | undefined) :
undefined;
if (phase && VALID_PHASES.has(phase)) {
last = phase as QueenPhase;
}
}
return last;
}
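A self-contained sketch of the scan `extractLastPhase` performs (the `Evt` shape is a trimmed stand-in for the real `AgentEvent` type): later events win, and phases outside the whitelist are ignored.

```typescript
// Trimmed stand-in for AgentEvent — only the fields the scan reads.
type Evt = { type: string; data?: Record<string, unknown> };

const VALID = new Set(["planning", "building", "staging", "running"]);

// Walk the event log in order; the last valid phase seen wins.
function lastPhase(events: Evt[]): string | null {
  let last: string | null = null;
  for (const evt of events) {
    const phase =
      evt.type === "queen_phase_changed" ? (evt.data?.phase as string)
      : evt.type === "node_loop_iteration" ? (evt.data?.phase as string | undefined)
      : undefined;
    if (phase && VALID.has(phase)) last = phase;
  }
  return last;
}

console.log(lastPhase([
  { type: "queen_phase_changed", data: { phase: "planning" } },
  { type: "node_loop_iteration", data: { phase: "building" } },
  { type: "queen_phase_changed", data: { phase: "bogus" } }, // ignored
])); // "building"
```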
+1
@@ -51,6 +51,7 @@ export function topologyToGraphNodes(topology: GraphTopology): GraphNode[] {
triggerConfig: {
...ep.trigger_config,
...(ep.next_fire_in != null ? { next_fire_in: ep.next_fire_in } : {}),
...(ep.task ? { task: ep.task } : {}),
},
next: [ep.entry_node],
});
+1 -1
@@ -113,7 +113,7 @@ export default function MyAgents() {
<div className="flex items-center gap-1">
<Activity className="w-3 h-3" />
<span>
{agent.session_count} session{agent.session_count !== 1 ? "s" : ""}
{agent.run_count} run{agent.run_count !== 1 ? "s" : ""}
</span>
</div>
<span>{agent.last_active ? timeAgo(agent.last_active) : "Never run"}</span>
+547 -123
@@ -14,8 +14,8 @@ import { executionApi } from "@/api/execution";
import { graphsApi } from "@/api/graphs";
import { sessionsApi } from "@/api/sessions";
import { useMultiSSE } from "@/hooks/use-sse";
import type { LiveSession, AgentEvent, DiscoverEntry, Message, NodeSpec, DraftGraph as DraftGraphData } from "@/api/types";
import { backendMessageToChatMessage, sseEventToChatMessage, formatAgentDisplayName } from "@/lib/chat-helpers";
import type { LiveSession, AgentEvent, DiscoverEntry, NodeSpec, DraftGraph as DraftGraphData } from "@/api/types";
import { sseEventToChatMessage, formatAgentDisplayName } from "@/lib/chat-helpers";
import { topologyToGraphNodes } from "@/lib/graph-converter";
import { ApiError } from "@/api/client";
@@ -113,7 +113,13 @@ function NewTabPopover({ open, onClose, anchorRef, discoverAgents, onFromScratch
useEffect(() => {
if (open && anchorRef.current) {
const rect = anchorRef.current.getBoundingClientRect();
setPos({ top: rect.bottom + 4, left: rect.left });
const POPUP_WIDTH = 240; // w-60 = 15rem = 240px
const overflows = rect.left + POPUP_WIDTH > window.innerWidth - 8;

setPos({
top: rect.bottom + 4,
left: overflows ? rect.right - POPUP_WIDTH : rect.left,
});
}
}, [open, anchorRef]);
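The placement math in that hunk can be checked standalone. This is a minimal sketch: `popoverLeft` is a hypothetical extraction, with `POPUP_WIDTH` and the 8px safety margin taken from the diff above.

```typescript
// Standalone version of the popover placement math: left-align to the
// anchor unless the fixed-width popup would spill past the viewport,
// in which case right-align it to the anchor instead.
const POPUP_WIDTH = 240; // w-60 = 15rem = 240px

function popoverLeft(anchorLeft: number, anchorRight: number, viewportWidth: number): number {
  const overflows = anchorLeft + POPUP_WIDTH > viewportWidth - 8;
  return overflows ? anchorRight - POPUP_WIDTH : anchorLeft;
}

console.log(popoverLeft(100, 150, 1280));  // 100 — fits, left-aligned
console.log(popoverLeft(1200, 1250, 1280)); // 1010 — clamped: 1250 - 240
```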
@@ -242,6 +248,49 @@ function truncate(s: string, max: number): string {
return s.length > max ? s.slice(0, max) + "..." : s;
}
type SessionRestoreResult = {
messages: ChatMessage[];
restoredPhase: "planning" | "building" | "staging" | "running" | null;
};
/**
* Restore session messages from the persisted event log.
* Returns an empty result if no event log exists.
*/
async function restoreSessionMessages(
sessionId: string,
thread: string,
agentDisplayName: string,
): Promise<SessionRestoreResult> {
try {
const { events } = await sessionsApi.eventsHistory(sessionId);
if (events.length > 0) {
const messages: ChatMessage[] = [];
let runningPhase: ChatMessage["phase"] = undefined;
for (const evt of events) {
// Track phase transitions so each message gets the phase it was created in
const p = evt.type === "queen_phase_changed" ? evt.data?.phase as string
: evt.type === "node_loop_iteration" ? evt.data?.phase as string | undefined
: undefined;
if (p && ["planning", "building", "staging", "running"].includes(p)) {
runningPhase = p as ChatMessage["phase"];
}
const msg = sseEventToChatMessage(evt, thread, agentDisplayName);
if (!msg) continue;
if (evt.stream_id === "queen") {
msg.role = "queen";
msg.phase = runningPhase;
}
messages.push(msg);
}
return { messages, restoredPhase: runningPhase ?? null };
}
} catch {
// Event log not available — session will start fresh.
}
return { messages: [], restoredPhase: null };
}
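The phase-stamping walk inside `restoreSessionMessages` can be sketched on its own: each queen message inherits the most recent phase seen before it in the event log. Types and the `stampPhases` name below are trimmed stand-ins for illustration, not the real API.

```typescript
// Trimmed stand-in event shape; only the fields the walk reads.
type Ev = { type: string; stream_id?: string; data?: Record<string, unknown> };

// Replay the log in order, carrying the running phase forward and
// stamping it onto each queen output message as it is restored.
function stampPhases(events: Ev[]): { content: string; phase?: string }[] {
  const out: { content: string; phase?: string }[] = [];
  let phase: string | undefined;
  for (const evt of events) {
    if (evt.type === "queen_phase_changed") {
      phase = evt.data?.phase as string;
      continue;
    }
    if (evt.type === "client_output_delta" && evt.stream_id === "queen") {
      out.push({ content: (evt.data?.content as string) || "", phase });
    }
  }
  return out;
}

console.log(stampPhases([
  { type: "queen_phase_changed", data: { phase: "planning" } },
  { type: "client_output_delta", stream_id: "queen", data: { content: "draft ready" } },
  { type: "queen_phase_changed", data: { phase: "building" } },
  { type: "client_output_delta", stream_id: "queen", data: { content: "building now" } },
]));
// [{ content: "draft ready", phase: "planning" },
//  { content: "building now", phase: "building" }]
```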
// --- Per-agent backend state (consolidated) ---
interface AgentBackendState {
sessionId: string | null;
@@ -266,6 +315,7 @@ interface AgentBackendState {
flowchartMap: Record<string, string[]> | null;
workerRunState: "idle" | "deploying" | "running";
currentExecutionId: string | null;
currentRunId: string | null;
nodeLogs: Record<string, string[]>;
nodeActionPlans: Record<string, string>;
subagentReports: { subagent_id: string; message: string; data?: Record<string, unknown>; timestamp: string }[];
@@ -309,6 +359,7 @@ function defaultAgentState(): AgentBackendState {
agentPath: null,
workerRunState: "idle",
currentExecutionId: null,
currentRunId: null,
nodeLogs: {},
nodeActionPlans: {},
subagentReports: [],
@@ -353,11 +404,8 @@ export default function Workspace() {
// tabKey is the actual key used in sessionsByAgent (may contain "::" suffix).
// Fall back to agentType for tabs persisted before this field was added.
const tabKey = tab.tabKey || tab.agentType;
// Skip new-agent tabs when starting fresh from home with a prompt
// to avoid creating duplicate sessions
if (initialPrompt && hasExplicitAgent && (tab.agentType === "new-agent" || tab.agentType.startsWith("new-agent-"))) {
continue;
}
// New-agent tabs each have a unique key (e.g. "new-agent-abc123"),
// so they never collide with the incoming tab — always restore them.
if (!initial[tabKey]) initial[tabKey] = [];
const session = createSession(tab.agentType, tab.label);
session.id = tab.id;
@@ -388,15 +436,26 @@ export default function Workspace() {
if (initial[initialAgent]?.length) {
return initial;
}
// Also check for existing tabs with instance suffixes (e.g. "agentType::instanceId")
const existingKey = Object.keys(initial).find(
k => baseAgentType(k) === initialAgent && initial[k]?.length > 0
);
if (existingKey && !initialPrompt) {
return initial;
}
// If the user submitted a new prompt from the home page, always create
// a fresh session so the prompt isn't lost into an existing session.
// initialAgent is already a unique key (e.g. "new-agent-abc123") when
// coming from home, so the new tab won't overwrite existing ones.
if (initialPrompt && hasExplicitAgent) {
const label = initialAgent.startsWith("new-agent")
const rawLabel = initialAgent.startsWith("new-agent")
? "New Agent"
: formatAgentDisplayName(initialAgent);
const existingNewAgentCount = Object.keys(initial).filter(
k => (k === "new-agent" || k.startsWith("new-agent-")) && (initial[k] || []).length > 0
).length;
const label = existingNewAgentCount === 0 ? rawLabel : `${rawLabel} #${existingNewAgentCount + 1}`;
const newSession = createSession(initialAgent, label);
initial[initialAgent] = [newSession];
return initial;
@@ -494,6 +553,8 @@ export default function Workspace() {
const [credentialAgentPath, setCredentialAgentPath] = useState<string | null>(null);
const [dismissedBanner, setDismissedBanner] = useState<string | null>(null);
const [selectedNode, setSelectedNode] = useState<GraphNode | null>(null);
const [triggerTaskDraft, setTriggerTaskDraft] = useState("");
const [triggerTaskSaving, setTriggerTaskSaving] = useState(false);
const [newTabOpen, setNewTabOpen] = useState(false);
const newTabBtnRef = useRef<HTMLButtonElement>(null);
@@ -512,6 +573,10 @@ export default function Workspace() {
// Using a ref avoids stale-closure bugs when multiple SSE events
// arrive in the same React batch.
const turnCounterRef = useRef<Record<string, number>>({});
// Per-agent queen phase ref — used to stamp each message with the phase
// it was created in (avoids stale-closure when phase change and message
// events arrive in the same React batch).
const queenPhaseRef = useRef<Record<string, string>>({});
// Synchronous ref to suppress the queen's auto-intro SSE messages
// after a cold-restore (where we already restored the conversation from disk).
@@ -658,6 +723,38 @@ export default function Workspace() {
let restoredMessageCount = 0;
// Before creating a new session, check if there's already a live backend
// session for this queen-only agent that no open tab owns.
// Skip this search when the tab has a prompt — it's a fresh agent from
// home and must always get its own session.
if (!liveSession && !coldRestoreId && !prompt) {
try {
const { sessions: allLive } = await sessionsApi.list();
const existing = allLive.find(s => !s.has_worker && !s.agent_path);
if (existing) {
const alreadyOwned = Object.values(sessionsRef.current).flat()
.some(s => s.backendSessionId === existing.session_id);
if (!alreadyOwned) {
liveSession = existing;
}
}
} catch { /* proceed to create */ }
// If no live session, check history for a cold queen-only session
if (!liveSession) {
try {
const { sessions: allHistory } = await sessionsApi.history();
const coldMatch = allHistory.find(
s => !s.agent_path && s.has_messages
);
if (coldMatch) {
coldRestoreId = coldMatch.session_id;
}
} catch { /* proceed to create fresh */ }
}
}
let restoredPhase: "planning" | "building" | "staging" | "running" | null = null;
if (!liveSession) {
// Fetch conversation history from disk BEFORE creating the new session.
// SKIP if messages were already pre-populated by handleHistoryOpen.
@@ -666,12 +763,9 @@ export default function Workspace() {
const alreadyHasMessages = (activeSess?.messages?.length ?? 0) > 0;
if (restoreFrom && !alreadyHasMessages) {
try {
const { messages: queenMsgs } = await sessionsApi.queenMessages(restoreFrom);
for (const m of queenMsgs as Message[]) {
const msg = backendMessageToChatMessage(m, agentType, "Queen Bee");
msg.role = "queen";
preRestoredMsgs.push(msg);
}
const restored = await restoreSessionMessages(restoreFrom, agentType, "Queen Bee");
preRestoredMsgs.push(...restored.messages);
restoredPhase = restored.restoredPhase;
} catch {
// Not available — will start fresh
}
@@ -741,12 +835,16 @@ export default function Workspace() {
// If no messages were actually restored, lift the intro suppression
if (restoredMessageCount === 0) suppressIntroRef.current.delete(agentType);
const qPhase = restoredPhase || liveSession.queen_phase || "planning";
queenPhaseRef.current[agentType] = qPhase;
updateAgentState(agentType, {
sessionId: liveSession.session_id,
displayName: "Queen Bee",
ready: true,
loading: false,
queenReady: true,
queenPhase: qPhase,
queenBuilding: qPhase === "building",
});
} catch (err: unknown) {
const msg = err instanceof Error ? err.message : String(err);
@@ -784,12 +882,44 @@ export default function Workspace() {
} catch {
// 404: session was explicitly stopped (via closeAgentTab) but conversation
// files likely still exist on disk. Treat it as cold so we can restore.
// Verify files exist before assuming cold — if the event-log read
// succeeds with content, the files are there.
coldRestoreId = historySourceId || storedSessionId;
}
}
// No stored session — check for a live or cold session for this agent
// that we can reuse (e.g., tab was closed but backend session survived,
// or server restarted with conversation files on disk).
if (!liveSession && !coldRestoreId) {
try {
const { sessions: allLive } = await sessionsApi.list();
const existingLive = allLive.find(s => s.agent_path.endsWith(agentPath));
if (existingLive) {
const alreadyOwned = Object.values(sessionsRef.current).flat()
.some(s => s.backendSessionId === existingLive.session_id);
if (!alreadyOwned) {
liveSession = existingLive;
isResumedSession = true;
}
}
} catch { /* proceed */ }
// If no live session, check history for a cold session to restore
if (!liveSession) {
try {
const { sessions: allHistory } = await sessionsApi.history();
const coldMatch = allHistory.find(
s => s.agent_path?.endsWith(agentPath) && s.has_messages
);
if (coldMatch) {
coldRestoreId = coldMatch.session_id;
}
} catch { /* proceed to create fresh */ }
}
}
// Track the last queen phase seen in the event log for cold restore
let restoredPhase: "planning" | "building" | "staging" | "running" | null = null;
if (!liveSession) {
// Reconnect failed — clear stale cached messages from localStorage restore.
// NEVER wipe when: (a) doing a cold restore (we'll restore from disk) or
@@ -812,29 +942,10 @@ export default function Workspace() {
// double-fetch and greeting leakage).
let preQueenMsgs: ChatMessage[] = [];
if (coldRestoreId && !alreadyHasMessages) {
try {
const { messages: queenMsgs } = await sessionsApi.queenMessages(coldRestoreId);
// Also pre-fetch worker messages from the old session if a resumable worker exists
const displayNameTemp = formatAgentDisplayName(agentPath);
for (const m of queenMsgs as Message[]) {
const msg = backendMessageToChatMessage(m, agentType, "Queen Bee");
msg.role = "queen";
preQueenMsgs.push(msg);
}
// Also try to grab worker messages while we're here
try {
const { sessions: workerSessions } = await sessionsApi.workerSessions(coldRestoreId);
const resumable = workerSessions.find(s => s.status === "active" || s.status === "paused");
if (resumable) {
const { messages: wMsgs } = await sessionsApi.messages(coldRestoreId, resumable.session_id);
for (const m of wMsgs as Message[]) {
preQueenMsgs.push(backendMessageToChatMessage(m, agentType, displayNameTemp));
}
}
} catch { /* not critical */ }
} catch {
// Not available — will start fresh
}
const displayNameTemp = formatAgentDisplayName(agentPath);
const restored = await restoreSessionMessages(coldRestoreId, agentType, displayNameTemp);
preQueenMsgs = restored.messages;
restoredPhase = restored.restoredPhase;
}
// Suppress intro whenever we are about to restore a previous conversation.
@@ -908,7 +1019,8 @@ export default function Workspace() {
// failed, the throw inside the catch exits the outer try block.
const session = liveSession!;
const displayName = formatAgentDisplayName(session.worker_name || agentType);
const initialPhase = session.queen_phase || (session.has_worker ? "staging" : "planning");
const initialPhase = restoredPhase || session.queen_phase || (session.has_worker ? "staging" : "planning");
queenPhaseRef.current[agentType] = initialPhase;
updateAgentState(agentType, {
sessionId: session.session_id,
displayName,
@@ -945,37 +1057,23 @@ export default function Workspace() {
// For cold-restore, use the old session ID. For live resume, use current session.
const historyId = coldRestoreId ?? (isResumedSession ? session.session_id : undefined);
// For LIVE resume (not cold restore), fetch worker + queen messages now.
// For LIVE resume (not cold restore), fetch event log + worker status now.
// For cold restore they were already pre-fetched above (before create) so we skip to avoid
// double-restoring and to avoid capturing the new greeting.
if (historyId && !coldRestoreId) {
const restored = await restoreSessionMessages(historyId, agentType, displayName);
restoredMsgs.push(...restored.messages);
// Check worker status (needed for isWorkerRunning flag)
try {
const { sessions: workerSessions } = await sessionsApi.workerSessions(historyId);
const resumable = workerSessions.find(
(s) => s.status === "active" || s.status === "paused",
);
isWorkerRunning = resumable?.status === "active";
if (resumable) {
const { messages } = await sessionsApi.messages(historyId, resumable.session_id);
for (const m of messages as Message[]) {
restoredMsgs.push(backendMessageToChatMessage(m, agentType, displayName));
}
}
} catch {
// Worker session listing failed — not critical
}
try {
const { messages: queenMsgs } = await sessionsApi.queenMessages(historyId);
for (const m of queenMsgs as Message[]) {
const msg = backendMessageToChatMessage(m, agentType, "Queen Bee");
msg.role = "queen";
restoredMsgs.push(msg);
}
} catch {
// Queen messages not available — not critical
}
}
// Merge messages in chronological order (only for live resume; cold restore
@@ -1105,38 +1203,79 @@ export default function Workspace() {
}
}, [agentStates, updateAgentState]);
// Poll entry points every second for agents with timers to keep
// next_fire_in countdowns fresh without re-fetching the full topology.
// Poll entry points every second to keep next_fire_in countdowns fresh
// and discover dynamically created triggers (via set_trigger).
useEffect(() => {
const id = setInterval(async () => {
for (const [agentType, sessions] of Object.entries(sessionsByAgent)) {
const session = sessions[0];
if (!session) continue;
const timerNodes = session.graphNodes.filter(
(n) => n.nodeType === "trigger" && n.triggerType === "timer",
);
if (timerNodes.length === 0) continue;
const state = agentStates[agentType];
if (!state?.sessionId) continue;
try {
const { entry_points } = await sessionsApi.entryPoints(state.sessionId);
// Skip manual triggers; keep timer/event triggers
const triggerEps = entry_points.filter(ep => ep.trigger_type !== "manual");
if (triggerEps.length === 0) continue;
const fireMap = new Map<string, number>();
for (const ep of entry_points) {
const taskMap = new Map<string, string>();
for (const ep of triggerEps) {
if (ep.next_fire_in != null) {
fireMap.set(`__trigger_${ep.id}`, ep.next_fire_in);
}
if (ep.task != null) {
taskMap.set(`__trigger_${ep.id}`, ep.task);
}
}
if (fireMap.size === 0) continue;
setSessionsByAgent((prev) => {
const ss = prev[agentType];
if (!ss?.length) return prev;
const updated = ss[0].graphNodes.map((n) => {
const existingIds = new Set(ss[0].graphNodes.map(n => n.id));
// Update existing trigger nodes
let updated = ss[0].graphNodes.map((n) => {
if (n.nodeType !== "trigger") return n;
const nfi = fireMap.get(n.id);
if (nfi == null || n.nodeType !== "trigger") return n;
return { ...n, triggerConfig: { ...n.triggerConfig, next_fire_in: nfi } };
const task = taskMap.get(n.id);
if (nfi == null && task == null) return n;
return {
...n,
triggerConfig: {
...n.triggerConfig,
...(nfi != null ? { next_fire_in: nfi } : {}),
...(task != null ? { task } : {}),
},
};
});
// Discover new triggers not yet in the graph
const entryNode = ss[0].graphNodes.find(n => n.nodeType !== "trigger")?.id;
const newNodes: GraphNode[] = [];
for (const ep of triggerEps) {
const nodeId = `__trigger_${ep.id}`;
if (existingIds.has(nodeId)) continue;
newNodes.push({
id: nodeId,
label: ep.name || ep.id,
status: "pending",
nodeType: "trigger",
triggerType: ep.trigger_type,
triggerConfig: {
...ep.trigger_config,
...(ep.next_fire_in != null ? { next_fire_in: ep.next_fire_in } : {}),
...(ep.task ? { task: ep.task } : {}),
},
...(entryNode ? { next: [entryNode] } : {}),
});
}
if (newNodes.length > 0) {
updated = [...newNodes, ...updated];
}
// Skip update if nothing changed
if (updated.every((n, idx) => n === ss[0].graphNodes[idx])) return prev;
if (newNodes.length === 0 && updated.every((n, idx) => n === ss[0].graphNodes[idx])) return prev;
return {
...prev,
[agentType]: ss.map((s, i) => (i === 0 ? { ...s, graphNodes: updated } : s)),
@@ -1275,7 +1414,7 @@ export default function Workspace() {
// --- SSE event handler ---
const upsertChatMessage = useCallback(
(agentType: string, chatMsg: ChatMessage) => {
(agentType: string, chatMsg: ChatMessage, options?: { reconcileOptimisticUser?: boolean }) => {
setSessionsByAgent((prev) => {
const sessions = prev[agentType] || [];
const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
@@ -1291,6 +1430,25 @@ export default function Workspace() {
i === idx ? { ...chatMsg, createdAt: m.createdAt ?? chatMsg.createdAt } : m,
);
} else {
const shouldReconcileOptimisticUser =
!!options?.reconcileOptimisticUser && chatMsg.type === "user" && s.messages.length > 0;
if (shouldReconcileOptimisticUser) {
const lastIdx = s.messages.length - 1;
const lastMsg = s.messages[lastIdx];
const incomingTs = chatMsg.createdAt ?? Date.now();
const lastTs = lastMsg.createdAt ?? incomingTs;
const sameMessage =
lastMsg.type === "user"
&& lastMsg.content === chatMsg.content
&& Math.abs(incomingTs - lastTs) <= 15000;
if (sameMessage) {
newMessages = s.messages.map((m, i) =>
i === lastIdx ? { ...m, id: chatMsg.id } : m,
);
return { ...s, messages: newMessages };
}
}
// Append — SSE events arrive in server-timestamp order via the
// shared EventBus, so arrival order already interleaves queen
// and worker correctly. Local user messages are always created
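The reconciliation branch above can be exercised in isolation. `reconcile` is a hypothetical extraction of the diff's logic, with the 15-second window taken from the hunk: when the incoming SSE user message matches the last optimistic local one, adopt the server ID instead of appending a duplicate.

```typescript
type Msg = { id: string; type?: string; content: string; createdAt?: number };

// If the incoming user message matches the last locally appended one
// (same content, timestamps within 15 s), swap in the server-assigned ID;
// otherwise append as a new message.
function reconcile(messages: Msg[], incoming: Msg): Msg[] {
  const last = messages[messages.length - 1];
  const incomingTs = incoming.createdAt ?? Date.now();
  const lastTs = last?.createdAt ?? incomingTs;
  const same =
    last?.type === "user" &&
    last.content === incoming.content &&
    Math.abs(incomingTs - lastTs) <= 15000;
  return same
    ? messages.map((m, i) => (i === messages.length - 1 ? { ...m, id: incoming.id } : m))
    : [...messages, incoming];
}

const local: Msg[] = [{ id: "local-1", type: "user", content: "hi", createdAt: 1000 }];
const fromServer: Msg = { id: "user-input-42", type: "user", content: "hi", createdAt: 2000 };
console.log(reconcile(local, fromServer).map((m) => m.id)); // ["user-input-42"]
```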
@@ -1308,8 +1466,6 @@ export default function Workspace() {
const handleSSEEvent = useCallback(
(agentType: string, event: AgentEvent) => {
const streamId = event.stream_id;
if (streamId === "judge") return;
const isQueen = streamId === "queen";
// Drop queen message content while suppressing the auto-intro after a cold-restore.
@@ -1345,6 +1501,23 @@ export default function Workspace() {
if (Object.keys(priorSnapshots).length > 0) {
console.debug(`[hive] execution_started: dropping ${Object.keys(priorSnapshots).length} unflushed LLM snapshot(s)`);
}
// Insert a run divider when a new run_id is detected
const incomingRunId = event.run_id || null;
const prevRunId = agentStates[agentType]?.currentRunId;
if (incomingRunId && incomingRunId !== prevRunId) {
const dividerMsg: ChatMessage = {
id: `run-divider-${incomingRunId}`,
agent: "",
agentColor: "",
content: prevRunId ? "New Run" : "Run Started",
timestamp: ts,
type: "run_divider",
role: "worker",
thread: agentType,
createdAt: eventCreatedAt,
};
upsertChatMessage(agentType, dividerMsg);
}
turnCounterRef.current[turnKey] = currentTurn + 1;
updateAgentState(agentType, {
isTyping: true,
@@ -1353,6 +1526,7 @@ export default function Workspace() {
awaitingInput: false,
workerRunState: "running",
currentExecutionId: event.execution_id || agentStates[agentType]?.currentExecutionId || null,
currentRunId: incomingRunId,
nodeLogs: {},
subagentReports: [],
llmSnapshots: {},
@@ -1404,13 +1578,19 @@ export default function Workspace() {
case "execution_paused":
case "execution_failed":
case "client_output_delta":
case "client_input_received":
case "client_input_requested":
case "llm_text_delta": {
const chatMsg = sseEventToChatMessage(event, agentType, displayName, currentTurn);
if (chatMsg && !suppressQueenMessages) {
if (isQueen) chatMsg.role = role;
upsertChatMessage(agentType, chatMsg);
if (isQueen) {
chatMsg.role = role;
chatMsg.phase = queenPhaseRef.current[agentType] as ChatMessage["phase"];
}
upsertChatMessage(agentType, chatMsg, {
reconcileOptimisticUser: event.type === "client_input_received",
});
}
// Mark streaming when LLM text is actively arriving
@@ -1850,14 +2030,19 @@ export default function Workspace() {
: rawPhase === "staging" ? "staging"
: rawPhase === "planning" ? "planning"
: "building";
queenPhaseRef.current[agentType] = newPhase;
updateAgentState(agentType, {
queenPhase: newPhase,
queenBuilding: newPhase === "building",
// Sync workerRunState so the RunButton reflects the phase
workerRunState: newPhase === "running" ? "running" : "idle",
// Clear draft graph once we leave planning; also clear dedup refs
// so re-entering planning or re-fetching flowchart map works
...(newPhase !== "planning" ? { draftGraph: null } : { originalDraft: null, flowchartMap: null }),
// Clear draft graph once we leave planning/building; keep it during
// building so the DraftGraph can show a loading overlay.
...(newPhase !== "planning" && newPhase !== "building"
? { draftGraph: null }
: newPhase === "planning"
? { originalDraft: null, flowchartMap: null }
: {}),
// Store agent path for credential queries
...(eventAgentPath ? { agentPath: eventAgentPath } : {}),
});
@@ -1946,6 +2131,136 @@ export default function Workspace() {
break;
}
case "trigger_activated": {
const triggerId = event.data?.trigger_id as string;
if (triggerId) {
const nodeId = `__trigger_${triggerId}`;
// If the trigger node doesn't exist yet (dynamically created via set_trigger),
// synthesize it before updating status.
setSessionsByAgent(prev => {
const sessions = prev[agentType] || [];
const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
return {
...prev,
[agentType]: sessions.map(s => {
if (s.id !== activeId) return s;
const exists = s.graphNodes.some(n => n.id === nodeId);
if (exists) {
return {
...s,
graphNodes: s.graphNodes.map(n =>
n.id === nodeId ? { ...n, status: "running" as const } : n,
),
};
}
// Synthesize new trigger node at the front of the graph
const triggerType = (event.data?.trigger_type as string) || "timer";
const triggerConfig = (event.data?.trigger_config as Record<string, unknown>) || {};
const entryNode = s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
const newNode: GraphNode = {
id: nodeId,
label: triggerId,
status: "running",
nodeType: "trigger",
triggerType,
triggerConfig,
...(entryNode ? { next: [entryNode] } : {}),
};
return { ...s, graphNodes: [newNode, ...s.graphNodes] };
}),
};
});
}
break;
}
case "trigger_deactivated": {
const triggerId = event.data?.trigger_id as string;
if (triggerId) {
// Clear next_fire_in so countdown hides when inactive
setSessionsByAgent(prev => {
const sessions = prev[agentType] || [];
const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
return {
...prev,
[agentType]: sessions.map(s => {
if (s.id !== activeId) return s;
return {
...s,
graphNodes: s.graphNodes.map(n => {
if (n.id !== `__trigger_${triggerId}`) return n;
const { next_fire_in: _, ...restConfig } = (n.triggerConfig || {}) as Record<string, unknown> & { next_fire_in?: unknown };
return { ...n, status: "pending" as const, triggerConfig: restConfig };
}),
};
}),
};
});
}
break;
}
case "trigger_fired": {
const triggerId = event.data?.trigger_id as string;
if (triggerId) {
const nodeId = `__trigger_${triggerId}`;
updateGraphNodeStatus(agentType, nodeId, "complete");
setTimeout(() => updateGraphNodeStatus(agentType, nodeId, "running"), 1500);
}
break;
}
case "trigger_available": {
const triggerId = event.data?.trigger_id as string;
if (triggerId) {
const nodeId = `__trigger_${triggerId}`;
setSessionsByAgent(prev => {
const sessions = prev[agentType] || [];
const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
return {
...prev,
[agentType]: sessions.map(s => {
if (s.id !== activeId) return s;
if (s.graphNodes.some(n => n.id === nodeId)) return s;
const triggerType = (event.data?.trigger_type as string) || "timer";
const triggerConfig = (event.data?.trigger_config as Record<string, unknown>) || {};
const entryNode = s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
const newNode: GraphNode = {
id: nodeId,
label: triggerId,
status: "pending",
nodeType: "trigger",
triggerType,
triggerConfig,
...(entryNode ? { next: [entryNode] } : {}),
};
return { ...s, graphNodes: [newNode, ...s.graphNodes] };
}),
};
});
}
break;
}
case "trigger_removed": {
const triggerId = event.data?.trigger_id as string;
if (triggerId) {
const nodeId = `__trigger_${triggerId}`;
setSessionsByAgent(prev => {
const sessions = prev[agentType] || [];
const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
return {
...prev,
[agentType]: sessions.map(s => {
if (s.id !== activeId) return s;
return { ...s, graphNodes: s.graphNodes.filter(n => n.id !== nodeId) };
}),
};
});
}
break;
}
default:
// Fallback: ensure queenReady is set even for unexpected first events
if (shouldMarkQueenReady) updateAgentState(agentType, { queenReady: true });
@@ -1976,6 +2291,18 @@ export default function Workspace() {
? { nodes: activeSession.graphNodes, title: activeAgentState?.displayName || formatAgentDisplayName(baseAgentType(activeWorker)) }
: { nodes: [] as GraphNode[], title: "" };
// Keep selectedNode in sync with live graphNodes (trigger status updates via SSE)
const liveSelectedNode = selectedNode && currentGraph.nodes.find(n => n.id === selectedNode.id);
const resolvedSelectedNode = liveSelectedNode || selectedNode;
// Sync trigger task draft when selected trigger node changes
useEffect(() => {
if (resolvedSelectedNode?.nodeType === "trigger") {
const tc = resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined;
setTriggerTaskDraft((tc?.task as string) || "");
}
}, [resolvedSelectedNode?.id]);
// Build a flat list of all agent-type tabs for the tab bar
const agentTabs = Object.entries(sessionsByAgent)
.filter(([, sessions]) => sessions.length > 0)
@@ -2272,7 +2599,7 @@ export default function Workspace() {
const closeAgentTab = useCallback((agentType: string) => {
setSelectedNode(null);
// Pause worker execution if running (saves checkpoint), then kill the
// entire backend session so the queen and judge don't keep running.
// entire backend session so the queen doesn't keep running.
const state = agentStates[agentType];
if (state?.sessionId) {
const pausePromise = (state.currentExecutionId && state.workerRunState === "running")
@@ -2312,28 +2639,37 @@ export default function Workspace() {
}
}, [sessionsByAgent, activeWorker, navigate, agentStates]);
// Create a new session for any agent type (used by NewTabPopover)
// Open a tab for an agent type. If a tab already exists, switch to it
// instead of creating a duplicate — each agent gets one session.
// Exception: "new-agent" tabs always create a new instance since each
// represents a distinct conversation the user is starting from scratch.
const addAgentSession = useCallback((agentType: string, agentLabel?: string) => {
// Count all existing open tabs for this base agent type (first tab uses agentType
// as key; subsequent tabs use "agentType::frontendSessionId" as unique keys).
const existingTabCount = Object.keys(sessionsByAgent).filter(
k => baseAgentType(k) === agentType && (sessionsByAgent[k] || []).length > 0,
).length;
const isNewAgent = agentType === "new-agent" || agentType.startsWith("new-agent-");
const newIndex = existingTabCount + 1;
const existingCreds = sessionsByAgent[agentType]?.[0]?.credentials;
const displayLabel = agentLabel || formatAgentDisplayName(agentType);
const label = newIndex === 1 ? displayLabel : `${displayLabel} #${newIndex}`;
const newSession = createSession(agentType, label, existingCreds);
// First tab keeps agentType as its key (backward-compatible with all existing
// logic). Additional tabs get a unique key so each has its own isolated
// agentStates slot, its own backend session, and its own tab-bar entry.
const tabKey = existingTabCount === 0 ? agentType : `${agentType}::${newSession.id}`;
if (tabKey !== agentType) {
newSession.tabKey = tabKey;
if (!isNewAgent) {
const existingTabKey = Object.keys(sessionsByAgent).find(
k => baseAgentType(k) === agentType && (sessionsByAgent[k] || []).length > 0,
);
if (existingTabKey) {
setActiveWorker(existingTabKey);
const existing = sessionsByAgent[existingTabKey]?.[0];
if (existing) {
setActiveSessionByAgent(prev => ({ ...prev, [existingTabKey]: existing.id }));
}
return;
}
}
const tabKey = isNewAgent ? `new-agent-${makeId()}` : agentType;
const existingNewAgentCount = isNewAgent
? Object.keys(sessionsByAgent).filter(
k => (k === "new-agent" || k.startsWith("new-agent-")) && (sessionsByAgent[k] || []).length > 0
).length
: 0;
const rawLabel = agentLabel || (isNewAgent ? "New Agent" : formatAgentDisplayName(agentType));
const displayLabel = existingNewAgentCount === 0 ? rawLabel : `${rawLabel} #${existingNewAgentCount + 1}`;
const newSession = createSession(tabKey, displayLabel);
setSessionsByAgent(prev => ({
...prev,
[tabKey]: [newSession],
@@ -2361,16 +2697,13 @@ export default function Workspace() {
}
// Pre-fetch messages from disk so the tab opens with conversation already shown.
// This happens BEFORE creating the tab so no "new session" empty state is visible.
// Prefer the persisted event log for full UI reconstruction; fall back to parts.
let prefetchedMessages: ChatMessage[] = [];
try {
const { messages: queenMsgs } = await sessionsApi.queenMessages(sessionId);
for (const m of queenMsgs as Message[]) {
const resolvedType = agentPath || "new-agent";
const msg = backendMessageToChatMessage(m, resolvedType, "Queen Bee");
msg.role = "queen";
prefetchedMessages.push(msg);
}
const resolvedType = agentPath || "new-agent";
const displayNameTemp = agentName || formatAgentDisplayName(resolvedType);
const restored = await restoreSessionMessages(sessionId, resolvedType, displayNameTemp);
prefetchedMessages = restored.messages;
if (prefetchedMessages.length > 0) {
prefetchedMessages.sort((a, b) => (a.createdAt ?? 0) - (b.createdAt ?? 0));
}
@@ -2486,13 +2819,17 @@ export default function Workspace() {
<div className="flex flex-1 min-h-0">
{/* ── Pipeline graph + chat ──────────────────────────────────── */}
<div className={`${(activeAgentState?.queenPhase === "planning" && activeAgentState?.draftGraph) || activeAgentState?.originalDraft ? "w-[500px] min-w-[400px]" : "w-[300px] min-w-[240px]"} bg-card/30 flex flex-col border-r border-border/30 transition-[width] duration-200`}>
<div className={`${activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building" || activeAgentState?.originalDraft ? "w-[500px] min-w-[400px]" : "w-[300px] min-w-[240px]"} bg-card/30 flex flex-col border-r border-border/30 transition-[width] duration-200`}>
<div className="flex-1 min-h-0">
{activeAgentState?.queenPhase === "planning" && activeAgentState.draftGraph ? (
<DraftGraph draft={activeAgentState.draftGraph} />
{activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building" ? (
<DraftGraph draft={activeAgentState?.draftGraph ?? null} loading={!activeAgentState?.draftGraph} building={activeAgentState?.queenBuilding} onRun={handleRun} onPause={handlePause} runState={activeAgentState?.workerRunState ?? "idle"} />
) : activeAgentState?.originalDraft ? (
<DraftGraph
draft={activeAgentState.originalDraft}
building={activeAgentState?.queenBuilding}
onRun={handleRun}
onPause={handlePause}
runState={activeAgentState?.workerRunState ?? "idle"}
flowchartMap={activeAgentState.flowchartMap ?? undefined}
runtimeNodes={currentGraph.nodes}
onRuntimeNodeClick={(runtimeNodeId) => {
@@ -2594,20 +2931,32 @@ export default function Workspace() {
/>
)}
</div>
{selectedNode && (
{resolvedSelectedNode && (
<div className="w-[480px] min-w-[400px] flex-shrink-0">
{selectedNode.nodeType === "trigger" ? (
{resolvedSelectedNode.nodeType === "trigger" ? (
<div className="flex flex-col h-full border-l border-border/40 bg-card/20 animate-in slide-in-from-right">
<div className="px-4 pt-4 pb-3 border-b border-border/30 flex items-start justify-between gap-2">
<div className="flex items-start gap-3 min-w-0">
<div className="w-8 h-8 rounded-lg flex items-center justify-center flex-shrink-0 mt-0.5 bg-[hsl(210,40%,55%)]/15 border border-[hsl(210,40%,55%)]/25">
<span className="text-sm" style={{ color: "hsl(210,40%,55%)" }}>
{{ "webhook": "\u26A1", "timer": "\u23F1", "api": "\u2192", "event": "\u223F" }[selectedNode.triggerType || ""] || "\u26A1"}
{{ "webhook": "\u26A1", "timer": "\u23F1", "api": "\u2192", "event": "\u223F" }[resolvedSelectedNode.triggerType || ""] || "\u26A1"}
</span>
</div>
<div className="min-w-0">
<h3 className="text-sm font-semibold text-foreground leading-tight">{selectedNode.label}</h3>
<p className="text-[11px] text-muted-foreground mt-0.5 capitalize">{selectedNode.triggerType} trigger</p>
<h3 className="text-sm font-semibold text-foreground leading-tight">{resolvedSelectedNode.label}</h3>
<p className="text-[11px] text-muted-foreground mt-0.5 capitalize flex items-center gap-1.5">
{resolvedSelectedNode.triggerType} trigger
<span className={`inline-block w-1.5 h-1.5 rounded-full ${
resolvedSelectedNode.status === "running" || resolvedSelectedNode.status === "complete"
? "bg-emerald-400" : "bg-muted-foreground/40"
}`} />
<span className={`text-[10px] ${
resolvedSelectedNode.status === "running" || resolvedSelectedNode.status === "complete"
? "text-emerald-400" : "text-muted-foreground/60"
}`}>
{resolvedSelectedNode.status === "running" || resolvedSelectedNode.status === "complete" ? "active" : "inactive"}
</span>
</p>
</div>
</div>
<button onClick={() => setSelectedNode(null)} className="p-1 rounded-md text-muted-foreground hover:text-foreground hover:bg-muted/50 transition-colors flex-shrink-0">
@@ -2616,7 +2965,7 @@ export default function Workspace() {
</div>
<div className="px-4 py-4 flex flex-col gap-3">
{(() => {
const tc = selectedNode.triggerConfig as Record<string, unknown> | undefined;
const tc = resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined;
const cron = tc?.cron as string | undefined;
const interval = tc?.interval_minutes as number | undefined;
const eventTypes = tc?.event_types as string[] | undefined;
@@ -2637,7 +2986,7 @@ export default function Workspace() {
) : null;
})()}
{(() => {
const nfi = (selectedNode.triggerConfig as Record<string, unknown> | undefined)?.next_fire_in as number | undefined;
const nfi = (resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined)?.next_fire_in as number | undefined;
return nfi != null ? (
<div>
<p className="text-[10px] font-medium text-muted-foreground uppercase tracking-wider mb-1.5">Next run</p>
@@ -2647,25 +2996,92 @@ export default function Workspace() {
</div>
) : null;
})()}
<div>
<p className="text-[10px] font-medium text-muted-foreground uppercase tracking-wider mb-1.5">Task</p>
<textarea
value={triggerTaskDraft}
onChange={(e) => setTriggerTaskDraft(e.target.value)}
placeholder="Describe what the worker should do when this trigger fires..."
className="w-full text-xs text-foreground/80 bg-muted/30 rounded-lg px-3 py-2 border border-border/20 resize-none min-h-[60px] font-mono focus:outline-none focus:border-primary/40"
rows={3}
/>
{(() => {
const currentTask = (resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined)?.task as string || "";
const hasChanged = triggerTaskDraft !== currentTask;
if (!hasChanged) return null;
return (
<button
disabled={triggerTaskSaving}
onClick={async () => {
const sessionId = activeAgentState?.sessionId;
const triggerId = resolvedSelectedNode.id.replace("__trigger_", "");
if (!sessionId) return;
setTriggerTaskSaving(true);
try {
await sessionsApi.updateTriggerTask(sessionId, triggerId, triggerTaskDraft);
} finally {
setTriggerTaskSaving(false);
}
}}
className="mt-1.5 w-full text-[11px] px-3 py-1.5 rounded-lg border border-primary/30 text-primary hover:bg-primary/10 transition-colors disabled:opacity-50"
>
{triggerTaskSaving ? "Saving..." : "Save Task"}
</button>
);
})()}
{!triggerTaskDraft && (
<p className="text-[10px] text-amber-400/80 mt-1">A task is required before enabling this trigger.</p>
)}
</div>
<div>
<p className="text-[10px] font-medium text-muted-foreground uppercase tracking-wider mb-1.5">Fires into</p>
<p className="text-xs text-foreground/80 font-mono bg-muted/30 rounded-lg px-3 py-2 border border-border/20">
{selectedNode.next?.[0]?.split("-").map(w => w.charAt(0).toUpperCase() + w.slice(1)).join(" ") || "—"}
{resolvedSelectedNode.next?.[0]?.split("-").map(w => w.charAt(0).toUpperCase() + w.slice(1)).join(" ") || "—"}
</p>
</div>
{activeAgentState?.queenPhase !== "building" && (() => {
const triggerIsActive = resolvedSelectedNode.status === "running" || resolvedSelectedNode.status === "complete";
const triggerId = resolvedSelectedNode.id.replace("__trigger_", "");
const taskMissing = !triggerTaskDraft;
return (
<div className="pt-1">
<button
disabled={!triggerIsActive && taskMissing}
onClick={async () => {
const sessionId = activeAgentState?.sessionId;
if (!sessionId) return;
const action = triggerIsActive ? "Disable" : "Enable";
await executionApi.chat(sessionId, `${action} trigger ${triggerId}`);
}}
className={`w-full text-xs px-3 py-2 rounded-lg border transition-colors ${
triggerIsActive
? "border-red-500/30 text-red-400 hover:bg-red-500/10"
: taskMissing
? "border-border/30 text-muted-foreground/40 cursor-not-allowed"
: "border-emerald-500/30 text-emerald-400 hover:bg-emerald-500/10"
}`}
>
{triggerIsActive ? "Disable Trigger" : "Enable Trigger"}
</button>
{!triggerIsActive && taskMissing && (
<p className="text-[10px] text-muted-foreground/50 mt-1 text-center">Configure a task first</p>
)}
</div>
);
})()}
</div>
</div>
) : (
<NodeDetailPanel
node={selectedNode}
nodeSpec={activeAgentState?.nodeSpecs.find(n => n.id === selectedNode.id) ?? null}
node={resolvedSelectedNode}
nodeSpec={activeAgentState?.nodeSpecs.find(n => n.id === resolvedSelectedNode.id) ?? null}
allNodeSpecs={activeAgentState?.nodeSpecs}
subagentReports={activeAgentState?.subagentReports}
sessionId={activeAgentState?.sessionId || undefined}
graphId={activeAgentState?.graphId || undefined}
workerSessionId={null}
nodeLogs={activeAgentState?.nodeLogs[selectedNode.id] || []}
actionPlan={activeAgentState?.nodeActionPlans[selectedNode.id]}
nodeLogs={activeAgentState?.nodeLogs[resolvedSelectedNode.id] || []}
actionPlan={activeAgentState?.nodeActionPlans[resolvedSelectedNode.id]}
onClose={() => setSelectedNode(null)}
/>
)}
@@ -2679,7 +3095,15 @@ export default function Workspace() {
agentLabel={activeWorkerLabel}
agentPath={credentialAgentPath || activeAgentState?.agentPath || (!activeWorker.startsWith("new-agent") ? activeWorker : undefined)}
open={credentialsOpen}
onClose={() => { setCredentialsOpen(false); setCredentialAgentPath(null); setDismissedBanner(null); }}
onClose={() => {
setCredentialsOpen(false);
setCredentialAgentPath(null);
// Keep credentials_required error set — clearing it here triggers
// the auto-load effect which retries session creation immediately,
// causing an infinite modal loop when credentials are still missing.
// The error is only cleared in onCredentialChange (below) when the
// user actually saves valid credentials.
}}
credentials={activeSession?.credentials || []}
onCredentialChange={() => {
// Clear credential error so the auto-load effect retries session creation
+1 -1
@@ -1,6 +1,6 @@
[project]
name = "framework"
version = "0.5.1"
version = "0.7.1"
description = "Goal-driven agent runtime with Builder-friendly observability"
readme = "README.md"
requires-python = ">=3.11"
-140
@@ -1,140 +0,0 @@
#!/usr/bin/env python3
"""
Setup script for Aden Hive Framework MCP Server
This script installs the framework and configures the MCP server.
"""
import json
import logging
import subprocess
import sys
from pathlib import Path
logger = logging.getLogger(__name__)
def setup_logger():
"""Configure logger for CLI usage with colored output."""
if not logger.handlers:
handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter("%(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
class Colors:
"""ANSI color codes for terminal output."""
GREEN = "\033[0;32m"
YELLOW = "\033[1;33m"
RED = "\033[0;31m"
BLUE = "\033[0;34m"
NC = "\033[0m" # No Color
def log_step(message: str):
"""Log a colored step message."""
logger.info(f"{Colors.YELLOW}{message}{Colors.NC}")
def log_success(message: str):
"""Log a success message."""
logger.info(f"{Colors.GREEN}{message}{Colors.NC}")
def log_error(message: str):
"""Log an error message."""
logger.error(f"{Colors.RED}{message}{Colors.NC}")
def run_command(cmd: list, error_msg: str) -> bool:
"""Run a command and return success status."""
try:
subprocess.run(
cmd,
check=True,
capture_output=True,
text=True,
encoding="utf-8",
)
return True
except subprocess.CalledProcessError as e:
log_error(error_msg)
logger.error(f"Error output: {e.stderr}")
return False
def main():
"""Main setup function."""
setup_logger()
logger.info("=== Aden Hive Framework MCP Server Setup ===")
logger.info("")
# Get script directory
script_dir = Path(__file__).parent.absolute()
# Step 1: Install framework package
log_step("Step 1: Installing framework package...")
if not run_command(
[sys.executable, "-m", "pip", "install", "-e", str(script_dir)],
"Failed to install framework package",
):
sys.exit(1)
log_success("Framework package installed")
logger.info("")
# Step 2: Install MCP dependencies
log_step("Step 2: Installing MCP dependencies...")
if not run_command(
[sys.executable, "-m", "pip", "install", "mcp", "fastmcp"],
"Failed to install MCP dependencies",
):
sys.exit(1)
log_success("MCP dependencies installed")
logger.info("")
# Step 3: Verify MCP configuration
log_step("Step 3: Verifying MCP server configuration...")
mcp_config_path = script_dir / ".mcp.json"
if mcp_config_path.exists():
log_success("MCP configuration found at .mcp.json")
logger.info("Configuration:")
with open(mcp_config_path, encoding="utf-8") as f:
config = json.load(f)
logger.info(json.dumps(config, indent=2))
else:
log_success("No .mcp.json needed (MCP servers configured at repo root)")
logger.info("")
# Step 4: Test framework import
log_step("Step 4: Testing framework import...")
try:
subprocess.run(
[sys.executable, "-c", "import framework; print('OK')"],
check=True,
capture_output=True,
text=True,
encoding="utf-8",
)
log_success("Framework module verified")
except subprocess.CalledProcessError as e:
log_error("Failed to import framework module")
logger.error(f"Error: {e.stderr}")
sys.exit(1)
logger.info("")
# Success summary
logger.info(f"{Colors.GREEN}=== Setup Complete ==={Colors.NC}")
logger.info("")
logger.info("The framework is now ready to use!")
logger.info("")
logger.info(f"{Colors.BLUE}MCP Configuration location:{Colors.NC}")
logger.info(f" {mcp_config_path}")
logger.info("")
if __name__ == "__main__":
main()
+44
@@ -0,0 +1,44 @@
# Dummy Agent Tests (Level 2)
End-to-end tests that make real LLM calls against deterministic graph structures. Not part of CI — run manually to verify the executor works with real providers.
## Quick Start
```bash
cd core
uv run python tests/dummy_agents/run_all.py
```
The script detects available credentials and prompts you to pick a provider. You need at least one of:
- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GEMINI_API_KEY`
- `ZAI_API_KEY`
- Claude Code / Codex / Kimi subscription
## Verbose Mode
Show live LLM logs (tool calls, judge verdicts, node traversal):
```bash
uv run python tests/dummy_agents/run_all.py --verbose
```
## What's Tested
| Agent | Tests | What it covers |
|-------|-------|----------------|
| echo | 2 | Single-node lifecycle, basic set_output |
| pipeline | 4 | Multi-node traversal, input_mapping, conversation modes |
| branch | 3 | Conditional edges, LLM-driven routing |
| parallel_merge | 4 | Fan-out/fan-in, failure strategies |
| retry | 4 | Retry mechanics, exhaustion, ON_FAILURE edges |
| feedback_loop | 3 | Feedback cycles, max_node_visits |
| worker | 4 | Real MCP tools (example_tool, get_current_time, save_data/load_data) |
## Notes
- Tests are **auto-skipped** in regular `pytest` runs (no LLM configured)
- Worker tests start the `hive-tools` MCP server as a subprocess
- Typical runtime: ~1-3 min depending on provider
+3
@@ -0,0 +1,3 @@
# Level 2: Dummy Agent Tests
# End-to-end graph execution tests with real LLM calls.
# NOT part of regular CI — run manually with: uv run python tests/dummy_agents/run_all.py
+140
@@ -0,0 +1,140 @@
"""Shared fixtures for dummy agent end-to-end tests.
These tests use real LLM providers, so they are NOT part of regular CI.
Run via: cd core && uv run python tests/dummy_agents/run_all.py
"""
from __future__ import annotations
from pathlib import Path
import pytest
from framework.graph.executor import GraphExecutor, ParallelExecutionConfig
from framework.graph.goal import Goal
from framework.llm.litellm import LiteLLMProvider
from framework.runtime.core import Runtime
# ── module-level state set by run_all.py ─────────────────────────────
_selected_model: str | None = None
_selected_api_key: str | None = None
_selected_extra_headers: dict[str, str] | None = None
_selected_api_base: str | None = None
def set_llm_selection(
model: str,
api_key: str,
extra_headers: dict[str, str] | None = None,
api_base: str | None = None,
) -> None:
"""Called by run_all.py after user selects a provider."""
global _selected_model, _selected_api_key, _selected_extra_headers, _selected_api_base
_selected_model = model
_selected_api_key = api_key
_selected_extra_headers = extra_headers
_selected_api_base = api_base
# ── collection hook: skip entire directory when not configured ───────
def pytest_collection_modifyitems(config, items):
"""Skip all dummy_agents tests when no LLM is configured.
This prevents these tests from running in regular CI. They only run
when launched via run_all.py (which calls set_llm_selection first).
"""
if _selected_model is not None:
return # LLM configured, run normally
skip = pytest.mark.skip(
reason="Dummy agent tests require a real LLM. "
"Run via: cd core && uv run python tests/dummy_agents/run_all.py"
)
for item in items:
if "dummy_agents" in str(item.fspath):
item.add_marker(skip)
# ── fixtures ─────────────────────────────────────────────────────────
@pytest.fixture(scope="session")
def llm_provider():
"""Real LLM provider using the user-selected model."""
if _selected_model is None or _selected_api_key is None:
pytest.skip("No LLM selected — run via run_all.py")
kwargs = {"model": _selected_model, "api_key": _selected_api_key}
if _selected_extra_headers:
kwargs["extra_headers"] = _selected_extra_headers
if _selected_api_base:
kwargs["api_base"] = _selected_api_base
return LiteLLMProvider(**kwargs)
@pytest.fixture(scope="session")
def tool_registry():
"""Load hive-tools MCP server and return a ToolRegistry with real tools.
Session-scoped so the MCP server is started once and reused across tests.
"""
from framework.runner.tool_registry import ToolRegistry
registry = ToolRegistry()
# Resolve the tools directory relative to the repo root
repo_root = Path(__file__).resolve().parents[3] # core/tests/dummy_agents -> repo root
tools_dir = repo_root / "tools"
mcp_config = {
"name": "hive-tools",
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": str(tools_dir),
"description": "Hive tools MCP server",
}
registry.register_mcp_server(mcp_config)
yield registry
registry.cleanup()
@pytest.fixture
def runtime(tmp_path):
"""Real Runtime backed by a temp directory."""
return Runtime(storage_path=tmp_path / "runtime")
@pytest.fixture
def goal():
return Goal(id="dummy", name="Dummy Agent Test", description="Level 2 end-to-end testing")
def make_executor(
runtime: Runtime,
llm: LiteLLMProvider,
*,
enable_parallel: bool = True,
parallel_config: ParallelExecutionConfig | None = None,
loop_config: dict | None = None,
tool_registry=None,
storage_path: Path | None = None,
) -> GraphExecutor:
"""Factory that creates a GraphExecutor with a real LLM."""
tools = []
tool_executor = None
if tool_registry is not None:
tools = list(tool_registry.get_tools().values())
tool_executor = tool_registry.get_executor()
return GraphExecutor(
runtime=runtime,
llm=llm,
tools=tools,
tool_executor=tool_executor,
enable_parallel_execution=enable_parallel,
parallel_config=parallel_config,
loop_config=loop_config or {"max_iterations": 10},
storage_path=storage_path,
)
+64
@@ -0,0 +1,64 @@
"""Minimal helper nodes for deterministic control-flow tests.
Most tests use real EventLoopNode with real LLM calls. These helpers
exist only for tests that need predictable failure/success patterns
(retry, feedback loop, parallel failure modes).
"""
from __future__ import annotations
from framework.graph.node import NodeContext, NodeProtocol, NodeResult
class SuccessNode(NodeProtocol):
"""Always succeeds with configurable output dict."""
def __init__(self, output: dict | None = None):
self._output = output or {"status": "ok"}
self.executed = False
self.execute_count = 0
async def execute(self, ctx: NodeContext) -> NodeResult:
self.executed = True
self.execute_count += 1
return NodeResult(success=True, output=self._output, tokens_used=1, latency_ms=1)
class FailNode(NodeProtocol):
"""Always fails with configurable error."""
def __init__(self, error: str = "node failed"):
self._error = error
self.attempt_count = 0
async def execute(self, ctx: NodeContext) -> NodeResult:
self.attempt_count += 1
return NodeResult(success=False, error=self._error)
class FlakyNode(NodeProtocol):
"""Fails N times then succeeds. For retry tests."""
def __init__(self, fail_times: int = 2, output: dict | None = None):
self.fail_times = fail_times
self._output = output or {"status": "recovered"}
self.attempt_count = 0
async def execute(self, ctx: NodeContext) -> NodeResult:
self.attempt_count += 1
if self.attempt_count <= self.fail_times:
return NodeResult(success=False, error=f"fail #{self.attempt_count}")
return NodeResult(success=True, output=self._output, tokens_used=1, latency_ms=1)
class StatefulNode(NodeProtocol):
"""Returns different outputs on successive calls. For feedback loop tests."""
def __init__(self, outputs: list[NodeResult]):
self._outputs = outputs
self.call_count = 0
async def execute(self, ctx: NodeContext) -> NodeResult:
idx = min(self.call_count, len(self._outputs) - 1)
self.call_count += 1
return self._outputs[idx]
+359
@@ -0,0 +1,359 @@
#!/usr/bin/env python3
"""Runner for Level 2 dummy agent tests with interactive LLM provider selection.
This is NOT part of regular CI. It makes real LLM API calls.
Usage:
cd core && uv run python tests/dummy_agents/run_all.py
cd core && uv run python tests/dummy_agents/run_all.py --verbose
"""
from __future__ import annotations
import os
import sys
import time
import xml.etree.ElementTree as ET
from pathlib import Path
from tempfile import NamedTemporaryFile
TESTS_DIR = Path(__file__).parent
# ── provider registry ────────────────────────────────────────────────
# (env_var, display_name, default_model) — models match quickstart.sh defaults
API_KEY_PROVIDERS = [
("ANTHROPIC_API_KEY", "Anthropic (Claude)", "claude-sonnet-4-20250514"),
("OPENAI_API_KEY", "OpenAI", "gpt-5-mini"),
("GEMINI_API_KEY", "Google Gemini", "gemini/gemini-3-flash-preview"),
("ZAI_API_KEY", "ZAI (GLM)", "openai/glm-5"),
("GROQ_API_KEY", "Groq", "moonshotai/kimi-k2-instruct-0905"),
("MISTRAL_API_KEY", "Mistral", "mistral-large-latest"),
("CEREBRAS_API_KEY", "Cerebras", "cerebras/zai-glm-4.7"),
("TOGETHER_API_KEY", "Together AI", "together_ai/meta-llama/Llama-3.3-70B-Instruct-Turbo"),
("DEEPSEEK_API_KEY", "DeepSeek", "deepseek-chat"),
("MINIMAX_API_KEY", "MiniMax", "MiniMax-M2.5"),
]
def _detect_claude_code_token() -> str | None:
"""Check if Claude Code subscription credentials are available."""
try:
from framework.runner.runner import get_claude_code_token
return get_claude_code_token()
except Exception:
return None
def _detect_codex_token() -> str | None:
"""Check if Codex subscription credentials are available."""
try:
from framework.runner.runner import get_codex_token
return get_codex_token()
except Exception:
return None
def _detect_kimi_code_token() -> str | None:
"""Check if Kimi Code subscription credentials are available."""
try:
from framework.runner.runner import get_kimi_code_token
return get_kimi_code_token()
except Exception:
return None
def detect_available() -> list[dict]:
"""Detect all available LLM providers with valid credentials.
Returns list of dicts: {name, model, api_key, source}
"""
available = []
# Subscription-based providers
token = _detect_claude_code_token()
if token:
available.append(
{
"name": "Claude Code (subscription)",
"model": "claude-sonnet-4-20250514",
"api_key": token,
"source": "claude_code_sub",
"extra_headers": {"authorization": f"Bearer {token}"},
}
)
token = _detect_codex_token()
if token:
available.append(
{
"name": "Codex (subscription)",
"model": "gpt-5-mini",
"api_key": token,
"source": "codex_sub",
}
)
token = _detect_kimi_code_token()
if token:
available.append(
{
"name": "Kimi Code (subscription)",
"model": "moonshotai/kimi-k2-instruct-0905",
"api_key": token,
"source": "kimi_sub",
}
)
# API key providers (env vars)
for env_var, name, default_model in API_KEY_PROVIDERS:
key = os.environ.get(env_var)
if key:
entry = {
"name": f"{name} (${env_var})",
"model": default_model,
"api_key": key,
"source": env_var,
}
# ZAI requires an api_base (OpenAI-compatible endpoint)
if env_var == "ZAI_API_KEY":
entry["api_base"] = "https://api.z.ai/api/coding/paas/v4"
available.append(entry)
return available
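The env-var scan in `detect_available` reduces to a simple pattern: iterate a `(env_var, name, model)` table and keep entries whose variable is set. A minimal sketch, using a hypothetical provider table and a plain dict in place of `os.environ` so it stays deterministic:

```python
# Hypothetical table mirroring the (env_var, display_name, default_model)
# shape of API_KEY_PROVIDERS above.
PROVIDERS = [
    ("EXAMPLE_A_KEY", "Provider A", "model-a"),
    ("EXAMPLE_B_KEY", "Provider B", "model-b"),
]


def detect(env: dict) -> list[dict]:
    """Return one entry per provider whose env var is set, like detect_available()."""
    found = []
    for env_var, name, default_model in PROVIDERS:
        key = env.get(env_var)
        if key:
            found.append({"name": name, "model": default_model, "api_key": key})
    return found


providers = detect({"EXAMPLE_B_KEY": "sk-test"})
```

Passing the environment as a parameter rather than reading `os.environ` directly makes the detection logic trivially testable.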
def prompt_provider_selection() -> dict:
"""Interactive prompt to select an LLM provider. Returns the chosen provider dict."""
available = detect_available()
if not available:
print("\n No LLM credentials detected.")
print(" Set an API key environment variable, e.g.:")
print(" export ANTHROPIC_API_KEY=sk-...")
print(" export OPENAI_API_KEY=sk-...")
print(" Or authenticate with Claude Code: claude")
sys.exit(1)
if len(available) == 1:
choice = available[0]
print(f"\n Using: {choice['name']} ({choice['model']})")
return choice
print("\n Available LLM providers:\n")
for i, p in enumerate(available, 1):
print(f" {i}) {p['name']} [{p['model']}]")
print()
while True:
try:
raw = input(f" Select provider [1-{len(available)}]: ").strip()
idx = int(raw) - 1
if 0 <= idx < len(available):
choice = available[idx]
print(f"\n Using: {choice['name']} ({choice['model']})\n")
return choice
except (ValueError, EOFError):
pass
print(f" Please enter a number between 1 and {len(available)}")
# ── test runner ──────────────────────────────────────────────────────
def parse_junit_xml(xml_path: str) -> dict[str, dict]:
"""Parse JUnit XML and group results by agent (test file)."""
tree = ET.parse(xml_path)
root = tree.getroot()
agents: dict[str, dict] = {}
for testsuite in root.iter("testsuite"):
for testcase in testsuite.iter("testcase"):
classname = testcase.get("classname", "")
parts = classname.split(".")
agent_name = "unknown"
for part in parts:
if part.startswith("test_"):
agent_name = part[5:]
break
if agent_name not in agents:
agents[agent_name] = {
"total": 0,
"passed": 0,
"failed": 0,
"time": 0.0,
"tests": [],
}
agents[agent_name]["total"] += 1
test_time = float(testcase.get("time", "0"))
agents[agent_name]["time"] += test_time
failures = testcase.findall("failure")
errors = testcase.findall("error")
test_name = testcase.get("name", "")
if failures or errors:
agents[agent_name]["failed"] += 1
# Extract failure reason from the first failure/error element
fail_el = (failures or errors)[0]
reason = fail_el.get("message", "") or ""
# Also grab the text body for more detail
body = fail_el.text or ""
# Build a concise reason: prefer message, fall back to first line of body
if not reason and body:
reason = body.strip().split("\n")[0]
agents[agent_name]["tests"].append((test_name, "FAIL", reason))
else:
agents[agent_name]["passed"] += 1
agents[agent_name]["tests"].append((test_name, "PASS", ""))
return agents
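The core of `parse_junit_xml` is counting `testcase` elements and treating any nested `failure` or `error` element as a failed test. A self-contained sketch against an inline JUnit fragment (the XML content here is illustrative, not real test output):

```python
import xml.etree.ElementTree as ET

XML = """<testsuite>
  <testcase classname="tests.dummy_agents.test_retry" name="test_exhaustion" time="1.5"/>
  <testcase classname="tests.dummy_agents.test_retry" name="test_recovery" time="0.5">
    <failure message="boom"/>
  </testcase>
</testsuite>"""

root = ET.fromstring(XML)
summary = {"total": 0, "failed": 0, "time": 0.0}
for tc in root.iter("testcase"):
    summary["total"] += 1
    summary["time"] += float(tc.get("time", "0"))
    # JUnit marks failures/errors as child elements, not attributes.
    if tc.findall("failure") or tc.findall("error"):
        summary["failed"] += 1
```

Grouping by agent then only requires splitting `classname` on dots, as the full function above does.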
def print_table(agents: dict[str, dict], total_time: float, verbose: bool = False) -> None:
"""Print summary table."""
col_agent = 20
col_tests = 6
col_passed = 8
col_time = 12
def sep(char: str = "─") -> str:
return (
f"{char * (col_agent + 2)}{char * (col_tests + 2)}"
f"{char * (col_passed + 2)}{char * (col_time + 2)}"
)
header = (
f"{'Agent':<{col_agent}}{'Tests':>{col_tests}} "
f"{'Passed':>{col_passed}}{'Time (s)':>{col_time}}"
)
top = (
f"{'─' * (col_agent + 2)}{'─' * (col_tests + 2)}"
f"{'─' * (col_passed + 2)}{'─' * (col_time + 2)}"
)
bottom = (
f"{'─' * (col_agent + 2)}{'─' * (col_tests + 2)}"
f"{'─' * (col_passed + 2)}{'─' * (col_time + 2)}"
)
print()
print(top)
print(header)
print(sep())
total_tests = 0
total_passed = 0
for agent_name in sorted(agents.keys()):
data = agents[agent_name]
total_tests += data["total"]
total_passed += data["passed"]
marker = " " if data["failed"] == 0 else "!"
row = (
f"{marker}{agent_name:<{col_agent + 1}}{data['total']:>{col_tests}} "
f"{data['passed']:>{col_passed}}{data['time']:>{col_time}.2f}"
)
print(row)
if verbose:
for test_name, status, reason in data["tests"]:
icon = "✓" if status == "PASS" else "✗"
print(
f"{icon} {test_name:<{col_agent - 2}}"
f"{'':>{col_tests + 2}}{'':>{col_passed + 2}}{'':>{col_time + 2}}"
)
if status == "FAIL" and reason:
# Print failure reason wrapped to fit, indented under the test
reason_short = reason[:120] + ("..." if len(reason) > 120 else "")
print(f"{reason_short}")
print("")
print(sep())
all_pass = total_passed == total_tests
status = "ALL PASS" if all_pass else f"{total_tests - total_passed} FAILED"
totals = (
f"{status:<{col_agent}}{total_tests:>{col_tests}} "
f"{total_passed:>{col_passed}}{total_time:>{col_time}.2f}"
)
print(totals)
print(bottom)
# Always print failure details if any tests failed
if not all_pass:
print("\n Failure Details:")
print(" " + "─" * 70)
for agent_name in sorted(agents.keys()):
for test_name, status, reason in agents[agent_name]["tests"]:
if status == "FAIL":
print(f"\n{agent_name}::{test_name}")
if reason:
# Wrap long reasons
for i in range(0, len(reason), 100):
print(f" {reason[i : i + 100]}")
print()
def main() -> int:
verbose = "--verbose" in sys.argv or "-v" in sys.argv
print("\n ╔═══════════════════════════════════════╗")
print(" ║ Level 2: Dummy Agent Tests (E2E) ║")
print(" ╚═══════════════════════════════════════╝")
# Step 1: detect credentials and let user pick
provider = prompt_provider_selection()
# Step 2: inject selection into conftest module state
from tests.dummy_agents.conftest import set_llm_selection
set_llm_selection(
model=provider["model"],
api_key=provider["api_key"],
extra_headers=provider.get("extra_headers"),
api_base=provider.get("api_base"),
)
# Step 3: run pytest
with NamedTemporaryFile(suffix=".xml", delete=False) as tmp:
xml_path = tmp.name
start = time.time()
import pytest as _pytest
pytest_args = [
str(TESTS_DIR),
f"--junitxml={xml_path}",
"--tb=short",
"--override-ini=asyncio_mode=auto",
"--log-cli-level=INFO", # Stream logs live to terminal
"-v",
]
if not verbose:
# In non-verbose mode, only show warnings and above
pytest_args[pytest_args.index("--log-cli-level=INFO")] = "--log-cli-level=WARNING"
pytest_args.remove("-v")
pytest_args.append("-q")
exit_code = _pytest.main(pytest_args)
elapsed = time.time() - start
# Step 4: print summary
try:
agents = parse_junit_xml(xml_path)
print_table(agents, elapsed, verbose=verbose)
except Exception as e:
print(f"\n Could not parse results: {e}")
# Clean up
Path(xml_path).unlink(missing_ok=True)
return exit_code
if __name__ == "__main__":
sys.exit(main())
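The runner above depends on `parse_junit_xml`, which this diff does not include. A minimal sketch of what it would need to return for `print_table` to work (field names are taken from the consumer above; the junit attribute handling is an assumption):

```python
import xml.etree.ElementTree as ET
from collections import defaultdict


def parse_junit_xml(path: str) -> dict:
    """Aggregate junit testcases into per-agent stats (sketch).

    Assumes each testcase carries a classname like
    'tests.dummy_agents.test_echo_agent', whose module stem names the agent.
    """
    agents = defaultdict(
        lambda: {"total": 0, "passed": 0, "failed": 0, "time": 0.0, "tests": []}
    )
    for case in ET.parse(path).getroot().iter("testcase"):
        agent = case.get("classname", "").split(".")[-1]
        data = agents[agent]
        data["total"] += 1
        data["time"] += float(case.get("time", "0"))
        # A <failure> or <error> child marks the testcase as failed
        failure = case.find("failure")
        if failure is None:
            failure = case.find("error")
        if failure is not None:
            data["failed"] += 1
            data["tests"].append((case.get("name"), "FAIL", failure.get("message", "")))
        else:
            data["passed"] += 1
            data["tests"].append((case.get("name"), "PASS", ""))
    return dict(agents)
```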
@@ -0,0 +1,132 @@
"""Branch agent: LLM classifies input, conditional edges route to different paths.
Tests conditional edge evaluation with real LLM output.
"""
from __future__ import annotations
import pytest
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeSpec
from .conftest import make_executor
SET_OUTPUT_INSTRUCTION = (
"You MUST call the set_output tool to provide your answer. "
"Do not just write text — call set_output with the correct key and value."
)
def _build_branch_graph() -> GraphSpec:
return GraphSpec(
id="branch-graph",
goal_id="dummy",
entry_node="classify",
entry_points={"start": "classify"},
terminal_nodes=["positive", "negative"],
conversation_mode="continuous",
nodes=[
NodeSpec(
id="classify",
name="Classify",
description="Classifies input sentiment",
node_type="event_loop",
input_keys=["text"],
output_keys=["score", "label"],
system_prompt=(
"You are a sentiment classifier. Read the 'text' input and determine "
"if the sentiment is positive or negative.\n\n"
"You MUST call set_output TWICE:\n"
"1. set_output(key='score', value='<number>') — a score between 0.0 "
"and 1.0 where >0.5 means positive\n"
"2. set_output(key='label', value='positive') or "
"set_output(key='label', value='negative')\n\n" + SET_OUTPUT_INSTRUCTION
),
),
NodeSpec(
id="positive",
name="Positive Handler",
description="Handles positive sentiment",
node_type="event_loop",
output_keys=["result"],
system_prompt=(
"The input was classified as positive. Call set_output with "
"key='result' and a brief one-sentence acknowledgment. "
+ SET_OUTPUT_INSTRUCTION
),
),
NodeSpec(
id="negative",
name="Negative Handler",
description="Handles negative sentiment",
node_type="event_loop",
output_keys=["result"],
system_prompt=(
"The input was classified as negative. Call set_output with "
"key='result' and a brief one-sentence acknowledgment. "
+ SET_OUTPUT_INSTRUCTION
),
),
],
edges=[
EdgeSpec(
id="classify-to-positive",
source="classify",
target="positive",
condition=EdgeCondition.CONDITIONAL,
condition_expr="output.get('label') == 'positive'",
priority=1,
),
EdgeSpec(
id="classify-to-negative",
source="classify",
target="negative",
condition=EdgeCondition.CONDITIONAL,
condition_expr="output.get('label') == 'negative'",
priority=0,
),
],
memory_keys=["text", "score", "label", "result"],
)
@pytest.mark.asyncio
async def test_branch_positive_path(runtime, goal, llm_provider):
graph = _build_branch_graph()
executor = make_executor(runtime, llm_provider)
result = await executor.execute(
graph, goal, {"text": "I love this product, it's amazing!"}, validate_graph=False
)
assert result.success
assert result.path == ["classify", "positive"]
@pytest.mark.asyncio
async def test_branch_negative_path(runtime, goal, llm_provider):
graph = _build_branch_graph()
executor = make_executor(runtime, llm_provider)
result = await executor.execute(
graph, goal, {"text": "This is terrible and broken, I hate it."}, validate_graph=False
)
assert result.success
assert result.path == ["classify", "negative"]
@pytest.mark.asyncio
async def test_branch_two_nodes_traversed(runtime, goal, llm_provider):
"""Regardless of which branch, exactly 2 nodes should execute."""
graph = _build_branch_graph()
executor = make_executor(runtime, llm_provider)
result = await executor.execute(
graph, goal, {"text": "The weather is nice today."}, validate_graph=False
)
assert result.success
assert result.steps_executed == 2
assert len(result.path) == 2
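The two conditional edges above route on `condition_expr` strings evaluated against the classify node's output. A sketch of how such an expression might be evaluated, assuming a restricted `eval` with only `output` in scope (the framework's actual evaluator may sandbox differently):

```python
def evaluate_condition(expr: str, output: dict) -> bool:
    """Evaluate an edge condition_expr against a node's output dict.

    Only 'output' is bound and builtins are stripped; this mirrors the
    expressions used in the branch graph above, not the real implementation.
    """
    return bool(eval(expr, {"__builtins__": {}}, {"output": output}))
```

With the edges above, the higher-priority positive edge would be checked first; whichever expression evaluates truthy selects the next node.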
@@ -0,0 +1,66 @@
"""Echo agent: single-node worker that echoes input to output.
Tests basic node lifecycle with a real LLM call: the simplest possible worker.
"""
from __future__ import annotations
import pytest
from framework.graph.edge import GraphSpec
from framework.graph.node import NodeSpec
from .conftest import make_executor
def _build_echo_graph() -> GraphSpec:
return GraphSpec(
id="echo-graph",
goal_id="dummy",
entry_node="echo",
entry_points={"start": "echo"},
terminal_nodes=["echo"],
nodes=[
NodeSpec(
id="echo",
name="Echo",
description="Echoes input to output",
node_type="event_loop",
input_keys=["input"],
output_keys=["output"],
system_prompt=(
"You are an echo node. Your ONLY job is to read the 'input' value "
"provided in the user message, then immediately call the set_output "
"tool with key='output' and value set to the EXACT same string. "
"Do not add any text or explanation. Just call set_output."
),
),
],
edges=[],
memory_keys=["input", "output"],
conversation_mode="continuous",
)
@pytest.mark.asyncio
async def test_echo_basic(runtime, goal, llm_provider):
graph = _build_echo_graph()
executor = make_executor(runtime, llm_provider)
result = await executor.execute(graph, goal, {"input": "hello"}, validate_graph=False)
assert result.success
assert result.output.get("output") is not None
assert result.path == ["echo"]
assert result.steps_executed == 1
@pytest.mark.asyncio
async def test_echo_empty_input(runtime, goal, llm_provider):
graph = _build_echo_graph()
executor = make_executor(runtime, llm_provider)
result = await executor.execute(graph, goal, {"input": ""}, validate_graph=False)
assert result.success
assert "output" in result.output
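The echo prompt leans entirely on the `set_output` tool. A hypothetical stand-in showing the contract the test assumes (one call records one declared output key; the signature and error handling are assumptions):

```python
def make_set_output(declared_keys: set, outputs: dict):
    """Build a set_output tool bound to a node's declared output keys.

    Sketch only: the real tool is provided by the framework's tool layer.
    """
    def set_output(key: str, value: str) -> str:
        # Reject keys the node never declared in output_keys
        if key not in declared_keys:
            return f"Error: '{key}' is not a declared output key"
        outputs[key] = value
        return f"Recorded output '{key}'"
    return set_output
```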
@@ -0,0 +1,144 @@
"""Feedback loop agent: draft/review cycle with max_node_visits limit.
Uses StatefulNode for review to control loop iterations deterministically.
"""
from __future__ import annotations
import pytest
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeResult, NodeSpec
from .conftest import make_executor
from .nodes import StatefulNode, SuccessNode
def _build_feedback_graph(max_visits: int = 3) -> GraphSpec:
return GraphSpec(
id="feedback-graph",
goal_id="dummy",
entry_node="draft",
terminal_nodes=["done"],
nodes=[
NodeSpec(
id="draft",
name="Draft",
description="Produces a draft",
node_type="event_loop",
output_keys=["draft_output"],
max_node_visits=max_visits,
),
NodeSpec(
id="review",
name="Review",
description="Reviews the draft",
node_type="event_loop",
input_keys=["draft_output"],
output_keys=["approved"],
),
NodeSpec(
id="done",
name="Done",
description="Final node",
node_type="event_loop",
output_keys=["final"],
),
],
edges=[
EdgeSpec(
id="draft-to-review",
source="draft",
target="review",
condition=EdgeCondition.ON_SUCCESS,
),
EdgeSpec(
id="review-to-draft",
source="review",
target="draft",
condition=EdgeCondition.CONDITIONAL,
condition_expr="output.get('approved') == False",
priority=1,
),
EdgeSpec(
id="review-to-done",
source="review",
target="done",
condition=EdgeCondition.CONDITIONAL,
condition_expr="output.get('approved') == True",
priority=0,
),
],
memory_keys=["draft_output", "approved", "final"],
)
@pytest.mark.asyncio
async def test_feedback_loop_terminates(runtime, goal, llm_provider):
"""Loop should terminate: draft visits are capped, review eventually approves."""
graph = _build_feedback_graph(max_visits=3)
executor = make_executor(runtime, llm_provider)
executor.register_node("draft", SuccessNode(output={"draft_output": "v1"}))
executor.register_node(
"review",
StatefulNode(
[
NodeResult(success=True, output={"approved": False}),
NodeResult(success=True, output={"approved": False}),
NodeResult(success=True, output={"approved": True}),
]
),
)
executor.register_node("done", SuccessNode(output={"final": "done"}))
result = await executor.execute(graph, goal, {}, validate_graph=False)
assert result.success
assert result.node_visit_counts.get("draft", 0) == 3
assert "done" in result.path
@pytest.mark.asyncio
async def test_feedback_loop_visit_counts(runtime, goal, llm_provider):
graph = _build_feedback_graph(max_visits=3)
executor = make_executor(runtime, llm_provider)
executor.register_node("draft", SuccessNode(output={"draft_output": "v1"}))
executor.register_node(
"review",
StatefulNode(
[
NodeResult(success=True, output={"approved": False}),
NodeResult(success=True, output={"approved": True}),
]
),
)
executor.register_node("done", SuccessNode(output={"final": "done"}))
result = await executor.execute(graph, goal, {}, validate_graph=False)
assert result.success
assert result.node_visit_counts.get("draft", 0) == 2
assert result.node_visit_counts.get("review", 0) == 2
@pytest.mark.asyncio
async def test_feedback_loop_early_exit(runtime, goal, llm_provider):
"""Review approves on first iteration — loop exits before max."""
graph = _build_feedback_graph(max_visits=5)
executor = make_executor(runtime, llm_provider)
executor.register_node("draft", SuccessNode(output={"draft_output": "perfect"}))
executor.register_node(
"review",
StatefulNode(
[
NodeResult(success=True, output={"approved": True}),
]
),
)
executor.register_node("done", SuccessNode(output={"final": "done"}))
result = await executor.execute(graph, goal, {}, validate_graph=False)
assert result.success
assert result.node_visit_counts.get("draft", 0) == 1
assert "done" in result.path
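`StatefulNode` and `NodeResult` come from the sibling `.nodes` helpers, which this diff does not show. A self-contained sketch of the scripted-results pattern the feedback tests use (field names are assumed from the call sites above):

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class NodeResult:  # stand-in for framework.graph.node.NodeResult (fields assumed)
    success: bool
    output: dict = field(default_factory=dict)


class StatefulNode:
    """Return a scripted sequence of results, one per visit.

    Repeats the last entry once the script is exhausted, so loops stay
    deterministic however many times the node is revisited.
    """

    def __init__(self, results):
        self._results = list(results)
        self._visits = 0

    async def execute(self, ctx=None) -> NodeResult:
        result = self._results[min(self._visits, len(self._results) - 1)]
        self._visits += 1
        return result
```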
@@ -0,0 +1,179 @@
"""GCU subagent test: parent event_loop delegates to a GCU subagent.
Tests the subagent delegation pattern where a parent node uses
delegate_to_sub_agent to invoke a GCU (browser) node for a task.
The GCU node has access to browser tools via the GCU MCP server.
Note: This test requires the GCU MCP server (gcu.server) to be available.
If not installed, the test is skipped.
"""
from __future__ import annotations
from pathlib import Path
import pytest
from framework.graph.edge import GraphSpec
from framework.graph.goal import Goal
from framework.graph.node import NodeSpec
from .conftest import make_executor
def _has_gcu_server() -> bool:
"""Check if the GCU MCP server module is available."""
try:
import gcu.server # noqa: F401
return True
except ImportError:
return False
def _build_gcu_subagent_graph() -> GraphSpec:
"""Parent event_loop node with a GCU subagent for browser tasks.
Structure:
- parent (event_loop): orchestrator that decides when to delegate
- browser_worker (gcu): subagent with browser tools
- parent delegates to browser_worker via delegate_to_sub_agent tool
- browser_worker is NOT connected by edges (validation rule)
"""
return GraphSpec(
id="gcu-subagent-graph",
goal_id="gcu-test",
entry_node="parent",
entry_points={"start": "parent"},
terminal_nodes=["parent"],
nodes=[
NodeSpec(
id="parent",
name="Orchestrator",
description="Orchestrates browser tasks via subagent delegation",
node_type="event_loop",
input_keys=["task"],
output_keys=["result"],
sub_agents=["browser_worker"],
system_prompt=(
"You are an orchestrator. You have a browser subagent called "
"'browser_worker' available via delegate_to_sub_agent.\n\n"
"Read the 'task' input and delegate the browser work to "
"the browser_worker subagent. When the subagent completes, "
"summarize the result and call set_output with key='result'."
),
),
NodeSpec(
id="browser_worker",
name="Browser Worker",
description="GCU browser subagent for web tasks",
node_type="gcu",
output_keys=["browser_result"],
system_prompt=(
"You are a browser worker subagent. Complete the delegated "
"browser task using available browser tools. "
"When done, call set_output with key='browser_result' and "
"the information you found."
),
),
],
edges=[], # GCU subagents must NOT be connected by edges
memory_keys=["task", "result", "browser_result"],
conversation_mode="continuous",
)
def _gcu_goal() -> Goal:
return Goal(
id="gcu-test",
name="GCU Subagent Test",
description="Test browser subagent delegation",
)
@pytest.mark.asyncio
@pytest.mark.skipif(not _has_gcu_server(), reason="GCU server not installed")
async def test_gcu_subagent_delegation(runtime, llm_provider, tool_registry, tmp_path):
"""Parent delegates a simple browser task to GCU subagent."""
# Register GCU MCP server tools
from framework.graph.gcu import GCU_MCP_SERVER_CONFIG
repo_root = Path(__file__).resolve().parents[3]
gcu_config = dict(GCU_MCP_SERVER_CONFIG)
gcu_config["cwd"] = str(repo_root / "tools")
tool_registry.register_mcp_server(gcu_config)
# Expand GCU node tools (mirrors what runner._setup does)
graph = _build_gcu_subagent_graph()
gcu_tool_names = tool_registry.get_server_tool_names("gcu-tools")
if gcu_tool_names:
for node in graph.nodes:
if node.node_type == "gcu":
existing = set(node.tools)
for tool_name in sorted(gcu_tool_names):
if tool_name not in existing:
node.tools.append(tool_name)
executor = make_executor(
runtime,
llm_provider,
tool_registry=tool_registry,
storage_path=tmp_path / "storage",
)
result = await executor.execute(
graph,
_gcu_goal(),
{"task": "Use the browser to navigate to https://example.com and report the page title."},
validate_graph=False,
)
assert result.success
assert result.output.get("result") is not None
@pytest.mark.asyncio
@pytest.mark.skipif(not _has_gcu_server(), reason="GCU server not installed")
async def test_gcu_subagent_returns_data(runtime, llm_provider, tool_registry, tmp_path):
"""Verify the parent receives structured data from the GCU subagent."""
from framework.graph.gcu import GCU_MCP_SERVER_CONFIG
repo_root = Path(__file__).resolve().parents[3]
gcu_config = dict(GCU_MCP_SERVER_CONFIG)
gcu_config["cwd"] = str(repo_root / "tools")
# Only register if not already registered
if not tool_registry.get_server_tool_names("gcu-tools"):
tool_registry.register_mcp_server(gcu_config)
graph = _build_gcu_subagent_graph()
gcu_tool_names = tool_registry.get_server_tool_names("gcu-tools")
if gcu_tool_names:
for node in graph.nodes:
if node.node_type == "gcu":
existing = set(node.tools)
for tool_name in sorted(gcu_tool_names):
if tool_name not in existing:
node.tools.append(tool_name)
executor = make_executor(
runtime,
llm_provider,
tool_registry=tool_registry,
storage_path=tmp_path / "storage",
)
result = await executor.execute(
graph,
_gcu_goal(),
{
"task": "Use the browser to visit https://example.com and report "
"what domain the page is on."
},
validate_graph=False,
)
assert result.success
assert result.output.get("result") is not None
# The result should contain something from the browser
result_text = str(result.output["result"]).lower()
assert "example" in result_text
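Both GCU tests repeat the same tool-expansion loop. It could be factored into a small helper; this sketch uses only the attributes the tests above already touch:

```python
def expand_gcu_tools(graph, tool_registry, server: str = "gcu-tools") -> None:
    """Append every registered server tool to each gcu-type node.

    Skips tools the node already lists, mirroring the loop duplicated in
    the two tests above (attribute names taken from those tests).
    """
    names = tool_registry.get_server_tool_names(server)
    for node in graph.nodes:
        if node.node_type == "gcu":
            existing = set(node.tools)
            node.tools.extend(t for t in sorted(names) if t not in existing)
```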
@@ -0,0 +1,166 @@
"""Parallel merge agent: fan-out to two branches, fan-in to merge node.
Tests parallel execution with real LLM at each branch.
"""
from __future__ import annotations
import pytest
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.executor import ParallelExecutionConfig
from framework.graph.node import NodeSpec
from .conftest import make_executor
from .nodes import FailNode
SET_OUTPUT_INSTRUCTION = (
"You MUST call the set_output tool to provide your answer. "
"Do not just write text — call set_output with the correct key and value."
)
def _build_parallel_graph() -> GraphSpec:
return GraphSpec(
id="parallel-graph",
goal_id="dummy",
entry_node="split",
entry_points={"start": "split"},
terminal_nodes=["merge"],
conversation_mode="continuous",
nodes=[
NodeSpec(
id="split",
name="Split",
description="Entry point that triggers parallel branches",
node_type="event_loop",
input_keys=["topic"],
output_keys=["split_done"],
system_prompt=(
"You are a dispatcher. Read the 'topic' input, then immediately "
"call set_output with key='split_done' and value='true'. "
+ SET_OUTPUT_INSTRUCTION
),
),
NodeSpec(
id="analyze_a",
name="Analyze Pros",
description="Analyzes positive aspects",
node_type="event_loop",
output_keys=["result_a"],
system_prompt=(
"Analyze the positive aspects of the topic. Then call set_output "
"with key='result_a' and a brief one-sentence analysis. "
+ SET_OUTPUT_INSTRUCTION
),
),
NodeSpec(
id="analyze_b",
name="Analyze Cons",
description="Analyzes negative aspects",
node_type="event_loop",
output_keys=["result_b"],
system_prompt=(
"Analyze the negative aspects of the topic. Then call set_output "
"with key='result_b' and a brief one-sentence analysis. "
+ SET_OUTPUT_INSTRUCTION
),
),
NodeSpec(
id="merge",
name="Merge",
description="Combines both analyses",
node_type="event_loop",
input_keys=["result_a", "result_b"],
output_keys=["merged"],
system_prompt=(
"Read 'result_a' and 'result_b' from the input, combine them into "
"a one-sentence summary, then call set_output with key='merged' "
"and the summary. " + SET_OUTPUT_INSTRUCTION
),
),
],
edges=[
EdgeSpec(
id="split-to-a",
source="split",
target="analyze_a",
condition=EdgeCondition.ON_SUCCESS,
),
EdgeSpec(
id="split-to-b",
source="split",
target="analyze_b",
condition=EdgeCondition.ON_SUCCESS,
),
EdgeSpec(
id="a-to-merge",
source="analyze_a",
target="merge",
condition=EdgeCondition.ON_SUCCESS,
),
EdgeSpec(
id="b-to-merge",
source="analyze_b",
target="merge",
condition=EdgeCondition.ON_SUCCESS,
),
],
memory_keys=["topic", "split_done", "result_a", "result_b", "merged"],
)
@pytest.mark.asyncio
async def test_parallel_both_succeed(runtime, goal, llm_provider):
graph = _build_parallel_graph()
config = ParallelExecutionConfig(on_branch_failure="fail_all")
executor = make_executor(runtime, llm_provider, parallel_config=config)
result = await executor.execute(graph, goal, {"topic": "remote work"}, validate_graph=False)
assert result.success
assert "split" in result.path
assert "merge" in result.path
assert result.output.get("merged") is not None
@pytest.mark.asyncio
async def test_parallel_branch_failure_fail_all(runtime, goal, llm_provider):
"""One branch fails with fail_all -> execution fails."""
graph = _build_parallel_graph()
config = ParallelExecutionConfig(on_branch_failure="fail_all")
executor = make_executor(runtime, llm_provider, parallel_config=config)
executor.register_node("analyze_b", FailNode(error="branch B failed"))
result = await executor.execute(graph, goal, {"topic": "remote work"}, validate_graph=False)
assert not result.success
@pytest.mark.asyncio
async def test_parallel_branch_failure_continue_others(runtime, goal, llm_provider):
"""One branch fails with continue_others -> surviving branch completes."""
graph = _build_parallel_graph()
config = ParallelExecutionConfig(on_branch_failure="continue_others")
executor = make_executor(runtime, llm_provider, parallel_config=config)
executor.register_node("analyze_b", FailNode(error="branch B failed"))
result = await executor.execute(graph, goal, {"topic": "remote work"}, validate_graph=False)
# With continue_others, execution can proceed past failed branches
assert result.output.get("merged") is not None or result.output.get("result_a") is not None
@pytest.mark.asyncio
async def test_parallel_disjoint_output_keys(runtime, goal, llm_provider):
"""Verify both branches write to separate memory keys without conflicts."""
graph = _build_parallel_graph()
executor = make_executor(runtime, llm_provider)
result = await executor.execute(
graph, goal, {"topic": "artificial intelligence"}, validate_graph=False
)
assert result.success
assert result.output.get("result_a") is not None
assert result.output.get("result_b") is not None
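The `on_branch_failure` modes exercised above ("fail_all" vs "continue_others") come down to how branch exceptions are handled at the fan-in point. A sketch of those semantics with `asyncio.gather` (the executor's real scheduling is more involved; this only illustrates the failure policy):

```python
import asyncio


async def run_branches(branches, on_branch_failure: str = "fail_all") -> list:
    """Run branch coroutines concurrently and apply a failure policy.

    'fail_all' re-raises the first branch failure; 'continue_others'
    drops failed branches and returns the surviving results.
    """
    results = await asyncio.gather(*(b() for b in branches), return_exceptions=True)
    failures = [r for r in results if isinstance(r, Exception)]
    if failures and on_branch_failure == "fail_all":
        raise failures[0]
    return [r for r in results if not isinstance(r, Exception)]
```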
@@ -0,0 +1,134 @@
"""Pipeline agent: linear 3-node chain with real LLM at each step.
Tests input_mapping, conversation modes, and multi-node traversal.
"""
from __future__ import annotations
import pytest
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeSpec
from .conftest import make_executor
SET_OUTPUT_INSTRUCTION = (
"You MUST call the set_output tool to provide your answer. "
"Do not just write text — call set_output with the correct key and value."
)
def _build_pipeline_graph(conversation_mode: str = "continuous") -> GraphSpec:
return GraphSpec(
id="pipeline-graph",
goal_id="dummy",
entry_node="intake",
entry_points={"start": "intake"},
terminal_nodes=["output"],
conversation_mode=conversation_mode,
nodes=[
NodeSpec(
id="intake",
name="Intake",
description="Captures raw input and passes it along",
node_type="event_loop",
input_keys=["raw"],
output_keys=["captured"],
system_prompt=(
"You are the intake node. Read the 'raw' input value from the user "
"message, then call set_output with key='captured' and the same value. "
+ SET_OUTPUT_INSTRUCTION
),
),
NodeSpec(
id="transform",
name="Transform",
description="Uppercases the input value",
node_type="event_loop",
input_keys=["value"],
output_keys=["transformed"],
system_prompt=(
"You are a transform node. Read the 'value' input from the user "
"message, convert it to UPPERCASE, then call set_output with "
"key='transformed' and the uppercased value. " + SET_OUTPUT_INSTRUCTION
),
),
NodeSpec(
id="output",
name="Output",
description="Formats final result",
node_type="event_loop",
input_keys=["value"],
output_keys=["result"],
system_prompt=(
"You are the output node. Read the 'value' input from the user "
"message, prefix it with 'Result: ', then call set_output with "
"key='result' and the prefixed value. " + SET_OUTPUT_INSTRUCTION
),
),
],
edges=[
EdgeSpec(
id="intake-to-transform",
source="intake",
target="transform",
condition=EdgeCondition.ON_SUCCESS,
input_mapping={"value": "captured"},
),
EdgeSpec(
id="transform-to-output",
source="transform",
target="output",
condition=EdgeCondition.ON_SUCCESS,
input_mapping={"value": "transformed"},
),
],
memory_keys=["raw", "captured", "value", "transformed", "result"],
)
@pytest.mark.asyncio
async def test_pipeline_linear_traversal(runtime, goal, llm_provider):
graph = _build_pipeline_graph()
executor = make_executor(runtime, llm_provider)
result = await executor.execute(graph, goal, {"raw": "hello"}, validate_graph=False)
assert result.success
assert result.path == ["intake", "transform", "output"]
assert result.steps_executed == 3
@pytest.mark.asyncio
async def test_pipeline_input_mapping(runtime, goal, llm_provider):
"""Verify input_mapping wires source output keys to target input keys."""
graph = _build_pipeline_graph()
executor = make_executor(runtime, llm_provider)
result = await executor.execute(graph, goal, {"raw": "test value"}, validate_graph=False)
assert result.success
assert result.steps_executed == 3
assert result.output.get("result") is not None
@pytest.mark.asyncio
async def test_pipeline_continuous_conversation(runtime, goal, llm_provider):
graph = _build_pipeline_graph(conversation_mode="continuous")
executor = make_executor(runtime, llm_provider)
result = await executor.execute(graph, goal, {"raw": "data"}, validate_graph=False)
assert result.success
assert len(result.path) == 3
@pytest.mark.asyncio
async def test_pipeline_isolated_conversation(runtime, goal, llm_provider):
graph = _build_pipeline_graph(conversation_mode="isolated")
executor = make_executor(runtime, llm_provider)
result = await executor.execute(graph, goal, {"raw": "data"}, validate_graph=False)
assert result.success
assert len(result.path) == 3
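The `input_mapping` on each edge rewires the source node's output keys into the target node's input keys. A sketch of that renaming step, assuming the mapping runs target_input_key to source_output_key as the edges above suggest ('value' from 'captured', then from 'transformed'):

```python
def apply_input_mapping(source_output: dict, input_mapping: dict) -> dict:
    """Build the target node's input dict from the source node's output.

    Assumes EdgeSpec.input_mapping maps target key -> source key; keys
    missing from the source output are simply omitted.
    """
    return {
        target: source_output[source]
        for target, source in input_mapping.items()
        if source in source_output
    }
```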
@@ -0,0 +1,131 @@
"""Retry agent: flaky node with retry limit and failure edges.
Uses deterministic FlakyNode (not LLM) since we need controlled failure patterns.
"""
from __future__ import annotations
import pytest
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeSpec
from .conftest import make_executor
from .nodes import FlakyNode, SuccessNode
def _build_retry_graph(max_retries: int = 3, with_failure_edge: bool = False) -> GraphSpec:
nodes = [
NodeSpec(
id="flaky",
name="Flaky",
description="Fails then succeeds",
node_type="event_loop",
output_keys=["status"],
max_retries=max_retries,
),
NodeSpec(
id="done",
name="Done",
description="Terminal success node",
node_type="event_loop",
output_keys=["final"],
),
]
edges = [
EdgeSpec(
id="flaky-to-done",
source="flaky",
target="done",
condition=EdgeCondition.ON_SUCCESS,
),
]
terminal_nodes = ["done"]
if with_failure_edge:
nodes.append(
NodeSpec(
id="error_handler",
name="Error Handler",
description="Handles exhausted retries",
node_type="event_loop",
output_keys=["error_handled"],
)
)
edges.append(
EdgeSpec(
id="flaky-to-error",
source="flaky",
target="error_handler",
condition=EdgeCondition.ON_FAILURE,
)
)
terminal_nodes.append("error_handler")
return GraphSpec(
id="retry-graph",
goal_id="dummy",
entry_node="flaky",
terminal_nodes=terminal_nodes,
nodes=nodes,
edges=edges,
memory_keys=["status", "final", "error_handled"],
)
@pytest.mark.asyncio
async def test_retry_succeeds_within_limit(runtime, goal, llm_provider):
graph = _build_retry_graph(max_retries=3)
flaky = FlakyNode(fail_times=2, output={"status": "recovered"})
executor = make_executor(runtime, llm_provider)
executor.register_node("flaky", flaky)
executor.register_node("done", SuccessNode(output={"final": "complete"}))
result = await executor.execute(graph, goal, {}, validate_graph=False)
assert result.success
assert result.total_retries >= 2
assert flaky.attempt_count == 3 # 2 failures + 1 success
@pytest.mark.asyncio
async def test_retry_exhaustion(runtime, goal, llm_provider):
graph = _build_retry_graph(max_retries=3)
flaky = FlakyNode(fail_times=10, output={"status": "recovered"})
executor = make_executor(runtime, llm_provider)
executor.register_node("flaky", flaky)
executor.register_node("done", SuccessNode(output={"final": "complete"}))
result = await executor.execute(graph, goal, {}, validate_graph=False)
assert not result.success
@pytest.mark.asyncio
async def test_retry_with_on_failure_edge(runtime, goal, llm_provider):
graph = _build_retry_graph(max_retries=2, with_failure_edge=True)
flaky = FlakyNode(fail_times=10)
error_handler = SuccessNode(output={"error_handled": True})
executor = make_executor(runtime, llm_provider)
executor.register_node("flaky", flaky)
executor.register_node("done", SuccessNode(output={"final": "complete"}))
executor.register_node("error_handler", error_handler)
result = await executor.execute(graph, goal, {}, validate_graph=False)
assert "error_handler" in result.path
assert error_handler.executed
@pytest.mark.asyncio
async def test_retry_tracking(runtime, goal, llm_provider):
graph = _build_retry_graph(max_retries=3)
flaky = FlakyNode(fail_times=2)
executor = make_executor(runtime, llm_provider)
executor.register_node("flaky", flaky)
executor.register_node("done", SuccessNode(output={"final": "complete"}))
result = await executor.execute(graph, goal, {}, validate_graph=False)
assert result.success
assert result.retry_details.get("flaky", 0) >= 2
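`FlakyNode` also lives in the unshown `.nodes` helpers. A self-contained sketch of the controlled-failure pattern these retry tests rely on (NodeResult fields are assumed from the call sites above):

```python
from __future__ import annotations

import asyncio
from dataclasses import dataclass, field


@dataclass
class NodeResult:  # stand-in for framework.graph.node.NodeResult (fields assumed)
    success: bool
    output: dict = field(default_factory=dict)
    error: str | None = None


class FlakyNode:
    """Fail the first fail_times executions, then succeed with `output`.

    Tracks attempt_count so tests can assert how many retries occurred.
    """

    def __init__(self, fail_times: int, output: dict | None = None):
        self.fail_times = fail_times
        self.output = output or {}
        self.attempt_count = 0

    async def execute(self, ctx=None) -> NodeResult:
        self.attempt_count += 1
        if self.attempt_count <= self.fail_times:
            return NodeResult(success=False, error="flaky failure")
        return NodeResult(success=True, output=self.output)
```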
@@ -0,0 +1,139 @@
"""Worker agent: single-node event loop with real MCP tools.
Tests the core worker pattern: a single EventLoopNode that uses real
hive-tools (example_tool, get_current_time, save_data/load_data) to
accomplish tasks, matching how real agents are structured.
"""
from __future__ import annotations
import pytest
from framework.graph.edge import GraphSpec
from framework.graph.goal import Goal
from framework.graph.node import NodeSpec
from .conftest import make_executor
def _build_worker_graph(tools: list[str]) -> GraphSpec:
"""Single-node worker agent with MCP tools — matches real agent structure."""
return GraphSpec(
id="worker-graph",
goal_id="worker-goal",
entry_node="worker",
entry_points={"start": "worker"},
terminal_nodes=["worker"],
nodes=[
NodeSpec(
id="worker",
name="Worker",
description="General-purpose worker with tools",
node_type="event_loop",
input_keys=["task"],
output_keys=["result"],
tools=tools,
system_prompt=(
"You are a worker agent with access to tools. "
"Read the 'task' input and complete it using the available tools. "
"When done, call set_output with key='result' and the final answer."
),
),
],
edges=[],
memory_keys=["task", "result"],
conversation_mode="continuous",
)
def _worker_goal() -> Goal:
return Goal(
id="worker-goal",
name="Worker Agent",
description="Complete a task using available tools",
)
@pytest.mark.asyncio
async def test_worker_example_tool(runtime, llm_provider, tool_registry):
"""Worker uses example_tool to process text."""
graph = _build_worker_graph(tools=["example_tool"])
executor = make_executor(runtime, llm_provider, tool_registry=tool_registry)
result = await executor.execute(
graph,
_worker_goal(),
{"task": "Use the example_tool to process the message 'hello world' with uppercase=true"},
validate_graph=False,
)
assert result.success
assert result.output.get("result") is not None
@pytest.mark.asyncio
async def test_worker_time_tool(runtime, llm_provider, tool_registry):
"""Worker uses get_current_time to check the current time."""
graph = _build_worker_graph(tools=["get_current_time"])
executor = make_executor(runtime, llm_provider, tool_registry=tool_registry)
result = await executor.execute(
graph,
_worker_goal(),
{
"task": "Use get_current_time to find the current time in UTC, "
"and report the day of the week as the result"
},
validate_graph=False,
)
assert result.success
assert result.output.get("result") is not None
@pytest.mark.asyncio
async def test_worker_data_tools(runtime, llm_provider, tool_registry, tmp_path):
"""Worker uses save_data and load_data to store and retrieve data."""
graph = _build_worker_graph(tools=["save_data", "load_data"])
executor = make_executor(
runtime,
llm_provider,
tool_registry=tool_registry,
storage_path=tmp_path / "storage",
)
result = await executor.execute(
graph,
_worker_goal(),
{
"task": f"Use save_data to save the text 'test payload' to a file called "
f"'test.txt' in the data_dir '{tmp_path}/data'. "
f"Then use load_data to read it back from the same data_dir. "
f"Report what you loaded as the result."
},
validate_graph=False,
)
assert result.success
assert result.output.get("result") is not None
@pytest.mark.asyncio
async def test_worker_multi_tool(runtime, llm_provider, tool_registry):
"""Worker uses multiple tools in sequence."""
graph = _build_worker_graph(tools=["example_tool", "get_current_time"])
executor = make_executor(runtime, llm_provider, tool_registry=tool_registry)
result = await executor.execute(
graph,
_worker_goal(),
{
"task": "First use get_current_time to find the current day of the week. "
"Then use example_tool to process that day name with uppercase=true. "
"Report the uppercased day name as the result."
},
validate_graph=False,
)
assert result.success
assert result.output.get("result") is not None
@@ -50,7 +50,7 @@ async def test_worker_handoff_injects_formatted_request_into_queen() -> None:
@pytest.mark.asyncio
async def test_worker_handoff_ignores_queen_and_judge_streams() -> None:
async def test_worker_handoff_ignores_queen_stream() -> None:
bus = EventBus()
manager = SessionManager()
session = _make_session(bus)
@@ -63,11 +63,6 @@ async def test_worker_handoff_ignores_queen_and_judge_streams() -> None:
node_id="queen",
reason="should be ignored",
)
await bus.emit_escalation_requested(
stream_id="judge",
node_id="judge",
reason="should be ignored",
)
assert queen_node.inject_event.await_count == 0
+1
@@ -240,6 +240,7 @@ class TestEventSerialization:
"stop_reason": "stop",
"input_tokens": 10,
"output_tokens": 20,
"cached_tokens": 0,
"model": "gpt-4",
}
+57
@@ -601,6 +601,63 @@ class TestReportToParentExecution:
# Metadata should include report_count
assert result_data["metadata"]["report_count"] == 1
@pytest.mark.asyncio
async def test_subagent_tool_events_visible_on_shared_bus(
self, runtime, parent_node_spec, subagent_node_spec
):
"""Subagent internal tool calls should emit TOOL_CALL events on the shared bus."""
bus = EventBus()
tool_events = []
async def handler(event):
tool_events.append(event)
bus.subscribe(
event_types=[EventType.TOOL_CALL_STARTED, EventType.TOOL_CALL_COMPLETED],
handler=handler,
)
subagent_llm = MockStreamingLLM(
[
set_output_scenario("findings", "Results"),
text_finish_scenario(),
]
)
node = EventLoopNode(
event_bus=bus,
config=LoopConfig(max_iterations=10),
)
memory = SharedMemory()
scoped = memory.with_permissions(read_keys=[], write_keys=["result"])
ctx = NodeContext(
runtime=runtime,
node_id="parent",
node_spec=parent_node_spec,
memory=scoped,
input_data={},
llm=subagent_llm,
available_tools=[],
goal_context="",
goal=None,
node_registry={"researcher": subagent_node_spec},
)
result = await node._execute_subagent(ctx, "researcher", "Do research")
assert result.is_error is False
# Subagent tool calls should appear on the shared bus
started = [e for e in tool_events if e.type == EventType.TOOL_CALL_STARTED]
completed = [e for e in tool_events if e.type == EventType.TOOL_CALL_COMPLETED]
assert len(started) >= 1, "Expected at least one TOOL_CALL_STARTED from subagent"
assert len(completed) >= 1, "Expected at least one TOOL_CALL_COMPLETED from subagent"
# Events should have the namespaced subagent node_id
for evt in started + completed:
assert "subagent" in evt.node_id, f"Expected namespaced node_id, got: {evt.node_id}"
@pytest.mark.asyncio
async def test_event_bus_receives_subagent_report(
self, runtime, parent_node_spec, subagent_node_spec
+261
@@ -0,0 +1,261 @@
"""Tests for queen-level trigger system.
Verifies that:
- Timer triggers fire inject_trigger() on the queen node
- Webhook triggers fire inject_trigger() via EventBus WEBHOOK_RECEIVED
- Triggers are skipped silently when the queen node is unavailable
- Triggers are discarded when worker_runtime is None (gating)
- remove_trigger cleans up webhook subscription
- run_agent_with_input is in _QUEEN_RUNNING_TOOLS
- System prompts reference run_agent_with_input, not start_worker()
"""
from __future__ import annotations
import asyncio
from types import SimpleNamespace
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from framework.runtime.event_bus import EventBus
from framework.runtime.triggers import TriggerDefinition
from framework.server.session_manager import Session
def _make_session(event_bus: EventBus, session_id: str = "session_trigger_test") -> Session:
return Session(id=session_id, event_bus=event_bus, llm=object(), loaded_at=0.0)
def _make_executor(queen_node) -> SimpleNamespace:
return SimpleNamespace(node_registry={"queen": queen_node})
@pytest.mark.asyncio
async def test_interval_timer_fires_inject_trigger_on_queen_node() -> None:
"""Timer with interval_minutes fires inject_trigger() on the queen node."""
from framework.graph.event_loop_node import TriggerEvent
from framework.tools.queen_lifecycle_tools import _start_trigger_timer
bus = EventBus()
session = _make_session(bus)
session.worker_runtime = object() # non-None → worker is loaded
queen_node = SimpleNamespace(inject_trigger=AsyncMock())
session.queen_executor = _make_executor(queen_node)
tdef = TriggerDefinition(
id="test-timer",
trigger_type="timer",
trigger_config={"interval_minutes": 0.001}, # ~60ms
task="run it",
)
await _start_trigger_timer(session, "test-timer", tdef)
# Let the timer fire at least once
await asyncio.sleep(0.15)
# Cancel the background task
task = session.active_timer_tasks.get("test-timer")
if task:
task.cancel()
try:
await task
except asyncio.CancelledError:
pass
assert queen_node.inject_trigger.await_count >= 1
# Inspect the TriggerEvent passed to inject_trigger
call_args = queen_node.inject_trigger.await_args_list[0]
trigger: TriggerEvent = call_args.args[0]
assert trigger.trigger_type == "timer"
assert trigger.source_id == "test-timer"
assert trigger.payload.get("task") == "run it"
@pytest.mark.asyncio
async def test_timer_skipped_when_queen_node_unavailable() -> None:
"""No inject_trigger call and no exception when queen executor is not set."""
from framework.tools.queen_lifecycle_tools import _start_trigger_timer
bus = EventBus()
session = _make_session(bus)
session.worker_runtime = object()
session.queen_executor = None # queen not ready
tdef = TriggerDefinition(
id="no-queen-timer",
trigger_type="timer",
trigger_config={"interval_minutes": 0.001},
task="should not fire",
)
await _start_trigger_timer(session, "no-queen-timer", tdef)
await asyncio.sleep(0.15)
task = session.active_timer_tasks.get("no-queen-timer")
if task:
task.cancel()
try:
await task
except asyncio.CancelledError:
pass
# No exception raised, nothing to assert beyond completion
@pytest.mark.asyncio
async def test_webhook_trigger_fires_inject_trigger() -> None:
"""WEBHOOK_RECEIVED on EventBus → inject_trigger() on the queen node."""
from framework.graph.event_loop_node import TriggerEvent
from framework.tools.queen_lifecycle_tools import _start_trigger_webhook
bus = EventBus()
session = _make_session(bus)
session.worker_runtime = object()
queen_node = SimpleNamespace(inject_trigger=AsyncMock())
session.queen_executor = _make_executor(queen_node)
tdef = TriggerDefinition(
id="test-webhook",
trigger_type="webhook",
trigger_config={"path": "/hooks/test", "methods": ["POST"]},
task="process it",
)
# Patch WebhookServer to avoid binding a real port
mock_server = MagicMock()
mock_server.is_running = False
mock_server.add_route = MagicMock()
mock_server.start = AsyncMock()
with patch("framework.runtime.webhook_server.WebhookServer", return_value=mock_server):
with patch("framework.runtime.webhook_server.WebhookServerConfig"):
await _start_trigger_webhook(session, "test-webhook", tdef)
# Simulate an incoming webhook event on the EventBus
await bus.emit_webhook_received(
source_id="test-webhook",
path="/hooks/test",
method="POST",
headers={},
payload={"event": "push"},
)
await asyncio.sleep(0.05) # let handler run
assert queen_node.inject_trigger.await_count == 1
trigger: TriggerEvent = queen_node.inject_trigger.await_args_list[0].args[0]
assert trigger.trigger_type == "webhook"
assert trigger.source_id == "test-webhook"
assert trigger.payload["method"] == "POST"
assert trigger.payload["path"] == "/hooks/test"
assert trigger.payload["task"] == "process it"
assert trigger.payload["payload"] == {"event": "push"}
@pytest.mark.asyncio
async def test_webhook_trigger_discarded_when_no_worker() -> None:
"""inject_trigger is NOT called when no worker is loaded."""
from framework.tools.queen_lifecycle_tools import _start_trigger_webhook
bus = EventBus()
session = _make_session(bus)
session.worker_runtime = None # no worker
queen_node = SimpleNamespace(inject_trigger=AsyncMock())
session.queen_executor = _make_executor(queen_node)
tdef = TriggerDefinition(
id="no-worker-webhook",
trigger_type="webhook",
trigger_config={"path": "/hooks/noop", "methods": ["POST"]},
task="should not fire",
)
mock_server = MagicMock()
mock_server.is_running = False
mock_server.add_route = MagicMock()
mock_server.start = AsyncMock()
with patch("framework.runtime.webhook_server.WebhookServer", return_value=mock_server):
with patch("framework.runtime.webhook_server.WebhookServerConfig"):
await _start_trigger_webhook(session, "no-worker-webhook", tdef)
await bus.emit_webhook_received(
source_id="no-worker-webhook",
path="/hooks/noop",
method="POST",
headers={},
payload={},
)
await asyncio.sleep(0.05)
assert queen_node.inject_trigger.await_count == 0
@pytest.mark.asyncio
async def test_remove_trigger_cleans_up_webhook_subscription() -> None:
"""After remove_trigger(), WEBHOOK_RECEIVED no longer calls inject_trigger."""
from framework.tools.queen_lifecycle_tools import _start_trigger_webhook
bus = EventBus()
session = _make_session(bus)
session.worker_runtime = object()
queen_node = SimpleNamespace(inject_trigger=AsyncMock())
session.queen_executor = _make_executor(queen_node)
tdef = TriggerDefinition(
id="removable-webhook",
trigger_type="webhook",
trigger_config={"path": "/hooks/removable", "methods": ["POST"]},
task="run it",
)
mock_server = MagicMock()
mock_server.is_running = False
mock_server.add_route = MagicMock()
mock_server.start = AsyncMock()
with patch("framework.runtime.webhook_server.WebhookServer", return_value=mock_server):
with patch("framework.runtime.webhook_server.WebhookServerConfig"):
await _start_trigger_webhook(session, "removable-webhook", tdef)
# Manually unsubscribe (mirrors what remove_trigger does)
sub_id = session.active_webhook_subs.pop("removable-webhook", None)
assert sub_id is not None
bus.unsubscribe(sub_id)
# Now fire — should NOT reach queen
await bus.emit_webhook_received(
source_id="removable-webhook",
path="/hooks/removable",
method="POST",
headers={},
payload={},
)
await asyncio.sleep(0.05)
assert queen_node.inject_trigger.await_count == 0
assert "removable-webhook" not in session.active_webhook_subs
def test_run_agent_with_input_in_running_tools() -> None:
"""run_agent_with_input must be available to the queen in RUNNING phase."""
from framework.agents.queen.nodes import _QUEEN_RUNNING_TOOLS
assert "run_agent_with_input" in _QUEEN_RUNNING_TOOLS
def test_system_prompt_uses_correct_tool_name() -> None:
"""Trigger handling rules must reference run_agent_with_input, not start_worker()."""
from framework.agents.queen.nodes import (
_queen_behavior_running,
_queen_behavior_staging,
)
assert "run_agent_with_input" in _queen_behavior_running
assert "start_worker()" not in _queen_behavior_running
assert "run_agent_with_input" in _queen_behavior_staging
assert "start_worker()" not in _queen_behavior_staging
-171
@@ -1,171 +0,0 @@
#!/usr/bin/env python3
"""
Verification script for Aden Hive Framework MCP Server
This script checks if the MCP server is properly installed and configured.
"""
import json
import logging
import subprocess
import sys
from pathlib import Path
logger = logging.getLogger(__name__)
def setup_logger():
"""Configure logger for CLI usage."""
if not logger.handlers:
handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter("%(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
class Colors:
GREEN = "\033[0;32m"
YELLOW = "\033[1;33m"
RED = "\033[0;31m"
BLUE = "\033[0;34m"
NC = "\033[0m"
def check(description: str) -> bool:
"""Log the check description before running it."""
logger.info(f"Checking {description}...")
sys.stdout.flush()
return True
def success(msg: str = "OK"):
"""Log success message."""
logger.info(f"{Colors.GREEN}{msg}{Colors.NC}")
def warning(msg: str):
"""Log warning message."""
logger.warning(f"{Colors.YELLOW}{msg}{Colors.NC}")
def error(msg: str):
"""Log error message."""
logger.error(f"{Colors.RED}{msg}{Colors.NC}")
def main():
"""Run verification checks."""
setup_logger()
logger.info("=== MCP Server Verification ===")
logger.info("")
script_dir = Path(__file__).parent.absolute()
all_checks_passed = True
# Check 1: Framework package installed
check("framework package installation")
try:
result = subprocess.run(
[sys.executable, "-c", "import framework; print(framework.__file__)"],
capture_output=True,
text=True,
check=True,
encoding="utf-8",
)
framework_path = result.stdout.strip()
success(f"installed at {framework_path}")
except subprocess.CalledProcessError:
error("framework package not found")
logger.info(f" Run: uv pip install -e {script_dir}")
all_checks_passed = False
# Check 2: MCP dependencies
check("MCP dependencies")
missing_deps = []
for dep in ["mcp", "fastmcp"]:
try:
subprocess.run(
[sys.executable, "-c", f"import {dep}"],
capture_output=True,
check=True,
encoding="utf-8",
)
except subprocess.CalledProcessError:
missing_deps.append(dep)
if missing_deps:
error(f"missing: {', '.join(missing_deps)}")
logger.info(f" Run: uv pip install {' '.join(missing_deps)}")
all_checks_passed = False
else:
success("all installed")
# Check 3: MCP configuration file
check("MCP configuration file")
mcp_config = script_dir / ".mcp.json"
if mcp_config.exists():
try:
with open(mcp_config, encoding="utf-8") as f:
config = json.load(f)
if "mcpServers" in config:
success("found and valid")
for name, server_config in config.get("mcpServers", {}).items():
logger.info(f" Server: {name}")
logger.info(f" Command: {server_config.get('command')}")
logger.info(f" Args: {' '.join(server_config.get('args', []))}")
else:
warning("exists but missing mcpServers config")
all_checks_passed = False
except json.JSONDecodeError:
error("invalid JSON format")
all_checks_passed = False
else:
warning("not found (optional)")
logger.info(f" Location would be: {mcp_config}")
# Check 4: Framework modules
check("core framework modules")
modules_to_check = [
"framework.runtime.core",
"framework.graph.executor",
"framework.graph.node",
"framework.builder.query",
"framework.llm",
]
failed_modules = []
for module in modules_to_check:
try:
subprocess.run(
[sys.executable, "-c", f"import {module}"],
capture_output=True,
check=True,
encoding="utf-8",
)
except subprocess.CalledProcessError:
failed_modules.append(module)
if failed_modules:
error(f"failed to import: {', '.join(failed_modules)}")
all_checks_passed = False
else:
success(f"all {len(modules_to_check)} modules OK")
logger.info("")
logger.info("=" * 40)
if all_checks_passed:
logger.info(f"{Colors.GREEN}✓ All checks passed!{Colors.NC}")
logger.info("")
logger.info("Your framework is ready to use.")
else:
logger.info(f"{Colors.RED}✗ Some checks failed{Colors.NC}")
logger.info("")
logger.info("To fix issues, run:")
logger.info(f" uv run python {script_dir / 'setup_mcp.py'}")
logger.info("")
if __name__ == "__main__":
main()
-201
@@ -1,201 +0,0 @@
# Antigravity IDE Setup
Use the Hive agent framework (MCP servers and skills) inside [Antigravity IDE](https://antigravity.google/) (Google's AI IDE).
---
## Quick start (3 steps)
**Repo root** = the folder that contains `core/`, `tools/`, and `.agent/` (where you cloned the project).
1. **Open a terminal** and go to the hive repo root (e.g. `cd ~/hive`).
2. **Run the setup script** (use `./` so the script runs from this repo; don't use `/scripts/...`):
```bash
./scripts/setup-antigravity-mcp.sh
```
3. **Restart Antigravity IDE.** You should see **coder-tools** and **tools** as available MCP servers.
> **Important:** Always restart/refresh Antigravity IDE after running the setup script or making any changes to MCP configuration. The IDE only loads MCP servers on startup.
Done. For details, prerequisites, and troubleshooting, read on.
---
## What you get after setup
- **coder-tools** – Create and manage agents (scaffolding via `initialize_and_build_agent`, file I/O, tool discovery).
- **tools** – File operations, web search, and other agent tools.
- **Documentation** – Guided docs for building and testing agents.
---
## Prerequisites
- [Antigravity IDE](https://antigravity.google/) installed.
- **Python 3.11+** and project dependencies. If you haven't set up the repo yet, from repo root run:
```bash
./quickstart.sh
```
- **MCP server dependencies** (one-time). From repo root:
```bash
cd core && ./setup_mcp.sh
```
---
## Full setup (step by step)
### Step 1: Install MCP dependencies (one-time)
From the **repo root**:
```bash
cd core
./setup_mcp.sh
```
This installs the framework and MCP packages and checks that the server can start.
### Step 2: Register MCP servers with Antigravity
Antigravity reads MCP config from your **user config file** (`~/.gemini/antigravity/mcp_config.json`), not from the project. The easiest way is to run the setup script from the **hive repo folder**:
```bash
./scripts/setup-antigravity-mcp.sh
```
The script finds the repo root, writes `~/.gemini/antigravity/mcp_config.json` with the right paths, and you don't edit any paths by hand.
> **Important:** Always restart/refresh Antigravity IDE after running the setup script. MCP servers are only loaded on IDE startup.
The **coder-tools** and **tools** servers should show up after restart.
**Using Claude Code instead?** Run:
```bash
./scripts/setup-antigravity-mcp.sh --claude
```
That writes `~/.claude/mcp.json` as well.
**Prefer to do it manually?** See [Manual MCP config](#manual-mcp-config-template) below. You'll create `~/.gemini/antigravity/mcp_config.json` (or `~/.claude/mcp.json`) with absolute paths to your repo's `core` and `tools` folders.
### Step 3: Use MCP tools + docs
Use the `coder-tools` and `tools` MCP servers in Antigravity, and use docs in `docs/` for workflow guidance.
---
## What's in the repo (`.agent/`)
```
.agent/
├── mcp_config.json # Template for MCP servers (coder-tools, tools)
```
The **setup script** writes your **user** config (`~/.gemini/antigravity/mcp_config.json`) using paths from **this repo**. The file in `.agent/` is the template; Antigravity itself uses the file in your home directory.
---
## Troubleshooting
**MCP servers don't connect**
- Run the setup script again from the hive repo root: `./scripts/setup-antigravity-mcp.sh`, then restart Antigravity.
- Make sure Python and deps are installed: from repo root run `./quickstart.sh`.
- Check that the servers can start: from repo root run
`cd tools && uv run coder_tools_server.py --stdio` (Ctrl+C to stop), and in another terminal
`cd tools && uv run mcp_server.py --stdio` (Ctrl+C to stop).
If those fail, fix the errors first (e.g. install deps with `uv sync`).
**"Module not found" or import errors**
- Open the **repo root** as the project in the IDE (the folder that has `core/` and `tools/`).
- If you edited `~/.gemini/antigravity/mcp_config.json` by hand, make sure `--directory` paths are **absolute** (e.g. `/Users/you/hive/core` and `/Users/you/hive/tools`).
**MCP tools don't show up in the UI**
- Antigravity may need a restart. Use the files in `docs/` as documentation; the MCP tools (`coder-tools`, `tools`) are the required integration point.
---
## Verification prompt (optional)
Paste this into Antigravity to check that MCP is set up. It doesn't use your machine's paths; anyone can use it.
```
Check the Hive + Antigravity integration:
1. MCP: List available MCP servers/tools. Confirm that "coder-tools" and "tools" (or equivalent) are connected. If not, tell the user to run ./scripts/setup-antigravity-mcp.sh from the hive repo root, then restart Antigravity (see docs/antigravity-setup.md).
2. Docs: Confirm that the project has `docs/` with setup/developer guides for the workflow.
3. Result: Reply with PASS (MCP OK), PARTIAL (some MCP tools missing), or FAIL (MCP unavailable), and one line on what to fix if not PASS.
```
If you get **PARTIAL** (e.g. MCP not connected), run `./scripts/setup-antigravity-mcp.sh` from the repo root and restart Antigravity.
---
## Manual MCP config template
Use this only if you don't want to run the setup script. Replace `/path/to/hive` with your actual repo root (e.g. the output of `pwd` when you're in the hive folder).
Save as `~/.gemini/antigravity/mcp_config.json` (Antigravity) or `~/.claude/mcp.json` (Claude Code), then **restart the IDE** to load the new configuration.
```json
{
"mcpServers": {
"coder-tools": {
"command": "uv",
"args": ["run", "--directory", "/path/to/hive/tools", "coder_tools_server.py", "--stdio"],
"disabled": false
},
"tools": {
"command": "uv",
"args": ["run", "--directory", "/path/to/hive/tools", "mcp_server.py", "--stdio"],
"disabled": false
}
}
}
```
Make sure `uv` is installed and available in your PATH. Note: Use `--directory` in args instead of `cwd` for Antigravity compatibility.
---
## Verify from the command line (optional)
From the **repo root**:
**Check that config exists**
```bash
test -f .agent/mcp_config.json && echo "OK: mcp_config.json" || echo "MISSING"
```
**Check that the config is valid JSON**
```bash
python3 -c "import json; json.load(open('.agent/mcp_config.json')); print('OK: valid JSON')"
```
**Test that MCP servers start** (two terminals)
```bash
# Terminal 1
cd tools && uv run coder_tools_server.py --stdio
# Terminal 2
cd tools && uv run mcp_server.py --stdio
```
If both start without errors, the config is fine.
---
## See also
- [Cursor IDE support](../README.md#cursor-ide-support) – Same MCP servers and skills for Cursor
- [MCP Integration Guide](../core/MCP_INTEGRATION_GUIDE.md) – How the framework MCP works
- [Environment setup](../ENVIRONMENT_SETUP.md) – Repo and Python setup
+63 -2
@@ -1,6 +1,6 @@
# Integration Bounty Program
# Bounty Program
Earn XP, Discord roles, and money by testing, documenting, and building integrations for the Aden agent framework.
Earn XP, Discord roles, and money by contributing to the Aden agent framework — from quick fixes to major features, plus integration testing and development.
## Why Contribute?
@@ -33,6 +33,10 @@ Lurkr auto-assigns the first two roles. Core Contributor requires sustained, qua
## Bounty Types
### Integration Bounties
Focused on the tool ecosystem — testing, documenting, and building integrations.
| Type | Label | Points | What You Do |
| --------------------- | ----------------- | ------ | -------------------------------------------------------------------------- |
| **Test a tool** | `bounty:test` | 20 | Test with a real API key, submit a report with logs |
@@ -42,6 +46,47 @@ Lurkr auto-assigns the first two roles. Core Contributor requires sustained, qua
Promoting a tool from unverified to verified is the final step — submit a PR moving it from `_register_unverified()` to `_register_verified()` after the [promotion checklist](promotion-checklist.md) is complete.
### Standard Bounties
General contributions to the framework, docs, tests, and infrastructure — not tied to a specific integration.
| Size | Label | Points | Scope |
| ------------ | ------------------ | ------ | ---------------------------------------------------------------------------------- |
| **Small** | `bounty:small` | 10 | Typo fixes, broken links, error message improvements, confirm/reproduce bug reports |
| **Medium** | `bounty:medium` | 30 | Bug fixes, new or improved unit tests, how-to guides, CLI UX improvements |
| **Large** | `bounty:large` | 75 | New features, performance optimizations with benchmarks, architecture docs |
| **Extreme** | `bounty:extreme` | 150 | Major subsystem work, security audits, cross-cutting refactors, new core capabilities |
#### Examples by size
**Small (10 pts):**
- Fix typos or broken links in documentation
- Improve an error message to include actionable guidance
- Add missing type annotations to a module
- Reproduce and confirm an open bug report with environment details
- Fix linting or CI warnings
**Medium (30 pts):**
- Fix a non-critical bug with a regression test
- Write a how-to guide or tutorial for a common workflow
- Add or significantly improve test coverage for a core module
- Improve CLI help text, argument validation, or UX
- Add structured logging or observability to a module
**Large (75 pts):**
- Implement a new user-facing feature end to end
- Performance optimization with before/after benchmarks
- Build a new CLI command or subcommand
- Write comprehensive architecture documentation for a subsystem
- Add a new credential adapter type
**Extreme (150 pts):**
- Design and implement a major subsystem (e.g., plugin system, caching layer)
- Security audit of a core module with findings and fixes
- Major refactor of core architecture (must have maintainer pre-approval)
- Build a complete example application or reference implementation
- End-to-end testing framework for agent workflows
## Quality Gates
- **PRs** must be merged by a maintainer (not self-merged)
@@ -52,12 +97,28 @@ Promoting a tool from unverified to verified is the final step — submit a PR m
## Labels
### Integration bounty labels
| Label | Color | Meaning |
| ------------------- | ------------------ | --------------------------------------- |
| `bounty:test` | `#1D76DB` (blue) | Test a tool with a real API key |
| `bounty:docs` | `#FBCA04` (yellow) | Write or improve documentation |
| `bounty:code` | `#D93F0B` (orange) | Health checker, bug fix, or improvement |
| `bounty:new-tool` | `#6F42C1` (purple) | Build a new integration from scratch |
### Standard bounty labels
| Label | Color | Meaning |
| ------------------- | ------------------ | -------------------------------------------------- |
| `bounty:small` | `#C2E0C6` (green) | Quick fix — typos, links, error messages |
| `bounty:medium` | `#0E8A16` (green) | Bug fix, tests, guides, CLI improvements |
| `bounty:large` | `#B60205` (red) | New feature, perf work, architecture docs |
| `bounty:extreme` | `#000000` (black) | Major subsystem, security audit, core refactor |
### Difficulty labels
| Label | Color | Meaning |
| ------------------- | ------------------ | --------------------------------------- |
| `difficulty:easy` | `#BFD4F2` | Good first contribution |
| `difficulty:medium` | `#D4C5F9` | Requires some familiarity |
| `difficulty:hard` | `#F9D0C4` | Significant effort or expertise needed |
+66 -6
@@ -1,6 +1,6 @@
# Contributor Guide — Integration Bounty Program
# Contributor Guide — Bounty Program
Earn XP, Discord roles, and eventually real money by testing and building integrations for the Aden agent framework.
Earn XP, Discord roles, and eventually real money by contributing to the Aden agent framework — from quick fixes to major features and integration work.
## Getting Started
@@ -30,7 +30,13 @@ XP comes from GitHub bounties (auto-pushed on PR merge) and Discord activity in
## Bounty Types
### Test a Tool (20 pts)
There are two categories: **integration bounties** (tool-specific) and **standard bounties** (general contributions).
---
### Integration Bounties
#### Test a Tool (20 pts)
Test an unverified tool with a real API key and report what happens.
@@ -41,7 +47,7 @@ Test an unverified tool with a real API key and report what happens.
Report both successes and failures. Finding bugs is valuable.
### Write Docs (20 pts)
#### Write Docs (20 pts)
Write a README for a tool that's missing one.
@@ -52,7 +58,7 @@ Write a README for a tool that's missing one.
Function names and API URLs must match reality — no AI hallucinations.
### Code Contribution (30 pts)
#### Code Contribution (30 pts)
Add a health checker, fix a bug, or improve an integration.
@@ -66,7 +72,7 @@ Add a health checker, fix a bug, or improve an integration.
1. Find a bug during testing, file an issue
2. Fix it in a PR with a test covering the bug
### New Integration (75 pts)
#### New Integration (75 pts)
Build a complete integration from scratch.
@@ -77,6 +83,60 @@ Build a complete integration from scratch.
Expect multiple review rounds.
---
### Standard Bounties
General contributions to the framework — not tied to a specific integration. Sized by effort and impact.
#### Small (10 pts)
Quick, focused fixes. Great for first-time contributors.
- Fix typos or broken links in documentation
- Improve an error message to include actionable guidance
- Add missing type annotations to a module
- Reproduce and confirm a bug report with environment details
- Fix linting or CI warnings
**How:** Open a PR with the fix. Tag with `bounty:small`.
#### Medium (30 pts)
Meaningful improvements that require reading and understanding existing code.
- Fix a non-critical bug with a regression test
- Write a how-to guide or tutorial
- Add or significantly improve test coverage for a core module
- Improve CLI help text, argument validation, or UX
- Add structured logging or observability to a module
**How:** Claim the issue first. Submit a PR with tests where applicable. Tag with `bounty:medium`.
#### Large (75 pts)
Significant work that adds real capability or improves the project substantially.
- Implement a new user-facing feature end to end
- Performance optimization with before/after benchmarks
- Build a new CLI command or subcommand
- Write comprehensive architecture documentation for a subsystem
- Add a new credential adapter type
**How:** Claim the issue and discuss your approach in the issue before starting. Submit a PR. Tag with `bounty:large`.
#### Extreme (150 pts)
Major contributions that shape the project's direction. Requires maintainer pre-approval.
- Design and implement a major subsystem (e.g., plugin system, caching layer)
- Security audit of a core module with findings and fixes
- Major refactor of core architecture
- Build a complete example application or reference implementation
- End-to-end testing framework for agent workflows
**How:** Comment on the issue with a design proposal. Wait for maintainer approval before starting work. Tag with `bounty:extreme`.
## Rules
1. **Claim before you start** — comment on the issue, wait for assignment
+37 -1
@@ -27,7 +27,7 @@ When someone comments "I'd like to work on this":
5. Merge — the GitHub Action auto-awards XP and posts to Discord
6. Close the linked bounty issue
### Quality Gates
### Quality Gates — Integration Bounties
**`bounty:docs`:**
- [ ] Follows the [tool README template](templates/tool-readme-template.md)
@@ -51,6 +51,31 @@ When someone comments "I'd like to work on this":
- [ ] `make check && make test` passes
- [ ] Registered in `_register_unverified()` (not verified)
### Quality Gates — Standard Bounties
**`bounty:small`:**
- [ ] Change is correct and doesn't introduce regressions
- [ ] CI passes
- [ ] Scope matches "small" — not padded into a bigger change
**`bounty:medium`:**
- [ ] CI passes
- [ ] Bug fixes include a regression test
- [ ] Docs/guides are accurate and follow existing style
- [ ] Not AI-generated without verification
**`bounty:large`:**
- [ ] Design was discussed in the issue before implementation
- [ ] CI passes, new tests cover the change
- [ ] Benchmarks included for performance work (before/after)
- [ ] Architecture docs reviewed by a second maintainer
**`bounty:extreme`:**
- [ ] Maintainer pre-approved the design proposal before work began
- [ ] CI passes, comprehensive test coverage
- [ ] Documentation updated to reflect the change
- [ ] Reviewed by at least two maintainers
### Rejecting Submissions
1. Leave specific, constructive feedback
@@ -78,6 +103,8 @@ If a Core Contributor is inactive 8+ weeks, reach out privately first, then remo
Post dollar values in `#bounty-payouts` (Core Contributors only):
### Integration bounties
| Bounty Type | Dollar Range |
|-------------|-------------|
| `bounty:test` | $10–30 |
@@ -85,6 +112,15 @@ Post dollar values in `#bounty-payouts` (Core Contributors only):
| `bounty:code` | $20–50 |
| `bounty:new-tool` | $50–150 |
### Standard bounties
| Bounty Type | Dollar Range |
|-------------|-------------|
| `bounty:small` | $5–15 |
| `bounty:medium` | $20–50 |
| `bounty:large` | $50–150 |
| `bounty:extreme` | $150–500 |
**Payout:** PR merged → verify quality → record in `#bounty-payouts` → process payment.
XP is always awarded regardless of budget. Money is a bonus layer.
+1 -1
@@ -14,7 +14,7 @@ Complete setup from zero to running. Estimated time: 30 minutes.
./scripts/setup-bounty-labels.sh
```
This creates 7 labels: 4 bounty types (`bounty:test`, `bounty:docs`, `bounty:code`, `bounty:new-tool`) and 3 difficulty levels (`difficulty:easy`, `difficulty:medium`, `difficulty:hard`).
This creates 11 labels: 4 integration bounty types (`bounty:test`, `bounty:docs`, `bounty:code`, `bounty:new-tool`), 4 standard bounty sizes (`bounty:small`, `bounty:medium`, `bounty:large`, `bounty:extreme`), and 3 difficulty levels (`difficulty:easy`, `difficulty:medium`, `difficulty:hard`).
## Step 2: Create Discord Channels (3 min)
-4
@@ -102,10 +102,6 @@ The repository includes a `.claude/settings.json` hook that automatically runs `
The `.cursorrules` file at the repo root tells Cursor's AI the project's style rules (line length, import order, quote style, etc.) so generated code follows convention.
### Antigravity IDE
Antigravity IDE (Google's AI-powered IDE) is supported via `.antigravity/mcp_config.json`. See [antigravity-setup.md](antigravity-setup.md) for setup and troubleshooting.
### Codex CLI
Codex CLI (OpenAI, v0.101.0+) is supported via `.codex/config.toml` (MCP server config). This file is tracked in git. Run `codex` in the repo root to use the configured MCP tools. See the [Codex CLI section in the README](../README.md#codex-cli) for details.
+580
@@ -0,0 +1,580 @@
# MCP Server Registry — Product & Business Requirements Document
**Status**: Draft v2
**Last updated**: 2026-03-13
**Authors**: Timothy
**Reviewers**: Platform, Product, OSS/Community, Security
---
## 1. Executive Summary
This document proposes an **MCP Server Registry** system that enables open-source contributors and Hive users to discover, publish, install, and manage MCP (Model Context Protocol) servers for use with Hive agents.
Today, MCP server configuration is static, duplicated across agents, and limited to servers that Hive spawns as subprocesses. This makes it impractical for users who run their own MCP servers on the same host, and impossible for the community to contribute standalone MCP integrations without modifying Hive internals.
The registry consists of three components:
1. **A public GitHub repository** (`hive-mcp-registry`) — a curated index where contributors submit MCP server entries via pull request
2. **Local registry tooling** — CLI commands and a `~/.hive/mcp_registry/` directory for installing, managing, and connecting to MCP servers
3. **Framework integration** — changes to Hive's `ToolRegistry`, `MCPClient`, and agent runner so agents can flexibly select which registry servers they need
---
## 2. Problem Statement
### 2.1 Current State
- Each Hive agent has a static `mcp_servers.json` file that hardcodes MCP server connection details.
- All 150+ tools live in a single monolithic `mcp_server.py` — contributors add tools to this one server.
- There is no mechanism for standalone MCP servers (e.g., a Jira MCP, a Notion MCP, or a custom database MCP) to be discovered or used by Hive agents.
- Each agent spawns its own MCP subprocess — no connection sharing across agents.
- Only `stdio` and basic `http` transports are supported. No unix sockets, no SSE, no reconnection.
- External MCP servers already running on the host cannot be easily registered.
### 2.2 Who Is Affected
| Persona | Pain Point |
|---|---|
| **OSS contributor** | Wants to publish a standalone MCP server for the Hive ecosystem but has no pathway to do so without modifying Hive core |
| **Self-hosted user** | Runs multiple MCP servers on the same host (Slack, GitHub, database tools) and wants Hive agents to discover them |
| **Agent builder** | Copies the same `mcp_servers.json` boilerplate across every agent; no way to say "use whatever the user has installed" |
| **Platform team** | Cannot manage MCP servers centrally; each agent manages its own connections independently |
### 2.3 Impact of Not Solving
- The Hive MCP ecosystem remains closed — growth depends entirely on tools being added to the monolithic server.
- Users with existing MCP infrastructure (from Claude Desktop, Cursor, or other MCP-compatible tools) cannot leverage it with Hive.
- Resource waste from duplicate subprocess spawning across agents.
- No path to community-contributed integrations beyond the core tool set.
---
## 3. Goals & Success Criteria
### 3.1 Primary Goals
| # | Goal | Metric |
|---|---|---|
| G1 | A contributor can register a new MCP server in under 5 minutes | Time from fork to PR submission |
| G2 | A user can install and use a registry MCP server in under 2 minutes | Time from `hive mcp install X` to first tool call |
| G3 | Agents can dynamically select MCP servers by name or tag without hardcoding configs | Agents use `mcp_registry.json` selectors instead of full server configs |
| G4 | Multiple agents share MCP connections instead of duplicating them | One subprocess/connection per unique server, not per agent |
| G5 | External MCP servers already running on the host can be registered with a single command | `hive mcp add --name X --url http://...` works end-to-end |
| G6 | Zero breaking changes to existing agent configurations | All current `mcp_servers.json` files continue to work unchanged |
### 3.2 Developer Success Goals
| # | Goal | Metric |
|---|---|---|
| G7 | First-install success rate exceeds 90% | Successful `hive mcp install` / total attempts (tracked via CLI telemetry opt-in) |
| G8 | First-tool-call success rate exceeds 85% after install | Successful tool invocation within 5 minutes of install |
| G9 | Users can self-diagnose and resolve config/auth issues without filing support tickets | Median time from error to resolution <5 minutes; support ticket volume per server <1/month |
| G10 | Registry entries remain healthy over time | % of entries passing automated health validation at 30/60/90 days |
| G11 | Server upgrades do not silently break agents | Zero undetected tool-signature changes on upgrade |
### 3.3 Non-Goals (Explicit Exclusions)
- **Billing or monetization** — the registry is free and open-source.
- **Hosting MCP servers** — the registry only stores metadata; actual servers are installed/run by users.
- **Replacing `mcp_servers.json`** — the static config remains for backward compatibility and offline use.
- **Runtime agent-to-agent MCP sharing** — this is about discovery and connection, not inter-agent protocol.
- **Decomposing the monolithic `mcp_server.py`** — this is a future phase, not part of the initial build.
---
## 4. User Stories
### 4.1 Contributor: Publishing an MCP Server
> As an OSS contributor who has built a Jira MCP server, I want to register it in a public registry so that any Hive user can install and use it without modifying Hive code.
**Acceptance criteria:**
- `hive mcp init` scaffolds a manifest with my server's details pre-filled from introspection.
- `hive mcp validate ./manifest.json` passes locally before I open a PR.
- `hive mcp test ./manifest.json` starts my server, lists tools, calls a health check, and reports pass/fail.
- CI validates my manifest automatically (schema, naming, required fields, package existence).
- After merge, the server appears in `hive mcp search` for all users.
### 4.2 User: Installing an MCP Server from the Registry
> As a Hive user, I want to install a community MCP server and have my agents use it immediately.
**Acceptance criteria:**
- `hive mcp install jira` fetches the manifest and configures the server locally.
- If credentials are required, the CLI prompts me: "Jira requires JIRA_API_TOKEN (get one at https://...). Enter value:"
- `hive mcp health jira` confirms the server is reachable and tools are discoverable.
- My queen agent (with `auto_discover: true`) automatically picks up the new server's tools.
- `hive mcp info jira` shows trust tier, last health check, installed version, and loaded tools.
### 4.3 User: Registering a Local/Running MCP Server
> As a user running a custom database MCP server on `localhost:9090`, I want Hive agents to use it without publishing it to any public registry.
**Acceptance criteria:**
- `hive mcp add --name my-db --transport http --url http://localhost:9090` registers it.
- The server appears in `hive mcp list` and is available to agents that include it.
- If the server goes down, Hive logs a warning with actionable next steps and retries on next tool call.
### 4.4 Agent Builder: Selecting MCP Servers for a Worker
> As an agent builder, I want my worker agent to use specific MCP servers (e.g., Slack + Jira) without hardcoding connection details.
**Acceptance criteria:**
- I create `mcp_registry.json` in my agent directory with `{"include": ["slack", "jira"]}`.
- At runtime, the agent automatically connects to whatever Slack and Jira servers the user has installed.
- If a requested server isn't installed, startup logs explain: "Server 'jira' requested by mcp_registry.json but not installed. Run: hive mcp install jira"
### 4.5 Queen: Auto-Discovering Available MCP Servers
> As the queen agent, I want access to installed MCP servers so I can delegate tasks that require any tool.
**Acceptance criteria:**
- Queen's `mcp_registry.json` uses `{"profile": "all"}` to load all enabled servers.
- Startup logs list every loaded server and its tool count: "Loaded 3 registry servers: jira (4 tools), slack (6 tools), my-db (2 tools)"
- If tool names collide across servers, the resolution is deterministic and logged.
- Queen respects a configurable max tool budget to avoid prompt overload.
### 4.6 User: Diagnosing a Broken MCP Server
> As a user whose agent suddenly can't call Jira tools, I want to quickly find and fix the problem.
**Acceptance criteria:**
- `hive mcp doctor` checks all installed servers and reports: connection status, credential validity, tool discovery result, last error.
- `hive mcp doctor jira` gives detailed diagnostics: "jira: UNHEALTHY. Transport: stdio. Error: Process exited with code 1. Stderr: 'JIRA_API_TOKEN not set'. Fix: hive mcp config jira --set JIRA_API_TOKEN=your-token"
- `hive mcp inspect jira` shows the resolved config, override chain, and which agents include it.
- `hive mcp why-not jira --agent exports/my-agent` explains why a server was or was not loaded for an agent.
---
## 5. Requirements
### 5.1 Functional Requirements
#### 5.1.1 Registry Repository
| ID | Requirement | Priority |
|---|---|---|
| FR-1 | The registry is a public GitHub repo with a defined directory structure for server entries | P0 |
| FR-2 | Each server entry is a `manifest.json` file conforming to a JSON Schema | P0 |
| FR-3 | CI validates manifests on every PR (schema, naming, uniqueness, required fields) | P0 |
| FR-4 | A flat index (`registry_index.json`) is auto-generated on merge for client consumption | P0 |
| FR-5 | A `_template/` directory provides a starter manifest + README for contributors | P0 |
| FR-6 | `CONTRIBUTING.md` documents the 5-minute submission process with annotated examples for each transport type (stdio, http, unix, sse) | P0 |
| FR-7 | CI checks that `install.pip` packages exist on PyPI (if specified) | P1 |
| FR-8 | Tags follow a controlled taxonomy with new tags requiring maintainer approval | P1 |
| FR-9 | Canonical example manifests are provided for each transport type in `registry/_examples/` | P0 |
#### 5.1.2 Manifest Schema
The manifest has a **portable base layer** (framework-agnostic, usable by any MCP client) and an optional **hive extension block** (Hive-specific ergonomics).
| ID | Requirement | Priority |
|---|---|---|
| FR-10 | Manifest base includes: name, display_name, version, description, author, repository, license | P0 |
| FR-11 | Manifest declares supported transports (stdio, http, unix, sse) with default | P0 |
| FR-12 | Manifest includes install instructions (pip package name, docker image, npm package) | P0 |
| FR-13 | Manifest lists tool names and descriptions (for pre-connect filtering) | P0 |
| FR-14 | Manifest declares credential requirements (env_var, description, help_url, required flag) | P0 |
| FR-15 | Manifest includes tags and categories for discovery | P1 |
| FR-16 | Manifest supports template variables (`{port}`, `{socket_path}`, `{name}`) in commands | P1 |
| FR-17 | Manifest includes `hive` extension block for Hive-specific metadata (see 5.1.8) | P1 |
#### 5.1.3 Manifest Trust & Quality Metadata
| ID | Requirement | Priority |
|---|---|---|
| FR-80 | Manifest includes `status` field: `official`, `verified`, or `community` | P0 |
| FR-81 | Manifest includes `maintainer` contact (email or GitHub handle) | P0 |
| FR-82 | Manifest includes `docs_url` pointing to server documentation | P1 |
| FR-83 | Manifest includes `example_agent_url` linking to an example agent using this server | P2 |
| FR-84 | Manifest includes `supported_os` list (e.g., `["linux", "macos", "windows"]`) | P1 |
| FR-85 | Manifest includes `deprecated` boolean and `deprecated_by` field for superseded entries | P1 |
| FR-86 | Registry index includes `last_validated_at` timestamp per entry (from automated CI health runs) | P1 |
#### 5.1.4 Local Registry
| ID | Requirement | Priority |
|---|---|---|
| FR-20 | `~/.hive/mcp_registry/installed.json` tracks all installed/registered servers | P0 |
| FR-21 | Servers can be sourced from the remote registry (`"source": "registry"`) or local (`"source": "local"`) | P0 |
| FR-22 | Each installed server has: transport preference, enabled/disabled state, and env/header overrides | P0 |
| FR-23 | The remote registry index is cached locally with configurable refresh interval | P1 |
| FR-24 | Each installed server tracks operational state: `last_health_check_at`, `last_health_status`, `last_error`, `last_used_at`, `resolved_package_version` | P1 |
| FR-25 | Each installed server supports `pinned: true` to prevent auto-update and `auto_update: true` for automatic version tracking | P1 |
#### 5.1.5 CLI Commands — Management
| ID | Requirement | Priority |
|---|---|---|
| FR-30 | `hive mcp install <name> [--version X]` — install from registry, optionally pin version | P0 |
| FR-31 | `hive mcp add --name X --transport T --url U` — register a local server | P0 |
| FR-32 | `hive mcp add --from manifest.json` — register from a manifest file | P1 |
| FR-33 | `hive mcp remove <name>` — uninstall/unregister | P0 |
| FR-34 | `hive mcp list` — list installed servers with status, health, and trust tier | P0 |
| FR-35 | `hive mcp list --available` — list all servers in remote registry | P1 |
| FR-36 | `hive mcp search <query>` — search by name/tag/description/tool-name | P1 |
| FR-37 | `hive mcp enable/disable <name>` — toggle without removing | P0 |
| FR-38 | `hive mcp health [name]` — check server reachability and tool discovery | P1 |
| FR-39 | `hive mcp update [name]` — refresh index cache or update a specific server | P1 |
| FR-40 | `hive mcp config <name> --set KEY=VAL` — set credential/env overrides | P0 |
| FR-41 | `hive mcp info <name>` — show full details: trust tier, version, tools, health, which agents use it | P0 |
#### 5.1.6 CLI Commands — Contributor Tooling
| ID | Requirement | Priority |
|---|---|---|
| FR-42 | `hive mcp init [--server-url URL]` — scaffold a manifest; if URL provided, introspects server to pre-fill tools list | P0 |
| FR-43 | `hive mcp validate <path>` — validate a manifest against the JSON Schema locally | P0 |
| FR-44 | `hive mcp test <path>` — start the server per manifest config, list tools, run health check, report pass/fail | P1 |
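A minimal sketch of the checks `hive mcp validate` (FR-43) could run locally. This is illustrative only: the real command would validate against the full JSON Schema referenced by `$schema`, while this sketch hand-rolls a few representative rules (required fields, the name pattern, transport consistency).

```python
import re

# Toy subset of the manifest rules; the real schema is the source of truth.
REQUIRED_FIELDS = ["name", "version", "description", "transport", "install"]
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9-]*$")

def validate_manifest(manifest: dict) -> list[str]:
    """Return human-readable validation errors; an empty list means valid."""
    errors = []
    for field in REQUIRED_FIELDS:
        if field not in manifest:
            errors.append(f"missing required field: {field}")
    name = manifest.get("name", "")
    if name and not NAME_PATTERN.match(name):
        errors.append(f"invalid name {name!r}: use lowercase letters, digits, hyphens")
    transport = manifest.get("transport", {})
    if transport and transport.get("default") not in transport.get("supported", []):
        errors.append("transport.default must be one of transport.supported")
    return errors
```

Returning a list of errors rather than raising on the first failure lets the CLI print every problem in one pass, which matters for the 5-minute contributor flow (G1).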
#### 5.1.7 CLI Commands — Diagnostics
| ID | Requirement | Priority |
|---|---|---|
| FR-45 | `hive mcp doctor [name]` — check all or one server: connection, credentials, tool discovery, last error; output actionable fix suggestions | P0 |
| FR-46 | `hive mcp inspect <name>` — show resolved config including override chain, transport details, and which agents include/exclude this server | P1 |
| FR-47 | `hive mcp why-not <name> --agent <path>` — explain why a server was or was not loaded for a specific agent's `mcp_registry.json` | P1 |
#### 5.1.8 Hive Extension Block in Manifest
The optional `hive` block in the manifest carries Hive-specific metadata that doesn't belong in the portable base:
| ID | Requirement | Priority |
|---|---|---|
| FR-90 | `hive.min_version` — minimum Hive version required | P1 |
| FR-91 | `hive.max_version` — maximum compatible Hive version (optional, for deprecation) | P2 |
| FR-92 | `hive.example_agent` — path or URL to an example agent using this server | P2 |
| FR-93 | `hive.profiles` — list of profile tags this server belongs to (e.g., `["core", "productivity", "developer"]`) | P1 |
| FR-94 | `hive.tool_namespace` — optional prefix for tool names to avoid collisions (e.g., `jira_`) | P1 |
#### 5.1.9 Agent Selection
| ID | Requirement | Priority |
|---|---|---|
| FR-50 | Agents can declare MCP server preferences in `mcp_registry.json` | P0 |
| FR-51 | Selection supports: explicit `include` list, `tags` matching, `exclude` blacklist | P0 |
| FR-52 | `profile` field loads servers matching a named profile (e.g., `"all"`, `"core"`, `"productivity"`) | P0 |
| FR-53 | If `mcp_registry.json` does not exist, no registry servers are loaded (backward compatible) | P0 |
| FR-54 | Missing requested servers produce warnings with actionable install instructions, not errors | P0 |
| FR-55 | Agent startup logs a summary of loaded/skipped registry servers with reasons | P0 |
| FR-56 | `max_tools` field caps total tools loaded from registry servers (prevents prompt overload) | P1 |
#### 5.1.10 Tool Resolution & Namespacing
| ID | Requirement | Priority |
|---|---|---|
| FR-100 | When multiple servers expose a tool with the same name, the first server in include-order wins (deterministic) | P0 |
| FR-101 | Tool collisions are logged at startup: "Tool 'search' from 'brave-search' shadowed by 'google-search' (loaded first)" | P0 |
| FR-102 | If a server declares `hive.tool_namespace`, its tools are prefixed: `jira_create_issue` instead of `create_issue` | P1 |
| FR-103 | `hive mcp inspect <name>` shows which tools are active vs shadowed | P1 |
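The first-server-wins rule (FR-100/FR-101) plus optional namespacing (FR-102) can be sketched as follows. The `servers` list shape is hypothetical — each entry carries just a name, an optional `tool_namespace`, and tool names — but the resolution logic is the one the table describes:

```python
def resolve_tools(servers: list[dict]) -> dict[str, str]:
    """Map each tool name to its owning server; log shadowed tools."""
    active: dict[str, str] = {}
    for server in servers:  # iterate in include-order: earlier servers win
        prefix = server.get("tool_namespace")
        for tool in server["tools"]:
            # FR-102: a declared namespace prefixes tools before collision checks
            name = f"{prefix}_{tool}" if prefix else tool
            if name in active:
                print(f"Tool {name!r} from {server['name']!r} shadowed by "
                      f"{active[name]!r} (loaded first)")
            else:
                active[name] = server["name"]
    return active
```

Because iteration order is exactly the include-order from §7.2, the winner for any collision is deterministic across runs.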
#### 5.1.11 Connection Management
| ID | Requirement | Priority |
|---|---|---|
| FR-60 | A process-level connection manager shares MCP connections across agents | P1 |
| FR-61 | Connections are reference-counted — disconnected when no agent uses them | P1 |
| FR-62 | HTTP/unix/SSE connections retry once on failure before raising an error | P1 |
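A sketch of the reference-counted sharing in FR-60/FR-61. `connect_fn` and the client's `close()` stand in for the real `MCPClient` API; the point is the bookkeeping: one connection per unique server, torn down only when the last agent releases it.

```python
import threading

class MCPConnectionManager:
    """Process-level pool sharing one MCP connection per server."""

    def __init__(self, connect_fn):
        self._connect_fn = connect_fn
        self._lock = threading.Lock()   # NFR-3: safe for multiple agents
        self._conns = {}                # server name -> client
        self._refcounts = {}            # server name -> agents using it

    def acquire(self, name: str):
        with self._lock:
            if name not in self._conns:
                self._conns[name] = self._connect_fn(name)
                self._refcounts[name] = 0
            self._refcounts[name] += 1
            return self._conns[name]

    def release(self, name: str) -> None:
        with self._lock:
            self._refcounts[name] -= 1
            if self._refcounts[name] == 0:
                # last user gone: disconnect and forget (FR-61)
                self._conns.pop(name).close()
                del self._refcounts[name]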
#### 5.1.12 Transport Extensions
| ID | Requirement | Priority |
|---|---|---|
| FR-70 | `MCPClient` supports unix socket transport via `httpx` UDS | P1 |
| FR-71 | `MCPClient` supports SSE transport via the official MCP Python SDK | P1 |
| FR-72 | `MCPServerConfig` includes `socket_path` field for unix transport | P1 |
### 5.2 Version Compatibility & Upgrade Safety
| ID | Requirement | Priority |
|---|---|---|
| VC-1 | Manifest includes `version` (semver) for the registry entry and `mcp_protocol_version` for the MCP spec | P0 |
| VC-2 | Manifest `hive` block includes optional `min_version` / `max_version` constraints | P1 |
| VC-3 | `hive mcp install` installs latest by default; `--version X` pins a specific version | P0 |
| VC-4 | `installed.json` records `resolved_package_version` (actual pip/npm version installed) | P1 |
| VC-5 | `hive mcp update <name>` compares old and new tool lists; warns if tools were removed or signatures changed | P1 |
| VC-6 | Agents can pin a resolved server version in `mcp_registry.json` via `"versions": {"jira": "1.2.0"}` | P2 |
| VC-7 | If a pinned version is no longer available, the agent logs an error with rollback instructions | P2 |
| VC-8 | `hive mcp update --dry-run` shows what would change without applying | P1 |
| VC-9 | Tool names and parameter schemas from the manifest constitute a compatibility contract; breaking changes require a major version bump | P1 |
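The breaking-change check behind VC-5 and VC-9 is essentially a diff of tool lists between the installed manifest and the candidate update. A sketch, assuming each tool entry may carry a `parameters` schema alongside its name (the exact field name is illustrative):

```python
def diff_tools(old: list[dict], new: list[dict]) -> list[str]:
    """Warn about removed tools or changed signatures before an update."""
    warnings = []
    new_by_name = {t["name"]: t for t in new}
    for tool in old:
        name = tool["name"]
        if name not in new_by_name:
            warnings.append(f"tool removed: {name}")
        elif tool.get("parameters") != new_by_name[name].get("parameters"):
            warnings.append(f"tool signature changed: {name}")
    return warnings
```

`hive mcp update --dry-run` (VC-8) would print these warnings without applying anything; a non-empty list on a non-major version bump indicates a VC-9 contract violation.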
### 5.3 Failure Handling & Diagnostics
| ID | Requirement | Priority |
|---|---|---|
| DX-1 | All MCP errors use structured error codes (e.g., `MCP_INSTALL_FAILED`, `MCP_AUTH_MISSING`, `MCP_CONNECT_TIMEOUT`, `MCP_TOOL_NOT_FOUND`, `MCP_PROTOCOL_MISMATCH`) | P0 |
| DX-2 | Every error message includes: what failed, why, and a suggested fix command | P0 |
| DX-3 | `hive mcp doctor` checks: connection, credentials (are required env vars set?), tool discovery, protocol version compatibility, Hive version compatibility | P0 |
| DX-4 | Agent startup emits a structured log line per registry server: `{server, status, tools_loaded, skipped_reason}` | P0 |
| DX-5 | Failed tool calls from registry servers include the server name and transport in the error context | P1 |
| DX-6 | `hive mcp doctor` output is machine-parseable (JSON with `--json` flag) for CI/automation | P2 |
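DX-1/DX-2 amount to an error type that always carries a stable code, the failing server, a cause, and a suggested fix. A sketch — field names are illustrative, not the final API:

```python
from dataclasses import dataclass

@dataclass
class MCPError(Exception):
    code: str     # stable identifier, e.g. "MCP_AUTH_MISSING" (DX-1)
    server: str   # which registry server failed (DX-5)
    detail: str   # what failed and why
    fix: str      # actionable next step (DX-2)

    def __str__(self) -> str:
        return f"[{self.code}] {self.server}: {self.detail}. Fix: {self.fix}"

err = MCPError(
    code="MCP_AUTH_MISSING",
    server="jira",
    detail="required env var JIRA_API_TOKEN is not set",
    fix="hive mcp config jira --set JIRA_API_TOKEN=<your-token>",
)
```

Because the code is machine-stable while the detail is free text, the same object can feed both the human-readable CLI output and the `--json` mode of `hive mcp doctor` (DX-6).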
### 5.4 Non-Functional Requirements
| ID | Requirement | Priority |
|---|---|---|
| NFR-1 | Registry index fetch must complete in <5s on typical internet connections | P1 |
| NFR-2 | Installing a server from registry must not require a Hive restart | P0 |
| NFR-3 | Connection manager must be thread-safe (multiple agents in same process) | P0 |
| NFR-4 | All new code must have unit test coverage | P0 |
| NFR-5 | Registry repo CI must run in <60s | P1 |
| NFR-6 | Manifest base schema must be framework-agnostic (usable by non-Hive MCP clients); Hive-specific fields live in the `hive` extension block | P1 |
| NFR-7 | `hive mcp install` prints a security notice on first use: "Registry servers run code on your machine. Only install servers you trust." | P0 |
---
## 6. Architecture Overview
```
┌────────────────────────────────────┐
│     hive-mcp-registry (GitHub)     │
│                                    │
│  registry/servers/jira/manifest    │
│  registry/servers/slack/manifest   │
│  ...                               │
│  registry_index.json (auto-built)  │
└─────────────────┬──────────────────┘
                  │ hive mcp update
                  │ (fetches index)
                  ▼
┌─────────────────────────────────────────────────────────────────────┐
│                        ~/.hive/mcp_registry/                        │
│                                                                     │
│  installed.json      config.json      cache/                        │
│  (jira, slack,       (preferences)    registry_index.json           │
│   my-custom-db)                       (cached remote)               │
└─────────────────────────────┬───────────────────────────────────────┘
              ┌───────────────┼───────────────────┐
              │               │                   │
              ▼               ▼                   ▼
       ┌──────────────┐  ┌─────────────┐  ┌──────────────┐
       │ Queen Agent  │  │Worker Agent │  │ hive mcp CLI │
       │              │  │             │  │              │
       │ mcp_registry │  │mcp_registry │  │ install      │
       │ .json:       │  │.json:       │  │ add / remove │
       │ profile: all │  │include:     │  │ doctor       │
       │              │  │ [jira]      │  │ init / test  │
       └──────┬───────┘  └────┬────────┘  └──────────────┘
              │               │
              ▼               ▼
     ┌──────────────────────────────────┐
     │       MCPConnectionManager       │
     │       (process singleton)        │
     │                                  │
     │ jira  → MCPClient (stdio, rc=2)  │
     │ slack → MCPClient (http, rc=1)   │
     │ my-db → MCPClient (unix, rc=1)   │
     └──────────────────────────────────┘
        │           │            │
        ▼           ▼            ▼
    ┌──────────┐ ┌────────┐  ┌────────────┐
    │ Jira MCP │ │Slack   │  │ Custom DB  │
    │ (stdio)  │ │MCP     │  │ MCP (unix  │
    │          │ │(http)  │  │ socket)    │
    └──────────┘ └────────┘  └────────────┘
```
### Component Responsibilities
| Component | Responsibility |
|---|---|
| **hive-mcp-registry** (GitHub repo) | Curated index of MCP server manifests; CI validates PRs; automated health checks |
| **~/.hive/mcp_registry/** | Local state: installed servers, cached index, user config, operational telemetry |
| **MCPRegistry** (Python module) | Core logic: install, remove, search, resolve for agent, doctor |
| **MCPConnectionManager** | Process-level connection pool with refcounting |
| **MCPClient** (extended) | Adds unix socket, SSE transports; retry on failure |
| **ToolRegistry** (extended) | New `load_registry_servers()` method with collision handling |
| **AgentRunner** (extended) | Loads `mcp_registry.json` alongside `mcp_servers.json`; logs resolution summary |
| **hive mcp CLI** | User-facing commands for management, diagnostics, and contributor tooling |
---
## 7. Data Models
### 7.1 Registry Manifest (`manifest.json`)
```json
{
"$schema": "https://raw.githubusercontent.com/aden-hive/hive-mcp-registry/main/schema/manifest.schema.json",
"name": "jira",
"display_name": "Jira MCP Server",
"version": "1.2.0",
"description": "Interact with Jira issues, boards, and sprints",
"author": {"name": "Jane Contributor", "github": "janedev", "url": "https://github.com/janedev"},
"maintainer": {"github": "janedev", "email": "jane@example.com"},
"repository": "https://github.com/janedev/jira-mcp-server",
"license": "MIT",
"status": "community",
"docs_url": "https://github.com/janedev/jira-mcp-server/blob/main/README.md",
"supported_os": ["linux", "macos", "windows"],
"deprecated": false,
"transport": {"supported": ["stdio", "http"], "default": "stdio"},
"install": {"pip": "jira-mcp-server", "docker": "ghcr.io/janedev/jira-mcp-server:latest", "npm": null},
"stdio": {"command": "uvx", "args": ["jira-mcp-server", "--stdio"]},
"http": {"default_port": 4010, "health_path": "/health", "command": "uvx", "args": ["jira-mcp-server", "--http", "--port", "{port}"]},
"unix": {"socket_template": "/tmp/mcp-{name}.sock", "command": "uvx", "args": ["jira-mcp-server", "--unix", "{socket_path}"]},
"tools": [
{"name": "jira_create_issue", "description": "Create a new Jira issue"},
{"name": "jira_search", "description": "Search Jira issues with JQL"},
{"name": "jira_update_issue", "description": "Update an existing issue"},
{"name": "jira_list_boards", "description": "List all Jira boards"}
],
"credentials": [
{"id": "jira_api_token", "env_var": "JIRA_API_TOKEN", "description": "Jira API token", "help_url": "https://id.atlassian.com/manage-profile/security/api-tokens", "required": true},
{"id": "jira_domain", "env_var": "JIRA_DOMAIN", "description": "Your Jira domain (e.g., mycompany.atlassian.net)", "required": true}
],
"tags": ["project-management", "atlassian", "issue-tracking"],
"categories": ["productivity"],
"mcp_protocol_version": "2024-11-05",
"hive": {
"min_version": "0.5.0",
"max_version": null,
"profiles": ["productivity", "developer"],
"tool_namespace": "jira",
"example_agent": "https://github.com/janedev/jira-mcp-server/tree/main/examples/hive-agent"
}
}
```
**Schema layering**:
- Everything outside `hive` is the **portable base** — usable by any MCP client.
- The `hive` block carries Hive-specific compatibility, profiles, namespacing, and examples.
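The `{port}`, `{socket_path}`, and `{name}` placeholders in the manifest's command args (FR-16) can be expanded with plain `str.format` substitution at connect time. A sketch, using the http args from the manifest above:

```python
def expand_args(args: list[str], context: dict[str, str]) -> list[str]:
    """Substitute {var} placeholders in each command argument."""
    # Unused context keys are ignored; args without placeholders pass through.
    return [arg.format(**context) for arg in args]

args = ["jira-mcp-server", "--http", "--port", "{port}"]
expanded = expand_args(
    args,
    {"port": "4010", "name": "jira", "socket_path": "/tmp/mcp-jira.sock"},
)
```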
### 7.2 Agent Selection (`mcp_registry.json`)
```json
{
"include": ["jira", "slack"],
"tags": ["crm"],
"exclude": ["github"],
"profile": "productivity",
"max_tools": 50,
"versions": {
"jira": "1.2.0"
}
}
```
**Selection precedence** (deterministic):
1. `profile` expands to a set of server names (union with `include` + `tags` matches).
2. `include` adds explicit servers.
3. `tags` adds servers whose tags overlap.
4. `exclude` removes from the final set (always wins).
5. Servers are loaded in `include`-order first, then alphabetically for tag/profile matches.
6. Tool collisions resolved by load order: first server wins.
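The six precedence rules can be sketched as one resolution function. `installed` is a hypothetical map of server name to its manifest dict, with `tags` and `hive.profiles` as in §7.1; collision handling (rule 6) happens downstream in tool loading and is omitted here:

```python
def select_servers(selection: dict, installed: dict[str, dict]) -> list[str]:
    """Resolve mcp_registry.json to an ordered list of server names."""
    chosen: list[str] = []

    def add(name: str) -> None:
        if name in installed and name not in chosen:
            chosen.append(name)

    # Rules 2 & 5: explicit include list first, preserving include-order.
    for name in selection.get("include", []):
        add(name)
    # Rules 1, 3 & 5: profile and tag matches, alphabetical for determinism.
    tags = set(selection.get("tags", []))
    profile = selection.get("profile")
    for name in sorted(installed):
        manifest = installed[name]
        if tags & set(manifest.get("tags", [])):
            add(name)
        if profile == "all" or profile in manifest.get("hive", {}).get("profiles", []):
            add(name)
    # Rule 4: exclude always wins.
    excluded = set(selection.get("exclude", []))
    return [name for name in chosen if name not in excluded]
```

Missing servers simply never enter `chosen`; per FR-54 the real loader would also emit an install-instruction warning for each requested-but-absent name.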
### 7.3 Installed Server Entry (`installed.json` → `servers.<name>`)
```json
{
"source": "registry",
"manifest_version": "1.2.0",
"manifest": {},
"installed_at": "2026-03-13T10:00:00Z",
"installed_by": "hive mcp install",
"transport": "stdio",
"enabled": true,
"pinned": false,
"auto_update": false,
"resolved_package_version": "1.2.0",
"overrides": {"env": {"JIRA_DOMAIN": "mycompany.atlassian.net"}, "headers": {}},
"last_health_check_at": "2026-03-13T12:00:00Z",
"last_health_status": "healthy",
"last_error": null,
"last_used_at": "2026-03-13T11:30:00Z",
"last_validated_with_hive_version": "0.6.0"
}
```
---
## 8. Risks & Mitigations
| Risk | Impact | Likelihood | Mitigation |
|---|---|---|---|
| Low contributor adoption — nobody submits servers | Registry is empty, no value delivered | Medium | Seed with 5-10 popular MCP servers; `hive mcp init` makes submission trivial; canonical examples for every transport |
| High support burden from low-quality entries | Users install broken servers, file tickets against Hive | Medium | Trust tiers (official/verified/community); automated health checks in registry CI; `hive mcp doctor` for self-service debugging; quality gates beyond schema validation |
| Malicious MCP server in registry | User installs server that exfiltrates data or executes harmful code | Low | Maintainer review on all PRs; security notice on first install; servers run in user's trust boundary; verified tier requires code audit |
| Breaking changes to manifest schema | Existing manifests become invalid | Low | Schema versioning with `$schema` URL; CI validates backward compatibility; migration scripts |
| Server upgrades silently break agents | Tool signatures change, agents fail at runtime | Medium | `hive mcp update` diffs tool lists and warns on breaking changes; version pinning in `mcp_registry.json`; `--dry-run` flag |
| Connection manager concurrency bugs | Tool calls fail or deadlock under load | Medium | Thorough unit tests; reuse existing thread-safety patterns from `MCPClient._stdio_call_lock` |
| Registry index URL becomes unavailable | Users can't install new servers | Low | Local cache with TTL; fallback to last-known-good index; registry is a static file (cheap to host/mirror) |
| Name squatting in registry | Bad actors claim popular names | Low | Maintainer review on all PRs; naming guidelines in CONTRIBUTING.md |
| Auto-discover overloads agents with too many tools | Prompt bloat, confused tool selection, slower responses | Medium | `max_tools` cap in `mcp_registry.json`; profiles instead of blanket auto-discover; startup log shows tool count |
| Tool name collisions across servers | Wrong server handles a tool call | Medium | Deterministic load-order resolution; startup collision logging; optional tool namespacing via `hive.tool_namespace` |
---
## 9. Backward Compatibility
This system is **fully additive**:
- Existing `mcp_servers.json` files continue to work unchanged.
- Agents without `mcp_registry.json` load zero registry servers.
- The `MCPConnectionManager` is only used for registry-sourced connections; existing direct `MCPClient` usage is untouched.
- New CLI commands (`hive mcp ...`) don't conflict with existing commands.
- No existing files are modified in a breaking way.
- `mcp_servers.json` tools always take precedence over registry tools (they load first).
---
## 10. Documentation & Examples Strategy
Documentation is a first-class deliverable, not an afterthought. The following are required for launch:
| Doc | Audience | Deliverable |
|---|---|---|
| "Publish your first MCP server" | Contributors | Step-by-step guide from zero to merged registry entry, with screenshots |
| "Install and use your first registry server" | Users | Guide from `hive mcp install` to agent tool call |
| "Migration from mcp_servers.json" | Existing users | How to move static configs to registry-based selection |
| "Troubleshooting MCP servers" | Users | Common errors, `doctor` output examples, fix recipes |
| Manifest cookbook | Contributors | Annotated examples for stdio, http, unix, sse, multi-credential, no-credential |
| Example agents | Agent builders | 2-3 sample agents using `mcp_registry.json` with different selection strategies |
---
## 11. Phased Delivery
| Phase | Scope | Depends On |
|---|---|---|
| **Phase 1: Foundation** | MCPClient transport extensions (unix, SSE, retry); MCPConnectionManager; MCPRegistry module; CLI management commands; ToolRegistry `load_registry_servers()` with collision handling; AgentRunner `mcp_registry.json` loading with startup logging; structured error codes | -- |
| **Phase 2: Developer Tooling** | `hive mcp init`, `validate`, `test` (contributor flow); `doctor`, `inspect`, `why-not` (diagnostics); version pinning and `update --dry-run` | Phase 1 |
| **Phase 3: Registry Repo** | Create `hive-mcp-registry` GitHub repo with schema, validation CI, template, examples, CONTRIBUTING.md; seed with reference entries for built-in servers; automated health check CI | Phase 1 |
| **Phase 4: Docs & Launch** | All documentation deliverables from section 10; example agents; announcement | Phase 2, 3 |
| **Phase 5: Community Growth** | Trust tier promotion process; curated starter packs; popular/trending signals in registry | Phase 4 |
| **Phase 6: Monolith Decomposition** (future) | Extract tool groups from `mcp_server.py` into standalone servers; each becomes a registry entry | Phase 5 |
---
## 12. Open Questions
| # | Question | Owner | Status |
|---|---|---|---|
| Q1 | Should the registry repo live under `aden-hive` org or a new `hive-mcp` org? | Platform team | Open |
| Q2 | Should `hive mcp install` auto-prompt for required credentials interactively? | UX | Open |
| Q3 | Should the connection manager have a configurable max concurrent connections limit? | Engineering | Open |
| Q4 | Should we support a `docker` transport (Hive manages container lifecycle)? | Engineering | Open |
| Q5 | What is the process for promoting a `community` entry to `verified`? (e.g., code audit, usage threshold, maintainer SLA) | Platform + Security | Open |
| Q6 | Should the registry support private/enterprise indexes (e.g., `hive mcp config --index-url https://internal/...`)? | Platform | Open |
| Q7 | Should `hive mcp doctor` report telemetry (opt-in) to help identify systemic issues? | Product + Privacy | Open |
| Q8 | How should we handle MCP servers that require OAuth flows (not just static API keys)? | Engineering | Open |
---
## 13. Stakeholder Sign-Off
| Role | Name | Status |
|---|---|---|
| Engineering Lead | | Pending |
| Product | | Pending |
| OSS / Community | | Pending |
| Security | | Pending |
| Developer Experience | | Pending |
---
# Local credential parity: aliases, identity, status, and credential tester integration
## Summary
Gives local API key credentials (Brave Search, GitHub, Exa, Stripe, etc.) the same feature set as Aden OAuth credentials: named aliases, identity metadata, status tracking, CRUD management, and full visibility in the credential tester.
Fixes a bug where credentials configured with the existing `store_credential` MCP tool were invisible in the credential tester account picker.
---
## Changes
### New: `core/framework/credentials/local/`
**`models.py`** — `LocalAccountInfo` dataclass mirroring `AdenIntegrationInfo`:
- Fields: `credential_id`, `alias`, `status` (`active` / `failed` / `unknown`), `identity`, `last_validated`, `created_at`
- `storage_id` property returns `"{credential_id}/{alias}"` (e.g. `brave_search/work`)
- `to_account_dict()` returns same shape as Aden account dicts — feeds account picker without changes
**`registry.py`** — `LocalCredentialRegistry`, the core engine:
- `save_account(credential_id, alias, api_key)` — runs health check, extracts identity, stores at `{credential_id}/{alias}` in `EncryptedFileStorage`
- `list_accounts(credential_id=None)` — reads all `{x}/{y}` entries from storage
- `get_key(credential_id, alias)` — returns raw secret
- `delete_account(credential_id, alias)` — removes entry
- `validate_account(credential_id, alias)` — re-runs health check, updates `status` and `last_validated` in place
- `default()` classmethod — uses `~/.hive/credentials`
Storage convention: `{credential_id}/{alias}` as `CredentialObject.id`. Legacy flat entries (`brave_search`, no slash) continue to work — env var fallback is unchanged.
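The storage-key convention above can be sketched as follows (a minimal illustration; only `credential_id`, `alias`, and `storage_id` come from this document, and the `is_legacy_flat` helper is an assumed name for the no-slash check):

```python
from dataclasses import dataclass


@dataclass
class LocalAccountInfo:
    """Minimal sketch of the account record described above."""

    credential_id: str
    alias: str

    @property
    def storage_id(self) -> str:
        # Namespaced key: "{credential_id}/{alias}", e.g. "brave_search/work"
        return f"{self.credential_id}/{self.alias}"


def is_legacy_flat(entry_id: str) -> bool:
    # Legacy flat entries have no slash, e.g. "brave_search"
    return "/" not in entry_id
```

The slash is the only discriminator, which is why legacy flat entries and the env var fallback keep working unchanged.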
---
### Modified: `tools/src/aden_tools/credentials/store_adapter.py`
- `get(name, account=None)` — added `account=` param for per-call routing to a named local account; mirrors Aden `account=` routing
- `activate_local_account(credential_id, alias)` — injects a named account's key into `os.environ[spec.env_var]` for session-level activation
- `list_local_accounts(credential_id=None)` — delegates to `LocalCredentialRegistry`
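Session-level activation amounts to injecting the named account's key into the process environment. A sketch, assuming a lookup from `credential_id` to the spec's env var name (the `ENV_VARS` mapping and the `BRAVE_SEARCH_API_KEY` variable name are illustrative, not taken from the codebase):

```python
import os

# Illustrative mapping of credential_id -> spec env var name.
ENV_VARS = {"brave_search": "BRAVE_SEARCH_API_KEY"}


def activate_local_account(credential_id: str, alias: str, registry) -> None:
    """Inject a named account's key into os.environ (sketch)."""
    key = registry.get_key(credential_id, alias)
    os.environ[ENV_VARS[credential_id]] = key
```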
---
### Modified: `core/framework/credentials/__init__.py`
Exports `LocalAccountInfo` and `LocalCredentialRegistry`.
---
### Modified: `core/framework/agents/credential_tester/agent.py`
Full rewrite of account listing and configuration:
- `_list_aden_accounts()` — extracted from old `list_connected_accounts()`
- `_list_local_accounts()` — uses `LocalCredentialRegistry`
- `_list_env_fallback_accounts()` — detects credentials configured via env var **or** in old flat encrypted format; fixes the invisible-credential bug
- `list_connected_accounts()` — combines all three, deduplicates
- `configure_for_account()` — branches on `source` field:
- `"aden"` → adds `get_account_info` tool, prompts with `account="alias"`
- `"local"` → calls `_activate_local_account()`, prompt has no `account=` param
- `_activate_local_account()` — handles three cases: named registry entry, old flat encrypted entry, env var already set; also handles grouped credentials (e.g. `google_custom_search` sets both `GOOGLE_API_KEY` and `GOOGLE_CSE_ID`)
- `get_tools_for_provider()` — fixed to match both `credential_id` AND `credential_group`
---
### Modified: `core/framework/builder/package_generator.py`
- `store_credential(name, value, alias="default", ...)` — added `alias` param; now delegates to `LocalCredentialRegistry.save_account()` with auto health check; returns `status` and `identity`
- `list_stored_credentials()` — delegates to `LocalCredentialRegistry.list_accounts()`; returns `credential_id`, `alias`, `status`, `identity`, `last_validated`
- `delete_stored_credential(name, alias="default")` — added `alias` param
- `validate_credential(name, alias="default")` — **new tool** — re-runs health check via `LocalCredentialRegistry.validate_account()`, returns updated status and identity
---
### Modified: `core/framework/tui/screens/account_selection.py`
- Aden accounts rendered first, local accounts second
- Local accounts display a `[local]` badge
- Identity label shows email, username, or workspace when available
---
### New: `core/framework/tui/screens/add_local_credential.py`
Two-phase modal for adding a named local API key:
1. **Type selection** — filtered list of all `direct_api_key_supported=True` credentials
2. **Form** — alias input + password input → "Test & Save" runs health check inline, shows identity result, auto-dismisses on success
Exported from `core/framework/tui/screens/__init__.py`.
---
## Bug fix
**Credential tester not showing configured credentials** (e.g. Brave Search stored via `store_credential`):
- `_list_env_fallback_accounts()` previously used `CredentialStoreAdapter.with_env_storage()`, which only checked `os.environ`. Credentials stored in `EncryptedFileStorage` with the old flat format (`brave_search`, no slash) were invisible.
- `_activate_local_account()` early-returned when `alias == "default"`, assuming the env var was already set. Old flat encrypted credentials are not in `os.environ`.
**Fix**: `_list_env_fallback_accounts()` now also reads `EncryptedFileStorage.list_all()` and treats any flat entry (no `/`) as configured. `_activate_local_account()` now falls through to load from the flat encrypted entry when the env var is not set and the registry has no named entry.
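The core of the fix is treating any flat storage entry (no `/`) as a configured credential alongside env vars. A minimal sketch (function and mapping names are assumed, not the actual implementation):

```python
def list_env_fallback_accounts(env: dict, storage_entries: list) -> list:
    """Sketch of the fix: env vars AND flat encrypted entries count as configured."""
    env_vars = {"brave_search": "BRAVE_SEARCH_API_KEY"}  # illustrative spec mapping
    configured = set()
    for name, var in env_vars.items():
        if env.get(var):
            configured.add(name)
    for entry_id in storage_entries:      # e.g. from EncryptedFileStorage.list_all()
        if "/" not in entry_id:           # flat legacy entry, no alias
            configured.add(entry_id)
    return sorted(configured)
```

Namespaced entries (`brave_search/work`) are excluded here because they already surface through `LocalCredentialRegistry.list_accounts()`.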
---
## Test plan
- [ ] `store_credential("brave_search", "BSA-xxx", alias="work")` → health check runs, identity shown, stored as `brave_search/work`
- [ ] `list_stored_credentials()` → shows `credential_id`, `alias`, `status`, `identity`, `last_validated`
- [ ] `validate_credential("brave_search", "work")` → re-runs health check, updates status
- [ ] `delete_stored_credential("brave_search", alias="work")` → removes entry
- [ ] Credential tester account picker shows local accounts with `[local]` badge alongside Aden accounts
- [ ] Selecting a local account activates the key and tools work without `account=` param
- [ ] Selecting a legacy flat credential (stored before this PR) activates it correctly
- [ ] `AddLocalCredentialScreen` — select type, enter alias + key, health check runs inline, screen closes on success
- [ ] `CredentialStoreAdapter.get("brave_search", account="work")` returns key from registry
---
# feat(queen): Hive Queen Bee — native agent-building agent
## Summary
Introduces **Hive Coder** (codename "Queen Bee"), a framework-native coding agent that builds complete Hive agent packages from natural language descriptions. This is a single-node, forever-alive agent inspired by opencode's `while(true)` loop — one continuous conversation handles the full lifecycle: understand, qualify, design, implement, verify, and iterate.
The agent is deeply integrated with the framework: it can discover available MCP tools at runtime, inspect sessions and checkpoints of agents it builds, run their test suites, and self-verify its own output. It ships with a dedicated MCP tools server (`coder_tools_server.py`) providing rich file I/O, fuzzy-match editing, git snapshots, and shell execution — all scoped to a configurable project root.
## What's included
### New: `hive_coder` agent (`core/framework/agents/hive_coder/`)
- **`agent.py`** — Goal with 4 success criteria and 4 constraints, single-node graph, `HiveCoderAgent` class with full runtime lifecycle (start/stop/trigger_and_wait)
- **`nodes/__init__.py`** — Single `coder` EventLoopNode with a comprehensive system prompt covering coding mandates, tool discovery, meta-agent capabilities, node count rules, implementation templates, and a 6-phase workflow
- **`config.py`** — RuntimeConfig with auto-detection of preferred model from `~/.hive/configuration.json`
- **`__main__.py`** — Click CLI with `run`, `tui`, `info`, `validate`, and `shell` subcommands
- **`reference/`** — Framework guide, file templates, and anti-patterns docs embedded as agent reference material
### New: Coder Tools MCP Server (`tools/coder_tools_server.py`)
- 1500-line MCP server providing 13 tools: `read_file`, `write_file`, `edit_file` (with opencode-style 9-strategy fuzzy matching), `list_directory`, `search_files`, `run_command`, `undo_changes`, `discover_mcp_tools`, `list_agents`, `list_agent_sessions`, `list_agent_checkpoints`, `get_agent_checkpoint`, `run_agent_tests`
- Path-scoped security: all file operations sandboxed to project root
- Git-based undo: automatic snapshots before writes with `undo_changes` rollback
### Framework changes
- **`hive code` CLI command** — Direct launch shortcut for Hive Coder via `cmd_code` in `runner/cli.py`
- **`hive tui` updated** — Now discovers framework agents alongside exports/ and examples/
- **Cron timer support** — `AgentRuntime` now supports cron expressions (`croniter`) in addition to fixed-interval timers for async entry points
- **Datetime in system prompts** — `prompt_composer._with_datetime()` appends current datetime to all composed system prompts; EventLoopNode also applies it for isolated conversations
- **`max_node_visits` default → 0** — Changed from 1 to 0 (unbounded) across `NodeSpec` and executor, matching the forever-alive pattern as the standard default
- **TUI graph view** — Timer display updated to show cron expressions and hours in countdown
- **CredentialError handling** — `_setup()` calls in TUI launch paths now catch and display credential errors gracefully
### Tests
- New `test_agent_runtime.py` tests for cron-based timer scheduling
## Architecture
```
User ──▶ [coder] (EventLoopNode, client_facing, forever-alive)
│ Tools: coder_tools_server.py (file I/O, shell, git)
│ + meta-agent tools (discover, inspect, test)
└──▶ loops continuously until user exits
```
Single node. No edges. No terminal nodes. The agent stays alive and handles multiple build requests in one session — context accumulates across interactions.
## Test plan
- [ ] `hive code` launches Hive Coder TUI successfully
- [ ] `hive tui` shows "Framework Agents" as a source option
- [ ] Agent can discover tools via `discover_mcp_tools()`
- [ ] Agent generates a valid agent package from a natural language request
- [ ] Generated packages pass `AgentRunner.load()` validation
- [ ] Cron timer tests pass (`test_agent_runtime.py`)
- [ ] Existing tests unaffected by `max_node_visits` default change
---
# Skill Registry — Product & Business Requirements Document
**Status**: Draft v1
**Last updated**: 2026-03-13
**Authors**: Timothy
**Reviewers**: Platform, Product, OSS/Community, Developer Experience
---
## 1. Executive Summary
This document proposes a **Skill System** for Hive — a portable implementation of the open [Agent Skills](https://agentskills.io) standard — combined with a community registry and a set of built-in default skills that give every worker agent runtime resiliency out of the box.
### 1.1 The Agent Skills Standard
Agent Skills is an open format, originally developed by Anthropic, for giving agents new capabilities and expertise. It has been adopted by 30+ products including Claude Code, Cursor, VS Code, GitHub Copilot, Gemini CLI, OpenHands, Goose, Roo Code, OpenAI Codex, and more.
A skill is a directory containing a `SKILL.md` file — YAML frontmatter (name, description) plus markdown instructions — optionally accompanied by scripts, reference docs, and assets. Agents discover skills at startup, load only the name and description into context (progressive disclosure tier 1), and activate the full instructions on demand when the task matches (tier 2). Supporting files are loaded only when the instructions reference them (tier 3).
```
my-skill/
├── SKILL.md # Required: metadata + instructions
├── scripts/ # Optional: executable code
├── references/ # Optional: documentation
├── assets/ # Optional: templates, resources
└── evals/ # Optional: test cases and assertions
```
### 1.2 What Hive Adds
Hive implements the Agent Skills standard faithfully — no forks, no proprietary extensions to the `SKILL.md` format. A skill written for Claude Code, Cursor, or any other compatible product works in Hive with zero changes, and vice versa.
On top of the standard, Hive adds two things:
1. **Default skills** — Six built-in skills shipped with the Hive framework that every worker agent loads automatically. These encode runtime operational discipline: structured note-taking, batch progress tracking, context preservation, quality self-assessment, error recovery protocols, and task decomposition. They are the "muscle memory" that makes agents reliable by default.
2. **Community registry** (`hive-skill-registry`) — A curated GitHub repository where contributors submit skill packages via pull request. Skills in the registry are standard Agent Skills packages. Includes CI validation, trust tiers, starter packs, and bounty program integration.
### 1.3 Abstraction Hierarchy
| Layer | What it is | Example |
| ----------------- | ------------------------------------------------------- | ------------------------------------------------- |
| **Tool** | A single function call via MCP | `web_search`, `gmail_send`, `jira_create_issue` |
| **Skill** | A `SKILL.md` with instructions, scripts, and references | "Deep Research", "Code Review", "Data Analysis" |
| **Default Skill** | A built-in skill for runtime resiliency | "Structured Note-Taking", "Batch Progress Ledger" |
| **Agent** | A complete goal-driven worker composed of skills | "Sales Outreach Agent", "Support Triage Agent" |
---
## 2. Problem Statement
### 2.1 Current State
- Worker agents have no skill system. There is no mechanism to discover, load, or follow reusable procedural instructions on demand.
- The 12 example templates in `examples/templates/` are copy-paste only — they cannot be composed, imported, versioned, or discovered at runtime.
- Agent builders must either hand-write all prompts and tool orchestration from scratch, or copy patterns from other agents manually.
- Skills written for Claude Code, Cursor, and other Agent Skills-compatible products do not work in Hive. Users who adopt Hive lose access to the growing ecosystem of community skills.
- Worker agents have no standardized operational discipline. The framework provides mechanical safeguards (stall detection, doom-loop fingerprinting, checkpoint/resume), but there is no cognitive protocol for how an agent should take structured notes when processing a 50-item batch, when to proactively save data before context pruning, or how to self-assess quality degradation. Each agent author either reinvents these patterns in their system prompts or — more commonly — skips them entirely.
- When a community member builds a battle-tested skill (research pattern, triage workflow, outreach playbook), there is no pathway to share it, no discovery mechanism, no versioning, and no quality signals.
### 2.2 Who Is Affected
| Persona | Pain Point |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **OSS contributor** | Built a great skill for another Agent Skills-compatible product; wants it to work in Hive too, or wants to share a Hive skill with the wider ecosystem |
| **Agent builder (beginner)** | Overwhelmed by framework concepts; wants to install a "deep research" skill and use it without understanding graph internals |
| **Agent builder (advanced)** | Copies the same prompt patterns and tool orchestration across agents; wants reusable, version-pinned building blocks |
| **Platform team** | Cannot codify best practices as reusable runtime primitives; every quality improvement is a docs change, not a skill update |
| **Enterprise user** | Wants an internal skill library so teams share proven patterns; needs cross-product compatibility |
### 2.3 Impact of Not Solving
- Hive is incompatible with the Agent Skills ecosystem — a growing open standard adopted by 30+ products. Users choosing Hive lose access to community skills; contributors targeting the ecosystem skip Hive.
- Agent quality depends entirely on individual author skill. No mechanism to propagate proven patterns.
- Worker agents are unreliable during long-running or batch processing sessions — no built-in operational discipline.
- The self-improvement loop's output (better prompts, better patterns) stays locked in individual deployments with no pathway to contribute back.
---
## 3. Goals & Success Criteria
### 3.1 Primary Goals
| # | Goal | Metric |
| --- | ------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------ |
| G1 | Any `SKILL.md` from the Agent Skills ecosystem works in Hive with zero modifications | Compatibility test suite against `github.com/anthropics/skills` example skills |
| G2 | A Hive skill works in Claude Code, Cursor, and other compatible products with zero modifications | Cross-product verification on 5+ skills |
| G3 | A user can install and use a community skill in under 2 minutes | Time from `hive skill install X` to skill activating in a session |
| G4 | A contributor can publish a skill in under 10 minutes | Time from `hive skill init` to PR submission |
| G5 | Default skills measurably improve agent reliability on batch processing tasks | A/B comparison: agents with default skills vs. without on 10+ batch scenarios |
| G6 | Zero breaking changes to existing agent configurations | All current agents continue to work unchanged |
### 3.2 Community & Ecosystem Goals
| # | Goal | Metric |
| --- | -------------------------------------------------------------------------------------------- | --------------------------------------------------------------- |
| G7 | Registry has 100+ community skills within 30 days of launch | Skill count in registry |
| G8 | All registry skills are portable Agent Skills packages — usable in any compatible product | 100% of registry entries conform to the standard |
| G9 | Bounty program integrates with skill contributions | Skill submissions tracked in bounty-tracker |
| G10 | Contributors receive attribution when their skills are used | Skill metadata includes author; agent logs credit loaded skills |
| G11 | Existing skills from `github.com/anthropics/skills` are installable via `hive skill install` | All example skills pass validation and activate correctly |
### 3.3 Non-Goals (Explicit Exclusions)
- **Forking or extending the Agent Skills standard** — Hive implements the spec faithfully. No proprietary sidecar files, no Hive-specific schema extensions.
- **Runtime skill marketplace** — no billing, licensing, or monetization. The registry is free and open-source.
- **Hosting skill execution** — the registry stores packages; execution happens locally.
- **AI-generated skills** — automatic skill generation from natural language is a future phase.
- **Graph-level skill composition** — skills are instruction-following units, not graph fragments. Agents compose skills by activating multiple skills and following their combined instructions.
---
## 4. Agent Skills Standard — Implementation Spec
This section defines how Hive implements the open Agent Skills standard. The specification at [agentskills.io/specification](https://agentskills.io/specification) is authoritative; this section describes Hive's conforming implementation.
### 4.1 Skill Discovery
At session startup, Hive scans for skill directories containing a `SKILL.md` file. Both cross-client and Hive-specific locations are scanned:
| Scope | Path | Purpose |
| --------- | --------------------------------- | --------------------------------------------------- |
| Project | `<project>/.agents/skills/` | Cross-client interoperability (standard convention) |
| Project | `<project>/.hive/skills/` | Hive-specific project skills |
| User | `~/.agents/skills/` | Cross-client user-level skills |
| User | `~/.hive/skills/` | Hive-specific user-level skills |
| Framework | `<hive-install>/skills/defaults/` | Built-in default skills |
**Precedence** (deterministic): Project-level skills override user-level skills. Within the same scope, `.hive/skills/` overrides `.agents/skills/`. Framework-level default skills have lowest precedence and can be overridden at any scope.
**Scanning rules:**
- Skip `.git/`, `node_modules/`, `__pycache__/`, `.venv/` directories
- Max depth: 4 levels from the skills root
- Max directories: 2000 per scope
- Respect `.gitignore` in project scope
**Trust:** Project-level skills from untrusted repositories (not marked trusted by the user) require explicit user consent before loading.
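The precedence rule reduces to a deterministic scan order. A sketch, assuming candidates are collected lowest-to-highest precedence (framework defaults, then `~/.agents/skills/`, `~/.hive/skills/`, `<project>/.agents/skills/`, `<project>/.hive/skills/`) so that a later entry overrides an earlier one by skill name:

```python
def resolve_precedence(candidates: list) -> dict:
    """candidates: (skill_name, path) tuples in lowest-to-highest precedence
    order (sketch; the collection step itself is assumed)."""
    resolved = {}
    for name, path in candidates:
        resolved[name] = path  # later, higher-precedence entry wins
    return resolved
```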
### 4.2 `SKILL.md` Parsing
Each discovered `SKILL.md` is parsed per the standard:
1. Extract YAML frontmatter between `---` delimiters
2. Parse required fields: `name`, `description`
3. Parse optional fields: `license`, `compatibility`, `metadata`, `allowed-tools`
4. Everything after the closing `---` is the skill's markdown body (instructions)
**Validation (lenient):**
- Name doesn't match parent directory → warn, load anyway
- Name exceeds 64 characters → warn, load anyway
- Description missing or empty → skip the skill, log error
- YAML unparseable → try wrapping unquoted colon values in quotes as fallback; if still fails, skip and log
**In-memory record per skill:**
| Field | Source |
| -------------- | --------------------------------- |
| `name` | Frontmatter |
| `description` | Frontmatter |
| `location` | Absolute path to `SKILL.md` |
| `base_dir` | Parent directory of `SKILL.md` |
| `source_scope` | `project`, `user`, or `framework` |
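The parse steps above can be sketched as follows. This is a simplified illustration, not the real parser: it splits on `---` delimiters and reads `key: value` lines naively, where a conforming implementation would use a proper YAML parser plus the quoting fallback described above.

```python
def parse_skill_md(text: str):
    """Minimal SKILL.md parse sketch: YAML-ish frontmatter between '---'
    delimiters, markdown body after the closing delimiter."""
    parts = text.split("---\n")
    if len(parts) < 3:
        return None  # no frontmatter block -> skip
    frontmatter, body = parts[1], "---\n".join(parts[2:])
    fields = {}
    for line in frontmatter.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    if not fields.get("description"):
        return None  # description missing or empty -> skip the skill, log error
    return {
        "name": fields.get("name", ""),
        "description": fields["description"],
        "body": body.strip(),
    }
```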
### 4.3 Progressive Disclosure
Hive implements the standard three-tier loading model:
| Tier | What's loaded | When | Token cost |
| ------------------- | ---------------------------- | -------------------------------- | ------------------------ |
| **1. Catalog** | Name + description per skill | Session start | ~50-100 tokens per skill |
| **2. Instructions** | Full `SKILL.md` body | When skill is activated | <5000 tokens recommended |
| **3. Resources** | Scripts, references, assets | When instructions reference them | Varies |
**Catalog disclosure**: At session start, all discovered skill names and descriptions are injected into the system prompt:
```xml
<available_skills>
<skill>
<name>deep-research</name>
<description>Multi-step web research with source verification. Use when the task requires gathering and synthesizing information from multiple sources.</description>
<location>/home/user/.hive/skills/deep-research/SKILL.md</location>
</skill>
...
</available_skills>
```
**Behavioral instruction** injected alongside the catalog:
```
The following skills provide specialized instructions for specific tasks.
When a task matches a skill's description, read the SKILL.md at the listed
location to load the full instructions before proceeding.
When a skill references relative paths, resolve them against the skill's
directory (the parent of SKILL.md) and use absolute paths in tool calls.
```
### 4.4 Skill Activation
Skills are activated via two mechanisms:
**Model-driven**: The agent reads the skill catalog, decides a skill is relevant, and reads the `SKILL.md` file using its file-read tool. No special infrastructure needed — the agent's standard file-reading capability is sufficient.
**User-driven**: Users can activate skills explicitly via `@skill-name` mention syntax or via agent configuration that pre-activates specific skills for every session.
**What happens on activation:**
1. The full `SKILL.md` body is loaded into context
2. Bundled resources (scripts, references) are listed but NOT eagerly loaded
3. The skill directory is allowlisted for file access (no permission prompts for bundled files)
4. Activation is logged: `{skill_name, scope, timestamp}`
**Deduplication**: If a skill is already active in the current session, re-activation is skipped.
**Context protection**: Activated skill content is exempt from context pruning/compaction — skill instructions are durable behavioral guidance that must persist for the session duration.
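The activation bookkeeping (dedup plus logging) can be sketched as follows; the function signature and the session-state containers are assumptions, but the log record shape matches the `{skill_name, scope, timestamp}` format above:

```python
import time


def activate_skill(skill: dict, session_active: set, activation_log: list) -> bool:
    """Activate a skill once per session; return False if already active."""
    if skill["name"] in session_active:
        return False  # deduplication: re-activation is skipped
    session_active.add(skill["name"])
    activation_log.append({
        "skill_name": skill["name"],
        "scope": skill["scope"],
        "timestamp": time.time(),
    })
    return True
```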
### 4.5 Skill Execution
The agent follows the instructions in `SKILL.md`. It can:
- Execute bundled scripts from `scripts/`
- Read reference materials from `references/`
- Use assets from `assets/`
- Call any MCP tools available in the agent's tool registry
This is identical to how skills work in Claude Code, Cursor, or any other Agent Skills-compatible product.
### 4.6 Pre-Activated Skills
Agents can declare skills that should be activated at session start — bypassing model-driven activation. This is useful for skills that an agent always needs (e.g., a coding standards skill for a code review agent).
**In agent config (`agent.json`):**
```json
{
"skills": ["deep-research", "code-review"]
}
```
**In Python:**
```python
agent = Agent(
name="my-agent",
skills=["deep-research", "code-review"],
)
```
Pre-activated skills have their full `SKILL.md` body loaded into context at session start (tier 2), skipping the catalog-only tier 1 phase.
---
## 5. Default Skills
Default skills are **built-in skills shipped with the Hive framework** that every worker agent loads automatically. They use the Agent Skills format (`SKILL.md`) but live in the framework's install directory and serve as runtime operational protocols.
### 5.1 Why Default Skills
The framework provides mechanical safeguards: stall detection via n-gram similarity, doom-loop fingerprinting, checkpoint/resume, token budget pruning, and max iteration limits. But these are reactive — they trigger after something has gone wrong.
Default skills encode **proactive cognitive protocols**: how to take structured notes so you don't lose track of a 50-item batch, when to pause and summarize before you hit context limits, how to self-assess whether your output quality is degrading. They are the operational habits that experienced agent builders already encode in their system prompts — standardized so every agent benefits.
### 5.2 Integration Model
Default skills differ from community skills in how they integrate:
| Aspect | Default Skills | Community Skills |
| ------------ | ---------------------------------------------- | ----------------------------------------------------- |
| Loaded by | Framework automatically | Agent decides at runtime (or pre-activated in config) |
| Integration | System prompt injection + shared memory hooks | Instruction-following (standard Agent Skills) |
| Graph impact | No dedicated nodes — woven into existing nodes | None (just context) |
| Overridable | Yes (disable, configure, or replace) | N/A |
Default skills integrate at four injection points in the `EventLoopNode`:
1. **System prompt injection** (before first LLM call): Default skill protocols are appended to the node's system prompt
2. **Iteration boundary callbacks** (between iterations): Quality check, notes staleness warning, budget tracking
3. **Node completion hooks** (when node finishes): Batch completeness check, handoff summary
4. **Phase transition hooks** (on edge traversal): Context carry-over, notes persistence
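The iteration-boundary injection point (item 2) can be sketched as a list of callbacks that each may return a warning to surface to the agent. Everything here is an assumed illustration, including the hook signature and the five-iteration staleness threshold:

```python
def run_iteration_hooks(iteration: int, hooks: list, state: dict) -> list:
    """Run each iteration-boundary hook; collect any warning strings."""
    warnings = []
    for hook in hooks:
        msg = hook(iteration, state)
        if msg:
            warnings.append(msg)
    return warnings


def notes_staleness_hook(iteration: int, state: dict):
    # Warn when working notes haven't been updated for several iterations.
    last = state.get("_notes_updated_iteration", 0)
    if iteration - last >= 5:
        return "Working notes are stale; re-read and update _working_notes."
    return None
```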
### 5.3 Default Skill Catalog
Six default skills ship with Hive:
#### 5.3.1 Structured Note-Taking (`hive.note-taking`)
**Purpose:** Maintain a structured working document throughout execution so the agent never loses track of what it knows, what it's decided, and what's pending.
**Problem:** Without structured notes, agents processing long sessions rely entirely on conversation history. When context is pruned (automatically at 60% token usage), intermediate reasoning is lost. Agents repeat work, contradict earlier decisions, or silently drop items.
**Protocol (injected into system prompt):**
```markdown
## Operational Protocol: Structured Note-Taking
Maintain structured working notes in shared memory key `_working_notes`.
Update at these checkpoints:
- After completing each discrete subtask or batch item
- After receiving new information that changes your plan
- Before any tool call that will produce substantial output
Structure:
### Objective — restate the goal
### Current Plan — numbered steps, mark completed with ✓
### Key Decisions — decisions made and WHY
### Working Data — intermediate results, extracted values
### Open Questions — uncertainties to verify
### Blockers — anything preventing progress
Update incrementally — do not rewrite from scratch each time.
```
**Shared memory:** `_working_notes` (string), `_notes_updated_at` (timestamp)
**Config:** `enabled` (default true), `update_frequency` (default `per_subtask`), `max_notes_length` (default 4000 chars)
---
#### 5.3.2 Batch Progress Ledger (`hive.batch-ledger`)
**Purpose:** When processing a collection of items, maintain a structured ledger tracking each item's status so no item is skipped, duplicated, or silently dropped.
**Problem:** Agents processing batches lose track of which items they've handled, especially after context compaction or checkpoint resume. Without a ledger, agents re-process items (waste) or skip items (data loss).
**Protocol (injected into system prompt):**
```markdown
## Operational Protocol: Batch Progress Ledger
When processing a collection of items, maintain a batch ledger in `_batch_ledger`.
Initialize when you identify the batch:
- `_batch_total`: total item count
- `_batch_ledger`: JSON with per-item status
Per-item statuses: pending → in_progress → completed|failed|skipped
- Set `in_progress` BEFORE processing
- Set final status AFTER processing with 1-line result_summary
- Include error reason for failed/skipped items
- Update aggregate counts after each item
- NEVER remove items from the ledger
- If resuming, skip items already marked completed
```
**Shared memory:** `_batch_ledger` (dict), `_batch_total` (int), `_batch_completed` (int), `_batch_failed` (int), `_batch_skipped` (int)
**Config:** `enabled` (default true), `auto_detect_batch` (default true), `checkpoint_every_n` (default 5)
**Completion check:** At node completion, if `_batch_completed + _batch_failed + _batch_skipped < _batch_total`, emit a warning.
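The completion check reduces to simple arithmetic over the shared-memory keys. A sketch (function name assumed; keys taken from the list above):

```python
def check_batch_complete(mem: dict):
    """Return a warning string if ledger counts don't cover the batch total."""
    total = mem.get("_batch_total", 0)
    accounted = (
        mem.get("_batch_completed", 0)
        + mem.get("_batch_failed", 0)
        + mem.get("_batch_skipped", 0)
    )
    if accounted < total:
        return f"Batch incomplete: {accounted}/{total} items accounted for"
    return None
```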
---
#### 5.3.3 Context Preservation (`hive.context-preservation`)
**Purpose:** Proactively preserve critical information before automatic context pruning destroys it.
**Problem:** The framework's `prune_old_tool_results()` at 60% token usage removes content indiscriminately. Agents that don't proactively save important data into working notes lose it permanently.
**Protocol (injected into system prompt):**
```markdown
## Operational Protocol: Context Preservation
You operate under a finite context window. Important information WILL be pruned.
Save-As-You-Go: After any tool call producing information you'll need later,
immediately extract key data into `_working_notes` or `_preserved_data`.
Do NOT rely on referring back to old tool results.
What to extract: URLs and key snippets (not full pages), relevant API fields
(not raw JSON), specific lines/values (not entire files), analysis results
(not raw data).
Before transitioning to the next phase/node, write a handoff summary to
`_handoff_context` with everything the next phase needs to know.
```
**Shared memory:** `_handoff_context` (string), `_preserved_data` (dict)
**Config:** `enabled` (default true), `warn_at_usage_ratio` (default 0.45), `require_handoff` (default true)
---
#### 5.3.4 Quality Self-Assessment (`hive.quality-monitor`)
**Purpose:** Periodically prompt the agent to self-evaluate output quality, catching degradation before the judge does.
**Problem:** The judge system evaluates at node completion — once per node, not during execution. An agent can degrade gradually over many iterations without detection until the node completes.
**Protocol (injected into system prompt):**
```markdown
## Operational Protocol: Quality Self-Assessment
Every 5 iterations, self-assess:
1. On-task? Still working toward the stated objective?
2. Thorough? Cutting corners compared to earlier?
3. Non-repetitive? Producing new value or rehashing?
4. Consistent? Does the latest output contradict earlier decisions?
5. Complete? Still tracking all items, or have some been silently dropped?
If degrading: write assessment to `_quality_log`, re-read `_working_notes`,
change approach explicitly. If acceptable: brief note in `_quality_log`.
```
**Shared memory:** `_quality_log` (list), `_quality_degradation_count` (int)
**Config:** `enabled` (default true), `assessment_interval` (default 5), `degradation_threshold` (default 3)
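The iteration-boundary trigger (see DS-9) reduces to two small checks: whether this iteration is an assessment boundary, and whether accumulated self-reports cross the degradation threshold. A sketch under assumed names, not the framework API:

```python
def should_self_assess(iteration: int, interval: int = 5) -> bool:
    """True on every `interval`-th iteration boundary (never on iteration 0)."""
    return iteration > 0 and iteration % interval == 0


def degradation_exceeded(quality_log: list[dict], threshold: int = 3) -> bool:
    """True once the agent has self-reported degradation `threshold` times."""
    return len([e for e in quality_log if e.get("degrading")]) >= threshold
```

A harness could use `degradation_exceeded` to escalate (for example, surfacing `_quality_log` to the judge) rather than waiting for node completion.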
---
#### 5.3.5 Error Recovery Protocol (`hive.error-recovery`)
**Purpose:** When a tool call fails or returns unexpected results, follow a structured recovery protocol instead of blindly retrying or giving up.
**Problem:** The framework retries transient errors automatically. But non-transient failures (wrong input, business logic error, missing resource) are handed back to the agent with no guidance. Agents often retry the same call or abandon the task.
**Protocol (injected into system prompt):**
```markdown
## Operational Protocol: Error Recovery
When a tool call fails:
1. Diagnose — record error in notes, classify as transient or structural
2. Decide — transient: retry once. Structural fixable: fix and retry.
Structural unfixable: record as failed, move to next item.
Blocking all progress: record escalation note.
3. Adapt — if same tool failed 3+ times, stop using it and find alternative.
Update plan in notes. Never silently drop the failed item.
```
**Shared memory:** `_error_log` (list), `_failed_tools` (dict), `_escalation_needed` (bool)
**Config:** `enabled` (default true), `max_retries_per_tool` (default 3), `escalation_on_block` (default true)
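The diagnose/decide steps can be sketched as a single classification helper. Classifying transient errors by exception type is an assumption for illustration; a real harness would also inspect provider error codes.

```python
# Exception types treated as transient (retry once); everything else is structural.
TRANSIENT = (TimeoutError, ConnectionError)


def record_failure(memory: dict, tool: str, error: Exception, max_retries: int = 3) -> str:
    """Log a tool failure and return the recovery action per the protocol."""
    failed = memory.setdefault("_failed_tools", {})
    failed[tool] = failed.get(tool, 0) + 1
    memory.setdefault("_error_log", []).append({"tool": tool, "error": repr(error)})
    if failed[tool] >= max_retries:
        return "find_alternative"      # step 3: stop using this tool
    if isinstance(error, TRANSIENT):
        return "retry_once"            # step 2: transient
    return "fix_input_or_skip"         # step 2: structural
```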
---
#### 5.3.6 Task Decomposition (`hive.task-decomposition`)
**Purpose:** Decompose complex tasks into explicit subtasks before diving in. Maintain the decomposition as a living checklist.
**Problem:** Agents facing complex tasks start executing immediately without planning, leading to incomplete coverage and iteration budget exhaustion on the first sub-problem.
**Protocol (injected into system prompt):**
```markdown
## Operational Protocol: Task Decomposition
Before starting a complex task:
1. Decompose — break into numbered subtasks in `_working_notes` Current Plan
2. Estimate — relative effort per subtask (small/medium/large)
3. Execute — work through in order, mark ✓ when complete
4. Budget — if running low on iterations, prioritize by impact
5. Verify — before declaring done, every subtask must be ✓, skipped (with reason), or blocked
```
**Shared memory:** `_subtasks` (list), `_iteration_budget_remaining` (int)
**Config:** `enabled` (default true), `decomposition_threshold` (default `auto`), `budget_awareness` (default true)
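Step 5 (verify) has a simple mechanical form: every subtask must end in a terminal state. A sketch assuming a minimal subtask record shape (`id`, `status`, optional `reason`), which the spec does not prescribe:

```python
def verify_decomposition(subtasks: list[dict]) -> list[str]:
    """Return one problem string per subtask not done, skipped-with-reason, or blocked."""
    problems = []
    for t in subtasks:
        status = t.get("status")
        if status == "done":
            continue
        if status == "skipped" and t.get("reason"):
            continue
        if status == "blocked":
            continue
        problems.append(f"Subtask {t.get('id')}: unresolved status {status!r}")
    return problems
```

Note that a skip without a recorded reason counts as unresolved, matching the protocol's "skipped (with reason)" wording.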
---
### 5.4 Default Skill Configuration
Agents configure default skills via `default_skills` in their agent definition:
**Declarative (`agent.json`):**
```json
{
"default_skills": {
"hive.note-taking": { "enabled": true },
"hive.batch-ledger": { "enabled": true, "checkpoint_every_n": 10 },
"hive.context-preservation": {
"enabled": true,
"warn_at_usage_ratio": 0.4
},
"hive.quality-monitor": { "enabled": false },
"hive.error-recovery": { "enabled": true },
"hive.task-decomposition": { "enabled": true }
}
}
```
**Disable all:** `"default_skills": {"_all": {"enabled": false}}`
### 5.5 Prompt Budget
All default skill protocols combined must total under **2000 tokens** to minimize impact on the agent's domain reasoning budget. Protocols are terse operational checklists, not verbose documentation.
### 5.6 Shared Memory Convention
All default skill shared memory keys use the `_` prefix (`_working_notes`, `_batch_ledger`, etc.) to avoid collisions with domain-level keys. These keys are:
- Visible to the agent (for self-reference)
- Visible to the judge (for evaluation context)
- Excluded from the agent's declared output contract (operational, not domain output)
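The exclusion rule is expressible as a one-line filter applied before output-contract validation. The function name is illustrative:

```python
def domain_output(shared_memory: dict) -> dict:
    """Strip operational (_-prefixed) keys before contract validation."""
    return {k: v for k, v in shared_memory.items() if not k.startswith("_")}
```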
---
## 6. Community Registry
### 6.1 Registry Repository
A public GitHub repository (`hive-skill-registry`) serves as the curated community index. Every entry is a standard Agent Skills package — portable to any compatible product.
```
hive-skill-registry/
├── registry/
│ ├── skills/
│ │ ├── deep-research/
│ │ │ ├── SKILL.md
│ │ │ ├── scripts/
│ │ │ ├── references/
│ │ │ ├── evals/
│ │ │ └── README.md
│ │ ├── email-triage/
│ │ └── ...
│ ├── packs/
│ │ ├── research-pack.json
│ │ └── ...
│ └── _template/
├── skill_index.json (auto-generated)
├── CONTRIBUTING.md
└── README.md
```
### 6.2 Trust Tiers
| Tier | Meaning | Requirements |
| ----------- | ------------------------------ | --------------------------------------------- |
| `official` | Maintained by Hive team | Internal review |
| `verified` | Audited community contribution | Code audit, maintainer SLA, test coverage |
| `community` | Community-submitted | Passes CI validation, maintainer review on PR |
### 6.3 Registry Index
The registry auto-generates a `skill_index.json` on merge for client consumption:
```json
{
"name": "deep-research",
"description": "Multi-step web research with source verification...",
"status": "verified",
"author": { "name": "Alex Researcher", "github": "alexr" },
"maintainer": { "github": "alexr" },
"version": "1.2.0",
"license": "MIT",
"tags": ["research", "web", "synthesis"],
"categories": ["knowledge-work"],
"install_count": 342,
"last_validated_at": "2026-03-13T10:00:00Z",
"deprecated": false
}
```
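Clients such as `hive skill search` consume this index by simple filtering. A sketch, assuming the index file is a JSON array of entries shaped like the one above:

```python
import json


def search_index(index_json: str, query: str) -> list[dict]:
    """Return non-deprecated entries matching query by name, description, or tag."""
    entries = json.loads(index_json)
    q = query.lower()
    return [
        e for e in entries
        if not e.get("deprecated")
        and (q in e["name"].lower()
             or q in e.get("description", "").lower()
             or any(q == t.lower() for t in e.get("tags", [])))
    ]
```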
### 6.4 Starter Packs
Themed collections of skills that work well together:
```json
{
"name": "research-pack",
"display_name": "Research & Analysis Pack",
"description": "Skills for research-heavy agents",
"skills": [
{ "name": "deep-research", "version": ">=1.0.0" },
{ "name": "synthesis", "version": ">=1.0.0" },
{ "name": "executive-summary", "version": ">=1.0.0" }
]
}
```
### 6.5 Evaluation Framework
Skills in the registry can include an `evals/` directory following the Agent Skills evaluation pattern:
```json
{
"skill_name": "deep-research",
"evals": [
{
"id": 1,
"prompt": "Research the current state of quantum computing and summarize the top 3 breakthroughs from the past year.",
"expected_output": "A structured summary with 3 breakthroughs, each with source citations.",
"assertions": [
"Output includes at least 3 distinct breakthroughs",
"Each breakthrough has at least one source URL",
"Sources are from the past 12 months"
]
}
]
}
```
CI runs these evals on submitted skills to validate quality.
### 6.6 Bounty Integration
| Contribution | Points |
| -------------------- | ------ |
| New skill | 75 |
| Skill improvement PR | 30 |
| Skill tests/evals | 20 |
| Skill docs | 20 |
---
## 7. Requirements
### 7.1 Functional Requirements — Agent Skills Standard
| ID | Requirement | Priority |
| ----- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- |
| AS-1 | Discover skills by scanning `.agents/skills/` and `.hive/skills/` at project and user scopes | P0 |
| AS-2 | Parse `SKILL.md` YAML frontmatter per the Agent Skills spec: `name`, `description` (required), `license`, `compatibility`, `metadata`, `allowed-tools` (optional) | P0 |
| AS-3 | Lenient validation: warn on non-critical issues, skip only on missing description or unparseable YAML | P0 |
| AS-4 | Progressive disclosure tier 1: skill catalog (name + description + location) injected into system prompt at session start | P0 |
| AS-5 | Progressive disclosure tier 2: full `SKILL.md` body loaded into context when agent or user activates a skill | P0 |
| AS-6 | Progressive disclosure tier 3: scripts, references, and assets loaded on demand when instructions reference them | P0 |
| AS-7 | Model-driven activation: agent reads `SKILL.md` via file-read tool when it decides a skill is relevant | P0 |
| AS-8 | User-driven activation: `@skill-name` mention syntax intercepted by harness | P1 |
| AS-9 | Skill directories allowlisted for file access — no permission prompts for bundled resources | P0 |
| AS-10 | Activated skill content protected from context pruning/compaction | P0 |
| AS-11 | Duplicate activations in the same session deduplicated | P1 |
| AS-12 | Name collisions resolved deterministically: project overrides user, `.hive/` overrides `.agents/`, log warning | P0 |
| AS-13 | Trust gating: project-level skills from untrusted repos require user consent | P1 |
| AS-14 | Compatibility with `github.com/anthropics/skills` example skills — all pass validation and activate correctly | P0 |
| AS-15 | Cross-client YAML compatibility: handle unquoted colon values via automatic fixup | P1 |
| AS-16 | Pre-activated skills via `skills` list in agent config (`agent.json` and Python API) | P0 |
| AS-17 | Subagent delegation: optionally run a skill's instructions in an isolated sub-session | P2 |
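AS-12's collision rule ("project overrides user, `.hive/` overrides `.agents/`") implies a deterministic precedence order over the four scan roots. The sketch below picks one plausible total order; the cross-scope ordering (project `.agents/` vs. user `.hive/`) is an assumption this spec leaves open, and the name-to-root catalog shape is simplified for illustration.

```python
from pathlib import Path

# Lowest to highest precedence; later roots win on name collisions.
PRECEDENCE = [
    Path.home() / ".agents" / "skills",   # user, cross-client
    Path.home() / ".hive" / "skills",     # user, Hive-specific
    Path(".agents") / "skills",           # project, cross-client
    Path(".hive") / "skills",             # project, Hive-specific (highest)
]


def resolve_catalog(discovered: dict[Path, list[str]]) -> dict[str, Path]:
    """Map each skill name to its winning source root, warning on collisions."""
    catalog: dict[str, Path] = {}
    for root in PRECEDENCE:
        for name in discovered.get(root, []):
            if name in catalog:
                print(f"warning: skill '{name}' from {catalog[name]} overridden by {root}")
            catalog[name] = root
    return catalog
```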
### 7.2 Functional Requirements — Default Skills
| ID | Requirement | Priority |
| ----- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- |
| DS-1 | Ship 6 default skills: `hive.note-taking`, `hive.batch-ledger`, `hive.context-preservation`, `hive.quality-monitor`, `hive.error-recovery`, `hive.task-decomposition` | P0 |
| DS-2 | Default skills are valid Agent Skills packages (`SKILL.md` format) in the framework install directory | P0 |
| DS-3 | All default skills loaded automatically for every worker agent unless explicitly disabled | P0 |
| DS-4 | Default skills integrate via system prompt injection — no additional graph nodes | P0 |
| DS-5 | Default skills use `_`-prefixed shared memory keys to avoid domain collisions | P0 |
| DS-6 | Each default skill independently configurable via `default_skills` in agent config | P0 |
| DS-7 | All defaults disableable at once: `{"_all": {"enabled": false}}` | P0 |
| DS-8 | Default skill protocols appended in a `## Operational Protocols` system prompt section | P0 |
| DS-9 | Iteration boundary callbacks for quality check and notes staleness | P0 |
| DS-10 | Node completion hooks for batch completeness and handoff write | P0 |
| DS-11 | Phase transition hooks for context carry-over and notes persistence | P1 |
| DS-12 | `hive.batch-ledger` auto-detects batch scenarios via heuristic | P1 |
| DS-13 | `hive.context-preservation` warns at 0.45 token usage (before 0.6 framework prune) | P0 |
| DS-14 | Combined default skill prompts total under 2000 tokens | P0 |
| DS-15 | Agent startup logs active default skills and config | P0 |
### 7.3 Functional Requirements — CLI
| ID | Requirement | Priority |
| ------ | ------------------------------------------------------------------------------------------------- | -------- |
| CLI-1 | `hive skill list` — list discovered skills (all scopes) with source and status | P0 |
| CLI-2 | `hive skill install <name> [--version X]` — install from registry to `~/.hive/skills/` | P0 |
| CLI-3 | `hive skill install --pack <name>` — install a starter pack | P1 |
| CLI-4 | `hive skill remove <name>` — uninstall | P0 |
| CLI-5 | `hive skill search <query>` — search registry by name, tag, description | P1 |
| CLI-6 | `hive skill info <name>` — show details: description, author, scripts, references | P0 |
| CLI-7 | `hive skill init [--name X]` — scaffold a skill directory with `SKILL.md` template | P0 |
| CLI-8 | `hive skill validate <path>` — validate `SKILL.md` against the Agent Skills spec | P0 |
| CLI-9 | `hive skill test <path> [--input <json>]` — run skill in isolation, execute evals if present | P1 |
| CLI-10 | `hive skill doctor [name]` — check health: SKILL.md parseable, scripts executable, deps available | P0 |
| CLI-11 | `hive skill doctor --defaults` — check all default skills operational | P1 |
| CLI-12 | `hive skill fork <name> [--name new-name]` — create local editable copy of a registry skill | P1 |
| CLI-13 | `hive skill update [name]` — update registry cache or specific skill | P1 |
### 7.4 Functional Requirements — Registry
| ID | Requirement | Priority |
| ------ | ------------------------------------------------------------------------------------------------ | -------- |
| REG-1 | Public GitHub repo with defined directory structure | P0 |
| REG-2 | CI validates `SKILL.md` on every PR using `skills-ref validate` | P0 |
| REG-3 | Flat index (`skill_index.json`) auto-generated on merge | P0 |
| REG-4 | `_template/` directory with starter skill for contributors | P0 |
| REG-5 | `CONTRIBUTING.md` with step-by-step submission guide | P0 |
| REG-6 | CI runs skill evals when `evals/` directory is present | P1 |
| REG-7 | Trust tiers: `official`, `verified`, `community` | P0 |
| REG-8 | Tags follow controlled taxonomy | P1 |
| REG-9 | Seed with 10+ skills: extract from existing templates + port from `github.com/anthropics/skills` | P0 |
| REG-10 | Starter pack definitions in `registry/packs/` | P1 |
### 7.5 Failure Handling & Diagnostics
| ID | Requirement | Priority |
| ---- | ----------------------------------------------------------------------------------------- | -------- |
| DX-1 | Structured error codes: `SKILL_NOT_FOUND`, `SKILL_PARSE_ERROR`, `SKILL_ACTIVATION_FAILED` | P0 |
| DX-2 | Every error includes: what failed, why, and suggested fix | P0 |
| DX-3 | Agent startup logs per-skill summary: `{name, scope, status}` | P0 |
| DX-4 | `hive skill doctor` machine-parseable with `--json` flag | P2 |
### 7.6 Non-Functional Requirements
| ID | Requirement | Priority |
| ----- | ---------------------------------------------------------------------------- | -------- |
| NFR-1 | Skill discovery (scanning + parsing) completes in <500ms for up to 50 skills | P1 |
| NFR-2 | Installing a skill does not require a Hive restart | P0 |
| NFR-3 | All new code has unit test coverage | P0 |
| NFR-4 | Registry CI runs in <120s | P1 |
| NFR-5 | `hive skill install` prints security notice on first use | P0 |
| NFR-6 | Skills loaded at runtime are read-only — modifications require forking | P0 |
---
## 8. Architecture Overview
```
┌─────────────────────────────────────┐
│ hive-skill-registry (GitHub) │
│ │
│ registry/skills/deep-research/ │
│ ├── SKILL.md │
│ ├── scripts/ │
│ └── evals/ │
│ registry/packs/research-pack.json │
│ skill_index.json (auto-built) │
└──────────────┬────────────────────────┘
│ hive skill install
┌──────────────────────────────────────────────────────────────────────┐
│ Skill Sources │
│ │
│ ~/.hive/skills/ .agents/skills/ <hive>/skills/ │
│ (user, Hive-specific) (project, cross- defaults/ │
│ client portable) (framework built- │
│ in defaults) │
└──────────────────────┬───────────────────────────────────────────────┘
┌────────────────────┐
│ SkillDiscovery │
│ │
│ scan() → catalog │
│ parse SKILL.md │
│ resolve collisions │
└────────┬───────────┘
┌───────────┴───────────┐
│ │
▼ ▼
┌──────────────────┐ ┌───────────────────────┐
│ Community Skills │ │ Default Skills │
│ │ │ │
│ Catalog injected │ │ DefaultSkillManager │
│ into system │ │ • prompt injection │
│ prompt (tier 1) │ │ • iteration hooks │
│ │ │ • completion hooks │
│ Activated on │ │ • transition hooks │
│ demand (tier 2) │ │ │
│ │ │ Always active │
│ Agent follows │ │ (unless disabled) │
│ SKILL.md │ │ │
│ instructions │ │ Protocols woven into │
│ │ │ existing node prompts │
└──────────────────┘ └───────────────────────┘
│ │
└───────────┬───────────┘
┌────────────────────┐
│ EventLoopNode │
│ │
│ System prompt = │
│ agent prompt │
│ + node prompt │
│ + default skill │
│ protocols │
│ + activated skill │
│ instructions │
│ │
│ Same iteration │
│ loop, tools, │
│ judges │
└────────────────────┘
```
### Component Responsibilities
| Component | Responsibility |
| -------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| **SkillDiscovery** | Scan skill directories, parse `SKILL.md`, resolve collisions, build catalog |
| **SkillCatalog** | In-memory index of discovered skills; injected into system prompt at session start |
| **DefaultSkillManager** | Load, configure, and inject the 6 built-in default skills; manage prompt injection and hook registration |
| **EventLoopNode** (extended) | New hook points for default skills: iteration callbacks, completion hooks. Appends default protocols and activated skill content to system prompt. |
| **AgentRunner** (extended) | Resolve `skills` (pre-activation) and `default_skills` config; trigger discovery; log skill summary at startup |
| **hive skill CLI** | User-facing commands for install, search, validate, test, doctor |
| **hive-skill-registry** (GitHub) | Community-curated skill packages; CI validation; trust tiers; starter packs |
---
## 9. Risks & Mitigations
| Risk | Impact | Likelihood | Mitigation |
| ----------------------------------------------------- | -------------------------------------------------------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Agent Skills spec evolves in breaking ways | Hive implementation falls out of sync | Low | Standard is backed by Anthropic and adopted by 30+ products; changes are conservative. Track spec repo; participate in governance. |
| Low community adoption — nobody submits skills | Registry empty, no value | Medium | Seed with 10+ skills from existing templates + ported from `github.com/anthropics/skills`; bounty program; `hive skill init` trivializes creation |
| Prompt injection via malicious skill instructions | Skill manipulates agent behavior | Medium | Trust gating for project-level skills; maintainer review on registry PRs; `verified` tier requires audit; security notice on install |
| Default skill prompts bloat system prompt | Reduced token budget for reasoning | Medium | Hard cap of 2000 tokens total; individually disableable; terse checklist format |
| Default skills create rigid behavior for simple tasks | Agent follows batch protocol on trivial single-item task | Medium | `auto_detect_batch` heuristic; `task_decomposition` threshold defaults to `auto`; all defaults individually disableable |
| Context window consumed by too many active skills | Multiple skills + default skills exhaust context | Medium | Progressive disclosure limits base cost (~100 tokens/skill); skills activated one-at-a-time on demand; skill body recommended <5000 tokens; default skills capped at 2000 tokens |
| Skill quality inconsistent across registry | Users install ineffective skills | Medium | Trust tiers; eval framework in CI; `hive skill test`; community signals (install count); `deprecated` flag |
---
## 10. Backward Compatibility
This system is **fully additive**:
- Existing agents without skills continue to work unchanged.
- Default skills are loaded automatically but are behaviorally non-breaking: they add operational instructions to system prompts but do not change graph structure, tool availability, or output contracts.
- Default skills can be fully disabled via `"default_skills": {"_all": {"enabled": false}}`.
- Agents without a `skills` list load zero community skills (model may still activate from catalog).
- The `GraphExecutor` is unchanged — no new execution model.
- Existing `tools.py`, `mcp_servers.json`, and `mcp_registry.json` work alongside skills.
- Skills from the Agent Skills ecosystem (Claude Code, Cursor, etc.) work without modification.
---
## 11. Interaction with MCP Registry
Skills and MCP servers are complementary:
| Concern | MCP Registry | Skill System |
| -------------- | ------------------------------------------ | ----------------------------------------------- |
| What it shares | Tool infrastructure (servers, connections) | Agent behavior (instructions, prompts, scripts) |
| Format | Manifest JSON (Hive-specific) | `SKILL.md` (open standard) |
| Granularity | Atomic tool functions | Multi-step behavioral patterns |
**Integration:** Skills reference tools by name in their `SKILL.md` instructions; the agent resolves them via the normal tool registry. If a skill requires a tool that isn't available, the agent will encounter an error at execution time — `hive skill doctor` can pre-check this.
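The `hive skill doctor` pre-check described above reduces to a set difference between the tool names a skill references (for example, its `allowed-tools` frontmatter) and the names registered in the tool registry. A sketch with assumed input shapes:

```python
def missing_tools(allowed_tools: list[str], registered: set[str]) -> list[str]:
    """Return skill-referenced tool names absent from the tool registry, sorted."""
    return sorted(t for t in allowed_tools if t not in registered)
```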
---
## 12. Documentation & Examples Strategy
| Doc | Audience | Deliverable |
| -------------------------------------- | ----------------- | ------------------------------------------------------------------------------ |
| "Install and use your first skill" | Users | From `hive skill search` to skill activating in a session |
| "Write your first skill" | Contributors | Step-by-step: `hive skill init` → write SKILL.md → validate → submit PR |
| "Port a skill from Claude Code/Cursor" | Contributors | Usually just install it — guide explains verification |
| "Default skills reference" | All users | All 6 defaults: purpose, config, shared memory keys, tuning |
| "Tuning default skills" | Advanced builders | When to disable vs. configure; per-agent overrides; measuring impact |
| Skill cookbook | Contributors | Annotated examples: research, triage, draft, review, outreach, data extraction |
| "Evaluating skill quality" | Contributors | Setting up evals, writing assertions, iterating with the eval-driven loop |
| Starter pack guide | Users | Finding, installing, and customizing starter packs |
---
## 13. Phased Delivery
| Phase | Scope | Depends On |
| --------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| **Phase 0: Default Skills** | Implement 6 default skills as `SKILL.md` packages; `DefaultSkillManager` with system prompt injection, iteration callbacks, node completion hooks, phase transition hooks; `DefaultSkillConfig` in Python API and `agent.json`; `_`-prefixed shared memory convention; startup logging | — |
| **Phase 1: Agent Skills Standard** | `SkillDiscovery` scanning `.agents/skills/` and `.hive/skills/`; `SKILL.md` parsing with lenient validation; progressive disclosure (catalog injection, activation, resource loading); model-driven and user-driven activation; context protection; deduplication; pre-activated skills config; compatibility tests against `github.com/anthropics/skills` | — |
| **Phase 2: CLI & Contributor Tooling** | `hive skill init`, `validate`, `test`, `fork`; `hive skill doctor`; `hive skill install/remove/list/search/info/update`; version pinning; `skills-ref` integration for validation | Phase 1 |
| **Phase 3: Registry Repo** | Create `hive-skill-registry` GitHub repo; CI validation using `skills-ref`; `_template/`; `CONTRIBUTING.md`; seed with 10+ skills (extracted from templates + ported from anthropics/skills); eval CI | Phase 1 |
| **Phase 4: Docs & Launch** | All documentation from section 12; example agents using skills; announcement; bounty program integration | Phase 2, 3 |
| **Phase 5: Community Growth** | Trust tier promotion process; starter packs; community signals (install counts); monthly skill spotlight; eval-driven quality ranking | Phase 4 |
| **Phase 6: Advanced Features** (future) | Subagent delegation for skill execution; skill-level telemetry; AI-assisted skill creation | Phase 5 |
Phase 0 and Phase 1 can proceed in parallel — default skills depend on the prompt injection pipeline, while Agent Skills standard depends on discovery/parsing/activation.
---
## 14. Open Questions
| # | Question | Owner | Status |
| --- | -------------------------------------------------------------------------------------------------------------------------------------- | ------------------- | ------ |
| Q1 | Should the registry repo live under `aden-hive` org or a shared `agentskills` org? | Platform | Open |
| Q2 | Should default skill protocols be adaptive (e.g., `hive.batch-ledger` adjusts checkpoint frequency based on item size)? | Engineering | Open |
| Q3 | Should default skills be tunable per-node (not just per-agent)? | Engineering | Open |
| Q4 | How should default skill protocols interact with existing `adapt.md` working memory? Should `_working_notes` replace or supplement it? | Engineering | Open |
| Q5 | Should `hive.quality-monitor` self-assessments feed into judge decisions (auto-trigger RETRY on self-reported degradation)? | Engineering | Open |
| Q6 | What is the right combined token budget for default skill prompts? 2000 tokens proposed — configurable or fixed? | Engineering | Open |
| Q7 | Should Hive support subagent delegation for skill execution (run skill in isolated session, return summary)? | Engineering | Open |
| Q8 | Should Hive also scan `.claude/skills/` for pragmatic compatibility with Claude Code's native skill location? | Engineering | Open |
| Q9 | What is the process for promoting a `community` skill to `verified`? | Platform + Security | Open |
| Q10 | Should the registry support private/enterprise skill indexes (`hive skill config --index-url`)? | Platform | Open |
| Q11 | Should `hive skill test` use the official `skills-ref` library or a Hive-native implementation? | Engineering | Open |
| Q12 | How should skill-level telemetry (activation counts, eval pass rates) be collected without compromising privacy? | Product + Privacy | Open |
---
## 15. Stakeholder Sign-Off
| Role | Name | Status |
| -------------------- | ---- | ------- |
| Engineering Lead | | Pending |
| Product | | Pending |
| OSS / Community | | Pending |
| Security | | Pending |
| Developer Experience | | Pending |
```diff
@@ -12,7 +12,6 @@ from .agent import (
     nodes,
     edges,
     loop_config,
-    async_entry_points,
     entry_node,
     entry_points,
     pause_nodes,
```
```diff
@@ -31,7 +30,6 @@ __all__ = [
     "nodes",
     "edges",
     "loop_config",
-    "async_entry_points",
    "entry_node",
    "entry_points",
    "pause_nodes",
```
```diff
@@ -4,7 +4,7 @@ from pathlib import Path
 from framework.graph import EdgeCondition, EdgeSpec, Goal, SuccessCriterion, Constraint
 from framework.graph.checkpoint_config import CheckpointConfig
-from framework.graph.edge import AsyncEntryPointSpec, GraphSpec
+from framework.graph.edge import GraphSpec
 from framework.graph.executor import ExecutionResult, GraphExecutor
 from framework.llm import LiteLLMProvider
 from framework.runner.tool_registry import ToolRegistry
```
```diff
@@ -152,17 +152,6 @@ edges = [
 # Graph configuration
 entry_node = "intake"
 entry_points = {"start": "intake"}
-async_entry_points = [
-    AsyncEntryPointSpec(
-        id="email-timer",
-        name="Scheduled Inbox Check",
-        entry_node="fetch-emails",
-        trigger_type="timer",
-        trigger_config={"interval_minutes": 5},
-        isolation_level="shared",
-        max_concurrent=1,
-    ),
-]
 pause_nodes = []
 terminal_nodes = []
 loop_config = {
```
```diff
@@ -224,7 +213,6 @@ class EmailInboxManagementAgent:
             loop_config=loop_config,
             conversation_mode=conversation_mode,
             identity_prompt=identity_prompt,
-            async_entry_points=async_entry_points,
         )

     def _setup(self, mock_mode=False) -> None:
```
```diff
@@ -275,16 +263,6 @@ class EmailInboxManagementAgent:
                 trigger_type="manual",
                 isolation_level="shared",
             ),
-            # Timer-driven entry point
-            EntryPointSpec(
-                id="email-timer",
-                name="Scheduled Inbox Check",
-                entry_node="fetch-emails",
-                trigger_type="timer",
-                trigger_config={"interval_minutes": 5},
-                isolation_level="shared",
-                max_concurrent=1,
-            ),
         ]
         self._agent_runtime = create_agent_runtime(
```
```diff
@@ -360,10 +338,6 @@ class EmailInboxManagementAgent:
             "pause_nodes": self.pause_nodes,
             "terminal_nodes": self.terminal_nodes,
             "client_facing_nodes": [n.id for n in self.nodes if n.client_facing],
-            "async_entry_points": [
-                {"id": ep.id, "name": ep.name, "entry_node": ep.entry_node}
-                for ep in async_entry_points
-            ],
         }

     def validate(self):
```
```diff
@@ -391,13 +365,6 @@ class EmailInboxManagementAgent:
                     f"Entry point '{ep_id}' references unknown node '{node_id}'"
                 )
-        # Validate async entry points
-        for ep in async_entry_points:
-            if ep.entry_node not in node_ids:
-                errors.append(
-                    f"Async entry point '{ep.id}' references unknown node '{ep.entry_node}'"
-                )
         return {
             "valid": len(errors) == 0,
             "errors": errors,
```
```diff
@@ -0,0 +1,11 @@
+[
+  {
+    "id": "email-timer",
+    "name": "Scheduled Inbox Check",
+    "trigger_type": "timer",
+    "trigger_config": {
+      "interval_minutes": 5
+    },
+    "task": "Fetch and process inbox emails according to the user's rules"
+  }
+]
```
```diff
@@ -6,7 +6,7 @@
 .DESCRIPTION
 An interactive setup wizard that:
 1. Installs Python dependencies via uv
-2. Installs Playwright browser for web scraping
+2. Checks for Chrome/Edge browser for web automation
 3. Helps configure LLM API keys
 4. Verifies everything works
```
```diff
@@ -518,22 +518,14 @@ try {
         exit 1
     }
-    # Install Playwright browser
-    Write-Host "  Installing Playwright browser... " -NoNewline
-    $null = & uv run python -c "import playwright" 2>&1
-    $importExitCode = $LASTEXITCODE
-    if ($importExitCode -eq 0) {
-        $null = & uv run python -m playwright install chromium 2>&1
-        $playwrightExitCode = $LASTEXITCODE
-        if ($playwrightExitCode -eq 0) {
-            Write-Ok "ok"
-        } else {
-            Write-Warn "skipped (install manually: uv run python -m playwright install chromium)"
-        }
+    # Check for Chrome/Edge (required for GCU browser tools)
+    Write-Host "  Checking for Chrome/Edge browser... " -NoNewline
+    $null = & uv run python -c "from gcu.browser.chrome_finder import find_chrome; assert find_chrome()" 2>&1
+    $chromeCheckExit = $LASTEXITCODE
+    if ($chromeCheckExit -eq 0) {
+        Write-Ok "ok"
     } else {
-        Write-Warn "skipped"
+        Write-Warn "not found - install Chrome or Edge for browser tools"
     }
 } finally {
     Pop-Location
```
```diff
@@ -810,26 +802,26 @@ $DefaultModels = @{
 # Model choices: array of hashtables per provider
 $ModelChoices = @{
     anthropic = @(
-        @{ Id = "claude-haiku-4-5-20251001"; Label = "Haiku 4.5 - Fast + cheap (recommended)"; MaxTokens = 8192 },
-        @{ Id = "claude-sonnet-4-20250514"; Label = "Sonnet 4 - Fast + capable"; MaxTokens = 8192 },
-        @{ Id = "claude-sonnet-4-5-20250929"; Label = "Sonnet 4.5 - Best balance"; MaxTokens = 16384 },
-        @{ Id = "claude-opus-4-6"; Label = "Opus 4.6 - Most capable"; MaxTokens = 32768 }
+        @{ Id = "claude-haiku-4-5-20251001"; Label = "Haiku 4.5 - Fast + cheap (recommended)"; MaxTokens = 8192; MaxContextTokens = 180000 },
+        @{ Id = "claude-sonnet-4-20250514"; Label = "Sonnet 4 - Fast + capable"; MaxTokens = 8192; MaxContextTokens = 180000 },
+        @{ Id = "claude-sonnet-4-5-20250929"; Label = "Sonnet 4.5 - Best balance"; MaxTokens = 16384; MaxContextTokens = 180000 },
+        @{ Id = "claude-opus-4-6"; Label = "Opus 4.6 - Most capable"; MaxTokens = 32768; MaxContextTokens = 180000 }
     )
     openai = @(
-        @{ Id = "gpt-5-mini"; Label = "GPT-5 Mini - Fast + cheap (recommended)"; MaxTokens = 16384 },
-        @{ Id = "gpt-5.2"; Label = "GPT-5.2 - Most capable"; MaxTokens = 16384 }
+        @{ Id = "gpt-5-mini"; Label = "GPT-5 Mini - Fast + cheap (recommended)"; MaxTokens = 16384; MaxContextTokens = 120000 },
+        @{ Id = "gpt-5.2"; Label = "GPT-5.2 - Most capable"; MaxTokens = 16384; MaxContextTokens = 120000 }
     )
     gemini = @(
-        @{ Id = "gemini-3-flash-preview"; Label = "Gemini 3 Flash - Fast (recommended)"; MaxTokens = 8192 },
-        @{ Id = "gemini-3.1-pro-preview"; Label = "Gemini 3.1 Pro - Best quality"; MaxTokens = 8192 }
+        @{ Id = "gemini-3-flash-preview"; Label = "Gemini 3 Flash - Fast (recommended)"; MaxTokens = 8192; MaxContextTokens = 900000 },
+        @{ Id = "gemini-3.1-pro-preview"; Label = "Gemini 3.1 Pro - Best quality"; MaxTokens = 8192; MaxContextTokens = 900000 }
     )
     groq = @(
-        @{ Id = "moonshotai/kimi-k2-instruct-0905"; Label = "Kimi K2 - Best quality (recommended)"; MaxTokens = 8192 },
-        @{ Id = "openai/gpt-oss-120b"; Label = "GPT-OSS 120B - Fast reasoning"; MaxTokens = 8192 }
+        @{ Id = "moonshotai/kimi-k2-instruct-0905"; Label = "Kimi K2 - Best quality (recommended)"; MaxTokens = 8192; MaxContextTokens = 120000 },
+        @{ Id = "openai/gpt-oss-120b"; Label = "GPT-OSS 120B - Fast reasoning"; MaxTokens = 8192; MaxContextTokens = 120000 }
     )
     cerebras = @(
-        @{ Id = "zai-glm-4.7"; Label = "ZAI-GLM 4.7 - Best quality (recommended)"; MaxTokens = 8192 },
-        @{ Id = "qwen3-235b-a22b-instruct-2507"; Label = "Qwen3 235B - Frontier reasoning"; MaxTokens = 8192 }
+        @{ Id = "zai-glm-4.7"; Label = "ZAI-GLM 4.7 - Best quality (recommended)"; MaxTokens = 8192; MaxContextTokens = 120000 },
+        @{ Id = "qwen3-235b-a22b-instruct-2507"; Label = "Qwen3 235B - Frontier reasoning"; MaxTokens = 8192; MaxContextTokens = 120000 }
     )
 }
```
@@ -838,10 +830,10 @@ function Get-ModelSelection {
$choices = $ModelChoices[$ProviderId]
if (-not $choices -or $choices.Count -eq 0) {
return @{ Model = $DefaultModels[$ProviderId]; MaxTokens = 8192 }
return @{ Model = $DefaultModels[$ProviderId]; MaxTokens = 8192; MaxContextTokens = 120000 }
}
if ($choices.Count -eq 1) {
return @{ Model = $choices[0].Id; MaxTokens = $choices[0].MaxTokens }
return @{ Model = $choices[0].Id; MaxTokens = $choices[0].MaxTokens; MaxContextTokens = $choices[0].MaxContextTokens }
}
# Find default index from previous model (if same provider)
@@ -874,7 +866,7 @@ function Get-ModelSelection {
$sel = $choices[$num - 1]
Write-Host ""
Write-Ok "Model: $($sel.Id)"
return @{ Model = $sel.Id; MaxTokens = $sel.MaxTokens }
return @{ Model = $sel.Id; MaxTokens = $sel.MaxTokens; MaxContextTokens = $sel.MaxContextTokens }
}
}
Write-Color -Text "Invalid choice. Please enter 1-$($choices.Count)" -Color Red
@@ -891,11 +883,12 @@ Write-Step -Number "" -Text "Configuring LLM provider..."
$HiveConfigDir = Join-Path $env:USERPROFILE ".hive"
$HiveConfigFile = Join-Path $HiveConfigDir "configuration.json"
$SelectedProviderId = ""
$SelectedEnvVar = ""
$SelectedModel = ""
$SelectedMaxTokens = 8192
$SubscriptionMode = ""
$SelectedProviderId = ""
$SelectedEnvVar = ""
$SelectedModel = ""
$SelectedMaxTokens = 8192
$SelectedMaxContextTokens = 120000
$SubscriptionMode = ""
# ── Credential detection (silent — just set flags) ───────────
$ClaudeCredDetected = $false
@@ -1071,20 +1064,22 @@ switch ($num) {
Write-Host ""
exit 1
}
$SubscriptionMode = "claude_code"
$SelectedProviderId = "anthropic"
$SelectedModel = "claude-opus-4-6"
$SelectedMaxTokens = 32768
$SubscriptionMode = "claude_code"
$SelectedProviderId = "anthropic"
$SelectedModel = "claude-opus-4-6"
$SelectedMaxTokens = 32768
$SelectedMaxContextTokens = 180000
Write-Host ""
Write-Ok "Using Claude Code subscription"
}
2 {
# ZAI Code Subscription
$SubscriptionMode = "zai_code"
$SelectedProviderId = "openai"
$SelectedEnvVar = "ZAI_API_KEY"
$SelectedModel = "glm-5"
$SelectedMaxTokens = 32768
$SubscriptionMode = "zai_code"
$SelectedProviderId = "openai"
$SelectedEnvVar = "ZAI_API_KEY"
$SelectedModel = "glm-5"
$SelectedMaxTokens = 32768
$SelectedMaxContextTokens = 120000
Write-Host ""
Write-Ok "Using ZAI Code subscription"
Write-Color -Text " Model: glm-5 | API: api.z.ai" -Color DarkGray
@@ -1113,21 +1108,23 @@ switch ($num) {
}
}
if ($CodexCredDetected) {
$SubscriptionMode = "codex"
$SelectedProviderId = "openai"
$SelectedModel = "gpt-5.3-codex"
$SelectedMaxTokens = 16384
$SubscriptionMode = "codex"
$SelectedProviderId = "openai"
$SelectedModel = "gpt-5.3-codex"
$SelectedMaxTokens = 16384
$SelectedMaxContextTokens = 120000
Write-Host ""
Write-Ok "Using OpenAI Codex subscription"
}
}
4 {
# Kimi Code Subscription
$SubscriptionMode = "kimi_code"
$SelectedProviderId = "kimi"
$SelectedEnvVar = "KIMI_API_KEY"
$SelectedModel = "kimi-k2.5"
$SelectedMaxTokens = 32768
$SubscriptionMode = "kimi_code"
$SelectedProviderId = "kimi"
$SelectedEnvVar = "KIMI_API_KEY"
$SelectedModel = "kimi-k2.5"
$SelectedMaxTokens = 32768
$SelectedMaxContextTokens = 120000
Write-Host ""
Write-Ok "Using Kimi Code subscription"
Write-Color -Text " Model: kimi-k2.5 | API: api.kimi.com/coding" -Color DarkGray
@@ -1349,8 +1346,9 @@ if ($SubscriptionMode -eq "kimi_code") {
# Prompt for model if not already selected (manual provider path)
if ($SelectedProviderId -and -not $SelectedModel) {
$modelSel = Get-ModelSelection $SelectedProviderId
$SelectedModel = $modelSel.Model
$SelectedMaxTokens = $modelSel.MaxTokens
$SelectedModel = $modelSel.Model
$SelectedMaxTokens = $modelSel.MaxTokens
$SelectedMaxContextTokens = $modelSel.MaxContextTokens
}
# Save configuration
@@ -1367,9 +1365,10 @@ if ($SelectedProviderId) {
$config = @{
llm = @{
provider = $SelectedProviderId
model = $SelectedModel
max_tokens = $SelectedMaxTokens
provider = $SelectedProviderId
model = $SelectedModel
max_tokens = $SelectedMaxTokens
max_context_tokens = $SelectedMaxContextTokens
}
created_at = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss+00:00")
}
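For reference, a minimal Python sketch of the `configuration.json` shape this installer now writes, plus a tolerant reader for configs created by older installers that lack the new `max_context_tokens` field. The field names and the 120000 fallback come from the diff above; the helper itself is illustrative, not repo code:

```python
import json
from pathlib import Path

# Shape of ~/.hive/configuration.json after this change (sketch; exact
# serialization order may differ from the PowerShell output).
config = {
    "llm": {
        "provider": "anthropic",
        "model": "claude-opus-4-6",
        "max_tokens": 32768,
        "max_context_tokens": 180000,  # new field added in this change
    },
    "created_at": "2026-03-13T20:39:46+00:00",
}


def load_max_context_tokens(path: Path, default: int = 120000) -> int:
    """Read max_context_tokens, falling back for configs written before this change."""
    try:
        data = json.loads(path.read_text())
    except (OSError, json.JSONDecodeError):
        return default
    llm = data.get("llm") or {}
    return int(llm.get("max_context_tokens", default))
```

The fallback keeps readers compatible with both old and new config files without a migration step.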
+6 -10
@@ -4,7 +4,7 @@
#
# An interactive setup wizard that:
# 1. Installs Python dependencies
# 2. Installs Playwright browser for web scraping
# 2. Checks for Chrome/Edge browser for web automation
# 3. Helps configure LLM API keys
# 4. Verifies everything works
#
@@ -253,16 +253,12 @@ else
exit 1
fi
# Install Playwright browser
echo -n " Installing Playwright browser... "
if uv run python -c "import playwright" > /dev/null 2>&1; then
if uv run python -m playwright install chromium > /dev/null 2>&1; then
echo -e "${GREEN}ok${NC}"
else
echo -e "${YELLOW}${NC}"
fi
# Check for Chrome/Edge (required for GCU browser tools)
echo -n " Checking for Chrome/Edge browser... "
if uv run python -c "from gcu.browser.chrome_finder import find_chrome; assert find_chrome()" > /dev/null 2>&1; then
echo -e "${GREEN}ok${NC}"
else
echo -e "${YELLOW}${NC}"
echo -e "${YELLOW}not found — install Chrome or Edge for browser tools${NC}"
fi
cd "$SCRIPT_DIR"
+11 -1
@@ -68,10 +68,16 @@ interface LeaderboardEntry {
// ---------------------------------------------------------------------------
const POINTS: Record<string, number> = {
// Integration bounties
"bounty:test": 20,
"bounty:docs": 20,
"bounty:code": 30,
"bounty:new-tool": 75,
// Standard bounties
"bounty:small": 10,
"bounty:medium": 30,
"bounty:large": 75,
"bounty:extreme": 150,
};
// ---------------------------------------------------------------------------
@@ -276,6 +282,10 @@ function formatBountyNotification(bounty: BountyResult): string {
docs: "\u{1F4DD}",
code: "\u{1F527}",
"new-tool": "\u{2B50}",
small: "\u{1F4A1}",
medium: "\u{1F6E0}",
large: "\u{1F680}",
extreme: "\u{1F525}",
};
const emoji = typeEmoji[bounty.bountyType] ?? "\u{1F3AF}";
@@ -301,7 +311,7 @@ function formatLeaderboard(entries: LeaderboardEntry[]): string {
const medals = ["\u{1F947}", "\u{1F948}", "\u{1F949}"];
let msg = "**\u{1F3C6} Integration Bounty Leaderboard**\n\n";
let msg = "**\u{1F3C6} Bounty Leaderboard**\n\n";
for (let i = 0; i < top10.length; i++) {
const entry = top10[i];
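The expanded `POINTS` table above now covers both integration and standard bounty labels; a small Python sketch mirroring those values shows how a per-PR score would be tallied (the tally helper is illustrative, not repo code):

```python
# Point values mirrored from the TypeScript POINTS table above.
POINTS = {
    "bounty:test": 20,
    "bounty:docs": 20,
    "bounty:code": 30,
    "bounty:new-tool": 75,
    "bounty:small": 10,
    "bounty:medium": 30,
    "bounty:large": 75,
    "bounty:extreme": 150,
}


def score(labels: list[str]) -> int:
    """Sum points for recognized bounty labels; unrecognized labels score zero."""
    return sum(POINTS.get(label, 0) for label in labels)
```

For example, a PR labeled `bounty:small` and `bounty:large` would earn 85 points under this table.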
+2 -1
@@ -13,7 +13,8 @@ from framework.agents.queen.nodes import (
_DEFAULT_WORKER_IDENTITY = (
"\n\n# Worker Profile\n"
"No worker agent loaded. You are operating independently.\n"
"Handle all tasks directly using your coding tools."
"Design or build the agent to solve the user's problem "
"according to your current phase."
)
-61
@@ -1,61 +0,0 @@
#!/usr/bin/env bash
#
# setup-antigravity-mcp.sh - Write Antigravity/Claude MCP config with auto-detected paths
#
# Run from anywhere inside the hive repo. Generates ~/.gemini/antigravity/mcp_config.json
# based on .agent/mcp_config.json template, with absolute paths so the IDE can
# connect to tools MCP servers without manual path editing.
#
set -e
# Find repo root
REPO_ROOT=""
if git rev-parse --show-toplevel &>/dev/null; then
REPO_ROOT="$(git rev-parse --show-toplevel)"
elif [ -f ".agent/mcp_config.json" ]; then
REPO_ROOT="$(pwd)"
else
d="$(pwd)"
while [ -n "$d" ] && [ "$d" != "/" ]; do
[ -f "$d/.agent/mcp_config.json" ] && REPO_ROOT="$d" && break
d="$(dirname "$d")"
done
fi
if [ -z "$REPO_ROOT" ] || [ ! -d "$REPO_ROOT/core" ] || [ ! -d "$REPO_ROOT/tools" ]; then
echo "Error: Run this script from inside the hive repo (could not find repo root with core/ and tools/)." >&2
exit 1
fi
TEMPLATE="$REPO_ROOT/.agent/mcp_config.json"
if [ ! -f "$TEMPLATE" ]; then
echo "Error: Template not found at $TEMPLATE" >&2
exit 1
fi
CORE_DIR="$(cd "$REPO_ROOT/core" && pwd)"
TOOLS_DIR="$(cd "$REPO_ROOT/tools" && pwd)"
mkdir -p "$HOME/.gemini/antigravity"
# Generate config from template with absolute paths
# Replace relative "core" and "tools" with absolute paths in --directory args
sed -e "s|\"--directory\", \"core\"|\"--directory\", \"$CORE_DIR\"|g" \
-e "s|\"--directory\", \"tools\"|\"--directory\", \"$TOOLS_DIR\"|g" \
"$TEMPLATE" > "$HOME/.gemini/antigravity/mcp_config.json"
echo "Wrote $HOME/.gemini/antigravity/mcp_config.json (from $TEMPLATE)"
echo " core -> $CORE_DIR"
echo " tools -> $TOOLS_DIR"
if [ "$1" = "--claude" ]; then
mkdir -p "$HOME/.claude"
cp "$HOME/.gemini/antigravity/mcp_config.json" "$HOME/.claude/mcp.json"
echo "Wrote $HOME/.claude/mcp.json"
fi
echo ""
echo "Next: Restart Antigravity IDE so it loads the MCP config."
echo " Then open this repo; tools should appear."
echo ""
echo "For Claude Code, run: $0 --claude"
+8 -2
@@ -1,5 +1,5 @@
#!/usr/bin/env bash
# Creates GitHub labels for the Integration Bounty Program.
# Creates GitHub labels for the Bounty Program.
# Usage: ./scripts/setup-bounty-labels.sh [owner/repo]
# Requires: gh CLI authenticated
@@ -9,12 +9,18 @@ REPO="${1:-adenhq/hive}"
echo "Setting up bounty labels for $REPO..."
# Bounty type labels
# Integration bounty labels
gh label create "bounty:test" --repo "$REPO" --color "1D76DB" --description "Bounty: test a tool with real API key (20 pts)" --force
gh label create "bounty:docs" --repo "$REPO" --color "FBCA04" --description "Bounty: write or improve documentation (20 pts)" --force
gh label create "bounty:code" --repo "$REPO" --color "D93F0B" --description "Bounty: health checker, bug fix, or improvement (30 pts)" --force
gh label create "bounty:new-tool" --repo "$REPO" --color "6F42C1" --description "Bounty: build a new integration from scratch (75 pts)" --force
# Standard bounty labels
gh label create "bounty:small" --repo "$REPO" --color "C2E0C6" --description "Bounty: quick fix — typos, links, error messages (10 pts)" --force
gh label create "bounty:medium" --repo "$REPO" --color "0E8A16" --description "Bounty: bug fix, tests, guides, CLI improvements (30 pts)" --force
gh label create "bounty:large" --repo "$REPO" --color "B60205" --description "Bounty: new feature, perf work, architecture docs (75 pts)" --force
gh label create "bounty:extreme" --repo "$REPO" --color "000000" --description "Bounty: major subsystem, security audit, core refactor (150 pts)" --force
# Difficulty labels
gh label create "difficulty:easy" --repo "$REPO" --color "BFD4F2" --description "Good first contribution" --force
gh label create "difficulty:medium" --repo "$REPO" --color "D4C5F9" --description "Requires some familiarity" --force
+8 -2
@@ -14,8 +14,14 @@ COPY mcp_server.py ./
# Install package with all dependencies
RUN pip install --no-cache-dir -e .
# Install Playwright Chromium browser and system dependencies
RUN playwright install chromium --with-deps
# Install Google Chrome (stable) — used by GCU browser tools via CDP
RUN apt-get update && apt-get install -y wget gnupg \
&& mkdir -p /etc/apt/keyrings \
&& wget -q -O /etc/apt/keyrings/google-chrome.asc https://dl.google.com/linux/linux_signing_key.pub \
&& echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/google-chrome.asc] http://dl.google.com/linux/chrome/deb/ stable main" \
> /etc/apt/sources.list.d/google-chrome.list \
&& apt-get update && apt-get install -y google-chrome-stable \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Create non-root user for security
RUN useradd -m -u 1001 appuser
+52 -34
@@ -346,8 +346,8 @@ def list_agent_tools(
tools from this list in node definitions never guess or fabricate.
Progressive disclosure workflow (start narrow, drill in):
list_agent_tools() # provider summary: counts + credential status
list_agent_tools(group="google", output_schema="summary") # service breakdown within google
list_agent_tools() # provider summary
list_agent_tools(group="google", output_schema="summary") # service breakdown
list_agent_tools(group="google", service="gmail") # tool names for just gmail
list_agent_tools(group="google", service="gmail", output_schema="full") # full detail
@@ -593,7 +593,8 @@ def list_agent_tools(
# Compute credential availability once (used for filtering and summary)
available_creds: set[str] = (
_get_available_credential_names() if credentials != "all" or output_schema == "summary"
_get_available_credential_names()
if credentials != "all" or output_schema == "summary"
else set()
)
@@ -603,7 +604,8 @@ def list_agent_tools(
filtered_tools = [
t
for t in all_tools
if (credentials == "available") == _tool_credentials_available(t["name"], available_creds)
if (credentials == "available")
== _tool_credentials_available(t["name"], available_creds)
]
provider_groups = _group_by_provider(filtered_tools)
@@ -628,7 +630,13 @@ def list_agent_tools(
for t in filtered_tools:
# Only include tools from the already-filtered provider set
tool_name = t["name"]
in_provider = any(tool_name in p.get("tool_names", [tool_entry.get("name") for tool_entry in p.get("tools", [])]) for p in provider_groups.values())
in_provider = any(
tool_name
in p.get(
"tool_names", [tool_entry.get("name") for tool_entry in p.get("tools", [])]
)
for p in provider_groups.values()
)
if in_provider and tool_name.startswith(service_prefix):
service_filtered.append(t)
provider_groups = _group_by_provider(service_filtered)
@@ -644,7 +652,9 @@ def list_agent_tools(
full_groups = _group_by_provider(all_tools) if credentials != "all" else provider_groups
summary_providers: dict = {}
for prov, bucket in full_groups.items():
cred_names = bucket.get("credentials_required", sorted(bucket.get("authorization", {}).keys()))
cred_names = bucket.get(
"credentials_required", sorted(bucket.get("authorization", {}).keys())
)
creds_ok = all(c in available_creds for c in cred_names) if cred_names else True
summary_providers[prov] = {
"tool_count": len(bucket.get("tool_names", bucket.get("tools", []))),
@@ -655,7 +665,8 @@ def list_agent_tools(
"total_tools": sum(v["tool_count"] for v in summary_providers.values()),
"providers": summary_providers,
"hint": (
"Use list_agent_tools(group='<provider>', output_schema='summary') for service breakdown, "
"Use list_agent_tools(group='<provider>', "
"output_schema='summary') for service breakdown, "
"list_agent_tools(group='<provider>', service='<service>') for tool names. "
"Filter by credentials='available' to see only ready-to-use tools."
),
@@ -1776,7 +1787,7 @@ default_config = RuntimeConfig()
class AgentMetadata:
name: str = "{human_name}"
version: str = "1.0.0"
description: str = "{_draft_desc or 'TODO: Add agent description.'}"
description: str = "{_draft_desc or "TODO: Add agent description."}"
intro_message: str = "TODO: Add intro message."
@@ -1868,53 +1879,57 @@ __all__ = {node_var_names!r}
edges_str = "\n".join(edge_defs) if edge_defs else " # TODO: Add edges"
# Pre-populate goal from draft metadata
_draft_goal = (_draft.get("goal") or "TODO: Describe the agent's goal.") if _draft else "TODO: Describe the agent's goal."
_draft_goal = (
(_draft.get("goal") or "TODO: Describe the agent's goal.")
if _draft
else "TODO: Describe the agent's goal."
)
_draft_sc = (_draft.get("success_criteria") or []) if _draft else []
_draft_constraints = (_draft.get("constraints") or []) if _draft else []
# Build success criteria entries
if _draft_sc:
sc_entries = "\n".join(
f'''\
f"""\
SuccessCriterion(
id="sc-{i+1}",
id="sc-{i + 1}",
description="{sc}",
metric="TODO",
target="TODO",
weight=1.0,
),'''
),"""
for i, sc in enumerate(_draft_sc)
)
else:
sc_entries = '''\
sc_entries = """\
SuccessCriterion(
id="sc-1",
description="TODO: Define success criterion.",
metric="TODO",
target="TODO",
weight=1.0,
),'''
),"""
# Build constraint entries
if _draft_constraints:
constraint_entries = "\n".join(
f'''\
f"""\
Constraint(
id="c-{i+1}",
id="c-{i + 1}",
description="{c}",
constraint_type="hard",
category="functional",
),'''
),"""
for i, c in enumerate(_draft_constraints)
)
else:
constraint_entries = '''\
constraint_entries = """\
Constraint(
id="c-1",
description="TODO: Define constraint.",
constraint_type="hard",
category="functional",
),'''
),"""
_write(
"agent.py",
@@ -2242,21 +2257,24 @@ if __name__ == "__main__":
)
# -- mcp_servers.json --
_write(
"mcp_servers.json",
json.dumps(
{
"hive-tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../../tools",
"description": "Hive tools MCP server",
}
},
indent=2,
),
)
mcp_config: dict = {
"hive-tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../../tools",
"description": "Hive tools MCP server",
},
"gcu-tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "-m", "gcu.server", "--stdio"],
"cwd": "../../tools",
"description": "GCU browser automation tools",
},
}
_write("mcp_servers.json", json.dumps(mcp_config, indent=2))
# -- tests/conftest.py --
_write(
Some files were not shown because too many files have changed in this diff.