Compare commits
148 Commits
| SHA1 |
|---|
| a9afa0555c | |||
| 83b2183cf0 | |||
| f49e7a760e | |||
| 6e0255ebec | |||
| 379d3df46b | |||
| c7d70e0fb1 | |||
| ced64541b9 | |||
| 3c30cfe02b | |||
| 0d6267bcf1 | |||
| 6f23a30eed | |||
| ab995d8b96 | |||
| c2e560fc07 | |||
| 19f7ae862e | |||
| 5e9f74744a | |||
| 7787179a5a | |||
| b63205b91a | |||
| 347bccb9ee | |||
| 9d83f0298f | |||
| 7f7e8b4dff | |||
| f48a7380f5 | |||
| 3c7f129d86 | |||
| 4533b27aa1 | |||
| 3adf268c29 | |||
| ac8579900f | |||
| abbaaa68f3 | |||
| 11089093ef | |||
| 99b7cb07d5 | |||
| 70d61ae67a | |||
| dd054815a3 | |||
| 8e5eaae9dd | |||
| 2d0128eb5c | |||
| 06f1d4dcef | |||
| 0e7b11b5b2 | |||
| 291b78f934 | |||
| e196a03972 | |||
| a0abe2685d | |||
| e8f642c8b6 | |||
| 6260f628eb | |||
| 4a4f17ed40 | |||
| 36dcf2025b | |||
| 85c70c94e6 | |||
| 336e82ba22 | |||
| f2ddd1051d | |||
| 2dd60c8d52 | |||
| ff01c1fd99 | |||
| 421b25fdb7 | |||
| 795c3c33e2 | |||
| 97821f4d80 | |||
| 505e1e30fd | |||
| 3fb2b285fb | |||
| a76109840c | |||
| 1db8484402 | |||
| 39212350ba | |||
| f3399fe95b | |||
| d02e1155ed | |||
| 7ede3ba171 | |||
| cdaec8a837 | |||
| 2272491cf5 | |||
| bb38cb974f | |||
| 635d2976f4 | |||
| 4e1525880d | |||
| b80559df68 | |||
| 08d93ef90a | |||
| 22bf035522 | |||
| 15944a42ab | |||
| 8440ec70ba | |||
| eacf2520cf | |||
| def4f62a51 | |||
| b0c5bcd210 | |||
| 2fe1343343 | |||
| de0dcff50f | |||
| 20427e213a | |||
| 1fb5c6337a | |||
| 1e74f194a1 | |||
| 08157d2bd6 | |||
| ef036257a9 | |||
| 16ce984c74 | |||
| 1e8b5b96eb | |||
| 094ba89f19 | |||
| 7008c9f310 | |||
| 94d7cbacc2 | |||
| bddc2b413a | |||
| 48c8fb7fff | |||
| 52b1a3f472 | |||
| 079e00c8f7 | |||
| 60bba38941 | |||
| ea8e7b11c6 | |||
| 3dc2b25b01 | |||
| 2ad78ec8a2 | |||
| 9bfddec322 | |||
| 51fdc4ddde | |||
| 04685d33ca | |||
| 729a0e0cec | |||
| 993b31f19b | |||
| 9d1f268078 | |||
| 336557d7c7 | |||
| c2c4929de8 | |||
| f9d5f95936 | |||
| 2434c86cdf | |||
| c4a5e621aa | |||
| 0f5b83d86a | |||
| b5aadcd51e | |||
| 290d2f6823 | |||
| 944567dc31 | |||
| 674cf05601 | |||
| 6fa71fa27d | |||
| 8c7065ad37 | |||
| a18ed5bbe6 | |||
| 9f3339650d | |||
| d5e5d3e83d | |||
| 5ea27dda09 | |||
| 6f9066ef20 | |||
| c37185732a | |||
| 0c900fb50e | |||
| 4d3ac28878 | |||
| 270c1f8c50 | |||
| 3d0859d06a | |||
| ed3d4bfe33 | |||
| 596ce9878d | |||
| ffe47c0f71 | |||
| bf4652db4b | |||
| 2acd526b71 | |||
| df71834e4b | |||
| bc3c5a5899 | |||
| 726016d24a | |||
| 4895cea08a | |||
| c9723a3ff2 | |||
| 6cb73a6fea | |||
| 0c7f43f595 | |||
| ea5cfcc5d6 | |||
| 34e85019c3 | |||
| c979dba958 | |||
| b4caa045e1 | |||
| e82133741c | |||
| 5076278dcb | |||
| 2398e04e11 | |||
| d00f321627 | |||
| e76b6cb575 | |||
| cba0ec110f | |||
| 0256e0c944 | |||
| 4d9d0362a0 | |||
| f474d0bc8e | |||
| 6a0681b9aa | |||
| c7e634851b | |||
| cdb7155960 | |||
| 3f7790c26a | |||
| 5676b115f4 | |||
| 61c59d57e8 |
@@ -0,0 +1,78 @@
```yaml
name: Standard Bounty
description: A bounty task for general framework contributions (not integration-specific)
title: "[Bounty]: "
labels: []
body:
  - type: markdown
    attributes:
      value: |
        ## Standard Bounty

        This issue is part of the [Bounty Program](../../docs/bounty-program/README.md).
        **Claim this bounty** by commenting below — a maintainer will assign you within 24 hours.

  - type: dropdown
    id: bounty-size
    attributes:
      label: Bounty Size
      options:
        - "Small (10 pts)"
        - "Medium (30 pts)"
        - "Large (75 pts)"
        - "Extreme (150 pts)"
    validations:
      required: true

  - type: dropdown
    id: difficulty
    attributes:
      label: Difficulty
      options:
        - Easy
        - Medium
        - Hard
    validations:
      required: true

  - type: textarea
    id: description
    attributes:
      label: Description
      description: What needs to be done to complete this bounty.
      placeholder: |
        Describe the specific task, including:
        - What the contributor needs to do
        - Links to relevant files in the repo
        - Any context or motivation for the change
    validations:
      required: true

  - type: textarea
    id: acceptance-criteria
    attributes:
      label: Acceptance Criteria
      description: What "done" looks like. The PR must meet all criteria.
      placeholder: |
        - [ ] Criterion 1
        - [ ] Criterion 2
        - [ ] CI passes
    validations:
      required: true

  - type: textarea
    id: relevant-files
    attributes:
      label: Relevant Files
      description: Links to files or directories related to this bounty.
      placeholder: |
        - `path/to/file.py`
        - `path/to/directory/`

  - type: textarea
    id: resources
    attributes:
      label: Resources
      description: Links to docs, issues, or external references that will help.
      placeholder: |
        - Related issue: #XXXX
        - Docs: https://...
```
+150 −27
@@ -1,17 +1,149 @@
# Release Notes

## v0.7.1

**Release Date:** March 13, 2026
**Tag:** v0.7.1

### Chrome-Native Browser Control

v0.7.1 replaces Playwright with direct Chrome DevTools Protocol (CDP) integration. The GCU now launches the user's system Chrome via `open -n` on macOS, connects over CDP, and manages the browser lifecycle end-to-end -- no extra browser binary required.

---

### Highlights

#### System Chrome via CDP

The entire GCU browser stack has been rewritten (a minimal launch sketch follows the list):

- **Chrome finder & launcher** -- New `chrome_finder.py` discovers installed Chrome and `chrome_launcher.py` manages process lifecycle with `--remote-debugging-port`
- **Coexists with the user's browser** -- `open -n` on macOS launches a separate Chrome instance so the user's tabs stay untouched
- **Dynamic viewport sizing** -- The viewport auto-sizes to the available display area, suppressing Chrome warning bars
- **Orphan cleanup** -- Chrome processes are killed on GCU server shutdown to prevent leaks
- **`--no-startup-window`** -- Chrome launches without a visible window until a page is needed
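The launch-and-connect flow these bullets describe can be sketched in a few lines. This is illustrative only: Hive's `chrome_finder.py` and `chrome_launcher.py` are not shown in this diff, and `launch_chrome_with_cdp` is a hypothetical helper name. The flags and the polling of `/json/version` are the standard CDP pattern.

```python
# Hypothetical sketch of launching system Chrome with a CDP endpoint.
import json
import subprocess
import tempfile
import time
import urllib.request

def launch_chrome_with_cdp(chrome_path: str, port: int = 9222) -> subprocess.Popen:
    profile_dir = tempfile.mkdtemp(prefix="hive-chrome-")
    proc = subprocess.Popen([
        chrome_path,
        f"--remote-debugging-port={port}",
        f"--user-data-dir={profile_dir}",  # separate profile, leaves user tabs untouched
        "--no-startup-window",             # no visible window until a page is needed
    ])
    # Poll the CDP HTTP endpoint until Chrome is ready to accept connections.
    url = f"http://127.0.0.1:{port}/json/version"
    for _ in range(50):
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:
                info = json.loads(resp.read())
                print("Connected to", info["Browser"])
                return proc
        except OSError:
            time.sleep(0.2)
    proc.kill()
    raise RuntimeError("Chrome did not expose a CDP endpoint in time")
```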
#### Per-Subagent Browser Isolation

Each GCU subagent gets its own Chrome user-data directory, preventing cookie/session cross-contamination:

- Unique browser profiles injected per subagent
- Profiles cleaned up after top-level GCU node execution
- Tab origin and age metadata tracked per subagent
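A rough sketch of what this isolation amounts to, assuming the `ContextVar`-based profile injection described in the GCU docs later in this diff (function names here are illustrative, not Hive's actual API):

```python
import shutil
import tempfile
from contextvars import ContextVar

# Each subagent task runs with its own value bound to this ContextVar,
# so browser tools pick up the right profile without a profile= argument.
current_profile_dir: ContextVar[str] = ContextVar("current_profile_dir")

def create_subagent_profile(subagent_id: str) -> str:
    profile = tempfile.mkdtemp(prefix=f"gcu-{subagent_id}-")
    current_profile_dir.set(profile)
    return profile

def cleanup_subagent_profile(profile: str) -> None:
    # Called after the top-level GCU node finishes.
    shutil.rmtree(profile, ignore_errors=True)
```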
#### Dummy Agent Testing Framework

A comprehensive test suite for validating agent graph patterns without LLM calls:

- 8 test modules covering echo, pipeline, branch, parallel merge, retry, feedback loop, worker, and GCU subagent patterns
- Shared fixtures and a `run_all.py` runner for CI integration
- Subagent lifecycle tests
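The test modules themselves are not part of this diff; hypothetically, a dummy-agent test looks something like this, with plain coroutines standing in for LLM-backed nodes:

```python
import asyncio

# Deterministic stand-ins for LLM-backed nodes.
async def fake_extract(text: str) -> list[str]:
    return text.split()

async def fake_summarize(tokens: list[str]) -> str:
    return f"{len(tokens)} tokens"

def test_pipeline_pattern():
    # Pipeline: extract -> summarize, mirroring an ON_SUCCESS edge chain.
    tokens = asyncio.run(fake_extract("agents all the way down"))
    summary = asyncio.run(fake_summarize(tokens))
    assert summary == "5 tokens"
```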
---

### What's New

#### GCU Browser

- **Switch from Playwright to system Chrome via CDP** -- Direct CDP connection replaces the Playwright dependency. (@bryanadenhq)
- **Chrome finder and launcher modules** -- `chrome_finder.py` and `chrome_launcher.py` for cross-platform Chrome discovery and process management. (@bryanadenhq)
- **Dynamic viewport sizing** -- Auto-size viewport and suppress Chrome warning bar. (@bryanadenhq)
- **Per-subagent browser profile isolation** -- Unique user-data directories per subagent, with cleanup. (@bryanadenhq)
- **Tab origin/age metadata** -- Track which subagent opened each tab and when. (@bryanadenhq)
- **`browser_close_all` tool** -- Bulk tab cleanup for agents managing many pages. (@bryanadenhq)
- **Auto-track popup pages** -- Popups are automatically captured and tracked. (@bryanadenhq)
- **Auto-snapshot from browser interactions** -- Browser interaction tools return screenshots automatically. (@bryanadenhq)
- **Kill orphaned Chrome processes** -- GCU server shutdown cleans up lingering Chrome instances. (@bryanadenhq)
- **`--no-startup-window` Chrome flag** -- Prevent an empty window on launch. (@bryanadenhq)
- **Launch Chrome via `open -n` on macOS** -- Coexist with the user's running browser. (@bryanadenhq)

#### Framework & Runtime

- **Session resume fix for new agents** -- Correctly resume sessions when a new agent is loaded. (@bryanadenhq)
- **Queen upsert fix** -- Prevent duplicate queen entries on session restore. (@bryanadenhq)
- **Anchor worker monitoring to the queen's session ID on cold-restore** -- Worker monitors reconnect to the correct queen after restart. (@bryanadenhq)
- **Update meta.json when loading workers** -- Worker metadata stays in sync with runtime state. (@RichardTang-Aden)
- **Generate worker MCP file correctly** -- Fix MCP config generation for spawned workers. (@RichardTang-Aden)
- **Share event bus so tool events are visible to the parent** -- Tool execution events propagate up to parent graphs. (@bryanadenhq)
- **Subagent activity tracking in queen status** -- Queen instructions include live subagent status. (@bryanadenhq)
- **GCU system prompt updates** -- Auto-snapshots, batching, popup tracking, and close_all guidance. (@bryanadenhq)

#### Frontend

- **Loading spinner in draft panel** -- Shows a spinner during the planning phase instead of a blank panel. (@bryanadenhq)
- **Fix credential modal errors** -- The modal no longer eats errors; the banner stays visible. (@bryanadenhq)
- **Fix credentials_required loop** -- Stop clearing the flag on modal close to prevent infinite re-prompting. (@bryanadenhq)
- **Fix "Add tab" dropdown overflow** -- The dropdown is no longer hidden when many agents are open. (@prasoonmhwr)

#### Testing

- **Dummy agent test framework** -- 8 test modules (echo, pipeline, branch, parallel merge, retry, feedback loop, worker, GCU subagent) with shared fixtures and a CI runner. (@bryanadenhq)
- **Subagent lifecycle tests** -- Validate subagent spawn and completion flows. (@bryanadenhq)

#### Documentation & Infrastructure

- **MCP integration PRD** -- Product requirements for the MCP server registry. (@TimothyZhang7)
- **Skills registry PRD** -- Product requirements for the skill registry system. (@bryanadenhq)
- **Bounty program updates** -- Standard bounty issue template and updated contributor guide. (@bryanadenhq)
- **Windows quickstart** -- Add a default context limit for PowerShell setup. (@bryanadenhq)
- **Remove deprecated files** -- Clean up `setup_mcp.py`, `verify_mcp.py`, `antigravity-setup.md`, and `setup-antigravity-mcp.sh`. (@bryanadenhq)

---

### Bug Fixes

- Fix credential modal eating errors and banner staying open
- Stop clearing `credentials_required` on modal close to prevent an infinite loop
- Share event bus so tool events are visible to the parent graph
- Use lazy %-formatting in the subagent completion log to avoid an f-string in the logger
- Anchor worker monitoring to the queen's session ID on cold-restore
- Update meta.json when loading workers
- Generate worker MCP file correctly
- Fix "Add tab" dropdown partially hidden when creating multiple agents

---

### Community Contributors

- **Prasoon Mahawar** (@prasoonmhwr) -- Fix UI overflow on the agent tab dropdown
- **Richard Tang** (@RichardTang-Aden) -- Worker MCP generation and meta.json fixes

---

### Upgrading

```bash
git pull origin main
uv sync
```

The Playwright dependency is no longer required for GCU browser operations. Chrome must be installed on the host system.

---

## v0.7.0

**Release Date:** March 5, 2026
**Tag:** v0.7.0

Session management refactor release.

---

## v0.5.1

**Release Date:** February 18, 2026
**Tag:** v0.5.1

### The Hive Gets a Brain

v0.5.1 is our most ambitious release yet. Hive agents can now **build other agents** -- the new Hive Coder meta-agent writes, tests, and fixes agent packages from natural language. The runtime grows multi-graph support so one session can orchestrate multiple agents simultaneously. The TUI gets a complete overhaul with an in-app agent picker, live streaming, and seamless escalation to the Coder. And we're now provider-agnostic: Claude Code subscriptions, OpenAI-compatible endpoints, and any LiteLLM-supported model work out of the box.

---

### Highlights

#### Hive Coder -- The Agent That Builds Agents

A native meta-agent that lives inside the framework at `core/framework/agents/hive_coder/`. Give it a natural-language specification and it produces a complete agent package -- goal definition, node prompts, edge routing, MCP tool wiring, tests, and all boilerplate files.

@@ -30,7 +162,7 @@ The Coder ships with:

- **Coder Tools MCP server** -- file I/O, fuzzy-match editing, git snapshots, and sandboxed shell execution (`tools/coder_tools_server.py`)
- **Test generation** -- structural tests for forever-alive agents that don't hang on `runner.run()`

#### Multi-Graph Agent Runtime

`AgentRuntime` now supports loading, managing, and switching between multiple agent graphs within a single session. Six new lifecycle tools give agents (and the TUI) full control; a usage sketch follows below.

@@ -44,7 +176,7 @@ await runtime.add_graph("exports/deep_research_agent")
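Only the `add_graph` call above is taken verbatim from the diff context; the rest of this usage sketch is an assumption, reusing the `create_agent_runtime` factory imported elsewhere in this repo's docs with its constructor arguments elided:

```python
import asyncio

from framework.runtime.agent_runtime import create_agent_runtime

async def main() -> None:
    runtime = await create_agent_runtime()  # constructor args elided; see agent docs
    # Load a second agent graph into the same session:
    await runtime.add_graph("exports/deep_research_agent")
    # The six lifecycle tools named in this release drive the same runtime
    # from inside an agent or the TUI: load_agent, unload_agent, start_agent,
    # restart_agent, list_agents, get_user_presence.

asyncio.run(main())
```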
The Hive Coder uses multi-graph internally -- when you escalate from a worker agent, the Coder loads as a separate graph while the worker stays alive in the background.

#### TUI Revamp

The Terminal UI gets a ground-up rebuild with five major additions:

@@ -54,7 +186,7 @@ The Terminal UI gets a ground-up rebuild with five major additions:

- **PDF attachments** -- `/attach` and `/detach` commands with a native OS file dialog (macOS, Linux, Windows)
- **Multi-graph commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>` for managing agent graphs in-session

#### Provider-Agnostic LLM Support

Hive is no longer Anthropic-only. v0.5.1 adds first-class support for:

@@ -66,9 +198,9 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
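Hive's own provider wiring is not shown in this hunk. For orientation, this is what routing through the raw LiteLLM library looks like; the model string is just an example:

```python
from litellm import completion

resp = completion(
    model="gpt-4o-mini",  # any LiteLLM-supported model string works here
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```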
---

### What's New

#### Architecture & Runtime

- **Hive Coder meta-agent** -- Natural-language agent builder with reference docs, guardian watchdog, and a `hive code` CLI command. (@TimothyZhang7)
- **Multi-graph agent sessions** -- `add_graph`/`remove_graph` on AgentRuntime with 6 lifecycle tools (`load_agent`, `unload_agent`, `start_agent`, `restart_agent`, `list_agents`, `get_user_presence`). (@TimothyZhang7)

@@ -79,7 +211,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

- **Pre-start confirmation prompt** -- Interactive prompt before agent execution allowing credential updates or abort. (@RichardTang-Aden)
- **Event bus multi-graph support** -- `graph_id` on events, `filter_graph` on subscriptions, an `ESCALATION_REQUESTED` event type, and an `exclude_own_graph` filter. (@TimothyZhang7)

#### TUI Improvements

- **In-app agent picker** (Ctrl+A) -- Tabbed modal for browsing agents with metadata badges (nodes, tools, sessions, tags). (@TimothyZhang7)
- **Runtime-optional TUI startup** -- Launches without a pre-loaded agent and shows the agent picker on startup. (@TimothyZhang7)

@@ -89,7 +221,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

- **Multi-graph TUI commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>`. (@TimothyZhang7)
- **Agent Guardian watchdog** -- Event-driven monitor that catches secondary agent failures and triggers automatic remediation, with a `--no-guardian` CLI flag. (@TimothyZhang7)

#### New Tool Integrations

| Tool | Description | Contributor |
|---|---|---|

@@ -99,7 +231,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

| **Google Docs** | Document creation, reading, and editing with OAuth credential support | @haliaeetusvocifer |
| **Gmail enhancements** | Expanded mail operations for inbox management | @bryanadenhq |

#### Infrastructure

- **Default node type → `event_loop`** -- `NodeSpec.node_type` defaults to `"event_loop"` instead of `"llm_tool_use"`. (@TimothyZhang7)
- **Default `max_node_visits` → 0 (unlimited)** -- Nodes default to unlimited visits, reducing friction for feedback loops and forever-alive agents. (@TimothyZhang7)

@@ -112,7 +244,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

---

### Bug Fixes

- Flush WIP accumulator outputs on cancel/failure so edge conditions see correct values on resume
- Stall detection state preserved across resume (no more resets on checkpoint restore)

@@ -125,13 +257,13 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

- Fix email agent version conflicts (@RichardTang-Aden)
- Fix coder tool timeouts (120s for tests, 300s cap for commands)

### Documentation

- Clarify installation and prevent root pip install misuse (@paarths-collab)

---

### Agent Updates

- **Email Inbox Management** -- Consolidate `gmail_inbox_guardian` and `inbox_management` into a single unified agent with updated prompts and config. (@RichardTang-Aden, @bryanadenhq)
- **Job Hunter** -- Updated node prompts, config, and agent metadata; added PDF resume selection. (@bryanadenhq)

@@ -141,7 +273,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

---

### Breaking Changes

- **Deprecated node types raise `RuntimeError`** -- `llm_tool_use`, `llm_generate`, `function`, `router`, `human_input` now fail instead of warning. Migrate to `event_loop`.
- **`NodeSpec.node_type` defaults to `"event_loop"`** (was `"llm_tool_use"`)

@@ -150,7 +282,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal

---

### Community Contributors

A huge thank you to everyone who contributed to this release:

@@ -165,14 +297,14 @@ A huge thank you to everyone who contributed to this release:

---

### Upgrading

```bash
git pull origin main
uv sync
```

#### Migration Guide

If your agents use deprecated node types, update them:
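The concrete migration snippet is elided by the hunk jump below; a minimal sketch of the change, with the import path and any extra `NodeSpec` fields treated as assumptions:

```python
# Sketch only: the real example was cut from this diff.
from framework.graph.node import NodeSpec  # assumed module path

# Before (deprecated; now raises RuntimeError at load time):
#   NodeSpec(id="gather", name="Gather", description="...", node_type="llm_tool_use")

# After -- event_loop is also the new default, so node_type can be omitted:
gather = NodeSpec(id="gather", name="Gather", description="Gather inputs", node_type="event_loop")
```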
@@ -196,12 +328,3 @@ hive code

```bash
# Or from TUI -- press Ctrl+E to escalate
hive tui
```

---

### What's Next

- **Agent-to-agent communication** -- one agent's output triggers another agent's entry point
- **Cost visibility** -- detailed runtime log of LLM costs per node and per session
- **Persistent webhook subscriptions** -- survive agent restarts without re-registering
- **Remote agent deployment** -- run agents as long-lived services with HTTP APIs
+8 −3
@@ -121,9 +121,15 @@ uv sync
6. Make your changes
7. Run checks and tests:
   ```bash
   make check   # Lint and format checks
   make test    # Core tests
   ```
   On Windows (no make), run directly:
   ```powershell
   uv run ruff check core/ tools/
   uv run ruff format --check core/ tools/
   uv run pytest core/tests/
   ```
8. Commit your changes following our commit conventions
9. Push to your fork and submit a Pull Request

@@ -222,8 +228,7 @@ else: # linux

- **Node.js 18+** (optional, for frontend development)

> **Windows Users:**
> Native Windows is supported. Use `.\quickstart.ps1` for setup and `.\hive.ps1` to run (PowerShell 5.1+). Disable "App Execution Aliases" in Windows settings to avoid Python path conflicts. WSL is also an option but not required.

> **Tip:** Installing Claude Code skills is optional for running existing agents, but required if you plan to **build new agents**.
@@ -5,20 +5,20 @@ help: ## Show this help
```make
	awk 'BEGIN {FS = ":.*?## "}; {printf " \033[36m%-15s\033[0m %s\n", $$1, $$2}'

lint: ## Run ruff linter and formatter (with auto-fix)
	cd core && uv run ruff check --fix .
	cd tools && uv run ruff check --fix .
	cd core && uv run ruff format .
	cd tools && uv run ruff format .

format: ## Run ruff formatter
	cd core && uv run ruff format .
	cd tools && uv run ruff format .

check: ## Run all checks without modifying files (CI-safe)
	cd core && uv run ruff check .
	cd tools && uv run ruff check .
	cd core && uv run ruff format --check .
	cd tools && uv run ruff format --check .

test: ## Run all tests (core + tools, excludes live)
	cd core && uv run python -m pytest tests/ -v
```
@@ -27,7 +27,7 @@
<img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
<img src="https://img.shields.io/badge/Headless-Development-purple?style=flat-square" alt="Headless" />
<img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
<img src="https://img.shields.io/badge/Browser-Use-red?style=flat-square" alt="Browser Use" />
</p>
<p align="center">
<img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />

@@ -37,7 +37,7

## Overview

Generate a swarm of worker agents with a coding agent (the queen) that controls them. Define your goal through conversation with the hive queen, and the framework generates a node graph with dynamically created connection code. When things break, the framework captures failure data, evolves the agent through the coding agent, and redeploys. Built-in human-in-the-loop nodes, browser use, credential management, and real-time monitoring give you control without sacrificing adaptability.

Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.

@@ -45,7 +45,7 @@ Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and

## Who Is Hive For?

Hive is designed for developers and teams who want to build many **autonomous AI agents** fast, without manually wiring complex workflows.

Hive is a good fit if you:

@@ -84,7 +84,7 @@ Use Hive when you need:

- An LLM provider that powers the agents
- **ripgrep (optional, recommended on Windows):** The `search_files` tool uses ripgrep for faster file search. If not installed, a Python fallback is used. On Windows: `winget install BurntSushi.ripgrep` or `scoop install ripgrep`

> **Windows Users:** Native Windows is supported via `quickstart.ps1` and `hive.ps1`. Run these in PowerShell 5.1+. WSL is also an option but not required.

### Installation

@@ -115,11 +115,9 @@ This sets up:

> **Tip:** To reopen the dashboard later, run `hive open` from the project directory.

<img width="2500" height="1214" alt="home-screen" src="https://github.com/user-attachments/assets/134d897f-5e75-4874-b00b-e0505f6b45c4" />

### Build Your First Agent

Type the agent you want to build in the home input box. The queen will ask you questions and work out a solution with you.

<img width="2500" height="1214" alt="Image" src="https://github.com/user-attachments/assets/1ce19141-a78b-46f5-8d64-dbf987e048f4" />

@@ -131,7 +129,7 @@ Click "Try a sample agent" and check the templates. You can run a template direc

Now you can run an agent by selecting it (either an existing agent or an example agent). Click the Run button on the top left, or ask the queen agent to run it for you.

<img width="2549" height="1174" alt="Screenshot 2026-03-12 at 9 27 36 PM" src="https://github.com/user-attachments/assets/7c7d30fa-9ceb-4c23-95af-b1caa405547d" />

## Features

@@ -143,7 +141,6 @@ Now you can run an agent by selecting the agent (either an existing agent or exa

- **SDK-Wrapped Nodes** - Every node gets shared memory, local RLM memory, monitoring, tools, and LLM access out of the box
- **[Human-in-the-Loop](docs/key_concepts/graph.md#human-in-the-loop)** - Intervention nodes that pause execution for human input with configurable timeouts and escalation
- **Real-time Observability** - WebSocket streaming for live monitoring of agent execution, decisions, and node-to-node communication

## Integration

@@ -392,10 +389,6 @@ Hive generates your entire agent system from natural language goals using a codi

Yes, Hive is fully open-source under the Apache License 2.0. We actively encourage community contributions and collaboration.

**Q: Does Hive support human-in-the-loop workflows?**

Yes, Hive fully supports [human-in-the-loop](docs/key_concepts/graph.md#human-in-the-loop) workflows through intervention nodes that pause execution for human input. These include configurable timeouts and escalation policies, allowing seamless collaboration between human experts and AI agents.

@@ -420,6 +413,16 @@ Visit [docs.adenhq.com](https://docs.adenhq.com/) for complete guides, API refer

Contributions are welcome! Fork the repository, create your feature branch, implement your changes, and submit a pull request. See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.

## Star History

<a href="https://star-history.com/#aden-hive/hive&Date">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date&theme=dark" />
    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date" />
    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date" />
  </picture>
</a>

---

<p align="center">
@@ -16,6 +16,7 @@ class AgentEntry:
```python
    description: str
    category: str
    session_count: int = 0
    run_count: int = 0
    node_count: int = 0
    tool_count: int = 0
    tags: list[str] = field(default_factory=list)
```

@@ -52,6 +53,31 @@ def _count_sessions(agent_name: str) -> int:

```python
    return sum(1 for d in sessions_dir.iterdir() if d.is_dir() and d.name.startswith("session_"))


def _count_runs(agent_name: str) -> int:
    """Count unique run_ids across all sessions for an agent."""
    sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
    if not sessions_dir.exists():
        return 0
    run_ids: set[str] = set()
    for session_dir in sessions_dir.iterdir():
        if not session_dir.is_dir() or not session_dir.name.startswith("session_"):
            continue
        # runs.jsonl lives inside workspace subdirectories
        for runs_file in session_dir.rglob("runs.jsonl"):
            try:
                for line in runs_file.read_text(encoding="utf-8").splitlines():
                    line = line.strip()
                    if not line:
                        continue
                    record = json.loads(line)
                    rid = record.get("run_id")
                    if rid:
                        run_ids.add(rid)
            except Exception:
                continue
    return len(run_ids)


def _extract_agent_stats(agent_path: Path) -> tuple[int, int, list[str]]:
    """Extract node count, tool count, and tags from an agent directory.
```

@@ -139,6 +165,7 @@ def discover_agents() -> dict[str, list[AgentEntry]]:

```python
            description=desc,
            category=category,
            session_count=_count_sessions(path.name),
            run_count=_count_runs(path.name),
            node_count=node_count,
            tool_count=tool_count,
            tags=tags,
```
@@ -14,8 +14,7 @@ queen_goal = Goal(
```python
    id="queen-manager",
    name="Queen Manager",
    description=(
        "Manage the worker agent lifecycle and serve as the user's primary interactive interface."
    ),
    success_criteria=[],
    constraints=[],
```

@@ -62,6 +62,12 @@ _SHARED_TOOLS = [

```python
    "get_agent_checkpoint",
]

# Episodic memory tools — available in every queen phase.
_QUEEN_MEMORY_TOOLS = [
    "write_to_diary",
    "recall_diary",
]

# Queen phase-specific tool sets.

# Planning phase: read-only exploration + design, no write tools.
```

@@ -84,16 +90,19 @@ _QUEEN_PLANNING_TOOLS = [

```python
    "initialize_and_build_agent",
    # Load existing agent (after user confirms)
    "load_built_agent",
] + _QUEEN_MEMORY_TOOLS

# Building phase: full coding + agent construction tools.
_QUEEN_BUILDING_TOOLS = (
    _SHARED_TOOLS
    + [
        "load_built_agent",
        "list_credentials",
        "replan_agent",
        "save_agent_draft",  # Re-draft during building → auto-dissolves + updates flowchart
    ]
    + _QUEEN_MEMORY_TOOLS
)

# Staging phase: agent loaded but not yet running — inspect, configure, launch.
_QUEEN_STAGING_TOOLS = [
```

@@ -110,7 +119,11 @@ _QUEEN_STAGING_TOOLS = [

```python
    "stop_worker_and_edit",
    "stop_worker_and_plan",
    # Trigger management
    "set_trigger",
    "remove_trigger",
    "list_triggers",
] + _QUEEN_MEMORY_TOOLS

# Running phase: worker is executing — monitor and control.
_QUEEN_RUNNING_TOOLS = [
```

@@ -126,12 +139,16 @@ _QUEEN_RUNNING_TOOLS = [

```python
    "stop_worker_and_edit",
    "stop_worker_and_plan",
    "get_worker_status",
    "run_agent_with_input",
    "inject_worker_message",
    # Monitoring
    "get_worker_health_summary",
    "notify_operator",
    "set_trigger",
    "remove_trigger",
    "list_triggers",
] + _QUEEN_MEMORY_TOOLS


# ---------------------------------------------------------------------------
```

@@ -496,8 +513,8 @@ nodes/__init__.py

```python
- Goal description, success criteria values, constraint values, edge \
definitions, identity_prompt in agent.py
- CLI options in __main__.py
- For triggers (timers/webhooks), add entries to triggers.json in the \
agent's export directory

Do NOT modify or rewrite:
- Import statements at top of agent.py (they are correct)
```

@@ -660,6 +677,9 @@ The agent is loaded and ready to run. You can inspect it and launch it:

```python
- stop_worker_and_plan() — Go to PLANNING phase to discuss changes with the user \
first (DEFAULT for most modification requests)
- stop_worker_and_edit() — Go to BUILDING phase for immediate, specific fixes
- set_trigger(trigger_id, trigger_type?, trigger_config?) — Activate a trigger (timer)
- remove_trigger(trigger_id) — Deactivate a trigger
- list_triggers() — List all triggers and their active/inactive status

You do NOT have write tools. To modify the agent, prefer \
stop_worker_and_plan() unless the user gave a specific instruction.
```

@@ -682,6 +702,15 @@ with the user first (DEFAULT for most modification requests)

```python
You do NOT have write tools. To modify the agent, prefer \
stop_worker_and_plan() unless the user gave a specific instruction. \
To just stop without modifying, call stop_worker().
- stop_worker_and_edit() — Stop the worker and switch back to BUILDING phase
- set_trigger(trigger_id, trigger_type?, trigger_config?) — Activate a trigger (timer)
- remove_trigger(trigger_id) — Deactivate a trigger
- list_triggers() — List all triggers and their active/inactive status

You do NOT have write tools or agent construction tools. \
If you need to modify the agent, call stop_worker_and_edit() to switch back \
to BUILDING phase. To stop the worker and ask the user what to do next, call \
stop_worker() to return to STAGING phase.
"""

# -- Behavior shared across all phases --
```

@@ -837,6 +866,11 @@ You keep a diary. Use write_to_diary() when something worth remembering \

```python
happens: a pipeline went live, the user shared something important, a goal \
was reached or abandoned. Write in first person, as you actually experienced \
it. One or two paragraphs is enough.

Use recall_diary() to look up past diary entries when the user asks about \
previous sessions ("what happened yesterday?", "what did we work on last \
week?") or when you need past context to make a decision. You can filter by \
keyword and control how far back to search.
"""

_queen_behavior_always = _queen_behavior_always + _queen_memory_instructions
```

@@ -968,6 +1002,33 @@ Use stop_worker_and_edit() only when:

```python
- The user gave a specific, concrete instruction ("add save_data to the gather node")
- You already discussed the fix in a previous planning session
- The change is trivial and unambiguous (rename, toggle a flag)

## Trigger Management

Use list_triggers() to see available triggers from the loaded worker.
Use set_trigger(trigger_id) to activate a timer. Once active, triggers \
fire periodically and inject [TRIGGER: ...] messages so you can decide \
whether to call run_agent_with_input(task).

### When the user says "Enable trigger <id>" (or clicks Enable in the UI):

1. Call get_worker_status(focus="memory") to check if the worker has \
saved configuration (rules, preferences, settings from a prior run).
2. If memory contains saved config: compose a task string from it \
(e.g. "Process inbox emails using saved rules") and call \
set_trigger(trigger_id, task="...") immediately. Tell the user the \
trigger is now active and what schedule it uses. Do NOT ask them to \
provide the task — you derive it from memory.
3. If memory is empty (no prior run): tell the user the agent needs to \
run once first so its configuration can be saved. Offer to run it now. \
Once the worker finishes, enable the trigger.
4. If the user just provided config this session (rules/task context \
already in conversation): use that directly, no memory lookup needed. \
Enable the trigger immediately.

Never ask "what should the task be?" when enabling a trigger for an \
agent with a clear purpose. The task string is a brief description of \
what the worker does, derived from its saved state or your current context.
"""

# -- RUNNING phase behavior --
```

@@ -982,12 +1043,24 @@ NOT ask the user directly.

```python
You wake up when:
- The user explicitly addresses you
- A worker escalation arrives (`[WORKER_ESCALATION_REQUEST]`)
- An escalation ticket arrives from the judge
- The worker finishes (`[WORKER_TERMINAL]`)

If the user asks for progress, call get_worker_status() ONCE and report. \
If the summary mentions issues, follow up with get_worker_status(focus="issues").

## Subagent delegations (browser automation, GCU)

When the worker delegates to a subagent (e.g., GCU browser automation), expect it \
to take 2-5 minutes. During this time:
- Progress will show 0% — this is NORMAL. The subagent only calls set_output at the end.
- Check get_worker_status(focus="full") for "subagent_activity" — this shows the \
subagent's latest reasoning text and confirms it is making real progress.
- Do NOT conclude the subagent is stuck just because progress is 0% or because \
you see repeated browser_click/browser_snapshot calls — that is the expected \
pattern for web scraping.
- Only intervene if: the subagent has been running for 5+ minutes with no new \
subagent_activity updates, OR the judge escalates.

## Handling worker termination ([WORKER_TERMINAL])

When you receive a `[WORKER_TERMINAL]` event, the worker has finished:
```

@@ -1016,19 +1089,30 @@ IMPORTANT: Only auto-handle if the user has NOT explicitly told you how to handl

```python
escalations. If the user gave you instructions (e.g., "just retry on errors", \
"skip any auth issues"), follow those instructions instead.

CRITICAL — escalation relay protocol:
When an escalation requires user input (auth blocks, human review), the worker \
or its subagent is BLOCKED and waiting for your response. You MUST follow this \
exact two-step sequence:
Step 1: call ask_user() to get the user's answer.
Step 2: call inject_worker_message() with the user's answer IMMEDIATELY after.
If you skip Step 2, the worker/subagent stays blocked FOREVER and the task hangs. \
NEVER respond to the user without also calling inject_worker_message() to unblock \
the worker. Even if the user says "skip" or "cancel", you must still relay that \
decision via inject_worker_message() so the worker can clean up.

**Auth blocks / credential issues:**
- ALWAYS ask the user (unless user explicitly told you how to handle this).
- The worker cannot proceed without valid credentials.
- Explain which credential is missing or invalid.
- Step 1: ask_user for guidance — "Provide credentials", "Skip this task", "Stop and edit agent"
- Step 2: inject_worker_message() with the user's response to unblock the worker.

**Need human review / approval:**
- ALWAYS ask the user (unless user explicitly told you how to handle this).
- The worker is explicitly requesting human judgment.
- Present the context clearly (what decision is needed, what are the options).
- Step 1: ask_user with the actual decision options.
- Step 2: inject_worker_message() with the user's decision to unblock the worker.

**Errors / unexpected failures:**
- Explain what went wrong in plain terms.
```

@@ -1036,6 +1120,7 @@ escalations. If the user gave you instructions (e.g., "just retry on errors", \

```python
- Or offer: "Diagnose the issue" → use stop_worker_and_plan() to investigate first.
- Or offer: "Retry as-is", "Skip this task", "Abort run"
- (Skip asking if user explicitly told you to auto-retry or auto-skip errors.)
- If the escalation had wait_for_response: inject_worker_message() with the decision.

**Informational / progress updates:**
- Acknowledge briefly and let the worker continue.
```

@@ -1060,6 +1145,21 @@ When the user asks to fix, change, modify, or update the loaded worker \

```python
**Default: use stop_worker_and_plan().** Most modification requests need \
discussion first. Only use stop_worker_and_edit() when the user gave a \
specific, unambiguous instruction or you already agreed on the fix.

## Trigger Handling

You will receive [TRIGGER: ...] messages when a scheduled timer fires. \
These are framework-level signals, not user messages.

Rules:
- Check get_worker_status() before calling run_agent_with_input(task). If the worker \
is already RUNNING, decide: skip this trigger, or note it for after completion.
- When multiple [TRIGGER] messages arrive at once, read them all before acting. \
Batch your response — do not call run_agent_with_input() once per trigger.
- If a trigger fires but the task no longer makes sense (e.g., user changed \
config since last run), skip it and inform the user.
- Never disable a trigger without telling the user. Use remove_trigger() only \
when explicitly asked or when the trigger is clearly obsolete.
"""

# -- Backward-compatible composed versions (used by queen_node.system_prompt default) --
```

@@ -1123,8 +1223,8 @@ ticket_triage_node = NodeSpec(

```python
    id="ticket_triage",
    name="Ticket Triage",
    description=(
        "Queen's triage node. Receives an EscalationTicket via event-driven "
        "entry point and decides: dismiss or notify the operator."
    ),
    node_type="event_loop",
    client_facing=True,  # Operator can chat with queen once connected (Ctrl+Q)
```

@@ -1138,8 +1238,8 @@ ticket_triage_node = NodeSpec(

```python
    ),
    tools=["notify_operator"],
    system_prompt="""\
You are the Queen. A worker health issue has been escalated to you. \
The ticket is in your memory under key "ticket". Read it carefully.

## Dismiss criteria — do NOT call notify_operator:
- severity is "low" AND steps_since_last_accept < 8
```

@@ -1178,7 +1278,7 @@ queen_node = NodeSpec(

```python
    description=(
        "User's primary interactive interface with full coding capability. "
        "Can build agents directly or delegate to the worker. Manages the "
        "worker agent lifecycle."
    ),
    node_type="event_loop",
    client_facing=True,
```
@@ -50,6 +50,23 @@ def read_episodic_memory(d: date | None = None) -> str:
```python
    return path.read_text(encoding="utf-8").strip() if path.exists() else ""


def _find_recent_episodic(lookback: int = 7) -> tuple[date, str] | None:
    """Find the most recent non-empty episodic memory within *lookback* days."""
    from datetime import timedelta

    today = date.today()
    for offset in range(lookback):
        d = today - timedelta(days=offset)
        content = read_episodic_memory(d)
        if content:
            return d, content
    return None


# Budget (in characters) for episodic memory in the system prompt.
_EPISODIC_CHAR_BUDGET = 6_000


def format_for_injection() -> str:
    """Format cross-session memory for system prompt injection.
```

@@ -57,7 +74,7 @@ def format_for_injection() -> str:

```python
    session with only the seed template).
    """
    semantic = read_semantic_memory()
    recent = _find_recent_episodic()

    # Suppress injection if semantic is still just the seed template
    if semantic and semantic.startswith("# My Understanding of the User\n\n*No sessions"):
```

@@ -66,9 +83,18 @@

```python
    parts: list[str] = []
    if semantic:
        parts.append(semantic)

    if recent:
        d, content = recent
        # Trim oversized episodic entries to keep the prompt manageable
        if len(content) > _EPISODIC_CHAR_BUDGET:
            content = content[:_EPISODIC_CHAR_BUDGET] + "\n\n…(truncated)"
        today = date.today()
        if d == today:
            label = f"## Today — {d.strftime('%B %-d, %Y')}"
        else:
            label = f"## {d.strftime('%B %-d, %Y')}"
        parts.append(f"{label}\n\n{content}")

    if not parts:
        return ""
```

@@ -100,7 +126,8 @@

```python
    """
    ep_path = episodic_memory_path()
    ep_path.parent.mkdir(parents=True, exist_ok=True)
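    # NOTE: the date string is built manually here because strftime("%-d")
    # (day without zero padding) is a glibc extension that fails on Windows.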
    today = date.today()
    today_str = f"{today.strftime('%B')} {today.day}, {today.year}"
    timestamp = datetime.now().strftime("%H:%M")
    if not ep_path.exists():
        header = f"# {today_str}\n\n"
```

@@ -299,7 +326,8 @@ async def consolidate_queen_memory(

```python
    existing_semantic = read_semantic_memory()
    today_journal = read_episodic_memory()
    today = date.today()
    today_str = f"{today.strftime('%B')} {today.day}, {today.year}"
    adapt_path = session_dir / "data" / "adapt.md"

    user_msg = (
```
@@ -27,7 +27,9 @@
## GCU Errors
15. **Manually wiring browser tools on event_loop nodes** — Use `node_type="gcu"`, which auto-includes browser tools. Do NOT manually list browser tool names.
16. **Using GCU nodes as regular graph nodes** — GCU nodes are subagents only. They must ONLY appear in `sub_agents=["gcu-node-id"]` and be invoked via `delegate_to_sub_agent()`. Never connect them via edges or use them as entry/terminal nodes.
17. **Reusing the same GCU node ID for parallel tasks** — Each concurrent browser task needs a distinct GCU node ID (e.g. `gcu-site-a`, `gcu-site-b`). Two `delegate_to_sub_agent` calls with the same `agent_id` share a browser profile and will interfere with each other's pages.
18. **Passing `profile=` in GCU tool calls** — Profile isolation for parallel subagents is automatic. The framework injects a unique profile per subagent via an asyncio `ContextVar`. Hardcoding `profile="default"` in a GCU system prompt breaks this isolation.

## Worker Agent Errors
19. **Adding a client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
20. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.
@@ -332,81 +332,46 @@ class MyAgent:
```python
default_agent = MyAgent()
```

## triggers.json — Timer and Webhook Triggers

When an agent needs timers, webhooks, or event-driven triggers, create a
`triggers.json` file in the agent's directory (alongside `agent.py`).
The queen loads these at session start, and the user can manage them via
the `set_trigger` / `remove_trigger` tools at runtime.

```json
[
  {
    "id": "daily-check",
    "name": "Daily Check",
    "trigger_type": "timer",
    "trigger_config": {"cron": "0 9 * * *"},
    "task": "Run the daily check process"
  },
  {
    "id": "scheduled-check",
    "name": "Scheduled Check",
    "trigger_type": "timer",
    "trigger_config": {"interval_minutes": 20},
    "task": "Run the scheduled check"
  },
  {
    "id": "webhook-event",
    "name": "Webhook Event Handler",
    "trigger_type": "webhook",
    "trigger_config": {"event_types": ["webhook_received"]},
    "task": "Process incoming webhook event"
  }
]
```

**Key rules for triggers.json:**
- Valid trigger_types: `timer`, `webhook`
- Timer trigger_config (cron): `{"cron": "0 9 * * *"}` — standard 5-field cron expression
- Timer trigger_config (interval): `{"interval_minutes": float}`
- Each trigger must have a unique `id`
- The `task` field describes what the worker should do when the trigger fires
- Triggers are persisted back to `triggers.json` when modified via queen tools
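A quick way to sanity-check a hand-written `triggers.json` against the rules above (illustrative only; this is not the framework's actual loader):

```python
import json
from pathlib import Path

def validate_triggers(agent_dir: str) -> list[dict]:
    triggers = json.loads((Path(agent_dir) / "triggers.json").read_text())
    seen_ids: set[str] = set()
    for t in triggers:
        assert t["trigger_type"] in {"timer", "webhook"}, t
        assert t["id"] not in seen_ids, f"duplicate trigger id: {t['id']}"
        seen_ids.add(t["id"])
        if t["trigger_type"] == "timer":
            cfg = t["trigger_config"]
            assert "cron" in cfg or "interval_minutes" in cfg, cfg
        assert t.get("task"), "each trigger needs a task description"
    return triggers
```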
## __init__.py

@@ -453,21 +418,6 @@ __all__ = [

```python
]
```

## __main__.py
@@ -31,8 +31,7 @@ module-level variables via `getattr()`:
|
||||
| `conversation_mode` | no | not passed | Isolated mode (no context carryover) |
|
||||
| `identity_prompt` | no | not passed | No agent-level identity |
|
||||
| `loop_config` | no | `{}` | No iteration limits |
|
||||
| `async_entry_points` | no | `[]` | No async triggers (timers, webhooks, events) |
|
||||
| `runtime_config` | no | `None` | No webhook server |
|
||||
| `triggers.json` (file) | no | not present | No triggers (timers, webhooks) |
|
||||
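
The lookup pattern the table describes can be pictured as plain `getattr()` calls with the table's defaults. This is a sketch of the pattern only — the `exports.my_agent` package path is a hypothetical placeholder:

```python
# Illustrative sketch of the getattr() fallback pattern (not the actual
# __main__.py). Defaults mirror the table above.
import importlib

agent = importlib.import_module("exports.my_agent")  # hypothetical agent package

loop_config = getattr(agent, "loop_config", {})                # default: {}
async_entry_points = getattr(agent, "async_entry_points", [])  # default: []
runtime_config = getattr(agent, "runtime_config", None)        # default: None
```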

**CRITICAL:** `__init__.py` MUST import and re-export ALL of these from
`agent.py`. Missing exports silently fall back to defaults, causing

@@ -257,44 +256,28 @@ Multiple ON_SUCCESS edges from same source → parallel execution via asyncio.ga

Judge is the SOLE acceptance mechanism — no ad-hoc framework gating.

## Async Entry Points (Webhooks, Timers, Events)
## Triggers (Timers, Webhooks)

For agents that react to external events, use `AsyncEntryPointSpec`:
For agents that react to external events, create a `triggers.json` file
in the agent's export directory:

```python
from framework.graph.edge import AsyncEntryPointSpec
from framework.runtime.agent_runtime import AgentRuntimeConfig

# Timer trigger (cron or interval)
async_entry_points = [
    AsyncEntryPointSpec(
        id="daily-check",
        name="Daily Check",
        entry_node="process",
        trigger_type="timer",
        trigger_config={"cron": "0 9 * * *"},  # daily at 9am
        isolation_level="shared",
    )
```json
[
  {
    "id": "daily-check",
    "name": "Daily Check",
    "trigger_type": "timer",
    "trigger_config": {"cron": "0 9 * * *"},
    "task": "Run the daily check process"
  }
]

# Webhook server (optional)
runtime_config = AgentRuntimeConfig(
    webhook_host="127.0.0.1",
    webhook_port=8080,
    webhook_routes=[{"source_id": "gmail", "path": "/webhooks/gmail", "methods": ["POST"]}],
)
```

### Key Fields
- `trigger_type`: `"timer"`, `"event"`, `"webhook"`, `"manual"`
- `trigger_type`: `"timer"` or `"webhook"`
- `trigger_config`: `{"cron": "0 9 * * *"}` or `{"interval_minutes": 20}`
- `isolation_level`: `"shared"` (recommended), `"isolated"`, `"synchronized"`
- `event_types`: For event triggers, e.g., `["webhook_received"]`

### Exports Required
Both `async_entry_points` and `runtime_config` must be exported from `__init__.py`.

See `exports/gmail_inbox_guardian/agent.py` for a complete example.
- `task`: describes what the worker should do when the trigger fires
- Triggers can also be created/removed at runtime via `set_trigger` / `remove_trigger` queen tools (illustrated below)
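
The exact queen-tool signatures are not shown in this document, so treat the shape below as an assumption: a `set_trigger` call plausibly carries the same fields as a `triggers.json` entry, which is then persisted back to the file.

```json
{
  "tool": "set_trigger",
  "input": {
    "id": "hourly-sync",
    "name": "Hourly Sync",
    "trigger_type": "timer",
    "trigger_config": {"interval_minutes": 60},
    "task": "Sync the latest records"
  }
}
```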

## Tool Discovery

@@ -109,6 +109,45 @@ Key rules to bake into GCU node prompts:
- Keep tool calls per turn ≤10
- Tab isolation: when browser is already running, use `browser_open(background=true)` and pass `target_id` to every call

## Multiple Concurrent GCU Subagents

When a task can be parallelized across multiple sites or profiles, declare a distinct GCU
node for each and invoke them all in the same LLM turn. The framework batches all
`delegate_to_sub_agent` calls made in one turn and runs them with `asyncio.gather`, so
they execute concurrently — not sequentially.

**Each GCU subagent automatically gets its own isolated browser context** — no `profile=`
argument is needed in tool calls. The framework derives a unique profile from the subagent's
node ID and instance counter and injects it via an asyncio `ContextVar` before the subagent
runs. A minimal sketch of this mechanism follows.
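
The `ContextVar` mechanism described above can be sketched as a standard set/reset pair. The real helper lives in `gcu.browser.session`; this illustration only mirrors its shape:

```python
# Illustrative sketch — not the gcu.browser.session implementation.
import contextvars

_active_profile: contextvars.ContextVar[str] = contextvars.ContextVar(
    "_active_profile", default="default"
)


def set_active_profile(profile: str) -> contextvars.Token:
    """Bind a browser profile to the current async context; returns a reset token."""
    return _active_profile.set(profile)


# Before a subagent coroutine runs (profile = node ID + instance counter):
token = set_active_profile("gcu-site-a-1")
try:
    pass  # subagent executes; browser tools read _active_profile.get()
finally:
    _active_profile.reset(token)  # restore the caller's profile
```

Because `asyncio.gather` copies the current context for each coroutine, concurrent subagents each see their own profile binding.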

### Example: three sites in parallel

```python
# Three distinct GCU nodes
gcu_site_a = NodeSpec(id="gcu-site-a", node_type="gcu", ...)
gcu_site_b = NodeSpec(id="gcu-site-b", node_type="gcu", ...)
gcu_site_c = NodeSpec(id="gcu-site-c", node_type="gcu", ...)

orchestrator = NodeSpec(
    id="orchestrator",
    node_type="event_loop",
    sub_agents=["gcu-site-a", "gcu-site-b", "gcu-site-c"],
    system_prompt="""\
Call all three subagents in a single response to run them in parallel:
    delegate_to_sub_agent(agent_id="gcu-site-a", task="Scrape prices from site A")
    delegate_to_sub_agent(agent_id="gcu-site-b", task="Scrape prices from site B")
    delegate_to_sub_agent(agent_id="gcu-site-c", task="Scrape prices from site C")
""",
)
```

**Rules:**
- Use distinct node IDs for each concurrent task — sharing an ID shares the browser context.
- The GCU node prompts do not need to mention `profile=`; isolation is automatic.
- Cleanup is automatic at session end, but GCU nodes can call `browser_stop()` explicitly
  if they want to release resources mid-run.

## GCU Anti-Patterns

- Using `browser_screenshot` to read text (use `browser_snapshot`)
@@ -1,8 +1,8 @@
"""Queen's ticket receiver entry point.

When the Worker Health Judge emits a WORKER_ESCALATION_TICKET event on the
shared EventBus, this entry point fires and routes to the ``ticket_triage``
node, where the Queen deliberates and decides whether to notify the operator.
When a WORKER_ESCALATION_TICKET event is emitted on the shared EventBus,
this entry point fires and routes to the ``ticket_triage`` node, where the
Queen deliberates and decides whether to notify the operator.

Isolation level is ``isolated`` — the queen's triage memory is kept separate
from the worker's shared memory. Each ticket triage runs in its own context.

@@ -121,6 +121,14 @@ def get_gcu_enabled() -> bool:
    return get_hive_config().get("gcu_enabled", True)


def get_gcu_viewport_scale() -> float:
    """Return GCU viewport scale factor (0.1-1.0), default 0.8."""
    scale = get_hive_config().get("gcu_viewport_scale", 0.8)
    if isinstance(scale, (int, float)) and 0.1 <= scale <= 1.0:
        return float(scale)
    return 0.8


def get_api_base() -> str | None:
    """Return the api_base URL for OpenAI-compatible endpoints, if configured."""
    llm = get_hive_config().get("llm", {})

@@ -142,13 +142,17 @@ def save_aden_api_key(key: str) -> None:
    os.environ[ADEN_ENV_VAR] = key


def delete_aden_api_key() -> None:
    """Remove ADEN_API_KEY from the encrypted store and ``os.environ``."""
def delete_aden_api_key() -> bool:
    """Remove ADEN_API_KEY from the encrypted store and ``os.environ``.

    Returns True if the key existed and was deleted, False otherwise.
    """
    deleted = False
    try:
        from .storage import EncryptedFileStorage

        storage = EncryptedFileStorage()
        storage.delete(ADEN_CREDENTIAL_ID)
        deleted = storage.delete(ADEN_CREDENTIAL_ID)
    except (FileNotFoundError, PermissionError) as e:
        logger.debug("Could not delete %s from encrypted store: %s", ADEN_CREDENTIAL_ID, e)
    except Exception:

@@ -157,8 +161,8 @@ def delete_aden_api_key() -> None:
            ADEN_CREDENTIAL_ID,
            exc_info=True,
        )

    os.environ.pop(ADEN_ENV_VAR, None)
    return deleted


# ---------------------------------------------------------------------------
@@ -322,7 +322,11 @@ class AsyncEntryPointSpec(BaseModel):

    id: str = Field(description="Unique identifier for this entry point")
    name: str = Field(description="Human-readable name")
    entry_node: str = Field(description="Node ID to start execution from")
    entry_node: str = Field(
        default="",
        description="Deprecated: Node ID to start execution from. "
        "Triggers are graph-level; worker always enters at GraphSpec.entry_node.",
    )
    trigger_type: str = Field(
        default="manual",
        description="How this entry point is triggered: webhook, api, timer, event, manual",

@@ -331,6 +335,10 @@ class AsyncEntryPointSpec(BaseModel):
        default_factory=dict,
        description="Trigger-specific configuration (e.g., webhook URL, timer interval)",
    )
    task: str = Field(
        default="",
        description="Worker task string when this trigger fires autonomously",
    )
    isolation_level: str = Field(
        default="shared", description="State isolation: isolated, shared, or synchronized"
    )

@@ -368,28 +376,8 @@ class GraphSpec(BaseModel):
            edges=[...],
        )

    For multi-entry-point agents (concurrent streams):
        GraphSpec(
            id="support-agent-graph",
            goal_id="support-001",
            entry_node="process-webhook",  # Default entry
            async_entry_points=[
                AsyncEntryPointSpec(
                    id="webhook",
                    name="Zendesk Webhook",
                    entry_node="process-webhook",
                    trigger_type="webhook",
                ),
                AsyncEntryPointSpec(
                    id="api",
                    name="API Handler",
                    entry_node="process-request",
                    trigger_type="api",
                ),
            ],
            nodes=[...],
            edges=[...],
        )
    Triggers (timer, webhook, event) are now defined in ``triggers.json``
    alongside the agent directory, not embedded in the graph spec.
    """

    id: str

@@ -402,12 +390,6 @@ class GraphSpec(BaseModel):
        default_factory=dict,
        description="Named entry points for resuming execution. Format: {name: node_id}",
    )
    async_entry_points: list[AsyncEntryPointSpec] = Field(
        default_factory=list,
        description=(
            "Asynchronous entry points for concurrent execution streams (used with AgentRuntime)"
        ),
    )
    terminal_nodes: list[str] = Field(
        default_factory=list, description="IDs of nodes that end execution"
    )

@@ -486,17 +468,6 @@ class GraphSpec(BaseModel):
                return node
        return None

    def has_async_entry_points(self) -> bool:
        """Check if this graph uses async entry points (multi-stream execution)."""
        return len(self.async_entry_points) > 0

    def get_async_entry_point(self, entry_point_id: str) -> AsyncEntryPointSpec | None:
        """Get an async entry point by ID."""
        for ep in self.async_entry_points:
            if ep.id == entry_point_id:
                return ep
        return None

    def get_outgoing_edges(self, node_id: str) -> list[EdgeSpec]:
        """Get all edges leaving a node, sorted by priority."""
        edges = [e for e in self.edges if e.source == node_id]

@@ -587,37 +558,6 @@ class GraphSpec(BaseModel):
        if not self.get_node(self.entry_node):
            errors.append(f"Entry node '{self.entry_node}' not found")

        # Check async entry points
        seen_entry_ids = set()
        for entry_point in self.async_entry_points:
            # Check for duplicate IDs
            if entry_point.id in seen_entry_ids:
                errors.append(f"Duplicate async entry point ID: '{entry_point.id}'")
            seen_entry_ids.add(entry_point.id)

            # Check entry node exists
            if not self.get_node(entry_point.entry_node):
                errors.append(
                    f"Async entry point '{entry_point.id}' references "
                    f"missing node '{entry_point.entry_node}'"
                )

            # Validate isolation level
            valid_isolation = {"isolated", "shared", "synchronized"}
            if entry_point.isolation_level not in valid_isolation:
                errors.append(
                    f"Async entry point '{entry_point.id}' has invalid isolation_level "
                    f"'{entry_point.isolation_level}'. Valid: {valid_isolation}"
                )

            # Validate trigger type
            valid_triggers = {"webhook", "api", "timer", "event", "manual"}
            if entry_point.trigger_type not in valid_triggers:
                errors.append(
                    f"Async entry point '{entry_point.id}' has invalid trigger_type "
                    f"'{entry_point.trigger_type}'. Valid: {valid_triggers}"
                )

        # Check terminal nodes exist
        for term in self.terminal_nodes:
            if not self.get_node(term):

@@ -646,10 +586,6 @@ class GraphSpec(BaseModel):
        for entry_point_node in self.entry_points.values():
            to_visit.append(entry_point_node)

        # Add all async entry points as valid starting points
        for async_entry in self.async_entry_points:
            to_visit.append(async_entry.entry_node)

        # Traverse from all entry points
        while to_visit:
            current = to_visit.pop()

@@ -666,18 +602,10 @@ class GraphSpec(BaseModel):
                for sub_agent_id in sub_agents:
                    reachable.add(sub_agent_id)

        # Build set of async entry point nodes for quick lookup
        async_entry_nodes = {ep.entry_node for ep in self.async_entry_points}

        for node in self.nodes:
            if node.id not in reachable:
                # Skip if node is a pause node, entry point target, or async entry
                # (pause/resume architecture and async entry points make reachable)
                if (
                    node.id in self.pause_nodes
                    or node.id in self.entry_points.values()
                    or node.id in async_entry_nodes
                ):
                # Skip if node is a pause node or entry point target
                if node.id in self.pause_nodes or node.id in self.entry_points.values():
                    continue
                errors.append(f"Node '{node.id}' is unreachable from entry")
@@ -36,6 +36,21 @@ from framework.runtime.llm_debug_logger import log_llm_turn

logger = logging.getLogger(__name__)


@dataclass
class TriggerEvent:
    """A framework-level trigger signal (timer tick or webhook hit).

    Triggers are queued separately from user messages / external events
    and drained atomically so the LLM sees all pending triggers at once.
    """

    trigger_type: str  # "timer" | "webhook"
    source_id: str  # entry point ID or webhook route ID
    payload: dict[str, Any] = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)


# Pattern for detecting context-window-exceeded errors across LLM providers.
_CONTEXT_TOO_LARGE_RE = re.compile(
    r"context.{0,20}(length|window|limit|size)|"

@@ -187,6 +202,14 @@ class LoopConfig:
    max_tool_result_chars: int = 30_000
    spillover_dir: str | None = None  # Path string; created on first use

    # --- set_output value spilling ---
    # When a set_output value exceeds this character count it is auto-saved
    # to a file in *spillover_dir* and the stored value is replaced with a
    # lightweight file reference. This keeps shared memory / adapt.md /
    # transition markers small and forces the next node to load the full
    # data from the file. Set to 0 to disable.
    max_output_value_chars: int = 2_000

    # --- Stream retry (transient error recovery within EventLoopNode) ---
    # When _run_single_turn() raises a transient error (network, rate limit,
    # server error), retry up to this many times with exponential backoff

@@ -210,6 +233,18 @@ class LoopConfig:
    cf_grace_turns: int = 1
    tool_doom_loop_enabled: bool = True

    # --- Per-tool-call timeout ---
    # Maximum seconds a single tool call may take before being killed.
    # Prevents hung MCP servers (especially browser/GCU tools) from
    # blocking the entire event loop indefinitely. 0 = no timeout.
    tool_call_timeout_seconds: float = 60.0

    # --- Subagent delegation timeout ---
    # Maximum seconds a delegate_to_sub_agent call may run before being
    # killed. Subagents run a full event-loop so they naturally take
    # longer than a single tool call — default is 5 minutes. 0 = no timeout.
    subagent_timeout_seconds: float = 300.0
    # --- Lifecycle hooks ---
    # Hooks are async callables keyed by event name. Supported events:
    #   "session_start" — fires once after the first user message is added,

@@ -346,6 +381,7 @@ class EventLoopNode(NodeProtocol):
        self._tool_executor = tool_executor
        self._conversation_store = conversation_store
        self._injection_queue: asyncio.Queue[tuple[str, bool]] = asyncio.Queue()
        self._trigger_queue: asyncio.Queue[TriggerEvent] = asyncio.Queue()
        # Client-facing input blocking state
        self._input_ready = asyncio.Event()
        self._awaiting_input = False

@@ -457,6 +493,8 @@ class EventLoopNode(NodeProtocol):
            focus_prompt=ctx.node_spec.system_prompt,
            narrative=ctx.narrative or None,
            accounts_prompt=ctx.accounts_prompt or None,
            skills_catalog_prompt=ctx.skills_catalog_prompt or None,
            protocols_prompt=ctx.protocols_prompt or None,
        )
        if conversation.system_prompt != _current_prompt:
            conversation.update_system_prompt(_current_prompt)

@@ -478,6 +516,22 @@ class EventLoopNode(NodeProtocol):
        if ctx.accounts_prompt:
            system_prompt = f"{system_prompt}\n\n{ctx.accounts_prompt}"

        # Append skill catalog and operational protocols
        if ctx.skills_catalog_prompt:
            system_prompt = f"{system_prompt}\n\n{ctx.skills_catalog_prompt}"
            logger.info(
                "[%s] Injected skills catalog (%d chars)",
                node_id,
                len(ctx.skills_catalog_prompt),
            )
        if ctx.protocols_prompt:
            system_prompt = f"{system_prompt}\n\n{ctx.protocols_prompt}"
            logger.info(
                "[%s] Injected operational protocols (%d chars)",
                node_id,
                len(ctx.protocols_prompt),
            )

        # Inject agent working memory (adapt.md).
        # If it doesn't exist yet, seed it with available context.
        if self._config.spillover_dir:

@@ -559,10 +613,24 @@ class EventLoopNode(NodeProtocol):
        # - Node has sub_agents defined
        # - We are NOT in subagent mode (prevents nested delegation)
        if not ctx.is_subagent_mode:
            sub_agents = getattr(ctx.node_spec, "sub_agents", [])
            delegate_tool = self._build_delegate_tool(sub_agents, ctx.node_registry)
            if delegate_tool:
                tools.append(delegate_tool)
            sub_agents = getattr(ctx.node_spec, "sub_agents", None) or []
            if sub_agents:
                delegate_tool = self._build_delegate_tool(sub_agents, ctx.node_registry)
                if delegate_tool:
                    tools.append(delegate_tool)
                    logger.info(
                        "[%s] delegate_to_sub_agent injected (sub_agents=%s)",
                        node_id,
                        sub_agents,
                    )
                else:
                    logger.error(
                        "[%s] _build_delegate_tool returned None for sub_agents=%s",
                        node_id,
                        sub_agents,
                    )
        else:
            logger.debug("[%s] Skipped delegate tool (is_subagent_mode=True)", node_id)

        # Add report_to_parent tool for sub-agents with a report callback
        if ctx.is_subagent_mode and ctx.report_callback is not None:

@@ -631,6 +699,8 @@ class EventLoopNode(NodeProtocol):

            # 6b. Drain injection queue
            await self._drain_injection_queue(conversation)
            # 6b1. Drain trigger queue (framework-level signals)
            await self._drain_trigger_queue(conversation)

            # 6b2. Dynamic tool refresh (mode switching)
            if ctx.dynamic_tools_provider is not None:

@@ -656,8 +726,20 @@ class EventLoopNode(NodeProtocol):
                    conversation.update_system_prompt(_new_prompt)
                    logger.info("[%s] Dynamic prompt updated (phase switch)", node_id)

            # 6c. Publish iteration event
            await self._publish_iteration(stream_id, node_id, iteration, execution_id)
            # 6c. Publish iteration event (with per-iteration metadata when available)
            _iter_meta = None
            if ctx.iteration_metadata_provider is not None:
                try:
                    _iter_meta = ctx.iteration_metadata_provider()
                except Exception:
                    pass
            await self._publish_iteration(
                stream_id,
                node_id,
                iteration,
                execution_id,
                extra_data=_iter_meta,
            )

            # 6d. Pre-turn compaction check (tiered)
            _compacted_this_iter = False

@@ -1738,6 +1820,15 @@ class EventLoopNode(NodeProtocol):
        await self._injection_queue.put((content, is_client_input))
        self._input_ready.set()

    async def inject_trigger(self, trigger: TriggerEvent) -> None:
        """Inject a framework-level trigger into the running queen loop.

        Triggers are queued separately from user messages and drained
        atomically via _drain_trigger_queue().
        """
        await self._trigger_queue.put(trigger)
        self._input_ready.set()

    def signal_shutdown(self) -> None:
        """Signal the node to exit its loop cleanly.

@@ -1788,9 +1879,9 @@ class EventLoopNode(NodeProtocol):

        Returns True if input arrived, False if shutdown was signaled.
        """
        # If messages arrived while the LLM was processing, skip blocking
        # entirely — the next _drain_injection_queue() will pick them up.
        if not self._injection_queue.empty():
        # If messages or triggers arrived while the LLM was processing, skip
        # blocking — the next drain pass will pick them up.
        if not self._injection_queue.empty() or not self._trigger_queue.empty():
            return True

        # Clear BEFORE emitting so that synchronous handlers (e.g. the

@@ -1881,6 +1972,11 @@ class EventLoopNode(NodeProtocol):
        # Accumulate ALL tool calls across inner iterations for L3 logging.
        # Unlike real_tool_results (reset each inner iteration), this persists.
        logged_tool_calls: list[dict] = []
        # Counter for LLM calls within a single iteration. Each pass through
        # the inner tool loop starts a fresh LLM stream whose snapshot resets
        # to "". Without this, all calls share the same message ID on the
        # frontend and the second call's text silently replaces the first.
        inner_turn = 0

        # Inner tool loop: stream may produce tool calls requiring re-invocation
        while True:

@@ -1921,6 +2017,7 @@ class EventLoopNode(NodeProtocol):
            async def _do_stream(
                _msgs: list = messages,  # noqa: B006
                _tc: list[ToolCallEvent] = tool_calls,  # noqa: B006
                inner_turn: int = inner_turn,
            ) -> None:
                nonlocal accumulated_text, _stream_error
                async for event in ctx.llm.stream(

@@ -1939,6 +2036,7 @@ class EventLoopNode(NodeProtocol):
                            ctx,
                            execution_id,
                            iteration=iteration,
                            inner_turn=inner_turn,
                        )

                    elif isinstance(event, ToolCallEvent):

@@ -2098,6 +2196,57 @@ class EventLoopNode(NodeProtocol):
                    except (json.JSONDecodeError, TypeError):
                        pass
                key = tc.tool_input.get("key", "")

                # Auto-spill: save large values to data files and
                # replace with a lightweight file reference so shared
                # memory / adapt.md / transition markers stay small.
                spill_dir = self._config.spillover_dir
                max_val = self._config.max_output_value_chars
                if max_val > 0 and spill_dir:
                    val_str = (
                        json.dumps(value, ensure_ascii=False)
                        if not isinstance(value, str)
                        else value
                    )
                    if len(val_str) > max_val:
                        spill_path = Path(spill_dir)
                        spill_path.mkdir(parents=True, exist_ok=True)
                        ext = ".json" if isinstance(value, (dict, list)) else ".txt"
                        filename = f"output_{key}{ext}"
                        write_content = (
                            json.dumps(value, indent=2, ensure_ascii=False)
                            if isinstance(value, (dict, list))
                            else str(value)
                        )
                        (spill_path / filename).write_text(write_content, encoding="utf-8")
                        file_size = (spill_path / filename).stat().st_size
                        logger.info(
                            "set_output value auto-spilled: key=%s, "
                            "%d chars → %s (%d bytes)",
                            key,
                            len(val_str),
                            filename,
                            file_size,
                        )
                        # Replace value with reference
                        value = (
                            f"[Saved to '{filename}' ({file_size:,} bytes). "
                            f"Use load_data(filename='{filename}') "
                            f"to access full data.]"
                        )
                        # Update tool result to inform the LLM
                        result = ToolResult(
                            tool_use_id=tc.tool_use_id,
                            content=(
                                f"Output '{key}' was large "
                                f"({len(val_str):,} chars) — data saved "
                                f"to '{filename}' ({file_size:,} bytes). "
                                f"The next phase will see the file "
                                f"reference and can load full data."
                            ),
                            is_error=False,
                        )

                await accumulator.set(key, value)
                self._record_learning(key, value)
                outputs_set_this_turn.append(key)

@@ -2167,6 +2316,7 @@ class EventLoopNode(NodeProtocol):
                        ctx=ctx,
                        execution_id=execution_id,
                        iteration=iteration,
                        inner_turn=inner_turn,
                    )

                    result = ToolResult(
@@ -2400,21 +2550,44 @@ class EventLoopNode(NodeProtocol):

            # Phase 2b: execute subagent delegations in parallel.
            if pending_subagent:
                _subagent_timeout = self._config.subagent_timeout_seconds

                async def _timed_subagent(
                    _ctx: NodeContext,
                    _tc: ToolCallEvent,
                    _acc: OutputAccumulator = accumulator,
                    _timeout: float = _subagent_timeout,
                ) -> tuple[ToolResult | BaseException, str, float]:
                    _s = time.time()
                    _iso = datetime.now(UTC).isoformat()
                    try:
                        _r = await self._execute_subagent(
                        _coro = self._execute_subagent(
                            _ctx,
                            _tc.tool_input.get("agent_id", ""),
                            _tc.tool_input.get("task", ""),
                            accumulator=_acc,
                        )
                        if _timeout > 0:
                            _r = await asyncio.wait_for(_coro, timeout=_timeout)
                        else:
                            _r = await _coro
                    except TimeoutError:
                        _agent_id = _tc.tool_input.get("agent_id", "unknown")
                        logger.warning(
                            "Subagent '%s' timed out after %.0fs",
                            _agent_id,
                            _timeout,
                        )
                        _r = ToolResult(
                            tool_use_id=_tc.tool_use_id,
                            content=(
                                f"Subagent '{_agent_id}' timed out after "
                                f"{_timeout:.0f}s. The delegation took "
                                "too long and was cancelled. Try a simpler task "
                                "or break it into smaller pieces."
                            ),
                            is_error=True,
                        )
                    except BaseException as _exc:
                        _r = _exc
                    _dur = round(time.time() - _s, 3)
@@ -2620,6 +2793,7 @@ class EventLoopNode(NodeProtocol):
            )

            # Tool calls processed -- loop back to stream with updated conversation
            inner_turn += 1

        # -------------------------------------------------------------------
        # Synthetic tools: set_output, ask_user, escalate

@@ -2756,6 +2930,12 @@ class EventLoopNode(NodeProtocol):
            name="set_output",
            description=(
                "Set an output value for this node. Call once per output key. "
                "Use this for brief notes, counts, status, and file references — "
                "NOT for large data payloads. When a tool result was saved to a "
                "data file, pass the filename as the value "
                "(e.g. 'google_sheets_get_values_1.txt') so the next phase can "
                "load the full data. Values exceeding ~2000 characters are "
                "auto-saved to data files. "
                f"Valid keys: {output_keys}"
            ),
            parameters={

@@ -2768,7 +2948,10 @@ class EventLoopNode(NodeProtocol):
                },
                "value": {
                    "type": "string",
                    "description": "The output value to store.",
                    "description": (
                        "The output value — a brief note, count, status, "
                        "or data filename reference."
                    ),
                },
            },
            "required": ["key", "value"],

@@ -3292,7 +3475,14 @@ class EventLoopNode(NodeProtocol):
        return False, ""

    async def _execute_tool(self, tc: ToolCallEvent) -> ToolResult:
        """Execute a tool call, handling both sync and async executors."""
        """Execute a tool call, handling both sync and async executors.

        Applies ``tool_call_timeout_seconds`` from LoopConfig to prevent
        hung MCP servers from blocking the event loop indefinitely.
        The initial executor call is offloaded to a thread pool so that
        sync executors (MCP STDIO tools that block on ``future.result()``)
        don't freeze the event loop.
        """
        if self._tool_executor is None:
            return ToolResult(
                tool_use_id=tc.tool_use_id,

@@ -3300,9 +3490,35 @@ class EventLoopNode(NodeProtocol):
                is_error=True,
            )
        tool_use = ToolUse(id=tc.tool_use_id, name=tc.tool_name, input=tc.tool_input)
        result = self._tool_executor(tool_use)
        if asyncio.iscoroutine(result) or asyncio.isfuture(result):
            result = await result
        timeout = self._config.tool_call_timeout_seconds

        async def _run() -> ToolResult:
            # Offload the executor call to a thread. Sync MCP executors
            # block on future.result() — running in a thread keeps the
            # event loop free so asyncio.wait_for can fire the timeout.
            loop = asyncio.get_running_loop()
            result = await loop.run_in_executor(None, self._tool_executor, tool_use)
            # Async executors return a coroutine — await it on the loop
            if asyncio.iscoroutine(result) or asyncio.isfuture(result):
                result = await result
            return result

        try:
            if timeout > 0:
                result = await asyncio.wait_for(_run(), timeout=timeout)
            else:
                result = await _run()
        except TimeoutError:
            logger.warning("Tool '%s' timed out after %.0fs", tc.tool_name, timeout)
            return ToolResult(
                tool_use_id=tc.tool_use_id,
                content=(
                    f"Tool '{tc.tool_name}' timed out after {timeout:.0f}s. "
                    "The operation took too long and was cancelled. "
                    "Try a simpler request or a different approach."
                ),
                is_error=True,
            )
        return result

    def _record_learning(self, key: str, value: Any) -> None:
@@ -3373,6 +3589,125 @@ class EventLoopNode(NodeProtocol):
            self._spill_counter = max_n
            logger.info("Restored spill counter to %d from existing files", max_n)

    # ------------------------------------------------------------------
    # JSON metadata / smart preview helpers for truncation
    # ------------------------------------------------------------------

    @staticmethod
    def _extract_json_metadata(parsed: Any, *, _depth: int = 0, _max_depth: int = 3) -> str:
        """Return a concise structural summary of parsed JSON.

        Reports key names, value types, and — crucially — array lengths so
        the LLM knows how much data exists beyond the preview.

        Returns an empty string for simple scalars.
        """
        if _depth >= _max_depth:
            if isinstance(parsed, dict):
                return f"dict with {len(parsed)} keys"
            if isinstance(parsed, list):
                return f"list of {len(parsed)} items"
            return type(parsed).__name__

        if isinstance(parsed, dict):
            if not parsed:
                return "empty dict"
            lines: list[str] = []
            indent = " " * (_depth + 1)
            for key, value in list(parsed.items())[:20]:
                if isinstance(value, list):
                    line = f'{indent}"{key}": list of {len(value)} items'
                    if value:
                        first = value[0]
                        if isinstance(first, dict):
                            sample_keys = list(first.keys())[:10]
                            line += f" (each item: dict with keys {sample_keys})"
                        elif isinstance(first, list):
                            line += f" (each item: list of {len(first)} elements)"
                    lines.append(line)
                elif isinstance(value, dict):
                    child = EventLoopNode._extract_json_metadata(
                        value, _depth=_depth + 1, _max_depth=_max_depth
                    )
                    lines.append(f'{indent}"{key}": {child}')
                else:
                    lines.append(f'{indent}"{key}": {type(value).__name__}')
            if len(parsed) > 20:
                lines.append(f"{indent}... and {len(parsed) - 20} more keys")
            return "\n".join(lines)

        if isinstance(parsed, list):
            if not parsed:
                return "empty list"
            desc = f"list of {len(parsed)} items"
            first = parsed[0]
            if isinstance(first, dict):
                sample_keys = list(first.keys())[:10]
                desc += f" (each item: dict with keys {sample_keys})"
            elif isinstance(first, list):
                desc += f" (each item: list of {len(first)} elements)"
            return desc

        return ""

    @staticmethod
    def _build_json_preview(parsed: Any, *, max_chars: int = 5000) -> str | None:
        """Build a smart preview of parsed JSON, truncating large arrays.

        Shows first 3 + last 1 items of large arrays with explicit count
        markers so the LLM cannot mistake the preview for the full dataset.

        Returns ``None`` if no truncation was needed (no large arrays).
        """
        _LARGE_ARRAY_THRESHOLD = 10

        def _truncate_arrays(obj: Any) -> tuple[Any, bool]:
            """Return (truncated_copy, was_truncated)."""
            if isinstance(obj, list) and len(obj) > _LARGE_ARRAY_THRESHOLD:
                n = len(obj)
                head = obj[:3]
                tail = obj[-1:]
                marker = f"... ({n - 4} more items omitted, {n} total) ..."
                return head + [marker] + tail, True
            if isinstance(obj, dict):
                changed = False
                out: dict[str, Any] = {}
                for k, v in obj.items():
                    new_v, did = _truncate_arrays(v)
                    out[k] = new_v
                    changed = changed or did
                return (out, True) if changed else (obj, False)
            return obj, False

        preview_obj, was_truncated = _truncate_arrays(parsed)
        if not was_truncated:
            return None  # No large arrays — caller should use raw slicing

        try:
            result = json.dumps(preview_obj, indent=2, ensure_ascii=False)
        except (TypeError, ValueError):
            return None

        if len(result) > max_chars:
            # Even 3+1 items too big — try just 1 item
            def _minimal_arrays(obj: Any) -> Any:
                if isinstance(obj, list) and len(obj) > _LARGE_ARRAY_THRESHOLD:
                    n = len(obj)
                    return obj[:1] + [f"... ({n - 1} more items omitted, {n} total) ..."]
                if isinstance(obj, dict):
                    return {k: _minimal_arrays(v) for k, v in obj.items()}
                return obj

            preview_obj = _minimal_arrays(parsed)
            try:
                result = json.dumps(preview_obj, indent=2, ensure_ascii=False)
            except (TypeError, ValueError):
                return None
            if len(result) > max_chars:
                result = result[:max_chars] + "…"

        return result

    def _truncate_tool_result(
        self,
        result: ToolResult,
@@ -3401,15 +3736,36 @@ class EventLoopNode(NodeProtocol):
        if tool_name == "load_data":
            if limit <= 0 or len(result.content) <= limit:
                return result  # Small load_data result — pass through as-is
            # Large load_data result — truncate with pagination hint
            preview_chars = max(limit - 300, limit // 2)
            preview = result.content[:preview_chars]
            truncated = (
                f"[{tool_name} result: {len(result.content)} chars — "
                f"too large for context. Use offset/limit parameters "
                f"to read smaller chunks.]\n\n"
                f"Preview:\n{preview}…"
            # Large load_data result — truncate with smart preview
            PREVIEW_CAP = min(5000, max(limit - 500, limit // 2))

            metadata_str = ""
            smart_preview: str | None = None
            try:
                parsed_ld = json.loads(result.content)
                metadata_str = self._extract_json_metadata(parsed_ld)
                smart_preview = self._build_json_preview(parsed_ld, max_chars=PREVIEW_CAP)
            except (json.JSONDecodeError, TypeError, ValueError):
                pass

            if smart_preview is not None:
                preview_block = smart_preview
            else:
                preview_block = result.content[:PREVIEW_CAP] + "…"

            header = (
                f"[{tool_name} result: {len(result.content):,} chars — "
                f"too large for context. Use offset_bytes/limit_bytes "
                f"parameters to read smaller chunks.]"
            )
            if metadata_str:
                header += f"\n\nData structure:\n{metadata_str}"
            header += (
                "\n\nWARNING: This is an INCOMPLETE preview. "
                "Do NOT draw conclusions or counts from it."
            )

            truncated = f"{header}\n\nPreview (small sample only):\n{preview_block}"
            logger.info(
                "%s result truncated: %d → %d chars (use offset/limit to paginate)",
                tool_name,

@@ -3431,25 +3787,47 @@ class EventLoopNode(NodeProtocol):
            # Pretty-print JSON content so load_data's line-based
            # pagination works correctly.
            write_content = result.content
            parsed_json: Any = None  # track for metadata extraction
            try:
                parsed = json.loads(result.content)
                write_content = json.dumps(parsed, indent=2, ensure_ascii=False)
                parsed_json = json.loads(result.content)
                write_content = json.dumps(parsed_json, indent=2, ensure_ascii=False)
            except (json.JSONDecodeError, TypeError, ValueError):
                pass  # Not JSON — write as-is

            (spill_path / filename).write_text(write_content, encoding="utf-8")

            if limit > 0 and len(result.content) > limit:
                # Large result: preview + file reference
                preview_chars = max(limit - 300, limit // 2)
                preview = result.content[:preview_chars]
                content = (
                    f"[Result from {tool_name}: {len(result.content)} chars — "
                    f"too large for context, saved to '{filename}'. "
                    f"Use load_data(filename='{filename}') "
                    f"to read the full result.]\n\n"
                    f"Preview:\n{preview}…"
                # Large result: build a small, metadata-rich preview so the
                # LLM cannot mistake it for the complete dataset.
                PREVIEW_CAP = 5000

                # Extract structural metadata (array lengths, key names)
                metadata_str = ""
                smart_preview: str | None = None
                if parsed_json is not None:
                    metadata_str = self._extract_json_metadata(parsed_json)
                    smart_preview = self._build_json_preview(parsed_json, max_chars=PREVIEW_CAP)

                if smart_preview is not None:
                    preview_block = smart_preview
                else:
                    preview_block = result.content[:PREVIEW_CAP] + "…"

                # Assemble header with structural info + warning
                header = (
                    f"[Result from {tool_name}: {len(result.content):,} chars — "
                    f"too large for context, saved to '{filename}'.]"
                )
                if metadata_str:
                    header += f"\n\nData structure:\n{metadata_str}"
                header += (
                    f"\n\nWARNING: The preview below is INCOMPLETE. "
                    f"Do NOT draw conclusions or counts from it. "
                    f"Use load_data(filename='{filename}') to read the "
                    f"full data before analysis."
                )

                content = f"{header}\n\nPreview (small sample only):\n{preview_block}"
                logger.info(
                    "Tool result spilled to file: %s (%d chars → %s)",
                    tool_name,

@@ -3474,13 +3852,34 @@ class EventLoopNode(NodeProtocol):

        # No spillover_dir — truncate in-place if needed
        if limit > 0 and len(result.content) > limit:
            preview_chars = max(limit - 300, limit // 2)
            preview = result.content[:preview_chars]
            truncated = (
                f"[Result from {tool_name}: {len(result.content)} chars — "
                f"truncated to fit context budget. Only the first "
                f"{preview_chars} chars are shown.]\n\n{preview}…"
            PREVIEW_CAP = min(5000, max(limit - 500, limit // 2))

            metadata_str = ""
            smart_preview: str | None = None
            try:
                parsed_inline = json.loads(result.content)
                metadata_str = self._extract_json_metadata(parsed_inline)
                smart_preview = self._build_json_preview(parsed_inline, max_chars=PREVIEW_CAP)
            except (json.JSONDecodeError, TypeError, ValueError):
                pass

            if smart_preview is not None:
                preview_block = smart_preview
            else:
                preview_block = result.content[:PREVIEW_CAP] + "…"

            header = (
                f"[Result from {tool_name}: {len(result.content):,} chars — "
                f"truncated to fit context budget.]"
            )
            if metadata_str:
                header += f"\n\nData structure:\n{metadata_str}"
            header += (
                "\n\nWARNING: This is an INCOMPLETE preview. "
                "Do NOT draw conclusions or counts from the preview alone."
            )

            truncated = f"{header}\n\n{preview_block}"
            logger.info(
                "Tool result truncated in-place: %s (%d → %d chars)",
                tool_name,
@@ -4047,6 +4446,34 @@ class EventLoopNode(NodeProtocol):
                break
        return count

    async def _drain_trigger_queue(self, conversation: NodeConversation) -> int:
        """Drain all pending trigger events as a single batched user message.

        Multiple triggers are merged so the LLM sees them atomically and can
        reason about all pending triggers before acting.
        """
        triggers: list[TriggerEvent] = []
        while not self._trigger_queue.empty():
            try:
                triggers.append(self._trigger_queue.get_nowait())
            except asyncio.QueueEmpty:
                break

        if not triggers:
            return 0

        parts: list[str] = []
        for t in triggers:
            task = t.payload.get("task", "")
            task_line = f"\nTask: {task}" if task else ""
            payload_str = json.dumps(t.payload, default=str)
            parts.append(f"[TRIGGER: {t.trigger_type}/{t.source_id}]{task_line}\n{payload_str}")

        combined = "\n\n".join(parts)
        logger.info("[drain] %d trigger(s): %s", len(triggers), combined[:200])
        await conversation.add_user_message(combined)
        return len(triggers)

    async def _check_pause(
        self,
        ctx: NodeContext,

@@ -4181,7 +4608,12 @@ class EventLoopNode(NodeProtocol):
            await conversation.add_user_message(result.inject)

    async def _publish_iteration(
        self, stream_id: str, node_id: str, iteration: int, execution_id: str = ""
        self,
        stream_id: str,
        node_id: str,
        iteration: int,
        execution_id: str = "",
        extra_data: dict | None = None,
    ) -> None:
        if self._event_bus:
            await self._event_bus.emit_node_loop_iteration(

@@ -4189,6 +4621,7 @@ class EventLoopNode(NodeProtocol):
                node_id=node_id,
                iteration=iteration,
                execution_id=execution_id,
                extra_data=extra_data,
            )

    async def _publish_llm_turn_complete(

@@ -4271,6 +4704,7 @@ class EventLoopNode(NodeProtocol):
        ctx: NodeContext,
        execution_id: str = "",
        iteration: int | None = None,
        inner_turn: int = 0,
    ) -> None:
        if self._event_bus:
            if ctx.node_spec.client_facing:

@@ -4281,6 +4715,7 @@ class EventLoopNode(NodeProtocol):
                    snapshot=snapshot,
                    execution_id=execution_id,
                    iteration=iteration,
                    inner_turn=inner_turn,
                )
            else:
                await self._event_bus.emit_llm_text_delta(

@@ -4289,6 +4724,7 @@ class EventLoopNode(NodeProtocol):
                    content=content,
                    snapshot=snapshot,
                    execution_id=execution_id,
                    inner_turn=inner_turn,
                )

    async def _publish_tool_started(

@@ -4518,11 +4954,19 @@ class EventLoopNode(NodeProtocol):
        subagent_tool_names = set(subagent_spec.tools or [])
        tool_source = ctx.all_tools if ctx.all_tools else ctx.available_tools

        subagent_tools = [
            t
            for t in tool_source
            if t.name in subagent_tool_names and t.name != "delegate_to_sub_agent"
        ]
        # GCU auto-population: GCU nodes declare tools=[] because the runner
        # auto-populates them at setup time. But that expansion doesn't reach
        # subagents invoked via delegate_to_sub_agent — the subagent spec still
        # has the original empty list. When a GCU subagent has no declared
        # tools, include all catalog tools so browser tools are available.
        if subagent_spec.node_type == "gcu" and not subagent_tool_names:
            subagent_tools = [t for t in tool_source if t.name != "delegate_to_sub_agent"]
        else:
            subagent_tools = [
                t
                for t in tool_source
                if t.name in subagent_tool_names and t.name != "delegate_to_sub_agent"
            ]

        missing = subagent_tool_names - {t.name for t in subagent_tools}
        if missing:

@@ -4606,7 +5050,7 @@ class EventLoopNode(NodeProtocol):
        )

        subagent_node = EventLoopNode(
            event_bus=None,  # Subagents don't emit events to parent's bus
            event_bus=self._event_bus,  # Subagent events visible to Queen via shared bus
            judge=SubagentJudge(task=task, max_iterations=max_iter),
            config=LoopConfig(
                max_iterations=max_iter,  # Tighter budget

@@ -4621,25 +5065,42 @@ class EventLoopNode(NodeProtocol):
            conversation_store=subagent_conv_store,
        )

        # Inject a unique GCU browser profile for this subagent so that
        # concurrent GCU subagents (run via asyncio.gather) each get their own
        # isolated BrowserContext. asyncio.gather copies the current context
        # for each coroutine, so the reset token is safe to call in finally.
        _profile_token = None
        try:
            from gcu.browser.session import set_active_profile as _set_gcu_profile

            _profile_token = _set_gcu_profile(f"{agent_id}-{subagent_instance}")
        except ImportError:
            pass  # GCU tools not installed; no-op

        try:
            logger.info("🚀 Starting subagent '%s' execution...", agent_id)
            start_time = time.time()
            result = await subagent_node.execute(subagent_ctx)
            latency_ms = int((time.time() - start_time) * 1000)

            separator = "-" * 60
            logger.info(
                "\n" + "-" * 60 + "\n"
                "\n%s\n"
                "✅ SUBAGENT '%s' COMPLETED\n"
                "-" * 60 + "\n"
                "%s\n"
                "Success: %s\n"
                "Latency: %dms\n"
                "Tokens used: %s\n"
                "Output keys: %s\n" + "-" * 60,
                "Output keys: %s\n"
                "%s",
                separator,
                agent_id,
                separator,
                result.success,
                latency_ms,
                result.tokens_used,
                list(result.output.keys()) if result.output else [],
                separator,
            )

            result_json = {

@@ -4685,3 +5146,29 @@ class EventLoopNode(NodeProtocol):
                content=json.dumps(result_json, indent=2),
                is_error=True,
            )
        finally:
            # Restore the GCU profile context that was set before this subagent ran.
            if _profile_token is not None:
                from gcu.browser.session import _active_profile as _gcu_profile_var

                _gcu_profile_var.reset(_profile_token)

            # Stop the browser session for this subagent's profile so tabs are
            # closed immediately rather than accumulating until server shutdown.
            if self._tool_executor is not None:
                _subagent_profile = f"{agent_id}-{subagent_instance}"
                try:
                    _stop_use = ToolUse(
                        id="gcu-cleanup",
                        name="browser_stop",
                        input={"profile": _subagent_profile},
                    )
                    _stop_result = self._tool_executor(_stop_use)
                    if asyncio.iscoroutine(_stop_result) or asyncio.isfuture(_stop_result):
                        await _stop_result
                except Exception as _gcu_exc:
                    logger.warning(
                        "GCU browser_stop failed for profile %r: %s",
                        _subagent_profile,
                        _gcu_exc,
                    )
@@ -27,11 +27,14 @@ from framework.graph.node import (
|
||||
SharedMemory,
|
||||
)
|
||||
from framework.graph.validator import OutputValidator
|
||||
from framework.llm.provider import LLMProvider, Tool
|
||||
from framework.llm.provider import LLMProvider, Tool, ToolUse
|
||||
from framework.observability import set_trace_context
|
||||
from framework.runtime.core import Runtime
|
||||
from framework.schemas.checkpoint import Checkpoint
|
||||
from framework.storage.checkpoint_store import CheckpointStore
|
||||
from framework.utils.io import atomic_write
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _default_max_context_tokens() -> int:
|
||||
@@ -148,6 +151,9 @@ class GraphExecutor:
|
||||
tool_provider_map: dict[str, str] | None = None,
|
||||
dynamic_tools_provider: Callable | None = None,
|
||||
dynamic_prompt_provider: Callable | None = None,
|
||||
iteration_metadata_provider: Callable | None = None,
|
||||
skills_catalog_prompt: str = "",
|
||||
protocols_prompt: str = "",
|
||||
):
|
||||
"""
|
||||
Initialize the executor.
|
||||
@@ -173,6 +179,8 @@ class GraphExecutor:
|
||||
tool list (for mode switching)
|
||||
dynamic_prompt_provider: Optional callback returning current
|
||||
system prompt (for phase switching)
|
||||
skills_catalog_prompt: Available skills catalog for system prompt
|
||||
protocols_prompt: Default skill operational protocols for system prompt
|
||||
"""
|
||||
self.runtime = runtime
|
||||
self.llm = llm
|
||||
@@ -193,6 +201,21 @@ class GraphExecutor:
|
||||
self.tool_provider_map = tool_provider_map
|
||||
self.dynamic_tools_provider = dynamic_tools_provider
|
||||
self.dynamic_prompt_provider = dynamic_prompt_provider
|
||||
self.iteration_metadata_provider = iteration_metadata_provider
|
||||
self.skills_catalog_prompt = skills_catalog_prompt
|
||||
self.protocols_prompt = protocols_prompt
|
||||
|
||||
if protocols_prompt:
|
||||
self.logger.info(
|
||||
"GraphExecutor[%s] received protocols_prompt (%d chars)",
|
||||
stream_id,
|
||||
len(protocols_prompt),
|
||||
)
|
||||
else:
|
||||
self.logger.warning(
|
||||
"GraphExecutor[%s] received EMPTY protocols_prompt",
|
||||
stream_id,
|
||||
)
|
||||
|
||||
# Parallel execution settings
|
||||
self.enable_parallel_execution = enable_parallel_execution
|
||||
@@ -222,11 +245,11 @@ class GraphExecutor:
|
||||
"""
|
||||
if not self._storage_path:
|
||||
return
|
||||
state_path = self._storage_path / "state.json"
|
||||
try:
|
||||
import json as _json
|
||||
from datetime import datetime
|
||||
|
||||
state_path = self._storage_path / "state.json"
|
||||
if state_path.exists():
|
||||
state_data = _json.loads(state_path.read_text(encoding="utf-8"))
|
||||
else:
|
||||
@@ -249,9 +272,14 @@ class GraphExecutor:
|
||||
state_data["memory"] = memory_snapshot
|
||||
state_data["memory_keys"] = list(memory_snapshot.keys())
|
||||
|
||||
state_path.write_text(_json.dumps(state_data, indent=2), encoding="utf-8")
|
||||
with atomic_write(state_path, encoding="utf-8") as f:
|
||||
_json.dump(state_data, f, indent=2)
|
||||
except Exception:
|
||||
pass # Best-effort — never block execution
|
||||
logger.warning(
|
||||
"Failed to persist progress state to %s",
|
||||
state_path,
|
||||
exc_info=True,
|
||||
)
|
||||
|
||||
def _validate_tools(self, graph: GraphSpec) -> list[str]:
|
||||
"""
|
||||
@@ -413,6 +441,14 @@ class GraphExecutor:
|
||||
)
|
||||
return s1 + "\n\n" + s2
|
||||
|
||||
def _get_runtime_log_session_id(self) -> str:
|
||||
"""Return the session-backed execution ID for runtime logging, if any."""
|
||||
if not self._storage_path:
|
||||
return ""
|
||||
if self._storage_path.parent.name != "sessions":
|
||||
return ""
|
||||
return self._storage_path.name
|
||||
|
||||
async def execute(
|
||||
self,
|
||||
graph: GraphSpec,
|
||||
@@ -706,10 +742,7 @@ class GraphExecutor:
|
||||
)
|
||||
|
||||
if self.runtime_logger:
|
||||
# Extract session_id from storage_path if available (for unified sessions)
|
||||
session_id = ""
|
||||
if self._storage_path and self._storage_path.name.startswith("session_"):
|
||||
session_id = self._storage_path.name
|
||||
session_id = self._get_runtime_log_session_id()
|
||||
self.runtime_logger.start_run(goal_id=goal.id, session_id=session_id)
|
||||
|
||||
self.logger.info(f"🚀 Starting execution: {goal.name}")
|
||||
@@ -935,6 +968,33 @@ class GraphExecutor:
|
||||
self.logger.info(" Executing...")
|
||||
result = await node_impl.execute(ctx)
|
||||
|
||||
# GCU tab cleanup: stop the browser profile after a top-level GCU node
|
||||
# finishes so tabs don't accumulate. Mirrors the subagent cleanup in
|
||||
# EventLoopNode._execute_subagent().
|
||||
if node_spec.node_type == "gcu" and self.tool_executor is not None:
|
||||
try:
|
||||
from gcu.browser.session import (
|
||||
_active_profile as _gcu_profile_var,
|
||||
)
|
||||
|
||||
_gcu_profile = _gcu_profile_var.get()
|
||||
_stop_use = ToolUse(
|
||||
id="gcu-cleanup",
|
||||
name="browser_stop",
|
||||
input={"profile": _gcu_profile},
|
||||
)
|
||||
_stop_result = self.tool_executor(_stop_use)
|
||||
if asyncio.iscoroutine(_stop_result) or asyncio.isfuture(_stop_result):
|
||||
await _stop_result
|
||||
except ImportError:
|
||||
pass # GCU not installed
|
||||
except Exception as _gcu_exc:
|
||||
logger.warning(
|
||||
"GCU browser_stop failed for profile %r: %s",
|
||||
_gcu_profile,
|
||||
_gcu_exc,
|
||||
)
|
||||
|
||||
# Emit node-completed event (skip event_loop nodes)
|
||||
if self._event_bus and node_spec.node_type != "event_loop":
|
||||
await self._event_bus.emit_node_loop_completed(
|
||||
@@ -1763,10 +1823,31 @@ class GraphExecutor:
|
||||
if node_spec.tools:
|
||||
available_tools = [t for t in self.tools if t.name in node_spec.tools]
|
||||
|
||||
# Create scoped memory view
|
||||
# Create scoped memory view.
|
||||
# When permissions are restricted (non-empty key lists), auto-include
|
||||
# _-prefixed keys used by default skill protocols so agents can read/write
|
||||
# operational state (e.g. _working_notes, _batch_ledger) regardless of
|
||||
# what the node declares. When key lists are empty (unrestricted), leave
|
||||
# unchanged — empty means "allow all".
|
||||
read_keys = list(node_spec.input_keys)
|
||||
write_keys = list(node_spec.output_keys)
|
||||
# Only extend lists that were already restricted (non-empty).
|
||||
# Empty means "allow all" — adding keys would accidentally
|
||||
# activate the permission check and block legitimate reads/writes.
|
||||
if read_keys or write_keys:
|
||||
from framework.skills.defaults import SHARED_MEMORY_KEYS as _skill_keys
|
||||
|
||||
existing_underscore = [k for k in memory._data if k.startswith("_")]
|
||||
extra_keys = set(_skill_keys) | set(existing_underscore)
|
||||
for k in extra_keys:
|
||||
if read_keys and k not in read_keys:
|
||||
read_keys.append(k)
|
||||
if write_keys and k not in write_keys:
|
||||
write_keys.append(k)
|
||||
|
||||
scoped_memory = memory.with_permissions(
|
||||
read_keys=node_spec.input_keys,
|
||||
write_keys=node_spec.output_keys,
|
||||
read_keys=read_keys,
|
||||
write_keys=write_keys,
|
||||
)
|
||||
|
||||
# Build per-node accounts prompt (filtered to this node's tools)
|
||||
@@ -1809,6 +1890,9 @@ class GraphExecutor:
|
||||
shared_node_registry=self.node_registry, # For subagent escalation routing
|
||||
dynamic_tools_provider=self.dynamic_tools_provider,
|
||||
dynamic_prompt_provider=self.dynamic_prompt_provider,
|
||||
iteration_metadata_provider=self.iteration_metadata_provider,
|
||||
skills_catalog_prompt=self.skills_catalog_prompt,
|
||||
protocols_prompt=self.protocols_prompt,
|
||||
)
|
||||
|
||||
VALID_NODE_TYPES = {
|
||||
@@ -2049,6 +2133,10 @@ class GraphExecutor:
|
||||
edge=edge,
|
||||
)
|
||||
|
||||
# Track which branch wrote which key for memory conflict detection
|
||||
fanout_written_keys: dict[str, str] = {} # key -> branch_id that wrote it
|
||||
fanout_keys_lock = asyncio.Lock()
|
||||
|
||||
self.logger.info(f" ⑂ Fan-out: executing {len(branches)} branches in parallel")
|
||||
for branch in branches.values():
|
||||
target_spec = graph.get_node(branch.node_id)
@@ -2140,8 +2228,31 @@ class GraphExecutor:
)

if result.success:
# Write outputs to shared memory using async write
# Write outputs to shared memory with conflict detection
conflict_strategy = self._parallel_config.memory_conflict_strategy
for key, value in result.output.items():
async with fanout_keys_lock:
prior_branch = fanout_written_keys.get(key)
if prior_branch and prior_branch != branch.branch_id:
if conflict_strategy == "error":
raise RuntimeError(
f"Memory conflict: key '{key}' already written "
f"by branch '{prior_branch}', "
f"conflicting write from '{branch.branch_id}'"
)
elif conflict_strategy == "first_wins":
self.logger.debug(
f" ⚠ Skipping write to '{key}' "
f"(first_wins: already set by {prior_branch})"
)
continue
else:
# last_wins (default): write and log
self.logger.debug(
f" ⚠ Key '{key}' overwritten "
f"(last_wins: {prior_branch} -> {branch.branch_id})"
)
fanout_written_keys[key] = branch.branch_id
await memory.write_async(key, value)

branch.result = result
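The three `memory_conflict_strategy` values reduce to a small decision function. A runnable toy version (not the executor's actual code) that mirrors the branching above:

```python
def should_write(strategy: str, key: str, prior: str | None, branch: str) -> bool:
    """Return True when this branch's write should proceed."""
    if prior is None or prior == branch:
        return True  # no conflict
    if strategy == "error":
        raise RuntimeError(f"Memory conflict on '{key}': {prior} vs {branch}")
    if strategy == "first_wins":
        return False  # keep the earlier branch's value
    return True  # last_wins (default): overwrite and log

assert should_write("last_wins", "summary", "branch-a", "branch-b") is True
assert should_write("first_wins", "summary", "branch-a", "branch-b") is False
```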
@@ -2188,9 +2299,11 @@ class GraphExecutor:

return branch, e

# Execute all branches concurrently
tasks = [execute_single_branch(b) for b in branches.values()]
results = await asyncio.gather(*tasks, return_exceptions=False)
# Execute all branches concurrently with per-branch timeout
timeout = self._parallel_config.branch_timeout_seconds
branch_list = list(branches.values())
tasks = [asyncio.wait_for(execute_single_branch(b), timeout=timeout) for b in branch_list]
results = await asyncio.gather(*tasks, return_exceptions=True)
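The `wait_for` plus `gather(return_exceptions=True)` combination is the load-bearing change here: timeouts and errors come back as values, positionally aligned with `branch_list`, instead of cancelling sibling branches. A self-contained sketch:

```python
import asyncio

async def branch(i: int) -> str:
    await asyncio.sleep(i * 0.1)
    return f"branch-{i}"

async def main() -> None:
    timeout = 0.15
    coros = [asyncio.wait_for(branch(i), timeout=timeout) for i in range(3)]
    results = await asyncio.gather(*coros, return_exceptions=True)
    for i, result in enumerate(results):  # results align with the input order
        if isinstance(result, asyncio.TimeoutError):
            print(f"branch-{i} timed out after {timeout}s")
        elif isinstance(result, Exception):
            print(f"branch-{i} failed: {result!r}")
        else:
            print(result)

asyncio.run(main())  # branch-0 and branch-1 finish; branch-2 times out
```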

# Process results
total_tokens = 0
@@ -2198,17 +2311,33 @@ class GraphExecutor:
branch_results: dict[str, NodeResult] = {}
failed_branches: list[ParallelBranch] = []

for branch, result in results:
path.append(branch.node_id)
for i, result in enumerate(results):
branch = branch_list[i]

if isinstance(result, Exception):
if isinstance(result, asyncio.TimeoutError):
# Branch timed out
branch.status = "timed_out"
branch.error = f"Branch timed out after {timeout}s"
self.logger.warning(
f" ⏱ Branch {graph.get_node(branch.node_id).name}: "
f"timed out after {timeout}s"
)
path.append(branch.node_id)
failed_branches.append(branch)
elif result is None or not result.success:
elif isinstance(result, Exception):
path.append(branch.node_id)
failed_branches.append(branch)
else:
total_tokens += result.tokens_used
total_latency += result.latency_ms
branch_results[branch.branch_id] = result
returned_branch, node_result = result
path.append(returned_branch.node_id)
if node_result is None or isinstance(node_result, Exception):
failed_branches.append(returned_branch)
elif not node_result.success:
failed_branches.append(returned_branch)
else:
total_tokens += node_result.tokens_used
total_latency += node_result.latency_ms
branch_results[returned_branch.branch_id] = node_result

# Handle failures based on config
if failed_branches:

+51 -11
@@ -37,24 +37,42 @@ Follow these rules for reliable, efficient browser interaction.
## Reading Pages
- ALWAYS prefer `browser_snapshot` over `browser_get_text("body")`
— it returns a compact ~1-5 KB accessibility tree vs 100+ KB of raw HTML.
- Use `browser_snapshot_aria` when you need full ARIA properties
for detailed element inspection.
- Interaction tools (`browser_click`, `browser_type`, `browser_fill`,
`browser_scroll`, etc.) return a page snapshot automatically in their
result. Use it to decide your next action — do NOT call
`browser_snapshot` separately after every action.
Only call `browser_snapshot` when you need a fresh view without
performing an action, or after setting `auto_snapshot=false`.
- Do NOT use `browser_screenshot` for reading text content
— it produces huge base64 images with no searchable text.
- Only fall back to `browser_get_text` for extracting specific
small elements by CSS selector.

## Navigation & Waiting
- Always call `browser_wait` after navigation actions
(`browser_open`, `browser_navigate`, `browser_click` on links)
to let the page load.
- `browser_navigate` and `browser_open` already wait for the page to
load (`domcontentloaded`). Do NOT call `browser_wait` with no
arguments after navigation — it wastes time.
Only use `browser_wait` when you need a *specific element* or *text*
to appear (pass `selector` or `text`).
- NEVER re-navigate to the same URL after scrolling
— this resets your scroll position and loses loaded content.

## Scrolling
- Use large scroll amounts (~2000 px) when loading more content
— sites like Twitter and LinkedIn lazy-load more content as you page through.
- After scrolling, take a new `browser_snapshot` to see updated content.
- The scroll result includes a snapshot automatically — no need to call
`browser_snapshot` separately.

## Batching Actions
- You can call multiple tools in a single turn — they execute in parallel.
ALWAYS batch independent actions together. Examples:
- Fill multiple form fields in one turn.
- Navigate + snapshot in one turn.
- Click + scroll if targeting different elements.
- When batching, set `auto_snapshot=false` on all but the last action
to avoid redundant snapshots (see the sketch after this list).
- Aim for 3-5 tool calls per turn minimum. One tool call per turn is
wasteful.
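A sketch of one batched turn. The `browser_*` functions are stubbed out here so the snippet runs standalone; their real signatures are assumptions based on the tool names in this guide:

```python
def browser_fill(selector: str, text: str, auto_snapshot: bool = True) -> None:
    print(f"fill {selector!r} with {text!r} (snapshot={auto_snapshot})")

def browser_click(selector: str, auto_snapshot: bool = True) -> None:
    print(f"click {selector!r} (snapshot={auto_snapshot})")

# One turn, three independent tool calls; only the last returns a snapshot.
browser_fill("#first-name", "Ada", auto_snapshot=False)
browser_fill("#last-name", "Lovelace", auto_snapshot=False)
browser_click("#submit")
```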

## Error Recovery
- If a tool fails, retry once with the same approach.
@@ -65,11 +83,33 @@ Follow these rules for reliable, efficient browser interaction.
then `browser_start`, then retry.

## Tab Management
- Use `browser_tabs` to list open tabs when managing multiple pages.
- Pass `target_id` to tools when operating on a specific tab.
- Open background tabs with `browser_open(url=..., background=true)`
to avoid losing your current context.
- Close tabs you no longer need with `browser_close` to free resources.

**Close tabs as soon as you are done with them** — not only at the end of the task.
After reading or extracting data from a tab, close it immediately.

**Decision rules:**
- Finished reading/extracting from a tab? → `browser_close(target_id=...)`
- Completed a multi-tab workflow? → `browser_close_finished()` to clean up all your tabs
- More than 3 tabs open? → stop and close finished ones before opening more
- Popup appeared that you didn't need? → close it immediately

**Origin awareness:** `browser_tabs` returns an `origin` field for each tab:
- `"agent"` — you opened it; you own it; close it when done
- `"popup"` — opened by a link or script; close after extracting what you need
- `"startup"` or `"user"` — leave these alone unless the task requires it

**Cleanup tools:**
- `browser_close(target_id=...)` — close one specific tab
- `browser_close_finished()` — close all your agent/popup tabs (safe: leaves startup/user tabs)
- `browser_close_all()` — close everything except the active tab (use only for full reset)

**Multi-tab workflow pattern:**
1. Open background tabs with `browser_open(url=..., background=true)` to stay on current tab
2. Process each tab and close it with `browser_close` when done
3. When the full workflow completes, call `browser_close_finished()` to confirm cleanup
4. Check `browser_tabs` at any point — it shows `origin` and `age_seconds` per tab

Never accumulate tabs. Treat every tab you open as a resource you must free.
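The same workflow as runnable pseudocode, again with stubbed `browser_*` functions (signatures are assumptions, URLs are placeholders):

```python
def browser_open(url: str, background: bool = False) -> str:
    print(f"open {url} (background={background})")
    return f"tab:{url}"

def browser_close(target_id: str) -> None:
    print(f"close {target_id}")

def browser_close_finished() -> None:
    print("close all agent/popup tabs")

tabs = [browser_open(u, background=True)  # stay on the current tab
        for u in ("https://example.com/a", "https://example.com/b")]
for tab in tabs:
    # ... extract what you need from the tab here ...
    browser_close(target_id=tab)  # free it as soon as it is done
browser_close_finished()  # confirm nothing is left over
```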

## Login & Auth Walls
- If you see a "Log in" or "Sign up" prompt instead of expected

@@ -565,6 +565,15 @@ class NodeContext:
# staging / running) without restarting the conversation.
dynamic_prompt_provider: Any = None  # Callable[[], str] | None

# Skill system prompts — injected by the skill discovery pipeline
skills_catalog_prompt: str = ""  # Available skills XML catalog
protocols_prompt: str = ""  # Default skill operational protocols

# Per-iteration metadata provider — when set, EventLoopNode merges
# the returned dict into node_loop_iteration event data. Used by
# the queen to record the current phase per iteration.
iteration_metadata_provider: Any = None  # Callable[[], dict] | None


@dataclass
class NodeResult:

@@ -140,14 +140,18 @@ def compose_system_prompt(
focus_prompt: str | None,
narrative: str | None = None,
accounts_prompt: str | None = None,
skills_catalog_prompt: str | None = None,
protocols_prompt: str | None = None,
) -> str:
"""Compose the three-layer system prompt.
"""Compose the multi-layer system prompt.

Args:
identity_prompt: Layer 1 — static agent identity (from GraphSpec).
focus_prompt: Layer 3 — per-node focus directive (from NodeSpec.system_prompt).
narrative: Layer 2 — auto-generated from conversation state.
accounts_prompt: Connected accounts block (sits between identity and narrative).
skills_catalog_prompt: Available skills catalog XML (Agent Skills standard).
protocols_prompt: Default skill operational protocols section.

Returns:
Composed system prompt with all layers present, plus current datetime.
@@ -162,6 +166,14 @@ def compose_system_prompt(
if accounts_prompt:
parts.append(f"\n{accounts_prompt}")

# Skills catalog (discovered skills available for activation)
if skills_catalog_prompt:
parts.append(f"\n{skills_catalog_prompt}")

# Operational protocols (default skill behavioral guidance)
if protocols_prompt:
parts.append(f"\n{protocols_prompt}")

# Layer 2: Narrative (what's happened so far)
if narrative:
parts.append(f"\n--- Context (what has happened so far) ---\n{narrative}")
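Read together, the hunks above imply a fixed layering order: identity, accounts, skills catalog, protocols, then narrative. A minimal sketch of just that ordering (the real function also appends the current datetime and more; this is not it):

```python
def compose_sketch(identity: str, accounts: str | None, catalog: str | None,
                   protocols: str | None, narrative: str | None) -> str:
    parts = [identity]
    for block in (accounts, catalog, protocols):  # optional middle layers
        if block:
            parts.append(f"\n{block}")
    if narrative:
        parts.append(f"\n--- Context (what has happened so far) ---\n{narrative}")
    return "\n".join(parts)

print(compose_sketch("You are the sales agent.", None,
                     "<skills>...</skills>", None, "Lead replied twice."))
```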

@@ -45,6 +45,12 @@ def _patch_litellm_anthropic_oauth() -> None:
from litellm.llms.anthropic.common_utils import AnthropicModelInfo
from litellm.types.llms.anthropic import ANTHROPIC_OAUTH_TOKEN_PREFIX
except ImportError:
logger.warning(
"Could not apply litellm Anthropic OAuth patch — litellm internals may have "
"changed. Anthropic OAuth tokens (Claude Code subscriptions) may fail with 401. "
"See BerriAI/litellm#19618. Current litellm version: %s",
getattr(litellm, "__version__", "unknown"),
)
return

original = AnthropicModelInfo.validate_environment
@@ -86,10 +92,12 @@ def _patch_litellm_metadata_nonetype() -> None:
"""
import functools

patched_count = 0
for fn_name in ("completion", "acompletion", "responses", "aresponses"):
original = getattr(litellm, fn_name, None)
if original is None:
continue
patched_count += 1
if asyncio.iscoroutinefunction(original):

@functools.wraps(original)
@@ -109,6 +117,14 @@ def _patch_litellm_metadata_nonetype() -> None:

setattr(litellm, fn_name, _sync_wrapper)

if patched_count == 0:
logger.warning(
"Could not apply litellm metadata=None patch — none of the expected entry "
"points (completion, acompletion, responses, aresponses) were found. "
"metadata=None TypeError may occur. Current litellm version: %s",
getattr(litellm, "__version__", "unknown"),
)


if litellm is not None:
_patch_litellm_anthropic_oauth()
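The wrapper bodies themselves are elided from this diff. A plausible sketch of the guard they describe, coercing an explicit `metadata=None` to `{}` before delegating, is below; it is an assumption, not the actual patch:

```python
import functools

def _guard_metadata(original):
    @functools.wraps(original)
    def _sync_wrapper(*args, **kwargs):
        if kwargs.get("metadata", ...) is None:  # only rewrite an explicit None
            kwargs["metadata"] = {}
        return original(*args, **kwargs)
    return _sync_wrapper

@_guard_metadata
def completion(model: str, metadata: dict | None = None) -> dict:
    return {"model": model, "metadata_keys": list(metadata or {})}

assert completion("some-model", metadata=None)["metadata_keys"] == []
```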

@@ -150,6 +166,10 @@ EMPTY_STREAM_RETRY_DELAY = 1.0  # seconds
# Directory for dumping failed requests
FAILED_REQUESTS_DIR = Path.home() / ".hive" / "failed_requests"

# Maximum number of dump files to retain in ~/.hive/failed_requests/.
# Older files are pruned automatically to prevent unbounded disk growth.
MAX_FAILED_REQUEST_DUMPS = 50


def _estimate_tokens(model: str, messages: list[dict]) -> tuple[int, str]:
"""Estimate token count for messages. Returns (token_count, method)."""
@@ -166,6 +186,24 @@ def _estimate_tokens(model: str, messages: list[dict]) -> tuple[int, str]:
return total_chars // 4, "estimate"


def _prune_failed_request_dumps(max_files: int = MAX_FAILED_REQUEST_DUMPS) -> None:
"""Remove oldest dump files when the count exceeds *max_files*.

Best-effort: never raises — a pruning failure must not break retry logic.
"""
try:
all_dumps = sorted(
FAILED_REQUESTS_DIR.glob("*.json"),
key=lambda f: f.stat().st_mtime,
)
excess = len(all_dumps) - max_files
if excess > 0:
for old_file in all_dumps[:excess]:
old_file.unlink(missing_ok=True)
except Exception:
pass  # Best-effort — never block the caller


def _dump_failed_request(
model: str,
kwargs: dict[str, Any],
@@ -197,6 +235,9 @@ def _dump_failed_request(
with open(filepath, "w", encoding="utf-8") as f:
json.dump(dump_data, f, indent=2, default=str)

# Prune old dumps to prevent unbounded disk growth
_prune_failed_request_dumps()

return str(filepath)


@@ -1,33 +1 @@
"""Framework-level worker monitoring package.

Provides the Worker Health Judge: a reusable secondary graph that attaches to
any worker agent runtime and monitors its execution health via periodic log
inspection. Emits structured EscalationTickets when degradation is detected.

Usage::

from framework.monitoring import HEALTH_JUDGE_ENTRY_POINT, judge_goal, judge_graph
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools

# Register tools bound to the worker runtime's EventBus
monitoring_registry = ToolRegistry()
register_worker_monitoring_tools(monitoring_registry, worker_runtime._event_bus, storage_path)

# Load judge as secondary graph on the worker runtime
await worker_runtime.add_graph(
graph_id="judge",
graph=judge_graph,
goal=judge_goal,
entry_points={"health_check": HEALTH_JUDGE_ENTRY_POINT},
storage_subpath="graphs/judge",
)
"""

from .judge import HEALTH_JUDGE_ENTRY_POINT, judge_goal, judge_graph, judge_node

__all__ = [
"HEALTH_JUDGE_ENTRY_POINT",
"judge_goal",
"judge_graph",
"judge_node",
]
"""Framework-level worker monitoring package."""

@@ -1,258 +0,0 @@
"""Worker Health Judge — framework-level reusable monitoring graph.

Attaches to any worker agent runtime as a secondary graph. Fires on a
2-minute timer, reads the worker's session logs via ``get_worker_health_summary``,
accumulates observations in a continuous conversation context, and emits a
structured ``EscalationTicket`` when it detects a degradation pattern.

Usage::

from framework.monitoring import judge_graph, judge_goal, HEALTH_JUDGE_ENTRY_POINT
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools

# Register tools bound to the worker runtime's event bus
monitoring_registry = ToolRegistry()
register_worker_monitoring_tools(
monitoring_registry, worker_runtime._event_bus, storage_path
)
monitoring_tools = list(monitoring_registry.get_tools().values())
monitoring_executor = monitoring_registry.get_executor()

# Load judge as secondary graph on the worker runtime
await worker_runtime.add_graph(
graph_id="judge",
graph=judge_graph,
goal=judge_goal,
entry_points={"health_check": HEALTH_JUDGE_ENTRY_POINT},
storage_subpath="graphs/judge",
)

Design:
- ``isolation_level="isolated"`` — the judge has its own memory, not
polluting the worker's shared memory namespace.
- ``conversation_mode="continuous"`` — the judge's conversation carries
across timer ticks. The conversation IS the judge's memory. It tracks
trends by referring to its own prior messages ("Last check I saw 47
steps; now 52; 5 new steps, 3 RETRY").
- No shared memory keys. No external state files.
"""

from __future__ import annotations

from framework.graph import Constraint, Goal, NodeSpec, SuccessCriterion
from framework.graph.edge import AsyncEntryPointSpec, GraphSpec

# ---------------------------------------------------------------------------
# Goal
# ---------------------------------------------------------------------------

judge_goal = Goal(
id="worker-health-monitor",
name="Worker Health Monitor",
description=(
"Periodically assess the health of the worker agent by reading its "
"execution logs. Detect degradation patterns (excessive retries, "
"stalls, doom loops) and emit structured EscalationTickets when the "
"worker needs attention."
),
success_criteria=[
SuccessCriterion(
id="accurate-detection",
description="Only escalates genuine degradation, not normal retry cycles",
metric="false_positive_rate",
target="low",
weight=0.5,
),
SuccessCriterion(
id="timely-detection",
description="Detects genuine stalls within 2 timer ticks (≤4 minutes)",
metric="detection_latency_minutes",
target="<=4",
weight=0.5,
),
],
constraints=[
Constraint(
id="conservative-escalation",
description=(
"Do not escalate on a single bad verdict or a brief stall. "
"Require clear patterns (10+ consecutive bad verdicts or 4+ minute stall) "
"before creating a ticket."
),
constraint_type="hard",
category="quality",
),
Constraint(
id="complete-ticket",
description=(
"Every EscalationTicket must have all required fields filled. "
"Do not emit partial or placeholder tickets."
),
constraint_type="hard",
category="correctness",
),
],
)

# ---------------------------------------------------------------------------
# Node
# ---------------------------------------------------------------------------

judge_node = NodeSpec(
id="judge",
name="Worker Health Judge",
description=(
"Autonomous health monitor for worker agents. Reads execution logs "
"on each timer tick, compares to prior observations (via conversation "
"history), and emits a structured EscalationTicket when a genuine "
"degradation pattern is detected."
),
node_type="event_loop",
client_facing=False,  # Autonomous monitor, not interactive
max_node_visits=0,  # Unbounded — runs on every timer tick
input_keys=[],
output_keys=["health_verdict"],
nullable_output_keys=["health_verdict"],
success_criteria=(
"A clear health verdict is produced each check: either 'healthy' with "
"a brief observation, or a complete EscalationTicket is emitted via "
"emit_escalation_ticket and health_verdict describes the issue."
),
tools=[
"get_worker_health_summary",
"emit_escalation_ticket",
],
system_prompt="""\
You are the Worker Health Judge. You run every 2 minutes alongside a worker \
agent to monitor its execution health.

# Your Role

You observe the worker's iteration patterns over time and escalate only when \
you see genuine degradation — not normal retry cycles. Your conversation history \
IS your memory. On each check, refer to your previous observations to track trends.

# Check Procedure

On each timer tick (every 2 minutes):

## Step 1: Read health snapshot
Call get_worker_health_summary() with no arguments to auto-discover the active \
session. This returns:
- worker_agent_id: the worker's agent name — use this for ticket identity fields
- worker_graph_id: the worker's primary graph ID — use this for ticket identity fields
- session_id: the session being monitored — use this for worker_session_id in tickets
- total_steps: how many log steps have been recorded
- recent_verdicts: list of recent ACCEPT/RETRY/CONTINUE verdicts
- steps_since_last_accept: consecutive non-ACCEPT steps
- stall_minutes: wall-clock since last step (null if active)
- evidence_snippet: recent LLM output

## Step 2: Compare to prior check
Look at your conversation history. What was total_steps last time?
- If total_steps is UNCHANGED from prior check AND prior check was also unchanged:
→ STALL confirmed (worker has produced no new iterations in 4+ minutes).
→ Escalate with severity="high" or "critical" depending on stall duration.
- If total_steps increased: worker is making progress. Examine verdicts.

## Step 3: Analyze verdict pattern
- Healthy: Mix of ACCEPT and RETRY, steps_since_last_accept < 5. No action.
- Warning: steps_since_last_accept is 5-9. Note it, no escalation yet.
- Degraded: steps_since_last_accept >= 10. Examine evidence_snippet.
- If evidence shows the agent is making real progress (complex reasoning,
exploring solutions, productive tool use): may be a hard problem. Note it.
- If evidence shows a loop (same error, same tool call, no new information):
→ Escalate with severity="medium" or "high".
- Critical: steps_since_last_accept >= 20, OR stall_minutes >= 4.
→ Escalate with severity="critical".

## Step 4: Decide

### If healthy:
set_output("health_verdict", "healthy: <brief observation>")
Done.

### If escalating:
Build an EscalationTicket JSON string with ALL required fields:
{
"worker_agent_id": "<worker_agent_id from get_worker_health_summary>",
"worker_session_id": "<session_id from get_worker_health_summary>",
"worker_node_id": "<worker_graph_id from get_worker_health_summary>",
"worker_graph_id": "<worker_graph_id from get_worker_health_summary>",
"severity": "<low|medium|high|critical>",
"cause": "<what you observed — concrete, specific>",
"judge_reasoning": "<why you decided to escalate, not just dismiss>",
"suggested_action": "<what you recommend: restart, human review, etc.>",
"recent_verdicts": [<list from get_worker_health_summary>],
"total_steps_checked": <int>,
"steps_since_last_accept": <int>,
"stall_minutes": <float or null>,
"evidence_snippet": "<from get_worker_health_summary>"
}

Call: emit_escalation_ticket(ticket_json=<the JSON string above>)
Then: set_output("health_verdict", "escalated: <one-line summary>")

# Severity Guide

- low: Mild concern, worth noting. 5-9 consecutive bad verdicts.
- medium: Clear degradation pattern. 10-15 bad verdicts or brief stall (1-2 min).
- high: Serious issue. 15+ bad verdicts or stall 2-4 minutes or clear doom loop.
- critical: Worker is definitively stuck. 20+ bad verdicts or stall > 4 minutes.

# Conservative Bias

You MUST resist the urge to escalate prematurely. Worker agents naturally retry.
A node may legitimately need 5-8 retries before succeeding. Do not escalate unless:
1. The pattern is clear and sustained across your observation window, AND
2. The evidence shows no genuine progress

One missed escalation is less costly than two false alarms. The Queen will filter \
further. But do not be passive — genuine stalls and doom loops must be caught.

# Rules
- Never escalate on the FIRST check unless stall_minutes > 4
- Always call get_worker_health_summary FIRST before deciding anything
- All ticket fields are REQUIRED — do not submit partial tickets
- After any emit_escalation_ticket call, always set_output to complete the check
""",
)

# ---------------------------------------------------------------------------
# Entry Point
# ---------------------------------------------------------------------------

HEALTH_JUDGE_ENTRY_POINT = AsyncEntryPointSpec(
id="health_check",
name="Worker Health Check",
entry_node="judge",
trigger_type="timer",
trigger_config={
"interval_minutes": 2,
"run_immediately": True,  # Fire immediately to establish a baseline
},
isolation_level="isolated",  # Own memory namespace, not polluting worker's
)

# ---------------------------------------------------------------------------
# Graph
# ---------------------------------------------------------------------------

judge_graph = GraphSpec(
id="judge-graph",
goal_id=judge_goal.id,
version="1.0.0",
entry_node="judge",
entry_points={"health_check": "judge"},
terminal_nodes=["judge"],  # Judge node can terminate after each check
pause_nodes=[],
nodes=[judge_node],
edges=[],
conversation_mode="continuous",  # Conversation persists across timer ticks
async_entry_points=[HEALTH_JUDGE_ENTRY_POINT],
loop_config={
"max_iterations": 10,  # One check shouldn't take many turns
"max_tool_calls_per_turn": 3,  # get_summary + optionally emit_ticket
"max_context_tokens": 16000,  # Compact — judge only needs recent context
},
)

@@ -83,18 +83,18 @@ configure_logging(level="INFO", format="auto")
- Compact single-line format (easy to stream/parse)
- All trace context fields included automatically

### Human-Readable Format (Development)
### Human-Readable Format (Development / Terminal)

```
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] Starting agent execution
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] Processing input data [node_id:input-processor]
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] LLM call completed [latency_ms:1250] [tokens_used:450]
[INFO ] [agent:sales-agent] Starting agent execution
[INFO ] [agent:sales-agent] Processing input data [node_id:input-processor]
[INFO ] [agent:sales-agent] LLM call completed [latency_ms:1250] [tokens_used:450]
```

**Features:**
- Color-coded log levels
- Shortened IDs for readability (first 8 chars)
- Context prefix shows trace correlation
- Terminal output omits trace_id and execution_id for readability
- For full traceability (e.g. debugging), use `ENV=production` to get JSON file logs with trace_id and execution_id
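A hedged usage sketch of picking the output mode. The import path and the `format="json"` value are assumptions; only `configure_logging(level="INFO", format="auto")` and the ENV=production behavior are confirmed by this document:

```python
import os
# Hypothetical import path; adjust to wherever configure_logging lives.
from framework.observability.logging import configure_logging

fmt = "json" if os.getenv("ENV") == "production" else "auto"
configure_logging(level="INFO", format=fmt)
```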

## Trace Context Fields


@@ -4,8 +4,9 @@ Structured logging with automatic trace context propagation.
Key Features:
- Zero developer friction: Standard logger.info() calls get automatic context
- ContextVar-based propagation: Thread-safe and async-safe
- Dual output modes: JSON for production, human-readable for development
- Correlation IDs: trace_id follows entire request flow automatically
- Dual output modes: JSON for production (full trace_id/execution_id), human-readable for terminal
- Terminal omits trace_id/execution_id for readability
- Use ENV=production for file logs with full traceability

Architecture:
Runtime.start_run() → Generates trace_id, sets context once
@@ -101,10 +102,11 @@ class StructuredFormatter(logging.Formatter):

class HumanReadableFormatter(logging.Formatter):
"""
Human-readable formatter for development.
Human-readable formatter for development (terminal output).

Provides colorized logs with trace context for local debugging.
Includes trace_id prefix for correlation - AUTOMATIC!
Provides colorized logs for local debugging. Omits trace_id and execution_id
from the terminal for readability; use ENV=production (JSON file logs) when
traceability is needed.
"""

COLORS = {
@@ -118,18 +120,11 @@ class HumanReadableFormatter(logging.Formatter):

def format(self, record: logging.LogRecord) -> str:
"""Format log record as human-readable string."""
# Get trace context - AUTOMATIC!
# Get trace context; omit trace_id and execution_id in terminal for readability
context = trace_context.get() or {}
trace_id = context.get("trace_id", "")
execution_id = context.get("execution_id", "")
agent_id = context.get("agent_id", "")

# Build context prefix
prefix_parts = []
if trace_id:
prefix_parts.append(f"trace:{trace_id[:8]}")
if execution_id:
prefix_parts.append(f"exec:{execution_id[-8:]}")
if agent_id:
prefix_parts.append(f"agent:{agent_id}")


@@ -16,7 +16,6 @@ from framework.credentials.validation import (
from framework.graph import Goal
from framework.graph.edge import (
DEFAULT_MAX_TOKENS,
AsyncEntryPointSpec,
EdgeCondition,
EdgeSpec,
GraphSpec,
@@ -570,9 +569,6 @@ class AgentInfo:
constraints: list[dict]
required_tools: list[str]
has_tools_module: bool
# Multi-entry-point support
async_entry_points: list[dict] = field(default_factory=list)
is_multi_entry_point: bool = False


@dataclass
@@ -630,22 +626,6 @@ def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
)
edges.append(edge)

# Build AsyncEntryPointSpec objects for multi-entry-point support
async_entry_points = []
for aep_data in graph_data.get("async_entry_points", []):
async_entry_points.append(
AsyncEntryPointSpec(
id=aep_data["id"],
name=aep_data.get("name", aep_data["id"]),
entry_node=aep_data["entry_node"],
trigger_type=aep_data.get("trigger_type", "manual"),
trigger_config=aep_data.get("trigger_config", {}),
isolation_level=aep_data.get("isolation_level", "shared"),
priority=aep_data.get("priority", 0),
max_concurrent=aep_data.get("max_concurrent", 10),
)
)

# Build GraphSpec
graph = GraphSpec(
id=graph_data.get("id", "agent-graph"),
@@ -653,7 +633,6 @@ def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
version=graph_data.get("version", "1.0.0"),
entry_node=graph_data.get("entry_node", ""),
entry_points=graph_data.get("entry_points", {}),  # Support pause/resume architecture
async_entry_points=async_entry_points,  # Support multi-entry-point agents
terminal_nodes=graph_data.get("terminal_nodes", []),
pause_nodes=graph_data.get("pause_nodes", []),  # Support pause/resume architecture
nodes=nodes,
@@ -805,8 +784,6 @@ class AgentRunner:

# AgentRuntime — unified execution path for all agents
self._agent_runtime: AgentRuntime | None = None
self._uses_async_entry_points = self.graph.has_async_entry_points()

# Pre-load validation: structural checks + credentials.
# Fails fast with actionable guidance — no MCP noise on screen.
run_preload_validation(
@@ -965,7 +942,6 @@ class AgentRunner:
"version": "1.0.0",
"entry_node": getattr(agent_module, "entry_node", nodes[0].id),
"entry_points": getattr(agent_module, "entry_points", {}),
"async_entry_points": getattr(agent_module, "async_entry_points", []),
"terminal_nodes": getattr(agent_module, "terminal_nodes", []),
"pause_nodes": getattr(agent_module, "pause_nodes", []),
"nodes": nodes,
@@ -983,6 +959,10 @@ class AgentRunner:

graph = GraphSpec(**graph_kwargs)

# Read skill configuration from agent module
agent_default_skills = getattr(agent_module, "default_skills", None)
agent_skills = getattr(agent_module, "skills", None)

# Read runtime config (webhook settings, etc.) if defined
agent_runtime_config = getattr(agent_module, "runtime_config", None)

@@ -994,7 +974,7 @@ class AgentRunner:
configure_fn = getattr(agent_module, "configure_for_account", None)
list_accts_fn = getattr(agent_module, "list_connected_accounts", None)

return cls(
runner = cls(
agent_path=agent_path,
graph=graph,
goal=goal,
@@ -1010,6 +990,10 @@ class AgentRunner:
list_accounts=list_accts_fn,
credential_store=credential_store,
)
# Stash skill config for use in _setup()
runner._agent_default_skills = agent_default_skills
runner._agent_skills = agent_skills
return runner

# Fallback: load from agent.json (legacy JSON-based agents)
agent_json_path = agent_path / "agent.json"
@@ -1027,7 +1011,7 @@ class AgentRunner:
except json.JSONDecodeError as exc:
raise ValueError(f"Invalid JSON in agent export file: {agent_json_path}") from exc

return cls(
runner = cls(
agent_path=agent_path,
graph=graph,
goal=goal,
@@ -1038,6 +1022,9 @@ class AgentRunner:
skip_credential_validation=skip_credential_validation or False,
credential_store=credential_store,
)
runner._agent_default_skills = None
runner._agent_skills = None
return runner

def register_tool(
self,

@@ -1347,6 +1334,19 @@ class AgentRunner:
except Exception:
pass  # Best-effort — agent works without account info

# Skill configuration — the runtime handles discovery, loading, and
# prompt rendering. The runner just builds the config.
from framework.skills.config import SkillsConfig
from framework.skills.manager import SkillsManagerConfig

skills_manager_config = SkillsManagerConfig(
skills_config=SkillsConfig.from_agent_vars(
default_skills=getattr(self, "_agent_default_skills", None),
skills=getattr(self, "_agent_skills", None),
),
project_root=self.agent_path,
)

self._setup_agent_runtime(
tools,
tool_executor,
@@ -1354,6 +1354,7 @@ class AgentRunner:
accounts_data=accounts_data,
tool_provider_map=tool_provider_map,
event_bus=event_bus,
skills_manager_config=skills_manager_config,
)

def _get_api_key_env_var(self, model: str) -> str | None:
@@ -1449,23 +1450,10 @@ class AgentRunner:
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
event_bus=None,
skills_manager_config=None,
) -> None:
"""Set up multi-entry-point execution using AgentRuntime."""
# Convert AsyncEntryPointSpec to EntryPointSpec for AgentRuntime
entry_points = []
for async_ep in self.graph.async_entry_points:
ep = EntryPointSpec(
id=async_ep.id,
name=async_ep.name,
entry_node=async_ep.entry_node,
trigger_type=async_ep.trigger_type,
trigger_config=async_ep.trigger_config,
isolation_level=async_ep.isolation_level,
priority=async_ep.priority,
max_concurrent=async_ep.max_concurrent,
max_resurrections=async_ep.max_resurrections,
)
entry_points.append(ep)

# Always create a primary entry point for the graph's entry node.
# For multi-entry-point agents this ensures the primary path (e.g.

@@ -1522,26 +1510,37 @@ class AgentRunner:
accounts_data=accounts_data,
tool_provider_map=tool_provider_map,
event_bus=event_bus,
skills_manager_config=skills_manager_config,
)

# Pass intro_message through for TUI display
self._agent_runtime.intro_message = self.intro_message

# ------------------------------------------------------------------
# Execution modes
#
# run() – One-shot, blocking execution for worker agents
# (headless CLI via ``hive run``). Validates, runs
# the graph to completion, and returns the result.
#
# start() / trigger() – Long-lived runtime for the frontend (queen).
# start() boots the runtime; trigger() sends
# non-blocking execution requests. Used by the
# server session manager and API routes.
# ------------------------------------------------------------------

async def run(
self,
input_data: dict | None = None,
session_state: dict | None = None,
entry_point_id: str | None = None,
) -> ExecutionResult:
"""
Execute the agent with given input data.
"""One-shot execution for worker agents (headless CLI).

Validates credentials before execution. If any required credentials
are missing, returns an error result with instructions on how to
provide them.
Validates credentials, runs the graph to completion, and returns
the result. Used by ``hive run`` and programmatic callers.

For single-entry-point agents, this is the standard execution path.
For multi-entry-point agents, you can optionally specify which entry point to use.
For the frontend (queen), use start() + trigger() instead.

Args:
input_data: Input data for the agent (e.g., {"lead_id": "123"})
@@ -1667,7 +1666,12 @@ class AgentRunner:
# === Runtime API ===

async def start(self) -> None:
"""Start the agent runtime."""
"""Boot the agent runtime for the frontend (queen).

Pair with trigger() to send execution requests. Used by the
server session manager. For headless worker agents, use run()
instead.
"""
if self._agent_runtime is None:
self._setup()

@@ -1684,10 +1688,10 @@ class AgentRunner:
input_data: dict[str, Any],
correlation_id: str | None = None,
) -> str:
"""
Trigger execution at a specific entry point (non-blocking).
"""Send a non-blocking execution request to a running runtime.

Returns execution ID for tracking.
Used by the server API routes after start(). For headless
worker agents, use run() instead.

Args:
entry_point_id: Which entry point to trigger
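Usage of the two modes side by side. The entry point id is an assumption; `run()`, `start()`, and `trigger()` match the signatures shown above:

```python
from typing import Any

async def headless(runner: Any) -> None:
    # Worker agents: one-shot, blocking (used by `hive run`).
    result = await runner.run(input_data={"lead_id": "123"})
    print(result)

async def frontend(runner: Any) -> None:
    # Queen/frontend: boot once, then send non-blocking requests.
    await runner.start()
    execution_id = await runner.trigger("primary", input_data={"lead_id": "123"})
    print("tracking", execution_id)
```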

@@ -1772,19 +1776,6 @@ class AgentRunner:
for edge in self.graph.edges
]

# Build async entry points info
async_entry_points_info = [
{
"id": ep.id,
"name": ep.name,
"entry_node": ep.entry_node,
"trigger_type": ep.trigger_type,
"isolation_level": ep.isolation_level,
"max_concurrent": ep.max_concurrent,
}
for ep in self.graph.async_entry_points
]

return AgentInfo(
name=self.graph.id,
description=self.graph.description,
@@ -1811,8 +1802,6 @@ class AgentRunner:
],
required_tools=sorted(required_tools),
has_tools_module=(self.agent_path / "tools.py").exists(),
async_entry_points=async_entry_points_info,
is_multi_entry_point=self._uses_async_entry_points,
)

def validate(self) -> ValidationResult:
@@ -2127,18 +2116,6 @@ Respond with JSON only:
trigger_type="manual",
isolation_level="shared",
)
for aep in runner.graph.async_entry_points:
entry_points[aep.id] = EntryPointSpec(
id=aep.id,
name=aep.name,
entry_node=aep.entry_node,
trigger_type=aep.trigger_type,
trigger_config=aep.trigger_config,
isolation_level=aep.isolation_level,
priority=aep.priority,
max_concurrent=aep.max_concurrent,
)

await runtime.add_graph(
graph_id=gid,
graph=runner.graph,

@@ -454,11 +454,11 @@ An agent has requested handoff to the Hive Coder (via the `escalate` synthetic t

## Worker Health Monitoring

These events form the **judge → queen → operator** escalation pipeline.
These events form the **queen → operator** escalation pipeline.

### `worker_escalation_ticket`

The Worker Health Judge has detected a degradation pattern and is escalating to the Queen.
A worker degradation pattern has been detected and is being escalated to the Queen.

| Data Field | Type | Description |
| ---------- | ------ | ------------------------------------ |

@@ -8,6 +8,7 @@ while preserving the goal-driven approach.
import asyncio
import logging
import time
import uuid
from collections.abc import Callable
from dataclasses import dataclass, field
from datetime import datetime
@@ -28,6 +29,7 @@ if TYPE_CHECKING:
from framework.graph.edge import GraphSpec
from framework.graph.goal import Goal
from framework.llm.provider import LLMProvider, Tool
from framework.skills.manager import SkillsManagerConfig

logger = logging.getLogger(__name__)

@@ -131,6 +133,10 @@ class AgentRuntime:
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
event_bus: "EventBus | None" = None,
skills_manager_config: "SkillsManagerConfig | None" = None,
# Deprecated — pass skills_manager_config instead.
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
):
"""
Initialize agent runtime.
@@ -152,7 +158,13 @@ class AgentRuntime:
event_bus: Optional external EventBus. If provided, the runtime shares
this bus instead of creating its own. Used by SessionManager to
share a single bus between queen, worker, and judge.
skills_manager_config: Skill configuration — the runtime owns
discovery, loading, and prompt rendering internally.
skills_catalog_prompt: Deprecated. Pre-rendered skills catalog.
protocols_prompt: Deprecated. Pre-rendered operational protocols.
"""
from framework.skills.manager import SkillsManager

self.graph = graph
self.goal = goal
self._config = config or AgentRuntimeConfig()
@@ -160,6 +172,29 @@ class AgentRuntime:
self._checkpoint_config = checkpoint_config
self.accounts_prompt = accounts_prompt

# --- Skill lifecycle: runtime owns the SkillsManager ---
if skills_manager_config is not None:
# New path: config-driven, runtime handles loading
self._skills_manager = SkillsManager(skills_manager_config)
self._skills_manager.load()
elif skills_catalog_prompt or protocols_prompt:
# Legacy path: caller passed pre-rendered strings
import warnings

warnings.warn(
"Passing pre-rendered skills_catalog_prompt/protocols_prompt "
"is deprecated. Pass skills_manager_config instead.",
DeprecationWarning,
stacklevel=2,
)
self._skills_manager = SkillsManager.from_precomputed(
skills_catalog_prompt, protocols_prompt
)
else:
# Bare constructor: auto-load defaults
self._skills_manager = SkillsManager()
self._skills_manager.load()

# Primary graph identity
self._graph_id: str = graph_id or "primary"
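The three construction paths can be exercised with a stand-in manager. `SkillsManager`'s real API is only partially visible in this diff, so the stub below is an assumption:

```python
class StubSkillsManager:
    def __init__(self, config=None):
        self.config, self.loaded = config, False

    def load(self) -> None:
        self.loaded = True

    @classmethod
    def from_precomputed(cls, catalog: str, protocols: str) -> "StubSkillsManager":
        mgr = cls()
        mgr.catalog, mgr.protocols = catalog, protocols
        return mgr

def build(config=None, catalog: str = "", protocols: str = "") -> StubSkillsManager:
    if config is not None:  # new path: config-driven, loads on construction
        mgr = StubSkillsManager(config)
        mgr.load()
    elif catalog or protocols:  # legacy path: pre-rendered strings, no load()
        mgr = StubSkillsManager.from_precomputed(catalog, protocols)
    else:  # bare constructor: auto-load defaults
        mgr = StubSkillsManager()
        mgr.load()
    return mgr

assert build(config={"skills": []}).loaded
assert not build(catalog="<skills/>").loaded
```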

@@ -215,6 +250,18 @@ class AgentRuntime:
# Optional greeting shown to user on TUI load (set by AgentRunner)
self.intro_message: str = ""

# ------------------------------------------------------------------
# Skill prompt accessors (read by ExecutionStream constructors)
# ------------------------------------------------------------------

@property
def skills_catalog_prompt(self) -> str:
return self._skills_manager.skills_catalog_prompt

@property
def protocols_prompt(self) -> str:
return self._skills_manager.protocols_prompt

def register_entry_point(self, spec: EntryPointSpec) -> None:
"""
Register a named entry point for the agent.
@@ -292,6 +339,8 @@ class AgentRuntime:
accounts_prompt=self._accounts_prompt,
accounts_data=self._accounts_data,
tool_provider_map=self._tool_provider_map,
skills_catalog_prompt=self.skills_catalog_prompt,
protocols_prompt=self.protocols_prompt,
)
await stream.start()
self._streams[ep_id] = stream
@@ -392,18 +441,24 @@ class AgentRuntime:

tc = spec.trigger_config
cron_expr = tc.get("cron")
interval = tc.get("interval_minutes")
_raw_interval = tc.get("interval_minutes")
interval = float(_raw_interval) if _raw_interval is not None else None
run_immediately = tc.get("run_immediately", False)

if cron_expr:
# Cron expression mode — takes priority over interval_minutes
try:
from croniter import croniter
except ImportError as e:
raise RuntimeError(
"croniter is required for cron-based entry points. "
"Install it with: uv pip install croniter"
) from e

# Validate the expression upfront
try:
if not croniter.is_valid(cron_expr):
raise ValueError(f"Invalid cron expression: {cron_expr}")
except (ImportError, ValueError) as e:
except ValueError as e:
logger.warning(
"Entry point '%s' has invalid cron config: %s",
ep_id,
@@ -543,7 +598,7 @@ class AgentRuntime:
ep_id,
cron_expr,
run_immediately,
idle_timeout=tc.get("idle_timeout_seconds", 300),
idle_timeout=float(tc.get("idle_timeout_seconds", 300)),
)()
)
self._timer_tasks.append(task)
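For reference, the two timer configuration shapes these branches accept, using only keys read above (`cron`, `interval_minutes`, `run_immediately`, `idle_timeout_seconds`):

```python
interval_config = {
    "interval_minutes": 2,  # coerced to float by the runtime
    "run_immediately": True,
    "idle_timeout_seconds": 300,
}

cron_config = {
    "cron": "*/5 * * * *",  # takes priority over interval_minutes
    "run_immediately": False,
}

# Upfront validation, as in the code above (requires croniter):
from croniter import croniter
assert croniter.is_valid(cron_config["cron"])
```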

@@ -673,7 +728,7 @@ class AgentRuntime:
ep_id,
interval,
run_immediately,
idle_timeout=tc.get("idle_timeout_seconds", 300),
idle_timeout=float(tc.get("idle_timeout_seconds", 300)),
)()
)
self._timer_tasks.append(task)
@@ -822,7 +877,8 @@ class AgentRuntime:
if stream is None:
raise ValueError(f"Entry point '{entry_point_id}' not found")

return await stream.execute(input_data, correlation_id, session_state)
run_id = uuid.uuid4().hex[:12]
return await stream.execute(input_data, correlation_id, session_state, run_id=run_id)

async def trigger_and_wait(
self,
@@ -919,6 +975,8 @@ class AgentRuntime:
accounts_prompt=self._accounts_prompt,
accounts_data=self._accounts_data,
tool_provider_map=self._tool_provider_map,
skills_catalog_prompt=self.skills_catalog_prompt,
protocols_prompt=self.protocols_prompt,
)
if self._running:
await stream.start()
@@ -997,7 +1055,8 @@ class AgentRuntime:
if spec.trigger_type != "timer":
continue
tc = spec.trigger_config
interval = tc.get("interval_minutes")
_raw_interval = tc.get("interval_minutes")
interval = float(_raw_interval) if _raw_interval is not None else None
run_immediately = tc.get("run_immediately", False)

if interval and interval > 0 and self._running:
@@ -1142,7 +1201,7 @@ class AgentRuntime:
ep_id,
interval,
run_immediately,
idle_timeout=tc.get("idle_timeout_seconds", 300),
idle_timeout=float(tc.get("idle_timeout_seconds", 300)),
)()
)
timer_tasks.append(task)
@@ -1359,8 +1418,8 @@ class AgentRuntime:
allowed_keys = set(entry_node.input_keys)

# Search primary graph's streams for an active session.
# Skip isolated streams (e.g. health judge) — they have their own
# session directories and must never be used as a shared session.
# Skip isolated streams — they have their own session directories
# and must never be used as a shared session.
all_streams: list[tuple[str, ExecutionStream]] = []
for _gid, reg in self._graphs.items():
for ep_id, stream in reg.streams.items():
@@ -1697,6 +1756,10 @@ def create_agent_runtime(
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
event_bus: "EventBus | None" = None,
skills_manager_config: "SkillsManagerConfig | None" = None,
# Deprecated — pass skills_manager_config instead.
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
) -> AgentRuntime:
"""
Create and configure an AgentRuntime with entry points.
@@ -1723,6 +1786,10 @@ def create_agent_runtime(
accounts_data: Raw account data for per-node prompt generation.
tool_provider_map: Tool name to provider name mapping for account routing.
event_bus: Optional external EventBus to share with other components.
skills_manager_config: Skill configuration — the runtime owns
discovery, loading, and prompt rendering internally.
skills_catalog_prompt: Deprecated. Pre-rendered skills catalog.
protocols_prompt: Deprecated. Pre-rendered operational protocols.

Returns:
Configured AgentRuntime (not yet started)
@@ -1749,6 +1816,9 @@ def create_agent_runtime(
accounts_data=accounts_data,
tool_provider_map=tool_provider_map,
event_bus=event_bus,
skills_manager_config=skills_manager_config,
skills_catalog_prompt=skills_catalog_prompt,
protocols_prompt=protocols_prompt,
)

for spec in entry_points:

@@ -1,4 +1,4 @@
"""EscalationTicket — structured schema for worker health judge escalations."""
"""EscalationTicket — structured schema for worker health escalations."""

from __future__ import annotations

@@ -10,10 +10,10 @@ from pydantic import BaseModel, Field


class EscalationTicket(BaseModel):
"""Structured escalation report emitted by the Worker Health Judge.
"""Structured escalation report for worker health monitoring.

The judge must fill every field before calling emit_escalation_ticket.
Pydantic validation rejects partial tickets, preventing impulsive escalation.
All fields must be filled before calling emit_escalation_ticket.
Pydantic validation rejects partial tickets.
"""

ticket_id: str = Field(default_factory=lambda: str(uuid4()))
@@ -25,7 +25,7 @@ class EscalationTicket(BaseModel):
worker_node_id: str
worker_graph_id: str

# Problem characterization (filled by judge via LLM deliberation)
# Problem characterization
severity: Literal["low", "medium", "high", "critical"]
cause: str  # Human-readable: "Node has produced 18 RETRY verdicts..."
judge_reasoning: str  # Judge's own deliberation chain
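A cut-down model is enough to see the validation behavior the docstring promises. Only a subset of the fields shown above is reproduced here:

```python
from typing import Literal
from uuid import uuid4

from pydantic import BaseModel, Field, ValidationError

class TicketSketch(BaseModel):
    ticket_id: str = Field(default_factory=lambda: str(uuid4()))
    worker_agent_id: str
    severity: Literal["low", "medium", "high", "critical"]
    cause: str
    judge_reasoning: str

try:
    TicketSketch(worker_agent_id="worker-1", severity="high")  # partial ticket
except ValidationError as exc:
    print(f"rejected: {exc.error_count()} missing field(s)")  # cause, judge_reasoning
```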

@@ -97,6 +97,7 @@ class EventType(StrEnum):
# Client I/O (client_facing=True nodes only)
CLIENT_OUTPUT_DELTA = "client_output_delta"
CLIENT_INPUT_REQUESTED = "client_input_requested"
CLIENT_INPUT_RECEIVED = "client_input_received"

# Internal node observability (client_facing=False nodes)
NODE_INTERNAL_OUTPUT = "node_internal_output"
@@ -104,7 +105,7 @@ class EventType(StrEnum):
NODE_STALLED = "node_stalled"
NODE_TOOL_DOOM_LOOP = "node_tool_doom_loop"

# Judge decisions
# Judge decisions (implicit judge in event loop nodes)
JUDGE_VERDICT = "judge_verdict"

# Output tracking
@@ -126,7 +127,7 @@ class EventType(StrEnum):
# Escalation (agent requests handoff to queen)
ESCALATION_REQUESTED = "escalation_requested"

# Worker health monitoring (judge → queen → operator)
# Worker health monitoring
WORKER_ESCALATION_TICKET = "worker_escalation_ticket"
QUEEN_INTERVENTION_REQUESTED = "queen_intervention_requested"

@@ -152,6 +153,13 @@ class EventType(StrEnum):
# Subagent reports (one-way progress updates from sub-agents)
SUBAGENT_REPORT = "subagent_report"

# Trigger lifecycle (queen-level triggers / heartbeats)
TRIGGER_AVAILABLE = "trigger_available"
TRIGGER_ACTIVATED = "trigger_activated"
TRIGGER_DEACTIVATED = "trigger_deactivated"
TRIGGER_FIRED = "trigger_fired"
TRIGGER_REMOVED = "trigger_removed"


@dataclass
class AgentEvent:
@@ -165,10 +173,11 @@ class AgentEvent:
timestamp: datetime = field(default_factory=datetime.now)
correlation_id: str | None = None  # For tracking related events
graph_id: str | None = None  # Which graph emitted this event (multi-graph sessions)
run_id: str | None = None  # Unique ID per trigger() invocation — used for run dividers

def to_dict(self) -> dict:
"""Convert to dictionary for serialization."""
return {
d = {
"type": self.type.value,
"stream_id": self.stream_id,
"node_id": self.node_id,
@@ -178,6 +187,9 @@ class AgentEvent:
"correlation_id": self.correlation_id,
"graph_id": self.graph_id,
}
if self.run_id is not None:
d["run_id"] = self.run_id
return d


# Type for event handlers
@@ -246,6 +258,128 @@ class EventBus:
self._semaphore = asyncio.Semaphore(max_concurrent_handlers)
self._subscription_counter = 0
self._lock = asyncio.Lock()
# Per-session persistent event log (always-on, survives restarts)
self._session_log: IO[str] | None = None
self._session_log_iteration_offset: int = 0
# Accumulator for client_output_delta snapshots — flushed on llm_turn_complete.
# Key: (stream_id, node_id, execution_id, iteration, inner_turn) → latest AgentEvent
self._pending_output_snapshots: dict[tuple, AgentEvent] = {}

def set_session_log(self, path: Path, *, iteration_offset: int = 0) -> None:
"""Enable per-session event persistence to a JSONL file.

Called once when the queen starts so that all events survive server
restarts and can be replayed to reconstruct the frontend state.

``iteration_offset`` is added to the ``iteration`` field in logged
events so that cold-resumed sessions produce monotonically increasing
iteration values — preventing frontend message ID collisions between
the original run and resumed runs.
"""
if self._session_log is not None:
try:
self._session_log.close()
except Exception:
pass
path.parent.mkdir(parents=True, exist_ok=True)
self._session_log = open(path, "a", encoding="utf-8")  # noqa: SIM115
self._session_log_iteration_offset = iteration_offset
logger.info("Session event log → %s (iteration_offset=%d)", path, iteration_offset)
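Since events are appended as one JSON object per line, replaying the log is a straight read. A sketch (the path is illustrative; field names follow `to_dict()` above):

```python
import json
from pathlib import Path

def replay(path: Path) -> list[dict]:
    """Load every persisted event, in publish order."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

log = Path.home() / ".hive" / "sessions" / "example" / "events.jsonl"
for event in replay(log):
    print(event["type"], event.get("run_id", "-"))
```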

def close_session_log(self) -> None:
"""Close the per-session event log file."""
# Flush any pending output snapshots before closing
self._flush_pending_snapshots()
if self._session_log is not None:
try:
self._session_log.close()
except Exception:
pass
self._session_log = None

# Event types that are high-frequency streaming deltas — accumulated rather
# than written individually to the session log.
_STREAMING_DELTA_TYPES = frozenset(
{
EventType.CLIENT_OUTPUT_DELTA,
EventType.LLM_TEXT_DELTA,
EventType.LLM_REASONING_DELTA,
}
)

def _write_session_log_event(self, event: AgentEvent) -> None:
"""Write an event to the per-session log with streaming coalescing.

Streaming deltas (client_output_delta, llm_text_delta) are accumulated
in memory. When llm_turn_complete fires, any pending snapshots for that
(stream_id, node_id, execution_id) are flushed as single consolidated
events before the turn-complete event itself is written.

Note: iteration offset is already applied in publish() before this is
called, so events here already have correct iteration values.
"""
if self._session_log is None:
return

if event.type in self._STREAMING_DELTA_TYPES:
# Accumulate — keep only the latest event (which carries the full snapshot)
key = (
event.stream_id,
event.node_id,
event.execution_id,
event.data.get("iteration"),
event.data.get("inner_turn", 0),
)
self._pending_output_snapshots[key] = event
return

# On turn-complete, flush accumulated snapshots for this stream first
if event.type == EventType.LLM_TURN_COMPLETE:
self._flush_pending_snapshots(
stream_id=event.stream_id,
node_id=event.node_id,
execution_id=event.execution_id,
)

line = json.dumps(event.to_dict(), default=str)
self._session_log.write(line + "\n")
self._session_log.flush()
|
||||
|
||||
def _flush_pending_snapshots(
|
||||
self,
|
||||
stream_id: str | None = None,
|
||||
node_id: str | None = None,
|
||||
execution_id: str | None = None,
|
||||
) -> None:
|
||||
"""Flush accumulated streaming snapshots to the session log.
|
||||
|
||||
When called with filters, only matching entries are flushed.
|
||||
When called without filters (e.g. on close), everything is flushed.
|
||||
"""
|
||||
if self._session_log is None or not self._pending_output_snapshots:
|
||||
return
|
||||
|
||||
to_flush: list[tuple] = []
|
||||
for key, _evt in self._pending_output_snapshots.items():
|
||||
if stream_id is not None:
|
||||
k_stream, k_node, k_exec, _, _ = key
|
||||
if k_stream != stream_id or k_node != node_id or k_exec != execution_id:
|
||||
continue
|
||||
to_flush.append(key)
|
||||
|
||||
for key in to_flush:
|
||||
evt = self._pending_output_snapshots.pop(key)
|
||||
try:
|
||||
line = json.dumps(evt.to_dict(), default=str)
|
||||
self._session_log.write(line + "\n")
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
if to_flush:
|
||||
try:
|
||||
self._session_log.flush()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
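The net effect: a burst of deltas collapses to one persisted line per (stream, node, execution, iteration, inner_turn) key. A self-contained sketch of the same accumulate-then-flush idea (not the EventBus itself, and with a simplified key):

```python
# Standalone sketch of the coalescing strategy.
pending: dict[tuple, dict] = {}

def record(event: dict) -> list[dict]:
    """Return the log lines to persist for this event."""
    if event["type"] in {"client_output_delta", "llm_text_delta"}:
        key = (event["stream_id"], event["node_id"], event.get("iteration"))
        pending[key] = event  # keep only the latest snapshot for this key
        return []
    flushed = []
    if event["type"] == "llm_turn_complete":
        for key in [k for k in pending if k[0] == event["stream_id"]]:
            flushed.append(pending.pop(key))
    return [*flushed, event]

record({"type": "llm_text_delta", "stream_id": "s", "node_id": "n", "snapshot": "He"})
record({"type": "llm_text_delta", "stream_id": "s", "node_id": "n", "snapshot": "Hello"})
lines = record({"type": "llm_turn_complete", "stream_id": "s", "node_id": "n"})
assert len(lines) == 2  # one consolidated delta, then the turn-complete itself
assert lines[0]["snapshot"] == "Hello"
```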
    def subscribe(
        self,
@@ -311,6 +445,19 @@ class EventBus:
        Args:
            event: Event to publish
        """
        # Apply iteration offset at the source so ALL consumers (SSE subscribers,
        # event history, session log) see the same monotonically increasing
        # iteration values. Without this, live SSE would use raw iterations
        # while events.jsonl would use offset iterations, causing ID collisions
        # on the frontend when replaying after cold resume.
        if (
            self._session_log_iteration_offset
            and isinstance(event.data, dict)
            and "iteration" in event.data
        ):
            offset = self._session_log_iteration_offset
            event.data = {**event.data, "iteration": event.data["iteration"] + offset}

        # Add to history
        async with self._lock:
            self._event_history.append(event)
@@ -331,6 +478,15 @@ class EventBus:
            except Exception:
                pass  # never break event delivery

        # Per-session persistent log (always-on when set_session_log was called).
        # Streaming deltas are coalesced: client_output_delta and llm_text_delta
        # are accumulated and flushed as a single snapshot event on llm_turn_complete.
        if self._session_log is not None:
            try:
                self._write_session_log_event(event)
            except Exception:
                pass  # never break event delivery

        # Find matching subscriptions
        matching_handlers: list[EventHandler] = []

@@ -391,6 +547,7 @@ class EventBus:
        execution_id: str,
        input_data: dict[str, Any] | None = None,
        correlation_id: str | None = None,
        run_id: str | None = None,
    ) -> None:
        """Emit execution started event."""
        await self.publish(
@@ -400,6 +557,7 @@ class EventBus:
                execution_id=execution_id,
                data={"input": input_data or {}},
                correlation_id=correlation_id,
                run_id=run_id,
            )
        )

@@ -409,6 +567,7 @@ class EventBus:
        execution_id: str,
        output: dict[str, Any] | None = None,
        correlation_id: str | None = None,
        run_id: str | None = None,
    ) -> None:
        """Emit execution completed event."""
        await self.publish(
@@ -418,6 +577,7 @@ class EventBus:
                execution_id=execution_id,
                data={"output": output or {}},
                correlation_id=correlation_id,
                run_id=run_id,
            )
        )

@@ -427,6 +587,7 @@ class EventBus:
        execution_id: str,
        error: str,
        correlation_id: str | None = None,
        run_id: str | None = None,
    ) -> None:
        """Emit execution failed event."""
        await self.publish(
@@ -436,6 +597,7 @@ class EventBus:
                execution_id=execution_id,
                data={"error": error},
                correlation_id=correlation_id,
                run_id=run_id,
            )
        )

@@ -527,15 +689,19 @@ class EventBus:
        node_id: str,
        iteration: int,
        execution_id: str | None = None,
        extra_data: dict[str, Any] | None = None,
    ) -> None:
        """Emit node loop iteration event."""
        data: dict[str, Any] = {"iteration": iteration}
        if extra_data:
            data.update(extra_data)
        await self.publish(
            AgentEvent(
                type=EventType.NODE_LOOP_ITERATION,
                stream_id=stream_id,
                node_id=node_id,
                execution_id=execution_id,
                data={"iteration": iteration},
                data=data,
            )
        )

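Since each `emit_execution_*` helper now threads `run_id` through, a caller can bracket one trigger() invocation with a fresh ID. A sketch only: the `stream_id`/`node_id` keyword names beyond `run_id` are assumed from the surrounding context, and `bus`/`execute` are placeholders.

```python
import uuid

async def run_once(bus, execute) -> None:
    """Bracket one trigger() invocation with a single run ID."""
    run_id = f"run_{uuid.uuid4().hex[:8]}"  # one ID per trigger() invocation
    await bus.emit_execution_started(
        stream_id="worker", node_id="entry", execution_id="exec-1", run_id=run_id
    )
    try:
        await execute()  # placeholder for the actual graph execution
    except Exception as e:
        await bus.emit_execution_failed(
            stream_id="worker", node_id="entry", execution_id="exec-1",
            error=str(e), run_id=run_id,
        )
    else:
        await bus.emit_execution_completed(
            stream_id="worker", node_id="entry", execution_id="exec-1", run_id=run_id
        )
```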
@@ -584,6 +750,7 @@ class EventBus:
        content: str,
        snapshot: str,
        execution_id: str | None = None,
        inner_turn: int = 0,
    ) -> None:
        """Emit LLM text delta event."""
        await self.publish(
@@ -592,7 +759,7 @@ class EventBus:
                stream_id=stream_id,
                node_id=node_id,
                execution_id=execution_id,
                data={"content": content, "snapshot": snapshot},
                data={"content": content, "snapshot": snapshot, "inner_turn": inner_turn},
            )
        )

@@ -708,9 +875,10 @@ class EventBus:
        snapshot: str,
        execution_id: str | None = None,
        iteration: int | None = None,
        inner_turn: int = 0,
    ) -> None:
        """Emit client output delta event (client_facing=True nodes)."""
        data: dict = {"content": content, "snapshot": snapshot}
        data: dict = {"content": content, "snapshot": snapshot, "inner_turn": inner_turn}
        if iteration is not None:
            data["iteration"] = iteration
        await self.publish(
@@ -1009,7 +1177,7 @@ class EventBus:
        ticket: dict,
        execution_id: str | None = None,
    ) -> None:
        """Emitted by health judge when worker shows a degradation pattern."""
        """Emitted when worker shows a degradation pattern."""
        await self.publish(
            AgentEvent(
                type=EventType.WORKER_ESCALATION_TICKET,

@@ -127,6 +127,7 @@ class ExecutionContext:
    input_data: dict[str, Any]
    isolation_level: IsolationLevel
    session_state: dict[str, Any] | None = None  # For resuming from pause
    run_id: str | None = None  # Unique ID per trigger() invocation
    started_at: datetime = field(default_factory=datetime.now)
    completed_at: datetime | None = None
    status: str = "pending"  # pending, running, completed, failed, paused
@@ -185,6 +186,8 @@ class ExecutionStream:
        accounts_prompt: str = "",
        accounts_data: list[dict] | None = None,
        tool_provider_map: dict[str, str] | None = None,
        skills_catalog_prompt: str = "",
        protocols_prompt: str = "",
    ):
        """
        Initialize execution stream.
@@ -208,6 +211,8 @@ class ExecutionStream:
            accounts_prompt: Connected accounts block for system prompt injection
            accounts_data: Raw account data for per-node prompt generation
            tool_provider_map: Tool name to provider name mapping for account routing
            skills_catalog_prompt: Available skills catalog for system prompt
            protocols_prompt: Default skill operational protocols for system prompt
        """
        self.stream_id = stream_id
        self.entry_spec = entry_spec
@@ -229,6 +234,21 @@ class ExecutionStream:
        self._accounts_prompt = accounts_prompt
        self._accounts_data = accounts_data
        self._tool_provider_map = tool_provider_map
        self._skills_catalog_prompt = skills_catalog_prompt
        self._protocols_prompt = protocols_prompt

        _es_logger = logging.getLogger(__name__)
        if protocols_prompt:
            _es_logger.info(
                "ExecutionStream[%s] received protocols_prompt (%d chars)",
                stream_id,
                len(protocols_prompt),
            )
        else:
            _es_logger.warning(
                "ExecutionStream[%s] received EMPTY protocols_prompt",
                stream_id,
            )

        # Create stream-scoped runtime
        self._runtime = StreamRuntime(
@@ -425,11 +445,36 @@ class ExecutionStream:
                return True
        return False

    async def inject_trigger(
        self,
        node_id: str,
        trigger: Any,
    ) -> bool:
        """Inject a trigger event into a running queen EventLoopNode.

        Searches active executors for a node matching ``node_id`` and calls
        its ``inject_trigger()`` method to wake the queen.

        Args:
            node_id: The queen EventLoopNode ID.
            trigger: A ``TriggerEvent`` instance (typed as Any to avoid
                circular imports with graph layer).

        Returns True if the trigger was delivered, False otherwise.
        """
        for executor in self._active_executors.values():
            node = executor.node_registry.get(node_id)
            if node is not None and hasattr(node, "inject_trigger"):
                await node.inject_trigger(trigger)
                return True
        return False

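A sketch of how a timer callback might wake the queen through this path. Only `inject_trigger`'s contract comes from the code above; the `TriggerEvent` shape here is an assumption for illustration.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class TriggerEvent:  # assumed shape, for illustration only
    trigger_id: str
    task: str
    payload: dict[str, Any] = field(default_factory=dict)

async def fire_timer(stream, trigger_id: str, task: str) -> None:
    delivered = await stream.inject_trigger(
        node_id="queen",
        trigger=TriggerEvent(trigger_id=trigger_id, task=task),
    )
    if not delivered:
        # No running executor had a node exposing inject_trigger, e.g. the
        # queen loop has not started yet; callers may retry or queue.
        print(f"trigger {trigger_id!r} not delivered")
```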
    async def execute(
        self,
        input_data: dict[str, Any],
        correlation_id: str | None = None,
        session_state: dict[str, Any] | None = None,
        run_id: str | None = None,
    ) -> str:
        """
        Queue an execution and return its ID.
@@ -440,6 +485,7 @@ class ExecutionStream:
            input_data: Input data for this execution
            correlation_id: Optional ID to correlate related executions
            session_state: Optional session state to resume from (with paused_at, memory)
            run_id: Unique ID for this trigger invocation (for run dividers)

        Returns:
            Execution ID for tracking
@@ -500,6 +546,7 @@ class ExecutionStream:
            input_data=input_data,
            isolation_level=self.entry_spec.get_isolation_level(),
            session_state=session_state,
            run_id=run_id,
        )

        async with self._lock:
@@ -575,7 +622,9 @@ class ExecutionStream:
            execution_id=execution_id,
            input_data=ctx.input_data,
            correlation_id=ctx.correlation_id,
            run_id=ctx.run_id,
        )
        self._write_run_event(execution_id, ctx.run_id, "run_started")

        # Create execution-scoped memory
        self._state_manager.create_memory(
@@ -645,6 +694,8 @@ class ExecutionStream:
                accounts_prompt=self._accounts_prompt,
                accounts_data=self._accounts_data,
                tool_provider_map=self._tool_provider_map,
                skills_catalog_prompt=self._skills_catalog_prompt,
                protocols_prompt=self._protocols_prompt,
            )
            # Track executor so inject_input() can reach EventLoopNode instances
            self._active_executors[execution_id] = executor
@@ -740,6 +791,7 @@ class ExecutionStream:
                    execution_id=execution_id,
                    output=result.output,
                    correlation_id=ctx.correlation_id,
                    run_id=ctx.run_id,
                )
            elif result.paused_at:
                # The executor returns paused_at on CancelledError but
@@ -757,8 +809,22 @@ class ExecutionStream:
                    execution_id=execution_id,
                    error=result.error or "Unknown error",
                    correlation_id=ctx.correlation_id,
                    run_id=ctx.run_id,
                )

            # Write run event for historical restoration
            if result.success:
                self._write_run_event(execution_id, ctx.run_id, "run_completed")
            elif result.paused_at:
                self._write_run_event(execution_id, ctx.run_id, "run_paused")
            else:
                self._write_run_event(
                    execution_id,
                    ctx.run_id,
                    "run_failed",
                    {"error": result.error or "Unknown error"},
                )

            logger.debug(f"Execution {execution_id} completed: success={result.success}")

        except asyncio.CancelledError:
@@ -818,8 +884,10 @@ class ExecutionStream:
                execution_id=execution_id,
                error=cancel_reason,
                correlation_id=ctx.correlation_id,
                run_id=ctx.run_id,
            )

            self._write_run_event(execution_id, ctx.run_id, "run_cancelled")
            # Don't re-raise - we've handled it and saved state

        except Exception as e:
@@ -856,7 +924,9 @@ class ExecutionStream:
                execution_id=execution_id,
                error=str(e),
                correlation_id=ctx.correlation_id,
                run_id=ctx.run_id,
            )
            self._write_run_event(execution_id, ctx.run_id, "run_failed", {"error": str(e)})

        finally:
            # Clean up state
@@ -872,6 +942,36 @@ class ExecutionStream:
            self._completion_events.pop(execution_id, None)
            self._execution_tasks.pop(execution_id, None)

    def _write_run_event(
        self,
        execution_id: str,
        run_id: str | None,
        event: str,
        extra: dict[str, Any] | None = None,
    ) -> None:
        """Append a run lifecycle event to runs.jsonl for historical restoration."""
        if not self._session_store or not run_id:
            return
        import json as _json

        session_dir = self._session_store.get_session_path(execution_id)
        runs_file = session_dir / "runs.jsonl"
        now = datetime.now()
        record = {
            "run_id": run_id,
            "event": event,
            "timestamp": now.isoformat(),
            "created_at": now.timestamp(),
        }
        if extra:
            record.update(extra)
        try:
            runs_file.parent.mkdir(parents=True, exist_ok=True)
            with open(runs_file, "a", encoding="utf-8") as f:
                f.write(_json.dumps(record) + "\n")
        except OSError:
            pass  # Non-critical — don't break execution

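Each call appends one self-describing JSON line, so a session's `runs.jsonl` reads like the following (timestamps and IDs are made up; the keys mirror the record built above):

```python
import json

# Illustrative runs.jsonl content for one run that started and then failed.
for line in [
    '{"run_id": "run_ab12", "event": "run_started", "timestamp": "2025-01-01T10:00:00", "created_at": 1735725600.0}',
    '{"run_id": "run_ab12", "event": "run_failed", "timestamp": "2025-01-01T10:09:00", "created_at": 1735726140.0, "error": "boom"}',
]:
    print(json.loads(line)["event"])
```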
    async def _write_session_state(
        self,
        execution_id: str,
@@ -978,8 +1078,8 @@ class ExecutionStream:
    def _create_modified_graph(self) -> "GraphSpec":
        """Create a graph with the entry point overridden.

        Preserves the original graph's entry_points and async_entry_points
        so that validation correctly considers ALL entry nodes reachable.
        Preserves the original graph's entry_points so that validation
        correctly considers ALL entry nodes reachable.
        Each stream only executes from its own entry_node, but the full
        graph must validate with all entry points accounted for.
        """
@@ -1004,7 +1104,6 @@ class ExecutionStream:
            version=self.graph.version,
            entry_node=self.entry_spec.entry_node,  # Use our entry point
            entry_points=merged_entry_points,
            async_entry_points=self.graph.async_entry_points,
            terminal_nodes=self.graph.terminal_nodes,
            pause_nodes=self.graph.pause_nodes,
            nodes=self.graph.nodes,

@@ -47,25 +47,34 @@ class RuntimeLogStore:
        self._base_path = base_path
        # Note: _runs_dir is determined per-run_id by _get_run_dir()

    def _session_logs_dir(self, run_id: str) -> Path:
        """Return the unified session-backed logs directory for a run ID."""
        is_runtime_logs = self._base_path.name == "runtime_logs"
        root = self._base_path.parent if is_runtime_logs else self._base_path
        return root / "sessions" / run_id / "logs"

    def _legacy_run_dir(self, run_id: str) -> Path:
        """Return the deprecated standalone runs directory for a run ID."""
        return self._base_path / "runs" / run_id

    def _get_run_dir(self, run_id: str) -> Path:
        """Determine run directory path based on run_id format.

        - New format (session_*): {storage_root}/sessions/{run_id}/logs/
        - Session-backed runs: {storage_root}/sessions/{run_id}/logs/
        - Old format (anything else): {base_path}/runs/{run_id}/ (deprecated)
        """
        if run_id.startswith("session_"):
            is_runtime_logs = self._base_path.name == "runtime_logs"
            root = self._base_path.parent if is_runtime_logs else self._base_path
            return root / "sessions" / run_id / "logs"
        session_run_dir = self._session_logs_dir(run_id)
        if session_run_dir.exists() or run_id.startswith("session_"):
            return session_run_dir
        import warnings

        warnings.warn(
            f"Reading logs from deprecated location for run_id={run_id}. "
            "New sessions use unified storage at sessions/session_*/logs/",
            "New sessions use unified storage at sessions/<session_id>/logs/",
            DeprecationWarning,
            stacklevel=3,
        )
        return self._base_path / "runs" / run_id
        return self._legacy_run_dir(run_id)

    # -------------------------------------------------------------------
    # Incremental write (sync — called from locked sections)
@@ -76,6 +85,10 @@ class RuntimeLogStore:
        run_dir = self._get_run_dir(run_id)
        run_dir.mkdir(parents=True, exist_ok=True)

    def ensure_session_run_dir(self, run_id: str) -> None:
        """Create the unified session-backed log directory immediately."""
        self._session_logs_dir(run_id).mkdir(parents=True, exist_ok=True)

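Concretely, the resolution rules above behave like this (a sketch calling the private helpers directly; `/srv/agent` is a made-up base path with no session directories on disk):

```python
from pathlib import Path

from framework.runtime.runtime_log_store import RuntimeLogStore

store = RuntimeLogStore(Path("/srv/agent/runtime_logs"))

# session_* IDs (and any run whose session dir already exists) resolve to the
# unified layout one level above runtime_logs/:
assert store._get_run_dir("session_abc") == Path("/srv/agent/sessions/session_abc/logs")

# Anything else falls back to the deprecated layout and emits a DeprecationWarning:
legacy = store._get_run_dir("20240101T000000_deadbeef")
assert legacy == Path("/srv/agent/runtime_logs/runs/20240101T000000_deadbeef")
```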
    def append_step(self, run_id: str, step: NodeStepLog) -> None:
        """Append one JSONL line to tool_logs.jsonl. Sync."""
        path = self._get_run_dir(run_id) / "tool_logs.jsonl"
@@ -200,17 +213,17 @@ class RuntimeLogStore:
        run_ids = []

        # Scan new location: base_path/sessions/{session_id}/logs/
        # Determine the correct base path for sessions
        is_runtime_logs = self._base_path.name == "runtime_logs"
        root = self._base_path.parent if is_runtime_logs else self._base_path
        sessions_dir = root / "sessions"

        if sessions_dir.exists():
            for session_dir in sessions_dir.iterdir():
                if session_dir.is_dir() and session_dir.name.startswith("session_"):
                    logs_dir = session_dir / "logs"
                    if logs_dir.exists() and logs_dir.is_dir():
                        run_ids.append(session_dir.name)
                if not session_dir.is_dir():
                    continue
                logs_dir = session_dir / "logs"
                if logs_dir.exists() and logs_dir.is_dir():
                    run_ids.append(session_dir.name)

        # Scan old location: base_path/runs/ (deprecated)
        old_runs_dir = self._base_path / "runs"

@@ -66,15 +66,16 @@ class RuntimeLogger:
        """
        if session_id:
            self._run_id = session_id
            self._store.ensure_session_run_dir(self._run_id)
        else:
            ts = datetime.now(UTC).strftime("%Y%m%dT%H%M%S")
            short_uuid = uuid.uuid4().hex[:8]
            self._run_id = f"{ts}_{short_uuid}"
            self._store.ensure_run_dir(self._run_id)

        self._goal_id = goal_id
        self._started_at = datetime.now(UTC).isoformat()
        self._logged_node_ids = set()
        self._store.ensure_run_dir(self._run_id)
        return self._run_id

    def log_step(

@@ -17,7 +17,7 @@ from pathlib import Path
import pytest

from framework.graph import Goal
from framework.graph.edge import AsyncEntryPointSpec, EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.goal import Constraint, SuccessCriterion
from framework.graph.node import NodeSpec
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
@@ -101,30 +101,12 @@ def sample_graph():
        ),
    ]

    async_entry_points = [
        AsyncEntryPointSpec(
            id="webhook",
            name="Webhook Handler",
            entry_node="process-webhook",
            trigger_type="webhook",
            isolation_level="shared",
        ),
        AsyncEntryPointSpec(
            id="api",
            name="API Handler",
            entry_node="process-api",
            trigger_type="api",
            isolation_level="shared",
        ),
    ]

    return GraphSpec(
        id="test-graph",
        goal_id="test-goal",
        version="1.0.0",
        entry_node="process-webhook",
        entry_points={"start": "process-webhook"},
        async_entry_points=async_entry_points,
        terminal_nodes=["complete"],
        pause_nodes=[],
        nodes=nodes,
@@ -504,108 +486,6 @@ class TestAgentRuntime:
    # === GraphSpec Validation Tests ===


class TestGraphSpecValidation:
    """Tests for GraphSpec with async_entry_points."""

    def test_has_async_entry_points(self, sample_graph):
        """Test checking for async entry points."""
        assert sample_graph.has_async_entry_points() is True

        # Graph without async entry points
        simple_graph = GraphSpec(
            id="simple",
            goal_id="goal",
            entry_node="start",
            nodes=[],
            edges=[],
        )
        assert simple_graph.has_async_entry_points() is False

    def test_get_async_entry_point(self, sample_graph):
        """Test getting async entry point by ID."""
        ep = sample_graph.get_async_entry_point("webhook")
        assert ep is not None
        assert ep.id == "webhook"
        assert ep.entry_node == "process-webhook"

        ep_not_found = sample_graph.get_async_entry_point("nonexistent")
        assert ep_not_found is None

    def test_validate_async_entry_points(self):
        """Test validation catches async entry point errors."""
        nodes = [
            NodeSpec(
                id="valid-node",
                name="Valid Node",
                description="A valid node",
                node_type="event_loop",
                input_keys=[],
                output_keys=[],
            ),
        ]

        # Invalid entry node
        graph = GraphSpec(
            id="test",
            goal_id="goal",
            entry_node="valid-node",
            async_entry_points=[
                AsyncEntryPointSpec(
                    id="invalid",
                    name="Invalid",
                    entry_node="nonexistent-node",
                    trigger_type="webhook",
                ),
            ],
            nodes=nodes,
            edges=[],
        )

        errors = graph.validate()["errors"]
        assert any("nonexistent-node" in e for e in errors)

        # Invalid isolation level
        graph2 = GraphSpec(
            id="test",
            goal_id="goal",
            entry_node="valid-node",
            async_entry_points=[
                AsyncEntryPointSpec(
                    id="bad-isolation",
                    name="Bad Isolation",
                    entry_node="valid-node",
                    trigger_type="webhook",
                    isolation_level="invalid",
                ),
            ],
            nodes=nodes,
            edges=[],
        )

        errors2 = graph2.validate()["errors"]
        assert any("isolation_level" in e for e in errors2)

        # Invalid trigger type
        graph3 = GraphSpec(
            id="test",
            goal_id="goal",
            entry_node="valid-node",
            async_entry_points=[
                AsyncEntryPointSpec(
                    id="bad-trigger",
                    name="Bad Trigger",
                    entry_node="valid-node",
                    trigger_type="invalid_trigger",
                ),
            ],
            nodes=nodes,
            edges=[],
        )

        errors3 = graph3.validate()["errors"]
        assert any("trigger_type" in e for e in errors3)


# === Integration Tests ===


@@ -0,0 +1,29 @@
"""Tests for custom session-backed runtime logging paths."""

from pathlib import Path
from unittest.mock import MagicMock

from framework.graph.executor import GraphExecutor
from framework.runtime.runtime_log_store import RuntimeLogStore
from framework.runtime.runtime_logger import RuntimeLogger


def test_graph_executor_uses_custom_session_dir_name_for_runtime_logs():
    executor = GraphExecutor(
        runtime=MagicMock(),
        storage_path=Path("/tmp/test-agent/sessions/my-custom-session"),
    )

    assert executor._get_runtime_log_session_id() == "my-custom-session"


def test_runtime_logger_creates_session_log_dir_for_custom_session_id(tmp_path):
    base = tmp_path / ".hive" / "agents" / "test_agent"
    base.mkdir(parents=True)
    store = RuntimeLogStore(base)
    logger = RuntimeLogger(store=store, agent_id="test-agent")

    run_id = logger.start_run(goal_id="goal-1", session_id="my-custom-session")

    assert run_id == "my-custom-session"
    assert (base / "sessions" / "my-custom-session" / "logs").is_dir()
@@ -483,7 +483,6 @@ class TestEventDrivenEntryPoints:
            version="1.0.0",
            entry_node="process-event",
            entry_points={"start": "process-event"},
            async_entry_points=[],
            terminal_nodes=[],
            pause_nodes=[],
            nodes=nodes,

@@ -0,0 +1,22 @@
"""Trigger definitions for queen-level heartbeats (timers, webhooks)."""

from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any


@dataclass
class TriggerDefinition:
    """A registered trigger that can be activated on the queen runtime.

    Trigger *definitions* come from the worker's ``triggers.json``.
    Activation state is per-session (persisted in ``SessionState.active_triggers``).
    """

    id: str
    trigger_type: str  # "timer" | "webhook"
    trigger_config: dict[str, Any] = field(default_factory=dict)
    description: str = ""
    task: str = ""
    active: bool = False
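A matching `triggers.json` entry might look like the following. This is an assumed example: the loader shown later reads `id`, `trigger_type`, `trigger_config`, `name`, and `task`, but the `interval_seconds` key inside `trigger_config` is invented for illustration.

```python
import json

# Assumed triggers.json content; the loader maps "name" onto description.
triggers = json.loads("""
[
  {
    "id": "daily-digest",
    "trigger_type": "timer",
    "trigger_config": {"interval_seconds": 86400},
    "name": "Daily digest",
    "task": "Summarize yesterday's activity"
  }
]
""")
print(triggers[0]["id"])
```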
@@ -144,6 +144,13 @@ class SessionState(BaseModel):
    checkpoint_enabled: bool = False
    latest_checkpoint_id: str | None = None

    # Trigger activation state (IDs of triggers the queen/user turned on)
    active_triggers: list[str] = Field(default_factory=list)
    # Per-trigger task strings (user overrides, keyed by trigger ID)
    trigger_tasks: dict[str, str] = Field(default_factory=dict)
    # True after first successful worker execution (gates trigger delivery on restart)
    worker_configured: bool = Field(default=False)

    model_config = {"extra": "allow"}

    @computed_field

@@ -94,6 +94,29 @@ def sessions_dir(session: Session) -> Path:
    return Path.home() / ".hive" / "agents" / agent_name / "sessions"


def cold_sessions_dir(session_id: str) -> Path | None:
    """Resolve the worker sessions directory from disk for a cold/stopped session.

    Reads agent_path from the queen session's meta.json to find the agent name,
    then returns ~/.hive/agents/{agent_name}/sessions/.
    Returns None if meta.json is missing or has no agent_path.
    """
    import json

    meta_path = Path.home() / ".hive" / "queen" / "session" / session_id / "meta.json"
    if not meta_path.exists():
        return None
    try:
        meta = json.loads(meta_path.read_text(encoding="utf-8"))
        agent_path = meta.get("agent_path")
        if not agent_path:
            return None
        agent_name = Path(agent_path).name
        return Path.home() / ".hive" / "agents" / agent_name / "sessions"
    except (json.JSONDecodeError, OSError):
        return None


# Allowed CORS origins (localhost on any port)
_CORS_ORIGINS = {"http://localhost", "http://127.0.0.1"}


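For `cold_sessions_dir` to succeed, the queen session's `meta.json` only needs an `agent_path`. A sketch of what such a file could contain (the `agent_name` key is written elsewhere in this diff; the paths and session ID are placeholders):

```python
import json
from pathlib import Path

meta_path = Path.home() / ".hive" / "queen" / "session" / "session_123" / "meta.json"
meta = {"agent_path": "/home/user/agents/support_bot", "agent_name": "Support Bot"}
meta_path.parent.mkdir(parents=True, exist_ok=True)
meta_path.write_text(json.dumps(meta), encoding="utf-8")
# cold_sessions_dir("session_123") would then return
# ~/.hive/agents/support_bot/sessions/
```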
@@ -132,6 +132,7 @@ async def create_queen(
        session.worker_path,
        stream_id="queen",
        worker_graph_id=session.worker_runtime._graph_id,
        default_session_id=session.id,
    )

    queen_tools = list(queen_registry.get_tools().values())
@@ -215,6 +216,16 @@ async def create_queen(
        + worker_identity
    )

    # ---- Default skill protocols -------------------------------------
    try:
        from framework.skills.manager import SkillsManager

        _queen_skills_mgr = SkillsManager()
        _queen_skills_mgr.load()
        phase_state.protocols_prompt = _queen_skills_mgr.protocols_prompt
    except Exception:
        logger.debug("Queen skill loading failed (non-fatal)", exc_info=True)

    # ---- Persona hook ------------------------------------------------
    _session_llm = session.llm
    _session_event_bus = session.event_bus
@@ -275,6 +286,7 @@ async def create_queen(
        execution_id=session.id,
        dynamic_tools_provider=phase_state.get_current_tools,
        dynamic_prompt_provider=phase_state.get_current_prompt,
        iteration_metadata_provider=lambda: {"phase": phase_state.phase},
    )
    session.queen_executor = executor

@@ -292,6 +304,8 @@ async def create_queen(
            return
        if phase_state.phase == "running":
            if event.type == EventType.EXECUTION_COMPLETED:
                # Mark worker as configured after first successful run
                session.worker_configured = True
                output = event.data.get("output", {})
                output_summary = ""
                if output:

@@ -103,7 +103,9 @@ async def handle_delete_credential(request: web.Request) -> web.Response:
    if credential_id == "aden_api_key":
        from framework.credentials.key_storage import delete_aden_api_key

        delete_aden_api_key()
        deleted = delete_aden_api_key()
        if not deleted:
            return web.json_response({"error": "Credential 'aden_api_key' not found"}, status=404)
        return web.json_response({"deleted": True})

    store = _get_store(request)
@@ -178,7 +180,10 @@ async def handle_check_agent(request: web.Request) -> web.Response:
        )
    except Exception as e:
        logger.exception(f"Error checking agent credentials: {e}")
        return web.json_response({"error": str(e)}, status=500)
        return web.json_response(
            {"error": "Internal server error while checking credentials"},
            status=500,
        )


def _status_to_dict(c) -> dict:

@@ -15,6 +15,7 @@ logger = logging.getLogger(__name__)
DEFAULT_EVENT_TYPES = [
    EventType.CLIENT_OUTPUT_DELTA,
    EventType.CLIENT_INPUT_REQUESTED,
    EventType.CLIENT_INPUT_RECEIVED,
    EventType.LLM_TEXT_DELTA,
    EventType.TOOL_CALL_STARTED,
    EventType.TOOL_CALL_COMPLETED,
@@ -40,6 +41,11 @@ DEFAULT_EVENT_TYPES = [
    EventType.CREDENTIALS_REQUIRED,
    EventType.SUBAGENT_REPORT,
    EventType.QUEEN_PHASE_CHANGED,
    EventType.TRIGGER_AVAILABLE,
    EventType.TRIGGER_ACTIVATED,
    EventType.TRIGGER_DEACTIVATED,
    EventType.TRIGGER_FIRED,
    EventType.TRIGGER_REMOVED,
    EventType.DRAFT_GRAPH_UPDATED,
]

@@ -90,6 +96,7 @@ async def handle_events(request: web.Request) -> web.StreamResponse:
    "execution_failed",
    "execution_paused",
    "client_input_requested",
    "client_input_received",
    "node_loop_iteration",
    "node_loop_started",
    "credentials_required",
@@ -143,6 +150,7 @@ async def handle_events(request: web.Request) -> web.StreamResponse:
        EventType.CLIENT_OUTPUT_DELTA.value,
        EventType.EXECUTION_STARTED.value,
        EventType.CLIENT_INPUT_REQUESTED.value,
        EventType.CLIENT_INPUT_RECEIVED.value,
    }
    event_type_values = {et.value for et in event_types}
    replay_types = _REPLAY_TYPES & event_type_values

@@ -125,6 +125,18 @@ async def handle_chat(request: web.Request) -> web.Response:
        node = queen_executor.node_registry.get("queen")
        if node is not None and hasattr(node, "inject_event"):
            await node.inject_event(message, is_client_input=True)
            # Publish to EventBus so the session event log captures user messages
            from framework.runtime.event_bus import AgentEvent, EventType

            await session.event_bus.publish(
                AgentEvent(
                    type=EventType.CLIENT_INPUT_RECEIVED,
                    stream_id="queen",
                    node_id="queen",
                    execution_id=session.id,
                    data={"content": message},
                )
            )
            return web.json_response(
                {
                    "status": "queen",

@@ -2,6 +2,7 @@

import json
import logging
import time

from aiohttp import web

@@ -116,6 +117,20 @@ async def handle_list_nodes(request: web.Request) -> web.Response:
        }
        for ep in reg.entry_points.values()
    ]
    # Append triggers from triggers.json (stored on session)
    for t in getattr(session, "available_triggers", {}).values():
        entry = {
            "id": t.id,
            "name": t.description or t.id,
            "entry_node": graph.entry_node,
            "trigger_type": t.trigger_type,
            "trigger_config": t.trigger_config,
            "task": t.task,
        }
        mono = getattr(session, "trigger_next_fire", {}).get(t.id)
        if mono is not None:
            entry["next_fire_in"] = max(0.0, mono - time.monotonic())
        entry_points.append(entry)
    return web.json_response(
        {
            "nodes": nodes,

@@ -9,8 +9,10 @@ Session-primary routes:
- DELETE /api/sessions/{session_id}/worker — unload worker from session
- GET /api/sessions/{session_id}/stats — runtime statistics
- GET /api/sessions/{session_id}/entry-points — list entry points
- PATCH /api/sessions/{session_id}/triggers/{id} — update trigger task
- GET /api/sessions/{session_id}/graphs — list graph IDs
- GET /api/sessions/{session_id}/queen-messages — queen conversation history
- GET /api/sessions/{session_id}/events/history — persisted eventbus log (for replay)

Worker session browsing (persisted execution runs on disk):
- GET /api/sessions/{session_id}/worker-sessions — list
@@ -31,6 +33,7 @@ from pathlib import Path
from aiohttp import web

from framework.server.app import (
    cold_sessions_dir,
    resolve_session,
    safe_path_segment,
    sessions_dir,
@@ -140,6 +143,7 @@ async def handle_create_session(request: web.Request) -> web.Response:
    session = await manager.create_session_with_worker(
        agent_path,
        agent_id=agent_id,
        session_id=session_id,
        model=model,
        initial_prompt=initial_prompt,
        queen_resume_from=queen_resume_from,
@@ -228,6 +232,22 @@ async def handle_get_live_session(request: web.Request) -> web.Response:
            }
            for ep in rt.get_entry_points()
        ]
        # Append triggers from triggers.json (stored on session)
        runner = getattr(session, "runner", None)
        graph_entry = runner.graph.entry_node if runner else ""
        for t in getattr(session, "available_triggers", {}).values():
            entry = {
                "id": t.id,
                "name": t.description or t.id,
                "entry_node": graph_entry,
                "trigger_type": t.trigger_type,
                "trigger_config": t.trigger_config,
                "task": t.task,
            }
            mono = getattr(session, "trigger_next_fire", {}).get(t.id)
            if mono is not None:
                entry["next_fire_in"] = max(0.0, mono - time.monotonic())
            data["entry_points"].append(entry)
        data["graphs"] = session.worker_runtime.list_graphs()

    return web.json_response(data)
@@ -351,23 +371,84 @@ async def handle_session_entry_points(request: web.Request) -> web.Response:

    rt = session.worker_runtime
    eps = rt.get_entry_points() if rt else []
    entry_points = [
        {
            "id": ep.id,
            "name": ep.name,
            "entry_node": ep.entry_node,
            "trigger_type": ep.trigger_type,
            "trigger_config": ep.trigger_config,
            **(
                {"next_fire_in": nf}
                if rt and (nf := rt.get_timer_next_fire_in(ep.id)) is not None
                else {}
            ),
        }
        for ep in eps
    ]
    # Append triggers from triggers.json (stored on session)
    runner = getattr(session, "runner", None)
    graph_entry = runner.graph.entry_node if runner else ""
    for t in getattr(session, "available_triggers", {}).values():
        entry = {
            "id": t.id,
            "name": t.description or t.id,
            "entry_node": graph_entry,
            "trigger_type": t.trigger_type,
            "trigger_config": t.trigger_config,
            "task": t.task,
        }
        mono = getattr(session, "trigger_next_fire", {}).get(t.id)
        if mono is not None:
            entry["next_fire_in"] = max(0.0, mono - time.monotonic())
        entry_points.append(entry)
    return web.json_response({"entry_points": entry_points})


async def handle_update_trigger_task(request: web.Request) -> web.Response:
    """PATCH /api/sessions/{session_id}/triggers/{trigger_id} — update trigger task."""
    session, err = resolve_session(request)
    if err:
        return err

    trigger_id = request.match_info["trigger_id"]
    available = getattr(session, "available_triggers", {})
    tdef = available.get(trigger_id)
    if tdef is None:
        return web.json_response(
            {"error": f"Trigger '{trigger_id}' not found"},
            status=404,
        )

    try:
        body = await request.json()
    except Exception:
        return web.json_response({"error": "Invalid JSON body"}, status=400)

    task = body.get("task")
    if task is None:
        return web.json_response({"error": "Missing 'task' field"}, status=400)
    if not isinstance(task, str):
        return web.json_response({"error": "'task' must be a string"}, status=400)

    tdef.task = task

    # Persist to session state and agent definition
    from framework.tools.queen_lifecycle_tools import (
        _persist_active_triggers,
        _save_trigger_to_agent,
    )

    if trigger_id in getattr(session, "active_trigger_ids", set()):
        session_id = request.match_info["session_id"]
        await _persist_active_triggers(session, session_id)

    _save_trigger_to_agent(session, trigger_id, tdef)

    return web.json_response(
        {
            "entry_points": [
                {
                    "id": ep.id,
                    "name": ep.name,
                    "entry_node": ep.entry_node,
                    "trigger_type": ep.trigger_type,
                    "trigger_config": ep.trigger_config,
                    **(
                        {"next_fire_in": nf}
                        if rt and (nf := rt.get_timer_next_fire_in(ep.id)) is not None
                        else {}
                    ),
                }
                for ep in eps
            ]
            "trigger_id": trigger_id,
            "task": tdef.task,
        }
    )

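Exercising the new PATCH route from a client, sketched with the standard library only; the host, port, session ID, and trigger ID are placeholders:

```python
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:8080/api/sessions/session_123/triggers/daily-digest",
    data=json.dumps({"task": "Summarize the last 24h"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PATCH",
)
with urllib.request.urlopen(req) as resp:
    # Expected shape: {"trigger_id": "daily-digest", "task": "Summarize the last 24h"}
    print(json.load(resp))
```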
@@ -397,23 +478,28 @@ async def handle_list_worker_sessions(request: web.Request) -> web.Response:
    """List worker sessions on disk."""
    session, err = resolve_session(request)
    if err:
        return err

    if not session.worker_path:
        return web.json_response({"sessions": []})

    sess_dir = sessions_dir(session)
        # Fall back to cold session lookup from disk
        sid = request.match_info["session_id"]
        sess_dir = cold_sessions_dir(sid)
        if sess_dir is None:
            return err
    else:
        if not session.worker_path:
            return web.json_response({"sessions": []})
        sess_dir = sessions_dir(session)
    if not sess_dir.exists():
        return web.json_response({"sessions": []})

    sessions = []
    for d in sorted(sess_dir.iterdir(), reverse=True):
        if not d.is_dir() or not d.name.startswith("session_"):
        if not d.is_dir():
            continue
        state_path = d / "state.json"
        if not d.name.startswith("session_") and not state_path.exists():
            continue

        entry: dict = {"session_id": d.name}

        state_path = d / "state.json"
        if state_path.exists():
            try:
                state = json.loads(state_path.read_text(encoding="utf-8"))
@@ -564,48 +650,85 @@ async def handle_messages(request: web.Request) -> web.Response:
    """Get messages for a worker session."""
    session, err = resolve_session(request)
    if err:
        return err

    if not session.worker_path:
        return web.json_response({"error": "No worker loaded"}, status=503)
        # Fall back to cold session lookup from disk
        sid = request.match_info["session_id"]
        sess_dir = cold_sessions_dir(sid)
        if sess_dir is None:
            return err
    else:
        if not session.worker_path:
            return web.json_response({"error": "No worker loaded"}, status=503)
        sess_dir = sessions_dir(session)

    ws_id = request.match_info.get("ws_id") or request.match_info.get("session_id", "")
    ws_id = safe_path_segment(ws_id)

    convs_dir = sessions_dir(session) / ws_id / "conversations"
    convs_dir = sess_dir / ws_id / "conversations"
    if not convs_dir.exists():
        return web.json_response({"messages": []})

    filter_node = request.query.get("node_id")
    all_messages = []

    for node_dir in convs_dir.iterdir():
        if not node_dir.is_dir():
            continue
        if filter_node and node_dir.name != filter_node:
            continue

        parts_dir = node_dir / "parts"
    def _collect_msg_parts(parts_dir: Path, node_id: str) -> None:
        if not parts_dir.exists():
            continue

            return
        for part_file in sorted(parts_dir.iterdir()):
            if part_file.suffix != ".json":
                continue
            try:
                part = json.loads(part_file.read_text(encoding="utf-8"))
                part["_node_id"] = node_dir.name
                part["_node_id"] = node_id
                part.setdefault("created_at", part_file.stat().st_mtime)
                all_messages.append(part)
            except (json.JSONDecodeError, OSError):
                continue

    # Flat layout: conversations/parts/*.json
    if not filter_node:
        _collect_msg_parts(convs_dir / "parts", "worker")

    # Node-based layout: conversations/<node_id>/parts/*.json
    for node_dir in convs_dir.iterdir():
        if not node_dir.is_dir() or node_dir.name == "parts":
            continue
        if filter_node and node_dir.name != filter_node:
            continue
        _collect_msg_parts(node_dir / "parts", node_dir.name)

    # Merge run lifecycle markers from runs.jsonl (for historical dividers)
    runs_file = sess_dir / ws_id / "runs.jsonl"
    if runs_file.exists():
        try:
            for line in runs_file.read_text(encoding="utf-8").splitlines():
                line = line.strip()
                if not line:
                    continue
                try:
                    record = json.loads(line)
                    all_messages.append(
                        {
                            "seq": -1,
                            "role": "system",
                            "content": "",
                            "_node_id": "_run_marker",
                            "is_run_marker": True,
                            "run_id": record.get("run_id"),
                            "run_event": record.get("event"),
                            "created_at": record.get("created_at", 0),
                        }
                    )
                except json.JSONDecodeError:
                    continue
        except OSError:
            pass

    all_messages.sort(key=lambda m: m.get("created_at", m.get("seq", 0)))

    client_only = request.query.get("client_only", "").lower() in ("true", "1")
    if client_only:
        client_facing_nodes: set[str] = set()
        if session.runner and hasattr(session.runner, "graph"):
        if session and session.runner and hasattr(session.runner, "graph"):
            for node in session.runner.graph.nodes:
                if node.client_facing:
                    client_facing_nodes.add(node.id)
@@ -614,12 +737,15 @@ async def handle_messages(request: web.Request) -> web.Response:
        all_messages = [
            m
            for m in all_messages
            if not m.get("is_transition_marker")
            and m["role"] != "tool"
            and not (m["role"] == "assistant" and m.get("tool_calls"))
            and (
                (m["role"] == "user" and m.get("is_client_input"))
                or (m["role"] == "assistant" and m.get("_node_id") in client_facing_nodes)
            if m.get("is_run_marker")
            or (
                not m.get("is_transition_marker")
                and m["role"] != "tool"
                and not (m["role"] == "assistant" and m.get("tool_calls"))
                and (
                    (m["role"] == "user" and m.get("is_client_input"))
                    or (m["role"] == "assistant" and m.get("_node_id") in client_facing_nodes)
                )
            )
        ]

@@ -640,18 +766,16 @@ async def handle_queen_messages(request: web.Request) -> web.Response:
        return web.json_response({"messages": [], "session_id": session_id})

    all_messages: list[dict] = []
    for node_dir in convs_dir.iterdir():
        if not node_dir.is_dir():
            continue
        parts_dir = node_dir / "parts"

    def _read_parts(parts_dir: Path, node_id: str) -> None:
        if not parts_dir.exists():
            continue
            return
        for part_file in sorted(parts_dir.iterdir()):
            if part_file.suffix != ".json":
                continue
            try:
                part = json.loads(part_file.read_text(encoding="utf-8"))
                part["_node_id"] = node_dir.name
                part["_node_id"] = node_id
                # Use file mtime as created_at so frontend can order
                # queen and worker messages chronologically.
                part.setdefault("created_at", part_file.stat().st_mtime)
@@ -659,6 +783,15 @@ async def handle_queen_messages(request: web.Request) -> web.Response:
            except (json.JSONDecodeError, OSError):
                continue

    # Flat layout: conversations/parts/*.json
    _read_parts(convs_dir / "parts", "queen")

    # Node-based layout: conversations/<node_id>/parts/*.json
    for node_dir in convs_dir.iterdir():
        if not node_dir.is_dir() or node_dir.name == "parts":
            continue
        _read_parts(node_dir / "parts", node_dir.name)

    all_messages.sort(key=lambda m: m.get("created_at", m.get("seq", 0)))

    # Filter to client-facing messages only
@@ -673,6 +806,38 @@ async def handle_queen_messages(request: web.Request) -> web.Response:
    return web.json_response({"messages": all_messages, "session_id": session_id})


async def handle_session_events_history(request: web.Request) -> web.Response:
    """GET /api/sessions/{session_id}/events/history — persisted eventbus log.

    Reads ``events.jsonl`` from the session directory on disk so it works for
    both live sessions and cold (post-server-restart) sessions. The frontend
    replays these events through ``sseEventToChatMessage`` to fully reconstruct
    the UI state on resume.
    """
    session_id = request.match_info["session_id"]

    queen_dir = Path.home() / ".hive" / "queen" / "session" / session_id
    events_path = queen_dir / "events.jsonl"
    if not events_path.exists():
        return web.json_response({"events": [], "session_id": session_id})

    events: list[dict] = []
    try:
        with open(events_path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                try:
                    events.append(json.loads(line))
                except json.JSONDecodeError:
                    continue
    except OSError:
        return web.json_response({"events": [], "session_id": session_id})

    return web.json_response({"events": events, "session_id": session_id})

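Replaying the persisted log from a client is then a single GET; a stdlib sketch where the port and session ID are placeholders:

```python
import json
import urllib.request

url = "http://localhost:8080/api/sessions/session_123/events/history"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

for event in payload["events"]:
    # Each record is one events.jsonl line; the frontend feeds these through
    # its sseEventToChatMessage equivalent to rebuild the chat view.
    print(event["type"], event.get("run_id"))
```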

async def handle_session_history(request: web.Request) -> web.Response:
    """GET /api/sessions/history — all queen sessions on disk (live + cold).

@@ -746,6 +911,7 @@ async def handle_discover(request: web.Request) -> web.Response:
        "description": entry.description,
        "category": entry.category,
        "session_count": entry.session_count,
        "run_count": entry.run_count,
        "node_count": entry.node_count,
        "tool_count": entry.tool_count,
        "tags": entry.tags,
@@ -783,8 +949,12 @@ def register_routes(app: web.Application) -> None:
    # Session info
    app.router.add_get("/api/sessions/{session_id}/stats", handle_session_stats)
    app.router.add_get("/api/sessions/{session_id}/entry-points", handle_session_entry_points)
    app.router.add_patch(
        "/api/sessions/{session_id}/triggers/{trigger_id}", handle_update_trigger_task
    )
    app.router.add_get("/api/sessions/{session_id}/graphs", handle_session_graphs)
    app.router.add_get("/api/sessions/{session_id}/queen-messages", handle_queen_messages)
    app.router.add_get("/api/sessions/{session_id}/events/history", handle_session_events_history)

    # Worker session browsing (session-primary)
    app.router.add_get("/api/sessions/{session_id}/worker-sessions", handle_list_worker_sessions)

@@ -7,7 +7,6 @@ Architecture:
- Session owns EventBus + LLM, shared with queen and worker
- Queen is always present once a session starts
- Worker is optional — loaded into an existing session
- Judge is active only when a worker is loaded
"""

import asyncio
@@ -15,11 +14,13 @@ import json
import logging
import time
import uuid
from dataclasses import dataclass
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Any

from framework.runtime.triggers import TriggerDefinition

logger = logging.getLogger(__name__)


@@ -42,12 +43,23 @@ class Session:
    worker_info: Any | None = None  # AgentInfo
    # Queen phase state (building/staging/running)
    phase_state: Any = None  # QueenPhaseState
    # Judge (active when worker is loaded)
    judge_task: asyncio.Task | None = None
    escalation_sub: str | None = None
    # Worker handoff subscription
    worker_handoff_sub: str | None = None
    # Memory consolidation subscription (fires on CONTEXT_COMPACTED)
    memory_consolidation_sub: str | None = None
    # Trigger definitions loaded from agent's triggers.json (available but inactive)
    available_triggers: dict[str, TriggerDefinition] = field(default_factory=dict)
    # Active trigger tracking (IDs currently firing + their asyncio tasks)
    active_trigger_ids: set[str] = field(default_factory=set)
    active_timer_tasks: dict[str, asyncio.Task] = field(default_factory=dict)
    # Queen-owned webhook server (lazy singleton, created on first webhook trigger activation)
    queen_webhook_server: Any = None
    # EventBus subscription IDs for active webhook triggers (trigger_id -> sub_id)
    active_webhook_subs: dict[str, str] = field(default_factory=dict)
    # True after first successful worker execution (gates trigger delivery)
    worker_configured: bool = False
    # Monotonic timestamps for next trigger fire (mirrors AgentRuntime._timer_next_fire)
    trigger_next_fire: dict[str, float] = field(default_factory=dict)
    # Session directory resumption:
    # When set, _start_queen writes queen conversations to this existing session's
    # directory instead of creating a new one. This lets cold-restores accumulate
@@ -130,7 +142,9 @@ class SessionManager:
        to that existing session's directory instead of creating a new one.
        This preserves full conversation history across server restarts.
        """
        session = await self._create_session_core(session_id=session_id, model=model)
        # Reuse the original session ID when cold-restoring
        resolved_session_id = queen_resume_from or session_id
        session = await self._create_session_core(session_id=resolved_session_id, model=model)
        session.queen_resume_from = queen_resume_from

        # Start queen immediately (queen-only, no worker tools yet)
@@ -147,22 +161,28 @@ class SessionManager:
        self,
        agent_path: str | Path,
        agent_id: str | None = None,
        session_id: str | None = None,
        model: str | None = None,
        initial_prompt: str | None = None,
        queen_resume_from: str | None = None,
    ) -> Session:
        """Create a session and load a worker in one step.

        When ``queen_resume_from`` is set the queen writes conversation messages
        to that existing session's directory instead of creating a new one.
        When ``queen_resume_from`` is set the session reuses the original session
        ID so the frontend sees a single continuous session. The queen writes
        conversation messages to that existing directory, preserving full history.
        """
        from framework.tools.queen_lifecycle_tools import build_worker_profile

        agent_path = Path(agent_path)
        resolved_worker_id = agent_id or agent_path.name

        # Auto-generate session ID (not the agent name)
        session = await self._create_session_core(model=model)
        # Reuse the original session ID when cold-restoring so the frontend
        # sees one continuous session instead of a new one each time.
        session = await self._create_session_core(
            session_id=queen_resume_from,
            model=model,
        )
        session.queen_resume_from = queen_resume_from
        try:
            # Load worker FIRST (before queen) so queen gets full tools
@@ -202,8 +222,8 @@ class SessionManager:
    ) -> None:
        """Load a worker agent into a session (core logic).

        Sets up the runner, runtime, and session fields. Does NOT start the
        judge or notify the queen — callers handle those steps.
        Sets up the runner, runtime, and session fields. Does NOT notify
        the queen — callers handle that step.
        """
        from framework.runner import AgentRunner

@@ -242,6 +262,25 @@ class SessionManager:

        runtime = runner._agent_runtime

        # Load triggers from the agent's triggers.json definition file.
        from framework.tools.queen_lifecycle_tools import _read_agent_triggers_json

        for tdata in _read_agent_triggers_json(agent_path):
            tid = tdata.get("id", "")
            ttype = tdata.get("trigger_type", "")
            if tid and ttype in ("timer", "webhook"):
                session.available_triggers[tid] = TriggerDefinition(
                    id=tid,
                    trigger_type=ttype,
                    trigger_config=tdata.get("trigger_config", {}),
                    description=tdata.get("name", tid),
                    task=tdata.get("task", ""),
                )
                logger.info("Loaded trigger '%s' (%s) from triggers.json", tid, ttype)

        if session.available_triggers:
            await self._emit_trigger_events(session, "available", session.available_triggers)

# Start runtime on event loop
|
||||
if runtime and not runtime.is_running:
|
||||
await runtime.start()
|
||||
@@ -369,7 +408,7 @@ class SessionManager:
    ) -> Session:
        """Load a worker agent into an existing session (with running queen).

        Starts the worker runtime, health judge, and notifies the queen.
        Starts the worker runtime and notifies the queen.
        """
        agent_path = Path(agent_path)

@@ -385,11 +424,68 @@ class SessionManager:
        )

        # Notify queen about the loaded worker (skip for queen itself).
        # Health judge disabled for simplicity.
        if agent_path.name != "queen" and session.worker_runtime:
            # await self._start_judge(session, session.runner._storage_path)
            await self._notify_queen_worker_loaded(session)

        # Update meta.json so cold-restore can discover this session by agent_path
        storage_session_id = session.queen_resume_from or session.id
        meta_path = Path.home() / ".hive" / "queen" / "session" / storage_session_id / "meta.json"
        try:
            _agent_name = (
                session.worker_info.name
                if session.worker_info
                else str(agent_path.name).replace("_", " ").title()
            )
            existing_meta = {}
            if meta_path.exists():
                existing_meta = json.loads(meta_path.read_text(encoding="utf-8"))
            existing_meta["agent_name"] = _agent_name
            existing_meta["agent_path"] = (
                str(session.worker_path) if session.worker_path else str(agent_path)
            )
            meta_path.write_text(json.dumps(existing_meta), encoding="utf-8")
        except OSError:
            pass

        # Restore previously active triggers from persisted session state
        if session.available_triggers and session.worker_runtime:
            try:
                store = session.worker_runtime._session_store
                state = await store.read_state(session_id)
                if state and state.active_triggers:
                    from framework.tools.queen_lifecycle_tools import (
                        _start_trigger_timer,
                        _start_trigger_webhook,
                    )

                    saved_tasks = getattr(state, "trigger_tasks", {}) or {}
                    for tid in state.active_triggers:
                        tdef = session.available_triggers.get(tid)
                        if tdef:
                            # Restore user-configured task override
                            saved_task = saved_tasks.get(tid, "")
                            if saved_task:
                                tdef.task = saved_task
                            tdef.active = True
                            session.active_trigger_ids.add(tid)
                            if tdef.trigger_type == "timer":
                                await _start_trigger_timer(session, tid, tdef)
                                logger.info("Restored trigger timer '%s'", tid)
                            elif tdef.trigger_type == "webhook":
                                await _start_trigger_webhook(session, tid, tdef)
                                logger.info("Restored webhook trigger '%s'", tid)
                        else:
                            logger.warning(
                                "Saved trigger '%s' not found in worker entry points, skipping",
                                tid,
                            )

                # Restore worker_configured flag
                if state and getattr(state, "worker_configured", False):
                    session.worker_configured = True
            except Exception as e:
                logger.warning("Failed to restore active triggers: %s", e)

        # Emit SSE event so the frontend can update UI
        await self._emit_worker_loaded(session)

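For orientation, the meta.json written by the block above ends up containing at least the two keys set there, merged into whatever metadata already existed on disk; the values here are hypothetical:

# Hypothetical meta.json after the update above:
# {"agent_name": "Deep Research Agent", "agent_path": "/path/to/agents/deep_research_agent"}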
@@ -403,9 +499,6 @@ class SessionManager:
        if session.worker_runtime is None:
            return False

        # Stop judge + escalation
        self._stop_judge(session)

        # Cleanup worker
        if session.runner:
            try:
@@ -413,6 +506,26 @@ class SessionManager:
            except Exception as e:
                logger.error("Error cleaning up worker '%s': %s", session.worker_id, e)

        # Cancel active trigger timers
        for tid, task in session.active_timer_tasks.items():
            task.cancel()
            logger.info("Cancelled trigger timer '%s' on unload", tid)
        session.active_timer_tasks.clear()

        # Unsubscribe webhook handlers (server stays alive — queen-owned)
        for sub_id in session.active_webhook_subs.values():
            try:
                session.event_bus.unsubscribe(sub_id)
            except Exception:
                pass
        session.active_webhook_subs.clear()
        session.active_trigger_ids.clear()

        # Clean up triggers
        if session.available_triggers:
            await self._emit_trigger_events(session, "removed", session.available_triggers)
            session.available_triggers.clear()

        worker_id = session.worker_id
        session.worker_id = None
        session.worker_path = None
@@ -443,8 +556,6 @@ class SessionManager:
        _storage_id = getattr(session, "queen_resume_from", None) or session_id
        _session_dir = Path.home() / ".hive" / "queen" / "session" / _storage_id

        # Stop judge
        self._stop_judge(session)
        if session.worker_handoff_sub is not None:
            try:
                session.event_bus.unsubscribe(session.worker_handoff_sub)
@@ -464,6 +575,25 @@ class SessionManager:
        session.queen_task = None
        session.queen_executor = None

        # Cancel active trigger timers
        for task in session.active_timer_tasks.values():
            task.cancel()
        session.active_timer_tasks.clear()

        # Unsubscribe webhook handlers and stop queen webhook server
        for sub_id in session.active_webhook_subs.values():
            try:
                session.event_bus.unsubscribe(sub_id)
            except Exception:
                pass
        session.active_webhook_subs.clear()
        if session.queen_webhook_server is not None:
            try:
                await session.queen_webhook_server.stop()
            except Exception:
                logger.error("Error stopping queen webhook server", exc_info=True)
            session.queen_webhook_server = None

        # Cleanup worker
        if session.runner:
            try:
@@ -482,6 +612,9 @@ class SessionManager:
            name=f"queen-memory-consolidation-{session_id}",
        )

        # Close per-session event log
        session.event_bus.close_session_log()

        logger.info("Session '%s' stopped", session_id)
        return True

@@ -491,7 +624,7 @@ class SessionManager:

    async def _handle_worker_handoff(self, session: Session, executor: Any, event: Any) -> None:
        """Route worker escalation events into the queen conversation."""
        if event.stream_id in ("queen", "judge"):
        if event.stream_id == "queen":
            return

        reason = str(event.data.get("reason", "")).strip()
@@ -580,6 +713,39 @@ class SessionManager:
        except OSError:
            pass

        # Enable per-session event persistence so that all eventbus events
        # survive server restarts and can be replayed on cold-session resume.
        # Scan the existing event log to find the max iteration ever written,
        # then use max+1 as offset so resumed sessions produce monotonically
        # increasing iteration values — preventing frontend message ID collisions.
        iteration_offset = 0
        events_path = queen_dir / "events.jsonl"
        try:
            if events_path.exists():
                max_iter = -1
                with open(events_path, encoding="utf-8") as f:
                    for line in f:
                        line = line.strip()
                        if not line:
                            continue
                        try:
                            evt = json.loads(line)
                            it = evt.get("data", {}).get("iteration")
                            if isinstance(it, int) and it > max_iter:
                                max_iter = it
                        except (json.JSONDecodeError, TypeError):
                            continue
                if max_iter >= 0:
                    iteration_offset = max_iter + 1
                    logger.info(
                        "Session '%s' resuming with iteration_offset=%d (from events.jsonl max)",
                        session.id,
                        iteration_offset,
                    )
        except OSError:
            pass
        session.event_bus.set_session_log(events_path, iteration_offset=iteration_offset)
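A worked example of the offset scan above, assuming a hypothetical events.jsonl left over from a previous run of this session:

# Hypothetical events.jsonl from an earlier run:
#   {"type": "message", "data": {"iteration": 0}}
#   {"type": "message", "data": {"iteration": 1}}
#   {"type": "message", "data": {"iteration": 7}}
# The scan finds max_iter = 7, so iteration_offset = 7 + 1 = 8, and the
# resumed run emits iterations 8, 9, ... — no frontend message ID collisions.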

        session.queen_task = await create_queen(
            session=session,
            session_manager=self,
@@ -623,116 +789,6 @@ class SessionManager:
            handler=_on_compaction,
        )

    # ------------------------------------------------------------------
    # Judge startup / teardown
    # ------------------------------------------------------------------

    async def _start_judge(
        self,
        session: Session,
        worker_storage_path: str | Path,
    ) -> None:
        """Start the health judge for a session's worker."""
        from framework.graph.executor import GraphExecutor
        from framework.monitoring import judge_goal, judge_graph
        from framework.runner.tool_registry import ToolRegistry
        from framework.runtime.core import Runtime
        from framework.runtime.event_bus import EventType as _ET
        from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools

        worker_storage_path = Path(worker_storage_path)

        try:
            # Monitoring tools
            monitoring_registry = ToolRegistry()
            register_worker_monitoring_tools(
                monitoring_registry,
                session.event_bus,
                worker_storage_path,
                worker_graph_id=session.worker_runtime._graph_id,
            )

            hive_home = Path.home() / ".hive"
            judge_dir = hive_home / "judge" / "session" / session.id
            judge_dir.mkdir(parents=True, exist_ok=True)

            judge_runtime = Runtime(hive_home / "judge")
            monitoring_tools = list(monitoring_registry.get_tools().values())
            monitoring_executor = monitoring_registry.get_executor()

            async def _judge_loop():
                interval = 300  # 5 minutes between checks
                # Wait before the first check — let the worker actually do something
                await asyncio.sleep(interval)
                while True:
                    try:
                        executor = GraphExecutor(
                            runtime=judge_runtime,
                            llm=session.llm,
                            tools=monitoring_tools,
                            tool_executor=monitoring_executor,
                            event_bus=session.event_bus,
                            stream_id="judge",
                            storage_path=judge_dir,
                            loop_config=judge_graph.loop_config,
                        )
                        await executor.execute(
                            graph=judge_graph,
                            goal=judge_goal,
                            input_data={
                                "event": {"source": "timer", "reason": "scheduled"},
                            },
                            session_state={"resume_session_id": session.id},
                        )
                    except Exception:
                        logger.error("Health judge tick failed", exc_info=True)
                    await asyncio.sleep(interval)

            session.judge_task = asyncio.create_task(_judge_loop())

            # Escalation: judge → queen
            async def _on_escalation(event):
                ticket = event.data.get("ticket", {})
                executor = session.queen_executor
                if executor is None:
                    logger.warning("Escalation received but queen executor is None")
                    return
                node = executor.node_registry.get("queen")
                if node is not None and hasattr(node, "inject_event"):
                    msg = "[ESCALATION TICKET from Health Judge]\n" + json.dumps(
                        ticket, indent=2, ensure_ascii=False
                    )
                    await node.inject_event(msg)
                else:
                    logger.warning("Escalation received but queen node not ready")

            session.escalation_sub = session.event_bus.subscribe(
                event_types=[_ET.WORKER_ESCALATION_TICKET],
                handler=_on_escalation,
            )

            logger.info("Judge started for session '%s'", session.id)

        except Exception as e:
            logger.error(
                "Failed to start judge for session '%s': %s",
                session.id,
                e,
                exc_info=True,
            )

    def _stop_judge(self, session: Session) -> None:
        """Cancel judge task and unsubscribe escalation events."""
        if session.judge_task is not None:
            session.judge_task.cancel()
            session.judge_task = None
        if session.escalation_sub is not None:
            try:
                session.event_bus.unsubscribe(session.escalation_sub)
            except Exception:
                pass
            session.escalation_sub = None

    # ------------------------------------------------------------------
    # Queen notifications
    # ------------------------------------------------------------------
@@ -749,7 +805,22 @@ class SessionManager:
            return

        profile = build_worker_profile(session.worker_runtime, agent_path=session.worker_path)
        await node.inject_event(f"[SYSTEM] Worker loaded.{profile}")

        # Append available trigger info so the queen knows what's schedulable
        trigger_lines = ""
        if session.available_triggers:
            parts = []
            for t in session.available_triggers.values():
                cfg = t.trigger_config
                detail = cfg.get("cron") or f"every {cfg.get('interval_minutes', '?')} min"
                task_info = f' -> task: "{t.task}"' if t.task else " (no task configured)"
                parts.append(f"  - {t.id} ({t.trigger_type}: {detail}){task_info}")
            trigger_lines = (
                "\n\nAvailable triggers (inactive — use set_trigger to activate):\n"
                + "\n".join(parts)
            )

        await node.inject_event(f"[SYSTEM] Worker loaded.{profile}{trigger_lines}")

    async def _emit_worker_loaded(self, session: Session) -> None:
        """Publish a WORKER_LOADED event so the frontend can update."""
@@ -785,6 +856,31 @@ class SessionManager:
            "according to your current phase."
        )

    async def _emit_trigger_events(
        self,
        session: Session,
        kind: str,
        triggers: dict[str, TriggerDefinition],
    ) -> None:
        """Emit TRIGGER_AVAILABLE or TRIGGER_REMOVED events for each trigger."""
        from framework.runtime.event_bus import AgentEvent, EventType

        event_type = (
            EventType.TRIGGER_AVAILABLE if kind == "available" else EventType.TRIGGER_REMOVED
        )
        for t in triggers.values():
            await session.event_bus.publish(
                AgentEvent(
                    type=event_type,
                    stream_id="queen",
                    data={
                        "trigger_id": t.id,
                        "trigger_type": t.trigger_type,
                        "trigger_config": t.trigger_config,
                    },
                )
            )

    async def revive_queen(self, session: Session, initial_prompt: str | None = None) -> None:
        """Revive a dead queen executor on an existing session.

@@ -856,13 +952,19 @@ class SessionManager:
        # Check whether any message part files are actually present
        has_messages = False
        try:
            for node_dir in convs_dir.iterdir():
                if not node_dir.is_dir():
                    continue
                parts_dir = node_dir / "parts"
                if parts_dir.exists() and any(f.suffix == ".json" for f in parts_dir.iterdir()):
                    has_messages = True
                    break
            # Flat layout: conversations/parts/*.json
            flat_parts = convs_dir / "parts"
            if flat_parts.exists() and any(f.suffix == ".json" for f in flat_parts.iterdir()):
                has_messages = True
            else:
                # Node-based layout: conversations/<node_id>/parts/*.json
                for node_dir in convs_dir.iterdir():
                    if not node_dir.is_dir() or node_dir.name == "parts":
                        continue
                    parts_dir = node_dir / "parts"
                    if parts_dir.exists() and any(f.suffix == ".json" for f in parts_dir.iterdir()):
                        has_messages = True
                        break
        except OSError:
            pass

@@ -939,21 +1041,27 @@ class SessionManager:
        if convs_dir.exists():
            try:
                all_parts: list[dict] = []
                for node_dir in convs_dir.iterdir():
                    if not node_dir.is_dir():
                        continue
                    parts_dir = node_dir / "parts"

                def _collect_parts(parts_dir: Path, _dest: list[dict] = all_parts) -> None:
                    if not parts_dir.exists():
                        continue
                        return
                    for part_file in sorted(parts_dir.iterdir()):
                        if part_file.suffix != ".json":
                            continue
                        try:
                            part = json.loads(part_file.read_text(encoding="utf-8"))
                            part.setdefault("created_at", part_file.stat().st_mtime)
                            all_parts.append(part)
                            _dest.append(part)
                        except (json.JSONDecodeError, OSError):
                            continue

                # Flat layout: conversations/parts/*.json
                _collect_parts(convs_dir / "parts")
                # Node-based layout: conversations/<node_id>/parts/*.json
                for node_dir in convs_dir.iterdir():
                    if not node_dir.is_dir() or node_dir.name == "parts":
                        continue
                    _collect_parts(node_dir / "parts")
                # Filter to client-facing messages only
                client_msgs = [
                    p

@@ -16,6 +16,9 @@ from aiohttp.test_utils import TestClient, TestServer
from framework.server.app import create_app
from framework.server.session_manager import Session

REPO_ROOT = Path(__file__).resolve().parents[4]
EXAMPLE_AGENT_PATH = REPO_ROOT / "examples" / "templates" / "deep_research_agent"

# ---------------------------------------------------------------------------
# Mock helpers
# ---------------------------------------------------------------------------
@@ -207,11 +210,8 @@ def tmp_agent_dir(tmp_path, monkeypatch):
    return tmp_path, agent_name, base


@pytest.fixture
def sample_session(tmp_agent_dir):
    """Create a sample session with state.json, checkpoints, and conversations."""
    tmp_path, agent_name, base = tmp_agent_dir
    session_id = "session_20260220_120000_abc12345"
def _write_sample_session(base: Path, session_id: str):
    """Create a sample worker session on disk."""
    session_dir = base / "sessions" / session_id

    # state.json
@@ -292,6 +292,20 @@ def sample_session(tmp_agent_dir):
    return session_id, session_dir, state


@pytest.fixture
def sample_session(tmp_agent_dir):
    """Create a sample session with state.json, checkpoints, and conversations."""
    _tmp_path, _agent_name, base = tmp_agent_dir
    return _write_sample_session(base, "session_20260220_120000_abc12345")


@pytest.fixture
def custom_id_session(tmp_agent_dir):
    """Create a sample session that uses a custom non-session_* ID."""
    _tmp_path, _agent_name, base = tmp_agent_dir
    return _write_sample_session(base, "my-custom-session")


def _make_app_with_session(session):
    """Create an aiohttp app with a pre-loaded session."""
    app = create_app()
@@ -347,6 +361,35 @@ class TestHealth:


class TestSessionCRUD:
    @pytest.mark.asyncio
    async def test_create_session_with_worker_forwards_session_id(self):
        app = create_app()
        manager = app["manager"]
        manager.create_session_with_worker = AsyncMock(
            return_value=_make_session(agent_id="my-custom-session")
        )

        async with TestClient(TestServer(app)) as client:
            resp = await client.post(
                "/api/sessions",
                json={
                    "session_id": "my-custom-session",
                    "agent_path": str(EXAMPLE_AGENT_PATH),
                },
            )
            data = await resp.json()

        assert resp.status == 201
        assert data["session_id"] == "my-custom-session"
        manager.create_session_with_worker.assert_awaited_once_with(
            str(EXAMPLE_AGENT_PATH.resolve()),
            agent_id=None,
            session_id="my-custom-session",
            model=None,
            initial_prompt=None,
            queen_resume_from=None,
        )

    @pytest.mark.asyncio
    async def test_list_sessions_empty(self):
        app = create_app()
@@ -767,6 +810,22 @@ class TestWorkerSessions:
        assert data["sessions"][0]["status"] == "paused"
        assert data["sessions"][0]["steps"] == 5

    @pytest.mark.asyncio
    async def test_list_sessions_includes_custom_id(self, custom_id_session, tmp_agent_dir):
        session_id, session_dir, state = custom_id_session
        tmp_path, agent_name, base = tmp_agent_dir

        session = _make_session(tmp_dir=tmp_path / ".hive" / "agents" / agent_name)
        app = _make_app_with_session(session)

        async with TestClient(TestServer(app)) as client:
            resp = await client.get("/api/sessions/test_agent/worker-sessions")
            assert resp.status == 200
            data = await resp.json()
            assert len(data["sessions"]) == 1
            assert data["sessions"][0]["session_id"] == session_id
            assert data["sessions"][0]["status"] == "paused"

    @pytest.mark.asyncio
    async def test_list_sessions_empty(self, tmp_agent_dir):
        tmp_path, agent_name, base = tmp_agent_dir
@@ -1284,6 +1343,28 @@ class TestLogs:
        assert len(data["logs"]) >= 1
        assert data["logs"][0]["run_id"] == session_id

    @pytest.mark.asyncio
    async def test_logs_list_summaries_with_custom_id(self, custom_id_session, tmp_agent_dir):
        session_id, session_dir, state = custom_id_session
        tmp_path, agent_name, base = tmp_agent_dir

        from framework.runtime.runtime_log_store import RuntimeLogStore

        log_store = RuntimeLogStore(base)
        session = _make_session(
            tmp_dir=tmp_path / ".hive" / "agents" / agent_name,
            log_store=log_store,
        )
        app = _make_app_with_session(session)

        async with TestClient(TestServer(app)) as client:
            resp = await client.get("/api/sessions/test_agent/logs")
            assert resp.status == 200
            data = await resp.json()
            assert "logs" in data
            assert len(data["logs"]) >= 1
            assert data["logs"][0]["run_id"] == session_id

    @pytest.mark.asyncio
    async def test_logs_session_summary(self, sample_session, tmp_agent_dir):
        session_id, session_dir, state = sample_session

@@ -0,0 +1,26 @@
"""Hive Agent Skills — discovery, parsing, and injection of SKILL.md packages.

Implements the open Agent Skills standard (agentskills.io) for portable
skill discovery and activation, plus built-in default skills for runtime
operational discipline.
"""

from framework.skills.catalog import SkillCatalog
from framework.skills.config import DefaultSkillConfig, SkillsConfig
from framework.skills.defaults import DefaultSkillManager
from framework.skills.discovery import DiscoveryConfig, SkillDiscovery
from framework.skills.manager import SkillsManager, SkillsManagerConfig
from framework.skills.parser import ParsedSkill, parse_skill_md

__all__ = [
    "DefaultSkillConfig",
    "DefaultSkillManager",
    "DiscoveryConfig",
    "ParsedSkill",
    "SkillCatalog",
    "SkillDiscovery",
    "SkillsConfig",
    "SkillsManager",
    "SkillsManagerConfig",
    "parse_skill_md",
]
@@ -0,0 +1,24 @@
---
name: hive.batch-ledger
description: Track per-item status when processing collections to prevent skipped or duplicated items.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Batch Progress Ledger

When processing a collection of items, maintain a batch ledger in `_batch_ledger`.

Initialize when you identify the batch:
- `_batch_total`: total item count
- `_batch_ledger`: JSON with per-item status

Per-item statuses: pending → in_progress → completed|failed|skipped

- Set `in_progress` BEFORE processing
- Set final status AFTER processing with 1-line result_summary
- Include error reason for failed/skipped items
- Update aggregate counts after each item
- NEVER remove items from the ledger
- If resuming, skip items already marked completed
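As a minimal sketch (not part of the skill file itself), one `_batch_ledger` value consistent with the protocol above might look like this; item IDs and summaries are hypothetical:

# Hypothetical _batch_ledger shape following the protocol above:
_batch_total = 3
_batch_ledger = {
    "item-001": {"status": "completed", "result_summary": "parsed and stored"},
    "item-002": {"status": "failed", "error": "timeout fetching source"},
    "item-003": {"status": "in_progress"},
}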
@@ -0,0 +1,22 @@
---
name: hive.context-preservation
description: Proactively preserve critical information before automatic context pruning destroys it.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Context Preservation

You operate under a finite context window. Important information WILL be pruned.

Save-As-You-Go: After any tool call producing information you'll need later,
immediately extract key data into `_working_notes` or `_preserved_data`.
Do NOT rely on referring back to old tool results.

What to extract: URLs and key snippets (not full pages), relevant API fields
(not raw JSON), specific lines/values (not entire files), analysis results
(not raw data).

Before transitioning to the next phase/node, write a handoff summary to
`_handoff_context` with everything the next phase needs to know.
@@ -0,0 +1,18 @@
---
name: hive.error-recovery
description: Follow a structured recovery protocol when tool calls fail instead of blindly retrying or giving up.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Error Recovery

When a tool call fails:

1. Diagnose — record error in notes, classify as transient or structural
2. Decide — transient: retry once. Structural fixable: fix and retry.
   Structural unfixable: record as failed, move to next item.
   Blocking all progress: record escalation note.
3. Adapt — if same tool failed 3+ times, stop using it and find alternative.
   Update plan in notes. Never silently drop the failed item.
@@ -0,0 +1,27 @@
---
name: hive.note-taking
description: Maintain structured working notes throughout execution to prevent information loss during context pruning.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Structured Note-Taking

Maintain structured working notes in shared memory key `_working_notes`.
Update at these checkpoints:

- After completing each discrete subtask or batch item
- After receiving new information that changes your plan
- Before any tool call that will produce substantial output

Structure:

### Objective — restate the goal
### Current Plan — numbered steps, mark completed with ✓
### Key Decisions — decisions made and WHY
### Working Data — intermediate results, extracted values
### Open Questions — uncertainties to verify
### Blockers — anything preventing progress

Update incrementally — do not rewrite from scratch each time.
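A minimal sketch of what a `_working_notes` value following the structure above could look like; all content is hypothetical:

# Hypothetical _working_notes value using the six sections above:
_working_notes = """
### Objective
Summarize Q3 incident reports into a one-page brief.
### Current Plan
1. Collect reports ✓  2. Extract themes  3. Draft summary
### Key Decisions
Grouped by service, not by date, to surface repeat offenders.
### Working Data
14 reports collected; 3 recurring themes so far.
### Open Questions
Are staging-only incidents in scope?
### Blockers
None.
"""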
@@ -0,0 +1,20 @@
---
name: hive.quality-monitor
description: Periodically self-assess output quality to catch degradation before the judge does.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Quality Self-Assessment

Every 5 iterations, self-assess:

1. On-task? Still working toward the stated objective?
2. Thorough? Cutting corners compared to earlier?
3. Non-repetitive? Producing new value or rehashing?
4. Consistent? Does the latest output contradict earlier decisions?
5. Complete? Tracking all items, or silently dropped some?

If degrading: write assessment to `_quality_log`, re-read `_working_notes`,
change approach explicitly. If acceptable: brief note in `_quality_log`.
@@ -0,0 +1,17 @@
---
name: hive.task-decomposition
description: Decompose complex tasks into explicit subtasks before diving in.
metadata:
  author: hive
  type: default-skill
---

## Operational Protocol: Task Decomposition

Before starting a complex task:

1. Decompose — break into numbered subtasks in `_working_notes` Current Plan
2. Estimate — relative effort per subtask (small/medium/large)
3. Execute — work through in order, mark ✓ when complete
4. Budget — if running low on iterations, prioritize by impact
5. Verify — before declaring done, every subtask must be ✓, skipped (with reason), or blocked
@@ -0,0 +1,107 @@
"""Skill catalog — in-memory index with system prompt generation.

Builds the XML catalog injected into the system prompt for model-driven
skill activation per the Agent Skills standard.
"""

from __future__ import annotations

import logging
from xml.sax.saxutils import escape

from framework.skills.parser import ParsedSkill

logger = logging.getLogger(__name__)

_BEHAVIORAL_INSTRUCTION = (
    "The following skills provide specialized instructions for specific tasks.\n"
    "When a task matches a skill's description, read the SKILL.md at the listed\n"
    "location to load the full instructions before proceeding.\n"
    "When a skill references relative paths, resolve them against the skill's\n"
    "directory (the parent of SKILL.md) and use absolute paths in tool calls."
)


class SkillCatalog:
    """In-memory catalog of discovered skills."""

    def __init__(self, skills: list[ParsedSkill] | None = None):
        self._skills: dict[str, ParsedSkill] = {}
        self._activated: set[str] = set()
        if skills:
            for skill in skills:
                self.add(skill)

    def add(self, skill: ParsedSkill) -> None:
        """Add a skill to the catalog."""
        self._skills[skill.name] = skill

    def get(self, name: str) -> ParsedSkill | None:
        """Look up a skill by name."""
        return self._skills.get(name)

    def mark_activated(self, name: str) -> None:
        """Mark a skill as activated in the current session."""
        self._activated.add(name)

    def is_activated(self, name: str) -> bool:
        """Check if a skill has been activated."""
        return name in self._activated

    @property
    def skill_count(self) -> int:
        return len(self._skills)

    @property
    def allowlisted_dirs(self) -> list[str]:
        """All skill base directories for file access allowlisting."""
        return [skill.base_dir for skill in self._skills.values()]

    def to_prompt(self) -> str:
        """Generate the catalog prompt for system prompt injection.

        Returns empty string if no community/user skills are discovered
        (default skills are handled separately by DefaultSkillManager).
        """
        # Filter out framework-scope skills (default skills) — they're
        # injected via the protocols prompt, not the catalog
        community_skills = [s for s in self._skills.values() if s.source_scope != "framework"]

        if not community_skills:
            return ""

        lines = ["<available_skills>"]
        for skill in sorted(community_skills, key=lambda s: s.name):
            lines.append("  <skill>")
            lines.append(f"    <name>{escape(skill.name)}</name>")
            lines.append(f"    <description>{escape(skill.description)}</description>")
            lines.append(f"    <location>{escape(skill.location)}</location>")
            lines.append("  </skill>")
        lines.append("</available_skills>")

        xml_block = "\n".join(lines)
        return f"{_BEHAVIORAL_INSTRUCTION}\n\n{xml_block}"

    def build_pre_activated_prompt(self, skill_names: list[str]) -> str:
        """Build prompt content for pre-activated skills.

        Pre-activated skills get their full SKILL.md body loaded into
        the system prompt at startup (tier 2), bypassing model-driven
        activation.

        Returns empty string if no skills match.
        """
        parts: list[str] = []

        for name in skill_names:
            skill = self.get(name)
            if skill is None:
                logger.warning("Pre-activated skill '%s' not found in catalog", name)
                continue
            if self.is_activated(name):
                continue  # Already activated, skip duplicate

            self.mark_activated(name)
            parts.append(f"--- Pre-Activated Skill: {skill.name} ---\n{skill.body}")

        return "\n\n".join(parts)
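For reference, the XML block produced by to_prompt() for one discovered skill would look roughly like the sketch below (name, description, and path are hypothetical); the behavioral instruction text precedes it in the returned string:

# Sketch of the <available_skills> block built by to_prompt():
#
#   <available_skills>
#     <skill>
#       <name>deep-research</name>
#       <description>Multi-step research with source tracking.</description>
#       <location>/home/user/.hive/skills/deep-research/SKILL.md</location>
#     </skill>
#   </available_skills>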
@@ -0,0 +1,100 @@
"""Skill configuration dataclasses.

Handles agent-level skill configuration from module-level variables
(``default_skills`` and ``skills``).
"""

from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any


@dataclass
class DefaultSkillConfig:
    """Configuration for a single default skill."""

    enabled: bool = True
    overrides: dict[str, Any] = field(default_factory=dict)

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> DefaultSkillConfig:
        enabled = data.get("enabled", True)
        overrides = {k: v for k, v in data.items() if k != "enabled"}
        return cls(enabled=enabled, overrides=overrides)


@dataclass
class SkillsConfig:
    """Agent-level skill configuration.

    Built from module-level variables in agent.py::

        # Pre-activated community skills
        skills = ["deep-research", "code-review"]

        # Default skill configuration
        default_skills = {
            "hive.note-taking": {"enabled": True},
            "hive.batch-ledger": {"enabled": True, "checkpoint_every_n": 10},
            "hive.quality-monitor": {"enabled": False},
        }
    """

    # Per-default-skill config, keyed by skill name (e.g. "hive.note-taking")
    default_skills: dict[str, DefaultSkillConfig] = field(default_factory=dict)

    # Pre-activated community skills (by name)
    skills: list[str] = field(default_factory=list)

    # Master switch: disable all default skills at once
    all_defaults_disabled: bool = False

    def is_default_enabled(self, skill_name: str) -> bool:
        """Check if a specific default skill is enabled."""
        if self.all_defaults_disabled:
            return False
        config = self.default_skills.get(skill_name)
        if config is None:
            return True  # enabled by default
        return config.enabled

    def get_default_overrides(self, skill_name: str) -> dict[str, Any]:
        """Get skill-specific configuration overrides."""
        config = self.default_skills.get(skill_name)
        if config is None:
            return {}
        return config.overrides

    @classmethod
    def from_agent_vars(
        cls,
        default_skills: dict[str, Any] | None = None,
        skills: list[str] | None = None,
    ) -> SkillsConfig:
        """Build config from agent module-level variables.

        Args:
            default_skills: Dict from agent module, e.g.
                ``{"hive.note-taking": {"enabled": True}}``
            skills: List of pre-activated skill names from agent module
        """
        all_disabled = False
        parsed_defaults: dict[str, DefaultSkillConfig] = {}

        if default_skills:
            for name, config_dict in default_skills.items():
                if name == "_all":
                    if isinstance(config_dict, dict) and not config_dict.get("enabled", True):
                        all_disabled = True
                    continue
                if isinstance(config_dict, dict):
                    parsed_defaults[name] = DefaultSkillConfig.from_dict(config_dict)
                elif isinstance(config_dict, bool):
                    parsed_defaults[name] = DefaultSkillConfig(enabled=config_dict)

        return cls(
            default_skills=parsed_defaults,
            skills=list(skills or []),
            all_defaults_disabled=all_disabled,
        )
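A short usage sketch of the `_all` master switch handled by from_agent_vars above; the skill name comes from the framework's registry, the rest is illustrative:

# Illustrative: the "_all" master switch wins over per-skill enables.
config = SkillsConfig.from_agent_vars(
    default_skills={
        "_all": {"enabled": False},              # sets all_defaults_disabled=True
        "hive.note-taking": {"enabled": True},   # parsed, but overridden below
    },
)
# is_default_enabled() short-circuits on all_defaults_disabled, so even an
# explicitly enabled skill reports False once "_all" is disabled:
assert config.is_default_enabled("hive.note-taking") is False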
@@ -0,0 +1,151 @@
"""DefaultSkillManager — load, configure, and inject built-in default skills.

Default skills are SKILL.md packages shipped with the framework that provide
runtime operational protocols (note-taking, batch tracking, error recovery, etc.).
"""

from __future__ import annotations

import logging
from pathlib import Path

from framework.skills.config import SkillsConfig
from framework.skills.parser import ParsedSkill, parse_skill_md

logger = logging.getLogger(__name__)

# Default skills directory relative to this module
_DEFAULT_SKILLS_DIR = Path(__file__).parent / "_default_skills"

# Ordered list of default skills (name → directory)
SKILL_REGISTRY: dict[str, str] = {
    "hive.note-taking": "note-taking",
    "hive.batch-ledger": "batch-ledger",
    "hive.context-preservation": "context-preservation",
    "hive.quality-monitor": "quality-monitor",
    "hive.error-recovery": "error-recovery",
    "hive.task-decomposition": "task-decomposition",
}

# All shared memory keys used by default skills (for permission auto-inclusion)
SHARED_MEMORY_KEYS: list[str] = [
    # note-taking
    "_working_notes",
    "_notes_updated_at",
    # batch-ledger
    "_batch_ledger",
    "_batch_total",
    "_batch_completed",
    "_batch_failed",
    # context-preservation
    "_handoff_context",
    "_preserved_data",
    # quality-monitor
    "_quality_log",
    "_quality_degradation_count",
    # error-recovery
    "_error_log",
    "_failed_tools",
    "_escalation_needed",
    # task-decomposition
    "_subtasks",
    "_iteration_budget_remaining",
]


class DefaultSkillManager:
    """Manages loading, configuration, and prompt generation for default skills."""

    def __init__(self, config: SkillsConfig | None = None):
        self._config = config or SkillsConfig()
        self._skills: dict[str, ParsedSkill] = {}
        self._loaded = False

    def load(self) -> None:
        """Load all enabled default skill SKILL.md files."""
        if self._loaded:
            return

        for skill_name, dir_name in SKILL_REGISTRY.items():
            if not self._config.is_default_enabled(skill_name):
                logger.info("Default skill '%s' disabled by config", skill_name)
                continue

            skill_path = _DEFAULT_SKILLS_DIR / dir_name / "SKILL.md"
            if not skill_path.is_file():
                logger.error("Default skill SKILL.md not found: %s", skill_path)
                continue

            parsed = parse_skill_md(skill_path, source_scope="framework")
            if parsed is None:
                logger.error("Failed to parse default skill: %s", skill_path)
                continue

            self._skills[skill_name] = parsed

        self._loaded = True

    def build_protocols_prompt(self) -> str:
        """Build the combined operational protocols section.

        Extracts protocol sections from all enabled default skills and
        combines them into a single ``## Operational Protocols`` block
        for system prompt injection.

        Returns empty string if all defaults are disabled.
        """
        if not self._skills:
            return ""

        parts: list[str] = ["## Operational Protocols\n"]

        for skill_name in SKILL_REGISTRY:
            skill = self._skills.get(skill_name)
            if skill is None:
                continue
            # Use the full body — each SKILL.md contains exactly one protocol section
            parts.append(skill.body)

        if len(parts) <= 1:
            return ""

        combined = "\n\n".join(parts)

        # Token budget warning (approximate: 1 token ≈ 4 chars)
        approx_tokens = len(combined) // 4
        if approx_tokens > 2000:
            logger.warning(
                "Default skill protocols exceed 2000 token budget "
                "(~%d tokens, %d chars). Consider trimming.",
                approx_tokens,
                len(combined),
            )

        return combined

    def log_active_skills(self) -> None:
        """Log which default skills are active and their configuration."""
        if not self._skills:
            logger.info("Default skills: all disabled")
            return

        active = []
        for skill_name in SKILL_REGISTRY:
            if skill_name in self._skills:
                overrides = self._config.get_default_overrides(skill_name)
                if overrides:
                    active.append(f"{skill_name} ({overrides})")
                else:
                    active.append(skill_name)

        logger.info("Default skills active: %s", ", ".join(active))

    @property
    def active_skill_names(self) -> list[str]:
        """Names of all currently active default skills."""
        return list(self._skills.keys())

    @property
    def active_skills(self) -> dict[str, ParsedSkill]:
        """All active default skills keyed by name."""
        return dict(self._skills)
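A minimal usage sketch of the manager above, assuming the bundled default SKILL.md files ship with the package:

# Minimal usage sketch (default SkillsConfig: everything enabled):
mgr = DefaultSkillManager()
mgr.load()
prompt = mgr.build_protocols_prompt()  # "## Operational Protocols\n..." or ""
mgr.log_active_skills()
# Budget heuristic above: 6000 combined chars ≈ 6000 // 4 = 1500 tokens, under budget.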
@@ -0,0 +1,183 @@
"""Skill discovery — scan standard directories for SKILL.md files.

Implements the Agent Skills standard discovery paths plus Hive-specific
locations. Resolves name collisions deterministically.
"""

from __future__ import annotations

import logging
from dataclasses import dataclass
from pathlib import Path

from framework.skills.parser import ParsedSkill, parse_skill_md

logger = logging.getLogger(__name__)

# Directories to skip during scanning
_SKIP_DIRS = frozenset(
    {
        ".git",
        "node_modules",
        "__pycache__",
        ".venv",
        "venv",
        ".mypy_cache",
        ".pytest_cache",
        ".ruff_cache",
    }
)

# Scope priority (higher = takes precedence)
_SCOPE_PRIORITY = {
    "framework": 0,
    "user": 1,
    "project": 2,
}

# Within the same scope, Hive-specific paths override cross-client paths.
# We encode this by scanning cross-client first, then Hive-specific (later wins).


@dataclass
class DiscoveryConfig:
    """Configuration for skill discovery."""

    project_root: Path | None = None
    skip_user_scope: bool = False
    skip_framework_scope: bool = False
    max_depth: int = 4
    max_dirs: int = 2000


class SkillDiscovery:
    """Scans standard directories for SKILL.md files and resolves collisions."""

    def __init__(self, config: DiscoveryConfig | None = None):
        self._config = config or DiscoveryConfig()

    def discover(self) -> list[ParsedSkill]:
        """Scan all scopes and return deduplicated skill list.

        Scanning order (lowest to highest precedence):
        1. Framework defaults
        2. User cross-client (~/.agents/skills/)
        3. User Hive-specific (~/.hive/skills/)
        4. Project cross-client (<project>/.agents/skills/)
        5. Project Hive-specific (<project>/.hive/skills/)

        Later entries override earlier ones on name collision.
        """
        all_skills: list[ParsedSkill] = []

        # Framework scope (lowest precedence)
        if not self._config.skip_framework_scope:
            framework_dir = Path(__file__).parent / "_default_skills"
            if framework_dir.is_dir():
                all_skills.extend(self._scan_scope(framework_dir, "framework"))

        # User scope
        if not self._config.skip_user_scope:
            home = Path.home()

            # Cross-client (lower precedence within user scope)
            user_agents = home / ".agents" / "skills"
            if user_agents.is_dir():
                all_skills.extend(self._scan_scope(user_agents, "user"))

            # Hive-specific (higher precedence within user scope)
            user_hive = home / ".hive" / "skills"
            if user_hive.is_dir():
                all_skills.extend(self._scan_scope(user_hive, "user"))

        # Project scope (highest precedence)
        if self._config.project_root:
            root = self._config.project_root

            # Cross-client
            project_agents = root / ".agents" / "skills"
            if project_agents.is_dir():
                all_skills.extend(self._scan_scope(project_agents, "project"))

            # Hive-specific
            project_hive = root / ".hive" / "skills"
            if project_hive.is_dir():
                all_skills.extend(self._scan_scope(project_hive, "project"))

        resolved = self._resolve_collisions(all_skills)

        logger.info(
            "Skill discovery: found %d skills (%d after dedup) across all scopes",
            len(all_skills),
            len(resolved),
        )
        return resolved

    def _scan_scope(self, root: Path, scope: str) -> list[ParsedSkill]:
        """Scan a single directory for skill directories containing SKILL.md."""
        skills: list[ParsedSkill] = []
        dirs_scanned = 0

        for skill_md in self._find_skill_files(root, depth=0):
            if dirs_scanned >= self._config.max_dirs:
                logger.warning(
                    "Hit max directory limit (%d) scanning %s",
                    self._config.max_dirs,
                    root,
                )
                break

            parsed = parse_skill_md(skill_md, source_scope=scope)
            if parsed is not None:
                skills.append(parsed)
            dirs_scanned += 1

        return skills

    def _find_skill_files(self, directory: Path, depth: int) -> list[Path]:
        """Recursively find SKILL.md files up to max_depth."""
        if depth > self._config.max_depth:
            return []

        results: list[Path] = []

        try:
            entries = sorted(directory.iterdir())
        except OSError:
            return []

        for entry in entries:
            if not entry.is_dir():
                continue
            if entry.name in _SKIP_DIRS:
                continue

            skill_md = entry / "SKILL.md"
            if skill_md.is_file():
                results.append(skill_md)
            else:
                # Recurse into subdirectories
                results.extend(self._find_skill_files(entry, depth + 1))

        return results

    def _resolve_collisions(self, skills: list[ParsedSkill]) -> list[ParsedSkill]:
        """Resolve name collisions deterministically.

        Later entries in the list override earlier ones (because we scan
        from lowest to highest precedence). On collision, log a warning.
        """
        seen: dict[str, ParsedSkill] = {}

        for skill in skills:
            if skill.name in seen:
                existing = seen[skill.name]
                logger.warning(
                    "Skill name collision: '%s' from %s overrides %s",
                    skill.name,
                    skill.location,
                    existing.location,
                )
            seen[skill.name] = skill

        return list(seen.values())
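To make the precedence concrete, here is a sketch of how _resolve_collisions treats two same-named skills discovered in user and project scopes; the paths and descriptions are hypothetical:

# Hypothetical collision: project scope is scanned after user scope, so it wins.
user_skill = ParsedSkill(
    name="code-review", description="User copy",
    location="/home/u/.hive/skills/code-review/SKILL.md",
    base_dir="/home/u/.hive/skills/code-review", source_scope="user", body="...",
)
project_skill = ParsedSkill(
    name="code-review", description="Project copy",
    location="/proj/.hive/skills/code-review/SKILL.md",
    base_dir="/proj/.hive/skills/code-review", source_scope="project", body="...",
)
resolved = SkillDiscovery()._resolve_collisions([user_skill, project_skill])
assert resolved == [project_skill]  # later entry overrides, with a logged warning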
@@ -0,0 +1,165 @@
"""Unified skill lifecycle manager.

``SkillsManager`` is the single facade that owns skill discovery, loading,
and prompt rendering. The runtime creates one at startup and downstream
layers read the cached prompt strings.

Typical usage — **config-driven** (runner passes configuration)::

    config = SkillsManagerConfig(
        skills_config=SkillsConfig.from_agent_vars(...),
        project_root=agent_path,
    )
    mgr = SkillsManager(config)
    mgr.load()
    print(mgr.protocols_prompt)       # default skill protocols
    print(mgr.skills_catalog_prompt)  # community skills XML

Typical usage — **bare** (exported agents, SDK users)::

    mgr = SkillsManager()  # default config
    mgr.load()             # loads all 6 default skills, no community discovery
"""

from __future__ import annotations

import logging
from dataclasses import dataclass, field
from pathlib import Path

from framework.skills.config import SkillsConfig

logger = logging.getLogger(__name__)


@dataclass
class SkillsManagerConfig:
    """Everything the runtime needs to configure skills.

    Attributes:
        skills_config: Per-skill enable/disable and overrides.
        project_root: Agent directory for community skill discovery.
            When ``None``, community discovery is skipped.
        skip_community_discovery: Explicitly skip community scanning
            even when ``project_root`` is set.
    """

    skills_config: SkillsConfig = field(default_factory=SkillsConfig)
    project_root: Path | None = None
    skip_community_discovery: bool = False


class SkillsManager:
    """Unified skill lifecycle: discovery → loading → prompt rendering.

    The runtime creates one instance during init and owns it for the
    lifetime of the process. Downstream layers (``ExecutionStream``,
    ``GraphExecutor``, ``NodeContext``, ``EventLoopNode``) receive the
    cached prompt strings via property accessors.
    """

    def __init__(self, config: SkillsManagerConfig | None = None) -> None:
        self._config = config or SkillsManagerConfig()
        self._loaded = False
        self._catalog_prompt: str = ""
        self._protocols_prompt: str = ""

    # ------------------------------------------------------------------
    # Factory for backwards-compat bridge
    # ------------------------------------------------------------------

    @classmethod
    def from_precomputed(
        cls,
        skills_catalog_prompt: str = "",
        protocols_prompt: str = "",
    ) -> SkillsManager:
        """Wrap pre-rendered prompt strings (legacy callers).

        Returns a manager that skips discovery/loading and just returns
        the provided strings. Used by the deprecation bridge in
        ``AgentRuntime`` when callers pass raw prompt strings.
        """
        mgr = cls.__new__(cls)
        mgr._config = SkillsManagerConfig()
        mgr._loaded = True  # skip load()
        mgr._catalog_prompt = skills_catalog_prompt
        mgr._protocols_prompt = protocols_prompt
        return mgr

    # ------------------------------------------------------------------
    # Lifecycle
    # ------------------------------------------------------------------

    def load(self) -> None:
        """Discover, load, and cache skill prompts. Idempotent."""
        if self._loaded:
            return
        self._loaded = True

        try:
            self._do_load()
        except Exception:
            logger.warning("Skill system init failed (non-fatal)", exc_info=True)

    def _do_load(self) -> None:
        """Internal load — may raise; caller catches."""
        from framework.skills.catalog import SkillCatalog
        from framework.skills.defaults import DefaultSkillManager
        from framework.skills.discovery import DiscoveryConfig, SkillDiscovery

        skills_config = self._config.skills_config

        # 1. Community skill discovery (when project_root is available)
        catalog_prompt = ""
        if self._config.project_root is not None and not self._config.skip_community_discovery:
            discovery = SkillDiscovery(DiscoveryConfig(project_root=self._config.project_root))
            discovered = discovery.discover()
            catalog = SkillCatalog(discovered)
            catalog_prompt = catalog.to_prompt()

            # Pre-activated community skills
            if skills_config.skills:
                pre_activated = catalog.build_pre_activated_prompt(skills_config.skills)
                if pre_activated:
                    if catalog_prompt:
                        catalog_prompt = f"{catalog_prompt}\n\n{pre_activated}"
                    else:
                        catalog_prompt = pre_activated

        # 2. Default skills (always loaded unless explicitly disabled)
        default_mgr = DefaultSkillManager(config=skills_config)
        default_mgr.load()
        default_mgr.log_active_skills()
        protocols_prompt = default_mgr.build_protocols_prompt()

        # 3. Cache
        self._catalog_prompt = catalog_prompt
        self._protocols_prompt = protocols_prompt

        if protocols_prompt:
            logger.info(
                "Skill system ready: protocols=%d chars, catalog=%d chars",
                len(protocols_prompt),
                len(catalog_prompt),
            )
        else:
            logger.warning("Skill system produced empty protocols_prompt")

    # ------------------------------------------------------------------
    # Prompt accessors (consumed by downstream layers)
    # ------------------------------------------------------------------

    @property
    def skills_catalog_prompt(self) -> str:
        """Community skills XML catalog for system prompt injection."""
        return self._catalog_prompt

    @property
    def protocols_prompt(self) -> str:
        """Default skill operational protocols for system prompt injection."""
        return self._protocols_prompt

    @property
    def is_loaded(self) -> bool:
        return self._loaded
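A sketch of the legacy bridge path described in from_precomputed above; the prompt strings are placeholders:

# Legacy-bridge sketch: wrap strings already rendered elsewhere.
mgr = SkillsManager.from_precomputed(
    skills_catalog_prompt="<available_skills>...</available_skills>",
    protocols_prompt="## Operational Protocols\n...",
)
assert mgr.is_loaded  # load() becomes a no-op
assert mgr.protocols_prompt.startswith("## Operational Protocols")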
@@ -0,0 +1,158 @@
|
||||
"""SKILL.md parser — extracts YAML frontmatter and markdown body.
|
||||
|
||||
Parses SKILL.md files per the Agent Skills standard (agentskills.io/specification).
|
||||
Lenient validation: warns on non-critical issues, skips only on missing description
|
||||
or completely unparseable YAML.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import re
|
||||
from dataclasses import dataclass
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Maximum name length before a warning is logged
|
||||
_MAX_NAME_LENGTH = 64
|
||||
|
||||
|
||||
@dataclass
|
||||
class ParsedSkill:
|
||||
"""In-memory representation of a parsed SKILL.md file."""
|
||||
|
||||
name: str
|
||||
description: str
|
||||
location: str # absolute path to SKILL.md
|
||||
base_dir: str # parent directory of SKILL.md
|
||||
source_scope: str # "project", "user", or "framework"
|
||||
body: str # markdown body after closing ---
|
||||
|
||||
# Optional frontmatter fields
|
||||
license: str | None = None
|
||||
compatibility: list[str] | None = None
|
||||
metadata: dict[str, Any] | None = None
|
||||
allowed_tools: list[str] | None = None
|
||||
|
||||
|
||||
def _try_fix_yaml(raw: str) -> str:
|
||||
"""Attempt to fix common YAML issues (unquoted colon values).
|
||||
|
||||
Some SKILL.md files written for other clients may contain unquoted
|
||||
values with colons, e.g. ``description: Use for: research tasks``.
|
||||
This wraps such values in quotes as a best-effort fixup.
|
||||
"""
|
||||
lines = raw.split("\n")
|
||||
fixed = []
|
||||
for line in lines:
|
||||
# Match "key: value" where value contains an unquoted colon
|
||||
        m = re.match(r"^(\s*\w[\w-]*:\s*)(.+)$", line)
        if m:
            key_part, value_part = m.group(1), m.group(2)
            # If value contains a colon and isn't already quoted
            if ":" in value_part and not (value_part.startswith('"') or value_part.startswith("'")):
                value_part = f'"{value_part}"'
            fixed.append(f"{key_part}{value_part}")
        else:
            fixed.append(line)
    return "\n".join(fixed)


def parse_skill_md(path: Path, source_scope: str = "project") -> ParsedSkill | None:
    """Parse a SKILL.md file into a ParsedSkill record.

    Args:
        path: Absolute path to the SKILL.md file.
        source_scope: One of "project", "user", or "framework".

    Returns:
        ParsedSkill on success, None if the file is unparseable or
        missing required fields (description).
    """
    try:
        content = path.read_text(encoding="utf-8")
    except OSError as exc:
        logger.error("Failed to read %s: %s", path, exc)
        return None

    if not content.strip():
        logger.error("Empty SKILL.md: %s", path)
        return None

    # Split on --- delimiters (first two occurrences)
    parts = content.split("---", 2)
    if len(parts) < 3:
        logger.error("SKILL.md missing YAML frontmatter delimiters (---): %s", path)
        return None

    # parts[0] is content before first --- (should be empty or whitespace)
    # parts[1] is the YAML frontmatter
    # parts[2] is the markdown body
    raw_yaml = parts[1].strip()
    body = parts[2].strip()

    if not raw_yaml:
        logger.error("Empty YAML frontmatter in %s", path)
        return None

    # Parse YAML
    import yaml

    frontmatter: dict[str, Any] | None = None
    try:
        frontmatter = yaml.safe_load(raw_yaml)
    except yaml.YAMLError:
        # Fallback: try fixing unquoted colon values
        try:
            fixed = _try_fix_yaml(raw_yaml)
            frontmatter = yaml.safe_load(fixed)
            logger.warning("Fixed YAML parse issues in %s (unquoted colons)", path)
        except yaml.YAMLError as exc:
            logger.error("Unparseable YAML in %s: %s", path, exc)
            return None

    if not isinstance(frontmatter, dict):
        logger.error("YAML frontmatter is not a mapping in %s", path)
        return None

    # Required: description
    description = frontmatter.get("description")
    if not description or not str(description).strip():
        logger.error("Missing or empty 'description' in %s — skipping skill", path)
        return None

    # Required: name (fallback to parent directory name)
    name = frontmatter.get("name")
    parent_dir_name = path.parent.name
    if not name or not str(name).strip():
        name = parent_dir_name
        logger.warning("Missing 'name' in %s — using directory name '%s'", path, name)
    else:
        name = str(name).strip()

    # Lenient warnings
    if len(name) > _MAX_NAME_LENGTH:
        logger.warning("Skill name exceeds %d chars in %s: '%s'", _MAX_NAME_LENGTH, path, name)

    if name != parent_dir_name and not name.endswith(f".{parent_dir_name}"):
        logger.warning(
            "Skill name '%s' doesn't match parent directory '%s' in %s",
            name,
            parent_dir_name,
            path,
        )

    return ParsedSkill(
        name=name,
        description=str(description).strip(),
        location=str(path.resolve()),
        base_dir=str(path.parent.resolve()),
        source_scope=source_scope,
        body=body,
        license=frontmatter.get("license"),
        compatibility=frontmatter.get("compatibility"),
        metadata=frontmatter.get("metadata"),
        allowed_tools=frontmatter.get("allowed-tools"),
    )

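# Usage sketch (hypothetical file path — not part of the diff above): how the
# YAML repair fallback and parse_skill_md fit together. A frontmatter value
# such as `description: Summarize: quickly` fails yaml.safe_load; _try_fix_yaml
# rewrites it to `description: "Summarize: quickly"` and the parse succeeds.
#
#     skill = parse_skill_md(Path("skills/summarize/SKILL.md"), source_scope="project")
#     if skill is not None:
#         print(skill.name, skill.description)
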
@@ -40,18 +40,31 @@ class LLMJudge:

    def _get_fallback_provider(self) -> LLMProvider | None:
        """
        Auto-detects available API keys and returns the appropriate provider.
        Priority: OpenAI -> Anthropic.
        Auto-detects available API keys and returns an appropriate provider.
        Uses LiteLLM for OpenAI (framework has no framework.llm.openai module).
        Priority:
        1. OpenAI-compatible models via LiteLLM (OPENAI_API_KEY)
        2. Anthropic via AnthropicProvider (ANTHROPIC_API_KEY)
        """
        # OpenAI: use LiteLLM (the framework's standard multi-provider integration)
        if os.environ.get("OPENAI_API_KEY"):
            from framework.llm.openai import OpenAIProvider
            try:
                from framework.llm.litellm import LiteLLMProvider

            return OpenAIProvider(model="gpt-4o-mini")
                return LiteLLMProvider(model="gpt-4o-mini")
            except ImportError:
                # LiteLLM is optional; fall through to Anthropic/None
                pass

        # Anthropic via dedicated provider (wraps LiteLLM internally)
        if os.environ.get("ANTHROPIC_API_KEY"):
            from framework.llm.anthropic import AnthropicProvider
            try:
                from framework.llm.anthropic import AnthropicProvider

            return AnthropicProvider(model="claude-3-haiku-20240307")
                return AnthropicProvider(model="claude-haiku-4-5-20251001")
            except Exception:
                # If AnthropicProvider cannot be constructed, treat as no fallback
                return None

        return None

@@ -77,11 +90,16 @@ SUMMARY TO EVALUATE:
Respond with JSON: {{"passes": true/false, "explanation": "..."}}"""

        try:
            # Compute fallback provider once so we do not create multiple instances
            fallback_provider = self._get_fallback_provider()

            # 1. Use injected provider
            if self._provider:
                active_provider = self._provider
            # 2. Check if _get_client was MOCKED (legacy tests) or use Agnostic Fallback
            elif hasattr(self._get_client, "return_value") or not self._get_fallback_provider():
            # 2. Legacy path: anthropic client mocked in tests takes precedence,
            #    or no fallback provider is available.
            elif hasattr(self._get_client, "return_value") or fallback_provider is None:
                # Use legacy Anthropic client (e.g. when tests mock _get_client, or no env keys set)
                client = self._get_client()
                response = client.messages.create(
                    model="claude-haiku-4-5-20251001",
@@ -90,7 +108,8 @@ Respond with JSON: {{"passes": true/false, "explanation": "..."}}"""
                )
                return self._parse_json_result(response.content[0].text.strip())
            else:
                active_provider = self._get_fallback_provider()
                # Use env-based fallback (LiteLLM or AnthropicProvider)
                active_provider = fallback_provider

            response = active_provider.complete(
                messages=[{"role": "user", "content": prompt}],

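# Behavior sketch of the new fallback order (assuming framework.llm.litellm and
# framework.llm.anthropic are both importable):
#   OPENAI_API_KEY set          -> LiteLLMProvider(model="gpt-4o-mini")
#   only ANTHROPIC_API_KEY set  -> AnthropicProvider(model="claude-haiku-4-5-20251001")
#   neither set / import fails  -> None (judge falls back to the legacy client path)
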
@@ -36,8 +36,9 @@ from __future__ import annotations
import asyncio
import json
import logging
import time
from dataclasses import dataclass, field
from datetime import UTC
from datetime import UTC, datetime
from pathlib import Path
from typing import TYPE_CHECKING, Any

@@ -108,6 +109,9 @@ class QueenPhaseState:
    prompt_staging: str = ""
    prompt_running: str = ""

    # Default skill operational protocols — appended to every phase prompt
    protocols_prompt: str = ""

    def get_current_tools(self) -> list:
        """Return tools for the current phase."""
        if self.phase == "planning":

@@ -132,7 +136,12 @@ class QueenPhaseState:
        from framework.agents.queen.queen_memory import format_for_injection

        memory = format_for_injection()
        return base + ("\n\n" + memory if memory else "")
        parts = [base]
        if self.protocols_prompt:
            parts.append(self.protocols_prompt)
        if memory:
            parts.append(memory)
        return "\n\n".join(parts)

    async def _emit_phase_event(self) -> None:
        """Publish a QUEEN_PHASE_CHANGED event so the frontend updates the tag."""

@@ -285,10 +294,6 @@ def build_worker_profile(runtime: AgentRuntime, agent_path: Path | str | None =
    return "\n".join(lines)


# Classical flowchart symbols per ISO 5807 / ANSI standards.
# Each type maps to a standard shape and a unique color for the
# frontend renderer. Shapes use Mermaid-compatible names where
# possible so the frontend can render them directly.
_FLOWCHART_TYPES = {
    # ── Core symbols (ISO 5807 §4) ──────────────────────────
    # Terminator — rounded rectangle (stadium shape)

@@ -351,6 +356,211 @@ _FLOWCHART_TYPES = {
}


def _read_agent_triggers_json(agent_path: Path) -> list[dict]:
    """Read triggers.json from the agent's export directory."""
    triggers_path = agent_path / "triggers.json"
    if not triggers_path.exists():
        return []
    try:
        data = json.loads(triggers_path.read_text(encoding="utf-8"))
        return data if isinstance(data, list) else []
    except (json.JSONDecodeError, OSError):
        return []


def _write_agent_triggers_json(agent_path: Path, triggers: list[dict]) -> None:
    """Write triggers.json to the agent's export directory."""
    triggers_path = agent_path / "triggers.json"
    triggers_path.write_text(
        json.dumps(triggers, indent=2, ensure_ascii=False) + "\n",
        encoding="utf-8",
    )


def _save_trigger_to_agent(session: Any, trigger_id: str, tdef: Any) -> None:
    """Persist a trigger definition to the agent's triggers.json."""
    agent_path = getattr(session, "worker_path", None)
    if agent_path is None:
        return
    triggers = _read_agent_triggers_json(agent_path)
    triggers = [t for t in triggers if t.get("id") != trigger_id]
    triggers.append(
        {
            "id": tdef.id,
            "name": tdef.description or tdef.id,
            "trigger_type": tdef.trigger_type,
            "trigger_config": tdef.trigger_config,
            "task": tdef.task or "",
        }
    )
    _write_agent_triggers_json(agent_path, triggers)
    logger.info("Saved trigger '%s' to %s/triggers.json", trigger_id, agent_path)


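# Resulting triggers.json shape (illustrative values, matching the dict built
# in _save_trigger_to_agent above):
# [
#   {
#     "id": "daily-digest",
#     "name": "daily-digest",
#     "trigger_type": "timer",
#     "trigger_config": {"interval_minutes": 1440},
#     "task": "Summarize yesterday's inbox."
#   }
# ]
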
def _remove_trigger_from_agent(session: Any, trigger_id: str) -> None:
    """Remove a trigger definition from the agent's triggers.json."""
    agent_path = getattr(session, "worker_path", None)
    if agent_path is None:
        return
    triggers = _read_agent_triggers_json(agent_path)
    updated = [t for t in triggers if t.get("id") != trigger_id]
    if len(updated) != len(triggers):
        _write_agent_triggers_json(agent_path, updated)
        logger.info("Removed trigger '%s' from %s/triggers.json", trigger_id, agent_path)


async def _persist_active_triggers(session: Any, session_id: str) -> None:
    """Persist the set of active trigger IDs (and their tasks) to SessionState."""
    runtime = getattr(session, "worker_runtime", None)
    if runtime is None:
        return
    store = getattr(runtime, "_session_store", None)
    if store is None:
        return
    try:
        state = await store.read_state(session_id)
        if state is None:
            return
        active_ids = list(getattr(session, "active_trigger_ids", set()))
        state.active_triggers = active_ids
        # Persist per-trigger task overrides
        available = getattr(session, "available_triggers", {})
        state.trigger_tasks = {
            tid: available[tid].task
            for tid in active_ids
            if tid in available and available[tid].task
        }
        await store.write_state(session_id, state)
    except Exception:
        logger.warning(
            "Failed to persist active triggers for session %s", session_id, exc_info=True
        )


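# Persisted SessionState fields after _persist_active_triggers (illustrative):
#   state.active_triggers = ["daily-digest", "gh-webhook"]
#   state.trigger_tasks   = {"daily-digest": "Summarize yesterday's inbox."}
# Only triggers with a non-empty task get an entry in trigger_tasks.
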
async def _start_trigger_timer(session: Any, trigger_id: str, tdef: Any) -> None:
    """Start an asyncio background task that fires the trigger on a timer."""
    from framework.graph.event_loop_node import TriggerEvent

    cron_expr = tdef.trigger_config.get("cron")
    interval_minutes = tdef.trigger_config.get("interval_minutes")

    async def _timer_loop() -> None:
        if cron_expr:
            from croniter import croniter

            cron = croniter(cron_expr, datetime.now(tz=UTC))

        while True:
            try:
                if cron_expr:
                    next_fire = cron.get_next(datetime)
                    delay = (next_fire - datetime.now(tz=UTC)).total_seconds()
                    if delay > 0:
                        await asyncio.sleep(delay)
                else:
                    await asyncio.sleep(float(interval_minutes) * 60)

                # Record next fire time for introspection (monotonic, matches routes)
                fire_times = getattr(session, "trigger_next_fire", None)
                if fire_times is not None:
                    _next_delay = float(interval_minutes) * 60 if interval_minutes else 60
                    fire_times[trigger_id] = time.monotonic() + _next_delay

                # Gate on worker being loaded
                if getattr(session, "worker_runtime", None) is None:
                    continue

                # Fire into queen node
                executor = getattr(session, "queen_executor", None)
                if executor is None:
                    continue
                queen_node = getattr(executor, "node_registry", {}).get("queen")
                if queen_node is None:
                    continue

                event = TriggerEvent(
                    trigger_type="timer",
                    source_id=trigger_id,
                    payload={
                        "task": tdef.task or "",
                        "trigger_config": tdef.trigger_config,
                    },
                )
                await queen_node.inject_trigger(event)
            except asyncio.CancelledError:
                raise
            except Exception:
                logger.warning("Timer trigger '%s' tick failed", trigger_id, exc_info=True)

    task = asyncio.create_task(_timer_loop(), name=f"trigger_timer_{trigger_id}")
    if not hasattr(session, "active_timer_tasks"):
        session.active_timer_tasks = {}
    session.active_timer_tasks[trigger_id] = task


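# Timer config sketch (illustrative values): exactly one of the two keys is
# read by _timer_loop — cron takes precedence when both are present.
#   tdef.trigger_config = {"cron": "*/5 * * * *"}    # croniter-scheduled, every 5 min
#   tdef.trigger_config = {"interval_minutes": 30}   # plain fixed-interval sleep
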
async def _start_trigger_webhook(session: Any, trigger_id: str, tdef: Any) -> None:
    """Subscribe to WEBHOOK_RECEIVED events and route matching ones to the queen."""
    from framework.graph.event_loop_node import TriggerEvent
    from framework.runtime.webhook_server import WebhookRoute, WebhookServer, WebhookServerConfig

    bus = session.event_bus
    path = tdef.trigger_config.get("path", "")
    methods = [m.upper() for m in tdef.trigger_config.get("methods", ["POST"])]

    async def _on_webhook(event: AgentEvent) -> None:
        data = event.data or {}
        if data.get("path") != path:
            return
        if data.get("method", "").upper() not in methods:
            return
        # Gate on worker being loaded
        if getattr(session, "worker_runtime", None) is None:
            return
        executor = getattr(session, "queen_executor", None)
        if executor is None:
            return
        queen_node = getattr(executor, "node_registry", {}).get("queen")
        if queen_node is None:
            return

        trigger_event = TriggerEvent(
            trigger_type="webhook",
            source_id=trigger_id,
            payload={
                "task": tdef.task or "",
                "path": data.get("path", ""),
                "method": data.get("method", ""),
                "headers": data.get("headers", {}),
                "payload": data.get("payload", {}),
                "query_params": data.get("query_params", {}),
            },
        )
        await queen_node.inject_trigger(trigger_event)

    sub_id = bus.subscribe(
        event_types=[EventType.WEBHOOK_RECEIVED],
        handler=_on_webhook,
        filter_stream=trigger_id,
    )
    if not hasattr(session, "active_webhook_subs"):
        session.active_webhook_subs = {}
    session.active_webhook_subs[trigger_id] = sub_id

    # Ensure the webhook HTTP server is running
    if getattr(session, "queen_webhook_server", None) is None:
        port = int(tdef.trigger_config.get("port", 8090))
        config = WebhookServerConfig(host="127.0.0.1", port=port)
        server = WebhookServer(bus, config)
        session.queen_webhook_server = server

    server = session.queen_webhook_server
    route = WebhookRoute(source_id=trigger_id, path=path, methods=methods)
    server.add_route(route)
    if not getattr(server, "is_running", False):
        await server.start()
        server.is_running = True


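# Webhook config sketch (illustrative values): requests are matched on path and
# method, then forwarded to the queen node as a TriggerEvent.
#   tdef.trigger_config = {"path": "/hooks/github", "methods": ["POST"], "port": 8090}
#   # e.g. curl -X POST http://127.0.0.1:8090/hooks/github -d '{"action": "opened"}'
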
def _dissolve_planning_nodes(
    draft: dict,
) -> tuple[dict, dict[str, list[str]]]:

@@ -2517,7 +2727,6 @@ def register_queen_lifecycle_tools(

    def _format_time_ago(ts) -> str:
        """Format a datetime as relative time ago."""
        from datetime import datetime

        now = datetime.now(UTC)
        if ts.tzinfo is None:

@@ -2555,7 +2764,6 @@ def register_queen_lifecycle_tools(
        - pending_question (when waiting)
        - _active_execs (internal, stripped before return)
        """
        from datetime import datetime

        graph_id = runtime.graph_id
        reg = runtime.get_graph_registration(graph_id)

@@ -2655,6 +2863,16 @@ def register_queen_lifecycle_tools(
        else:
            parts.append("No issues detected")

        # Latest subagent progress (if any delegation is in flight)
        bus = _get_event_bus()
        if bus:
            sa_reports = bus.get_history(event_type=EventType.SUBAGENT_REPORT, limit=1)
            if sa_reports:
                latest = sa_reports[0]
                sa_msg = str(latest.data.get("message", ""))[:200]
                ago = _format_time_ago(latest.timestamp)
                parts.append(f"Latest subagent update ({ago}): {sa_msg}")

        return ". ".join(parts) + "."

    def _format_activity(bus: EventBus, preamble: dict[str, Any], last_n: int) -> str:

@@ -2782,6 +3000,10 @@ def register_queen_lifecycle_tools(
                duration = evt.data.get("duration_s")
                dur_str = f", {duration:.1f}s" if duration else ""
                lines.append(f" {name} ({node}) — {status}{dur_str}")
                result_text = evt.data.get("result", "")
                if result_text:
                    preview = str(result_text)[:300].replace("\n", " ")
                    lines.append(f" Result: {preview}")
        else:
            lines.append("No recent tool calls.")

@@ -2948,15 +3170,19 @@ def register_queen_lifecycle_tools(
            for evt in running
        ]
        if tool_completed:
            result["recent_tool_calls"] = [
                {
            recent_calls = []
            for evt in tool_completed[:last_n]:
                entry: dict[str, Any] = {
                    "tool": evt.data.get("tool_name"),
                    "error": bool(evt.data.get("is_error")),
                    "node": evt.node_id,
                    "time": evt.timestamp.isoformat(),
                }
                for evt in tool_completed[:last_n]
            ]
                result_text = evt.data.get("result", "")
                if result_text:
                    entry["result_preview"] = str(result_text)[:300]
                recent_calls.append(entry)
            result["recent_tool_calls"] = recent_calls

        # Node transitions
        edges = bus.get_history(event_type=EventType.EDGE_TRAVERSED, limit=last_n)

@@ -3009,6 +3235,18 @@ def register_queen_lifecycle_tools(
        if issues:
            result["issues"] = issues

        # Subagent activity (in-flight progress from delegated subagents)
        sa_reports = bus.get_history(event_type=EventType.SUBAGENT_REPORT, limit=last_n)
        if sa_reports:
            result["subagent_activity"] = [
                {
                    "subagent": evt.data.get("subagent_id"),
                    "message": str(evt.data.get("message", ""))[:300],
                    "time": evt.timestamp.isoformat(),
                }
                for evt in sa_reports[:last_n]
            ]

        # Constraint violations
        violations = bus.get_history(event_type=EventType.CONSTRAINT_VIOLATION, limit=5)
        if violations:

@@ -3738,5 +3976,319 @@ def register_queen_lifecycle_tools(
        )
        tools_registered += 1

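    # Shape of the fields added to the status payload above (illustrative values):
    #   result["recent_tool_calls"] = [{"tool": "web_search", "error": False,
    #       "node": "queen", "time": "...", "result_preview": "<=300 chars"}]
    #   result["subagent_activity"] = [{"subagent": "researcher-1",
    #       "message": "<=300 chars of progress", "time": "..."}]
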
    # --- set_trigger -----------------------------------------------------------

    async def set_trigger(
        trigger_id: str,
        trigger_type: str | None = None,
        trigger_config: dict | None = None,
        task: str | None = None,
    ) -> str:
        """Activate a trigger so it fires periodically into the queen."""
        if trigger_id in getattr(session, "active_trigger_ids", set()):
            return json.dumps({"error": f"Trigger '{trigger_id}' is already active."})

        # Look up existing or create new
        available = getattr(session, "available_triggers", {})
        tdef = available.get(trigger_id)

        if tdef is None:
            if trigger_type and trigger_config:
                from framework.runtime.triggers import TriggerDefinition

                tdef = TriggerDefinition(
                    id=trigger_id,
                    trigger_type=trigger_type,
                    trigger_config=trigger_config,
                )
                available[trigger_id] = tdef
            else:
                return json.dumps(
                    {
                        "error": (
                            f"Trigger '{trigger_id}' not found. "
                            "Provide trigger_type and trigger_config to create a custom trigger."
                        )
                    }
                )

        # Apply task override if provided
        if task:
            tdef.task = task

        # Task is mandatory before activation
        if not tdef.task:
            return json.dumps(
                {
                    "error": f"Trigger '{trigger_id}' has no task configured. "
                    "Set a task describing what the worker should do when this trigger fires."
                }
            )

        # Use provided overrides if given
        t_type = trigger_type or tdef.trigger_type
        t_config = trigger_config or tdef.trigger_config
        if trigger_type:
            tdef.trigger_type = t_type
        if trigger_config:
            tdef.trigger_config = t_config

        # Validate and activate by type
        if t_type == "webhook":
            path = t_config.get("path", "").strip()
            if not path or not path.startswith("/"):
                return json.dumps(
                    {
                        "error": (
                            "Webhook trigger requires 'path' starting with '/'"
                            " in trigger_config (e.g. '/hooks/github')."
                        )
                    }
                )
            valid_methods = {"GET", "POST", "PUT", "PATCH", "DELETE", "HEAD", "OPTIONS"}
            methods = t_config.get("methods", ["POST"])
            invalid = [m.upper() for m in methods if m.upper() not in valid_methods]
            if invalid:
                return json.dumps(
                    {"error": f"Invalid HTTP methods: {invalid}. Valid: {sorted(valid_methods)}"}
                )

            try:
                await _start_trigger_webhook(session, trigger_id, tdef)
            except Exception as e:
                return json.dumps({"error": f"Failed to start webhook trigger: {e}"})

            tdef.active = True
            session.active_trigger_ids.add(trigger_id)
            await _persist_active_triggers(session, session_id)
            _save_trigger_to_agent(session, trigger_id, tdef)
            bus = getattr(session, "event_bus", None)
            if bus:
                await bus.publish(
                    AgentEvent(
                        type=EventType.TRIGGER_ACTIVATED,
                        stream_id="queen",
                        data={
                            "trigger_id": trigger_id,
                            "trigger_type": t_type,
                            "trigger_config": t_config,
                        },
                    )
                )
            port = int(t_config.get("port", 8090))
            return json.dumps(
                {
                    "status": "activated",
                    "trigger_id": trigger_id,
                    "trigger_type": t_type,
                    "webhook_url": f"http://127.0.0.1:{port}{path}",
                }
            )

        if t_type != "timer":
            return json.dumps({"error": f"Unsupported trigger type: {t_type}"})

        cron_expr = t_config.get("cron")
        interval = t_config.get("interval_minutes")
        if cron_expr:
            try:
                from croniter import croniter

                if not croniter.is_valid(cron_expr):
                    return json.dumps({"error": f"Invalid cron expression: {cron_expr}"})
            except ImportError:
                return json.dumps(
                    {"error": "croniter package not installed — cannot validate cron expression."}
                )
        elif interval:
            if not isinstance(interval, (int, float)) or interval <= 0:
                return json.dumps({"error": f"interval_minutes must be > 0, got {interval}"})
        else:
            return json.dumps(
                {"error": "Timer trigger needs 'cron' or 'interval_minutes' in trigger_config."}
            )

        # Start timer
        try:
            await _start_trigger_timer(session, trigger_id, tdef)
        except Exception as e:
            return json.dumps({"error": f"Failed to start trigger timer: {e}"})

        tdef.active = True
        session.active_trigger_ids.add(trigger_id)

        # Persist to session state and agent definition
        await _persist_active_triggers(session, session_id)
        _save_trigger_to_agent(session, trigger_id, tdef)

        # Emit event
        bus = getattr(session, "event_bus", None)
        if bus:
            await bus.publish(
                AgentEvent(
                    type=EventType.TRIGGER_ACTIVATED,
                    stream_id="queen",
                    data={
                        "trigger_id": trigger_id,
                        "trigger_type": t_type,
                        "trigger_config": t_config,
                    },
                )
            )

        return json.dumps(
            {
                "status": "activated",
                "trigger_id": trigger_id,
                "trigger_type": t_type,
                "trigger_config": t_config,
            }
        )

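    # Call sketch (illustrative arguments): create and activate a custom timer
    # trigger in one step. Returns a JSON string either way.
    #   await set_trigger(
    #       trigger_id="daily-digest",
    #       trigger_type="timer",
    #       trigger_config={"cron": "0 8 * * *"},
    #       task="Summarize yesterday's inbox.",
    #   )
    #   # -> '{"status": "activated", "trigger_id": "daily-digest", ...}'
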
    _set_trigger_tool = Tool(
        name="set_trigger",
        description=(
            "Activate a trigger (timer or webhook) so it fires into the queen. "
            "Use trigger_id of an available trigger, or provide trigger_type + trigger_config"
            " to create a custom one. "
            "A task must be configured before activation —"
            " either pre-set on the trigger or provided here."
        ),
        parameters={
            "type": "object",
            "properties": {
                "trigger_id": {
                    "type": "string",
                    "description": (
                        "ID of the trigger to activate (from list_triggers) or a new custom ID"
                    ),
                },
                "trigger_type": {
                    "type": "string",
                    "description": "Type of trigger ('timer' or 'webhook'). Only needed for custom triggers.",
                },
                "trigger_config": {
                    "type": "object",
                    "description": (
                        "Config for the trigger."
                        " Timer: {cron: '*/5 * * * *'} or {interval_minutes: 5}."
                        " Webhook: {path: '/hooks/github', methods: ['POST']}."
                        " Only needed for custom triggers."
                    ),
                },
                "task": {
                    "type": "string",
                    "description": (
                        "The task/instructions for the worker when this trigger fires"
                        " (e.g. 'Process inbox emails using saved rules')."
                        " Required if not already configured on the trigger."
                    ),
                },
            },
            "required": ["trigger_id"],
        },
    )
    registry.register("set_trigger", _set_trigger_tool, lambda inputs: set_trigger(**inputs))
    tools_registered += 1

    # --- remove_trigger --------------------------------------------------------

    async def remove_trigger(trigger_id: str) -> str:
        """Deactivate an active trigger."""
        if trigger_id not in getattr(session, "active_trigger_ids", set()):
            return json.dumps({"error": f"Trigger '{trigger_id}' is not active."})

        # Cancel timer task (if timer trigger); guard with getattr so sessions
        # that never started a timer (webhook-only) don't raise AttributeError
        task = getattr(session, "active_timer_tasks", {}).pop(trigger_id, None)
        if task and not task.done():
            task.cancel()
        getattr(session, "trigger_next_fire", {}).pop(trigger_id, None)

        # Unsubscribe webhook handler (if webhook trigger)
        webhook_subs = getattr(session, "active_webhook_subs", {})
        if sub_id := webhook_subs.pop(trigger_id, None):
            try:
                session.event_bus.unsubscribe(sub_id)
            except Exception:
                pass

        session.active_trigger_ids.discard(trigger_id)

        # Mark inactive
        available = getattr(session, "available_triggers", {})
        tdef = available.get(trigger_id)
        if tdef:
            tdef.active = False

        # Persist to session state and remove from agent definition
        await _persist_active_triggers(session, session_id)
        _remove_trigger_from_agent(session, trigger_id)

        # Emit event
        bus = getattr(session, "event_bus", None)
        if bus:
            await bus.publish(
                AgentEvent(
                    type=EventType.TRIGGER_DEACTIVATED,
                    stream_id="queen",
                    data={"trigger_id": trigger_id},
                )
            )

        return json.dumps({"status": "deactivated", "trigger_id": trigger_id})

    _remove_trigger_tool = Tool(
        name="remove_trigger",
        description=(
            "Deactivate an active trigger."
            " The trigger stops firing but remains available for re-activation."
        ),
        parameters={
            "type": "object",
            "properties": {
                "trigger_id": {
                    "type": "string",
                    "description": "ID of the trigger to deactivate",
                },
            },
            "required": ["trigger_id"],
        },
    )
    registry.register(
        "remove_trigger", _remove_trigger_tool, lambda inputs: remove_trigger(**inputs)
    )
    tools_registered += 1

    # --- list_triggers ---------------------------------------------------------

    async def list_triggers() -> str:
        """List all available triggers and their status."""
        available = getattr(session, "available_triggers", {})
        triggers = []
        for tdef in available.values():
            triggers.append(
                {
                    "id": tdef.id,
                    "trigger_type": tdef.trigger_type,
                    "trigger_config": tdef.trigger_config,
                    "description": tdef.description,
                    "task": tdef.task,
                    "active": tdef.active,
                }
            )
        return json.dumps({"triggers": triggers})

    _list_triggers_tool = Tool(
        name="list_triggers",
        description=(
            "List all available triggers (from the loaded worker) and their active/inactive status."
        ),
        parameters={
            "type": "object",
            "properties": {},
        },
    )
    registry.register("list_triggers", _list_triggers_tool, lambda inputs: list_triggers())
    tools_registered += 1

    logger.info("Registered %d queen lifecycle tools", tools_registered)
    return tools_registered

@@ -1,8 +1,9 @@
"""Tool for the queen to write to her episodic memory.
"""Tools for the queen to read and write episodic memory.

The queen can consciously record significant moments during a session — like
writing in a diary. Semantic memory (MEMORY.md) is updated automatically at
session end and is never written by the queen directly.
writing in a diary — and recall past diary entries when needed. Semantic
memory (MEMORY.md) is updated automatically at session end and is never
written by the queen directly.
"""

from __future__ import annotations

@@ -33,6 +34,67 @@ def write_to_diary(entry: str) -> str:
    return "Diary entry recorded."


def recall_diary(query: str = "", days_back: int = 7) -> str:
    """Search recent diary entries (episodic memory).

    Use this when the user asks about what happened in the past — "what did we
    do yesterday?", "what happened last week?", "remind me about the pipeline
    issue", etc. Also use it proactively when you need context from recent
    sessions to answer a question or make a decision.

    Args:
        query: Optional keyword or phrase to filter entries. If empty, all
            recent entries are returned.
        days_back: How many days to look back (1–30). Defaults to 7.
    """
    from datetime import date, timedelta

    from framework.agents.queen.queen_memory import read_episodic_memory

    days_back = max(1, min(days_back, 30))
    today = date.today()
    results: list[str] = []
    total_chars = 0
    char_budget = 12_000

    for offset in range(days_back):
        d = today - timedelta(days=offset)
        content = read_episodic_memory(d)
        if not content:
            continue
        # If a query is given, only include entries that mention it
        if query:
            # Check each section (split by ###) for relevance
            sections = content.split("### ")
            matched = [s for s in sections if query.lower() in s.lower()]
            if not matched:
                continue
            content = "### ".join(matched)
        label = d.strftime("%B %-d, %Y")
        if d == today:
            label = f"Today — {label}"
        entry = f"## {label}\n\n{content}"
        if total_chars + len(entry) > char_budget:
            remaining = char_budget - total_chars
            if remaining > 200:
                # Fit a partial entry within budget
                trimmed = content[: remaining - 100] + "\n\n…(truncated)"
                results.append(f"## {label}\n\n{trimmed}")
            else:
                results.append(f"## {label}\n\n(truncated — hit size limit)")
            break
        results.append(entry)
        total_chars += len(entry)

    if not results:
        if query:
            return f"No diary entries matching '{query}' in the last {days_back} days."
        return f"No diary entries found in the last {days_back} days."

    return "\n\n---\n\n".join(results)


def register_queen_memory_tools(registry: ToolRegistry) -> None:
    """Register the episodic memory tool into the queen's tool registry."""
    """Register the episodic memory tools into the queen's tool registry."""
    registry.register_function(write_to_diary)
    registry.register_function(recall_diary)

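# Call sketch (illustrative): keyword search over the last two weeks of diary
# entries; matching "### " sections are grouped under per-day "## " headers and
# capped at the 12,000-character budget.
#   recall_diary(query="pipeline", days_back=14)
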
@@ -78,19 +78,6 @@ def register_graph_tools(registry: ToolRegistry, runtime: AgentRuntime) -> int:
            isolation_level="shared",
        )

        # Async entry points
        for aep in runner.graph.async_entry_points:
            entry_points[aep.id] = EntryPointSpec(
                id=aep.id,
                name=aep.name,
                entry_node=aep.entry_node,
                trigger_type=aep.trigger_type,
                trigger_config=aep.trigger_config,
                isolation_level=aep.isolation_level,
                priority=aep.priority,
                max_concurrent=aep.max_concurrent,
            )

        await runtime.add_graph(
            graph_id=graph_id,
            graph=runner.graph,

@@ -1,20 +1,17 @@
"""Worker monitoring tools for the Health Judge and Queen triage agents.
"""Worker monitoring tools for Queen triage agents.

Three tools are registered by ``register_worker_monitoring_tools()``:

- ``get_worker_health_summary`` — reads the worker's session log files and
  returns a compact health snapshot (recent verdicts, step count, timing).
  session_id is optional: if omitted, the most recent active session is
  auto-discovered from storage. No agent-side configuration required.
  Used by the Health Judge on every timer tick.
  auto-discovered from storage.

- ``emit_escalation_ticket`` — validates and publishes an EscalationTicket
  to the shared EventBus as a WORKER_ESCALATION_TICKET event.
  Used by the Health Judge when it decides to escalate.

- ``notify_operator`` — emits a QUEEN_INTERVENTION_REQUESTED event so the TUI
  can surface a non-disruptive operator notification.
  Used by the Queen's ticket_triage_node when it decides to intervene.

Usage::

@@ -45,8 +42,9 @@ def register_worker_monitoring_tools(
    registry: ToolRegistry,
    event_bus: EventBus,
    storage_path: Path,
    stream_id: str = "judge",
    stream_id: str = "monitoring",
    worker_graph_id: str | None = None,
    default_session_id: str | None = None,
) -> int:
    """Register worker monitoring tools bound to *event_bus* and *storage_path*.

@@ -55,9 +53,15 @@ def register_worker_monitoring_tools(
        event_bus: The shared EventBus for the worker runtime.
        storage_path: Root storage path of the worker runtime
            (e.g. ``~/.hive/agents/{name}``).
        stream_id: Stream ID used when emitting events; defaults to judge's stream.
        stream_id: Stream ID used when emitting events.
        worker_graph_id: The primary worker graph's ID. Included in health summary
            so the judge can populate ticket identity fields accurately.
        default_session_id: When set, ``get_worker_health_summary`` uses this
            session ID as the default instead of auto-discovering
            the most-recent-by-mtime session. Callers should pass
            the queen's own session ID so that after a cold-restore
            the monitoring tool reads the correct worker session
            rather than a stale orphaned one.

    Returns:
        Number of tools registered.

@@ -65,7 +69,7 @@ def register_worker_monitoring_tools(
    from framework.llm.provider import Tool

    storage_path = Path(storage_path)
    # Derive agent identity from storage path so the judge can fill ticket fields.
    # Derive agent identity from storage path for ticket fields.
    # storage_path is ~/.hive/agents/{agent_name} — the name is the last component.
    _worker_agent_id: str = storage_path.name
    _worker_graph_id: str = worker_graph_id or storage_path.name

@@ -100,23 +104,29 @@ def register_worker_monitoring_tools(
        if not sessions_dir.exists():
            return json.dumps({"error": "No sessions found — worker has not started yet"})

        candidates = [
            d for d in sessions_dir.iterdir() if d.is_dir() and (d / "state.json").exists()
        ]
        if not candidates:
            return json.dumps({"error": "No sessions found — worker has not started yet"})
        # Prefer the queen's own session ID (set at registration time) over
        # mtime-based discovery, which can pick a stale orphaned session after
        # a cold-restore when a newer-but-empty session directory exists.
        if default_session_id and (sessions_dir / default_session_id).is_dir():
            session_id = default_session_id
        else:
            candidates = [
                d for d in sessions_dir.iterdir() if d.is_dir() and (d / "state.json").exists()
            ]
            if not candidates:
                return json.dumps({"error": "No sessions found — worker has not started yet"})

        def _sort_key(d: Path):
            try:
                state = json.loads((d / "state.json").read_text(encoding="utf-8"))
                # in_progress/running sorts before completed/failed
                priority = 0 if state.get("status", "") in ("in_progress", "running") else 1
                return (priority, -d.stat().st_mtime)
            except Exception:
                return (2, 0)
            def _sort_key(d: Path):
                try:
                    state = json.loads((d / "state.json").read_text(encoding="utf-8"))
                    # in_progress/running sorts before completed/failed
                    priority = 0 if state.get("status", "") in ("in_progress", "running") else 1
                    return (priority, -d.stat().st_mtime)
                except Exception:
                    return (2, 0)

        candidates.sort(key=_sort_key)
        session_id = candidates[0].name
            candidates.sort(key=_sort_key)
            session_id = candidates[0].name

        # Resolve log paths
        session_dir = storage_path / "sessions" / session_id

@@ -201,10 +211,9 @@ def register_worker_monitoring_tools(
        description=(
            "Read the worker agent's execution logs and return a compact health snapshot. "
            "Returns worker_agent_id and worker_graph_id (use these for ticket identity fields), "
            "recent judge verdicts, step count, time since last step, and "
            "recent verdicts, step count, time since last step, and "
            "a snippet of the most recent LLM output. "
            "session_id is optional — omit it to auto-discover the most recent active session. "
            "Use this on every health check to observe trends."
            "session_id is optional — omit it to auto-discover the most recent active session."
        ),
        parameters={
            "type": "object",

@@ -241,8 +250,7 @@ def register_worker_monitoring_tools(
        """Validate and publish an EscalationTicket to the shared EventBus.

        ticket_json must be a JSON string containing all required EscalationTicket
        fields. The ticket is validated before publishing — this ensures the judge
        has genuinely filled out all required evidence fields.
        fields. The ticket is validated before publishing.

        Returns a confirmation JSON with the ticket_id on success, or an error.
        """

@@ -257,7 +265,7 @@ def register_worker_monitoring_tools(
        try:
            await event_bus.emit_worker_escalation_ticket(
                stream_id=stream_id,
                node_id="judge",
                node_id="monitoring",
                ticket=ticket.model_dump(),
            )
            logger.info(

@@ -280,7 +288,6 @@ def register_worker_monitoring_tools(
        name="emit_escalation_ticket",
        description=(
            "Validate and publish a structured EscalationTicket to the shared EventBus. "
            "The Queen's ticket_receiver entry point will fire and triage the ticket. "
            "ticket_json must be a JSON string with all required EscalationTicket fields: "
            "worker_agent_id, worker_session_id, worker_node_id, worker_graph_id, "
            "severity (low/medium/high/critical), cause, judge_reasoning, suggested_action, "

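# Registration sketch (illustrative values): pass the queen's own session ID so
# health summaries keep reading the right worker session after a cold-restore.
#   register_worker_monitoring_tools(
#       registry, event_bus,
#       storage_path=Path.home() / ".hive/agents/inbox-management",
#       worker_graph_id="inbox-management",
#       default_session_id=queen_session_id,
#   )
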
@@ -38,4 +38,9 @@ export const api = {
      body: body ? JSON.stringify(body) : undefined,
    }),
  delete: <T>(path: string) => request<T>(path, { method: "DELETE" }),
  patch: <T>(path: string, body?: unknown) =>
    request<T>(path, {
      method: "PATCH",
      body: body ? JSON.stringify(body) : undefined,
    }),
};

@@ -1,11 +1,11 @@
import { api } from "./client";
import type {
  AgentEvent,
  LiveSession,
  LiveSessionDetail,
  SessionSummary,
  SessionDetail,
  Checkpoint,
  Message,
  EntryPoint,
} from "./types";

@@ -64,12 +64,18 @@ export const sessionsApi = {
      `/sessions/${sessionId}/entry-points`,
    ),

  updateTriggerTask: (sessionId: string, triggerId: string, task: string) =>
    api.patch<{ trigger_id: string; task: string }>(
      `/sessions/${sessionId}/triggers/${triggerId}`,
      { task },
    ),

  graphs: (sessionId: string) =>
    api.get<{ graphs: string[] }>(`/sessions/${sessionId}/graphs`),

  /** Get queen conversation history for a session (works for cold/post-restart sessions too). */
  queenMessages: (sessionId: string) =>
    api.get<{ messages: Message[]; session_id: string }>(`/sessions/${sessionId}/queen-messages`),
  /** Get persisted eventbus log for a session (works for cold sessions — used for full UI replay). */
  eventsHistory: (sessionId: string) =>
    api.get<{ events: AgentEvent[]; session_id: string }>(`/sessions/${sessionId}/events/history`),

  /** List all queen sessions on disk — live + cold (post-restart). */
  history: () =>

@@ -105,12 +111,4 @@ export const sessionsApi = {
    api.post<{ execution_id: string }>(
      `/sessions/${sessionId}/worker-sessions/${wsId}/checkpoints/${checkpointId}/restore`,
    ),

  messages: (sessionId: string, wsId: string, nodeId?: string) => {
    const params = new URLSearchParams({ client_only: "true" });
    if (nodeId) params.set("node_id", nodeId);
    return api.get<{ messages: Message[] }>(
      `/sessions/${sessionId}/worker-sessions/${wsId}/messages?${params}`,
    );
  },
};

@@ -31,6 +31,8 @@ export interface EntryPoint {
  entry_node: string;
  trigger_type: string;
  trigger_config?: Record<string, unknown>;
  /** Worker task string when this trigger fires autonomously. */
  task?: string;
  /** Seconds until the next timer fire (only present for timer entry points). */
  next_fire_in?: number;
}

@@ -41,6 +43,7 @@ export interface DiscoverEntry {
  description: string;
  category: string;
  session_count: number;
  run_count: number;
  node_count: number;
  tool_count: number;
  tags: string[];

@@ -311,6 +314,7 @@ export type EventTypeName =
  | "tool_call_completed"
  | "client_output_delta"
  | "client_input_requested"
  | "client_input_received"
  | "node_internal_output"
  | "node_input_blocked"
  | "node_stalled"

@@ -328,7 +332,12 @@ export type EventTypeName =
  | "queen_phase_changed"
  | "subagent_report"
  | "draft_graph_updated"
  | "flowchart_map_updated";
  | "flowchart_map_updated"
  | "trigger_available"
  | "trigger_activated"
  | "trigger_deactivated"
  | "trigger_fired"
  | "trigger_removed";

@@ -339,4 +348,5 @@ export interface AgentEvent {
  timestamp: string;
  correlation_id: string | null;
  graph_id: string | null;
  run_id?: string | null;
}

@@ -171,6 +171,14 @@ function useThemeColors() {
  return { statusColors, triggerColors };
}

// Active trigger — brighter, more saturated blue
const activeTriggerColors = {
  bg: "hsl(210,30%,18%)",
  border: "hsl(210,50%,50%)",
  text: "hsl(210,40%,75%)",
  icon: "hsl(210,60%,65%)",
};

const triggerIcons: Record<string, string> = {
  webhook: "\u26A1", // lightning bolt
  timer: "\u23F1", // stopwatch

@@ -546,10 +554,12 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
    const triggerAvailW = nodeW - 38;
    const triggerDisplayLabel = truncateLabel(node.label, triggerAvailW, triggerFontSize);
    const nextFireIn = node.triggerConfig?.next_fire_in as number | undefined;
    const isActive = node.status === "running" || node.status === "complete";
    const colors = isActive ? activeTriggerColors : triggerColors;

    // Format countdown for display below node
    let countdownLabel: string | null = null;
    if (nextFireIn != null && nextFireIn > 0) {
    if (isActive && nextFireIn != null && nextFireIn > 0) {
      const h = Math.floor(nextFireIn / 3600);
      const m = Math.floor((nextFireIn % 3600) / 60);
      const s = Math.floor(nextFireIn % 60);

@@ -558,24 +568,28 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
        : `next in ${m}m ${String(s).padStart(2, "0")}s`;
    }

    // Status label below countdown
    const statusLabel = isActive ? "active" : "inactive";
    const statusColor = isActive ? "hsl(140,40%,50%)" : "hsl(210,20%,40%)";

    return (
      <g key={node.id} onClick={() => onNodeClick?.(node)} style={{ cursor: onNodeClick ? "pointer" : "default" }}>
        <title>{node.label}</title>
        {/* Pill-shaped background with dashed border */}
        {/* Pill-shaped background — solid border when active, dashed when inactive */}
        <rect
          x={pos.x} y={pos.y}
          width={nodeW} height={NODE_H}
          rx={NODE_H / 2}
          fill={triggerColors.bg}
          stroke={triggerColors.border}
          strokeWidth={1}
          strokeDasharray="4 2"
          fill={colors.bg}
          stroke={colors.border}
          strokeWidth={isActive ? 1.5 : 1}
          strokeDasharray={isActive ? undefined : "4 2"}
        />

        {/* Trigger type icon */}
        <text
          x={pos.x + 18} y={pos.y + NODE_H / 2}
          fill={triggerColors.icon} fontSize={13}
          fill={colors.icon} fontSize={13}
          textAnchor="middle" dominantBaseline="middle"
        >
          {icon}

@@ -584,7 +598,7 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
        {/* Label */}
        <text
          x={pos.x + 32} y={pos.y + NODE_H / 2}
          fill={triggerColors.text}
          fill={colors.text}
          fontSize={triggerFontSize}
          fontWeight={500}
          dominantBaseline="middle"

@@ -603,6 +617,15 @@ export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, o
            {countdownLabel}
          </text>
        )}

        {/* Status label */}
        <text
          x={pos.x + nodeW / 2} y={pos.y + NODE_H + (countdownLabel ? 25 : 13)}
          fill={statusColor} fontSize={9}
          textAnchor="middle" opacity={0.8}
        >
          {statusLabel}
        </text>
      </g>
    );
  };

@@ -10,12 +10,14 @@ export interface ChatMessage {
  agentColor: string;
  content: string;
  timestamp: string;
  type?: "system" | "agent" | "user" | "tool_status" | "worker_input_request";
  type?: "system" | "agent" | "user" | "tool_status" | "worker_input_request" | "run_divider";
  role?: "queen" | "worker";
  /** Which worker thread this message belongs to (worker agent name) */
  thread?: string;
  /** Epoch ms when this message was first created — used for ordering queen/worker interleaving */
  createdAt?: number;
  /** Queen phase active when this message was created */
  phase?: "planning" | "building" | "staging" | "running";
}

interface ChatPanelProps {

@@ -154,6 +156,18 @@ const MessageBubble = memo(function MessageBubble({ msg, queenPhase }: { msg: Ch
  const isQueen = msg.role === "queen";
  const color = getColor(msg.agent, msg.role);

  if (msg.type === "run_divider") {
    return (
      <div className="flex items-center gap-3 py-2 my-1">
        <div className="flex-1 h-px bg-border/60" />
        <span className="text-[10px] text-muted-foreground font-medium uppercase tracking-wider">
          {msg.content}
        </span>
        <div className="flex-1 h-px bg-border/60" />
      </div>
    );
  }

  if (msg.type === "system") {
    return (
      <div className="flex justify-center py-1">

@@ -205,13 +219,13 @@ const MessageBubble = memo(function MessageBubble({ msg, queenPhase }: { msg: Ch
          }`}
        >
          {isQueen
            ? queenPhase === "running"
              ? "running phase"
              : queenPhase === "staging"
                ? "staging phase"
                : queenPhase === "planning"
                  ? "planning phase"
                  : "building phase"
            ? ((msg.phase ?? queenPhase) === "running"
                ? "running"
                : (msg.phase ?? queenPhase) === "staging"
                  ? "staging"
                  : (msg.phase ?? queenPhase) === "planning"
                    ? "planning"
                    : "building")
            : "Worker"}
        </span>
      </div>

@@ -225,7 +239,7 @@ const MessageBubble = memo(function MessageBubble({ msg, queenPhase }: { msg: Ch
      </div>
    </div>
  );
}, (prev, next) => prev.msg.id === next.msg.id && prev.msg.content === next.msg.content && prev.queenPhase === next.queenPhase);
}, (prev, next) => prev.msg.id === next.msg.id && prev.msg.content === next.msg.content && prev.msg.phase === next.msg.phase && prev.queenPhase === next.queenPhase);

export default function ChatPanel({ messages, onSend, isWaiting, isWorkerWaiting, isBusy, activeThread, disabled, onCancel, pendingQuestion, pendingOptions, pendingQuestions, onQuestionSubmit, onMultiQuestionSubmit, onQuestionDismiss, queenPhase }: ChatPanelProps) {
  const [input, setInput] = useState("");

@@ -126,8 +126,13 @@ export default function CredentialsModal({
        // No real path — no credentials to show
        setRows([]);
      }
    } catch {
      // Backend unavailable — fall back to legacy props or empty
    } catch (err) {
      // Surface the error so the modal shows a meaningful message
      const message =
        err instanceof Error ? err.message : "Failed to check credentials";
      setError(message);

      // Fall back to legacy props or empty rows
      if (legacyCredentials) {
        setRows(legacyCredentials.map(c => ({
          ...c,

@@ -289,11 +294,18 @@ export default function CredentialsModal({
        {/* Status banner */}
        {!loading && (
          <div className={`mx-5 mt-4 px-3 py-2.5 rounded-lg border text-xs font-medium flex items-center gap-2 ${
            allRequiredMet
              ? "bg-emerald-500/10 border-emerald-500/20 text-emerald-600"
              : "bg-destructive/5 border-destructive/20 text-destructive"
            error && rows.length === 0
              ? "bg-destructive/5 border-destructive/20 text-destructive"
              : allRequiredMet
                ? "bg-emerald-500/10 border-emerald-500/20 text-emerald-600"
                : "bg-destructive/5 border-destructive/20 text-destructive"
          }`}>
            {allRequiredMet ? (
            {error && rows.length === 0 ? (
              <>
                <AlertCircle className="w-3.5 h-3.5 flex-shrink-0" />
                <span className="break-words">Failed to check credentials: {error}</span>
              </>
            ) : allRequiredMet ? (
              <>
                <Shield className="w-3.5 h-3.5" />
                {rows.length === 0

@@ -73,7 +73,7 @@ function useDraftChromeColors() {
type DraftNodeStatus = "pending" | "running" | "complete" | "error";

interface DraftGraphProps {
  draft: DraftGraphData;
  draft: DraftGraphData | null;
  onNodeClick?: (node: DraftNode) => void;
  /** Runtime node ID → list of original draft node IDs (post-dissolution mapping). */
  flowchartMap?: Record<string, string[]>;

@@ -83,6 +83,8 @@ interface DraftGraphProps {
  onRuntimeNodeClick?: (runtimeNodeId: string) => void;
  /** True while the queen is building the agent from the draft. */
  building?: boolean;
  /** True while the queen is designing the draft (no draft yet). Shows a spinner. */
  loading?: boolean;
  /** Called when the user clicks Run. */
  onRun?: () => void;
  /** Called when the user clicks Pause. */

@@ -355,7 +357,7 @@ function Tooltip({ node, style }: { node: DraftNode; style: React.CSSProperties
  );
}

export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNodes, onRuntimeNodeClick, building, onRun, onPause, runState = "idle" }: DraftGraphProps) {
export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNodes, onRuntimeNodeClick, building, loading, onRun, onPause, runState = "idle" }: DraftGraphProps) {
  const [hoveredNode, setHoveredNode] = useState<string | null>(null);
  const [mousePos, setMousePos] = useState<{ x: number; y: number } | null>(null);
  const containerRef = useRef<HTMLDivElement>(null);

@@ -463,7 +465,8 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo

  const hasStatusOverlay = Object.keys(nodeStatuses).length > 0;

  const { nodes, edges } = draft;
  const nodes = draft?.nodes ?? [];
  const edges = draft?.edges ?? [];

  const idxMap = useMemo(
    () => Object.fromEntries(nodes.map((n, i) => [n.id, i])),

@@ -656,25 +659,6 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
    return { layers, nodeW, firstColX, nodeXPositions, backEdgeOverflow, maxContentRight };
  }, [nodes, forwardEdges, backEdges.length, containerW, flowchartMap, idxMap]);

  if (nodes.length === 0) {
    return (
      <div className="flex flex-col h-full">
        <div className="px-4 pt-4 pb-2">
          <p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">
            Draft
          </p>
        </div>
        <div className="flex-1 flex items-center justify-center px-4">
          <p className="text-xs text-muted-foreground/60 text-center italic">
            No draft graph yet.
            <br />
            Describe your workflow to get started.
          </p>
        </div>
      </div>
    );
  }

  const { layers, nodeW, nodeXPositions, backEdgeOverflow, maxContentRight } = layout;

  const maxLayer = nodes.length > 0 ? Math.max(...layers) : 0;

@@ -982,6 +966,31 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
    );
  };

  if (loading || !draft || nodes.length === 0) {
    return (
      <div className="flex flex-col h-full">
        <div className="px-4 pt-3 pb-1.5 flex items-center gap-2">
          <p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">Draft</p>
          <span className="text-[9px] font-mono font-medium rounded px-1 py-0.5 leading-none border text-amber-500/60 border-amber-500/20">planning</span>
        </div>
        <div className="flex-1 flex flex-col items-center justify-center gap-3">
          {loading || !draft ? (
            <>
              <Loader2 className="w-5 h-5 animate-spin text-muted-foreground/40" />
              <p className="text-xs text-muted-foreground/50">Designing flowchart…</p>
            </>
          ) : (
            <p className="text-xs text-muted-foreground/60 text-center italic">
              No draft graph yet.
              <br />
              Describe your workflow to get started.
            </p>
          )}
        </div>
      </div>
    );
  }

  return (
    <div className="flex flex-col h-full">
      {/* Header */}

@@ -1,8 +1,8 @@
import { useState, useCallback } from "react";
import { useNavigate } from "react-router-dom";
import { Crown, X } from "lucide-react";
import { loadPersistedTabs, savePersistedTabs, TAB_STORAGE_KEY, type PersistedTabState } from "@/lib/tab-persistence";
import { sessionsApi } from "@/api/sessions";
import { loadPersistedTabs, savePersistedTabs, TAB_STORAGE_KEY, type PersistedTabState } from "@/lib/tab-persistence";

export interface TopBarTab {
  agentType: string;

@@ -51,10 +51,10 @@ export default function TopBar({ tabs: tabsProp, onTabClick, onCloseTab, canClos
      onCloseTab(agentType);
      return;
    }
    // Kill the backend session (queen/judge/worker) even outside workspace
    // Kill the backend session (queen/worker) even outside workspace
    sessionsApi.list()
      .then(({ sessions }) => {
        const match = sessions.find(s => s.agent_path === agentType);
        const match = sessions.find(s => s.agent_path.endsWith(agentType));
        if (match) return sessionsApi.stop(match.session_id);
      })
      .catch(() => {}); // fire-and-forget

@@ -1,60 +1,6 @@
import { describe, it, expect } from "vitest";
import { backendMessageToChatMessage, sseEventToChatMessage, formatAgentDisplayName } from "./chat-helpers";
import type { AgentEvent, Message } from "@/api/types";

// ---------------------------------------------------------------------------
// backendMessageToChatMessage
// ---------------------------------------------------------------------------

describe("backendMessageToChatMessage", () => {
  it("converts a user message", () => {
    const msg: Message = { seq: 1, role: "user", content: "hello", _node_id: "chat" };
    const result = backendMessageToChatMessage(msg, "inbox-management");
    expect(result.type).toBe("user");
    expect(result.agent).toBe("You");
    expect(result.role).toBeUndefined();
    expect(result.content).toBe("hello");
    expect(result.thread).toBe("inbox-management");
  });

  it("converts an assistant message with node_id as agent", () => {
    const msg: Message = { seq: 2, role: "assistant", content: "hi", _node_id: "intake" };
    const result = backendMessageToChatMessage(msg, "inbox-management");
    expect(result.agent).toBe("intake");
    expect(result.role).toBe("worker");
    expect(result.type).toBeUndefined();
  });

  it("defaults agent to 'Agent' when _node_id is empty", () => {
    const msg: Message = { seq: 3, role: "assistant", content: "ok", _node_id: "" };
    const result = backendMessageToChatMessage(msg, "inbox-management");
    expect(result.agent).toBe("Agent");
  });

  it("produces deterministic ID from seq", () => {
    const msg: Message = { seq: 42, role: "user", content: "test", _node_id: "x" };
    const result = backendMessageToChatMessage(msg, "thread");
    expect(result.id).toBe("backend-42");
  });

  it("passes through the thread parameter", () => {
    const msg: Message = { seq: 1, role: "user", content: "hi", _node_id: "x" };
    const result = backendMessageToChatMessage(msg, "my-thread");
    expect(result.thread).toBe("my-thread");
  });

  it("uses agentDisplayName instead of node_id when provided", () => {
    const msg: Message = { seq: 2, role: "assistant", content: "hi", _node_id: "intake" };
    const result = backendMessageToChatMessage(msg, "thread", "Competitive Intel Agent");
    expect(result.agent).toBe("Competitive Intel Agent");
  });

  it("still shows 'You' for user messages even when agentDisplayName is provided", () => {
    const msg: Message = { seq: 1, role: "user", content: "hello", _node_id: "chat" };
    const result = backendMessageToChatMessage(msg, "thread", "My Agent");
    expect(result.agent).toBe("You");
  });
});
import { sseEventToChatMessage, formatAgentDisplayName } from "./chat-helpers";
import type { AgentEvent } from "@/api/types";

// ---------------------------------------------------------------------------
// sseEventToChatMessage
@@ -250,6 +196,102 @@ describe("sseEventToChatMessage", () => {
    );
  });

  it("different inner_turn values produce different message IDs", () => {
    const e1 = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "first response", iteration: 0, inner_turn: 0 },
    });
    const e2 = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "after tool call", iteration: 0, inner_turn: 1 },
    });
    const r1 = sseEventToChatMessage(e1, "t");
    const r2 = sseEventToChatMessage(e2, "t");
    expect(r1!.id).not.toBe(r2!.id);
  });

  it("same inner_turn produces same ID (streaming upsert within one LLM call)", () => {
    const e1 = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "partial", iteration: 0, inner_turn: 1 },
    });
    const e2 = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "partial response", iteration: 0, inner_turn: 1 },
    });
    expect(sseEventToChatMessage(e1, "t")!.id).toBe(
      sseEventToChatMessage(e2, "t")!.id,
    );
  });

  it("absent inner_turn produces same ID as inner_turn=0 (backward compat)", () => {
    const withField = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "hello", iteration: 2, inner_turn: 0 },
    });
    const withoutField = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "hello", iteration: 2 },
    });
    expect(sseEventToChatMessage(withField, "t")!.id).toBe(
      sseEventToChatMessage(withoutField, "t")!.id,
    );
  });

  it("inner_turn=0 produces no suffix (matches old ID format)", () => {
    const event = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "hello", iteration: 3, inner_turn: 0 },
    });
    const result = sseEventToChatMessage(event, "t");
    expect(result!.id).toBe("stream-exec-1-3-queen");
  });

  it("inner_turn>0 adds -t suffix to ID", () => {
    const event = makeEvent({
      type: "client_output_delta",
      node_id: "queen",
      execution_id: "exec-1",
      data: { snapshot: "hello", iteration: 3, inner_turn: 2 },
    });
    const result = sseEventToChatMessage(event, "t");
    expect(result!.id).toBe("stream-exec-1-3-t2-queen");
  });

  it("llm_text_delta also uses inner_turn for distinct IDs", () => {
    const e1 = makeEvent({
      type: "llm_text_delta",
      node_id: "research",
      execution_id: "exec-1",
      data: { snapshot: "first", inner_turn: 0 },
    });
    const e2 = makeEvent({
      type: "llm_text_delta",
      node_id: "research",
      execution_id: "exec-1",
      data: { snapshot: "second", inner_turn: 1 },
    });
    const r1 = sseEventToChatMessage(e1, "t");
    const r2 = sseEventToChatMessage(e2, "t");
    expect(r1!.id).not.toBe(r2!.id);
    expect(r1!.id).toBe("stream-exec-1-research");
    expect(r2!.id).toBe("stream-exec-1-t1-research");
  });

  it("uses timestamp fallback when both turnId and execution_id are null", () => {
    const event = makeEvent({
      type: "client_output_delta",
@@ -261,25 +303,36 @@ describe("sseEventToChatMessage", () => {
    expect(result!.id).toMatch(/^stream-t-\d+-chat$/);
  });

  it("converts client_input_requested with prompt to message", () => {
  it("returns null for client_input_requested (handled in workspace.tsx)", () => {
    const event = makeEvent({
      type: "client_input_requested",
      node_id: "chat",
      execution_id: "abc",
      data: { prompt: "What next?" },
    });
    const result = sseEventToChatMessage(event, "t");
    expect(result).not.toBeNull();
    expect(result!.content).toBe("What next?");
    expect(result!.role).toBe("worker");
    expect(sseEventToChatMessage(event, "t")).toBeNull();
  });

  it("returns null for client_input_requested without prompt", () => {
  it("converts client_input_received to user message", () => {
    const event = makeEvent({
      type: "client_input_requested",
      node_id: "chat",
      type: "client_input_received",
      node_id: "queen",
      execution_id: "abc",
      data: { prompt: "" },
      data: { content: "do the thing" },
    });
    const result = sseEventToChatMessage(event, "t");
    expect(result).not.toBeNull();
    expect(result!.agent).toBe("You");
    expect(result!.type).toBe("user");
    expect(result!.content).toBe("do the thing");
  });

  it("returns null for client_input_received with empty content", () => {
    const event = makeEvent({
      type: "client_input_received",
      node_id: "queen",
      execution_id: "abc",
      data: { content: "" },
    });
    expect(sseEventToChatMessage(event, "t")).toBeNull();
  });

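Taken together, these tests pin down one ID rule for streaming deltas. A minimal sketch of that rule, matching the expected IDs above (the helper name `buildStreamId` is hypothetical, not part of the diff):

// Sketch: stable per-bubble ID; inner_turn of 0 or absent adds no suffix,
// so IDs minted before the inner_turn field existed remain unchanged.
function buildStreamId(
  executionId: string,
  nodeId: string,
  iteration?: number | null,
  innerTurn?: number | null,
): string {
  const iterPart = iteration != null ? `-${iteration}` : "";
  const turnPart = innerTurn != null && innerTurn > 0 ? `-t${innerTurn}` : "";
  return `stream-${executionId}${iterPart}${turnPart}-${nodeId}`;
}

buildStreamId("exec-1", "queen", 3, 0);       // "stream-exec-1-3-queen"
buildStreamId("exec-1", "queen", 3, 2);       // "stream-exec-1-3-t2-queen"
buildStreamId("exec-1", "research", null, 1); // "stream-exec-1-t1-research"
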
@@ -1,10 +1,10 @@
/**
 * Pure functions for converting backend messages and SSE events into ChatMessage objects.
 * Pure functions for converting SSE events into ChatMessage objects.
 * No React dependencies — just JSON in, object out.
 */

import type { ChatMessage } from "@/components/ChatPanel";
import type { AgentEvent, Message } from "@/api/types";
import type { AgentEvent } from "@/api/types";

/**
 * Derive a human-readable display name from a raw agent identifier.
@@ -27,32 +27,6 @@ export function formatAgentDisplayName(raw: string): string {
    .trim();
}

/**
 * Convert a backend Message (from sessionsApi.messages()) into a ChatMessage.
 * When agentDisplayName is provided, it is used as the sender for all agent
 * messages instead of the raw node_id.
 */
export function backendMessageToChatMessage(
  msg: Message,
  thread: string,
  agentDisplayName?: string,
): ChatMessage {
  // Use file-mtime created_at (epoch seconds → ms) for cross-conversation
  // ordering; fall back to seq for backwards compatibility.
  const createdAt = msg.created_at ? msg.created_at * 1000 : msg.seq;
  return {
    id: `backend-${msg._node_id}-${msg.seq}`,
    agent: msg.role === "user" ? "You" : agentDisplayName || msg._node_id || "Agent",
    agentColor: "",
    content: msg.content,
    timestamp: "",
    type: msg.role === "user" ? "user" : undefined,
    role: msg.role === "user" ? undefined : "worker",
    thread,
    createdAt,
  };
}

/**
 * Convert an SSE AgentEvent into a ChatMessage, or null if the event
 * doesn't produce a visible chat message.
@@ -82,10 +56,15 @@ export function sseEventToChatMessage(
      const iterTid = iter != null ? String(iter) : tid;
      const iterIdKey = eid && iterTid ? `${eid}-${iterTid}` : eid || iterTid || `t-${Date.now()}`;

      // Distinguish multiple LLM calls within the same iteration (inner tool loop).
      // inner_turn=0 (or absent) produces no suffix for backward compat.
      const innerTurn = event.data?.inner_turn as number | undefined;
      const innerSuffix = innerTurn != null && innerTurn > 0 ? `-t${innerTurn}` : "";

      const snapshot = (event.data?.snapshot as string) || (event.data?.content as string) || "";
      if (!snapshot) return null;
      return {
        id: `stream-${iterIdKey}-${event.node_id}`,
        id: `stream-${iterIdKey}${innerSuffix}-${event.node_id}`,
        agent: agentDisplayName || event.node_id || "Agent",
        agentColor: "",
        content: snapshot,
@@ -101,11 +80,29 @@
      // create a worker_input_request message and set awaitingInput state.
      return null;

    case "client_input_received": {
      const userContent = (event.data?.content as string) || "";
      if (!userContent) return null;
      return {
        id: `user-input-${event.timestamp}`,
        agent: "You",
        agentColor: "",
        content: userContent,
        timestamp: "",
        type: "user",
        thread,
        createdAt,
      };
    }

    case "llm_text_delta": {
      const llmInnerTurn = event.data?.inner_turn as number | undefined;
      const llmInnerSuffix = llmInnerTurn != null && llmInnerTurn > 0 ? `-t${llmInnerTurn}` : "";

      const snapshot = (event.data?.snapshot as string) || (event.data?.content as string) || "";
      if (!snapshot) return null;
      return {
        id: `stream-${idKey}-${event.node_id}`,
        id: `stream-${idKey}${llmInnerSuffix}-${event.node_id}`,
        agent: event.node_id || "Agent",
        agentColor: "",
        content: snapshot,
@@ -148,3 +145,25 @@ export function sseEventToChatMessage(
    return null;
  }
}

type QueenPhase = "planning" | "building" | "staging" | "running";
const VALID_PHASES = new Set<string>(["planning", "building", "staging", "running"]);

/**
 * Scan an array of persisted events and return the last queen phase seen,
 * or null if no phase event exists. Reads both `queen_phase_changed` events
 * and the per-iteration `phase` metadata on `node_loop_iteration` events.
 */
export function extractLastPhase(events: AgentEvent[]): QueenPhase | null {
  let last: QueenPhase | null = null;
  for (const evt of events) {
    const phase =
      evt.type === "queen_phase_changed" ? (evt.data?.phase as string) :
      evt.type === "node_loop_iteration" ? (evt.data?.phase as string | undefined) :
      undefined;
    if (phase && VALID_PHASES.has(phase)) {
      last = phase as QueenPhase;
    }
  }
  return last;
}

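For illustration, how a caller might combine the new `extractLastPhase` helper with the persisted event log (the `sessionsApi.eventsHistory` call is the one used elsewhere in this diff; the "planning" default is an assumption):

// Sketch: decide which phase to restore a reopened tab into.
const { events } = await sessionsApi.eventsHistory(sessionId);
const phase = extractLastPhase(events) ?? "planning"; // assumed default for fresh sessions
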
@@ -51,6 +51,7 @@ export function topologyToGraphNodes(topology: GraphTopology): GraphNode[] {
      triggerConfig: {
        ...ep.trigger_config,
        ...(ep.next_fire_in != null ? { next_fire_in: ep.next_fire_in } : {}),
        ...(ep.task ? { task: ep.task } : {}),
      },
      next: [ep.entry_node],
    });

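The conditional-spread idiom above only materializes an optional key when a value exists; the same pattern in isolation (values hypothetical):

// Sketch: add optional keys without writing `undefined` into the object.
const base = { cron: "0 9 * * *" };   // hypothetical existing trigger config
const nextFireIn: number | null = 42; // seconds until next fire, may be null
const config = {
  ...base,
  ...(nextFireIn != null ? { next_fire_in: nextFireIn } : {}),
};
// config -> { cron: "0 9 * * *", next_fire_in: 42 }
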
@@ -113,7 +113,7 @@ export default function MyAgents() {
          <div className="flex items-center gap-1">
            <Activity className="w-3 h-3" />
            <span>
              {agent.session_count} session{agent.session_count !== 1 ? "s" : ""}
              {agent.run_count} run{agent.run_count !== 1 ? "s" : ""}
            </span>
          </div>
          <span>{agent.last_active ? timeAgo(agent.last_active) : "Never run"}</span>

@@ -14,8 +14,8 @@ import { executionApi } from "@/api/execution";
import { graphsApi } from "@/api/graphs";
import { sessionsApi } from "@/api/sessions";
import { useMultiSSE } from "@/hooks/use-sse";
import type { LiveSession, AgentEvent, DiscoverEntry, Message, NodeSpec, DraftGraph as DraftGraphData } from "@/api/types";
import { backendMessageToChatMessage, sseEventToChatMessage, formatAgentDisplayName } from "@/lib/chat-helpers";
import type { LiveSession, AgentEvent, DiscoverEntry, NodeSpec, DraftGraph as DraftGraphData } from "@/api/types";
import { sseEventToChatMessage, formatAgentDisplayName } from "@/lib/chat-helpers";
import { topologyToGraphNodes } from "@/lib/graph-converter";
import { ApiError } from "@/api/client";

@@ -113,7 +113,13 @@ function NewTabPopover({ open, onClose, anchorRef, discoverAgents, onFromScratch
  useEffect(() => {
    if (open && anchorRef.current) {
      const rect = anchorRef.current.getBoundingClientRect();
      setPos({ top: rect.bottom + 4, left: rect.left });
      const POPUP_WIDTH = 240; // w-60 = 15rem = 240px
      const overflows = rect.left + POPUP_WIDTH > window.innerWidth - 8;
      console.log("Anchor rect:", rect, "Overflows:", overflows);
      setPos({
        top: rect.bottom + 4,
        left: overflows ? rect.right - POPUP_WIDTH : rect.left,
      });
    }
  }, [open, anchorRef]);

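The overflow handling generalizes to a small clamp helper; a sketch under the assumption that only horizontal overflow matters for this popover (helper name hypothetical):

// Sketch: keep a fixed-width popup inside the viewport with an 8px margin,
// right-aligning to the anchor when it would spill off the right edge.
function clampLeft(anchor: DOMRect, width: number): number {
  const overflows = anchor.left + width > window.innerWidth - 8;
  return overflows ? anchor.right - width : anchor.left;
}
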
@@ -242,6 +248,49 @@ function truncate(s: string, max: number): string {
  return s.length > max ? s.slice(0, max) + "..." : s;
}

type SessionRestoreResult = {
  messages: ChatMessage[];
  restoredPhase: "planning" | "building" | "staging" | "running" | null;
};

/**
 * Restore session messages from the persisted event log.
 * Returns an empty result if no event log exists.
 */
async function restoreSessionMessages(
  sessionId: string,
  thread: string,
  agentDisplayName: string,
): Promise<SessionRestoreResult> {
  try {
    const { events } = await sessionsApi.eventsHistory(sessionId);
    if (events.length > 0) {
      const messages: ChatMessage[] = [];
      let runningPhase: ChatMessage["phase"] = undefined;
      for (const evt of events) {
        // Track phase transitions so each message gets the phase it was created in
        const p = evt.type === "queen_phase_changed" ? evt.data?.phase as string
          : evt.type === "node_loop_iteration" ? evt.data?.phase as string | undefined
          : undefined;
        if (p && ["planning", "building", "staging", "running"].includes(p)) {
          runningPhase = p as ChatMessage["phase"];
        }
        const msg = sseEventToChatMessage(evt, thread, agentDisplayName);
        if (!msg) continue;
        if (evt.stream_id === "queen") {
          msg.role = "queen";
          msg.phase = runningPhase;
        }
        messages.push(msg);
      }
      return { messages, restoredPhase: runningPhase ?? null };
    }
  } catch {
    // Event log not available — session will start fresh.
  }
  return { messages: [], restoredPhase: null };
}

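A typical call site for the new helper, per its uses later in this diff (argument values hypothetical):

// Sketch: rehydrate a conversation before rendering the tab.
const restored = await restoreSessionMessages(
  "sess-123",           // persisted backend session ID
  "inbox-management",   // thread / tab key
  "Inbox Management",   // display name for agent bubbles
);
// restored.messages -> ChatMessage[] in event-log order
// restored.restoredPhase -> last queen phase seen, or null
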
// --- Per-agent backend state (consolidated) ---
interface AgentBackendState {
  sessionId: string | null;
@@ -266,6 +315,7 @@ interface AgentBackendState {
  flowchartMap: Record<string, string[]> | null;
  workerRunState: "idle" | "deploying" | "running";
  currentExecutionId: string | null;
  currentRunId: string | null;
  nodeLogs: Record<string, string[]>;
  nodeActionPlans: Record<string, string>;
  subagentReports: { subagent_id: string; message: string; data?: Record<string, unknown>; timestamp: string }[];
@@ -309,6 +359,7 @@ function defaultAgentState(): AgentBackendState {
    agentPath: null,
    workerRunState: "idle",
    currentExecutionId: null,
    currentRunId: null,
    nodeLogs: {},
    nodeActionPlans: {},
    subagentReports: [],
@@ -353,11 +404,8 @@ export default function Workspace() {
      // tabKey is the actual key used in sessionsByAgent (may contain "::" suffix).
      // Fall back to agentType for tabs persisted before this field was added.
      const tabKey = tab.tabKey || tab.agentType;
      // Skip new-agent tabs when starting fresh from home with a prompt
      // to avoid creating duplicate sessions
      if (initialPrompt && hasExplicitAgent && (tab.agentType === "new-agent" || tab.agentType.startsWith("new-agent-"))) {
        continue;
      }
      // New-agent tabs each have a unique key (e.g. "new-agent-abc123"),
      // so they never collide with the incoming tab — always restore them.
      if (!initial[tabKey]) initial[tabKey] = [];
      const session = createSession(tab.agentType, tab.label);
      session.id = tab.id;
@@ -388,15 +436,26 @@ export default function Workspace() {
    if (initial[initialAgent]?.length) {
      return initial;
    }
    // Also check for existing tabs with instance suffixes (e.g. "agentType::instanceId")
    const existingKey = Object.keys(initial).find(
      k => baseAgentType(k) === initialAgent && initial[k]?.length > 0
    );
    if (existingKey && !initialPrompt) {
      return initial;
    }

    // If the user submitted a new prompt from the home page, always create
    // a fresh session so the prompt isn't lost into an existing session.
    // initialAgent is already a unique key (e.g. "new-agent-abc123") when
    // coming from home, so the new tab won't overwrite existing ones.
    if (initialPrompt && hasExplicitAgent) {
      const label = initialAgent.startsWith("new-agent")
      const rawLabel = initialAgent.startsWith("new-agent")
        ? "New Agent"
        : formatAgentDisplayName(initialAgent);
      const existingNewAgentCount = Object.keys(initial).filter(
        k => (k === "new-agent" || k.startsWith("new-agent-")) && (initial[k] || []).length > 0
      ).length;
      const label = existingNewAgentCount === 0 ? rawLabel : `${rawLabel} #${existingNewAgentCount + 1}`;
      const newSession = createSession(initialAgent, label);
      initial[initialAgent] = [newSession];
      return initial;
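The labeling rule introduced here: the first tab keeps the raw label, later duplicates get a ` #n` suffix. The same rule as a one-liner (function name hypothetical):

// Sketch: "New Agent", then "New Agent #2", "New Agent #3", ...
const numberedLabel = (rawLabel: string, existingCount: number): string =>
  existingCount === 0 ? rawLabel : `${rawLabel} #${existingCount + 1}`;
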
@@ -494,6 +553,8 @@ export default function Workspace() {
  const [credentialAgentPath, setCredentialAgentPath] = useState<string | null>(null);
  const [dismissedBanner, setDismissedBanner] = useState<string | null>(null);
  const [selectedNode, setSelectedNode] = useState<GraphNode | null>(null);
  const [triggerTaskDraft, setTriggerTaskDraft] = useState("");
  const [triggerTaskSaving, setTriggerTaskSaving] = useState(false);
  const [newTabOpen, setNewTabOpen] = useState(false);
  const newTabBtnRef = useRef<HTMLButtonElement>(null);

@@ -512,6 +573,10 @@ export default function Workspace() {
  // Using a ref avoids stale-closure bugs when multiple SSE events
  // arrive in the same React batch.
  const turnCounterRef = useRef<Record<string, number>>({});
  // Per-agent queen phase ref — used to stamp each message with the phase
  // it was created in (avoids stale-closure when phase change and message
  // events arrive in the same React batch).
  const queenPhaseRef = useRef<Record<string, string>>({});

  // Synchronous ref to suppress the queen's auto-intro SSE messages
  // after a cold-restore (where we already restored the conversation from disk).
@@ -658,6 +723,38 @@ export default function Workspace() {

      let restoredMessageCount = 0;

      // Before creating a new session, check if there's already a live backend
      // session for this queen-only agent that no open tab owns.
      // Skip this search when the tab has a prompt — it's a fresh agent from
      // home and must always get its own session.
      if (!liveSession && !coldRestoreId && !prompt) {
        try {
          const { sessions: allLive } = await sessionsApi.list();
          const existing = allLive.find(s => !s.has_worker && !s.agent_path);
          if (existing) {
            const alreadyOwned = Object.values(sessionsRef.current).flat()
              .some(s => s.backendSessionId === existing.session_id);
            if (!alreadyOwned) {
              liveSession = existing;
            }
          }
        } catch { /* proceed to create */ }

        // If no live session, check history for a cold queen-only session
        if (!liveSession) {
          try {
            const { sessions: allHistory } = await sessionsApi.history();
            const coldMatch = allHistory.find(
              s => !s.agent_path && s.has_messages
            );
            if (coldMatch) {
              coldRestoreId = coldMatch.session_id;
            }
          } catch { /* proceed to create fresh */ }
        }
      }

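The adoption rule the comments describe, condensed into a standalone predicate for clarity (a sketch; field names as used above, the ownership set assumed to be derived from sessionsRef):

// Sketch: a live session is adoptable when it is queen-only (no worker,
// no agent_path) and no currently open tab already owns it.
const isAdoptable = (
  s: { session_id: string; has_worker: boolean; agent_path: string | null },
  ownedIds: Set<string>,
): boolean => !s.has_worker && !s.agent_path && !ownedIds.has(s.session_id);
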
      let restoredPhase: "planning" | "building" | "staging" | "running" | null = null;
      if (!liveSession) {
        // Fetch conversation history from disk BEFORE creating the new session.
        // SKIP if messages were already pre-populated by handleHistoryOpen.
@@ -666,12 +763,9 @@
        const alreadyHasMessages = (activeSess?.messages?.length ?? 0) > 0;
        if (restoreFrom && !alreadyHasMessages) {
          try {
            const { messages: queenMsgs } = await sessionsApi.queenMessages(restoreFrom);
            for (const m of queenMsgs as Message[]) {
              const msg = backendMessageToChatMessage(m, agentType, "Queen Bee");
              msg.role = "queen";
              preRestoredMsgs.push(msg);
            }
            const restored = await restoreSessionMessages(restoreFrom, agentType, "Queen Bee");
            preRestoredMsgs.push(...restored.messages);
            restoredPhase = restored.restoredPhase;
          } catch {
            // Not available — will start fresh
          }
@@ -741,12 +835,16 @@
        // If no messages were actually restored, lift the intro suppression
        if (restoredMessageCount === 0) suppressIntroRef.current.delete(agentType);

        const qPhase = restoredPhase || liveSession.queen_phase || "planning";
        queenPhaseRef.current[agentType] = qPhase;
        updateAgentState(agentType, {
          sessionId: liveSession.session_id,
          displayName: "Queen Bee",
          ready: true,
          loading: false,
          queenReady: true,
          queenPhase: qPhase,
          queenBuilding: qPhase === "building",
        });
      } catch (err: unknown) {
        const msg = err instanceof Error ? err.message : String(err);
@@ -784,12 +882,44 @@
        } catch {
          // 404: session was explicitly stopped (via closeAgentTab) but conversation
          // files likely still exist on disk. Treat it as cold so we can restore.
          // Verify files exist before assuming cold — if queenMessages succeeds with
          // content, files are there.
          coldRestoreId = historySourceId || storedSessionId;
        }
      }

      // No stored session — check for a live or cold session for this agent
      // that we can reuse (e.g., tab was closed but backend session survived,
      // or server restarted with conversation files on disk).
      if (!liveSession && !coldRestoreId) {
        try {
          const { sessions: allLive } = await sessionsApi.list();
          const existingLive = allLive.find(s => s.agent_path.endsWith(agentPath));
          if (existingLive) {
            const alreadyOwned = Object.values(sessionsRef.current).flat()
              .some(s => s.backendSessionId === existingLive.session_id);
            if (!alreadyOwned) {
              liveSession = existingLive;
              isResumedSession = true;
            }
          }
        } catch { /* proceed */ }

        // If no live session, check history for a cold session to restore
        if (!liveSession) {
          try {
            const { sessions: allHistory } = await sessionsApi.history();
            const coldMatch = allHistory.find(
              s => s.agent_path?.endsWith(agentPath) && s.has_messages
            );
            if (coldMatch) {
              coldRestoreId = coldMatch.session_id;
            }
          } catch { /* proceed to create fresh */ }
        }
      }

      // Track the last queen phase seen in the event log for cold restore
      let restoredPhase: "planning" | "building" | "staging" | "running" | null = null;

      if (!liveSession) {
        // Reconnect failed — clear stale cached messages from localStorage restore.
        // NEVER wipe when: (a) doing a cold restore (we'll restore from disk) or
@@ -812,29 +942,10 @@
        // double-fetch and greeting leakage).
        let preQueenMsgs: ChatMessage[] = [];
        if (coldRestoreId && !alreadyHasMessages) {
          try {
            const { messages: queenMsgs } = await sessionsApi.queenMessages(coldRestoreId);
            // Also pre-fetch worker messages from the old session if a resumable worker exists
            const displayNameTemp = formatAgentDisplayName(agentPath);
            for (const m of queenMsgs as Message[]) {
              const msg = backendMessageToChatMessage(m, agentType, "Queen Bee");
              msg.role = "queen";
              preQueenMsgs.push(msg);
            }
            // Also try to grab worker messages while we're here
            try {
              const { sessions: workerSessions } = await sessionsApi.workerSessions(coldRestoreId);
              const resumable = workerSessions.find(s => s.status === "active" || s.status === "paused");
              if (resumable) {
                const { messages: wMsgs } = await sessionsApi.messages(coldRestoreId, resumable.session_id);
                for (const m of wMsgs as Message[]) {
                  preQueenMsgs.push(backendMessageToChatMessage(m, agentType, displayNameTemp));
                }
              }
            } catch { /* not critical */ }
          } catch {
            // Not available — will start fresh
          }
          const displayNameTemp = formatAgentDisplayName(agentPath);
          const restored = await restoreSessionMessages(coldRestoreId, agentType, displayNameTemp);
          preQueenMsgs = restored.messages;
          restoredPhase = restored.restoredPhase;
        }

        // Suppress intro whenever we are about to restore a previous conversation.
@@ -908,7 +1019,8 @@
      // failed, the throw inside the catch exits the outer try block.
      const session = liveSession!;
      const displayName = formatAgentDisplayName(session.worker_name || agentType);
      const initialPhase = session.queen_phase || (session.has_worker ? "staging" : "planning");
      const initialPhase = restoredPhase || session.queen_phase || (session.has_worker ? "staging" : "planning");
      queenPhaseRef.current[agentType] = initialPhase;
      updateAgentState(agentType, {
        sessionId: session.session_id,
        displayName,
@@ -945,37 +1057,23 @@
      // For cold-restore, use the old session ID. For live resume, use current session.
      const historyId = coldRestoreId ?? (isResumedSession ? session.session_id : undefined);

      // For LIVE resume (not cold restore), fetch worker + queen messages now.
      // For LIVE resume (not cold restore), fetch event log + worker status now.
      // For cold restore they were already pre-fetched above (before create) so we skip to avoid
      // double-restoring and to avoid capturing the new greeting.
      if (historyId && !coldRestoreId) {
        const restored = await restoreSessionMessages(historyId, agentType, displayName);
        restoredMsgs.push(...restored.messages);

        // Check worker status (needed for isWorkerRunning flag)
        try {
          const { sessions: workerSessions } = await sessionsApi.workerSessions(historyId);
          const resumable = workerSessions.find(
            (s) => s.status === "active" || s.status === "paused",
          );
          isWorkerRunning = resumable?.status === "active";

          if (resumable) {
            const { messages } = await sessionsApi.messages(historyId, resumable.session_id);
            for (const m of messages as Message[]) {
              restoredMsgs.push(backendMessageToChatMessage(m, agentType, displayName));
            }
          }
        } catch {
          // Worker session listing failed — not critical
        }

        try {
          const { messages: queenMsgs } = await sessionsApi.queenMessages(historyId);
          for (const m of queenMsgs as Message[]) {
            const msg = backendMessageToChatMessage(m, agentType, "Queen Bee");
            msg.role = "queen";
            restoredMsgs.push(msg);
          }
        } catch {
          // Queen messages not available — not critical
        }
      }

      // Merge messages in chronological order (only for live resume; cold restore
@@ -1105,38 +1203,79 @@
    }
  }, [agentStates, updateAgentState]);

  // Poll entry points every second for agents with timers to keep
  // next_fire_in countdowns fresh without re-fetching the full topology.
  // Poll entry points every second to keep next_fire_in countdowns fresh
  // and discover dynamically created triggers (via set_trigger).
  useEffect(() => {
    const id = setInterval(async () => {
      for (const [agentType, sessions] of Object.entries(sessionsByAgent)) {
        const session = sessions[0];
        if (!session) continue;
        const timerNodes = session.graphNodes.filter(
          (n) => n.nodeType === "trigger" && n.triggerType === "timer",
        );
        if (timerNodes.length === 0) continue;
        const state = agentStates[agentType];
        if (!state?.sessionId) continue;
        try {
          const { entry_points } = await sessionsApi.entryPoints(state.sessionId);
          // Skip non-manual triggers only
          const triggerEps = entry_points.filter(ep => ep.trigger_type !== "manual");
          if (triggerEps.length === 0) continue;

          const fireMap = new Map<string, number>();
          for (const ep of entry_points) {
          const taskMap = new Map<string, string>();
          for (const ep of triggerEps) {
            if (ep.next_fire_in != null) {
              fireMap.set(`__trigger_${ep.id}`, ep.next_fire_in);
            }
            if (ep.task != null) {
              taskMap.set(`__trigger_${ep.id}`, ep.task);
            }
          }
          if (fireMap.size === 0) continue;

          setSessionsByAgent((prev) => {
            const ss = prev[agentType];
            if (!ss?.length) return prev;
            const updated = ss[0].graphNodes.map((n) => {
            const existingIds = new Set(ss[0].graphNodes.map(n => n.id));

            // Update existing trigger nodes
            let updated = ss[0].graphNodes.map((n) => {
              if (n.nodeType !== "trigger") return n;
              const nfi = fireMap.get(n.id);
              if (nfi == null || n.nodeType !== "trigger") return n;
              return { ...n, triggerConfig: { ...n.triggerConfig, next_fire_in: nfi } };
              const task = taskMap.get(n.id);
              if (nfi == null && task == null) return n;
              return {
                ...n,
                triggerConfig: {
                  ...n.triggerConfig,
                  ...(nfi != null ? { next_fire_in: nfi } : {}),
                  ...(task != null ? { task } : {}),
                },
              };
            });

            // Discover new triggers not yet in the graph
            const entryNode = ss[0].graphNodes.find(n => n.nodeType !== "trigger")?.id;
            const newNodes: GraphNode[] = [];
            for (const ep of triggerEps) {
              const nodeId = `__trigger_${ep.id}`;
              if (existingIds.has(nodeId)) continue;
              newNodes.push({
                id: nodeId,
                label: ep.name || ep.id,
                status: "pending",
                nodeType: "trigger",
                triggerType: ep.trigger_type,
                triggerConfig: {
                  ...ep.trigger_config,
                  ...(ep.next_fire_in != null ? { next_fire_in: ep.next_fire_in } : {}),
                  ...(ep.task ? { task: ep.task } : {}),
                },
                ...(entryNode ? { next: [entryNode] } : {}),
              });
            }
            if (newNodes.length > 0) {
              updated = [...newNodes, ...updated];
            }

            // Skip update if nothing changed
            if (updated.every((n, idx) => n === ss[0].graphNodes[idx])) return prev;
            if (newNodes.length === 0 && updated.every((n, idx) => n === ss[0].graphNodes[idx])) return prev;
            return {
              ...prev,
              [agentType]: ss.map((s, i) => (i === 0 ? { ...s, graphNodes: updated } : s)),
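Entry points and graph nodes are joined by a naming convention rather than a shared key: entry point `ep.id` maps to the synthetic node ID `__trigger_<id>`. Both directions appear in this diff; isolated for clarity (helper names hypothetical):

// Sketch: the __trigger_ prefix links backend entry points to graph nodes.
const toNodeId = (triggerId: string) => `__trigger_${triggerId}`;
const toTriggerId = (nodeId: string) => nodeId.replace("__trigger_", "");
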
@@ -1275,7 +1414,7 @@

  // --- SSE event handler ---
  const upsertChatMessage = useCallback(
    (agentType: string, chatMsg: ChatMessage) => {
    (agentType: string, chatMsg: ChatMessage, options?: { reconcileOptimisticUser?: boolean }) => {
      setSessionsByAgent((prev) => {
        const sessions = prev[agentType] || [];
        const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
@@ -1291,6 +1430,25 @@
            i === idx ? { ...chatMsg, createdAt: m.createdAt ?? chatMsg.createdAt } : m,
          );
        } else {
          const shouldReconcileOptimisticUser =
            !!options?.reconcileOptimisticUser && chatMsg.type === "user" && s.messages.length > 0;
          if (shouldReconcileOptimisticUser) {
            const lastIdx = s.messages.length - 1;
            const lastMsg = s.messages[lastIdx];
            const incomingTs = chatMsg.createdAt ?? Date.now();
            const lastTs = lastMsg.createdAt ?? incomingTs;
            const sameMessage =
              lastMsg.type === "user"
              && lastMsg.content === chatMsg.content
              && Math.abs(incomingTs - lastTs) <= 15000;
            if (sameMessage) {
              newMessages = s.messages.map((m, i) =>
                i === lastIdx ? { ...m, id: chatMsg.id } : m,
              );
              return { ...s, messages: newMessages };
            }
          }

          // Append — SSE events arrive in server-timestamp order via the
          // shared EventBus, so arrival order already interleaves queen
          // and worker correctly. Local user messages are always created
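The reconciliation heuristic treats an incoming `client_input_received` as the server echo of the optimistically appended user message when the content matches and the timestamps fall within 15 seconds; condensed into a standalone predicate (a sketch):

// Sketch: is this SSE user message just the echo of the local optimistic one?
function isEchoOfOptimistic(
  last: { type?: string; content: string; createdAt?: number },
  incoming: { content: string; createdAt?: number },
): boolean {
  const incomingTs = incoming.createdAt ?? Date.now();
  const lastTs = last.createdAt ?? incomingTs;
  return last.type === "user"
    && last.content === incoming.content
    && Math.abs(incomingTs - lastTs) <= 15000;
}
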
@@ -1308,8 +1466,6 @@
  const handleSSEEvent = useCallback(
    (agentType: string, event: AgentEvent) => {
      const streamId = event.stream_id;
      if (streamId === "judge") return;

      const isQueen = streamId === "queen";
      if (isQueen) console.log('[QUEEN] handleSSEEvent:', event.type, 'agentType:', agentType);
      // Drop queen message content while suppressing the auto-intro after a cold-restore.
@@ -1345,6 +1501,23 @@
          if (Object.keys(priorSnapshots).length > 0) {
            console.debug(`[hive] execution_started: dropping ${Object.keys(priorSnapshots).length} unflushed LLM snapshot(s)`);
          }
          // Insert a run divider when a new run_id is detected
          const incomingRunId = event.run_id || null;
          const prevRunId = agentStates[agentType]?.currentRunId;
          if (incomingRunId && incomingRunId !== prevRunId) {
            const dividerMsg: ChatMessage = {
              id: `run-divider-${incomingRunId}`,
              agent: "",
              agentColor: "",
              content: prevRunId ? "New Run" : "Run Started",
              timestamp: ts,
              type: "run_divider",
              role: "worker",
              thread: agentType,
              createdAt: eventCreatedAt,
            };
            upsertChatMessage(agentType, dividerMsg);
          }
          turnCounterRef.current[turnKey] = currentTurn + 1;
          updateAgentState(agentType, {
            isTyping: true,
@@ -1353,6 +1526,7 @@
            awaitingInput: false,
            workerRunState: "running",
            currentExecutionId: event.execution_id || agentStates[agentType]?.currentExecutionId || null,
            currentRunId: incomingRunId,
            nodeLogs: {},
            subagentReports: [],
            llmSnapshots: {},
@@ -1404,13 +1578,29 @@
        case "execution_paused":
        case "execution_failed":
        case "client_output_delta":
        case "client_input_received":
        case "client_input_requested":
        case "llm_text_delta": {
          const chatMsg = sseEventToChatMessage(event, agentType, displayName, currentTurn);
          if (isQueen) console.log('[QUEEN] chatMsg:', chatMsg?.id, chatMsg?.content?.slice(0, 50), 'turn:', currentTurn);
          if (chatMsg && !suppressQueenMessages) {
            if (isQueen) chatMsg.role = role;
            upsertChatMessage(agentType, chatMsg);
            // Queen emits multiple client_output_delta / llm_text_delta snapshots
            // across iterations and inner tool-loop turns. Build a stable ID that
            // groups streaming deltas for the *same* output (same execution +
            // iteration + inner_turn) into one bubble, while keeping distinct
            // outputs as separate bubbles so earlier text isn't overwritten.
            if (isQueen && (event.type === "client_output_delta" || event.type === "llm_text_delta") && event.execution_id) {
              const iter = event.data?.iteration ?? 0;
              const inner = event.data?.inner_turn ?? 0;
              chatMsg.id = `queen-stream-${event.execution_id}-${iter}-${inner}`;
            }
            if (isQueen) {
              chatMsg.role = role;
              chatMsg.phase = queenPhaseRef.current[agentType] as ChatMessage["phase"];
            }
            upsertChatMessage(agentType, chatMsg, {
              reconcileOptimisticUser: event.type === "client_input_received",
            });
          }

          // Mark streaming when LLM text is actively arriving
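So for queen streams the generic `stream-…` ID from sseEventToChatMessage is overridden with a composite key: one bubble per (execution, iteration, inner turn) triple. The key in isolation (helper name hypothetical):

// Sketch: grouping key for queen streaming deltas.
const queenBubbleId = (executionId: string, iteration = 0, innerTurn = 0) =>
  `queen-stream-${executionId}-${iteration}-${innerTurn}`;
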
@@ -1850,6 +2040,7 @@
            : rawPhase === "staging" ? "staging"
            : rawPhase === "planning" ? "planning"
            : "building";
          queenPhaseRef.current[agentType] = newPhase;
          updateAgentState(agentType, {
            queenPhase: newPhase,
            queenBuilding: newPhase === "building",
@@ -1950,6 +2141,136 @@
          break;
        }

        case "trigger_activated": {
          const triggerId = event.data?.trigger_id as string;
          if (triggerId) {
            const nodeId = `__trigger_${triggerId}`;
            // If the trigger node doesn't exist yet (dynamically created via set_trigger),
            // synthesize it before updating status.
            setSessionsByAgent(prev => {
              const sessions = prev[agentType] || [];
              const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
              return {
                ...prev,
                [agentType]: sessions.map(s => {
                  if (s.id !== activeId) return s;
                  const exists = s.graphNodes.some(n => n.id === nodeId);
                  if (exists) {
                    return {
                      ...s,
                      graphNodes: s.graphNodes.map(n =>
                        n.id === nodeId ? { ...n, status: "running" as const } : n,
                      ),
                    };
                  }
                  // Synthesize new trigger node at the front of the graph
                  const triggerType = (event.data?.trigger_type as string) || "timer";
                  const triggerConfig = (event.data?.trigger_config as Record<string, unknown>) || {};
                  const entryNode = s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
                  const newNode: GraphNode = {
                    id: nodeId,
                    label: triggerId,
                    status: "running",
                    nodeType: "trigger",
                    triggerType,
                    triggerConfig,
                    ...(entryNode ? { next: [entryNode] } : {}),
                  };
                  return { ...s, graphNodes: [newNode, ...s.graphNodes] };
                }),
              };
            });
          }
          break;
        }

        case "trigger_deactivated": {
          const triggerId = event.data?.trigger_id as string;
          if (triggerId) {
            // Clear next_fire_in so countdown hides when inactive
            setSessionsByAgent(prev => {
              const sessions = prev[agentType] || [];
              const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
              return {
                ...prev,
                [agentType]: sessions.map(s => {
                  if (s.id !== activeId) return s;
                  return {
                    ...s,
                    graphNodes: s.graphNodes.map(n => {
                      if (n.id !== `__trigger_${triggerId}`) return n;
                      const { next_fire_in: _, ...restConfig } = (n.triggerConfig || {}) as Record<string, unknown> & { next_fire_in?: unknown };
                      return { ...n, status: "pending" as const, triggerConfig: restConfig };
                    }),
                  };
                }),
              };
            });
          }
          break;
        }

        case "trigger_fired": {
          const triggerId = event.data?.trigger_id as string;
          if (triggerId) {
            const nodeId = `__trigger_${triggerId}`;
            updateGraphNodeStatus(agentType, nodeId, "complete");
            setTimeout(() => updateGraphNodeStatus(agentType, nodeId, "running"), 1500);
          }
          break;
        }

        case "trigger_available": {
          const triggerId = event.data?.trigger_id as string;
          if (triggerId) {
            const nodeId = `__trigger_${triggerId}`;
            setSessionsByAgent(prev => {
              const sessions = prev[agentType] || [];
              const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
              return {
                ...prev,
                [agentType]: sessions.map(s => {
                  if (s.id !== activeId) return s;
                  if (s.graphNodes.some(n => n.id === nodeId)) return s;
                  const triggerType = (event.data?.trigger_type as string) || "timer";
                  const triggerConfig = (event.data?.trigger_config as Record<string, unknown>) || {};
                  const entryNode = s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
                  const newNode: GraphNode = {
                    id: nodeId,
                    label: triggerId,
                    status: "pending",
                    nodeType: "trigger",
                    triggerType,
                    triggerConfig,
                    ...(entryNode ? { next: [entryNode] } : {}),
                  };
                  return { ...s, graphNodes: [newNode, ...s.graphNodes] };
                }),
              };
            });
          }
          break;
        }

        case "trigger_removed": {
          const triggerId = event.data?.trigger_id as string;
          if (triggerId) {
            const nodeId = `__trigger_${triggerId}`;
            setSessionsByAgent(prev => {
              const sessions = prev[agentType] || [];
              const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
              return {
                ...prev,
                [agentType]: sessions.map(s => {
                  if (s.id !== activeId) return s;
                  return { ...s, graphNodes: s.graphNodes.filter(n => n.id !== nodeId) };
                }),
              };
            });
          }
          break;
        }

        default:
          // Fallback: ensure queenReady is set even for unexpected first events
          if (shouldMarkQueenReady) updateAgentState(agentType, { queenReady: true });
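The five trigger cases amount to a small event-to-UI lifecycle; summarized as data (the record below is illustrative, not code from the diff):

// Sketch: SSE trigger event -> effect on the __trigger_<id> graph node.
const triggerEffects: Record<string, string> = {
  trigger_activated: "set status to running, synthesizing the node if missing",
  trigger_deactivated: "set status to pending and drop next_fire_in",
  trigger_fired: "flash complete, then back to running after 1.5s",
  trigger_available: "add a pending node if not already present",
  trigger_removed: "remove the node from the graph",
};
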
@@ -1980,6 +2301,18 @@
    ? { nodes: activeSession.graphNodes, title: activeAgentState?.displayName || formatAgentDisplayName(baseAgentType(activeWorker)) }
    : { nodes: [] as GraphNode[], title: "" };

  // Keep selectedNode in sync with live graphNodes (trigger status updates via SSE)
  const liveSelectedNode = selectedNode && currentGraph.nodes.find(n => n.id === selectedNode.id);
  const resolvedSelectedNode = liveSelectedNode || selectedNode;

  // Sync trigger task draft when selected trigger node changes
  useEffect(() => {
    if (resolvedSelectedNode?.nodeType === "trigger") {
      const tc = resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined;
      setTriggerTaskDraft((tc?.task as string) || "");
    }
  }, [resolvedSelectedNode?.id]);

  // Build a flat list of all agent-type tabs for the tab bar
  const agentTabs = Object.entries(sessionsByAgent)
    .filter(([, sessions]) => sessions.length > 0)
@@ -2276,7 +2609,7 @@
  const closeAgentTab = useCallback((agentType: string) => {
    setSelectedNode(null);
    // Pause worker execution if running (saves checkpoint), then kill the
    // entire backend session so the queen and judge don't keep running.
    // entire backend session so the queen doesn't keep running.
    const state = agentStates[agentType];
    if (state?.sessionId) {
      const pausePromise = (state.currentExecutionId && state.workerRunState === "running")
@@ -2316,28 +2649,37 @@
    }
  }, [sessionsByAgent, activeWorker, navigate, agentStates]);

  // Create a new session for any agent type (used by NewTabPopover)
  // Open a tab for an agent type. If a tab already exists, switch to it
  // instead of creating a duplicate — each agent gets one session.
  // Exception: "new-agent" tabs always create a new instance since each
  // represents a distinct conversation the user is starting from scratch.
  const addAgentSession = useCallback((agentType: string, agentLabel?: string) => {
    // Count all existing open tabs for this base agent type (first tab uses agentType
    // as key; subsequent tabs use "agentType::frontendSessionId" as unique keys).
    const existingTabCount = Object.keys(sessionsByAgent).filter(
      k => baseAgentType(k) === agentType && (sessionsByAgent[k] || []).length > 0,
    ).length;
    const isNewAgent = agentType === "new-agent" || agentType.startsWith("new-agent-");

    const newIndex = existingTabCount + 1;
    const existingCreds = sessionsByAgent[agentType]?.[0]?.credentials;
    const displayLabel = agentLabel || formatAgentDisplayName(agentType);
    const label = newIndex === 1 ? displayLabel : `${displayLabel} #${newIndex}`;
    const newSession = createSession(agentType, label, existingCreds);

    // First tab keeps agentType as its key (backward-compatible with all existing
    // logic). Additional tabs get a unique key so each has its own isolated
    // agentStates slot, its own backend session, and its own tab-bar entry.
    const tabKey = existingTabCount === 0 ? agentType : `${agentType}::${newSession.id}`;
    if (tabKey !== agentType) {
      newSession.tabKey = tabKey;
    if (!isNewAgent) {
      const existingTabKey = Object.keys(sessionsByAgent).find(
        k => baseAgentType(k) === agentType && (sessionsByAgent[k] || []).length > 0,
      );
      if (existingTabKey) {
        setActiveWorker(existingTabKey);
        const existing = sessionsByAgent[existingTabKey]?.[0];
        if (existing) {
          setActiveSessionByAgent(prev => ({ ...prev, [existingTabKey]: existing.id }));
        }
        return;
      }
    }

    const tabKey = isNewAgent ? `new-agent-${makeId()}` : agentType;
    const existingNewAgentCount = isNewAgent
      ? Object.keys(sessionsByAgent).filter(
          k => (k === "new-agent" || k.startsWith("new-agent-")) && (sessionsByAgent[k] || []).length > 0
        ).length
      : 0;
    const rawLabel = agentLabel || (isNewAgent ? "New Agent" : formatAgentDisplayName(agentType));
    const displayLabel = existingNewAgentCount === 0 ? rawLabel : `${rawLabel} #${existingNewAgentCount + 1}`;
    const newSession = createSession(tabKey, displayLabel);

    setSessionsByAgent(prev => ({
      ...prev,
      [tabKey]: [newSession],
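Net behavior of the reworked addAgentSession, as a usage sketch (agent names hypothetical):

// Sketch: existing agents focus their single tab; "new-agent" always forks.
addAgentSession("inbox-management"); // no tab yet -> creates "inbox-management"
addAgentSession("inbox-management"); // tab exists -> switches to it, no duplicate
addAgentSession("new-agent");        // -> new tab "new-agent-<id>", label "New Agent"
addAgentSession("new-agent");        // -> another instance, label "New Agent #2"
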
@@ -2365,16 +2707,13 @@
    }

    // Pre-fetch messages from disk so the tab opens with conversation already shown.
    // This happens BEFORE creating the tab so no "new session" empty state is visible.
    // Prefer the persisted event log for full UI reconstruction; fall back to parts.
    let prefetchedMessages: ChatMessage[] = [];
    try {
      const { messages: queenMsgs } = await sessionsApi.queenMessages(sessionId);
      for (const m of queenMsgs as Message[]) {
        const resolvedType = agentPath || "new-agent";
        const msg = backendMessageToChatMessage(m, resolvedType, "Queen Bee");
        msg.role = "queen";
        prefetchedMessages.push(msg);
      }
      const resolvedType = agentPath || "new-agent";
      const displayNameTemp = agentName || formatAgentDisplayName(resolvedType);
      const restored = await restoreSessionMessages(sessionId, resolvedType, displayNameTemp);
      prefetchedMessages = restored.messages;
      if (prefetchedMessages.length > 0) {
        prefetchedMessages.sort((a, b) => (a.createdAt ?? 0) - (b.createdAt ?? 0));
      }
@@ -2441,7 +2780,6 @@

  const activeWorkerLabel = activeAgentState?.displayName || formatAgentDisplayName(baseAgentType(activeWorker));

  return (
    <div className="flex flex-col h-screen bg-background overflow-hidden">
      <TopBar
@@ -2490,10 +2828,10 @@
      <div className="flex flex-1 min-h-0">

        {/* ── Pipeline graph + chat ──────────────────────────────────── */}
        <div className={`${((activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building") && activeAgentState?.draftGraph) || activeAgentState?.originalDraft ? "w-[500px] min-w-[400px]" : "w-[300px] min-w-[240px]"} bg-card/30 flex flex-col border-r border-border/30 transition-[width] duration-200`}>
        <div className={`${activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building" || activeAgentState?.originalDraft ? "w-[500px] min-w-[400px]" : "w-[300px] min-w-[240px]"} bg-card/30 flex flex-col border-r border-border/30 transition-[width] duration-200`}>
          <div className="flex-1 min-h-0">
            {(activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building") && activeAgentState?.draftGraph ? (
              <DraftGraph draft={activeAgentState.draftGraph} building={activeAgentState?.queenBuilding} onRun={handleRun} onPause={handlePause} runState={activeAgentState?.workerRunState ?? "idle"} />
            {activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building" ? (
              <DraftGraph draft={activeAgentState?.draftGraph ?? null} loading={!activeAgentState?.draftGraph} building={activeAgentState?.queenBuilding} onRun={handleRun} onPause={handlePause} runState={activeAgentState?.workerRunState ?? "idle"} />
            ) : activeAgentState?.originalDraft ? (
              <DraftGraph
                draft={activeAgentState.originalDraft}
@@ -2602,20 +2940,32 @@
            />
          )}
        </div>
        {selectedNode && (
          <div className="w-[408px] min-w-[340px] flex-shrink-0">
            {selectedNode.nodeType === "trigger" ? (
        {resolvedSelectedNode && (
          <div className="w-[480px] min-w-[400px] flex-shrink-0">
            {resolvedSelectedNode.nodeType === "trigger" ? (
              <div className="flex flex-col h-full border-l border-border/40 bg-card/20 animate-in slide-in-from-right">
                <div className="px-4 pt-4 pb-3 border-b border-border/30 flex items-start justify-between gap-2">
                  <div className="flex items-start gap-3 min-w-0">
                    <div className="w-8 h-8 rounded-lg flex items-center justify-center flex-shrink-0 mt-0.5 bg-[hsl(210,40%,55%)]/15 border border-[hsl(210,40%,55%)]/25">
                      <span className="text-sm" style={{ color: "hsl(210,40%,55%)" }}>
                        {{ "webhook": "\u26A1", "timer": "\u23F1", "api": "\u2192", "event": "\u223F" }[selectedNode.triggerType || ""] || "\u26A1"}
                        {{ "webhook": "\u26A1", "timer": "\u23F1", "api": "\u2192", "event": "\u223F" }[resolvedSelectedNode.triggerType || ""] || "\u26A1"}
                      </span>
                    </div>
                    <div className="min-w-0">
                      <h3 className="text-sm font-semibold text-foreground leading-tight">{selectedNode.label}</h3>
                      <p className="text-[11px] text-muted-foreground mt-0.5 capitalize">{selectedNode.triggerType} trigger</p>
                      <h3 className="text-sm font-semibold text-foreground leading-tight">{resolvedSelectedNode.label}</h3>
                      <p className="text-[11px] text-muted-foreground mt-0.5 capitalize flex items-center gap-1.5">
                        {resolvedSelectedNode.triggerType} trigger
                        <span className={`inline-block w-1.5 h-1.5 rounded-full ${
                          resolvedSelectedNode.status === "running" || resolvedSelectedNode.status === "complete"
                            ? "bg-emerald-400" : "bg-muted-foreground/40"
                        }`} />
                        <span className={`text-[10px] ${
                          resolvedSelectedNode.status === "running" || resolvedSelectedNode.status === "complete"
                            ? "text-emerald-400" : "text-muted-foreground/60"
                        }`}>
                          {resolvedSelectedNode.status === "running" || resolvedSelectedNode.status === "complete" ? "active" : "inactive"}
                        </span>
                      </p>
                    </div>
                  </div>
                  <button onClick={() => setSelectedNode(null)} className="p-1 rounded-md text-muted-foreground hover:text-foreground hover:bg-muted/50 transition-colors flex-shrink-0">
@@ -2624,7 +2974,7 @@
                </div>
                <div className="px-4 py-4 flex flex-col gap-3">
                  {(() => {
                    const tc = selectedNode.triggerConfig as Record<string, unknown> | undefined;
                    const tc = resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined;
                    const cron = tc?.cron as string | undefined;
                    const interval = tc?.interval_minutes as number | undefined;
                    const eventTypes = tc?.event_types as string[] | undefined;
@@ -2645,7 +2995,7 @@
                    ) : null;
                  })()}
                  {(() => {
                    const nfi = (selectedNode.triggerConfig as Record<string, unknown> | undefined)?.next_fire_in as number | undefined;
                    const nfi = (resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined)?.next_fire_in as number | undefined;
                    return nfi != null ? (
                      <div>
                        <p className="text-[10px] font-medium text-muted-foreground uppercase tracking-wider mb-1.5">Next run</p>
@@ -2655,25 +3005,92 @@
                      </div>
                    ) : null;
                  })()}
                  <div>
                    <p className="text-[10px] font-medium text-muted-foreground uppercase tracking-wider mb-1.5">Task</p>
                    <textarea
                      value={triggerTaskDraft}
                      onChange={(e) => setTriggerTaskDraft(e.target.value)}
                      placeholder="Describe what the worker should do when this trigger fires..."
                      className="w-full text-xs text-foreground/80 bg-muted/30 rounded-lg px-3 py-2 border border-border/20 resize-none min-h-[60px] font-mono focus:outline-none focus:border-primary/40"
                      rows={3}
                    />
                    {(() => {
                      const currentTask = (resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined)?.task as string || "";
                      const hasChanged = triggerTaskDraft !== currentTask;
                      if (!hasChanged) return null;
                      return (
                        <button
                          disabled={triggerTaskSaving}
                          onClick={async () => {
                            const sessionId = activeAgentState?.sessionId;
                            const triggerId = resolvedSelectedNode.id.replace("__trigger_", "");
                            if (!sessionId) return;
                            setTriggerTaskSaving(true);
                            try {
                              await sessionsApi.updateTriggerTask(sessionId, triggerId, triggerTaskDraft);
                            } finally {
                              setTriggerTaskSaving(false);
                            }
                          }}
                          className="mt-1.5 w-full text-[11px] px-3 py-1.5 rounded-lg border border-primary/30 text-primary hover:bg-primary/10 transition-colors disabled:opacity-50"
                        >
                          {triggerTaskSaving ? "Saving..." : "Save Task"}
                        </button>
                      );
                    })()}
                    {!triggerTaskDraft && (
                      <p className="text-[10px] text-amber-400/80 mt-1">A task is required before enabling this trigger.</p>
                    )}
                  </div>
                  <div>
                    <p className="text-[10px] font-medium text-muted-foreground uppercase tracking-wider mb-1.5">Fires into</p>
                    <p className="text-xs text-foreground/80 font-mono bg-muted/30 rounded-lg px-3 py-2 border border-border/20">
                      {selectedNode.next?.[0]?.split("-").map(w => w.charAt(0).toUpperCase() + w.slice(1)).join(" ") || "—"}
                      {resolvedSelectedNode.next?.[0]?.split("-").map(w => w.charAt(0).toUpperCase() + w.slice(1)).join(" ") || "—"}
                    </p>
                  </div>
                  {activeAgentState?.queenPhase !== "building" && (() => {
                    const triggerIsActive = resolvedSelectedNode.status === "running" || resolvedSelectedNode.status === "complete";
                    const triggerId = resolvedSelectedNode.id.replace("__trigger_", "");
                    const taskMissing = !triggerTaskDraft;
                    return (
                      <div className="pt-1">
                        <button
                          disabled={!triggerIsActive && taskMissing}
                          onClick={async () => {
                            const sessionId = activeAgentState?.sessionId;
                            if (!sessionId) return;
                            const action = triggerIsActive ? "Disable" : "Enable";
                            await executionApi.chat(sessionId, `${action} trigger ${triggerId}`);
                          }}
                          className={`w-full text-xs px-3 py-2 rounded-lg border transition-colors ${
                            triggerIsActive
                              ? "border-red-500/30 text-red-400 hover:bg-red-500/10"
                              : taskMissing
                                ? "border-border/30 text-muted-foreground/40 cursor-not-allowed"
                                : "border-emerald-500/30 text-emerald-400 hover:bg-emerald-500/10"
                          }`}
                        >
                          {triggerIsActive ? "Disable Trigger" : "Enable Trigger"}
                        </button>
                        {!triggerIsActive && taskMissing && (
                          <p className="text-[10px] text-muted-foreground/50 mt-1 text-center">Configure a task first</p>
                        )}
                      </div>
                    );
                  })()}
                </div>
              </div>
            ) : (
              <NodeDetailPanel
                node={selectedNode}
                nodeSpec={activeAgentState?.nodeSpecs.find(n => n.id === selectedNode.id) ?? null}
                node={resolvedSelectedNode}
                nodeSpec={activeAgentState?.nodeSpecs.find(n => n.id === resolvedSelectedNode.id) ?? null}
                allNodeSpecs={activeAgentState?.nodeSpecs}
                subagentReports={activeAgentState?.subagentReports}
                sessionId={activeAgentState?.sessionId || undefined}
                graphId={activeAgentState?.graphId || undefined}
                workerSessionId={null}
                nodeLogs={activeAgentState?.nodeLogs[selectedNode.id] || []}
                actionPlan={activeAgentState?.nodeActionPlans[selectedNode.id]}
                nodeLogs={activeAgentState?.nodeLogs[resolvedSelectedNode.id] || []}
                actionPlan={activeAgentState?.nodeActionPlans[resolvedSelectedNode.id]}
                onClose={() => setSelectedNode(null)}
              />
            )}
@@ -2687,7 +3104,15 @@
          agentLabel={activeWorkerLabel}
          agentPath={credentialAgentPath || activeAgentState?.agentPath || (!activeWorker.startsWith("new-agent") ? activeWorker : undefined)}
          open={credentialsOpen}
          onClose={() => { setCredentialsOpen(false); setCredentialAgentPath(null); setDismissedBanner(null); }}
          onClose={() => {
            setCredentialsOpen(false);
            setCredentialAgentPath(null);
            // Keep credentials_required error set — clearing it here triggers
            // the auto-load effect which retries session creation immediately,
            // causing an infinite modal loop when credentials are still missing.
            // The error is only cleared in onCredentialChange (below) when the
            // user actually saves valid credentials.
          }}
          credentials={activeSession?.credentials || []}
          onCredentialChange={() => {
            // Clear credential error so the auto-load effect retries session creation

+2 -1
@@ -1,6 +1,6 @@
 [project]
 name = "framework"
-version = "0.5.1"
+version = "0.7.1"
 description = "Goal-driven agent runtime with Builder-friendly observability"
 readme = "README.md"
 requires-python = ">=3.11"
@@ -11,6 +11,7 @@ dependencies = [
     "litellm>=1.81.0",
     "mcp>=1.0.0",
     "fastmcp>=2.0.0",
+    "croniter>=1.4.0",
     "tools",
 ]
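
The new `croniter` pin presumably backs the cron-style trigger scheduling surfaced in the Workspace panel above (the `cron` and `next_fire_in` fields in `triggerConfig`). A minimal sketch of computing a `next_fire_in`-style value; the helper name is hypothetical, only the `croniter` calls are real API:

```python
from datetime import datetime

from croniter import croniter


def seconds_until_next_fire(cron_expr: str, now: datetime | None = None) -> float:
    # Hypothetical helper mirroring the trigger panel's next_fire_in field:
    # seconds from `now` until `cron_expr` next fires.
    now = now or datetime.now()
    next_fire = croniter(cron_expr, now).get_next(datetime)
    return (next_fire - now).total_seconds()


print(seconds_until_next_fire("*/15 * * * *"))  # e.g. 412.3
```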
@@ -1,140 +0,0 @@
#!/usr/bin/env python3
"""
Setup script for Aden Hive Framework MCP Server

This script installs the framework and configures the MCP server.
"""

import json
import logging
import subprocess
import sys
from pathlib import Path

logger = logging.getLogger(__name__)


def setup_logger():
    """Configure logger for CLI usage with colored output."""
    if not logger.handlers:
        handler = logging.StreamHandler(sys.stdout)
        formatter = logging.Formatter("%(message)s")
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)


class Colors:
    """ANSI color codes for terminal output."""

    GREEN = "\033[0;32m"
    YELLOW = "\033[1;33m"
    RED = "\033[0;31m"
    BLUE = "\033[0;34m"
    NC = "\033[0m"  # No Color


def log_step(message: str):
    """Log a colored step message."""
    logger.info(f"{Colors.YELLOW}{message}{Colors.NC}")


def log_success(message: str):
    """Log a success message."""
    logger.info(f"{Colors.GREEN}✓ {message}{Colors.NC}")


def log_error(message: str):
    """Log an error message."""
    logger.error(f"{Colors.RED}✗ {message}{Colors.NC}")


def run_command(cmd: list, error_msg: str) -> bool:
    """Run a command and return success status."""
    try:
        subprocess.run(
            cmd,
            check=True,
            capture_output=True,
            text=True,
            encoding="utf-8",
        )
        return True
    except subprocess.CalledProcessError as e:
        log_error(error_msg)
        logger.error(f"Error output: {e.stderr}")
        return False


def main():
    """Main setup function."""
    setup_logger()
    logger.info("=== Aden Hive Framework MCP Server Setup ===")
    logger.info("")

    # Get script directory
    script_dir = Path(__file__).parent.absolute()

    # Step 1: Install framework package
    log_step("Step 1: Installing framework package...")
    if not run_command(
        [sys.executable, "-m", "pip", "install", "-e", str(script_dir)],
        "Failed to install framework package",
    ):
        sys.exit(1)
    log_success("Framework package installed")
    logger.info("")

    # Step 2: Install MCP dependencies
    log_step("Step 2: Installing MCP dependencies...")
    if not run_command(
        [sys.executable, "-m", "pip", "install", "mcp", "fastmcp"],
        "Failed to install MCP dependencies",
    ):
        sys.exit(1)
    log_success("MCP dependencies installed")
    logger.info("")

    # Step 3: Verify MCP configuration
    log_step("Step 3: Verifying MCP server configuration...")
    mcp_config_path = script_dir / ".mcp.json"

    if mcp_config_path.exists():
        log_success("MCP configuration found at .mcp.json")
        logger.info("Configuration:")
        with open(mcp_config_path, encoding="utf-8") as f:
            config = json.load(f)
        logger.info(json.dumps(config, indent=2))
    else:
        log_success("No .mcp.json needed (MCP servers configured at repo root)")
    logger.info("")

    # Step 4: Test framework import
    log_step("Step 4: Testing framework import...")
    try:
        subprocess.run(
            [sys.executable, "-c", "import framework; print('OK')"],
            check=True,
            capture_output=True,
            text=True,
            encoding="utf-8",
        )
        log_success("Framework module verified")
    except subprocess.CalledProcessError as e:
        log_error("Failed to import framework module")
        logger.error(f"Error: {e.stderr}")
        sys.exit(1)
    logger.info("")

    # Success summary
    logger.info(f"{Colors.GREEN}=== Setup Complete ==={Colors.NC}")
    logger.info("")
    logger.info("The framework is now ready to use!")
    logger.info("")
    logger.info(f"{Colors.BLUE}MCP Configuration location:{Colors.NC}")
    logger.info(f"  {mcp_config_path}")
    logger.info("")


if __name__ == "__main__":
    main()
@@ -0,0 +1,44 @@
# Dummy Agent Tests (Level 2)

End-to-end tests that run real LLM calls against deterministic graph structures. Not part of CI — run manually to verify the executor works with real providers.

## Quick Start

```bash
cd core
uv run python tests/dummy_agents/run_all.py
```

The script detects available credentials and prompts you to pick a provider. You need at least one of:

- `ANTHROPIC_API_KEY`
- `OPENAI_API_KEY`
- `GEMINI_API_KEY`
- `ZAI_API_KEY`
- Claude Code / Codex / Kimi subscription
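
For example, with a single key exported the runner skips the interactive prompt and uses that provider directly (the key value below is a placeholder):

```bash
export ANTHROPIC_API_KEY=sk-ant-...  # placeholder key
cd core
uv run python tests/dummy_agents/run_all.py
```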

## Verbose Mode

Show live LLM logs (tool calls, judge verdicts, node traversal):

```bash
uv run python tests/dummy_agents/run_all.py --verbose
```

## What's Tested

| Agent | Tests | What it covers |
|-------|-------|----------------|
| echo | 2 | Single-node lifecycle, basic set_output |
| pipeline | 4 | Multi-node traversal, input_mapping, conversation modes |
| branch | 3 | Conditional edges, LLM-driven routing |
| parallel_merge | 4 | Fan-out/fan-in, failure strategies |
| retry | 4 | Retry mechanics, exhaustion, ON_FAILURE edges |
| feedback_loop | 3 | Feedback cycles, max_node_visits |
| worker | 4 | Real MCP tools (example_tool, get_current_time, save_data/load_data) |

## Notes

- Tests are **auto-skipped** in regular `pytest` runs (no LLM configured)
- Worker tests start the `hive-tools` MCP server as a subprocess
- Typical runtime: ~1-3 min depending on provider
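
Because the skip is applied by a conftest collection hook, a plain pytest invocation is safe to run anywhere; assuming a checkout with no LLM credentials configured, something like:

```bash
cd core
uv run pytest tests/dummy_agents -q  # expect every test reported as skipped, with the run_all.py hint as the reason
```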
@@ -0,0 +1,3 @@
# Level 2: Dummy Agent Tests
# End-to-end graph execution tests with real LLM calls.
# NOT part of regular CI — run manually with: uv run python tests/dummy_agents/run_all.py
@@ -0,0 +1,140 @@
"""Shared fixtures for dummy agent end-to-end tests.

These tests use real LLM providers — they are NOT part of regular CI.
Run via: cd core && uv run python tests/dummy_agents/run_all.py
"""

from __future__ import annotations

from pathlib import Path

import pytest

from framework.graph.executor import GraphExecutor, ParallelExecutionConfig
from framework.graph.goal import Goal
from framework.llm.litellm import LiteLLMProvider
from framework.runtime.core import Runtime

# ── module-level state set by run_all.py ─────────────────────────────

_selected_model: str | None = None
_selected_api_key: str | None = None
_selected_extra_headers: dict[str, str] | None = None
_selected_api_base: str | None = None


def set_llm_selection(
    model: str,
    api_key: str,
    extra_headers: dict[str, str] | None = None,
    api_base: str | None = None,
) -> None:
    """Called by run_all.py after user selects a provider."""
    global _selected_model, _selected_api_key, _selected_extra_headers, _selected_api_base
    _selected_model = model
    _selected_api_key = api_key
    _selected_extra_headers = extra_headers
    _selected_api_base = api_base


# ── collection hook: skip entire directory when not configured ───────


def pytest_collection_modifyitems(config, items):
    """Skip all dummy_agents tests when no LLM is configured.

    This prevents these tests from running in regular CI. They only run
    when launched via run_all.py (which calls set_llm_selection first).
    """
    if _selected_model is not None:
        return  # LLM configured, run normally

    skip = pytest.mark.skip(
        reason="Dummy agent tests require a real LLM. "
        "Run via: cd core && uv run python tests/dummy_agents/run_all.py"
    )
    for item in items:
        if "dummy_agents" in str(item.fspath):
            item.add_marker(skip)


# ── fixtures ─────────────────────────────────────────────────────────


@pytest.fixture(scope="session")
def llm_provider():
    """Real LLM provider using the user-selected model."""
    if _selected_model is None or _selected_api_key is None:
        pytest.skip("No LLM selected — run via run_all.py")
    kwargs = {"model": _selected_model, "api_key": _selected_api_key}
    if _selected_extra_headers:
        kwargs["extra_headers"] = _selected_extra_headers
    if _selected_api_base:
        kwargs["api_base"] = _selected_api_base
    return LiteLLMProvider(**kwargs)


@pytest.fixture(scope="session")
def tool_registry():
    """Load hive-tools MCP server and return a ToolRegistry with real tools.

    Session-scoped so the MCP server is started once and reused across tests.
    """
    from framework.runner.tool_registry import ToolRegistry

    registry = ToolRegistry()
    # Resolve the tools directory relative to the repo root
    repo_root = Path(__file__).resolve().parents[3]  # core/tests/dummy_agents -> repo root
    tools_dir = repo_root / "tools"

    mcp_config = {
        "name": "hive-tools",
        "transport": "stdio",
        "command": "uv",
        "args": ["run", "python", "mcp_server.py", "--stdio"],
        "cwd": str(tools_dir),
        "description": "Hive tools MCP server",
    }
    registry.register_mcp_server(mcp_config)
    yield registry
    registry.cleanup()


@pytest.fixture
def runtime(tmp_path):
    """Real Runtime backed by a temp directory."""
    return Runtime(storage_path=tmp_path / "runtime")


@pytest.fixture
def goal():
    return Goal(id="dummy", name="Dummy Agent Test", description="Level 2 end-to-end testing")


def make_executor(
    runtime: Runtime,
    llm: LiteLLMProvider,
    *,
    enable_parallel: bool = True,
    parallel_config: ParallelExecutionConfig | None = None,
    loop_config: dict | None = None,
    tool_registry=None,
    storage_path: Path | None = None,
) -> GraphExecutor:
    """Factory that creates a GraphExecutor with a real LLM."""
    tools = []
    tool_executor = None
    if tool_registry is not None:
        tools = list(tool_registry.get_tools().values())
        tool_executor = tool_registry.get_executor()

    return GraphExecutor(
        runtime=runtime,
        llm=llm,
        tools=tools,
        tool_executor=tool_executor,
        enable_parallel_execution=enable_parallel,
        parallel_config=parallel_config,
        loop_config=loop_config or {"max_iterations": 10},
        storage_path=storage_path,
    )
@@ -0,0 +1,64 @@
"""Minimal helper nodes for deterministic control-flow tests.

Most tests use real EventLoopNode with real LLM calls. These helpers
exist only for tests that need predictable failure/success patterns
(retry, feedback loop, parallel failure modes).
"""

from __future__ import annotations

from framework.graph.node import NodeContext, NodeProtocol, NodeResult


class SuccessNode(NodeProtocol):
    """Always succeeds with configurable output dict."""

    def __init__(self, output: dict | None = None):
        self._output = output or {"status": "ok"}
        self.executed = False
        self.execute_count = 0

    async def execute(self, ctx: NodeContext) -> NodeResult:
        self.executed = True
        self.execute_count += 1
        return NodeResult(success=True, output=self._output, tokens_used=1, latency_ms=1)


class FailNode(NodeProtocol):
    """Always fails with configurable error."""

    def __init__(self, error: str = "node failed"):
        self._error = error
        self.attempt_count = 0

    async def execute(self, ctx: NodeContext) -> NodeResult:
        self.attempt_count += 1
        return NodeResult(success=False, error=self._error)


class FlakyNode(NodeProtocol):
    """Fails N times then succeeds. For retry tests."""

    def __init__(self, fail_times: int = 2, output: dict | None = None):
        self.fail_times = fail_times
        self._output = output or {"status": "recovered"}
        self.attempt_count = 0

    async def execute(self, ctx: NodeContext) -> NodeResult:
        self.attempt_count += 1
        if self.attempt_count <= self.fail_times:
            return NodeResult(success=False, error=f"fail #{self.attempt_count}")
        return NodeResult(success=True, output=self._output, tokens_used=1, latency_ms=1)


class StatefulNode(NodeProtocol):
    """Returns different outputs on successive calls. For feedback loop tests."""

    def __init__(self, outputs: list[NodeResult]):
        self._outputs = outputs
        self.call_count = 0

    async def execute(self, ctx: NodeContext) -> NodeResult:
        idx = min(self.call_count, len(self._outputs) - 1)
        self.call_count += 1
        return self._outputs[idx]
@@ -0,0 +1,359 @@
#!/usr/bin/env python3
"""Runner for Level 2 dummy agent tests with interactive LLM provider selection.

This is NOT part of regular CI. It makes real LLM API calls.

Usage:
    cd core && uv run python tests/dummy_agents/run_all.py
    cd core && uv run python tests/dummy_agents/run_all.py --verbose
"""

from __future__ import annotations

import os
import sys
import time
import xml.etree.ElementTree as ET
from pathlib import Path
from tempfile import NamedTemporaryFile

TESTS_DIR = Path(__file__).parent

# ── provider registry ────────────────────────────────────────────────

# (env_var, display_name, default_model) — models match quickstart.sh defaults
API_KEY_PROVIDERS = [
    ("ANTHROPIC_API_KEY", "Anthropic (Claude)", "claude-sonnet-4-20250514"),
    ("OPENAI_API_KEY", "OpenAI", "gpt-5-mini"),
    ("GEMINI_API_KEY", "Google Gemini", "gemini/gemini-3-flash-preview"),
    ("ZAI_API_KEY", "ZAI (GLM)", "openai/glm-5"),
    ("GROQ_API_KEY", "Groq", "moonshotai/kimi-k2-instruct-0905"),
    ("MISTRAL_API_KEY", "Mistral", "mistral-large-latest"),
    ("CEREBRAS_API_KEY", "Cerebras", "cerebras/zai-glm-4.7"),
    ("TOGETHER_API_KEY", "Together AI", "together_ai/meta-llama/Llama-3.3-70B-Instruct-Turbo"),
    ("DEEPSEEK_API_KEY", "DeepSeek", "deepseek-chat"),
    ("MINIMAX_API_KEY", "MiniMax", "MiniMax-M2.5"),
]


def _detect_claude_code_token() -> str | None:
    """Check if Claude Code subscription credentials are available."""
    try:
        from framework.runner.runner import get_claude_code_token

        return get_claude_code_token()
    except Exception:
        return None


def _detect_codex_token() -> str | None:
    """Check if Codex subscription credentials are available."""
    try:
        from framework.runner.runner import get_codex_token

        return get_codex_token()
    except Exception:
        return None


def _detect_kimi_code_token() -> str | None:
    """Check if Kimi Code subscription credentials are available."""
    try:
        from framework.runner.runner import get_kimi_code_token

        return get_kimi_code_token()
    except Exception:
        return None


def detect_available() -> list[dict]:
    """Detect all available LLM providers with valid credentials.

    Returns list of dicts: {name, model, api_key, source}
    """
    available = []

    # Subscription-based providers
    token = _detect_claude_code_token()
    if token:
        available.append(
            {
                "name": "Claude Code (subscription)",
                "model": "claude-sonnet-4-20250514",
                "api_key": token,
                "source": "claude_code_sub",
                "extra_headers": {"authorization": f"Bearer {token}"},
            }
        )

    token = _detect_codex_token()
    if token:
        available.append(
            {
                "name": "Codex (subscription)",
                "model": "gpt-5-mini",
                "api_key": token,
                "source": "codex_sub",
            }
        )

    token = _detect_kimi_code_token()
    if token:
        available.append(
            {
                "name": "Kimi Code (subscription)",
                "model": "moonshotai/kimi-k2-instruct-0905",
                "api_key": token,
                "source": "kimi_sub",
            }
        )

    # API key providers (env vars)
    for env_var, name, default_model in API_KEY_PROVIDERS:
        key = os.environ.get(env_var)
        if key:
            entry = {
                "name": f"{name} (${env_var})",
                "model": default_model,
                "api_key": key,
                "source": env_var,
            }
            # ZAI requires an api_base (OpenAI-compatible endpoint)
            if env_var == "ZAI_API_KEY":
                entry["api_base"] = "https://api.z.ai/api/coding/paas/v4"
            available.append(entry)

    return available


def prompt_provider_selection() -> dict:
    """Interactive prompt to select an LLM provider. Returns the chosen provider dict."""
    available = detect_available()

    if not available:
        print("\n No LLM credentials detected.")
        print(" Set an API key environment variable, e.g.:")
        print(" export ANTHROPIC_API_KEY=sk-...")
        print(" export OPENAI_API_KEY=sk-...")
        print(" Or authenticate with Claude Code: claude")
        sys.exit(1)

    if len(available) == 1:
        choice = available[0]
        print(f"\n Using: {choice['name']} ({choice['model']})")
        return choice

    print("\n Available LLM providers:\n")
    for i, p in enumerate(available, 1):
        print(f" {i}) {p['name']} [{p['model']}]")

    print()
    while True:
        try:
            raw = input(f" Select provider [1-{len(available)}]: ").strip()
            idx = int(raw) - 1
            if 0 <= idx < len(available):
                choice = available[idx]
                print(f"\n Using: {choice['name']} ({choice['model']})\n")
                return choice
        except (ValueError, EOFError):
            pass
        print(f" Please enter a number between 1 and {len(available)}")


# ── test runner ──────────────────────────────────────────────────────


def parse_junit_xml(xml_path: str) -> dict[str, dict]:
    """Parse JUnit XML and group results by agent (test file)."""
    tree = ET.parse(xml_path)
    root = tree.getroot()
    agents: dict[str, dict] = {}

    for testsuite in root.iter("testsuite"):
        for testcase in testsuite.iter("testcase"):
            classname = testcase.get("classname", "")
            parts = classname.split(".")
            agent_name = "unknown"
            for part in parts:
                if part.startswith("test_"):
                    agent_name = part[5:]
                    break

            if agent_name not in agents:
                agents[agent_name] = {
                    "total": 0,
                    "passed": 0,
                    "failed": 0,
                    "time": 0.0,
                    "tests": [],
                }

            agents[agent_name]["total"] += 1
            test_time = float(testcase.get("time", "0"))
            agents[agent_name]["time"] += test_time

            failures = testcase.findall("failure")
            errors = testcase.findall("error")
            test_name = testcase.get("name", "")

            if failures or errors:
                agents[agent_name]["failed"] += 1
                # Extract failure reason from the first failure/error element
                fail_el = (failures or errors)[0]
                reason = fail_el.get("message", "") or ""
                # Also grab the text body for more detail
                body = fail_el.text or ""
                # Build a concise reason: prefer message, fall back to first line of body
                if not reason and body:
                    reason = body.strip().split("\n")[0]
                agents[agent_name]["tests"].append((test_name, "FAIL", reason))
            else:
                agents[agent_name]["passed"] += 1
                agents[agent_name]["tests"].append((test_name, "PASS", ""))

    return agents


def print_table(agents: dict[str, dict], total_time: float, verbose: bool = False) -> None:
    """Print summary table."""
    col_agent = 20
    col_tests = 6
    col_passed = 8
    col_time = 12

    def sep(char: str = "═") -> str:
        return (
            f"╠{char * (col_agent + 2)}╬{char * (col_tests + 2)}"
            f"╬{char * (col_passed + 2)}╬{char * (col_time + 2)}╣"
        )

    header = (
        f"║ {'Agent':<{col_agent}} ║ {'Tests':>{col_tests}} "
        f"║ {'Passed':>{col_passed}} ║ {'Time (s)':>{col_time}} ║"
    )
    top = (
        f"╔{'═' * (col_agent + 2)}╦{'═' * (col_tests + 2)}"
        f"╦{'═' * (col_passed + 2)}╦{'═' * (col_time + 2)}╗"
    )
    bottom = (
        f"╚{'═' * (col_agent + 2)}╩{'═' * (col_tests + 2)}"
        f"╩{'═' * (col_passed + 2)}╩{'═' * (col_time + 2)}╝"
    )

    print()
    print(top)
    print(header)
    print(sep())

    total_tests = 0
    total_passed = 0

    for agent_name in sorted(agents.keys()):
        data = agents[agent_name]
        total_tests += data["total"]
        total_passed += data["passed"]
        marker = " " if data["failed"] == 0 else "!"
        row = (
            f"║{marker}{agent_name:<{col_agent + 1}} ║ {data['total']:>{col_tests}} "
            f"║ {data['passed']:>{col_passed}} ║ {data['time']:>{col_time}.2f} ║"
        )
        print(row)

        if verbose:
            for test_name, status, reason in data["tests"]:
                icon = " ✓" if status == "PASS" else " ✗"
                print(
                    f"║ {icon} {test_name:<{col_agent - 2}}"
                    f"║{'':>{col_tests + 2}}║{'':>{col_passed + 2}}║{'':>{col_time + 2}}║"
                )
                if status == "FAIL" and reason:
                    # Print failure reason wrapped to fit, indented under the test
                    reason_short = reason[:120] + ("..." if len(reason) > 120 else "")
                    print(f"║ {reason_short}")
            print("║")

    print(sep())
    all_pass = total_passed == total_tests
    status = "ALL PASS" if all_pass else f"{total_tests - total_passed} FAILED"
    totals = (
        f"║ {status:<{col_agent}} ║ {total_tests:>{col_tests}} "
        f"║ {total_passed:>{col_passed}} ║ {total_time:>{col_time}.2f} ║"
    )
    print(totals)
    print(bottom)

    # Always print failure details if any tests failed
    if not all_pass:
        print("\n Failure Details:")
        print(" " + "─" * 70)
        for agent_name in sorted(agents.keys()):
            for test_name, status, reason in agents[agent_name]["tests"]:
                if status == "FAIL":
                    print(f"\n ✗ {agent_name}::{test_name}")
                    if reason:
                        # Wrap long reasons
                        for i in range(0, len(reason), 100):
                            print(f" {reason[i : i + 100]}")
        print()


def main() -> int:
    verbose = "--verbose" in sys.argv or "-v" in sys.argv

    print("\n ╔═══════════════════════════════════════╗")
    print(" ║ Level 2: Dummy Agent Tests (E2E) ║")
    print(" ╚═══════════════════════════════════════╝")

    # Step 1: detect credentials and let user pick
    provider = prompt_provider_selection()

    # Step 2: inject selection into conftest module state
    from tests.dummy_agents.conftest import set_llm_selection

    set_llm_selection(
        model=provider["model"],
        api_key=provider["api_key"],
        extra_headers=provider.get("extra_headers"),
        api_base=provider.get("api_base"),
    )

    # Step 3: run pytest
    with NamedTemporaryFile(suffix=".xml", delete=False) as tmp:
        xml_path = tmp.name

    start = time.time()
    import pytest as _pytest

    pytest_args = [
        str(TESTS_DIR),
        f"--junitxml={xml_path}",
        "--tb=short",
        "--override-ini=asyncio_mode=auto",
        "--log-cli-level=INFO",  # Stream logs live to terminal
        "-v",
    ]
    if not verbose:
        # In non-verbose mode, only show warnings and above
        pytest_args[pytest_args.index("--log-cli-level=INFO")] = "--log-cli-level=WARNING"
        pytest_args.remove("-v")
        pytest_args.append("-q")

    exit_code = _pytest.main(pytest_args)
    elapsed = time.time() - start

    # Step 4: print summary
    try:
        agents = parse_junit_xml(xml_path)
        print_table(agents, elapsed, verbose=verbose)
    except Exception as e:
        print(f"\n Could not parse results: {e}")

    # Clean up
    Path(xml_path).unlink(missing_ok=True)

    return exit_code


if __name__ == "__main__":
    sys.exit(main())
@@ -0,0 +1,132 @@
"""Branch agent: LLM classifies input, conditional edges route to different paths.

Tests conditional edge evaluation with real LLM output.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeSpec

from .conftest import make_executor

SET_OUTPUT_INSTRUCTION = (
    "You MUST call the set_output tool to provide your answer. "
    "Do not just write text — call set_output with the correct key and value."
)


def _build_branch_graph() -> GraphSpec:
    return GraphSpec(
        id="branch-graph",
        goal_id="dummy",
        entry_node="classify",
        entry_points={"start": "classify"},
        terminal_nodes=["positive", "negative"],
        conversation_mode="continuous",
        nodes=[
            NodeSpec(
                id="classify",
                name="Classify",
                description="Classifies input sentiment",
                node_type="event_loop",
                input_keys=["text"],
                output_keys=["score", "label"],
                system_prompt=(
                    "You are a sentiment classifier. Read the 'text' input and determine "
                    "if the sentiment is positive or negative.\n\n"
                    "You MUST call set_output TWICE:\n"
                    "1. set_output(key='score', value='<number>') — a score between 0.0 "
                    "and 1.0 where >0.5 means positive\n"
                    "2. set_output(key='label', value='positive') or "
                    "set_output(key='label', value='negative')\n\n" + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="positive",
                name="Positive Handler",
                description="Handles positive sentiment",
                node_type="event_loop",
                output_keys=["result"],
                system_prompt=(
                    "The input was classified as positive. Call set_output with "
                    "key='result' and a brief one-sentence acknowledgment. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="negative",
                name="Negative Handler",
                description="Handles negative sentiment",
                node_type="event_loop",
                output_keys=["result"],
                system_prompt=(
                    "The input was classified as negative. Call set_output with "
                    "key='result' and a brief one-sentence acknowledgment. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
        ],
        edges=[
            EdgeSpec(
                id="classify-to-positive",
                source="classify",
                target="positive",
                condition=EdgeCondition.CONDITIONAL,
                condition_expr="output.get('label') == 'positive'",
                priority=1,
            ),
            EdgeSpec(
                id="classify-to-negative",
                source="classify",
                target="negative",
                condition=EdgeCondition.CONDITIONAL,
                condition_expr="output.get('label') == 'negative'",
                priority=0,
            ),
        ],
        memory_keys=["text", "score", "label", "result"],
    )


@pytest.mark.asyncio
async def test_branch_positive_path(runtime, goal, llm_provider):
    graph = _build_branch_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(
        graph, goal, {"text": "I love this product, it's amazing!"}, validate_graph=False
    )

    assert result.success
    assert result.path == ["classify", "positive"]


@pytest.mark.asyncio
async def test_branch_negative_path(runtime, goal, llm_provider):
    graph = _build_branch_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(
        graph, goal, {"text": "This is terrible and broken, I hate it."}, validate_graph=False
    )

    assert result.success
    assert result.path == ["classify", "negative"]


@pytest.mark.asyncio
async def test_branch_two_nodes_traversed(runtime, goal, llm_provider):
    """Regardless of which branch, exactly 2 nodes should execute."""
    graph = _build_branch_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(
        graph, goal, {"text": "The weather is nice today."}, validate_graph=False
    )

    assert result.success
    assert result.steps_executed == 2
    assert len(result.path) == 2
@@ -0,0 +1,66 @@
"""Echo agent: single-node worker that echoes input to output.

Tests basic node lifecycle with a real LLM call — simplest possible worker.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import GraphSpec
from framework.graph.node import NodeSpec

from .conftest import make_executor


def _build_echo_graph() -> GraphSpec:
    return GraphSpec(
        id="echo-graph",
        goal_id="dummy",
        entry_node="echo",
        entry_points={"start": "echo"},
        terminal_nodes=["echo"],
        nodes=[
            NodeSpec(
                id="echo",
                name="Echo",
                description="Echoes input to output",
                node_type="event_loop",
                input_keys=["input"],
                output_keys=["output"],
                system_prompt=(
                    "You are an echo node. Your ONLY job is to read the 'input' value "
                    "provided in the user message, then immediately call the set_output "
                    "tool with key='output' and value set to the EXACT same string. "
                    "Do not add any text or explanation. Just call set_output."
                ),
            ),
        ],
        edges=[],
        memory_keys=["input", "output"],
        conversation_mode="continuous",
    )


@pytest.mark.asyncio
async def test_echo_basic(runtime, goal, llm_provider):
    graph = _build_echo_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"input": "hello"}, validate_graph=False)

    assert result.success
    assert result.output.get("output") is not None
    assert result.path == ["echo"]
    assert result.steps_executed == 1


@pytest.mark.asyncio
async def test_echo_empty_input(runtime, goal, llm_provider):
    graph = _build_echo_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"input": ""}, validate_graph=False)

    assert result.success
    assert "output" in result.output
@@ -0,0 +1,144 @@
"""Feedback loop agent: draft/review cycle with max_node_visits limit.

Uses StatefulNode for review to control loop iterations deterministically.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeResult, NodeSpec

from .conftest import make_executor
from .nodes import StatefulNode, SuccessNode


def _build_feedback_graph(max_visits: int = 3) -> GraphSpec:
    return GraphSpec(
        id="feedback-graph",
        goal_id="dummy",
        entry_node="draft",
        terminal_nodes=["done"],
        nodes=[
            NodeSpec(
                id="draft",
                name="Draft",
                description="Produces a draft",
                node_type="event_loop",
                output_keys=["draft_output"],
                max_node_visits=max_visits,
            ),
            NodeSpec(
                id="review",
                name="Review",
                description="Reviews the draft",
                node_type="event_loop",
                input_keys=["draft_output"],
                output_keys=["approved"],
            ),
            NodeSpec(
                id="done",
                name="Done",
                description="Final node",
                node_type="event_loop",
                output_keys=["final"],
            ),
        ],
        edges=[
            EdgeSpec(
                id="draft-to-review",
                source="draft",
                target="review",
                condition=EdgeCondition.ON_SUCCESS,
            ),
            EdgeSpec(
                id="review-to-draft",
                source="review",
                target="draft",
                condition=EdgeCondition.CONDITIONAL,
                condition_expr="output.get('approved') == False",
                priority=1,
            ),
            EdgeSpec(
                id="review-to-done",
                source="review",
                target="done",
                condition=EdgeCondition.CONDITIONAL,
                condition_expr="output.get('approved') == True",
                priority=0,
            ),
        ],
        memory_keys=["draft_output", "approved", "final"],
    )


@pytest.mark.asyncio
async def test_feedback_loop_terminates(runtime, goal, llm_provider):
    """Loop should terminate: draft visits are capped, review eventually approves."""
    graph = _build_feedback_graph(max_visits=3)
    executor = make_executor(runtime, llm_provider)
    executor.register_node("draft", SuccessNode(output={"draft_output": "v1"}))
    executor.register_node(
        "review",
        StatefulNode(
            [
                NodeResult(success=True, output={"approved": False}),
                NodeResult(success=True, output={"approved": False}),
                NodeResult(success=True, output={"approved": True}),
            ]
        ),
    )
    executor.register_node("done", SuccessNode(output={"final": "done"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert result.success
    assert result.node_visit_counts.get("draft", 0) == 3
    assert "done" in result.path


@pytest.mark.asyncio
async def test_feedback_loop_visit_counts(runtime, goal, llm_provider):
    graph = _build_feedback_graph(max_visits=3)
    executor = make_executor(runtime, llm_provider)
    executor.register_node("draft", SuccessNode(output={"draft_output": "v1"}))
    executor.register_node(
        "review",
        StatefulNode(
            [
                NodeResult(success=True, output={"approved": False}),
                NodeResult(success=True, output={"approved": True}),
            ]
        ),
    )
    executor.register_node("done", SuccessNode(output={"final": "done"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert result.success
    assert result.node_visit_counts.get("draft", 0) == 2
    assert result.node_visit_counts.get("review", 0) == 2


@pytest.mark.asyncio
async def test_feedback_loop_early_exit(runtime, goal, llm_provider):
    """Review approves on first iteration — loop exits before max."""
    graph = _build_feedback_graph(max_visits=5)
    executor = make_executor(runtime, llm_provider)
    executor.register_node("draft", SuccessNode(output={"draft_output": "perfect"}))
    executor.register_node(
        "review",
        StatefulNode(
            [
                NodeResult(success=True, output={"approved": True}),
            ]
        ),
    )
    executor.register_node("done", SuccessNode(output={"final": "done"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert result.success
    assert result.node_visit_counts.get("draft", 0) == 1
    assert "done" in result.path
@@ -0,0 +1,179 @@
"""GCU subagent test: parent event_loop delegates to a GCU subagent.

Tests the subagent delegation pattern where a parent node uses
delegate_to_sub_agent to invoke a GCU (browser) node for a task.
The GCU node has access to browser tools via the GCU MCP server.

Note: This test requires the GCU MCP server (gcu.server) to be available.
If not installed, the test is skipped.
"""

from __future__ import annotations

from pathlib import Path

import pytest

from framework.graph.edge import GraphSpec
from framework.graph.goal import Goal
from framework.graph.node import NodeSpec

from .conftest import make_executor


def _has_gcu_server() -> bool:
    """Check if the GCU MCP server module is available."""
    try:
        import gcu.server  # noqa: F401

        return True
    except ImportError:
        return False


def _build_gcu_subagent_graph() -> GraphSpec:
    """Parent event_loop node with a GCU subagent for browser tasks.

    Structure:
    - parent (event_loop): orchestrator that decides when to delegate
    - browser_worker (gcu): subagent with browser tools
    - parent delegates to browser_worker via delegate_to_sub_agent tool
    - browser_worker is NOT connected by edges (validation rule)
    """
    return GraphSpec(
        id="gcu-subagent-graph",
        goal_id="gcu-test",
        entry_node="parent",
        entry_points={"start": "parent"},
        terminal_nodes=["parent"],
        nodes=[
            NodeSpec(
                id="parent",
                name="Orchestrator",
                description="Orchestrates browser tasks via subagent delegation",
                node_type="event_loop",
                input_keys=["task"],
                output_keys=["result"],
                sub_agents=["browser_worker"],
                system_prompt=(
                    "You are an orchestrator. You have a browser subagent called "
                    "'browser_worker' available via delegate_to_sub_agent.\n\n"
                    "Read the 'task' input and delegate the browser work to "
                    "the browser_worker subagent. When the subagent completes, "
                    "summarize the result and call set_output with key='result'."
                ),
            ),
            NodeSpec(
                id="browser_worker",
                name="Browser Worker",
                description="GCU browser subagent for web tasks",
                node_type="gcu",
                output_keys=["browser_result"],
                system_prompt=(
                    "You are a browser worker subagent. Complete the delegated "
                    "browser task using available browser tools. "
                    "When done, call set_output with key='browser_result' and "
                    "the information you found."
                ),
            ),
        ],
        edges=[],  # GCU subagents must NOT be connected by edges
        memory_keys=["task", "result", "browser_result"],
        conversation_mode="continuous",
    )


def _gcu_goal() -> Goal:
    return Goal(
        id="gcu-test",
        name="GCU Subagent Test",
        description="Test browser subagent delegation",
    )


@pytest.mark.asyncio
@pytest.mark.skipif(not _has_gcu_server(), reason="GCU server not installed")
async def test_gcu_subagent_delegation(runtime, llm_provider, tool_registry, tmp_path):
    """Parent delegates a simple browser task to GCU subagent."""
    # Register GCU MCP server tools
    from framework.graph.gcu import GCU_MCP_SERVER_CONFIG

    repo_root = Path(__file__).resolve().parents[3]
    gcu_config = dict(GCU_MCP_SERVER_CONFIG)
    gcu_config["cwd"] = str(repo_root / "tools")
    tool_registry.register_mcp_server(gcu_config)

    # Expand GCU node tools (mirrors what runner._setup does)
    graph = _build_gcu_subagent_graph()
    gcu_tool_names = tool_registry.get_server_tool_names("gcu-tools")
    if gcu_tool_names:
        for node in graph.nodes:
            if node.node_type == "gcu":
                existing = set(node.tools)
                for tool_name in sorted(gcu_tool_names):
                    if tool_name not in existing:
                        node.tools.append(tool_name)

    executor = make_executor(
        runtime,
        llm_provider,
        tool_registry=tool_registry,
        storage_path=tmp_path / "storage",
    )

    result = await executor.execute(
        graph,
        _gcu_goal(),
        {"task": "Use the browser to navigate to https://example.com and report the page title."},
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None


@pytest.mark.asyncio
@pytest.mark.skipif(not _has_gcu_server(), reason="GCU server not installed")
async def test_gcu_subagent_returns_data(runtime, llm_provider, tool_registry, tmp_path):
    """Verify the parent receives structured data from the GCU subagent."""
    from framework.graph.gcu import GCU_MCP_SERVER_CONFIG

    repo_root = Path(__file__).resolve().parents[3]
    gcu_config = dict(GCU_MCP_SERVER_CONFIG)
    gcu_config["cwd"] = str(repo_root / "tools")
    # Only register if not already registered
    if not tool_registry.get_server_tool_names("gcu-tools"):
        tool_registry.register_mcp_server(gcu_config)

    graph = _build_gcu_subagent_graph()
    gcu_tool_names = tool_registry.get_server_tool_names("gcu-tools")
    if gcu_tool_names:
        for node in graph.nodes:
            if node.node_type == "gcu":
                existing = set(node.tools)
                for tool_name in sorted(gcu_tool_names):
                    if tool_name not in existing:
                        node.tools.append(tool_name)

    executor = make_executor(
        runtime,
        llm_provider,
        tool_registry=tool_registry,
        storage_path=tmp_path / "storage",
    )

    result = await executor.execute(
        graph,
        _gcu_goal(),
        {
            "task": "Use the browser to visit https://example.com and report "
            "what domain the page is on."
        },
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None
    # The result should contain something from the browser
    result_text = str(result.output["result"]).lower()
    assert "example" in result_text
@@ -0,0 +1,166 @@
"""Parallel merge agent: fan-out to two branches, fan-in to merge node.

Tests parallel execution with real LLM at each branch.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.executor import ParallelExecutionConfig
from framework.graph.node import NodeSpec

from .conftest import make_executor
from .nodes import FailNode

SET_OUTPUT_INSTRUCTION = (
    "You MUST call the set_output tool to provide your answer. "
    "Do not just write text — call set_output with the correct key and value."
)


def _build_parallel_graph() -> GraphSpec:
    return GraphSpec(
        id="parallel-graph",
        goal_id="dummy",
        entry_node="split",
        entry_points={"start": "split"},
        terminal_nodes=["merge"],
        conversation_mode="continuous",
        nodes=[
            NodeSpec(
                id="split",
                name="Split",
                description="Entry point that triggers parallel branches",
                node_type="event_loop",
                input_keys=["topic"],
                output_keys=["split_done"],
                system_prompt=(
                    "You are a dispatcher. Read the 'topic' input, then immediately "
                    "call set_output with key='split_done' and value='true'. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="analyze_a",
                name="Analyze Pros",
                description="Analyzes positive aspects",
                node_type="event_loop",
                output_keys=["result_a"],
                system_prompt=(
                    "Analyze the positive aspects of the topic. Then call set_output "
                    "with key='result_a' and a brief one-sentence analysis. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="analyze_b",
                name="Analyze Cons",
                description="Analyzes negative aspects",
                node_type="event_loop",
                output_keys=["result_b"],
                system_prompt=(
                    "Analyze the negative aspects of the topic. Then call set_output "
                    "with key='result_b' and a brief one-sentence analysis. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="merge",
                name="Merge",
                description="Combines both analyses",
                node_type="event_loop",
                input_keys=["result_a", "result_b"],
                output_keys=["merged"],
                system_prompt=(
                    "Read 'result_a' and 'result_b' from the input, combine them into "
                    "a one-sentence summary, then call set_output with key='merged' "
                    "and the summary. " + SET_OUTPUT_INSTRUCTION
                ),
            ),
        ],
        edges=[
            EdgeSpec(
                id="split-to-a",
                source="split",
                target="analyze_a",
                condition=EdgeCondition.ON_SUCCESS,
            ),
            EdgeSpec(
                id="split-to-b",
                source="split",
                target="analyze_b",
                condition=EdgeCondition.ON_SUCCESS,
            ),
            EdgeSpec(
                id="a-to-merge",
                source="analyze_a",
                target="merge",
                condition=EdgeCondition.ON_SUCCESS,
            ),
            EdgeSpec(
                id="b-to-merge",
                source="analyze_b",
                target="merge",
                condition=EdgeCondition.ON_SUCCESS,
            ),
        ],
        memory_keys=["topic", "split_done", "result_a", "result_b", "merged"],
    )


@pytest.mark.asyncio
async def test_parallel_both_succeed(runtime, goal, llm_provider):
    graph = _build_parallel_graph()
    config = ParallelExecutionConfig(on_branch_failure="fail_all")
    executor = make_executor(runtime, llm_provider, parallel_config=config)

    result = await executor.execute(graph, goal, {"topic": "remote work"}, validate_graph=False)

    assert result.success
    assert "split" in result.path
    assert "merge" in result.path
    assert result.output.get("merged") is not None


@pytest.mark.asyncio
async def test_parallel_branch_failure_fail_all(runtime, goal, llm_provider):
    """One branch fails with fail_all -> execution fails."""
    graph = _build_parallel_graph()
    config = ParallelExecutionConfig(on_branch_failure="fail_all")
    executor = make_executor(runtime, llm_provider, parallel_config=config)
    executor.register_node("analyze_b", FailNode(error="branch B failed"))

    result = await executor.execute(graph, goal, {"topic": "remote work"}, validate_graph=False)

    assert not result.success


@pytest.mark.asyncio
async def test_parallel_branch_failure_continue_others(runtime, goal, llm_provider):
    """One branch fails with continue_others -> surviving branch completes."""
    graph = _build_parallel_graph()
    config = ParallelExecutionConfig(on_branch_failure="continue_others")
    executor = make_executor(runtime, llm_provider, parallel_config=config)
    executor.register_node("analyze_b", FailNode(error="branch B failed"))

    result = await executor.execute(graph, goal, {"topic": "remote work"}, validate_graph=False)

    # With continue_others, execution can proceed past failed branches
    assert result.output.get("merged") is not None or result.output.get("result_a") is not None


@pytest.mark.asyncio
async def test_parallel_disjoint_output_keys(runtime, goal, llm_provider):
    """Verify both branches write to separate memory keys without conflicts."""
    graph = _build_parallel_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(
        graph, goal, {"topic": "artificial intelligence"}, validate_graph=False
    )

    assert result.success
    assert result.output.get("result_a") is not None
    assert result.output.get("result_b") is not None
@@ -0,0 +1,134 @@
|
||||
"""Pipeline agent: linear 3-node chain with real LLM at each step.
|
||||
|
||||
Tests input_mapping, conversation modes, and multi-node traversal.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import pytest
|
||||
|
||||
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
|
||||
from framework.graph.node import NodeSpec
|
||||
|
||||
from .conftest import make_executor
|
||||
|
||||
SET_OUTPUT_INSTRUCTION = (
|
||||
"You MUST call the set_output tool to provide your answer. "
|
||||
"Do not just write text — call set_output with the correct key and value."
|
||||
)
|
||||
|
||||
|
||||
def _build_pipeline_graph(conversation_mode: str = "continuous") -> GraphSpec:
|
||||
return GraphSpec(
|
||||
id="pipeline-graph",
|
||||
goal_id="dummy",
|
||||
        entry_node="intake",
        entry_points={"start": "intake"},
        terminal_nodes=["output"],
        conversation_mode=conversation_mode,
        nodes=[
            NodeSpec(
                id="intake",
                name="Intake",
                description="Captures raw input and passes it along",
                node_type="event_loop",
                input_keys=["raw"],
                output_keys=["captured"],
                system_prompt=(
                    "You are the intake node. Read the 'raw' input value from the user "
                    "message, then call set_output with key='captured' and the same value. "
                    + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="transform",
                name="Transform",
                description="Uppercases the input value",
                node_type="event_loop",
                input_keys=["value"],
                output_keys=["transformed"],
                system_prompt=(
                    "You are a transform node. Read the 'value' input from the user "
                    "message, convert it to UPPERCASE, then call set_output with "
                    "key='transformed' and the uppercased value. " + SET_OUTPUT_INSTRUCTION
                ),
            ),
            NodeSpec(
                id="output",
                name="Output",
                description="Formats final result",
                node_type="event_loop",
                input_keys=["value"],
                output_keys=["result"],
                system_prompt=(
                    "You are the output node. Read the 'value' input from the user "
                    "message, prefix it with 'Result: ', then call set_output with "
                    "key='result' and the prefixed value. " + SET_OUTPUT_INSTRUCTION
                ),
            ),
        ],
        edges=[
            EdgeSpec(
                id="intake-to-transform",
                source="intake",
                target="transform",
                condition=EdgeCondition.ON_SUCCESS,
                input_mapping={"value": "captured"},
            ),
            EdgeSpec(
                id="transform-to-output",
                source="transform",
                target="output",
                condition=EdgeCondition.ON_SUCCESS,
                input_mapping={"value": "transformed"},
            ),
        ],
        memory_keys=["raw", "captured", "value", "transformed", "result"],
    )


@pytest.mark.asyncio
async def test_pipeline_linear_traversal(runtime, goal, llm_provider):
    graph = _build_pipeline_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"raw": "hello"}, validate_graph=False)

    assert result.success
    assert result.path == ["intake", "transform", "output"]
    assert result.steps_executed == 3


@pytest.mark.asyncio
async def test_pipeline_input_mapping(runtime, goal, llm_provider):
    """Verify input_mapping wires source output keys to target input keys."""
    graph = _build_pipeline_graph()
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"raw": "test value"}, validate_graph=False)

    assert result.success
    assert result.steps_executed == 3
    assert result.output.get("result") is not None


@pytest.mark.asyncio
async def test_pipeline_continuous_conversation(runtime, goal, llm_provider):
    graph = _build_pipeline_graph(conversation_mode="continuous")
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"raw": "data"}, validate_graph=False)

    assert result.success
    assert len(result.path) == 3


@pytest.mark.asyncio
async def test_pipeline_isolated_conversation(runtime, goal, llm_provider):
    graph = _build_pipeline_graph(conversation_mode="isolated")
    executor = make_executor(runtime, llm_provider)

    result = await executor.execute(graph, goal, {"raw": "data"}, validate_graph=False)

    assert result.success
    assert len(result.path) == 3
@@ -0,0 +1,131 @@
"""Retry agent: flaky node with retry limit and failure edges.

Uses a deterministic FlakyNode (not an LLM) since the tests need controlled
failure patterns.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.node import NodeSpec

from .conftest import make_executor
from .nodes import FlakyNode, SuccessNode


def _build_retry_graph(max_retries: int = 3, with_failure_edge: bool = False) -> GraphSpec:
    nodes = [
        NodeSpec(
            id="flaky",
            name="Flaky",
            description="Fails then succeeds",
            node_type="event_loop",
            output_keys=["status"],
            max_retries=max_retries,
        ),
        NodeSpec(
            id="done",
            name="Done",
            description="Terminal success node",
            node_type="event_loop",
            output_keys=["final"],
        ),
    ]
    edges = [
        EdgeSpec(
            id="flaky-to-done",
            source="flaky",
            target="done",
            condition=EdgeCondition.ON_SUCCESS,
        ),
    ]
    terminal_nodes = ["done"]

    if with_failure_edge:
        nodes.append(
            NodeSpec(
                id="error_handler",
                name="Error Handler",
                description="Handles exhausted retries",
                node_type="event_loop",
                output_keys=["error_handled"],
            )
        )
        edges.append(
            EdgeSpec(
                id="flaky-to-error",
                source="flaky",
                target="error_handler",
                condition=EdgeCondition.ON_FAILURE,
            )
        )
        terminal_nodes.append("error_handler")

    return GraphSpec(
        id="retry-graph",
        goal_id="dummy",
        entry_node="flaky",
        terminal_nodes=terminal_nodes,
        nodes=nodes,
        edges=edges,
        memory_keys=["status", "final", "error_handled"],
    )


@pytest.mark.asyncio
async def test_retry_succeeds_within_limit(runtime, goal, llm_provider):
    graph = _build_retry_graph(max_retries=3)
    flaky = FlakyNode(fail_times=2, output={"status": "recovered"})
    executor = make_executor(runtime, llm_provider)
    executor.register_node("flaky", flaky)
    executor.register_node("done", SuccessNode(output={"final": "complete"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert result.success
    assert result.total_retries >= 2
    assert flaky.attempt_count == 3  # 2 failures + 1 success


@pytest.mark.asyncio
async def test_retry_exhaustion(runtime, goal, llm_provider):
    graph = _build_retry_graph(max_retries=3)
    flaky = FlakyNode(fail_times=10, output={"status": "recovered"})
    executor = make_executor(runtime, llm_provider)
    executor.register_node("flaky", flaky)
    executor.register_node("done", SuccessNode(output={"final": "complete"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert not result.success


@pytest.mark.asyncio
async def test_retry_with_on_failure_edge(runtime, goal, llm_provider):
    graph = _build_retry_graph(max_retries=2, with_failure_edge=True)
    flaky = FlakyNode(fail_times=10)
    error_handler = SuccessNode(output={"error_handled": True})
    executor = make_executor(runtime, llm_provider)
    executor.register_node("flaky", flaky)
    executor.register_node("done", SuccessNode(output={"final": "complete"}))
    executor.register_node("error_handler", error_handler)

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert "error_handler" in result.path
    assert error_handler.executed


@pytest.mark.asyncio
async def test_retry_tracking(runtime, goal, llm_provider):
    graph = _build_retry_graph(max_retries=3)
    flaky = FlakyNode(fail_times=2)
    executor = make_executor(runtime, llm_provider)
    executor.register_node("flaky", flaky)
    executor.register_node("done", SuccessNode(output={"final": "complete"}))

    result = await executor.execute(graph, goal, {}, validate_graph=False)

    assert result.success
    assert result.retry_details.get("flaky", 0) >= 2
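FlakyNode and SuccessNode come from the test package's .nodes module, which this diff does not show. A minimal sketch consistent with how the tests drive FlakyNode (fail_times, attempt_count, deterministic recovery):

    # Hypothetical sketch of the FlakyNode test double. It fails for the
    # first `fail_times` attempts, then succeeds (no LLM involved).
    from framework.graph.node import NodeResult

    class FlakyNode:
        def __init__(self, fail_times: int, output: dict | None = None):
            self.fail_times = fail_times
            self.output = output or {}
            self.attempt_count = 0

        async def execute(self, ctx) -> NodeResult:
            self.attempt_count += 1
            if self.attempt_count <= self.fail_times:
                # The `error` field is an assumption; the NodeResult calls
                # visible in this diff only show success/output/tokens_used/
                # latency_ms.
                return NodeResult(success=False, output={},
                                  error="simulated failure",
                                  tokens_used=0, latency_ms=0)
            return NodeResult(success=True, output=self.output,
                              tokens_used=0, latency_ms=0)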
@@ -0,0 +1,139 @@
"""Worker agent: single-node event loop with real MCP tools.

Tests the core worker pattern — a single EventLoopNode that uses real
hive-tools (example_tool, get_current_time, save_data/load_data) to
accomplish tasks, matching how real agents are structured.
"""

from __future__ import annotations

import pytest

from framework.graph.edge import GraphSpec
from framework.graph.goal import Goal
from framework.graph.node import NodeSpec

from .conftest import make_executor


def _build_worker_graph(tools: list[str]) -> GraphSpec:
    """Single-node worker agent with MCP tools — matches real agent structure."""
    return GraphSpec(
        id="worker-graph",
        goal_id="worker-goal",
        entry_node="worker",
        entry_points={"start": "worker"},
        terminal_nodes=["worker"],
        nodes=[
            NodeSpec(
                id="worker",
                name="Worker",
                description="General-purpose worker with tools",
                node_type="event_loop",
                input_keys=["task"],
                output_keys=["result"],
                tools=tools,
                system_prompt=(
                    "You are a worker agent with access to tools. "
                    "Read the 'task' input and complete it using the available tools. "
                    "When done, call set_output with key='result' and the final answer."
                ),
            ),
        ],
        edges=[],
        memory_keys=["task", "result"],
        conversation_mode="continuous",
    )


def _worker_goal() -> Goal:
    return Goal(
        id="worker-goal",
        name="Worker Agent",
        description="Complete a task using available tools",
    )


@pytest.mark.asyncio
async def test_worker_example_tool(runtime, llm_provider, tool_registry):
    """Worker uses example_tool to process text."""
    graph = _build_worker_graph(tools=["example_tool"])
    executor = make_executor(runtime, llm_provider, tool_registry=tool_registry)

    result = await executor.execute(
        graph,
        _worker_goal(),
        {"task": "Use the example_tool to process the message 'hello world' with uppercase=true"},
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None


@pytest.mark.asyncio
async def test_worker_time_tool(runtime, llm_provider, tool_registry):
    """Worker uses get_current_time to check the current time."""
    graph = _build_worker_graph(tools=["get_current_time"])
    executor = make_executor(runtime, llm_provider, tool_registry=tool_registry)

    result = await executor.execute(
        graph,
        _worker_goal(),
        {
            "task": "Use get_current_time to find the current time in UTC, "
            "and report the day of the week as the result"
        },
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None


@pytest.mark.asyncio
async def test_worker_data_tools(runtime, llm_provider, tool_registry, tmp_path):
    """Worker uses save_data and load_data to store and retrieve data."""
    graph = _build_worker_graph(tools=["save_data", "load_data"])
    executor = make_executor(
        runtime,
        llm_provider,
        tool_registry=tool_registry,
        storage_path=tmp_path / "storage",
    )

    result = await executor.execute(
        graph,
        _worker_goal(),
        {
            "task": f"Use save_data to save the text 'test payload' to a file called "
            f"'test.txt' in the data_dir '{tmp_path}/data'. "
            "Then use load_data to read it back from the same data_dir. "
            "Report what you loaded as the result."
        },
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None


@pytest.mark.asyncio
async def test_worker_multi_tool(runtime, llm_provider, tool_registry):
    """Worker uses multiple tools in sequence."""
    graph = _build_worker_graph(tools=["example_tool", "get_current_time"])
    executor = make_executor(runtime, llm_provider, tool_registry=tool_registry)

    result = await executor.execute(
        graph,
        _worker_goal(),
        {
            "task": "First use get_current_time to find the current day of the week. "
            "Then use example_tool to process that day name with uppercase=true. "
            "Report the uppercased day name as the result."
        },
        validate_graph=False,
    )

    assert result.success
    assert result.output.get("result") is not None
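The tasks above imply the example_tool contract: a message plus an uppercase flag. The real hive-tools implementation is not shown in this diff; a minimal stand-in consistent with that usage:

    # Hypothetical stand-in for the example_tool contract implied by the
    # test tasks (message + uppercase flag).
    def example_tool(message: str, uppercase: bool = False) -> str:
        """Process a text message, optionally uppercasing it."""
        return message.upper() if uppercase else message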
@@ -0,0 +1,188 @@
"""Tests for default skills — parsing, token budget, and configuration."""

from pathlib import Path

import pytest

from framework.skills.config import DefaultSkillConfig, SkillsConfig
from framework.skills.defaults import (
    SHARED_MEMORY_KEYS,
    SKILL_REGISTRY,
    DefaultSkillManager,
)
from framework.skills.parser import parse_skill_md

_DEFAULT_SKILLS_DIR = (
    Path(__file__).resolve().parent.parent / "framework" / "skills" / "_default_skills"
)


class TestDefaultSkillFiles:
    """Verify all 6 built-in SKILL.md files parse correctly."""

    def test_all_six_skills_exist(self):
        assert len(SKILL_REGISTRY) == 6

    @pytest.mark.parametrize("skill_name,dir_name", list(SKILL_REGISTRY.items()))
    def test_skill_parses(self, skill_name, dir_name):
        path = _DEFAULT_SKILLS_DIR / dir_name / "SKILL.md"
        assert path.is_file(), f"Missing SKILL.md at {path}"

        parsed = parse_skill_md(path, source_scope="framework")
        assert parsed is not None, f"Failed to parse {path}"
        assert parsed.name == skill_name
        assert parsed.description
        assert parsed.body
        assert parsed.source_scope == "framework"

    def test_combined_token_budget(self):
        """All default skill bodies combined should be under 2000 tokens (~8000 chars)."""
        total_chars = 0
        for dir_name in SKILL_REGISTRY.values():
            path = _DEFAULT_SKILLS_DIR / dir_name / "SKILL.md"
            parsed = parse_skill_md(path, source_scope="framework")
            assert parsed is not None
            total_chars += len(parsed.body)

        approx_tokens = total_chars // 4
        assert approx_tokens < 2000, (
            f"Combined default skill bodies are ~{approx_tokens} tokens "
            f"({total_chars} chars), exceeding the 2000 token budget"
        )

    def test_shared_memory_keys_all_prefixed(self):
        """All shared memory keys must start with underscore."""
        for key in SHARED_MEMORY_KEYS:
            assert key.startswith("_"), f"Shared memory key missing _ prefix: {key}"


class TestDefaultSkillManager:
    def test_load_all_defaults(self):
        manager = DefaultSkillManager()
        manager.load()

        assert len(manager.active_skill_names) == 6
        for name in SKILL_REGISTRY:
            assert name in manager.active_skill_names

    def test_load_idempotent(self):
        manager = DefaultSkillManager()
        manager.load()
        first_skills = dict(manager.active_skills)
        manager.load()
        assert manager.active_skills == first_skills

    def test_build_protocols_prompt(self):
        manager = DefaultSkillManager()
        manager.load()
        prompt = manager.build_protocols_prompt()

        assert prompt.startswith("## Operational Protocols")
        # Should contain content from each active skill
        for name in SKILL_REGISTRY:
            skill = manager.active_skills[name]
            # At least some of the body should appear
            assert skill.body[:20] in prompt

    def test_protocols_prompt_empty_when_all_disabled(self):
        config = SkillsConfig(all_defaults_disabled=True)
        manager = DefaultSkillManager(config)
        manager.load()

        assert manager.build_protocols_prompt() == ""
        assert manager.active_skill_names == []

    def test_disable_single_skill(self):
        config = SkillsConfig.from_agent_vars(
            default_skills={"hive.quality-monitor": {"enabled": False}}
        )
        manager = DefaultSkillManager(config)
        manager.load()

        assert "hive.quality-monitor" not in manager.active_skill_names
        assert len(manager.active_skill_names) == 5

    def test_disable_all_via_convention(self):
        config = SkillsConfig.from_agent_vars(default_skills={"_all": {"enabled": False}})
        manager = DefaultSkillManager(config)
        manager.load()

        assert manager.active_skill_names == []

    def test_log_active_skills(self, caplog):
        import logging

        with caplog.at_level(logging.INFO, logger="framework.skills.defaults"):
            manager = DefaultSkillManager()
            manager.load()
            manager.log_active_skills()

        assert "Default skills active:" in caplog.text

    def test_log_all_disabled(self, caplog):
        import logging

        config = SkillsConfig(all_defaults_disabled=True)
        with caplog.at_level(logging.INFO, logger="framework.skills.defaults"):
            manager = DefaultSkillManager(config)
            manager.load()
            manager.log_active_skills()

        assert "all disabled" in caplog.text


class TestSkillsConfig:
    def test_default_is_enabled(self):
        config = SkillsConfig()
        assert config.is_default_enabled("hive.note-taking") is True

    def test_explicit_disable(self):
        config = SkillsConfig(
            default_skills={"hive.note-taking": DefaultSkillConfig(enabled=False)}
        )
        assert config.is_default_enabled("hive.note-taking") is False
        assert config.is_default_enabled("hive.batch-ledger") is True

    def test_all_disabled_flag(self):
        config = SkillsConfig(all_defaults_disabled=True)
        assert config.is_default_enabled("hive.note-taking") is False
        assert config.is_default_enabled("anything") is False

    def test_from_agent_vars_basic(self):
        config = SkillsConfig.from_agent_vars(
            default_skills={
                "hive.note-taking": {"enabled": True},
                "hive.quality-monitor": {"enabled": False},
            },
            skills=["deep-research"],
        )
        assert config.is_default_enabled("hive.note-taking") is True
        assert config.is_default_enabled("hive.quality-monitor") is False
        assert config.skills == ["deep-research"]

    def test_from_agent_vars_bool_shorthand(self):
        config = SkillsConfig.from_agent_vars(default_skills={"hive.note-taking": False})
        assert config.is_default_enabled("hive.note-taking") is False

    def test_from_agent_vars_all_disabled(self):
        config = SkillsConfig.from_agent_vars(default_skills={"_all": {"enabled": False}})
        assert config.all_defaults_disabled is True

    def test_get_default_overrides(self):
        config = SkillsConfig.from_agent_vars(
            default_skills={
                "hive.batch-ledger": {"enabled": True, "checkpoint_every_n": 10},
            }
        )
        overrides = config.get_default_overrides("hive.batch-ledger")
        assert overrides == {"checkpoint_every_n": 10}

    def test_get_default_overrides_empty(self):
        config = SkillsConfig()
        assert config.get_default_overrides("hive.note-taking") == {}

    def test_from_agent_vars_none_inputs(self):
        config = SkillsConfig.from_agent_vars(default_skills=None, skills=None)
        assert config.skills == []
        assert config.default_skills == {}
        assert config.all_defaults_disabled is False
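A usage sketch of the agent-vars conventions these tests pin down; the dict shapes mirror the test inputs (per-skill dicts, the bool shorthand, and the "_all" convention), while the surrounding agent configuration plumbing is assumed:

    # Illustrative configuration combining the conventions exercised above.
    config = SkillsConfig.from_agent_vars(
        default_skills={
            "hive.batch-ledger": {"enabled": True, "checkpoint_every_n": 10},  # per-skill override
            "hive.quality-monitor": False,  # bool shorthand for {"enabled": False}
        },
        skills=["deep-research"],  # opt-in, non-default skills
    )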
@@ -12,6 +12,7 @@ Covers:
- Single-edge paths unaffected
"""

import asyncio
from unittest.mock import MagicMock

import pytest
@@ -77,6 +78,19 @@ class TimingNode(NodeProtocol):
    )


class SlowNode(NodeProtocol):
    """Sleeps before returning -- used for timeout testing."""

    def __init__(self, delay: float = 10.0):
        self.delay = delay
        self.executed = False

    async def execute(self, ctx: NodeContext) -> NodeResult:
        await asyncio.sleep(self.delay)
        self.executed = True
        return NodeResult(success=True, output={"result": "slow"}, tokens_used=1, latency_ms=1)


# --- Fixtures ---
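SlowNode only sets executed after its sleep completes, so a cancelled branch never marks itself executed. A minimal sketch of the timeout mechanic the tests below rely on (assumed shape; the framework's actual branch runner is not shown), built on asyncio.wait_for:

    import asyncio

    async def run_with_branch_timeout(branch_coro, timeout_seconds: float):
        # wait_for cancels the underlying task at the deadline, so a SlowNode
        # cancelled mid-sleep never reaches its `self.executed = True` line.
        try:
            return await asyncio.wait_for(branch_coro, timeout=timeout_seconds)
        except asyncio.TimeoutError:
            return None  # the caller maps this to a failed branch result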
@@ -492,3 +506,186 @@ async def test_parallel_disabled_uses_sequential(runtime, goal):
    # Only one branch should have executed (sequential follows first edge)
    executed_count = sum([b1_impl.executed, b2_impl.executed])
    assert executed_count == 1


# === 12. Branch timeout cancels slow branch ===


@pytest.mark.asyncio
async def test_branch_timeout_cancels_slow_branch(runtime, goal):
    """A branch exceeding branch_timeout_seconds should be cancelled."""
    b1 = NodeSpec(
        id="b1", name="B1", description="slow", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="fast", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(branch_timeout_seconds=0.1, on_branch_failure="fail_all")
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    executor.register_node("b1", SlowNode(delay=10.0))
    executor.register_node("b2", SuccessNode({"b2_out": "ok"}))

    result = await executor.execute(graph, goal, {})

    # fail_all: one branch timed out → execution fails
    assert not result.success
    assert "failed" in result.error.lower()


# === 13. Branch timeout with continue_others ===


@pytest.mark.asyncio
async def test_branch_timeout_with_continue_others(runtime, goal):
    """continue_others should let fast branches finish even when one times out."""
    b1 = NodeSpec(
        id="b1", name="B1", description="slow", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="fast", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(
        branch_timeout_seconds=0.1, on_branch_failure="continue_others"
    )
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    executor.register_node("b1", SlowNode(delay=10.0))
    b2_impl = SuccessNode({"b2_out": "ok"})
    executor.register_node("b2", b2_impl)

    await executor.execute(graph, goal, {})

    # continue_others tolerates the timeout
    assert b2_impl.executed


# === 14. Branch timeout with fail_all (explicit) ===


@pytest.mark.asyncio
async def test_branch_timeout_with_fail_all(runtime, goal):
    """fail_all should propagate timeout as execution failure."""
    b1 = NodeSpec(
        id="b1", name="B1", description="slow", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="also slow", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(branch_timeout_seconds=0.1, on_branch_failure="fail_all")
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    executor.register_node("b1", SlowNode(delay=10.0))
    executor.register_node("b2", SlowNode(delay=10.0))

    result = await executor.execute(graph, goal, {})

    assert not result.success


# === 15. Memory conflict: last_wins ===


@pytest.mark.asyncio
async def test_memory_conflict_last_wins(runtime, goal):
    """last_wins should allow both branches to write the same key without error."""
    # Use distinct output_keys in spec (to pass graph validation) but have
    # the node impl write a shared key at runtime — this is the scenario
    # memory_conflict_strategy is designed to handle.
    b1 = NodeSpec(
        id="b1", name="B1", description="b1", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="b2", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(memory_conflict_strategy="last_wins")
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    # Both impls write "shared_key" — triggers conflict detection at runtime
    executor.register_node("b1", SuccessNode({"shared_key": "from_b1", "b1_out": "ok"}))
    executor.register_node("b2", SuccessNode({"shared_key": "from_b2", "b2_out": "ok"}))

    result = await executor.execute(graph, goal, {})

    assert result.success
    # The key should exist with one of the two values
    assert result.output.get("shared_key") in ("from_b1", "from_b2")


# === 16. Memory conflict: first_wins ===


@pytest.mark.asyncio
async def test_memory_conflict_first_wins(runtime, goal):
    """first_wins should keep the first branch's value and skip later writes."""
    b1 = NodeSpec(
        id="b1", name="B1", description="b1", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="b2", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(memory_conflict_strategy="first_wins")
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    executor.register_node("b1", SuccessNode({"shared_key": "from_b1", "b1_out": "ok"}))
    executor.register_node("b2", SuccessNode({"shared_key": "from_b2", "b2_out": "ok"}))

    result = await executor.execute(graph, goal, {})

    assert result.success


# === 17. Memory conflict: error raises ===


@pytest.mark.asyncio
async def test_memory_conflict_error_raises(runtime, goal):
    """error strategy should fail when two branches write the same key."""
    b1 = NodeSpec(
        id="b1", name="B1", description="b1", node_type="event_loop", output_keys=["b1_out"]
    )
    b2 = NodeSpec(
        id="b2", name="B2", description="b2", node_type="event_loop", output_keys=["b2_out"]
    )

    graph = _make_fanout_graph([b1, b2])

    config = ParallelExecutionConfig(memory_conflict_strategy="error")
    executor = GraphExecutor(
        runtime=runtime, enable_parallel_execution=True, parallel_config=config
    )
    executor.register_node("source", SuccessNode({"data": "x"}))
    executor.register_node("b1", SuccessNode({"shared_key": "from_b1", "b1_out": "ok"}))
    executor.register_node("b2", SuccessNode({"shared_key": "from_b2", "b2_out": "ok"}))

    result = await executor.execute(graph, goal, {})

    assert not result.success
    # The conflict RuntimeError is caught inside execute_single_branch,
    # which causes the branch to fail. fail_all then raises its own error.
    assert "failed" in result.error.lower()
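For the conflict strategies exercised in tests 15-17, a compact sketch of the merge semantics (the helper name and shape are hypothetical; the strategy names match ParallelExecutionConfig):

    # Illustrative merge of branch outputs under the three strategies.
    def merge_branch_outputs(branch_outputs: list[dict], strategy: str = "last_wins") -> dict:
        merged: dict = {}
        for output in branch_outputs:
            for key, value in output.items():
                if key in merged:
                    if strategy == "error":
                        raise RuntimeError(f"Memory conflict on key {key!r}")
                    if strategy == "first_wins":
                        continue  # keep the earlier branch's value
                merged[key] = value  # last_wins overwrites; first write always lands
        return merged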
@@ -3,12 +3,16 @@ Tests for core GraphExecutor execution paths.
Focused on minimal success and failure scenarios.
"""

import json
import logging

import pytest

from framework.graph.edge import GraphSpec
from framework.graph.executor import GraphExecutor
from framework.graph.goal import Goal
from framework.graph.node import NodeResult, NodeSpec
from framework.utils.io import atomic_write


# ---- Dummy runtime (no real logging) ----
@@ -25,6 +29,14 @@ class DummyRuntime:
        pass


class DummyMemory:
    def __init__(self, data):
        self._data = data

    def read_all(self):
        return self._data


# ---- Fake node that always succeeds ----
class SuccessNode:
    def validate_input(self, ctx):
@@ -245,3 +257,61 @@ async def test_executor_no_events_without_event_bus():
    result = await executor.execute(graph=graph, goal=goal)

    assert result.success is True


def test_write_progress_uses_atomic_write_and_updates_state(tmp_path, monkeypatch):
    runtime = DummyRuntime()
    executor = GraphExecutor(runtime=runtime, storage_path=tmp_path)
    state_path = tmp_path / "state.json"
    state_path.write_text(json.dumps({"entry_point": "primary"}), encoding="utf-8")
    memory = DummyMemory({"foo": "bar"})

    called = {}

    def recording_atomic_write(path, *args, **kwargs):
        called["path"] = path
        return atomic_write(path, *args, **kwargs)

    monkeypatch.setattr("framework.graph.executor.atomic_write", recording_atomic_write)

    executor._write_progress(
        current_node="node-b",
        path=["node-a", "node-b"],
        memory=memory,
        node_visit_counts={"node-a": 1, "node-b": 1},
    )

    state = json.loads(state_path.read_text(encoding="utf-8"))
    assert called["path"] == state_path
    assert state["entry_point"] == "primary"
    assert state["progress"]["current_node"] == "node-b"
    assert state["progress"]["path"] == ["node-a", "node-b"]
    assert state["progress"]["node_visit_counts"] == {"node-a": 1, "node-b": 1}
    assert state["progress"]["steps_executed"] == 2
    assert state["memory"] == {"foo": "bar"}
    assert state["memory_keys"] == ["foo"]
    assert "updated_at" in state["timestamps"]


def test_write_progress_logs_warning_on_atomic_write_failure(tmp_path, monkeypatch, caplog):
    runtime = DummyRuntime()
    executor = GraphExecutor(runtime=runtime, storage_path=tmp_path)
    state_path = tmp_path / "state.json"
    state_path.write_text(json.dumps({"entry_point": "primary"}), encoding="utf-8")
    memory = DummyMemory({"foo": "bar"})

    def failing_atomic_write(*args, **kwargs):
        raise OSError("disk full")

    monkeypatch.setattr("framework.graph.executor.atomic_write", failing_atomic_write)

    with caplog.at_level(logging.WARNING):
        executor._write_progress(
            current_node="node-b",
            path=["node-a", "node-b"],
            memory=memory,
            node_visit_counts={"node-a": 1, "node-b": 1},
        )

    assert "Failed to persist progress state to" in caplog.text
    assert str(state_path) in caplog.text
|
||||
assert call_kwargs["model"] == "claude-haiku-4-5-20251001"
|
||||
assert call_kwargs["max_tokens"] == 500
|
||||
|
||||
def test_openai_fallback_uses_litellm_provider(self, monkeypatch):
|
||||
"""When OPENAI_API_KEY is set, evaluate() should use a LiteLLM-based provider."""
|
||||
# Force the OpenAI fallback path (no injected provider, no Anthropic key)
|
||||
monkeypatch.setenv("OPENAI_API_KEY", "sk-test-openai")
|
||||
monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
|
||||
|
||||
# Stub LiteLLMProvider so we don't call the real API; record what judge passes through
|
||||
captured_calls: list[dict] = []
|
||||
|
||||
class DummyProvider:
|
||||
def __init__(self, model: str = "gpt-4o-mini"):
|
||||
self.model = model
|
||||
|
||||
def complete(
|
||||
self,
|
||||
messages,
|
||||
system="",
|
||||
tools=None,
|
||||
max_tokens=1024,
|
||||
response_format=None,
|
||||
json_mode=False,
|
||||
max_retries=None,
|
||||
):
|
||||
captured_calls.append(
|
||||
{
|
||||
"messages": messages,
|
||||
"system": system,
|
||||
"max_tokens": max_tokens,
|
||||
"json_mode": json_mode,
|
||||
"model": self.model,
|
||||
}
|
||||
)
|
||||
|
||||
class _Resp:
|
||||
def __init__(self, content: str):
|
||||
self.content = content
|
||||
|
||||
# Minimal response object with a content attribute
|
||||
return _Resp('{"passes": true, "explanation": "OK"}')
|
||||
|
||||
monkeypatch.setattr(
|
||||
"framework.llm.litellm.LiteLLMProvider",
|
||||
DummyProvider,
|
||||
)
|
||||
|
||||
judge = LLMJudge()
|
||||
result = judge.evaluate(
|
||||
constraint="no-hallucination",
|
||||
source_document="The sky is blue.",
|
||||
summary="The sky is blue.",
|
||||
criteria="Summary must only contain facts from source",
|
||||
)
|
||||
|
||||
# Judge should have used our stub once and returned the stub's JSON result
|
||||
assert result["passes"] is True
|
||||
assert result["explanation"] == "OK"
|
||||
assert len(captured_calls) == 1
|
||||
|
||||
call = captured_calls[0]
|
||||
assert call["model"] == "gpt-4o-mini"
|
||||
assert call["max_tokens"] == 500
|
||||
assert call["json_mode"] is True
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# LLMJudge Integration Pattern Tests
|
||||
|
||||
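The fallback order implied by this test and the Anthropic-keyed assertions above, written out as an illustrative helper (the function name and return values are hypothetical, not the judge's real internals):

    import os

    def select_judge_backend(injected: object | None = None) -> str:
        # An explicitly injected provider always wins.
        if injected is not None:
            return "injected"
        # Then the Anthropic path, with the model the tests assert on.
        if os.environ.get("ANTHROPIC_API_KEY"):
            return "anthropic:claude-haiku-4-5-20251001"
        # Then the LiteLLM/OpenAI fallback exercised by the test above.
        if os.environ.get("OPENAI_API_KEY"):
            return "litellm:gpt-4o-mini"
        raise RuntimeError("No API key available for LLMJudge")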
@@ -50,7 +50,7 @@ async def test_worker_handoff_injects_formatted_request_into_queen() -> None:


 @pytest.mark.asyncio
-async def test_worker_handoff_ignores_queen_and_judge_streams() -> None:
+async def test_worker_handoff_ignores_queen_stream() -> None:
     bus = EventBus()
     manager = SessionManager()
     session = _make_session(bus)
@@ -63,11 +63,6 @@ async def test_worker_handoff_ignores_queen_and_judge_streams() -> None:
         node_id="queen",
         reason="should be ignored",
     )
-    await bus.emit_escalation_requested(
-        stream_id="judge",
-        node_id="judge",
-        reason="should be ignored",
-    )

     assert queen_node.inject_event.await_count == 0
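A sketch (assumed shape, not the session manager's actual code) of the stream filtering this renamed test pins down: escalations originating on the queen's own stream must not be injected back into the queen node.

    async def handle_escalation(event, queen_node) -> None:
        if event.stream_id == "queen":
            return  # the queen's own stream is ignored
        await queen_node.inject_event(event)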
Some files were not shown because too many files have changed in this diff.