Compare commits

...

248 Commits

Author SHA1 Message Date
Timothy bb2e55f5fa fix: click response 2026-04-16 17:43:21 -07:00
Timothy 5d0106ac34 fix: remove coordinate conversion 2026-04-16 17:36:05 -07:00
Timothy 3a219e27ab fix: two dimensions 2026-04-16 17:08:40 -07:00
Timothy 7f62e7a2d0 fix: split image click vs coordinate click 2026-04-16 16:37:52 -07:00
Richard Tang 44d114f0d0 feat: default 1ms delay and prompt improvements 2026-04-16 16:19:38 -07:00
Richard Tang 9e71f16d15 Merge remote-tracking branch 'origin/fix/browser-behaviour-improvements' into fix/browser-behaviour-improvements 2026-04-16 16:14:43 -07:00
Richard Tang 28cad2376c feat: separate type focus tool 2026-04-16 16:08:43 -07:00
Timothy 8222cd306e fix: simplify canonical workflow 2026-04-16 16:02:37 -07:00
Richard Tang 916803889f feat: browser control tools improvement and debugger 2026-04-16 15:14:08 -07:00
Hundao 9051c443fb fix(tests): resolve Windows CI failures (#7061)
- test_background_job: use sys.executable and double quotes instead of
  single-quoted 'python -c' which Windows cmd.exe doesn't understand
- test_cli_entry_point: guard against None stdout on Windows with
  (result.stdout or "").lower()
- test_safe_eval: bump DEFAULT_TIMEOUT_MS from 100 to 500 to accommodate
  slow Windows CI runners where SIGALRM is unavailable
2026-04-16 21:05:09 +08:00
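A minimal sketch of the subprocess change described in the commit above, assuming a pytest-style assertion; only sys.executable, the double-quoted -c snippet, and the (result.stdout or "") guard come from the commit text:

```python
import subprocess
import sys

# Sketch of the fix: use sys.executable (not a bare "python") and a
# double-quoted -c snippet, since Windows cmd.exe mis-parses the
# single-quoted 'python -c' form used before.
result = subprocess.run(
    [sys.executable, "-c", 'print("background job ok")'],
    capture_output=True,
    text=True,
)

# Guard against stdout being None on Windows, as the commit describes:
assert "background job ok" in (result.stdout or "").lower()
```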
Hundao e5a93b059f fix(tests): resolve test failures across framework and tools (#7059)
* fix(tests): resolve test failures across framework and tools

Framework tests (52 -> 1 failure):
- Add missing `model` attribute to mock LLM classes (MockStreamingLLM,
  CrashingLLM, ErrorThenSuccessLLM, etc.) to match new agent_loop.py
  requirement at line 624
- Update skill count assertions from 6 to 7 (new writing-hive-skills)
- Fix phase compaction test to match new message format (no brackets)
- Update model catalog test for current gemini model names
- Fix queen memory test: set phase="building" to match prompt_building,
  adjust reflection trigger count to match cooldown behavior

Tools tests (52 -> 0 failures):
- Update csv_tool tests: remove agent_id parameter, use absolute paths,
  patch _ALLOWED_ROOTS instead of AGENT_SANDBOXES_DIR
- Fix browser_evaluate test to allow toast wrapper around script

Remaining: 1 pre-existing failure in test_worker_report where mock LLM
gets stuck when scenarios are exhausted (separate bug).

* fix(tests): resolve remaining test failures

- Add text stop scenario to test_worker_report so worker terminates
  cleanly after tool_calls finish instead of replaying the last
  scenario forever
- Remove duplicated hive home isolation fixture from test_colony_fork_live;
  reuse conftest autouse fixture and only add config copy on top

* fix(tests): prevent mock LLM infinite loops on exhausted scenarios

fix(core): accept both pruned tool result sentinel formats

MockStreamingLLM and _ByTaskMockLLM replay the last scenario forever
when call_index exceeds the scenario list, causing worker timeouts in
CI. Fix by emitting a text stop when scenarios are exhausted (scenarios
mode) or already consumed (by_task mode).

Also fix pruned tool result sentinel mismatch: conversation.py produces
"Pruned tool result ..." but compaction.py and conversation.py only
checked for "[Pruned tool result". Now both formats are accepted.

Also remove duplicated hive home isolation fixture from
test_colony_fork_live; reuse conftest autouse fixture instead.
2026-04-16 20:13:43 +08:00
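The two fixes in this commit can be sketched with a toy mock; the class and method names below are illustrative, not the real MockStreamingLLM API:

```python
# Accept both pruned-tool-result sentinel formats, so compaction does not
# miss "Pruned tool result ..." produced without the leading bracket.
PRUNED_SENTINELS = ("Pruned tool result", "[Pruned tool result")

def is_pruned_tool_result(content: str) -> bool:
    return content.startswith(PRUNED_SENTINELS)

class ToyMockLLM:
    def __init__(self, scenarios):
        self.scenarios = scenarios
        self.call_index = 0

    def next_response(self):
        if self.call_index >= len(self.scenarios):
            # Exhausted: emit a text stop instead of replaying the last
            # scenario forever, so workers terminate cleanly in CI.
            return {"type": "text", "content": "done", "stop": True}
        response = self.scenarios[self.call_index]
        self.call_index += 1
        return response
```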
Hundao 589c5b06fe fix: resolve all ruff lint and format errors across codebase (#7058)
- Auto-fixed 70 lint errors (import sorting, aliased errors, datetime.UTC)
- Fixed 85 remaining errors manually:
  - E501: wrapped long lines in queen_profiles, catalog, routes_credentials
  - F821: added missing TYPE_CHECKING imports for AgentHost, ToolRegistry,
    HookContext, HookResult; added runtime imports where needed
  - F811: removed duplicate method definitions in queen_lifecycle_tools
  - F841/B007: removed unused variables in discovery.py
  - W291: removed trailing whitespace in queen nodes
  - E402: moved import to top of queen_memory_v2.py
  - Fixed AgentRuntime -> AgentHost in example template type annotations
- Reformatted 343 files with ruff format
2026-04-16 19:30:01 +08:00
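The TYPE_CHECKING pattern behind the F821 fixes looks like this in general form (the framework.host / framework.tools.registry paths are assumptions):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Imported only for annotations: satisfies ruff F821 without adding
    # runtime imports (module paths here are assumptions).
    from framework.host import AgentHost
    from framework.tools.registry import ToolRegistry

def attach(host: AgentHost, registry: ToolRegistry) -> None:
    # With `from __future__ import annotations`, the annotations are never
    # evaluated at runtime, so the guarded imports suffice.
    print(f"attaching {type(registry).__name__} to {type(host).__name__}")
```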
Richard Tang 4fdbc438f9 chore: release v0.10.1
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 18:15:40 -07:00
Richard Tang 78301274cd feat: browser tool improvements 2026-04-15 18:09:28 -07:00
Richard Tang 451a5d55d2 feat: queen independent prompt improvements 2026-04-15 17:36:48 -07:00
Richard Tang e2a21b3613 chore: title of finance 2026-04-15 16:55:00 -07:00
Richard Tang 5c251645d3 Merge branch 'main' into feat/gui-ux-updates 2026-04-15 16:45:39 -07:00
Richard Tang 8783f372fc feat: use the customtools model for gemini 2026-04-15 16:44:23 -07:00
bryan 2790d13bb6 Merge branch 'main' into feat/gui-ux-updates 2026-04-15 15:45:56 -07:00
bryan 900d94e49f feat: add message timestamps, day-divider rows, and stable createdAt across stream updates 2026-04-15 15:45:31 -07:00
bryan 70e3eb539b feat: extract QueenProfilePanel and open it from the app header 2026-04-15 15:45:20 -07:00
bryan deeb7de800 feat: sort queens by last DM activity and trim "Head of" title prefix 2026-04-15 15:44:52 -07:00
bryan 57ad98005d feat: derive last_active_at from latest message timestamp and sort history newest-first 2026-04-15 15:44:32 -07:00
Timothy 252710fb41 fix: context health and eviction 2026-04-15 11:40:45 -07:00
Richard Tang 22df99ef51 Merge remote-tracking branch 'origin/main'
2026-04-14 19:56:33 -07:00
Richard Tang edc3135797 Merge branch 'feature/new-colony' 2026-04-14 19:56:08 -07:00
Richard Tang 27b15789fb fix: skills prompts 2026-04-14 18:51:14 -07:00
RichardTang-Aden 5ba5933edc Merge pull request #7046 from vincentjiang777/main
docs: new readme
2026-04-14 18:02:49 -07:00
Timothy 50eb4b0e8f Merge branch 'feature/colony-creation' into feature/new-colony 2026-04-14 16:34:30 -07:00
Richard Tang 3e4a4c9924 Merge remote-tracking branch 'origin/feat/text-only-tool-filter' into feature/new-colony 2026-04-14 16:29:19 -07:00
Richard Tang c47987e73c fix: ask user widget fallback 2026-04-14 16:27:12 -07:00
Timothy 256b52b818 fix: skills for colonies 2026-04-14 16:23:17 -07:00
Richard Tang 8f5daf0569 fix: switching model and new chat 2026-04-14 16:04:07 -07:00
bryan af5c72e785 feat: hide image-producing tools and vision-only prompt blocks from text-only models 2026-04-14 12:50:44 -07:00
Timothy 958bafea29 fix: tool gated skill activation 2026-04-14 11:17:03 -07:00
bryan 5cdc01cb8c fix: preserve tool pill mapping across turn boundary for deferred ask_user completions 2026-04-14 10:56:38 -07:00
Timothy 6979ea825d fix: remove tool limit 2026-04-14 10:35:08 -07:00
Timothy d6093a560f Merge branch 'feature/new-colony' into feature/colony-creation 2026-04-14 10:19:24 -07:00
Hundao 2f58cce781 fix(tools): web_scrape truncation no longer exceeds max_length (#7044)
The previous code did `text[:max_length] + "..."`, which made the
returned content always 3 chars longer than the requested max_length.
Reserve room for the ellipsis inside the limit so the contract holds.

Fixes #2098
2026-04-14 14:24:42 +08:00
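A minimal sketch of the truncation contract after this fix (the helper name is illustrative):

```python
def truncate_scrape(text: str, max_length: int) -> str:
    """Truncate to max_length, reserving room for the ellipsis."""
    if len(text) <= max_length:
        return text
    # Old bug: text[:max_length] + "..." returned max_length + 3 chars.
    # Reserve the 3 ellipsis chars inside the limit instead:
    return text[: max_length - 3] + "..."

assert len(truncate_scrape("x" * 500, 100)) == 100
```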
Richard Tang ab76a66646 fix: queen loading 2026-04-13 22:39:39 -07:00
Richard Tang c575ff3fe7 feat: queen messages improvements 2026-04-13 22:31:49 -07:00
Timothy 8668d103a8 Merge branch 'feature/new-colony' into feature/colony-creation 2026-04-13 21:34:17 -07:00
Timothy 133f393f8b feat: scheduled triggers 2026-04-13 21:33:54 -07:00
Timothy fd3ef36a15 fix: side panel 2026-04-13 21:08:11 -07:00
Timothy aa281aad34 fix: remove deprecated graphs 2026-04-13 20:56:47 -07:00
Richard Tang a3d0c7e0cb fix: remove "No ask_user" prompt in the examples 2026-04-13 20:54:17 -07:00
Richard Tang de3042ba3f fix: prompts on the home page are not passed to the queen directly; users have to wait until the hello message finishes 2026-04-13 20:34:11 -07:00
Timothy 326d7f201c Merge branch 'feature/new-colony' into feature/colony-creation 2026-04-13 19:59:34 -07:00
Timothy db30ef3094 fix: reframe colony creation 2026-04-13 19:56:14 -07:00
Timothy e3d1cb6739 fix: colony creation link 2026-04-13 19:46:24 -07:00
Timothy 846f3f2470 feat: improve tool call reliability 2026-04-13 19:34:47 -07:00
Richard Tang 913437ea0b fix: build error 2026-04-13 18:06:40 -07:00
Richard Tang 520bd635e2 Merge branch 'feature/hive-experimental-comp-pipeline' into feature/new-colony 2026-04-13 18:02:34 -07:00
bryan b7d850ddd0 feat: add LLM key validation endpoint, emit agent errors via SSE, and improve key management UI 2026-04-13 16:25:43 -07:00
Timothy 0a251278f1 feat: learned default skills 2026-04-13 10:34:25 -07:00
Timothy 857af8e6a3 fix: gcu system prompt 2026-04-13 10:00:00 -07:00
Timothy 273d4ec66e fix: upgrade browser skills 2026-04-13 09:45:07 -07:00
Timothy eeb46a2b3e fix: tool credential filter 2026-04-11 12:54:26 -07:00
Timothy b5e05fefae fix: screenshot 2026-04-11 09:53:53 -07:00
Timothy bdfbb7698a fix: browser click 2026-04-10 23:34:39 -07:00
Timothy 35b1eadb7f fix: improve reliability 2026-04-10 22:46:30 -07:00
Timothy 38036eb7bd fix: reliability tunes 2026-04-10 22:12:13 -07:00
Timothy 70d90fda19 fix: screenshot 2026-04-10 21:11:49 -07:00
vincentjiang777 9dc214cfd2 Merge branch 'aden-hive:main' into main 2026-04-10 20:35:42 -07:00
Bryan 1e3dcbbbc2 feat: ask user tool in queen prompt 2026-04-10 17:46:18 -07:00
Bryan 53b095cdcb feat: use ask_user and ask_user_multiple 2026-04-10 17:31:32 -07:00
Timothy d04862053f fix: queen instruction on colony creation 2026-04-10 17:31:01 -07:00
Timothy df0e0ea082 Merge branch 'fix/after-colony-refresh' into feature/new-colony 2026-04-10 17:19:22 -07:00
Timothy b1724ee360 fix: after colony creation list needs refresh 2026-04-10 17:18:59 -07:00
Bryan a59493835d fix: new session for prompt library and new chat 2026-04-10 17:17:55 -07:00
Timothy 334af2b74e fix: default log level 2026-04-10 16:58:27 -07:00
Richard Tang 81c72949ce feat: prompt library ui improvement 2026-04-10 16:54:34 -07:00
Timothy 97fd45d36a fix: mcp tool initialization 2026-04-10 16:52:04 -07:00
Timothy caebbea1aa fix: initialize default mcps 2026-04-10 16:42:03 -07:00
Richard Tang 574a3a284e Merge remote-tracking branch 'origin/feature/new-colony' into feature/new-colony 2026-04-10 16:38:50 -07:00
Richard Tang 8ea3fb8cfe chore: align the hive tool names 2026-04-10 16:38:21 -07:00
Timothy 69d16a8f6c fix: remove deprecated tools 2026-04-10 16:26:29 -07:00
Richard Tang f16cb0ea1f fix: frontend dm 2026-04-10 16:25:33 -07:00
Richard Tang e0f1e9d494 feat: efficient mcp loading in initialization 2026-04-10 16:23:36 -07:00
Richard Tang 7fb0da26fc feat: register available MCP tools 2026-04-10 16:01:42 -07:00
Timothy f5f72c1c9c Merge branch 'feature/hive-experimental-comp-pipeline' into feature/new-colony 2026-04-10 15:56:41 -07:00
Timothy 06d0a16201 Merge branch 'feature/colony-orchestrate' into feature/new-colony 2026-04-10 15:52:16 -07:00
Timothy 0964758b12 Merge branch 'feature/colony-orchestrate' into feature/hive-experimental-comp-pipeline 2026-04-10 15:48:02 -07:00
Bryan c25abdfd84 feat: natural chat replies + cleaner home-prompt bootstrap 2026-04-10 15:47:28 -07:00
Timothy af720bb569 fix: stop worker 2026-04-10 15:40:35 -07:00
Bryan b763226a64 docs: update references for orchestrator/host/loader renames 2026-04-10 15:39:36 -07:00
Timothy 9b7580d22b fix: colony event bus subscription 2026-04-10 15:33:44 -07:00
Timothy c23c274ac7 feat: colony creation with skill 2026-04-10 15:09:27 -07:00
Timothy 1335a15341 Merge branch 'feature/new-colony' into feature/colony-orchestrate 2026-04-10 12:47:38 -07:00
Timothy 2a1cbaa582 fix: worker spawn 2026-04-10 12:47:14 -07:00
Richard Tang 74cba57cce Merge remote-tracking branch 'origin/feature/new-colony-credentials' into feature/new-colony 2026-04-10 12:15:11 -07:00
Richard Tang 7616de2417 feat: escalation and queen reply tools 2026-04-10 12:14:49 -07:00
Richard Tang d96875932a fix: correct aden support tag 2026-04-10 12:03:39 -07:00
Richard Tang 238d90871a feat: stable credential states 2026-04-10 11:33:34 -07:00
Timothy e38e1563ba fix: worker execution 2026-04-10 10:26:29 -07:00
Timothy e3d8b89b69 fix: tool blacklist 2026-04-10 09:07:17 -07:00
Timothy ec64c14d37 fix: test cases 2026-04-09 23:51:51 -07:00
Timothy fb5b7ed9de fix: integration tests 2026-04-09 23:05:11 -07:00
Timothy da0aa65c31 refactor: big test cleanup 2026-04-09 22:04:23 -07:00
Timothy cbf7cc0a37 feat(agent): simple fork 2026-04-09 20:42:28 -07:00
Richard Tang 802f64f4a7 feat: cooldown for reflection 2026-04-09 19:00:10 -07:00
Richard Tang 9ad95fde59 chore: ruff lint 2026-04-09 18:22:16 -07:00
Richard Tang b812f6a03a feat: user memory structure and identity 2026-04-09 18:09:38 -07:00
Richard Tang 0299a87d0c fix: queen identity for new session 2026-04-09 18:07:42 -07:00
Timothy 4aa2358211 feat: doppelganger wiring 2026-04-09 18:04:45 -07:00
Richard Tang bc8a97079e feat: queen role and examples 2026-04-09 17:55:22 -07:00
Richard Tang 6eaa609f63 feat: queen scope memory 2026-04-09 17:33:14 -07:00
Bryan 8f0101b273 fix(queen): handle extra text in selector JSON response 2026-04-09 17:13:20 -07:00
Bryan 5ee98ac7cf feat: add prompt library with search and category filtering 2026-04-09 17:00:09 -07:00
Bryan c058029ac0 feat: add aden credentials storage adapter 2026-04-09 16:59:16 -07:00
Bryan 6a79728d99 feat: update model switcher and enhance queen DM page with navigation 2026-04-09 16:58:55 -07:00
Bryan 200c202465 refactor: update provider descriptions and simplify subscription activation 2026-04-09 16:58:36 -07:00
Bryan 791da46f59 feat: add subscription-based LLM config activation endpoint 2026-04-09 16:58:21 -07:00
Bryan 6377c5b094 refactor: cache tool registry and add queen identity selection hook 2026-04-09 16:58:09 -07:00
Bryan 8f4e901c3c feat: add kimi and hive providers to model catalog 2026-04-09 16:57:53 -07:00
Timothy 4be61ebfc7 refactor: shatter the eld*n ring 2026-04-09 16:57:43 -07:00
Richard Tang ac46ce7bfb fix: unavailable minimax model and enhance reflection log 2026-04-09 16:37:09 -07:00
Richard Tang 110d7e0075 fix: remove outdated queen communication prompt 2026-04-09 15:36:56 -07:00
Richard Tang 749185e760 feat: queen dm prompt 2026-04-09 15:26:35 -07:00
Richard Tang 5cb75d1822 chore: instruction on resetting the port 2026-04-09 15:01:22 -07:00
Richard Tang 3febef106d fix: queen identity loading 2026-04-09 14:47:42 -07:00
Richard Tang db18186825 Merge remote-tracking branch 'origin/feature/hive-experimental-comp-pipeline' into feature/hive-experimental-comp-pipeline 2026-04-09 13:59:25 -07:00
Richard Tang 87918b5263 feat: queen selection like a CEO 2026-04-09 13:58:38 -07:00
Bryan @ Aden 01f258c4c4 Merge pull request #7006 from vincentjiang777/main
micro-fix: readme & 500 use cases
2026-04-09 13:46:36 -07:00
Vincent Jiang 3d992bbda3 readme & 500 use cases 2026-04-09 13:43:35 -07:00
Timothy df43f36385 fix: issues 2026-04-09 12:59:42 -07:00
Richard Tang bdd099bb78 feat: queen selection prompt 2026-04-09 12:58:59 -07:00
Richard Tang acca008772 feat: update provider config 2026-04-09 11:59:41 -07:00
Richard Tang 0bf4d8b9fa fix: session resume 2026-04-09 11:44:03 -07:00
Richard Tang 7a2752eb42 feat: consolidate model config 2026-04-09 09:53:05 -07:00
Timothy c65b43c21b Merge branch 'feature/browser-use-fix' into feature/hive-experimental-comp-pipeline 2026-04-09 08:53:37 -07:00
Timothy 90f376136e fix: always on tools 2026-04-09 07:21:24 -07:00
Richard Tang d5ea28f8f3 chore: loading message 2026-04-08 19:11:46 -07:00
Richard Tang 1ccfc7aefa feat: update the model config and selection 2026-04-08 19:09:30 -07:00
Timothy 64830a6720 fix: config validation 2026-04-08 19:03:26 -07:00
Timothy 514d2828fa fix: tool issues 2026-04-08 18:52:34 -07:00
Richard Tang 5705647364 feat: new session for the queen 2026-04-08 18:42:10 -07:00
Richard Tang 8a3e1e68a9 feat: route the new user request into a queen session and add a switch for queen sessions 2026-04-08 18:31:46 -07:00
Richard Tang 4c900e9ab2 fix: position of queen tool bubble 2026-04-08 18:21:13 -07:00
Richard Tang fa0518b249 fix: show tool calls in queen dm message 2026-04-08 17:58:15 -07:00
Richard Tang 6a5bc0d484 fix: edge case causing message injection in session resume 2026-04-08 17:48:59 -07:00
Bryan d288c865d0 feat: sync user profile to global memory as user-profile.md; add queen profile API transformation 2026-04-08 17:42:57 -07:00
bryan 81051a11fc Merge branch 'feature/hive-experimental-comp-pipeline' into feat/open-hive-colony 2026-04-08 16:53:39 -07:00
Richard Tang c4a8c73b24 Merge remote-tracking branch 'origin/feature/hive-experimental-comp-pipeline' into feature/hive-experimental-comp-pipeline 2026-04-08 16:49:17 -07:00
Richard Tang 2b8ed0eb05 fix: bug causing queen message injection when resuming a session 2026-04-08 16:48:46 -07:00
Timothy 40c530603b fix: internal tag word choice 2026-04-08 16:38:55 -07:00
Timothy dee3980dbe fix: browser, csv tools 2026-04-08 16:32:26 -07:00
Richard Tang d19cb2843e feat: separate resume and new session flow for queen sessions 2026-04-08 15:45:37 -07:00
Richard Tang ea31b037b8 feat: register browser tools and skills for queen 2026-04-08 15:29:00 -07:00
Richard Tang 5fe924318d Merge remote-tracking branch 'origin/feature/hive-experimental-comp-pipeline' into feat/independent-queen 2026-04-08 15:09:04 -07:00
Bryan 8e6a812ce6 Merge branch 'feature/hive-experimental-comp-pipeline' into feat/open-hive-colony 2026-04-08 15:08:00 -07:00
Bryan 1565fd52e1 feat: add user profile settings and UI enhancements 2026-04-08 15:07:01 -07:00
Bryan 53f5f93deb fix: correct import paths for subscription token detection in BYOK modal 2026-04-08 15:06:05 -07:00
Richard Tang 21afac2b59 feat: use independent mode when pm queen 2026-04-08 14:54:07 -07:00
Timothy c03f1caa58 fix: strip internal tags 2026-04-08 14:52:41 -07:00
Richard Tang a5e928ac95 feat: independent phase queen 2026-04-08 14:36:24 -07:00
Timothy 648e3cd52a feat: think out loud 2026-04-08 14:30:24 -07:00
Timothy b216df76a0 fix: character inception 2026-04-08 14:07:58 -07:00
Bryan ddee82eaef Merge branch 'feature/hive-experimental-comp-pipeline' into feat/open-hive-colony 2026-04-08 12:56:50 -07:00
Timothy 6e88bb0205 feat: wire queen dms 2026-04-08 12:56:00 -07:00
Bryan 0aa19721c3 Merge branch 'feature/hive-experimental-comp-pipeline' into feat/open-hive-colony 2026-04-08 12:11:48 -07:00
Timothy cf1e26b012 Merge branch 'feat/open-hive-colony' into feature/hive-experimental-comp-pipeline 2026-04-08 12:08:42 -07:00
Timothy 47e02c0821 Merge branch 'feat/queen-profile' into feature/hive-experimental-comp-pipeline 2026-04-08 12:07:21 -07:00
Bryan 7e1ebf1c26 Merge branch 'feature/hive-experimental-comp-pipeline' into feat/open-hive-colony 2026-04-08 11:50:39 -07:00
Bryan ecbf543e4c Merge branch 'main' into feat/open-hive-colony 2026-04-08 11:50:07 -07:00
Timothy 7daca39bb2 fix: proper skill loading 2026-04-08 11:37:29 -07:00
Aagrim Rautela d8712ceb72 fix(core): add error handling to _dump_failed_request to prevent crashes on read only filesystem (#6036) 2026-04-08 16:51:53 +08:00
Navya Bijoy 5a90a4ba42 update default storage path from /tmp to ~/.hive/agents/{agent_name}/ (#6556) 2026-04-08 16:22:45 +08:00
Emmanuel Nwanguma e69c381331 test(event_bus): add comprehensive unit tests for EventBus (#4826)
* test(event_bus): add comprehensive unit tests for EventBus

- Add 38 tests covering all EventBus functionality
- Test subscription management (subscribe/unsubscribe)
- Test event publishing and delivery to subscribers
- Test filtering by stream, node, and execution
- Test concurrency (handler errors, semaphore limits)
- Test history operations (get_history, get_stats)
- Test wait_for async waiting with timeout
- Test convenience publishers (emit_* methods)
- Test AgentEvent dataclass and EventType enum

Fixes #4782

* test(event_bus): add comprehensive unit tests for EventBus

- Add 41 tests covering all EventBus functionality
- Test subscription management (subscribe/unsubscribe)
- Test event publishing and delivery to subscribers
- Test filtering by stream, node, execution, and graph_id
- Test concurrency (handler errors, semaphore limits)
- Test history operations (get_history, get_stats)
- Test wait_for async waiting with timeout
- Test convenience publishers including new emit_tool_doom_loop
  and emit_escalation_requested methods
- Test AgentEvent dataclass with graph_id field
- Test EventType enum including NODE_TOOL_DOOM_LOOP and ESCALATION_REQUESTED

Fixes #4782

* test(event_bus): add tests for new worker monitoring events

Add tests for newly added emit methods:
- emit_worker_escalation_ticket (judge → queen escalation)
- emit_queen_intervention_requested (queen → operator escalation)
- emit_llm_turn_complete (LLM turn metadata)
- emit_node_action_plan (node planning)

Update test_key_event_types_exist to include:
- WORKER_ESCALATION_TICKET, QUEEN_INTERVENTION_REQUESTED
- LLM_TURN_COMPLETE, NODE_ACTION_PLAN
- WORKER_LOADED, CREDENTIALS_REQUIRED

Total: 45 tests (up from 41)

* test(event_bus): add tests for run_id, subagent_report, and new event types

- Add tests for run_id field in AgentEvent.to_dict()
- Add test for emit_subagent_report convenience method
- Update test_key_event_types_exist with new event types
- Total: 48 tests

* fix(test): remove tests for deleted EventBus methods and fix enum names

Remove test_emit_worker_escalation_ticket and
test_emit_queen_intervention_requested (methods were removed in recent
refactors). Fix WORKER_LOADED -> WORKER_GRAPH_LOADED in enum assertions.

* style: add future annotations import and fix empty assertion in test_emit_node_action_plan
2026-04-08 16:06:53 +08:00
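The subscribe/publish/handler-error behaviors these tests exercise can be illustrated with a self-contained toy bus; this is not the real EventBus API, just the pattern the commit describes:

```python
import asyncio

class ToyEventBus:
    """Toy stand-in for illustration; the real EventBus API differs."""

    def __init__(self):
        self._subscribers = []
        self._history = []

    def subscribe(self, handler):
        self._subscribers.append(handler)
        return lambda: self._subscribers.remove(handler)  # unsubscribe hook

    async def publish(self, event):
        self._history.append(event)
        for handler in list(self._subscribers):
            try:
                handler(event)
            except Exception:
                # Handler errors must not break delivery to other subscribers.
                pass

    def get_history(self):
        return list(self._history)

async def main():
    bus = ToyEventBus()
    seen = []
    unsubscribe = bus.subscribe(seen.append)
    await bus.publish({"type": "NODE_STARTED"})
    unsubscribe()
    await bus.publish({"type": "NODE_FINISHED"})
    assert seen == [{"type": "NODE_STARTED"}]
    assert len(bus.get_history()) == 2

asyncio.run(main())
```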
Gaurav Rai 8f608048f9 feat(tools): add Weights & Biases ML experiment tracking integration (#6963)
* feat(tools): add Weights & Biases experiment tracking and model monitoring integration

* style: fix ruff formatting in wandb_tool.py

* feat(tools): add Weights & Biases ML experiment tracking integration

* fix(tools): address CodeRabbit review comments on wandb_tool

* fix(tools): rewrite wandb_tool to use official Python SDK instead of undocumented REST endpoints

* fix(tools): address Hundao review — remove .coverage, switch to GraphQL/httpx, fix wandb_host, add README

* fix(tools): wire filters to GraphQL, validate empty metric_keys, fix line lengths

* fix(tools): check credentials before input validation in wandb_get_run_metrics

Move _get_creds() call before run_id/metric_keys checks so the
framework credential test receives the expected {error, help} response
instead of a bare input-validation error.
2026-04-08 14:57:03 +08:00
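The ordering fix in the last commit message above can be sketched as follows; apart from the {error, help} shape and the check order, all names and bodies are hypothetical:

```python
import os

def _get_creds() -> dict:
    """Illustrative credentials helper; the real one is not shown here."""
    api_key = os.getenv("WANDB_API_KEY")
    if not api_key:
        return {"error": "WANDB_API_KEY not set", "help": "export WANDB_API_KEY=..."}
    return {"api_key": api_key}

def wandb_get_run_metrics(run_id: str, metric_keys: list[str]) -> dict:
    # Credentials are checked *before* input validation, so the framework's
    # credential test gets the expected {error, help} response instead of a
    # bare input-validation error.
    creds = _get_creds()
    if "error" in creds:
        return creds
    if not run_id:
        return {"error": "run_id is required"}
    if not metric_keys:
        return {"error": "metric_keys must not be empty"}
    return {"run_id": run_id, "metrics": dict.fromkeys(metric_keys)}  # stub result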
Hundao df29c49bd0 fix(test): update queen memory reflection test mocks for litellm format (#6991)
* fix(test): update queen memory test mocks to match litellm ModelResponse format

PR #6976 refactored reflection_agent to extract tool calls from litellm
ModelResponse objects (choices[0].message.tool_calls) instead of plain
dicts. The test mocks were not updated, causing tool calls to silently
fail and two tests to break.

Fixes #6990

* style: ruff format
2026-04-08 14:44:11 +08:00
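A mock shaped like litellm's ModelResponse, as the fix describes (tool calls live at choices[0].message.tool_calls; the tool name and arguments are placeholders):

```python
from unittest.mock import MagicMock

tool_call = MagicMock()
tool_call.function.name = "save_memory"
tool_call.function.arguments = '{"section": "user-profile"}'

# Build the nested choices[0].message.tool_calls structure instead of a
# plain dict, so extraction code exercising the new format finds the calls.
message = MagicMock()
message.content = None
message.tool_calls = [tool_call]
choice = MagicMock()
choice.message = message
response = MagicMock()
response.choices = [choice]

extracted = response.choices[0].message.tool_calls or []
assert extracted[0].function.name == "save_memory"
```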
Timothy b3759db83b refactor(hive): home hive dir structure 2026-04-07 19:21:16 -07:00
Bryan 8308207be8 feat: add light mode support for flowchart, sub-agent panes, and normalize settings modal sizing 2026-04-07 19:11:54 -07:00
Timothy 6b86c602c7 Merge branch 'main' into feature/hive-experimental-comp-pipeline 2026-04-07 18:49:14 -07:00
Bryan d9644eaa39 chore: remove old workspace GUI and dependencies 2026-04-07 18:45:56 -07:00
Bryan 3976ea6934 feat: add home redesign, credentials, and org chart pages 2026-04-07 18:45:38 -07:00
Bryan cc00ae8999 feat: add colony chat and queen DM pages 2026-04-07 18:45:20 -07:00
Bryan 70bf337c03 refactor: update graph and subagent display components 2026-04-07 18:44:55 -07:00
Bryan 6ecdbf47b0 feat: add model switcher, settings modal, and template card 2026-04-07 18:44:07 -07:00
Bryan e0e1abbb64 feat: add sidebar and header components 2026-04-07 18:43:48 -07:00
Bryan cb8c26ee18 feat: add app layout, routing, and global styles 2026-04-07 18:43:25 -07:00
Bryan 3d6beca577 feat: add config and credential API client endpoints 2026-04-07 18:43:05 -07:00
Bryan bed9670395 feat: add colony types, registry, and context providers 2026-04-07 18:42:46 -07:00
Bryan 61bb0b6594 refactor: update session, credential routes and runner 2026-04-07 18:42:25 -07:00
Bryan e7506fcd25 feat: add runtime config and model switching API 2026-04-07 18:42:08 -07:00
Timothy 7cc92eb8c3 fix: queen session start prompt 2026-04-07 18:00:15 -07:00
Timothy 3a70243b82 refactor(queen): use new infra 2026-04-07 17:50:45 -07:00
Richard Tang b701605a62 feat: update queen profile apis 2026-04-07 17:10:42 -07:00
Timothy a5b17a293b refactor: simplify agent loading 2026-04-07 17:03:12 -07:00
Richard Tang 7ad25d986b feat: queen profile 2026-04-07 16:49:34 -07:00
Timothy db572b9be6 fix: mcp registry pipeline stage 2026-04-07 16:15:40 -07:00
Timothy 0ee653a164 fix: agent loading pipeline 2026-04-07 15:20:31 -07:00
RichardTang-Aden c92662bdb1 Merge pull request #6976 from aden-hive/feat/simplify-queen-memory
Simplify queen memory: remove colony memory, keep global only
2026-04-07 13:58:26 -07:00
Richard Tang 19469ff404 chore: lint format 2026-04-07 13:57:05 -07:00
Richard Tang 7fcb51985d fix: edge case for memory recall 2026-04-07 13:55:18 -07:00
Timothy 3c9911c25b refactor: grand clean-up 2026-04-07 13:42:39 -07:00
Richard Tang 3dbd20040a fix: reflection agent runner 2026-04-07 13:07:41 -07:00
Richard Tang c9d62139af feat: add extra logging 2026-04-07 12:45:37 -07:00
Timothy 172b180477 refactor: remove deprecated shims 2026-04-07 12:28:19 -07:00
Richard Tang 6637bc8d96 feat: simplify memory implementation 2026-04-07 12:08:35 -07:00
Timothy 93dc35dcbb refactor(architecture): revamp 2026-04-07 09:19:03 -07:00
Richard Tang 30ad3edfbf docs: readme improvement 2026-04-07 09:18:34 -07:00
Timothy d10912be15 feat: key pool 2026-04-06 16:53:55 -07:00
Timothy b9c5191059 chore: update gitignore 2026-04-06 16:49:13 -07:00
Bryan @ Aden d9037172d8 Merge pull request #6898 from sundaram2021/fix/ast_pow_ddos_mitigation
micro-fix(security): mitigate ast.Pow DoS and enforce safe_eval timeout
2026-04-06 13:36:03 -07:00
Richard Tang df41732e95 chore: enhance log for LLM 2026-04-06 13:30:31 -07:00
Richard Tang cd9a625041 fix: dynamic absolute path and instruction 2026-04-06 13:21:06 -07:00
Richard Tang 420d703138 fix: quickstart extension instructions 2026-04-06 13:11:10 -07:00
Richard Tang 66866e524d fix: remove old new agent button 2026-04-06 13:05:03 -07:00
Rodrigo M.V.S. 33e6c018a3 docs: Add Frontend Dev Workflow subsection to CONTRIBUTING.md (#6523)
* chore: update package-lock.json after npm install

* fix: export validate_agent_path from server module

* fix: remove circular import in server module

* docs: Add Frontend Dev Workflow subsection to CONTRIBUTING.md

* chore: revert accidental package-lock.json changes

* docs: clarify frontend dev requires both backend and dev server

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-04-06 23:16:29 +08:00
Akash 1ac50ab532 feat: add theme toggle and tab improvements (#6062)
Co-authored-by: Akash Kumar <akash369kumar369@gmail.com>
2026-04-06 23:02:26 +08:00
Aashutosh Pandey 4df924d3d7 fix(security): prevent error_middleware from leaking internal exception details to HTTP clients (#6903)
The error_middleware was returning str(e) and type(e).__name__ directly
in JSON responses, which could expose file paths, database connection
strings, API key names, and internal class names to untrusted clients.

Changes:
- Return generic 'Internal server error' message instead of raw exception
- Improve server-side log to include request method and path
- Add unit tests verifying no internal details are leaked

The full exception traceback remains available via logger.exception()
for server-side debugging.

Co-authored-by: Aashutosh Pandey <aashutoshpandey@Aashutoshs-MacBook-Air.local>
2026-04-06 22:50:43 +08:00
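A sketch of the hardened middleware, assuming an aiohttp-style server (the project's actual framework is not shown in this view):

```python
import logging

from aiohttp import web

logger = logging.getLogger(__name__)

@web.middleware
async def error_middleware(request, handler):
    try:
        return await handler(request)
    except web.HTTPException:
        raise  # deliberate HTTP errors pass through unchanged
    except Exception:
        # Full traceback stays server-side via logger.exception(), now
        # including the request method and path:
        logger.exception("Unhandled error: %s %s", request.method, request.path)
        # Generic body: no str(e) or type(e).__name__ reaches the client.
        return web.json_response({"error": "Internal server error"}, status=500)
```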
Sujan Kumar MV 8f2d87cc5d docs(tools): add README for 10 tools (batch 3) (#6913)
* docs(tools): add README for 10 tools (batch 3)

Adds README.md for: supabase_tool, zoom_tool, twitter_tool,
twilio_tool, shopify_tool, snowflake_tool, zendesk_tool,
yahoo_finance_tool, youtube_transcript_tool, docker_hub_tool

Partial fix for #6486

* docs(shopify): fix fulfillment_status value and body_html param name

- fulfillment_status example: "unfulfilled" is not a valid Shopify API
  value, changed to "unshipped"
- tool table: "description" is not the actual param name, it's body_html

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-04-06 22:10:22 +08:00
Leayx 4b795584f6 micro-fix(quickstart): correct npm invocation in powershell script (#6816)
- PowerShell supports direct command invocation without the call operator "&"
- The script used the call operator "&" to invoke `npm install` and `npm run build`, which caused incorrect command parsing when combined with output redirection "2>&1"
- This produced unexpected errors such as "Unknown command: pm", even though the same commands worked when run manually

- Removed the unnecessary call operator "&" and invoked the npm commands directly
- The npm commands now execute correctly within the script, matching standard PowerShell behavior and eliminating the parsing issue
2026-04-06 21:56:59 +08:00
Faryal Rzwan 6024ae4241 docs(tools): add README for 9 tools (batch 1) (#6881)
* docs(tools): add README for huggingface, jira, pinecone, langfuse, linear, mongodb, redis, vercel, confluence

* docs(tools): fix review comments in confluence, mongodb and vercel READMEs

* docs(mongodb): add MONGODB_DATA_SOURCE to setup section

The code uses os.getenv("MONGODB_DATA_SOURCE") in every API request
body as the dataSource field. Without it, requests send an empty
dataSource and fail. Add it back to the setup section.

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-04-06 21:44:51 +08:00
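A sketch of the dataSource requirement that last note documents, assuming an Atlas Data API-style request body (field values are placeholders):

```python
import os

payload = {
    "dataSource": os.getenv("MONGODB_DATA_SOURCE", ""),
    "database": "mydb",
    "collection": "users",
    "filter": {"active": True},
}

# Without MONGODB_DATA_SOURCE set, dataSource is empty and requests fail.
assert payload["dataSource"] != "", "set MONGODB_DATA_SOURCE before calling the tool"
```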
Hundao aaa5d661c3 fix(ci): unbreak main - playwright deps + framework test suite (#6955)
* fix(tools): move playwright back to main dependencies

playwright was moved to the browser extra in c7e85aa9 as part of the GCU
refactor to use a browser extension. But web_scrape_tool still imports
playwright at module level and requires it unconditionally, so CI's
Test Tools job breaks with ModuleNotFoundError.

web_scrape_tool has no fallback without playwright — it's a hard
dependency, not optional. Put it back in main deps.

Fixes CI failure on Test Tools (ubuntu-latest).

* chore: remove dead test_highlights.py script

tools/test_highlights.py is orphaned from the GCU refactor in c7e85aa9:

- imports highlight_coordinate and highlight_element from gcu.browser.highlight,
  but highlight.py was deleted in that refactor
- calls BrowserSession.start(), open_tab(), get_active_page(), stop() — none
  of these methods exist on the current BrowserSession class

The script can't run at all, and it's tripping ruff's I001 import-order
check (fail on Lint CI after cache invalidation).

* test: fix browser/refs tests broken by GCU refactor

Tests were still testing the old Playwright-based API after c7e85aa9
moved GCU to an extension-bridge architecture.

test_refs.py (6 tests):
  Refs system now produces CSS selectors like
  [role="button"][aria-label="Submit"]:nth-of-type(1) for the bridge's
  DOM matcher, instead of Playwright's role=button[name="Submit"] >> nth=0.
  Updated expected values to match. Renamed test_escapes_quotes_in_name to
  test_quoted_name_passes_through and added a comment noting that inner
  quotes aren't currently escaped (follow-up concern).

test_browser_tools_comprehensive.py (4 tests):
  - test_screenshot_full_page: browser_screenshot passes selector=None
    when no selector is provided; update assertion.
  - test_file_upload: browser_upload validates file paths exist on disk.
    Create real tmp files and mock the CDP calls it makes.
  - test_evaluate_with_bare_return: renamed to
    test_evaluate_passes_script_through_to_bridge. IIFE wrapping lives
    in bridge.evaluate, not in the browser_evaluate tool — mocking the
    bridge bypasses the wrapping logic, so the tool just passes the
    script through.
  - test_evaluate_complex_script: browser_evaluate returns bridge's raw
    result (no 'ok' wrapper); check for 'result' key instead.

test_browser_advanced_tools.py (deleted):
  The whole file patched get_session and page.wait_for_function (the old
  Playwright-based API). The bug it guarded against (user text interpolated
  into a JS source string) is architecturally impossible in the new
  bridge-based tools, which send text via structured RPC. Coverage for
  browser_wait exists in test_browser_tools_comprehensive.py.

* test(core): fix event_loop tests broken by hive-v1 refactor

Several framework tests were left failing or hanging after the hive-v1
refactor landed. This un-breaks CI without touching production code.

- Worker auto-escalation: 8 tests were hanging because EventLoopNode
  with event_bus treats non-queen/non-subagent nodes as workers and
  auto-escalates to queen, then blocks on _await_user_input forever
  (no queen in standalone tests). Opt out via is_subagent_mode=True.
- MockConversationStore: added clear() to match the production store
  (storage/conversation_store.py), which event_loop_node.py:425 calls.
- Executor output semantics: result.output now only contains terminal-
  node outputs; two handoff tests now read intermediate outputs from
  result.session_state["data_buffer"].
- Restore filter: test_restore_from_checkpoint needs set_current_phase
  so restore()'s phase_id filter matches.
- Removed two _build_context tests whose target method no longer exists
  (replaced by standalone build_node_context()). Remaining execution_id
  coverage is adequate in TestExecutionId + integration tests.

* style: ruff format + drop em dash in comment

* test(core): fix remaining framework tests broken by hive-v1 refactor

Rounds out the fix started in the previous commit. Full framework
suite now passes (1589 passed, 0 failed).

- conftest.py: force-bind framework.runner submodules (mcp_registry,
  mcp_client, mcp_connection_manager) as attributes on the parent
  package. Without this, pytest monkeypatch.setattr with dotted-string
  paths fails because the attribute walker can't resolve the submodule
  even though __init__.py imports from it. Affects ~25 MCP tests.
- test_queen_memory: _execute_tool() grew a required caller kwarg for
  worker type-restrictions. Pass caller="queen" so path-traversal
  checks run without caller restrictions interfering.
- test_session_manager_worker_handoff: _subscribe_worker_digest was
  removed in the refactor, dropped the dead monkeypatches.
- test_skill_context_protection: NodeConversation now reads _run_id
  in add_tool_result(), so the __new__-based test helper has to
  initialise it.
- test_node_conversation: restore() now filters parts by run_id for
  crash recovery. Renamed the stale test and flipped the assertion
  to match the new filtering semantics.
- test_tool_registry: CONTEXT_PARAMS was updated (workspace_id out,
  profile in). Switched the test's example stripped params.

* docs: drop circular PR reference in test_refs comment

Addresses CodeRabbit nitpick. The comment referenced the PR that was
adding the comment, which becomes a self-reference after merge.
2026-04-05 14:21:32 +08:00
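The conftest force-binding described above follows a general pytest pattern; a sketch, with the package path taken from the commit text:

```python
import importlib

import framework.runner as runner_pkg

for _name in ("mcp_registry", "mcp_client", "mcp_connection_manager"):
    _module = importlib.import_module(f"framework.runner.{_name}")
    # Bind the submodule as an attribute on the parent package, so
    # monkeypatch.setattr("framework.runner.mcp_registry.X", ...) can walk
    # the dotted path; `from submodule import name` in __init__.py does not
    # guarantee the submodule itself is bound.
    setattr(runner_pkg, _name, _module)
```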
Emmanuel Nwanguma 2e5670ace6 docs(tools): add README for 11 tools (batch 2 of 2) (#6887)
Partial fix for #6486

Add README.md for: n8n_tool, obsidian_tool, pagerduty_tool,
pipedrive_tool, plaid_tool, powerbi_tool, quickbooks_tool,
salesforce_tool, sap_tool, terraform_tool, tines_tool
2026-04-05 10:00:14 +08:00
Emmanuel Nwanguma 634658e829 docs(tools): add README for 11 tools (batch 1 of 2) (#6886)
Partial fix for #6486

Add README.md for: aws_s3_tool, azure_sql_tool, cloudinary_tool,
duckduckgo_tool, file_system_toolkits, gitlab_tool,
google_search_console_tool, greenhouse_tool, hubspot_tool,
kafka_tool, microsoft_graph_tool
2026-04-05 09:52:47 +08:00
Richard Tang dc64cc68a1 feat: add a html file for browser extension instruction 2026-04-03 21:52:42 -07:00
RichardTang-Aden e8d56c815d Merge pull request #6905 from aden-hive/feature/hive-v1
Release v0.9.0 — Browser Extension, Queen Memory v2 & Graph Executor Refactor
2026-04-03 21:13:16 -07:00
Richard Tang cc21780e99 test: ignore dummy agent in make 2026-04-03 20:49:12 -07:00
Richard Tang 714de59d2a test: ignore e2e dummy agent tests 2026-04-03 20:47:27 -07:00
Richard Tang ed8d417bef chore: ruff lint 2026-04-03 20:31:14 -07:00
Richard Tang 294df7f066 Merge remote-tracking branch 'origin/main' into feature/hive-v1 2026-04-03 20:21:53 -07:00
Richard Tang b46fe69712 fix: handle KIMI pause turn 2026-04-03 20:21:28 -07:00
Timothy 6e6efb97bd fix: close browser before finishing 2026-04-03 20:12:01 -07:00
Timothy dc4be4f906 fix: remove standalone gcu tool registry 2026-04-03 19:32:54 -07:00
Timothy 9cb20986d2 Merge branch 'feature/queen-lifecycle' into feature/hive-v1 2026-04-03 19:22:58 -07:00
Timothy aab38222db fix: legacy import 2026-04-03 19:16:00 -07:00
Timothy c46082780f feat: gcu extra learnings 2026-04-03 18:45:34 -07:00
Richard Tang ff8123acb9 fix: browser log path 2026-04-03 18:33:14 -07:00
Richard Tang 358b4e1bf2 Merge remote-tracking branch 'origin/feature/hive-v1' into feature/hive-v1 2026-04-03 18:29:31 -07:00
Richard Tang 934c424510 feat: handle gemini tool call tags 2026-04-03 18:29:04 -07:00
Bryan f12db37d75 fix: quickstart launch chrome extensions page 2026-04-03 18:27:48 -07:00
Richard Tang 2b263f6e10 feat: move thinking tags handling on frontend 2026-04-03 18:20:45 -07:00
Timothy 4513f5dcd7 fix: capture random errors 2026-04-03 18:12:29 -07:00
Richard Tang 45cfae5217 feat: add hive llm healthcheck 2026-04-02 14:50:05 -07:00
RichardTang-Aden 4d877469d5 Merge pull request #6195 from Sri-Likhita-adru/fix/stream-transient-retry-cap
fix(llm): separate retry counters in stream() - transient errors used wrong cap
2026-04-02 14:01:00 -07:00
Sundaram Kumar Jha 6022f6c911 refactor: bit estimation formula 2026-04-02 00:44:50 +05:30
Sundaram Kumar Jha dacda3337f test(safe_eval): cover alarm state preservation 2026-04-02 00:12:15 +05:30
Sundaram Kumar Jha 267f797abc fix(security): preserve host alarm state in safe_eval 2026-04-02 00:12:03 +05:30
Sundaram Kumar Jha 42fd1ec8d1 chore: formatted 2026-04-01 23:47:37 +05:30
Sundaram Kumar Jha 81774d5d0e test(safe_eval): cover execution timeout behavior 2026-04-01 23:36:14 +05:30
Sundaram Kumar Jha d1cbfd1e54 fix(security): enforce safe_eval execution timeout 2026-04-01 23:35:41 +05:30
Sundaram Kumar Jha fd71501215 test(safe_eval): add ast.Pow DoS regression coverage 2026-04-01 23:29:02 +05:30
Sundaram Kumar Jha 406bfb23b9 fix(security): bound ast.Pow in safe_eval 2026-04-01 23:28:57 +05:30
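The safe_eval hardening in this series (bound ast.Pow, enforce an execution timeout, preserve the host alarm state) can be sketched as follows; the bound value and helper names are assumptions, and SIGALRM is POSIX-only:

```python
import ast
import signal

MAX_POW_OPERAND = 1_000_000  # illustrative bound; the real limit is not shown

def check_pow(node: ast.AST) -> None:
    """Reject huge ** operands (e.g. 9**9**9) before evaluating."""
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Pow):
        for operand in (node.left, node.right):
            if isinstance(operand, ast.Constant) and isinstance(operand.value, int):
                if abs(operand.value) > MAX_POW_OPERAND:
                    raise ValueError("operand too large for ** in safe_eval")

def run_with_timeout(func, seconds: int = 1):
    """Run func under SIGALRM, preserving any pre-existing host alarm."""
    def _on_alarm(signum, frame):
        raise TimeoutError("safe_eval timed out")

    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    old_remaining = signal.alarm(seconds)  # seconds left on any prior alarm
    try:
        return func()
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, old_handler)
        if old_remaining:
            # Re-arm the host's original alarm (approximate; elapsed time
            # is not subtracted in this sketch).
            signal.alarm(old_remaining)
```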
SRI LIKHITA ADRU 1e2e6e03dd Merge branch 'main' into fix/stream-transient-retry-cap 2026-03-20 09:08:14 -04:00
Sri-Likhita-adru 8b99bb8590 fix(llm): cap transient stream error retries at STREAM_TRANSIENT_MAX_RETRIES=3 2026-03-12 15:37:49 -04:00
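A sketch of the separated retry counters described in these two commits; everything except STREAM_TRANSIENT_MAX_RETRIES = 3 is a hypothetical name:

```python
STREAM_TRANSIENT_MAX_RETRIES = 3  # cap named in the commit above

class TransientStreamError(Exception):
    """Illustrative stand-in for a transient mid-stream failure."""

def stream_with_retries(do_stream, max_retries: int = 5):
    retries = 0            # ordinary retryable failures, capped by max_retries
    transient_retries = 0  # transient errors get their own, smaller cap
    while True:
        try:
            return do_stream()
        except TransientStreamError:
            transient_retries += 1
            if transient_retries > STREAM_TRANSIENT_MAX_RETRIES:
                raise
        except Exception:
            retries += 1
            if retries > max_retries:
                raise
```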
712 changed files with 47989 additions and 49648 deletions
(changed file: +53)
@@ -1,4 +1,57 @@
{
"permissions": {
"allow": [
"Bash(grep -n \"_is_context_too_large_error\" core/framework/agent_loop/agent_loop.py core/framework/agent_loop/internals/*.py)",
"Read(//^class/ {cls=$3} /def test_/**)",
"Read(//^ @pytest.mark.asyncio/{getline n; print NR\": \"n} /^ def test_/**)",
"Bash(python3)",
"Bash(grep -nE 'Tool\\\\\\(\\\\s*$|name=\"[a-z_]+\",' core/framework/tools/queen_lifecycle_tools.py)",
"Bash(awk -F'\"' '{print $2}')",
"Bash(grep -n \"create_colony\\\\|colony-spawn\\\\|colony_spawn\" /home/timothy/aden/hive/core/framework/agents/queen/nodes/__init__.py /home/timothy/aden/hive/core/framework/tools/*.py)",
"Bash(git stash:*)",
"Bash(python3 -c \"import sys,json; d=json.loads\\(sys.stdin.read\\(\\)\\); print\\('keys:', list\\(d.keys\\(\\)\\)[:10]\\)\")",
"Bash(python3 -c ':*)",
"Bash(uv run:*)",
"Read(//tmp/**)",
"Bash(grep -n \"useColony\\\\|const { queens, queenProfiles\" /home/timothy/aden/hive/core/frontend/src/pages/queen-dm.tsx)",
"Bash(awk 'NR==385,/\\\\}, \\\\[/' /home/timothy/aden/hive/core/frontend/src/pages/queen-dm.tsx)",
"Bash(xargs -I{} sh -c 'if ! grep -q \"^import base64\\\\|^from base64\" \"{}\"; then echo \"MISSING: {}\"; fi')",
"Bash(find /home/timothy/aden/hive/core/framework -name \"*.py\" -type f -exec grep -l \"FileConversationStore\\\\|class.*ConversationStore\" {} \\\\;)",
"Bash(find /home/timothy/aden/hive/core/framework -name \"*.py\" -exec grep -l \"run_parallel_workers\\\\|create_colony\" {} \\\\;)",
"Bash(awk '/^ async def execute\\\\\\(self, ctx: AgentContext\\\\\\)/,/^ async def [a-z_]+/ {print NR\": \"$0}' /home/timothy/aden/hive/core/framework/agent_loop/agent_loop.py)",
"Bash(grep -r \"max_concurrent_workers\\\\|max_depth\\\\|recursion\\\\|spawn.*bomb\" /home/timothy/aden/hive/core/framework/host/*.py)",
"Bash(wc -l /home/timothy/aden/hive/tools/src/gcu/browser/*.py /home/timothy/aden/hive/tools/src/gcu/browser/tools/*.py)",
"Bash(file /tmp/gcu_verify/*.png)",
"Bash(ps -eo pid,cmd)",
"Bash(ps -o pid,lstart,cmd -p 746640)",
"Bash(kill 746636)",
"Bash(ps -eo pid,lstart,cmd)",
"Bash(grep -E \"^d|\\\\.py$\")",
"Bash(grep -E \"\\\\.\\(ts|tsx\\)$\")",
"Bash(xargs cat:*)",
"Bash(find /home/timothy/aden/hive -path \"*/.venv\" -prune -o -name \"*.py\" -type f -exec grep -l \"frontend\\\\|UI\\\\|terminal\\\\|interactive\\\\|TUI\" {} \\\\;)",
"Bash(wc -l /home/timothy/.hive/backup/*/SKILL.md)",
"Bash(awk -F'::' '{print $1}')",
"Bash(wait)",
"Bash(pkill -f \"pytest.*test_event_loop_node\")",
"Bash(pkill -f \"pytest.*TestToolConcurrency\")",
"Bash(grep -n \"def.*discover\\\\|/api/agents\\\\|agents_discover\" /home/timothy/aden/hive/core/framework/server/*.py)",
"Bash(bun run:*)",
"Bash(npx eslint:*)",
"Bash(npm run:*)",
"Bash(npm test:*)",
"Bash(grep -n \"PIL\\\\|Image\\\\|to_thread\\\\|run_in_executor\" /home/timothy/aden/hive/tools/src/gcu/browser/*.py /home/timothy/aden/hive/tools/src/gcu/browser/tools/*.py)",
"WebFetch(domain:docs.litellm.ai)",
"Bash(cat /home/timothy/aden/hive/.venv/lib/python3.11/site-packages/litellm-*.dist-info/METADATA)",
"Bash(find \"/home/timothy/.hive/agents/queens/queen_brand_design/sessions/session_20260415_100751_d49f4c28/\" -type f -name \"*.json*\" -exec grep -l \"协日\" {} \\\\;)",
"Bash(grep -v ':0$')"
],
"additionalDirectories": [
"/home/timothy/.hive/skills/writing-hive-skills",
"/tmp",
"/home/timothy/.hive/skills"
]
},
"hooks": {
"PostToolUse": [
{
@@ -9,7 +9,6 @@ Fix: Add wait_for_selector between scroll calls
import asyncio
import sys
import time
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))
@@ -36,7 +35,7 @@ async def test_twitter_lazy_scroll():
if bridge.is_connected:
print("✓ Extension connected!")
break
print(f"Waiting for extension... ({i+1}/10)")
print(f"Waiting for extension... ({i + 1}/10)")
else:
print("✗ Extension not connected")
return
@@ -58,7 +57,7 @@ async def test_twitter_lazy_scroll():
# Count initial tweets
initial_count = await bridge.evaluate(
tab_id,
'(function() { return document.querySelectorAll(\'[data-testid="tweet"]\').length; })()'
"(function() { return document.querySelectorAll('[data-testid=\"tweet\"]').length; })()",
)
print(f"Initial tweet count: {initial_count.get('result', 0)}")
@@ -70,7 +69,7 @@ async def test_twitter_lazy_scroll():
print("\n--- Scrolling with waits ---")
for i in range(3):
result = await bridge.scroll(tab_id, "down", 500)
print(f" Scroll {i+1}: {result.get('method', 'unknown')} method")
print(f" Scroll {i + 1}: {result.get('method', 'unknown')} method")
# Wait for new content to load
await asyncio.sleep(2)
@@ -78,20 +77,20 @@ async def test_twitter_lazy_scroll():
# Count tweets after scroll
count_result = await bridge.evaluate(
tab_id,
'(function() { return document.querySelectorAll(\'[data-testid="tweet"]\').length; })()'
"(function() { return document.querySelectorAll('[data-testid=\"tweet\"]').length; })()",
)
count = count_result.get('result', 0)
count = count_result.get("result", 0)
print(f" Tweet count after scroll: {count}")
# Final count
final_count = await bridge.evaluate(
tab_id,
'(function() { return document.querySelectorAll(\'[data-testid="tweet"]\').length; })()'
"(function() { return document.querySelectorAll('[data-testid=\"tweet\"]').length; })()",
)
final = final_count.get('result', 0)
initial = initial_count.get('result', 0)
final = final_count.get("result", 0)
initial = initial_count.get("result", 0)
print(f"\n--- Results ---")
print("\n--- Results ---")
print(f"Initial tweets: {initial}")
print(f"Final tweets: {final}")
@@ -9,7 +9,6 @@ Fix: Find visible modal container (highest z-index scrollable), scroll that
import asyncio
import sys
import time
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))
@@ -60,7 +59,7 @@ async def test_modal_scroll():
# Click button to open modal
print("\n--- Opening modal ---")
# Find and click the "Open Modal" button
result = await bridge.click(tab_id, '.ws-btn', timeout_ms=5000)
result = await bridge.click(tab_id, ".ws-btn", timeout_ms=5000)
print(f"Click result: {result}")
await asyncio.sleep(1)
@@ -52,7 +52,9 @@ async def test_overlay_click():
<head><title>Overlay Test</title></head>
<body>
<button id="target-btn" onclick="alert('Clicked!')">Click Me</button>
<div id="overlay" style="position:fixed;top:0;left:0;width:100%;height:100%;background:rgba(0,0,0,0.3);z-index:1000;"></div>
<div id="overlay" style="position:fixed;top:0;left:0;
width:100%;height:100%;
background:rgba(0,0,0,0.3);z-index:1000;"></div>
<script>
window.clickCount = 0;
document.getElementById('target-btn').addEventListener('click', () => {
@@ -65,6 +67,7 @@ async def test_overlay_click():
# Navigate to data URL
import base64
data_url = f"data:text/html;base64,{base64.b64encode(test_html.encode()).decode()}"
await bridge.navigate(tab_id, data_url, wait_until="load")
@@ -91,7 +94,7 @@ async def test_overlay_click():
targetElement: btn.tagName
};
})();
"""
""",
)
print(f"Coverage check: {coverage_check.get('result', {})}")
@@ -100,10 +103,7 @@ async def test_overlay_click():
print(f"Click result: {click_result}")
# Check if click registered
count_result = await bridge.evaluate(
tab_id,
"(function() { return window.clickCount; })()"
)
count_result = await bridge.evaluate(tab_id, "(function() { return window.clickCount; })()")
count = count_result.get("result", 0)
print(f"Click count after CDP click: {count}")
@@ -10,7 +10,6 @@ Fix: Use piercing selector (host >>> target) or traverse shadow roots
import asyncio
import sys
import base64
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))
@@ -100,7 +99,7 @@ async def test_shadow_dom():
});
return { count: hosts.length, hosts };
})();
"""
""",
)
print(f"Shadow DOM detection: {detection.get('result', {})}")
@@ -126,15 +125,12 @@ async def test_shadow_dom():
}
return { success: false, error: 'Button not found' };
})();
"""
""",
)
print(f"JS click result: {click_result.get('result', {})}")
# Verify click was registered
count_result = await bridge.evaluate(
tab_id,
"(function() { return window.shadowClickCount || 0; })()"
)
count_result = await bridge.evaluate(tab_id, "(function() { return window.shadowClickCount || 0; })()")
count = count_result.get("result") or 0
print(f"Shadow click count: {count}")
@@ -10,7 +10,6 @@ Fix: Focus via JavaScript, use execCommand('insertText') or Input.dispatchKeyEve
import asyncio
import sys
import base64
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))
@@ -54,10 +53,14 @@ async def test_contenteditable():
<h2>ContentEditable Test</h2>
<h3>1. Simple contenteditable div</h3>
<div id="editor1" contenteditable="true" style="border:1px solid #ccc;padding:10px;min-height:50px;">Start text</div>
<div id="editor1" contenteditable="true"
style="border:1px solid #ccc;padding:10px;
min-height:50px;">Start text</div>
<h3>2. Rich text editor (like Notion)</h3>
<div id="editor2" contenteditable="true" style="border:1px solid #ccc;padding:10px;min-height:50px;">
<div id="editor2" contenteditable="true"
style="border:1px solid #ccc;padding:10px;
min-height:50px;">
<p>Type here...</p>
</div>
@@ -106,7 +109,7 @@ async def test_contenteditable():
ids: Array.from(editables).map(el => el.id)
};
})();
"""
""",
)
print(f"Contenteditable detection: {detection.get('result', {})}")
@@ -115,8 +118,7 @@ async def test_contenteditable():
await bridge.click(tab_id, "#input1")
await bridge.type_text(tab_id, "#input1", "Hello input")
input_result = await bridge.evaluate(
tab_id,
"(function() { return document.getElementById('input1').value; })()"
tab_id, "(function() { return document.getElementById('input1').value; })()"
)
print(f"Input value: {input_result.get('result', '')}")
@@ -126,7 +128,7 @@ async def test_contenteditable():
await bridge.type_text(tab_id, "#editor1", "Hello contenteditable", clear_first=True)
editor_result = await bridge.evaluate(
tab_id,
"(function() { return document.getElementById('editor1').innerText; })()"
"(function() { return document.getElementById('editor1').innerText; })()",
)
print(f"Editor1 innerText: {editor_result.get('result', '')}")
@@ -142,7 +144,7 @@ async def test_contenteditable():
document.execCommand('insertText', false, 'Hello from execCommand');
return editor.innerText;
})();
"""
""",
)
print(f"Editor2 after execCommand: {insert_result.get('result', '')}")
@@ -10,7 +10,6 @@ Fix: Add delay_ms between keystrokes
import asyncio
import sys
import base64
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))
@@ -87,7 +86,26 @@ async def test_autocomplete():
<div id="log" style="margin-top:20px;font-family:monospace;"></div>
<script>
const countries = ["Afghanistan","Albania","Algeria","Andorra","Angola","Argentina","Armenia","Australia","Austria","Azerbaijan","Bahamas","Bahrain","Bangladesh","Belarus","Belgium","Belize","Benin","Bhutan","Bolivia","Brazil","Canada","China","Colombia","Denmark","Egypt","France","Germany","India","Indonesia","Italy","Japan","Mexico","Netherlands","Nigeria","Norway","Pakistan","Peru","Philippines","Poland","Portugal","Russia","Spain","Sweden","Switzerland","Thailand","Turkey","Ukraine","United Kingdom","United States","Vietnam"];
const countries = [
"Afghanistan","Albania","Algeria",
"Andorra","Angola","Argentina",
"Armenia","Australia","Austria",
"Azerbaijan","Bahamas","Bahrain",
"Bangladesh","Belarus","Belgium",
"Belize","Benin","Bhutan",
"Bolivia","Brazil","Canada",
"China","Colombia","Denmark",
"Egypt","France","Germany",
"India","Indonesia","Italy",
"Japan","Mexico","Netherlands",
"Nigeria","Norway","Pakistan",
"Peru","Philippines","Poland",
"Portugal","Russia","Spain",
"Sweden","Switzerland","Thailand",
"Turkey","Ukraine",
"United Kingdom","United States",
"Vietnam"
];
const input = document.getElementById('search');
const log = document.getElementById('log');
@@ -126,11 +144,15 @@ async def test_autocomplete():
div.setAttribute('class', 'autocomplete-items');
this.parentNode.appendChild(div);
countries.filter(c => c.substr(0, val.length).toUpperCase() === val.toUpperCase())
.slice(0, 5)
.forEach(country => {
countries.filter(
c => c.substr(0, val.length).toUpperCase()
=== val.toUpperCase()
).slice(0, 5).forEach(country => {
const item = document.createElement('div');
item.innerHTML = '<strong>' + country.substr(0, val.length) + '</strong>' + country.substr(val.length);
item.innerHTML = '<strong>'
+ country.substr(0, val.length)
+ '</strong>'
+ country.substr(val.length);
item.addEventListener('click', function() {
input.value = country;
closeAllLists();
@@ -172,17 +194,13 @@ async def test_autocomplete():
await asyncio.sleep(0.5)
fast_result = await bridge.evaluate(
tab_id,
"(function() { return document.getElementById('search').value; })()"
tab_id, "(function() { return document.getElementById('search').value; })()"
)
fast_value = fast_result.get("result", "")
print(f"Value after fast typing: '{fast_value}'")
# Check events
events_result = await bridge.evaluate(
tab_id,
"(function() { return window.inputEvents; })()"
)
events_result = await bridge.evaluate(tab_id, "(function() { return window.inputEvents; })()")
print(f"Events logged: {events_result.get('result', [])}")
# Test 2: Slow typing (with delay) - should work
@@ -192,8 +210,7 @@ async def test_autocomplete():
await asyncio.sleep(0.5)
slow_result = await bridge.evaluate(
tab_id,
"(function() { return document.getElementById('search').value; })()"
tab_id, "(function() { return document.getElementById('search').value; })()"
)
slow_value = slow_result.get("result", "")
print(f"Value after slow typing: '{slow_value}'")
@@ -201,7 +218,7 @@ async def test_autocomplete():
# Check if dropdown appeared
dropdown_result = await bridge.evaluate(
tab_id,
"(function() { return document.querySelectorAll('.autocomplete-items div').length; })()"
"(function() { return document.querySelectorAll('.autocomplete-items div').length; })()",
)
dropdown_count = dropdown_result.get("result", 0)
print(f"Dropdown items: {dropdown_count}")
@@ -87,10 +87,7 @@ async def test_huge_dom():
await bridge.navigate(tab_id, data_url, wait_until="load")
# Count elements
count_result = await bridge.evaluate(
tab_id,
"(function() { return document.querySelectorAll('*').length; })()"
)
count_result = await bridge.evaluate(tab_id, "(function() { return document.querySelectorAll('*').length; })()")
elem_count = count_result.get("result", 0)
print(f"DOM elements: {elem_count}")
@@ -126,10 +123,7 @@ async def test_huge_dom():
await bridge.navigate(tab_id, "https://www.linkedin.com/feed", wait_until="load", timeout_ms=30000)
await asyncio.sleep(2)
count_result = await bridge.evaluate(
tab_id,
"(function() { return document.querySelectorAll('*').length; })()"
)
count_result = await bridge.evaluate(tab_id, "(function() { return document.querySelectorAll('*').length; })()")
elem_count = count_result.get("result", 0)
print(f"LinkedIn DOM elements: {elem_count}")
@@ -11,7 +11,6 @@ Fix: Use wait_until="networkidle" or wait_for_selector
import asyncio
import sys
import time
import base64
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))
@@ -82,9 +81,13 @@ async def test_spa_navigation():
// Render content
const content = {
home: '<h1>Home Page</h1><p>Welcome to the SPA!</p><button id="home-btn">Home Action</button>',
about: '<h1>About Page</h1><p>This is a simulated SPA.</p><button id="about-btn">About Action</button>',
contact: '<h1>Contact Page</h1><p>Contact us at test@example.com</p><button id="contact-btn">Contact Action</button>'
home: '<h1>Home Page</h1><p>Welcome!</p>'
+ '<button id="home-btn">Home Action</button>',
about: '<h1>About Page</h1><p>Simulated SPA.</p>'
+ '<button id="about-btn">About Action</button>',
contact: '<h1>Contact Page</h1>'
+ '<p>Contact us at test@example.com</p>'
+ '<button id="contact-btn">Contact Action</button>'
};
document.getElementById('app').innerHTML = content[page] || '<h1>404</h1>';
@@ -121,7 +124,7 @@ async def test_spa_navigation():
# Check content immediately
content = await bridge.evaluate(
tab_id,
"(function() { return document.getElementById('app').innerText; })()"
"(function() { return document.getElementById('app').innerText; })()",
)
print(f"Content immediately after load: '{content.get('result', '')}'")
@@ -137,7 +140,7 @@ async def test_spa_navigation():
# Check content after wait
content_after = await bridge.evaluate(
tab_id,
"(function() { return document.getElementById('app').innerText; })()"
"(function() { return document.getElementById('app').innerText; })()",
)
print(f"Content after wait: '{content_after.get('result', '')}'")
@@ -151,7 +154,7 @@ async def test_spa_navigation():
# Check if content changed
about_content = await bridge.evaluate(
tab_id,
"(function() { return document.getElementById('app').innerText; })()"
"(function() { return document.getElementById('app').innerText; })()",
)
print(f"Content after SPA nav: '{about_content.get('result', '')}'")
@@ -167,7 +170,7 @@ async def test_spa_navigation():
# Check content immediately
content_networkidle = await bridge.evaluate(
tab_id,
"(function() { return document.getElementById('app').innerText; })()"
"(function() { return document.getElementById('app').innerText; })()",
)
print(f"Content after networkidle: '{content_networkidle.get('result', '')}'")
@@ -44,7 +44,7 @@ def check_png(data: str) -> bool:
"""Verify that base64 data decodes to a valid PNG."""
try:
raw = base64.b64decode(data)
return raw[:8] == b'\x89PNG\r\n\x1a\n'
return raw[:8] == b"\x89PNG\r\n\x1a\n"
except Exception:
return False
@@ -218,7 +218,7 @@ async def main():
if bridge.is_connected:
print("✓ Extension connected!")
break
print(f"Waiting for extension... ({i+1}/10)")
print(f"Waiting for extension... ({i + 1}/10)")
else:
print("✗ Extension not connected. Ensure Chrome with Beeline extension is running.")
return
@@ -97,7 +97,7 @@ async def test_problematic_site(bridge: BeelineBridge, tab_id: int) -> dict:
});
return results;
})();
"""
""",
)
print(f" Before scroll: {before.get('result', {})}")
@@ -126,7 +126,7 @@ async def test_problematic_site(bridge: BeelineBridge, tab_id: int) -> dict:
});
return results;
})();
"""
""",
)
print(f" After scroll: {after.get('result', {})}")
@@ -194,7 +194,7 @@ async def detect_root_cause(bridge: BeelineBridge, tab_id: int) -> dict:
largest: candidates[0]
};
})();
"""
""",
)
detections["nested_scroll"] = scroll_check.get("result", {})
print(f" Nested scroll containers: {detections['nested_scroll']}")
@@ -212,7 +212,7 @@ async def detect_root_cause(bridge: BeelineBridge, tab_id: int) -> dict:
});
return { count: withShadow.length, elements: withShadow.slice(0, 5) };
})();
"""
""",
)
detections["shadow_dom"] = shadow_check.get("result", {})
print(f" Shadow DOM: {detections['shadow_dom']}")
@@ -225,7 +225,7 @@ async def detect_root_cause(bridge: BeelineBridge, tab_id: int) -> dict:
const iframes = document.querySelectorAll('iframe');
return { count: iframes.length };
})();
"""
""",
)
detections["iframes"] = iframe_check.get("result", {})
print(f" iframes: {detections['iframes']}")
@@ -240,7 +240,7 @@ async def detect_root_cause(bridge: BeelineBridge, tab_id: int) -> dict:
body_children: document.body.children.length
};
})();
"""
""",
)
detections["dom_size"] = dom_check.get("result", {})
print(f" DOM size: {detections['dom_size']}")
@@ -256,7 +256,7 @@ async def detect_root_cause(bridge: BeelineBridge, tab_id: int) -> dict:
angular: !!document.querySelector('[ng-app], [ng-version]')
};
})();
"""
""",
)
detections["frameworks"] = framework_check.get("result", {})
print(f" Frameworks: {detections['frameworks']}")
@@ -289,7 +289,7 @@ async def main():
if bridge.is_connected:
print("✓ Extension connected!")
break
print(f"Waiting for extension... ({i+1}/10)")
print(f"Waiting for extension... ({i + 1}/10)")
else:
print("✗ Extension not connected. Ensure Chrome with Beeline extension is running.")
return
+1 -1
@@ -63,7 +63,7 @@ jobs:
working-directory: core
run: |
uv sync
uv run pytest tests/ -v
uv run pytest tests/ -v --ignore=tests/dummy_agents
test-tools:
name: Test Tools (${{ matrix.os }})
+3
@@ -70,6 +70,8 @@ tmp/
temp/
exports/*
exports.old*
artifacts/*
.claude/settings.local.json
@@ -79,3 +81,4 @@ core/tests/*dumps/*
screenshots/*
.gemini/*
.coverage
@@ -0,0 +1,9 @@
{"type": "connection", "event": "connect", "ts": "2026-04-04T01:10:38.245667+00:00", "profile": "default"}
{"type": "connection", "event": "hello", "details": {"version": "1.0"}, "ts": "2026-04-04T01:10:38.247207+00:00", "profile": "default"}
{"type": "connection", "event": "disconnect", "ts": "2026-04-04T01:11:57.148273+00:00", "profile": "default"}
{"type": "connection", "event": "connect", "ts": "2026-04-04T01:12:09.162378+00:00", "profile": "default"}
{"type": "connection", "event": "hello", "details": {"version": "1.0"}, "ts": "2026-04-04T01:12:09.163899+00:00", "profile": "default"}
{"type": "connection", "event": "disconnect", "ts": "2026-04-04T01:15:12.826042+00:00", "profile": "default"}
{"type": "connection", "event": "connect", "ts": "2026-04-04T01:15:30.842533+00:00", "profile": "default"}
{"type": "connection", "event": "hello", "details": {"version": "1.0"}, "ts": "2026-04-04T01:15:30.845025+00:00", "profile": "default"}
{"type": "tool_call", "tool": "browser_stop", "params": {"profile": "gcu-browser-worker:3"}, "result": {"ok": true, "status": "not_running", "profile": "gcu-browser-worker:3"}, "ok": true, "duration_ms": 0.01, "ts": "2026-04-04T01:29:04.294954+00:00", "profile": "default"}
+19 -3
@@ -333,6 +333,22 @@ make test-live # Run live API integration tests (requires credentials)
- **WebSocket** for real-time updates
- **Tailwind CSS** for styling
### Frontend Dev Workflow
> **Note:** `./quickstart.sh` handles the full setup including the web UI.
> The commands below are for contributors iterating on the frontend code after
> initial setup is complete.
```bash
# Start the backend server
hive serve
# In a separate terminal, run the frontend dev server with hot-reload
cd core/frontend
npm install # only needed after dependency changes
npm run dev
```
### Useful Development Commands
```bash
@@ -943,7 +959,7 @@ uv run pytest -m "not live"
**Unit Test**
```python
import pytest
from framework.graph.node import Node
from framework.orchestrator import NodeSpec as Node
def test_node_creation():
node = Node(id="test", name="Test Node", node_type="event_loop")
@@ -961,8 +977,8 @@ async def test_node_execution():
**Integration Test**
```python
import pytest
from framework.graph.executor import GraphExecutor
from framework.graph.node import Node
from framework.orchestrator.orchestrator import Orchestrator as GraphExecutor
from framework.orchestrator import NodeSpec as Node
@pytest.mark.asyncio
async def test_graph_execution_with_multiple_nodes():
+2 -2
@@ -28,7 +28,7 @@ check: ## Run all checks without modifying files (CI-safe)
cd tools && uv run ruff format --check .
test: ## Run all tests (core + tools, excludes live)
cd core && uv run python -m pytest tests/ -v
cd core && uv run python -m pytest tests/ -v --ignore=tests/dummy_agents
cd tools && uv run python -m pytest -v
test-tools: ## Run tool tests only (mocked, no credentials needed)
@@ -38,7 +38,7 @@ test-live: ## Run live integration tests (requires real API credentials)
cd tools && uv run python -m pytest -m live -s -o "addopts=" --log-cli-level=INFO
test-all: ## Run everything including live tests
cd core && uv run python -m pytest tests/ -v
cd core && uv run python -m pytest tests/ -v --ignore=tests/dummy_agents
cd tools && uv run python -m pytest -v
cd tools && uv run python -m pytest -m live -s -o "addopts=" --log-cli-level=INFO
+12 -151
@@ -1,5 +1,5 @@
<p align="center">
<img width="100%" alt="Hive Banner" src="https://github.com/user-attachments/assets/a027429b-5d3c-4d34-88e4-0feaeaabbab3" />
<img width="100%" alt="Hive Banner" src="https://asset.acho.io/github/img/banner.gif" />
</p>
<p align="center">
@@ -40,7 +40,16 @@
## Overview
Hive is a runtime harness for AI agents in production. You describe your goal in natural language; a coding agent (the queen) generates the agent graph and connection code to achieve it. During execution, the harness manages state isolation, checkpoint-based crash recovery, cost enforcement, and real-time observability. When agents fail, the framework captures failure data, evolves the graph through the coding agent, and redeploys automatically. Built-in human-in-the-loop nodes, browser control, credential management, and parallel execution give you production reliability without sacrificing adaptability.
OpenHive is a zero-setup, model-agnostic execution harness that dynamically generates multi-agent topologies to tackle complex, long-running business workflows without orchestration boilerplate. You define your objective; the runtime compiles a strict, graph-based execution DAG that safely coordinates specialized agents working in parallel. Backed by persistent, role-based memory that evolves with your project's context, OpenHive provides deterministic fault tolerance, deep state observability, and seamless asynchronous execution across whichever LLMs you plug in.
## Features
- ✅ Multi-Agent Coordination for parallel task execution
- ✅ Graph-based execution for recurring and complex processes
- ✅ Role-based memory that evolves with your projects
- ✅ Zero Setup - No technical configuration required
- ✅ General Compute Use and Browser Use with Native Extension
- ✅ Custom Model Support
Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.
@@ -51,7 +60,7 @@ https://github.com/user-attachments/assets/bf10edc3-06ba-48b6-98ba-d069b15fb69d
## Who Is Hive For?
Hive is the harness layer for teams moving AI agents from prototype to production. Models are getting better on their own — the bottleneck is the infrastructure around them: state management, failure recovery, cost control, and observability.
Hive is the multi-agent harness layer for teams moving AI agents from prototype to production. Single agents like Openclaw and Cowork handle personal tasks well but lack the rigor to fulfill business processes.
Hive is a good fit if you:
@@ -139,17 +148,6 @@ Now you can run an agent by selecting the agent (either an existing agent or exa
<img width="2549" height="1174" alt="Screenshot 2026-03-12 at 9 27 36PM" src="https://github.com/user-attachments/assets/7c7d30fa-9ceb-4c23-95af-b1caa405547d" />
## Features
- **Browser-Use** - Control the browser on your computer to achieve hard tasks
- **Parallel Execution** - Execute the generated graph in parallel. This way you can have multiple agents completing the jobs for you
- **[Goal-Driven Generation](docs/key_concepts/goals_outcome.md)** - Define objectives in natural language; the coding agent generates the agent graph and connection code to achieve them
- **[Adaptiveness](docs/key_concepts/evolution.md)** - Framework captures failures, calibrates according to the objectives, and evolves the agent graph
- **[Dynamic Node Connections](docs/key_concepts/graph.md)** - No predefined edges; connection code is generated by any capable LLM based on your goals
- **SDK-Wrapped Nodes** - Every node gets a shared data buffer, local RLM memory, monitoring, tools, and LLM access out of the box
- **[Human-in-the-Loop](docs/key_concepts/graph.md#human-in-the-loop)** - Intervention nodes that pause execution for human input with configurable timeouts and escalation
- **Real-time Observability** - WebSocket streaming for live monitoring of agent execution, decisions, and node-to-node communication
## Integration
<a href="https://github.com/aden-hive/hive/tree/main/tools/src/aden_tools/tools"><img width="100%" alt="Integration" src="https://github.com/user-attachments/assets/a1573f93-cf02-4bb8-b3d5-b305b05b1e51" /></a>
@@ -194,18 +192,6 @@ flowchart LR
style V6 fill:#fff,stroke:#ed8c00,stroke-width:1px,color:#cc5d00
```
### The Hive Advantage
| Typical Agent Frameworks | Hive |
| -------------------------- | -------------------------------------- |
| Focus on model orchestration | **Production harness**: state, recovery, observability |
| Hardcode agent workflows | Describe goals in natural language |
| Manual graph definition | Auto-generated agent graphs |
| Reactive error handling | Outcome-evaluation and adaptiveness |
| Static tool configurations | Dynamic SDK-wrapped nodes |
| Separate monitoring setup | Built-in real-time observability |
| DIY budget management | Integrated cost controls & degradation |
### How It Works
1. **[Define Your Goal](docs/key_concepts/goals_outcome.md)** → Describe what you want to achieve in plain English
@@ -221,131 +207,6 @@ flowchart LR
- [Configuration Guide](docs/configuration.md) - All configuration options
- [Architecture Overview](docs/architecture/README.md) - System design and structure
## Roadmap
Aden Hive Agent Framework aims to help developers build outcome-oriented, self-adaptive agents. See [roadmap.md](docs/roadmap.md) for details.
```mermaid
flowchart TB
%% Main Entity
User([User])
%% =========================================
%% EXTERNAL EVENT SOURCES
%% =========================================
subgraph ExtEventSource [External Event Source]
E_Sch["Schedulers"]
E_WH["Webhook"]
E_SSE["SSE"]
end
%% =========================================
%% SYSTEM NODES
%% =========================================
subgraph WorkerBees [Worker Bees]
WB_C["Conversation"]
WB_SP["System prompt"]
subgraph Graph [Graph]
direction TB
N1["Node"] --> N2["Node"] --> N3["Node"]
N1 -.-> AN["Active Node"]
N2 -.-> AN
N3 -.-> AN
%% Nested Event Loop Node
subgraph EventLoopNode [Event Loop Node]
ELN_L["listener"]
ELN_SP["System Prompt<br/>(Task)"]
ELN_EL["Event loop"]
ELN_C["Conversation"]
end
end
end
subgraph JudgeNode [Judge]
J_C["Criteria"]
J_P["Principles"]
J_EL["Event loop"] <--> J_S["Scheduler"]
end
subgraph QueenBee [Queen Bee]
QB_SP["System prompt"]
QB_EL["Event loop"]
QB_C["Conversation"]
end
subgraph Infra [Infra]
SA["Sub Agent"]
TR["Tool Registry"]
WTM["Write through Conversation Memory<br/>(Logs/RAM/Harddrive)"]
SM["Shared Memory<br/>(State/Harddrive)"]
EB["Event Bus<br/>(RAM)"]
CS["Credential Store<br/>(Harddrive/Cloud)"]
end
subgraph PC [PC]
B["Browser"]
CB["Codebase<br/>v 0.0.x ... v n.n.n"]
end
%% =========================================
%% CONNECTIONS & DATA FLOW
%% =========================================
%% External Event Routing
E_Sch --> ELN_L
E_WH --> ELN_L
E_SSE --> ELN_L
ELN_L -->|"triggers"| ELN_EL
%% User Interactions
User -->|"Talk"| WB_C
User -->|"Talk"| QB_C
User -->|"Read/Write Access"| CS
%% Inter-System Logic
ELN_C <-->|"Mirror"| WB_C
WB_C -->|"Focus"| AN
WorkerBees -->|"Inquire"| JudgeNode
JudgeNode -->|"Approve"| WorkerBees
%% Judge Alignments
J_C <-.->|"aligns"| WB_SP
J_P <-.->|"aligns"| QB_SP
%% Escalate path
J_EL -->|"Report (Escalate)"| QB_EL
%% Pub/Sub Logic
AN -->|"publish"| EB
EB -->|"subscribe"| QB_C
%% Infra and Process Spawning
ELN_EL -->|"Spawn"| SA
SA -->|"Inform"| ELN_EL
SA -->|"Starts"| B
B -->|"Report"| ELN_EL
TR -->|"Assigned"| ELN_EL
CB -->|"Modify Worker Bee"| WB_C
%% =========================================
%% SHARED MEMORY & LOGS ACCESS
%% =========================================
%% Worker Bees Access (link to node inside Graph subgraph)
AN <-->|"Read/Write"| WTM
AN <-->|"Read/Write"| SM
%% Queen Bee Access
QB_C <-->|"Read/Write"| WTM
QB_EL <-->|"Read/Write"| SM
%% Credentials Access
CS -->|"Read Access"| QB_C
```
## Contributing
We welcome contributions from the community! We're especially looking for help building tools, integrations, and example agents for the framework ([check #2805](https://github.com/aden-hive/hive/issues/2805)). If you're interested in extending its functionality, this is the perfect place to start. Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
+7 -21
@@ -52,9 +52,7 @@ _DEFAULT_REDIRECT_PORT = 51121
# This project reverse-engineered and published the public OAuth credentials
# for Google's Antigravity/Cloud Code Assist API.
# Source: https://github.com/NoeFabris/opencode-antigravity-auth
_CREDENTIALS_URL = (
"https://raw.githubusercontent.com/NoeFabris/opencode-antigravity-auth/dev/src/constants.ts"
)
_CREDENTIALS_URL = "https://raw.githubusercontent.com/NoeFabris/opencode-antigravity-auth/dev/src/constants.ts"
# Cached credentials fetched from public source
_cached_client_id: str | None = None
@@ -68,9 +66,7 @@ def _fetch_credentials_from_public_source() -> tuple[str | None, str | None]:
return _cached_client_id, _cached_client_secret
try:
req = urllib.request.Request(
_CREDENTIALS_URL, headers={"User-Agent": "Hive-Antigravity-Auth/1.0"}
)
req = urllib.request.Request(_CREDENTIALS_URL, headers={"User-Agent": "Hive-Antigravity-Auth/1.0"})
with urllib.request.urlopen(req, timeout=10) as resp:
content = resp.read().decode("utf-8")
import re
@@ -168,10 +164,7 @@ class OAuthCallbackHandler(BaseHTTPRequestHandler):
if "code" in query and "state" in query:
OAuthCallbackHandler.auth_code = query["code"][0]
OAuthCallbackHandler.state = query["state"][0]
self._send_response(
"Authentication successful! You can close this window "
"and return to the terminal."
)
self._send_response("Authentication successful! You can close this window and return to the terminal.")
return
self._send_response("Waiting for authentication...")
@@ -296,8 +289,7 @@ def validate_credentials(access_token: str, project_id: str = _DEFAULT_PROJECT_I
"Authorization": f"Bearer {access_token}",
"Content-Type": "application/json",
"User-Agent": (
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
"AppleWebKit/537.36 (KHTML, like Gecko) Antigravity/1.18.3"
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Antigravity/1.18.3"
),
"X-Goog-Api-Client": "google-cloud-sdk vscode_cloudshelleditor/0.1",
}
@@ -316,9 +308,7 @@ def validate_credentials(access_token: str, project_id: str = _DEFAULT_PROJECT_I
return False
def refresh_access_token(
refresh_token: str, client_id: str, client_secret: str | None
) -> dict | None:
def refresh_access_token(refresh_token: str, client_id: str, client_secret: str | None) -> dict | None:
"""Refresh the access token using the refresh token."""
data = {
"grant_type": "refresh_token",
@@ -361,9 +351,7 @@ def cmd_account_add(args: argparse.Namespace) -> int:
access_token = account.get("access")
refresh_token_str = account.get("refresh", "")
refresh_token = refresh_token_str.split("|")[0] if refresh_token_str else None
project_id = (
refresh_token_str.split("|")[1] if "|" in refresh_token_str else _DEFAULT_PROJECT_ID
)
project_id = refresh_token_str.split("|")[1] if "|" in refresh_token_str else _DEFAULT_PROJECT_ID
email = account.get("email", "unknown")
expires_ms = account.get("expires", 0)
expires_at = expires_ms / 1000.0 if expires_ms else 0.0
@@ -390,9 +378,7 @@ def cmd_account_add(args: argparse.Namespace) -> int:
# Update the account
account["access"] = new_access
account["expires"] = int((time.time() + expires_in) * 1000)
accounts_data["last_refresh"] = time.strftime(
"%Y-%m-%dT%H:%M:%SZ", time.gmtime()
)
accounts_data["last_refresh"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
save_accounts(accounts_data)
# Validate the refreshed token
-132
@@ -1,132 +0,0 @@
"""
Minimal Manual Agent Example
----------------------------
This example demonstrates how to build and run an agent programmatically
without using the Claude Code CLI or external LLM APIs.
It uses custom NodeProtocol implementations to define logic in pure Python,
making it perfect for understanding the core runtime loop:
Setup -> Graph definition -> Execution -> Result
Run with:
uv run python core/examples/manual_agent.py
"""
import asyncio
from framework.graph import EdgeCondition, EdgeSpec, Goal, GraphSpec, NodeSpec
from framework.graph.executor import GraphExecutor
from framework.graph.node import NodeContext, NodeProtocol, NodeResult
from framework.runtime.core import Runtime
# 1. Define Node Logic (Custom NodeProtocol implementations)
class GreeterNode(NodeProtocol):
"""Generate a simple greeting."""
async def execute(self, ctx: NodeContext) -> NodeResult:
name = ctx.input_data.get("name", "World")
greeting = f"Hello, {name}!"
ctx.buffer.write("greeting", greeting)
return NodeResult(success=True, output={"greeting": greeting})
class UppercaserNode(NodeProtocol):
"""Convert text to uppercase."""
async def execute(self, ctx: NodeContext) -> NodeResult:
greeting = ctx.input_data.get("greeting") or ctx.buffer.read("greeting") or ""
result = greeting.upper()
ctx.buffer.write("final_greeting", result)
return NodeResult(success=True, output={"final_greeting": result})
async def main():
print("Setting up Manual Agent...")
# 2. Define the Goal
# Every agent needs a goal with success criteria
goal = Goal(
id="greet-user",
name="Greet User",
description="Generate a friendly uppercase greeting",
success_criteria=[
{
"id": "greeting_generated",
"description": "Greeting produced",
"metric": "custom",
"target": "any",
}
],
)
# 3. Define Nodes
# Nodes describe steps in the process
node1 = NodeSpec(
id="greeter",
name="Greeter",
description="Generates a simple greeting",
node_type="event_loop",
input_keys=["name"],
output_keys=["greeting"],
)
node2 = NodeSpec(
id="uppercaser",
name="Uppercaser",
description="Converts greeting to uppercase",
node_type="event_loop",
input_keys=["greeting"],
output_keys=["final_greeting"],
)
# 4. Define Edges
# Edges define the flow between nodes
edge1 = EdgeSpec(
id="greet-to-upper",
source="greeter",
target="uppercaser",
condition=EdgeCondition.ON_SUCCESS,
)
# 5. Create Graph
# The graph works like a blueprint connecting nodes and edges
graph = GraphSpec(
id="greeting-agent",
goal_id="greet-user",
entry_node="greeter",
terminal_nodes=["uppercaser"],
nodes=[node1, node2],
edges=[edge1],
)
# 6. Initialize Runtime & Executor
# Runtime handles state/memory; Executor runs the graph
from pathlib import Path
runtime = Runtime(storage_path=Path("./agent_logs"))
executor = GraphExecutor(runtime=runtime)
# 7. Register Node Implementations
# Connect node IDs in the graph to actual Python implementations
executor.register_node("greeter", GreeterNode())
executor.register_node("uppercaser", UppercaserNode())
# 8. Execute Agent
print("Executing agent with input: name='Alice'...")
result = await executor.execute(graph=graph, goal=goal, input_data={"name": "Alice"})
# 9. Verify Results
if result.success:
print("\nSuccess!")
print(f"Path taken: {' -> '.join(result.path)}")
print(f"Final output: {result.output.get('final_greeting')}")
else:
print(f"\nFailed: {result.error}")
if __name__ == "__main__":
# Optional: Enable logging to see internal decision flow
# logging.basicConfig(level=logging.INFO)
asyncio.run(main())
-119
@@ -1,119 +0,0 @@
#!/usr/bin/env python3
"""
Example: Integrating MCP Servers with the Core Framework
This example demonstrates how to:
1. Register MCP servers programmatically
2. Use MCP tools in agents
3. Load MCP servers from configuration files
"""
import asyncio
from pathlib import Path
from framework.runner.runner import AgentRunner
async def example_1_programmatic_registration():
"""Example 1: Register MCP server programmatically"""
print("\n=== Example 1: Programmatic MCP Server Registration ===\n")
# Load an existing agent
runner = AgentRunner.load("exports/task-planner")
# Register tools MCP server via STDIO
num_tools = runner.register_mcp_server(
name="tools",
transport="stdio",
command="python",
args=["-m", "aden_tools.mcp_server", "--stdio"],
cwd="../tools",
)
print(f"Registered {num_tools} tools from tools MCP server")
# List all available tools
tools = runner._tool_registry.get_tools()
print(f"\nAvailable tools: {list(tools.keys())}")
# Run the agent with MCP tools available
result = await runner.run(
{"objective": "Search for 'Claude AI' and summarize the top 3 results"}
)
print(f"\nAgent result: {result}")
# Cleanup
runner.cleanup()
async def example_2_http_transport():
"""Example 2: Connect to MCP server via HTTP"""
print("\n=== Example 2: HTTP MCP Server Connection ===\n")
# First, start the tools MCP server in HTTP mode:
# cd tools && python mcp_server.py --port 4001
runner = AgentRunner.load("exports/task-planner")
# Register tools via HTTP
num_tools = runner.register_mcp_server(
name="tools-http",
transport="http",
url="http://localhost:4001",
)
print(f"Registered {num_tools} tools from HTTP MCP server")
# Cleanup
runner.cleanup()
async def example_3_config_file():
"""Example 3: Load MCP servers from configuration file"""
print("\n=== Example 3: Load from Configuration File ===\n")
# Create a test agent folder with mcp_servers.json
test_agent_path = Path("exports/task-planner")
# Copy example config (in practice, you'd place this in your agent folder)
import shutil
shutil.copy(Path(__file__).parent / "mcp_servers.json", test_agent_path / "mcp_servers.json")
# Load agent - MCP servers will be auto-discovered
runner = AgentRunner.load(test_agent_path)
# Tools are automatically available
tools = runner._tool_registry.get_tools()
print(f"Available tools: {list(tools.keys())}")
# Cleanup
runner.cleanup()
# Clean up the test config
(test_agent_path / "mcp_servers.json").unlink()
async def main():
"""Run all examples"""
print("=" * 60)
print("MCP Integration Examples")
print("=" * 60)
try:
# Run examples
await example_1_programmatic_registration()
# await example_2_http_transport() # Requires HTTP server running
# await example_3_config_file()
# await example_4_custom_agent_with_mcp_tools()
except Exception as e:
print(f"\nError running example: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
asyncio.run(main())
+14 -60
@@ -1,66 +1,20 @@
"""
Aden Hive Framework: A goal-driven agent runtime optimized for Builder observability.
"""Hive Agent Framework.
The runtime is designed around DECISIONS, not just actions. Every significant
choice the agent makes is captured with:
- What it was trying to do (intent)
- What options it considered
- What it chose and why
- What happened as a result
- Whether that was good or bad (evaluated post-hoc)
This gives the Builder LLM the information it needs to improve agent behavior.
## Testing Framework
The framework includes a Goal-Based Testing system (Goal Agent Eval):
- Generate tests from Goal success_criteria and constraints
- Mandatory user approval before tests are stored
- Parallel test execution with error categorization
- Debug tools with fix suggestions
See `framework.testing` for details.
Core classes:
ColonyRuntime -- orchestrates parallel worker clones in a colony
AgentLoop -- the LLM + tool execution loop (one per worker)
AgentLoader -- loads agent config from disk, builds pipeline
DecisionTracker -- records decisions for post-hoc analysis
"""
from framework.llm import AnthropicProvider, LLMProvider
from framework.runner import AgentRunner
from framework.runtime.core import Runtime
from framework.schemas.decision import Decision, DecisionEvaluation, Option, Outcome
from framework.schemas.run import Problem, Run, RunSummary
# Testing framework
from framework.testing import (
ApprovalStatus,
DebugTool,
ErrorCategory,
Test,
TestResult,
TestStorage,
TestSuiteResult,
)
from framework.agent_loop import AgentLoop
from framework.host import ColonyRuntime
from framework.loader import AgentLoader
from framework.tracker import DecisionTracker
__all__ = [
# Schemas
"Decision",
"Option",
"Outcome",
"DecisionEvaluation",
"Run",
"RunSummary",
"Problem",
# Runtime
"Runtime",
# LLM
"LLMProvider",
"AnthropicProvider",
# Runner
"AgentRunner",
# Testing
"Test",
"TestResult",
"TestSuiteResult",
"TestStorage",
"ApprovalStatus",
"ErrorCategory",
"DebugTool",
"ColonyRuntime",
"AgentLoader",
"AgentLoop",
"DecisionTracker",
]
+34
@@ -0,0 +1,34 @@
"""Agent loop -- the core agent execution primitive."""
from framework.agent_loop.conversation import ( # noqa: F401
ConversationStore,
Message,
NodeConversation,
)
from framework.agent_loop.types import ( # noqa: F401
AgentContext,
AgentProtocol,
AgentResult,
AgentSpec,
)
def __getattr__(name: str):
if name in ("AgentLoop", "JudgeProtocol", "JudgeVerdict", "LoopConfig", "OutputAccumulator"):
from framework.agent_loop.agent_loop import (
AgentLoop,
JudgeProtocol,
JudgeVerdict,
LoopConfig,
OutputAccumulator,
)
_exports = {
"AgentLoop": AgentLoop,
"JudgeProtocol": JudgeProtocol,
"JudgeVerdict": JudgeVerdict,
"LoopConfig": LoopConfig,
"OutputAccumulator": OutputAccumulator,
}
return _exports[name]
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
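This is the PEP 562 module-level ``__getattr__`` pattern: the heavy ``agent_loop`` submodule is imported only when one of those five names is first accessed. A minimal usage sketch (names taken from the diff above):

```python
import framework.agent_loop as agent_loop  # cheap: agent_loop.agent_loop not yet imported

LoopConfig = agent_loop.LoopConfig  # first access triggers the deferred import via __getattr__
```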
File diff suppressed because it is too large
@@ -3,12 +3,14 @@
from __future__ import annotations
import json
import logging
import re
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Literal, Protocol, runtime_checkable
LEGACY_RUN_ID = "__legacy_run__"
logger = logging.getLogger(__name__)
def is_legacy_run_id(run_id: str | None) -> bool:
@@ -59,9 +61,12 @@ class Message:
return {"role": "user", "content": self.content}
if self.role == "assistant":
d: dict[str, Any] = {"role": "assistant", "content": self.content}
d: dict[str, Any] = {"role": "assistant"}
if self.tool_calls:
d["tool_calls"] = self.tool_calls
d["content"] = self.content if self.content else None
else:
d["content"] = self.content or ""
return d
# role == "tool"
@@ -157,10 +162,17 @@ def update_run_cursor(
def _extract_spillover_filename(content: str) -> str | None:
"""Extract spillover filename from a tool result annotation.
Matches patterns produced by EventLoopNode._truncate_tool_result():
- Large result: "saved to 'web_search_1.txt'"
- Small result: "[Saved to 'web_search_1.txt']"
Matches patterns produced by ``truncate_tool_result``:
- New large-result header: "Full result saved at: /abs/path/file.txt"
- Legacy bracketed trailer: "[Saved to 'file.txt']" (pre-2026-04-15,
retained here so cold conversations still resolve)
"""
# New prose format — ``saved at: <absolute path>``, terminated by
# newline or end-of-string.
match = re.search(r"[Ss]aved at:\s*(\S+)", content)
if match:
return match.group(1)
# Legacy format.
match = re.search(r"[Ss]aved to '([^']+)'", content)
return match.group(1) if match else None
@@ -233,8 +245,8 @@ def extract_tool_call_history(messages: list[Message], max_entries: int = 30) ->
return args.get("query", "")
if name == "web_scrape":
return args.get("url", "")
if name in ("load_data", "save_data"):
return args.get("filename", "")
if name == "read_file":
return args.get("path", "")
return ""
for msg in messages:
@@ -250,8 +262,8 @@ def extract_tool_call_history(messages: list[Message], max_entries: int = 30) ->
summary = _summarize_input(name, args)
tool_calls_detail.setdefault(name, []).append(summary)
if name == "save_data" and args.get("filename"):
files_saved.append(args["filename"])
if name == "read_file" and args.get("path"):
files_saved.append(args["path"])
if name == "set_output" and args.get("key"):
outputs_set.append(args["key"])
@@ -324,7 +336,7 @@ def _try_extract_key(content: str, key: str) -> str | None:
3. Colon format: ``key: value``.
4. Equals format: ``key = value``.
"""
from framework.graph.node import find_json_object
from framework.orchestrator.node import find_json_object
# 1. Whole message is JSON
try:
@@ -376,10 +388,20 @@ class NodeConversation:
output_keys: list[str] | None = None,
store: ConversationStore | None = None,
run_id: str | None = None,
compaction_buffer_tokens: int | None = None,
compaction_warning_buffer_tokens: int | None = None,
) -> None:
self._system_prompt = system_prompt
self._max_context_tokens = max_context_tokens
self._compaction_threshold = compaction_threshold
# Buffer-based compaction trigger (Gap 7). When set, takes
# precedence over the multiplicative compaction_threshold so the
# loop reserves a fixed headroom for the next turn's input+output
# instead of trying to get exactly X% of the way to the hard
# limit. If left as None the legacy threshold-based rule is
# used, keeping old call sites behaving identically.
self._compaction_buffer_tokens = compaction_buffer_tokens
self._compaction_warning_buffer_tokens = compaction_warning_buffer_tokens
self._output_keys = output_keys
self._store = store
self._messages: list[Message] = []
@@ -486,6 +508,27 @@ class NodeConversation:
image_content: list[dict[str, Any]] | None = None,
is_skill_content: bool = False,
) -> Message:
# Dedup guard: reject a second tool_result for the same tool_use_id.
# Anthropic's API only accepts one result per tool_call, and a duplicate
# causes a hard 400 two turns later ("messages with role 'tool' must
# be a response to a preceding message with 'tool_calls'"). Duplicates
# can arise when a tool_call_timeout fires and records a placeholder
# error, then the real executor thread eventually delivers the actual
# result (the thread kept running inside run_in_executor — see
# tool_result_handler.execute_tool). We keep the FIRST result to
# preserve whatever state the agent already reasoned about.
for existing in reversed(self._messages):
if existing.role == "tool" and existing.tool_use_id == tool_use_id:
import logging as _logging
_logging.getLogger(__name__).warning(
"add_tool_result: dropping duplicate result for tool_use_id=%s "
"(first result preserved, %d chars; new result ignored, %d chars)",
tool_use_id,
len(existing.content),
len(content),
)
return existing
msg = Message(
seq=self._next_seq,
role="tool",
@@ -513,7 +556,48 @@ class NodeConversation:
can happen when a loop is cancelled mid-tool-execution.
"""
msgs = [m.to_llm_dict() for m in self._messages]
return self._repair_orphaned_tool_calls(msgs)
msgs = self._repair_orphaned_tool_calls(msgs)
msgs = self._sanitize_for_api(msgs)
return msgs
@staticmethod
def _sanitize_for_api(msgs: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""Final pass: ensure message sequence is valid for strict APIs.
Rules:
1. No two consecutive messages with the same role (merge or drop)
2. Tool messages must have a tool_call_id
3. Assistant messages with tool_calls must have content=null, not ""
4. First message must not be 'tool' or 'assistant' (without prior context)
"""
cleaned: list[dict[str, Any]] = []
for m in msgs:
role = m.get("role")
# Fix assistant content when tool_calls present
if role == "assistant" and m.get("tool_calls"):
if m.get("content") == "":
m["content"] = None
# Drop tool messages without tool_call_id
if role == "tool" and not m.get("tool_call_id"):
continue
# Drop consecutive duplicate roles (merge user messages)
if cleaned and cleaned[-1].get("role") == role == "user":
prev_content = cleaned[-1].get("content", "")
curr_content = m.get("content", "")
if isinstance(prev_content, str) and isinstance(curr_content, str):
cleaned[-1]["content"] = f"{prev_content}\n{curr_content}"
continue
cleaned.append(m)
# Drop leading assistant/tool messages (no prior context)
while cleaned and cleaned[0].get("role") in ("assistant", "tool"):
cleaned.pop(0)
return cleaned
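As a hedged illustration of those four rules, here is a hypothetical malformed sequence and the shape ``_sanitize_for_api`` (a staticmethod, per the diff above) would leave it in; the message contents are invented:

```python
from framework.agent_loop.conversation import NodeConversation

raw = [
    {"role": "assistant", "content": "stray"},            # leading assistant: dropped at the end
    {"role": "user", "content": "first"},
    {"role": "user", "content": "second"},                # consecutive user: merged
    {"role": "assistant", "content": "", "tool_calls": [{"id": "t1"}]},  # "" -> None
    {"role": "tool", "content": "orphan"},                # missing tool_call_id: dropped
]

cleaned = NodeConversation._sanitize_for_api(raw)
# cleaned ==
# [
#     {"role": "user", "content": "first\nsecond"},
#     {"role": "assistant", "content": None, "tool_calls": [{"id": "t1"}]},
# ]
```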
@staticmethod
def _repair_orphaned_tool_calls(
@@ -521,11 +605,18 @@ class NodeConversation:
) -> list[dict[str, Any]]:
"""Ensure tool_call / tool_result pairs are consistent.
1. **Orphaned tool results** (tool_result with no preceding tool_use)
are dropped. This happens when compaction removes an assistant
message but leaves its tool-result messages behind.
2. **Orphaned tool calls** (tool_use with no following tool_result)
get a synthetic error result appended. This happens when a loop
1. **Orphaned tool results** (tool_result with no matching tool_use
anywhere) are dropped. Happens after compaction removes the
parent assistant message.
2. **Positionally orphaned tool results** (tool_result separated
from its parent by a non-tool message, e.g. a user injection)
are dropped. The Anthropic API requires tool messages to
follow immediately after the assistant message that issued
the matching tool_call.
3. **Duplicate tool results** (same tool_call_id appearing more
than once) are dropped; only the first is kept.
4. **Orphaned tool calls** (tool_use with no following tool_result)
get a synthetic error result appended. Happens when the loop
is cancelled mid-tool-execution.
"""
# Pass 1: collect all tool_call IDs from assistant messages so we
@@ -538,41 +629,75 @@ class NodeConversation:
if tc_id:
all_tool_call_ids.add(tc_id)
# Pass 2: build repaired list — drop orphaned tool results, patch
# missing tool results.
# Pass 2: build repaired list — drop orphaned tool results, drop
# positional orphans and duplicates, patch missing tool results.
#
# ``open_tool_calls`` holds the tool_call IDs we're still expecting
# results for: it's populated when we emit an assistant-with-tool_calls
# and drained as matching tool messages follow. Any tool message
# whose id is not currently open is positionally invalid and gets
# dropped — that closes the gap that caused the tool-after-user
# 400 errors.
repaired: list[dict[str, Any]] = []
for i, m in enumerate(msgs):
# Drop tool-result messages whose tool_call_id has no matching
# tool_use in any assistant message (orphaned by compaction).
if m.get("role") == "tool":
tid = m.get("tool_call_id")
if tid and tid not in all_tool_call_ids:
continue # skip orphaned result
open_tool_calls: set[str] = set()
seen_tool_ids: set[str] = set()
for m in msgs:
role = m.get("role")
repaired.append(m)
tool_calls = m.get("tool_calls")
if m.get("role") != "assistant" or not tool_calls:
if role == "tool":
tid = m.get("tool_call_id")
# Drop tool results with no matching tool_use anywhere.
if not tid or tid not in all_tool_call_ids:
continue
# Drop duplicates (same id appearing twice) — keep first.
if tid in seen_tool_ids:
continue
# Drop positional orphans — tool messages whose parent
# assistant isn't the still-open assistant block.
if tid not in open_tool_calls:
continue
open_tool_calls.discard(tid)
seen_tool_ids.add(tid)
repaired.append(m)
continue
# Collect IDs of tool results that follow this assistant message
answered: set[str] = set()
for j in range(i + 1, len(msgs)):
if msgs[j].get("role") == "tool":
tid = msgs[j].get("tool_call_id")
if tid:
answered.add(tid)
else:
break # stop at first non-tool message
# Patch any missing results
for tc in tool_calls:
tc_id = tc.get("id")
if tc_id and tc_id not in answered:
# Any non-tool message closes the current assistant tool block.
# If the previous assistant left tool_calls unanswered, patch
# synthetic error results before emitting this message so the
# API sees a complete pairing.
if open_tool_calls:
for stale_id in list(open_tool_calls):
repaired.append(
{
"role": "tool",
"tool_call_id": tc_id,
"tool_call_id": stale_id,
"content": "ERROR: Tool execution was interrupted.",
}
)
seen_tool_ids.add(stale_id)
open_tool_calls.clear()
repaired.append(m)
if role == "assistant":
for tc in m.get("tool_calls") or []:
tc_id = tc.get("id")
if tc_id and tc_id not in seen_tool_ids:
open_tool_calls.add(tc_id)
# Tail: if the conversation ends with an assistant that issued
# tool_calls and no results followed, patch them so the next
# turn's first message can be a valid assistant/user response.
if open_tool_calls:
for stale_id in list(open_tool_calls):
repaired.append(
{
"role": "tool",
"tool_call_id": stale_id,
"content": "ERROR: Tool execution was interrupted.",
}
)
return repaired
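A hedged trace of the single-pass repair on an invented sequence, assuming the staticmethod is called directly with the raw message dicts: one assistant issues two tool_calls, a user injection closes the block before the second result arrives, and the real result shows up late.

```python
from framework.agent_loop.conversation import NodeConversation

msgs = [
    {"role": "assistant", "content": None,
     "tool_calls": [{"id": "a"}, {"id": "b"}]},
    {"role": "tool", "tool_call_id": "a", "content": "ok"},
    {"role": "user", "content": "interrupt"},                   # closes the tool block
    {"role": "tool", "tool_call_id": "b", "content": "late"},   # positional orphan
]

repaired = NodeConversation._repair_orphaned_tool_calls(msgs)
# "b" is patched with a synthetic error *before* the user message, and the
# late real result is dropped (its id is already in seen_tool_ids):
# [assistant(a, b),
#  tool(a) = "ok",
#  tool(b) = "ERROR: Tool execution was interrupted.",
#  user("interrupt")]
```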
def estimate_tokens(self) -> int:
@@ -621,8 +746,37 @@ class NodeConversation:
return self.estimate_tokens() / self._max_context_tokens
def needs_compaction(self) -> bool:
"""True when the conversation should be compacted before the
next LLM call.
Buffer-based rule (Gap 7): trigger when the current estimate
plus the configured buffer would exceed the hard context limit.
Prevents compaction from firing only AFTER we're already over
the wire and forced into a reactive binary-split pass.
When no buffer is configured, falls back to the multiplicative
threshold the old callers were built around.
"""
if self._max_context_tokens <= 0:
return False
if self._compaction_buffer_tokens is not None:
budget = self._max_context_tokens - self._compaction_buffer_tokens
return self.estimate_tokens() >= max(0, budget)
return self.estimate_tokens() >= self._max_context_tokens * self._compaction_threshold
def compaction_warning(self) -> bool:
"""True when the conversation has crossed the warning threshold
but not yet the hard compaction trigger.
Used by telemetry / UI to show a "context getting tight" hint
before a compaction pass actually runs. Returns False when no
warning buffer is configured (legacy behaviour).
"""
if self._max_context_tokens <= 0 or self._compaction_warning_buffer_tokens is None:
return False
warn_at = self._max_context_tokens - self._compaction_warning_buffer_tokens
return self.estimate_tokens() >= max(0, warn_at)
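A worked comparison of the two trigger rules, with illustrative numbers (not the framework's defaults):

```python
max_context_tokens = 200_000
compaction_buffer_tokens = 40_000      # fixed headroom reserved for the next turn
compaction_threshold = 0.85            # legacy multiplicative rule
estimate = 165_000                     # current conversation token estimate

# Buffer-based rule: compact once the estimate eats into the reserved headroom.
budget = max_context_tokens - compaction_buffer_tokens        # 160_000
print(estimate >= max(0, budget))                             # True  -> compact now

# The legacy rule with the same numbers would not fire yet.
print(estimate >= max_context_tokens * compaction_threshold)  # 165_000 >= 170_000 -> False
```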
# --- Output-key extraction ---------------------------------------------
def _extract_protected_values(self, messages: list[Message]) -> dict[str, str]:
@@ -699,7 +853,7 @@ class NodeConversation:
continue # never prune errors
if msg.is_skill_content:
continue # never prune activated skill instructions (AS-10)
if msg.content.startswith("[Pruned tool result"):
if msg.content.startswith(("Pruned tool result", "[Pruned tool result")):
continue # already pruned
# Tiny results (set_output acks, confirmations) — pruning
# saves negligible space but makes the LLM think the call
@@ -731,12 +885,12 @@ class NodeConversation:
if spillover:
placeholder = (
f"[Pruned tool result: {orig_len} chars. "
f"Full data in '{spillover}'. "
f"Use load_data('{spillover}') to retrieve.]"
f"Pruned tool result ({orig_len:,} chars) cleared from context. "
f"Full data saved at: {spillover}\n"
f"Read the complete data with read_file(path='{spillover}')."
)
else:
placeholder = f"[Pruned tool result: {orig_len} chars cleared from context.]"
placeholder = f"Pruned tool result ({orig_len:,} chars) cleared from context."
self._messages[i] = Message(
seq=msg.seq,
@@ -758,6 +912,78 @@ class NodeConversation:
self._last_api_input_tokens = None
return count
async def evict_old_images(self, keep_latest: int = 2) -> int:
"""Strip ``image_content`` from older messages, keeping the most recent.
Screenshots from ``browser_screenshot`` are inlined into the
message's ``image_content`` as base64 data URLs. Each screenshot
costs ~250k tokens when the provider counts the base64 as
text; four screenshots push a conversation over Gemini's 1M
context limit and trigger out-of-context garbage output (see
``session_20260415_104727_5c4ed7ff`` for the terminal case
where the model emitted ``协日`` as its final text, then stopped).
This method walks backward through messages and keeps
``image_content`` intact on the most recent ``keep_latest``
messages that have images. Older messages get their
``image_content`` nulled out; the text content (metadata
like url, dimensions, scale hints) stays, but the raw bytes
are dropped. Storage is updated too so cold-restore sees the
same evicted state.
Run this right after every tool result is recorded so image
context stays bounded even within a single iteration (the
compaction pipeline only fires at iteration boundaries, too
late for a single turn that takes 4 screenshots).
Returns the number of messages whose image_content was evicted.
"""
if not self._messages or keep_latest < 0:
return 0
# Find messages carrying images, walking newest → oldest.
image_indices: list[int] = []
for i in range(len(self._messages) - 1, -1, -1):
if self._messages[i].image_content:
image_indices.append(i)
# Nothing to evict if we have ≤ keep_latest images total.
if len(image_indices) <= keep_latest:
return 0
# Evict everything past the first keep_latest (newest) entries.
to_evict = image_indices[keep_latest:]
evicted = 0
for idx in to_evict:
msg = self._messages[idx]
self._messages[idx] = Message(
seq=msg.seq,
role=msg.role,
content=msg.content,
tool_use_id=msg.tool_use_id,
tool_calls=msg.tool_calls,
is_error=msg.is_error,
phase_id=msg.phase_id,
is_transition_marker=msg.is_transition_marker,
is_client_input=msg.is_client_input,
image_content=None, # ← dropped
is_skill_content=msg.is_skill_content,
run_id=msg.run_id,
)
evicted += 1
if self._store:
await self._store.write_part(msg.seq, self._messages[idx].to_storage_dict())
if evicted:
# Reset token estimate — image blocks no longer contribute.
self._last_api_input_tokens = None
logger.info(
"evict_old_images: dropped image_content from %d message(s), kept %d most recent",
evicted,
keep_latest,
)
return evicted
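An assumed call pattern, following the docstring's guidance to run eviction right after each tool result (the surrounding handler is a sketch, not the framework's actual code):

```python
# Sketch: inside the tool-result handling path, after the result is recorded.
evicted = await conversation.evict_old_images(keep_latest=2)
if evicted:
    logger.info("inline screenshots evicted from %d older message(s)", evicted)
```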
async def compact(
self,
summary: str,
@@ -910,9 +1136,7 @@ class NodeConversation:
for msg in old_messages:
if msg.role != "assistant" or not msg.tool_calls:
continue
has_protected = any(
tc.get("function", {}).get("name") == "set_output" for tc in msg.tool_calls
)
has_protected = any(tc.get("function", {}).get("name") == "set_output" for tc in msg.tool_calls)
tc_ids = {tc.get("id", "") for tc in msg.tool_calls}
if has_protected:
protected_tc_ids |= tc_ids
@@ -1018,16 +1242,18 @@ class NodeConversation:
# Nothing to save — skip file creation
conv_filename = ""
# Build reference message
# Build reference message. Prose format (no brackets) — see the
# poison-pattern note on truncate_tool_result. Frontier models
# autocomplete `[...']` trailers into their own text turns.
ref_parts: list[str] = []
if conv_filename:
full_path = str((spill_path / conv_filename).resolve())
ref_parts.append(
f"[Previous conversation saved to '{full_path}'. "
f"Use load_data('{conv_filename}') to review if needed.]"
f"Previous conversation saved at: {full_path}\n"
f"Read the full transcript with read_file('{conv_filename}')."
)
elif not collapsed_msgs:
ref_parts.append("[Previous freeform messages compacted.]")
ref_parts.append("(Previous freeform messages compacted.)")
# Aggressive: add collapsed tool-call history to the reference
if collapsed_msgs:
@@ -1106,11 +1332,7 @@ class NodeConversation:
def export_summary(self) -> str:
"""Structured summary with [STATS], [CONFIG], [RECENT_MESSAGES] sections."""
prompt_preview = (
self._system_prompt[:80] + "..."
if len(self._system_prompt) > 80
else self._system_prompt
)
prompt_preview = self._system_prompt[:80] + "..." if len(self._system_prompt) > 80 else self._system_prompt
lines = [
"[STATS]",
@@ -1156,6 +1378,8 @@ class NodeConversation:
"system_prompt": self._system_prompt,
"max_context_tokens": self._max_context_tokens,
"compaction_threshold": self._compaction_threshold,
"compaction_buffer_tokens": self._compaction_buffer_tokens,
"compaction_warning_buffer_tokens": (self._compaction_warning_buffer_tokens),
"output_keys": self._output_keys,
}
await self._store.write_meta(run_meta)
@@ -1203,12 +1427,27 @@ class NodeConversation:
output_keys=meta.get("output_keys"),
store=store,
run_id=run_id,
compaction_buffer_tokens=meta.get("compaction_buffer_tokens"),
compaction_warning_buffer_tokens=meta.get("compaction_warning_buffer_tokens"),
)
conv._meta_persisted = True
parts = await store.read_parts()
if phase_id:
parts = [p for p in parts if p.get("phase_id") == phase_id]
filtered_parts = [p for p in parts if p.get("phase_id") == phase_id]
if filtered_parts:
parts = filtered_parts
elif parts and all(p.get("phase_id") is None for p in parts):
# Backward compatibility: older isolated stores (including queen
# sessions) persisted parts without phase_id. In that case, the
# phase filter would incorrectly hide the entire conversation.
logger.info(
"Restoring legacy unphased conversation without applying phase filter (phase_id=%s, parts=%d)",
phase_id,
len(parts),
)
else:
parts = filtered_parts
# Filter by run_id so intentional restarts (new run_id) start fresh
# while crash recovery (same run_id) loads prior parts.
if run_id and not is_legacy_run_id(run_id):
@@ -0,0 +1,7 @@
"""Agent loop internals -- compaction, judge, tools, subagent execution.
Re-exports from legacy locations for the new import path.
"""
from framework.agent_loop.internals.compaction import * # noqa: F401, F403
from framework.agent_loop.internals.synthetic_tools import * # noqa: F401, F403
@@ -19,11 +19,11 @@ from datetime import UTC, datetime
from pathlib import Path
from typing import Any
from framework.graph.conversation import Message, NodeConversation
from framework.graph.event_loop.event_publishing import publish_context_usage
from framework.graph.event_loop.types import LoopConfig, OutputAccumulator
from framework.graph.node import NodeContext
from framework.runtime.event_bus import EventBus
from framework.agent_loop.conversation import Message, NodeConversation
from framework.agent_loop.internals.event_publishing import publish_context_usage
from framework.agent_loop.internals.types import LoopConfig, OutputAccumulator
from framework.host.event_bus import EventBus
from framework.orchestrator.node import NodeContext
logger = logging.getLogger(__name__)
@@ -80,7 +80,7 @@ def microcompact(
msg = messages[i]
if msg.role != "tool" or msg.is_error or msg.is_skill_content:
continue
if msg.content.startswith(("[Pruned tool result", "[Old tool result")):
if msg.content.startswith(("Pruned tool result", "[Pruned tool result", "[Old tool result")):
continue
if len(msg.content) < 100:
continue
@@ -102,12 +102,12 @@ def microcompact(
orig_len = len(msg.content)
if spillover:
placeholder = (
f"[Old tool result cleared: {orig_len} chars. "
f"Full data in '{spillover}'. "
f"Use load_data('{spillover}') to retrieve.]"
f"Old tool result ({orig_len:,} chars) cleared from context. "
f"Full data saved at: {spillover}\n"
f"Read the complete data with read_file(path='{spillover}')."
)
else:
placeholder = f"[Old tool result cleared: {orig_len} chars.]"
placeholder = f"Old tool result ({orig_len:,} chars) cleared from context."
# Mutate in-place (microcompact is synchronous, no store writes)
conversation._messages[i] = Message(
@@ -142,7 +142,14 @@ def _find_tool_name_for_result(messages: list[Message], tool_msg: Message) -> st
def _extract_spillover_filename_inline(content: str) -> str | None:
"""Quick inline check for spillover filename in tool result content."""
"""Quick inline check for spillover filename in tool result content.
Matches both the new prose format ("saved at: /path") and the
legacy bracketed trailer ("saved to '/path'").
"""
match = re.search(r"saved at:\s*(\S+)", content, re.IGNORECASE)
if match:
return match.group(1)
match = re.search(r"saved to '([^']+)'", content, re.IGNORECASE)
return match.group(1) if match else None
@@ -168,13 +175,17 @@ async def compact(
"""
conv_id = id(conversation)
# Circuit breaker: stop auto-compacting after repeated failures
if _failure_counts.get(conv_id, 0) >= MAX_CONSECUTIVE_FAILURES:
# Circuit breaker: stop LLM-based compaction after repeated failures,
# but still fall through to the emergency deterministic summary so
# the conversation doesn't silently grow past the context window.
# Without this, a persistent LLM outage during compaction would
# leave the agent stuck sending oversized prompts until the API 400s.
_llm_compaction_skipped = _failure_counts.get(conv_id, 0) >= MAX_CONSECUTIVE_FAILURES
if _llm_compaction_skipped:
logger.warning(
"Circuit breaker: skipping compaction after %d consecutive failures",
"Circuit breaker: LLM compaction disabled after %d failures — skipping straight to emergency summary",
_failure_counts[conv_id],
)
return
# Recompaction detection
now = time.monotonic()
@@ -256,7 +267,7 @@ async def compact(
return
# --- Step 3: LLM summary compaction ---
if ctx.llm is not None:
if ctx.llm is not None and not _llm_compaction_skipped:
logger.info(
"LLM summary compaction triggered (%.0f%% usage)",
conversation.usage_ratio() * 100,
@@ -368,8 +379,8 @@ async def llm_compact(
in half and each half is summarised independently. Tool history is
appended once at the top-level call (``_depth == 0``).
"""
from framework.graph.conversation import extract_tool_call_history
from framework.graph.event_loop.tool_result_handler import is_context_too_large_error
from framework.agent_loop.conversation import extract_tool_call_history
from framework.agent_loop.internals.tool_result_handler import is_context_too_large_error
if _depth > max_depth:
raise RuntimeError(f"LLM compaction recursion limit ({max_depth})")
@@ -506,7 +517,7 @@ def build_llm_compaction_prompt(
service. Each section focuses on a different aspect of the conversation
so the summariser produces consistently useful, well-organised output.
"""
spec = ctx.node_spec
spec = ctx.agent_spec
ctx_lines = [f"NODE: {spec.name} (id={spec.id})"]
if spec.description:
ctx_lines.append(f"PURPOSE: {spec.description}")
@@ -518,10 +529,7 @@ def build_llm_compaction_prompt(
done = {k: v for k, v in acc.items() if v is not None}
todo = [k for k, v in acc.items() if v is None]
if done:
ctx_lines.append(
"OUTPUTS ALREADY SET:\n"
+ "\n".join(f" {k}: {str(v)[:150]}" for k, v in done.items())
)
ctx_lines.append("OUTPUTS ALREADY SET:\n" + "\n".join(f" {k}: {str(v)[:150]}" for k, v in done.items()))
if todo:
ctx_lines.append(f"OUTPUTS STILL NEEDED: {', '.join(todo)}")
elif spec.output_keys:
@@ -575,12 +583,8 @@ def build_message_inventory(conversation: NodeConversation) -> list[dict[str, An
if message.tool_calls:
for tool_call in message.tool_calls:
args = tool_call.get("function", {}).get("arguments", "")
tool_call_args_chars += (
len(args) if isinstance(args, str) else len(json.dumps(args))
)
names = [
tool_call.get("function", {}).get("name", "?") for tool_call in message.tool_calls
]
tool_call_args_chars += len(args) if isinstance(args, str) else len(json.dumps(args))
names = [tool_call.get("function", {}).get("name", "?") for tool_call in message.tool_calls]
tool_name = ", ".join(names)
elif message.role == "tool" and message.tool_use_id:
for previous in conversation.messages:
@@ -622,13 +626,13 @@ def write_compaction_debug_log(
log_dir.mkdir(parents=True, exist_ok=True)
ts = datetime.now(UTC).strftime("%Y%m%dT%H%M%S_%f")
node_label = ctx.node_id.replace("/", "_")
node_label = ctx.agent_id.replace("/", "_")
log_path = log_dir / f"{ts}_{node_label}.md"
lines: list[str] = [
f"# Compaction Debug — {ctx.node_id}",
f"# Compaction Debug — {ctx.agent_id}",
f"**Time:** {datetime.now(UTC).isoformat()}",
f"**Node:** {ctx.node_spec.name} (`{ctx.node_id}`)",
f"**Node:** {ctx.agent_spec.name} (`{ctx.agent_id}`)",
]
if ctx.stream_id:
lines.append(f"**Stream:** {ctx.stream_id}")
@@ -637,14 +641,8 @@ def write_compaction_debug_log(
lines.append("")
if inventory:
total_chars = sum(
entry.get("content_chars", 0) + entry.get("tool_call_args_chars", 0)
for entry in inventory
)
lines.append(
"## Pre-Compaction Message Inventory "
f"({len(inventory)} messages, {total_chars:,} total chars)"
)
total_chars = sum(entry.get("content_chars", 0) + entry.get("tool_call_args_chars", 0) for entry in inventory)
lines.append(f"## Pre-Compaction Message Inventory ({len(inventory)} messages, {total_chars:,} total chars)")
lines.append("")
ranked = sorted(
inventory,
@@ -663,8 +661,7 @@ def write_compaction_debug_log(
if entry.get("phase"):
flags.append(f"phase={entry['phase']}")
lines.append(
f"| {i} | {entry['seq']} | {entry['role']} | {tool} "
f"| {chars:,} | {pct:.1f}% | {', '.join(flags)} |"
f"| {i} | {entry['seq']} | {entry['role']} | {tool} | {chars:,} | {pct:.1f}% | {', '.join(flags)} |"
)
large = [entry for entry in ranked if entry.get("preview")]
@@ -672,9 +669,7 @@ def write_compaction_debug_log(
lines.append("")
lines.append("### Large message previews")
for entry in large:
lines.append(
f"\n**seq={entry['seq']}** ({entry['role']}, {entry.get('tool', '')}):"
)
lines.append(f"\n**seq={entry['seq']}** ({entry['role']}, {entry.get('tool', '')}):")
lines.append(f"```\n{entry['preview']}\n```")
lines.append("")
@@ -715,7 +710,7 @@ async def log_compaction(
if ctx.runtime_logger:
ctx.runtime_logger.log_step(
node_id=ctx.node_id,
node_id=ctx.agent_id,
node_type="event_loop",
step_index=-1,
llm_text=f"Context compacted ({level}): {before_pct}% \u2192 {after_pct}%",
@@ -724,7 +719,7 @@ async def log_compaction(
)
if event_bus:
from framework.runtime.event_bus import AgentEvent, EventType
from framework.host.event_bus import AgentEvent, EventType
event_data: dict[str, Any] = {
"level": level,
@@ -736,8 +731,8 @@ async def log_compaction(
await event_bus.publish(
AgentEvent(
type=EventType.CONTEXT_COMPACTED,
stream_id=ctx.stream_id or ctx.node_id,
node_id=ctx.node_id,
stream_id=ctx.stream_id or ctx.agent_id,
node_id=ctx.agent_id,
data=event_data,
)
)
@@ -762,13 +757,10 @@ def build_emergency_summary(
node's known state so the LLM can continue working after
compaction without losing track of its task and inputs.
"""
parts = [
"EMERGENCY COMPACTION — previous conversation was too large "
"and has been replaced with this summary.\n"
]
parts = ["EMERGENCY COMPACTION — previous conversation was too large and has been replaced with this summary.\n"]
# 1. Node identity
spec = ctx.node_spec
spec = ctx.agent_spec
parts.append(f"NODE: {spec.name} (id={spec.id})")
if spec.description:
parts.append(f"PURPOSE: {spec.description}")
@@ -776,7 +768,7 @@ def build_emergency_summary(
# 2. Inputs the node received
input_lines = []
for key in spec.input_keys:
value = ctx.input_data.get(key) or ctx.buffer.read(key)
value = ctx.input_data.get(key)
if value is not None:
# Truncate long values but keep them recognisable
v_str = str(value)
@@ -818,28 +810,21 @@ def build_emergency_summary(
data_files = [f for f in all_files if f not in conv_files]
if conv_files:
conv_list = "\n".join(
f" - {f} (full path: {data_dir / f})" for f in conv_files
)
conv_list = "\n".join(f" - {f} (full path: {data_dir / f})" for f in conv_files)
parts.append(
"CONVERSATION HISTORY (freeform messages saved during compaction — "
"use load_data('<filename>') to review earlier dialogue):\n" + conv_list
"use read_file('<filename>') to review earlier dialogue):\n" + conv_list
)
if data_files:
file_list = "\n".join(
f" - {f} (full path: {data_dir / f})" for f in data_files[:30]
)
parts.append("DATA FILES (use load_data('<filename>') to read):\n" + file_list)
file_list = "\n".join(f" - {f} (full path: {data_dir / f})" for f in data_files[:30])
parts.append("DATA FILES (use read_file('<filename>') to read):\n" + file_list)
if not all_files:
parts.append(
"NOTE: Large tool results may have been saved to files. "
"Use list_directory to check the data directory."
)
except Exception:
parts.append(
"NOTE: Large tool results were saved to files. "
"Use read_file(path='<path>') to read them."
)
parts.append("NOTE: Large tool results were saved to files. Use read_file(path='<path>') to read them.")
# 6. Tool call history (prevent re-calling tools)
if conversation is not None:
@@ -847,10 +832,7 @@ def build_emergency_summary(
if tool_history:
parts.append(tool_history)
parts.append(
"\nContinue working towards setting the remaining outputs. "
"Use your tools and the inputs above."
)
parts.append("\nContinue working towards setting the remaining outputs. Use your tools and the inputs above.")
return "\n\n".join(parts)
@@ -861,6 +843,6 @@ def _extract_tool_call_history(conversation: NodeConversation) -> str:
directly (vs. the module-level extract_tool_call_history in conversation.py
which works on raw message lists).
"""
from framework.graph.conversation import extract_tool_call_history
from framework.agent_loop.conversation import extract_tool_call_history
return extract_tool_call_history(list(conversation.messages))
@@ -14,10 +14,10 @@ from collections.abc import Awaitable, Callable
from dataclasses import dataclass
from typing import Any
from framework.graph.conversation import ConversationStore, NodeConversation
from framework.graph.event_loop.types import LoopConfig, OutputAccumulator, TriggerEvent
from framework.graph.node import NodeContext
from framework.agent_loop.conversation import ConversationStore, NodeConversation
from framework.agent_loop.internals.types import LoopConfig, OutputAccumulator, TriggerEvent
from framework.llm.capabilities import supports_image_tool_results
from framework.orchestrator.node import NodeContext
logger = logging.getLogger(__name__)
@@ -53,15 +53,31 @@ async def restore(
# continuous mode (or when _restore is called for timer-resume)
# load all parts — the full conversation threads across nodes.
_is_continuous = getattr(ctx, "continuous_mode", False)
phase_filter = None if _is_continuous else ctx.node_id
# The queen has agent_id="queen" but messages are stored with phase_id=None.
# Only apply phase filtering for non-queen workers in a multi-agent setup.
phase_filter = None if (_is_continuous or ctx.agent_id == "queen") else ctx.agent_id
conversation = await NodeConversation.restore(
conversation_store,
phase_id=phase_filter,
run_id=ctx.effective_run_id,
)
if conversation is None:
logger.info(
"[restore] No conversation found for agent_id=%s phase_filter=%s run_id=%s",
ctx.agent_id,
phase_filter,
ctx.effective_run_id,
)
return None
logger.info(
"[restore] Restored %d messages for agent_id=%s phase_filter=%s run_id=%s",
conversation.message_count,
ctx.agent_id,
phase_filter,
ctx.effective_run_id,
)
# If run_id filtering removed all messages, this is an intentional
# restart (new run), not a crash recovery. Return None so the caller
# falls through to the fresh-conversation path.
@@ -124,7 +140,7 @@ async def write_cursor(
cursor.update(
{
"iteration": iteration,
"node_id": ctx.node_id,
"node_id": ctx.agent_id,
"outputs": accumulator.to_dict(),
}
)
@@ -133,9 +149,7 @@ async def write_cursor(
cursor["recent_responses"] = recent_responses
if recent_tool_fingerprints is not None:
# Convert list[list[tuple]] → list[list[list]] for JSON
cursor["recent_tool_fingerprints"] = [
[list(pair) for pair in fps] for fps in recent_tool_fingerprints
]
cursor["recent_tool_fingerprints"] = [[list(pair) for pair in fps] for fps in recent_tool_fingerprints]
# Persist blocked-input state so restored runs re-block instead of
# manufacturing a synthetic continuation turn.
cursor["pending_input"] = pending_input
@@ -147,13 +161,14 @@ async def drain_injection_queue(
conversation: NodeConversation,
*,
ctx: NodeContext,
describe_images_as_text_fn: (
Callable[[list[dict[str, Any]]], Awaitable[str | None]] | None
) = None,
describe_images_as_text_fn: (Callable[[list[dict[str, Any]]], Awaitable[str | None]] | None) = None,
) -> int:
"""Drain all pending injected events as user messages. Returns count."""
count = 0
logger.debug("[drain_injection_queue] Starting to drain queue, initial queue size: %s", queue.qsize() if hasattr(queue, 'qsize') else 'unknown')
logger.debug(
"[drain_injection_queue] Starting to drain queue, initial queue size: %s",
queue.qsize() if hasattr(queue, "qsize") else "unknown",
)
while not queue.empty():
try:
content, is_client_input, image_content = queue.get_nowait()
@@ -242,11 +257,6 @@ async def check_pause(
# Check context-level pause flags (legacy/alternative methods)
pause_requested = ctx.input_data.get("pause_requested", False)
if not pause_requested:
try:
pause_requested = ctx.buffer.read("pause_requested") or False
except (PermissionError, KeyError):
pause_requested = False
if pause_requested:
completed = iteration
logger.info(f"⏸ Pausing after {completed} iteration(s) completed (context-level)")
@@ -9,10 +9,10 @@ from __future__ import annotations
import logging
import time
from framework.graph.conversation import NodeConversation
from framework.graph.event_loop.types import HookContext
from framework.graph.node import NodeContext
from framework.runtime.event_bus import EventBus
from framework.agent_loop.conversation import NodeConversation
from framework.agent_loop.internals.types import HookContext
from framework.host.event_bus import EventBus
from framework.orchestrator.node import NodeContext
logger = logging.getLogger(__name__)
@@ -45,14 +45,14 @@ async def generate_action_plan(
Runs as a fire-and-forget task so it never blocks the main loop.
"""
try:
system_prompt = ctx.node_spec.system_prompt or ""
system_prompt = ctx.agent_spec.system_prompt or ""
# Trim to keep the prompt small
prompt_summary = system_prompt[:500]
if len(system_prompt) > 500:
prompt_summary += "..."
tool_names = [t.name for t in ctx.available_tools]
output_keys = ctx.node_spec.output_keys or []
output_keys = ctx.agent_spec.output_keys or []
prompt = (
f'You are about to work on a task as node "{node_id}".\n\n'
@@ -177,7 +177,7 @@ async def publish_context_usage(
if not event_bus:
return
from framework.runtime.event_bus import AgentEvent, EventType
from framework.host.event_bus import AgentEvent, EventType
estimated = conversation.estimate_tokens()
max_tokens = conversation._max_context_tokens
@@ -185,8 +185,8 @@ async def publish_context_usage(
await event_bus.publish(
AgentEvent(
type=EventType.CONTEXT_USAGE_UPDATED,
stream_id=ctx.stream_id or ctx.node_id,
node_id=ctx.node_id,
stream_id=ctx.stream_id or ctx.agent_id,
node_id=ctx.agent_id,
data={
"usage_ratio": round(ratio, 4),
"usage_pct": round(ratio * 100),
@@ -319,9 +319,7 @@ async def publish_output_key_set(
execution_id: str = "",
) -> None:
if event_bus:
await event_bus.emit_output_key_set(
stream_id=stream_id, node_id=node_id, key=key, execution_id=execution_id
)
pass
async def run_hooks(
@@ -5,9 +5,9 @@ from __future__ import annotations
import logging
from collections.abc import Callable
from framework.graph.conversation import NodeConversation
from framework.graph.event_loop.types import JudgeProtocol, JudgeVerdict, OutputAccumulator
from framework.graph.node import NodeContext
from framework.agent_loop.conversation import NodeConversation
from framework.agent_loop.internals.types import JudgeProtocol, JudgeVerdict, OutputAccumulator
from framework.orchestrator.node import NodeContext
logger = logging.getLogger(__name__)
@@ -31,14 +31,10 @@ class SubagentJudge:
if remaining <= 3:
urgency = (
f"URGENT: Only {remaining} iterations left. "
f"Stop all other work and call set_output NOW for: {missing}"
f"URGENT: Only {remaining} iterations left. Stop all other work and call set_output NOW for: {missing}"
)
elif remaining <= self._max_iterations // 2:
urgency = (
f"WARNING: {remaining} iterations remaining. "
f"You must call set_output for: {missing}"
)
urgency = f"WARNING: {remaining} iterations remaining. You must call set_output for: {missing}"
else:
urgency = f"Missing output keys: {missing}. Use set_output to provide them."
@@ -79,7 +75,7 @@ async def judge_turn(
if mark_complete_flag:
return JudgeVerdict(action="ACCEPT")
if ctx.node_spec.skip_judge:
if ctx.agent_spec.skip_judge:
return JudgeVerdict(action="RETRY") # feedback=None → not logged
# --- Level 1: custom judge -----------------------------------------
@@ -92,9 +88,9 @@ async def judge_turn(
"accumulator": accumulator,
"iteration": iteration,
"conversation_summary": conversation.export_summary(),
"output_keys": ctx.node_spec.output_keys,
"output_keys": ctx.agent_spec.output_keys,
"missing_keys": get_missing_output_keys_fn(
accumulator, ctx.node_spec.output_keys, ctx.node_spec.nullable_output_keys
accumulator, ctx.agent_spec.output_keys, ctx.agent_spec.nullable_output_keys
),
}
verdict = await judge.evaluate(context)
@@ -109,9 +105,7 @@ async def judge_turn(
if tool_results:
return JudgeVerdict(action="RETRY") # feedback=None → not logged
missing = get_missing_output_keys_fn(
accumulator, ctx.node_spec.output_keys, ctx.node_spec.nullable_output_keys
)
missing = get_missing_output_keys_fn(accumulator, ctx.agent_spec.output_keys, ctx.agent_spec.nullable_output_keys)
if missing:
return JudgeVerdict(
@@ -124,8 +118,8 @@ async def judge_turn(
# All output keys present — run safety checks before accepting.
output_keys = ctx.node_spec.output_keys or []
nullable_keys = set(ctx.node_spec.nullable_output_keys or [])
output_keys = ctx.agent_spec.output_keys or []
nullable_keys = set(ctx.agent_spec.nullable_output_keys or [])
# All-nullable with nothing set → node produced nothing useful.
all_nullable = output_keys and nullable_keys >= set(output_keys)
@@ -133,36 +127,19 @@ async def judge_turn(
if all_nullable and none_set:
return JudgeVerdict(
action="RETRY",
feedback=(
f"No output keys have been set yet. "
f"Use set_output to set at least one of: {output_keys}"
),
)
# Queen with no output keys → continuous interaction node.
# Inject tool-use pressure instead of auto-accepting.
if not output_keys and ctx.supports_direct_user_io:
return JudgeVerdict(
action="RETRY",
feedback=(
"STOP describing what you will do. "
"You have FULL access to all tools — file creation, "
"shell commands, MCP tools — and you CAN call them "
"directly in your response. Respond ONLY with tool "
"calls, no prose. Execute the task now."
),
feedback=(f"No output keys have been set yet. Use set_output to set at least one of: {output_keys}"),
)
# Level 2b: conversation-aware quality check (if success_criteria set)
if ctx.node_spec.success_criteria and ctx.llm:
from framework.graph.conversation_judge import evaluate_phase_completion
if ctx.agent_spec.success_criteria and ctx.llm:
from framework.orchestrator.conversation_judge import evaluate_phase_completion
verdict = await evaluate_phase_completion(
llm=ctx.llm,
conversation=conversation,
phase_name=ctx.node_spec.name,
phase_description=ctx.node_spec.description,
success_criteria=ctx.node_spec.success_criteria,
phase_name=ctx.agent_spec.name,
phase_description=ctx.agent_spec.description,
success_criteria=ctx.agent_spec.success_criteria,
accumulator_state=accumulator.to_dict(),
max_context_tokens=max_context_tokens,
)
@@ -15,6 +15,82 @@ from typing import Any
from framework.llm.provider import Tool, ToolResult
def sanitize_ask_user_inputs(
raw_question: Any,
raw_options: Any,
) -> tuple[str, list[str] | None]:
"""Self-heal a malformed ``ask_user`` tool call.
Some model families (notably when the system prompt teaches them
XML-ish scratchpad tags like ``<relationship>...</relationship>``)
carry that style into tool arguments and produce calls like::
ask_user({
"question": "What now?</question>\\n_OPTIONS: [\\"A\\", \\"B\\"]"
})
Symptoms:
- The chat UI renders ``</question>`` and ``_OPTIONS: [...]`` as
literal text in the question bubble.
- No buttons appear because the real ``options`` parameter is
empty.
This function:
- Strips leading/trailing whitespace.
- Removes a trailing ``</question>`` (with optional preceding
whitespace) from the question text.
- Detects an inline ``_OPTIONS:``, ``OPTIONS:``, or ``options:``
line followed by a JSON array, parses it, and returns the
recovered list as the second element.
- Removes the parsed line from the returned question text.
Returns ``(cleaned_question, recovered_options_or_None)``. The
caller should treat the recovered list as a fallback only when
the model did not also supply a real ``options`` array.
"""
import json as _json
import re as _re
if raw_question is None:
return "", None
q = str(raw_question)
# Strip a stray </question> tag (case-insensitive, with optional
# preceding whitespace) anywhere in the string. This is the most
# common failure mode and never represents valid content.
q = _re.sub(r"\s*</\s*question\s*>\s*", "\n", q, flags=_re.IGNORECASE)
# Look for an inline options line. Match _OPTIONS, OPTIONS, options
# (with or without leading underscore), followed by ':' or '=', then
# a JSON array on the same line OR on the next line.
inline_options_re = _re.compile(
r"(?im)^\s*_?options\s*[:=]\s*(\[.*?\])\s*$",
_re.DOTALL,
)
recovered: list[str] | None = None
match = inline_options_re.search(q)
if match is not None:
try:
parsed = _json.loads(match.group(1))
if isinstance(parsed, list):
cleaned = [str(o).strip() for o in parsed if str(o).strip()]
if 1 <= len(cleaned) <= 8:
recovered = cleaned
except (ValueError, TypeError):
pass
if recovered is not None:
# Remove the parsed line so it doesn't leak into the
# rendered question text.
q = inline_options_re.sub("", q, count=1)
# Strip any final whitespace / leftover blank lines from the
# question after removals.
q = _re.sub(r"\n{3,}", "\n\n", q).strip()
return q, recovered
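# Usage sketch (not part of the diff) showing the documented failure mode
# being self-healed; assumes only the helper above:
#
#     q, opts = sanitize_ask_user_inputs(
#         'What now?</question>\n_OPTIONS: ["A", "B"]', None
#     )
#     assert q == "What now?"
#     assert opts == ["A", "B"]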
def build_ask_user_tool() -> Tool:
"""Build the synthetic ask_user tool for explicit user-input requests.
@@ -28,7 +104,20 @@ def build_ask_user_tool() -> Tool:
"You MUST call this tool whenever you need the user's response. "
"Always call it after greeting the user, asking a question, or "
"requesting approval. Do NOT call it for status updates or "
"summaries that don't require a response. "
"summaries that don't require a response.\n\n"
"STRUCTURE RULES (CRITICAL):\n"
"- The 'question' field is PLAIN TEXT shown to the user. Do NOT "
"include XML tags, pseudo-tags like </question>, or option lists "
"in the question string. The UI does not parse them — they "
"render as raw text and look broken.\n"
"- The 'options' parameter is the ONLY way to render buttons. "
"If you want buttons, put them in the 'options' array, not in "
"the question string. Do NOT write 'OPTIONS: [...]', "
"'_options: [...]', or any inline list inside 'question'.\n"
"- The question text must read as a single clean prompt with "
"no markup. Example: 'What would you like to do?' — not "
"'What would you like to do?</question>'.\n\n"
"USAGE:\n"
"Always include 2-3 predefined options. The UI automatically "
"appends an 'Other' free-text input after your options, so NEVER "
"include catch-all options like 'Custom idea', 'Something else', "
@@ -39,11 +128,14 @@ def build_ask_user_tool() -> Tool:
"free-text input. "
"The ONLY exception: omit options when the question demands a "
"free-form answer the user must type out (e.g. 'Describe your "
"agent idea', 'Paste the error message'). "
"agent idea', 'Paste the error message').\n\n"
"CORRECT EXAMPLE:\n"
'{"question": "What would you like to do?", "options": '
'["Build a new agent", "Modify existing agent", "Run tests"]} '
"Free-form example: "
'{"question": "Describe the agent you want to build."}'
'["Build a new agent", "Modify existing agent", "Run tests"]}\n\n'
"FREE-FORM EXAMPLE:\n"
'{"question": "Describe the agent you want to build."}\n\n'
"WRONG (do NOT do this — buttons will not render):\n"
'{"question": "What now?</question>\\n_OPTIONS: [\\"A\\", \\"B\\"]"}'
),
parameters={
"type": "object",
@@ -106,9 +198,7 @@ def build_ask_user_multiple_tool() -> Tool:
"properties": {
"id": {
"type": "string",
"description": (
"Short identifier for this question (used in the response)."
),
"description": ("Short identifier for this question (used in the response)."),
},
"prompt": {
"type": "string",
@@ -164,10 +254,7 @@ def build_set_output_tool(output_keys: list[str] | None) -> Tool | None:
},
"value": {
"type": "string",
"description": (
"The output value — a brief note, count, status, "
"or data filename reference."
),
"description": ("The output value — a brief note, count, status, or data filename reference."),
},
},
"required": ["key", "value"],
@@ -191,9 +278,7 @@ def build_escalate_tool() -> Tool:
"properties": {
"reason": {
"type": "string",
"description": (
"Short reason for escalation (e.g. 'Tool repeatedly failing')."
),
"description": ("Short reason for escalation (e.g. 'Tool repeatedly failing')."),
},
"context": {
"type": "string",
@@ -205,117 +290,90 @@ def build_escalate_tool() -> Tool:
)
def build_delegate_tool(sub_agents: list[str], node_registry: dict[str, Any]) -> Tool | None:
"""Build the synthetic delegate_to_sub_agent tool for subagent invocation.
Args:
sub_agents: List of node IDs that can be invoked as subagents.
node_registry: Map of node_id -> NodeSpec for looking up subagent descriptions.
Returns:
Tool definition if sub_agents is non-empty, None otherwise.
"""
if not sub_agents:
return None
agent_descriptions = []
for agent_id in sub_agents:
spec = node_registry.get(agent_id)
if spec:
desc = getattr(spec, "description", "(no description)")
agent_descriptions.append(f"- {agent_id}: {desc}")
else:
agent_descriptions.append(f"- {agent_id}: (not found in registry)")
return Tool(
name="delegate_to_sub_agent",
description=(
"Delegate a task to a specialized sub-agent. The sub-agent runs "
"autonomously with read-only access to current memory and returns "
"its result. Use this to parallelize work or leverage specialized capabilities.\n\n"
"Available sub-agents:\n" + "\n".join(agent_descriptions)
),
parameters={
"type": "object",
"properties": {
"agent_id": {
"type": "string",
"description": f"The sub-agent to invoke. Must be one of: {sub_agents}",
"enum": sub_agents,
},
"task": {
"type": "string",
"description": (
"The task description for the sub-agent to execute. "
"Be specific about what you want the sub-agent to do and "
"what information to return."
),
},
},
"required": ["agent_id", "task"],
},
)
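# Illustrative call matching the schema above (agent id and task text are
# assumptions):
#
#     delegate_to_sub_agent({
#         "agent_id": "researcher",
#         "task": "Collect the top 10 results for topic X and return title + URL for each.",
#     })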
def build_report_to_parent_tool() -> Tool:
"""Build the synthetic report_to_parent tool for sub-agent progress reports.
"""Build the synthetic ``report_to_parent`` tool.
Sub-agents call this to send one-way progress updates, partial findings,
or status reports to the parent node (and external observers via event bus)
without blocking execution.
Parallel workers (those spawned by the overseer via
``run_parallel_workers``) call this to send a structured report back
to the overseer queen when they have finished their task. Calling
``report_to_parent`` terminates the worker's loop cleanly -- do not
call other tools after it.
When ``wait_for_response`` is True, the sub-agent blocks until the parent
relays the user's response — used for escalation (e.g. login pages, CAPTCHAs).
When ``mark_complete`` is True, the sub-agent terminates immediately after
sending the report -- no need to call set_output for each output key.
The overseer receives these as ``SUBAGENT_REPORT`` events and
aggregates them into a single summary for the user.
"""
return Tool(
name="report_to_parent",
description=(
"Send a report to the parent agent. By default this is fire-and-forget: "
"the parent receives the report but does not respond. "
"Set wait_for_response=true to BLOCK until the user replies — use this "
"when you need human intervention (e.g. login pages, CAPTCHAs, "
"authentication walls). The user's response is returned as the tool result. "
"Set mark_complete=true to finish your task and terminate immediately "
"after sending the report — use this when your findings are in the "
"message/data fields and you don't need to call set_output."
"Send a structured report back to the parent overseer and "
"terminate. Call this when you have finished your task "
"(success, partial, or failed) or cannot make further "
"progress. Your loop ends after this call -- do not call any "
"other tool afterwards. The overseer reads the summary + "
"data fields and aggregates them into a user-facing response."
),
parameters={
"type": "object",
"properties": {
"message": {
"status": {
"type": "string",
"description": "A human-readable status or progress message.",
"enum": ["success", "partial", "failed"],
"description": (
"Overall outcome. 'success' = task complete. "
"'partial' = some progress but incomplete. "
"'failed' = could not make progress."
),
},
"summary": {
"type": "string",
"description": (
"One-paragraph narrative for the overseer. What "
"you did, what you found, and any notable issues."
),
},
"data": {
"type": "object",
"description": "Optional structured data to include with the report.",
},
"wait_for_response": {
"type": "boolean",
"description": (
"If true, block execution until the user responds. "
"Use for escalation scenarios requiring human intervention."
"Optional structured payload (rows fetched, IDs "
"processed, files written, etc.) that the "
"overseer can merge into its final summary."
),
"default": False,
},
"mark_complete": {
"type": "boolean",
"description": (
"If true, terminate the sub-agent immediately after sending "
"this report. The report message and data are delivered to the "
"parent as the final result. No set_output calls are needed."
),
"default": False,
},
},
"required": ["message"],
"required": ["status", "summary"],
},
)
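# Illustrative final call from a worker, matching the schema above
# (field values are assumptions):
#
#     report_to_parent({
#         "status": "partial",
#         "summary": "Fetched 40/120 rows before hitting rate limits; wrote rows.csv.",
#         "data": {"rows_fetched": 40, "files_written": ["rows.csv"]},
#     })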
def handle_report_to_parent(tool_input: dict[str, Any]) -> ToolResult:
"""Normalise + validate a ``report_to_parent`` tool call.
Returns a ``ToolResult`` with the acknowledgement text the LLM sees;
the side effects (record on Worker, emit SUBAGENT_REPORT, terminate
loop) are performed by ``AgentLoop`` after this helper returns.
"""
status = str(tool_input.get("status", "success")).strip().lower()
if status not in ("success", "partial", "failed"):
status = "success"
summary = str(tool_input.get("summary", "")).strip()
if not summary:
summary = f"(worker returned {status} with no summary)"
data = tool_input.get("data") or {}
if not isinstance(data, dict):
data = {"value": data}
# Store the normalised payload back on the input dict so the caller
# can pick it up without re-parsing.
tool_input["_normalised"] = {
"status": status,
"summary": summary,
"data": data,
}
return ToolResult(
tool_use_id=tool_input.get("tool_use_id", ""),
content=(f"Report delivered to overseer (status={status}). This worker will terminate now."),
)
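# Normalisation contract sketch (assumes only the helper above):
#
#     ti = {"status": "BROKEN", "summary": "", "data": "raw"}
#     handle_report_to_parent(ti)
#     # ti["_normalised"] == {
#     #     "status": "success",                # unknown statuses fall back to success
#     #     "summary": "(worker returned success with no summary)",
#     #     "data": {"value": "raw"},           # non-dict payloads are wrapped
#     # }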
def handle_set_output(
tool_input: dict[str, Any],
output_keys: list[str] | None,
@@ -215,14 +215,30 @@ def truncate_tool_result(
"""Persist tool result to file and optionally truncate for context.
When *spillover_dir* is configured, EVERY non-error tool result is
saved to a file (short filename like ``web_search_1.txt``). A
``[Saved to '...']`` annotation is appended so the reference
survives pruning and compaction.
written to disk for debugging. The LLM-visible content is then
shaped to avoid a **poison pattern** that we traced on 2026-04-15
through a gemini-3.1-pro-preview-customtools queen session: the prior format
appended ``\\n\\n[Saved to '/abs/path/file.txt']`` after every
small result, and frontier pattern-matching models (gemini 3.x in
particular) learned to autocomplete the `[Saved to '...']` trailer
in their own assistant turns, eventually degenerating into echoing
the whole tool result instead of deciding what to do next. See
``session_20260415_100751_d49f4c28/conversations/parts/0000000056.json``
for the terminal case where the model's "text" output was the full
tool_result JSON.
- Small results (≤ limit): full content kept + file annotation
- Large results (> limit): preview + file reference
- Errors: pass through unchanged
- read_file/load_data results: truncate with pagination hint (no re-spill)
Rules after the fix:
- **Small results (≤ limit):** pass content through unchanged. No
trailer. No annotation. The full content is already in the
message; the disk copy is for debugging only.
- **Large results (> limit):** preview + file reference, but
formatted as plain prose instead of a bracketed ``[...]``
pattern. Structured JSON metadata ("_saved_to") is embedded
inside the JSON body when the preview is JSON-shaped so the
model can locate the full file without seeing a mimicry-prone
bracket token outside the body.
- **Errors:** pass through unchanged.
- **read_file results:** truncate with pagination hint (no re-spill).
"""
limit = max_tool_result_chars
@@ -230,9 +246,9 @@ def truncate_tool_result(
if result.is_error:
return result
# read_file/load_data reads FROM spilled files — never re-spill (circular).
# read_file reads FROM spilled files — never re-spill (circular).
# Just truncate with a pagination hint if the result is too large.
if tool_name in ("load_data", "read_file"):
if tool_name == "read_file":
if limit <= 0 or len(result.content) <= limit:
return result # Small result — pass through as-is
# Large result — truncate with smart preview
@@ -252,18 +268,19 @@ def truncate_tool_result(
else:
preview_block = result.content[:PREVIEW_CAP] + "..."
# Prose header (no brackets).
header = (
f"[{tool_name} result: {len(result.content):,} chars — "
f"too large for context. Use offset_bytes/limit_bytes "
f"parameters to read smaller chunks.]"
f"Tool `{tool_name}` returned {len(result.content):,} characters "
f"(too large for context). Use offset_bytes / limit_bytes "
f"parameters to paginate smaller chunks."
)
if metadata_str:
header += f"\n\nData structure:\n{metadata_str}"
header += (
"\n\nWARNING: This is an INCOMPLETE preview. Do NOT draw conclusions or counts from it."
"\n\nWARNING: the preview below is a SAMPLE only — do NOT draw counts, totals, or conclusions from it."
)
truncated = f"{header}\n\nPreview (small sample only):\n{preview_block}"
truncated = f"{header}\n\nPreview (truncated):\n{preview_block}"
logger.info(
"%s result truncated: %d%d chars (use offset/limit to paginate)",
tool_name,
@@ -301,7 +318,10 @@ def truncate_tool_result(
if limit > 0 and len(result.content) > limit:
# Large result: build a small, metadata-rich preview so the
# LLM cannot mistake it for the complete dataset.
# LLM cannot mistake it for the complete dataset. The
# preview is introduced as plain prose (no bracketed
# ``[Result from …]`` token) so it doesn't prime the model
# to autocomplete the same pattern in its next turn.
PREVIEW_CAP = 5000
# Extract structural metadata (array lengths, key names)
@@ -316,21 +336,21 @@ def truncate_tool_result(
else:
preview_block = result.content[:PREVIEW_CAP] + "..."
# Assemble header with structural info + warning
# Prose header (no brackets). Absolute path still surfaced
# so the agent can read the full file, but it's framed as
# a sentence, not a bracketed trailer.
header = (
f"[Result from {tool_name}: {len(result.content):,} chars — "
f"too large for context, saved to '{abs_path}'.]\n"
f"Tool `{tool_name}` returned {len(result.content):,} characters "
f"(too large for context). Full result saved at: {abs_path}\n"
f"Read the complete data with read_file(path='{abs_path}').\n"
)
if metadata_str:
header += f"\nData structure:\n{metadata_str}"
header += f"\nData structure:\n{metadata_str}\n"
header += (
f"\n\nWARNING: The preview below is INCOMPLETE. "
f"Do NOT draw conclusions or counts from it. "
f"Use read_file(path='{abs_path}') to read the "
f"full data before analysis."
"\nWARNING: the preview below is a SAMPLE only — do NOT draw counts, totals, or conclusions from it."
)
content = f"{header}\n\nPreview (small sample only):\n{preview_block}"
content = f"{header}\n\nPreview (truncated):\n{preview_block}"
logger.info(
"Tool result spilled to file: %s (%d chars → %s)",
tool_name,
@@ -338,10 +358,22 @@ def truncate_tool_result(
abs_path,
)
else:
# Small result: keep full content + annotation with absolute path
content = f"{result.content}\n\n[Saved to '{abs_path}']"
# Small result: pass content through UNCHANGED.
#
# The prior design appended `\n\n[Saved to '/abs/path']`
# after every small result so the agent could re-read the
# file later. But (a) the full content is already in the
# message, so there's nothing to re-read; (b) the
# `[Saved to '…']` trailer is a repeating token pattern
# that frontier pattern-matching models autocomplete into
# their own assistant turns, eventually echoing whole tool
# results as "text" instead of making decisions. Dropping
# the trailer entirely kills the poison pattern. Spilled
# files on disk still exist for debugging — they just
# aren't advertised in the LLM-visible message.
content = result.content
logger.info(
"Tool result saved to file: %s (%d chars → %s)",
"Tool result saved to file: %s (%d chars → %s, no trailer)",
tool_name,
len(result.content),
filename,
@@ -373,15 +405,16 @@ def truncate_tool_result(
else:
preview_block = result.content[:PREVIEW_CAP] + "..."
# Prose header (no brackets) — see docstring for the poison
# pattern that the bracket format triggered.
header = (
f"[Result from {tool_name}: {len(result.content):,} chars — "
f"truncated to fit context budget.]"
f"Tool `{tool_name}` returned {len(result.content):,} characters "
f"(truncated to fit context budget — no spillover dir configured)."
)
if metadata_str:
header += f"\n\nData structure:\n{metadata_str}"
header += (
"\n\nWARNING: This is an INCOMPLETE preview. "
"Do NOT draw conclusions or counts from the preview alone."
"\n\nWARNING: the preview below is a SAMPLE only — do NOT draw counts, totals, or conclusions from it."
)
truncated = f"{header}\n\n{preview_block}"
@@ -423,7 +456,7 @@ async def execute_tool(
)
skill_dirs = skill_dirs or []
skill_read_tools = {"view_file", "load_data", "read_file"}
skill_read_tools = {"view_file", "read_file"}
if tc.tool_name in skill_read_tools and skill_dirs:
raw_path = tc.tool_input.get("path", "")
if raw_path:
@@ -467,6 +500,22 @@ async def execute_tool(
result = await _run()
except TimeoutError:
logger.warning("Tool '%s' timed out after %.0fs", tc.tool_name, timeout)
# asyncio.wait_for cancels the awaiting coroutine, but the sync
# executor running inside run_in_executor keeps going — and so
# does any MCP subprocess it is blocked on. Reach through to the
# owning MCPClient and force-disconnect it so the subprocess is
# torn down. Next call_tool triggers a reconnect. Without this
# the executor thread and MCP child leak on every timeout.
kill_for_tool = getattr(tool_executor, "kill_for_tool", None)
if callable(kill_for_tool):
try:
await asyncio.to_thread(kill_for_tool, tc.tool_name)
except Exception as exc: # defensive — never let cleanup crash the loop
logger.warning(
"kill_for_tool('%s') raised during timeout handling: %s",
tc.tool_name,
exc,
)
return ToolResult(
tool_use_id=tc.tool_use_id,
content=(
@@ -0,0 +1,284 @@
"""Shared types and state containers for the event loop package."""
from __future__ import annotations
import asyncio
import json
import logging
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Literal, Protocol, runtime_checkable
from framework.agent_loop.conversation import (
ConversationStore,
)
logger = logging.getLogger(__name__)
@dataclass
class TriggerEvent:
"""A framework-level trigger signal (timer tick or webhook hit)."""
trigger_type: str
source_id: str
payload: dict[str, Any] = field(default_factory=dict)
timestamp: float = field(default_factory=time.time)
@dataclass
class JudgeVerdict:
"""Result of judge evaluation for the event loop."""
action: Literal["ACCEPT", "RETRY", "ESCALATE"]
# None = no evaluation happened (skip_judge, tool-continue); not logged.
# "" = evaluated but no feedback; logged with default text.
# "..." = evaluated with feedback; logged as-is.
feedback: str | None = None
@runtime_checkable
class JudgeProtocol(Protocol):
"""Protocol for event-loop judges."""
async def evaluate(self, context: dict[str, Any]) -> JudgeVerdict: ...
@dataclass
class LoopConfig:
"""Configuration for the event loop."""
max_iterations: int = 50
# 0 (or any non-positive value) disables the per-turn hard limit,
# letting a single assistant turn fan out arbitrarily many tool
# calls. Models like Gemini 3.1 Pro routinely emit 40-80 tool
# calls in one turn during browser exploration; capping them
# strands work half-finished and makes the next turn repeat the
# discarded calls, which is worse than just running them.
max_tool_calls_per_turn: int = 0
judge_every_n_turns: int = 1
stall_detection_threshold: int = 3
stall_similarity_threshold: float = 0.85
max_context_tokens: int = 32_000
# Headroom reserved for the NEXT turn's input + output so that
# proactive compaction always finishes before the hard context limit
# is hit mid-stream. Scaled to match Claude Code's 13k-buffer-on-
# 200k-window ratio (~6.5%) applied to hive's default 32k window,
# with extra margin because hive's token estimator is char-based
# and less tight than Anthropic's own counting. Override via
# LoopConfig for larger windows.
compaction_buffer_tokens: int = 8_000
# Warning is emitted one buffer earlier so the user/telemetry gets
# a "we're close" signal without triggering a compaction pass.
compaction_warning_buffer_tokens: int = 12_000
store_prefix: str = ""
# Overflow margin for max_tool_calls_per_turn. When the limit is
# enabled (>0), tool calls are only discarded when the count
# exceeds max_tool_calls_per_turn * (1 + margin). Ignored when
# max_tool_calls_per_turn is 0.
tool_call_overflow_margin: float = 0.5
# Tool result context management.
max_tool_result_chars: int = 30_000
spillover_dir: str | None = None
# Image retention in conversation history.
# Screenshots from ``browser_screenshot`` are inlined as base64
# data URLs inside message ``image_content``. Each full-page
# screenshot costs ~250k tokens when the provider counts the
# base64 as text (gemini, most non-Anthropic providers). Four
# screenshots in one conversation push gemini's 1M context over
# the limit and the model starts emitting garbage.
#
# The framework strips image_content from older messages after
# every tool-result batch, keeping only the most recent N
# screenshots. The text metadata on evicted messages (url, size,
# scale hints) is preserved so the agent can still reason about
# "I took a screenshot at step N that showed the compose modal".
# Raise this only if you genuinely need longer visual history AND
# you know your provider is using native image tokenization.
max_retained_screenshots: int = 2
# set_output value spilling.
max_output_value_chars: int = 2_000
# Stream retry.
max_stream_retries: int = 5
stream_retry_backoff_base: float = 2.0
stream_retry_max_delay: float = 60.0
# Persistent retry for capacity-class errors (429, 529, overloaded).
# Unlike the bounded retry above, these keep trying until the wall-clock
# budget below is exhausted — modelled after claude-code's withRetry.
# The loop still publishes a retry event each attempt so the UI can
# see progress. Set to 0 to disable and fall back to bounded retry.
capacity_retry_max_seconds: float = 600.0
capacity_retry_max_delay: float = 60.0
# Tool doom loop detection.
tool_doom_loop_threshold: int = 3
# Client-facing auto-block grace period.
cf_grace_turns: int = 1
# Worker auto-escalation: text-only turns before escalating to queen.
worker_escalation_grace_turns: int = 1
tool_doom_loop_enabled: bool = True
# Silent worker: consecutive tool-only turns (no user-facing text)
# before injecting a nudge to communicate progress.
silent_tool_streak_threshold: int = 5
# Per-tool-call timeout.
tool_call_timeout_seconds: float = 60.0
# LLM stream inactivity watchdog. If no stream event (delta, tool call,
# finish) arrives within this many seconds, the stream task is cancelled
# and a transient error is raised so the retry loop can back off and
# reconnect. Prevents agents from hanging forever on a silently dead
# HTTP connection (no provider heartbeat, no exception, just silence).
# Set to 0 to disable.
llm_stream_inactivity_timeout_seconds: float = 120.0
# Subagent delegation timeout (wall-clock max).
subagent_timeout_seconds: float = 3600.0
# Subagent inactivity timeout - only timeout if no activity for this duration.
# This resets whenever the subagent makes progress (tool calls, LLM responses).
# Set to 0 to use only the wall-clock timeout.
subagent_inactivity_timeout_seconds: float = 300.0
# Lifecycle hooks.
hooks: dict[str, list] | None = None
def __post_init__(self) -> None:
if self.hooks is None:
object.__setattr__(self, "hooks", {})
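# Hypothetical override sketch; every value below is an illustrative
# assumption, not a recommended default:
#
#     config = LoopConfig(
#         max_context_tokens=200_000,
#         compaction_buffer_tokens=13_000,   # scale headroom with the window
#         spillover_dir="/tmp/run/data",
#         max_retained_screenshots=4,        # only with native image tokenization
#     )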
@dataclass
class HookContext:
"""Context passed to every lifecycle hook."""
event: str
trigger: str | None
system_prompt: str
@dataclass
class HookResult:
"""What a hook may return to modify node state."""
system_prompt: str | None = None
inject: str | None = None
@dataclass
class OutputAccumulator:
"""Accumulates output key-value pairs with optional write-through persistence."""
values: dict[str, Any] = field(default_factory=dict)
store: ConversationStore | None = None
spillover_dir: str | None = None
max_value_chars: int = 0
run_id: str | None = None
async def set(self, key: str, value: Any) -> None:
"""Set a key-value pair, auto-spilling large values to files."""
value = await self._auto_spill(key, value)
self.values[key] = value
if self.store:
cursor = await self.store.read_cursor() or {}
outputs = cursor.get("outputs", {})
outputs[key] = value
cursor["outputs"] = outputs
await self.store.write_cursor(cursor)
async def _auto_spill(self, key: str, value: Any) -> Any:
"""Save large values to a file and return a reference string.
Runs the JSON serialization and file write on a worker thread
so they don't block the asyncio event loop. For a 100k-char
dict this used to freeze every concurrent tool call for ~50ms
of ``json.dumps(indent=2)`` + a sync disk write; for bigger
payloads or slow storage (NFS, networked FS) the freeze was
proportionally worse.
"""
if self.max_value_chars <= 0 or not self.spillover_dir:
return value
# Cheap size probe first — if the value is already a short
# string we can skip both the JSON round-trip and the thread
# hop entirely.
if isinstance(value, str) and len(value) <= self.max_value_chars:
return value
def _spill_sync() -> Any:
# JSON serialization for size check (only for non-strings).
if isinstance(value, str):
val_str = value
else:
val_str = json.dumps(value, ensure_ascii=False)
if len(val_str) <= self.max_value_chars:
return value
spill_path = Path(self.spillover_dir)
spill_path.mkdir(parents=True, exist_ok=True)
ext = ".json" if isinstance(value, (dict, list)) else ".txt"
filename = f"output_{key}{ext}"
write_content = (
json.dumps(value, indent=2, ensure_ascii=False) if isinstance(value, (dict, list)) else str(value)
)
file_path = spill_path / filename
file_path.write_text(write_content, encoding="utf-8")
file_size = file_path.stat().st_size
logger.info(
"set_output value auto-spilled: key=%s, %d chars -> %s (%d bytes)",
key,
len(val_str),
filename,
file_size,
)
# Use absolute path so parent agents can find files from subagents.
#
# Prose format (no brackets) — same fix as tool_result_handler:
# frontier pattern-matching models autocomplete bracketed
# `[Saved to '...']` trailers into their own assistant turns,
# eventually degenerating into echoing the file path as text.
# Keep the path accessible but frame it as plain prose.
abs_path = str(file_path.resolve())
return (
f"Output saved at: {abs_path} ({file_size:,} bytes). "
f"Read the full data with read_file(path='{abs_path}')."
)
return await asyncio.to_thread(_spill_sync)
def get(self, key: str) -> Any | None:
return self.values.get(key)
def to_dict(self) -> dict[str, Any]:
return dict(self.values)
def has_all_keys(self, required: list[str]) -> bool:
return all(key in self.values and self.values[key] is not None for key in required)
@classmethod
async def restore(
cls,
store: ConversationStore,
run_id: str | None = None,
) -> OutputAccumulator:
cursor = await store.read_cursor()
values = cursor.get("outputs", {}) if cursor else {}
return cls(values=values, store=store, run_id=run_id)
__all__ = [
"HookContext",
"HookResult",
"JudgeProtocol",
"JudgeVerdict",
"LoopConfig",
"OutputAccumulator",
"TriggerEvent",
]
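# Usage sketch for the auto-spill path (directory and key are assumptions):
#
#     acc = OutputAccumulator(spillover_dir="/tmp/run/data", max_value_chars=2_000)
#     await acc.set("report", "x" * 10_000)
#     acc.get("report")
#     # -> "Output saved at: /tmp/run/data/output_report.txt (10,000 bytes). "
#     #    "Read the full data with read_file(path='/tmp/run/data/output_report.txt')."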
@@ -0,0 +1,98 @@
"""Prompt composition for agent loops.
Builds canonical system prompts from AgentContext fields.
Extracted from the former orchestrator/prompting module.
"""
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime
from typing import Any
@dataclass(frozen=True)
class PromptSpec:
identity_prompt: str = ""
focus_prompt: str = ""
narrative: str = ""
accounts_prompt: str = ""
skills_catalog_prompt: str = ""
protocols_prompt: str = ""
memory_prompt: str = ""
agent_type: str = "event_loop"
output_keys: tuple[str, ...] = ()
def stamp_prompt_datetime(prompt: str) -> str:
local = datetime.now().astimezone()
stamp = f"Current date and time: {local.strftime('%Y-%m-%d %H:%M %Z (UTC%z)')}"
return f"{prompt}\n\n{stamp}" if prompt else stamp
def build_prompt_spec(
ctx: Any,
*,
focus_prompt: str | None = None,
narrative: str | None = None,
memory_prompt: str | None = None,
) -> PromptSpec:
from framework.skills.tool_gating import augment_catalog_for_tools
resolved_memory = memory_prompt
if resolved_memory is None:
resolved_memory = getattr(ctx, "memory_prompt", "") or ""
dynamic = getattr(ctx, "dynamic_memory_provider", None)
if dynamic is not None:
try:
resolved_memory = dynamic() or ""
except Exception:
resolved_memory = getattr(ctx, "memory_prompt", "") or ""
# Tool-gated pre-activation: inject full body of default skills whose
# trigger tools are present in this agent's tool list (e.g. browser_*
# pulls in hive.browser-automation). Keeps non-browser agents lean.
tool_names = [getattr(t, "name", "") for t in (getattr(ctx, "available_tools", None) or [])]
skills_catalog_prompt = augment_catalog_for_tools(ctx.skills_catalog_prompt or "", tool_names)
return PromptSpec(
identity_prompt=ctx.identity_prompt or "",
focus_prompt=focus_prompt if focus_prompt is not None else (ctx.agent_spec.system_prompt or ""),
narrative=narrative if narrative is not None else (ctx.narrative or ""),
accounts_prompt=ctx.accounts_prompt or "",
skills_catalog_prompt=skills_catalog_prompt,
protocols_prompt=ctx.protocols_prompt or "",
memory_prompt=resolved_memory,
agent_type=ctx.agent_spec.agent_type,
output_keys=tuple(ctx.agent_spec.output_keys or ()),
)
def build_system_prompt(spec: PromptSpec) -> str:
parts: list[str] = []
if spec.identity_prompt:
parts.append(spec.identity_prompt)
if spec.accounts_prompt:
parts.append(f"\n{spec.accounts_prompt}")
if spec.skills_catalog_prompt:
parts.append(f"\n{spec.skills_catalog_prompt}")
if spec.protocols_prompt:
parts.append(f"\n{spec.protocols_prompt}")
if spec.memory_prompt:
parts.append(f"\n{spec.memory_prompt}")
if spec.focus_prompt:
parts.append(f"\n{spec.focus_prompt}")
if spec.narrative:
parts.append(f"\n{spec.narrative}")
return "\n".join(parts)
def build_system_prompt_for_context(
ctx: Any,
*,
focus_prompt: str | None = None,
narrative: str | None = None,
memory_prompt: str | None = None,
) -> str:
spec = build_prompt_spec(ctx, focus_prompt=focus_prompt, narrative=narrative, memory_prompt=memory_prompt)
return build_system_prompt(spec)
@@ -0,0 +1,264 @@
"""Core types for the agent loop — the execution primitive of the colony.
AgentSpec: Declarative definition of what an agent does.
AgentContext: Everything an agent loop needs to execute.
AgentResult: What comes out of an agent loop execution.
AgentProtocol: Interface that all agent implementations must satisfy.
"""
from __future__ import annotations
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any
from pydantic import BaseModel, Field
from framework.llm.provider import LLMProvider, Tool
from framework.tracker.decision_tracker import DecisionTracker
class AgentSpec(BaseModel):
"""Declarative definition of an agent's capabilities and configuration.
This is the blueprint from which AgentLoop instances are created.
Workers in a colony are exact copies of the queen's AgentSpec.
"""
id: str
name: str
description: str
agent_type: str = Field(
default="event_loop",
description="Type: 'event_loop' (recommended), 'gcu' (browser automation).",
)
input_keys: list[str] = Field(
default_factory=list,
description="Keys this agent reads from input data",
)
output_keys: list[str] = Field(
default_factory=list,
description="Keys this agent produces as output",
)
nullable_output_keys: list[str] = Field(
default_factory=list,
description="Output keys that can be None without triggering validation errors",
)
input_schema: dict[str, dict] = Field(
default_factory=dict,
description="Optional schema for input validation.",
)
output_schema: dict[str, dict] = Field(
default_factory=dict,
description="Optional schema for output validation.",
)
system_prompt: str | None = Field(default=None, description="System prompt for the LLM")
tools: list[str] = Field(default_factory=list, description="Tool names this agent can use")
tool_access_policy: str = Field(
default="explicit",
description=(
"'all' = all tools from registry, "
"'explicit' = only tools listed in `tools` (default), "
"'none' = no tools at all."
),
)
model: str | None = Field(default=None, description="Specific model override")
function: str | None = Field(default=None, description="Function name or path")
routes: dict[str, str] = Field(default_factory=dict, description="Condition -> target mapping")
max_retries: int = Field(default=3)
retry_on: list[str] = Field(default_factory=list, description="Error types to retry on")
max_visits: int = Field(
default=0,
description=("Max times this agent executes in one colony run. 0 = unlimited. Set >1 for one-shot agents."),
)
output_model: type[BaseModel] | None = Field(
default=None,
description="Optional Pydantic model for validating LLM output.",
)
max_validation_retries: int = Field(
default=2,
description="Maximum retries when Pydantic validation fails",
)
client_facing: bool = Field(
default=False,
description="Deprecated — the queen is intrinsically interactive.",
)
success_criteria: str | None = Field(
default=None,
description="Natural-language criteria for phase completion.",
)
skip_judge: bool = Field(
default=False,
description="When True, the implicit judge is bypassed entirely.",
)
model_config = {"extra": "allow", "arbitrary_types_allowed": True}
def is_queen(self) -> bool:
return self.id == "queen"
def supports_direct_user_io(self) -> bool:
return self.is_queen()
def deprecated_client_facing_warning(spec: AgentSpec) -> str | None:
if spec.client_facing and not spec.is_queen():
return (
f"Agent '{spec.id}' sets deprecated client_facing=True. "
"Non-queen direct human I/O is no longer supported; route worker "
"questions and approvals through queen escalation instead."
)
return None
def warn_if_deprecated_client_facing(spec: AgentSpec) -> None:
import logging
warning = deprecated_client_facing_warning(spec)
if warning:
logging.getLogger(__name__).warning(warning)
@dataclass
class AgentContext:
"""Everything an agent loop needs to execute.
Passed to every agent implementation and provides:
- Runtime (for decision logging)
- LLM access
- Tools
- Goal context
- Execution metadata
"""
runtime: DecisionTracker
agent_id: str
agent_spec: AgentSpec
input_data: dict[str, Any] = field(default_factory=dict)
llm: LLMProvider | None = None
available_tools: list[Tool] = field(default_factory=list)
goal_context: str = ""
goal: Any = None
max_tokens: int = 4096
attempt: int = 1
max_attempts: int = 3
runtime_logger: Any = None
pause_event: Any = None
accounts_prompt: str = ""
identity_prompt: str = ""
narrative: str = ""
memory_prompt: str = ""
event_triggered: bool = False
execution_id: str = ""
run_id: str = ""
@property
def effective_run_id(self) -> str | None:
return self.run_id or None
stream_id: str = ""
dynamic_tools_provider: Any = None
dynamic_prompt_provider: Any = None
dynamic_memory_provider: Any = None
skills_catalog_prompt: str = ""
protocols_prompt: str = ""
skill_dirs: list[str] = field(default_factory=list)
default_skill_batch_nudge: str | None = None
default_skill_warn_ratio: float | None = None
iteration_metadata_provider: Any = None
@property
def is_queen_stream(self) -> bool:
return self.stream_id == "queen" or self.agent_spec.is_queen()
@property
def emits_client_io(self) -> bool:
return self.is_queen_stream
@property
def supports_direct_user_io(self) -> bool:
return self.is_queen_stream and not self.event_triggered
@dataclass
class AgentResult:
"""Output of an agent loop execution."""
success: bool
output: dict[str, Any] = field(default_factory=dict)
error: str | None = None
next_agent: str | None = None
route_reason: str | None = None
tokens_used: int = 0
latency_ms: int = 0
validation_errors: list[str] = field(default_factory=list)
conversation: Any = None
# Machine-readable reason the loop stopped (see LoopExitReason in
# agent_loop/internals/types.py). "?" means the loop didn't set one,
# which should itself be treated as a diagnostic.
exit_reason: str = "?"
# Counters for reliability events surfaced during this execution.
# Populated from the loop's TaskRegistry-style counters at return
# time so callers can spot recurring failure modes without tailing
# logs. Keys are stable strings; missing keys mean "zero".
reliability_stats: dict[str, int] = field(default_factory=dict)
def to_summary(self, spec: Any = None) -> str:
if not self.success:
return f"Failed: {self.error}"
if not self.output:
return "Completed (no output)"
parts = [f"Completed with {len(self.output)} outputs:"]
for key, value in list(self.output.items())[:5]:
value_str = str(value)[:100]
if len(str(value)) > 100:
value_str += "..."
parts.append(f" - {key}: {value_str}")
return "\n".join(parts)
class AgentProtocol(ABC):
"""Interface all agent implementations must satisfy."""
@abstractmethod
async def execute(self, ctx: AgentContext) -> AgentResult:
pass
def validate_input(self, ctx: AgentContext) -> list[str]:
errors = []
for key in ctx.agent_spec.input_keys:
if key not in ctx.input_data:
errors.append(f"Missing required input: {key}")
return errors
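# Hypothetical minimal implementation of the protocol (name is an assumption):
#
#     class EchoAgent(AgentProtocol):
#         async def execute(self, ctx: AgentContext) -> AgentResult:
#             # Echo inputs straight back as outputs; validate_input() is inherited.
#             return AgentResult(success=True, output=dict(ctx.input_data))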
@@ -8,6 +8,10 @@ FRAMEWORK_AGENTS_DIR = Path(__file__).parent
def list_framework_agents() -> list[Path]:
"""List all framework agent directories."""
return sorted(
[p for p in FRAMEWORK_AGENTS_DIR.iterdir() if p.is_dir() and (p / "agent.py").exists()],
[
p
for p in FRAMEWORK_AGENTS_DIR.iterdir()
if p.is_dir() and ((p / "agent.json").exists() or (p / "agent.py").exists())
],
key=lambda p: p.name,
)
@@ -21,15 +21,15 @@ from pathlib import Path
from typing import TYPE_CHECKING
from framework.config import get_max_context_tokens
from framework.graph import Goal, NodeSpec, SuccessCriterion
from framework.graph.checkpoint_config import CheckpointConfig
from framework.graph.edge import GraphSpec
from framework.graph.executor import ExecutionResult
from framework.host.agent_host import AgentHost
from framework.host.execution_manager import EntryPointSpec
from framework.llm import LiteLLMProvider
from framework.runner.mcp_registry import MCPRegistry
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from framework.loader.mcp_registry import MCPRegistry
from framework.loader.tool_registry import ToolRegistry
from framework.orchestrator import Goal, NodeSpec, SuccessCriterion
from framework.orchestrator.checkpoint_config import CheckpointConfig
from framework.orchestrator.edge import GraphSpec
from framework.orchestrator.orchestrator import ExecutionResult
from .config import default_config
from .nodes import build_tester_node
@@ -37,7 +37,7 @@ from .nodes import build_tester_node
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from framework.runner import AgentRunner
from framework.loader import AgentLoader
logger = logging.getLogger(__name__)
@@ -126,9 +126,7 @@ def _list_local_accounts() -> list[dict]:
try:
from framework.credentials.local.registry import LocalCredentialRegistry
return [
info.to_account_dict() for info in LocalCredentialRegistry.default().list_accounts()
]
return [info.to_account_dict() for info in LocalCredentialRegistry.default().list_accounts()]
except ImportError as exc:
logger.debug("Local credential registry unavailable: %s", exc)
return []
@@ -181,9 +179,7 @@ def _list_env_fallback_accounts() -> list[dict]:
if spec.credential_group in seen_groups:
continue
group_available = all(
_is_configured(n, s)
for n, s in CREDENTIAL_SPECS.items()
if s.credential_group == spec.credential_group
_is_configured(n, s) for n, s in CREDENTIAL_SPECS.items() if s.credential_group == spec.credential_group
)
if not group_available:
continue
@@ -215,9 +211,7 @@ def list_connected_accounts() -> list[dict]:
# Show env-var fallbacks only for credentials not already in the named registry
local_providers = {a["provider"] for a in local}
env_fallbacks = [
a for a in _list_env_fallback_accounts() if a["provider"] not in local_providers
]
env_fallbacks = [a for a in _list_env_fallback_accounts() if a["provider"] not in local_providers]
return aden + local + env_fallbacks
@@ -233,7 +227,7 @@ requires_account_selection = True
"""Signal TUI to show account picker before starting the agent."""
def configure_for_account(runner: AgentRunner, account: dict) -> None:
def configure_for_account(runner: AgentLoader, account: dict) -> None:
"""Scope the tester node's tools to the selected provider.
Handles both Aden accounts (account= routing) and local accounts
@@ -272,9 +266,7 @@ def _activate_local_account(credential_id: str, alias: str) -> None:
group_specs = [
(cred_name, spec)
for cred_name, spec in CREDENTIAL_SPECS.items()
if spec.credential_group == credential_id
or spec.credential_id == credential_id
or cred_name == credential_id
if spec.credential_group == credential_id or spec.credential_id == credential_id or cred_name == credential_id
]
# Deduplicate — credential_id and credential_group may both match the same spec
seen_env_vars: set[str] = set()
@@ -325,7 +317,7 @@ def _activate_local_account(credential_id: str, alias: str) -> None:
def _configure_aden_node(
runner: AgentRunner,
runner: AgentLoader,
provider: str,
alias: str,
detail: str,
@@ -368,7 +360,7 @@ or any other identifier — always use the alias exactly as shown.
def _configure_local_node(
runner: AgentRunner,
runner: AgentLoader,
provider: str,
alias: str,
identity: dict,
@@ -419,10 +411,7 @@ nodes = [
NodeSpec(
id="tester",
name="Credential Tester",
description=(
"Interactive credential testing — lets the user pick an account "
"and verify it via API calls."
),
description=("Interactive credential testing — lets the user pick an account and verify it via API calls."),
node_type="event_loop",
client_facing=True,
max_node_visits=0,
@@ -469,10 +458,7 @@ pause_nodes = []
terminal_nodes = ["tester"] # Tester node can terminate
conversation_mode = "continuous"
identity_prompt = (
"You are a credential tester that verifies connected accounts and API keys "
"can make real API calls."
)
identity_prompt = "You are a credential tester that verifies connected accounts and API keys can make real API calls."
loop_config = {
"max_iterations": 50,
"max_tool_calls_per_turn": 30,
@@ -497,7 +483,7 @@ class CredentialTesterAgent:
def __init__(self, config=None):
self.config = config or default_config
self._selected_account: dict | None = None
self._agent_runtime: AgentRuntime | None = None
self._agent_runtime: AgentHost | None = None
self._tool_registry: ToolRegistry | None = None
self._storage_path: Path | None = None
@@ -613,7 +599,7 @@ class CredentialTesterAgent:
graph = self._build_graph()
self._agent_runtime = create_agent_runtime(
self._agent_runtime = AgentHost(
graph=graph,
goal=goal,
storage_path=self._storage_path,
@@ -1,9 +1,9 @@
{
"hive-tools": {
"hive_tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../../../../tools",
"description": "Hive tools MCP server with provider-specific tools"
"description": "hive_tools MCP server with provider-specific tools"
}
}
@@ -1,6 +1,6 @@
"""Node definitions for Credential Tester agent."""
from framework.graph import NodeSpec
from framework.orchestrator import NodeSpec
def build_tester_node(
@@ -7,6 +7,32 @@ from dataclasses import dataclass, field
from pathlib import Path
@dataclass
class WorkerEntry:
"""A single worker within a colony."""
name: str
config_path: Path
description: str = ""
tool_count: int = 0
task: str = ""
spawned_at: str = ""
queen_name: str = ""
colony_name: str = ""
def to_dict(self) -> dict:
return {
"name": self.name,
"config_path": str(self.config_path),
"description": self.description,
"tool_count": self.tool_count,
"task": self.task,
"spawned_at": self.spawned_at,
"queen_name": self.queen_name,
"colony_name": self.colony_name,
}
@dataclass
class AgentEntry:
"""Lightweight agent metadata for the picker / API discover endpoint."""
@@ -21,14 +47,15 @@ class AgentEntry:
tool_count: int = 0
tags: list[str] = field(default_factory=list)
last_active: str | None = None
workers: list[WorkerEntry] = field(default_factory=list)
def _get_last_active(agent_path: Path) -> str | None:
"""Return the most recent updated_at timestamp across all sessions.
Checks both worker sessions (``~/.hive/agents/{name}/sessions/``) and
queen sessions (``~/.hive/queen/session/``) whose ``meta.json`` references
the same *agent_path*.
queen sessions (``~/.hive/agents/queens/default/sessions/``) whose
``meta.json`` references the same *agent_path*.
"""
from datetime import datetime
@@ -52,26 +79,33 @@ def _get_last_active(agent_path: Path) -> str | None:
except Exception:
continue
# 2. Queen sessions
queen_sessions_dir = Path.home() / ".hive" / "queen" / "session"
if queen_sessions_dir.exists():
# 2. Queen sessions (scan all queen identity directories)
from framework.config import QUEENS_DIR
if QUEENS_DIR.exists():
resolved = agent_path.resolve()
for d in queen_sessions_dir.iterdir():
if not d.is_dir():
for queen_dir in QUEENS_DIR.iterdir():
if not queen_dir.is_dir():
continue
meta_file = d / "meta.json"
if not meta_file.exists():
sessions_dir = queen_dir / "sessions"
if not sessions_dir.exists():
continue
try:
meta = json.loads(meta_file.read_text(encoding="utf-8"))
stored = meta.get("agent_path")
if not stored or Path(stored).resolve() != resolved:
for d in sessions_dir.iterdir():
if not d.is_dir():
continue
meta_file = d / "meta.json"
if not meta_file.exists():
continue
try:
meta = json.loads(meta_file.read_text(encoding="utf-8"))
stored = meta.get("agent_path")
if not stored or Path(stored).resolve() != resolved:
continue
ts = datetime.fromtimestamp(d.stat().st_mtime).isoformat()
if latest is None or ts > latest:
latest = ts
except Exception:
continue
ts = datetime.fromtimestamp(d.stat().st_mtime).isoformat()
if latest is None or ts > latest:
latest = ts
except Exception:
continue
return latest
@@ -109,85 +143,106 @@ def _count_runs(agent_name: str) -> int:
return len(run_ids)
_EXCLUDED_JSON_STEMS = {"agent", "flowchart", "triggers", "configuration", "metadata"}
def _is_colony_dir(path: Path) -> bool:
"""Check if a directory is a colony with worker config files."""
if not path.is_dir():
return False
return any(f.suffix == ".json" and f.stem not in _EXCLUDED_JSON_STEMS for f in path.iterdir() if f.is_file())
def _find_worker_configs(colony_dir: Path) -> list[Path]:
"""Find all worker config JSON files in a colony directory."""
return sorted(
p for p in colony_dir.iterdir() if p.is_file() and p.suffix == ".json" and p.stem not in _EXCLUDED_JSON_STEMS
)
def _extract_agent_stats(agent_path: Path) -> tuple[int, int, list[str]]:
"""Extract node count, tool count, and tags from an agent directory.
"""Extract worker count, tool count, and tags from a colony directory."""
tags: list[str] = []
Prefers agent.py (AST-parsed) over agent.json for node/tool counts
since agent.json may be stale. Tags are only available from agent.json.
"""
import ast
worker_configs = _find_worker_configs(agent_path)
if worker_configs:
all_tools: set[str] = set()
for wc_path in worker_configs:
try:
data = json.loads(wc_path.read_text(encoding="utf-8"))
if isinstance(data, dict):
tools = data.get("tools", [])
if isinstance(tools, list):
all_tools.update(tools)
except Exception:
pass
return len(worker_configs), len(all_tools), tags
node_count, tool_count, tags = 0, 0, []
agent_py = agent_path / "agent.py"
if agent_py.exists():
try:
tree = ast.parse(agent_py.read_text(encoding="utf-8"))
for node in ast.walk(tree):
if isinstance(node, ast.Assign):
for target in node.targets:
if isinstance(target, ast.Name) and target.id == "nodes":
if isinstance(node.value, ast.List):
node_count = len(node.value.elts)
except Exception:
pass
agent_json = agent_path / "agent.json"
if agent_json.exists():
try:
data = json.loads(agent_json.read_text(encoding="utf-8"))
json_nodes = data.get("graph", {}).get("nodes", []) or data.get("nodes", [])
if node_count == 0:
node_count = len(json_nodes)
tools: set[str] = set()
for n in json_nodes:
tools.update(n.get("tools", []))
tool_count = len(tools)
tags = data.get("agent", {}).get("tags", [])
except Exception:
pass
return node_count, tool_count, tags
return 0, 0, tags
def discover_agents() -> dict[str, list[AgentEntry]]:
"""Discover agents from all known sources grouped by category."""
from framework.runner.cli import (
_extract_python_agent_metadata,
_get_framework_agents_dir,
_is_valid_agent_dir,
)
from framework.config import COLONIES_DIR
groups: dict[str, list[AgentEntry]] = {}
sources = [
("Your Agents", Path("exports")),
("Framework", _get_framework_agents_dir()),
("Examples", Path("examples/templates")),
("Your Agents", COLONIES_DIR),
]
# Track seen agent directory names to avoid duplicates when the same
# agent exists in both colonies/ and exports/ (colonies takes priority).
_seen_agent_names: set[str] = set()
for category, base_dir in sources:
if not base_dir.exists():
continue
entries: list[AgentEntry] = []
for path in sorted(base_dir.iterdir(), key=lambda p: p.name):
if not _is_valid_agent_dir(path):
if not _is_colony_dir(path):
continue
if path.name in _seen_agent_names:
continue
_seen_agent_names.add(path.name)
name, desc = _extract_python_agent_metadata(path)
config_fallback_name = path.name.replace("_", " ").title()
used_config = name != config_fallback_name
name = config_fallback_name
desc = ""
node_count, tool_count, tags = _extract_agent_stats(path)
if not used_config:
agent_json = path / "agent.json"
if agent_json.exists():
try:
data = json.loads(agent_json.read_text(encoding="utf-8"))
meta = data.get("agent", {})
name = meta.get("name", name)
desc = meta.get("description", desc)
except Exception:
pass
# Read colony metadata for queen provenance
colony_queen_name = ""
metadata_path = path / "metadata.json"
if metadata_path.exists():
try:
mdata = json.loads(metadata_path.read_text(encoding="utf-8"))
colony_queen_name = mdata.get("queen_name", "")
except Exception:
pass
worker_entries: list[WorkerEntry] = []
worker_configs = _find_worker_configs(path)
for wc_path in worker_configs:
try:
data = json.loads(wc_path.read_text(encoding="utf-8"))
if isinstance(data, dict):
w = WorkerEntry(
name=data.get("name", wc_path.stem),
config_path=wc_path,
description=data.get("description", ""),
tool_count=len(data.get("tools", [])),
task=data.get("goal", {}).get("description", ""),
spawned_at=data.get("spawned_at", ""),
queen_name=colony_queen_name,
colony_name=path.name,
)
worker_entries.append(w)
if not desc:
desc = data.get("description", "")
except Exception:
pass
node_count = len(worker_entries)
tool_count = max((w.tool_count for w in worker_entries), default=0)
entries.append(
AgentEntry(
@@ -199,11 +254,14 @@ def discover_agents() -> dict[str, list[AgentEntry]]:
run_count=_count_runs(path.name),
node_count=node_count,
tool_count=tool_count,
tags=tags,
tags=[],
last_active=_get_last_active(path),
workers=worker_entries,
)
)
if entries:
groups[category] = entries
existing = groups.get(category, [])
existing.extend(entries)
groups[category] = existing
return groups
+3 -9
@@ -1,19 +1,13 @@
"""
Queen Native agent builder for the Hive framework.
"""Queen -- the agent builder for the Hive framework."""
Deeply understands the agent framework and produces complete Python packages
with goals, nodes, edges, system prompts, MCP configuration, and tests
from natural language specifications.
"""
from .agent import queen_goal, queen_graph
from .agent import queen_goal, queen_loop_config
from .config import AgentMetadata, RuntimeConfig, default_config, metadata
__version__ = "1.0.0"
__all__ = [
"queen_goal",
"queen_graph",
"queen_loop_config",
"RuntimeConfig",
"AgentMetadata",
"default_config",
+15 -27
@@ -1,38 +1,26 @@
"""Queen graph definition."""
"""Queen agent definition.
from framework.graph import Goal
from framework.graph.edge import GraphSpec
The queen is a single AgentLoop with no orchestrator dependency.
Loaded by queen_orchestrator.create_queen().
"""
from framework.schemas.goal import Goal
from .nodes import queen_node
# ---------------------------------------------------------------------------
# Queen graph — the primary persistent conversation.
# Loaded by queen_orchestrator.create_queen(), NOT by AgentRunner.
# ---------------------------------------------------------------------------
queen_goal = Goal(
id="queen-manager",
name="Queen Manager",
description=(
"Manage the worker agent lifecycle and serve as the user's primary interactive interface."
),
description=("Manage the worker agent lifecycle and serve as the user's primary interactive interface."),
success_criteria=[],
constraints=[],
)
queen_graph = GraphSpec(
id="queen-graph",
goal_id=queen_goal.id,
version="1.0.0",
entry_node="queen",
entry_points={"start": "queen"},
terminal_nodes=[],
pause_nodes=[],
nodes=[queen_node],
edges=[],
conversation_mode="continuous",
loop_config={
"max_iterations": 999_999,
"max_tool_calls_per_turn": 30,
},
)
# Loop config -- used by queen_orchestrator to build LoopConfig
queen_loop_config = {
"max_iterations": 999_999,
"max_tool_calls_per_turn": 30,
"max_context_tokens": 180_000,
}
__all__ = ["queen_goal", "queen_loop_config", "queen_node"]
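For orientation, a minimal sketch of how `queen_orchestrator.create_queen()` might consume these exports. `AgentLoop` and `LoopConfig` are named in the surrounding docstrings and comments, but their import paths and signatures are assumptions here, not confirmed by this diff:

```python
# Hypothetical wiring inside queen_orchestrator — import path and
# constructor signatures are assumed, not taken from this diff.
from framework.agent_loop import AgentLoop, LoopConfig  # path assumed

from .agent import queen_goal, queen_loop_config, queen_node

def create_queen() -> AgentLoop:
    # queen_loop_config maps directly onto LoopConfig fields
    config = LoopConfig(**queen_loop_config)
    return AgentLoop(goal=queen_goal, node=queen_node, config=config)
```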
@@ -0,0 +1,3 @@
{
"include": ["gcu-tools", "hive_tools"]
}
@@ -5,5 +5,19 @@
"args": ["run", "python", "coder_tools_server.py", "--stdio"],
"cwd": "../../../../tools",
"description": "Unsandboxed file system tools for code generation and validation"
},
"gcu-tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "-m", "gcu.server", "--stdio", "--capabilities", "browser"],
"cwd": "../../../../tools",
"description": "Browser automation tools (Playwright-based)"
},
"hive_tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../../../../tools",
"description": "Aden integration tools (gmail, calendar, hubspot, etc.) — gated by credentials and the verified manifest"
}
}
File diff suppressed because it is too large
@@ -1,138 +0,0 @@
"""Queen thinking hook — persona + communication style classifier.
Fires once when the queen enters building mode at session start.
Makes a single non-streaming LLM call (acting as an HR Director) to select
the best-fit expert persona for the user's request AND classify the user's
communication style, then returns a PersonaResult containing both.
This is designed to activate the model's latent domain expertise — a CFO
persona on a financial question, a Lawyer on a legal question, etc., while
also adapting the Queen's communication approach to the individual user.
"""
from __future__ import annotations
import json
import logging
from dataclasses import dataclass
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from framework.llm.provider import LLMProvider
logger = logging.getLogger(__name__)
_HR_SYSTEM_PROMPT = """\
You are an expert HR Director and communication consultant at a world-class firm.
A new request has arrived. You must:
1. Identify which professional role best serves this request.
2. Read the user's signals to determine HOW to communicate with them.
For communication style, look for:
- Technical depth: Do they use precise terms? Do they ask "how" or "what"?
- Pace: Short messages = fast and direct. Long explanations = exploratory.
- Tone: Are they casual ("hey, can you...") or formal ("I need a system that...")?
If cross-session memory is provided, factor in what is already known about this \
person — don't rediscover what's already understood.
Reply with ONLY a valid JSON object — no markdown, no prose, no explanation:
{"role": "<job title>", "persona": "<2-3 sentence first-person identity statement>", \
"style": "<one of: peer-technical, mentor-guiding, consultant-structured>"}
Rules:
- Choose from any real professional role: CFO, CEO, CTO, Lawyer, Data Scientist,
Product Manager, Security Engineer, DevOps Engineer, Software Architect,
HR Director, Marketing Director, Business Analyst, UX Designer,
Financial Analyst, Operations Director, Legal Counsel, etc.
- The persona statement must be written in first person ("I am..." or "I have...").
- Select the role whose domain knowledge most directly applies to solving the request.
- If the request is clearly about coding or building software systems, pick Software Architect.
- "Queen" is your internal alias do not include it in the persona.
- For style: "peer-technical" for users who demonstrate domain expertise, \
"mentor-guiding" for users who are learning or exploring, \
"consultant-structured" for users who want structured, accountable delivery.
- Default to "peer-technical" if signals are ambiguous.
"""
# Communication style directives injected into the Queen's system prompt.
_STYLE_DIRECTIVES: dict[str, str] = {
"peer-technical": (
"## Communication Style: Peer\n\n"
"This person is technical. Use precise language, skip high-level "
"overviews they already know, and get into specifics quickly. "
"When they push back on a design choice, engage with the technical "
"argument directly."
),
"mentor-guiding": (
"## Communication Style: Guide\n\n"
"This person is learning or exploring. Explain your reasoning as you "
"go — not patronizingly, but so they can follow the logic. When you "
"make a design choice, briefly say why. Offer to go deeper on anything."
),
"consultant-structured": (
"## Communication Style: Structured\n\n"
"This person wants structured, accountable delivery. Lead with "
"summaries and options. Number your proposals. Be explicit about "
"trade-offs. Avoid open-ended questions — give them choices to react to."
),
}
@dataclass
class PersonaResult:
"""Result of persona + style classification."""
persona_prefix: str # e.g. "You are a CFO. I am a CFO with 20 years..."
style_directive: str # e.g. "## Communication Style: Peer\n\n..."
async def select_expert_persona(
user_message: str,
llm: LLMProvider,
*,
memory_context: str = "",
) -> PersonaResult | None:
"""Run the HR classifier and return a PersonaResult.
Makes a single non-streaming acomplete() call with the session LLM.
Returns None on any failure so the queen falls back gracefully to its
default character with no style directive.
Args:
user_message: The user's opening message for the session.
llm: The session LLM provider.
memory_context: Optional cross-session memory to inform style classification.
Returns:
A PersonaResult with persona_prefix and style_directive, or None on failure.
"""
if not user_message.strip():
return None
prompt = user_message
if memory_context:
prompt = f"{user_message}\n\n{memory_context}"
try:
response = await llm.acomplete(
messages=[{"role": "user", "content": prompt}],
system=_HR_SYSTEM_PROMPT,
max_tokens=1024,
json_mode=True,
)
raw = response.content.strip()
parsed = json.loads(raw)
role = parsed.get("role", "").strip()
persona = parsed.get("persona", "").strip()
style_key = parsed.get("style", "peer-technical").strip()
if not role or not persona:
logger.warning("Thinking hook: empty role/persona in response: %r", raw)
return None
persona_prefix = f"You are a {role}. {persona}"
style_directive = _STYLE_DIRECTIVES.get(style_key, _STYLE_DIRECTIVES["peer-technical"])
logger.info("Thinking hook: selected persona — %s, style — %s", role, style_key)
return PersonaResult(persona_prefix=persona_prefix, style_directive=style_directive)
except Exception:
logger.warning("Thinking hook: persona classification failed", exc_info=True)
return None
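A hypothetical call site, showing how the result feeds the system prompt. The names `session_llm`, `memory_context`, and `base_prompt` are illustrative, not from the source:

```python
# Sketch only: wiring at session start, per the docstring's contract.
result = await select_expert_persona(
    user_message,
    llm=session_llm,
    memory_context=memory_context,
)
if result is not None:
    # Persona first, then the style directive, then the normal prompt body.
    system_prompt = f"{result.persona_prefix}\n\n{result.style_directive}\n\n{base_prompt}"
else:
    system_prompt = base_prompt  # graceful fallback to the default character
```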
-420
@@ -1,420 +0,0 @@
"""Queen global cross-session memory.
Three-tier memory architecture:
~/.hive/queen/MEMORY.md — semantic (who, what, why)
~/.hive/queen/memories/MEMORY-YYYY-MM-DD.md — episodic (daily journals)
~/.hive/queen/session/{id}/data/adapt.md — working (session-scoped)
Semantic and episodic files are injected at queen session start.
Semantic memory (MEMORY.md) is updated automatically at session end via
consolidate_queen_memory() — the queen never rewrites this herself.
Episodic memory (MEMORY-date.md) can be written by the queen during a session
via the write_to_diary tool, and is also appended to at session end by
consolidate_queen_memory().
"""
from __future__ import annotations
import asyncio
import json
import logging
import traceback
from datetime import date, datetime
from pathlib import Path
logger = logging.getLogger(__name__)
def _queen_dir() -> Path:
return Path.home() / ".hive" / "queen"
def format_memory_date(d: date) -> str:
"""Return a cross-platform long date label without a zero-padded day."""
return f"{d.strftime('%B')} {d.day}, {d.year}"
def semantic_memory_path() -> Path:
return _queen_dir() / "MEMORY.md"
def episodic_memory_path(d: date | None = None) -> Path:
d = d or date.today()
return _queen_dir() / "memories" / f"MEMORY-{d.strftime('%Y-%m-%d')}.md"
def read_semantic_memory() -> str:
path = semantic_memory_path()
return path.read_text(encoding="utf-8").strip() if path.exists() else ""
def read_episodic_memory(d: date | None = None) -> str:
path = episodic_memory_path(d)
return path.read_text(encoding="utf-8").strip() if path.exists() else ""
def _find_recent_episodic(lookback: int = 7) -> tuple[date, str] | None:
"""Find the most recent non-empty episodic memory within *lookback* days."""
from datetime import timedelta
today = date.today()
for offset in range(lookback):
d = today - timedelta(days=offset)
content = read_episodic_memory(d)
if content:
return d, content
return None
# Budget (in characters) for episodic memory in the system prompt.
_EPISODIC_CHAR_BUDGET = 6_000
def format_for_injection() -> str:
"""Format cross-session memory for system prompt injection.
Returns an empty string if no meaningful content exists yet (e.g. first
session with only the seed template).
"""
semantic = read_semantic_memory()
recent = _find_recent_episodic()
# Suppress injection if semantic is still just the seed template
if semantic and semantic.startswith("# My Understanding of the User\n\n*No sessions"):
semantic = ""
parts: list[str] = []
if semantic:
parts.append(semantic)
if recent:
d, content = recent
# Trim oversized episodic entries to keep the prompt manageable
if len(content) > _EPISODIC_CHAR_BUDGET:
content = content[:_EPISODIC_CHAR_BUDGET] + "\n\n…(truncated)"
today = date.today()
if d == today:
label = f"## Today — {format_memory_date(d)}"
else:
label = f"## {format_memory_date(d)}"
parts.append(f"{label}\n\n{content}")
if not parts:
return ""
body = "\n\n---\n\n".join(parts)
return "--- Your Cross-Session Memory ---\n\n" + body + "\n\n--- End Cross-Session Memory ---"
_SEED_TEMPLATE = """\
# My Understanding of the User
*No sessions recorded yet.*
## Who They Are
## How They Communicate
## What They're Trying to Achieve
## What's Working
## What I've Learned
"""
def append_episodic_entry(content: str) -> None:
"""Append a timestamped prose entry to today's episodic memory file.
Creates the file (with a date heading) if it doesn't exist yet.
Used both by the queen's diary tool and by the consolidation hook.
"""
ep_path = episodic_memory_path()
ep_path.parent.mkdir(parents=True, exist_ok=True)
today = date.today()
today_str = format_memory_date(today)
timestamp = datetime.now().strftime("%H:%M")
if not ep_path.exists():
header = f"# {today_str}\n\n"
block = f"{header}### {timestamp}\n\n{content.strip()}\n"
else:
block = f"\n\n### {timestamp}\n\n{content.strip()}\n"
with ep_path.open("a", encoding="utf-8") as f:
f.write(block)
def seed_if_missing() -> None:
"""Create MEMORY.md with a blank template if it doesn't exist yet."""
path = semantic_memory_path()
if path.exists():
return
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(_SEED_TEMPLATE, encoding="utf-8")
# ---------------------------------------------------------------------------
# Consolidation prompt
# ---------------------------------------------------------------------------
_SEMANTIC_SYSTEM = """\
You maintain the persistent cross-session memory of an AI assistant called the Queen.
Review the session notes and rewrite MEMORY.md the Queen's durable understanding of the
person she works with across all sessions.
Write entirely in the Queen's voice — first person, reflective, honest.
Not a log of events, but genuine understanding of who this person is over time.
Rules:
- Update and synthesise: incorporate new understanding, update facts that have changed, remove
details that are stale, superseded, or no longer say anything meaningful about the person.
- Keep it as structured markdown with named sections about the PERSON, not about today.
- Do NOT include diary sections, daily logs, or session summaries. Those belong elsewhere.
MEMORY.md is about who they are, what they want, what works — not what happened today.
- Maintain a "How They Communicate" section: technical depth, preferred pace
(fast/exploratory/thorough), what communication approaches have worked or not,
tone preferences. Update based on diary reflections about communication.
This section should evolve: "prefers direct answers" is useful on day 1;
"prefers direct answers for technical questions but wants more context when
discussing architecture trade-offs" is better by day 5.
- Reference dates only when noting a lasting milestone (e.g. "since March 8th they prefer X").
- If the session had no meaningful new information about the person,
return the existing text unchanged.
- Do not add fictional details. Only reflect what is evidenced in the notes.
- Stay concise. Prune rather than accumulate. A lean, accurate file is more useful than a
dense one. If something was true once but has been resolved or superseded, remove it.
- Output only the raw markdown content of MEMORY.md. No preamble, no code fences.
"""
_DIARY_SYSTEM = """\
You maintain the daily episodic diary of an AI assistant called the Queen.
You receive: (1) today's existing diary so far, and (2) notes from the latest session.
Rewrite the complete diary for today as a single unified narrative —
first person, reflective, honest.
Merge and deduplicate: if the same story (e.g. a research agent stalling) recurred several times,
describe it once with appropriate weight rather than retelling it. Weave in new developments from
the session notes. Preserve important milestones, emotional texture, and session path references.
Preserve reflections about communication effectiveness — these are important inputs for the
Queen's evolving understanding of the user. A reflection like "they responded much better when
I led with the recommendation instead of listing options" is as important as
"we built a Gmail agent."
If today's diary is empty, write the initial entry based on the session notes alone.
Output only the full diary prose — no date heading, no timestamp headers,
no preamble, no code fences.
"""
def read_session_context(session_dir: Path, max_messages: int = 80) -> str:
"""Extract a readable transcript from conversation parts + adapt.md.
Reads the last ``max_messages`` conversation parts and the session's
adapt.md (working memory). Tool results are omitted — only user and
assistant turns (with tool-call names noted) are included.
"""
parts: list[str] = []
# Working notes
adapt_path = session_dir / "data" / "adapt.md"
if adapt_path.exists():
text = adapt_path.read_text(encoding="utf-8").strip()
if text:
parts.append(f"## Session Working Notes (adapt.md)\n\n{text}")
# Conversation transcript
parts_dir = session_dir / "conversations" / "parts"
if parts_dir.exists():
part_files = sorted(parts_dir.glob("*.json"))[-max_messages:]
lines: list[str] = []
for pf in part_files:
try:
data = json.loads(pf.read_text(encoding="utf-8"))
role = data.get("role", "")
content = str(data.get("content", "")).strip()
tool_calls = data.get("tool_calls") or []
if role == "tool":
continue # skip verbose tool results
if role == "assistant" and tool_calls and not content:
names = [tc.get("function", {}).get("name", "?") for tc in tool_calls]
lines.append(f"[queen calls: {', '.join(names)}]")
elif content:
label = "user" if role == "user" else "queen"
lines.append(f"[{label}]: {content[:600]}")
except (KeyError, TypeError) as exc:
logger.debug("Skipping malformed conversation message: %s", exc)
continue
except Exception:
logger.warning("Unexpected error parsing conversation message", exc_info=True)
continue
if lines:
parts.append("## Conversation\n\n" + "\n".join(lines))
return "\n\n".join(parts)
# ---------------------------------------------------------------------------
# Context compaction (binary-split LLM summarisation)
# ---------------------------------------------------------------------------
# If the raw session context exceeds this many characters, compact it first
# before sending to the consolidation LLM. ~200 k chars ≈ 50 k tokens.
_CTX_COMPACT_CHAR_LIMIT = 200_000
_CTX_COMPACT_MAX_DEPTH = 8
_COMPACT_SYSTEM = (
"Summarise this conversation segment. Preserve: user goals, key decisions, "
"what was built or changed, emotional tone, and important outcomes. "
"Write concisely in third person past tense. Omit routine tool invocations "
"unless the result matters."
)
async def _compact_context(text: str, llm: object, *, _depth: int = 0) -> str:
"""Binary-split and LLM-summarise *text* until it fits within the char limit.
Mirrors the recursive binary-splitting strategy used by the main agent
compaction pipeline (EventLoopNode._llm_compact).
"""
if len(text) <= _CTX_COMPACT_CHAR_LIMIT or _depth >= _CTX_COMPACT_MAX_DEPTH:
return text
# Split near the midpoint on a line boundary so we don't cut mid-message
mid = len(text) // 2
split_at = text.rfind("\n", 0, mid) + 1
if split_at <= 0:
split_at = mid
half1, half2 = text[:split_at], text[split_at:]
async def _summarise(chunk: str) -> str:
try:
resp = await llm.acomplete(
messages=[{"role": "user", "content": chunk}],
system=_COMPACT_SYSTEM,
max_tokens=2048,
)
return resp.content.strip()
except Exception:
logger.warning(
"queen_memory: context compaction LLM call failed (depth=%d), truncating",
_depth,
)
return chunk[: _CTX_COMPACT_CHAR_LIMIT // 4]
s1, s2 = await asyncio.gather(_summarise(half1), _summarise(half2))
combined = s1 + "\n\n" + s2
if len(combined) > _CTX_COMPACT_CHAR_LIMIT:
return await _compact_context(combined, llm, _depth=_depth + 1)
return combined
async def consolidate_queen_memory(
session_id: str,
session_dir: Path,
llm: object,
) -> None:
"""Update MEMORY.md and append a diary entry based on the current session.
Reads conversation parts and adapt.md from session_dir. Called
periodically in the background and once at session end. Failures are
logged and silently swallowed so they never block teardown.
Args:
session_id: The session ID (used for the adapt.md path reference).
session_dir: Path to the session directory (~/.hive/queen/session/{id}).
llm: LLMProvider instance (must support acomplete()).
"""
try:
session_context = read_session_context(session_dir)
if not session_context:
logger.debug("queen_memory: no session context, skipping consolidation")
return
logger.info("queen_memory: consolidating memory for session %s ...", session_id)
# If the transcript is very large, compact it with recursive binary LLM
# summarisation before sending to the consolidation model.
if len(session_context) > _CTX_COMPACT_CHAR_LIMIT:
logger.info(
"queen_memory: session context is %d chars — compacting first",
len(session_context),
)
session_context = await _compact_context(session_context, llm)
logger.info("queen_memory: compacted to %d chars", len(session_context))
existing_semantic = read_semantic_memory()
today_journal = read_episodic_memory()
today = date.today()
today_str = format_memory_date(today)
adapt_path = session_dir / "data" / "adapt.md"
user_msg = (
f"## Existing Semantic Memory (MEMORY.md)\n\n"
f"{existing_semantic or '(none yet)'}\n\n"
f"## Today's Diary So Far ({today_str})\n\n"
f"{today_journal or '(none yet)'}\n\n"
f"{session_context}\n\n"
f"## Session Reference\n\n"
f"Session ID: {session_id}\n"
f"Session path: {adapt_path}\n"
)
logger.debug(
"queen_memory: calling LLM (%d chars of context, ~%d tokens est.)",
len(user_msg),
len(user_msg) // 4,
)
from framework.agents.queen.config import default_config
semantic_resp, diary_resp = await asyncio.gather(
llm.acomplete(
messages=[{"role": "user", "content": user_msg}],
system=_SEMANTIC_SYSTEM,
max_tokens=default_config.max_tokens,
),
llm.acomplete(
messages=[{"role": "user", "content": user_msg}],
system=_DIARY_SYSTEM,
max_tokens=default_config.max_tokens,
),
)
new_semantic = semantic_resp.content.strip()
diary_entry = diary_resp.content.strip()
if new_semantic:
path = semantic_memory_path()
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(new_semantic, encoding="utf-8")
logger.info("queen_memory: semantic memory updated (%d chars)", len(new_semantic))
if diary_entry:
# Rewrite today's episodic file in-place — the LLM has merged and
# deduplicated the full day's content, so we replace rather than append.
ep_path = episodic_memory_path()
ep_path.parent.mkdir(parents=True, exist_ok=True)
heading = f"# {today_str}"
ep_path.write_text(f"{heading}\n\n{diary_entry}\n", encoding="utf-8")
logger.info(
"queen_memory: episodic diary rewritten for %s (%d chars)",
today_str,
len(diary_entry),
)
except Exception:
tb = traceback.format_exc()
logger.exception("queen_memory: consolidation failed")
# Write to file so the cause is findable regardless of log verbosity.
error_path = _queen_dir() / "consolidation_error.txt"
try:
error_path.parent.mkdir(parents=True, exist_ok=True)
error_path.write_text(
f"session: {session_id}\ntime: {datetime.now().isoformat()}\n\n{tb}",
encoding="utf-8",
)
except OSError:
pass # Cannot write error file; original exception already logged
+38 -356
@@ -1,25 +1,25 @@
"""Shared memory helpers for queen/worker recall and reflection.
"""Queen global memory helpers.
Each memory is an individual ``.md`` file in ``~/.hive/queen/memories/``
with optional YAML frontmatter (name, type, description). Frontmatter
is a convention enforced by prompt instructions — parsing is lenient and
malformed files degrade gracefully (appear in scans with ``None`` metadata).
Memory hierarchy::
Cursor-based incremental processing tracks which conversation messages
have already been processed by the reflection agent.
~/.hive/memories/
global/ # shared across all queens and colonies
colonies/{name}/ # colony-scoped memories
agents/queens/{name}/ # queen-specific memories
agents/{name}/ # per-worker-agent memories
Each memory is an individual ``.md`` file with optional YAML frontmatter
(name, type, description).
"""
from __future__ import annotations
import json
import logging
import re
import shutil
import time
from dataclasses import dataclass, field
from datetime import date
from pathlib import Path
from typing import Any
from framework.config import MEMORIES_DIR
logger = logging.getLogger(__name__)
@@ -27,54 +27,33 @@ logger = logging.getLogger(__name__)
# Constants
# ---------------------------------------------------------------------------
MEMORY_TYPES: tuple[str, ...] = ("goal", "environment", "technique", "reference", "diary")
GLOBAL_MEMORY_CATEGORIES: tuple[str, ...] = ("profile", "preference", "environment", "feedback")
_HIVE_QUEEN_DIR = Path.home() / ".hive" / "queen"
# Legacy shared v2 root. Colony memory now lives under queen sessions.
MEMORY_DIR: Path = _HIVE_QUEEN_DIR / "memories"
MAX_FILES: int = 200
MAX_FILE_SIZE_BYTES: int = 4096 # 4 KB hard limit per memory file
# How many lines of a memory file to read for header scanning.
_HEADER_LINE_LIMIT: int = 30
_MIGRATION_MARKER = ".migrated-from-shared-memory"
_GLOBAL_MEMORY_CODE_PATTERN = re.compile(
r"(/Users/|~/.hive|\.py\b|\.ts\b|\.tsx\b|\.js\b|"
r"\b(graph|node|runtime|session|execution|worker|queen|subagent|checkpoint|flowchart)\b)",
re.IGNORECASE,
)
# Frontmatter example provided to the reflection agent via prompt.
MEMORY_FRONTMATTER_EXAMPLE: list[str] = [
"```markdown",
"---",
"name: {{memory name}}",
(
"description: {{one-line description — used to decide "
"relevance in future conversations, so be specific}}"
),
f"type: {{{{{', '.join(MEMORY_TYPES)}}}}}",
"---",
"",
(
"{{memory content — for feedback/project types, "
"structure as: rule/fact, then **Why:** "
"and **How to apply:** lines}}"
),
"```",
]
def colony_memory_dir(colony_id: str) -> Path:
"""Return the colony memory directory for a queen session."""
return _HIVE_QUEEN_DIR / "session" / colony_id / "memory" / "colony"
def global_memory_dir() -> Path:
"""Return the queen-global memory directory."""
return _HIVE_QUEEN_DIR / "global_memory"
"""Return the global memory directory (shared across all queens/colonies)."""
return MEMORIES_DIR / "global"
def colony_memory_dir(colony_name: str) -> Path:
"""Return the memory directory for a named colony."""
return MEMORIES_DIR / "colonies" / colony_name
def queen_memory_dir(queen_name: str = "default") -> Path:
"""Return the memory directory for a named queen."""
return MEMORIES_DIR / "agents" / "queens" / queen_name
def agent_memory_dir(agent_name: str) -> Path:
"""Return the memory directory for a worker agent."""
return MEMORIES_DIR / "agents" / agent_name
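Resolved against `MEMORIES_DIR` (which the docstring places at `~/.hive/memories/`), these helpers map out as follows; "acme" and "researcher" are illustrative names:

```python
global_memory_dir()             # ~/.hive/memories/global
colony_memory_dir("acme")       # ~/.hive/memories/colonies/acme
queen_memory_dir()              # ~/.hive/memories/agents/queens/default
agent_memory_dir("researcher")  # ~/.hive/memories/agents/researcher
```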
# ---------------------------------------------------------------------------
@@ -108,15 +87,6 @@ def parse_frontmatter(text: str) -> dict[str, str]:
return result
def parse_memory_type(raw: str | None) -> str | None:
"""Validate *raw* against supported memory categories."""
if raw is None:
return None
normalized = raw.strip().lower()
allowed = set(MEMORY_TYPES) | set(GLOBAL_MEMORY_CATEGORIES)
return normalized if normalized in allowed else None
def parse_global_memory_category(raw: str | None) -> str | None:
"""Validate *raw* against ``GLOBAL_MEMORY_CATEGORIES``."""
if raw is None:
@@ -165,7 +135,7 @@ class MemoryFile:
filename=path.name,
path=path,
name=fm.get("name"),
type=parse_memory_type(fm.get("type")),
type=parse_global_memory_category(fm.get("type")),
description=fm.get("description"),
header_lines=lines,
mtime=mtime,
@@ -183,7 +153,7 @@ def scan_memory_files(memory_dir: Path | None = None) -> list[MemoryFile]:
Files are sorted by modification time (newest first). Dotfiles and
subdirectories are ignored.
"""
d = memory_dir or MEMORY_DIR
d = memory_dir or global_memory_dir()
if not d.is_dir():
return []
@@ -236,318 +206,30 @@ def build_memory_document(
)
def diary_filename(d: date | None = None) -> str:
"""Return the diary memory filename for date *d* (default: today)."""
d = d or date.today()
return f"MEMORY-{d.strftime('%Y-%m-%d')}.md"
def build_diary_document(*, date_str: str, body: str) -> str:
"""Build a diary memory file with frontmatter."""
return build_memory_document(
name=f"diary-{date_str}",
description=f"Daily session narrative for {date_str}",
mem_type="diary",
body=body,
)
def validate_global_memory_payload(
*,
category: str,
description: str,
content: str,
) -> str:
"""Validate a queen-global memory save request."""
parsed = parse_global_memory_category(category)
if parsed is None:
raise ValueError(
"Invalid global memory category. Use one of: "
+ ", ".join(GLOBAL_MEMORY_CATEGORIES)
)
if not description.strip():
raise ValueError("Global memory description cannot be empty.")
if not content.strip():
raise ValueError("Global memory content cannot be empty.")
probe = f"{description}\n{content}"
if _GLOBAL_MEMORY_CODE_PATTERN.search(probe):
raise ValueError(
"Global memory is only for durable user profile, preferences, "
"environment, or feedback — not task/code/runtime details."
)
return parsed
def save_global_memory(
*,
category: str,
description: str,
content: str,
name: str | None = None,
memory_dir: Path | None = None,
) -> tuple[str, Path]:
"""Persist one queen-global memory entry."""
parsed = validate_global_memory_payload(
category=category,
description=description,
content=content,
)
target_dir = memory_dir or global_memory_dir()
target_dir.mkdir(parents=True, exist_ok=True)
memory_name = (name or description).strip()
filename = allocate_memory_filename(target_dir, memory_name)
doc = build_memory_document(
name=memory_name,
description=description,
mem_type=parsed,
body=content,
)
if len(doc.encode("utf-8")) > MAX_FILE_SIZE_BYTES:
raise ValueError(
f"Global memory entry exceeds the {MAX_FILE_SIZE_BYTES} byte limit."
)
path = target_dir / filename
path.write_text(doc, encoding="utf-8")
return filename, path
# ---------------------------------------------------------------------------
# Manifest formatting
# ---------------------------------------------------------------------------
def _age_label(mtime: float) -> str:
"""Human-readable age string from an mtime."""
age_days = memory_age_days(mtime)
if age_days <= 0:
return "today"
if age_days == 1:
return "1 day ago"
return f"{age_days} days ago"
def format_memory_manifest(files: list[MemoryFile]) -> str:
"""One-line-per-file text manifest for the recall selector / reflection agent.
"""One-line-per-file text manifest.
Format: ``[type] filename (age): description``
Format: ``[type] filename: description``
"""
lines: list[str] = []
for mf in files:
t = mf.type or "unknown"
desc = mf.description or "(no description)"
age = _age_label(mf.mtime)
lines.append(f"[{t}] {mf.filename} ({age}): {desc}")
lines.append(f"[{t}] {mf.filename}: {desc}")
return "\n".join(lines)
# ---------------------------------------------------------------------------
# Freshness / staleness
# ---------------------------------------------------------------------------
_SECONDS_PER_DAY = 86_400
def memory_age_days(mtime: float) -> int:
"""Return the age of a memory file in whole days."""
if mtime <= 0:
return 0
return int((time.time() - mtime) / _SECONDS_PER_DAY)
def memory_freshness_text(mtime: float) -> str:
"""Return a staleness warning for injection, or empty string if fresh."""
d = memory_age_days(mtime)
if d <= 1:
return ""
return (
f"This memory is {d} days old. "
"Memories are point-in-time observations, not live state — "
"claims about code behavior or file:line citations may be outdated. "
"Verify against current code before asserting as fact."
)
# ---------------------------------------------------------------------------
# Cursor-based incremental processing
# Initialisation
# ---------------------------------------------------------------------------
async def read_conversation_parts(session_dir: Path) -> list[dict[str, Any]]:
"""Read all conversation parts for a session using FileConversationStore.
Returns a list of raw message dicts in sequence order.
"""
from framework.storage.conversation_store import FileConversationStore
store = FileConversationStore(session_dir / "conversations")
return await store.read_parts()
# ---------------------------------------------------------------------------
# Initialisation and legacy migration
# ---------------------------------------------------------------------------
def init_memory_dir(
memory_dir: Path | None = None,
*,
migrate_legacy: bool = False,
) -> None:
"""Create the memory directory if missing.
When ``migrate_legacy`` is true, migrate both v1 memory files and the
previous shared v2 queen memory store into this directory.
"""
d = memory_dir or MEMORY_DIR
first_run = not d.exists()
def init_memory_dir(memory_dir: Path | None = None) -> None:
"""Create the memory directory if missing."""
d = memory_dir or global_memory_dir()
d.mkdir(parents=True, exist_ok=True)
if migrate_legacy:
migrate_legacy_memories(d)
migrate_shared_v2_memories(d)
elif first_run and d == MEMORY_DIR:
migrate_legacy_memories(d)
def migrate_legacy_memories(memory_dir: Path | None = None) -> None:
"""Convert old MEMORY.md + MEMORY-YYYY-MM-DD.md files to individual memory files.
Originals are moved to ``{memory_dir}/.legacy/``.
"""
d = memory_dir or MEMORY_DIR
queen_dir = _HIVE_QUEEN_DIR
legacy_archive = d / ".legacy"
migrated_any = False
# --- Semantic memory (MEMORY.md) ---
semantic = queen_dir / "MEMORY.md"
if semantic.exists():
content = semantic.read_text(encoding="utf-8").strip()
# Skip the blank seed template.
if content and not content.startswith("# My Understanding of the User\n\n*No sessions"):
_write_migration_file(
d,
filename="legacy-semantic-memory.md",
name="legacy-semantic-memory",
mem_type="reference",
description="Migrated semantic memory from previous memory system",
body=content,
)
migrated_any = True
# Archive original.
legacy_archive.mkdir(parents=True, exist_ok=True)
semantic.rename(legacy_archive / "MEMORY.md")
# --- Episodic memories (MEMORY-YYYY-MM-DD.md) ---
old_memories_dir = queen_dir / "memories"
if old_memories_dir.is_dir():
for ep_file in sorted(old_memories_dir.glob("MEMORY-*.md")):
content = ep_file.read_text(encoding="utf-8").strip()
if not content:
continue
date_part = ep_file.stem.replace("MEMORY-", "")
slug = f"legacy-diary-{date_part}.md"
_write_migration_file(
d,
filename=slug,
name=f"legacy-diary-{date_part}",
mem_type="diary",
description=f"Migrated diary entry from {date_part}",
body=content,
)
migrated_any = True
# Archive original.
legacy_archive.mkdir(parents=True, exist_ok=True)
ep_file.rename(legacy_archive / ep_file.name)
if migrated_any:
logger.info("queen_memory_v2: migrated legacy memory files to %s", d)
def migrate_shared_v2_memories(
memory_dir: Path | None = None,
*,
source_dir: Path | None = None,
) -> None:
"""Move shared queen v2 memory files into a colony directory once."""
d = memory_dir or MEMORY_DIR
d.mkdir(parents=True, exist_ok=True)
src = source_dir or MEMORY_DIR
if d.resolve() == src.resolve():
return
marker = d / _MIGRATION_MARKER
if marker.exists():
return
if not src.is_dir():
return
md_files = sorted(
f for f in src.glob("*.md")
if f.is_file() and not f.name.startswith(".")
)
if not md_files:
marker.write_text("no shared memories found\n", encoding="utf-8")
return
archive = src / ".legacy_colony_migration"
archive.mkdir(parents=True, exist_ok=True)
migrated_any = False
for src_file in md_files:
target = d / src_file.name
if not target.exists():
try:
shutil.copy2(src_file, target)
migrated_any = True
except OSError:
logger.debug("shared memory migration copy failed for %s", src_file, exc_info=True)
continue
archived = archive / src_file.name
counter = 2
while archived.exists():
archived = archive / f"{src_file.stem}-{counter}{src_file.suffix}"
counter += 1
try:
src_file.rename(archived)
except OSError:
logger.debug("shared memory migration archive failed for %s", src_file, exc_info=True)
if migrated_any:
logger.info("queen_memory_v2: migrated shared queen memories to %s", d)
marker.write_text(
f"migrated_at={int(time.time())}\nsource={src}\n",
encoding="utf-8",
)
def _write_migration_file(
memory_dir: Path,
filename: str,
name: str,
mem_type: str,
description: str,
body: str,
) -> None:
"""Write a single migrated memory file with frontmatter."""
# Truncate body to respect file size limit (leave room for frontmatter).
header = (
f"---\n"
f"name: {name}\n"
f"description: {description}\n"
f"type: {mem_type}\n"
f"---\n\n"
)
max_body = MAX_FILE_SIZE_BYTES - len(header.encode("utf-8"))
if len(body.encode("utf-8")) > max_body:
# Rough truncation — cut at character level then trim to last newline.
body = body[: max_body - 20]
nl = body.rfind("\n")
if nl > 0:
body = body[:nl]
body += "\n\n...(truncated during migration)"
path = memory_dir / filename
path.write_text(header + body + "\n", encoding="utf-8")
File diff suppressed because it is too large
+97 -118
@@ -1,11 +1,11 @@
"""Recall selector — pre-turn memory selection for queen and worker memory.
"""Recall selector — pre-turn memory selection for the queen.
Before each conversation turn the system:
1. Scans the memory directory for ``.md`` files (cap: 200).
1. Scans one or more memory directories for ``.md`` files (cap: 200 each).
2. Reads headers (frontmatter + first 30 lines).
3. Uses a single LLM call with structured JSON output to pick the ~5
most relevant memories.
4. Injects them into context with staleness warnings for older ones.
3. Uses an LLM call with structured JSON output to pick the most relevant
memories for each scope.
4. Injects them into the system prompt.
The selector only sees the user's query string — no full conversation
context. This keeps it cheap and fast. Errors are caught and return
@@ -20,9 +20,8 @@ from pathlib import Path
from typing import Any
from framework.agents.queen.queen_memory_v2 import (
MEMORY_DIR,
format_memory_manifest,
memory_freshness_text,
global_memory_dir as _default_global_memory_dir,
scan_memory_files,
)
@@ -32,29 +31,6 @@ logger = logging.getLogger(__name__)
# Structured output schema
# ---------------------------------------------------------------------------
RECALL_SCHEMA: dict[str, Any] = {
"type": "json_schema",
"json_schema": {
"name": "memory_selection",
"strict": True,
"schema": {
"type": "object",
"properties": {
"selected_memories": {
"type": "array",
"items": {"type": "string"},
},
},
"required": ["selected_memories"],
"additionalProperties": False,
},
},
}
# ---------------------------------------------------------------------------
# System prompt
# ---------------------------------------------------------------------------
SELECT_MEMORIES_SYSTEM_PROMPT = """\
You are selecting memories that will be useful to the Queen agent as it \
processes a user's query.
@@ -72,9 +48,6 @@ name and description.
query, then do not include it in your list. Be selective and discerning.
- If there are no memories in the list that would clearly be useful, \
return an empty list.
- If a list of recently-used tools is provided, do not select memories \
that are usage reference or API documentation for those tools (the Queen \
is already exercising them). Still select warnings or gotchas about them.
"""
# ---------------------------------------------------------------------------
@@ -86,7 +59,6 @@ async def select_memories(
query: str,
llm: Any,
memory_dir: Path | None = None,
active_tools: list[str] | None = None,
*,
max_results: int = 5,
) -> list[str]:
@@ -94,51 +66,83 @@ async def select_memories(
Returns a list of filenames. Best-effort: on any error returns ``[]``.
"""
mem_dir = memory_dir or MEMORY_DIR
mem_dir = memory_dir or _default_global_memory_dir()
files = scan_memory_files(mem_dir)
if not files:
logger.debug("recall: no memory files found, skipping selection")
return []
logger.debug("recall: selecting from %d memory files for query: %.80s", len(files), query)
logger.debug("recall: selecting from %d memories for query: %.100s", len(files), query)
manifest = format_memory_manifest(files)
user_msg_parts = [f"## User query\n\n{query}\n\n## Available memories\n\n{manifest}"]
if active_tools:
user_msg_parts.append(f"\n\n## Recently-used tools\n\n{', '.join(active_tools)}")
user_msg = "".join(user_msg_parts)
user_msg = f"## User query\n\n{query}\n\n## Available memories\n\n{manifest}"
try:
resp = await llm.acomplete(
messages=[{"role": "user", "content": user_msg}],
system=SELECT_MEMORIES_SYSTEM_PROMPT,
max_tokens=512,
response_format=RECALL_SCHEMA,
max_tokens=1024,
response_format={"type": "json_object"},
)
data = json.loads(resp.content)
raw = (resp.content or "").strip()
if not raw:
logger.warning(
"recall: LLM returned empty response (model=%s, stop=%s)",
resp.model,
resp.stop_reason,
)
return []
# Some models wrap JSON in markdown fences or add preamble text.
# Try to extract the JSON object if raw parse fails.
try:
data = json.loads(raw)
except json.JSONDecodeError:
import re
m = re.search(r"\{.*\}", raw, re.DOTALL)
if m:
data = json.loads(m.group())
else:
logger.warning("recall: LLM returned non-JSON: %.200s", raw)
return []
selected = data.get("selected_memories", [])
# Validate: only return filenames that actually exist.
valid_names = {f.filename for f in files}
result = [s for s in selected if s in valid_names][:max_results]
logger.debug("recall: selected %d memories: %s", len(result), result)
return result
except Exception:
logger.debug("recall: memory selection failed, returning []", exc_info=True)
except Exception as exc:
logger.warning("recall: memory selection failed (%s), returning []", exc)
return []
def _format_relative_age(mtime: float) -> str | None:
"""Return age description if memory is older than 48 hours.
Returns None if 48 hours or newer, otherwise returns "X days old".
"""
import time
age_seconds = time.time() - mtime
hours = age_seconds / 3600
if hours <= 48:
return None
days = int(age_seconds / 86400)
if days == 1:
return "1 day old"
return f"{days} days old"
def format_recall_injection(
filenames: list[str],
memory_dir: Path | None = None,
*,
heading: str = "Selected Memories",
label: str = "Global Memories",
) -> str:
"""Read selected memory files and format for system prompt injection.
Prepends a staleness warning for memories older than 1 day.
Includes relative timestamp (e.g., "3 days old") for memories older than 48 hours.
"""
mem_dir = memory_dir or MEMORY_DIR
mem_dir = memory_dir or _default_global_memory_dir()
if not filenames:
return ""
@@ -149,88 +153,63 @@ def format_recall_injection(
continue
try:
content = path.read_text(encoding="utf-8").strip()
# Get file modification time for age calculation
mtime = path.stat().st_mtime
age_note = _format_relative_age(mtime)
except OSError:
continue
try:
mtime = path.stat().st_mtime
except OSError:
mtime = 0.0
freshness = memory_freshness_text(mtime)
header = f"### {fname}"
if freshness:
header += f"\n\n> {freshness}"
# Build header with optional age note
if age_note:
header = f"### {fname} ({age_note})"
else:
header = f"### {fname}"
blocks.append(f"{header}\n\n{content}")
if not blocks:
return ""
body = "\n\n---\n\n".join(blocks)
logger.debug("recall: injecting %d memory blocks into context", len(blocks))
return f"--- {heading} ---\n\n{body}\n\n--- End {heading} ---"
return f"--- {label} ---\n\n{body}\n\n--- End {label} ---"
# ---------------------------------------------------------------------------
# Cache update (called after each queen turn)
# ---------------------------------------------------------------------------
async def update_recall_cache(
session_dir: Path,
async def build_scoped_recall_blocks(
query: str,
llm: Any,
phase_state: Any | None = None,
memory_dir: Path | None = None,
*,
cache_setter: Any = None,
heading: str = "Selected Memories",
active_tools: list[str] | None = None,
) -> None:
"""Update the recall cache on *phase_state* for the next turn.
global_memory_dir: Path | None = None,
queen_memory_dir: Path | None = None,
queen_id: str | None = None,
global_max_results: int = 3,
queen_max_results: int = 3,
) -> tuple[str, str]:
"""Build separate recall blocks for global and queen-scoped memory."""
global_dir = global_memory_dir or _default_global_memory_dir()
global_selected = await select_memories(
query,
llm,
memory_dir=global_dir,
max_results=global_max_results,
)
global_block = format_recall_injection(
global_selected,
memory_dir=global_dir,
label="Global Memories",
)
Reads the latest user message from conversation parts to use as the
query for memory selection.
"""
mem_dir = memory_dir or MEMORY_DIR
# Extract latest user message as the query.
query = _extract_latest_user_query(session_dir)
if not query:
logger.debug("recall: no user query found, skipping cache update")
return
logger.debug("recall: updating cache for query: %.80s", query)
try:
selected = await select_memories(
queen_block = ""
if queen_memory_dir is not None:
queen_selected = await select_memories(
query,
llm,
mem_dir,
active_tools=active_tools,
memory_dir=queen_memory_dir,
max_results=queen_max_results,
)
queen_label = f"Queen Memories: {queen_id}" if queen_id else "Queen Memories"
queen_block = format_recall_injection(
queen_selected,
memory_dir=queen_memory_dir,
label=queen_label,
)
injection = format_recall_injection(selected, mem_dir, heading=heading)
if cache_setter is not None:
cache_setter(injection)
elif phase_state is not None:
phase_state._cached_recall_block = injection
except Exception:
logger.debug("recall: cache update failed", exc_info=True)
def _extract_latest_user_query(session_dir: Path) -> str:
"""Read the most recent user message from conversation parts."""
parts_dir = session_dir / "conversations" / "parts"
if not parts_dir.is_dir():
return ""
part_files = sorted(parts_dir.glob("*.json"), reverse=True)
for f in part_files[:20]: # Look back at most 20 messages.
try:
data = json.loads(f.read_text(encoding="utf-8"))
if data.get("role") == "user":
content = str(data.get("content", "")).strip()
if content:
# Truncate very long queries.
return content[:1000] if len(content) > 1000 else content
except (json.JSONDecodeError, OSError):
continue
return ""
return global_block, queen_block
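A hypothetical pre-turn call, with `latest_user_message`, `selector_llm`, and `base_prompt` as illustrative names (the queen-scope directory comes from the `queen_memory_dir` helper shown earlier):

```python
# Sketch only: pre-turn recall wiring under the new scoped API.
from framework.agents.queen.queen_memory_v2 import queen_memory_dir

qdir = queen_memory_dir("default")
global_block, queen_block = await build_scoped_recall_blocks(
    latest_user_message,
    selector_llm,
    queen_memory_dir=qdir,
    queen_id="default",
)
# Append whichever blocks are non-empty to the system prompt.
system_prompt = "\n\n".join(p for p in (base_prompt, global_block, queen_block) if p)
```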
@@ -13,7 +13,7 @@
6. **Calling set_output in same turn as tool calls** — Call set_output in a SEPARATE turn.
## File Template Errors
7. **Wrong import paths** — Use `from framework.graph import ...`, NOT `from core.framework.graph import ...`.
7. **Wrong import paths** — Use `from framework.orchestrator import ...`, NOT `from framework.graph import ...` or `from core.framework...`.
8. **Missing storage path** — Agent class must set `self._storage_path = Path.home() / ".hive" / "agents" / "agent_name"`.
9. **Missing mcp_servers.json** — Without this, the agent has no tools at runtime.
10. **Bare `python` command** — Use `"command": "uv"` with args `["run", "python", ...]`.
@@ -25,10 +25,7 @@
14. **Forgetting sys.path setup in conftest.py** — Tests need `exports/` and `core/` on sys.path.
## GCU Errors
15. **Manually wiring browser tools on event_loop nodes** — Use `node_type="gcu"` which auto-includes browser tools. Do NOT manually list browser tool names.
16. **Using GCU nodes as regular graph nodes** — GCU nodes are subagents only. They must ONLY appear in `sub_agents=["gcu-node-id"]` and be invoked via `delegate_to_sub_agent()`. Never connect via edges or use as entry/terminal nodes.
17. **Reusing the same GCU node ID for parallel tasks** — Each concurrent browser task needs a distinct GCU node ID (e.g. `gcu-site-a`, `gcu-site-b`). Two `delegate_to_sub_agent` calls with the same `agent_id` share a browser profile and will interfere with each other's pages.
18. **Passing `profile=` in GCU tool calls** — Profile isolation for parallel subagents is automatic. The framework injects a unique profile per subagent via an asyncio `ContextVar`. Hardcoding `profile="default"` in a GCU system prompt breaks this isolation.
15. **Manually wiring browser tools on event_loop nodes** — Browser nodes use `tools: {policy: "all"}` to get all browser tools.
## Worker Agent Errors
19. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Route worker review/approval through queen escalation instead of direct worker HITL.
@@ -55,7 +55,7 @@ metadata = AgentMetadata()
```python
"""Node definitions for My Agent."""
from framework.graph import NodeSpec
from framework.orchestrator import NodeSpec
# Node 1: Process (autonomous entry node)
# The queen handles intake and passes structured input via
@@ -123,14 +123,15 @@ __all__ = ["process_node", "handoff_node"]
from pathlib import Path
from framework.graph import EdgeSpec, EdgeCondition, Goal, SuccessCriterion, Constraint
from framework.graph.edge import GraphSpec
from framework.graph.executor import ExecutionResult
from framework.graph.checkpoint_config import CheckpointConfig
from framework.orchestrator import EdgeSpec, EdgeCondition, Goal, SuccessCriterion, Constraint
from framework.orchestrator.edge import GraphSpec
from framework.orchestrator.orchestrator import ExecutionResult
from framework.orchestrator.checkpoint_config import CheckpointConfig
from framework.llm import LiteLLMProvider
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from framework.loader.tool_registry import ToolRegistry
from framework.host.agent_host import AgentHost
from framework.host.execution_manager import EntryPointSpec
from .config import default_config, metadata
from .nodes import process_node, handoff_node
@@ -227,7 +228,7 @@ class MyAgent:
tools = list(self._tool_registry.get_tools().values())
tool_executor = self._tool_registry.get_executor()
self._graph = self._build_graph()
self._agent_runtime = create_agent_runtime(
self._agent_runtime = AgentHost(
graph=self._graph, goal=self.goal, storage_path=self._storage_path,
entry_points=[EntryPointSpec(id="default", name="Default", entry_node=self.entry_node,
trigger_type="manual", isolation_level="shared")],
@@ -460,8 +461,8 @@ def tui():
from framework.tui.app import AdenTUI
from framework.llm import LiteLLMProvider
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from framework.host.agent_host import AgentHost
from framework.host.execution_manager import EntryPointSpec
async def run_tui():
agent = MyAgent()
@@ -471,7 +472,7 @@ def tui():
mcp_cfg = Path(__file__).parent / "mcp_servers.json"
if mcp_cfg.exists(): agent._tool_registry.load_mcp_config(mcp_cfg)
llm = LiteLLMProvider(model=agent.config.model, api_key=agent.config.api_key, api_base=agent.config.api_base)
runtime = create_agent_runtime(
runtime = AgentHost(
graph=agent._build_graph(), goal=agent.goal, storage_path=storage,
entry_points=[EntryPointSpec(id="start", name="Start", entry_node="process", trigger_type="manual", isolation_level="isolated")],
llm=llm, tools=list(agent._tool_registry.get_tools().values()), tool_executor=agent._tool_registry.get_executor())
@@ -509,17 +510,17 @@ if __name__ == "__main__":
## mcp_servers.json
> **Auto-generated.** `initialize_and_build_agent` creates this file with hive-tools
> **Auto-generated.** `initialize_and_build_agent` creates this file with hive_tools
> as the default. Only edit manually to add additional MCP servers.
```json
{
"hive-tools": {
"hive_tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../../tools",
"description": "Hive tools MCP server"
"description": "hive_tools MCP server"
}
}
```
@@ -0,0 +1,227 @@
# Declarative Agent File Templates
Agents are defined as a single `agent.yaml` file. No Python code needed.
The runner loads this file directly -- no `agent.py`, `config.py`, or
`nodes/__init__.py` required.
## agent.yaml -- Complete Agent Definition
```yaml
name: my-agent
version: 1.0.0
description: What this agent does.
metadata:
intro_message: Welcome! What would you like me to do?
# Template variables -- substituted into system_prompt and identity_prompt
# via {{variable_name}} syntax. Use this for config values that appear
# in prompts (spreadsheet IDs, API endpoints, account names, etc.)
variables:
spreadsheet_id: "1ZVxWDL..."
sheet_name: "contacts"
goal:
description: What this agent achieves.
success_criteria:
- "First success criterion"
- "Second success criterion"
constraints:
- "Hard constraint the agent must respect"
identity_prompt: |
You are a helpful agent.
conversation_mode: continuous # always "continuous" for Hive agents
loop_config:
max_iterations: 100
max_tool_calls_per_turn: 30
max_context_tokens: 32000
# MCP servers to connect (resolved by name from ~/.hive/mcp_registry/)
mcp_servers:
- name: hive_tools
- name: gcu-tools
nodes:
# Node 1: Process (autonomous entry node)
# The queen handles intake and passes structured input via
# run_agent_with_input(task). NO client-facing intake node.
- id: process
name: Process
description: Execute the task using available tools
max_node_visits: 0 # 0 = unlimited (forever-alive agents)
input_keys: [user_request, feedback]
output_keys: [results]
nullable_output_keys: [feedback]
tools:
policy: explicit
allowed: [web_search, web_scrape, save_data, load_data, list_data_files]
success_criteria: Results are complete and accurate.
system_prompt: |
You are a processing agent. Your task is in memory under "user_request".
If "feedback" is present, this is a revision.
Work in phases:
1. Use tools to gather/process data
2. Analyze results
3. Call set_output in a SEPARATE turn:
- set_output("results", "structured results")
# Node 2: Handoff (autonomous)
- id: handoff
name: Handoff
description: Prepare worker results for queen review
max_node_visits: 0
input_keys: [results, user_request]
output_keys: [next_action, feedback, worker_summary]
nullable_output_keys: [feedback, worker_summary]
tools:
policy: none # handoff nodes don't need tools
success_criteria: Results are packaged for queen decision-making.
system_prompt: |
Do NOT talk to the user directly. The queen is the only user interface.
If blocked, call escalate(reason, context) then set:
- set_output("next_action", "escalated")
- set_output("feedback", "what help is needed")
Otherwise summarize and set:
- set_output("worker_summary", "short summary for queen")
- set_output("next_action", "done") or "revise"
- set_output("feedback", "what to revise") only when revising
edges:
- from_node: process
to_node: handoff
# Feedback loop
- from_node: handoff
to_node: process
condition: conditional
condition_expr: "str(next_action).lower() == 'revise'"
priority: 2
# Escalation loop
- from_node: handoff
to_node: process
condition: conditional
condition_expr: "str(next_action).lower() == 'escalated'"
priority: 3
# Loop back for next task
- from_node: handoff
to_node: process
condition: conditional
condition_expr: "str(next_action).lower() == 'done'"
entry_node: process
terminal_nodes: [] # [] = forever-alive
```
## Key differences from Python templates
| Before (Python) | After (YAML) |
|-------------------------------------|----------------------------------------|
| `agent.py` (250 lines boilerplate) | Not needed |
| `config.py` (dataclass + metadata) | `variables:` + `metadata:` in YAML |
| `nodes/__init__.py` (NodeSpec calls)| `nodes:` list in YAML |
| `__init__.py`, `__main__.py` | Not needed |
| f-string config injection | `{{variable_name}}` templates |
| `mcp_servers.json` (separate file) | `mcp_servers:` in YAML (or keep file) |
## Node types
| Type | Description | Tools |
|--------------|---------------------------------------|--------------------------|
| `event_loop` | LLM-driven orchestration (default) | Explicit list or `none` |
| `gcu` | Browser automation via GCU tools | `policy: all` (auto) |
## Tool access policies
```yaml
# Explicit list (recommended for most nodes)
tools:
policy: explicit
allowed: [web_search, save_data]
# All tools (for browser automation nodes)
tools:
policy: all
# No tools (for handoff/summary nodes)
tools:
policy: none
```
## Edge conditions
| Condition | When to use |
|---------------|-------------------------------------------------------|
| `on_success` | Default. Next node after current succeeds. |
| `on_failure` | Fallback path when current node fails. |
| `always` | Always traverse regardless of outcome. |
| `conditional` | Evaluate `condition_expr` against shared memory keys. |
| `llm_decide` | Let the LLM decide at runtime. |
## Template variables
Use `{{variable_name}}` in `system_prompt` and `identity_prompt`.
Variables are defined in the top-level `variables:` map.
```yaml
variables:
spreadsheet_id: "1ZVxWDL..."
api_endpoint: "https://api.example.com"
nodes:
- id: start
system_prompt: |
Connect to spreadsheet: {{spreadsheet_id}}
API endpoint: {{api_endpoint}}
```
## Entry points
Default is a single manual entry point. For timer/scheduled triggers:
```yaml
entry_points:
- id: default
trigger_type: manual
- id: daily-check
trigger_type: timer
trigger_config:
interval_minutes: 30
```
## mcp_servers.json -- Still Supported
The `mcp_servers.json` file is still loaded automatically if present alongside
`agent.yaml`. You can also inline servers in the YAML:
```yaml
mcp_servers:
- name: hive_tools
- name: gcu-tools
```
Both approaches work. The JSON file takes precedence for backward compatibility.
## Migration from Python agents
Run the migration tool to convert existing agents:
```bash
uv run python -m framework.tools.migrate_agent exports/my_agent
```
This generates `agent.yaml` from the existing `agent.py` + `nodes/` + `config.py`.
The original files are left untouched. Once verified, you can delete the Python files.
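One way to spot-check the generated file before deleting anything is to load it and assert the documented top-level keys. A minimal sketch, assuming PyYAML is installed; the field names follow the template above:
```python
# Sanity-check a migrated agent.yaml (illustrative; checks only the
# fields documented in this guide).
import yaml

with open("exports/my_agent/agent.yaml", encoding="utf-8") as f:
    spec = yaml.safe_load(f)

for key in ("name", "goal", "nodes", "edges", "entry_node"):
    assert key in spec, f"agent.yaml missing key: {key}"

node_ids = {node["id"] for node in spec["nodes"]}
assert spec["entry_node"] in node_ids, "entry_node must reference a defined node"
for edge in spec["edges"]:
    assert edge["from_node"] in node_ids and edge["to_node"] in node_ids
```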
## Files after migration
```
my_agent/
agent.yaml # The only required file
mcp_servers.json # Optional (can inline in YAML)
flowchart.json # Optional (auto-generated)
```
@@ -1,306 +1,193 @@
# Hive Agent Framework Condensed Reference
# Hive Agent Framework -- Condensed Reference
## Architecture
Agents are Python packages in `exports/`:
Agents are declarative JSON configs in `exports/`:
```
exports/my_agent/
├── __init__.py # MUST re-export ALL module-level vars from agent.py
├── __main__.py # CLI (run, tui, info, validate, shell)
├── agent.py # Graph construction (goal, edges, agent class)
├── config.py # Runtime config
├── nodes/__init__.py # Node definitions (NodeSpec)
├── mcp_servers.json # MCP tool server config
└── tests/ # pytest tests
agent.json # The entire agent definition
mcp_servers.json # MCP tool server config (optional, prefer registry refs)
```
## Agent Loading Contract
No Python files. No `__init__.py`, `__main__.py`, `config.py`, or `nodes/`.
`AgentRunner.load()` imports the package (`__init__.py`) and reads these
module-level variables via `getattr()`:
## Agent Loading
| Variable | Required | Default if missing | Consequence |
|----------|----------|--------------------|-------------|
| `goal` | YES | `None` | **FATAL** — "must define goal, nodes, edges" |
| `nodes` | YES | `None` | **FATAL** — same error |
| `edges` | YES | `None` | **FATAL** — same error |
| `entry_node` | no | `nodes[0].id` | Probably wrong node |
| `entry_points` | no | `{}` | **Nodes unreachable** — validation fails |
| `terminal_nodes` | **YES** | `[]` | **FATAL** — graph must have at least one terminal node |
| `pause_nodes` | no | `[]` | OK |
| `conversation_mode` | no | not passed | Isolated mode (no context carryover) |
| `identity_prompt` | no | not passed | No agent-level identity |
| `loop_config` | no | `{}` | No iteration limits |
| `triggers.json` (file) | no | not present | No triggers (timers, webhooks) |
`AgentLoader.load()` reads `agent.json` and builds the execution graph.
If `agent.py` exists (legacy), it's loaded as a Python module instead.
**CRITICAL:** `__init__.py` MUST import and re-export ALL of these from
`agent.py`. Missing exports silently fall back to defaults, causing
hard-to-debug failures.
## agent.json Schema
**Why `default_agent.validate()` is NOT sufficient:**
`validate()` checks the agent CLASS's internal graph (self.nodes, self.edges).
These are always correct because the constructor references agent.py's module
vars directly. But `AgentRunner.load()` reads from the PACKAGE (`__init__.py`),
not the class. So `validate()` passes while `AgentRunner.load()` fails.
Always test with `AgentRunner.load("exports/{name}")` — this is the same
code path the TUI and `hive run` use.
## Goal
Defines success criteria and constraints:
```python
goal = Goal(
id="kebab-case-id",
name="Display Name",
description="What the agent does",
success_criteria=[
SuccessCriterion(id="sc-id", description="...", metric="...", target="...", weight=0.25),
],
constraints=[
Constraint(id="c-id", description="...", constraint_type="hard", category="quality"),
],
)
```json
{
"name": "my-agent",
"version": "1.0.0",
"description": "What this agent does",
"goal": {
"description": "What to achieve",
"success_criteria": ["criterion 1", "criterion 2"],
"constraints": ["constraint 1"]
},
"identity_prompt": "You are a helpful agent.",
"conversation_mode": "continuous",
"loop_config": {
"max_iterations": 100,
"max_tool_calls_per_turn": 30,
"max_context_tokens": 32000
},
"mcp_servers": [
{"name": "hive_tools"},
{"name": "gcu-tools"}
],
"variables": {
"spreadsheet_id": "1ZVx..."
},
"nodes": [...],
"edges": [...],
"entry_node": "process",
"terminal_nodes": []
}
```
- 3-5 success criteria, weights sum to 1.0
- 1-5 constraints (hard/soft, categories: quality, accuracy, interaction, functional)
## NodeSpec Fields
## Template Variables
Use `{{variable_name}}` in `system_prompt` and `identity_prompt`. Variables
are defined in the top-level `variables` object:
```json
{
"variables": {"sheet_id": "1ZVx..."},
"nodes": [{
"id": "start",
"system_prompt": "Use sheet: {{sheet_id}}"
}]
}
```
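Substitution is plain string templating. A minimal sketch of the behavior; `render_prompt` is a hypothetical helper, not a framework API:
```python
import re

def render_prompt(prompt: str, variables: dict[str, str]) -> str:
    """Replace {{name}} placeholders with values from the variables map."""
    def replace(match: re.Match) -> str:
        key = match.group(1)
        if key not in variables:
            raise KeyError(f"undefined template variable: {key}")
        return variables[key]
    return re.sub(r"\{\{(\w+)\}\}", replace, prompt)

print(render_prompt("Use sheet: {{sheet_id}}", {"sheet_id": "1ZVx..."}))
# -> Use sheet: 1ZVx...
```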
## Node Fields
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| id | str | required | kebab-case identifier |
| name | str | required | Display name |
| name | str | id | Display name |
| description | str | required | What the node does |
| node_type | str | required | `"event_loop"` or `"gcu"` (browser automation — see GCU Guide appendix) |
| input_keys | list[str] | required | Memory keys this node reads |
| output_keys | list[str] | required | Memory keys this node writes via set_output |
| node_type | str | "event_loop" | `"event_loop"` |
| input_keys | list | [] | Memory keys this node reads |
| output_keys | list | [] | Memory keys this node writes via set_output |
| system_prompt | str | "" | LLM instructions |
| tools | list[str] | [] | Tool names from MCP servers |
| client_facing | bool | False | Deprecated compatibility field. Queen interactivity is implicit; workers should escalate instead |
| nullable_output_keys | list[str] | [] | Keys that may remain unset |
| max_node_visits | int | 0 | 0=unlimited (default); >1 for one-shot feedback loops |
| max_retries | int | 3 | Retries on failure |
| tools | object | {} | Tool access policy (see below) |
| nullable_output_keys | list | [] | Keys that may remain unset |
| max_node_visits | int | 1 | 0=unlimited (for forever-alive agents) |
| success_criteria | str | "" | Natural language for judge evaluation |
| client_facing | bool | false | Whether output is shown to user |
## EdgeSpec Fields
## Tool Access Policies
Each node declares its tools via a policy object:
```json
{"tools": {"policy": "explicit", "allowed": ["web_search", "save_data"]}}
{"tools": {"policy": "all"}}
{"tools": {"policy": "none"}}
```
- `explicit` (default): only named tools. Empty `allowed` = zero tools.
- `all`: all tools from registry (e.g. for browser automation nodes).
- `none`: no tools (for handoff/summary nodes).
## Edge Fields
| Field | Type | Description |
|-------|------|-------------|
| id | str | kebab-case identifier |
| source | str | Source node ID |
| target | str | Target node ID |
| condition | EdgeCondition | ON_SUCCESS, ON_FAILURE, ALWAYS, CONDITIONAL |
| condition_expr | str | Python expression evaluated against memory (for CONDITIONAL) |
| priority | int | Positive=forward (evaluated first), negative=feedback (loop-back) |
| from_node | str | Source node ID |
| to_node | str | Target node ID |
| condition | str | `on_success`, `on_failure`, `always`, `conditional` |
| condition_expr | str | Python expression for conditional routing |
| priority | int | Higher = evaluated first |
condition_expr examples (see the evaluation sketch below):
- `"needs_more_research == True"`
- `"str(next_action).lower() == 'revise'"`
## Key Patterns
### STEP 1/STEP 2 (Client-Facing Nodes)
```
**STEP 1 — Respond to the user (text only, NO tool calls):**
[Present information, ask questions]
**STEP 2 — After the user responds, call set_output:**
- set_output("key", "value based on user response")
```
This prevents premature set_output before user interaction.
### Fewer, Richer Nodes (CRITICAL)
**Hard limit: 3-6 nodes for most agents.** Never exceed 6 unless the user
explicitly requests a complex multi-phase pipeline.
**Hard limit: 3-6 nodes for most agents.** Each node boundary serializes
outputs and destroys in-context information. Merge unless:
1. Client-facing boundary (different interaction models)
2. Disjoint tool sets
3. Parallel execution (fan-out branches)
Each node boundary serializes outputs to the shared buffer and **destroys** all
in-context information: tool call results, intermediate reasoning, conversation
history. A research node that searches, fetches, and analyzes in ONE node keeps
all source material in its conversation context. Split across 3 nodes, each
downstream node only sees the serialized summary string.
**Decision framework — merge unless ANY of these apply:**
1. **Client-facing boundary** — Autonomous and client-facing work MUST be
separate nodes (different interaction models)
2. **Disjoint tool sets** — If tools are fundamentally different (e.g., web
search vs database), separate nodes make sense
3. **Parallel execution** — Fan-out branches must be separate nodes
**Red flags that you have too many nodes:**
- A node with 0 tools (pure LLM reasoning) → merge into predecessor/successor
- A node that sets only 1 trivial output → collapse into predecessor
- Multiple consecutive autonomous nodes → combine into one rich node
- A "report" node that presents analysis → merge into the client-facing node
- A "confirm" or "schedule" node that doesn't call any external service → remove
**Typical agent structure (2 nodes):**
**Typical structure (2 nodes):**
```
process (autonomous) ←→ review (queen-mediated)
```
The queen owns intake — she gathers requirements from the user, then
passes structured input via `run_agent_with_input(task)`. When building
the agent, design the entry node's `input_keys` to match what the queen
will provide at run time. Worker agents should NOT have a client-facing
intake node. Mid-execution review/approval should happen through queen
escalation rather than direct worker HITL.
For simpler agents, just 1 autonomous node:
```
process (autonomous) — loops back to itself
process (autonomous) <-> review (queen-mediated)
```
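In the legacy Python NodeSpec/EdgeSpec style shown elsewhere in this reference, the two-node shape looks roughly like this (field subset illustrative, prompts elided):
```python
from framework.graph import NodeSpec, EdgeSpec, EdgeCondition

process_node = NodeSpec(
    id="process",
    name="Process",
    description="Execute the task autonomously.",
    node_type="event_loop",
    input_keys=["user_request", "feedback"],
    output_keys=["results"],
    nullable_output_keys=["feedback"],  # only present on the revise edge
    max_node_visits=0,                  # unlimited: forever-alive agent
)

review_node = NodeSpec(
    id="review",
    name="Review",
    description="Package results and decide next_action.",
    node_type="event_loop",
    input_keys=["results"],
    output_keys=["next_action", "feedback"],
    nullable_output_keys=["feedback"],
    max_node_visits=0,
)

edges = [
    EdgeSpec(id="process-to-review", source="process", target="review",
             condition=EdgeCondition.ON_SUCCESS),
    EdgeSpec(id="review-to-process", source="review", target="process",
             condition=EdgeCondition.CONDITIONAL,
             condition_expr="str(next_action).lower() == 'revise'"),
]
```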
### nullable_output_keys
For inputs that only arrive on certain edges:
```python
research_node = NodeSpec(
input_keys=["brief", "feedback"],
nullable_output_keys=["feedback"], # Only present on feedback edge
max_node_visits=3,
)
```
### Mutually Exclusive Outputs
For routing decisions:
```python
review_node = NodeSpec(
output_keys=["approved", "feedback"],
nullable_output_keys=["approved", "feedback"], # Node sets one or the other
)
```
### Continuous Loop Pattern
Mark the primary event_loop node as terminal: `terminal_nodes=["process"]`.
The node has `output_keys` and can complete when the agent finishes its work.
Use `conversation_mode="continuous"` to preserve context across transitions.
The queen owns intake. Worker agents should NOT have a client-facing intake
node. Mid-execution review should happen through queen escalation.
### set_output
- Synthetic tool injected by framework
- Call separately from real tool calls (separate turn)
- `set_output("key", "value")` stores to the shared buffer
## Edge Conditions
| Condition | When |
|-----------|------|
| ON_SUCCESS | Node completed successfully |
| ON_FAILURE | Node failed |
| ALWAYS | Unconditional |
| CONDITIONAL | condition_expr evaluates to True against memory |
condition_expr examples:
- `"needs_more_research == True"`
- `"str(next_action).lower() == 'new_agent'"`
- `"feedback is not None"`
## Graph Lifecycle
### Graph Lifecycle
| Pattern | terminal_nodes | When |
|---------|---------------|------|
| **Continuous loop** | `["node-with-output-keys"]` | **DEFAULT for all agents** |
| Continuous loop | `["node-with-output-keys"]` | DEFAULT for all agents |
| Linear | `["last-node"]` | One-shot/batch agents |
**Every graph must have at least one terminal node.** Terminal nodes
define where execution ends. For interactive agents that loop continuously,
mark the primary event_loop node as terminal (it has `output_keys` and can
complete at any point). The framework default for `max_node_visits` is 0
(unbounded), so nodes work correctly in continuous loops without explicit
override. Only set `max_node_visits > 0` in one-shot agents with feedback loops.
Every node must have at least one outgoing edge — no dead ends.
Every graph must have at least one terminal node.
## Continuous Conversation Mode
### Continuous Conversation Mode
`conversation_mode` has ONLY two valid states:
- `"continuous"` recommended for interactive agents
- Omit entirely isolated per-node conversations (each node starts fresh)
- `"continuous"` -- recommended (context carries across node transitions)
- Omit entirely -- isolated per-node conversations
**INVALID values** (do NOT use): `"client_facing"`, `"interactive"`,
`"adaptive"`, `"shared"`. These do not exist in the framework.
When `conversation_mode="continuous"`:
- Same conversation thread carries across node transitions
- Layered system prompts: identity (agent-level) + narrative + focus (per-node)
- Transition markers inserted at boundaries
- Compaction happens opportunistically at phase transitions
**INVALID values:** `"client_facing"`, `"interactive"`, `"shared"`.
## loop_config
Only three valid keys:
```python
loop_config = {
"max_iterations": 100, # Max LLM turns per node visit
"max_tool_calls_per_turn": 20, # Max tool calls per LLM response
"max_context_tokens": 32000, # Triggers conversation compaction
```json
{
"max_iterations": 100,
"max_tool_calls_per_turn": 20,
"max_context_tokens": 32000
}
```
**INVALID keys** (do NOT use): `"strategy"`, `"mode"`, `"timeout"`,
`"temperature"`. These are silently ignored or cause errors.
## Data Tools (Spillover)
For large data that exceeds context:
- `save_data(filename, data)` — Write to session data dir
- `load_data(filename, offset, limit)` — Read with pagination
- `list_data_files()` — List files
- `serve_file_to_user(filename, label)` — Clickable file:// URI
- `save_data(filename, data)` -- write to session data dir
- `load_data(filename, offset, limit)` -- read with pagination
- `list_data_files()` -- list files
- `serve_file_to_user(filename, label)` -- clickable file URI
`data_dir` is auto-injected by framework — LLM never sees it.
`data_dir` is auto-injected by framework.
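An illustrative tool-call sequence for results too large to keep in context (signatures as listed above; `big_json_blob` stands in for oversized output):
```
save_data("products.json", big_json_blob)              # spill to the session data dir
load_data("products.json", offset=0, limit=200)        # read back a slice
list_data_files()                                      # enumerate spillover files
serve_file_to_user("products.json", label="Products")  # hand the file to the user
```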
## Fan-Out / Fan-In
Multiple ON_SUCCESS edges from same source = parallel execution via asyncio.gather().
- Parallel nodes must have disjoint output_keys
- Only one branch may have client_facing nodes
- Fan-in node gets all outputs in the shared buffer
Multiple `on_success` edges from same source = parallel execution.
Parallel nodes must have disjoint output_keys.
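A sketch of a fan-out/fan-in edge set in the Python EdgeSpec style (same imports as the sketch above; IDs and node names illustrative):
```python
edges = [
    # Two ON_SUCCESS edges from the same source run their targets in parallel.
    EdgeSpec(id="fetch-to-a", source="fetch", target="analyze-a",
             condition=EdgeCondition.ON_SUCCESS),
    EdgeSpec(id="fetch-to-b", source="fetch", target="analyze-b",
             condition=EdgeCondition.ON_SUCCESS),
    # Fan-in: the merge node sees both branches' outputs in the shared buffer.
    EdgeSpec(id="a-to-merge", source="analyze-a", target="merge",
             condition=EdgeCondition.ON_SUCCESS),
    EdgeSpec(id="b-to-merge", source="analyze-b", target="merge",
             condition=EdgeCondition.ON_SUCCESS),
]
```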
## Judge System
- **Implicit** (default): ACCEPTs when LLM finishes with no tool calls and all required outputs set
- **SchemaJudge**: Validates against Pydantic model
- **Custom**: Implement `evaluate(context) -> JudgeVerdict`
Judge is the SOLE acceptance mechanism — no ad-hoc framework gating.
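A custom judge sketch. Only the `evaluate(context) -> JudgeVerdict` contract comes from this reference; the verdict shape and context attributes here are assumptions:
```python
from dataclasses import dataclass

# Assumed shapes: the reference documents only the
# evaluate(context) -> JudgeVerdict contract, not these exact types.
@dataclass
class JudgeVerdict:
    accepted: bool
    reason: str = ""

class RequiredKeysJudge:
    """Accept only when every required output key is set and non-empty."""

    def __init__(self, required: list[str]):
        self.required = required

    def evaluate(self, context) -> JudgeVerdict:
        outputs = getattr(context, "outputs", {}) or {}
        missing = [key for key in self.required if not outputs.get(key)]
        if missing:
            return JudgeVerdict(False, f"missing outputs: {missing}")
        return JudgeVerdict(True)
```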
## Triggers (Timers, Webhooks)
For agents that react to external events, create a `triggers.json` file
in the agent's export directory:
```json
[
{
"id": "daily-check",
"name": "Daily Check",
"trigger_type": "timer",
"trigger_config": {"cron": "0 9 * * *"},
"task": "Run the daily check process"
}
]
```
### Key Fields
- `trigger_type`: `"timer"` or `"webhook"`
- `trigger_config`: `{"cron": "0 9 * * *"}` or `{"interval_minutes": 20}`
- `task`: describes what the worker should do when the trigger fires
- Triggers can also be created/removed at runtime via `set_trigger` / `remove_trigger` queen tools
## Tool Discovery
Do NOT rely on a static tool list — it will be outdated. Always call
`list_agent_tools()` with NO arguments first to see ALL available tools.
Only use `group=` or `output_schema=` as follow-up calls after seeing the
full list.
Always call `list_agent_tools()` first to see available tools.
Do NOT rely on a static tool list.
```
list_agent_tools() # ALWAYS call this first
list_agent_tools(group="gmail", output_schema="full") # then drill into a category
list_agent_tools("exports/my_agent/mcp_servers.json") # specific agent's tools
list_agent_tools() # full summary
list_agent_tools(group="gmail", output_schema="full") # drill into category
```
After building, run `validate_agent_package("{name}")` to check everything at once.
Common tool categories (verify via list_agent_tools):
- **Web**: search, scrape, PDF
- **Data**: save/load/append/list data files, serve to user
- **File**: view, write, replace, diff, list, grep
- **Communication**: email, gmail, slack, telegram
- **CRM**: hubspot, apollo, calcom
- **GitHub**: stargazers, user profiles, repos
- **Vision**: image analysis
- **Time**: current time
After building, run `validate_agent_package("{name}")` to check everything.
@@ -1,158 +1,79 @@
# GCU Browser Automation Guide
# Browser Automation Guide
## When to Use GCU Nodes
## When to Use Browser Nodes
Use `node_type="gcu"` when:
- The user's workflow requires **navigating real websites** (scraping, form-filling, social media interaction, testing web UIs)
- The task involves **dynamic/JS-rendered pages** that `web_scrape` cannot handle (SPAs, infinite scroll, login-gated content)
- The agent needs to **interact with a website** — clicking, typing, scrolling, selecting, uploading files
Use browser nodes (with `tools: {policy: "all"}`) when:
- The task requires interacting with web pages (clicking, typing, navigating)
- No API is available for the target service
- The user is already logged in to the target site
Do NOT use GCU for:
- Static content that `web_scrape` handles fine
- API-accessible data (use the API directly)
- PDF/file processing
- Anything that doesn't require a browser UI
## What Browser Nodes Are
## What GCU Nodes Are
- Regular `event_loop` nodes with browser tools from gcu-tools MCP server
- Set `tools: {policy: "all"}` to give access to all browser tools
- Wire into the graph with edges like any other node
- No special node_type needed
- `node_type="gcu"` — a declarative enhancement over `event_loop`
- Framework auto-prepends browser best-practices system prompt
- Framework auto-includes all 31 browser tools from `gcu-tools` MCP server
- Same underlying `EventLoopNode` class — no new imports needed
- `tools=[]` is correct — tools are auto-populated at runtime
## Available Browser Tools
## GCU Architecture Pattern
All tools are prefixed with `browser_`:
- `browser_start`, `browser_open`, `browser_navigate` — launch/navigate
- `browser_click`, `browser_click_coordinate`, `browser_fill`, `browser_type` — interact
- `browser_press` (with optional `modifiers=["ctrl"]` etc.) — keyboard shortcuts
- `browser_snapshot` — compact accessibility-tree read (structured)
<!-- vision-only -->
- `browser_screenshot` — visual capture (annotated PNG)
<!-- /vision-only -->
- `browser_shadow_query`, `browser_get_rect` — locate elements (shadow-piercing via `>>>`)
- `browser_scroll`, `browser_wait` — navigation helpers
- `browser_evaluate` — run JavaScript
- `browser_close`, `browser_close_finished` — tab cleanup
GCU nodes are **subagents** — invoked via `delegate_to_sub_agent()`, not connected via edges.
## Pick the right reading tool
- Primary nodes (`event_loop`, client-facing) orchestrate; GCU nodes do browser work
- Parent node declares `sub_agents=["gcu-node-id"]` and calls `delegate_to_sub_agent(agent_id="gcu-node-id", task="...")`
- GCU nodes set `max_node_visits=1` (single execution per delegation), `client_facing=False`
- GCU nodes use `output_keys=["result"]` and return structured JSON via `set_output("result", ...)`
**`browser_snapshot`** — compact accessibility tree of interactive elements. Fast, cheap, good for static or form-heavy pages where the DOM matches what's visually rendered (documentation, simple dashboards, search results, settings pages).
## GCU Node Definition Template
**`browser_screenshot`** — visual capture + metadata (`cssWidth`, `devicePixelRatio`, scale fields). **Use this on any complex SPA** — LinkedIn, Twitter/X, Reddit, Gmail, Notion, Slack, Discord, any site using shadow DOM, virtual scrolling, React reconciliation, or dynamic layout. On these pages, snapshot refs go stale in seconds, shadow contents aren't in the AX tree, and virtual-scrolled elements disappear from the tree entirely. Screenshot is the **only** reliable way to orient yourself.
```python
gcu_browser_node = NodeSpec(
id="gcu-browser-worker",
name="Browser Worker",
description="Browser subagent that does X.",
node_type="gcu",
client_facing=False,
max_node_visits=1,
input_keys=[],
output_keys=["result"],
tools=[], # Auto-populated with all browser tools
system_prompt="""\
You are a browser agent. Your job: [specific task].
Neither tool is "preferred" universally — they're for different jobs. Default to snapshot on text-heavy static pages, screenshot on SPAs and anything shadow-DOM-heavy. Activate the `browser-automation` skill for the full decision tree.
## Workflow
1. browser_start (only if no browser is running yet)
2. browser_open(url=TARGET_URL) — note the returned targetId
3. browser_snapshot to read the page
4. [task-specific steps]
5. set_output("result", JSON)
## Coordinate rule
## Output format
set_output("result", JSON) with:
- [field]: [type and description]
""",
)
`browser_screenshot` delivers the image at the CSS viewport's own dimensions, so a pixel you read off the screenshot is the same coordinate `browser_click_coordinate`, `browser_hover_coordinate`, and `browser_press_at` expect — no conversion. `getBoundingClientRect()` likewise returns CSS pixels; pass through unchanged.
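An illustrative sequence, assuming `browser_get_rect` returns `x`/`y`/`width`/`height` in CSS pixels (consistent with the rule above):
```
rect = browser_get_rect(selector="button.submit")     # CSS-pixel rect
browser_click_coordinate(x=rect.x + rect.width / 2,   # same coordinate space,
                         y=rect.y + rect.height / 2)  # no conversion needed
```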
## System prompt tips for browser nodes
```
1. On LinkedIn / X / Reddit / Gmail / any SPA — use browser_screenshot to orient,
not browser_snapshot. Shadow DOM and virtual scrolling make snapshots unreliable.
2. For static pages (docs, forms, search results), browser_snapshot is fine.
3. Before typing into a rich-text editor (X compose, LinkedIn DM, Gmail, Reddit),
click the input area first with browser_click_coordinate so React / Draft.js /
Lexical register a native focus event. Otherwise the send button stays disabled.
4. Use browser_wait(seconds=2-3) after navigation for SPA hydration.
5. If you hit an auth wall, call set_output with an error and move on.
6. Keep tool calls per turn <= 10 for reliability.
```
## Parent Node Template (orchestrating GCU subagents)
```python
orchestrator_node = NodeSpec(
id="orchestrator",
...
node_type="event_loop",
sub_agents=["gcu-browser-worker"],
system_prompt="""\
...
delegate_to_sub_agent(
agent_id="gcu-browser-worker",
task="Navigate to [URL]. Do [specific task]. Return JSON with [fields]."
)
...
""",
tools=[], # Orchestrator doesn't need browser tools
)
```
## mcp_servers.json with GCU
## Example
```json
{
"hive-tools": { ... },
"gcu-tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "-m", "gcu.server", "--stdio"],
"cwd": "../../tools",
"description": "GCU tools for browser automation"
}
"id": "scan-profiles",
"name": "Scan LinkedIn Profiles",
"description": "Navigate LinkedIn search results and collect profile data",
"tools": {"policy": "all"},
"input_keys": ["search_url"],
"output_keys": ["profiles"],
"system_prompt": "Navigate to the search URL via browser_navigate(wait_until='load', timeout_ms=20000). Wait 3s for SPA hydration. On LinkedIn, use browser_screenshot to see the page — browser_snapshot misses shadow-DOM and virtual-scrolled content. Paginate through results by scrolling and screenshotting; extract each profile card by reading its visible layout..."
}
```
Note: `gcu-tools` is auto-added if any node uses `node_type="gcu"`, but including it explicitly is fine.
## GCU System Prompt Best Practices
Key rules to bake into GCU node prompts:
- Prefer `browser_snapshot` over `browser_get_text("body")` — compact accessibility tree vs 100KB+ raw HTML
- Always `browser_wait` after navigation
- Use large scroll amounts (~2000-5000) for lazy-loaded content
- For spillover files, use `run_command` with grep, not `read_file`
- If auth wall detected, report immediately — don't attempt login
- Keep tool calls per turn ≤10
- Tab isolation: when browser is already running, use `browser_open(background=true)` and pass `target_id` to every call
## Multiple Concurrent GCU Subagents
When a task can be parallelized across multiple sites or profiles, declare a distinct GCU
node for each and invoke them all in the same LLM turn. The framework batches all
`delegate_to_sub_agent` calls made in one turn and runs them with `asyncio.gather`, so
they execute concurrently — not sequentially.
**Each GCU subagent automatically gets its own isolated browser context** — no `profile=`
argument is needed in tool calls. The framework derives a unique profile from the subagent's
node ID and instance counter and injects it via an asyncio `ContextVar` before the subagent
runs.
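Conceptually the isolation works like the following sketch (names hypothetical; the framework's actual code differs):
```python
import asyncio
import contextvars

# Hypothetical ContextVar; the framework injects the real one internally.
current_profile: contextvars.ContextVar[str] = contextvars.ContextVar(
    "current_profile", default="default"
)

async def browser_tool_call() -> None:
    # Browser tools read the profile from the ContextVar, never from arguments.
    print(f"using browser profile {current_profile.get()!r}")

async def run_subagent(node_id: str, instance: int) -> None:
    # Each delegated subagent runs in its own asyncio task, so this set()
    # is invisible to sibling subagents.
    current_profile.set(f"{node_id}-{instance}")
    await browser_tool_call()

async def main() -> None:
    # Batched delegate_to_sub_agent calls run concurrently, one task each.
    await asyncio.gather(run_subagent("gcu-site-a", 0),
                         run_subagent("gcu-site-b", 0))

asyncio.run(main())
```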
### Example: three sites in parallel
```python
# Three distinct GCU nodes
gcu_site_a = NodeSpec(id="gcu-site-a", node_type="gcu", ...)
gcu_site_b = NodeSpec(id="gcu-site-b", node_type="gcu", ...)
gcu_site_c = NodeSpec(id="gcu-site-c", node_type="gcu", ...)
orchestrator = NodeSpec(
id="orchestrator",
node_type="event_loop",
sub_agents=["gcu-site-a", "gcu-site-b", "gcu-site-c"],
system_prompt="""\
Call all three subagents in a single response to run them in parallel:
delegate_to_sub_agent(agent_id="gcu-site-a", task="Scrape prices from site A")
delegate_to_sub_agent(agent_id="gcu-site-b", task="Scrape prices from site B")
delegate_to_sub_agent(agent_id="gcu-site-c", task="Scrape prices from site C")
""",
)
Connected via regular edges:
```
search-setup -> scan-profiles -> process-results
```
**Rules:**
- Use distinct node IDs for each concurrent task — sharing an ID shares the browser context.
- The GCU node prompts do not need to mention `profile=`; isolation is automatic.
- Cleanup is automatic at session end, but GCU nodes can call `browser_stop()` explicitly
if they want to release resources mid-run.
## Further detail
## GCU Anti-Patterns
- Using `browser_screenshot` to read text (use `browser_snapshot` instead; screenshots are for visual context only)
- Re-navigating after scrolling (resets scroll position)
- Attempting login on auth walls
- Forgetting `target_id` in multi-tab scenarios
- Putting browser tools directly on `event_loop` nodes instead of using GCU subagent pattern
- Making GCU nodes `client_facing=True` (they should be autonomous subagents)
For rich-text editor quirks (Lexical, Draft.js, ProseMirror), shadow-DOM shortcuts, `beforeunload` dialog neutralization, Trusted Types CSP on LinkedIn, keyboard shortcut dispatch, and per-site selector tables — **activate the `browser-automation` skill**. That skill has the full verified guidance and is refreshed against real production sites.
File diff suppressed because it is too large
@@ -22,10 +22,10 @@ def mock_mode():
@pytest_asyncio.fixture(scope="session")
async def runner(tmp_path_factory, mock_mode):
from framework.runner.runner import AgentRunner
from framework.loader.agent_loader import AgentLoader
storage = tmp_path_factory.mktemp("agent_storage")
r = AgentRunner.load(AGENT_PATH, mock_mode=mock_mode, storage_path=storage)
r = AgentLoader.load(AGENT_PATH, mock_mode=mock_mode, storage_path=storage)
r._setup()
yield r
await r.cleanup_async()
+30 -54
@@ -2,17 +2,22 @@
Command-line interface for Aden Hive.
Usage:
hive run exports/my-agent --input '{"key": "value"}'
hive info exports/my-agent
hive validate exports/my-agent
hive list exports/
hive shell exports/my-agent
hive serve Start the HTTP API server
hive open Start the server and open the dashboard
hive queen list List queen profiles
hive queen show <queen_id> Inspect a queen profile
hive queen sessions <queen_id> List a queen's sessions
hive colony list List colonies on disk
hive colony info <name> Inspect a colony
hive colony delete <name> Delete a colony
hive session list List live sessions (use --cold for on-disk)
hive session stop <session_id> Stop a live session
hive chat <session_id> "msg" Send a message to a live queen
Testing commands:
hive test-run <agent_path> --goal <goal_id>
hive test-debug <agent_path> <test_name>
hive test-list <agent_path>
hive test-stats <agent_path>
Subsystems:
hive skill ... Manage skills (~/.hive/skills/)
hive mcp ... Manage MCP servers
hive debugger LLM debug log viewer
"""
import argparse
@@ -20,86 +25,57 @@ import sys
from pathlib import Path
def _configure_paths():
"""Auto-configure sys.path so agents in exports/ are discoverable.
def _configure_paths() -> None:
"""Auto-configure sys.path so the framework is importable from any cwd.
Resolves the project root by walking up from this file (framework/cli.py lives
inside core/framework/) or from CWD, then adds the exports/ directory to sys.path
if it exists. This eliminates the need for manual PYTHONPATH configuration.
Walks up from this file to find the project root, then ensures
`core/` is on sys.path so `framework.*` imports resolve when the
package isn't installed via `pip install -e .`.
"""
# Strategy 1: resolve relative to this file (works when installed via pip install -e core/)
framework_dir = Path(__file__).resolve().parent # core/framework/
core_dir = framework_dir.parent # core/
project_root = core_dir.parent # project root
# Strategy 2: if project_root doesn't look right, fall back to CWD
if not (project_root / "exports").is_dir() and not (project_root / "core").is_dir():
if not (project_root / "core").is_dir():
project_root = Path.cwd()
# Add exports/ to sys.path so agents are importable as top-level packages
exports_dir = project_root / "exports"
if exports_dir.is_dir():
exports_str = str(exports_dir)
if exports_str not in sys.path:
sys.path.insert(0, exports_str)
# Add examples/templates/ to sys.path so template agents are importable
templates_dir = project_root / "examples" / "templates"
if templates_dir.is_dir():
templates_str = str(templates_dir)
if templates_str not in sys.path:
sys.path.insert(0, templates_str)
# Ensure core/ is also in sys.path (for non-editable-install scenarios)
core_str = str(project_root / "core")
if (project_root / "core").is_dir() and core_str not in sys.path:
sys.path.insert(0, core_str)
# Add core/framework/agents/ so framework agents are importable as top-level packages
framework_agents_dir = project_root / "core" / "framework" / "agents"
if framework_agents_dir.is_dir():
fa_str = str(framework_agents_dir)
if fa_str not in sys.path:
sys.path.insert(0, fa_str)
def main():
def main() -> None:
_configure_paths()
parser = argparse.ArgumentParser(
prog="hive",
description="Aden Hive - Build and run goal-driven agents",
description="Aden Hive — Queens, colonies, and live agent sessions",
)
parser.add_argument(
"--model",
default="claude-haiku-4-5-20251001",
help="Anthropic model to use",
help="Default LLM model (Anthropic ID)",
)
subparsers = parser.add_subparsers(dest="command", required=True)
# Register runner commands (run, info, validate, list, shell)
from framework.runner.cli import register_commands
# Core commands: serve, open, queen, colony, session, chat
from framework.loader.cli import register_commands
register_commands(subparsers)
# Register testing commands (test-run, test-debug, test-list, test-stats)
from framework.testing.cli import register_testing_commands
register_testing_commands(subparsers)
# Register skill commands (skill list, skill trust, ...)
# Skill management (~/.hive/skills/)
from framework.skills.cli import register_skill_commands
register_skill_commands(subparsers)
# Register debugger commands (debugger)
# LLM debug log viewer
from framework.debugger.cli import register_debugger_commands
register_debugger_commands(subparsers)
# Register MCP registry commands (mcp install, mcp add, ...)
from framework.runner.mcp_registry_cli import register_mcp_commands
# MCP server registry
from framework.loader.mcp_registry_cli import register_mcp_commands
register_mcp_commands(subparsers)
+116 -17
@@ -12,13 +12,47 @@ from dataclasses import dataclass, field
from pathlib import Path
from typing import Any
from framework.graph.edge import DEFAULT_MAX_TOKENS
DEFAULT_MAX_TOKENS = 8192
# ---------------------------------------------------------------------------
# Hive home directory structure
# ---------------------------------------------------------------------------
HIVE_HOME = Path.home() / ".hive"
QUEENS_DIR = HIVE_HOME / "agents" / "queens"
COLONIES_DIR = HIVE_HOME / "colonies"
MEMORIES_DIR = HIVE_HOME / "memories"
def queen_dir(queen_name: str = "default") -> Path:
"""Return the storage directory for a named queen agent."""
return QUEENS_DIR / queen_name
def colony_dir(colony_name: str) -> Path:
"""Return the directory for a named colony."""
return COLONIES_DIR / colony_name
def memory_dir(scope: str, name: str | None = None) -> Path:
"""Return memory dir for a scope.
Examples::
memory_dir("global") -> ~/.hive/memories/global
memory_dir("colonies", "my_agent") -> ~/.hive/memories/colonies/my_agent
memory_dir("agents/queens", "default")-> ~/.hive/memories/agents/queens/default
memory_dir("agents", "worker_name") -> ~/.hive/memories/agents/worker_name
"""
base = MEMORIES_DIR / scope
return base / name if name else base
# ---------------------------------------------------------------------------
# Low-level config file access
# ---------------------------------------------------------------------------
HIVE_CONFIG_FILE = Path.home() / ".hive" / "configuration.json"
HIVE_CONFIG_FILE = HIVE_HOME / "configuration.json"
# Hive LLM router endpoint (Anthropic-compatible).
# litellm's Anthropic handler appends /v1/messages, so this is just the base host.
@@ -42,6 +76,48 @@ def get_hive_config() -> dict[str, Any]:
return {}
# ---------------------------------------------------------------------------
# Credential store helpers (for BYOK keys)
# ---------------------------------------------------------------------------
# Provider name → credential store ID mapping
_PROVIDER_CRED_MAP: dict[str, str] = {
"anthropic": "anthropic",
"openai": "openai",
"gemini": "gemini",
"google": "gemini",
"minimax": "minimax",
"groq": "groq",
"cerebras": "cerebras",
"openrouter": "openrouter",
"mistral": "mistral",
"together": "together",
"together_ai": "together",
"deepseek": "deepseek",
"kimi": "kimi",
"hive": "hive",
}
def _get_api_key_from_credential_store(provider: str) -> str | None:
"""Look up a BYOK API key from the encrypted credential store.
Returns None if no key is found or the credential store is unavailable.
"""
if not os.environ.get("HIVE_CREDENTIAL_KEY"):
return None
cred_id = _PROVIDER_CRED_MAP.get(provider.lower())
if not cred_id:
return None
try:
from framework.credentials import CredentialStore
store = CredentialStore.with_encrypted_storage()
return store.get(cred_id)
except Exception:
return None
# ---------------------------------------------------------------------------
# Derived helpers
# ---------------------------------------------------------------------------
@@ -88,7 +164,7 @@ def get_worker_api_key() -> str | None:
# Worker-specific subscription / env var
if worker_llm.get("use_claude_code_subscription"):
try:
from framework.runner.runner import get_claude_code_token
from framework.loader.agent_loader import get_claude_code_token
token = get_claude_code_token()
if token:
@@ -98,7 +174,7 @@ def get_worker_api_key() -> str | None:
if worker_llm.get("use_codex_subscription"):
try:
from framework.runner.runner import get_codex_token
from framework.loader.agent_loader import get_codex_token
token = get_codex_token()
if token:
@@ -108,7 +184,7 @@ def get_worker_api_key() -> str | None:
if worker_llm.get("use_kimi_code_subscription"):
try:
from framework.runner.runner import get_kimi_code_token
from framework.loader.agent_loader import get_kimi_code_token
token = get_kimi_code_token()
if token:
@@ -118,7 +194,7 @@ def get_worker_api_key() -> str | None:
if worker_llm.get("use_antigravity_subscription"):
try:
from framework.runner.runner import get_antigravity_token
from framework.loader.agent_loader import get_antigravity_token
token = get_antigravity_token()
if token:
@@ -174,7 +250,7 @@ def get_worker_llm_extra_kwargs() -> dict[str, Any]:
"User-Agent": "CodexBar",
}
try:
from framework.runner.runner import get_codex_account_id
from framework.loader.agent_loader import get_codex_account_id
account_id = get_codex_account_id()
if account_id:
@@ -221,22 +297,43 @@ def get_max_context_tokens() -> int:
return get_hive_config().get("llm", {}).get("max_context_tokens", DEFAULT_MAX_CONTEXT_TOKENS)
def get_api_keys() -> list[str] | None:
"""Return a list of API keys if ``api_keys`` is configured, else ``None``.
This supports key-pool rotation: configure multiple keys in
``~/.hive/configuration.json`` under ``llm.api_keys`` and the
:class:`~framework.llm.key_pool.KeyPool` will rotate through them.
"""
llm = get_hive_config().get("llm", {})
keys = llm.get("api_keys")
if keys and isinstance(keys, list) and len(keys) > 0:
return [k for k in keys if k] # filter empties
return None
def get_api_key() -> str | None:
"""Return the API key, supporting env var, Claude Code subscription, Codex, and ZAI Code.
Priority:
0. Explicit key pool (``api_keys`` list) -- returns first key for
single-key callers; full pool available via :func:`get_api_keys`.
1. Claude Code subscription (``use_claude_code_subscription: true``)
reads the OAuth token from ``~/.claude/.credentials.json``.
2. Codex subscription (``use_codex_subscription: true``)
reads the OAuth token from macOS Keychain or ``~/.codex/auth.json``.
3. Environment variable named in ``api_key_env_var``.
"""
# If an explicit key pool is configured, use the first key.
pool_keys = get_api_keys()
if pool_keys:
return pool_keys[0]
llm = get_hive_config().get("llm", {})
# Claude Code subscription: read OAuth token directly
if llm.get("use_claude_code_subscription"):
try:
from framework.runner.runner import get_claude_code_token
from framework.loader.agent_loader import get_claude_code_token
token = get_claude_code_token()
if token:
@@ -247,7 +344,7 @@ def get_api_key() -> str | None:
# Codex subscription: read OAuth token from Keychain / auth.json
if llm.get("use_codex_subscription"):
try:
from framework.runner.runner import get_codex_token
from framework.loader.agent_loader import get_codex_token
token = get_codex_token()
if token:
@@ -258,7 +355,7 @@ def get_api_key() -> str | None:
# Kimi Code subscription: read API key from ~/.kimi/config.toml
if llm.get("use_kimi_code_subscription"):
try:
from framework.runner.runner import get_kimi_code_token
from framework.loader.agent_loader import get_kimi_code_token
token = get_kimi_code_token()
if token:
@@ -269,7 +366,7 @@ def get_api_key() -> str | None:
# Antigravity subscription: read OAuth token from accounts JSON
if llm.get("use_antigravity_subscription"):
try:
from framework.runner.runner import get_antigravity_token
from framework.loader.agent_loader import get_antigravity_token
token = get_antigravity_token()
if token:
@@ -280,8 +377,12 @@ def get_api_key() -> str | None:
# Standard env-var path (covers ZAI Code and all API-key providers)
api_key_env_var = llm.get("api_key_env_var")
if api_key_env_var:
return os.environ.get(api_key_env_var)
return None
key = os.environ.get(api_key_env_var)
if key:
return key
# Credential store fallback — BYOK keys stored via the UI
return _get_api_key_from_credential_store(llm.get("provider", ""))
# OAuth credentials for Antigravity are fetched from the opencode-antigravity-auth project.
@@ -304,9 +405,7 @@ def _fetch_antigravity_credentials() -> tuple[str | None, str | None]:
import urllib.request
try:
req = urllib.request.Request(
_ANTIGRAVITY_CREDENTIALS_URL, headers={"User-Agent": "Hive/1.0"}
)
req = urllib.request.Request(_ANTIGRAVITY_CREDENTIALS_URL, headers={"User-Agent": "Hive/1.0"})
with urllib.request.urlopen(req, timeout=10) as resp:
content = resp.read().decode("utf-8")
id_match = re.search(r'ANTIGRAVITY_CLIENT_ID\s*=\s*"([^"]+)"', content)
@@ -422,7 +521,7 @@ def get_llm_extra_kwargs() -> dict[str, Any]:
"User-Agent": "CodexBar",
}
try:
from framework.runner.runner import get_codex_account_id
from framework.loader.agent_loader import get_codex_account_id
account_id = get_codex_account_id()
if account_id:
+4
@@ -51,6 +51,7 @@ from .key_storage import (
from .models import (
CredentialDecryptionError,
CredentialError,
CredentialExpiredError,
CredentialKey,
CredentialKeyNotFoundError,
CredentialNotFoundError,
@@ -84,6 +85,7 @@ from .template import TemplateResolver
from .validation import (
CredentialStatus,
CredentialValidationResult,
compute_unavailable_tools,
ensure_credential_key_env,
validate_agent_credentials,
)
@@ -136,6 +138,7 @@ __all__ = [
"CredentialNotFoundError",
"CredentialKeyNotFoundError",
"CredentialRefreshError",
"CredentialExpiredError",
"CredentialValidationError",
"CredentialDecryptionError",
# Key storage (bootstrap credentials)
@@ -148,6 +151,7 @@ __all__ = [
# Validation
"ensure_credential_key_env",
"validate_agent_credentials",
"compute_unavailable_tools",
"CredentialStatus",
"CredentialValidationResult",
# Interactive setup
+2 -6
@@ -332,9 +332,7 @@ class AdenCredentialClient:
last_error = e
if attempt < self.config.retry_attempts - 1:
delay = self.config.retry_delay * (2**attempt)
logger.warning(
f"Aden request failed (attempt {attempt + 1}), retrying in {delay}s: {e}"
)
logger.warning(f"Aden request failed (attempt {attempt + 1}), retrying in {delay}s: {e}")
time.sleep(delay)
else:
raise AdenClientError(f"Failed to connect to Aden server: {e}") from e
@@ -347,9 +345,7 @@ class AdenCredentialClient:
):
raise
raise AdenClientError(
f"Request failed after {self.config.retry_attempts} attempts"
) from last_error
raise AdenClientError(f"Request failed after {self.config.retry_attempts} attempts") from last_error
def list_integrations(self) -> list[AdenIntegrationInfo]:
"""
+2 -6
@@ -192,9 +192,7 @@ class AdenSyncProvider(CredentialProvider):
f"Visit: {e.reauthorization_url or 'your Aden dashboard'}"
) from e
raise CredentialRefreshError(
f"Failed to refresh credential '{credential.id}': {e}"
) from e
raise CredentialRefreshError(f"Failed to refresh credential '{credential.id}': {e}") from e
except AdenClientError as e:
logger.error(f"Aden client error for '{credential.id}': {e}")
@@ -206,9 +204,7 @@ class AdenSyncProvider(CredentialProvider):
logger.warning(f"Aden unavailable, using cached token for '{credential.id}'")
return credential
raise CredentialRefreshError(
f"Aden server unavailable and token expired for '{credential.id}'"
) from e
raise CredentialRefreshError(f"Aden server unavailable and token expired for '{credential.id}'") from e
def validate(self, credential: CredentialObject) -> bool:
"""
+14 -3
@@ -168,9 +168,7 @@ class AdenCachedStorage(CredentialStorage):
if rid != credential_id:
result = self._load_by_id(rid)
if result is not None:
logger.info(
f"Loaded credential '{credential_id}' via provider index (id='{rid}')"
)
logger.info(f"Loaded credential '{credential_id}' via provider index (id='{rid}')")
return result
# Direct lookup (exact credential_id match)
@@ -199,6 +197,19 @@ class AdenCachedStorage(CredentialStorage):
if local_cred is None:
return None
# Skip Aden fetch for credentials not managed by Aden (BYOK credentials).
# Only OAuth credentials synced from Aden are in the provider index.
# BYOK credentials like anthropic, brave_search are local-only.
# Also check the _aden_managed flag on the credential itself.
is_aden_managed = (
credential_id in self._provider_index
or any(credential_id in ids for ids in self._provider_index.values())
or (local_cred is not None and local_cred.keys.get("_aden_managed") is not None)
)
if not is_aden_managed:
logger.debug(f"Credential '{credential_id}' is local-only, skipping Aden refresh")
return local_cred
# Try to refresh stale local credential from Aden
try:
aden_cred = self._aden_provider.fetch_from_aden(credential_id)
@@ -493,9 +493,7 @@ class TestAdenCachedStorage:
assert loaded is not None
assert loaded.keys["access_token"].value.get_secret_value() == "cached-token"
def test_load_from_aden_when_stale(
self, cached_storage, local_storage, provider, mock_client, aden_response
):
def test_load_from_aden_when_stale(self, cached_storage, local_storage, provider, mock_client, aden_response):
"""Test load fetches from Aden when cache is stale."""
# Create stale cached credential
cred = CredentialObject(
@@ -521,9 +519,7 @@ class TestAdenCachedStorage:
assert loaded is not None
assert loaded.keys["access_token"].value.get_secret_value() == "test-access-token"
def test_load_falls_back_to_stale_when_aden_fails(
self, cached_storage, local_storage, provider, mock_client
):
def test_load_falls_back_to_stale_when_aden_fails(self, cached_storage, local_storage, provider, mock_client):
"""Test load falls back to stale cache when Aden fails."""
# Create stale cached credential
cred = CredentialObject(
+23
@@ -333,6 +333,29 @@ class CredentialRefreshError(CredentialError):
pass
class CredentialExpiredError(CredentialError):
"""Raised when a credential is expired and refresh has failed.
Carries the metadata an agent (or the tool runner) needs to surface a
reauth request to the user without having to look anything else up.
"""
def __init__(
self,
credential_id: str,
message: str,
*,
provider: str | None = None,
alias: str | None = None,
help_url: str | None = None,
):
self.credential_id = credential_id
self.provider = provider
self.alias = alias
self.help_url = help_url
super().__init__(message)
class CredentialValidationError(CredentialError):
"""Raised when credential validation fails."""
@@ -95,9 +95,7 @@ class BaseOAuth2Provider(CredentialProvider):
self._client = httpx.Client(timeout=self.config.request_timeout)
except ImportError as e:
raise ImportError(
"OAuth2 provider requires 'httpx'. Install with: uv pip install httpx"
) from e
raise ImportError("OAuth2 provider requires 'httpx'. Install with: uv pip install httpx") from e
return self._client
def _close_client(self) -> None:
@@ -311,8 +309,7 @@ class BaseOAuth2Provider(CredentialProvider):
except OAuth2Error as e:
if e.error == "invalid_grant":
raise CredentialRefreshError(
f"Refresh token for '{credential.id}' is invalid or revoked. "
"Re-authorization required."
f"Refresh token for '{credential.id}' is invalid or revoked. Re-authorization required."
) from e
raise CredentialRefreshError(f"Failed to refresh '{credential.id}': {e}") from e
@@ -422,9 +419,7 @@ class BaseOAuth2Provider(CredentialProvider):
if response.status_code != 200 or "error" in response_data:
error = response_data.get("error", "unknown_error")
description = response_data.get("error_description", response.text)
raise OAuth2Error(
error=error, description=description, status_code=response.status_code
)
raise OAuth2Error(error=error, description=description, status_code=response.status_code)
return OAuth2Token.from_token_response(response_data)
@@ -158,9 +158,7 @@ class TokenLifecycleManager:
"""
# Run in executor to avoid blocking
loop = asyncio.get_event_loop()
token = await loop.run_in_executor(
None, lambda: self.provider.client_credentials_grant(scopes=scopes)
)
token = await loop.run_in_executor(None, lambda: self.provider.client_credentials_grant(scopes=scopes))
self._save_token_to_store(token)
self._cached_token = token
@@ -100,9 +100,7 @@ class ZohoOAuth2Provider(BaseOAuth2Provider):
)
super().__init__(config, provider_id="zoho_crm_oauth2")
self._accounts_domain = base
self._api_domain = (
api_domain or os.getenv("ZOHO_API_DOMAIN", "https://www.zohoapis.com")
).rstrip("/")
self._api_domain = (api_domain or os.getenv("ZOHO_API_DOMAIN", "https://www.zohoapis.com")).rstrip("/")
@property
def supported_types(self) -> list[CredentialType]:
+23 -13
@@ -36,7 +36,7 @@ from pathlib import Path
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from framework.graph import NodeSpec
from framework.orchestrator import NodeSpec
logger = logging.getLogger(__name__)
@@ -268,9 +268,7 @@ class CredentialSetupSession:
self._print(f"{Colors.YELLOW}Initializing credential store...{Colors.NC}")
try:
generate_and_save_credential_key()
self._print(
f"{Colors.GREEN}✓ Encryption key saved to ~/.hive/secrets/credential_key{Colors.NC}"
)
self._print(f"{Colors.GREEN}✓ Encryption key saved to ~/.hive/secrets/credential_key{Colors.NC}")
return True
except Exception as e:
self._print(f"{Colors.RED}Failed to initialize credential store: {e}{Colors.NC}")
@@ -449,9 +447,7 @@ class CredentialSetupSession:
logger.warning("Unexpected error exporting credential to env", exc_info=True)
return True
else:
self._print(
f"{Colors.YELLOW}{cred.credential_name} not found in Aden account.{Colors.NC}"
)
self._print(f"{Colors.YELLOW}{cred.credential_name} not found in Aden account.{Colors.NC}")
self._print("Please connect this integration on https://hive.adenhq.com first.")
return False
except Exception as e:
@@ -533,7 +529,9 @@ class CredentialSetupSession:
def load_agent_nodes(agent_path: str | Path) -> list:
"""Load NodeSpec list from an agent's agent.py or agent.json.
"""Load NodeSpec list from an agent directory.
Checks agent.json (declarative) first, then agent.py (legacy).
Args:
agent_path: Path to agent directory.
@@ -542,16 +540,28 @@ def load_agent_nodes(agent_path: str | Path) -> list:
List of NodeSpec objects (empty list if agent can't be loaded).
"""
agent_path = Path(agent_path)
agent_json_file = agent_path / "agent.json"
agent_py = agent_path / "agent.py"
agent_json = agent_path / "agent.json"
if agent_py.exists():
if agent_json_file.exists():
return _load_nodes_from_json_declarative(agent_json_file)
elif agent_py.exists():
return _load_nodes_from_python_agent(agent_path)
elif agent_json.exists():
return _load_nodes_from_json_agent(agent_json)
return []
def _load_nodes_from_json_declarative(agent_json: Path) -> list:
"""Load nodes from a declarative JSON agent."""
try:
from framework.loader.agent_loader import load_agent_config
data = json.loads(agent_json.read_text(encoding="utf-8"))
graph, _ = load_agent_config(data)
return list(graph.nodes)
except Exception:
return []
def _load_nodes_from_python_agent(agent_path: Path) -> list:
"""Load nodes from a Python-based agent."""
import importlib.util
@@ -590,7 +600,7 @@ def _load_nodes_from_json_agent(agent_json: Path) -> list:
with open(agent_json, encoding="utf-8-sig") as f:
data = json.load(f)
from framework.graph import NodeSpec
from framework.orchestrator import NodeSpec
nodes_data = data.get("graph", {}).get("nodes", [])
nodes = []
+156 -36
@@ -136,8 +136,7 @@ class EncryptedFileStorage(CredentialStorage):
from cryptography.fernet import Fernet
except ImportError as e:
raise ImportError(
"Encrypted storage requires 'cryptography'. "
"Install with: uv pip install cryptography"
"Encrypted storage requires 'cryptography'. Install with: uv pip install cryptography"
) from e
self.base_path = Path(base_path or self.DEFAULT_PATH).expanduser()
@@ -161,6 +160,14 @@ class EncryptedFileStorage(CredentialStorage):
self._fernet = Fernet(self._key)
# Rebuild the metadata index from disk if it's missing or older than
# the current index schema. The index is a developer-readable JSON
# snapshot of the encrypted store; the .enc files remain authoritative.
try:
self._maybe_rebuild_index()
except Exception:
logger.debug("Initial index rebuild failed (non-fatal)", exc_info=True)
def _ensure_dirs(self) -> None:
"""Create directory structure."""
(self.base_path / "credentials").mkdir(parents=True, exist_ok=True)
@@ -186,8 +193,8 @@ class EncryptedFileStorage(CredentialStorage):
with open(cred_path, "wb") as f:
f.write(encrypted)
# Update index
self._update_index(credential.id, "save", credential.credential_type.value)
# Update developer-readable index
self._index_upsert(credential)
logger.debug(f"Saved encrypted credential '{credential.id}'")
def load(self, credential_id: str) -> CredentialObject | None:
@@ -205,9 +212,7 @@ class EncryptedFileStorage(CredentialStorage):
json_bytes = self._fernet.decrypt(encrypted)
data = json.loads(json_bytes.decode("utf-8-sig"))
except Exception as e:
raise CredentialDecryptionError(
f"Failed to decrypt credential '{credential_id}': {e}"
) from e
raise CredentialDecryptionError(f"Failed to decrypt credential '{credential_id}': {e}") from e
# Deserialize
return self._deserialize_credential(data)
@@ -217,7 +222,7 @@ class EncryptedFileStorage(CredentialStorage):
cred_path = self._cred_path(credential_id)
if cred_path.exists():
cred_path.unlink()
self._update_index(credential_id, "delete")
self._index_remove(credential_id)
logger.debug(f"Deleted credential '{credential_id}'")
return True
return False
@@ -258,33 +263,151 @@ class EncryptedFileStorage(CredentialStorage):
return CredentialObject.model_validate(data)
def _update_index(
self,
credential_id: str,
operation: str,
credential_type: str | None = None,
) -> None:
"""Update the metadata index."""
index_path = self.base_path / "metadata" / "index.json"
# ------------------------------------------------------------------
# Developer-readable metadata index
#
# The index lives at ``<base_path>/metadata/index.json`` and mirrors what
# is in the encrypted store at a glance: credential id, provider, alias,
# identity, key names, timestamps, and earliest expiry. It contains NO
# secret values and is safe to share when filing a bug report. The .enc
# files remain authoritative — the index is purely for human inspection
# and for cheap ``list_all()`` enumeration.
#
# Schema version is bumped whenever the entry shape changes; the store
# rebuilds the index from the encrypted files on load when the on-disk
# version is older.
# ------------------------------------------------------------------
if index_path.exists():
with open(index_path, encoding="utf-8-sig") as f:
index = json.load(f)
else:
index = {"credentials": {}, "version": "1.0"}
INDEX_VERSION = "2.0"
INDEX_INTERNAL_KEY_NAMES = ("_alias", "_integration_type")
if operation == "save":
index["credentials"][credential_id] = {
"updated_at": datetime.now(UTC).isoformat(),
"type": credential_type,
}
elif operation == "delete":
index["credentials"].pop(credential_id, None)
def _index_path(self) -> Path:
return self.base_path / "metadata" / "index.json"
index["last_modified"] = datetime.now(UTC).isoformat()
def _read_index(self) -> dict[str, Any]:
"""Read the index from disk; return an empty skeleton if missing."""
path = self._index_path()
if not path.exists():
return {"version": self.INDEX_VERSION, "credentials": {}}
try:
with open(path, encoding="utf-8-sig") as f:
return json.load(f)
except Exception:
logger.debug("Failed to read credential index, starting fresh", exc_info=True)
return {"version": self.INDEX_VERSION, "credentials": {}}
with open(index_path, "w", encoding="utf-8") as f:
json.dump(index, f, indent=2)
def _write_index(self, index: dict[str, Any]) -> None:
"""Write the index to disk with consistent envelope fields."""
index["version"] = self.INDEX_VERSION
index["store_path"] = str(self.base_path)
index["generated_at"] = datetime.now(UTC).isoformat()
path = self._index_path()
path.parent.mkdir(parents=True, exist_ok=True)
with open(path, "w", encoding="utf-8") as f:
json.dump(index, f, indent=2, sort_keys=False, default=str)
def _index_entry_for(self, credential: CredentialObject) -> dict[str, Any]:
"""Build a single index entry from a CredentialObject (no secrets)."""
# Visible key names: drop internal markers like _alias / _integration_type
# / _identity_* so the entry shows what's actually a credential key.
visible_keys = [
name
for name in credential.keys.keys()
if name not in self.INDEX_INTERNAL_KEY_NAMES and not name.startswith("_identity_")
]
# Earliest expiry across all keys (most likely the access_token).
earliest_expiry: datetime | None = None
for key in credential.keys.values():
if key.expires_at is None:
continue
if earliest_expiry is None or key.expires_at < earliest_expiry:
earliest_expiry = key.expires_at
return {
"credential_type": credential.credential_type.value,
"provider": credential.provider_type,
"alias": credential.alias,
"identity": credential.identity.to_dict(),
"key_names": sorted(visible_keys),
"created_at": credential.created_at.isoformat() if credential.created_at else None,
"updated_at": credential.updated_at.isoformat() if credential.updated_at else None,
"last_refreshed": (credential.last_refreshed.isoformat() if credential.last_refreshed else None),
"expires_at": earliest_expiry.isoformat() if earliest_expiry else None,
"auto_refresh": credential.auto_refresh,
"tags": list(credential.tags),
}
def _index_upsert(self, credential: CredentialObject) -> None:
"""Insert or update one credential entry in the index."""
try:
index = self._read_index()
if index.get("version") != self.INDEX_VERSION:
# Old schema — rebuild from disk so we don't blend formats.
self._rebuild_index()
return
credentials = index.setdefault("credentials", {})
credentials[credential.id] = self._index_entry_for(credential)
self._write_index(index)
except Exception:
logger.debug("Index upsert failed (non-fatal)", exc_info=True)
def _index_remove(self, credential_id: str) -> None:
"""Remove one credential entry from the index."""
try:
index = self._read_index()
if index.get("version") != self.INDEX_VERSION:
self._rebuild_index()
return
credentials = index.setdefault("credentials", {})
credentials.pop(credential_id, None)
self._write_index(index)
except Exception:
logger.debug("Index remove failed (non-fatal)", exc_info=True)
def _maybe_rebuild_index(self) -> None:
"""Rebuild the index if it's missing, malformed, or on an old schema.
Called once at startup. The check is cheap: read the version field
and bail out if it matches. Encrypted files remain authoritative; this
only refreshes the developer-facing snapshot.
"""
path = self._index_path()
if path.exists():
try:
with open(path, encoding="utf-8-sig") as f:
index = json.load(f)
if index.get("version") == self.INDEX_VERSION:
return
except Exception:
pass # fall through to rebuild
self._rebuild_index()
def _rebuild_index(self) -> None:
"""Walk the encrypted credentials directory and rewrite a fresh index."""
cred_dir = self.base_path / "credentials"
if not cred_dir.is_dir():
return
entries: dict[str, Any] = {}
for cred_file in sorted(cred_dir.glob("*.enc")):
credential_id = cred_file.stem
try:
cred = self.load(credential_id)
except Exception:
logger.debug(
"Failed to load %s during index rebuild — skipping",
credential_id,
exc_info=True,
)
continue
if cred is None:
continue
entries[cred.id] = self._index_entry_for(cred)
index = {"credentials": entries}
self._write_index(index)
logger.info("Rebuilt credential index with %d entries", len(entries))
class EnvVarStorage(CredentialStorage):
@@ -351,8 +474,7 @@ class EnvVarStorage(CredentialStorage):
def save(self, credential: CredentialObject) -> None:
"""Cannot save to environment variables at runtime."""
raise NotImplementedError(
"EnvVarStorage is read-only. Set environment variables "
"externally or use EncryptedFileStorage."
"EnvVarStorage is read-only. Set environment variables externally or use EncryptedFileStorage."
)
def load(self, credential_id: str) -> CredentialObject | None:
@@ -372,9 +494,7 @@ class EnvVarStorage(CredentialStorage):
def delete(self, credential_id: str) -> bool:
"""Cannot delete environment variables at runtime."""
raise NotImplementedError(
"EnvVarStorage is read-only. Unset environment variables externally."
)
raise NotImplementedError("EnvVarStorage is read-only. Unset environment variables externally.")
def list_all(self) -> list[str]:
"""List credentials that are available in environment."""
+52 -11
@@ -19,6 +19,7 @@ from typing import Any
from pydantic import SecretStr
from .models import (
CredentialExpiredError,
CredentialKey,
CredentialObject,
CredentialRefreshError,
@@ -123,9 +124,7 @@ class CredentialStore:
"""
return self._providers.get(provider_id)
def get_provider_for_credential(
self, credential: CredentialObject
) -> CredentialProvider | None:
def get_provider_for_credential(self, credential: CredentialObject) -> CredentialProvider | None:
"""
Get the appropriate provider for a credential.
@@ -177,6 +176,8 @@ class CredentialStore:
self,
credential_id: str,
refresh_if_needed: bool = True,
*,
raise_on_refresh_failure: bool = False,
) -> CredentialObject | None:
"""
Get a credential by ID.
@@ -184,6 +185,11 @@ class CredentialStore:
Args:
credential_id: The credential identifier
refresh_if_needed: If True, refresh expired credentials
raise_on_refresh_failure: If True, raise ``CredentialExpiredError``
when refresh fails instead of silently returning the stale
credential. Tool-execution call sites should pass True so the
agent gets a structured "reauth needed" signal rather than a
later 401 from the provider.
Returns:
CredentialObject or None if not found
@@ -193,7 +199,7 @@ class CredentialStore:
cached = self._get_from_cache(credential_id)
if cached is not None:
if refresh_if_needed and self._should_refresh(cached):
return self._refresh_credential(cached)
return self._refresh_credential(cached, raise_on_failure=raise_on_refresh_failure)
return cached
# Load from storage
@@ -203,30 +209,42 @@ class CredentialStore:
# Refresh if needed
if refresh_if_needed and self._should_refresh(credential):
credential = self._refresh_credential(credential)
credential = self._refresh_credential(credential, raise_on_failure=raise_on_refresh_failure)
# Cache
self._add_to_cache(credential)
return credential
def get_key(self, credential_id: str, key_name: str) -> str | None:
def get_key(
self,
credential_id: str,
key_name: str,
*,
raise_on_refresh_failure: bool = False,
) -> str | None:
"""
Convenience method to get a specific key value.
Args:
credential_id: The credential identifier
key_name: The key within the credential
raise_on_refresh_failure: See ``get_credential``.
Returns:
The key value or None if not found
"""
credential = self.get_credential(credential_id)
credential = self.get_credential(credential_id, raise_on_refresh_failure=raise_on_refresh_failure)
if credential is None:
return None
return credential.get_key(key_name)
def get(self, credential_id: str) -> str | None:
def get(
self,
credential_id: str,
*,
raise_on_refresh_failure: bool = False,
) -> str | None:
"""
Legacy compatibility: get the primary key value.
@@ -235,11 +253,12 @@ class CredentialStore:
Args:
credential_id: The credential identifier
raise_on_refresh_failure: See ``get_credential``.
Returns:
The primary key value or None
"""
credential = self.get_credential(credential_id)
credential = self.get_credential(credential_id, raise_on_refresh_failure=raise_on_refresh_failure)
if credential is None:
return None
return credential.get_default_key()
@@ -510,8 +529,20 @@ class CredentialStore:
return provider.should_refresh(credential)
def _refresh_credential(self, credential: CredentialObject) -> CredentialObject:
"""Refresh a credential using its provider."""
def _refresh_credential(
self,
credential: CredentialObject,
*,
raise_on_failure: bool = False,
) -> CredentialObject:
"""Refresh a credential using its provider.
When ``raise_on_failure`` is True, a refresh failure raises
``CredentialExpiredError`` carrying provider/alias/help_url metadata
for the caller (typically the tool runner) to surface a reauth
request. Otherwise, the stale credential is returned to preserve
legacy best-effort behavior.
"""
provider = self.get_provider_for_credential(credential)
if provider is None:
logger.warning(f"No provider found for credential '{credential.id}'")
@@ -530,6 +561,16 @@ class CredentialStore:
except CredentialRefreshError as e:
logger.error(f"Failed to refresh credential '{credential.id}': {e}")
if raise_on_failure:
raise CredentialExpiredError(
credential_id=credential.id,
message=(
f"OAuth token for '{credential.id}' is expired and "
f"refresh failed: {e}. Reauthorization required."
),
provider=credential.provider_type,
alias=credential.alias,
) from e
return credential
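A sketch of the tool-runner call site this is designed for; `report_reauth_needed` is a hypothetical helper, and the error is assumed to expose the provider/alias metadata it was constructed with:
try:
    token = store.get_key("gmail_oauth", "access_token", raise_on_refresh_failure=True)
except CredentialExpiredError as exc:
    # Structured "reauth needed" signal instead of a later 401 from the provider.
    report_reauth_needed(provider=exc.provider, alias=exc.alias)  # hypothetical helper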
def refresh_credential(self, credential_id: str) -> CredentialObject | None:
+2 -6
@@ -88,9 +88,7 @@ class TemplateResolver:
if key_name:
value = credential.get_key(key_name)
if value is None:
raise CredentialKeyNotFoundError(
f"Key '{key_name}' not found in credential '{cred_id}'"
)
raise CredentialKeyNotFoundError(f"Key '{key_name}' not found in credential '{cred_id}'")
else:
# Use default key
value = credential.get_default_key()
@@ -126,9 +124,7 @@ class TemplateResolver:
... })
{"Authorization": "Bearer ghp_xxx", "X-API-Key": "BSAKxxx"}
"""
return {
key: self.resolve(value, fail_on_missing) for key, value in header_templates.items()
}
return {key: self.resolve(value, fail_on_missing) for key, value in header_templates.items()}
def resolve_params(
self,
@@ -130,9 +130,7 @@ class TestCredentialObject:
# With access_token
cred2 = CredentialObject(
id="test",
keys={
"access_token": CredentialKey(name="access_token", value=SecretStr("token-value"))
},
keys={"access_token": CredentialKey(name="access_token", value=SecretStr("token-value"))},
)
assert cred2.get_default_key() == "token-value"
@@ -297,9 +295,7 @@ class TestEncryptedFileStorage:
key = Fernet.generate_key().decode()
with patch.dict(os.environ, {"HIVE_CREDENTIAL_KEY": key}):
storage = EncryptedFileStorage(temp_dir)
cred = CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))}
)
cred = CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))})
storage.save(cred)
# Create new storage instance with same key
@@ -330,18 +326,10 @@ class TestCompositeStorage:
def test_read_from_primary(self):
"""Test reading from primary storage."""
primary = InMemoryStorage()
primary.save(
CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("primary"))}
)
)
primary.save(CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("primary"))}))
fallback = InMemoryStorage()
fallback.save(
CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("fallback"))}
)
)
fallback.save(CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("fallback"))}))
storage = CompositeStorage(primary, [fallback])
cred = storage.load("test")
@@ -353,11 +341,7 @@ class TestCompositeStorage:
"""Test fallback when credential not in primary."""
primary = InMemoryStorage()
fallback = InMemoryStorage()
fallback.save(
CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("fallback"))}
)
)
fallback.save(CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("fallback"))}))
storage = CompositeStorage(primary, [fallback])
cred = storage.load("test")
@@ -393,9 +377,7 @@ class TestStaticProvider:
def test_refresh_returns_unchanged(self):
"""Test that refresh returns credential unchanged."""
provider = StaticProvider()
cred = CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))}
)
cred = CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))})
refreshed = provider.refresh(cred)
assert refreshed.get_key("k") == "v"
@@ -403,9 +385,7 @@ class TestStaticProvider:
def test_validate_with_keys(self):
"""Test validation with keys present."""
provider = StaticProvider()
cred = CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))}
)
cred = CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))})
assert provider.validate(cred)
@@ -606,9 +586,7 @@ class TestCredentialStore:
storage = InMemoryStorage()
store = CredentialStore(storage=storage, cache_ttl_seconds=60)
storage.save(
CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))})
)
storage.save(CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))}))
# First load
store.get_credential("test")
@@ -686,9 +664,7 @@ class TestOAuth2Module:
from core.framework.credentials.oauth2 import OAuth2Config, TokenPlacement
# Valid config
config = OAuth2Config(
token_url="https://example.com/token", client_id="id", client_secret="secret"
)
config = OAuth2Config(token_url="https://example.com/token", client_id="id", client_secret="secret")
assert config.token_url == "https://example.com/token"
# Missing token_url
+44 -20
@@ -160,15 +160,9 @@ class CredentialValidationResult:
if aden_nc:
if missing or invalid:
lines.append("")
lines.append(
"Aden integrations not connected "
"(ADEN_API_KEY is set but OAuth tokens unavailable):\n"
)
lines.append("Aden integrations not connected (ADEN_API_KEY is set but OAuth tokens unavailable):\n")
for c in aden_nc:
lines.append(
f" {c.env_var} for {_label(c)}"
f"\n Connect this integration at hive.adenhq.com first."
)
lines.append(f" {c.env_var} for {_label(c)}\n Connect this integration at hive.adenhq.com first.")
lines.append("\nIf you've already set up credentials, restart your terminal to load them.")
return "\n".join(lines)
@@ -236,6 +230,45 @@ def _presync_aden_tokens(credential_specs: dict, *, force: bool = False) -> None
)
def compute_unavailable_tools(nodes: list) -> tuple[set[str], list[str]]:
"""Return (tool_names_to_drop, human_messages).
Runs credential validation *without* raising, collects every tool
bound to a failed credential (missing / invalid / Aden-not-connected
and no alternative provider available), and returns the set of tool
names that should be silently dropped from the worker's effective
tool list.
Use this at every worker-spawn preflight so missing credentials
filter tools out of the graph instead of hard-failing the whole
spawn. Only affects non-MCP tools; the MCP admission gate
(``_build_mcp_admission_gate``) already handles MCP tools at
registration time.
"""
try:
result = validate_agent_credentials(nodes, verify=False, raise_on_error=False)
except Exception as exc:
logger.debug("compute_unavailable_tools: validation raised: %s", exc)
return set(), []
drop: set[str] = set()
messages: list[str] = []
for status in result.failed:
if not status.tools:
continue
drop.update(status.tools)
reason = "missing"
if status.aden_not_connected:
reason = "aden_not_connected"
elif status.available and status.valid is False:
reason = "invalid"
messages.append(
f"{status.env_var} ({reason}) → drops {len(status.tools)} tool(s): "
f"{', '.join(status.tools[:6])}" + (f" +{len(status.tools) - 6} more" if len(status.tools) > 6 else "")
)
return drop, messages
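A sketch of the worker-spawn preflight this enables; `all_tools` and `logger` stand in for whatever the caller already has, and the logged example values are hypothetical:
drop, messages = compute_unavailable_tools(nodes)
effective_tools = [t for t in all_tools if t.name not in drop]
for msg in messages:
    logger.info("preflight: %s", msg)  # e.g. "GITHUB_TOKEN (missing) → drops 4 tool(s): ..."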
def validate_agent_credentials(
nodes: list,
quiet: bool = False,
@@ -292,9 +325,7 @@ def validate_agent_credentials(
if os.environ.get("ADEN_API_KEY"):
_presync_aden_tokens(CREDENTIAL_SPECS, force=force_refresh)
env_mapping = {
(spec.credential_id or name): spec.env_var for name, spec in CREDENTIAL_SPECS.items()
}
env_mapping = {(spec.credential_id or name): spec.env_var for name, spec in CREDENTIAL_SPECS.items()}
env_storage = EnvVarStorage(env_mapping=env_mapping)
if os.environ.get("HIVE_CREDENTIAL_KEY"):
storage = CompositeStorage(primary=env_storage, fallbacks=[EncryptedFileStorage()])
@@ -328,12 +359,7 @@ def validate_agent_credentials(
available = store.is_available(cred_id)
# Aden-not-connected: ADEN_API_KEY set, Aden-only cred, but integration missing
is_aden_nc = (
not available
and has_aden_key
and spec.aden_supported
and not spec.direct_api_key_supported
)
is_aden_nc = not available and has_aden_key and spec.aden_supported and not spec.direct_api_key_supported
status = CredentialStatus(
credential_name=cred_name,
@@ -451,9 +477,7 @@ def validate_agent_credentials(
identity_data = result.details.get("identity")
if identity_data and isinstance(identity_data, dict):
try:
cred_obj = store.get_credential(
status.credential_id, refresh_if_needed=False
)
cred_obj = store.get_credential(status.credential_id, refresh_if_needed=False)
if cred_obj:
cred_obj.set_identity(**identity_data)
store.save_credential(cred_obj)
-65
@@ -1,65 +0,0 @@
"""Graph structures: Goals, Nodes, Edges, and Execution."""
from framework.graph.context import GraphContext
from framework.graph.context_handoff import ContextHandoff, HandoffContext
from framework.graph.conversation import ConversationStore, Message, NodeConversation
from framework.graph.edge import DEFAULT_MAX_TOKENS, EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.event_loop_node import (
EventLoopNode,
JudgeProtocol,
JudgeVerdict,
LoopConfig,
OutputAccumulator,
)
from framework.graph.executor import GraphExecutor
from framework.graph.goal import Constraint, Goal, GoalStatus, SuccessCriterion
from framework.graph.node import NodeContext, NodeProtocol, NodeResult, NodeSpec
from framework.graph.worker_agent import (
Activation,
FanOutTag,
FanOutTracker,
WorkerAgent,
WorkerCompletion,
WorkerLifecycle,
)
__all__ = [
# Goal
"Goal",
"SuccessCriterion",
"Constraint",
"GoalStatus",
# Node
"NodeSpec",
"NodeContext",
"NodeResult",
"NodeProtocol",
# Edge
"EdgeSpec",
"EdgeCondition",
"GraphSpec",
"DEFAULT_MAX_TOKENS",
# Executor
"GraphExecutor",
# Conversation
"NodeConversation",
"ConversationStore",
"Message",
# Event Loop
"EventLoopNode",
"LoopConfig",
"OutputAccumulator",
"JudgeProtocol",
"JudgeVerdict",
# Context Handoff
"ContextHandoff",
"HandoffContext",
# Worker Agent
"WorkerAgent",
"WorkerLifecycle",
"WorkerCompletion",
"Activation",
"FanOutTag",
"FanOutTracker",
"GraphContext",
]
@@ -1,6 +0,0 @@
"""EventLoopNode subpackage — modular components of the event loop orchestrator.
All public symbols are re-exported by the parent ``event_loop_node.py`` for
backward compatibility. Internal consumers may import directly from these
submodules for clarity.
"""
@@ -1,381 +0,0 @@
"""Subagent execution for the event loop.
Handles the full subagent lifecycle: validation, context setup, tool filtering,
conversation store derivation, execution, and cleanup.
"""
from __future__ import annotations
import asyncio
import json
import logging
import time
from collections.abc import Awaitable, Callable
from pathlib import Path
from typing import TYPE_CHECKING, Any
from framework.graph.conversation import ConversationStore
from framework.graph.event_loop.judge_pipeline import SubagentJudge
from framework.graph.event_loop.types import LoopConfig, OutputAccumulator
from framework.graph.node import DataBuffer, NodeContext
from framework.llm.provider import ToolResult, ToolUse
from framework.runtime.event_bus import EventBus
if TYPE_CHECKING:
from framework.graph.event_loop_node import EventLoopNode
logger = logging.getLogger(__name__)
async def execute_subagent(
ctx: NodeContext,
agent_id: str,
task: str,
*,
config: LoopConfig,
event_loop_node_cls: type[EventLoopNode],
escalation_receiver_cls: Callable[[], Any],
accumulator: OutputAccumulator | None = None,
event_bus: EventBus | None = None,
tool_executor: Callable[[ToolUse], ToolResult | Awaitable[ToolResult]] | None = None,
conversation_store: ConversationStore | None = None,
subagent_instance_counter: dict[str, int] | None = None,
) -> ToolResult:
"""Execute a subagent and return the result as a ToolResult.
The subagent:
- Gets a fresh conversation with just the task
- Has read-only access to the parent's readable memory
- Cannot delegate to its own subagents (prevents recursion)
- Returns its output in structured JSON format
Args:
ctx: Parent node's context (for memory, tools, LLM access).
agent_id: The node ID of the subagent to invoke.
task: The task description to give the subagent.
accumulator: Parent's OutputAccumulator.
event_bus: EventBus for lifecycle events.
config: LoopConfig for iteration/tool limits.
tool_executor: Tool executor callable.
conversation_store: Parent conversation store (for deriving subagent store).
subagent_instance_counter: Mutable counter dict for unique subagent paths.
Returns:
ToolResult with structured JSON output.
"""
# Log subagent invocation start
logger.info(
"\n" + "=" * 60 + "\n"
"🤖 SUBAGENT INVOCATION\n"
"=" * 60 + "\n"
"Parent Node: %s\n"
"Subagent ID: %s\n"
"Task: %s\n" + "=" * 60,
ctx.node_id,
agent_id,
task[:500] + "..." if len(task) > 500 else task,
)
# 1. Validate agent exists in registry
if agent_id not in ctx.node_registry:
return ToolResult(
tool_use_id="",
content=json.dumps(
{
"message": f"Sub-agent '{agent_id}' not found in registry",
"data": None,
"metadata": {"agent_id": agent_id, "success": False, "error": "not_found"},
}
),
is_error=True,
)
subagent_spec = ctx.node_registry[agent_id]
# 2. Create read-only memory snapshot
parent_data = ctx.buffer.read_all()
# Merge in-flight outputs from the parent's accumulator.
if accumulator:
for key, value in accumulator.to_dict().items():
if key not in parent_data:
parent_data[key] = value
subagent_buffer = DataBuffer()
for key, value in parent_data.items():
subagent_buffer.write(key, value, validate=False)
read_keys = set(parent_data.keys()) | set(subagent_spec.input_keys or [])
scoped_buffer = subagent_buffer.with_permissions(
read_keys=list(read_keys),
write_keys=[], # Read-only!
)
# 2b. Compute instance counter early so the callback and child context
# share the same stable node_id for this subagent invocation.
if subagent_instance_counter is not None:
subagent_instance_counter.setdefault(agent_id, 0)
subagent_instance_counter[agent_id] += 1
subagent_instance = str(subagent_instance_counter[agent_id])
else:
subagent_instance = "1"
if subagent_instance == "1":
sa_node_id = f"{ctx.node_id}:subagent:{agent_id}"
else:
sa_node_id = f"{ctx.node_id}:subagent:{agent_id}:{subagent_instance}"
# 2c. Set up report callback (one-way channel to parent / event bus)
subagent_reports: list[dict] = []
async def _report_callback(
message: str,
data: dict | None = None,
*,
wait_for_response: bool = False,
) -> str | None:
subagent_reports.append({"message": message, "data": data, "timestamp": time.time()})
if event_bus:
await event_bus.emit_subagent_report(
stream_id=ctx.node_id,
node_id=sa_node_id,
subagent_id=agent_id,
message=message,
data=data,
execution_id=ctx.execution_id,
)
if not wait_for_response:
return None
if not event_bus:
logger.warning(
"Subagent '%s' requested user response but no event_bus available",
agent_id,
)
return None
# Create isolated receiver and register for input routing
import uuid
escalation_id = f"{ctx.node_id}:escalation:{uuid.uuid4().hex[:8]}"
receiver = escalation_receiver_cls()
registry = ctx.shared_node_registry
registry[escalation_id] = receiver
try:
await event_bus.emit_escalation_requested(
stream_id=ctx.stream_id or ctx.node_id,
node_id=escalation_id,
reason=f"Subagent report (wait_for_response) from {agent_id}",
context=message,
execution_id=ctx.execution_id,
)
# Block until queen responds
return await receiver.wait()
finally:
registry.pop(escalation_id, None)
# 3. Filter tools for subagent
subagent_tool_names = set(subagent_spec.tools or [])
tool_source = ctx.all_tools if ctx.all_tools else ctx.available_tools
# GCU auto-population
if subagent_spec.node_type == "gcu" and not subagent_tool_names:
subagent_tools = [t for t in tool_source if t.name != "delegate_to_sub_agent"]
else:
subagent_tools = [
t
for t in tool_source
if t.name in subagent_tool_names and t.name != "delegate_to_sub_agent"
]
missing = subagent_tool_names - {t.name for t in subagent_tools}
if missing:
logger.warning(
"Subagent '%s' requested tools not found in catalog: %s",
agent_id,
sorted(missing),
)
logger.info(
"📦 Subagent '%s' configuration:\n"
" - System prompt: %s\n"
" - Tools available (%d): %s\n"
" - Memory keys inherited: %s",
agent_id,
(subagent_spec.system_prompt[:200] + "...")
if subagent_spec.system_prompt and len(subagent_spec.system_prompt) > 200
else subagent_spec.system_prompt,
len(subagent_tools),
[t.name for t in subagent_tools],
list(parent_data.keys()),
)
# 4. Build subagent context
max_iter = min(config.max_iterations, 10)
subagent_ctx = NodeContext(
runtime=ctx.runtime,
node_id=sa_node_id,
node_spec=subagent_spec,
buffer=scoped_buffer,
input_data={"task": task, **parent_data},
llm=ctx.llm,
available_tools=subagent_tools,
goal_context=(
f"Your specific task: {task}\n\n"
f"COMPLETION REQUIREMENTS:\n"
f"When your task is done, you MUST call set_output() "
f"for each required key: {subagent_spec.output_keys}\n"
f"Alternatively, call report_to_parent(mark_complete=true) "
f"with your findings in message/data.\n"
f"You have a maximum of {max_iter} turns to complete this task."
),
goal=ctx.goal,
max_tokens=ctx.max_tokens,
runtime_logger=ctx.runtime_logger,
is_subagent_mode=True, # Prevents nested delegation
report_callback=_report_callback,
node_registry={}, # Empty - no nested subagents
shared_node_registry=ctx.shared_node_registry, # For escalation routing
)
# 5. Create and execute subagent EventLoopNode
subagent_conv_store = None
if conversation_store is not None:
from framework.storage.conversation_store import FileConversationStore
parent_base = getattr(conversation_store, "_base", None)
if parent_base is not None:
conversations_dir = parent_base.parent
subagent_dir_name = f"{agent_id}-{subagent_instance}"
subagent_store_path = conversations_dir / subagent_dir_name
subagent_conv_store = FileConversationStore(base_path=subagent_store_path)
# Derive a subagent-scoped spillover dir
subagent_spillover = None
if config.spillover_dir:
subagent_spillover = str(Path(config.spillover_dir) / agent_id / subagent_instance)
subagent_node = event_loop_node_cls(
event_bus=event_bus,
judge=SubagentJudge(task=task, max_iterations=max_iter),
config=LoopConfig(
max_iterations=max_iter,
max_tool_calls_per_turn=config.max_tool_calls_per_turn,
tool_call_overflow_margin=config.tool_call_overflow_margin,
max_context_tokens=config.max_context_tokens,
stall_detection_threshold=config.stall_detection_threshold,
max_tool_result_chars=config.max_tool_result_chars,
spillover_dir=subagent_spillover,
),
tool_executor=tool_executor,
conversation_store=subagent_conv_store,
)
# Each subagent instance gets its own unique browser profile so concurrent
# subagents don't share tab groups. The profile is injected into every
# browser_* tool call by wrapping the tool executor.
_gcu_profile = f"{agent_id}:{subagent_instance}"
_original_tool_executor = None
if tool_executor is not None:
_original_tool_executor = tool_executor
async def _gcu_profile_injecting_executor(
tool_use: ToolUse,
) -> ToolResult | Awaitable[ToolResult]:
if tool_use.name.startswith("browser_") and "profile" not in (tool_use.input or {}):
from dataclasses import replace
tool_use = replace(tool_use, input={**(tool_use.input or {}), "profile": _gcu_profile})
result = _original_tool_executor(tool_use)
if asyncio.isfuture(result) or asyncio.iscoroutine(result):
return await result
return result
tool_executor = _gcu_profile_injecting_executor
try:
logger.info("🚀 Starting subagent '%s' execution...", agent_id)
start_time = time.time()
result = await subagent_node.execute(subagent_ctx)
latency_ms = int((time.time() - start_time) * 1000)
separator = "-" * 60
logger.info(
"\n%s\n"
"✅ SUBAGENT '%s' COMPLETED\n"
"%s\n"
"Success: %s\n"
"Latency: %dms\n"
"Tokens used: %s\n"
"Output keys: %s\n"
"%s",
separator,
agent_id,
separator,
result.success,
latency_ms,
result.tokens_used,
list(result.output.keys()) if result.output else [],
separator,
)
result_json = {
"message": (
f"Sub-agent '{agent_id}' completed successfully"
if result.success
else f"Sub-agent '{agent_id}' failed: {result.error}"
),
"data": result.output,
"reports": subagent_reports if subagent_reports else None,
"metadata": {
"agent_id": agent_id,
"success": result.success,
"tokens_used": result.tokens_used,
"latency_ms": latency_ms,
"report_count": len(subagent_reports),
},
}
return ToolResult(
tool_use_id="",
content=json.dumps(result_json, indent=2, default=str),
is_error=not result.success,
)
except Exception as e:
logger.exception(
"\n" + "!" * 60 + "\n❌ SUBAGENT '%s' FAILED\nError: %s\n" + "!" * 60,
agent_id,
str(e),
)
result_json = {
"message": f"Sub-agent '{agent_id}' raised exception: {e}",
"data": None,
"metadata": {
"agent_id": agent_id,
"success": False,
"error": str(e),
},
}
return ToolResult(
tool_use_id="",
content=json.dumps(result_json, indent=2),
is_error=True,
)
finally:
# Close the tab group this subagent created, if any.
if _original_tool_executor is not None:
try:
stop_call = ToolUse(
id="__subagent_cleanup__",
name="browser_stop",
input={"profile": _gcu_profile},
)
result = _original_tool_executor(stop_call)
if asyncio.isfuture(result) or asyncio.iscoroutine(result):
await result
except Exception:
pass
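A sketch of a delegation call, assuming the caller already holds the parent's NodeContext and the concrete classes (`EscalationReceiver` is a stand-in name for whatever receiver class the runtime provides):
result = await execute_subagent(
    ctx,
    agent_id="researcher",  # must exist in ctx.node_registry
    task="Summarize the latest release notes",
    config=LoopConfig(max_iterations=20),
    event_loop_node_cls=EventLoopNode,
    escalation_receiver_cls=EscalationReceiver,  # hypothetical receiver class
)
payload = json.loads(result.content)  # {"message": ..., "data": ..., "reports": ..., "metadata": ...}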
@@ -1,151 +0,0 @@
"""Streaming XML tag filter for thinking tags.
Strips configured XML tags (e.g. ``<situation>``, ``<monologue>``) from
a chunked text stream while preserving the full text for conversation
storage. The filter is stateful: it handles chunks that split mid-tag.
Only touches text content. Tool calls flow through a completely separate
code path and are never affected by this filter.
"""
from __future__ import annotations
from collections.abc import Sequence
class ThinkingTagFilter:
"""Strips XML thinking tags from a streaming text output.
Buffers content inside configured tags and yields only the visible
content outside those tags. Handles chunks that split across tag
boundaries (e.g. a chunk ending with ``"<mono"``).
Args:
tag_names: Tag names to strip (e.g. ``["situation", "monologue"]``).
"""
def __init__(self, tag_names: Sequence[str]) -> None:
self._tag_names: set[str] = set(tag_names)
# Pre-compute all opening and closing tag strings for matching.
self._open_tags: dict[str, str] = {name: f"<{name}>" for name in tag_names}
self._close_tags: dict[str, str] = {name: f"</{name}>" for name in tag_names}
# All possible tag prefixes for partial-match detection.
self._all_tag_strings: list[str] = sorted(
list(self._open_tags.values()) + list(self._close_tags.values()),
key=len,
reverse=True,
)
self._inside_tag: str | None = None # Which tag we're inside, or None.
self._pending: str = "" # Chars that might be a partial tag.
self._visible_text: str = "" # Accumulated visible snapshot.
def feed(self, chunk: str) -> str:
"""Feed a text chunk and return the visible portion.
Characters inside thinking tags are suppressed. Characters that
*might* be the start of a tag are buffered until the next chunk
resolves the ambiguity.
Returns:
The portion of text that should be shown to the user.
"""
buf = self._pending + chunk
self._pending = ""
visible = self._process(buf)
self._visible_text += visible
return visible
@property
def visible_snapshot(self) -> str:
"""Accumulated visible text so far (for the snapshot field)."""
return self._visible_text
def flush(self) -> str:
"""Flush any pending partial tag as visible text.
Called at end-of-stream. If characters were buffered because they
looked like the start of a tag but the stream ended before the tag
completed, they are emitted as visible text (graceful degradation).
"""
result = ""
if self._pending:
if self._inside_tag is None:
result = self._pending
# If inside a tag, discard pending (unclosed tag content).
self._pending = ""
self._visible_text += result
return result
# ------------------------------------------------------------------
# Internal processing
# ------------------------------------------------------------------
def _process(self, buf: str) -> str:
"""Process a buffer, returning visible text and updating state."""
visible_parts: list[str] = []
i = 0
n = len(buf)
while i < n:
if self._inside_tag is not None:
# Inside a tag — look for the closing tag.
close = self._close_tags[self._inside_tag]
close_pos = buf.find(close, i)
if close_pos == -1:
# Closing tag might be split across chunks.
# Check if the tail of buf is a prefix of the close tag.
tail_len = min(len(close) - 1, n - i)
for tl in range(tail_len, 0, -1):
if close.startswith(buf[n - tl :]):
self._pending = buf[n - tl :]
i = n
break
else:
# No partial match — discard everything (inside tag).
i = n
break
else:
# Found closing tag — skip past it and exit tag.
i = close_pos + len(close)
self._inside_tag = None
else:
# Outside any tag — look for '<'.
lt_pos = buf.find("<", i)
if lt_pos == -1:
# No '<' — everything is visible.
visible_parts.append(buf[i:])
i = n
else:
# Emit text before the '<'.
if lt_pos > i:
visible_parts.append(buf[i:lt_pos])
# Try to match an opening tag at this position.
remainder = buf[lt_pos:]
matched = False
for name, open_tag in self._open_tags.items():
if remainder.startswith(open_tag):
# Full opening tag found — enter tag.
self._inside_tag = name
i = lt_pos + len(open_tag)
matched = True
break
if not matched:
# Check if remainder could be a partial tag prefix.
if self._is_partial_tag_prefix(remainder):
# Buffer and wait for next chunk.
self._pending = remainder
i = n
else:
# Not a known tag — '<' is visible text.
visible_parts.append("<")
i = lt_pos + 1
return "".join(visible_parts)
def _is_partial_tag_prefix(self, text: str) -> bool:
"""Check if text could be the start of a known tag string."""
for tag_str in self._all_tag_strings:
if tag_str.startswith(text) and len(text) < len(tag_str):
return True
return False
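A small usage sketch covering the mid-tag split case the class exists for:
f = ThinkingTagFilter(["monologue"])
out = f.feed("Hello <mono")  # -> "Hello "; "<mono" is buffered as a possible tag prefix
out += f.feed("logue>hidden</monologue> world")  # -> " world"; tag content suppressed
out += f.flush()  # nothing pending here, so this adds ""
assert out == "Hello  world"  # two spaces: one before, one after the stripped tag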
-207
@@ -1,207 +0,0 @@
"""Shared types and state containers for the event loop package."""
from __future__ import annotations
import json
import logging
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Literal, Protocol, runtime_checkable
from framework.graph.conversation import (
ConversationStore,
get_run_cursor,
update_run_cursor,
)
logger = logging.getLogger(__name__)
@dataclass
class TriggerEvent:
"""A framework-level trigger signal (timer tick or webhook hit)."""
trigger_type: str
source_id: str
payload: dict[str, Any] = field(default_factory=dict)
timestamp: float = field(default_factory=time.time)
@dataclass
class JudgeVerdict:
"""Result of judge evaluation for the event loop."""
action: Literal["ACCEPT", "RETRY", "ESCALATE"]
# None = no evaluation happened (skip_judge, tool-continue); not logged.
# "" = evaluated but no feedback; logged with default text.
# "..." = evaluated with feedback; logged as-is.
feedback: str | None = None
@runtime_checkable
class JudgeProtocol(Protocol):
"""Protocol for event-loop judges."""
async def evaluate(self, context: dict[str, Any]) -> JudgeVerdict: ...
@dataclass
class LoopConfig:
"""Configuration for the event loop."""
max_iterations: int = 50
max_tool_calls_per_turn: int = 30
judge_every_n_turns: int = 1
stall_detection_threshold: int = 3
stall_similarity_threshold: float = 0.85
max_context_tokens: int = 32_000
store_prefix: str = ""
# Overflow margin for max_tool_calls_per_turn. Tool calls are only
# discarded when the count exceeds max_tool_calls_per_turn * (1 + margin).
tool_call_overflow_margin: float = 0.5
# Tool result context management.
max_tool_result_chars: int = 30_000
spillover_dir: str | None = None
# set_output value spilling.
max_output_value_chars: int = 2_000
# Stream retry.
max_stream_retries: int = 3
stream_retry_backoff_base: float = 2.0
stream_retry_max_delay: float = 60.0
# Tool doom loop detection.
tool_doom_loop_threshold: int = 3
# Client-facing auto-block grace period.
cf_grace_turns: int = 1
# Worker auto-escalation: text-only turns before escalating to queen.
worker_escalation_grace_turns: int = 1
tool_doom_loop_enabled: bool = True
# Per-tool-call timeout.
tool_call_timeout_seconds: float = 60.0
# Subagent delegation timeout (wall-clock max).
subagent_timeout_seconds: float = 3600.0
# Subagent inactivity timeout - only timeout if no activity for this duration.
# This resets whenever the subagent makes progress (tool calls, LLM responses).
# Set to 0 to use only the wall-clock timeout.
subagent_inactivity_timeout_seconds: float = 300.0
# Lifecycle hooks.
hooks: dict[str, list] | None = None
def __post_init__(self) -> None:
if self.hooks is None:
object.__setattr__(self, "hooks", {})
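For example, with the defaults above the overflow margin works out as follows:
cfg = LoopConfig()
# Tool calls are only discarded past max_tool_calls_per_turn * (1 + margin):
discard_threshold = cfg.max_tool_calls_per_turn * (1 + cfg.tool_call_overflow_margin)
print(discard_threshold)  # 30 * 1.5 = 45.0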
@dataclass
class HookContext:
"""Context passed to every lifecycle hook."""
event: str
trigger: str | None
system_prompt: str
@dataclass
class HookResult:
"""What a hook may return to modify node state."""
system_prompt: str | None = None
inject: str | None = None
@dataclass
class OutputAccumulator:
"""Accumulates output key-value pairs with optional write-through persistence."""
values: dict[str, Any] = field(default_factory=dict)
store: ConversationStore | None = None
spillover_dir: str | None = None
max_value_chars: int = 0
run_id: str | None = None
async def set(self, key: str, value: Any) -> None:
"""Set a key-value pair, auto-spilling large values to files."""
value = self._auto_spill(key, value)
self.values[key] = value
if self.store:
cursor = await self.store.read_cursor() or {}
outputs = cursor.get("outputs", {})
outputs[key] = value
cursor["outputs"] = outputs
await self.store.write_cursor(cursor)
def _auto_spill(self, key: str, value: Any) -> Any:
"""Save large values to a file and return a reference string."""
if self.max_value_chars <= 0 or not self.spillover_dir:
return value
val_str = json.dumps(value, ensure_ascii=False) if not isinstance(value, str) else value
if len(val_str) <= self.max_value_chars:
return value
spill_path = Path(self.spillover_dir)
spill_path.mkdir(parents=True, exist_ok=True)
ext = ".json" if isinstance(value, (dict, list)) else ".txt"
filename = f"output_{key}{ext}"
write_content = (
json.dumps(value, indent=2, ensure_ascii=False)
if isinstance(value, (dict, list))
else str(value)
)
file_path = spill_path / filename
file_path.write_text(write_content, encoding="utf-8")
file_size = file_path.stat().st_size
logger.info(
"set_output value auto-spilled: key=%s, %d chars -> %s (%d bytes)",
key,
len(val_str),
filename,
file_size,
)
# Use absolute path so parent agents can find files from subagents
abs_path = str(file_path.resolve())
return (
f"[Saved to '{abs_path}' ({file_size:,} bytes). "
f"Use read_file(path='{abs_path}') "
f"to access full data.]"
)
def get(self, key: str) -> Any | None:
return self.values.get(key)
def to_dict(self) -> dict[str, Any]:
return dict(self.values)
def has_all_keys(self, required: list[str]) -> bool:
return all(key in self.values and self.values[key] is not None for key in required)
@classmethod
async def restore(
cls,
store: ConversationStore,
run_id: str | None = None,
) -> OutputAccumulator:
cursor = await store.read_cursor()
values = cursor.get("outputs", {}) if cursor else {}
return cls(values=values, store=store, run_id=run_id)
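A sketch of the auto-spill path (directory is hypothetical, no store attached, run inside an async function):
acc = OutputAccumulator(spillover_dir="/tmp/spill", max_value_chars=2_000)
await acc.set("report", "x" * 10_000)  # over the limit, so the value is spilled to a file
print(acc.get("report"))  # "[Saved to '/tmp/spill/output_report.txt' (10,000 bytes). ...]" reference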
__all__ = [
"HookContext",
"HookResult",
"JudgeProtocol",
"JudgeVerdict",
"LoopConfig",
"OutputAccumulator",
"TriggerEvent",
]
-129
@@ -1,129 +0,0 @@
"""GCU (browser automation) node type constants.
A ``gcu`` node is an ``event_loop`` node with two automatic enhancements:
1. A canonical browser best-practices system prompt is prepended.
2. All tools from the GCU MCP server are auto-included.
No new ``NodeProtocol`` subclass: the ``gcu`` type is purely a declarative
signal processed by the runner and executor at setup time.
"""
# ---------------------------------------------------------------------------
# MCP server identity
# ---------------------------------------------------------------------------
GCU_SERVER_NAME = "gcu-tools"
"""Name used to identify the GCU MCP server in ``mcp_servers.json``."""
GCU_MCP_SERVER_CONFIG: dict = {
"name": GCU_SERVER_NAME,
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "-m", "gcu.server", "--stdio"],
"cwd": "../../tools",
"description": "GCU tools for browser automation",
}
"""Default stdio config for the GCU MCP server (relative to exports/<agent>/)."""
# ---------------------------------------------------------------------------
# Browser best-practices system prompt
# ---------------------------------------------------------------------------
GCU_BROWSER_SYSTEM_PROMPT = """\
# Browser Automation Best Practices
Follow these rules for reliable, efficient browser interaction.
## Reading Pages
- ALWAYS prefer `browser_snapshot` over `browser_get_text("body")`:
it returns a compact ~1-5 KB accessibility tree vs 100+ KB of raw HTML.
- Interaction tools (`browser_click`, `browser_type`, `browser_fill`,
`browser_scroll`, etc.) return a page snapshot automatically in their
result. Use it to decide your next action; do NOT call
`browser_snapshot` separately after every action.
Only call `browser_snapshot` when you need a fresh view without
performing an action, or after setting `auto_snapshot=false`.
- Do NOT use `browser_screenshot` to read text; use
`browser_snapshot` for that (compact, searchable, fast).
- DO use `browser_screenshot` when you need visual context:
charts, images, canvas elements, layout verification, or when
the snapshot doesn't capture what you need.
- Only fall back to `browser_get_text` for extracting specific
small elements by CSS selector.
## Navigation & Waiting
- `browser_navigate` and `browser_open` already wait for the page to
load (`domcontentloaded`). Do NOT call `browser_wait` with no
arguments after navigation; it wastes time.
Only use `browser_wait` when you need a *specific element* or *text*
to appear (pass `selector` or `text`).
- NEVER re-navigate to the same URL after scrolling:
this resets your scroll position and loses loaded content.
## Scrolling
- Use large scroll amounts (~2000) when loading more content;
sites like Twitter and LinkedIn have lazy loading for paging.
- The scroll result includes a snapshot automatically; no need to call
`browser_snapshot` separately.
## Batching Actions
- You can call multiple tools in a single turn; they execute in parallel.
ALWAYS batch independent actions together. Examples:
- Fill multiple form fields in one turn.
- Navigate + snapshot in one turn.
- Click + scroll if targeting different elements.
- When batching, set `auto_snapshot=false` on all but the last action
to avoid redundant snapshots.
- Aim for 3-5 tool calls per turn minimum. One tool call per turn is
wasteful.
## Error Recovery
- If a tool fails, retry once with the same approach.
- If it fails a second time, STOP retrying and switch approach.
- If `browser_snapshot` fails, try `browser_get_text` with a
specific small selector as fallback.
- If `browser_open` fails or the page seems stale: `browser_stop`,
then `browser_start`, then retry.
## Tab Management
**Close tabs as soon as you are done with them**, not only at the end of the task.
After reading or extracting data from a tab, close it immediately.
**Decision rules:**
- Finished reading/extracting from a tab? → `browser_close(target_id=...)`
- Completed a multi-tab workflow? → `browser_close_finished()` to clean up all your tabs
- More than 3 tabs open? → stop and close finished ones before opening more
- Popup appeared that you didn't need? → close it immediately
**Origin awareness:** `browser_tabs` returns an `origin` field for each tab:
- `"agent"` you opened it; you own it; close it when done
- `"popup"` opened by a link or script; close after extracting what you need
- `"startup"` or `"user"` leave these alone unless the task requires it
**Cleanup tools:**
- `browser_close(target_id=...)`: close one specific tab
- `browser_close_finished()`: close all your agent/popup tabs (safe: leaves startup/user tabs)
- `browser_close_all()`: close everything except the active tab (use only for full reset)
**Multi-tab workflow pattern:**
1. Open background tabs with `browser_open(url=..., background=true)` to stay on current tab
2. Process each tab and close it with `browser_close` when done
3. When the full workflow completes, call `browser_close_finished()` to confirm cleanup
4. Check `browser_tabs` at any point; it shows `origin` and `age_seconds` per tab
Never accumulate tabs. Treat every tab you open as a resource you must free.
## Login & Auth Walls
- If you see a "Log in" or "Sign up" prompt instead of expected
content, report the auth wall immediately; do NOT attempt to log in.
- Check for cookie consent banners and dismiss them if they block content.
## Efficiency
- Minimize tool calls; combine actions where possible.
- When a snapshot result is saved to a spillover file, use
`run_command` with grep to extract specific data rather than
re-reading the full file.
- Call `set_output` in the same turn as your last browser action
when possible; don't waste a turn.
"""
+15
@@ -0,0 +1,15 @@
"""Host layer -- how agents are triggered and hosted."""
from framework.host.colony_runtime import ( # noqa: F401
ColonyConfig,
ColonyRuntime,
StreamEventBus,
TriggerSpec,
)
from framework.host.event_bus import AgentEvent, EventBus, EventType # noqa: F401
from framework.host.worker import ( # noqa: F401
Worker,
WorkerInfo,
WorkerResult,
WorkerStatus,
)
@@ -108,14 +108,10 @@ class EventType(StrEnum):
# Judge decisions (implicit judge in event loop nodes)
JUDGE_VERDICT = "judge_verdict"
# Output tracking
OUTPUT_KEY_SET = "output_key_set"
# Retry / edge tracking
# Retry tracking
NODE_RETRY = "node_retry"
EDGE_TRAVERSED = "edge_traversed"
# Worker agent lifecycle (event-driven graph execution)
# Worker agent lifecycle
WORKER_COMPLETED = "worker_completed"
WORKER_FAILED = "worker_failed"
@@ -135,21 +131,19 @@ class EventType(StrEnum):
# Execution resurrection (auto-restart on non-fatal failure)
EXECUTION_RESURRECTED = "execution_resurrected"
# Graph lifecycle (session manager → frontend)
WORKER_GRAPH_LOADED = "worker_graph_loaded"
# Colony lifecycle (session manager → frontend)
WORKER_COLONY_LOADED = "worker_colony_loaded"
# Queen create_colony tool finished forking; carries colony_name +
# path so the frontend can render a system message linking to the
# new colony page at /colony/{colony_name}.
COLONY_CREATED = "colony_created"
CREDENTIALS_REQUIRED = "credentials_required"
# Draft graph (planning phase — lightweight graph preview)
DRAFT_GRAPH_UPDATED = "draft_graph_updated"
# Flowchart map updated (after reconciliation with runtime graph)
FLOWCHART_MAP_UPDATED = "flowchart_map_updated"
# Queen phase changes (building <-> staging <-> running)
# Queen phase changes (working <-> reviewing)
QUEEN_PHASE_CHANGED = "queen_phase_changed"
# Queen thinking hook — persona selected for the current building session
QUEEN_PERSONA_SELECTED = "queen_persona_selected"
# Queen identity — which queen profile was selected for this session
QUEEN_IDENTITY_SELECTED = "queen_identity_selected"
# Subagent reports (one-way progress updates from sub-agents)
SUBAGENT_REPORT = "subagent_report"
@@ -174,7 +168,7 @@ class AgentEvent:
data: dict[str, Any] = field(default_factory=dict)
timestamp: datetime = field(default_factory=datetime.now)
correlation_id: str | None = None # For tracking related events
graph_id: str | None = None # Which graph emitted this event (multi-graph sessions)
colony_id: str | None = None # Which colony emitted this event
run_id: str | None = None # Unique ID per trigger() invocation — used for run dividers
def to_dict(self) -> dict:
@@ -187,7 +181,7 @@ class AgentEvent:
"data": self.data,
"timestamp": self.timestamp.isoformat(),
"correlation_id": self.correlation_id,
"graph_id": self.graph_id,
"colony_id": self.colony_id,
}
if self.run_id is not None:
d["run_id"] = self.run_id
@@ -208,7 +202,7 @@ class Subscription:
filter_stream: str | None = None # Only receive events from this stream
filter_node: str | None = None # Only receive events from this node
filter_execution: str | None = None # Only receive events from this execution
filter_graph: str | None = None # Only receive events from this graph
filter_colony: str | None = None # Only receive events from this colony
class EventBus:
@@ -390,7 +384,7 @@ class EventBus:
filter_stream: str | None = None,
filter_node: str | None = None,
filter_execution: str | None = None,
filter_graph: str | None = None,
filter_colony: str | None = None,
) -> str:
"""
Subscribe to events.
@@ -401,7 +395,7 @@ class EventBus:
filter_stream: Only receive events from this stream
filter_node: Only receive events from this node
filter_execution: Only receive events from this execution
filter_graph: Only receive events from this graph
filter_colony: Only receive events from this colony
Returns:
Subscription ID (use to unsubscribe)
@@ -416,7 +410,7 @@ class EventBus:
filter_stream=filter_stream,
filter_node=filter_node,
filter_execution=filter_execution,
filter_graph=filter_graph,
filter_colony=filter_colony,
)
self._subscriptions[sub_id] = subscription
@@ -452,11 +446,7 @@ class EventBus:
# iteration values. Without this, live SSE would use raw iterations
# while events.jsonl would use offset iterations, causing ID collisions
# on the frontend when replaying after cold resume.
if (
self._session_log_iteration_offset
and isinstance(event.data, dict)
and "iteration" in event.data
):
if self._session_log_iteration_offset and isinstance(event.data, dict) and "iteration" in event.data:
offset = self._session_log_iteration_offset
event.data = {**event.data, "iteration": event.data["iteration"] + offset}
@@ -518,23 +508,41 @@ class EventBus:
if subscription.filter_execution and subscription.filter_execution != event.execution_id:
return False
# Check graph filter
if subscription.filter_graph and subscription.filter_graph != event.graph_id:
# Check colony filter
if subscription.filter_colony and subscription.filter_colony != event.colony_id:
return False
return True
# Per-handler wall-clock timeout. A subscriber that deadlocks or
# blocks on slow I/O would otherwise freeze the publisher (and via
# ``await publish(...)`` any coroutine that emits events) indefinitely.
# 15 s is generous for legitimate handlers and cheap to tune later.
_HANDLER_TIMEOUT_SECONDS: float = 15.0
async def _execute_handlers(
self,
event: AgentEvent,
handlers: list[EventHandler],
) -> None:
"""Execute handlers concurrently with rate limiting."""
"""Execute handlers concurrently with rate limiting + hard timeout."""
async def run_handler(handler: EventHandler) -> None:
async with self._semaphore:
try:
await handler(event)
await asyncio.wait_for(
handler(event),
timeout=self._HANDLER_TIMEOUT_SECONDS,
)
except TimeoutError:
handler_name = getattr(handler, "__qualname__", repr(handler))
logger.error(
"EventBus handler %s exceeded %.0fs on event %s — dropping; "
"fix the handler or the publisher will stall",
handler_name,
self._HANDLER_TIMEOUT_SECONDS,
getattr(event.type, "name", event.type),
)
except Exception:
logger.exception(f"Handler error for {event.type}")
@@ -1029,24 +1037,6 @@ class EventBus:
)
)
async def emit_output_key_set(
self,
stream_id: str,
node_id: str,
key: str,
execution_id: str | None = None,
) -> None:
"""Emit output key set event."""
await self.publish(
AgentEvent(
type=EventType.OUTPUT_KEY_SET,
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={"key": key},
)
)
async def emit_node_retry(
self,
stream_id: str,
@@ -1071,29 +1061,6 @@ class EventBus:
)
)
async def emit_edge_traversed(
self,
stream_id: str,
source_node: str,
target_node: str,
edge_condition: str = "",
execution_id: str | None = None,
) -> None:
"""Emit edge traversed event."""
await self.publish(
AgentEvent(
type=EventType.EDGE_TRAVERSED,
stream_id=stream_id,
node_id=source_node,
execution_id=execution_id,
data={
"source_node": source_node,
"target_node": target_node,
"edge_condition": edge_condition,
},
)
)
async def emit_worker_completed(
self,
stream_id: str,
@@ -1208,15 +1175,25 @@ class EventBus:
reason: str = "",
context: str = "",
execution_id: str | None = None,
request_id: str | None = None,
) -> None:
"""Emit escalation requested event (agent wants queen)."""
"""Emit escalation requested event (agent wants queen).
``request_id`` is a caller-supplied handle used by the queen to
address its reply back to the specific escalation. When omitted the
event still fires but the queen cannot route a targeted reply.
"""
await self.publish(
AgentEvent(
type=EventType.ESCALATION_REQUESTED,
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={"reason": reason, "context": context},
data={
"request_id": request_id,
"reason": reason,
"context": context,
},
)
)
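# Sketch of a targeted escalation round-trip ("esc-42" is an illustrative
# handle, not a real ID scheme):
#
#     await bus.emit_escalation_requested(
#         stream_id="s1",
#         node_id="worker-3",
#         reason="ambiguous task",
#         context="two conflicting specs",
#         request_id="esc-42",
#     )
#     # The queen's reply event can now carry {"request_id": "esc-42"}, so
#     # the waiting worker knows the answer addresses this escalation.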
@@ -1297,7 +1274,7 @@ class EventBus:
stream_id: str | None = None,
node_id: str | None = None,
execution_id: str | None = None,
graph_id: str | None = None,
colony_id: str | None = None,
timeout: float | None = None,
) -> AgentEvent | None:
"""
@@ -1308,7 +1285,7 @@ class EventBus:
stream_id: Filter by stream
node_id: Filter by node
execution_id: Filter by execution
graph_id: Filter by graph
colony_id: Filter by colony
timeout: Maximum time to wait (seconds)
Returns:
@@ -1329,7 +1306,7 @@ class EventBus:
filter_stream=stream_id,
filter_node=node_id,
filter_execution=execution_id,
filter_graph=graph_id,
filter_colony=colony_id,
)
try:
@@ -18,18 +18,18 @@ from dataclasses import dataclass, field
from datetime import datetime
from typing import TYPE_CHECKING, Any
from framework.graph.checkpoint_config import CheckpointConfig
from framework.graph.executor import ExecutionResult, GraphExecutor
from framework.runtime.event_bus import EventBus
from framework.runtime.shared_state import IsolationLevel, SharedBufferManager
from framework.runtime.stream_runtime import StreamRuntime, StreamRuntimeAdapter
from framework.host.event_bus import EventBus
from framework.host.shared_state import IsolationLevel, SharedBufferManager
from framework.host.stream_runtime import StreamDecisionTracker, StreamRuntimeAdapter
from framework.orchestrator.checkpoint_config import CheckpointConfig
from framework.orchestrator.orchestrator import ExecutionResult, Orchestrator
if TYPE_CHECKING:
from framework.graph.edge import GraphSpec
from framework.graph.goal import Goal
from framework.host.event_bus import AgentEvent
from framework.host.outcome_aggregator import OutcomeAggregator
from framework.llm.provider import LLMProvider, Tool
from framework.runtime.event_bus import AgentEvent
from framework.runtime.outcome_aggregator import OutcomeAggregator
from framework.orchestrator.edge import GraphSpec
from framework.orchestrator.goal import Goal
from framework.storage.concurrent import ConcurrentStorage
from framework.storage.session_store import SessionStore
@@ -133,7 +133,7 @@ class ExecutionContext:
status: str = "pending" # pending, running, completed, failed, paused
class ExecutionStream:
class ExecutionManager:
"""
Manages concurrent executions for a single entry point.
@@ -172,7 +172,7 @@ class ExecutionStream:
goal: "Goal",
state_manager: SharedBufferManager,
storage: "ConcurrentStorage",
outcome_aggregator: "OutcomeAggregator",
outcome_aggregator: "OutcomeAggregator | None" = None,
event_bus: "EventBus | None" = None,
llm: "LLMProvider | None" = None,
tools: list["Tool"] | None = None,
@@ -192,10 +192,6 @@ class ExecutionStream:
context_warn_ratio: float | None = None,
batch_init_nudge: str | None = None,
dynamic_memory_provider_factory: Callable[[str], Callable[[], str] | None] | None = None,
colony_memory_dir: Any = None,
colony_worker_sessions_dir: Any = None,
colony_recall_cache: dict[str, str] | None = None,
colony_reflect_llm: Any = None,
):
"""
Initialize execution stream.
@@ -251,10 +247,6 @@ class ExecutionStream:
self._context_warn_ratio: float | None = context_warn_ratio
self._batch_init_nudge: str | None = batch_init_nudge
self._dynamic_memory_provider_factory = dynamic_memory_provider_factory
self._colony_memory_dir = colony_memory_dir
self._colony_worker_sessions_dir = colony_worker_sessions_dir
self._colony_recall_cache = colony_recall_cache
self._colony_reflect_llm = colony_reflect_llm
_es_logger = logging.getLogger(__name__)
if protocols_prompt:
@@ -270,16 +262,15 @@ class ExecutionStream:
)
# Create stream-scoped runtime
self._runtime = StreamRuntime(
self._runtime = StreamDecisionTracker(
stream_id=stream_id,
storage=storage,
outcome_aggregator=outcome_aggregator,
)
# Execution tracking
self._active_executions: dict[str, ExecutionContext] = {}
self._execution_tasks: dict[str, asyncio.Task] = {}
self._active_executors: dict[str, GraphExecutor] = {}
self._active_executors: dict[str, Orchestrator] = {}
self._cancel_reasons: dict[str, str] = {}
self._execution_results: OrderedDict[str, ExecutionResult] = OrderedDict()
self._execution_result_times: dict[str, float] = {}
@@ -309,7 +300,7 @@ class ExecutionStream:
# Emit stream started event
if self._scoped_event_bus:
from framework.runtime.event_bus import AgentEvent, EventType
from framework.host.event_bus import AgentEvent, EventType
await self._scoped_event_bus.publish(
AgentEvent(
@@ -434,7 +425,7 @@ class ExecutionStream:
# Emit stream stopped event
if self._scoped_event_bus:
from framework.runtime.event_bus import AgentEvent, EventType
from framework.host.event_bus import AgentEvent, EventType
await self._scoped_event_bus.publish(
AgentEvent(
@@ -461,9 +452,7 @@ class ExecutionStream:
for executor in self._active_executors.values():
node = executor.node_registry.get(node_id)
if node is not None and hasattr(node, "inject_event"):
await node.inject_event(
content, is_client_input=is_client_input, image_content=image_content
)
await node.inject_event(content, is_client_input=is_client_input, image_content=image_content)
return True
return False
@@ -676,11 +665,9 @@ class ExecutionStream:
# Create per-execution runtime logger
runtime_logger = None
if self._runtime_log_store:
from framework.runtime.runtime_logger import RuntimeLogger
from framework.tracker.runtime_logger import RuntimeLogger
runtime_logger = RuntimeLogger(
store=self._runtime_log_store, agent_id=self.graph.id
)
runtime_logger = RuntimeLogger(store=self._runtime_log_store, agent_id=self.graph.id)
# Derive storage from session_store (graph-specific for secondary
# graphs) so that all files — conversations, state, checkpoints,
@@ -705,12 +692,7 @@ class ExecutionStream:
# forward so the next attempt resumes at the failed node.
while True:
# Create executor for this execution.
# Each execution gets its own storage under sessions/{exec_id}/
# so conversations, spillover, and data files are all scoped
# to this execution. The executor sets data_dir via execution
# context (contextvars) so data tools and spillover share the
# same session-scoped directory.
executor = GraphExecutor(
executor = Orchestrator(
runtime=runtime_adapter,
llm=self._llm,
tools=self._tools,
@@ -735,10 +717,6 @@ class ExecutionStream:
if self._dynamic_memory_provider_factory is not None
else None
),
colony_memory_dir=self._colony_memory_dir,
colony_worker_sessions_dir=self._colony_worker_sessions_dir,
colony_recall_cache=self._colony_recall_cache,
colony_reflect_llm=self._colony_reflect_llm,
)
# Track executor so inject_input() can reach EventLoopNode instances
self._active_executors[execution_id] = executor
@@ -775,7 +753,7 @@ class ExecutionStream:
# Emit resurrection event
if self._scoped_event_bus:
from framework.runtime.event_bus import AgentEvent, EventType
from framework.host.event_bus import AgentEvent, EventType
await self._scoped_event_bus.publish(
AgentEvent(
@@ -905,9 +883,7 @@ class ExecutionStream:
if has_result and result.paused_at:
await self._write_session_state(execution_id, ctx, result=result)
else:
await self._write_session_state(
execution_id, ctx, error="Execution cancelled"
)
await self._write_session_state(execution_id, ctx, error="Execution cancelled")
# Emit SSE event so the frontend knows the execution stopped.
# The executor does NOT emit on CancelledError, so there is no
@@ -1131,7 +1107,7 @@ class ExecutionStream:
Each stream only executes from its own entry_node, but the full
graph must validate with all entry points accounted for.
"""
from framework.graph.edge import GraphSpec
from framework.orchestrator.edge import GraphSpec
# Merge entry points: this stream's entry + original graph's primary
# entry + any other entry points. This ensures all nodes are
@@ -1236,8 +1212,7 @@ class ExecutionStream:
# The task will continue winding down in the background and its
# finally block will harmlessly pop already-removed keys.
logger.warning(
"Execution %s did not finish within cancel timeout; "
"force-cleaning bookkeeping",
"Execution %s did not finish within cancel timeout; force-cleaning bookkeeping",
execution_id,
)
async with self._lock:
+9
@@ -0,0 +1,9 @@
"""State isolation level enum."""
from enum import StrEnum
class IsolationLevel(StrEnum):
ISOLATED = "isolated"
SHARED = "shared"
SYNCHRONIZED = "synchronized"
+21
@@ -0,0 +1,21 @@
"""Stub — outcome aggregator removed in colony refactor."""
from framework.schemas.goal import Goal
class OutcomeAggregator:
def __init__(self, goal: Goal, event_bus=None):
self._goal = goal
self._event_bus = event_bus
def record_decision(self, **kwargs):
pass
def record_outcome(self, **kwargs):
pass
def evaluate_goal_progress(self):
return {"progress": 0.0, "criteria_status": {}}
def get_stats(self):
return {"total_decisions": 0, "total_outcomes": 0}
+61
@@ -0,0 +1,61 @@
"""Stub — shared state removed in colony refactor."""
import asyncio
import logging
from enum import StrEnum
from typing import Any
logger = logging.getLogger(__name__)
class IsolationLevel(StrEnum):
ISOLATED = "isolated"
SHARED = "shared"
SYNCHRONIZED = "synchronized"
class StateScope(StrEnum):
EXECUTION = "execution"
STREAM = "stream"
GLOBAL = "global"
class SharedBufferManager:
def __init__(self):
self._global_state: dict[str, Any] = {}
self._stream_states: dict[str, dict[str, Any]] = {}
self._execution_states: dict[str, dict[str, Any]] = {}
self._lock = asyncio.Lock()
def create_buffer(
self,
execution_id: str,
stream_id: str = "",
isolation: IsolationLevel = IsolationLevel.ISOLATED,
):
execution_key = f"{stream_id}:{execution_id}"
if execution_key not in self._execution_states:
self._execution_states[execution_key] = {}
return self._execution_states[execution_key]
def get_stream_state(self, stream_id: str) -> dict[str, Any]:
return self._stream_states.setdefault(stream_id, {})
def get_global_state(self) -> dict[str, Any]:
return self._global_state
def cleanup_execution(self, execution_id: str, stream_id: str = "") -> None:
"""Drop the per-execution state bucket.
No-op when the key is absent. Called from
``ExecutionManager._run_execution``'s finally block. Before this
method existed, the call raised ``AttributeError`` on every
execution teardown because the SharedBufferManager stub lacked it.
"""
execution_key = f"{stream_id}:{execution_id}"
self._execution_states.pop(execution_key, None)
def get_recent_changes(self, limit: int = 10) -> list[dict[str, Any]]:
"""Compat stub — returns empty list. Shared buffer was removed."""
return []
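# Usage sketch: buckets are keyed "{stream_id}:{execution_id}", so cleaning
# up one execution never touches siblings on the same stream.
#
#     mgr = SharedBufferManager()
#     buf = mgr.create_buffer("exec-1", stream_id="s1")
#     buf["scratch"] = "notes"
#     mgr.get_stream_state("s1")["last_exec"] = "exec-1"
#     mgr.cleanup_execution("exec-1", stream_id="s1")  # drops only s1:exec-1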
@@ -10,20 +10,17 @@ import asyncio
import logging
import uuid
from datetime import datetime
from typing import TYPE_CHECKING, Any
from typing import Any
from framework.observability import set_trace_context
from framework.schemas.decision import Decision, DecisionType, Option, Outcome
from framework.schemas.run import Run, RunStatus
from framework.storage.concurrent import ConcurrentStorage
if TYPE_CHECKING:
from framework.runtime.outcome_aggregator import OutcomeAggregator
logger = logging.getLogger(__name__)
class StreamRuntime:
class StreamDecisionTracker:
"""
Thread-safe runtime for a single execution stream.
@@ -75,7 +72,6 @@ class StreamRuntime:
self,
stream_id: str,
storage: ConcurrentStorage,
outcome_aggregator: "OutcomeAggregator | None" = None,
):
"""
Initialize stream runtime.
@@ -83,11 +79,9 @@ class StreamRuntime:
Args:
stream_id: Unique identifier for this stream
storage: Concurrent storage backend
outcome_aggregator: Optional aggregator for cross-stream evaluation
"""
self.stream_id = stream_id
self._storage = storage
self._outcome_aggregator = outcome_aggregator
# Track runs by execution_id (thread-safe via lock)
self._runs: dict[str, Run] = {}
@@ -142,9 +136,7 @@ class StreamRuntime:
self._run_locks[execution_id] = asyncio.Lock()
self._current_nodes[execution_id] = "unknown"
logger.debug(
f"Started run {run_id} for execution {execution_id} in stream {self.stream_id}"
)
logger.debug(f"Started run {run_id} for execution {execution_id} in stream {self.stream_id}")
return run_id
def end_run(
@@ -268,14 +260,6 @@ class StreamRuntime:
run.add_decision(decision)
# Report to outcome aggregator if available
if self._outcome_aggregator:
self._outcome_aggregator.record_decision(
stream_id=self.stream_id,
execution_id=execution_id,
decision=decision,
)
return decision_id
def record_outcome(
@@ -321,15 +305,6 @@ class StreamRuntime:
run.record_outcome(decision_id, outcome)
# Report to outcome aggregator if available
if self._outcome_aggregator:
self._outcome_aggregator.record_outcome(
stream_id=self.stream_id,
execution_id=execution_id,
decision_id=decision_id,
outcome=outcome,
)
# === PROBLEM RECORDING ===
def report_problem(
@@ -357,10 +332,7 @@ class StreamRuntime:
"""
run = self._runs.get(execution_id)
if run is None:
logger.warning(
f"report_problem called but no run for execution {execution_id}: "
f"[{severity}] {description}"
)
logger.warning(f"report_problem called but no run for execution {execution_id}: [{severity}] {description}")
return ""
return run.add_problem(
@@ -431,7 +403,7 @@ class StreamRuntimeAdapter:
by providing the same API as Runtime but routing to a specific execution.
"""
def __init__(self, stream_runtime: StreamRuntime, execution_id: str):
def __init__(self, stream_runtime: StreamDecisionTracker, execution_id: str):
"""
Create adapter for a specific execution.
@@ -13,7 +13,7 @@ from dataclasses import dataclass
from aiohttp import web
from framework.runtime.event_bus import EventBus
from framework.host.event_bus import EventBus
logger = logging.getLogger(__name__)
@@ -89,8 +89,7 @@ class WebhookServer:
)
await self._site.start()
logger.info(
f"Webhook server started on {self._config.host}:{self._config.port} "
f"with {len(self._routes)} route(s)"
f"Webhook server started on {self._config.host}:{self._config.port} with {len(self._routes)} route(s)"
)
async def stop(self) -> None:
+424
@@ -0,0 +1,424 @@
"""Worker — a single autonomous AgentLoop clone in a colony.
Two modes:
**Ephemeral (default)**: runs a single AgentLoop execution with a task,
emits a `SUBAGENT_REPORT` event on termination (success, partial, or
failed), and terminates. Used for parallel fan-out from the overseer.
**Persistent (``persistent=True``)**: runs an initial AgentLoop execution
(usually idle, no task) and then loops forever, receiving user chat via
``inject(message)`` and pumping each message into the already-running
agent loop via ``inject_event``. Used for the colony's long-running
client-facing overseer.
"""
from __future__ import annotations
import asyncio
import logging
import time
from dataclasses import dataclass, field
from enum import StrEnum
from pathlib import Path
from typing import Any
logger = logging.getLogger(__name__)
class WorkerStatus(StrEnum):
PENDING = "pending"
RUNNING = "running"
COMPLETED = "completed"
FAILED = "failed"
STOPPED = "stopped"
@dataclass
class WorkerResult:
output: dict[str, Any] = field(default_factory=dict)
error: str | None = None
tokens_used: int = 0
duration_seconds: float = 0.0
# New: structured report fields. Populated by report_to_parent tool or
# synthesised from AgentResult on termination.
status: str = "success" # "success" | "partial" | "failed" | "timeout" | "stopped"
summary: str = ""
data: dict[str, Any] = field(default_factory=dict)
@dataclass
class WorkerInfo:
id: str
task: str
status: WorkerStatus
started_at: float = 0.0
result: WorkerResult | None = None
class Worker:
"""A single autonomous clone in a colony.
Ephemeral mode (default):
- PENDING → RUNNING → COMPLETED/FAILED/STOPPED, one shot, terminates.
Persistent mode (``persistent=True``, used by the overseer):
- PENDING → RUNNING (never transitions out by itself).
- Receives user chat via ``inject(message)``.
- Each injected message is pumped into the running AgentLoop via
``inject_event``, triggering another turn.
"""
def __init__(
self,
worker_id: str,
task: str,
agent_loop: Any,
context: Any,
event_bus: Any = None,
colony_id: str = "",
persistent: bool = False,
storage_path: Path | None = None,
):
self.id = worker_id
self.task = task
self.status = WorkerStatus.PENDING
self._agent_loop = agent_loop
self._context = context
self._event_bus = event_bus
self._colony_id = colony_id
self._persistent = persistent
# Canonical on-disk home for this worker (conversations, events,
# result.json, data). Required when seed_conversation() is used —
# we deliberately do NOT fall back to CWD, which previously caused
# conversation parts to leak into the process working directory.
self._storage_path: Path | None = Path(storage_path) if storage_path is not None else None
self._task_handle: asyncio.Task | None = None
self._started_at: float = 0.0
self._result: WorkerResult | None = None
self._input_queue: asyncio.Queue[str | None] = asyncio.Queue()
# Set by AgentLoop when the worker's LLM calls ``report_to_parent``.
# Takes precedence over the synthesised report from AgentResult.
self._explicit_report: dict[str, Any] | None = None
# Back-reference so AgentLoop's report_to_parent handler can call
# record_explicit_report on the owning Worker. The agent_loop's
# _owner_worker attribute is set here during construction.
if agent_loop is not None:
agent_loop._owner_worker = self
@property
def info(self) -> WorkerInfo:
return WorkerInfo(
id=self.id,
task=self.task,
status=self.status,
started_at=self._started_at,
result=self._result,
)
@property
def is_active(self) -> bool:
return self.status in (WorkerStatus.PENDING, WorkerStatus.RUNNING)
@property
def is_persistent(self) -> bool:
return self._persistent
@property
def agent_loop(self) -> Any:
"""The wrapped AgentLoop. Used by the SessionManager chat path."""
return self._agent_loop
# ------------------------------------------------------------------
# Lifecycle
# ------------------------------------------------------------------
async def run(self) -> WorkerResult:
"""Entry point for the worker's background task.
Ephemeral workers run ``AgentLoop.execute`` once and terminate,
emitting a ``SUBAGENT_REPORT`` event.
Persistent workers run the initial execute then loop forever
processing injected user messages.
"""
self.status = WorkerStatus.RUNNING
self._started_at = time.monotonic()
try:
result = await self._agent_loop.execute(self._context)
duration = time.monotonic() - self._started_at
if result.success:
self.status = WorkerStatus.COMPLETED
self._result = self._build_result(result, duration, default_status="success")
else:
self.status = WorkerStatus.FAILED
self._result = self._build_result(result, duration, default_status="failed")
await self._emit_terminal_events(result)
if self._persistent:
# Persistent worker: keep the loop alive, pump injected
# messages forever. Status stays RUNNING; info reflects
# current progress.
self.status = WorkerStatus.RUNNING
await self._persistent_input_loop()
return self._result # type: ignore[return-value]
except asyncio.CancelledError:
self.status = WorkerStatus.STOPPED
duration = time.monotonic() - self._started_at
self._result = WorkerResult(
error="Worker stopped by queen",
duration_seconds=duration,
status="stopped",
summary="Worker was cancelled before completion.",
)
await self._emit_terminal_events(None, force_status="stopped")
return self._result
except Exception as exc:
self.status = WorkerStatus.FAILED
duration = time.monotonic() - self._started_at
self._result = WorkerResult(
error=str(exc),
duration_seconds=duration,
status="failed",
summary=f"Worker crashed: {exc}",
)
logger.error("Worker %s failed: %s", self.id, exc, exc_info=True)
await self._emit_terminal_events(None, force_status="failed")
return self._result
async def _persistent_input_loop(self) -> None:
"""Pump injected messages into the running AgentLoop forever.
Each ``inject(msg)`` call puts a string on ``_input_queue``. This
loop awaits it and calls ``agent_loop.inject_event(msg)`` which
wakes the loop's pending user-input gate.
"""
while True:
msg = await self._input_queue.get()
if msg is None:
# Sentinel: shutdown
return
try:
await self._agent_loop.inject_event(msg, is_client_input=True)
except Exception:
logger.exception(
"Overseer %s: inject_event failed for injected message",
self.id,
)
# ------------------------------------------------------------------
# Reporting
# ------------------------------------------------------------------
def record_explicit_report(
self,
status: str,
summary: str,
data: dict[str, Any] | None = None,
) -> None:
"""Called by AgentLoop when the worker's LLM invokes ``report_to_parent``.
Stores the report so that when ``run()`` reaches the termination
block, the explicit report wins over a synthesised one.
"""
self._explicit_report = {
"status": status,
"summary": summary,
"data": data or {},
}
def _build_result(
self,
agent_result: Any,
duration: float,
default_status: str,
) -> WorkerResult:
"""Construct a WorkerResult from AgentResult + optional explicit report."""
explicit = self._explicit_report
if explicit is not None:
return WorkerResult(
output=dict(agent_result.output or {}),
error=agent_result.error,
tokens_used=getattr(agent_result, "tokens_used", 0),
duration_seconds=duration,
status=explicit["status"],
summary=explicit["summary"],
data=explicit["data"],
)
# Synthesise a minimal report from AgentResult
if agent_result.success:
summary = f"Completed task '{self.task[:80]}' with {len(agent_result.output or {})} outputs."
data = dict(agent_result.output or {})
else:
summary = f"Task '{self.task[:80]}' failed: {agent_result.error or 'unknown'}"
data = {}
return WorkerResult(
output=dict(agent_result.output or {}),
error=agent_result.error,
tokens_used=getattr(agent_result, "tokens_used", 0),
duration_seconds=duration,
status=default_status,
summary=summary,
data=data,
)
async def _emit_terminal_events(
self,
agent_result: Any,
force_status: str | None = None,
) -> None:
"""Emit EXECUTION_COMPLETED/FAILED AND SUBAGENT_REPORT on termination.
Both events are published so that consumers that listen for
either shape keep working. The SUBAGENT_REPORT carries the
structured summary the overseer actually cares about.
"""
if self._event_bus is None:
return
from framework.host.event_bus import AgentEvent, EventType
# EXECUTION_COMPLETED / EXECUTION_FAILED (backwards-compat)
if agent_result is not None:
lifecycle_type = EventType.EXECUTION_COMPLETED if agent_result.success else EventType.EXECUTION_FAILED
await self._event_bus.publish(
AgentEvent(
type=lifecycle_type,
stream_id=self._context.stream_id or self.id,
node_id=self.id,
execution_id=self._context.execution_id or self.id,
data={
"worker_id": self.id,
"colony_id": self._colony_id,
"task": self.task,
"success": agent_result.success,
"error": agent_result.error,
"output_keys": (list(agent_result.output.keys()) if agent_result.output else []),
},
)
)
# SUBAGENT_REPORT — the structured channel the overseer awaits
result = self._result
if result is None:
return
await self._event_bus.publish(
AgentEvent(
type=EventType.SUBAGENT_REPORT,
stream_id=self._context.stream_id or self.id,
node_id=self.id,
execution_id=self._context.execution_id or self.id,
data={
"worker_id": self.id,
"colony_id": self._colony_id,
"task": self.task,
"status": force_status or result.status,
"summary": result.summary,
"data": result.data,
"error": result.error,
"duration_seconds": result.duration_seconds,
"tokens_used": result.tokens_used,
},
)
)
# ------------------------------------------------------------------
# External control
# ------------------------------------------------------------------
async def start_background(self) -> None:
"""Spawn the worker's run() as an asyncio background task."""
self._task_handle = asyncio.create_task(self.run(), name=f"worker:{self.id}")
# Surface any exception that escapes run(); without this callback
# a crash here only becomes visible when stop() eventually awaits
# the handle (and is silently lost if stop() is never called).
self._task_handle.add_done_callback(self._on_task_done)
def _on_task_done(self, task: asyncio.Task) -> None:
if task.cancelled():
return
exc = task.exception()
if exc is not None:
logger.error(
"Worker '%s' background task crashed: %s",
self.id,
exc,
exc_info=exc,
)
async def stop(self) -> None:
"""Cancel the worker's background task, if any."""
if self._persistent:
# Signal the input loop to exit cleanly first
await self._input_queue.put(None)
if self._task_handle and not self._task_handle.done():
self._task_handle.cancel()
try:
await self._task_handle
except asyncio.CancelledError:
pass
async def inject(self, message: str) -> None:
"""Pump a user message into the worker.
For ephemeral workers this is rarely used (they don't take
follow-up input). For persistent overseers this is the chat
injection path.
"""
await self._input_queue.put(message)
async def seed_conversation(self, messages: list[dict[str, Any]]) -> None:
"""Pre-populate the worker's ConversationStore before starting.
Used when forking a queen DM into a colony: the DM's prior
conversation becomes the colony overseer's starting point so the
overseer resumes mid-thought instead of greeting the user fresh.
``messages`` is a list of dicts matching the ConversationStore's
part format: ``{seq, role, content, tool_calls, tool_use_id,
created_at, phase}``. The caller is responsible for rewriting
``agent_id`` to match the new worker, and for numbering ``seq``
monotonically from 0.
Must be called BEFORE ``start_background``.
"""
if self.status != WorkerStatus.PENDING:
raise RuntimeError(
f"seed_conversation must be called before start_background (worker {self.id} is {self.status})"
)
# Write parts directly to the worker's on-disk conversation store
# so that the AgentLoop's FileConversationStore picks them up when
# NodeConversation loads from disk. We require an explicit
# storage_path — falling back to CWD previously caused part files
# to leak into the process working directory.
if self._storage_path is None:
raise RuntimeError(
f"seed_conversation requires storage_path to be set on "
f"Worker {self.id}; construct Worker with storage_path=..."
)
parts_dir = self._storage_path / "conversations" / "parts"
parts_dir.mkdir(parents=True, exist_ok=True)
import json
for i, msg in enumerate(messages):
msg = dict(msg) # copy
msg.setdefault("seq", i)
msg.setdefault("agent_id", self.id)
part_file = parts_dir / f"{msg['seq']:010d}.json"
part_file.write_text(json.dumps(msg), encoding="utf-8")
logger.info(
"Worker %s: seeded %d messages into %s",
self.id,
len(messages),
parts_dir,
)
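# Usage sketch (names illustrative): fork a DM into a persistent overseer.
#
#     worker = Worker(
#         "overseer-1", task="", agent_loop=loop, context=ctx,
#         persistent=True, storage_path=Path("colony/workers/overseer-1"),
#     )
#     await worker.seed_conversation(
#         [{"seq": 0, "role": "user", "content": "Continue the migration plan."}]
#     )
#     await worker.start_background()
#     await worker.inject("What's left on step 3?")  # pumped via inject_event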
+1 -3
@@ -50,9 +50,7 @@ class AnthropicProvider(LLMProvider):
# Delegate to LiteLLMProvider internally.
self.api_key = api_key or _get_api_key_from_credential_store()
if not self.api_key:
raise ValueError(
"Anthropic API key required. Set ANTHROPIC_API_KEY env var or pass api_key."
)
raise ValueError("Anthropic API key required. Set ANTHROPIC_API_KEY env var or pass api_key.")
self.model = model
+8 -29
@@ -53,17 +53,9 @@ _TOKEN_REFRESH_BUFFER_SECS = 60
# Credentials file in ~/.hive/ (native implementation)
_ACCOUNTS_FILE = Path.home() / ".hive" / "antigravity-accounts.json"
_IDE_STATE_DB_MAC = (
Path.home()
/ "Library"
/ "Application Support"
/ "Antigravity"
/ "User"
/ "globalStorage"
/ "state.vscdb"
)
_IDE_STATE_DB_LINUX = (
Path.home() / ".config" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
Path.home() / "Library" / "Application Support" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
)
_IDE_STATE_DB_LINUX = Path.home() / ".config" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
_IDE_STATE_DB_KEY = "antigravityUnifiedStateSync.oauthToken"
_BASE_HEADERS: dict[str, str] = {
@@ -368,9 +360,7 @@ def _to_gemini_contents(
def _map_finish_reason(reason: str) -> str:
return {"STOP": "stop", "MAX_TOKENS": "max_tokens", "OTHER": "tool_use"}.get(
(reason or "").upper(), "stop"
)
return {"STOP": "stop", "MAX_TOKENS": "max_tokens", "OTHER": "tool_use"}.get((reason or "").upper(), "stop")
def _parse_complete_response(raw: dict[str, Any], model: str) -> LLMResponse:
@@ -538,8 +528,7 @@ class AntigravityProvider(LLMProvider):
return self._access_token
raise RuntimeError(
"No valid Antigravity credentials. "
"Run: uv run python core/antigravity_auth.py auth account add"
"No valid Antigravity credentials. Run: uv run python core/antigravity_auth.py auth account add"
)
# --- Request building -------------------------------------------------- #
@@ -593,11 +582,7 @@ class AntigravityProvider(LLMProvider):
token = self._ensure_token()
body_bytes = json.dumps(body).encode("utf-8")
path = (
"/v1internal:streamGenerateContent?alt=sse"
if streaming
else "/v1internal:generateContent"
)
path = "/v1internal:streamGenerateContent?alt=sse" if streaming else "/v1internal:generateContent"
headers = {
**_BASE_HEADERS,
"Authorization": f"Bearer {token}",
@@ -619,9 +604,7 @@ class AntigravityProvider(LLMProvider):
if result:
self._access_token, self._token_expires_at = result
headers["Authorization"] = f"Bearer {self._access_token}"
req2 = urllib.request.Request(
url, data=body_bytes, headers=headers, method="POST"
)
req2 = urllib.request.Request(url, data=body_bytes, headers=headers, method="POST")
try:
return urllib.request.urlopen(req2, timeout=120) # noqa: S310
except urllib.error.HTTPError as exc2:
@@ -642,9 +625,7 @@ class AntigravityProvider(LLMProvider):
last_exc = exc
continue
raise RuntimeError(
f"All Antigravity endpoints failed. Last error: {last_exc}"
) from last_exc
raise RuntimeError(f"All Antigravity endpoints failed. Last error: {last_exc}") from last_exc
# --- LLMProvider interface --------------------------------------------- #
@@ -683,9 +664,7 @@ class AntigravityProvider(LLMProvider):
try:
body = self._build_body(messages, system, tools, max_tokens)
http_resp = self._post(body, streaming=True)
for event in _parse_sse_stream(
http_resp, self.model, self._thought_sigs.__setitem__
):
for event in _parse_sse_stream(http_resp, self.model, self._thought_sigs.__setitem__):
loop.call_soon_threadsafe(queue.put_nowait, event)
except Exception as exc:
logger.error("Antigravity stream error: %s", exc)
+24
@@ -12,6 +12,11 @@ Vision support rules are derived from official vendor documentation:
from __future__ import annotations
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from framework.llm.provider import Tool
def _model_name(model: str) -> str:
"""Return the bare model name after stripping any 'provider/' prefix."""
@@ -104,3 +109,22 @@ def supports_image_tool_results(model: str) -> bool:
# 5. Default: assume vision capable
# Covers: OpenAI, Anthropic, Google, Mistral, Kimi, and other hosted providers
return True
def filter_tools_for_model(tools: list[Tool], model: str) -> tuple[list[Tool], list[str]]:
"""Drop image-producing tools for text-only models.
Returns ``(filtered_tools, hidden_names)``. For vision-capable models
(or when *model* is empty) the input list is returned unchanged and
``hidden_names`` is empty. For text-only models any tool with
``produces_image=True`` is removed so the LLM never sees it in its
schema; this avoids wasted calls and stale "screenshot failed" entries
in agent memory.
"""
if not model or supports_image_tool_results(model):
return list(tools), []
hidden = [t.name for t in tools if t.produces_image]
if not hidden:
return list(tools), []
kept = [t for t in tools if not t.produces_image]
return kept, hidden
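# Usage sketch (model string and tool name illustrative; assumes the rules
# above classify the model as text-only):
#
#     filtered, hidden = filter_tools_for_model(tools, "groq/llama-3.1-8b-instant")
#     # hidden might be ["browser_screenshot"]; surface it so the agent
#     # knows why image tools are absent from its schema.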
+101
@@ -0,0 +1,101 @@
"""Thread-safe API key pool with round-robin rotation and health tracking.
When multiple API keys are configured, the pool rotates through them on each
request. Keys that hit rate limits are temporarily cooled down so the next
call automatically uses a healthy key -- no sleep required.
"""
from __future__ import annotations
import logging
import threading
import time
from dataclasses import dataclass
logger = logging.getLogger(__name__)
@dataclass
class KeyHealth:
"""Per-key health counters."""
rate_limited_until: float = 0.0 # monotonic timestamp
consecutive_errors: int = 0
total_requests: int = 0
total_successes: int = 0
class KeyPool:
"""Round-robin key pool with health tracking.
Thread-safe: all mutations protected by a lock so concurrent LLM calls
(e.g. parallel tool execution in EventLoopNode) don't race.
"""
def __init__(self, keys: list[str]) -> None:
if not keys:
raise ValueError("KeyPool requires at least one key")
self._keys = list(keys)
self._index = 0
self._health: dict[str, KeyHealth] = {k: KeyHealth() for k in keys}
self._lock = threading.Lock()
@property
def size(self) -> int:
return len(self._keys)
def get_key(self) -> str:
"""Return the next healthy key (round-robin).
If every key is currently rate-limited, returns the one whose cooldown
expires soonest so the caller can proceed with minimal delay.
"""
with self._lock:
now = time.monotonic()
for _ in range(len(self._keys)):
key = self._keys[self._index]
self._index = (self._index + 1) % len(self._keys)
health = self._health[key]
if health.rate_limited_until <= now:
health.total_requests += 1
return key
# All rate-limited -- pick the one that expires soonest.
soonest = min(self._keys, key=lambda k: self._health[k].rate_limited_until)
self._health[soonest].total_requests += 1
return soonest
def mark_rate_limited(self, key: str, retry_after: float = 60.0) -> None:
"""Mark *key* as rate-limited for *retry_after* seconds."""
with self._lock:
health = self._health.get(key)
if health:
health.rate_limited_until = time.monotonic() + retry_after
health.consecutive_errors += 1
logger.info(
"[key-pool] Key ...%s rate-limited for %.0fs (errors=%d)",
key[-6:],
retry_after,
health.consecutive_errors,
)
def mark_success(self, key: str) -> None:
"""Record a successful call on *key*."""
with self._lock:
health = self._health.get(key)
if health:
health.consecutive_errors = 0
health.total_successes += 1
def get_stats(self) -> dict[str, dict]:
"""Return health stats keyed by the last 6 chars of each key."""
with self._lock:
now = time.monotonic()
return {
f"...{k[-6:]}": {
"healthy": self._health[k].rate_limited_until <= now,
"requests": self._health[k].total_requests,
"successes": self._health[k].total_successes,
"consecutive_errors": self._health[k].consecutive_errors,
}
for k in self._keys
}
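# Usage sketch: a 429 on one key rotates the next call straight to a
# healthy key with no sleep in between (key strings illustrative).
#
#     pool = KeyPool(["sk-key-aaa111", "sk-key-bbb222"])
#     first = pool.get_key()                 # round-robin pick
#     pool.mark_rate_limited(first, 30.0)    # 429 -> 30 s cooldown
#     second = pool.get_key()                # skips the cooled-down key
#     pool.mark_success(second)
#     pool.get_stats()["...aaa111"]["healthy"]  # False until cooldown expires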
+479 -97
@@ -7,6 +7,8 @@ Groq, and local models.
See: https://docs.litellm.ai/docs/providers
"""
from __future__ import annotations
import ast
import asyncio
import hashlib
@@ -18,7 +20,10 @@ import time
from collections.abc import AsyncIterator
from datetime import datetime
from pathlib import Path
from typing import Any
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from framework.llm.key_pool import KeyPool
try:
import litellm
@@ -33,6 +38,10 @@ from framework.llm.stream_events import StreamEvent
logger = logging.getLogger(__name__)
logging.getLogger("openai._base_client").setLevel(logging.WARNING)
logging.getLogger("httpx").setLevel(logging.WARNING)
logging.getLogger("httpcore").setLevel(logging.WARNING)
def _patch_litellm_anthropic_oauth() -> None:
"""Patch litellm's Anthropic header construction to fix OAuth token handling.
@@ -91,9 +100,7 @@ def _patch_litellm_anthropic_oauth() -> None:
result["authorization"] = f"Bearer {token}"
# Merge the OAuth beta header with any existing beta headers.
existing_beta = result.get("anthropic-beta", "")
beta_parts = (
[b.strip() for b in existing_beta.split(",") if b.strip()] if existing_beta else []
)
beta_parts = [b.strip() for b in existing_beta.split(",") if b.strip()] if existing_beta else []
if ANTHROPIC_OAUTH_BETA_HEADER not in beta_parts:
beta_parts.append(ANTHROPIC_OAUTH_BETA_HEADER)
result["anthropic-beta"] = ",".join(beta_parts)
@@ -182,6 +189,14 @@ def _ensure_ollama_chat_prefix(model: str) -> str:
RATE_LIMIT_MAX_RETRIES = 10
RATE_LIMIT_BACKOFF_BASE = 2 # seconds
RATE_LIMIT_MAX_DELAY = 120 # seconds - cap to prevent absurd waits
# Separate, much lower cap for "empty response, finish_reason=stop"
# scenarios. Unlike a real 429, these are rarely transient: Gemini
# returns stop+empty on silently-filtered safety blocks, poisoned
# conversation state (dangling tool_result after compaction), or
# malformed tool schemas. Waiting minutes doesn't fix any of those, so
# give up after 3 attempts (2+4+8 = 14s) and surface an actionable
# error instead of burning 12+ minutes on exponential backoff.
EMPTY_RESPONSE_MAX_RETRIES = 3
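# Rough shape of the delay schedule (a sketch; the real _compute_retry_delay
# may add jitter): attempts 0/1/2 wait 2 s/4 s/8 s, so the empty-response
# cap of 3 attempts costs at most 2 + 4 + 8 = 14 s before giving up.
#
#     def _sketch_delay(attempt: int) -> float:
#         return min(RATE_LIMIT_MAX_DELAY, RATE_LIMIT_BACKOFF_BASE ** (attempt + 1))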
MINIMAX_API_BASE = "https://api.minimax.io/v1"
OPENROUTER_API_BASE = "https://openrouter.ai/api/v1"
@@ -245,9 +260,7 @@ def _claude_code_billing_header(messages: list[dict[str, Any]]) -> str:
break
sampled = "".join(_sample_js_code_unit(first_text, i) for i in (4, 7, 20))
version_hash = hashlib.sha256(
f"{_CLAUDE_CODE_BILLING_SALT}{sampled}{CLAUDE_CODE_VERSION}".encode()
).hexdigest()
version_hash = hashlib.sha256(f"{_CLAUDE_CODE_BILLING_SALT}{sampled}{CLAUDE_CODE_VERSION}".encode()).hexdigest()
entrypoint = os.environ.get("CLAUDE_CODE_ENTRYPOINT", "").strip() or "cli"
return (
f"x-anthropic-billing-header: cc_version={CLAUDE_CODE_VERSION}.{version_hash[:3]}; "
@@ -272,6 +285,10 @@ OPENROUTER_TOOL_COMPAT_CACHE_TTL_SECONDS = 3600
# OpenRouter routing can change over time, so tool-compat caching must expire.
OPENROUTER_TOOL_COMPAT_MODEL_CACHE: dict[str, float] = {}
# Transient stream errors (network blips, timeouts) use a separate cap
# from rate-limit retries — 3 retries is sufficient for connection failures.
STREAM_TRANSIENT_MAX_RETRIES = 3
# Directory for dumping failed requests
FAILED_REQUESTS_DIR = Path.home() / ".hive" / "failed_requests"
@@ -315,9 +332,7 @@ def _prune_failed_request_dumps(max_files: int = MAX_FAILED_REQUEST_DUMPS) -> No
def _remember_openrouter_tool_compat_model(model: str) -> None:
"""Cache OpenRouter tool-compat fallback for a bounded time window."""
OPENROUTER_TOOL_COMPAT_MODEL_CACHE[model] = (
time.monotonic() + OPENROUTER_TOOL_COMPAT_CACHE_TTL_SECONDS
)
OPENROUTER_TOOL_COMPAT_MODEL_CACHE[model] = time.monotonic() + OPENROUTER_TOOL_COMPAT_CACHE_TTL_SECONDS
def _is_openrouter_tool_compat_cached(model: str) -> bool:
@@ -338,34 +353,145 @@ def _dump_failed_request(
attempt: int,
) -> str:
"""Dump failed request to a file for debugging. Returns the file path."""
FAILED_REQUESTS_DIR.mkdir(parents=True, exist_ok=True)
try:
FAILED_REQUESTS_DIR.mkdir(parents=True, exist_ok=True)
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
filename = f"{error_type}_{model.replace('/', '_')}_{timestamp}.json"
filepath = FAILED_REQUESTS_DIR / filename
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
filename = f"{error_type}_{model.replace('/', '_')}_{timestamp}.json"
filepath = FAILED_REQUESTS_DIR / filename
# Build dump data
messages = kwargs.get("messages", [])
dump_data = {
"timestamp": datetime.now().isoformat(),
"model": model,
"error_type": error_type,
"attempt": attempt,
"estimated_tokens": _estimate_tokens(model, messages),
"num_messages": len(messages),
"messages": messages,
"tools": kwargs.get("tools"),
"max_tokens": kwargs.get("max_tokens"),
"temperature": kwargs.get("temperature"),
# Build dump data
messages = kwargs.get("messages", [])
dump_data = {
"timestamp": datetime.now().isoformat(),
"model": model,
"error_type": error_type,
"attempt": attempt,
"estimated_tokens": _estimate_tokens(model, messages),
"num_messages": len(messages),
"api_base": kwargs.get("api_base"),
"request_keys": sorted(kwargs.keys()),
"messages": messages,
"tools": kwargs.get("tools"),
"max_tokens": kwargs.get("max_tokens"),
"temperature": kwargs.get("temperature"),
"stream": kwargs.get("stream"),
"tool_choice": kwargs.get("tool_choice"),
"response_format": kwargs.get("response_format"),
}
with open(filepath, "w", encoding="utf-8") as f:
json.dump(dump_data, f, indent=2, default=str)
# Prune old dumps to prevent unbounded disk growth
_prune_failed_request_dumps()
return str(filepath)
except OSError as e:
logger.warning(f"Failed to dump request debug log to {FAILED_REQUESTS_DIR}: {e}")
return "log_write_failed"
def _summarize_message_content(content: Any) -> dict[str, Any]:
"""Return a structural summary of one message content payload."""
if isinstance(content, str):
return {
"content_kind": "string",
"text_chars": len(content),
}
if isinstance(content, list):
block_types: list[str] = []
text_chars = 0
for block in content:
if isinstance(block, dict):
block_type = str(block.get("type", "unknown"))
block_types.append(block_type)
if block_type == "text":
text_chars += len(str(block.get("text", "")))
elif block_type == "tool_result":
block_content = block.get("content")
if isinstance(block_content, str):
text_chars += len(block_content)
elif isinstance(block_content, list):
for inner in block_content:
if isinstance(inner, dict) and inner.get("type") == "text":
text_chars += len(str(inner.get("text", "")))
else:
block_types.append(type(block).__name__)
return {
"content_kind": "list",
"blocks": len(content),
"block_types": block_types,
"text_chars": text_chars,
}
return {
"content_kind": type(content).__name__,
}
with open(filepath, "w", encoding="utf-8") as f:
json.dump(dump_data, f, indent=2, default=str)
# Prune old dumps to prevent unbounded disk growth
_prune_failed_request_dumps()
def _summarize_messages_for_log(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""Build a high-signal, no-secret summary of the outgoing messages payload."""
summary: list[dict[str, Any]] = []
for idx, message in enumerate(messages):
item: dict[str, Any] = {
"idx": idx,
"role": message.get("role"),
"keys": sorted(message.keys()),
}
item.update(_summarize_message_content(message.get("content")))
tool_calls = message.get("tool_calls")
if isinstance(tool_calls, list):
item["tool_calls"] = len(tool_calls)
tool_names = []
for tc in tool_calls:
if isinstance(tc, dict):
fn = tc.get("function")
if isinstance(fn, dict) and fn.get("name"):
tool_names.append(str(fn["name"]))
if tool_names:
item["tool_call_names"] = tool_names
if message.get("cache_control"):
item["cache_control"] = True
if message.get("tool_call_id"):
item["tool_call_id"] = str(message.get("tool_call_id"))
summary.append(item)
return summary
return str(filepath)
def _summarize_request_for_log(kwargs: dict[str, Any]) -> dict[str, Any]:
"""Return a compact structural summary of a LiteLLM request payload."""
tools = kwargs.get("tools")
tool_names: list[str] = []
if isinstance(tools, list):
for tool in tools:
if isinstance(tool, dict):
fn = tool.get("function")
if isinstance(fn, dict) and fn.get("name"):
tool_names.append(str(fn["name"]))
messages = kwargs.get("messages", [])
if isinstance(messages, list):
non_system_roles = [m.get("role") for m in messages if m.get("role") != "system"]
else:
non_system_roles = []
return {
"model": kwargs.get("model"),
"api_base": kwargs.get("api_base"),
"stream": kwargs.get("stream"),
"max_tokens": kwargs.get("max_tokens"),
"tool_count": len(tools) if isinstance(tools, list) else 0,
"tool_names": tool_names,
"tool_choice": kwargs.get("tool_choice"),
"response_format": bool(kwargs.get("response_format")),
"message_count": len(messages) if isinstance(messages, list) else 0,
"non_system_message_count": len(non_system_roles),
"first_non_system_role": non_system_roles[0] if non_system_roles else None,
"last_non_system_role": non_system_roles[-1] if non_system_roles else None,
"system_only": bool(messages) and not non_system_roles,
"messages": _summarize_messages_for_log(messages if isinstance(messages, list) else []),
}
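# Usage sketch: attach the structural summary to a failure log without
# leaking message contents.
#
#     logger.error(
#         "[retry] request shape: %s",
#         json.dumps(_summarize_request_for_log(kwargs), default=str),
#     )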
def _compute_retry_delay(
@@ -458,6 +584,59 @@ def _is_stream_transient_error(exc: BaseException) -> bool:
return isinstance(exc, transient_types)
def _extract_text_tool_calls(
text: str,
) -> tuple[list, str]:
"""Extract hallucinated tool calls from ``<tool_code>`` blocks in LLM text.
Some models (notably Gemini) emit tool invocations as text instead of using
the structured function-calling API. This function parses those blocks and
returns ``(tool_call_events, cleaned_text)`` where *cleaned_text* has the
``<tool_code>`` blocks removed.
Expected format::
<tool_code>
{
"tool_name": { ...args }
}
</tool_code>
"""
from framework.llm.stream_events import ToolCallEvent
pattern = re.compile(r"<tool_code>\s*(.*?)\s*</tool_code>", re.DOTALL)
events: list[ToolCallEvent] = []
cleaned = text
for match in pattern.finditer(text):
raw = match.group(1).strip()
try:
payload = json.loads(raw)
except json.JSONDecodeError:
logger.warning("[_extract_text_tool_calls] failed to parse JSON: %s", raw[:200])
continue
if not isinstance(payload, dict):
continue
for tool_name, tool_args in payload.items():
key = f"{tool_name}:{json.dumps(tool_args, sort_keys=True)}"
digest = hashlib.md5(key.encode()).hexdigest()[:12]
call_id = f"synth_{digest}"
events.append(
ToolCallEvent(
tool_use_id=call_id,
tool_name=tool_name,
tool_input=tool_args if isinstance(tool_args, dict) else {},
)
)
if events:
cleaned = pattern.sub("", text).strip()
return events, cleaned
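# Example: a Gemini-style reply that inlines a call as text.
#
#     events, cleaned = _extract_text_tool_calls(
#         'Checking.\n<tool_code>\n{"read_file": {"path": "notes.md"}}\n</tool_code>'
#     )
#     # events[0].tool_name == "read_file"; the id is a deterministic
#     # "synth_" + md5 digest prefix, and cleaned == "Checking."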
class LiteLLMProvider(LLMProvider):
"""
LiteLLM-based LLM provider for multi-provider support.
@@ -500,6 +679,7 @@ class LiteLLMProvider(LLMProvider):
model: str = "gpt-4o-mini",
api_key: str | None = None,
api_base: str | None = None,
api_keys: list[str] | None = None,
**kwargs: Any,
):
"""
@@ -512,6 +692,9 @@ class LiteLLMProvider(LLMProvider):
look for the appropriate env var (OPENAI_API_KEY,
ANTHROPIC_API_KEY, etc.)
api_base: Custom API base URL (for proxies or local deployments)
api_keys: Optional list of API keys for key-pool rotation. When
provided with 2+ keys, a :class:`KeyPool` is created and
keys are rotated on rate-limit errors.
**kwargs: Additional arguments passed to litellm.completion()
"""
# Kimi For Coding exposes an Anthropic-compatible endpoint at
@@ -533,27 +716,64 @@ class LiteLLMProvider(LLMProvider):
if api_base and api_base.rstrip("/").endswith("/v1"):
api_base = api_base.rstrip("/")[:-3]
self.model = model
self.api_key = api_key
# Key pool: when multiple keys are provided, enable rotation.
self._key_pool: KeyPool | None = None
if api_keys and len(api_keys) > 1:
from framework.llm.key_pool import KeyPool
self._key_pool = KeyPool(api_keys)
self.api_key = api_keys[0] # default for OAuth detection below
logger.info(
"[litellm] Key pool enabled with %d keys for model %s",
len(api_keys),
model,
)
else:
self.api_key = api_key or (api_keys[0] if api_keys else None)
self.api_base = api_base or self._default_api_base_for_model(_original_model)
self.extra_kwargs = kwargs
# Detect Claude Code OAuth subscription by checking the api_key prefix.
self._claude_code_oauth = bool(api_key and api_key.startswith("sk-ant-oat"))
self._claude_code_oauth = bool(self.api_key and self.api_key.startswith("sk-ant-oat"))
if self._claude_code_oauth:
# Anthropic requires a specific User-Agent for OAuth requests.
eh = self.extra_kwargs.setdefault("extra_headers", {})
eh.setdefault("user-agent", CLAUDE_CODE_USER_AGENT)
# The Codex ChatGPT backend (chatgpt.com/backend-api/codex) rejects
# several standard OpenAI params: max_output_tokens, stream_options.
self._codex_backend = bool(
self.api_base and "chatgpt.com/backend-api/codex" in self.api_base
)
self._codex_backend = bool(self.api_base and "chatgpt.com/backend-api/codex" in self.api_base)
# Antigravity routes through a local OpenAI-compatible proxy — no patches needed.
self._antigravity = bool(self.api_base and "localhost:8069" in self.api_base)
if litellm is None:
raise ImportError(
"LiteLLM is not installed. Please install it with: uv pip install litellm"
)
raise ImportError("LiteLLM is not installed. Please install it with: uv pip install litellm")
def reconfigure(self, model: str, api_key: str | None = None, api_base: str | None = None) -> None:
"""Hot-swap the model, API key, and/or base URL on this provider instance.
Since the same LiteLLMProvider object is shared by reference across the
session, queen runner, agent runtime, and execution streams, mutating
these attributes in-place propagates to all callers on the next LLM call.
"""
_original_model = model
if _is_ollama_model(model):
model = _ensure_ollama_chat_prefix(model)
elif model.lower().startswith("kimi/"):
model = "anthropic/" + model[len("kimi/") :]
if api_base and api_base.rstrip("/").endswith("/v1"):
api_base = api_base.rstrip("/")[:-3]
elif model.lower().startswith("hive/"):
model = "anthropic/" + model[len("hive/") :]
if api_base and api_base.rstrip("/").endswith("/v1"):
api_base = api_base.rstrip("/")[:-3]
self.model = model
self.api_key = api_key
self.api_base = api_base or self._default_api_base_for_model(_original_model)
self._claude_code_oauth = bool(api_key and api_key.startswith("sk-ant-oat"))
if self._claude_code_oauth:
eh = self.extra_kwargs.setdefault("extra_headers", {})
eh.setdefault("user-agent", CLAUDE_CODE_USER_AGENT)
self._codex_backend = bool(self.api_base and "chatgpt.com/backend-api/codex" in self.api_base)
self._antigravity = bool(self.api_base and "localhost:8069" in self.api_base)
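# Usage sketch (model string illustrative): every component holding this
# provider by reference sees the new model on its next call.
#
#     provider.reconfigure("openrouter/qwen/qwen3-coder", api_key=new_key)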
# Note: The Codex ChatGPT backend is a Responses API endpoint at
# chatgpt.com/backend-api/codex/responses. LiteLLM's model registry
@@ -575,13 +795,21 @@ class LiteLLMProvider(LLMProvider):
return HIVE_API_BASE
return None
def _completion_with_rate_limit_retry(
self, max_retries: int | None = None, **kwargs: Any
) -> Any:
"""Call litellm.completion with retry on 429 rate limit errors and empty responses."""
def _completion_with_rate_limit_retry(self, max_retries: int | None = None, **kwargs: Any) -> Any:
"""Call litellm.completion with retry on 429 rate limit errors and empty responses.
When a :class:`KeyPool` is configured, rate-limited keys are rotated
automatically so the next attempt uses a different key -- no sleep
needed between attempts.
"""
model = kwargs.get("model", self.model)
retries = max_retries if max_retries is not None else RATE_LIMIT_MAX_RETRIES
for attempt in range(retries + 1):
# Rotate key from pool when available.
current_key: str | None = None
if self._key_pool:
current_key = self._key_pool.get_key()
kwargs["api_key"] = current_key
try:
response = litellm.completion(**kwargs) # type: ignore[union-attr]
@@ -599,15 +827,10 @@ class LiteLLMProvider(LLMProvider):
None,
)
if last_role == "assistant":
logger.debug(
"[retry] Empty response after assistant message — "
"expected, not retrying."
)
logger.debug("[retry] Empty response after assistant message — expected, not retrying.")
return response
finish_reason = (
response.choices[0].finish_reason if response.choices else "unknown"
)
finish_reason = response.choices[0].finish_reason if response.choices else "unknown"
# Dump full request to file for debugging
token_count, token_method = _estimate_tokens(model, messages)
dump_path = _dump_failed_request(
@@ -636,28 +859,51 @@ class LiteLLMProvider(LLMProvider):
)
return response
if attempt == retries:
empty_cap = min(retries, EMPTY_RESPONSE_MAX_RETRIES)
if attempt >= empty_cap:
logger.error(
f"[retry] GAVE UP on {model} after {retries + 1} "
f"attempts — empty response "
f"[retry] GAVE UP on {model} after "
f"{attempt + 1} attempts — empty response "
f"(finish_reason={finish_reason}, "
f"choices={len(response.choices) if response.choices else 0})"
f"choices={len(response.choices) if response.choices else 0}). "
f"This is almost never a rate limit despite the "
f"earlier log message — check the dumped request "
f"at {dump_path} for poisoned conversation state "
f"(dangling tool_result after compaction), a "
f"safety-filter trigger in the prompt, or a "
f"malformed tool schema."
)
return response
wait = _compute_retry_delay(attempt)
logger.warning(
f"[retry] {model} returned empty response "
f"(finish_reason={finish_reason}, "
f"choices={len(response.choices) if response.choices else 0}) "
f"likely rate limited or quota exceeded. "
f"choices={len(response.choices) if response.choices else 0}). "
f"Retrying in {wait}s "
f"(attempt {attempt + 1}/{retries})"
f"(attempt {attempt + 1}/{empty_cap}). "
f"Note: empty-response retries are capped at "
f"{EMPTY_RESPONSE_MAX_RETRIES} because this is rarely "
f"a transient rate limit on small payloads."
)
time.sleep(wait)
continue
if self._key_pool and current_key:
self._key_pool.mark_success(current_key)
return response
except RateLimitError as e:
# Key pool: mark the offending key and rotate immediately.
if self._key_pool and current_key:
self._key_pool.mark_rate_limited(current_key, retry_after=60.0)
# When we have other healthy keys, skip the sleep -- the
# next iteration will pick a different key automatically.
if attempt < retries:
logger.info(
"[retry] Key pool rotating away from ...%s on 429",
current_key[-6:],
)
continue
# Dump full request to file for debugging
messages = kwargs.get("messages", [])
token_count, token_method = _estimate_tokens(model, messages)
@@ -670,7 +916,7 @@ class LiteLLMProvider(LLMProvider):
if attempt == retries:
logger.error(
f"[retry] GAVE UP on {model} after {retries + 1} "
f"attempts rate limit error: {e!s}. "
f"attempts -- rate limit error: {e!s}. "
f"~{token_count} tokens ({token_method}). "
f"Full request dumped to: {dump_path}"
)
@@ -783,16 +1029,20 @@ class LiteLLMProvider(LLMProvider):
# Async variants — non-blocking on the event loop
# ------------------------------------------------------------------
async def _acompletion_with_rate_limit_retry(
self, max_retries: int | None = None, **kwargs: Any
) -> Any:
async def _acompletion_with_rate_limit_retry(self, max_retries: int | None = None, **kwargs: Any) -> Any:
"""Async version of _completion_with_rate_limit_retry.
Uses litellm.acompletion and asyncio.sleep instead of blocking calls.
When a :class:`KeyPool` is configured, rate-limited keys are rotated.
"""
model = kwargs.get("model", self.model)
retries = max_retries if max_retries is not None else RATE_LIMIT_MAX_RETRIES
for attempt in range(retries + 1):
# Rotate key from pool when available.
current_key: str | None = None
if self._key_pool:
current_key = self._key_pool.get_key()
kwargs["api_key"] = current_key
try:
response = await litellm.acompletion(**kwargs) # type: ignore[union-attr]
@@ -805,15 +1055,10 @@ class LiteLLMProvider(LLMProvider):
None,
)
if last_role == "assistant":
logger.debug(
"[async-retry] Empty response after assistant message — "
"expected, not retrying."
)
logger.debug("[async-retry] Empty response after assistant message — expected, not retrying.")
return response
finish_reason = (
response.choices[0].finish_reason if response.choices else "unknown"
)
finish_reason = response.choices[0].finish_reason if response.choices else "unknown"
token_count, token_method = _estimate_tokens(model, messages)
dump_path = _dump_failed_request(
model=model,
@@ -841,28 +1086,53 @@ class LiteLLMProvider(LLMProvider):
)
return response
if attempt == retries:
# Use a much lower retry cap for empty-response
# recoveries than for real exceptions. These are
# almost never transient (see EMPTY_RESPONSE_MAX_RETRIES
# rationale at the top of the file).
empty_cap = min(retries, EMPTY_RESPONSE_MAX_RETRIES)
if attempt >= empty_cap:
logger.error(
f"[async-retry] GAVE UP on {model} after {retries + 1} "
f"attempts — empty response "
f"[async-retry] GAVE UP on {model} after "
f"{attempt + 1} attempts — empty response "
f"(finish_reason={finish_reason}, "
f"choices={len(response.choices) if response.choices else 0})"
f"choices={len(response.choices) if response.choices else 0}). "
f"This is almost never a rate limit despite the "
f"earlier log message — check the dumped request "
f"at {dump_path} for poisoned conversation state "
f"(dangling tool_result after compaction), a "
f"safety-filter trigger in the prompt, or a "
f"malformed tool schema."
)
return response
wait = _compute_retry_delay(attempt)
logger.warning(
f"[async-retry] {model} returned empty response "
f"(finish_reason={finish_reason}, "
f"choices={len(response.choices) if response.choices else 0}) "
f"likely rate limited or quota exceeded. "
f"choices={len(response.choices) if response.choices else 0}). "
f"Retrying in {wait}s "
f"(attempt {attempt + 1}/{retries})"
f"(attempt {attempt + 1}/{empty_cap}). "
f"Note: empty-response retries are capped at "
f"{EMPTY_RESPONSE_MAX_RETRIES} because this is rarely "
f"a transient rate limit on small payloads."
)
await asyncio.sleep(wait)
continue
if self._key_pool and current_key:
self._key_pool.mark_success(current_key)
return response
except RateLimitError as e:
# Key pool: mark the offending key and rotate immediately.
if self._key_pool and current_key:
self._key_pool.mark_rate_limited(current_key, retry_after=60.0)
if attempt < retries:
logger.info(
"[async-retry] Key pool rotating away from ...%s on 429",
current_key[-6:],
)
continue
messages = kwargs.get("messages", [])
token_count, token_method = _estimate_tokens(model, messages)
dump_path = _dump_failed_request(
@@ -874,7 +1144,7 @@ class LiteLLMProvider(LLMProvider):
if attempt == retries:
logger.error(
f"[async-retry] GAVE UP on {model} after {retries + 1} "
f"attempts rate limit error: {e!s}. "
f"attempts -- rate limit error: {e!s}. "
f"~{token_count} tokens ({token_method}). "
f"Full request dumped to: {dump_path}"
)
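The KeyPool used by the retry loops above is not shown in this diff; the call sites only confirm get_key(), mark_success(key), and mark_rate_limited(key, retry_after=...). A minimal sketch under those assumptions, using round-robin rotation with per-key cooldowns:

import itertools
import time

class KeyPool:
    """Hypothetical sketch: rotate API keys, skipping recently rate-limited ones."""

    def __init__(self, keys: list[str]) -> None:
        if not keys:
            raise ValueError("KeyPool requires at least one API key")
        self._cycle = itertools.cycle(keys)
        self._cooldown_until: dict[str, float] = {}
        self._size = len(keys)

    def get_key(self) -> str:
        # Try each key at most once; if every key is cooling down, hand back
        # the last one anyway so the caller can still attempt the request.
        for _ in range(self._size):
            key = next(self._cycle)
            if time.monotonic() >= self._cooldown_until.get(key, 0.0):
                return key
        return key

    def mark_rate_limited(self, key: str, retry_after: float = 60.0) -> None:
        # Keep the key out of rotation until the cooldown expires.
        self._cooldown_until[key] = time.monotonic() + retry_after

    def mark_success(self, key: str) -> None:
        self._cooldown_until.pop(key, None)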
@@ -1001,6 +1271,12 @@ class LiteLLMProvider(LLMProvider):
api_base = (self.api_base or "").lower()
return "openrouter.ai/api/v1" in api_base
def _is_zai_openai_backend(self) -> bool:
"""Return True when using Z-AI's OpenAI-compatible chat endpoint."""
model = (self.model or "").lower()
api_base = (self.api_base or "").lower()
return "api.z.ai" in api_base or model.startswith("openai/glm-") or model == "glm-5"
def _should_use_openrouter_tool_compat(
self,
error: BaseException,
@@ -1066,8 +1342,7 @@ class LiteLLMProvider(LLMProvider):
)
return text_tool_content, text_tool_calls
logger.info(
"[openrouter-tool-compat] %s returned non-JSON fallback content; "
"treating it as plain text.",
"[openrouter-tool-compat] %s returned non-JSON fallback content; treating it as plain text.",
self.model,
)
return content.strip(), []
@@ -1219,9 +1494,7 @@ class LiteLLMProvider(LLMProvider):
)
return repaired
- raise ValueError(
- f"Failed to parse tool call arguments for '{tool_name}' (likely truncated JSON)."
- )
+ raise ValueError(f"Failed to parse tool call arguments for '{tool_name}' (likely truncated JSON).")
def _parse_openrouter_text_tool_calls(
self,
@@ -1378,11 +1651,7 @@ class LiteLLMProvider(LLMProvider):
return [
message
for message in full_messages
- if not (
- message.get("role") == "assistant"
- and not message.get("content")
- and not message.get("tool_calls")
- )
+ if not (message.get("role") == "assistant" and not message.get("content") and not message.get("tool_calls"))
]
async def _acomplete_via_openrouter_tool_compat(
@@ -1608,6 +1877,38 @@ class LiteLLMProvider(LLMProvider):
full_messages.append(sys_msg)
full_messages.extend(messages)
if logger.isEnabledFor(logging.DEBUG) and full_messages:
import json as _json
from datetime import datetime as _dt
from pathlib import Path as _Path
_debug_dir = _Path.home() / ".hive" / "debug_logs"
_debug_dir.mkdir(parents=True, exist_ok=True)
_ts = _dt.now().strftime("%Y%m%d_%H%M%S_%f")
_dump_file = _debug_dir / f"llm_request_{_ts}.json"
_summary = []
for _mi, _m in enumerate(full_messages):
_role = _m.get("role", "?")
_c = _m.get("content")
_tc = _m.get("tool_calls")
_tcid = _m.get("tool_call_id")
_summary.append(
{
"idx": _mi,
"role": _role,
"content_length": len(str(_c)) if _c else 0,
"content_preview": str(_c)[:200] if _c else repr(_c),
"has_tool_calls": bool(_tc),
"tool_call_count": len(_tc) if _tc else 0,
"tool_call_id": _tcid,
}
)
try:
_dump_file.write_text(_json.dumps(_summary, indent=2, ensure_ascii=False), encoding="utf-8")
logger.debug("[LLM-MSG] %d messages dumped to %s", len(full_messages), _dump_file)
except Exception:
pass
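The dump above writes one llm_request_*.json summary per DEBUG-level request under ~/.hive/debug_logs, with a sortable timestamp in the filename. A quick inspection sketch (not part of the diff):

import json
from pathlib import Path

dumps = sorted((Path.home() / ".hive" / "debug_logs").glob("llm_request_*.json"))
if dumps:
    # The %Y%m%d_%H%M%S_%f timestamp sorts lexicographically, so [-1] is newest.
    for entry in json.loads(dumps[-1].read_text(encoding="utf-8")):
        print(entry["idx"], entry["role"], entry["content_length"], entry["has_tool_calls"])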
# Codex Responses API requires an `instructions` field (system prompt).
# Inject a minimal one when callers don't provide a system message.
if self._codex_backend and not any(m["role"] == "system" for m in full_messages):
@@ -1628,9 +1929,7 @@ class LiteLLMProvider(LLMProvider):
full_messages = [
m
for m in full_messages
- if not (
- m.get("role") == "assistant" and not m.get("content") and not m.get("tool_calls")
- )
+ if not (m.get("role") == "assistant" and not m.get("content") and not m.get("tool_calls"))
]
kwargs: dict[str, Any] = {
@@ -1661,6 +1960,33 @@ class LiteLLMProvider(LLMProvider):
kwargs.pop("max_tokens", None)
kwargs.pop("stream_options", None)
request_summary = _summarize_request_for_log(kwargs)
logger.debug(
"[stream] prepared request: %s",
json.dumps(request_summary, default=str),
)
if request_summary["system_only"]:
logger.warning(
"[stream] %s request has no non-system chat messages "
"(api_base=%s tools=%d system_chars=%d). "
"Some chat-completions backends reject system-only payloads.",
self.model,
self.api_base,
request_summary["tool_count"],
sum(
message.get("text_chars", 0)
for message in request_summary["messages"]
if message.get("role") == "system"
),
)
if self._is_zai_openai_backend():
logger.warning(
"[stream] %s appears to be using Z-AI/GLM's OpenAI-compatible backend. "
"This backend has rejected system-only payloads with "
"'The messages parameter is illegal.' in prior requests.",
self.model,
)
for attempt in range(RATE_LIMIT_MAX_RETRIES + 1):
# Post-stream events (ToolCall, TextEnd, Finish) are buffered
# because they depend on the full stream. TextDeltaEvents are
@@ -1751,6 +2077,10 @@ class LiteLLMProvider(LLMProvider):
# --- Finish ---
if choice.finish_reason:
# Kimi's 'pause_turn' means the model emitted tool
# calls and expects results — equivalent to 'tool_calls'.
if choice.finish_reason == "pause_turn":
choice.finish_reason = "tool_calls" if tool_calls_acc else "stop"
stream_finish_reason = choice.finish_reason
for _idx, tc_data in sorted(tool_calls_acc.items()):
parsed_args = self._parse_tool_call_arguments(
@@ -1785,8 +2115,7 @@ class LiteLLMProvider(LLMProvider):
else getattr(usage, "cache_read_input_tokens", 0) or 0
)
logger.debug(
"[tokens] finish-chunk usage: "
"input=%d output=%d cached=%d model=%s",
"[tokens] finish-chunk usage: input=%d output=%d cached=%d model=%s",
input_tokens,
output_tokens,
cached_tokens,
@@ -1833,8 +2162,7 @@ class LiteLLMProvider(LLMProvider):
else getattr(_usage, "cache_read_input_tokens", 0) or 0
)
logger.debug(
"[tokens] post-loop chunks fallback:"
" input=%d output=%d cached=%d model=%s",
"[tokens] post-loop chunks fallback: input=%d output=%d cached=%d model=%s",
input_tokens,
output_tokens,
cached_tokens,
@@ -1918,6 +2246,39 @@ class LiteLLMProvider(LLMProvider):
f"(last_role={last_role}). Returning empty result."
)
# Gemini sometimes outputs tool calls as text in
# <tool_code>{"name": {...args}}</tool_code> blocks
# instead of using the function-calling API. Extract
# these as real ToolCallEvents and strip them from the
# text so the rest of the system treats them normally.
if accumulated_text and "<tool_code>" in accumulated_text:
extracted, cleaned = _extract_text_tool_calls(accumulated_text)
if extracted:
tool_names = [tc.tool_name for tc in extracted]
logger.info(
"[stream] Model emitted %d tool call(s) as <tool_code> text "
"instead of structured function calls; converting to "
"synthetic ToolCallEvents: %s",
len(extracted),
tool_names,
)
accumulated_text = cleaned
# Emit a corrected TextDeltaEvent so the caller's
# accumulated_text is overwritten with the cleaned text.
yield TextDeltaEvent(content="", snapshot=cleaned)
# Insert synthetic ToolCallEvents before FinishEvent.
finish_idx = next(
(i for i, ev in enumerate(tail_events) if isinstance(ev, FinishEvent)),
len(tail_events),
)
for tc_ev in reversed(extracted):
tail_events.insert(finish_idx, tc_ev)
# Update TextEndEvent if present.
for _i, _ev in enumerate(tail_events):
if isinstance(_ev, TextEndEvent):
tail_events[_i] = TextEndEvent(full_text=cleaned)
break
# Success (or empty after exhausted retries) — flush events.
for event in tail_events:
yield event
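_extract_text_tool_calls is referenced above but defined elsewhere; below is a simplified sketch of such an extractor, assuming each <tool_code> block wraps a one-key JSON object mapping the tool name to its arguments (the exact payload shape and event fields are assumptions):

import json
import re
import uuid
from dataclasses import dataclass, field

# Simplified: does not handle <tool_code> payloads with unbalanced braces.
TOOL_CODE_RE = re.compile(r"<tool_code>\s*(\{.*?\})\s*</tool_code>", re.DOTALL)

@dataclass
class ToolCallEvent:  # stand-in for the real streaming event type
    tool_call_id: str
    tool_name: str
    arguments: dict = field(default_factory=dict)

def _extract_text_tool_calls(text: str) -> tuple[list[ToolCallEvent], str]:
    """Return (synthetic tool-call events, text with <tool_code> blocks stripped)."""
    events: list[ToolCallEvent] = []
    for match in TOOL_CODE_RE.finditer(text):
        try:
            payload = json.loads(match.group(1))
        except json.JSONDecodeError:
            continue  # skip blocks that are not valid JSON
        if not isinstance(payload, dict):
            continue
        for name, args in payload.items():
            events.append(
                ToolCallEvent(
                    tool_call_id=f"text_tc_{uuid.uuid4().hex[:8]}",
                    tool_name=name,
                    arguments=args if isinstance(args, dict) else {},
                )
            )
    cleaned = TOOL_CODE_RE.sub("", text).strip() if events else text
    return events, cleaned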
@@ -1944,8 +2305,15 @@ class LiteLLMProvider(LLMProvider):
# tail_events before the error, the stream was successful —
# yield what we have instead of discarding it.
if (accumulated_text or tool_calls_acc) and tail_events:
# LiteLLM may wrap the original ValidationError in an
# APIError with a different message. Check the full
# exception chain (str(e) + str(__cause__)).
_err_chain = f"{e} {e.__cause__}" if e.__cause__ else str(e)
_is_finish_reason_err = (
"finish_reason" in str(e) and "validation error" in str(e).lower()
"finish_reason" in _err_chain and "validation error" in _err_chain.lower()
) or (
# Fallback: the APIError wrapper message for chunk-building failures
"building chunks" in str(e).lower() and (accumulated_text or tool_calls_acc)
)
if _is_finish_reason_err:
logger.warning(
@@ -1970,16 +2338,30 @@ class LiteLLMProvider(LLMProvider):
):
yield event
return
- if _is_stream_transient_error(e) and attempt < RATE_LIMIT_MAX_RETRIES:
+ if _is_stream_transient_error(e) and attempt < STREAM_TRANSIENT_MAX_RETRIES:
wait = _compute_retry_delay(attempt, exception=e)
logger.warning(
f"[stream-retry] {self.model} transient error "
f"({type(e).__name__}): {e!s}. "
f"Retrying in {wait:.1f}s "
f"(attempt {attempt + 1}/{RATE_LIMIT_MAX_RETRIES})"
f"(attempt {attempt + 1}/{STREAM_TRANSIENT_MAX_RETRIES})"
)
await asyncio.sleep(wait)
continue
dump_path = _dump_failed_request(
model=self.model,
kwargs=kwargs,
error_type=f"stream_exception_{type(e).__name__.lower()}",
attempt=attempt,
)
logger.error(
"[stream] %s request failed with %s: %s | request=%s | dump=%s",
self.model,
type(e).__name__,
e,
json.dumps(_summarize_request_for_log(kwargs), default=str),
dump_path,
)
recoverable = _is_stream_transient_error(e)
yield StreamErrorEvent(error=str(e), recoverable=recoverable)
return
+400
@@ -0,0 +1,400 @@
{
"schema_version": 1,
"providers": {
"anthropic": {
"default_model": "claude-haiku-4-5-20251001",
"models": [
{
"id": "claude-haiku-4-5-20251001",
"label": "Haiku 4.5 - Fast + cheap",
"recommended": false,
"max_tokens": 64000,
"max_context_tokens": 136000
},
{
"id": "claude-sonnet-4-5-20250929",
"label": "Sonnet 4.5 - Best balance",
"recommended": false,
"max_tokens": 64000,
"max_context_tokens": 136000
},
{
"id": "claude-opus-4-6",
"label": "Opus 4.6 - Most capable",
"recommended": true,
"max_tokens": 128000,
"max_context_tokens": 872000
}
]
},
"openai": {
"default_model": "gpt-5.4",
"models": [
{
"id": "gpt-5.4",
"label": "GPT-5.4 - Best intelligence",
"recommended": true,
"max_tokens": 128000,
"max_context_tokens": 960000
},
{
"id": "gpt-5.4-mini",
"label": "GPT-5.4 Mini - Faster + cheaper",
"recommended": false,
"max_tokens": 128000,
"max_context_tokens": 400000
},
{
"id": "gpt-5.4-nano",
"label": "GPT-5.4 Nano - Cheapest high-volume",
"recommended": false,
"max_tokens": 128000,
"max_context_tokens": 400000
}
]
},
"gemini": {
"default_model": "gemini-3-flash-preview",
"models": [
{
"id": "gemini-3-flash-preview",
"label": "Gemini 3 Flash - Fast",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 900000
},
{
"id": "gemini-3.1-pro-preview-customtools",
"label": "Gemini 3.1 Pro - Best quality",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 900000
}
]
},
"groq": {
"default_model": "openai/gpt-oss-120b",
"models": [
{
"id": "openai/gpt-oss-120b",
"label": "GPT-OSS 120B - Best reasoning",
"recommended": true,
"max_tokens": 65536,
"max_context_tokens": 131072
},
{
"id": "openai/gpt-oss-20b",
"label": "GPT-OSS 20B - Fast + cheaper",
"recommended": false,
"max_tokens": 65536,
"max_context_tokens": 131072
},
{
"id": "llama-3.3-70b-versatile",
"label": "Llama 3.3 70B - General purpose",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 131072
},
{
"id": "llama-3.1-8b-instant",
"label": "Llama 3.1 8B - Fastest",
"recommended": false,
"max_tokens": 131072,
"max_context_tokens": 131072
}
]
},
"cerebras": {
"default_model": "gpt-oss-120b",
"models": [
{
"id": "gpt-oss-120b",
"label": "GPT-OSS 120B - Best production reasoning",
"recommended": true,
"max_tokens": 40960,
"max_context_tokens": 131072
},
{
"id": "llama3.1-8b",
"label": "Llama 3.1 8B - Fastest production",
"recommended": false,
"max_tokens": 8192,
"max_context_tokens": 32768
},
{
"id": "zai-glm-4.7",
"label": "Z.ai GLM 4.7 - Strong coding preview",
"recommended": true,
"max_tokens": 40960,
"max_context_tokens": 131072
},
{
"id": "qwen-3-235b-a22b-instruct-2507",
"label": "Qwen 3 235B Instruct - Frontier preview",
"recommended": false,
"max_tokens": 40960,
"max_context_tokens": 131072
}
]
},
"minimax": {
"default_model": "MiniMax-M2.7",
"models": [
{
"id": "MiniMax-M2.7",
"label": "MiniMax M2.7 - Best coding quality",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 204800
},
{
"id": "MiniMax-M2.5",
"label": "MiniMax M2.5 - Strong value",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 204800
}
]
},
"mistral": {
"default_model": "mistral-large-2512",
"models": [
{
"id": "mistral-large-2512",
"label": "Mistral Large 3 - Best quality",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 256000
},
{
"id": "mistral-medium-2508",
"label": "Mistral Medium 3.1 - Balanced",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 128000
},
{
"id": "mistral-small-2603",
"label": "Mistral Small 4 - Fast + capable",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 256000
},
{
"id": "codestral-2508",
"label": "Codestral - Coding specialist",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 128000
}
]
},
"together": {
"default_model": "deepseek-ai/DeepSeek-V3.1",
"models": [
{
"id": "deepseek-ai/DeepSeek-V3.1",
"label": "DeepSeek V3.1 - Best general coding",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 128000
},
{
"id": "Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8",
"label": "Qwen3 Coder 480B - Advanced coding",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 262144
},
{
"id": "openai/gpt-oss-120b",
"label": "GPT-OSS 120B - Strong reasoning",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 128000
},
{
"id": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"label": "Llama 3.3 70B Turbo - Fast baseline",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 131072
}
]
},
"deepseek": {
"default_model": "deepseek-chat",
"models": [
{
"id": "deepseek-chat",
"label": "DeepSeek Chat - Fast default",
"recommended": true,
"max_tokens": 8192,
"max_context_tokens": 128000
},
{
"id": "deepseek-reasoner",
"label": "DeepSeek Reasoner - Deep thinking",
"recommended": false,
"max_tokens": 64000,
"max_context_tokens": 128000
}
]
},
"kimi": {
"default_model": "kimi-k2.5",
"models": [
{
"id": "kimi-k2.5",
"label": "Kimi K2.5 - Best coding",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 200000
}
]
},
"hive": {
"default_model": "queen",
"models": [
{
"id": "queen",
"label": "Queen - Hive native",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 180000
},
{
"id": "kimi-2.5",
"label": "Kimi 2.5 - Via Hive",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 240000
},
{
"id": "GLM-5",
"label": "GLM-5 - Via Hive",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 180000
}
]
},
"openrouter": {
"default_model": "openai/gpt-5.4",
"models": [
{
"id": "openai/gpt-5.4",
"label": "GPT-5.4 - Best overall",
"recommended": true,
"max_tokens": 128000,
"max_context_tokens": 922000
},
{
"id": "anthropic/claude-sonnet-4.6",
"label": "Claude Sonnet 4.6 - Best coding balance",
"recommended": false,
"max_tokens": 64000,
"max_context_tokens": 936000
},
{
"id": "anthropic/claude-opus-4.6",
"label": "Claude Opus 4.6 - Most capable",
"recommended": false,
"max_tokens": 128000,
"max_context_tokens": 872000
},
{
"id": "google/gemini-3.1-pro-preview-customtools",
"label": "Gemini 3.1 Pro Preview - Long-context reasoning",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 1048576
},
{
"id": "deepseek/deepseek-v3.2",
"label": "DeepSeek V3.2 - Best value",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 163840
}
]
}
},
"presets": {
"claude_code": {
"provider": "anthropic",
"model": "claude-opus-4-6",
"max_tokens": 128000,
"max_context_tokens": 872000
},
"zai_code": {
"provider": "openai",
"api_key_env_var": "ZAI_API_KEY",
"model": "glm-5",
"max_tokens": 32768,
"max_context_tokens": 180000,
"api_base": "https://api.z.ai/api/coding/paas/v4"
},
"codex": {
"provider": "openai",
"model": "gpt-5.3-codex",
"max_tokens": 16384,
"max_context_tokens": 120000,
"api_base": "https://chatgpt.com/backend-api/codex"
},
"minimax_code": {
"provider": "minimax",
"api_key_env_var": "MINIMAX_API_KEY",
"model": "MiniMax-M2.7",
"max_tokens": 32768,
"max_context_tokens": 204800,
"api_base": "https://api.minimax.io/v1"
},
"kimi_code": {
"provider": "kimi",
"api_key_env_var": "KIMI_API_KEY",
"model": "kimi-k2.5",
"max_tokens": 32768,
"max_context_tokens": 240000,
"api_base": "https://api.kimi.com/coding"
},
"hive_llm": {
"provider": "hive",
"api_key_env_var": "HIVE_API_KEY",
"model": "queen",
"max_tokens": 32768,
"max_context_tokens": 180000,
"api_base": "https://api.adenhq.com",
"model_choices": [
{
"id": "queen",
"label": "queen",
"recommended": true
},
{
"id": "kimi-2.5",
"label": "kimi-2.5",
"recommended": false
},
{
"id": "GLM-5",
"label": "GLM-5",
"recommended": false
}
]
},
"antigravity": {
"provider": "openai",
"model": "gemini-3-flash",
"max_tokens": 32768,
"max_context_tokens": 1000000
},
"ollama_local": {
"provider": "ollama",
"max_tokens": 8192,
"max_context_tokens": 16384,
"api_base": "http://localhost:11434"
}
}
}
+197
@@ -0,0 +1,197 @@
"""Shared curated model metadata loaded from ``model_catalog.json``."""
from __future__ import annotations
import copy
import json
from functools import lru_cache
from pathlib import Path
from typing import Any
MODEL_CATALOG_PATH = Path(__file__).with_name("model_catalog.json")
class ModelCatalogError(RuntimeError):
"""Raised when the curated model catalogue is missing or malformed."""
def _require_mapping(value: Any, path: str) -> dict[str, Any]:
if not isinstance(value, dict):
raise ModelCatalogError(f"{path} must be an object")
return value
def _require_list(value: Any, path: str) -> list[Any]:
if not isinstance(value, list):
raise ModelCatalogError(f"{path} must be an array")
return value
def _validate_model_catalog(data: dict[str, Any]) -> dict[str, Any]:
providers = _require_mapping(data.get("providers"), "providers")
for provider_id, provider_info in providers.items():
provider_path = f"providers.{provider_id}"
provider_map = _require_mapping(provider_info, provider_path)
default_model = provider_map.get("default_model")
if not isinstance(default_model, str) or not default_model.strip():
raise ModelCatalogError(f"{provider_path}.default_model must be a non-empty string")
models = _require_list(provider_map.get("models"), f"{provider_path}.models")
if not models:
raise ModelCatalogError(f"{provider_path}.models must not be empty")
seen_model_ids: set[str] = set()
default_found = False
for idx, model in enumerate(models):
model_path = f"{provider_path}.models[{idx}]"
model_map = _require_mapping(model, model_path)
model_id = model_map.get("id")
if not isinstance(model_id, str) or not model_id.strip():
raise ModelCatalogError(f"{model_path}.id must be a non-empty string")
if model_id in seen_model_ids:
raise ModelCatalogError(f"Duplicate model id {model_id!r} in {provider_path}.models")
seen_model_ids.add(model_id)
if model_id == default_model:
default_found = True
label = model_map.get("label")
if not isinstance(label, str) or not label.strip():
raise ModelCatalogError(f"{model_path}.label must be a non-empty string")
recommended = model_map.get("recommended")
if not isinstance(recommended, bool):
raise ModelCatalogError(f"{model_path}.recommended must be a boolean")
for key in ("max_tokens", "max_context_tokens"):
value = model_map.get(key)
if not isinstance(value, int) or value <= 0:
raise ModelCatalogError(f"{model_path}.{key} must be a positive integer")
if not default_found:
raise ModelCatalogError(
f"{provider_path}.default_model={default_model!r} is not present in {provider_path}.models"
)
presets = _require_mapping(data.get("presets"), "presets")
for preset_id, preset_info in presets.items():
preset_path = f"presets.{preset_id}"
preset_map = _require_mapping(preset_info, preset_path)
provider = preset_map.get("provider")
if not isinstance(provider, str) or not provider.strip():
raise ModelCatalogError(f"{preset_path}.provider must be a non-empty string")
model = preset_map.get("model")
if model is not None and (not isinstance(model, str) or not model.strip()):
raise ModelCatalogError(f"{preset_path}.model must be a non-empty string when present")
api_base = preset_map.get("api_base")
if api_base is not None and (not isinstance(api_base, str) or not api_base.strip()):
raise ModelCatalogError(f"{preset_path}.api_base must be a non-empty string when present")
api_key_env_var = preset_map.get("api_key_env_var")
if api_key_env_var is not None and (not isinstance(api_key_env_var, str) or not api_key_env_var.strip()):
raise ModelCatalogError(f"{preset_path}.api_key_env_var must be a non-empty string when present")
for key in ("max_tokens", "max_context_tokens"):
value = preset_map.get(key)
if not isinstance(value, int) or value <= 0:
raise ModelCatalogError(f"{preset_path}.{key} must be a positive integer")
model_choices = preset_map.get("model_choices")
if model_choices is not None:
for idx, choice in enumerate(_require_list(model_choices, f"{preset_path}.model_choices")):
choice_path = f"{preset_path}.model_choices[{idx}]"
choice_map = _require_mapping(choice, choice_path)
choice_id = choice_map.get("id")
if not isinstance(choice_id, str) or not choice_id.strip():
raise ModelCatalogError(f"{choice_path}.id must be a non-empty string")
label = choice_map.get("label")
if not isinstance(label, str) or not label.strip():
raise ModelCatalogError(f"{choice_path}.label must be a non-empty string")
recommended = choice_map.get("recommended")
if not isinstance(recommended, bool):
raise ModelCatalogError(f"{choice_path}.recommended must be a boolean")
return data
@lru_cache(maxsize=1)
def load_model_catalog() -> dict[str, Any]:
"""Load and validate the curated model catalogue."""
try:
raw = json.loads(MODEL_CATALOG_PATH.read_text(encoding="utf-8"))
except FileNotFoundError as exc:
raise ModelCatalogError(f"Model catalogue not found: {MODEL_CATALOG_PATH}") from exc
except json.JSONDecodeError as exc:
raise ModelCatalogError(f"Model catalogue JSON is invalid: {exc}") from exc
return _validate_model_catalog(_require_mapping(raw, "root"))
def get_models_catalogue() -> dict[str, list[dict[str, Any]]]:
"""Return provider -> model list."""
providers = load_model_catalog()["providers"]
return {provider_id: copy.deepcopy(provider_info["models"]) for provider_id, provider_info in providers.items()}
def get_default_models() -> dict[str, str]:
"""Return provider -> default model id."""
providers = load_model_catalog()["providers"]
return {provider_id: str(provider_info["default_model"]) for provider_id, provider_info in providers.items()}
def get_provider_models(provider: str) -> list[dict[str, Any]]:
"""Return the curated models for one provider."""
provider_info = load_model_catalog()["providers"].get(provider)
if not provider_info:
return []
return copy.deepcopy(provider_info["models"])
def get_default_model(provider: str) -> str | None:
"""Return the curated default model id for one provider."""
provider_info = load_model_catalog()["providers"].get(provider)
if not provider_info:
return None
return str(provider_info["default_model"])
def find_model(provider: str, model_id: str) -> dict[str, Any] | None:
"""Return one model entry for a provider, if present."""
for model in load_model_catalog()["providers"].get(provider, {}).get("models", []):
if model["id"] == model_id:
return copy.deepcopy(model)
return None
def find_model_any_provider(model_id: str) -> tuple[str, dict[str, Any]] | None:
"""Return the first curated provider/model entry matching a model id."""
for provider_id, provider_info in load_model_catalog()["providers"].items():
for model in provider_info["models"]:
if model["id"] == model_id:
return provider_id, copy.deepcopy(model)
return None
def get_model_limits(provider: str, model_id: str) -> tuple[int, int] | None:
"""Return ``(max_tokens, max_context_tokens)`` for one provider/model pair."""
model = find_model(provider, model_id)
if not model:
return None
return int(model["max_tokens"]), int(model["max_context_tokens"])
def get_preset(preset_id: str) -> dict[str, Any] | None:
"""Return one preset entry."""
preset = load_model_catalog()["presets"].get(preset_id)
if not preset:
return None
return copy.deepcopy(preset)
def get_presets() -> dict[str, dict[str, Any]]:
"""Return all preset entries."""
return copy.deepcopy(load_model_catalog()["presets"])
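A usage sketch of the accessors above (the import path is an assumption):

from model_catalog import get_default_model, get_model_limits, get_preset

print(get_default_model("anthropic"))        # -> "claude-haiku-4-5-20251001"
print(get_model_limits("openai", "gpt-5.4")) # -> (128000, 960000)
preset = get_preset("zai_code")
if preset:
    print(preset["api_base"])                # -> "https://api.z.ai/api/coding/paas/v4"

Because every accessor deep-copies before returning, callers can mutate the results without corrupting the lru_cache'd catalogue.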
+9
@@ -27,6 +27,15 @@ class Tool:
name: str
description: str
parameters: dict[str, Any] = field(default_factory=dict)
# If True, the tool may return ImageContent in its result. Text-only models
# (e.g. glm-5, deepseek-chat) have this hidden from their schema entirely.
produces_image: bool = False
# If True, this tool performs no filesystem/process/network writes and is
# safe to run concurrently with other safe-flagged tools inside the same
# assistant turn. Unsafe tools (writes, shell, browser actions) are always
# serialized after the safe batch. Default False - the conservative choice
# when a tool's behavior isn't explicitly vetted.
concurrency_safe: bool = False
@dataclass
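A sketch of how an agent loop could consume the concurrency_safe flag, gathering safe tools concurrently and serializing the rest afterwards (hypothetical scheduler, not part of this diff):

import asyncio
from typing import Any, Awaitable, Callable

async def run_tool_batch(
    calls: list[tuple["Tool", dict[str, Any]]],
    execute: Callable[["Tool", dict[str, Any]], Awaitable[Any]],
) -> list[Any]:
    safe = [(t, a) for t, a in calls if t.concurrency_safe]
    unsafe = [(t, a) for t, a in calls if not t.concurrency_safe]
    # Safe tools perform no writes, so they may run concurrently.
    results = list(await asyncio.gather(*(execute(t, a) for t, a in safe)))
    # Unsafe tools run one at a time, strictly after the safe batch.
    for tool, args in unsafe:
        results.append(await execute(tool, args))
    return results  # grouped safe-first, not in original call order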
Some files were not shown because too many files have changed in this diff.