Compare commits

...

269 Commits

Author SHA1 Message Date
Timothy 4ef951447d fix: compaction issues 2026-04-28 10:43:54 -07:00
Timothy ccb6556a41 Merge branch 'main' into feature/colony-push 2026-04-27 18:24:41 -07:00
Timothy 5ca5021fc1 feat: colony routes 2026-04-27 18:24:18 -07:00
Richard Tang 9eeba74851 Merge branch 'feat/tasks-system' 2026-04-27 10:55:50 -07:00
Richard Tang facd919371 chore: task list order 2026-04-27 10:55:18 -07:00
Richard Tang cb1484be85 feat: multi task creation 2026-04-27 10:35:02 -07:00
Timothy 82ce6bed68 Merge branch 'feat/api-colonies-import' 2026-04-26 21:13:18 -07:00
Timothy efdb404655 feat: POST /api/colonies/import — onboard a colony from a tarball
Accepts a multipart upload of `tar` / `tar.gz` (any compression
tarfile.open auto-detects) containing a single top-level directory and
unpacks it into HIVE_HOME/colonies/<name>. Lets a desktop client (or any
external tool) hand a colony spec to a remote runtime to run.

Form fields:
  file              (required)  the archive blob
  name              (optional)  override the colony name; defaults to
                                the archive's top-level dir
  replace_existing  (optional)  "true" to overwrite; else 409 if the
                                target dir already exists

Safety:
- 50 MB upload cap (multipart reader streams + caps each part)
- Manual path-traversal validation per member (Python 3.11 compatible —
  tarfile's safe `filter='data'` only landed in 3.12)
- Symlinks, hardlinks, device, fifo entries all rejected
- Colony name validated against the existing [a-z0-9_]+ pattern used by
  routes_colony_workers + queen_lifecycle_tools
- Mode bits masked to 0o755 / 0o644 so a tampered tar can't ship
  world-writable scripts

Tests cover happy path, name override, 409 / 201 around replace_existing,
path traversal, absolute paths, symlinks, multiple top-level dirs,
invalid colony name, missing file part, corrupt tar, non-multipart, and
uncompressed tar.

Future work (not in this PR): export endpoint, colony list/delete via
this same prefix, and an MCP tool wrapper so queens can move colonies
between hosts mid-conversation.
2026-04-26 20:16:10 -07:00
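A minimal sketch of the per-member validation this commit describes, assuming a hypothetical `safe_extract` helper rather than the actual route code:

```python
import tarfile
from pathlib import Path

def safe_extract(tar: tarfile.TarFile, dest: Path) -> None:
    """Validate every member by hand (Python 3.11 lacks filter='data')."""
    dest = dest.resolve()
    for member in tar.getmembers():
        # Symlinks, hardlinks, devices, and fifos are rejected outright.
        if member.issym() or member.islnk() or member.isdev() or member.isfifo():
            raise ValueError(f"unsupported entry type: {member.name}")
        # Path-traversal check: the resolved target must stay under dest;
        # absolute member names also fail this check.
        target = (dest / member.name).resolve()
        if not target.is_relative_to(dest):
            raise ValueError(f"path escapes destination: {member.name}")
        # Mask mode bits so a tampered tar cannot ship world-writable scripts.
        member.mode = 0o755 if member.isdir() else 0o644
        tar.extract(member, dest)
```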
Richard Tang da361f735d chore: lint 2026-04-26 19:45:52 -07:00
Richard Tang eea0429f93 fix: improve prompt 2026-04-26 19:38:14 -07:00
Richard Tang 833aa4bc7a feat: fix structural blockers preventing the queen from using task_*, also enhanced the hook 2026-04-26 19:15:23 -07:00
Richard Tang 0af597881f feat(tasks): file-backed task system with colony template + UI 2026-04-26 18:49:45 -07:00
RichardTang-Aden 6fae1f04c8 Merge pull request #7143 from aden-hive/fix/scrolling-container
Fix/scrolling container
2026-04-26 12:14:29 -07:00
Richard Tang 8c4085f5e8 chore: lint 2026-04-26 11:35:16 -07:00
Richard Tang 53240eb888 fix: scroll with certain element selector 2026-04-26 11:34:47 -07:00
Hundao de8d6f0946 fix(tests): unblock main CI (#7141)
Two unrelated test failures were keeping main red:

- test_capabilities.py: fixtures referenced deprecated model identifiers
  no longer in model_catalog.json. After the catalog refactor unknown
  models default to vision-capable, so 12 "expect False" assertions
  flipped to True. Replace fixtures with current catalog entries that
  carry an explicit supports_vision flag.

- test_colony_runtime_overseer.py: a 200ms hard sleep racing the
  background worker was flaky on Windows CI. Poll for llm.stream_calls
  with a 5s deadline instead.
2026-04-26 21:34:21 +08:00
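The second fix follows the usual poll-with-deadline pattern; a sketch (the `llm.stream_calls` name comes from the message above, the helper itself is illustrative):

```python
import asyncio
from typing import Callable

async def wait_for(predicate: Callable[[], bool], deadline_s: float = 5.0) -> None:
    """Poll until predicate() is truthy instead of hard-sleeping a fixed 200ms."""
    loop = asyncio.get_running_loop()
    end = loop.time() + deadline_s
    while loop.time() < end:
        if predicate():
            return
        await asyncio.sleep(0.02)  # yield so the background worker can run
    raise AssertionError("condition not met within deadline")

# e.g. await wait_for(lambda: bool(llm.stream_calls))
```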
SAURABH KUMAR ea707438f2 feat(tools): add SimilarWeb V5 API integration (#7066)
Adds 29 MCP tools for SimilarWeb V5 covering traffic and engagement,
competitor intelligence, keywords/SERP, audience demographics, and
segment analysis. Includes credential spec, health checker, README,
and tests on ubuntu and windows.

Closes #7022
2026-04-26 20:37:44 +08:00
Richard Tang 445c9600ab chore: release v0.10.5
Cache-aware cost reporting + new frontier models (GPT-5.5, DeepSeek V4
Pro/Flash, GLM-5.1). cache_control now propagates through OpenRouter
sub-providers (anthropic / gemini / glm / minimax) so the static system
prefix actually hits cache, and every response/finish event carries
cost_usd computed from a four-source fallback chain.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 20:21:03 -07:00
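The release notes do not spell out the four cost sources, so the chain below only illustrates the fallback shape, not the shipped list:

```python
from typing import Callable, Optional

def compute_cost_usd(response, sources: list[Callable[[object], Optional[float]]]) -> float:
    """Walk a fallback chain of cost sources; the first non-None answer wins."""
    for source in sources:
        try:
            cost = source(response)
        except Exception:
            continue  # a failing source falls through to the next one
        if cost is not None:
            return float(cost)
    return 0.0  # nothing could price this response

# e.g. compute_cost_usd(resp, [lambda r: getattr(r, "cost_usd", None)])
```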
Richard Tang 2ab5e6d784 feat: model support 2026-04-24 20:17:41 -07:00
RichardTang-Aden e7f9b7d791 Merge pull request #7132 from vincentjiang777/feat/colony-session-transfer
feat: redesign configuration UI for sidebar, prompts, skills, and tools
2026-04-24 19:02:03 -07:00
Vincent Jiang 3cb0c69a96 feat: redesign configuration UI — sidebar, prompt library, skills, and tools
- Sidebar: rename Library to Configuration, reorder nav (Credentials 3rd, Configuration 4th), reorder sub-items (Prompts, Skills, Tools)
- Prompt Library: separate My Prompts from Community Prompts into distinct sections
- Skills Configuration: rename page title, sort queens by org chart order, group active/inactive skills, style Upload button as primary
- Tool Configuration: rename page title, sort queens by org chart order, add Save/Cancel/Allow all/Reset to defaults workflow, filter lifecycle tool names to fix "Unknown MCP tool name" save errors
- Fix (unknown) tool group label in server fallback catalog

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-24 10:41:38 -07:00
Richard Tang 22d75bfb05 chore: lint format 2026-04-24 10:12:06 -07:00
Vincent Jiang 357df1bbcb merge: pull upstream/main into feat/colony-session-transfer 2026-04-24 09:28:46 -07:00
Richard Tang 386bbd5780 feat: persistent cost tracking 2026-04-24 09:19:57 -07:00
Richard Tang 235022b35d feat: support glm 5.1 2026-04-24 07:45:37 -07:00
Richard Tang 4d8f312c3e Merge remote-tracking branch 'origin/feat/cache-token' into feature/vision-subagent 2026-04-23 22:21:45 -07:00
Timothy 4651a6a85a fix: vision caption 2026-04-23 21:30:59 -07:00
Timothy ea9c163438 feat: image vision fallback 2026-04-23 21:24:56 -07:00
Richard Tang 77cc169606 feat: cost tracking 2026-04-23 15:34:07 -07:00
Richard Tang 8c6428f445 feat: token consumption usage 2026-04-23 15:05:30 -07:00
Richard Tang 44cb0c0f4c feat: hybrid compaction buffer (fixed tokens + ratio of context)
The compaction trigger now reserves headroom equal to
compaction_buffer_tokens + compaction_buffer_ratio * max_context_tokens.
The fixed component (default 8k, sized for one max-sized tool result)
gives a floor on small windows; the ratio (default 0.15) keeps the
trigger meaningful on large windows where any constant buffer becomes
a rounding error (a constant 8k buffer alone puts the trigger at 75% of a
32k window but 96% of a 200k window). Result: ~80% pre-turn trigger on
200k+ windows so the inner
tool loop has room to grow without firing the mid-turn pre-send guard.
2026-04-23 15:04:19 -07:00
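The trigger arithmetic in miniature (defaults as stated above; the function name is illustrative):

```python
def compaction_trigger(
    max_context_tokens: int,
    compaction_buffer_tokens: int = 8_000,   # floor: one max-sized tool result
    compaction_buffer_ratio: float = 0.15,   # keeps headroom meaningful on big windows
) -> int:
    """Token count at which pre-turn compaction fires."""
    headroom = compaction_buffer_tokens + compaction_buffer_ratio * max_context_tokens
    return int(max_context_tokens - headroom)

# compaction_trigger(32_000)  -> 19_200  (60% of the window)
# compaction_trigger(200_000) -> 162_000 (81% of the window)
```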
Richard Tang 2621fb88b1 fix: drain bg fork tasks before colony-spawn artifact asserts
Compaction + worker-storage copy moved to a background task in f39c1c87;
the test checked the worker-storage file before the task ran, which flaked
under CI load.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 21:38:21 -07:00
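A sketch of the test-side drain, assuming the background fork work is visible as asyncio tasks on the running loop; the helper name is hypothetical:

```python
import asyncio

async def drain_background_tasks() -> None:
    """Let queued fork/compaction tasks finish before asserting on artifacts."""
    pending = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    if pending:
        await asyncio.gather(*pending, return_exceptions=True)
```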
Richard Tang a70f92edbe chore: lint format 2026-04-22 21:33:33 -07:00
Richard Tang b2efa179ea docs: note cache fix in v0.10.4 release notes
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 21:27:24 -07:00
Richard Tang 8c6e76d052 fix: no cache for queen config 2026-04-22 21:24:00 -07:00
Richard Tang c7f1fbf19f chore: release v0.10.4
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 21:12:28 -07:00
Richard Tang 7047ecbf46 chore: fixed ci 2026-04-22 20:14:36 -07:00
Richard Tang b96ee5aaab fix: create new session and switch branch 2026-04-22 20:05:21 -07:00
Richard Tang 6744bea01a feat: move date time inject from system prompt 2026-04-22 17:11:34 -07:00
Richard Tang 390038225b feat: static system prompt 2026-04-22 16:54:47 -07:00
Richard Tang b55c8fdf86 fix: validate session creation inputs and tighten skill/reflection edges 2026-04-22 15:08:50 -07:00
Richard Tang e9aea0bbc4 fix: tools and skills registration 2026-04-22 13:54:10 -07:00
Richard Tang 0ba1fa8262 feat: created colony inherit skills and tools 2026-04-21 19:23:33 -07:00
Richard Tang 0fd96d410e feat: configurable default tools and skills 2026-04-21 19:15:40 -07:00
Richard Tang c658a7c50b feat: default skills and tools 2026-04-21 19:15:28 -07:00
Richard Tang 56c3659bda feat: refactor tool config and library menu 2026-04-21 18:57:11 -07:00
Richard Tang 14f927996c feat: skill library 2026-04-21 18:48:22 -07:00
Richard Tang 8a0ec070b8 feat: tool library 2026-04-21 17:20:54 -07:00
Richard Tang 80cd77ac30 chore: release v0.10.3
2026-04-20 19:49:28 -07:00
Richard Tang c67521a09c chore: ruff lint 2026-04-20 19:14:14 -07:00
Richard Tang 8da06f4f90 Merge remote-tracking branch 'origin/feat/queue-message' into feat/colony-merge-candidate
# Conflicts:
#	core/frontend/src/components/ChatPanel.tsx
#	core/frontend/src/pages/colony-chat.tsx
#	core/frontend/src/pages/queen-dm.tsx
2026-04-20 19:11:58 -07:00
Richard Tang 46e0413eb8 chore: create colony popup 2026-04-20 19:01:43 -07:00
Richard Tang 81731587ff feat(tool call): add format _coerce before execution 2026-04-20 18:58:12 -07:00
Richard Tang 4e9d9bf1ea feat: group tools by sessions 2026-04-20 18:20:10 -07:00
Richard Tang 2644ab953d fix: tool calls in chat 2026-04-20 18:10:53 -07:00
Richard Tang e7daa59573 feat: queen ask_user tool prompt 2026-04-20 16:48:43 -07:00
Richard Tang 1bec43afad feat: ask_user tool prompt 2026-04-20 16:38:29 -07:00
Richard Tang 3d1357595d refactor: ask_user 2026-04-20 16:34:18 -07:00
bryan 59ccbba810 fix: suppress typing flicker on queue auto-flush and dedup user bubble on bootstrap race 2026-04-20 15:30:01 -07:00
Richard Tang 8b2ae369ac fix: remove duplicate parts in independent prompt 2026-04-20 14:52:32 -07:00
Richard Tang 96a667cbd9 feat: better identity prompt structure 2026-04-20 14:41:20 -07:00
Richard Tang 17150a53bd chore: lint 2026-04-20 13:09:02 -07:00
Richard Tang c1d7b0ee69 feat: fix reply message bubble and improve code reuse 2026-04-20 13:07:26 -07:00
bryan 16ea9b52d3 feat: queue messages during queen turns in colony/queen chats 2026-04-20 12:45:38 -07:00
bryan dcbfd4ab01 feat: add pending-queue hook and Steer/Cancel UI in ChatPanel 2026-04-20 12:45:14 -07:00
bryan b762020793 refactor: carry executionId on user SSE events 2026-04-20 12:44:56 -07:00
Richard Tang 4ffddc53e6 fix: trigger message 2026-04-20 11:54:11 -07:00
Richard Tang 24bcc5aea7 feat: update trigger ui 2026-04-20 11:19:57 -07:00
Richard Tang 3c91119f67 feat: improvements for scheduler 2026-04-20 10:49:37 -07:00
Richard Tang 923e773c14 feat: improve the tab switching tool 2026-04-20 10:21:32 -07:00
Naresh Chandanbatve 199c3a235e feat(tool): add Prometheus tool support (#7047)
Adds prometheus_query (instant PromQL) and prometheus_query_range
(time-range) tools. Includes credential spec, /-/ready health check,
unit tests, and docs.

Optional Bearer token and Basic auth via env vars
(PROMETHEUS_TOKEN, PROMETHEUS_USERNAME/PASSWORD).

Fixes #6945.
2026-04-20 18:13:49 +08:00
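Both tools map onto Prometheus's standard HTTP API; a minimal sketch of the instant-query half, with auth wiring following the env vars named above:

```python
import os
import requests

def prometheus_query(base_url: str, promql: str) -> dict:
    """Instant PromQL query via GET /api/v1/query, with optional auth."""
    headers, auth = {}, None
    if token := os.getenv("PROMETHEUS_TOKEN"):
        headers["Authorization"] = f"Bearer {token}"  # Bearer token wins if set
    elif user := os.getenv("PROMETHEUS_USERNAME"):
        auth = (user, os.getenv("PROMETHEUS_PASSWORD", ""))  # Basic auth fallback
    resp = requests.get(
        f"{base_url}/api/v1/query",
        params={"query": promql},
        headers=headers,
        auth=auth,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# prometheus_query("http://localhost:9090", "up")
```

The range variant differs only in the endpoint (`/api/v1/query_range`) and its `start`/`end`/`step` parameters.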
Kavin a881fe68da fix(llm): ensure store=False is passed to Codex Responses API (#7089)
Forces store: false into the extra_body payload for Codex-style models
so that LiteLLM successfully passes it down to the ChatGPT Responses
API backend, fixing the BadRequestError.

Fixes #7056.

Original investigation and first PR by @Darshan174 (#7065).

Co-authored-by: Darshan174 <Darshan002321@gmail.com>
2026-04-20 17:54:41 +08:00
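A hedged sketch of what the fix looks like at the LiteLLM call site; the wrapper name is illustrative:

```python
import litellm

def codex_completion(model: str, messages: list[dict], **kwargs):
    """Force store=False so LiteLLM forwards it to the Responses API backend."""
    extra_body = dict(kwargs.pop("extra_body", {}) or {})
    extra_body["store"] = False  # omitting this triggered the BadRequestError
    return litellm.completion(model=model, messages=messages, extra_body=extra_body, **kwargs)
```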
Hundao 6b9040477f fix(ci): unbreak main, ruff format browser and refresh test_model_catalog (#7095)
* chore: ruff format browser bridge and tools

* fix(tests): refresh test_model_catalog expectations after catalog drift
2026-04-20 17:23:26 +08:00
Richard Tang c7cc031060 fix: handle broken Aden API key 2026-04-19 20:05:14 -07:00
Richard Tang 93c0ef672a fix: queen badge 2026-04-19 19:37:49 -07:00
Richard Tang 67d55e6cce feat: scheduler tools for incubating 2026-04-19 19:30:31 -07:00
Richard Tang 0907ff9cec Merge branch 'pr-7093-vincent' into feat/colony-session-transfer 2026-04-19 19:01:19 -07:00
Vincent Jiang ed2e7125ac feat: colony creation, queen identity in colonies, and org chart improvements
- Colony creation: add "Create a Colony" button in queen DM (conversation header),
  queen profile panel, and sidebar with queen picker + goal input
- Queen identity in colonies: resolve queen profile name for colony chat messages,
  fix duplicate messages on refresh via SSE replay deduplication with restore cutoff
- Colony header: show colony name with Component icon, queen profile link preserved
- Org chart: colony detail drawer with metadata (start date, goal, status, stats),
  icon picker for colonies (16 icons, persisted to metadata.json), fixed queen card
  heights, fixed queen display order via shared sortQueenProfiles()
- Chat: add headerAction slot for inline buttons next to "Conversation" header
- Backend: PATCH /api/agents/metadata for colony icon, created_at in discover API
  with filesystem fallback, chat-helpers queen name passthrough for cold restore

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-19 18:55:01 -07:00
Richard Tang f39c1c87af feat: compact the queen session when creating colony 2026-04-19 18:51:51 -07:00
Richard Tang 1229b4ad4d feat: incubating phase 2026-04-19 18:07:09 -07:00
Richard Tang 0d11a946a5 feat: mark-colony-spawned for a session that created colony 2026-04-19 17:21:06 -07:00
Richard Tang b007ed753b chore: xiaomi model context limit 2026-04-19 15:28:32 -07:00
Richard Tang bb39424e99 chore: update model context config 2026-04-19 15:19:26 -07:00
Richard Tang b27c7a029e chore: update openrouter model selections 2026-04-19 15:10:36 -07:00
Timothy a3433f2c9e Merge branch 'main' into fix/image-coordinate-precision 2026-04-19 13:25:41 -07:00
Richard Tang 24ef2c247d chore: tidy editorconfig and gitattributes, drop unused reference 2026-04-19 13:24:34 -07:00
Richard Tang a8f9661626 chore: remove unused files 2026-04-19 13:19:01 -07:00
Timothy 3005bcaa96 chore: bump extension version to 1.0.1 2026-04-19 13:06:51 -07:00
Timothy 40c4591d65 fix: extension icons 2026-04-19 13:06:13 -07:00
Timothy e2bfb9d3af fix: frame resize 2026-04-19 13:02:12 -07:00
Timothy e55cea97ef fix: diagnostics 2026-04-19 12:52:04 -07:00
Timothy ddaafe0307 Merge remote-tracking branch 'origin/main' into fix/image-coordinate-precision 2026-04-18 23:32:28 -07:00
Richard Tang c17205a453 test: align stale tests with current behavior 2026-04-18 22:02:03 -07:00
Richard Tang 8e4468851c chore: ruff format 2026-04-18 21:45:34 -07:00
Richard Tang ccf4216841 fix: resolve merge conflict markers and ruff issues 2026-04-18 21:45:11 -07:00
Richard Tang 82ffcb17ac Merge remote-tracking branch 'origin/main' into fix/colony-skill-leak 2026-04-18 21:36:23 -07:00
Richard Tang 4da5bcc1e4 feat: queen bar in colony 2026-04-18 21:30:19 -07:00
Richard Tang 3df7194003 feat: worker tab by clicking on the worker 2026-04-18 21:21:22 -07:00
Richard Tang 6f1f27b6e9 feat: load table by colony 2026-04-18 20:55:20 -07:00
Richard Tang 7b52ed9fa7 fix: outdated jsonledger 2026-04-18 20:35:05 -07:00
Richard Tang 4d32526a29 feat: real available parallel size 2026-04-18 20:18:54 -07:00
Richard Tang 656401e199 feat: real snapshot after interaction 2026-04-18 19:51:52 -07:00
Richard Tang f2e51157dc feat: snapshot related prompts 2026-04-18 19:39:00 -07:00
Timothy 0d13c805b1 fix: colony skill leakage 2026-04-18 15:34:31 -07:00
Kowshik Mente b1ec64438c fix(runtime): prevent session restart until cancelled execution fully terminates (#7001)
* fix(runtime): prevent dual execution after forced cancel

- keep bookkeeping until task termination
- block restart while any execution task is still alive
- make execution registration atomic under lock
- avoid premature cleanup on cancel timeout
- add regression tests for forced-cancel restart scenarios

* chore: ruff format and import order

---------

Co-authored-by: kowshikmente <kowshikmente@kowshikmentes-MacBook-Pro.local>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-04-18 19:36:50 +08:00
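The bullet points describe a lock-guarded execution registry; a simplified illustration with hypothetical names:

```python
import asyncio

class ExecutionRegistry:
    """Block restarts while any prior execution task is still alive."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self._tasks: dict[str, asyncio.Task] = {}

    async def register(self, session_id: str, coro) -> asyncio.Task:
        async with self._lock:  # registration is atomic under the lock
            old = self._tasks.get(session_id)
            if old is not None and not old.done():
                raise RuntimeError("previous execution has not fully terminated")
            task = asyncio.create_task(coro)
            self._tasks[session_id] = task  # bookkeeping kept until termination
            return task
```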
Hundao 90aadf247a fix(ci): unbreak main — ruff format, test_refs, test_model_catalog (#7084)
* fix(ci): apply ruff format to browser tool files

Refs #7083

* fix(ci): unbreak test_refs (img regression) and test_model_catalog

test_refs:
- Add `img` back to CONTENT_ROLES so named images get refs again. The
  recent `cc6ec97a feat: multiple modes browser snapshot tool` refactor
  renamed NAMED_CONTENT_ROLES → CONTENT_ROLES and accidentally dropped
  `img`, breaking `test_named_content_roles_get_refs`.
- Drop the `navigation` assertion from `test_skips_structural_roles`.
  That same refactor intentionally added landmark roles (navigation,
  main, listitem) to CONTENT_ROLES so AI agents can ref them by name,
  and the test was not updated to reflect that.

test_model_catalog:
- Add 5 openrouter models that were added to model_catalog.json by
  #7081 (UI/UX improvements) but not reflected in the test.

Refs #7083

* fix(ci): wait for event propagation in subagent report test on Windows

`test_worker_report_emits_subagent_report_event` waited only for
`worker.is_active` to flip to False, then immediately asserted on the
collected events. On Windows the event loop scheduling differs enough
that the SUBAGENT_REPORT subscriber callback can run a few ticks after
the worker is marked inactive, so the assertion fires against an empty
list. Wait for both conditions.

Refs #7083
2026-04-18 19:09:15 +08:00
RichardTang-Aden 49317ac5f5 Merge pull request #7081 from vincentjiang777/feat/ui-ux-improvements
feat: UI/UX improvements across BYOK, org chart, profiles, and prompt…
2026-04-17 21:03:01 -07:00
Richard Tang 7216e9d9f0 chore: ruff lint and format
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 21:01:18 -07:00
Richard Tang 91b1070d80 Merge remote-tracking branch 'origin/main' into feat/ui-ux-improvements
# Conflicts:
#	core/frontend/src/components/SidebarQueenItem.tsx
2026-04-17 20:58:20 -07:00
Richard Tang 08aeffd977 chore: more create colony logs 2026-04-17 20:27:22 -07:00
Richard Tang 651b57b928 feat: improve hive open performance 2026-04-17 20:16:01 -07:00
Richard Tang 8c10fc2e1c fix: queen dm session loading 2026-04-17 20:11:48 -07:00
Richard Tang e3154ca0ee fix: colony session loading 2026-04-17 19:45:31 -07:00
Richard Tang 84a92af41b fix: patch the correct db path 2026-04-17 19:40:59 -07:00
Richard Tang 78fc62210a feat: table tab improvements 2026-04-17 19:25:15 -07:00
Timothy 2fd7e9172a fix: y-offset inspection 2026-04-17 19:24:41 -07:00
Richard Tang ca63fd9ee9 feat: create skill along with colony 2026-04-17 19:03:28 -07:00
Richard Tang b99f25c8d7 feat: DataGrid for colony side bar 2026-04-17 18:47:19 -07:00
Timothy e972112074 feat: merge sidebars with their functionality 2026-04-17 18:12:18 -07:00
Vincent Jiang 6e97191f21 feat: UI/UX improvements across BYOK, org chart, profiles, and prompt library
- BYOK: unified styling (remove purple, consistent grey headers), model selector opens settings modal directly, backend validates API keys before activation
- Org chart: queen profiles are now editable (name, title, about, skills, achievement) with changes persisted to YAML
- Avatars: upload profile pictures for queens and user with client-side compression, displayed across org chart, sidebar, chat, and header
- Colony deletion: await backend delete and re-fetch to prevent ghost colonies
- Prompt library: add pagination (24/page), custom prompt upload/delete with backend persistence
- Settings modal: performance cleanup (remove backdrop-blur, reduce transitions)
- Fix ensure_default_queens() overwriting user edits on every API call

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 14:21:18 -07:00
Richard Tang 023fb9b8d0 refactor: use SSE for worker and browser status 2026-04-17 14:11:19 -07:00
Richard Tang b7924b1ad0 feat: colony tab by group 2026-04-17 14:05:55 -07:00
Timothy b6640b8592 fix: prevent watcher from being GC'd 2026-04-17 13:13:39 -07:00
Timothy 43a1d5797c Merge branch 'fix/worker-tab-groups' into feature/clean-context 2026-04-17 12:35:09 -07:00
Timothy 5cb814f2dc fix: worker tab groups 2026-04-17 12:34:38 -07:00
Richard Tang f52c44821a feat: partial validation after typing 2026-04-17 12:16:13 -07:00
Richard Tang 97432ea08c feat: colony side bar 2026-04-17 11:52:49 -07:00
Timothy 0abd1125b7 fix: parallel execution 2026-04-17 11:20:06 -07:00
Timothy 803337ec74 feat: new queen phases 2026-04-17 06:19:15 -07:00
Timothy 2b055d4d42 fix: simplify system prompt 2026-04-17 04:47:51 -07:00
Timothy dde4dfaec9 Merge branch 'feature/colony-sqlite' into feature/clean-context 2026-04-17 04:12:35 -07:00
Timothy 6be026fcb1 fix: partial parts and system nudge 2026-04-17 04:06:59 -07:00
Richard Tang 3c2161aad5 chore: release v0.10.2
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 23:43:20 -07:00
Richard Tang e74ebe6835 feat: reduce gemini context window to improve reliability 2026-04-16 23:41:24 -07:00
Richard Tang d788e5b2f7 chore: ruff lint 2026-04-16 23:33:48 -07:00
Richard Tang 583a5b41b4 fix: unused reference 2026-04-16 23:23:38 -07:00
Richard Tang 83cc44bdef Merge branch 'feature/full-image-size' 2026-04-16 23:15:59 -07:00
Timothy 558813e7fa feat: fraction-based visual clicks 2026-04-16 22:36:41 -07:00
Timothy aba0ff07ba fix: model-invariant screenshot 2026-04-16 20:29:05 -07:00
Timothy 4303a36df0 fix: namespaced browser tab groups 2026-04-16 20:07:05 -07:00
Timothy e68d8ef10b fix: do not kill queen when switching 2026-04-16 19:29:00 -07:00
Richard Tang c6b6a5a2f7 feat: GCP skills and prompts improvements 2026-04-16 17:43:52 -07:00
Richard Tang 18f5f078fc feat: dashed highlighter for browser type focus 2026-04-16 17:26:09 -07:00
Richard Tang cc6ec97a75 feat: multiple modes browser snapshot tool 2026-04-16 17:22:44 -07:00
Richard Tang 44d114f0d0 feat: default 1ms delay and prompt improvements 2026-04-16 16:19:38 -07:00
Richard Tang 9e71f16d15 Merge remote-tracking branch 'origin/fix/browser-behaviour-improvements' into fix/browser-behaviour-improvements 2026-04-16 16:14:43 -07:00
Richard Tang 28cad2376c feat: separate type focus tool 2026-04-16 16:08:43 -07:00
Timothy 8222cd306e fix: simplify canonical workflow 2026-04-16 16:02:37 -07:00
Timothy b50f237506 fix: screenshot skill diction 2026-04-16 15:16:22 -07:00
Richard Tang 916803889f feat: browser control tool improvements and debugger 2026-04-16 15:14:08 -07:00
Timothy 59b1bc9338 fix: tool grouping logic 2026-04-16 12:55:10 -07:00
Timothy 37672c5581 fix: remove worker tool from dm 2026-04-16 12:23:19 -07:00
Timothy 7b0948cd62 Merge branch 'refactor/worker-message' into feature/colony-sqlite 2026-04-16 11:26:46 -07:00
Timothy 4aa5fd7a90 refactor: align worker display 2026-04-16 11:26:32 -07:00
Richard Tang d20b617008 feat: queen profile in message bubbles 2026-04-16 11:21:02 -07:00
Timothy c4ee12532f fix: worker message display 2026-04-16 11:20:17 -07:00
Richard Tang 36ebf27e3e feat: make side bar size adjustable 2026-04-16 11:15:47 -07:00
Richard Tang ae1599c66a feat: queen profile side bar 2026-04-16 11:15:30 -07:00
Richard Tang 810cf5a6d3 Merge remote-tracking branch 'origin/main' into feature/colony-sqlite 2026-04-16 11:10:34 -07:00
Timothy 1ee0d5a2e8 feat: worker bubble display 2026-04-16 10:48:44 -07:00
Hundao 9051c443fb fix(tests): resolve Windows CI failures (#7061)
- test_background_job: use sys.executable and double quotes instead of
  single-quoted 'python -c' which Windows cmd.exe doesn't understand
- test_cli_entry_point: guard against None stdout on Windows with
  (result.stdout or "").lower()
- test_safe_eval: bump DEFAULT_TIMEOUT_MS from 100 to 500 to accommodate
  slow Windows CI runners where SIGALRM is unavailable
2026-04-16 21:05:09 +08:00
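The test_background_job fix is a standard portability pattern; a sketch that also echoes the stdout guard:

```python
import subprocess
import sys

def run_inline_python(snippet: str) -> subprocess.CompletedProcess:
    """Use sys.executable and an argument list: no 'python' PATH lookup,
    no single-quote shell parsing that Windows cmd.exe cannot handle."""
    return subprocess.run(
        [sys.executable, "-c", snippet],
        capture_output=True,
        text=True,
        check=False,
    )

# Guard against None stdout on Windows:
# (run_inline_python('print("hello")').stdout or "").lower()
```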
Hundao e5a93b059f fix(tests): resolve test failures across framework and tools (#7059)
* fix(tests): resolve test failures across framework and tools

Framework tests (52 -> 1 failure):
- Add missing `model` attribute to mock LLM classes (MockStreamingLLM,
  CrashingLLM, ErrorThenSuccessLLM, etc.) to match new agent_loop.py
  requirement at line 624
- Update skill count assertions from 6 to 7 (new writing-hive-skills)
- Fix phase compaction test to match new message format (no brackets)
- Update model catalog test for current gemini model names
- Fix queen memory test: set phase="building" to match prompt_building,
  adjust reflection trigger count to match cooldown behavior

Tools tests (52 -> 0 failures):
- Update csv_tool tests: remove agent_id parameter, use absolute paths,
  patch _ALLOWED_ROOTS instead of AGENT_SANDBOXES_DIR
- Fix browser_evaluate test to allow toast wrapper around script

Remaining: 1 pre-existing failure in test_worker_report where mock LLM
gets stuck when scenarios are exhausted (separate bug).

* fix(tests): resolve remaining test failures

- Add text stop scenario to test_worker_report so worker terminates
  cleanly after tool_calls finish instead of replaying the last
  scenario forever
- Remove duplicated hive home isolation fixture from test_colony_fork_live;
  reuse conftest autouse fixture and only add config copy on top

* fix(tests): prevent mock LLM infinite loops on exhausted scenarios

fix(core): accept both pruned tool result sentinel formats

MockStreamingLLM and _ByTaskMockLLM replay the last scenario forever
when call_index exceeds the scenario list, causing worker timeouts in
CI. Fix by emitting a text stop when scenarios are exhausted (scenarios
mode) or already consumed (by_task mode).

Also fix pruned tool result sentinel mismatch: conversation.py produces
"Pruned tool result ..." but compaction.py and conversation.py only
checked for "[Pruned tool result". Now both formats are accepted.

Also remove duplicated hive home isolation fixture from
test_colony_fork_live; reuse conftest autouse fixture instead.
2026-04-16 20:13:43 +08:00
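The sentinel fix is small enough to show directly; a sketch accepting both spellings named above:

```python
# conversation.py emits "Pruned tool result ...", while the old checks in
# compaction.py and conversation.py only matched the bracketed form.
PRUNED_SENTINELS = ("Pruned tool result", "[Pruned tool result")

def is_pruned_tool_result(text: str) -> bool:
    # str.startswith accepts a tuple, so both formats are recognized.
    return text.startswith(PRUNED_SENTINELS)
```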
Hundao 589c5b06fe fix: resolve all ruff lint and format errors across codebase (#7058)
- Auto-fixed 70 lint errors (import sorting, aliased errors, datetime.UTC)
- Fixed 85 remaining errors manually:
  - E501: wrapped long lines in queen_profiles, catalog, routes_credentials
  - F821: added missing TYPE_CHECKING imports for AgentHost, ToolRegistry,
    HookContext, HookResult; added runtime imports where needed
  - F811: removed duplicate method definitions in queen_lifecycle_tools
  - F841/B007: removed unused variables in discovery.py
  - W291: removed trailing whitespace in queen nodes
  - E402: moved import to top of queen_memory_v2.py
  - Fixed AgentRuntime -> AgentHost in example template type annotations
- Reformatted 343 files with ruff format
2026-04-16 19:30:01 +08:00
Richard Tang be94c611bd fix: queen fail when no worker is running 2026-04-15 22:14:36 -07:00
Timothy 45df68c146 feat: ensure sqlite3 installation 2026-04-15 18:34:33 -07:00
Richard Tang 4fdbc438f9 chore: release v0.10.1
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 18:15:40 -07:00
Timothy 2231dc5742 fix: delete spilled skill 2026-04-15 18:14:10 -07:00
Timothy 446844b2ad fix: tighten worker with sqlite skills 2026-04-15 18:11:15 -07:00
Richard Tang 78301274cd feat: browser tool improvements 2026-04-15 18:09:28 -07:00
Timothy e719523434 fix: remove conflicting tools 2026-04-15 17:38:05 -07:00
Richard Tang 451a5d55d2 feat: queen independent prompt improvements 2026-04-15 17:36:48 -07:00
Richard Tang e2a21b3613 chore: title of finance 2026-04-15 16:55:00 -07:00
Richard Tang 5c251645d3 Merge branch 'main' into feat/gui-ux-updates 2026-04-15 16:45:39 -07:00
Richard Tang 8783f372fc feat: use the customtools model for gemini 2026-04-15 16:44:23 -07:00
bryan 2790d13bb6 Merge branch 'main' into feat/gui-ux-updates 2026-04-15 15:45:56 -07:00
bryan 900d94e49f feat: add message timestamps, day-divider rows, and stable createdAt across stream updates 2026-04-15 15:45:31 -07:00
bryan 70e3eb539b feat: extract QueenProfilePanel and open it from the app header 2026-04-15 15:45:20 -07:00
bryan deeb7de800 feat: sort queens by last DM activity and trim "Head of" title prefix 2026-04-15 15:44:52 -07:00
bryan 57ad98005d feat: derive last_active_at from latest message timestamp and sort history newest-first 2026-04-15 15:44:32 -07:00
Timothy 79c5d43006 feat: colony sqlite and skills 2026-04-15 15:28:37 -07:00
Timothy 252710fb41 fix: context health and eviction 2026-04-15 11:40:45 -07:00
Richard Tang 22df99ef51 Merge remote-tracking branch 'origin/main'
2026-04-14 19:56:33 -07:00
Richard Tang edc3135797 Merge branch 'feature/new-colony' 2026-04-14 19:56:08 -07:00
Richard Tang 27b15789fb fix: skills prompts 2026-04-14 18:51:14 -07:00
RichardTang-Aden 5ba5933edc Merge pull request #7046 from vincentjiang777/main
docs: new readme
2026-04-14 18:02:49 -07:00
Timothy 50eb4b0e8f Merge branch 'feature/colony-creation' into feature/new-colony 2026-04-14 16:34:30 -07:00
Richard Tang 3e4a4c9924 Merge remote-tracking branch 'origin/feat/text-only-tool-filter' into feature/new-colony 2026-04-14 16:29:19 -07:00
Richard Tang c47987e73c fix: ask user widget fallback 2026-04-14 16:27:12 -07:00
Timothy 256b52b818 fix: skills for colonies 2026-04-14 16:23:17 -07:00
Richard Tang 8f5daf0569 fix: switching model and new chat 2026-04-14 16:04:07 -07:00
bryan af5c72e785 feat: hide image-producing tools and vision-only prompt blocks from text-only models 2026-04-14 12:50:44 -07:00
Timothy 958bafea29 fix: tool gated skill activation 2026-04-14 11:17:03 -07:00
bryan 5cdc01cb8c fix: preserve tool pill mapping across turn boundary for deferred ask_user completions 2026-04-14 10:56:38 -07:00
Timothy 6979ea825d fix: remove tool limit 2026-04-14 10:35:08 -07:00
Timothy d6093a560f Merge branch 'feature/new-colony' into feature/colony-creation 2026-04-14 10:19:24 -07:00
Hundao 2f58cce781 fix(tools): web_scrape truncation no longer exceeds max_length (#7044)
The previous code did `text[:max_length] + "..."`, which made the
returned content always 3 chars longer than the requested max_length.
Reserve room for the ellipsis inside the limit so the contract holds.

Fixes #2098
2026-04-14 14:24:42 +08:00
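The contract fix in miniature; a sketch, not the tool's actual code:

```python
def truncate(text: str, max_length: int, ellipsis: str = "...") -> str:
    """Keep the contract: the result never exceeds max_length."""
    if len(text) <= max_length:
        return text
    # Old bug: text[:max_length] + "..." returned max_length + 3 chars.
    return text[: max_length - len(ellipsis)] + ellipsis
```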
Richard Tang ab76a66646 fix: queen loading 2026-04-13 22:39:39 -07:00
Richard Tang c575ff3fe7 feat: queen messages improvements 2026-04-13 22:31:49 -07:00
Timothy 8668d103a8 Merge branch 'feature/new-colony' into feature/colony-creation 2026-04-13 21:34:17 -07:00
Timothy 133f393f8b feat: scheduled triggers 2026-04-13 21:33:54 -07:00
Timothy fd3ef36a15 fix: side panel 2026-04-13 21:08:11 -07:00
Timothy aa281aad34 fix: remove deprecated graphs 2026-04-13 20:56:47 -07:00
Richard Tang a3d0c7e0cb fix: remove No ask_user prompt in the examples 2026-04-13 20:54:17 -07:00
Richard Tang de3042ba3f fix: prompts on the home page are not given to the queen directly; users have to wait until the hello message finishes 2026-04-13 20:34:11 -07:00
Timothy 326d7f201c Merge branch 'feature/new-colony' into feature/colony-creation 2026-04-13 19:59:34 -07:00
Timothy db30ef3094 fix: reframe colony creation 2026-04-13 19:56:14 -07:00
Timothy e3d1cb6739 fix: colony creation link 2026-04-13 19:46:24 -07:00
Timothy 846f3f2470 feat: improve tool call reliability 2026-04-13 19:34:47 -07:00
Richard Tang 913437ea0b fix: build error 2026-04-13 18:06:40 -07:00
Richard Tang 520bd635e2 Merge branch 'feature/hive-experimental-comp-pipeline' into feature/new-colony 2026-04-13 18:02:34 -07:00
bryan b7d850ddd0 feat: add LLM key validation endpoint, emit agent errors via SSE, and improve key management UI 2026-04-13 16:25:43 -07:00
Timothy 0a251278f1 feat: learned default skills 2026-04-13 10:34:25 -07:00
Timothy 857af8e6a3 fix: gcu system prompt 2026-04-13 10:00:00 -07:00
Timothy 273d4ec66e fix: upgrade browser skills 2026-04-13 09:45:07 -07:00
Timothy eeb46a2b3e fix: tool credential filter 2026-04-11 12:54:26 -07:00
Timothy b5e05fefae fix: screenshot 2026-04-11 09:53:53 -07:00
Timothy bdfbb7698a fix: browser click 2026-04-10 23:34:39 -07:00
Timothy 35b1eadb7f fix: improve reliability 2026-04-10 22:46:30 -07:00
Timothy 38036eb7bd fix: reliability tunes 2026-04-10 22:12:13 -07:00
Timothy 70d90fda19 fix: screenshot 2026-04-10 21:11:49 -07:00
vincentjiang777 9dc214cfd2 Merge branch 'aden-hive:main' into main 2026-04-10 20:35:42 -07:00
Bryan 1e3dcbbbc2 feat: ask user tool in queen prompt 2026-04-10 17:46:18 -07:00
Bryan 53b095cdcb feat: use ask_user and ask_user_multiple 2026-04-10 17:31:32 -07:00
Timothy d04862053f fix: queen instruction on colony creation 2026-04-10 17:31:01 -07:00
Timothy df0e0ea082 Merge branch 'fix/after-colony-refresh' into feature/new-colony 2026-04-10 17:19:22 -07:00
Timothy b1724ee360 fix: after colony creation list needs refresh 2026-04-10 17:18:59 -07:00
Bryan a59493835d fix: new session for prompt library and new chat 2026-04-10 17:17:55 -07:00
Timothy 334af2b74e fix: default log level 2026-04-10 16:58:27 -07:00
Richard Tang 81c72949ce feat: prompt library ui improvement 2026-04-10 16:54:34 -07:00
Timothy 97fd45d36a fix: mcp tool initialization 2026-04-10 16:52:04 -07:00
Timothy caebbea1aa fix: initialize default mcps 2026-04-10 16:42:03 -07:00
Richard Tang 574a3a284e Merge remote-tracking branch 'origin/feature/new-colony' into feature/new-colony 2026-04-10 16:38:50 -07:00
Richard Tang 8ea3fb8cfe chore: align the hive tool names 2026-04-10 16:38:21 -07:00
Timothy 69d16a8f6c fix: remove deprecated tools 2026-04-10 16:26:29 -07:00
Richard Tang f16cb0ea1f fix: frontend dm fix 2026-04-10 16:25:33 -07:00
Richard Tang e0f1e9d494 feat: efficient mcp loading in initialization 2026-04-10 16:23:36 -07:00
Richard Tang 7fb0da26fc feat: register available MCP tools 2026-04-10 16:01:42 -07:00
Timothy f5f72c1c9c Merge branch 'feature/hive-experimental-comp-pipeline' into feature/new-colony 2026-04-10 15:56:41 -07:00
Timothy 06d0a16201 Merge branch 'feature/colony-orchestrate' into feature/new-colony 2026-04-10 15:52:16 -07:00
Timothy 0964758b12 Merge branch 'feature/colony-orchestrate' into feature/hive-experimental-comp-pipeline 2026-04-10 15:48:02 -07:00
Bryan c25abdfd84 feat: natural chat replies + cleaner home-prompt bootstrap 2026-04-10 15:47:28 -07:00
Bryan b763226a64 docs: update references for orchestrator/host/loader renames 2026-04-10 15:39:36 -07:00
Richard Tang 802f64f4a7 feat: cooldown for reflection 2026-04-09 19:00:10 -07:00
Richard Tang 9ad95fde59 chore: ruff lint 2026-04-09 18:22:16 -07:00
Richard Tang b812f6a03a feat: user memory structure and identity 2026-04-09 18:09:38 -07:00
Richard Tang 0299a87d0c fix: queen identity for new session 2026-04-09 18:07:42 -07:00
Richard Tang bc8a97079e feat: queen role and examples 2026-04-09 17:55:22 -07:00
Richard Tang 6eaa609f63 feat: queen scope memory 2026-04-09 17:33:14 -07:00
Bryan 8f0101b273 fix(queen): handle extra text in selector JSON response 2026-04-09 17:13:20 -07:00
Bryan 5ee98ac7cf feat: add prompt library with search and category filtering 2026-04-09 17:00:09 -07:00
Bryan c058029ac0 feat: add aden credentials storage adapter 2026-04-09 16:59:16 -07:00
Bryan 6a79728d99 feat: update model switcher and enhance queen DM page with navigation 2026-04-09 16:58:55 -07:00
Bryan 200c202465 refactor: update provider descriptions and simplify subscription activation 2026-04-09 16:58:36 -07:00
Bryan 791da46f59 feat: add subscription-based LLM config activation endpoint 2026-04-09 16:58:21 -07:00
Bryan 6377c5b094 refactor: cache tool registry and add queen identity selection hook 2026-04-09 16:58:09 -07:00
Bryan 8f4e901c3c feat: add kimi and hive providers to model catalog 2026-04-09 16:57:53 -07:00
Richard Tang ac46ce7bfb fix: unavailable minimax model and enhance reflection log 2026-04-09 16:37:09 -07:00
Richard Tang 110d7e0075 fix: remove outdated queen communication prompt 2026-04-09 15:36:56 -07:00
Richard Tang 749185e760 feat: queen dm prompt 2026-04-09 15:26:35 -07:00
Richard Tang 5cb75d1822 chore: instruction on resetting the port 2026-04-09 15:01:22 -07:00
Richard Tang 3febef106d fix: queen identity loading 2026-04-09 14:47:42 -07:00
Richard Tang db18186825 Merge remote-tracking branch 'origin/feature/hive-experimental-comp-pipeline' into feature/hive-experimental-comp-pipeline 2026-04-09 13:59:25 -07:00
Richard Tang 87918b5263 feat: queen selection like a CEO 2026-04-09 13:58:38 -07:00
Bryan @ Aden 01f258c4c4 Merge pull request #7006 from vincentjiang777/main
micro-fix: readme & 500 use cases
2026-04-09 13:46:36 -07:00
Vincent Jiang 3d992bbda3 readme & 500 use cases 2026-04-09 13:43:35 -07:00
Richard Tang bdd099bb78 feat: queen selection prompt 2026-04-09 12:58:59 -07:00
Richard Tang acca008772 feat: update provider config 2026-04-09 11:59:41 -07:00
Richard Tang 0bf4d8b9fa fix: session resume 2026-04-09 11:44:03 -07:00
Richard Tang 7a2752eb42 feat: consolidate model config 2026-04-09 09:53:05 -07:00
608 changed files with 55442 additions and 18012 deletions
+55 -2
@@ -10,10 +10,63 @@
"Bash(grep -n \"create_colony\\\\|colony-spawn\\\\|colony_spawn\" /home/timothy/aden/hive/core/framework/agents/queen/nodes/__init__.py /home/timothy/aden/hive/core/framework/tools/*.py)",
"Bash(git stash:*)",
"Bash(python3 -c \"import sys,json; d=json.loads\\(sys.stdin.read\\(\\)\\); print\\('keys:', list\\(d.keys\\(\\)\\)[:10]\\)\")",
"Bash(python3 -c ':*)"
"Bash(python3 -c ':*)",
"Bash(uv run:*)",
"Read(//tmp/**)",
"Bash(grep -n \"useColony\\\\|const { queens, queenProfiles\" /home/timothy/aden/hive/core/frontend/src/pages/queen-dm.tsx)",
"Bash(awk 'NR==385,/\\\\}, \\\\[/' /home/timothy/aden/hive/core/frontend/src/pages/queen-dm.tsx)",
"Bash(xargs -I{} sh -c 'if ! grep -q \"^import base64\\\\|^from base64\" \"{}\"; then echo \"MISSING: {}\"; fi')",
"Bash(find /home/timothy/aden/hive/core/framework -name \"*.py\" -type f -exec grep -l \"FileConversationStore\\\\|class.*ConversationStore\" {} \\\\;)",
"Bash(find /home/timothy/aden/hive/core/framework -name \"*.py\" -exec grep -l \"run_parallel_workers\\\\|create_colony\" {} \\\\;)",
"Bash(awk '/^ async def execute\\\\\\(self, ctx: AgentContext\\\\\\)/,/^ async def [a-z_]+/ {print NR\": \"$0}' /home/timothy/aden/hive/core/framework/agent_loop/agent_loop.py)",
"Bash(grep -r \"max_concurrent_workers\\\\|max_depth\\\\|recursion\\\\|spawn.*bomb\" /home/timothy/aden/hive/core/framework/host/*.py)",
"Bash(wc -l /home/timothy/aden/hive/tools/src/gcu/browser/*.py /home/timothy/aden/hive/tools/src/gcu/browser/tools/*.py)",
"Bash(file /tmp/gcu_verify/*.png)",
"Bash(ps -eo pid,cmd)",
"Bash(ps -o pid,lstart,cmd -p 746640)",
"Bash(kill 746636)",
"Bash(ps -eo pid,lstart,cmd)",
"Bash(grep -E \"^d|\\\\.py$\")",
"Bash(grep -E \"\\\\.\\(ts|tsx\\)$\")",
"Bash(xargs cat:*)",
"Bash(find /home/timothy/aden/hive -path \"*/.venv\" -prune -o -name \"*.py\" -type f -exec grep -l \"frontend\\\\|UI\\\\|terminal\\\\|interactive\\\\|TUI\" {} \\\\;)",
"Bash(wc -l /home/timothy/.hive/backup/*/SKILL.md)",
"Bash(awk -F'::' '{print $1}')",
"Bash(wait)",
"Bash(pkill -f \"pytest.*test_event_loop_node\")",
"Bash(pkill -f \"pytest.*TestToolConcurrency\")",
"Bash(grep -n \"def.*discover\\\\|/api/agents\\\\|agents_discover\" /home/timothy/aden/hive/core/framework/server/*.py)",
"Bash(bun run:*)",
"Bash(npx eslint:*)",
"Bash(npm run:*)",
"Bash(npm test:*)",
"Bash(grep -n \"PIL\\\\|Image\\\\|to_thread\\\\|run_in_executor\" /home/timothy/aden/hive/tools/src/gcu/browser/*.py /home/timothy/aden/hive/tools/src/gcu/browser/tools/*.py)",
"WebFetch(domain:docs.litellm.ai)",
"Bash(cat /home/timothy/aden/hive/.venv/lib/python3.11/site-packages/litellm-*.dist-info/METADATA)",
"Bash(find \"/home/timothy/.hive/agents/queens/queen_brand_design/sessions/session_20260415_100751_d49f4c28/\" -type f -name \"*.json*\" -exec grep -l \"协日\" {} \\\\;)",
"Bash(grep -v ':0$')",
"Bash(curl -s -m 2 http://127.0.0.1:4002/sse -o /dev/null -w 'status=%{http_code} time=%{time_total}s\\\\n')",
"mcp__gcu-tools__browser_status",
"mcp__gcu-tools__browser_start",
"mcp__gcu-tools__browser_navigate",
"mcp__gcu-tools__browser_evaluate",
"mcp__gcu-tools__browser_screenshot",
"mcp__gcu-tools__browser_open",
"mcp__gcu-tools__browser_click_coordinate",
"mcp__gcu-tools__browser_get_rect",
"mcp__gcu-tools__browser_type_focused",
"mcp__gcu-tools__browser_wait",
"Bash(python3 -c ' *)",
"Bash(python3 scripts/debug_queen_prompt.py independent)",
"Bash(curl -s --max-time 2 http://127.0.0.1:9230/status)",
"Bash(python3 -c \"import json, sys; print\\(json.loads\\(sys.stdin.read\\(\\)\\)['data']['content']\\)\")",
"Bash(python3 -c \"import json; json.load\\(open\\('/home/timothy/aden/hive/tools/browser-extension/manifest.json'\\)\\)\")"
],
"additionalDirectories": [
"/home/timothy/.hive/skills/writing-hive-skills"
"/home/timothy/.hive/skills/writing-hive-skills",
"/tmp",
"/home/timothy/.hive/skills",
"/home/timothy/aden/hive/core/frontend/src/components"
]
},
"hooks": {
+2 -2
@@ -64,7 +64,7 @@ snapshot = await browser_snapshot(tab_id)
|---------|--------------|-------|
| Scroll doesn't move | Nested scroll container | Look for `overflow: scroll` divs |
| Click no effect | Element covered | Check `getBoundingClientRect` vs viewport |
| Type clears | Autocomplete/React | Check for event listeners on input |
| Type clears | Autocomplete/React | Check for event listeners on input; try `browser_type_focused` |
| Snapshot hangs | Huge DOM | Check node count in snapshot |
| Snapshot stale | SPA hydration | Wait after navigation |
@@ -229,7 +229,7 @@ function queryShadow(selector) {
|-------|-------------|----------|
| Scroll not working | Find scrollable container | Mouse wheel at container center |
| Click no effect | JavaScript click() | CDP mouse events |
| Type clears | Add delay_ms | Use execCommand |
| Type clears | Add delay_ms | Use `browser_type_focused` (Input.insertText) |
| Snapshot hangs | Add timeout_s | DOM snapshot fallback |
| Stale content | Wait for selector | Increase wait_until timeout |
| Shadow DOM | Pierce selector | JavaScript traversal |
@@ -57,8 +57,7 @@ async def test_twitter_lazy_scroll():
# Count initial tweets
initial_count = await bridge.evaluate(
tab_id,
"(function() { return document.querySelectorAll("
"'[data-testid=\"tweet\"]').length; })()",
"(function() { return document.querySelectorAll('[data-testid=\"tweet\"]').length; })()",
)
print(f"Initial tweet count: {initial_count.get('result', 0)}")
@@ -78,8 +77,7 @@ async def test_twitter_lazy_scroll():
# Count tweets after scroll
count_result = await bridge.evaluate(
tab_id,
"(function() { return document.querySelectorAll("
"'[data-testid=\"tweet\"]').length; })()",
"(function() { return document.querySelectorAll('[data-testid=\"tweet\"]').length; })()",
)
count = count_result.get("result", 0)
print(f" Tweet count after scroll: {count}")
@@ -87,8 +85,7 @@ async def test_twitter_lazy_scroll():
# Final count
final_count = await bridge.evaluate(
tab_id,
"(function() { return document.querySelectorAll("
"'[data-testid=\"tweet\"]').length; })()",
"(function() { return document.querySelectorAll('[data-testid=\"tweet\"]').length; })()",
)
final = final_count.get("result", 0)
initial = initial_count.get("result", 0)
@@ -130,9 +130,7 @@ async def test_shadow_dom():
print(f"JS click result: {click_result.get('result', {})}")
# Verify click was registered
count_result = await bridge.evaluate(
tab_id, "(function() { return window.shadowClickCount || 0; })()"
)
count_result = await bridge.evaluate(tab_id, "(function() { return window.shadowClickCount || 0; })()")
count = count_result.get("result") or 0
print(f"Shadow click count: {count}")
@@ -200,9 +200,7 @@ async def test_autocomplete():
print(f"Value after fast typing: '{fast_value}'")
# Check events
events_result = await bridge.evaluate(
tab_id, "(function() { return window.inputEvents; })()"
)
events_result = await bridge.evaluate(tab_id, "(function() { return window.inputEvents; })()")
print(f"Events logged: {events_result.get('result', [])}")
# Test 2: Slow typing (with delay) - should work
@@ -220,8 +218,7 @@ async def test_autocomplete():
# Check if dropdown appeared
dropdown_result = await bridge.evaluate(
tab_id,
"(function() { return document.querySelectorAll("
"'.autocomplete-items div').length; })()",
"(function() { return document.querySelectorAll('.autocomplete-items div').length; })()",
)
dropdown_count = dropdown_result.get("result", 0)
print(f"Dropdown items: {dropdown_count}")
@@ -87,9 +87,7 @@ async def test_huge_dom():
await bridge.navigate(tab_id, data_url, wait_until="load")
# Count elements
count_result = await bridge.evaluate(
tab_id, "(function() { return document.querySelectorAll('*').length; })()"
)
count_result = await bridge.evaluate(tab_id, "(function() { return document.querySelectorAll('*').length; })()")
elem_count = count_result.get("result", 0)
print(f"DOM elements: {elem_count}")
@@ -122,14 +120,10 @@ async def test_huge_dom():
# Test 3: Real LinkedIn
print("\n--- Test 3: Real LinkedIn Feed ---")
await bridge.navigate(
tab_id, "https://www.linkedin.com/feed", wait_until="load", timeout_ms=30000
)
await bridge.navigate(tab_id, "https://www.linkedin.com/feed", wait_until="load", timeout_ms=30000)
await asyncio.sleep(2)
count_result = await bridge.evaluate(
tab_id, "(function() { return document.querySelectorAll('*').length; })()"
)
count_result = await bridge.evaluate(tab_id, "(function() { return document.querySelectorAll('*').length; })()")
elem_count = count_result.get("result", 0)
print(f"LinkedIn DOM elements: {elem_count}")
@@ -136,10 +136,7 @@ async def test_selector_screenshot(bridge: BeelineBridge, tab_id: int, data_url:
print(" ⚠ WARNING: Selector screenshot not smaller (may be full page)")
return False
else:
print(
" ⚠ NOT IMPLEMENTED: selector param ignored"
f" (returns full page) - error={result.get('error')}"
)
print(f" ⚠ NOT IMPLEMENTED: selector param ignored (returns full page) - error={result.get('error')}")
print(" NOTE: selector parameter exists in signature but is not used in implementation")
return False
@@ -181,9 +178,7 @@ async def test_screenshot_timeout(bridge: BeelineBridge, tab_id: int, data_url:
print(f" ⚠ Fast enough to beat timeout: {err!r} in {elapsed:.3f}s")
return True # Not a failure, just fast
else:
print(
f" ⚠ Screenshot completed before timeout ({elapsed:.3f}s) - too fast to test timeout"
)
print(f" ⚠ Screenshot completed before timeout ({elapsed:.3f}s) - too fast to test timeout")
return True # Still ok, just very fast
@@ -137,14 +137,8 @@ async def test_problematic_site(bridge: BeelineBridge, tab_id: int) -> dict:
changed = False
for key in after_data:
if key in before_data:
b_val = (
before_data[key].get("scrollTop", 0)
if isinstance(before_data[key], dict)
else 0
)
a_val = (
after_data[key].get("scrollTop", 0) if isinstance(after_data[key], dict) else 0
)
b_val = before_data[key].get("scrollTop", 0) if isinstance(before_data[key], dict) else 0
a_val = after_data[key].get("scrollTop", 0) if isinstance(after_data[key], dict) else 0
if a_val != b_val:
print(f" ✓ CHANGE DETECTED: {key} scrolled from {b_val} to {a_val}")
changed = True
-18
@@ -1,18 +0,0 @@
This project uses ruff for Python linting and formatting.
Rules:
- Line length: 100 characters
- Python target: 3.11+
- Use double quotes for strings
- Sort imports with isort (ruff I rules): stdlib, third-party, first-party (framework), local
- Combine as-imports
- Use type hints on all function signatures
- Use `from __future__ import annotations` for modern type syntax
- Raise exceptions with `from` in except blocks (B904)
- No unused imports (F401), no unused variables (F841)
- Prefer list/dict/set comprehensions over map/filter (C4)
Run `make lint` to auto-fix, `make check` to verify without modifying files.
Run `make format` to apply ruff formatting.
The ruff config lives in core/pyproject.toml under [tool.ruff].
-35
@@ -1,35 +0,0 @@
# Git
.git/
.gitignore
# Documentation
*.md
docs/
LICENSE
# IDE
.idea/
.vscode/
# Dependencies (rebuilt in container)
node_modules/
# Build artifacts
dist/
build/
coverage/
# Environment files
.env*
config.yaml
# Logs
*.log
logs/
# OS
.DS_Store
Thumbs.db
# GitHub
.github/
+3
@@ -22,3 +22,6 @@ indent_size = 2
[Makefile]
indent_style = tab
[*.{sh,ps1}]
end_of_line = lf
+5 -1
@@ -16,7 +16,6 @@
# Shell scripts (must use LF)
*.sh text eol=lf
quickstart.sh text eol=lf
# PowerShell scripts (Windows-friendly)
*.ps1 text eol=lf
@@ -122,3 +121,8 @@ CODE_OF_CONDUCT* text
*.db binary
*.sqlite binary
*.sqlite3 binary
# Lockfiles — mark generated so GitHub collapses them in PR diffs
*.lock linguist-generated=true -diff
package-lock.json linguist-generated=true -diff
uv.lock linguist-generated=true -diff
-3
@@ -1,3 +0,0 @@
{
"mcpServers": {}
}
+3 -3
@@ -959,7 +959,7 @@ uv run pytest -m "not live"
**Unit Test**
```python
import pytest
from framework.graph.node import Node
from framework.orchestrator import NodeSpec as Node
def test_node_creation():
node = Node(id="test", name="Test Node", node_type="event_loop")
@@ -977,8 +977,8 @@ async def test_node_execution():
**Integration Test**
```python
import pytest
from framework.graph.executor import GraphExecutor
from framework.graph.node import Node
from framework.orchestrator.orchestrator import Orchestrator as GraphExecutor
from framework.orchestrator import NodeSpec as Node
@pytest.mark.asyncio
async def test_graph_execution_with_multiple_nodes():
+11 -138
@@ -1,5 +1,5 @@
<p align="center">
<img width="100%" alt="Hive Banner" src="https://github.com/user-attachments/assets/a027429b-5d3c-4d34-88e4-0feaeaabbab3" />
<img width="100%" alt="Hive Banner" src="https://asset.acho.io/github/img/banner.gif" />
</p>
<p align="center">
@@ -40,7 +40,16 @@
## Overview
Hive is a runtime harness for AI agents in production. You describe your goal in natural language; a coding agent (the queen) generates the agent graph and connection code to achieve it. During execution, the harness manages state isolation, checkpoint-based crash recovery, cost enforcement, and real-time observability. When agents fail, the framework captures failure data, evolves the graph through the coding agent, and redeploys automatically. Built-in human-in-the-loop nodes, browser control, credential management, and parallel execution give you production reliability without sacrificing adaptability.
OpenHive is a zero-setup, model-agnostic execution harness that dynamically generates multi-agent topologies to tackle complex, long-running business workflows without requiring any orchestration boilerplate. By simply defining your objective, the runtime compiles a strict, graph-based execution DAG that safely coordinates specialized agents to execute concurrent tasks in parallel. Backed by persistent, role-based memory that intelligently evolves with your project's context, OpenHive ensures deterministic fault tolerance, deep state observability, and seamless asynchronous execution across whichever underlying LLMs you choose to plug in.
## Features
- ✅ Multi-Agent Coordination for parallel task execution
- ✅ Graph-based execution for recurring and complex processes
- ✅ Role-based memory that evolves with your projects
- ✅ Zero Setup - No technical configuration required
- ✅ General Compute Use and Browser Use with Native Extension
- ✅ Custom Model Support
Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.
@@ -139,17 +148,6 @@ Now you can run an agent by selecting the agent (either an existing agent or exa
<img width="2549" height="1174" alt="Screenshot 2026-03-12 at 9 27 36PM" src="https://github.com/user-attachments/assets/7c7d30fa-9ceb-4c23-95af-b1caa405547d" />
## Features
- **Browser-Use** - Control the browser on your computer to achieve hard tasks
- **Parallel Execution** - Execute the generated graph in parallel. This way you can have multiple agents completing the jobs for you
- **[Goal-Driven Generation](docs/key_concepts/goals_outcome.md)** - Define objectives in natural language; the coding agent generates the agent graph and connection code to achieve them
- **[Adaptiveness](docs/key_concepts/evolution.md)** - Framework captures failures, calibrates according to the objectives, and evolves the agent graph
- **[Dynamic Node Connections](docs/key_concepts/graph.md)** - No predefined edges; connection code is generated by any capable LLM based on your goals
- **SDK-Wrapped Nodes** - Every node gets a shared data buffer, local RLM memory, monitoring, tools, and LLM access out of the box
- **[Human-in-the-Loop](docs/key_concepts/graph.md#human-in-the-loop)** - Intervention nodes that pause execution for human input with configurable timeouts and escalation
- **Real-time Observability** - WebSocket streaming for live monitoring of agent execution, decisions, and node-to-node communication
## Integration
<a href="https://github.com/aden-hive/hive/tree/main/tools/src/aden_tools/tools"><img width="100%" alt="Integration" src="https://github.com/user-attachments/assets/a1573f93-cf02-4bb8-b3d5-b305b05b1e51" /></a>
@@ -209,131 +207,6 @@ flowchart LR
- [Configuration Guide](docs/configuration.md) - All configuration options
- [Architecture Overview](docs/architecture/README.md) - System design and structure
## Roadmap
Aden Hive Agent Framework aims to help developers build outcome-oriented, self-adaptive agents. See [roadmap.md](docs/roadmap.md) for details.
```mermaid
flowchart TB
%% Main Entity
User([User])
%% =========================================
%% EXTERNAL EVENT SOURCES
%% =========================================
subgraph ExtEventSource [External Event Source]
E_Sch["Schedulers"]
E_WH["Webhook"]
E_SSE["SSE"]
end
%% =========================================
%% SYSTEM NODES
%% =========================================
subgraph WorkerBees [Worker Bees]
WB_C["Conversation"]
WB_SP["System prompt"]
subgraph Graph [Graph]
direction TB
N1["Node"] --> N2["Node"] --> N3["Node"]
N1 -.-> AN["Active Node"]
N2 -.-> AN
N3 -.-> AN
%% Nested Event Loop Node
subgraph EventLoopNode [Event Loop Node]
ELN_L["listener"]
ELN_SP["System Prompt<br/>(Task)"]
ELN_EL["Event loop"]
ELN_C["Conversation"]
end
end
end
subgraph JudgeNode [Judge]
J_C["Criteria"]
J_P["Principles"]
J_EL["Event loop"] <--> J_S["Scheduler"]
end
subgraph QueenBee [Queen Bee]
QB_SP["System prompt"]
QB_EL["Event loop"]
QB_C["Conversation"]
end
subgraph Infra [Infra]
SA["Sub Agent"]
TR["Tool Registry"]
WTM["Write through Conversation Memory<br/>(Logs/RAM/Harddrive)"]
SM["Shared Memory<br/>(State/Harddrive)"]
EB["Event Bus<br/>(RAM)"]
CS["Credential Store<br/>(Harddrive/Cloud)"]
end
subgraph PC [PC]
B["Browser"]
CB["Codebase<br/>v 0.0.x ... v n.n.n"]
end
%% =========================================
%% CONNECTIONS & DATA FLOW
%% =========================================
%% External Event Routing
E_Sch --> ELN_L
E_WH --> ELN_L
E_SSE --> ELN_L
ELN_L -->|"triggers"| ELN_EL
%% User Interactions
User -->|"Talk"| WB_C
User -->|"Talk"| QB_C
User -->|"Read/Write Access"| CS
%% Inter-System Logic
ELN_C <-->|"Mirror"| WB_C
WB_C -->|"Focus"| AN
WorkerBees -->|"Inquire"| JudgeNode
JudgeNode -->|"Approve"| WorkerBees
%% Judge Alignments
J_C <-.->|"aligns"| WB_SP
J_P <-.->|"aligns"| QB_SP
%% Escalate path
J_EL -->|"Report (Escalate)"| QB_EL
%% Pub/Sub Logic
AN -->|"publish"| EB
EB -->|"subscribe"| QB_C
%% Infra and Process Spawning
ELN_EL -->|"Spawn"| SA
SA -->|"Inform"| ELN_EL
SA -->|"Starts"| B
B -->|"Report"| ELN_EL
TR -->|"Assigned"| ELN_EL
CB -->|"Modify Worker Bee"| WB_C
%% =========================================
%% SHARED MEMORY & LOGS ACCESS
%% =========================================
%% Worker Bees Access (link to node inside Graph subgraph)
AN <-->|"Read/Write"| WTM
AN <-->|"Read/Write"| SM
%% Queen Bee Access
QB_C <-->|"Read/Write"| WTM
QB_EL <-->|"Read/Write"| SM
%% Credentials Access
CS -->|"Read Access"| QB_C
```
## Contributing
We welcome contributions from the community! We're especially looking for help building tools, integrations, and example agents for the framework ([check #2805](https://github.com/aden-hive/hive/issues/2805)). If you're interested in extending its functionality, this is the perfect place to start. Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
+7 -21
@@ -52,9 +52,7 @@ _DEFAULT_REDIRECT_PORT = 51121
# This project reverse-engineered and published the public OAuth credentials
# for Google's Antigravity/Cloud Code Assist API.
# Source: https://github.com/NoeFabris/opencode-antigravity-auth
_CREDENTIALS_URL = (
"https://raw.githubusercontent.com/NoeFabris/opencode-antigravity-auth/dev/src/constants.ts"
)
_CREDENTIALS_URL = "https://raw.githubusercontent.com/NoeFabris/opencode-antigravity-auth/dev/src/constants.ts"
# Cached credentials fetched from public source
_cached_client_id: str | None = None
@@ -68,9 +66,7 @@ def _fetch_credentials_from_public_source() -> tuple[str | None, str | None]:
return _cached_client_id, _cached_client_secret
try:
req = urllib.request.Request(
_CREDENTIALS_URL, headers={"User-Agent": "Hive-Antigravity-Auth/1.0"}
)
req = urllib.request.Request(_CREDENTIALS_URL, headers={"User-Agent": "Hive-Antigravity-Auth/1.0"})
with urllib.request.urlopen(req, timeout=10) as resp:
content = resp.read().decode("utf-8")
import re
@@ -168,10 +164,7 @@ class OAuthCallbackHandler(BaseHTTPRequestHandler):
if "code" in query and "state" in query:
OAuthCallbackHandler.auth_code = query["code"][0]
OAuthCallbackHandler.state = query["state"][0]
self._send_response(
"Authentication successful! You can close this window "
"and return to the terminal."
)
self._send_response("Authentication successful! You can close this window and return to the terminal.")
return
self._send_response("Waiting for authentication...")
@@ -296,8 +289,7 @@ def validate_credentials(access_token: str, project_id: str = _DEFAULT_PROJECT_I
"Authorization": f"Bearer {access_token}",
"Content-Type": "application/json",
"User-Agent": (
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
"AppleWebKit/537.36 (KHTML, like Gecko) Antigravity/1.18.3"
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Antigravity/1.18.3"
),
"X-Goog-Api-Client": "google-cloud-sdk vscode_cloudshelleditor/0.1",
}
@@ -316,9 +308,7 @@ def validate_credentials(access_token: str, project_id: str = _DEFAULT_PROJECT_I
return False
def refresh_access_token(
refresh_token: str, client_id: str, client_secret: str | None
) -> dict | None:
def refresh_access_token(refresh_token: str, client_id: str, client_secret: str | None) -> dict | None:
"""Refresh the access token using the refresh token."""
data = {
"grant_type": "refresh_token",
@@ -361,9 +351,7 @@ def cmd_account_add(args: argparse.Namespace) -> int:
access_token = account.get("access")
refresh_token_str = account.get("refresh", "")
refresh_token = refresh_token_str.split("|")[0] if refresh_token_str else None
project_id = (
refresh_token_str.split("|")[1] if "|" in refresh_token_str else _DEFAULT_PROJECT_ID
)
project_id = refresh_token_str.split("|")[1] if "|" in refresh_token_str else _DEFAULT_PROJECT_ID
email = account.get("email", "unknown")
expires_ms = account.get("expires", 0)
expires_at = expires_ms / 1000.0 if expires_ms else 0.0
@@ -390,9 +378,7 @@ def cmd_account_add(args: argparse.Namespace) -> int:
# Update the account
account["access"] = new_access
account["expires"] = int((time.time() + expires_in) * 1000)
accounts_data["last_refresh"] = time.strftime(
"%Y-%m-%dT%H:%M:%SZ", time.gmtime()
)
accounts_data["last_refresh"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
save_accounts(accounts_data)
# Validate the refreshed token
File diff suppressed because it is too large
+497 -54
@@ -3,12 +3,14 @@
from __future__ import annotations
import json
import logging
import re
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Literal, Protocol, runtime_checkable
LEGACY_RUN_ID = "__legacy_run__"
logger = logging.getLogger(__name__)
def is_legacy_run_id(run_id: str | None) -> bool:
@@ -46,6 +48,24 @@ class Message:
is_skill_content: bool = False
# Logical worker run identifier for shared-session persistence
run_id: str | None = None
# True when this is a framework-injected continuation hint (continue-nudge
# on stream stall). Stored as a user message for API compatibility, but
# the UI should render it as a compact system notice, not user speech.
is_system_nudge: bool = False
# True when this message is a partial/truncated assistant turn reconstructed
# from a crashed or watchdog-cancelled stream. Signals that the original
# turn never finished — the model may or may not choose to redo it.
truncated: bool = False
# When non-None, identifies the parent session id this message was
# carried over from — used by fork_session_into_colony on the single
# compacted-summary message it writes when a colony is born from a
# queen DM. Presence of the field IS the "inherited" signal.
inherited_from: str | None = None
# True when this user message was synthesized from one or more
# fired triggers (timer/webhook), not typed by a human. The LLM still
# sees the message as a regular user turn; the UI uses this flag to
# render it as a trigger banner instead of a speech bubble.
is_trigger: bool = False
def to_llm_dict(self) -> dict[str, Any]:
"""Convert to OpenAI-format message dict."""
@@ -107,6 +127,14 @@ class Message:
d["image_content"] = self.image_content
if self.run_id is not None:
d["run_id"] = self.run_id
if self.is_system_nudge:
d["is_system_nudge"] = self.is_system_nudge
if self.truncated:
d["truncated"] = self.truncated
if self.inherited_from is not None:
d["inherited_from"] = self.inherited_from
if self.is_trigger:
d["is_trigger"] = self.is_trigger
return d
@classmethod
@@ -124,6 +152,10 @@ class Message:
is_client_input=data.get("is_client_input", False),
image_content=data.get("image_content"),
run_id=data.get("run_id"),
is_system_nudge=data.get("is_system_nudge", False),
truncated=data.get("truncated", False),
inherited_from=data.get("inherited_from"),
is_trigger=data.get("is_trigger", False),
)
@@ -160,10 +192,17 @@ def update_run_cursor(
def _extract_spillover_filename(content: str) -> str | None:
"""Extract spillover filename from a tool result annotation.
Matches patterns produced by EventLoopNode._truncate_tool_result():
- Large result: "saved to 'web_search_1.txt'"
- Small result: "[Saved to 'web_search_1.txt']"
Matches patterns produced by ``truncate_tool_result``:
- New large-result header: "Full result saved at: /abs/path/file.txt"
- Legacy bracketed trailer: "[Saved to 'file.txt']" (pre-2026-04-15,
retained here so cold conversations still resolve)
"""
# New prose format — ``saved at: <absolute path>``, terminated by
# newline or end-of-string.
match = re.search(r"[Ss]aved at:\s*(\S+)", content)
if match:
return match.group(1)
# Legacy format.
match = re.search(r"[Ss]aved to '([^']+)'", content)
return match.group(1) if match else None
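The two annotation formats can be exercised in isolation. A minimal sketch — the helper name is hypothetical, but the regexes are the ones above:

```python
import re

def extract_spillover(content: str) -> str | None:
    # New prose header: "Full result saved at: /abs/path/file.txt"
    match = re.search(r"[Ss]aved at:\s*(\S+)", content)
    if match:
        return match.group(1)
    # Legacy bracketed trailer: "[Saved to 'file.txt']"
    match = re.search(r"[Ss]aved to '([^']+)'", content)
    return match.group(1) if match else None

assert extract_spillover("Full result saved at: /tmp/web_search_1.txt") == "/tmp/web_search_1.txt"
assert extract_spillover("[Saved to 'web_search_1.txt']") == "web_search_1.txt"
assert extract_spillover("no annotation") is None
```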
@@ -308,6 +347,14 @@ class ConversationStore(Protocol):
async def delete_parts_before(self, seq: int, run_id: str | None = None) -> None: ...
async def write_partial(self, seq: int, data: dict[str, Any]) -> None: ...
async def read_partial(self, seq: int) -> dict[str, Any] | None: ...
async def read_all_partials(self) -> list[dict[str, Any]]: ...
async def clear_partial(self, seq: int) -> None: ...
async def close(self) -> None: ...
async def destroy(self) -> None: ...
@@ -379,10 +426,36 @@ class NodeConversation:
output_keys: list[str] | None = None,
store: ConversationStore | None = None,
run_id: str | None = None,
compaction_buffer_tokens: int | None = None,
compaction_buffer_ratio: float | None = None,
compaction_warning_buffer_tokens: int | None = None,
) -> None:
self._system_prompt = system_prompt
# Optional split: when a caller updates the prompt with a
# ``dynamic_suffix`` argument, we remember the static prefix and
# suffix separately so the LLM wrapper can emit them as two
# Anthropic system content blocks with a cache breakpoint between
# them. ``_system_prompt`` stays as the concatenated form used for
# persistence and for the legacy single-block LLM path.
# On restore, these default to the concat/empty pair — the next
# AgentLoop iteration's dynamic-prompt refresh step repopulates.
self._system_prompt_static: str = system_prompt
self._system_prompt_dynamic_suffix: str = ""
self._max_context_tokens = max_context_tokens
self._compaction_threshold = compaction_threshold
# Buffer-based compaction trigger (Gap 7). When set, takes
# precedence over the multiplicative compaction_threshold so the
# loop reserves a fixed headroom for the next turn's input+output
# instead of trying to get exactly X% of the way to the hard
# limit. If left as None the legacy threshold-based rule is
# used, keeping old call sites behaving identically.
self._compaction_buffer_tokens = compaction_buffer_tokens
# Ratio component of the hybrid buffer. Combines additively with
# _compaction_buffer_tokens so callers can express "reserve N tokens
# plus M% of the window" — the absolute floor matters on tiny
# windows, the ratio matters on large ones.
self._compaction_buffer_ratio = compaction_buffer_ratio
self._compaction_warning_buffer_tokens = compaction_warning_buffer_tokens
self._output_keys = output_keys
self._store = store
self._messages: list[Message] = []
@@ -396,15 +469,56 @@ class NodeConversation:
@property
def system_prompt(self) -> str:
"""Full concatenated system prompt (static + dynamic suffix, if any).
This is the canonical form used for persistence and for the legacy
single-block LLM path. Split-prompt callers should read
``system_prompt_static`` and ``system_prompt_dynamic_suffix`` instead.
"""
return self._system_prompt
def update_system_prompt(self, new_prompt: str) -> None:
@property
def system_prompt_static(self) -> str:
"""Static prefix of the system prompt (cache-stable).
Equals ``system_prompt`` when no split is in use. When the AgentLoop
calls ``update_system_prompt(static, dynamic_suffix=...)``, this is
the piece sent as the cache-controlled first block.
"""
return self._system_prompt_static
@property
def system_prompt_dynamic_suffix(self) -> str:
"""Dynamic tail of the system prompt (not cached).
Empty unless the consumer splits its prompt. The LLM wrapper uses a
non-empty suffix to emit a two-block system content list with a
cache breakpoint between the static prefix and this tail.
"""
return self._system_prompt_dynamic_suffix
def update_system_prompt(self, new_prompt: str, dynamic_suffix: str | None = None) -> None:
"""Update the system prompt.
Used in continuous conversation mode at phase transitions to swap
Layer 3 (focus) while preserving the conversation history.
When ``dynamic_suffix`` is provided, ``new_prompt`` is interpreted as
the STATIC prefix and ``dynamic_suffix`` as the per-turn tail; they
travel to the LLM as two separate cache-controlled blocks but are
persisted as a single concatenated string for backward-compat
restore. ``new_prompt`` alone (suffix left None) keeps the legacy
single-string behavior.
"""
self._system_prompt = new_prompt
if dynamic_suffix is None:
# Legacy single-string path — static == full, no suffix split.
self._system_prompt = new_prompt
self._system_prompt_static = new_prompt
self._system_prompt_dynamic_suffix = ""
else:
self._system_prompt_static = new_prompt
self._system_prompt_dynamic_suffix = dynamic_suffix
self._system_prompt = f"{new_prompt}\n\n{dynamic_suffix}" if dynamic_suffix else new_prompt
self._meta_persisted = False # re-persist with new prompt
def set_current_phase(self, phase_id: str) -> None:
@@ -443,6 +557,8 @@ class NodeConversation:
is_transition_marker: bool = False,
is_client_input: bool = False,
image_content: list[dict[str, Any]] | None = None,
is_system_nudge: bool = False,
is_trigger: bool = False,
) -> Message:
msg = Message(
seq=self._next_seq,
@@ -453,6 +569,8 @@ class NodeConversation:
is_transition_marker=is_transition_marker,
is_client_input=is_client_input,
image_content=image_content,
is_system_nudge=is_system_nudge,
is_trigger=is_trigger,
)
self._messages.append(msg)
self._next_seq += 1
@@ -466,6 +584,8 @@ class NodeConversation:
self,
content: str,
tool_calls: list[dict[str, Any]] | None = None,
*,
truncated: bool = False,
) -> Message:
msg = Message(
seq=self._next_seq,
@@ -474,6 +594,7 @@ class NodeConversation:
tool_calls=tool_calls,
phase_id=self._current_phase,
run_id=self._run_id,
truncated=truncated,
)
self._messages.append(msg)
self._next_seq += 1
@@ -489,6 +610,27 @@ class NodeConversation:
image_content: list[dict[str, Any]] | None = None,
is_skill_content: bool = False,
) -> Message:
# Dedup guard: reject a second tool_result for the same tool_use_id.
# Anthropic's API only accepts one result per tool_call, and a duplicate
# causes a hard 400 two turns later ("messages with role 'tool' must
# be a response to a preceding message with 'tool_calls'"). Duplicates
# can arise when a tool_call_timeout fires and records a placeholder
# error, then the real executor thread eventually delivers the actual
# result (the thread kept running inside run_in_executor — see
# tool_result_handler.execute_tool). We keep the FIRST result to
# preserve whatever state the agent already reasoned about.
for existing in reversed(self._messages):
if existing.role == "tool" and existing.tool_use_id == tool_use_id:
import logging as _logging
_logging.getLogger(__name__).warning(
"add_tool_result: dropping duplicate result for tool_use_id=%s "
"(first result preserved, %d chars; new result ignored, %d chars)",
tool_use_id,
len(existing.content),
len(content),
)
return existing
msg = Message(
seq=self._next_seq,
role="tool",
@@ -508,6 +650,59 @@ class NodeConversation:
# --- Query -------------------------------------------------------------
def find_completed_tool_call(
self,
name: str,
tool_input: dict[str, Any],
within_last_turns: int = 3,
) -> Message | None:
"""Return the most recent assistant message that issued a tool call
with the same (name + canonical-json args) AND received a non-error
tool result, within the last ``within_last_turns`` assistant turns.
Used by the replay detector to flag when the model is about to redo
a successful call: we prepend a steer onto the upcoming result but
still execute, so tools like browser_screenshot that are legitimately
repeated are not silently skipped.
"""
try:
target_canonical = json.dumps(tool_input, sort_keys=True, default=str)
except (TypeError, ValueError):
target_canonical = str(tool_input)
# Walk backwards over recent assistant messages
assistant_turns_seen = 0
for idx in range(len(self._messages) - 1, -1, -1):
m = self._messages[idx]
if m.role != "assistant":
continue
assistant_turns_seen += 1
if assistant_turns_seen > within_last_turns:
break
if not m.tool_calls:
continue
for tc in m.tool_calls:
func = tc.get("function", {}) if isinstance(tc, dict) else {}
tc_name = func.get("name")
if tc_name != name:
continue
args_str = func.get("arguments", "")
try:
parsed = json.loads(args_str) if isinstance(args_str, str) else args_str
canonical = json.dumps(parsed, sort_keys=True, default=str)
except (TypeError, ValueError):
canonical = str(args_str)
if canonical != target_canonical:
continue
# Found a match — now verify its result was not an error.
tc_id = tc.get("id")
for later in self._messages[idx + 1 :]:
if later.role == "tool" and later.tool_use_id == tc_id:
if not later.is_error:
return m
break
return None
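The duplicate-call fingerprint is just canonical JSON — sorted keys plus a `str()` fallback — so dict key ordering cannot defeat the match. A standalone sketch (the helper name is illustrative):

```python
import json

def fingerprint(tool_input) -> str:
    # Sorted keys + default=str mirrors the canonicalization above.
    try:
        return json.dumps(tool_input, sort_keys=True, default=str)
    except (TypeError, ValueError):
        return str(tool_input)

# Key order doesn't matter: both inputs produce the same fingerprint.
assert fingerprint({"url": "https://example.com", "depth": 2}) == fingerprint(
    {"depth": 2, "url": "https://example.com"}
)
```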
def to_llm_messages(self) -> list[dict[str, Any]]:
"""Return messages as OpenAI-format dicts (system prompt excluded).
@@ -565,11 +760,18 @@ class NodeConversation:
) -> list[dict[str, Any]]:
"""Ensure tool_call / tool_result pairs are consistent.
1. **Orphaned tool results** (tool_result with no preceding tool_use)
are dropped. This happens when compaction removes an assistant
message but leaves its tool-result messages behind.
2. **Orphaned tool calls** (tool_use with no following tool_result)
get a synthetic error result appended. This happens when a loop
1. **Orphaned tool results** (tool_result with no matching tool_use
anywhere) are dropped. Happens after compaction removes the
parent assistant message.
2. **Positionally orphaned tool results** (tool_result separated
from its parent by a non-tool message, e.g. a user injection)
are dropped. The Anthropic API requires tool messages to
follow immediately after the assistant message that issued
the matching tool_call.
3. **Duplicate tool results** (same tool_call_id appearing more
than once) are dropped; only the first is kept.
4. **Orphaned tool calls** (tool_use with no following tool_result)
get a synthetic error result appended. Happens when the loop
is cancelled mid-tool-execution.
"""
# Pass 1: collect all tool_call IDs from assistant messages so we
@@ -582,41 +784,75 @@ class NodeConversation:
if tc_id:
all_tool_call_ids.add(tc_id)
# Pass 2: build repaired list — drop orphaned tool results, patch
# missing tool results.
# Pass 2: build repaired list — drop orphaned tool results, drop
# positional orphans and duplicates, patch missing tool results.
#
# ``open_tool_calls`` holds the tool_call IDs we're still expecting
# results for: it's populated when we emit an assistant-with-tool_calls
# and drained as matching tool messages follow. Any tool message
# whose id is not currently open is positionally invalid and gets
# dropped — that closes the gap that caused the tool-after-user
# 400 errors.
repaired: list[dict[str, Any]] = []
for i, m in enumerate(msgs):
# Drop tool-result messages whose tool_call_id has no matching
# tool_use in any assistant message (orphaned by compaction).
if m.get("role") == "tool":
tid = m.get("tool_call_id")
if tid and tid not in all_tool_call_ids:
continue # skip orphaned result
open_tool_calls: set[str] = set()
seen_tool_ids: set[str] = set()
for m in msgs:
role = m.get("role")
repaired.append(m)
tool_calls = m.get("tool_calls")
if m.get("role") != "assistant" or not tool_calls:
if role == "tool":
tid = m.get("tool_call_id")
# Drop tool results with no matching tool_use anywhere.
if not tid or tid not in all_tool_call_ids:
continue
# Drop duplicates (same id appearing twice) — keep first.
if tid in seen_tool_ids:
continue
# Drop positional orphans — tool messages whose parent
# assistant isn't the still-open assistant block.
if tid not in open_tool_calls:
continue
open_tool_calls.discard(tid)
seen_tool_ids.add(tid)
repaired.append(m)
continue
# Collect IDs of tool results that follow this assistant message
answered: set[str] = set()
for j in range(i + 1, len(msgs)):
if msgs[j].get("role") == "tool":
tid = msgs[j].get("tool_call_id")
if tid:
answered.add(tid)
else:
break # stop at first non-tool message
# Patch any missing results
for tc in tool_calls:
tc_id = tc.get("id")
if tc_id and tc_id not in answered:
# Any non-tool message closes the current assistant tool block.
# If the previous assistant left tool_calls unanswered, patch
# synthetic error results before emitting this message so the
# API sees a complete pairing.
if open_tool_calls:
for stale_id in list(open_tool_calls):
repaired.append(
{
"role": "tool",
"tool_call_id": tc_id,
"tool_call_id": stale_id,
"content": "ERROR: Tool execution was interrupted.",
}
)
seen_tool_ids.add(stale_id)
open_tool_calls.clear()
repaired.append(m)
if role == "assistant":
for tc in m.get("tool_calls") or []:
tc_id = tc.get("id")
if tc_id and tc_id not in seen_tool_ids:
open_tool_calls.add(tc_id)
# Tail: if the conversation ends with an assistant that issued
# tool_calls and no results followed, patch them so the next
# turn's first message can be a valid assistant/user response.
if open_tool_calls:
for stale_id in list(open_tool_calls):
repaired.append(
{
"role": "tool",
"tool_call_id": stale_id,
"content": "ERROR: Tool execution was interrupted.",
}
)
return repaired
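Worked through on a toy transcript (the message shapes are assumed, not taken from a real session), the single-pass rules compose like this:

```python
# Input to the repair pass, OpenAI-format dicts:
msgs = [
    {"role": "assistant", "content": "", "tool_calls": [
        {"id": "call_1", "function": {"name": "web_search", "arguments": "{}"}},
    ]},
    {"role": "user", "content": "never mind, try something else"},
    {"role": "tool", "tool_call_id": "call_1", "content": "late real result"},
]
# What the pass emits:
#   1. The assistant message — opens call_1.
#   2. The user message closes the block while call_1 is still open, so a
#      synthetic {"role": "tool", "tool_call_id": "call_1",
#      "content": "ERROR: Tool execution was interrupted."} is patched in first.
#   3. The late real result is then dropped: call_1 was already answered by
#      the synthetic result, so it is both a duplicate and a positional orphan.
```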
def estimate_tokens(self) -> int:
@@ -665,8 +901,48 @@ class NodeConversation:
return self.estimate_tokens() / self._max_context_tokens
def needs_compaction(self) -> bool:
"""True when the conversation should be compacted before the
next LLM call.
Hybrid buffer rule: the headroom reserved before compaction fires
is the SUM of an absolute fixed component and a ratio of the hard
context limit:
effective_buffer = compaction_buffer_tokens
+ compaction_buffer_ratio * max_context_tokens
The fixed component gives a floor on tiny windows; the ratio
keeps the trigger meaningful on large windows where any constant
buffer becomes a rounding error (an 8k buffer fires at 75% usage on a
32k window but not until 96% on a 200k window). Compaction fires when the
current estimate would consume more than (limit - effective_buffer).
When neither component is configured, falls back to the legacy
multiplicative threshold so old callers keep behaving identically.
"""
if self._max_context_tokens <= 0:
return False
fixed = self._compaction_buffer_tokens
ratio = self._compaction_buffer_ratio
if fixed is not None or ratio is not None:
effective_buffer = (fixed or 0) + (ratio or 0.0) * self._max_context_tokens
budget = self._max_context_tokens - effective_buffer
return self.estimate_tokens() >= max(0.0, budget)
return self.estimate_tokens() >= self._max_context_tokens * self._compaction_threshold
def compaction_warning(self) -> bool:
"""True when the conversation has crossed the warning threshold
but not yet the hard compaction trigger.
Used by telemetry / UI to show a "context getting tight" hint
before a compaction pass actually runs. Returns False when no
warning buffer is configured (legacy behaviour).
"""
if self._max_context_tokens <= 0 or self._compaction_warning_buffer_tokens is None:
return False
warn_at = self._max_context_tokens - self._compaction_warning_buffer_tokens
return self.estimate_tokens() >= max(0, warn_at)
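The hybrid rule is plain arithmetic. With made-up numbers — an 8k floor plus a 10% ratio on a 200k window:

```python
max_context_tokens = 200_000
compaction_buffer_tokens = 8_000    # absolute floor (illustrative value)
compaction_buffer_ratio = 0.10      # ratio component (illustrative value)

effective_buffer = compaction_buffer_tokens + compaction_buffer_ratio * max_context_tokens
budget = max_context_tokens - effective_buffer  # 200_000 - 28_000 = 172_000

assert 173_000 >= budget        # estimate over budget -> compaction fires
assert not (150_000 >= budget)  # comfortably under -> no compaction
```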
# --- Output-key extraction ---------------------------------------------
def _extract_protected_values(self, messages: list[Message]) -> dict[str, str]:
@@ -743,7 +1019,7 @@ class NodeConversation:
continue # never prune errors
if msg.is_skill_content:
continue # never prune activated skill instructions (AS-10)
if msg.content.startswith("[Pruned tool result"):
if msg.content.startswith(("Pruned tool result", "[Pruned tool result")):
continue # already pruned
# Tiny results (set_output acks, confirmations) — pruning
# saves negligible space but makes the LLM think the call
@@ -775,12 +1051,12 @@ class NodeConversation:
if spillover:
placeholder = (
f"[Pruned tool result: {orig_len} chars. "
f"Full data in '{spillover}'. "
f"Use read_file('{spillover}') to retrieve.]"
f"Pruned tool result ({orig_len:,} chars) cleared from context. "
f"Full data saved at: {spillover}\n"
f"Read the complete data with read_file(path='{spillover}')."
)
else:
placeholder = f"[Pruned tool result: {orig_len} chars cleared from context.]"
placeholder = f"Pruned tool result ({orig_len:,} chars) cleared from context."
self._messages[i] = Message(
seq=msg.seq,
@@ -802,6 +1078,78 @@ class NodeConversation:
self._last_api_input_tokens = None
return count
async def evict_old_images(self, keep_latest: int = 2) -> int:
"""Strip ``image_content`` from older messages, keeping the most recent.
Screenshots from ``browser_screenshot`` are inlined into the
message's ``image_content`` as base64 data URLs. Each screenshot
costs ~250k tokens when the provider counts the base64 as
text four screenshots push a conversation over gemini's 1M
context limit and trigger out-of-context garbage output (see
``session_20260415_104727_5c4ed7ff`` for the terminal case
where the model emitted ``协日`` as its final text, then stopped).
This method walks backward through messages and keeps
``image_content`` intact on the most recent ``keep_latest``
messages that have images. Older messages get their
``image_content`` nulled out; the text content (metadata
like url, dimensions, scale hints) stays, but the raw bytes
are dropped. Storage is updated too so cold-restore sees the
same evicted state.
Run this right after every tool result is recorded so image
context stays bounded even within a single iteration (the
compaction pipeline only fires at iteration boundaries, too
late for a single turn that takes 4 screenshots).
Returns the number of messages whose image_content was evicted.
"""
if not self._messages or keep_latest < 0:
return 0
# Find messages carrying images, walking newest → oldest.
image_indices: list[int] = []
for i in range(len(self._messages) - 1, -1, -1):
if self._messages[i].image_content:
image_indices.append(i)
# Nothing to evict if we have ≤ keep_latest images total.
if len(image_indices) <= keep_latest:
return 0
# Evict everything past the first keep_latest (newest) entries.
to_evict = image_indices[keep_latest:]
evicted = 0
for idx in to_evict:
msg = self._messages[idx]
self._messages[idx] = Message(
seq=msg.seq,
role=msg.role,
content=msg.content,
tool_use_id=msg.tool_use_id,
tool_calls=msg.tool_calls,
is_error=msg.is_error,
phase_id=msg.phase_id,
is_transition_marker=msg.is_transition_marker,
is_client_input=msg.is_client_input,
image_content=None, # ← dropped
is_skill_content=msg.is_skill_content,
run_id=msg.run_id,
)
evicted += 1
if self._store:
await self._store.write_part(msg.seq, self._messages[idx].to_storage_dict())
if evicted:
# Reset token estimate — image blocks no longer contribute.
self._last_api_input_tokens = None
logger.info(
"evict_old_images: dropped image_content from %d message(s), kept %d most recent",
evicted,
keep_latest,
)
return evicted
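The newest-first selection reduces to a small index walk. Sketched on booleans only, no Message objects:

```python
def indices_to_evict(has_image: list[bool], keep_latest: int) -> list[int]:
    # Walk newest -> oldest collecting image-bearing indices, then evict
    # everything past the first keep_latest entries — same order as above.
    newest_first = [i for i in range(len(has_image) - 1, -1, -1) if has_image[i]]
    return newest_first[keep_latest:]

# Messages 0, 2, 3 carry images; keeping the 2 newest (3 and 2) evicts 0.
assert indices_to_evict([True, False, True, True], keep_latest=2) == [0]
```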
async def compact(
self,
summary: str,
@@ -954,9 +1302,7 @@ class NodeConversation:
for msg in old_messages:
if msg.role != "assistant" or not msg.tool_calls:
continue
has_protected = any(
tc.get("function", {}).get("name") == "set_output" for tc in msg.tool_calls
)
has_protected = any(tc.get("function", {}).get("name") == "set_output" for tc in msg.tool_calls)
tc_ids = {tc.get("id", "") for tc in msg.tool_calls}
if has_protected:
protected_tc_ids |= tc_ids
@@ -1062,16 +1408,18 @@ class NodeConversation:
# Nothing to save — skip file creation
conv_filename = ""
# Build reference message
# Build reference message. Prose format (no brackets) — see the
# poison-pattern note on truncate_tool_result. Frontier models
# autocomplete `[...']` trailers into their own text turns.
ref_parts: list[str] = []
if conv_filename:
full_path = str((spill_path / conv_filename).resolve())
ref_parts.append(
f"[Previous conversation saved to '{full_path}'. "
f"Use read_file('{conv_filename}') to review if needed.]"
f"Previous conversation saved at: {full_path}\n"
f"Read the full transcript with read_file('{conv_filename}')."
)
elif not collapsed_msgs:
ref_parts.append("[Previous freeform messages compacted.]")
ref_parts.append("(Previous freeform messages compacted.)")
# Aggressive: add collapsed tool-call history to the reference
if collapsed_msgs:
@@ -1150,11 +1498,7 @@ class NodeConversation:
def export_summary(self) -> str:
"""Structured summary with [STATS], [CONFIG], [RECENT_MESSAGES] sections."""
prompt_preview = (
self._system_prompt[:80] + "..."
if len(self._system_prompt) > 80
else self._system_prompt
)
prompt_preview = self._system_prompt[:80] + "..." if len(self._system_prompt) > 80 else self._system_prompt
lines = [
"[STATS]",
@@ -1187,6 +1531,45 @@ class NodeConversation:
await self._persist_meta()
await self._store.write_part(message.seq, message.to_storage_dict())
await self._write_next_seq()
# Any partial checkpoint for this seq is now superseded by the real
# part — clear it so a future restore doesn't resurrect stale text.
try:
await self._store.clear_partial(message.seq)
except AttributeError:
# Older stores may not implement partials; ignore.
pass
async def checkpoint_partial_assistant(
self,
accumulated_text: str,
tool_calls: list[dict[str, Any]] | None = None,
) -> None:
"""Write an in-flight assistant turn's state to disk under the next seq.
Called from the stream event loop. Safe to call repeatedly; each call
overwrites the prior checkpoint. Persisted via ``write_partial`` so it
does NOT appear in ``read_parts()`` and cannot be double-loaded. Cleared
automatically when ``add_assistant_message`` for this seq lands.
"""
if self._store is None:
return
if not self._meta_persisted:
await self._persist_meta()
payload: dict[str, Any] = {
"seq": self._next_seq,
"role": "assistant",
"content": accumulated_text,
"phase_id": self._current_phase,
"run_id": self._run_id,
"truncated": True,
}
if tool_calls:
payload["tool_calls"] = tool_calls
try:
await self._store.write_partial(self._next_seq, payload)
except AttributeError:
# Older stores may not implement partials; ignore.
pass
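The store side of the checkpoint protocol is small enough to stub in memory. A sketch against the partial-checkpoint methods of the ConversationStore protocol above — the class here is a stand-in, not the real store:

```python
import asyncio
from typing import Any

class InMemoryPartials:
    """Stand-in implementing only the partial-checkpoint methods."""

    def __init__(self) -> None:
        self._partials: dict[int, dict[str, Any]] = {}

    async def write_partial(self, seq: int, data: dict[str, Any]) -> None:
        self._partials[seq] = data  # repeated checkpoints overwrite: last wins

    async def read_all_partials(self) -> list[dict[str, Any]]:
        return list(self._partials.values())

    async def clear_partial(self, seq: int) -> None:
        self._partials.pop(seq, None)

async def demo() -> None:
    store = InMemoryPartials()
    # The stream loop checkpoints the in-flight turn under the next seq...
    await store.write_partial(7, {"seq": 7, "role": "assistant",
                                  "content": "partial answ", "truncated": True})
    # ...a crash here leaves it behind for restore() to resurrect.
    assert (await store.read_all_partials())[0]["truncated"]
    # On a clean finish the real part supersedes it and the partial is cleared.
    await store.clear_partial(7)
    assert await store.read_all_partials() == []

asyncio.run(demo())
```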
async def _persist_meta(self) -> None:
"""Lazily write conversation metadata to the store (called once).
@@ -1200,6 +1583,9 @@ class NodeConversation:
"system_prompt": self._system_prompt,
"max_context_tokens": self._max_context_tokens,
"compaction_threshold": self._compaction_threshold,
"compaction_buffer_tokens": self._compaction_buffer_tokens,
"compaction_buffer_ratio": self._compaction_buffer_ratio,
"compaction_warning_buffer_tokens": (self._compaction_warning_buffer_tokens),
"output_keys": self._output_keys,
}
await self._store.write_meta(run_meta)
@@ -1247,12 +1633,28 @@ class NodeConversation:
output_keys=meta.get("output_keys"),
store=store,
run_id=run_id,
compaction_buffer_tokens=meta.get("compaction_buffer_tokens"),
compaction_buffer_ratio=meta.get("compaction_buffer_ratio"),
compaction_warning_buffer_tokens=meta.get("compaction_warning_buffer_tokens"),
)
conv._meta_persisted = True
parts = await store.read_parts()
if phase_id:
parts = [p for p in parts if p.get("phase_id") == phase_id]
filtered_parts = [p for p in parts if p.get("phase_id") == phase_id]
if filtered_parts:
parts = filtered_parts
elif parts and all(p.get("phase_id") is None for p in parts):
# Backward compatibility: older isolated stores (including queen
# sessions) persisted parts without phase_id. In that case, the
# phase filter would incorrectly hide the entire conversation.
logger.info(
"Restoring legacy unphased conversation without applying phase filter (phase_id=%s, parts=%d)",
phase_id,
len(parts),
)
else:
parts = filtered_parts
# Filter by run_id so intentional restarts (new run_id) start fresh
# while crash recovery (same run_id) loads prior parts.
if run_id and not is_legacy_run_id(run_id):
@@ -1266,4 +1668,45 @@ class NodeConversation:
elif conv._messages:
conv._next_seq = conv._messages[-1].seq + 1
# Surface any leftover partial checkpoints as truncated messages so
# the next turn sees what the interrupted stream was in the middle
# of producing. Only partials whose seq is >= next_seq are meaningful;
# anything lower was already superseded by a real part.
try:
partials = await store.read_all_partials()
except AttributeError:
partials = []
for p in partials:
pseq = p.get("seq", -1)
if pseq < conv._next_seq:
# Stale — clean it up.
try:
await store.clear_partial(pseq)
except AttributeError:
pass
continue
# Only resurrect partials relevant to this run / phase.
if run_id and not is_legacy_run_id(run_id) and p.get("run_id") != run_id:
continue
if phase_id and p.get("phase_id") is not None and p.get("phase_id") != phase_id:
continue
# Reconstruct as a truncated assistant message.
msg = Message(
seq=pseq,
role="assistant",
content=p.get("content", "") or "",
tool_calls=p.get("tool_calls"),
phase_id=p.get("phase_id"),
run_id=p.get("run_id"),
truncated=True,
)
conv._messages.append(msg)
conv._next_seq = max(conv._next_seq, pseq + 1)
logger.info(
"restore: resurrected truncated partial seq=%d (text=%d chars, tool_calls=%d)",
pseq,
len(msg.content),
len(msg.tool_calls or []),
)
return conv
@@ -22,8 +22,8 @@ from typing import Any
from framework.agent_loop.conversation import Message, NodeConversation
from framework.agent_loop.internals.event_publishing import publish_context_usage
from framework.agent_loop.internals.types import LoopConfig, OutputAccumulator
from framework.orchestrator.node import NodeContext
from framework.host.event_bus import EventBus
from framework.orchestrator.node import NodeContext
logger = logging.getLogger(__name__)
@@ -80,7 +80,7 @@ def microcompact(
msg = messages[i]
if msg.role != "tool" or msg.is_error or msg.is_skill_content:
continue
if msg.content.startswith(("[Pruned tool result", "[Old tool result")):
if msg.content.startswith(("Pruned tool result", "[Pruned tool result", "[Old tool result")):
continue
if len(msg.content) < 100:
continue
@@ -102,12 +102,12 @@ def microcompact(
orig_len = len(msg.content)
if spillover:
placeholder = (
f"[Old tool result cleared: {orig_len} chars. "
f"Full data in '{spillover}'. "
f"Use read_file('{spillover}') to retrieve.]"
f"Old tool result ({orig_len:,} chars) cleared from context. "
f"Full data saved at: {spillover}\n"
f"Read the complete data with read_file(path='{spillover}')."
)
else:
placeholder = f"[Old tool result cleared: {orig_len} chars.]"
placeholder = f"Old tool result ({orig_len:,} chars) cleared from context."
# Mutate in-place (microcompact is synchronous, no store writes)
conversation._messages[i] = Message(
@@ -142,7 +142,14 @@ def _find_tool_name_for_result(messages: list[Message], tool_msg: Message) -> st
def _extract_spillover_filename_inline(content: str) -> str | None:
"""Quick inline check for spillover filename in tool result content."""
"""Quick inline check for spillover filename in tool result content.
Matches both the new prose format ("saved at: /path") and the
legacy bracketed trailer ("saved to '/path'").
"""
match = re.search(r"saved at:\s*(\S+)", content, re.IGNORECASE)
if match:
return match.group(1)
match = re.search(r"saved to '([^']+)'", content, re.IGNORECASE)
return match.group(1) if match else None
@@ -168,13 +175,17 @@ async def compact(
"""
conv_id = id(conversation)
# Circuit breaker: stop auto-compacting after repeated failures
if _failure_counts.get(conv_id, 0) >= MAX_CONSECUTIVE_FAILURES:
# Circuit breaker: stop LLM-based compaction after repeated failures,
# but still fall through to the emergency deterministic summary so
# the conversation doesn't silently grow past the context window.
# Without this, a persistent LLM outage during compaction would
# leave the agent stuck sending oversized prompts until the API 400s.
_llm_compaction_skipped = _failure_counts.get(conv_id, 0) >= MAX_CONSECUTIVE_FAILURES
if _llm_compaction_skipped:
logger.warning(
"Circuit breaker: skipping compaction after %d consecutive failures",
"Circuit breaker: LLM compaction disabled after %d failures — skipping straight to emergency summary",
_failure_counts[conv_id],
)
return
# Recompaction detection
now = time.monotonic()
@@ -256,7 +267,7 @@ async def compact(
return
# --- Step 3: LLM summary compaction ---
if ctx.llm is not None:
if ctx.llm is not None and not _llm_compaction_skipped:
logger.info(
"LLM summary compaction triggered (%.0f%% usage)",
conversation.usage_ratio() * 100,
@@ -360,6 +371,7 @@ async def llm_compact(
char_limit: int = LLM_COMPACT_CHAR_LIMIT,
max_depth: int = LLM_COMPACT_MAX_DEPTH,
max_context_tokens: int = 128_000,
preserve_user_messages: bool = False,
) -> str:
"""Summarise *messages* with LLM, splitting recursively if too large.
@@ -367,6 +379,11 @@ async def llm_compact(
rejects the call with a context-length error, the messages are split
in half and each half is summarised independently. Tool history is
appended once at the top-level call (``_depth == 0``).
When ``preserve_user_messages`` is True, the prompt and system message
are amplified to instruct the LLM to keep every user message verbatim
and in full; used by the manual /compact-and-fork endpoint where the
user wants their voice carried into the new session intact.
"""
from framework.agent_loop.conversation import extract_tool_call_history
from framework.agent_loop.internals.tool_result_handler import is_context_too_large_error
@@ -390,6 +407,7 @@ async def llm_compact(
char_limit=char_limit,
max_depth=max_depth,
max_context_tokens=max_context_tokens,
preserve_user_messages=preserve_user_messages,
)
else:
prompt = build_llm_compaction_prompt(
@@ -397,17 +415,30 @@ async def llm_compact(
accumulator,
formatted,
max_context_tokens=max_context_tokens,
preserve_user_messages=preserve_user_messages,
)
if preserve_user_messages:
system_msg = (
"You are a conversation compactor for an AI agent. "
"Write a detailed summary that allows the agent to "
"continue its work. CRITICAL: reproduce every user "
"message verbatim and in full inside the 'User Messages' "
"section — do not paraphrase, truncate, or merge them. "
"Assistant turns and tool results may be summarised, but "
"user input is sacred."
)
else:
system_msg = (
"You are a conversation compactor for an AI agent. "
"Write a detailed summary that allows the agent to "
"continue its work. Preserve user-stated rules, "
"constraints, and account/identity preferences verbatim."
)
summary_budget = max(1024, max_context_tokens // 2)
try:
response = await ctx.llm.acomplete(
messages=[{"role": "user", "content": prompt}],
system=(
"You are a conversation compactor for an AI agent. "
"Write a detailed summary that allows the agent to "
"continue its work. Preserve user-stated rules, "
"constraints, and account/identity preferences verbatim."
),
system=system_msg,
max_tokens=summary_budget,
)
summary = response.content
@@ -426,6 +457,7 @@ async def llm_compact(
char_limit=char_limit,
max_depth=max_depth,
max_context_tokens=max_context_tokens,
preserve_user_messages=preserve_user_messages,
)
else:
raise
@@ -448,6 +480,7 @@ async def _llm_compact_split(
char_limit: int = LLM_COMPACT_CHAR_LIMIT,
max_depth: int = LLM_COMPACT_MAX_DEPTH,
max_context_tokens: int = 128_000,
preserve_user_messages: bool = False,
) -> str:
"""Split messages in half and summarise each half independently."""
mid = max(1, len(messages) // 2)
@@ -459,6 +492,7 @@ async def _llm_compact_split(
char_limit=char_limit,
max_depth=max_depth,
max_context_tokens=max_context_tokens,
preserve_user_messages=preserve_user_messages,
)
s2 = await llm_compact(
ctx,
@@ -468,6 +502,7 @@ async def _llm_compact_split(
char_limit=char_limit,
max_depth=max_depth,
max_context_tokens=max_context_tokens,
preserve_user_messages=preserve_user_messages,
)
return s1 + "\n\n" + s2
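The split-and-recurse fallback, reduced to its shape — `summarize` and `ContextTooLarge` here are toy stand-ins for the LLM call and the provider's context-length error:

```python
class ContextTooLarge(Exception):
    """Toy stand-in for the provider's context-length rejection."""

def summarize(messages: list[str]) -> str:
    # Toy summarizer that "overflows" on spans longer than 4 messages.
    if len(messages) > 4:
        raise ContextTooLarge
    return f"<summary of {len(messages)} messages>"

def compact_recursive(messages: list[str], depth: int = 0, max_depth: int = 3) -> str:
    try:
        return summarize(messages)       # try the whole span in one call
    except ContextTooLarge:
        if depth >= max_depth:
            raise                        # give up past max_depth, as above
        mid = max(1, len(messages) // 2)
        left = compact_recursive(messages[:mid], depth + 1, max_depth)
        right = compact_recursive(messages[mid:], depth + 1, max_depth)
        return left + "\n\n" + right     # halves joined, like s1 + s2

print(compact_recursive([f"m{i}" for i in range(10)]))
# 10 -> 5 + 5 -> (2 + 3) + (2 + 3): four summaries joined.
```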
@@ -499,6 +534,7 @@ def build_llm_compaction_prompt(
formatted_messages: str,
*,
max_context_tokens: int = 128_000,
preserve_user_messages: bool = False,
) -> str:
"""Build prompt for LLM compaction targeting 50% of token budget.
@@ -518,10 +554,7 @@ def build_llm_compaction_prompt(
done = {k: v for k, v in acc.items() if v is not None}
todo = [k for k, v in acc.items() if v is None]
if done:
ctx_lines.append(
"OUTPUTS ALREADY SET:\n"
+ "\n".join(f" {k}: {str(v)[:150]}" for k, v in done.items())
)
ctx_lines.append("OUTPUTS ALREADY SET:\n" + "\n".join(f" {k}: {str(v)[:150]}" for k, v in done.items()))
if todo:
ctx_lines.append(f"OUTPUTS STILL NEEDED: {', '.join(todo)}")
elif spec.output_keys:
@@ -531,6 +564,18 @@ def build_llm_compaction_prompt(
target_chars = target_tokens * 4
node_ctx = "\n".join(ctx_lines)
user_messages_section = (
"6. **User Messages** — Reproduce EVERY user message verbatim and "
"in full, in chronological order, each on its own line prefixed "
'with the message index (e.g. "[U1] ..."). Do NOT paraphrase, '
"summarise, merge, or omit any user message. Preserve markdown, "
"code fences, whitespace, and punctuation exactly as the user "
"wrote them.\n"
if preserve_user_messages
else "6. **User Messages** — Preserve ALL user-stated rules, constraints, "
"identity preferences, and account details verbatim.\n"
)
return (
"You are compacting an AI agent's conversation history. "
"The agent is still working and needs to continue.\n\n"
@@ -551,8 +596,7 @@ def build_llm_compaction_prompt(
"resolved. Include root causes so the agent doesn't repeat them.\n"
"5. **Problem Solving Efforts** — Approaches tried, dead ends hit, "
"and reasoning behind the current strategy.\n"
"6. **User Messages** — Preserve ALL user-stated rules, constraints, "
"identity preferences, and account details verbatim.\n"
f"{user_messages_section}"
"7. **Pending Tasks** — Work remaining, outputs still needed, and "
"any blockers.\n"
"8. **Current Work** — The most recent action taken and the immediate "
@@ -575,12 +619,8 @@ def build_message_inventory(conversation: NodeConversation) -> list[dict[str, An
if message.tool_calls:
for tool_call in message.tool_calls:
args = tool_call.get("function", {}).get("arguments", "")
tool_call_args_chars += (
len(args) if isinstance(args, str) else len(json.dumps(args))
)
names = [
tool_call.get("function", {}).get("name", "?") for tool_call in message.tool_calls
]
tool_call_args_chars += len(args) if isinstance(args, str) else len(json.dumps(args))
names = [tool_call.get("function", {}).get("name", "?") for tool_call in message.tool_calls]
tool_name = ", ".join(names)
elif message.role == "tool" and message.tool_use_id:
for previous in conversation.messages:
@@ -637,14 +677,8 @@ def write_compaction_debug_log(
lines.append("")
if inventory:
total_chars = sum(
entry.get("content_chars", 0) + entry.get("tool_call_args_chars", 0)
for entry in inventory
)
lines.append(
"## Pre-Compaction Message Inventory "
f"({len(inventory)} messages, {total_chars:,} total chars)"
)
total_chars = sum(entry.get("content_chars", 0) + entry.get("tool_call_args_chars", 0) for entry in inventory)
lines.append(f"## Pre-Compaction Message Inventory ({len(inventory)} messages, {total_chars:,} total chars)")
lines.append("")
ranked = sorted(
inventory,
@@ -663,8 +697,7 @@ def write_compaction_debug_log(
if entry.get("phase"):
flags.append(f"phase={entry['phase']}")
lines.append(
f"| {i} | {entry['seq']} | {entry['role']} | {tool} "
f"| {chars:,} | {pct:.1f}% | {', '.join(flags)} |"
f"| {i} | {entry['seq']} | {entry['role']} | {tool} | {chars:,} | {pct:.1f}% | {', '.join(flags)} |"
)
large = [entry for entry in ranked if entry.get("preview")]
@@ -672,9 +705,7 @@ def write_compaction_debug_log(
lines.append("")
lines.append("### Large message previews")
for entry in large:
lines.append(
f"\n**seq={entry['seq']}** ({entry['role']}, {entry.get('tool', '')}):"
)
lines.append(f"\n**seq={entry['seq']}** ({entry['role']}, {entry.get('tool', '')}):")
lines.append(f"```\n{entry['preview']}\n```")
lines.append("")
@@ -762,10 +793,7 @@ def build_emergency_summary(
node's known state so the LLM can continue working after
compaction without losing track of its task and inputs.
"""
parts = [
"EMERGENCY COMPACTION — previous conversation was too large "
"and has been replaced with this summary.\n"
]
parts = ["EMERGENCY COMPACTION — previous conversation was too large and has been replaced with this summary.\n"]
# 1. Node identity
spec = ctx.agent_spec
@@ -818,17 +846,13 @@ def build_emergency_summary(
data_files = [f for f in all_files if f not in conv_files]
if conv_files:
conv_list = "\n".join(
f" - {f} (full path: {data_dir / f})" for f in conv_files
)
conv_list = "\n".join(f" - {f} (full path: {data_dir / f})" for f in conv_files)
parts.append(
"CONVERSATION HISTORY (freeform messages saved during compaction — "
"use read_file('<filename>') to review earlier dialogue):\n" + conv_list
)
if data_files:
file_list = "\n".join(
f" - {f} (full path: {data_dir / f})" for f in data_files[:30]
)
file_list = "\n".join(f" - {f} (full path: {data_dir / f})" for f in data_files[:30])
parts.append("DATA FILES (use read_file('<filename>') to read):\n" + file_list)
if not all_files:
parts.append(
@@ -836,10 +860,7 @@ def build_emergency_summary(
"Use list_directory to check the data directory."
)
except Exception:
parts.append(
"NOTE: Large tool results were saved to files. "
"Use read_file(path='<path>') to read them."
)
parts.append("NOTE: Large tool results were saved to files. Use read_file(path='<path>') to read them.")
# 6. Tool call history (prevent re-calling tools)
if conversation is not None:
@@ -847,10 +868,7 @@ def build_emergency_summary(
if tool_history:
parts.append(tool_history)
parts.append(
"\nContinue working towards setting the remaining outputs. "
"Use your tools and the inputs above."
)
parts.append("\nContinue working towards setting the remaining outputs. Use your tools and the inputs above.")
return "\n\n".join(parts)
@@ -12,12 +12,13 @@ import json
import logging
from collections.abc import Awaitable, Callable
from dataclasses import dataclass
from datetime import datetime
from typing import Any
from framework.agent_loop.conversation import ConversationStore, NodeConversation
from framework.agent_loop.internals.types import LoopConfig, OutputAccumulator, TriggerEvent
from framework.orchestrator.node import NodeContext
from framework.llm.capabilities import supports_image_tool_results
from framework.orchestrator.node import NodeContext
logger = logging.getLogger(__name__)
@@ -149,9 +150,7 @@ async def write_cursor(
cursor["recent_responses"] = recent_responses
if recent_tool_fingerprints is not None:
# Convert list[list[tuple]] → list[list[list]] for JSON
cursor["recent_tool_fingerprints"] = [
[list(pair) for pair in fps] for fps in recent_tool_fingerprints
]
cursor["recent_tool_fingerprints"] = [[list(pair) for pair in fps] for fps in recent_tool_fingerprints]
# Persist blocked-input state so restored runs re-block instead of
# manufacturing a synthetic continuation turn.
cursor["pending_input"] = pending_input
@@ -163,9 +162,7 @@ async def drain_injection_queue(
conversation: NodeConversation,
*,
ctx: NodeContext,
describe_images_as_text_fn: (
Callable[[list[dict[str, Any]]], Awaitable[str | None]] | None
) = None,
describe_images_as_text_fn: (Callable[[list[dict[str, Any]]], Awaitable[str | None]] | None) = None,
) -> int:
"""Drain all pending injected events as user messages. Returns count."""
count = 0
@@ -195,15 +192,21 @@ async def drain_injection_queue(
else:
logger.info("[drain] no vision fallback available; images dropped")
image_content = None
# Real user input is stored as-is; external events get a prefix
# Stamp every injected event with its arrival time so the model
# has a consistent temporal log to reason over (and so the
# stamp lives inside byte-stable conversation history instead
# of a per-turn system-prompt tail). Minute precision is what
# the queen needs for conversational / scheduling context.
stamp = datetime.now().astimezone().strftime("%Y-%m-%d %H:%M %Z")
if is_client_input:
stamped = f"[{stamp}] {content}" if content else f"[{stamp}]"
await conversation.add_user_message(
content,
stamped,
is_client_input=True,
image_content=image_content,
)
else:
await conversation.add_user_message(f"[External event]: {content}")
await conversation.add_user_message(f"[{stamp}] [External event] {content}")
count += 1
except asyncio.QueueEmpty:
break
@@ -236,9 +239,12 @@ async def drain_trigger_queue(
payload_str = json.dumps(t.payload, default=str)
parts.append(f"[TRIGGER: {t.trigger_type}/{t.source_id}]{task_line}\n{payload_str}")
combined = "\n\n".join(parts)
stamp = datetime.now().astimezone().strftime("%Y-%m-%d %H:%M %Z")
combined = f"[{stamp}]\n" + "\n\n".join(parts)
logger.info("[drain] %d trigger(s): %s", len(triggers), combined[:200])
await conversation.add_user_message(combined)
# Tag the message so the UI can render a banner instead of the raw
# `[TRIGGER: ...]` text. The LLM still sees `combined` verbatim.
await conversation.add_user_message(combined, is_trigger=True)
return len(triggers)
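The stamp itself is a stock strftime call, nothing custom — minute precision plus the local timezone name:

```python
from datetime import datetime

stamp = datetime.now().astimezone().strftime("%Y-%m-%d %H:%M %Z")
print(f"[{stamp}] [External event] webhook fired")
# prints something like: [2026-04-28 10:43 PDT] [External event] webhook fired
```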
@@ -11,8 +11,8 @@ import time
from framework.agent_loop.conversation import NodeConversation
from framework.agent_loop.internals.types import HookContext
from framework.orchestrator.node import NodeContext
from framework.host.event_bus import EventBus
from framework.orchestrator.node import NodeContext
logger = logging.getLogger(__name__)
@@ -108,6 +108,8 @@ async def publish_llm_turn_complete(
input_tokens: int,
output_tokens: int,
cached_tokens: int = 0,
cache_creation_tokens: int = 0,
cost_usd: float = 0.0,
execution_id: str = "",
iteration: int | None = None,
) -> None:
@@ -120,6 +122,8 @@ async def publish_llm_turn_complete(
input_tokens=input_tokens,
output_tokens=output_tokens,
cached_tokens=cached_tokens,
cache_creation_tokens=cache_creation_tokens,
cost_usd=cost_usd,
execution_id=execution_id,
iteration=iteration,
)
@@ -31,14 +31,10 @@ class SubagentJudge:
if remaining <= 3:
urgency = (
f"URGENT: Only {remaining} iterations left. "
f"Stop all other work and call set_output NOW for: {missing}"
f"URGENT: Only {remaining} iterations left. Stop all other work and call set_output NOW for: {missing}"
)
elif remaining <= self._max_iterations // 2:
urgency = (
f"WARNING: {remaining} iterations remaining. "
f"You must call set_output for: {missing}"
)
urgency = f"WARNING: {remaining} iterations remaining. You must call set_output for: {missing}"
else:
urgency = f"Missing output keys: {missing}. Use set_output to provide them."
@@ -109,9 +105,7 @@ async def judge_turn(
if tool_results:
return JudgeVerdict(action="RETRY") # feedback=None → not logged
missing = get_missing_output_keys_fn(
accumulator, ctx.agent_spec.output_keys, ctx.agent_spec.nullable_output_keys
)
missing = get_missing_output_keys_fn(accumulator, ctx.agent_spec.output_keys, ctx.agent_spec.nullable_output_keys)
if missing:
return JudgeVerdict(
@@ -133,10 +127,7 @@ async def judge_turn(
if all_nullable and none_set:
return JudgeVerdict(
action="RETRY",
feedback=(
f"No output keys have been set yet. "
f"Use set_output to set at least one of: {output_keys}"
),
feedback=(f"No output keys have been set yet. Use set_output to set at least one of: {output_keys}"),
)
# Level 2b: conversation-aware quality check (if success_criteria set)
@@ -91,116 +91,72 @@ def sanitize_ask_user_inputs(
return q, recovered
ask_user_prompt = """\
Use this tool when you need to ask the user questions during execution. Reach for it when:
- The task is ambiguous and the user needs to choose an approach
- You need missing information to continue
- You want approval before taking a meaningful action
- A decision has real trade-offs the user should weigh in on
- You want post-task feedback, or to offer saving a skill or updating memory
Usage notes:
- Users will always be able to select "Other" to provide custom text input, \
so do not include catch-all options like "Other" or "Something else" yourself.
- Each option is a plain string. Do NOT wrap options in `{"label": "..."}` or \
`{"value": "..."}` objects pass the raw choice text directly, e.g. `"Email"`, \
not `{"label": "Email"}`.
- If you recommend a specific option, make that the first option in the list \
and append " (Recommended)" to the end of its text.
- Call this tool whenever you need the user's response.
- The prompt field must be plain text only.
- Do not include XML, pseudo-tags, or inline option lists inside prompt.
- Omit options only when the question truly requires a free-form response the \
user must type out, such as describing an idea or pasting an error message.
- Do not repeat the questions in your normal text response. The widget renders \
them, so keep any surrounding text to a brief intro only.
Example single question with options:
{"questions": [{"id": "next", "prompt": "What would you like to do?", \
"options": ["Build a new agent (Recommended)", "Modify existing agent", "Run tests"]}]}
Example batch:
{"questions": [
{"id": "scope", "prompt": "What scope?", "options": ["Full", "Partial"]},
{"id": "format", "prompt": "Output format?", "options": ["PDF", "CSV", "JSON"]},
{"id": "details", "prompt": "Any special requirements?"}
]}
Example free-form (queen only):
{"questions": [{"id": "idea", "prompt": "Describe the agent you want to build."}]}
"""
def build_ask_user_tool() -> Tool:
"""Build the synthetic ask_user tool for explicit user-input requests.
The queen calls ask_user() when it needs to pause and wait
for user input. Text-only turns WITHOUT ask_user flow through without
blocking, allowing progress updates and summaries to stream freely.
The queen calls ask_user() when it needs to pause and wait for user
input. Accepts an array of 1-8 questions: a single question for the
common case, or a batch when several clarifications are needed at once.
Text-only turns WITHOUT ask_user flow through without blocking, allowing
progress updates and summaries to stream freely.
"""
return Tool(
name="ask_user",
description=(
"You MUST call this tool whenever you need the user's response. "
"Always call it after greeting the user, asking a question, or "
"requesting approval. Do NOT call it for status updates or "
"summaries that don't require a response.\n\n"
"STRUCTURE RULES (CRITICAL):\n"
"- The 'question' field is PLAIN TEXT shown to the user. Do NOT "
"include XML tags, pseudo-tags like </question>, or option lists "
"in the question string. The UI does not parse them — they "
"render as raw text and look broken.\n"
"- The 'options' parameter is the ONLY way to render buttons. "
"If you want buttons, put them in the 'options' array, not in "
"the question string. Do NOT write 'OPTIONS: [...]', "
"'_options: [...]', or any inline list inside 'question'.\n"
"- The question text must read as a single clean prompt with "
"no markup. Example: 'What would you like to do?' — not "
"'What would you like to do?</question>'.\n\n"
"USAGE:\n"
"Always include 2-3 predefined options. The UI automatically "
"appends an 'Other' free-text input after your options, so NEVER "
"include catch-all options like 'Custom idea', 'Something else', "
"'Other', or 'None of the above' — the UI handles that. "
"When the question primarily needs a typed answer but you must "
"include options, make one option signal that typing is expected "
"(e.g. 'I\\'ll type my response'). This helps users discover the "
"free-text input. "
"The ONLY exception: omit options when the question demands a "
"free-form answer the user must type out (e.g. 'Describe your "
"agent idea', 'Paste the error message').\n\n"
"CORRECT EXAMPLE:\n"
'{"question": "What would you like to do?", "options": '
'["Build a new agent", "Modify existing agent", "Run tests"]}\n\n'
"FREE-FORM EXAMPLE:\n"
'{"question": "Describe the agent you want to build."}\n\n'
"WRONG (do NOT do this — buttons will not render):\n"
'{"question": "What now?</question>\\n_OPTIONS: [\\"A\\", \\"B\\"]"}'
),
parameters={
"type": "object",
"properties": {
"question": {
"type": "string",
"description": "The question or prompt shown to the user.",
},
"options": {
"type": "array",
"items": {"type": "string"},
"description": (
"2-3 specific predefined choices. Include in most cases. "
'Example: ["Option A", "Option B", "Option C"]. '
"The UI always appends an 'Other' free-text input, so "
"do NOT include catch-alls like 'Custom idea' or 'Other'. "
"Omit ONLY when the user must type a free-form answer."
),
"minItems": 2,
"maxItems": 3,
},
},
"required": ["question"],
},
)
def build_ask_user_multiple_tool() -> Tool:
"""Build the synthetic ask_user_multiple tool for batched questions.
Queen-only tool that presents multiple questions at once so the user
can answer them all in a single interaction rather than one at a time.
"""
return Tool(
name="ask_user_multiple",
description=(
"Ask the user multiple questions at once. Use this instead of "
"ask_user when you have 2 or more questions to ask in the same "
"turn — it lets the user answer everything in one go rather than "
"going back and forth. Each question can have its own predefined "
"options (2-3 choices) or be free-form. The UI renders all "
"questions together with a single Submit button. "
"ALWAYS prefer this over ask_user when you have multiple things "
"to clarify. "
"IMPORTANT: Do NOT repeat the questions in your text response — "
"the widget renders them. Keep your text to a brief intro only. "
'{"questions": ['
' {"id": "scope", "prompt": "What scope?", "options": ["Full", "Partial"]},'
' {"id": "format", "prompt": "Output format?", "options": ["PDF", "CSV", "JSON"]},'
' {"id": "details", "prompt": "Any special requirements?"}'
"]}"
),
description=ask_user_prompt,
parameters={
"type": "object",
"properties": {
"questions": {
"type": "array",
"minItems": 1,
"maxItems": 8,
"description": "List of questions to present to the user.",
"items": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": (
"Short identifier for this question (used in the response)."
),
"description": ("Short identifier for this question (used in the response)."),
},
"prompt": {
"type": "string",
@@ -210,8 +166,13 @@ def build_ask_user_multiple_tool() -> Tool:
"type": "array",
"items": {"type": "string"},
"description": (
"2-3 predefined choices. The UI appends an "
"'Other' free-text input automatically. "
"2-3 predefined choices as plain strings "
'(e.g. ["Yes", "No", "Maybe"]). Do NOT '
'wrap items in {"label": "..."} or '
'{"value": "..."} objects — pass the raw '
"choice text directly. The UI appends an "
"'Other' free-text input automatically, "
"so don't include catch-all options. "
"Omit only when the user must type a free-form answer."
),
"minItems": 2,
@@ -220,9 +181,6 @@ def build_ask_user_multiple_tool() -> Tool:
},
"required": ["id", "prompt"],
},
"minItems": 2,
"maxItems": 8,
"description": "List of questions to present to the user.",
},
},
"required": ["questions"],
@@ -256,10 +214,7 @@ def build_set_output_tool(output_keys: list[str] | None) -> Tool | None:
},
"value": {
"type": "string",
"description": (
"The output value — a brief note, count, status, "
"or data filename reference."
),
"description": ("The output value — a brief note, count, status, or data filename reference."),
},
},
"required": ["key", "value"],
@@ -283,9 +238,7 @@ def build_escalate_tool() -> Tool:
"properties": {
"reason": {
"type": "string",
"description": (
"Short reason for escalation (e.g. 'Tool repeatedly failing')."
),
"description": ("Short reason for escalation (e.g. 'Tool repeatedly failing')."),
},
"context": {
"type": "string",
@@ -377,10 +330,7 @@ def handle_report_to_parent(tool_input: dict[str, Any]) -> ToolResult:
}
return ToolResult(
tool_use_id=tool_input.get("tool_use_id", ""),
content=(f"Report delivered to overseer (status={status}). This worker will terminate now."),
)
@@ -0,0 +1,291 @@
"""Generic coercion of LLM-emitted tool arguments to match each tool's JSON schema.
Small/mid-size models drift from tool schemas in predictable, boring ways:
- A number field comes back as a string (``"42"`` instead of ``42``).
- A boolean field comes back as a string (``"true"`` instead of ``True``).
- An array-of-string field comes back as an array of objects
(``[{"label": "A"}, ...]`` instead of ``["A", ...]``).
- An array/object field comes back as a JSON-encoded string
(``'["A","B"]'`` instead of ``["A", "B"]``).
- A lone scalar arrives where the schema expects an array.
This module centralizes the healing in one schema-driven pass that runs
on every tool call before dispatch. Coercion is conservative:
- Values that already match the expected type are untouched.
- Shapes we don't recognize are returned as-is, so real bugs surface
instead of getting silently munged into something plausible.
- Every actual coercion is logged with the tool, property, and shape
transition so we can see which models/tools are drifting.
Tool-specific prompt drift (e.g. ``</question>`` tags leaking into an
``ask_user`` prompt string) is NOT this module's job — that belongs in
per-tool sanitizers, because it's about prompt style, not schema shape.
"""
from __future__ import annotations
import json
import logging
from typing import Any
from framework.llm.provider import Tool
logger = logging.getLogger(__name__)
# When an ``array<string>`` field arrives as an array of objects, look
# for a text-carrying field in preference order. Covers the wrappers
# small models tend to produce: ``[{"label": "A"}]``, ``[{"value": "A"}]``,
# ``[{"text": "A"}]``, etc.
_STRING_EXTRACT_KEYS: tuple[str, ...] = (
"label",
"value",
"text",
"name",
"title",
"display",
)
def coerce_tool_input(tool: Tool, raw_input: dict[str, Any] | None) -> dict[str, Any]:
"""Coerce *raw_input* in place to match *tool*'s JSON schema.
Returns the mutated input dict (same object as *raw_input* when
possible, for callers that assume in-place mutation). Properties
not present in the schema are left untouched.
"""
if not isinstance(raw_input, dict):
return raw_input or {}
schema = tool.parameters or {}
props = schema.get("properties")
if not isinstance(props, dict):
return raw_input
for key in list(raw_input.keys()):
prop_schema = props.get(key)
if not isinstance(prop_schema, dict):
continue
original = raw_input[key]
coerced = _coerce(original, prop_schema)
if coerced is not original:
logger.info(
"coerced tool input tool=%s prop=%s from=%s to=%s",
tool.name,
key,
_shape(original),
_shape(coerced),
)
raw_input[key] = coerced
return raw_input
def _coerce(value: Any, schema: dict[str, Any]) -> Any:
"""Dispatch on the schema's ``type`` field.
Returns the *same object* on passthrough so callers can detect
no-ops via identity (``coerced is value``).
"""
expected = schema.get("type")
if not expected:
return value
# Union type: try each in order, return the first coercion that
# actually changes the value. Falls back to the original.
if isinstance(expected, list):
for t in expected:
sub_schema = {**schema, "type": t}
coerced = _coerce(value, sub_schema)
if coerced is not value:
return coerced
return value
if expected == "integer":
return _coerce_integer(value)
if expected == "number":
return _coerce_number(value)
if expected == "boolean":
return _coerce_boolean(value)
if expected == "string":
return _coerce_string(value)
if expected == "array":
return _coerce_array(value, schema)
if expected == "object":
return _coerce_object(value, schema)
return value
def _coerce_integer(value: Any) -> Any:
# bool is a subclass of int in Python; don't mistake True for 1 here.
if isinstance(value, bool):
return value
if isinstance(value, int):
return value
if isinstance(value, str):
parsed = _parse_number(value)
if parsed is None:
return value
if parsed != int(parsed):
# Has a fractional part — caller asked for int, don't truncate.
return value
return int(parsed)
return value
def _coerce_number(value: Any) -> Any:
if isinstance(value, bool):
return value
if isinstance(value, (int, float)):
return value
if isinstance(value, str):
parsed = _parse_number(value)
if parsed is None:
return value
if parsed == int(parsed):
return int(parsed)
return parsed
return value
def _coerce_boolean(value: Any) -> Any:
if isinstance(value, bool):
return value
if isinstance(value, str):
low = value.strip().lower()
if low == "true":
return True
if low == "false":
return False
return value
def _coerce_string(value: Any) -> Any:
if isinstance(value, str):
return value
# Common drift: model sent ``{"label": "..."}`` when we wanted "...".
if isinstance(value, dict):
extracted = _extract_string_from_object(value)
if extracted is not None:
return extracted
return value
def _coerce_array(value: Any, schema: dict[str, Any]) -> Any:
# Heal: JSON-encoded array string → array.
if isinstance(value, str):
parsed = _try_parse_json(value)
if isinstance(parsed, list):
value = parsed
else:
# Scalar string where an array is expected — wrap it.
return [value]
elif not isinstance(value, list):
# Any other scalar (int, bool, dict, ...) — wrap.
return [value]
items_schema = schema.get("items")
if not isinstance(items_schema, dict):
return value
coerced_items: list[Any] = []
changed = False
for item in value:
c = _coerce(item, items_schema)
if c is not item:
changed = True
coerced_items.append(c)
return coerced_items if changed else value
def _coerce_object(value: Any, schema: dict[str, Any]) -> Any:
# Heal: JSON-encoded object string → object.
if isinstance(value, str):
parsed = _try_parse_json(value)
if isinstance(parsed, dict):
value = parsed
else:
return value
if not isinstance(value, dict):
return value
sub_props = schema.get("properties")
if not isinstance(sub_props, dict):
return value
changed = False
for k in list(value.keys()):
sub_schema = sub_props.get(k)
if not isinstance(sub_schema, dict):
continue
original = value[k]
coerced = _coerce(original, sub_schema)
if coerced is not original:
value[k] = coerced
changed = True
# Return the same dict (mutated in place) so callers that passed a
# shared reference see the updates; ``changed`` exists only for
# upstream logging decisions.
return value
def _extract_string_from_object(obj: dict[str, Any]) -> str | None:
"""Pick a likely-text field out of a wrapper object.
Tries the known keys first, falls back to the sole value if the
object has exactly one entry. Returns None when nothing plausible
is found; the caller keeps the original.
"""
for k in _STRING_EXTRACT_KEYS:
v = obj.get(k)
if isinstance(v, str) and v:
return v
if len(obj) == 1:
(only,) = obj.values()
if isinstance(only, str) and only:
return only
return None
def _try_parse_json(raw: str) -> Any:
try:
return json.loads(raw)
except (ValueError, TypeError):
return None
def _parse_number(raw: str) -> float | None:
try:
f = float(raw)
except (ValueError, OverflowError):
return None
# Reject NaN and inf — they pass float() but aren't useful numeric
# values for tool arguments.
if f != f or f == float("inf") or f == float("-inf"):
return None
return f
def _shape(value: Any) -> str:
"""Short type/shape description used in coercion log lines."""
if value is None:
return "None"
if isinstance(value, bool):
return "bool"
if isinstance(value, int):
return "int"
if isinstance(value, float):
return "float"
if isinstance(value, str):
return f"str[{len(value)}]"
if isinstance(value, list):
if not value:
return "list[0]"
return f"list[{len(value)}]<{_shape(value[0])}>"
if isinstance(value, dict):
keys = sorted(value.keys())[:3]
suffix = ",…" if len(value) > 3 else ""
return f"dict{{{','.join(keys)}{suffix}}}"
return type(value).__name__
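To make the coercion contract concrete, here is a minimal illustrative exercise of the module (the Tool schema and values are made up for the sketch; ``coerce_tool_input`` is the entry point defined above):

from framework.llm.provider import Tool

demo_tool = Tool(
    name="ask_user",
    description="demo schema for the sketch",
    parameters={
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "options": {"type": "array", "items": {"type": "string"}},
        },
    },
)

# Typical small-model drift: a label-wrapped string where a plain string
# is expected, and a JSON-encoded string where an array is expected.
drifted = {"question": {"label": "Pick one"}, "options": '["A", "B"]'}
healed = coerce_tool_input(demo_tool, drifted)
assert healed == {"question": "Pick one", "options": ["A", "B"]}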
@@ -215,14 +215,30 @@ def truncate_tool_result(
"""Persist tool result to file and optionally truncate for context.
When *spillover_dir* is configured, EVERY non-error tool result is
written to disk for debugging. The LLM-visible content is then
shaped to avoid a **poison pattern** that we traced on 2026-04-15
through a gemini-3.1-pro-preview-customtools queen session: the prior format
appended ``\\n\\n[Saved to '/abs/path/file.txt']`` after every
small result, and frontier pattern-matching models (gemini 3.x in
particular) learned to autocomplete the `[Saved to '...']` trailer
in their own assistant turns, eventually degenerating into echoing
the whole tool result instead of deciding what to do next. See
``session_20260415_100751_d49f4c28/conversations/parts/0000000056.json``
for the terminal case where the model's "text" output was the full
tool_result JSON.
Rules after the fix:
- **Small results (≤ limit):** pass content through unchanged. No
trailer. No annotation. The full content is already in the
message; the disk copy is for debugging only.
- **Large results (> limit):** preview + file reference, but
formatted as plain prose instead of a bracketed ``[...]``
pattern. Structured JSON metadata ("_saved_to") is embedded
inside the JSON body when the preview is JSON-shaped so the
model can locate the full file without seeing a mimicry-prone
bracket token outside the body.
- **Errors:** pass through unchanged.
- **read_file results:** truncate with pagination hint (no re-spill).
"""
limit = max_tool_result_chars
@@ -252,18 +268,19 @@ def truncate_tool_result(
else:
preview_block = result.content[:PREVIEW_CAP] + "…"
# Prose header (no brackets).
header = (
f"[{tool_name} result: {len(result.content):,} chars — "
f"too large for context. Use offset_bytes/limit_bytes "
f"parameters to read smaller chunks.]"
f"Tool `{tool_name}` returned {len(result.content):,} characters "
f"(too large for context). Use offset_bytes / limit_bytes "
f"parameters to paginate smaller chunks."
)
if metadata_str:
header += f"\n\nData structure:\n{metadata_str}"
header += (
"\n\nWARNING: This is an INCOMPLETE preview. Do NOT draw conclusions or counts from it."
"\n\nWARNING: the preview below is a SAMPLE only — do NOT draw counts, totals, or conclusions from it."
)
truncated = f"{header}\n\nPreview (small sample only):\n{preview_block}"
truncated = f"{header}\n\nPreview (truncated):\n{preview_block}"
logger.info(
"%s result truncated: %d%d chars (use offset/limit to paginate)",
tool_name,
@@ -301,7 +318,10 @@ def truncate_tool_result(
if limit > 0 and len(result.content) > limit:
# Large result: build a small, metadata-rich preview so the
# LLM cannot mistake it for the complete dataset. The
# preview is introduced as plain prose (no bracketed
# ``[Result from …]`` token) so it doesn't prime the model
# to autocomplete the same pattern in its next turn.
PREVIEW_CAP = 5000
# Extract structural metadata (array lengths, key names)
@@ -316,21 +336,21 @@ def truncate_tool_result(
else:
preview_block = result.content[:PREVIEW_CAP] + "…"
# Prose header (no brackets). Absolute path still surfaced
# so the agent can read the full file, but it's framed as
# a sentence, not a bracketed trailer.
header = (
f"[Result from {tool_name}: {len(result.content):,} chars — "
f"too large for context, saved to '{abs_path}'.]\n"
f"Tool `{tool_name}` returned {len(result.content):,} characters "
f"(too large for context). Full result saved at: {abs_path}\n"
f"Read the complete data with read_file(path='{abs_path}').\n"
)
if metadata_str:
header += f"\nData structure:\n{metadata_str}"
header += f"\nData structure:\n{metadata_str}\n"
header += (
f"\n\nWARNING: The preview below is INCOMPLETE. "
f"Do NOT draw conclusions or counts from it. "
f"Use read_file(path='{abs_path}') to read the "
f"full data before analysis."
"\nWARNING: the preview below is a SAMPLE only — do NOT draw counts, totals, or conclusions from it."
)
content = f"{header}\n\nPreview (small sample only):\n{preview_block}"
content = f"{header}\n\nPreview (truncated):\n{preview_block}"
logger.info(
"Tool result spilled to file: %s (%d chars → %s)",
tool_name,
@@ -338,10 +358,22 @@ def truncate_tool_result(
abs_path,
)
else:
# Small result: pass content through UNCHANGED.
#
# The prior design appended `\n\n[Saved to '/abs/path']`
# after every small result so the agent could re-read the
# file later. But (a) the full content is already in the
# message, so there's nothing to re-read; (b) the
# `[Saved to '…']` trailer is a repeating token pattern
# that frontier pattern-matching models autocomplete into
# their own assistant turns, eventually echoing whole tool
# results as "text" instead of making decisions. Dropping
# the trailer entirely kills the poison pattern. Spilled
# files on disk still exist for debugging — they just
# aren't advertised in the LLM-visible message.
content = result.content
logger.info(
"Tool result saved to file: %s (%d chars → %s)",
"Tool result saved to file: %s (%d chars → %s, no trailer)",
tool_name,
len(result.content),
filename,
@@ -373,15 +405,16 @@ def truncate_tool_result(
else:
preview_block = result.content[:PREVIEW_CAP] + "…"
# Prose header (no brackets) — see docstring for the poison
# pattern that the bracket format triggered.
header = (
f"[Result from {tool_name}: {len(result.content):,} chars — "
f"truncated to fit context budget.]"
f"Tool `{tool_name}` returned {len(result.content):,} characters "
f"(truncated to fit context budget — no spillover dir configured)."
)
if metadata_str:
header += f"\n\nData structure:\n{metadata_str}"
header += (
"\n\nWARNING: This is an INCOMPLETE preview. "
"Do NOT draw conclusions or counts from the preview alone."
"\n\nWARNING: the preview below is a SAMPLE only — do NOT draw counts, totals, or conclusions from it."
)
truncated = f"{header}\n\n{preview_block}"
@@ -467,6 +500,22 @@ async def execute_tool(
result = await _run()
except TimeoutError:
logger.warning("Tool '%s' timed out after %.0fs", tc.tool_name, timeout)
# asyncio.wait_for cancels the awaiting coroutine, but the sync
# executor running inside run_in_executor keeps going — and so
# does any MCP subprocess it is blocked on. Reach through to the
# owning MCPClient and force-disconnect it so the subprocess is
# torn down. Next call_tool triggers a reconnect. Without this
# the executor thread and MCP child leak on every timeout.
kill_for_tool = getattr(tool_executor, "kill_for_tool", None)
if callable(kill_for_tool):
try:
await asyncio.to_thread(kill_for_tool, tc.tool_name)
except Exception as exc: # defensive — never let cleanup crash the loop
logger.warning(
"kill_for_tool('%s') raised during timeout handling: %s",
tc.tool_name,
exc,
)
return ToolResult(
tool_use_id=tc.tool_use_id,
content=(
+149 -34
@@ -2,6 +2,7 @@
from __future__ import annotations
import asyncio
import json
import logging
import time
@@ -49,21 +50,71 @@ class LoopConfig:
"""Configuration for the event loop."""
max_iterations: int = 50
# 0 (or any non-positive value) disables the per-turn hard limit,
# letting a single assistant turn fan out arbitrarily many tool
# calls. Models like Gemini 3.1 Pro routinely emit 40-80 tool
# calls in one turn during browser exploration; capping them
# strands work half-finished and makes the next turn repeat the
# discarded calls, which is worse than just running them.
max_tool_calls_per_turn: int = 0
judge_every_n_turns: int = 1
stall_detection_threshold: int = 3
stall_similarity_threshold: float = 0.85
max_context_tokens: int = 32_000
# Headroom reserved for the NEXT turn's input + output so that
# proactive compaction always finishes before the hard context limit
# is hit mid-stream. Scaled to match Claude Code's 13k-buffer-on-
# 200k-window ratio (~6.5%) applied to hive's default 32k window,
# with extra margin because hive's token estimator is char-based
# and less tight than Anthropic's own counting. Override via
# LoopConfig for larger windows.
compaction_buffer_tokens: int = 8_000
# Ratio-based component of the hybrid compaction buffer. Effective
# headroom reserved before compaction fires is
# compaction_buffer_tokens + compaction_buffer_ratio * max_context_tokens
# The ratio scales with the model's window where the absolute fixed
# component does not (an 8k absolute buffer is 75% trigger on a 32k
# window but 96% on a 200k window). Combining them gives an absolute
# floor sized for the worst-case single tool result (one un-spilled
# max_tool_result_chars payload ≈ 30k chars ≈ 7.5k tokens, rounded to
# 8k) plus a fractional headroom that keeps the trigger meaningful on
# large windows, so the inner tool loop always has room to grow
# without tripping the mid-turn pre-send guard. Defaults: 8k + 15%.
# On 32k that's a 12.8k buffer (~60% trigger); on 200k it's 38k
# (~81% trigger); on 1M it's 158k (~84% trigger).
compaction_buffer_ratio: float = 0.15
# Warning is emitted one buffer earlier so the user/telemetry gets
# a "we're close" signal without triggering a compaction pass.
compaction_warning_buffer_tokens: int = 12_000
store_prefix: str = ""
# Overflow margin for max_tool_calls_per_turn. When the limit is
# enabled (>0), tool calls are only discarded when the count
# exceeds max_tool_calls_per_turn * (1 + margin). Ignored when
# max_tool_calls_per_turn is 0.
tool_call_overflow_margin: float = 0.5
# Tool result context management.
max_tool_result_chars: int = 30_000
spillover_dir: str | None = None
# Image retention in conversation history.
# Screenshots from ``browser_screenshot`` are inlined as base64
# data URLs inside message ``image_content``. Each full-page
# screenshot costs ~250k tokens when the provider counts the
# base64 as text (gemini, most non-Anthropic providers). Four
# screenshots in one conversation push gemini's 1M context over
# the limit and the model starts emitting garbage.
#
# The framework strips image_content from older messages after
# every tool-result batch, keeping only the most recent N
# screenshots. The text metadata on evicted messages (url, size,
# scale hints) is preserved so the agent can still reason about
# "I took a screenshot at step N that showed the compose modal".
# Raise this only if you genuinely need longer visual history AND
# you know your provider is using native image tokenization.
max_retained_screenshots: int = 2
# set_output value spilling.
max_output_value_chars: int = 2_000
@@ -71,6 +122,13 @@ class LoopConfig:
max_stream_retries: int = 5
stream_retry_backoff_base: float = 2.0
stream_retry_max_delay: float = 60.0
# Persistent retry for capacity-class errors (429, 529, overloaded).
# Unlike the bounded retry above, these keep trying until the wall-clock
# budget below is exhausted — modelled after claude-code's withRetry.
# The loop still publishes a retry event each attempt so the UI can
# see progress. Set to 0 to disable and fall back to bounded retry.
capacity_retry_max_seconds: float = 600.0
capacity_retry_max_delay: float = 60.0
# Tool doom loop detection.
tool_doom_loop_threshold: int = 3
@@ -87,6 +145,39 @@ class LoopConfig:
# Per-tool-call timeout.
tool_call_timeout_seconds: float = 60.0
# LLM stream inactivity watchdog. Split into two budgets so legitimate
# slow TTFT on large contexts doesn't get mistaken for a dead connection.
# - ttft: stream open -> first event. Large-context local models can
# legitimately take minutes before the first token arrives.
# - inter_event: last event -> now, ONLY after the first event. A stream
# that started producing and then went silent is a real stall.
# Whichever fires first cancels the stream. Set to 0 to disable that
# individual budget; set both to 0 to fully disable the watchdog.
llm_stream_ttft_timeout_seconds: float = 600.0
llm_stream_inter_event_idle_seconds: float = 120.0
# Deprecated alias — kept so existing configs keep working. If set to a
# non-default value it overrides inter_event_idle (historical behavior).
llm_stream_inactivity_timeout_seconds: float = 120.0
# Continue-nudge recovery. When the idle watchdog fires on a live but
# stuck stream, cancel the stream and append a short continuation
# hint to the conversation instead of raising a ConnectionError and
# re-running the whole turn. Preserves any partial text/tool-calls the
# stream emitted before the stall.
continue_nudge_enabled: bool = True
# Cap so a truly dead endpoint eventually falls back to the error path
# instead of nudging forever.
continue_nudge_max_per_turn: int = 3
# Tool-call replay detector. When the model emits a tool call whose
# (name + canonical-args) matches a prior successful call in the last
# K assistant turns, emit telemetry and prepend a short steer onto the
# tool result — but still execute. Weaker models legitimately repeat
# read-only calls (screenshot, evaluate), so silent skipping would
# cause surprising behavior.
replay_detector_enabled: bool = True
replay_detector_within_last_turns: int = 3
# Subagent delegation timeout (wall-clock max).
subagent_timeout_seconds: float = 3600.0
@@ -132,7 +223,7 @@ class OutputAccumulator:
async def set(self, key: str, value: Any) -> None:
"""Set a key-value pair, auto-spilling large values to files."""
value = await self._auto_spill(key, value)
self.values[key] = value
if self.store:
cursor = await self.store.read_cursor() or {}
@@ -141,41 +232,65 @@ class OutputAccumulator:
cursor["outputs"] = outputs
await self.store.write_cursor(cursor)
async def _auto_spill(self, key: str, value: Any) -> Any:
"""Save large values to a file and return a reference string.
Runs the JSON serialization and file write on a worker thread
so they don't block the asyncio event loop. For a 100k-char
dict this used to freeze every concurrent tool call for ~50ms
of ``json.dumps(indent=2)`` + a sync disk write; for bigger
payloads or slow storage (NFS, networked FS) the freeze was
proportionally worse.
"""
if self.max_value_chars <= 0 or not self.spillover_dir:
return value
# Cheap size probe first — if the value is already a short
# string we can skip both the JSON round-trip and the thread
# hop entirely.
if isinstance(value, str) and len(value) <= self.max_value_chars:
return value
def _spill_sync() -> Any:
# JSON serialization for size check (only for non-strings).
if isinstance(value, str):
val_str = value
else:
val_str = json.dumps(value, ensure_ascii=False)
if len(val_str) <= self.max_value_chars:
return value
spill_path = Path(self.spillover_dir)
spill_path.mkdir(parents=True, exist_ok=True)
ext = ".json" if isinstance(value, (dict, list)) else ".txt"
filename = f"output_{key}{ext}"
write_content = (
json.dumps(value, indent=2, ensure_ascii=False) if isinstance(value, (dict, list)) else str(value)
)
file_path = spill_path / filename
file_path.write_text(write_content, encoding="utf-8")
file_size = file_path.stat().st_size
logger.info(
"set_output value auto-spilled: key=%s, %d chars -> %s (%d bytes)",
key,
len(val_str),
filename,
file_size,
)
# Use absolute path so parent agents can find files from subagents.
#
# Prose format (no brackets) — same fix as tool_result_handler:
# frontier pattern-matching models autocomplete bracketed
# `[Saved to '...']` trailers into their own assistant turns,
# eventually degenerating into echoing the file path as text.
# Keep the path accessible but frame it as plain prose.
abs_path = str(file_path.resolve())
return (
f"Output saved at: {abs_path} ({file_size:,} bytes). "
f"Read the full data with read_file(path='{abs_path}')."
)
return await asyncio.to_thread(_spill_sync)
def get(self, key: str) -> Any | None:
return self.values.get(key)
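To make the hybrid buffer arithmetic concrete, a small illustrative helper (not framework code) that reproduces the numbers quoted in the LoopConfig comments above:

def effective_compaction_buffer(
    max_context_tokens: int,
    buffer_tokens: int = 8_000,
    buffer_ratio: float = 0.15,
) -> int:
    # Effective headroom = absolute floor + fraction of the window,
    # exactly as the compaction_buffer_ratio comment defines it.
    return int(buffer_tokens + buffer_ratio * max_context_tokens)

assert effective_compaction_buffer(32_000) == 12_800      # ~60% trigger on 32k
assert effective_compaction_buffer(200_000) == 38_000     # ~81% trigger on 200k
assert effective_compaction_buffer(1_000_000) == 158_000  # ~84% trigger on 1M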
@@ -0,0 +1,247 @@
"""Vision-fallback subagent for tool-result images on text-only LLMs.
When a tool returns image content but the main agent's model can't
accept image blocks (i.e. its catalog entry has ``supports_vision: false``),
the framework strips the images before they ever reach the LLM. Without
this module, the agent then sees only the tool's text envelope (URL,
dimensions, size) and is blind to whatever the image actually shows.
This module provides:
* ``caption_tool_image()``: a direct LiteLLM call to a configured
vision model (``vision_fallback`` block in ``~/.hive/configuration.json``)
that takes the agent's intent + the image(s) and returns a textual
description tailored to that intent.
* ``extract_intent_for_tool()``: pulls the most recent assistant text
+ the tool call descriptor and concatenates them into a 4 KB intent
string the vision subagent can reason against.
Both helpers degrade silently: they return ``None`` / a placeholder
rather than raise, so a vision-fallback failure can never kill the main
agent's run. The agent-loop call site is responsible for chaining
through to the existing generic-caption rotation
(``_describe_images_as_text``) on a None return.
"""
from __future__ import annotations
import json
import logging
from datetime import datetime
from typing import TYPE_CHECKING, Any
from framework.config import (
get_vision_fallback_api_base,
get_vision_fallback_api_key,
get_vision_fallback_model,
)
if TYPE_CHECKING:
from ..conversation import NodeConversation
logger = logging.getLogger(__name__)
# Hard cap on the intent string handed to the vision subagent. The
# subagent only needs the agent's recent reasoning + the tool descriptor;
# anything longer is wasted tokens (and risks pushing past the vision
# model's context with the image attached).
_INTENT_MAX_CHARS = 4096
# Cap on the tool args JSON snippet inside the intent. Some tool inputs
# (large strings, file contents) would dominate the intent if uncapped.
_TOOL_ARGS_MAX_CHARS = 4096
# Subagent system prompt — kept short so it fits within any provider's
# system-prompt budget alongside the user message + image. Tells the
# subagent its role and constrains output format.
#
# Coordinate labeling: the main agent's browser tools
# (browser_click_coordinate / browser_hover_coordinate / browser_press_at)
# accept VIEWPORT FRACTIONS (x, y) in [0..1] where (0,0) is the top-left
# and (1,1) is the bottom-right of the screenshot. Without coordinates
# the text-only agent has no way to act on what we describe — it can
# read the caption but cannot point. So for every interactive element
# we name (button, link, input, icon, tab, menu item, dialog control),
# include its approximate viewport-fraction centre as ``(fx, fy)``
# right after the element's name, e.g. ``"Submit" button (0.83, 0.92)``.
# Three rules: (1) coordinates only for things plausibly clickable /
# hoverable / typeable — don't tag pure body text or decorative
# graphics. (2) Eyeball to two decimal places; precision beyond that
# is false confidence. (3) Never invent — if an element is partly
# off-screen or you can't locate it, omit the coordinate rather than
# guessing.
_VISION_SUBAGENT_SYSTEM = (
"You are a vision subagent for a text-only main agent. The main "
"agent invoked a tool that returned the image(s) attached. Their "
"intent (their reasoning + the tool call) is below. Describe what "
"the image shows in service of their intent — concrete, factual, "
"no speculation. If their intent asks a yes/no question, answer it "
"directly first.\n\n"
"Coordinate labeling: the main agent uses fractional viewport "
"coordinates (x, y) in [0..1] — (0, 0) is the top-left of the "
"image, (1, 1) is the bottom-right — to drive its click / hover / "
"key-press tools. For every interactive element you mention "
"(button, link, input, checkbox, radio, dropdown, tab, menu item, "
"dialog control, icon), append its approximate centre as "
"``(fx, fy)`` immediately after the element's name or label, e.g. "
'``"Submit" button (0.83, 0.92)`` or ``profile avatar icon '
"(0.05, 0.07)``. Use two decimal places — more is false precision. "
"Skip coordinates for pure body text and decorative elements that "
"aren't clickable. If an element is partially off-screen or you "
"cannot reliably locate its centre, omit the coordinate rather "
"than guessing.\n\n"
"Output plain text, no markdown, ≤ 600 words."
)
def extract_intent_for_tool(
conversation: NodeConversation,
tool_name: str,
tool_args: dict[str, Any] | None,
) -> str:
"""Build the intent string passed to the vision subagent.
Combines the most recent assistant text (the LLM's reasoning right
before invoking the tool) with a structured tool-call descriptor.
Truncates to ``_INTENT_MAX_CHARS`` total, favouring the head of the
assistant text where goal-stating sentences usually live.
If no preceding assistant text exists (rare first turn), falls
back to ``"<no preceding reasoning>"`` so the subagent still gets
the tool descriptor.
"""
args_json: str
try:
args_json = json.dumps(tool_args or {}, default=str)
except Exception:
args_json = repr(tool_args)
if len(args_json) > _TOOL_ARGS_MAX_CHARS:
args_json = args_json[:_TOOL_ARGS_MAX_CHARS] + "…"
tool_line = f"Called: {tool_name}({args_json})"
# Walk newest → oldest, take the first assistant message with text.
assistant_text = ""
try:
messages = getattr(conversation, "_messages", []) or []
for msg in reversed(messages):
if getattr(msg, "role", None) != "assistant":
continue
content = getattr(msg, "content", "") or ""
if isinstance(content, str) and content.strip():
assistant_text = content.strip()
break
except Exception:
# Defensive — the agent loop must keep running even if the
# conversation structure changes shape.
assistant_text = ""
if not assistant_text:
assistant_text = "<no preceding reasoning>"
# Intent = tool descriptor (always intact) + reasoning (truncated).
head = f"{tool_line}\n\nReasoning before call:\n"
budget = _INTENT_MAX_CHARS - len(head)
if budget < 100:
# Tool descriptor is huge somehow — truncate it.
return head[:_INTENT_MAX_CHARS]
if len(assistant_text) > budget:
assistant_text = assistant_text[: budget - 1] + "…"
return head + assistant_text
async def caption_tool_image(
intent: str,
image_content: list[dict[str, Any]],
*,
timeout_s: float = 30.0,
) -> str | None:
"""Caption the given images using the configured ``vision_fallback`` model.
Returns the model's text response on success, or ``None`` on any
failure (no config, no API key, timeout, exception, empty
response). Callers chain to the next stage of the fallback on None.
Logs each call to ``~/.hive/llm_logs`` via ``log_llm_turn`` so the
cost / latency / quality are auditable post-hoc, tagged with
``execution_id="vision_fallback_subagent"``.
"""
model = get_vision_fallback_model()
if not model:
return None
api_key = get_vision_fallback_api_key()
api_base = get_vision_fallback_api_base()
if not api_key:
logger.debug("vision_fallback configured but no API key resolved; skipping")
return None
try:
import litellm
except ImportError:
return None
user_blocks: list[dict[str, Any]] = [{"type": "text", "text": intent}]
user_blocks.extend(image_content)
messages = [
{"role": "system", "content": _VISION_SUBAGENT_SYSTEM},
{"role": "user", "content": user_blocks},
]
kwargs: dict[str, Any] = {
"model": model,
"messages": messages,
"max_tokens": 1024,
"timeout": timeout_s,
"api_key": api_key,
}
if api_base:
kwargs["api_base"] = api_base
started = datetime.now()
caption: str | None = None
error_text: str | None = None
try:
response = await litellm.acompletion(**kwargs)
text = (response.choices[0].message.content or "").strip()
if text:
caption = text
except Exception as exc:
error_text = f"{type(exc).__name__}: {exc}"
logger.debug("vision_fallback model '%s' failed: %s", model, exc)
# Best-effort audit log so users can grep ~/.hive/llm_logs/ for
# vision-fallback subagent calls. Failures here must not bubble.
try:
from framework.tracker.llm_debug_logger import log_llm_turn
# Don't dump the base64 image data into the log file — that
# would balloon the jsonl with mostly-binary noise.
elided_blocks: list[dict[str, Any]] = [{"type": "text", "text": intent}]
elided_blocks.extend({"type": "image_url", "image_url": {"url": "<elided>"}} for _ in range(len(image_content)))
log_llm_turn(
node_id="vision_fallback_subagent",
stream_id="vision_fallback",
execution_id="vision_fallback_subagent",
iteration=0,
system_prompt=_VISION_SUBAGENT_SYSTEM,
messages=[{"role": "user", "content": elided_blocks}],
assistant_text=caption or "",
tool_calls=[],
tool_results=[],
token_counts={
"model": model,
"elapsed_s": (datetime.now() - started).total_seconds(),
"error": error_text,
"num_images": len(image_content),
"intent_chars": len(intent),
},
)
except Exception:
pass
return caption
__all__ = ["caption_tool_image", "extract_intent_for_tool"]
+19 -7
View File
@@ -37,6 +37,8 @@ def build_prompt_spec(
narrative: str | None = None,
memory_prompt: str | None = None,
) -> PromptSpec:
from framework.skills.tool_gating import augment_catalog_for_tools
resolved_memory = memory_prompt
if resolved_memory is None:
resolved_memory = getattr(ctx, "memory_prompt", "") or ""
@@ -46,14 +48,26 @@ def build_prompt_spec(
resolved_memory = dynamic() or ""
except Exception:
resolved_memory = getattr(ctx, "memory_prompt", "") or ""
# Tool-gated pre-activation: inject full body of default skills whose
# trigger tools are present in this agent's tool list (e.g. browser_*
# pulls in hive.browser-automation). Keeps non-browser agents lean.
tool_names = [getattr(t, "name", "") for t in (getattr(ctx, "available_tools", None) or [])]
raw_catalog = ctx.skills_catalog_prompt or ""
dynamic_catalog = getattr(ctx, "dynamic_skills_catalog_provider", None)
if dynamic_catalog is not None:
try:
raw_catalog = dynamic_catalog() or ""
except Exception:
raw_catalog = ctx.skills_catalog_prompt or ""
skills_catalog_prompt = augment_catalog_for_tools(raw_catalog, tool_names)
return PromptSpec(
identity_prompt=ctx.identity_prompt or "",
focus_prompt=focus_prompt if focus_prompt is not None else (ctx.agent_spec.system_prompt or ""),
narrative=narrative if narrative is not None else (ctx.narrative or ""),
accounts_prompt=ctx.accounts_prompt or "",
skills_catalog_prompt=skills_catalog_prompt,
protocols_prompt=ctx.protocols_prompt or "",
memory_prompt=resolved_memory,
agent_type=ctx.agent_spec.agent_type,
@@ -87,7 +101,5 @@ def build_system_prompt_for_context(
narrative: str | None = None,
memory_prompt: str | None = None,
) -> str:
spec = build_prompt_spec(ctx, focus_prompt=focus_prompt, narrative=narrative, memory_prompt=memory_prompt)
return build_system_prompt(spec)
+41 -4
@@ -76,10 +76,7 @@ class AgentSpec(BaseModel):
max_visits: int = Field(
default=0,
description=("Max times this agent executes in one colony run. 0 = unlimited. Set >1 for one-shot agents."),
)
output_model: type[BaseModel] | None = Field(
@@ -183,9 +180,39 @@ class AgentContext:
stream_id: str = ""
# ----- Task system fields (see framework/tasks) -------------------
# task_list_id: this agent's own session-scoped list, e.g.
# session:{agent_id}:{session_id}. Set by the runner / ColonyRuntime
# before the loop starts; immutable after first task_create.
task_list_id: str | None = None
# colony_id: set on the queen of a colony AND on every spawned worker
# so workers can render the "picked up" chip and the queen can address
# her colony template via colony_template_* tools.
colony_id: str | None = None
# picked_up_from: for workers, the (colony_task_list_id, template_task_id)
# pair their session was spawned for. None for the queen and queen-DM.
picked_up_from: tuple[str, int] | None = None
dynamic_tools_provider: Any = None
dynamic_prompt_provider: Any = None
# Optional Callable[[], str]: when set alongside ``dynamic_prompt_provider``,
# the AgentLoop sends the system prompt as two pieces — the result of
# ``dynamic_prompt_provider`` is the STATIC block (cached), and this
# provider returns the DYNAMIC suffix (not cached). The LLM wrapper
# emits them as two Anthropic system content blocks with a cache
# breakpoint between them for providers that honor ``cache_control``.
# For providers that don't, the two strings are concatenated. Used by
# the Queen to keep her persona/role/tools block warm across iterations
# while the recall + timestamp tail refreshes per user turn.
dynamic_prompt_suffix_provider: Any = None
dynamic_memory_provider: Any = None
# Optional Callable[[], str]: when set, the current skills-catalog
# prompt is sourced from this provider each iteration. Lets workers
# pick up UI toggles without restarting the run. Queen agents already
# rebuild the whole prompt via dynamic_prompt_provider — this field
# is a surgical alternative used by colony workers where the rest of
# the prompt stays constant and we don't want to thrash the cache.
dynamic_skills_catalog_provider: Any = None
skills_catalog_prompt: str = ""
protocols_prompt: str = ""
@@ -226,6 +253,16 @@ class AgentResult:
conversation: Any = None
# Machine-readable reason the loop stopped (see LoopExitReason in
# agent_loop/internals/types.py). "?" means the loop didn't set one,
# which should itself be treated as a diagnostic.
exit_reason: str = "?"
# Counters for reliability events surfaced during this execution.
# Populated from the loop's TaskRegistry-style counters at return
# time so callers can spot recurring failure modes without tailing
# logs. Keys are stable strings; missing keys mean "zero".
reliability_stats: dict[str, int] = field(default_factory=dict)
def to_summary(self, spec: Any = None) -> str:
if not self.success:
return f"Failed: {self.error}"
+1 -5
@@ -11,11 +11,7 @@ def list_framework_agents() -> list[Path]:
[
p
for p in FRAMEWORK_AGENTS_DIR.iterdir()
if p.is_dir() and ((p / "agent.json").exists() or (p / "agent.py").exists())
],
key=lambda p: p.name,
)
@@ -21,15 +21,15 @@ from pathlib import Path
from typing import TYPE_CHECKING
from framework.config import get_max_context_tokens
from framework.host.agent_host import AgentHost
from framework.host.execution_manager import EntryPointSpec
from framework.llm import LiteLLMProvider
from framework.loader.mcp_registry import MCPRegistry
from framework.loader.tool_registry import ToolRegistry
from framework.orchestrator import Goal, NodeSpec, SuccessCriterion
from framework.orchestrator.checkpoint_config import CheckpointConfig
from framework.orchestrator.edge import GraphSpec
from framework.orchestrator.orchestrator import ExecutionResult
from .config import default_config
from .nodes import build_tester_node
@@ -126,9 +126,7 @@ def _list_local_accounts() -> list[dict]:
try:
from framework.credentials.local.registry import LocalCredentialRegistry
return [info.to_account_dict() for info in LocalCredentialRegistry.default().list_accounts()]
except ImportError as exc:
logger.debug("Local credential registry unavailable: %s", exc)
return []
@@ -181,9 +179,7 @@ def _list_env_fallback_accounts() -> list[dict]:
if spec.credential_group in seen_groups:
continue
group_available = all(
_is_configured(n, s) for n, s in CREDENTIAL_SPECS.items() if s.credential_group == spec.credential_group
)
if not group_available:
continue
@@ -215,9 +211,7 @@ def list_connected_accounts() -> list[dict]:
# Show env-var fallbacks only for credentials not already in the named registry
local_providers = {a["provider"] for a in local}
env_fallbacks = [
a for a in _list_env_fallback_accounts() if a["provider"] not in local_providers
]
env_fallbacks = [a for a in _list_env_fallback_accounts() if a["provider"] not in local_providers]
return aden + local + env_fallbacks
@@ -272,9 +266,7 @@ def _activate_local_account(credential_id: str, alias: str) -> None:
group_specs = [
(cred_name, spec)
for cred_name, spec in CREDENTIAL_SPECS.items()
if spec.credential_group == credential_id or spec.credential_id == credential_id or cred_name == credential_id
]
# Deduplicate — credential_id and credential_group may both match the same spec
seen_env_vars: set[str] = set()
@@ -419,10 +411,7 @@ nodes = [
NodeSpec(
id="tester",
name="Credential Tester",
description=("Interactive credential testing — lets the user pick an account and verify it via API calls."),
node_type="event_loop",
client_facing=True,
max_node_visits=0,
@@ -469,10 +458,7 @@ pause_nodes = []
terminal_nodes = ["tester"] # Tester node can terminate
conversation_mode = "continuous"
identity_prompt = "You are a credential tester that verifies connected accounts and API keys can make real API calls."
loop_config = {
"max_iterations": 50,
"max_tool_calls_per_turn": 30,
@@ -1,9 +1,9 @@
{
"hive-tools": {
"hive_tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../../../../tools",
"description": "Hive tools MCP server with provider-specific tools"
"description": "hive_tools MCP server with provider-specific tools"
}
}
+22 -16
@@ -4,6 +4,7 @@ from __future__ import annotations
import json
from dataclasses import dataclass, field
from datetime import UTC
from pathlib import Path
@@ -47,6 +48,8 @@ class AgentEntry:
tool_count: int = 0
tags: list[str] = field(default_factory=list)
last_active: str | None = None
created_at: str | None = None
icon: str | None = None
workers: list[WorkerEntry] = field(default_factory=list)
@@ -150,28 +153,19 @@ def _is_colony_dir(path: Path) -> bool:
"""Check if a directory is a colony with worker config files."""
if not path.is_dir():
return False
return any(f.suffix == ".json" and f.stem not in _EXCLUDED_JSON_STEMS for f in path.iterdir() if f.is_file())
def _find_worker_configs(colony_dir: Path) -> list[Path]:
"""Find all worker config JSON files in a colony directory."""
return sorted(
p for p in colony_dir.iterdir() if p.is_file() and p.suffix == ".json" and p.stem not in _EXCLUDED_JSON_STEMS
)
def _extract_agent_stats(agent_path: Path) -> tuple[int, int, list[str]]:
"""Extract worker count, tool count, and tags from a colony directory."""
tags: list[str] = []
worker_configs = _find_worker_configs(agent_path)
if worker_configs:
@@ -218,13 +212,26 @@ def discover_agents() -> dict[str, list[AgentEntry]]:
name = config_fallback_name
desc = ""
# Read colony metadata for queen provenance and timestamps
colony_queen_name = ""
colony_created_at: str | None = None
colony_icon: str | None = None
metadata_path = path / "metadata.json"
if metadata_path.exists():
try:
mdata = json.loads(metadata_path.read_text(encoding="utf-8"))
colony_queen_name = mdata.get("queen_name", "")
colony_created_at = mdata.get("created_at")
colony_icon = mdata.get("icon")
except Exception:
pass
# Fallback: use directory creation time if metadata lacks created_at
if not colony_created_at:
try:
from datetime import datetime
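# NOTE: st_birthtime is a macOS/BSD stat field; on most Linux
# filesystems it raises AttributeError, which the except below
# swallows, leaving created_at unset.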
stat = path.stat()
colony_created_at = datetime.fromtimestamp(stat.st_birthtime, tz=UTC).isoformat()
except Exception:
pass
@@ -251,9 +258,6 @@ def discover_agents() -> dict[str, list[AgentEntry]]:
pass
node_count = len(worker_entries)
tool_count = max((w.tool_count for w in worker_entries), default=0)
entries.append(
@@ -268,6 +272,8 @@ def discover_agents() -> dict[str, list[AgentEntry]]:
tool_count=tool_count,
tags=[],
last_active=_get_last_active(path),
created_at=colony_created_at,
icon=colony_icon,
workers=worker_entries,
)
)
+1 -3
@@ -11,9 +11,7 @@ from .nodes import queen_node
queen_goal = Goal(
id="queen-manager",
name="Queen Manager",
description=("Manage the worker agent lifecycle and serve as the user's primary interactive interface."),
success_criteria=[],
constraints=[],
)
@@ -0,0 +1,240 @@
"""One-shot LLM gate that decides if a queen DM is ready to fork a colony.
The queen's ``start_incubating_colony`` tool calls :func:`evaluate` with
the queen's recent conversation, a proposed ``colony_name``, and a
one-paragraph ``intended_purpose``. The evaluator returns a structured
verdict:
{
"ready": bool,
"reasons": [str],
"missing_prerequisites": [str],
}
On ``ready=False`` the queen receives the verdict as her tool result and
self-corrects (asks the user, refines scope, drops the idea). On
``ready=True`` the tool flips the queen's phase to ``incubating``.
Failure mode is **fail-closed**: any LLM error or unparseable response
returns ``ready=False`` with reason ``"evaluation_failed"`` so the queen
cannot accidentally proceed past a broken gate.
"""
from __future__ import annotations
import json
import logging
import re
from typing import Any
from framework.agent_loop.conversation import Message
logger = logging.getLogger(__name__)
_INCUBATING_EVALUATOR_SYSTEM_PROMPT = """\
You gate whether a queen agent should commit to forking a persistent
"colony" (a headless worker spec written to disk). Forking is
expensive: it ends the user's chat with this queen and the worker runs
unattended afterward, so the spec must be settled before you approve.
Read the conversation excerpt and the queen's proposed colony_name +
intended_purpose, then decide.
APPROVE (ready=true) only when ALL of the following hold:
1. The user has explicitly asked for work that needs to outlive this
chat: recurring (cron / interval), monitoring + alert, scheduled
batch, or "fire-and-forget background job". A one-shot question
that the queen can answer in chat does NOT qualify.
2. The scope of the work is concrete enough to write down: what
inputs, what outputs, what success looks like. Vague ("help me
with my workflow") does NOT qualify.
3. The technical approach is at least sketched: what data sources,
APIs, or tools the worker will use. The queen does not have to
have written the SKILL.md yet, but she must have the operational
ingredients available.
4. There are no open clarifying questions on the table that the user
hasn't answered. If the queen recently asked the user something
and is still waiting, do NOT approve.
REJECT (ready=false) on any of:
- Conversation is too short / too generic to support a settled spec.
- User is still describing what they want.
- User has expressed doubts, change-of-direction, or "let me think".
- Work is one-shot and could be done in chat instead.
- Open question awaiting user reply.
Reply with a JSON object exactly matching this shape:
{
"ready": true | false,
"reasons": ["short phrase", ...], // at least one entry
"missing_prerequisites": ["short phrase", ...] // empty when ready
}
``reasons`` explains the verdict in 1-3 short phrases.
``missing_prerequisites`` lists what's missing in queen-actionable
form ("user hasn't confirmed schedule", "no API auth flow discussed").
Empty list when ``ready=true``.
Output JSON only. Do not wrap in markdown. Do not add prose.
"""
# Bound the formatted excerpt so the eval call stays cheap and fits well
# under the LLM's context window even for long DM sessions.
_MAX_MESSAGES = 30
_MAX_TOOL_CONTENT_CHARS = 400
_MAX_USER_CONTENT_CHARS = 2_000
_MAX_ASSISTANT_CONTENT_CHARS = 2_000
def format_conversation_excerpt(messages: list[Message]) -> str:
"""Format the tail of a queen conversation for the evaluator prompt.
Keeps the most recent ``_MAX_MESSAGES`` messages. Tool results are
truncated hard since they're rarely load-bearing for the readiness
decision; user/assistant text is truncated more generously to
preserve the actual conversation signal.
"""
if not messages:
return "(no messages)"
tail = messages[-_MAX_MESSAGES:]
parts: list[str] = []
for msg in tail:
role = msg.role.upper()
content = (msg.content or "").strip()
if msg.role == "tool":
if len(content) > _MAX_TOOL_CONTENT_CHARS:
content = content[:_MAX_TOOL_CONTENT_CHARS] + "..."
elif msg.role == "assistant":
# Surface tool-call intent for empty assistant turns so the
# evaluator sees what the queen has been doing.
if not content and msg.tool_calls:
names = [tc.get("function", {}).get("name", "?") for tc in msg.tool_calls]
content = f"(called: {', '.join(names)})"
if len(content) > _MAX_ASSISTANT_CONTENT_CHARS:
content = content[:_MAX_ASSISTANT_CONTENT_CHARS] + "..."
else: # user
if len(content) > _MAX_USER_CONTENT_CHARS:
content = content[:_MAX_USER_CONTENT_CHARS] + "..."
if content:
parts.append(f"[{role}]: {content}")
return "\n\n".join(parts) if parts else "(no messages)"
def _build_user_message(
conversation_excerpt: str,
colony_name: str,
intended_purpose: str,
) -> str:
return (
f"## Proposed colony name\n{colony_name}\n\n"
f"## Queen's intended_purpose\n{intended_purpose.strip()}\n\n"
f"## Recent conversation (oldest → newest)\n{conversation_excerpt}\n\n"
"Decide: should this queen be approved to enter INCUBATING phase?"
)
def _parse_verdict(raw: str) -> dict[str, Any] | None:
"""Parse the evaluator's JSON. Returns None if parsing fails."""
if not raw:
return None
raw = raw.strip()
try:
return json.loads(raw)
except json.JSONDecodeError:
# Some models wrap JSON in markdown fences or add preamble.
# Pull the first { ... } block out as a best-effort fallback —
# mirrors the same recovery pattern used in recall_selector.py.
match = re.search(r"\{.*\}", raw, re.DOTALL)
if match:
try:
return json.loads(match.group())
except json.JSONDecodeError:
return None
return None
def _normalize_verdict(parsed: dict[str, Any]) -> dict[str, Any]:
"""Coerce a parsed verdict into the shape the tool returns to the queen."""
ready = bool(parsed.get("ready"))
reasons = parsed.get("reasons") or []
if isinstance(reasons, str):
reasons = [reasons]
reasons = [str(r).strip() for r in reasons if str(r).strip()]
missing = parsed.get("missing_prerequisites") or []
if isinstance(missing, str):
missing = [missing]
missing = [str(m).strip() for m in missing if str(m).strip()]
if ready:
# When approved we don't surface missing prerequisites — the
# incubating role prompt opens that floor itself.
missing = []
elif not reasons:
# Always give the queen at least one reason to reflect on.
reasons = ["evaluator returned no reasons"]
return {
"ready": ready,
"reasons": reasons,
"missing_prerequisites": missing,
}
async def evaluate(
llm: Any,
messages: list[Message],
colony_name: str,
intended_purpose: str,
) -> dict[str, Any]:
"""Run the incubating evaluator against the queen's conversation.
Args:
llm: An LLM provider exposing ``acomplete(messages, system, ...)``.
Pass the queen's own ``ctx.llm`` so the eval uses the same
model the user is talking to.
messages: The queen's conversation messages, oldest first. The
evaluator slices its own tail; pass the full list.
colony_name: Validated colony slug.
intended_purpose: Queen's one-paragraph brief.
Returns:
``{"ready": bool, "reasons": [str], "missing_prerequisites": [str]}``.
Fail-closed on any error.
"""
excerpt = format_conversation_excerpt(messages)
user_msg = _build_user_message(excerpt, colony_name, intended_purpose)
try:
response = await llm.acomplete(
messages=[{"role": "user", "content": user_msg}],
system=_INCUBATING_EVALUATOR_SYSTEM_PROMPT,
max_tokens=1024,
response_format={"type": "json_object"},
)
except Exception as exc: # noqa: BLE001 - fail-closed on any LLM failure
logger.warning("incubating_evaluator: LLM call failed (%s)", exc)
return {
"ready": False,
"reasons": ["evaluation_failed"],
"missing_prerequisites": ["evaluator LLM call failed; retry once the queen can reach the model again"],
}
raw = (getattr(response, "content", "") or "").strip()
parsed = _parse_verdict(raw)
if parsed is None:
logger.warning(
"incubating_evaluator: could not parse JSON verdict (raw=%.200s)",
raw,
)
return {
"ready": False,
"reasons": ["evaluation_failed"],
"missing_prerequisites": ["evaluator returned malformed JSON; retry"],
}
return _normalize_verdict(parsed)
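An illustrative call site for the gate (the handler name and conversation accessor are hypothetical; ``evaluate`` is the function above):

async def handle_start_incubating_colony(ctx, colony_name: str, intended_purpose: str) -> dict:
    verdict = await evaluate(
        ctx.llm,                    # the queen's own provider, per the docstring
        ctx.conversation.messages,  # assumed accessor; pass the full list, oldest first
        colony_name,
        intended_purpose,
    )
    if not verdict["ready"]:
        # Fail-closed path: the queen sees reasons + missing_prerequisites
        # as her tool result and self-corrects.
        return verdict
    # ready=True: flip the queen's phase to "incubating" here.
    return verdict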
@@ -1,3 +1,3 @@
{
"include": ["gcu-tools", "hive-tools"]
"include": ["gcu-tools", "hive_tools"]
}
+2 -2
@@ -13,11 +13,11 @@
"cwd": "../../../../tools",
"description": "Browser automation tools (Playwright-based)"
},
"hive-tools": {
"hive_tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../../../../tools",
"description": "Hive tools MCP server (csv, pdf, web_search, web_scrape, email, integrations)"
"description": "Aden integration tools (gmail, calendar, hubspot, etc.) — gated by credentials and the verified manifest"
}
}
File diff suppressed because it is too large
@@ -19,6 +19,8 @@ import re
from dataclasses import dataclass, field
from pathlib import Path
from framework.config import MEMORIES_DIR
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
@@ -27,8 +29,6 @@ logger = logging.getLogger(__name__)
GLOBAL_MEMORY_CATEGORIES: tuple[str, ...] = ("profile", "preference", "environment", "feedback")
MAX_FILES: int = 200
MAX_FILE_SIZE_BYTES: int = 4096 # 4 KB hard limit per memory file
File diff suppressed because it is too large
@@ -0,0 +1,217 @@
"""Per-queen tool configuration sidecar (``tools.json``).
Lives at ``~/.hive/agents/queens/{queen_id}/tools.json`` alongside
``profile.yaml``. Kept separate so identity (name, title, core traits)
stays human-authored and lean, while the machine-managed tool allowlist
can grow (per-tool overrides, audit timestamps, future per-phase rules)
without bloating the profile.
Schema::
{
"enabled_mcp_tools": ["read_file", ...] | null,
"updated_at": "2026-04-21T12:34:56+00:00"
}
- ``null`` / missing file → default "allow every MCP tool".
- ``[]`` → explicitly disable every MCP tool.
- ``["foo", "bar"]`` → only those MCP tool names pass the filter.
Atomic writes via ``os.replace`` follow the same pattern as
``framework.host.colony_metadata.update_colony_metadata``.
"""
from __future__ import annotations
import json
import logging
import os
import tempfile
from datetime import UTC, datetime
from pathlib import Path
from typing import Any
import yaml
from framework.config import QUEENS_DIR
logger = logging.getLogger(__name__)
def tools_config_path(queen_id: str) -> Path:
"""Return the on-disk path to a queen's ``tools.json``."""
return QUEENS_DIR / queen_id / "tools.json"
def _atomic_write_json(path: Path, data: dict[str, Any]) -> None:
"""Write ``data`` to ``path`` atomically via tempfile + replace."""
path.parent.mkdir(parents=True, exist_ok=True)
fd, tmp = tempfile.mkstemp(
prefix=".tools.",
suffix=".json.tmp",
dir=str(path.parent),
)
try:
with os.fdopen(fd, "w", encoding="utf-8") as fh:
json.dump(data, fh, indent=2)
fh.flush()
os.fsync(fh.fileno())
os.replace(tmp, path)
except BaseException:
try:
os.unlink(tmp)
except OSError:
pass
raise
def _migrate_from_profile_if_needed(queen_id: str) -> list[str] | None:
"""Hoist a legacy ``enabled_mcp_tools`` field out of ``profile.yaml``.
Returns the migrated value (or ``None`` if nothing to migrate). After
migration the sidecar exists on disk and the profile YAML no longer
contains ``enabled_mcp_tools``. Safe to call repeatedly.
"""
profile_path = QUEENS_DIR / queen_id / "profile.yaml"
if not profile_path.exists():
return None
try:
data = yaml.safe_load(profile_path.read_text(encoding="utf-8"))
except (yaml.YAMLError, OSError):
logger.warning("Could not read profile.yaml during tools migration: %s", queen_id)
return None
if not isinstance(data, dict):
return None
if "enabled_mcp_tools" not in data:
return None
raw = data.pop("enabled_mcp_tools")
enabled: list[str] | None
if raw is None:
enabled = None
elif isinstance(raw, list) and all(isinstance(x, str) for x in raw):
enabled = raw
else:
logger.warning(
"Legacy enabled_mcp_tools on queen %s had unexpected shape %r; dropping",
queen_id,
raw,
)
enabled = None
# Write sidecar first, then rewrite profile — if the second step
# fails we still have the config available and won't re-migrate.
_atomic_write_json(
tools_config_path(queen_id),
{
"enabled_mcp_tools": enabled,
"updated_at": datetime.now(UTC).isoformat(),
},
)
profile_path.write_text(
yaml.safe_dump(data, sort_keys=False, allow_unicode=True),
encoding="utf-8",
)
logger.info(
"Migrated enabled_mcp_tools for queen %s from profile.yaml to tools.json",
queen_id,
)
return enabled
def tools_config_exists(queen_id: str) -> bool:
"""Return True when the queen has a persisted ``tools.json`` sidecar.
Used by callers that need to tell an explicit user save apart from a
fallthrough to the role-based default (both can return the same
value from ``load_queen_tools_config``).
"""
return tools_config_path(queen_id).exists()
def delete_queen_tools_config(queen_id: str) -> bool:
"""Delete the queen's ``tools.json`` sidecar if present.
Returns ``True`` if a file was removed, ``False`` if none existed.
The next ``load_queen_tools_config`` call falls through to the
role-based default (or allow-all for unknown queens).
"""
path = tools_config_path(queen_id)
if not path.exists():
return False
try:
path.unlink()
return True
except OSError:
logger.warning("Failed to delete %s", path, exc_info=True)
return False
def load_queen_tools_config(
queen_id: str,
mcp_catalog: dict[str, list[dict]] | None = None,
) -> list[str] | None:
"""Return the queen's MCP tool allowlist, or ``None`` for default-allow.
Order of resolution:
1. ``tools.json`` sidecar (authoritative; user has saved).
2. Legacy ``profile.yaml`` field (migrated and deleted on first read).
3. Role-based default from ``queen_tools_defaults`` when the queen
is in the known persona table. ``mcp_catalog`` lets the helper
expand ``@server:NAME`` shorthands; without it, shorthand entries
are dropped.
4. ``None`` → default "allow every MCP tool".
"""
path = tools_config_path(queen_id)
if path.exists():
try:
data = json.loads(path.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError):
logger.warning("Invalid %s; treating as default-allow", path)
return None
if not isinstance(data, dict):
return None
raw = data.get("enabled_mcp_tools")
if raw is None:
return None
if isinstance(raw, list) and all(isinstance(x, str) for x in raw):
return raw
logger.warning("Unexpected enabled_mcp_tools shape in %s; ignoring", path)
return None
migrated = _migrate_from_profile_if_needed(queen_id)
if migrated is not None:
return migrated
# If migration just hoisted an explicit ``null`` out of profile.yaml,
# a sidecar with allow-all semantics now exists on disk. Honor that
# over the role default so an explicit user choice wins.
if tools_config_path(queen_id).exists():
return None
# No sidecar, nothing to migrate — fall back to role-based default.
from framework.agents.queen.queen_tools_defaults import resolve_queen_default_tools
return resolve_queen_default_tools(queen_id, mcp_catalog)
def update_queen_tools_config(
queen_id: str,
enabled_mcp_tools: list[str] | None,
) -> list[str] | None:
"""Persist the queen's MCP allowlist to ``tools.json``.
Raises ``FileNotFoundError`` if the queen's directory is missing —
we refuse to silently create a sidecar for a queen that doesn't
exist.
"""
queen_dir = QUEENS_DIR / queen_id
if not queen_dir.exists():
raise FileNotFoundError(f"Queen directory not found: {queen_id}")
_atomic_write_json(
tools_config_path(queen_id),
{
"enabled_mcp_tools": enabled_mcp_tools,
"updated_at": datetime.now(UTC).isoformat(),
},
)
return enabled_mcp_tools
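# Illustrative round trip under the resolution order documented on
# load_queen_tools_config (hypothetical queen id; assumes its directory exists):
#
#     update_queen_tools_config("queen_example", ["read_file", "csv_read"])
#     load_queen_tools_config("queen_example")    # -> ["read_file", "csv_read"]
#     update_queen_tools_config("queen_example", None)   # explicit allow-all sidecar
#     delete_queen_tools_config("queen_example")  # back to role default / allow-all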
@@ -0,0 +1,272 @@
"""Role-based default tool allowlists for queens.
Every queen inherits the same MCP surface (all servers loaded for the
queen agent), but exposing 94+ tools to every persona clutters the LLM
tool catalog and wastes prompt tokens. This module defines a sensible
default allowlist per queen persona so, e.g., Head of Legal doesn't
see port scanners and Head of Finance doesn't see ``apply_patch``.
Defaults apply only when the queen has no ``tools.json`` sidecar; the
moment the user saves an allowlist through the Tool Library, the
sidecar becomes authoritative. A DELETE on the tools endpoint removes
the sidecar and brings the queen back to her role default.
Category entries support a ``@server:NAME`` shorthand that expands to
every tool name registered against that MCP server in the current
catalog. This keeps the category table short and drift-free when new
tools are added (e.g. browser_* auto-joins the ``browser`` category).
"""
from __future__ import annotations
import logging
from typing import Any
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Categories — reusable bundles of MCP tool names.
# ---------------------------------------------------------------------------
#
# Each category is a flat list of either concrete tool names or the
# ``@server:NAME`` shorthand. The shorthand expands to every tool the
# given MCP server currently exposes (requires a live catalog; when one
# is not available the shorthand is silently dropped so we fall back to
# the named entries only).
_TOOL_CATEGORIES: dict[str, list[str]] = {
# Read-only file operations — safe baseline for every knowledge queen.
"file_read": [
"read_file",
"list_directory",
"list_dir",
"list_files",
"search_files",
"grep_search",
"pdf_read",
],
# File mutation — only personas that author or edit artifacts.
"file_write": [
"write_file",
"edit_file",
"apply_diff",
"apply_patch",
"replace_file_content",
"hashline_edit",
"undo_changes",
],
# Shell + process control — engineering personas only.
"shell": [
"run_command",
"execute_command_tool",
"bash_kill",
"bash_output",
],
# Tabular data. CSV/Excel read/write + DuckDB SQL.
"data": [
"csv_read",
"csv_info",
"csv_write",
"csv_append",
"csv_sql",
"excel_read",
"excel_info",
"excel_write",
"excel_append",
"excel_search",
"excel_sheet_list",
"excel_sql",
],
# Browser automation — every tool from the gcu-tools MCP server.
"browser": ["@server:gcu-tools"],
# External research / information-gathering.
"research": [
"search_papers",
"download_paper",
"search_wikipedia",
"web_scrape",
],
# Security scanners — pentest-ish, only for engineering/security roles.
"security": [
"dns_security_scan",
"http_headers_scan",
"port_scan",
"ssl_tls_scan",
"subdomain_enumerate",
"tech_stack_detect",
"risk_score",
],
# Lightweight context helpers — good default for every queen.
"time_context": [
"get_current_time",
"get_account_info",
],
# Runtime log inspection — debug/observability for builder personas.
"runtime_inspection": [
"query_runtime_logs",
"query_runtime_log_details",
"query_runtime_log_raw",
],
# Agent-management tools — building/validating/checking agents.
"agent_mgmt": [
"list_agents",
"list_agent_tools",
"list_agent_sessions",
"get_agent_checkpoint",
"list_agent_checkpoints",
"run_agent_tests",
"save_agent_draft",
"confirm_and_build",
"validate_agent_package",
"validate_agent_tools",
"enqueue_task",
],
}
# ---------------------------------------------------------------------------
# Per-queen mapping.
# ---------------------------------------------------------------------------
#
# Built from the queen personas in ``queen_profiles.DEFAULT_QUEENS``. The
# goal is "just enough" — a queen should see tools she'd plausibly call
# for her stated role, nothing more. Users curate further via the Tool
# Library if they want.
#
# A queen whose ID is NOT in this map falls through to "allow every MCP
# tool" (the original behavior), which keeps the system compatible with
# user-added custom queen IDs that we don't know about.
QUEEN_DEFAULT_CATEGORIES: dict[str, list[str]] = {
# Head of Technology — builds and operates systems; full toolkit.
"queen_technology": [
"file_read",
"file_write",
"shell",
"data",
"browser",
"research",
"security",
"time_context",
"runtime_inspection",
"agent_mgmt",
],
# Head of Growth — data, experiments, competitor research; no shell/security.
"queen_growth": [
"file_read",
"file_write",
"data",
"browser",
"research",
"time_context",
],
# Head of Product Strategy — user research + roadmaps; no shell/security.
"queen_product_strategy": [
"file_read",
"file_write",
"data",
"browser",
"research",
"time_context",
],
# Head of Finance — financial models (CSV/Excel heavy), market research.
"queen_finance_fundraising": [
"file_read",
"file_write",
"data",
"browser",
"research",
"time_context",
],
# Head of Legal — reads contracts/PDFs, researches; no shell/data/security.
"queen_legal": [
"file_read",
"file_write",
"browser",
"research",
"time_context",
],
# Head of Brand & Design — visual refs, style guides; no shell/data/security.
"queen_brand_design": [
"file_read",
"file_write",
"browser",
"research",
"time_context",
],
# Head of Talent — candidate pipelines, resumes; data + browser heavy.
"queen_talent": [
"file_read",
"file_write",
"data",
"browser",
"research",
"time_context",
],
# Head of Operations — processes, automation, observability.
"queen_operations": [
"file_read",
"file_write",
"data",
"browser",
"research",
"time_context",
"runtime_inspection",
"agent_mgmt",
],
}
def has_role_default(queen_id: str) -> bool:
"""Return True when ``queen_id`` is known to the category table."""
return queen_id in QUEEN_DEFAULT_CATEGORIES
def resolve_queen_default_tools(
queen_id: str,
mcp_catalog: dict[str, list[dict[str, Any]]] | None = None,
) -> list[str] | None:
"""Return the role-based default allowlist for ``queen_id``.
Arguments:
queen_id: Profile ID (e.g. ``"queen_technology"``).
mcp_catalog: Optional mapping of ``{server_name: [{"name": ...}, ...]}``
used to expand ``@server:NAME`` shorthands in categories.
When absent, shorthand entries are dropped and the result
contains only the explicitly-named tools.
Returns:
A deduplicated list of tool names, or ``None`` if the queen has
no role entry (caller should treat as "allow every MCP tool").
"""
categories = QUEEN_DEFAULT_CATEGORIES.get(queen_id)
if not categories:
return None
names: list[str] = []
seen: set[str] = set()
def _add(name: str) -> None:
if name and name not in seen:
seen.add(name)
names.append(name)
for cat in categories:
for entry in _TOOL_CATEGORIES.get(cat, []):
if entry.startswith("@server:"):
server_name = entry[len("@server:") :]
if mcp_catalog is None:
logger.debug(
"resolve_queen_default_tools: catalog missing; cannot expand %s",
entry,
)
continue
for tool in mcp_catalog.get(server_name, []) or []:
tname = tool.get("name") if isinstance(tool, dict) else None
if tname:
_add(tname)
else:
_add(entry)
return names
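# Expansion sketch with a hypothetical catalog (real catalogs come from the
# live MCP registry):
#
#     catalog = {"gcu-tools": [{"name": "browser_start"}, {"name": "browser_open"}]}
#     resolve_queen_default_tools("queen_legal", catalog)
#     # -> the file_read / file_write / research / time_context names plus
#     #    browser_start and browser_open from "@server:gcu-tools"; without
#     #    the catalog the shorthand is dropped and only named tools remain.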
+84 -10
@@ -1,10 +1,10 @@
"""Recall selector — pre-turn global memory selection for the queen.
"""Recall selector — pre-turn memory selection for the queen.
Before each conversation turn the system:
- 1. Scans the global memory directory for ``.md`` files (cap: 200).
+ 1. Scans one or more memory directories for ``.md`` files (cap: 200 each).
2. Reads headers (frontmatter + first 30 lines).
- 3. Uses a single LLM call with structured JSON output to pick the ~5
- most relevant memories.
+ 3. Uses an LLM call with structured JSON output to pick the most relevant
+ memories for each scope.
4. Injects them into the system prompt.
The selector only sees the user's query string — no full conversation
@@ -21,7 +21,7 @@ from typing import Any
from framework.agents.queen.queen_memory_v2 import (
format_memory_manifest,
- global_memory_dir,
+ global_memory_dir as _default_global_memory_dir,
scan_memory_files,
)
@@ -66,7 +66,7 @@ async def select_memories(
Returns a list of filenames. Best-effort: on any error returns ``[]``.
"""
- mem_dir = memory_dir or global_memory_dir()
+ mem_dir = memory_dir or _default_global_memory_dir()
files = scan_memory_files(mem_dir)
if not files:
logger.debug("recall: no memory files found, skipping selection")
@@ -114,12 +114,35 @@ async def select_memories(
return []
def _format_relative_age(mtime: float) -> str | None:
"""Return age description if memory is older than 48 hours.
Returns None if 48 hours or newer, otherwise returns "X days old".
"""
import time
age_seconds = time.time() - mtime
hours = age_seconds / 3600
if hours <= 48:
return None
days = int(age_seconds / 86400)
if days == 1:
return "1 day old"
return f"{days} days old"
def format_recall_injection(
filenames: list[str],
memory_dir: Path | None = None,
*,
label: str = "Global Memories",
) -> str:
"""Read selected memory files and format for system prompt injection."""
mem_dir = memory_dir or global_memory_dir()
"""Read selected memory files and format for system prompt injection.
Includes relative timestamp (e.g., "3 days old") for memories older than 48 hours.
"""
mem_dir = memory_dir or _default_global_memory_dir()
if not filenames:
return ""
@@ -130,12 +153,63 @@ def format_recall_injection(
continue
try:
content = path.read_text(encoding="utf-8").strip()
# Get file modification time for age calculation
mtime = path.stat().st_mtime
age_note = _format_relative_age(mtime)
except OSError:
continue
blocks.append(f"### {fname}\n\n{content}")
# Build header with optional age note
if age_note:
header = f"### {fname} ({age_note})"
else:
header = f"### {fname}"
blocks.append(f"{header}\n\n{content}")
if not blocks:
return ""
body = "\n\n---\n\n".join(blocks)
return f"--- Global Memories ---\n\n{body}\n\n--- End Global Memories ---"
return f"--- {label} ---\n\n{body}\n\n--- End {label} ---"
async def build_scoped_recall_blocks(
query: str,
llm: Any,
*,
global_memory_dir: Path | None = None,
queen_memory_dir: Path | None = None,
queen_id: str | None = None,
global_max_results: int = 3,
queen_max_results: int = 3,
) -> tuple[str, str]:
"""Build separate recall blocks for global and queen-scoped memory."""
global_dir = global_memory_dir or _default_global_memory_dir()
global_selected = await select_memories(
query,
llm,
memory_dir=global_dir,
max_results=global_max_results,
)
global_block = format_recall_injection(
global_selected,
memory_dir=global_dir,
label="Global Memories",
)
queen_block = ""
if queen_memory_dir is not None:
queen_selected = await select_memories(
query,
llm,
memory_dir=queen_memory_dir,
max_results=queen_max_results,
)
queen_label = f"Queen Memories: {queen_id}" if queen_id else "Queen Memories"
queen_block = format_recall_injection(
queen_selected,
memory_dir=queen_memory_dir,
label=queen_label,
)
return global_block, queen_block
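# Call-site sketch (hypothetical llm handle and queen memory path): both
# blocks come back as "" when nothing is selected, so they can be appended
# to the system prompt unconditionally.
#
#     global_block, queen_block = await build_scoped_recall_blocks(
#         query=user_text,
#         llm=llm,
#         queen_memory_dir=queen_dir,  # e.g. the queen's memories/ folder
#         queen_id="queen_technology",
#     )
#     system_prompt = "\n\n".join(p for p in (base_prompt, global_block, queen_block) if p)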
@@ -13,7 +13,7 @@
6. **Calling set_output in same turn as tool calls** — Call set_output in a SEPARATE turn.
## File Template Errors
- 7. **Wrong import paths** — Use `from framework.graph import ...`, NOT `from core.framework.graph import ...`.
+ 7. **Wrong import paths** — Use `from framework.orchestrator import ...`, NOT `from framework.graph import ...` or `from core.framework...`.
8. **Missing storage path** — Agent class must set `self._storage_path = Path.home() / ".hive" / "agents" / "agent_name"`.
9. **Missing mcp_servers.json** — Without this, the agent has no tools at runtime.
10. **Bare `python` command** — Use `"command": "uv"` with args `["run", "python", ...]`.
@@ -55,7 +55,7 @@ metadata = AgentMetadata()
```python
"""Node definitions for My Agent."""
- from framework.graph import NodeSpec
+ from framework.orchestrator import NodeSpec
# Node 1: Process (autonomous entry node)
# The queen handles intake and passes structured input via
@@ -123,14 +123,15 @@ __all__ = ["process_node", "handoff_node"]
from pathlib import Path
- from framework.graph import EdgeSpec, EdgeCondition, Goal, SuccessCriterion, Constraint
- from framework.graph.edge import GraphSpec
- from framework.graph.executor import ExecutionResult
- from framework.graph.checkpoint_config import CheckpointConfig
+ from framework.orchestrator import EdgeSpec, EdgeCondition, Goal, SuccessCriterion, Constraint
+ from framework.orchestrator.edge import GraphSpec
+ from framework.orchestrator.orchestrator import ExecutionResult
+ from framework.orchestrator.checkpoint_config import CheckpointConfig
from framework.llm import LiteLLMProvider
- from framework.runner.tool_registry import ToolRegistry
- from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
- from framework.runtime.execution_stream import EntryPointSpec
+ from framework.loader.tool_registry import ToolRegistry
+ from framework.host.agent_host import AgentHost
+ from framework.host.execution_manager import EntryPointSpec
from .config import default_config, metadata
from .nodes import process_node, handoff_node
@@ -227,7 +228,7 @@ class MyAgent:
tools = list(self._tool_registry.get_tools().values())
tool_executor = self._tool_registry.get_executor()
self._graph = self._build_graph()
- self._agent_runtime = create_agent_runtime(
+ self._agent_runtime = AgentHost(
graph=self._graph, goal=self.goal, storage_path=self._storage_path,
entry_points=[EntryPointSpec(id="default", name="Default", entry_node=self.entry_node,
trigger_type="manual", isolation_level="shared")],
@@ -460,8 +461,8 @@ def tui():
from framework.tui.app import AdenTUI
from framework.llm import LiteLLMProvider
from framework.runner.tool_registry import ToolRegistry
- from framework.runtime.agent_runtime import create_agent_runtime
- from framework.runtime.execution_stream import EntryPointSpec
+ from framework.host.agent_host import AgentHost
+ from framework.host.execution_manager import EntryPointSpec
async def run_tui():
agent = MyAgent()
@@ -471,7 +472,7 @@ def tui():
mcp_cfg = Path(__file__).parent / "mcp_servers.json"
if mcp_cfg.exists(): agent._tool_registry.load_mcp_config(mcp_cfg)
llm = LiteLLMProvider(model=agent.config.model, api_key=agent.config.api_key, api_base=agent.config.api_base)
- runtime = create_agent_runtime(
+ runtime = AgentHost(
graph=agent._build_graph(), goal=agent.goal, storage_path=storage,
entry_points=[EntryPointSpec(id="start", name="Start", entry_node="process", trigger_type="manual", isolation_level="isolated")],
llm=llm, tools=list(agent._tool_registry.get_tools().values()), tool_executor=agent._tool_registry.get_executor())
@@ -509,17 +510,17 @@ if __name__ == "__main__":
## mcp_servers.json
- > **Auto-generated.** `initialize_and_build_agent` creates this file with hive-tools
+ > **Auto-generated.** `initialize_and_build_agent` creates this file with hive_tools
> as the default. Only edit manually to add additional MCP servers.
```json
{
"hive-tools": {
"hive_tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../../tools",
"description": "Hive tools MCP server"
"description": "hive_tools MCP server"
}
}
```
@@ -41,7 +41,7 @@ loop_config:
# MCP servers to connect (resolved by name from ~/.hive/mcp_registry/)
mcp_servers:
- - name: hive-tools
+ - name: hive_tools
- name: gcu-tools
nodes:
@@ -200,7 +200,7 @@ The `mcp_servers.json` file is still loaded automatically if present alongside
```yaml
mcp_servers:
- - name: hive-tools
+ - name: hive_tools
- name: gcu-tools
```
@@ -36,7 +36,7 @@ If `agent.py` exists (legacy), it's loaded as a Python module instead.
"max_context_tokens": 32000
},
"mcp_servers": [
{"name": "hive-tools"},
{"name": "hive_tools"},
{"name": "gcu-tools"}
],
"variables": {
@@ -17,20 +17,43 @@ Use browser nodes (with `tools: {policy: "all"}`) when:
## Available Browser Tools
All tools are prefixed with `browser_`:
- - `browser_start`, `browser_open` -- launch/navigate
- - `browser_click`, `browser_fill`, `browser_type` -- interact
- - `browser_snapshot` -- read page content (preferred over screenshot)
- - `browser_screenshot` -- visual capture
- - `browser_scroll`, `browser_wait` -- navigation helpers
- - `browser_evaluate` -- run JavaScript
+ - `browser_start`, `browser_open`, `browser_navigate` — launch/navigate
+ - `browser_click`, `browser_click_coordinate`, `browser_fill`, `browser_type`, `browser_type_focused` — interact
+ - `browser_press` (with optional `modifiers=["ctrl"]` etc.) — keyboard shortcuts
+ - `browser_snapshot` — compact accessibility-tree read (structured)
+ <!-- vision-only -->
+ - `browser_screenshot` — visual capture (annotated PNG)
+ <!-- /vision-only -->
+ - `browser_shadow_query`, `browser_get_rect` — locate elements (shadow-piercing via `>>>`)
+ - `browser_scroll`, `browser_wait` — navigation helpers
+ - `browser_evaluate` — run JavaScript
+ - `browser_close`, `browser_close_finished` — tab cleanup
- ## System Prompt Tips for Browser Nodes
+ ## Pick the right reading tool
+ **`browser_snapshot`** — compact accessibility tree of interactive elements. Fast, cheap, good for static or form-heavy pages where the DOM matches what's visually rendered (documentation, simple dashboards, search results, settings pages).
+ **`browser_screenshot`** — visual capture + metadata (`cssWidth`, `devicePixelRatio`, scale fields). Use this when `browser_snapshot` does not show the thing you need, when refs look stale, or when visual position/layout matters. This often happens on complex SPAs — LinkedIn, Twitter/X, Reddit, Gmail, Notion, Slack, Discord — and on sites using shadow DOM, virtual scrolling, React reconciliation, or dynamic layout.
+ Neither tool is "preferred" universally — they're for different jobs. Start with snapshot for page structure and ordinary controls; use screenshot as the fallback when snapshot can't find or verify the visible target. Activate the `browser-automation` skill for the full decision tree.
+ ## Coordinate rule
+ Every browser tool that takes or returns coordinates operates in **fractions of the viewport (0..1 for both axes)**. Read a target's proportional position off `browser_screenshot` ("~35% from the left, ~20% from the top" → `(0.35, 0.20)`) and pass that to `browser_click_coordinate` / `browser_hover_coordinate` / `browser_press_at`. `browser_get_rect` and `browser_shadow_query` return `rect.cx` / `rect.cy` as fractions. The tools multiply by `cssWidth` / `cssHeight` internally — no scale awareness required. Fractions are used because every vision model (Claude, GPT-4o, Gemini, local VLMs) resizes/tiles images differently; proportions are invariant. Avoid raw `getBoundingClientRect()` via `browser_evaluate` for coord lookup; use `browser_get_rect` instead.
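A minimal worked sketch of the fraction convention (call syntax shown loosely; the 1280x800 viewport and `#submit` selector are illustrative):

```
# browser_screenshot reports cssWidth=1280, cssHeight=800; the target sits
# ~448px from the left and ~160px from the top: 448/1280 = 0.35, 160/800 = 0.20
browser_click_coordinate(x=0.35, y=0.20)

# browser_get_rect already returns fractions, so rect.cx / rect.cy pass through
rect = browser_get_rect(selector="#submit")
browser_click_coordinate(x=rect.cx, y=rect.cy)
```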
+ ## System prompt tips for browser nodes
```
- 1. Use browser_snapshot() to read page content (NOT browser_get_text)
- 2. Use browser_wait(seconds=2-3) after navigation for page load
- 3. If you hit an auth wall, call set_output with an error and move on
- 4. Keep tool calls per turn <= 10 for reliability
+ 1. Start with browser_snapshot or the snapshot returned by the latest interaction.
+ 2. If the target is missing, ambiguous, stale, or visibly present but absent from the tree,
+ use browser_screenshot to orient and then click by fractional coordinates.
+ 3. Before typing into a rich-text editor (X compose, LinkedIn DM, Gmail, Reddit),
+ click the input area first with browser_click_coordinate so React / Draft.js /
+ Lexical register a native focus event, then use browser_type_focused(text=...)
+ for shadow-DOM inputs or browser_type(selector, text) for light-DOM inputs.
+ 4. Use browser_wait(seconds=2-3) after navigation for SPA hydration.
+ 5. If you hit an auth wall, call set_output with an error and move on.
+ 6. Keep tool calls per turn <= 10 for reliability.
```
## Example
@@ -43,7 +66,7 @@ All tools are prefixed with `browser_`:
"tools": {"policy": "all"},
"input_keys": ["search_url"],
"output_keys": ["profiles"],
"system_prompt": "Navigate to the search URL, paginate through results..."
"system_prompt": "Navigate to the search URL via browser_navigate(wait_until='load', timeout_ms=20000). Wait 3s for SPA hydration. Use the returned snapshot to look for result cards first. If the cards are missing, stale, or visually present but absent from the tree, use browser_screenshot to orient; paginate through results by scrolling and use screenshots only when the snapshot cannot find or verify the visible cards..."
}
```
@@ -51,3 +74,7 @@ Connected via regular edges:
```
search-setup -> scan-profiles -> process-results
```
## Further detail
For rich-text editor quirks (Lexical, Draft.js, ProseMirror), shadow-DOM shortcuts, `beforeunload` dialog neutralization, Trusted Types CSP on LinkedIn, keyboard shortcut dispatch, and per-site selector tables — **activate the `browser-automation` skill**. That skill has the full verified guidance and is refreshed against real production sites.
+488 -104
@@ -1,14 +1,14 @@
"""Reflection agent — background global memory extraction for the queen.
"""Reflection agent — background memory extraction for the queen.
A lightweight side agent that runs after each queen LLM turn. It inspects
recent conversation messages and extracts durable user knowledge into
- individual memory files in ``~/.hive/memories/global/``.
+ individual memory files in the configured memory directories.
Two reflection types:
- **Short reflection**: after conversational queen turns. Distills
- learnings about the user (profile, preferences, environment, feedback).
+ learnings into either global or queen-scoped memory.
- **Long reflection**: every 5 short reflections and on CONTEXT_COMPACTED.
- Organises, deduplicates, trims the global memory directory.
+ Organises, deduplicates, and trims a memory directory.
Concurrency: an ``asyncio.Lock`` prevents overlapping runs. If a trigger
fires while a reflection is already active the event is skipped.
@@ -22,6 +22,7 @@ from __future__ import annotations
import asyncio
import json
import logging
import time
import traceback
from datetime import datetime
from pathlib import Path
@@ -32,11 +33,12 @@ from framework.agents.queen.queen_memory_v2 import (
MAX_FILE_SIZE_BYTES,
MAX_FILES,
format_memory_manifest,
- global_memory_dir,
+ global_memory_dir as _default_global_memory_dir,
parse_frontmatter,
scan_memory_files,
)
from framework.llm.provider import LLMResponse, Tool
from framework.tracker.llm_debug_logger import log_llm_turn
logger = logging.getLogger(__name__)
@@ -48,18 +50,23 @@ _REFLECTION_TOOLS: list[Tool] = [
Tool(
name="list_memory_files",
description=(
"List all memory files with their type, name, and description. "
"Returns a text manifest — one line per file."
"List memory files with their type, name, and description. "
"When scope is omitted, returns all scopes grouped by scope."
),
parameters={
"type": "object",
"properties": {},
"properties": {
"scope": {
"type": "string",
"description": "Optional scope to inspect: 'global' or 'queen'.",
},
},
"additionalProperties": False,
},
),
Tool(
name="read_memory_file",
description="Read the full content of a memory file by filename.",
description="Read the full content of a memory file by filename from a scope.",
parameters={
"type": "object",
"properties": {
@@ -67,6 +74,10 @@ _REFLECTION_TOOLS: list[Tool] = [
"type": "string",
"description": "The filename (e.g. 'user-prefers-dark-mode.md').",
},
"scope": {
"type": "string",
"description": "Memory scope: 'global' or 'queen'. Defaults to 'global'.",
},
},
"required": ["filename"],
"additionalProperties": False,
@@ -86,6 +97,10 @@ _REFLECTION_TOOLS: list[Tool] = [
"type": "string",
"description": "Filename ending in .md (e.g. 'user-prefers-dark-mode.md').",
},
"scope": {
"type": "string",
"description": "Memory scope: 'global' or 'queen'. Defaults to 'global'.",
},
"content": {
"type": "string",
"description": "Full file content including frontmatter.",
@@ -98,8 +113,7 @@ _REFLECTION_TOOLS: list[Tool] = [
Tool(
name="delete_memory_file",
description=(
"Delete a memory file by filename. Use during long "
"reflection to prune stale or redundant memories."
"Delete a memory file by filename. Use during long reflection to prune stale or redundant memories."
),
parameters={
"type": "object",
@@ -108,6 +122,10 @@ _REFLECTION_TOOLS: list[Tool] = [
"type": "string",
"description": "The filename to delete.",
},
"scope": {
"type": "string",
"description": "Memory scope: 'global' or 'queen'. Defaults to 'global'.",
},
},
"required": ["filename"],
"additionalProperties": False,
@@ -116,6 +134,58 @@ _REFLECTION_TOOLS: list[Tool] = [
]
def _normalize_memory_dirs(
memory_dir: Path | dict[str, Path],
*,
queen_memory_dir: Path | None = None,
) -> dict[str, Path]:
"""Normalize memory directory input into a scope -> path mapping."""
if isinstance(memory_dir, dict):
return {scope: path for scope, path in memory_dir.items() if path is not None}
dirs: dict[str, Path] = {"global": memory_dir}
if queen_memory_dir is not None:
dirs["queen"] = queen_memory_dir
return dirs
def _scope_label(scope: str, queen_id: str | None = None) -> str:
"""Human-readable label for a memory scope."""
if scope == "queen":
return f"queen ({queen_id})" if queen_id else "queen"
return scope
def _resolve_memory_scope(args: dict[str, Any], memory_dirs: dict[str, Path]) -> str:
"""Resolve and validate the requested memory scope."""
raw_scope = args.get("scope")
if raw_scope is None:
if len(memory_dirs) == 1:
return next(iter(memory_dirs))
scope = "global"
else:
scope = str(raw_scope).strip().lower() or "global"
if scope not in memory_dirs:
available = ", ".join(sorted(memory_dirs))
raise ValueError(f"Invalid scope '{scope}'. Available scopes: {available}.")
return scope
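# Behavior, as implemented above: with dirs = {"global": g, "queen": q},
#   _resolve_memory_scope({}, dirs)                   -> "global" (default)
#   _resolve_memory_scope({"scope": " Queen "}, dirs) -> "queen" (trimmed, lowered)
#   _resolve_memory_scope({"scope": "shared"}, dirs)  raises ValueError
# and with a single entry {"queen": q} an omitted scope resolves to "queen".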
def _format_multi_scope_manifest(
memory_dirs: dict[str, Path],
*,
queen_id: str | None = None,
) -> str:
"""Format a manifest that groups memory files by scope."""
blocks: list[str] = []
for scope, memory_dir in memory_dirs.items():
files = scan_memory_files(memory_dir)
label = _scope_label(scope, queen_id)
body = format_memory_manifest(files) if files else "(no memory files yet)"
blocks.append(f"## Scope: {label}\n\n{body}")
return "\n\n".join(blocks)
def _safe_memory_path(filename: str, memory_dir: Path) -> Path:
"""Resolve *filename* inside *memory_dir*, raising if it escapes."""
if not filename or filename.strip() != filename:
@@ -129,23 +199,41 @@ def _safe_memory_path(filename: str, memory_dir: Path) -> Path:
return candidate
- def _execute_tool(name: str, args: dict[str, Any], memory_dir: Path) -> str:
+ def _execute_tool(
+ name: str,
+ args: dict[str, Any],
+ memory_dir: Path | dict[str, Path],
+ *,
+ queen_id: str | None = None,
+ ) -> str:
"""Execute a reflection tool synchronously. Returns the result string."""
memory_dirs = _normalize_memory_dirs(memory_dir)
if name == "list_memory_files":
- files = scan_memory_files(memory_dir)
- logger.debug("reflect: tool list_memory_files → %d files", len(files))
- if not files:
- return "(no memory files yet)"
- return format_memory_manifest(files)
+ requested_scope = args.get("scope")
+ if requested_scope is not None:
+ try:
+ scope = _resolve_memory_scope(args, memory_dirs)
+ except ValueError as exc:
+ return f"ERROR: {exc}"
+ files = scan_memory_files(memory_dirs[scope])
+ logger.debug("reflect: tool list_memory_files[%s] → %d files", scope, len(files))
+ if not files:
+ return f"(no {scope} memory files yet)"
+ return format_memory_manifest(files)
+ return _format_multi_scope_manifest(memory_dirs, queen_id=queen_id)
if name == "read_memory_file":
filename = args.get("filename", "")
try:
- path = _safe_memory_path(filename, memory_dir)
+ scope = _resolve_memory_scope(args, memory_dirs)
except ValueError as exc:
return f"ERROR: {exc}"
+ try:
+ path = _safe_memory_path(filename, memory_dirs[scope])
+ except ValueError as exc:
+ return f"ERROR: {exc}"
if not path.exists() or not path.is_file():
return f"ERROR: File not found: {filename}"
return f"ERROR: File not found in {scope}: {filename}"
try:
return path.read_text(encoding="utf-8")
except OSError as e:
@@ -154,48 +242,90 @@ def _execute_tool(name: str, args: dict[str, Any], memory_dir: Path) -> str:
if name == "write_memory_file":
filename = args.get("filename", "")
content = args.get("content", "")
try:
scope = _resolve_memory_scope(args, memory_dirs)
except ValueError as exc:
return f"ERROR: {exc}"
scope_dir = memory_dirs[scope]
if not filename.endswith(".md"):
return "ERROR: Filename must end with .md"
# Enforce global memory type restrictions.
fm = parse_frontmatter(content)
mem_type = (fm.get("type") or "").strip().lower()
if mem_type and mem_type not in GLOBAL_MEMORY_CATEGORIES:
- return (
- f"ERROR: Invalid memory type '{mem_type}'. "
- f"Allowed types: {', '.join(GLOBAL_MEMORY_CATEGORIES)}."
- )
+ return f"ERROR: Invalid memory type '{mem_type}'. Allowed types: {', '.join(GLOBAL_MEMORY_CATEGORIES)}."
# Enforce file size limit.
if len(content.encode("utf-8")) > MAX_FILE_SIZE_BYTES:
return f"ERROR: Content exceeds {MAX_FILE_SIZE_BYTES} byte limit."
# Enforce file cap (only for new files).
try:
- path = _safe_memory_path(filename, memory_dir)
+ path = _safe_memory_path(filename, scope_dir)
except ValueError as exc:
return f"ERROR: {exc}"
if not path.exists():
- existing = list(memory_dir.glob("*.md"))
+ existing = list(scope_dir.glob("*.md"))
if len(existing) >= MAX_FILES:
return f"ERROR: File cap reached ({MAX_FILES}). Delete a file first."
memory_dir.mkdir(parents=True, exist_ok=True)
return f"ERROR: File cap reached in {scope} ({MAX_FILES}). Delete a file first."
scope_dir.mkdir(parents=True, exist_ok=True)
path.write_text(content, encoding="utf-8")
logger.debug("reflect: tool write_memory_file → %s (%d chars)", filename, len(content))
return f"Wrote {filename} ({len(content)} chars)."
logger.debug(
"reflect: tool write_memory_file[%s] → %s (%d chars)",
scope,
filename,
len(content),
)
return f"Wrote {scope}:{filename} ({len(content)} chars)."
if name == "delete_memory_file":
filename = args.get("filename", "")
try:
- path = _safe_memory_path(filename, memory_dir)
+ scope = _resolve_memory_scope(args, memory_dirs)
except ValueError as exc:
return f"ERROR: {exc}"
+ try:
+ path = _safe_memory_path(filename, memory_dirs[scope])
+ except ValueError as exc:
+ return f"ERROR: {exc}"
if not path.exists():
return f"ERROR: File not found: {filename}"
return f"ERROR: File not found in {scope}: {filename}"
path.unlink()
logger.debug("reflect: tool delete_memory_file → %s", filename)
return f"Deleted {filename}."
logger.debug("reflect: tool delete_memory_file[%s]%s", scope, filename)
return f"Deleted {scope}:{filename}."
return f"ERROR: Unknown tool: {name}"
# ---------------------------------------------------------------------------
# Reflection logging helper
# ---------------------------------------------------------------------------
def _log_reflection_turn(
*,
reflection_id: str,
iteration: int,
system_prompt: str,
messages: list[dict[str, Any]],
assistant_text: str,
tool_calls: list[dict[str, Any]],
tool_results: list[dict[str, Any]],
token_counts: dict[str, Any],
) -> None:
"""Log a reflection turn using the same JSONL format as the main agent loop."""
log_llm_turn(
node_id="reflection",
stream_id=reflection_id,
execution_id=reflection_id,
iteration=iteration,
system_prompt=system_prompt,
messages=messages,
assistant_text=assistant_text,
tool_calls=tool_calls,
tool_results=tool_results,
token_counts=token_counts,
)
# ---------------------------------------------------------------------------
# Mini event loop
# ---------------------------------------------------------------------------
@@ -207,8 +337,10 @@ async def _reflection_loop(
llm: Any,
system: str,
user_msg: str,
- memory_dir: Path,
+ memory_dir: Path | dict[str, Path],
max_turns: int = _MAX_TURNS,
*,
queen_id: str | None = None,
) -> tuple[bool, list[str], str]:
"""Run a mini tool-use loop: LLM → tool calls → repeat.
@@ -217,6 +349,9 @@ async def _reflection_loop(
messages: list[dict[str, Any]] = [{"role": "user", "content": user_msg}]
changed_files: list[str] = []
last_text: str = ""
reflection_id = f"reflection_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
token_counts: dict[str, Any] = {}
memory_dirs = _normalize_memory_dirs(memory_dir)
for _turn in range(max_turns):
logger.info("reflect: loop turn %d/%d (msgs=%d)", _turn + 1, max_turns, len(messages))
@@ -265,6 +400,21 @@ async def _reflection_loop(
len(tool_calls_raw),
)
# Capture token counts from the LLM response.
try:
raw_usage = getattr(raw, "usage", None) if raw else None
if raw_usage:
token_counts = {
"model": getattr(raw, "model", ""),
"input": getattr(raw_usage, "prompt_tokens", 0) or 0,
"output": getattr(raw_usage, "completion_tokens", 0) or 0,
"cached": getattr(raw_usage, "prompt_tokens_details", None)
and getattr(raw_usage.prompt_tokens_details, "cached_tokens", 0),
"stop_reason": getattr(raw.choices[0], "finish_reason", "") if raw else "",
}
except Exception:
token_counts = {}
turn_text = resp.content or ""
if turn_text:
last_text = turn_text
@@ -286,13 +436,32 @@ async def _reflection_loop(
if not tool_calls_raw:
break
tool_results: list[dict[str, Any]] = []
for tc in tool_calls_raw:
result = _execute_tool(tc["name"], tc.get("input", {}), memory_dir)
tc_input = tc.get("input", {})
result = _execute_tool(tc["name"], tc_input, memory_dirs, queen_id=queen_id)
if tc["name"] in ("write_memory_file", "delete_memory_file"):
fname = tc.get("input", {}).get("filename", "")
fname = tc_input.get("filename", "")
try:
scope = _resolve_memory_scope(tc_input, memory_dirs)
except ValueError:
scope = str(tc_input.get("scope", "global")).strip().lower() or "global"
if fname and not result.startswith("ERROR"):
changed_files.append(fname)
changed_files.append(f"{scope}:{fname}")
messages.append({"role": "tool", "tool_call_id": tc["id"], "content": result})
tool_results.append({"tool_call_id": tc["id"], "name": tc["name"], "result": result})
# Log the reflection turn in the same JSONL format as the main agent loop.
_log_reflection_turn(
reflection_id=reflection_id,
iteration=_turn,
system_prompt=system,
messages=messages,
assistant_text=turn_text,
tool_calls=tool_calls_raw,
tool_results=tool_results,
token_counts=token_counts,
)
return True, changed_files, last_text
@@ -303,17 +472,25 @@ async def _reflection_loop(
_CATEGORIES_STR = ", ".join(GLOBAL_MEMORY_CATEGORIES)
- _SHORT_REFLECT_SYSTEM = f"""\
+ def _build_unified_short_reflect_system(queen_id: str | None = None) -> str:
+ """Build the unified short reflection prompt across memory scopes."""
+ queen_scope = (
+ f"- `queen`: durable learnings specific to how queen '{queen_id}' should work with this user\n"
+ if queen_id
+ else ""
+ )
+ return f"""\
You are a reflection agent that distills durable knowledge about the USER
- into persistent global memory files. You run in the background after each
+ into persistent memory files. You run in the background after each
assistant turn.
Your goal: identify anything from the recent messages worth remembering
about the user across ALL future sessions: their profile, preferences,
environment setup, or feedback on assistant behavior.
Memory categories: {_CATEGORIES_STR}
Available memory scopes:
- `global`: durable user facts that should help every queen in future sessions
{queen_scope}
Expected format for each memory file:
```markdown
---
@@ -326,47 +503,69 @@ type: {{{{{_CATEGORIES_STR}}}}}
```
Workflow (aim for 2 turns):
- Turn 1: call list_memory_files to see what exists, then read_memory_file
- for any that might need updating.
- Turn 2: call write_memory_file for new/updated memories.
+ Turn 1: call list_memory_files without a scope to inspect all scopes, then
+ read_memory_file for any files that might need updating.
+ Turn 2: call write_memory_file / delete_memory_file with an explicit scope.
Rules:
- - ONLY persist durable knowledge about the USER: who they are, how they
- like to work, their tech environment, their feedback on your behavior.
- - Do NOT store task-specific details, code patterns, file paths, or
- ephemeral session state.
- - Keep files concise. Each file should cover ONE topic.
- - If an existing memory already covers the learning, UPDATE it rather than
- creating a duplicate.
+ - Make ONE coordinated storage decision per learning.
+ - Prefer `global` for broad user facts: identity, general preferences, environment,
+ and feedback that should help all queens.
+ - Prefer `queen` only for stable domain-specific learnings about how this queen
+ should reason, prioritize, communicate, or make tradeoffs for this user.
+ - Avoid storing the same fact in both scopes unless the scoped version adds
+ genuinely distinct queen-specific nuance. When in doubt, keep only one copy.
+ - Update existing files instead of creating duplicates when possible.
+ - If the same learning already exists in the wrong scope or both scopes,
+ you may update one file and delete the redundant one.
+ - Do NOT store task-specific details, code patterns, file paths, or ephemeral
+ session state.
+ - Keep files concise. Each file should cover ONE topic.
- If there is nothing worth remembering, do nothing (respond with a brief
reason; no tool calls needed).
- File names should be kebab-case slugs ending in .md.
- - For user identity/profile information (name, role, background), ALWAYS use
- the canonical filename 'user-profile.md'. This is the single source of
- truth for user profile data, shared with the settings UI.
- - When updating user-profile.md, preserve the '## Identity' section; it is
- managed by the settings UI. Add/update other sections (Professional Style,
- Current Focus, Preferences, etc.) below it.
- - Do NOT exceed {MAX_FILE_SIZE_BYTES} bytes per file or {MAX_FILES} total files.
+ - For user identity/profile information about the human user (name, role,
+ background), ALWAYS use the canonical filename 'user-profile.md' in the
+ `global` scope. This is the single source of truth for user profile data,
+ shared with the settings UI.
+ - When updating `global:user-profile.md`, preserve the '## User Identity'
+ section; it is managed by the settings UI. Never describe the assistant,
+ queen, or agent as the identity in this file. Add/update other sections
+ below it.
+ - Do NOT exceed {MAX_FILE_SIZE_BYTES} bytes per file or {MAX_FILES} total files per scope.
"""
- _LONG_REFLECT_SYSTEM = f"""\
+ def _build_unified_long_reflect_system(queen_id: str | None = None) -> str:
+ """Build the unified housekeeping prompt across memory scopes."""
+ queen_scope = (
+ f"- `queen`: memories specific to how queen '{queen_id}' should work with this user\n" if queen_id else ""
+ )
+ return f"""\
You are a reflection agent performing a periodic housekeeping pass over the
- global memory directory. Your job is to organise, deduplicate, and trim
- noise from the accumulated memory files.
+ memory system for this user.
Memory categories: {_CATEGORIES_STR}
Available memory scopes:
- `global`: facts useful to every queen
{queen_scope}
Workflow:
- 1. list_memory_files to get the full manifest.
- 2. read_memory_file for files that look redundant, stale, or overlapping.
- 3. Merge duplicates, delete stale entries, consolidate related memories.
+ 1. Call list_memory_files without a scope to inspect all scopes together.
+ 2. Read files that look redundant, stale, overlapping, or misplaced.
+ 3. Merge duplicates, move memories to the correct scope, and delete
+ redundant copies when appropriate.
4. Ensure descriptions are specific and search-friendly.
- 5. Enforce limits: max {MAX_FILES} files, max {MAX_FILE_SIZE_BYTES} bytes each.
+ 5. Enforce limits: max {MAX_FILES} files and {MAX_FILE_SIZE_BYTES} bytes per file in each scope.
Rules:
- - Prefer merging over deleting: combine related memories into one file.
- - Remove memories that are no longer relevant or are superseded.
+ - Treat deduplication across scopes as part of the job, not just within a scope.
+ - Prefer `global` for broad durable user facts and `queen` for queen-specific nuance.
+ - If two files store materially the same fact, keep the best one and delete or
+ rewrite the redundant one.
+ - Prefer merging over deleting when the memories contain complementary signal.
+ - Remove memories that are stale, superseded, or misplaced.
- Keep the total collection lean and high-signal.
- Do NOT invent new information; only reorganise what exists.
"""
@@ -390,9 +589,77 @@ async def run_short_reflection(
llm: Any,
memory_dir: Path | None = None,
) -> None:
"""Run a short reflection: extract user knowledge from conversation."""
logger.info("reflect: starting short reflection for %s", session_dir)
mem_dir = memory_dir or global_memory_dir()
"""Run a global-only short reflection (compatibility wrapper)."""
logger.info("reflect: starting global short reflection for %s", session_dir)
mem_dir = memory_dir or _default_global_memory_dir()
await _run_short_reflection_with_prompt(
session_dir,
llm,
mem_dir,
system_prompt=_build_unified_short_reflect_system(),
log_label="global",
queen_id=None,
)
async def run_queen_short_reflection(
session_dir: Path,
llm: Any,
queen_id: str,
memory_dir: Path,
) -> None:
"""Run a queen-only short reflection (compatibility wrapper)."""
logger.info("reflect: starting queen short reflection for %s (%s)", session_dir, queen_id)
await _run_short_reflection_with_prompt(
session_dir,
llm,
{"queen": memory_dir},
system_prompt=_build_unified_short_reflect_system(queen_id),
log_label=f"queen:{queen_id}",
queen_id=queen_id,
)
async def run_unified_short_reflection(
session_dir: Path,
llm: Any,
*,
global_memory_dir: Path | None = None,
queen_memory_dir: Path | None = None,
queen_id: str | None = None,
) -> None:
"""Run one short reflection loop over all active memory scopes."""
global_dir = global_memory_dir or _default_global_memory_dir()
memory_dirs = {"global": global_dir}
if queen_memory_dir is not None and queen_id:
memory_dirs["queen"] = queen_memory_dir
logger.info(
"reflect: starting unified short reflection for %s (scopes=%s)",
session_dir,
sorted(memory_dirs),
)
await _run_short_reflection_with_prompt(
session_dir,
llm,
memory_dirs,
system_prompt=_build_unified_short_reflect_system(queen_id if "queen" in memory_dirs else None),
log_label="unified",
queen_id=queen_id if "queen" in memory_dirs else None,
)
async def _run_short_reflection_with_prompt(
session_dir: Path,
llm: Any,
memory_dir: Path | dict[str, Path],
*,
system_prompt: str,
log_label: str,
queen_id: str | None,
) -> None:
"""Run a short reflection with a scope-specific system prompt."""
mem_dir = memory_dir
messages = await _read_conversation_parts(session_dir)
if not messages:
@@ -421,24 +688,36 @@ async def run_short_reflection(
f"Timestamp: {datetime.now().isoformat(timespec='minutes')}"
)
- _, changed, reason = await _reflection_loop(llm, _SHORT_REFLECT_SYSTEM, user_msg, mem_dir)
+ _, changed, reason = await _reflection_loop(
+ llm,
+ system_prompt,
+ user_msg,
+ mem_dir,
+ queen_id=queen_id,
+ )
if changed:
logger.info("reflect: short reflection done, changed files: %s", changed)
logger.info("reflect: %s short reflection done, changed files: %s", log_label, changed)
else:
logger.info("reflect: short reflection done, no changes — %s", reason or "no reason")
logger.info(
"reflect: %s short reflection done, no changes — %s",
log_label,
reason or "no reason",
)
async def run_long_reflection(
llm: Any,
memory_dir: Path | None = None,
*,
scope_label: str = "global",
) -> None:
"""Run a long reflection: organise and deduplicate all global memories."""
logger.debug("reflect: starting long reflection")
mem_dir = memory_dir or global_memory_dir()
"""Run a single-scope long reflection (compatibility wrapper)."""
logger.debug("reflect: starting long reflection for %s", scope_label)
mem_dir = memory_dir or _default_global_memory_dir()
files = scan_memory_files(mem_dir)
if not files:
logger.debug("reflect: no memory files, skipping long reflection")
logger.debug("reflect: no %s memory files, skipping long reflection", scope_label)
return
manifest = format_memory_manifest(files)
@@ -448,21 +727,70 @@ async def run_long_reflection(
f"Timestamp: {datetime.now().isoformat(timespec='minutes')}"
)
- _, changed, reason = await _reflection_loop(llm, _LONG_REFLECT_SYSTEM, user_msg, mem_dir)
+ _, changed, reason = await _reflection_loop(
+ llm,
+ _build_unified_long_reflect_system(),
+ user_msg,
+ mem_dir,
+ queen_id=None,
+ )
if changed:
logger.debug("reflect: long reflection done (%d files), changed: %s", len(files), changed)
logger.debug(
"reflect: long reflection done for %s (%d files), changed: %s",
scope_label,
len(files),
changed,
)
else:
logger.debug(
"reflect: long reflection done (%d files), no changes — %s",
"reflect: long reflection done for %s (%d files), no changes — %s",
scope_label,
len(files),
reason or "no reason",
)
async def run_unified_long_reflection(
llm: Any,
*,
global_memory_dir: Path | None = None,
queen_memory_dir: Path | None = None,
queen_id: str | None = None,
) -> None:
"""Run one housekeeping loop across all active memory scopes."""
global_dir = global_memory_dir or _default_global_memory_dir()
memory_dirs = {"global": global_dir}
if queen_memory_dir is not None and queen_id:
memory_dirs["queen"] = queen_memory_dir
manifest = _format_multi_scope_manifest(memory_dirs, queen_id=queen_id if "queen" in memory_dirs else None)
user_msg = (
"## Current memory manifest across scopes\n\n"
f"{manifest}\n\n"
f"Timestamp: {datetime.now().isoformat(timespec='minutes')}"
)
_, changed, reason = await _reflection_loop(
llm,
_build_unified_long_reflect_system(queen_id if "queen" in memory_dirs else None),
user_msg,
memory_dirs,
queen_id=queen_id if "queen" in memory_dirs else None,
)
if changed:
logger.debug("reflect: unified long reflection changed: %s", changed)
else:
logger.debug("reflect: unified long reflection no changes — %s", reason or "no reason")
async def run_shutdown_reflection(
session_dir: Path,
llm: Any,
memory_dir: Path | None = None,
*,
global_memory_dir_override: Path | None = None,
queen_memory_dir: Path | None = None,
queen_id: str | None = None,
) -> None:
"""Run a final short reflection on session shutdown.
@@ -470,15 +798,24 @@ async def run_shutdown_reflection(
persisted before the session is destroyed.
"""
logger.info("reflect: running shutdown reflection for %s", session_dir)
- mem_dir = memory_dir or global_memory_dir()
try:
- await run_short_reflection(session_dir, llm, mem_dir)
+ global_dir = global_memory_dir_override or memory_dir or _default_global_memory_dir()
+ await run_unified_short_reflection(
+ session_dir,
+ llm,
+ global_memory_dir=global_dir,
+ queen_memory_dir=queen_memory_dir,
+ queen_id=queen_id,
+ )
logger.info("reflect: shutdown reflection completed for %s", session_dir)
except asyncio.CancelledError:
logger.warning("reflect: shutdown reflection cancelled for %s", session_dir)
except Exception:
logger.warning("reflect: shutdown reflection failed", exc_info=True)
_write_error("shutdown reflection")
_write_error(
"shutdown reflection",
global_memory_dir_override or memory_dir or _default_global_memory_dir(),
)
# ---------------------------------------------------------------------------
@@ -486,13 +823,17 @@ async def run_shutdown_reflection(
# ---------------------------------------------------------------------------
_LONG_REFLECT_INTERVAL = 5
_SHORT_REFLECT_TURN_INTERVAL = 3
_SHORT_REFLECT_COOLDOWN_SEC = 300.0
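# Gate sketch for _on_turn_complete below: once a first reflection has run,
# a conversational turn triggers another only when either gate opens:
#
#     turn_ok     = _short_count % _SHORT_REFLECT_TURN_INTERVAL == 0   # every 3rd turn
#     cooldown_ok = time.monotonic() - _last_short_time >= _SHORT_REFLECT_COOLDOWN_SEC
#     # skip when neither holds; the asyncio.Lock additionally drops overlapping runs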
async def subscribe_reflection_triggers(
event_bus: Any,
session_dir: Path,
llm: Any,
- memory_dir: Path | None = None,
+ global_memory_dir: Path | None = None,
+ queen_memory_dir: Path | None = None,
+ queen_id: str | None = None,
) -> list[str]:
"""Subscribe to queen turn events and return subscription IDs.
@@ -501,30 +842,58 @@ async def subscribe_reflection_triggers(
"""
from framework.host.event_bus import EventType
- mem_dir = memory_dir or global_memory_dir()
+ global_mem_dir = global_memory_dir or _default_global_memory_dir()
+ queen_mem_dir = queen_memory_dir
_lock = asyncio.Lock()
_short_count = 0
_short_has_run = False
_last_short_time: float = 0.0
_background_tasks: set[asyncio.Task] = set()
async def _run_with_error_capture(coro: Any, *, context: str, memory_dir: Path) -> None:
try:
await coro
except Exception:
logger.warning("reflect: %s failed", context, exc_info=True)
_write_error(context, memory_dir)
async def _do_turn_reflect(is_interval: bool, count: int) -> None:
async with _lock:
- try:
- if is_interval:
- await run_short_reflection(session_dir, llm, mem_dir)
- await run_long_reflection(llm, mem_dir)
- else:
- await run_short_reflection(session_dir, llm, mem_dir)
- except Exception:
- logger.warning("reflect: reflection failed", exc_info=True)
- _write_error("short/long reflection")
+ await _run_with_error_capture(
+ run_unified_short_reflection(
+ session_dir,
+ llm,
+ global_memory_dir=global_mem_dir,
+ queen_memory_dir=queen_mem_dir,
+ queen_id=queen_id,
+ ),
+ context="unified short reflection",
+ memory_dir=global_mem_dir,
+ )
+ if is_interval:
+ await _run_with_error_capture(
+ run_unified_long_reflection(
+ llm,
+ global_memory_dir=global_mem_dir,
+ queen_memory_dir=queen_mem_dir,
+ queen_id=queen_id,
+ ),
+ context="unified long reflection",
+ memory_dir=global_mem_dir,
+ )
async def _do_compaction_reflect() -> None:
async with _lock:
- try:
- await run_long_reflection(llm, mem_dir)
- except Exception:
- logger.warning("reflect: compaction-triggered reflection failed", exc_info=True)
- _write_error("compaction reflection")
+ await _run_with_error_capture(
+ run_unified_long_reflection(
+ llm,
+ global_memory_dir=global_mem_dir,
+ queen_memory_dir=queen_mem_dir,
+ queen_id=queen_id,
+ ),
+ context="unified compaction reflection",
+ memory_dir=global_mem_dir,
+ )
def _fire_and_forget(coro: Any) -> None:
"""Spawn a background task and prevent GC before it finishes."""
@@ -533,7 +902,7 @@ async def subscribe_reflection_triggers(
task.add_done_callback(_background_tasks.discard)
async def _on_turn_complete(event: Any) -> None:
- nonlocal _short_count
+ nonlocal _short_count, _short_has_run, _last_short_time
if getattr(event, "stream_id", None) != "queen":
return
@@ -549,10 +918,25 @@ async def subscribe_reflection_triggers(
logger.debug("reflect: skipping tool turn (count=%d)", _short_count)
return
# Apply turn-interval and cooldown gates after the first reflection.
if _short_has_run:
now = time.monotonic()
turn_ok = _short_count % _SHORT_REFLECT_TURN_INTERVAL == 0
cooldown_ok = (now - _last_short_time) >= _SHORT_REFLECT_COOLDOWN_SEC
if not turn_ok and not cooldown_ok:
logger.debug(
"reflect: skipping, below turn/cooldown threshold (count=%d)",
_short_count,
)
return
if _lock.locked():
logger.debug("reflect: skipping, already running (count=%d)", _short_count)
return
_short_has_run = True
_last_short_time = time.monotonic()
logger.debug(
"reflect: triggered (count=%d, interval=%s, stop_reason=%s)",
_short_count,
@@ -587,10 +971,10 @@ async def subscribe_reflection_triggers(
return sub_ids
- def _write_error(context: str) -> None:
+ def _write_error(context: str, memory_dir: Path) -> None:
"""Best-effort write of the last traceback to an error file."""
try:
- error_path = global_memory_dir() / ".reflection_error.txt"
+ error_path = memory_dir / ".reflection_error.txt"
error_path.parent.mkdir(parents=True, exist_ok=True)
error_path.write_text(
f"context: {context}\ntime: {datetime.now().isoformat()}\n\n{traceback.format_exc()}",
+52 -3
@@ -155,6 +155,57 @@ def get_preferred_worker_model() -> str | None:
return None
def get_vision_fallback_model() -> str | None:
"""Return the configured vision-fallback model, or None if not configured.
Reads from the ``vision_fallback`` section of ~/.hive/configuration.json.
Used by the agent-loop hook that captions tool-result images when the
main agent's model cannot accept image content (text-only LLMs).
When this returns None the fallback chain skips the configured-subagent
stage and proceeds straight to the generic caption rotation
(``_describe_images_as_text``).
"""
vision = get_hive_config().get("vision_fallback", {})
if vision.get("provider") and vision.get("model"):
provider = str(vision["provider"])
model = str(vision["model"]).strip()
if provider.lower() == "openrouter" and model.lower().startswith("openrouter/"):
model = model[len("openrouter/") :]
if model:
return f"{provider}/{model}"
return None
def get_vision_fallback_api_key() -> str | None:
"""Return the API key for the vision-fallback model.
Resolution order: ``vision_fallback.api_key_env_var`` from the env,
then the default ``get_api_key()``. No subscription-token branches:
vision fallback is intended for hosted vision models (Anthropic,
OpenAI, Google), not for the subscription-bearer providers.
"""
vision = get_hive_config().get("vision_fallback", {})
if not vision:
return get_api_key()
api_key_env_var = vision.get("api_key_env_var")
if api_key_env_var:
return os.environ.get(api_key_env_var)
return get_api_key()
def get_vision_fallback_api_base() -> str | None:
"""Return the api_base for the vision-fallback model, or None."""
vision = get_hive_config().get("vision_fallback", {})
if not vision:
return None
if vision.get("api_base"):
return vision["api_base"]
if str(vision.get("provider", "")).lower() == "openrouter":
return OPENROUTER_API_BASE
return None
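# Sketch: how the three getters above resolve a hypothetical vision_fallback
# block in ~/.hive/configuration.json (field names from the code; values invented):
#
#   "vision_fallback": {
#     "provider": "openrouter",
#     "model": "openrouter/some-vision-model",
#     "api_key_env_var": "OPENROUTER_API_KEY"
#   }
#
# get_vision_fallback_model()    -> "openrouter/some-vision-model"
#                                   (the redundant "openrouter/" prefix is stripped
#                                    from the model before re-joining with provider)
# get_vision_fallback_api_key()  -> os.environ.get("OPENROUTER_API_KEY")
# get_vision_fallback_api_base() -> OPENROUTER_API_BASE (provider default, since
#                                    no explicit api_base is configured)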
def get_worker_api_key() -> str | None:
"""Return the API key for the worker LLM, falling back to the default key."""
worker_llm = get_hive_config().get("worker_llm", {})
@@ -405,9 +456,7 @@ def _fetch_antigravity_credentials() -> tuple[str | None, str | None]:
import urllib.request
try:
req = urllib.request.Request(
_ANTIGRAVITY_CREDENTIALS_URL, headers={"User-Agent": "Hive/1.0"}
)
req = urllib.request.Request(_ANTIGRAVITY_CREDENTIALS_URL, headers={"User-Agent": "Hive/1.0"})
with urllib.request.urlopen(req, timeout=10) as resp:
content = resp.read().decode("utf-8")
id_match = re.search(r'ANTIGRAVITY_CLIENT_ID\s*=\s*"([^"]+)"', content)
+2
View File
@@ -85,6 +85,7 @@ from .template import TemplateResolver
from .validation import (
CredentialStatus,
CredentialValidationResult,
compute_unavailable_tools,
ensure_credential_key_env,
validate_agent_credentials,
)
@@ -150,6 +151,7 @@ __all__ = [
# Validation
"ensure_credential_key_env",
"validate_agent_credentials",
"compute_unavailable_tools",
"CredentialStatus",
"CredentialValidationResult",
# Interactive setup
+2 -6
View File
@@ -332,9 +332,7 @@ class AdenCredentialClient:
last_error = e
if attempt < self.config.retry_attempts - 1:
delay = self.config.retry_delay * (2**attempt)
logger.warning(
f"Aden request failed (attempt {attempt + 1}), retrying in {delay}s: {e}"
)
logger.warning(f"Aden request failed (attempt {attempt + 1}), retrying in {delay}s: {e}")
time.sleep(delay)
else:
raise AdenClientError(f"Failed to connect to Aden server: {e}") from e
@@ -347,9 +345,7 @@ class AdenCredentialClient:
):
raise
raise AdenClientError(
f"Request failed after {self.config.retry_attempts} attempts"
) from last_error
raise AdenClientError(f"Request failed after {self.config.retry_attempts} attempts") from last_error
def list_integrations(self) -> list[AdenIntegrationInfo]:
"""
+2 -6
View File
@@ -192,9 +192,7 @@ class AdenSyncProvider(CredentialProvider):
f"Visit: {e.reauthorization_url or 'your Aden dashboard'}"
) from e
raise CredentialRefreshError(
f"Failed to refresh credential '{credential.id}': {e}"
) from e
raise CredentialRefreshError(f"Failed to refresh credential '{credential.id}': {e}") from e
except AdenClientError as e:
logger.error(f"Aden client error for '{credential.id}': {e}")
@@ -206,9 +204,7 @@ class AdenSyncProvider(CredentialProvider):
logger.warning(f"Aden unavailable, using cached token for '{credential.id}'")
return credential
raise CredentialRefreshError(
f"Aden server unavailable and token expired for '{credential.id}'"
) from e
raise CredentialRefreshError(f"Aden server unavailable and token expired for '{credential.id}'") from e
def validate(self, credential: CredentialObject) -> bool:
"""
+14 -3
View File
@@ -168,9 +168,7 @@ class AdenCachedStorage(CredentialStorage):
if rid != credential_id:
result = self._load_by_id(rid)
if result is not None:
logger.info(
f"Loaded credential '{credential_id}' via provider index (id='{rid}')"
)
logger.info(f"Loaded credential '{credential_id}' via provider index (id='{rid}')")
return result
# Direct lookup (exact credential_id match)
@@ -199,6 +197,19 @@ class AdenCachedStorage(CredentialStorage):
if local_cred is None:
return None
# Skip Aden fetch for credentials not managed by Aden (BYOK credentials).
# Only OAuth credentials synced from Aden are in the provider index.
# BYOK credentials like anthropic, brave_search are local-only.
# Also check the _aden_managed flag on the credential itself.
is_aden_managed = (
credential_id in self._provider_index
or any(credential_id in ids for ids in self._provider_index.values())
or (local_cred.keys.get("_aden_managed") is not None)
)
if not is_aden_managed:
logger.debug(f"Credential '{credential_id}' is local-only, skipping Aden refresh")
return local_cred
# Try to refresh stale local credential from Aden
try:
aden_cred = self._aden_provider.fetch_from_aden(credential_id)
@@ -493,9 +493,7 @@ class TestAdenCachedStorage:
assert loaded is not None
assert loaded.keys["access_token"].value.get_secret_value() == "cached-token"
def test_load_from_aden_when_stale(
self, cached_storage, local_storage, provider, mock_client, aden_response
):
def test_load_from_aden_when_stale(self, cached_storage, local_storage, provider, mock_client, aden_response):
"""Test load fetches from Aden when cache is stale."""
# Create stale cached credential
cred = CredentialObject(
@@ -521,9 +519,7 @@ class TestAdenCachedStorage:
assert loaded is not None
assert loaded.keys["access_token"].value.get_secret_value() == "test-access-token"
def test_load_falls_back_to_stale_when_aden_fails(
self, cached_storage, local_storage, provider, mock_client
):
def test_load_falls_back_to_stale_when_aden_fails(self, cached_storage, local_storage, provider, mock_client):
"""Test load falls back to stale cache when Aden fails."""
# Create stale cached credential
cred = CredentialObject(
@@ -95,9 +95,7 @@ class BaseOAuth2Provider(CredentialProvider):
self._client = httpx.Client(timeout=self.config.request_timeout)
except ImportError as e:
raise ImportError(
"OAuth2 provider requires 'httpx'. Install with: uv pip install httpx"
) from e
raise ImportError("OAuth2 provider requires 'httpx'. Install with: uv pip install httpx") from e
return self._client
def _close_client(self) -> None:
@@ -311,8 +309,7 @@ class BaseOAuth2Provider(CredentialProvider):
except OAuth2Error as e:
if e.error == "invalid_grant":
raise CredentialRefreshError(
f"Refresh token for '{credential.id}' is invalid or revoked. "
"Re-authorization required."
f"Refresh token for '{credential.id}' is invalid or revoked. Re-authorization required."
) from e
raise CredentialRefreshError(f"Failed to refresh '{credential.id}': {e}") from e
@@ -422,9 +419,7 @@ class BaseOAuth2Provider(CredentialProvider):
if response.status_code != 200 or "error" in response_data:
error = response_data.get("error", "unknown_error")
description = response_data.get("error_description", response.text)
raise OAuth2Error(
error=error, description=description, status_code=response.status_code
)
raise OAuth2Error(error=error, description=description, status_code=response.status_code)
return OAuth2Token.from_token_response(response_data)
@@ -158,9 +158,7 @@ class TokenLifecycleManager:
"""
# Run in executor to avoid blocking
loop = asyncio.get_event_loop()
token = await loop.run_in_executor(
None, lambda: self.provider.client_credentials_grant(scopes=scopes)
)
token = await loop.run_in_executor(None, lambda: self.provider.client_credentials_grant(scopes=scopes))
self._save_token_to_store(token)
self._cached_token = token
@@ -100,9 +100,7 @@ class ZohoOAuth2Provider(BaseOAuth2Provider):
)
super().__init__(config, provider_id="zoho_crm_oauth2")
self._accounts_domain = base
self._api_domain = (
api_domain or os.getenv("ZOHO_API_DOMAIN", "https://www.zohoapis.com")
).rstrip("/")
self._api_domain = (api_domain or os.getenv("ZOHO_API_DOMAIN", "https://www.zohoapis.com")).rstrip("/")
@property
def supported_types(self) -> list[CredentialType]:
+2 -6
View File
@@ -268,9 +268,7 @@ class CredentialSetupSession:
self._print(f"{Colors.YELLOW}Initializing credential store...{Colors.NC}")
try:
generate_and_save_credential_key()
self._print(
f"{Colors.GREEN}✓ Encryption key saved to ~/.hive/secrets/credential_key{Colors.NC}"
)
self._print(f"{Colors.GREEN}✓ Encryption key saved to ~/.hive/secrets/credential_key{Colors.NC}")
return True
except Exception as e:
self._print(f"{Colors.RED}Failed to initialize credential store: {e}{Colors.NC}")
@@ -449,9 +447,7 @@ class CredentialSetupSession:
logger.warning("Unexpected error exporting credential to env", exc_info=True)
return True
else:
self._print(
f"{Colors.YELLOW}{cred.credential_name} not found in Aden account.{Colors.NC}"
)
self._print(f"{Colors.YELLOW}{cred.credential_name} not found in Aden account.{Colors.NC}")
self._print("Please connect this integration on https://hive.adenhq.com first.")
return False
except Exception as e:
+6 -15
View File
@@ -136,8 +136,7 @@ class EncryptedFileStorage(CredentialStorage):
from cryptography.fernet import Fernet
except ImportError as e:
raise ImportError(
"Encrypted storage requires 'cryptography'. "
"Install with: uv pip install cryptography"
"Encrypted storage requires 'cryptography'. Install with: uv pip install cryptography"
) from e
self.base_path = Path(base_path or self.DEFAULT_PATH).expanduser()
@@ -213,9 +212,7 @@ class EncryptedFileStorage(CredentialStorage):
json_bytes = self._fernet.decrypt(encrypted)
data = json.loads(json_bytes.decode("utf-8-sig"))
except Exception as e:
raise CredentialDecryptionError(
f"Failed to decrypt credential '{credential_id}': {e}"
) from e
raise CredentialDecryptionError(f"Failed to decrypt credential '{credential_id}': {e}") from e
# Deserialize
return self._deserialize_credential(data)
@@ -316,8 +313,7 @@ class EncryptedFileStorage(CredentialStorage):
visible_keys = [
name
for name in credential.keys.keys()
if name not in self.INDEX_INTERNAL_KEY_NAMES
and not name.startswith("_identity_")
if name not in self.INDEX_INTERNAL_KEY_NAMES and not name.startswith("_identity_")
]
# Earliest expiry across all keys (most likely the access_token).
@@ -336,9 +332,7 @@ class EncryptedFileStorage(CredentialStorage):
"key_names": sorted(visible_keys),
"created_at": credential.created_at.isoformat() if credential.created_at else None,
"updated_at": credential.updated_at.isoformat() if credential.updated_at else None,
"last_refreshed": (
credential.last_refreshed.isoformat() if credential.last_refreshed else None
),
"last_refreshed": (credential.last_refreshed.isoformat() if credential.last_refreshed else None),
"expires_at": earliest_expiry.isoformat() if earliest_expiry else None,
"auto_refresh": credential.auto_refresh,
"tags": list(credential.tags),
@@ -480,8 +474,7 @@ class EnvVarStorage(CredentialStorage):
def save(self, credential: CredentialObject) -> None:
"""Cannot save to environment variables at runtime."""
raise NotImplementedError(
"EnvVarStorage is read-only. Set environment variables "
"externally or use EncryptedFileStorage."
"EnvVarStorage is read-only. Set environment variables externally or use EncryptedFileStorage."
)
def load(self, credential_id: str) -> CredentialObject | None:
@@ -501,9 +494,7 @@ class EnvVarStorage(CredentialStorage):
def delete(self, credential_id: str) -> bool:
"""Cannot delete environment variables at runtime."""
raise NotImplementedError(
"EnvVarStorage is read-only. Unset environment variables externally."
)
raise NotImplementedError("EnvVarStorage is read-only. Unset environment variables externally.")
def list_all(self) -> list[str]:
"""List credentials that are available in environment."""
+5 -15
View File
@@ -124,9 +124,7 @@ class CredentialStore:
"""
return self._providers.get(provider_id)
def get_provider_for_credential(
self, credential: CredentialObject
) -> CredentialProvider | None:
def get_provider_for_credential(self, credential: CredentialObject) -> CredentialProvider | None:
"""
Get the appropriate provider for a credential.
@@ -201,9 +199,7 @@ class CredentialStore:
cached = self._get_from_cache(credential_id)
if cached is not None:
if refresh_if_needed and self._should_refresh(cached):
return self._refresh_credential(
cached, raise_on_failure=raise_on_refresh_failure
)
return self._refresh_credential(cached, raise_on_failure=raise_on_refresh_failure)
return cached
# Load from storage
@@ -213,9 +209,7 @@ class CredentialStore:
# Refresh if needed
if refresh_if_needed and self._should_refresh(credential):
credential = self._refresh_credential(
credential, raise_on_failure=raise_on_refresh_failure
)
credential = self._refresh_credential(credential, raise_on_failure=raise_on_refresh_failure)
# Cache
self._add_to_cache(credential)
@@ -240,9 +234,7 @@ class CredentialStore:
Returns:
The key value or None if not found
"""
credential = self.get_credential(
credential_id, raise_on_refresh_failure=raise_on_refresh_failure
)
credential = self.get_credential(credential_id, raise_on_refresh_failure=raise_on_refresh_failure)
if credential is None:
return None
return credential.get_key(key_name)
@@ -266,9 +258,7 @@ class CredentialStore:
Returns:
The primary key value or None
"""
credential = self.get_credential(
credential_id, raise_on_refresh_failure=raise_on_refresh_failure
)
credential = self.get_credential(credential_id, raise_on_refresh_failure=raise_on_refresh_failure)
if credential is None:
return None
return credential.get_default_key()
+2 -6
View File
@@ -88,9 +88,7 @@ class TemplateResolver:
if key_name:
value = credential.get_key(key_name)
if value is None:
raise CredentialKeyNotFoundError(
f"Key '{key_name}' not found in credential '{cred_id}'"
)
raise CredentialKeyNotFoundError(f"Key '{key_name}' not found in credential '{cred_id}'")
else:
# Use default key
value = credential.get_default_key()
@@ -126,9 +124,7 @@ class TemplateResolver:
... })
{"Authorization": "Bearer ghp_xxx", "X-API-Key": "BSAKxxx"}
"""
return {
key: self.resolve(value, fail_on_missing) for key, value in header_templates.items()
}
return {key: self.resolve(value, fail_on_missing) for key, value in header_templates.items()}
def resolve_params(
self,
@@ -130,9 +130,7 @@ class TestCredentialObject:
# With access_token
cred2 = CredentialObject(
id="test",
keys={
"access_token": CredentialKey(name="access_token", value=SecretStr("token-value"))
},
keys={"access_token": CredentialKey(name="access_token", value=SecretStr("token-value"))},
)
assert cred2.get_default_key() == "token-value"
@@ -297,9 +295,7 @@ class TestEncryptedFileStorage:
key = Fernet.generate_key().decode()
with patch.dict(os.environ, {"HIVE_CREDENTIAL_KEY": key}):
storage = EncryptedFileStorage(temp_dir)
cred = CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))}
)
cred = CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))})
storage.save(cred)
# Create new storage instance with same key
@@ -330,18 +326,10 @@ class TestCompositeStorage:
def test_read_from_primary(self):
"""Test reading from primary storage."""
primary = InMemoryStorage()
primary.save(
CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("primary"))}
)
)
primary.save(CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("primary"))}))
fallback = InMemoryStorage()
fallback.save(
CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("fallback"))}
)
)
fallback.save(CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("fallback"))}))
storage = CompositeStorage(primary, [fallback])
cred = storage.load("test")
@@ -353,11 +341,7 @@ class TestCompositeStorage:
"""Test fallback when credential not in primary."""
primary = InMemoryStorage()
fallback = InMemoryStorage()
fallback.save(
CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("fallback"))}
)
)
fallback.save(CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("fallback"))}))
storage = CompositeStorage(primary, [fallback])
cred = storage.load("test")
@@ -393,9 +377,7 @@ class TestStaticProvider:
def test_refresh_returns_unchanged(self):
"""Test that refresh returns credential unchanged."""
provider = StaticProvider()
cred = CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))}
)
cred = CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))})
refreshed = provider.refresh(cred)
assert refreshed.get_key("k") == "v"
@@ -403,9 +385,7 @@ class TestStaticProvider:
def test_validate_with_keys(self):
"""Test validation with keys present."""
provider = StaticProvider()
cred = CredentialObject(
id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))}
)
cred = CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))})
assert provider.validate(cred)
@@ -606,9 +586,7 @@ class TestCredentialStore:
storage = InMemoryStorage()
store = CredentialStore(storage=storage, cache_ttl_seconds=60)
storage.save(
CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))})
)
storage.save(CredentialObject(id="test", keys={"k": CredentialKey(name="k", value=SecretStr("v"))}))
# First load
store.get_credential("test")
@@ -686,9 +664,7 @@ class TestOAuth2Module:
from core.framework.credentials.oauth2 import OAuth2Config, TokenPlacement
# Valid config
config = OAuth2Config(
token_url="https://example.com/token", client_id="id", client_secret="secret"
)
config = OAuth2Config(token_url="https://example.com/token", client_id="id", client_secret="secret")
assert config.token_url == "https://example.com/token"
# Missing token_url
+44 -20
View File
@@ -160,15 +160,9 @@ class CredentialValidationResult:
if aden_nc:
if missing or invalid:
lines.append("")
lines.append(
"Aden integrations not connected "
"(ADEN_API_KEY is set but OAuth tokens unavailable):\n"
)
lines.append("Aden integrations not connected (ADEN_API_KEY is set but OAuth tokens unavailable):\n")
for c in aden_nc:
lines.append(
f" {c.env_var} for {_label(c)}"
f"\n Connect this integration at hive.adenhq.com first."
)
lines.append(f" {c.env_var} for {_label(c)}\n Connect this integration at hive.adenhq.com first.")
lines.append("\nIf you've already set up credentials, restart your terminal to load them.")
return "\n".join(lines)
@@ -236,6 +230,45 @@ def _presync_aden_tokens(credential_specs: dict, *, force: bool = False) -> None
)
def compute_unavailable_tools(nodes: list) -> tuple[set[str], list[str]]:
"""Return (tool_names_to_drop, human_messages).
Runs credential validation *without* raising, collects every tool
bound to a failed credential (missing / invalid / Aden-not-connected
and no alternative provider available), and returns the set of tool
names that should be silently dropped from the worker's effective
tool list.
Use this at every worker-spawn preflight so missing credentials
filter tools out of the graph instead of hard-failing the whole
spawn. Only affects non-MCP tools; the MCP admission gate
(``_build_mcp_admission_gate``) already handles MCP tools at
registration time.
"""
try:
result = validate_agent_credentials(nodes, verify=False, raise_on_error=False)
except Exception as exc:
logger.debug("compute_unavailable_tools: validation raised: %s", exc)
return set(), []
drop: set[str] = set()
messages: list[str] = []
for status in result.failed:
if not status.tools:
continue
drop.update(status.tools)
reason = "missing"
if status.aden_not_connected:
reason = "aden_not_connected"
elif status.available and status.valid is False:
reason = "invalid"
messages.append(
f"{status.env_var} ({reason}) → drops {len(status.tools)} tool(s): "
f"{', '.join(status.tools[:6])}" + (f" +{len(status.tools) - 6} more" if len(status.tools) > 6 else "")
)
return drop, messages
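# Sketch: a worker-spawn preflight consuming the helper. Variable names are
# invented; only compute_unavailable_tools itself comes from this diff:
drop, notes = compute_unavailable_tools(nodes)
effective_tools = [t for t in all_tools if getattr(t, "name", None) not in drop]
for note in notes:
    logger.info("preflight: %s", note)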
def validate_agent_credentials(
nodes: list,
quiet: bool = False,
@@ -292,9 +325,7 @@ def validate_agent_credentials(
if os.environ.get("ADEN_API_KEY"):
_presync_aden_tokens(CREDENTIAL_SPECS, force=force_refresh)
env_mapping = {
(spec.credential_id or name): spec.env_var for name, spec in CREDENTIAL_SPECS.items()
}
env_mapping = {(spec.credential_id or name): spec.env_var for name, spec in CREDENTIAL_SPECS.items()}
env_storage = EnvVarStorage(env_mapping=env_mapping)
if os.environ.get("HIVE_CREDENTIAL_KEY"):
storage = CompositeStorage(primary=env_storage, fallbacks=[EncryptedFileStorage()])
@@ -328,12 +359,7 @@ def validate_agent_credentials(
available = store.is_available(cred_id)
# Aden-not-connected: ADEN_API_KEY set, Aden-only cred, but integration missing
is_aden_nc = (
not available
and has_aden_key
and spec.aden_supported
and not spec.direct_api_key_supported
)
is_aden_nc = not available and has_aden_key and spec.aden_supported and not spec.direct_api_key_supported
status = CredentialStatus(
credential_name=cred_name,
@@ -451,9 +477,7 @@ def validate_agent_credentials(
identity_data = result.details.get("identity")
if identity_data and isinstance(identity_data, dict):
try:
cred_obj = store.get_credential(
status.credential_id, refresh_if_needed=False
)
cred_obj = store.get_credential(status.credential_id, refresh_if_needed=False)
if cred_obj:
cred_obj.set_identity(**identity_data)
store.save_credential(cred_obj)
+29 -72
View File
@@ -16,20 +16,20 @@ from datetime import datetime
from pathlib import Path
from typing import TYPE_CHECKING, Any
from framework.orchestrator.checkpoint_config import CheckpointConfig
from framework.orchestrator.orchestrator import ExecutionResult
from framework.host.event_bus import EventBus
from framework.host.execution_manager import EntryPointSpec, ExecutionManager
from framework.host.outcome_aggregator import OutcomeAggregator
from framework.tracker.runtime_log_store import RuntimeLogStore
from framework.host.shared_state import SharedBufferManager
from framework.orchestrator.checkpoint_config import CheckpointConfig
from framework.orchestrator.orchestrator import ExecutionResult
from framework.storage.concurrent import ConcurrentStorage
from framework.storage.session_store import SessionStore
from framework.tracker.runtime_log_store import RuntimeLogStore
if TYPE_CHECKING:
from framework.llm.provider import LLMProvider, Tool
from framework.orchestrator.edge import GraphSpec
from framework.orchestrator.goal import Goal
from framework.llm.provider import LLMProvider, Tool
from framework.pipeline.stage import PipelineStage
from framework.skills.manager import SkillsManagerConfig
@@ -205,9 +205,7 @@ class AgentHost:
DeprecationWarning,
stacklevel=2,
)
self._skills_manager = SkillsManager.from_precomputed(
skills_catalog_prompt, protocols_prompt
)
self._skills_manager = SkillsManager.from_precomputed(skills_catalog_prompt, protocols_prompt)
else:
# Bare constructor: auto-load defaults
self._skills_manager = SkillsManager()
@@ -248,9 +246,7 @@ class AgentHost:
self._tools = tools or []
self._tool_executor = tool_executor
self._accounts_prompt = accounts_prompt
self._dynamic_memory_provider_factory: Callable[[str], Callable[[], str] | None] | None = (
None
)
self._dynamic_memory_provider_factory: Callable[[str], Callable[[], str] | None] | None = None
self._accounts_data = accounts_data
self._tool_provider_map = tool_provider_map
@@ -419,8 +415,7 @@ class AgentHost:
event_types = [_ET(et) for et in tc.get("event_types", [])]
if not event_types:
logger.warning(
f"Entry point '{ep_id}' has trigger_type='event' "
"but no event_types in trigger_config"
f"Entry point '{ep_id}' has trigger_type='event' but no event_types in trigger_config"
)
continue
@@ -450,9 +445,7 @@ class AgentHost:
# Run in the same session as the primary entry
# point so memory (e.g. user-defined rules) is
# shared and logs land in one session directory.
session_state = self._get_primary_session_state(
exclude_entry_point=entry_point_id
)
session_state = self._get_primary_session_state(exclude_entry_point=entry_point_id)
exec_id = await self.trigger(
entry_point_id,
{"event": event.to_dict()},
@@ -505,8 +498,7 @@ class AgentHost:
from croniter import croniter
except ImportError as e:
raise RuntimeError(
"croniter is required for cron-based entry points. "
"Install it with: uv pip install croniter"
"croniter is required for cron-based entry points. Install it with: uv pip install croniter"
) from e
try:
@@ -548,9 +540,7 @@ class AgentHost:
"Cron '%s': paused, skipping tick",
entry_point_id,
)
self._timer_next_fire[entry_point_id] = (
time.monotonic() + sleep_secs
)
self._timer_next_fire[entry_point_id] = time.monotonic() + sleep_secs
await asyncio.sleep(max(0, sleep_secs))
continue
@@ -578,9 +568,7 @@ class AgentHost:
"Cron '%s': agent actively working, skipping tick",
entry_point_id,
)
self._timer_next_fire[entry_point_id] = (
time.monotonic() + sleep_secs
)
self._timer_next_fire[entry_point_id] = time.monotonic() + sleep_secs
await asyncio.sleep(max(0, sleep_secs))
continue
@@ -590,24 +578,18 @@ class AgentHost:
is_isolated = ep_spec and ep_spec.isolation_level == "isolated"
if is_isolated:
if _persistent_session_id:
session_state = {
"resume_session_id": _persistent_session_id
}
session_state = {"resume_session_id": _persistent_session_id}
else:
session_state = None
else:
session_state = self._get_primary_session_state(
exclude_entry_point=entry_point_id
)
session_state = self._get_primary_session_state(exclude_entry_point=entry_point_id)
# Gate: skip tick if no active session
if session_state is None:
logger.debug(
"Cron '%s': no active session, skipping",
entry_point_id,
)
self._timer_next_fire[entry_point_id] = (
time.monotonic() + sleep_secs
)
self._timer_next_fire[entry_point_id] = time.monotonic() + sleep_secs
await asyncio.sleep(max(0, sleep_secs))
continue
@@ -680,9 +662,7 @@ class AgentHost:
"Timer '%s': paused, skipping tick",
entry_point_id,
)
self._timer_next_fire[entry_point_id] = (
time.monotonic() + interval_secs
)
self._timer_next_fire[entry_point_id] = time.monotonic() + interval_secs
await asyncio.sleep(interval_secs)
continue
@@ -708,9 +688,7 @@ class AgentHost:
"Timer '%s': agent actively working, skipping tick",
entry_point_id,
)
self._timer_next_fire[entry_point_id] = (
time.monotonic() + interval_secs
)
self._timer_next_fire[entry_point_id] = time.monotonic() + interval_secs
await asyncio.sleep(interval_secs)
continue
@@ -720,24 +698,18 @@ class AgentHost:
is_isolated = ep_spec and ep_spec.isolation_level == "isolated"
if is_isolated:
if _persistent_session_id:
session_state = {
"resume_session_id": _persistent_session_id
}
session_state = {"resume_session_id": _persistent_session_id}
else:
session_state = None
else:
session_state = self._get_primary_session_state(
exclude_entry_point=entry_point_id
)
session_state = self._get_primary_session_state(exclude_entry_point=entry_point_id)
# Gate: skip tick if no active session
if session_state is None:
logger.debug(
"Timer '%s': no active session, skipping",
entry_point_id,
)
self._timer_next_fire[entry_point_id] = (
time.monotonic() + interval_secs
)
self._timer_next_fire[entry_point_id] = time.monotonic() + interval_secs
await asyncio.sleep(interval_secs)
continue
@@ -1152,8 +1124,7 @@ class AgentHost:
event_types = [_ET(et) for et in tc.get("event_types", [])]
if not event_types:
logger.warning(
"Entry point '%s::%s' has trigger_type='event' "
"but no event_types in trigger_config",
"Entry point '%s::%s' has trigger_type='event' but no event_types in trigger_config",
graph_id,
ep_id,
)
@@ -1301,24 +1272,18 @@ class AgentHost:
break
stream = reg.streams.get(local_ep)
if not stream:
logger.warning(
"Timer: no stream '%s' in '%s', stopping", local_ep, gid
)
logger.warning("Timer: no stream '%s' in '%s', stopping", local_ep, gid)
break
# Isolated entry points get their own session;
# shared ones join the primary session.
ep_spec = reg.entry_points.get(local_ep)
if ep_spec and ep_spec.isolation_level == "isolated":
if _persistent_session_id:
session_state = {
"resume_session_id": _persistent_session_id
}
session_state = {"resume_session_id": _persistent_session_id}
else:
session_state = None
else:
session_state = self._get_primary_session_state(
local_ep, source_graph_id=gid
)
session_state = self._get_primary_session_state(local_ep, source_graph_id=gid)
# Gate: skip tick if no active session
if session_state is None:
logger.debug(
@@ -1335,11 +1300,7 @@ class AgentHost:
session_state=session_state,
)
# Remember session ID for reuse on next tick
if (
not _persistent_session_id
and ep_spec
and ep_spec.isolation_level == "isolated"
):
if not _persistent_session_id and ep_spec and ep_spec.isolation_level == "isolated":
_persistent_session_id = exec_id
except Exception:
logger.error(
@@ -1597,9 +1558,7 @@ class AgentHost:
src_graph_id = source_graph_id or self._graph_id
src_reg = self._graphs.get(src_graph_id)
ep_spec = (
src_reg.entry_points.get(exclude_entry_point)
if src_reg
else self._entry_points.get(exclude_entry_point)
src_reg.entry_points.get(exclude_entry_point) if src_reg else self._entry_points.get(exclude_entry_point)
)
if ep_spec:
graph = src_reg.graph if src_reg else self.graph
@@ -1633,9 +1592,7 @@ class AgentHost:
# Filter to only input keys so stale outputs
# from previous triggers don't leak through.
if allowed_keys is not None:
buffer_data = {
k: v for k, v in full_buffer.items() if k in allowed_keys
}
buffer_data = {k: v for k, v in full_buffer.items() if k in allowed_keys}
else:
buffer_data = full_buffer
if buffer_data:
@@ -1715,7 +1672,7 @@ class AgentHost:
entry_point_id: str,
execution_id: str,
graph_id: str | None = None,
) -> bool:
) -> str:
"""
Cancel a running execution.
@@ -1725,11 +1682,11 @@ class AgentHost:
graph_id: Graph to search (defaults to active graph)
Returns:
True if cancelled, False if not found
Cancellation outcome from the stream.
"""
stream = self._resolve_stream(entry_point_id, graph_id)
if stream is None:
return False
return "not_found"
return await stream.cancel_execution(execution_id)
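# Sketch: a hypothetical caller of the new string-returning cancel. Outcome
# values other than "not_found" come from the stream and are assumed here,
# not confirmed by this diff:
outcome = await host.cancel_execution("main", exec_id)
if outcome == "not_found":
    logger.warning("cancel: no stream for entry point")
else:
    logger.info("cancel outcome: %s", outcome)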
# === QUERY OPERATIONS ===
+95
View File
@@ -0,0 +1,95 @@
"""Read/write helpers for per-colony metadata.json.
A colony's metadata.json lives at ``{COLONIES_DIR}/{colony_name}/metadata.json``
and holds immutable provenance: the queen that created it, the forked
session id, creation/update timestamps, and the list of workers.
Mutable user-editable tool configuration lives in a sibling
``tools.json`` sidecar (see :mod:`framework.host.colony_tools_config`)
so identity and tool gating evolve independently.
"""
from __future__ import annotations
import json
import logging
from pathlib import Path
from typing import Any
from framework.config import COLONIES_DIR
logger = logging.getLogger(__name__)
def colony_metadata_path(colony_name: str) -> Path:
"""Return the on-disk path to a colony's metadata.json."""
return COLONIES_DIR / colony_name / "metadata.json"
def load_colony_metadata(colony_name: str) -> dict[str, Any]:
"""Load metadata.json for ``colony_name``.
Returns an empty dict if the file is missing or malformed; callers
are expected to treat missing fields as defaults.
"""
path = colony_metadata_path(colony_name)
if not path.exists():
return {}
try:
data = json.loads(path.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError):
logger.warning("Failed to read colony metadata at %s", path)
return {}
return data if isinstance(data, dict) else {}
def update_colony_metadata(colony_name: str, updates: dict[str, Any]) -> dict[str, Any]:
"""Shallow-merge ``updates`` into metadata.json and persist.
Returns the full updated dict. Raises ``FileNotFoundError`` if the
colony does not exist. Writes atomically via ``os.replace`` to
minimize the window where a reader could see a half-written file.
"""
import os
import tempfile
path = colony_metadata_path(colony_name)
if not path.parent.exists():
raise FileNotFoundError(f"Colony '{colony_name}' not found")
data = load_colony_metadata(colony_name) if path.exists() else {}
for key, value in updates.items():
data[key] = value
path.parent.mkdir(parents=True, exist_ok=True)
fd, tmp_path = tempfile.mkstemp(
prefix=".metadata.",
suffix=".json.tmp",
dir=str(path.parent),
)
try:
with os.fdopen(fd, "w", encoding="utf-8") as fh:
json.dump(data, fh, indent=2)
fh.flush()
os.fsync(fh.fileno())
os.replace(tmp_path, path)
except BaseException:
try:
os.unlink(tmp_path)
except OSError:
pass
raise
return data
def list_colony_names() -> list[str]:
"""Return the names of every colony that has a metadata.json on disk."""
if not COLONIES_DIR.is_dir():
return []
names: list[str] = []
for entry in sorted(COLONIES_DIR.iterdir()):
if not entry.is_dir():
continue
if (entry / "metadata.json").exists():
names.append(entry.name)
return names
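# Sketch: round-trip through the helpers above. Colony name and fields are
# invented; the colony directory must already exist or update raises
# FileNotFoundError:
meta = update_colony_metadata("research", {"workers": ["w1", "w2"]})
assert load_colony_metadata("research").get("workers") == ["w1", "w2"]
print(list_colony_names())  # includes "research"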
+632 -37
View File
@@ -14,8 +14,8 @@ from __future__ import annotations
import asyncio
import json
import logging
import os
import time
import uuid
from collections import OrderedDict
from collections.abc import Callable
from dataclasses import dataclass, field
@@ -25,25 +25,77 @@ from typing import TYPE_CHECKING, Any
from framework.agent_loop.types import AgentContext, AgentSpec
from framework.host.event_bus import AgentEvent, EventBus, EventType
from framework.host.triggers import TriggerDefinition
from framework.host.worker import Worker, WorkerInfo, WorkerResult, WorkerStatus
from framework.observability import set_trace_context
from framework.host.worker import Worker, WorkerInfo, WorkerResult
from framework.schemas.goal import Goal
from framework.storage.concurrent import ConcurrentStorage
from framework.storage.session_store import SessionStore
if TYPE_CHECKING:
from framework.agent_loop.agent_loop import AgentLoop
from framework.llm.provider import LLMProvider, Tool
from framework.pipeline.runner import PipelineRunner
from framework.skills.manager import SkillsManagerConfig
from framework.tracker.runtime_log_store import RuntimeLogStore
logger = logging.getLogger(__name__)
def _format_spawn_task_message(task: str, input_data: dict[str, Any]) -> str:
"""Render the spawn task into the worker's next user message.
Spawned workers inherit the queen's conversation via
``ColonyRuntime._fork_parent_conversation``; this helper builds
the content of the trailing user message that carries the new
task. The queen's chat already provides the context for the
task, so we frame this as an explicit hand-off.
Additional keys from ``input_data`` (other than the task itself)
are rendered below the hand-off line so the worker sees them as
structured hand-off data. This mirrors the fresh-path
``AgentLoop._build_initial_message`` shape so worker prompts look
roughly the same whether or not inheritance fired.
"""
lines = [
"# New task delegated by the queen",
"",
"The queen's conversation up to this point is visible above. "
"Use it as context (who the user is, what was already decided, "
"which skills apply). Your own system prompt and tool set are "
"set by the framework — the queen's tools may differ from "
"yours, so treat her prior tool calls as history only.",
"",
f"task: {task}",
]
for key, value in (input_data or {}).items():
if key in ("task", "user_request"):
# Already rendered above; don't duplicate.
continue
if value is None:
continue
lines.append(f"{key}: {value}")
return "\n".join(lines)
def _env_int(name: str, default: int) -> int:
"""Read a positive int from env; fall back to default on missing/invalid."""
raw = os.environ.get(name)
if not raw:
return default
try:
value = int(raw)
except ValueError:
logger.warning("Invalid %s=%r; using default %d", name, raw, default)
return default
return value if value > 0 else default
# Laptop-safe default. Each worker is a full AgentLoop (Claude SDK session +
# tool catalog), so ~4 concurrent is the realistic ceiling on a dev machine.
# Override via HIVE_MAX_CONCURRENT_WORKERS for servers.
_DEFAULT_MAX_CONCURRENT_WORKERS = _env_int("HIVE_MAX_CONCURRENT_WORKERS", 4)
@dataclass
class ColonyConfig:
max_concurrent_workers: int = 100
max_concurrent_workers: int = _DEFAULT_MAX_CONCURRENT_WORKERS
cache_ttl: float = 60.0
batch_interval: float = 0.1
max_history: int = 1000
@@ -133,6 +185,8 @@ class ColonyRuntime:
protocols_prompt: str = "",
skill_dirs: list[str] | None = None,
pipeline_stages: list | None = None,
queen_id: str | None = None,
colony_name: str | None = None,
):
from framework.pipeline.runner import PipelineRunner
from framework.skills.manager import SkillsManager
@@ -141,14 +195,27 @@ class ColonyRuntime:
self._goal = goal
self._config = config or ColonyConfig()
self._runtime_log_store = runtime_log_store
self._queen_id: str | None = queen_id
# ``colony_id`` is the event-bus scope (session.id in DM sessions);
# ``colony_name`` is the on-disk identity under ~/.hive/colonies/.
# They coincide for forked colonies but diverge for queen DM
# sessions, so separate them explicitly.
self._colony_name: str | None = colony_name
if pipeline_stages:
self._pipeline = PipelineRunner(pipeline_stages)
else:
self._pipeline = self._load_pipeline_from_config()
if skills_manager_config is not None:
self._skills_manager = SkillsManager(skills_manager_config)
# Resolve per-colony override paths so UI toggles can reach this
# runtime. Callers that build their own SkillsManagerConfig stay
# in charge; bare construction auto-wires the standard paths.
_effective_cfg = skills_manager_config
if _effective_cfg is None and not (skills_catalog_prompt or protocols_prompt):
_effective_cfg = self._build_default_skills_config(colony_name, queen_id)
if _effective_cfg is not None:
self._skills_manager = SkillsManager(_effective_cfg)
self._skills_manager.load()
elif skills_catalog_prompt or protocols_prompt:
import warnings
@@ -159,9 +226,7 @@ class ColonyRuntime:
DeprecationWarning,
stacklevel=2,
)
self._skills_manager = SkillsManager.from_precomputed(
skills_catalog_prompt, protocols_prompt
)
self._skills_manager = SkillsManager.from_precomputed(skills_catalog_prompt, protocols_prompt)
else:
self._skills_manager = SkillsManager()
self._skills_manager.load()
@@ -171,12 +236,32 @@ class ColonyRuntime:
self.batch_init_nudge: str | None = self._skills_manager.batch_init_nudge
self._colony_id: str = colony_id or "primary"
# Ensure the colony task template exists. Idempotent — if the
# colony was created previously, this is a no-op (it just stamps
# last_seen_session_ids if a session id is provided later).
try:
import asyncio as _asyncio
from framework.tasks import TaskListRole, get_task_store
from framework.tasks.scoping import colony_task_list_id
_store = get_task_store()
_list_id = colony_task_list_id(self._colony_id)
try:
# Best-effort: schedule on the running loop, or do it inline
# if no loop is yet running (e.g. during construction).
_loop = _asyncio.get_running_loop()
_loop.create_task(_store.ensure_task_list(_list_id, role=TaskListRole.TEMPLATE))
except RuntimeError:
_asyncio.run(_store.ensure_task_list(_list_id, role=TaskListRole.TEMPLATE))
except Exception:
logger.debug("Failed to ensure colony task template", exc_info=True)
self._accounts_prompt = accounts_prompt
self._accounts_data = accounts_data
self._tool_provider_map = tool_provider_map
self._dynamic_memory_provider_factory: Callable[[str], Callable[[], str] | None] | None = (
None
)
self._dynamic_memory_provider_factory: Callable[[str], Callable[[], str] | None] | None = None
storage_path_obj = Path(storage_path) if isinstance(storage_path, str) else storage_path
self._storage_path: Path = storage_path_obj
@@ -190,10 +275,33 @@ class ColonyRuntime:
self._event_bus = event_bus or EventBus(max_history=self._config.max_history)
self._scoped_event_bus = StreamEventBus(self._event_bus, self._colony_id)
# Make the event bus visible to the task-system event emitters so
# task lifecycle events fan out to the same bus the rest of the
# system uses. Idempotent — last writer wins.
try:
from framework.tasks.events import set_default_event_bus
set_default_event_bus(self._event_bus)
except Exception:
logger.debug("Failed to register default task event bus", exc_info=True)
self._llm = llm
self._tools = tools or []
self._tool_executor = tool_executor
# Per-colony MCP tool allowlist — applied when spawning workers. A
# value of ``None`` means "allow every MCP tool" (default), an empty
# list disables every MCP tool, and a list of names only enables
# those. Lifecycle / synthetic tools always pass through the filter
# because their names are absent from ``_mcp_tool_names_all``. The
# allowlist is re-read on every ``spawn`` so a PATCH that mutates
# this attribute via ``set_tool_allowlist`` takes effect on the
# NEXT worker spawn without a runtime restart. In-flight workers
# keep the tool list they booted with — workers have no dynamic
# tools provider today.
self._enabled_mcp_tools: list[str] | None = None
self._mcp_tool_names_all: set[str] = set()
# Worker management
self._workers: dict[str, Worker] = {}
# The persistent client-facing overseer (optional). Set by
@@ -210,6 +318,13 @@ class ColonyRuntime:
self._timer_tasks: list[asyncio.Task] = []
self._timer_next_fire: dict[str, float] = {}
self._webhook_server: Any = None
# Background tasks owned by the runtime that aren't timers —
# e.g. the per-spawn soft/hard timeout watchers kicked off by
# run_parallel_workers. We hold strong references so asyncio
# does not garbage-collect them mid-sleep (Python's asyncio
# docs explicitly warn that create_task() needs a referenced
# handle).
self._background_tasks: set[asyncio.Task] = set()
# Idempotency
self._idempotency_keys: OrderedDict[str, str] = OrderedDict()
@@ -304,6 +419,19 @@ class ColonyRuntime:
def _apply_pipeline_results(self) -> None:
for stage in self._pipeline.stages:
if stage.tool_registry is not None:
# Register task tools on the same registry every worker
# pulls from. Done here (not at worker spawn) so the
# colony's `_tools` snapshot includes them.
try:
from framework.tasks.tools import register_task_tools
register_task_tools(stage.tool_registry)
except Exception:
logger.warning(
"Failed to register task tools on pipeline registry",
exc_info=True,
)
tools = list(stage.tool_registry.get_tools().values())
if tools:
self._tools = tools
@@ -329,6 +457,136 @@ class ColonyRuntime:
return PipelineRunner([])
return build_pipeline_from_config(stages_config)
@staticmethod
def _build_default_skills_config(
colony_name: str | None,
queen_id: str | None,
) -> SkillsManagerConfig:
"""Assemble a ``SkillsManagerConfig`` that wires in the per-colony /
per-queen override files and the ``queen_ui`` / ``colony_ui`` scope
dirs based on the standard ``~/.hive`` layout.
``colony_name`` must be an actual on-disk colony name
(``~/.hive/colonies/{name}/``). DM sessions where the ``colony_id``
is a session UUID should pass ``None`` so we don't create a stray
override file under a session identifier.
"""
from framework.config import COLONIES_DIR, QUEENS_DIR
from framework.skills.discovery import ExtraScope
from framework.skills.manager import SkillsManagerConfig
extras: list[ExtraScope] = []
queen_overrides_path: Path | None = None
if queen_id:
queen_home = QUEENS_DIR / queen_id
queen_overrides_path = queen_home / "skills_overrides.json"
extras.append(ExtraScope(directory=queen_home / "skills", label="queen_ui", priority=2))
colony_overrides_path: Path | None = None
if colony_name:
colony_home = COLONIES_DIR / colony_name
colony_overrides_path = colony_home / "skills_overrides.json"
# Surface both the new flat ``skills/`` (where new skills are
# written) and the legacy nested ``.hive/skills/`` (left intact
# for pre-flatten colonies) as tagged ``colony_ui`` scopes, so
# UI-created entries resolve with correct provenance regardless
# of which on-disk layout the colony has.
extras.append(
ExtraScope(
directory=colony_home / "skills",
label="colony_ui",
priority=3,
)
)
extras.append(
ExtraScope(
directory=colony_home / ".hive" / "skills",
label="colony_ui",
priority=3,
)
)
return SkillsManagerConfig(
queen_id=queen_id,
queen_overrides_path=queen_overrides_path,
colony_name=colony_name,
colony_overrides_path=colony_overrides_path,
extra_scope_dirs=extras,
interactive=False, # HTTP-driven runtimes never prompt for consent
)
@property
def queen_id(self) -> str | None:
"""The queen that owns this runtime, if known."""
return self._queen_id
@property
def colony_name(self) -> str | None:
"""The on-disk colony name (distinct from event-bus scope ``colony_id``)."""
return self._colony_name
@property
def skills_manager(self):
"""Access the live :class:`SkillsManager` (for HTTP handlers)."""
return self._skills_manager
async def reload_skills(self) -> dict[str, Any]:
"""Rebuild the catalog after an override change; in-flight workers
pick up the new catalog on their next iteration via
``dynamic_skills_catalog_provider``.
Returns a small stats dict that HTTP handlers can echo back to
the UI ("applied — N skills now in catalog").
"""
async with self._skills_manager.mutation_lock:
self._skills_manager.reload()
self.skill_dirs = self._skills_manager.allowlisted_dirs
self.batch_init_nudge = self._skills_manager.batch_init_nudge
self.context_warn_ratio = self._skills_manager.context_warn_ratio
catalog_prompt = self._skills_manager.skills_catalog_prompt
return {
"catalog_chars": len(catalog_prompt),
"skill_dirs": list(self.skill_dirs),
}
# ── Per-colony tool allowlist ───────────────────────────────
def set_tool_allowlist(
self,
enabled_mcp_tools: list[str] | None,
mcp_tool_names_all: set[str] | None = None,
) -> None:
"""Configure the per-colony MCP tool allowlist.
Called at construction time (from SessionManager) and again from
the ``/api/colony/{name}/tools`` PATCH handler when a user edits
the allowlist. The change applies to the NEXT worker spawn we
never mutate the tool list of a worker that is already running
(workers have no dynamic tools provider, so hot-reloading their
tool set would diverge from the list the LLM was already using).
"""
self._enabled_mcp_tools = list(enabled_mcp_tools) if enabled_mcp_tools is not None else None
if mcp_tool_names_all is not None:
self._mcp_tool_names_all = set(mcp_tool_names_all)
def _apply_tool_allowlist(self, tools: list) -> list:
"""Filter ``tools`` against the colony's MCP allowlist.
Lifecycle / synthetic tools (those whose names are NOT in
``_mcp_tool_names_all``) are never gated. MCP tools are kept only
when ``_enabled_mcp_tools`` is None (default allow) or contains
their name. Input list order is preserved so downstream cache
keys and logs stay stable.
"""
if self._enabled_mcp_tools is None:
return tools
allowed = set(self._enabled_mcp_tools)
return [
t
for t in tools
if getattr(t, "name", None) not in self._mcp_tool_names_all or getattr(t, "name", None) in allowed
]
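# Sketch: the allowlist round-trip, with invented tool names:
runtime.set_tool_allowlist(
    ["web_search"],
    mcp_tool_names_all={"web_search", "fetch_url"},
)
# On the NEXT spawn: "fetch_url" (a known MCP tool, not allowed) is dropped;
# "web_search" passes; a lifecycle tool absent from mcp_tool_names_all is
# never gated and passes regardless.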
# ── Lifecycle ───────────────────────────────────────────────
async def start(self) -> None:
@@ -380,8 +638,24 @@ class ColonyRuntime:
async with self._lock:
await self.stop_all_workers()
for task in self._timer_tasks:
# Cancel timer tasks and *wait* for them to finish. Without
# the wait the tasks are merely scheduled for cancellation —
# if the runtime (or its event loop) shuts down before they
# run their cleanup code, trigger state leaks.
pending_timers = [t for t in self._timer_tasks if not t.done()]
for task in pending_timers:
task.cancel()
if pending_timers:
try:
await asyncio.wait_for(
asyncio.gather(*pending_timers, return_exceptions=True),
timeout=5.0,
)
except TimeoutError:
logger.warning(
"ColonyRuntime.stop: %d timer task(s) did not finish within 5s",
sum(1 for t in pending_timers if not t.done()),
)
self._timer_tasks.clear()
for sub_id in self._event_subscriptions:
@@ -398,12 +672,147 @@ class ColonyRuntime:
self._running = False
logger.info("ColonyRuntime stopped: colony_id=%s", self._colony_id)
def _on_timer_task_done(self, task: asyncio.Task) -> None:
if task.cancelled():
return
exc = task.exception()
if exc is not None:
logger.error(
"Timer task '%s' crashed: %s",
task.get_name(),
exc,
exc_info=exc,
)
def pause_timers(self) -> None:
self._timers_paused = True
def resume_timers(self) -> None:
self._timers_paused = False
async def _fork_parent_conversation(
self,
dest_conv_dir: Path,
*,
task: str,
input_data: dict[str, Any] | None = None,
) -> None:
"""Fork the colony's parent queen conversation into ``dest_conv_dir``.
Copies the queen's ``parts/*.json`` and ``meta.json`` into the
worker's fresh conversation dir, then appends a synthetic user
message carrying the new task. The worker's subsequent
``AgentLoop._restore`` reads this conversation via the usual
path the queen's history is visible as prior turns, the task
appears as the most recent user message, and the worker starts
acting on it with full context.
This is a no-op if the colony runtime doesn't own a parent
queen conversation (e.g. a standalone colony started without a
queen wrapper).
Notes on filtering compatibility:
- Queen parts have ``phase_id=None``. When the worker's
restore applies its own phase filter, the backward-compat
fallback in NodeConversation.restore kicks in: an
all-None-phased store bypasses the filter. See
``conversation.py:1369-1378``.
- ``cursor.json`` is deliberately NOT copied. The worker
should start fresh at iteration 0; copying the queen's
cursor would make the worker think it had already done
work.
- The queen's ``meta.json`` is copied but the AgentLoop
immediately rebuilds ``system_prompt`` from the worker's
own context post-restore (see agent_loop.py:533-535), so
the queen's system prompt does not leak into the worker.
"""
# Resolve the queen's own conversation dir. For a queen-backed
# ColonyRuntime, storage_path points at the queen's session dir
# and conversations/ lives inside it. For standalone runtimes
# (tests, legacy fork path under ~/.hive/agents/{name}/worker/)
# there's no parent conversation — fall through to the fresh
# spawn path.
src_conv_dir = self._storage_path / "conversations"
src_parts_dir = src_conv_dir / "parts"
if not src_parts_dir.exists():
# No queen conversation to inherit — the worker starts with
# only the task, same as the pre-fork behavior. AgentLoop's
# fresh-conversation branch will call _build_initial_message
# and render input_data into the worker's first user message.
return
def _copy_and_append() -> None:
dest_parts = dest_conv_dir / "parts"
dest_parts.mkdir(parents=True, exist_ok=True)
# Copy each queen part. Use json.dumps round-trip (not raw
# file copy) so we can be defensive about unreadable files —
# a corrupted queen part file shouldn't take down the worker
# spawn, just drop that one part.
max_seq = -1
for part_file in sorted(src_parts_dir.glob("*.json")):
try:
data = json.loads(part_file.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError) as exc:
logger.warning(
"spawn fork: skipping unreadable queen part %s: %s",
part_file.name,
exc,
)
continue
seq = data.get("seq")
if isinstance(seq, int) and seq > max_seq:
max_seq = seq
(dest_parts / part_file.name).write_text(
json.dumps(data, ensure_ascii=False),
encoding="utf-8",
)
# Copy the queen's meta.json so the worker's restore finds
# the conversation during its first run. The meta fields
# (system_prompt, max_context_tokens, etc.) get overridden
# by the worker's own AgentLoop config + context after
# restore, so nothing here bleeds into runtime behavior.
src_meta = src_conv_dir / "meta.json"
if src_meta.exists():
try:
meta_data = json.loads(src_meta.read_text(encoding="utf-8"))
(dest_conv_dir / "meta.json").write_text(
json.dumps(meta_data, ensure_ascii=False),
encoding="utf-8",
)
except (json.JSONDecodeError, OSError) as exc:
logger.warning("spawn fork: failed to copy queen meta.json: %s", exc)
# Append the task as the next user message so the worker's
# LLM sees it as the most recent turn in the conversation
# after restore. This replaces the fresh-path call to
# _build_initial_message for spawned workers.
task_content = _format_spawn_task_message(task, input_data or {})
next_seq = max_seq + 1
task_part = {
"seq": next_seq,
"role": "user",
"content": task_content,
# phase_id omitted (None) so the backward-compat
# fallback in NodeConversation.restore keeps it visible
# to both queen-style and phase-filtered restores.
# run_id omitted so the worker's run_id filter (off by
# default since ctx.run_id is empty) doesn't reject it.
}
task_filename = f"{next_seq:010d}.json"
(dest_parts / task_filename).write_text(
json.dumps(task_part, ensure_ascii=False),
encoding="utf-8",
)
logger.info(
"spawn fork: inherited %d queen parts + appended task at seq %d",
max_seq + 1,
next_seq,
)
await asyncio.to_thread(_copy_and_append)
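# Sketch: the worker store after the fork, per the docstring and helper above
# (sequence numbers illustrative):
#
#   workers/{worker_id}/conversations/
#     meta.json            copied from the queen; system_prompt rebuilt post-restore
#     parts/
#       0000000000.json    queen part, seq 0
#       ...
#       0000000041.json    queen part, seq max_seq
#       0000000042.json    appended task message (role=user, no phase_id/run_id)
#     cursor.json          deliberately absent; the worker starts at iteration 0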
# ── Worker Spawning ─────────────────────────────────────────
async def spawn(
@@ -452,6 +861,52 @@ class ColonyRuntime:
spawn_tools = tools if tools is not None else self._tools
spawn_executor = tool_executor or self._tool_executor
# Apply the per-colony MCP tool allowlist (if any). Done HERE —
# after spawn_tools is resolved but before it's frozen into the
# worker's AgentContext — so the next spawn reflects any PATCH
# that happened since the last spawn. A value of ``None`` on
# ``_enabled_mcp_tools`` is a no-op so the default path is
# unchanged.
spawn_tools = self._apply_tool_allowlist(spawn_tools)
# Colony progress tracker: when the caller supplied a db_path
# in input_data, this worker is part of a SQLite task queue
# and must see the hive.colony-progress-tracker skill body in
# its system prompt from turn 0. Rebuild the catalog with the
# skill pre-activated; falls back to the colony default when
# no db_path is present.
_spawn_catalog = self.skills_catalog_prompt
_spawn_skill_dirs = self.skill_dirs
if isinstance(input_data, dict) and input_data.get("db_path"):
try:
from framework.skills.config import SkillsConfig
from framework.skills.manager import SkillsManager, SkillsManagerConfig
_pre = SkillsManager(
SkillsManagerConfig(
skills_config=SkillsConfig.from_agent_vars(
skills=["hive.colony-progress-tracker"],
),
)
)
_pre.load()
_spawn_catalog = _pre.skills_catalog_prompt
_spawn_skill_dirs = (
list(_pre.allowlisted_dirs) if hasattr(_pre, "allowlisted_dirs") else self.skill_dirs
)
logger.info(
"spawn: pre-activated hive.colony-progress-tracker "
"(catalog %d%d chars) for worker with db_path=%s",
len(self.skills_catalog_prompt),
len(_spawn_catalog),
input_data.get("db_path"),
)
except Exception as exc:
logger.warning(
"spawn: failed to pre-activate colony-progress-tracker skill, falling back to base catalog: %s",
exc,
)
# Resolve the SSE stream_id once. When the caller didn't supply
# one we use the per-worker fan-out tag (filtered out by the
# SSE handler). When the caller passed an explicit value we
@@ -469,10 +924,24 @@ class ColonyRuntime:
# (worse) the process CWD.
worker_storage = self._storage_path / "workers" / worker_id
worker_storage.mkdir(parents=True, exist_ok=True)
worker_conv_store = FileConversationStore(
worker_storage / "conversations"
# Fork the queen's conversation into the worker's store.
# The queen already accumulated the user chat, read relevant
# skills, and made decisions about how to approach the task;
# the worker would repeat that discovery work (and often
# mis-step — see the 2026-04-14 "dummy-target" incident)
# if spawned with a blank store. We snapshot the queen's
# parts + meta at spawn time, then append the task as the
# next user message so the worker's AgentLoop restores into
# a conversation that already ends with its new instruction.
await self._fork_parent_conversation(
worker_storage / "conversations",
task=task,
input_data=input_data,
)
worker_conv_store = FileConversationStore(worker_storage / "conversations")
# AgentLoop takes bus/judge/config/executor at construction;
# LLM, tools, stream_id, execution_id all come from the
# AgentContext passed to execute().
@@ -482,6 +951,34 @@ class ColonyRuntime:
conversation_store=worker_conv_store,
)
# Workers pick up UI-driven override changes via this provider,
# which reads the live catalog on each iteration. The db_path
# pre-activated catalog stays static because its contents are
# built for *this* worker's task (a tombstone toggle from the
# UI should not yank it mid-run).
_db_path_pre_activated = bool(isinstance(input_data, dict) and input_data.get("db_path"))
# Default-bind the manager into the closure so each loop iteration
# captures the same manager instance — pyflakes B023 would flag a
# free-variable capture here.
_provider = None if _db_path_pre_activated else (lambda mgr=self._skills_manager: mgr.skills_catalog_prompt)
# Task-system fields. Each worker owns its session task list;
# picked_up_from records the colony template entry it was
# spawned for, when applicable.
from framework.tasks.scoping import (
colony_task_list_id as _colony_list_id,
session_task_list_id as _session_list_id,
)
_worker_list_id = _session_list_id(worker_id, worker_id)
_picked_up = None
_template_id = input_data.get("__template_task_id") if isinstance(input_data, dict) else None
if _template_id is not None:
try:
_picked_up = (_colony_list_id(self._colony_id), int(_template_id))
except (TypeError, ValueError):
_picked_up = None
agent_context = AgentContext(
runtime=self._make_runtime_adapter(worker_id),
agent_id=worker_id,
@@ -492,11 +989,15 @@ class ColonyRuntime:
llm=self._llm,
available_tools=list(spawn_tools),
accounts_prompt=self._accounts_prompt,
skills_catalog_prompt=self.skills_catalog_prompt,
skills_catalog_prompt=_spawn_catalog,
protocols_prompt=self.protocols_prompt,
skill_dirs=self.skill_dirs,
skill_dirs=_spawn_skill_dirs,
dynamic_skills_catalog_provider=_provider,
execution_id=worker_id,
stream_id=explicit_stream_id or f"worker:{worker_id}",
task_list_id=_worker_list_id,
colony_id=self._colony_id,
picked_up_from=_picked_up,
)
worker = Worker(
@@ -527,6 +1028,8 @@ class ColonyRuntime:
async def spawn_batch(
self,
tasks: list[dict[str, Any]],
*,
tools_override: list[Any] | None = None,
) -> list[str]:
"""Spawn a batch of parallel workers, one per task spec.
@@ -539,6 +1042,12 @@ class ColonyRuntime:
The overseer's ``run_parallel_workers`` tool is the usual
caller; it pairs ``spawn_batch`` + ``wait_for_worker_reports``
into a single fan-out/fan-in primitive.
When ``tools_override`` is supplied, every spawned worker
receives that tool list instead of the colony's default. Used
by ``run_parallel_workers`` to drop tools whose credentials
failed the pre-flight check (so the spawned workers don't
waste a startup trying to use them).
"""
worker_ids: list[str] = []
for spec in tasks:
@@ -550,6 +1059,7 @@ class ColonyRuntime:
task=task_text,
count=1,
input_data=task_data or {"task": task_text},
tools=tools_override,
)
worker_ids.extend(ids)
return worker_ids
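# Hedged usage sketch (not from this diff): how a caller such as
# run_parallel_workers might pair spawn_batch with the credential
# pre-flight described above. `runtime`, `all_tools`, and
# `failed_credential_tools` are illustrative names.
#
#   healthy = [t for t in all_tools if t.name not in failed_credential_tools]
#   worker_ids = await runtime.spawn_batch(
#       [{"task": "audit site A"}, {"task": "audit site B"}],
#       tools_override=healthy,
#   )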
@@ -643,9 +1153,7 @@ class ColonyRuntime:
if remaining <= 0:
break
try:
report = await asyncio.wait_for(
report_queue.get(), timeout=remaining
)
report = await asyncio.wait_for(report_queue.get(), timeout=remaining)
except TimeoutError:
break
wid = report.get("worker_id")
@@ -714,10 +1222,7 @@ class ColonyRuntime:
return self._overseer
if not self._running:
raise RuntimeError(
"start_overseer requires the ColonyRuntime to be running "
"(call start() first)"
)
raise RuntimeError("start_overseer requires the ColonyRuntime to be running (call start() first)")
from framework.agent_loop.agent_loop import AgentLoop
from framework.storage.conversation_store import FileConversationStore
@@ -728,15 +1233,14 @@ class ColonyRuntime:
# {colony_session}/conversations/. Workers get their own sub-dirs
# under workers/{worker_id}/; the overseer is the root occupant.
self._storage_path.mkdir(parents=True, exist_ok=True)
overseer_conv_store = FileConversationStore(
self._storage_path / "conversations"
)
overseer_conv_store = FileConversationStore(self._storage_path / "conversations")
agent_loop = AgentLoop(
event_bus=self._scoped_event_bus,
tool_executor=self._tool_executor,
conversation_store=overseer_conv_store,
)
_overseer_skills_mgr = self._skills_manager
overseer_ctx = AgentContext(
runtime=self._make_runtime_adapter(overseer_id),
agent_id=overseer_id,
@@ -750,6 +1254,7 @@ class ColonyRuntime:
skills_catalog_prompt=self.skills_catalog_prompt,
protocols_prompt=self.protocols_prompt,
skill_dirs=self.skill_dirs,
dynamic_skills_catalog_provider=lambda: _overseer_skills_mgr.skills_catalog_prompt,
execution_id=overseer_id,
stream_id="overseer",
)
@@ -868,6 +1373,96 @@ class ColonyRuntime:
return True
return False
def watch_batch_timeouts(
self,
worker_ids: list[str],
*,
soft_timeout: float,
hard_timeout: float,
warning_message: str | None = None,
) -> asyncio.Task:
"""Schedule a background task that enforces soft + hard timeouts.
Semantics:
* At ``t = soft_timeout`` every worker in ``worker_ids`` that is
still active AND hasn't already filed an ``_explicit_report``
receives ``warning_message`` via ``send_to_worker``; the inject
appears as a user turn at the next agent-loop boundary, so the
worker's LLM can see it and call ``report_to_parent`` with
partial results.
* At ``t = hard_timeout`` any worker still active is force-stopped
via ``stop_worker``. ``Worker.run`` still emits its
``SUBAGENT_REPORT`` on cancel (the explicit report survives,
if the worker reported just before the stop) so the queen
always sees a terminal inject for every spawned worker.
Returns the scheduled task so callers can await or cancel it.
Non-blocking for the caller; the watcher runs on the event loop
independently.
"""
if warning_message is None:
grace = max(0.0, hard_timeout - soft_timeout)
warning_message = (
f"[SOFT TIMEOUT] You've been running for {soft_timeout:.0f}s. "
"Wrap up now: call report_to_parent with whatever partial "
"results you have. You have "
f"~{grace:.0f}s more before a hard stop — anything not "
"reported by then will be lost."
)
async def _watch() -> None:
try:
await asyncio.sleep(soft_timeout)
for wid in worker_ids:
worker = self._workers.get(wid)
if worker is None or not worker.is_active:
continue
if getattr(worker, "_explicit_report", None) is not None:
continue
try:
await self.send_to_worker(wid, warning_message)
except Exception:
logger.warning(
"watch_batch_timeouts: soft-timeout inject failed for %s",
wid,
exc_info=True,
)
remaining = hard_timeout - soft_timeout
if remaining <= 0:
return
await asyncio.sleep(remaining)
for wid in worker_ids:
worker = self._workers.get(wid)
if worker is None or not worker.is_active:
continue
try:
await self.stop_worker(wid)
logger.info(
"watch_batch_timeouts: hard-stopped %s after %ss (no report)",
wid,
hard_timeout,
)
except Exception:
logger.warning(
"watch_batch_timeouts: hard-stop failed for %s",
wid,
exc_info=True,
)
except asyncio.CancelledError:
raise
except Exception:
logger.exception("watch_batch_timeouts: watcher crashed")
task = asyncio.create_task(_watch(), name=f"batch-timeout:{worker_ids[0] if worker_ids else '?'}")
# Hold a strong reference until completion. Without this the
# task can be garbage-collected during `await asyncio.sleep`,
# silently swallowing the soft-timeout inject (the exact bug
# surfaced by workers never seeing [SOFT TIMEOUT]).
self._background_tasks.add(task)
task.add_done_callback(self._background_tasks.discard)
return task
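# Hedged pairing sketch (names and budgets are illustrative, and the
# wait_for_worker_reports signature is assumed): schedule the watcher
# right after the fan-out so no worker can run past the hard stop.
#
#   worker_ids = await runtime.spawn_batch(tasks)
#   watcher = runtime.watch_batch_timeouts(
#       worker_ids,
#       soft_timeout=300.0,   # nudge: report partial results now
#       hard_timeout=600.0,   # force stop_worker on anything still active
#   )
#   reports = await runtime.wait_for_worker_reports(worker_ids, timeout=600.0)
#   watcher.cancel()          # all reports arrived early; watcher not needed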
# ── Status & Query ──────────────────────────────────────────
def list_workers(self) -> list[WorkerInfo]:
@@ -891,9 +1486,7 @@ class ColonyRuntime:
def get_worker_result(self, worker_id: str) -> WorkerResult | None:
return self._execution_results.get(worker_id)
async def wait_for_worker(
self, worker_id: str, timeout: float | None = None
) -> WorkerResult | None:
async def wait_for_worker(self, worker_id: str, timeout: float | None = None) -> WorkerResult | None:
worker = self._workers.get(worker_id)
if worker is None:
return self._execution_results.get(worker_id)
@@ -901,7 +1494,7 @@ class ColonyRuntime:
return worker.info.result
try:
await asyncio.wait_for(asyncio.shield(worker._task_handle), timeout=timeout)
except asyncio.TimeoutError:
except TimeoutError:
return None
return worker.info.result
@@ -942,9 +1535,7 @@ class ColonyRuntime:
if worker and worker.is_active:
loop = worker._agent_loop
if hasattr(loop, "inject_event"):
await loop.inject_event(
content, is_client_input=is_client_input, image_content=image_content
)
await loop.inject_event(content, is_client_input=is_client_input, image_content=image_content)
return True
return False
@@ -1016,7 +1607,11 @@ class ColonyRuntime:
run_immediately = tc.get("run_immediately", False)
if interval and interval > 0 and self._running:
task = asyncio.create_task(self._timer_loop(trig_id, interval, run_immediately))
task = asyncio.create_task(
self._timer_loop(trig_id, interval, run_immediately),
name=f"timer:{trig_id}",
)
task.add_done_callback(self._on_timer_task_done)
self._timer_tasks.append(task)
async def _timer_loop(
+162
View File
@@ -0,0 +1,162 @@
"""Per-colony tool configuration sidecar (``tools.json``).
Lives at ``~/.hive/colonies/{colony_name}/tools.json`` alongside
``metadata.json``. Kept separate so provenance (queen_name,
created_at, workers) stays in metadata while the user-editable tool
allowlist gets its own file.
Schema::
{
"enabled_mcp_tools": ["read_file", ...] | null,
"updated_at": "2026-04-21T12:34:56+00:00"
}
- ``null`` / missing file → default "allow every MCP tool".
- ``[]`` → explicitly disable every MCP tool.
- ``["foo", "bar"]`` → only those MCP tool names pass the filter.
Atomic writes via ``os.replace`` mirror
``framework.host.colony_metadata.update_colony_metadata``.
"""
from __future__ import annotations
import json
import logging
import os
import tempfile
from datetime import UTC, datetime
from pathlib import Path
from typing import Any
from framework.config import COLONIES_DIR
logger = logging.getLogger(__name__)
def tools_config_path(colony_name: str) -> Path:
"""Return the on-disk path to a colony's ``tools.json``."""
return COLONIES_DIR / colony_name / "tools.json"
def _metadata_path(colony_name: str) -> Path:
return COLONIES_DIR / colony_name / "metadata.json"
def _atomic_write_json(path: Path, data: dict[str, Any]) -> None:
path.parent.mkdir(parents=True, exist_ok=True)
fd, tmp = tempfile.mkstemp(
prefix=".tools.",
suffix=".json.tmp",
dir=str(path.parent),
)
try:
with os.fdopen(fd, "w", encoding="utf-8") as fh:
json.dump(data, fh, indent=2)
fh.flush()
os.fsync(fh.fileno())
os.replace(tmp, path)
except BaseException:
try:
os.unlink(tmp)
except OSError:
pass
raise
def _migrate_from_metadata_if_needed(colony_name: str) -> list[str] | None:
"""Hoist a legacy ``enabled_mcp_tools`` field out of ``metadata.json``.
Returns the migrated value (or ``None`` if nothing to migrate). After
migration the sidecar exists and ``metadata.json`` no longer contains
``enabled_mcp_tools``. Safe to call repeatedly.
"""
meta_path = _metadata_path(colony_name)
if not meta_path.exists():
return None
try:
data = json.loads(meta_path.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError):
logger.warning("Could not read metadata.json during tools migration: %s", colony_name)
return None
if not isinstance(data, dict) or "enabled_mcp_tools" not in data:
return None
raw = data.pop("enabled_mcp_tools")
enabled: list[str] | None
if raw is None:
enabled = None
elif isinstance(raw, list) and all(isinstance(x, str) for x in raw):
enabled = raw
else:
logger.warning(
"Legacy enabled_mcp_tools on colony %s had unexpected shape %r; dropping",
colony_name,
raw,
)
enabled = None
# Sidecar first so a partial failure leaves the config recoverable.
_atomic_write_json(
tools_config_path(colony_name),
{
"enabled_mcp_tools": enabled,
"updated_at": datetime.now(UTC).isoformat(),
},
)
_atomic_write_json(meta_path, data)
logger.info(
"Migrated enabled_mcp_tools for colony %s from metadata.json to tools.json",
colony_name,
)
return enabled
def load_colony_tools_config(colony_name: str) -> list[str] | None:
"""Return the colony's MCP tool allowlist, or ``None`` for default-allow.
Order of resolution:
1. ``tools.json`` sidecar (authoritative).
2. Legacy ``metadata.json`` field (migrated and deleted on first read).
3. ``None`` → default "allow every MCP tool".
"""
path = tools_config_path(colony_name)
if path.exists():
try:
data = json.loads(path.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError):
logger.warning("Invalid %s; treating as default-allow", path)
return None
if not isinstance(data, dict):
return None
raw = data.get("enabled_mcp_tools")
if raw is None:
return None
if isinstance(raw, list) and all(isinstance(x, str) for x in raw):
return raw
logger.warning("Unexpected enabled_mcp_tools shape in %s; ignoring", path)
return None
return _migrate_from_metadata_if_needed(colony_name)
def update_colony_tools_config(
colony_name: str,
enabled_mcp_tools: list[str] | None,
) -> list[str] | None:
"""Persist a colony's MCP allowlist to ``tools.json``.
Raises ``FileNotFoundError`` if the colony's directory is missing.
"""
colony_dir = COLONIES_DIR / colony_name
if not colony_dir.exists():
raise FileNotFoundError(f"Colony directory not found: {colony_name}")
_atomic_write_json(
tools_config_path(colony_name),
{
"enabled_mcp_tools": enabled_mcp_tools,
"updated_at": datetime.now(UTC).isoformat(),
},
)
return enabled_mcp_tools
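# Minimal round-trip sketch, assuming a colony named "demo" already
# exists under COLONIES_DIR:
#
#   load_colony_tools_config("demo")                    # None -> allow all
#   update_colony_tools_config("demo", ["read_file"])   # restrict
#   load_colony_tools_config("demo")                    # ["read_file"]
#   update_colony_tools_config("demo", [])              # disable every MCP tool
#   update_colony_tools_config("demo", None)            # back to default-allow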
+148 -19
View File
@@ -111,6 +111,15 @@ class EventType(StrEnum):
# Retry tracking
NODE_RETRY = "node_retry"
# Stream-health observability. Split from NODE_RETRY so the UI can
# distinguish "slow TTFT on a huge context" (healthy, just slow) from
# "stream went silent mid-generation" (probable stall) from "we nudged
# the model to continue" (recovery), which NODE_RETRY used to conflate.
STREAM_TTFT_EXCEEDED = "stream_ttft_exceeded"
STREAM_INACTIVE = "stream_inactive"
STREAM_NUDGE_SENT = "stream_nudge_sent"
TOOL_CALL_REPLAY_DETECTED = "tool_call_replay_detected"
# Worker agent lifecycle
WORKER_COMPLETED = "worker_completed"
WORKER_FAILED = "worker_failed"
@@ -156,6 +165,14 @@ class EventType(StrEnum):
TRIGGER_REMOVED = "trigger_removed"
TRIGGER_UPDATED = "trigger_updated"
# Task system lifecycle (per-list diffs streamed to the UI)
TASK_CREATED = "task_created"
TASK_UPDATED = "task_updated"
TASK_DELETED = "task_deleted"
TASK_LIST_RESET = "task_list_reset"
TASK_LIST_REATTACH_MISMATCH = "task_list_reattach_mismatch"
COLONY_TEMPLATE_ASSIGNMENT = "colony_template_assignment"
@dataclass
class AgentEvent:
@@ -446,11 +463,7 @@ class EventBus:
# iteration values. Without this, live SSE would use raw iterations
# while events.jsonl would use offset iterations, causing ID collisions
# on the frontend when replaying after cold resume.
if (
self._session_log_iteration_offset
and isinstance(event.data, dict)
and "iteration" in event.data
):
if self._session_log_iteration_offset and isinstance(event.data, dict) and "iteration" in event.data:
offset = self._session_log_iteration_offset
event.data = {**event.data, "iteration": event.data["iteration"] + offset}
@@ -518,17 +531,35 @@ class EventBus:
return True
# Per-handler wall-clock timeout. A subscriber that deadlocks or
# blocks on slow I/O would otherwise freeze the publisher (and via
# ``await publish(...)`` any coroutine that emits events) indefinitely.
# 15 s is generous for legitimate handlers and cheap to tune later.
_HANDLER_TIMEOUT_SECONDS: float = 15.0
async def _execute_handlers(
self,
event: AgentEvent,
handlers: list[EventHandler],
) -> None:
"""Execute handlers concurrently with rate limiting."""
"""Execute handlers concurrently with rate limiting + hard timeout."""
async def run_handler(handler: EventHandler) -> None:
async with self._semaphore:
try:
await handler(event)
await asyncio.wait_for(
handler(event),
timeout=self._HANDLER_TIMEOUT_SECONDS,
)
except TimeoutError:
handler_name = getattr(handler, "__qualname__", repr(handler))
logger.error(
"EventBus handler %s exceeded %.0fs on event %s — dropping; "
"fix the handler or the publisher will stall",
handler_name,
self._HANDLER_TIMEOUT_SECONDS,
getattr(event.type, "name", event.type),
)
except Exception:
logger.exception(f"Handler error for {event.type}")
@@ -786,16 +817,28 @@ class EventBus:
input_tokens: int,
output_tokens: int,
cached_tokens: int = 0,
cache_creation_tokens: int = 0,
cost_usd: float = 0.0,
execution_id: str | None = None,
iteration: int | None = None,
) -> None:
"""Emit LLM turn completion with stop reason and model metadata."""
"""Emit LLM turn completion with stop reason and model metadata.
``cached_tokens`` and ``cache_creation_tokens`` are subsets of
``input_tokens`` (already inside provider ``prompt_tokens``).
Subscribers should display them, not add them to a total.
``cost_usd`` is the USD cost for this turn when known (Anthropic,
OpenAI, OpenRouter). 0.0 means unreported (not free).
"""
data: dict = {
"stop_reason": stop_reason,
"model": model,
"input_tokens": input_tokens,
"output_tokens": output_tokens,
"cached_tokens": cached_tokens,
"cache_creation_tokens": cache_creation_tokens,
"cost_usd": cost_usd,
}
if iteration is not None:
data["iteration"] = iteration
@@ -891,24 +934,22 @@ class EventBus:
self,
stream_id: str,
node_id: str,
prompt: str = "",
execution_id: str | None = None,
options: list[str] | None = None,
questions: list[dict] | None = None,
) -> None:
"""Emit a user-input request for interactive queen turns.
Args:
options: Optional predefined choices for the user (1-3 items).
The frontend appends an "Other" free-text option
automatically.
questions: Optional list of question dicts for multi-question
batches (from ask_user_multiple). Each dict has id,
prompt, and optional options.
questions: Optional list of question dicts from ``ask_user``.
Each dict has ``id``, ``prompt``, and optional ``options``
(2-3 predefined choices). The frontend renders the
QuestionWidget for a single-entry list and the
MultiQuestionWidget for 2+ entries. Free-text asks (no
options) stream the prompt separately as a chat message;
auto-block turns have no questions at all and fall back
to the normal text input.
"""
data: dict[str, Any] = {"prompt": prompt}
if options:
data["options"] = options
data: dict[str, Any] = {}
if questions:
data["questions"] = questions
await self.publish(
@@ -1047,6 +1088,94 @@ class EventBus:
)
)
async def emit_stream_ttft_exceeded(
self,
stream_id: str,
node_id: str,
ttft_seconds: float,
limit_seconds: float,
execution_id: str | None = None,
) -> None:
"""Emit when a stream stayed silent past the TTFT budget (no first event)."""
await self.publish(
AgentEvent(
type=EventType.STREAM_TTFT_EXCEEDED,
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={
"ttft_seconds": ttft_seconds,
"limit_seconds": limit_seconds,
},
)
)
async def emit_stream_inactive(
self,
stream_id: str,
node_id: str,
idle_seconds: float,
limit_seconds: float,
execution_id: str | None = None,
) -> None:
"""Emit when a stream that had produced events went silent past budget."""
await self.publish(
AgentEvent(
type=EventType.STREAM_INACTIVE,
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={
"idle_seconds": idle_seconds,
"limit_seconds": limit_seconds,
},
)
)
async def emit_stream_nudge_sent(
self,
stream_id: str,
node_id: str,
reason: str,
nudge_count: int,
execution_id: str | None = None,
) -> None:
"""Emit when the continue-nudge was injected (recovery, not retry)."""
await self.publish(
AgentEvent(
type=EventType.STREAM_NUDGE_SENT,
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={
"reason": reason,
"nudge_count": nudge_count,
},
)
)
async def emit_tool_call_replay_detected(
self,
stream_id: str,
node_id: str,
tool_name: str,
prior_seq: int,
execution_id: str | None = None,
) -> None:
"""Emit when the model is about to re-execute a prior successful call."""
await self.publish(
AgentEvent(
type=EventType.TOOL_CALL_REPLAY_DETECTED,
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={
"tool_name": tool_name,
"prior_seq": prior_seq,
},
)
)
async def emit_worker_completed(
self,
stream_id: str,
+78 -53
View File
@@ -16,20 +16,20 @@ from collections import OrderedDict
from collections.abc import Callable
from dataclasses import dataclass, field
from datetime import datetime
from typing import TYPE_CHECKING, Any
from typing import TYPE_CHECKING, Any, Literal
from framework.orchestrator.checkpoint_config import CheckpointConfig
from framework.orchestrator.orchestrator import ExecutionResult, Orchestrator
from framework.host.event_bus import EventBus
from framework.host.shared_state import IsolationLevel, SharedBufferManager
from framework.host.stream_runtime import StreamDecisionTracker, StreamRuntimeAdapter
from framework.orchestrator.checkpoint_config import CheckpointConfig
from framework.orchestrator.orchestrator import ExecutionResult, Orchestrator
if TYPE_CHECKING:
from framework.orchestrator.edge import GraphSpec
from framework.orchestrator.goal import Goal
from framework.llm.provider import LLMProvider, Tool
from framework.host.event_bus import AgentEvent
from framework.host.outcome_aggregator import OutcomeAggregator
from framework.llm.provider import LLMProvider, Tool
from framework.orchestrator.edge import GraphSpec
from framework.orchestrator.goal import Goal
from framework.storage.concurrent import ConcurrentStorage
from framework.storage.session_store import SessionStore
@@ -48,6 +48,8 @@ class ExecutionAlreadyRunningError(RuntimeError):
logger = logging.getLogger(__name__)
CancelExecutionResult = Literal["cancelled", "cancelling", "not_found"]
class GraphScopedEventBus(EventBus):
"""Proxy that stamps ``graph_id`` on every published event.
@@ -130,7 +132,7 @@ class ExecutionContext:
run_id: str | None = None # Unique ID per trigger() invocation
started_at: datetime = field(default_factory=datetime.now)
completed_at: datetime | None = None
status: str = "pending" # pending, running, completed, failed, paused
status: str = "pending" # pending, running, cancelling, completed, failed, paused, cancelled
class ExecutionManager:
@@ -315,6 +317,22 @@ class ExecutionManager:
"""Return IDs of all currently active executions."""
return list(self._active_executions.keys())
def _get_blocking_execution_ids_locked(self) -> list[str]:
"""Return executions that still block a replacement from starting.
An execution continues to block replacement until its task has
terminated and the task's final cleanup has removed its bookkeeping.
This is intentional: a timed-out cancellation does not mean the old
task is harmless. If it is still alive, it can still write shared
session state, so letting a replacement start would guarantee
overlapping mutations on the same session.
"""
blocking_ids: list[str] = list(self._active_executions.keys())
for execution_id, task in self._execution_tasks.items():
if not task.done() and execution_id not in self._active_executions:
blocking_ids.append(execution_id)
return blocking_ids
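# Caller-side sketch of the guarantee above (the trigger entry point and
# its signature are assumptions for illustration):
#
#   try:
#       await stream.trigger(goal)
#   except ExecutionAlreadyRunningError:
#       # a cancelled-but-alive task still holds bookkeeping; retry after
#       # its finally block runs rather than risk overlapping writes.
#       ...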
@property
def agent_idle_seconds(self) -> float:
"""Seconds since the last agent activity (LLM call, tool call, node transition).
@@ -396,15 +414,22 @@ class ExecutionManager:
async def stop(self) -> None:
"""Stop the execution stream and cancel active executions."""
if not self._running:
return
async with self._lock:
if not self._running:
return
self._running = False
self._running = False
# Cancel all active executions
tasks_to_wait = []
for _, task in self._execution_tasks.items():
if not task.done():
# Cancel all active executions, but keep bookkeeping until each
# task reaches its own cleanup path.
tasks_to_wait: list[asyncio.Task] = []
for execution_id, task in self._execution_tasks.items():
if task.done():
continue
ctx = self._active_executions.get(execution_id)
if ctx is not None:
ctx.status = "cancelling"
self._cancel_reasons.setdefault(execution_id, "Execution cancelled")
task.cancel()
tasks_to_wait.append(task)
@@ -418,9 +443,6 @@ class ExecutionManager:
len(pending),
)
self._execution_tasks.clear()
self._active_executions.clear()
logger.info(f"ExecutionStream '{self.stream_id}' stopped")
# Emit stream stopped event
@@ -452,9 +474,7 @@ class ExecutionManager:
for executor in self._active_executors.values():
node = executor.node_registry.get(node_id)
if node is not None and hasattr(node, "inject_event"):
await node.inject_event(
content, is_client_input=is_client_input, image_content=image_content
)
await node.inject_event(content, is_client_input=is_client_input, image_content=image_content)
return True
return False
@@ -571,12 +591,16 @@ class ExecutionManager:
)
async with self._lock:
if not self._running:
raise RuntimeError(f"ExecutionStream '{self.stream_id}' is not running")
blocking_ids = self._get_blocking_execution_ids_locked()
if blocking_ids:
raise ExecutionAlreadyRunningError(self.stream_id, blocking_ids)
self._active_executions[execution_id] = ctx
self._completion_events[execution_id] = asyncio.Event()
# Start execution task
task = asyncio.create_task(self._run_execution(ctx))
self._execution_tasks[execution_id] = task
self._execution_tasks[execution_id] = asyncio.create_task(self._run_execution(ctx))
logger.debug(f"Queued execution {execution_id} for stream {self.stream_id}")
return execution_id
@@ -669,9 +693,7 @@ class ExecutionManager:
if self._runtime_log_store:
from framework.tracker.runtime_logger import RuntimeLogger
runtime_logger = RuntimeLogger(
store=self._runtime_log_store, agent_id=self.graph.id
)
runtime_logger = RuntimeLogger(store=self._runtime_log_store, agent_id=self.graph.id)
# Derive storage from session_store (graph-specific for secondary
# graphs) so that all files — conversations, state, checkpoints,
@@ -887,9 +909,7 @@ class ExecutionManager:
if has_result and result.paused_at:
await self._write_session_state(execution_id, ctx, result=result)
else:
await self._write_session_state(
execution_id, ctx, error="Execution cancelled"
)
await self._write_session_state(execution_id, ctx, error="Execution cancelled")
# Emit SSE event so the frontend knows the execution stopped.
# The executor does NOT emit on CancelledError, so there is no
@@ -1189,7 +1209,7 @@ class ExecutionManager:
"""Get execution context."""
return self._active_executions.get(execution_id)
async def cancel_execution(self, execution_id: str, *, reason: str | None = None) -> bool:
async def cancel_execution(self, execution_id: str, *, reason: str | None = None) -> CancelExecutionResult:
"""
Cancel a running execution.
@@ -1200,33 +1220,38 @@ class ExecutionManager:
provided, defaults to "Execution cancelled".
Returns:
True if cancelled, False if not found
"cancelled" if the task fully exited within the grace period,
"cancelling" if cancellation was requested but the task is still
shutting down, or "not_found" if no active task exists.
"""
task = self._execution_tasks.get(execution_id)
if task and not task.done():
async with self._lock:
task = self._execution_tasks.get(execution_id)
if task is None or task.done():
return "not_found"
# Store the reason so the CancelledError handler can use it
# when emitting the pause/fail event.
self._cancel_reasons[execution_id] = reason or "Execution cancelled"
ctx = self._active_executions.get(execution_id)
if ctx is not None:
ctx.status = "cancelling"
task.cancel()
# Wait briefly for the task to finish. Don't block indefinitely —
# the task may be stuck in a long LLM API call that doesn't
# respond to cancellation quickly.
done, _ = await asyncio.wait({task}, timeout=5.0)
if not done:
# Task didn't finish within timeout — clean up bookkeeping now
# so the session doesn't think it still has running executions.
# The task will continue winding down in the background and its
# finally block will harmlessly pop already-removed keys.
logger.warning(
"Execution %s did not finish within cancel timeout; force-cleaning bookkeeping",
execution_id,
)
async with self._lock:
self._active_executions.pop(execution_id, None)
self._execution_tasks.pop(execution_id, None)
self._active_executors.pop(execution_id, None)
return True
return False
# Wait briefly for the task to finish. Don't block indefinitely —
# the task may be stuck in a long LLM API call that doesn't
# respond to cancellation quickly.
done, _ = await asyncio.wait({task}, timeout=5.0)
if not done:
# Keep bookkeeping in place until the task's own finally block runs.
# We intentionally do not add deferred cleanup keyed by execution_id
# here because resumed executions reuse the same id; a delayed pop
# could otherwise delete bookkeeping that belongs to the new run.
logger.warning(
"Execution %s did not finish within cancel timeout; leaving bookkeeping in place until task exit",
execution_id,
)
return "cancelling"
return "cancelled"
# === STATS AND MONITORING ===
+487
View File
@@ -0,0 +1,487 @@
"""Per-colony SQLite task queue + progress ledger.
Every colony gets its own ``progress.db`` under ``~/.hive/colonies/{name}/data/``.
The DB holds the colony's task queue plus per-task step and SOP checklist
rows. Workers claim tasks atomically, write progress as they execute, and
verify SOP gates before marking a task done. This gives cross-run memory
that the existing per-iteration stall detectors don't have.
The DB is driven by agents via the ``sqlite3`` CLI through
``execute_command_tool``. This module handles framework-side lifecycle:
creation, migration, queen-side bulk seeding, stale-claim reclamation.
Concurrency model:
- WAL mode on from day one so 100 concurrent workers don't serialize.
- Workers hold NO long-running connection; they ``sqlite3`` per call,
which naturally releases locks between LLM turns.
- Atomic claim via ``BEGIN IMMEDIATE; UPDATE tasks SET status='claimed'
WHERE id=(SELECT ... LIMIT 1)``. The subquery-form UPDATE runs inside
the immediate transaction so racers either win the row or find zero
affected rows.
- Stale-claim reclaimer runs on host startup: claims older than
``stale_after_minutes`` get returned to ``pending`` and the row's
``retry_count`` increments. When ``retry_count >= max_retries`` the
row is moved to ``failed`` instead.
All writes go through ``BEGIN IMMEDIATE`` so racing readers see
consistent snapshots.
"""
from __future__ import annotations
import json
import logging
import sqlite3
import uuid
from datetime import UTC, datetime
from pathlib import Path
from typing import Any
logger = logging.getLogger(__name__)
SCHEMA_VERSION = 1
_SCHEMA_V1 = """
CREATE TABLE IF NOT EXISTS tasks (
id TEXT PRIMARY KEY,
seq INTEGER,
priority INTEGER NOT NULL DEFAULT 0,
goal TEXT NOT NULL,
payload TEXT,
status TEXT NOT NULL DEFAULT 'pending',
worker_id TEXT,
claim_token TEXT,
claimed_at TEXT,
started_at TEXT,
completed_at TEXT,
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
retry_count INTEGER NOT NULL DEFAULT 0,
max_retries INTEGER NOT NULL DEFAULT 3,
last_error TEXT,
parent_task_id TEXT REFERENCES tasks(id) ON DELETE SET NULL,
source TEXT
);
CREATE TABLE IF NOT EXISTS steps (
id TEXT PRIMARY KEY,
task_id TEXT NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
seq INTEGER NOT NULL,
title TEXT NOT NULL,
detail TEXT,
status TEXT NOT NULL DEFAULT 'pending',
evidence TEXT,
worker_id TEXT,
started_at TEXT,
completed_at TEXT,
UNIQUE (task_id, seq)
);
CREATE TABLE IF NOT EXISTS sop_checklist (
id TEXT PRIMARY KEY,
task_id TEXT NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
key TEXT NOT NULL,
description TEXT NOT NULL,
required INTEGER NOT NULL DEFAULT 1,
done_at TEXT,
done_by TEXT,
note TEXT,
UNIQUE (task_id, key)
);
CREATE TABLE IF NOT EXISTS colony_meta (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_tasks_claimable
ON tasks(status, priority DESC, seq, created_at)
WHERE status = 'pending';
CREATE INDEX IF NOT EXISTS idx_steps_task_seq
ON steps(task_id, seq);
CREATE INDEX IF NOT EXISTS idx_sop_required_open
ON sop_checklist(task_id, required, done_at);
CREATE INDEX IF NOT EXISTS idx_tasks_status
ON tasks(status, updated_at);
"""
_PRAGMAS = (
"PRAGMA journal_mode = WAL;",
"PRAGMA synchronous = NORMAL;",
"PRAGMA foreign_keys = ON;",
"PRAGMA busy_timeout = 5000;",
)
def _now_iso() -> str:
return datetime.now(UTC).isoformat(timespec="seconds")
def _new_id() -> str:
return str(uuid.uuid4())
def _connect(db_path: Path) -> sqlite3.Connection:
"""Open a connection with the standard pragmas applied.
WAL mode is sticky on the file once set, so re-applying on every
open is cheap. The other pragmas are per-connection and must be
set each time.
"""
con = sqlite3.connect(str(db_path), isolation_level=None, timeout=5.0)
for pragma in _PRAGMAS:
con.execute(pragma)
return con
def ensure_progress_db(colony_dir: Path) -> Path:
"""Create or migrate ``{colony_dir}/data/progress.db``.
Idempotent: safe to call on an already-initialized DB. Returns the
absolute path to the DB file.
Steps:
1. Ensure ``data/`` subdir exists.
2. Open the DB (creates the file if missing).
3. Apply WAL + pragmas.
4. Read ``PRAGMA user_version``; if < SCHEMA_VERSION, run the
schema block and bump user_version.
5. Reclaim any stale claims left from previous runs.
6. Patch every ``*.json`` worker config in the colony dir to
inject ``input_data.db_path`` and ``input_data.colony_id`` so
pre-existing colonies (forked before this feature landed) get
the tracker wiring on their next spawn.
"""
data_dir = Path(colony_dir) / "data"
data_dir.mkdir(parents=True, exist_ok=True)
db_path = data_dir / "progress.db"
con = _connect(db_path)
try:
current_version = con.execute("PRAGMA user_version").fetchone()[0]
if current_version < SCHEMA_VERSION:
con.executescript(_SCHEMA_V1)
con.execute(f"PRAGMA user_version = {SCHEMA_VERSION}")
con.execute(
"INSERT OR REPLACE INTO colony_meta(key, value, updated_at) VALUES (?, ?, ?)",
("schema_version", str(SCHEMA_VERSION), _now_iso()),
)
logger.info("progress_db: initialized schema v%d at %s", SCHEMA_VERSION, db_path)
reclaimed = _reclaim_stale_inner(con, stale_after_minutes=15)
if reclaimed:
logger.info(
"progress_db: reclaimed %d stale claims at startup (%s)",
reclaimed,
db_path,
)
finally:
con.close()
resolved_db_path = db_path.resolve()
_patch_worker_configs(Path(colony_dir), resolved_db_path)
return resolved_db_path
def _patch_worker_configs(colony_dir: Path, db_path: Path) -> int:
"""Inject ``input_data.db_path`` + ``input_data.colony_id`` +
``input_data.colony_data_dir`` into existing ``worker.json`` files
in a colony directory.
Runs on every ``ensure_progress_db`` call so colonies that were
forked before this feature landed get their worker spawn messages
patched in place. Idempotent: if ``input_data`` already contains
all three values, the file is not rewritten.
Returns the number of files that were actually modified (0 on
the common case of already-patched colonies).
Why ``colony_data_dir``? ``db_path`` alone points agents at
``progress.db``; for anything else (custom SQLite stores, JSON
ledgers, scraped artefacts) they need the *directory* so they
stop creating state under ``~/.hive/skills/``, which holds skill
*definitions*, not runtime data. See
``_default_skills/colony-storage-paths/SKILL.md``.
"""
colony_id = colony_dir.name
abs_db = str(db_path)
abs_data_dir = str(db_path.parent)
patched = 0
for worker_cfg in colony_dir.glob("*.json"):
# Only patch files that look like worker configs (have the
# worker_meta shape). ``metadata.json`` and ``triggers.json``
# are colony-level and must not be touched.
if worker_cfg.name in ("metadata.json", "triggers.json"):
continue
try:
data = json.loads(worker_cfg.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError):
continue
if not isinstance(data, dict) or "system_prompt" not in data:
# Not a worker config (lacks the worker_meta schema).
continue
input_data = data.get("input_data")
if not isinstance(input_data, dict):
input_data = {}
if (
input_data.get("db_path") == abs_db
and input_data.get("colony_id") == colony_id
and input_data.get("colony_data_dir") == abs_data_dir
):
continue # already patched
input_data["db_path"] = abs_db
input_data["colony_id"] = colony_id
input_data["colony_data_dir"] = abs_data_dir
data["input_data"] = input_data
try:
worker_cfg.write_text(json.dumps(data, indent=2, ensure_ascii=False), encoding="utf-8")
patched += 1
except OSError as e:
logger.warning("progress_db: failed to patch worker config %s: %s", worker_cfg, e)
if patched:
logger.info(
"progress_db: patched %d worker config(s) in colony '%s' with db_path + colony_data_dir",
patched,
colony_id,
)
return patched
def ensure_all_colony_dbs(colonies_root: Path | None = None) -> list[Path]:
"""Idempotently ensure every existing colony has a progress.db.
Called on framework host startup to backfill older colonies and
run the stale-claim reclaimer on all of them in one pass.
"""
if colonies_root is None:
colonies_root = Path.home() / ".hive" / "colonies"
if not colonies_root.is_dir():
return []
initialized: list[Path] = []
for entry in sorted(colonies_root.iterdir()):
if not entry.is_dir():
continue
try:
initialized.append(ensure_progress_db(entry))
except Exception as e:
logger.warning("progress_db: failed to ensure DB for colony '%s': %s", entry.name, e)
return initialized
def seed_tasks(
db_path: Path,
tasks: list[dict[str, Any]],
*,
source: str = "queen_create",
) -> list[str]:
"""Bulk-insert tasks (with optional nested steps + sop_items).
Each task dict accepts:
- goal: str (required)
- seq: int (optional ordering hint)
- priority: int (default 0)
- payload: dict | str | None (stored as JSON text)
- max_retries: int (default 3)
- parent_task_id: str | None
- steps: list[{"title": str, "detail"?: str}] (optional)
- sop_items: list[{"key": str, "description": str, "required"?: bool, "note"?: str}] (optional)
All rows are inserted in a single BEGIN IMMEDIATE transaction so
10k-row seeds finish in one disk flush. Returns the created task ids
in the same order as input.
"""
"""
if not tasks:
return []
created_ids: list[str] = []
now = _now_iso()
con = _connect(Path(db_path))
try:
con.execute("BEGIN IMMEDIATE")
for idx, task in enumerate(tasks):
goal = task.get("goal")
if not goal:
raise ValueError(f"task[{idx}] missing required 'goal' field")
task_id = task.get("id") or _new_id()
payload = task.get("payload")
if payload is not None and not isinstance(payload, str):
payload = json.dumps(payload, ensure_ascii=False)
con.execute(
"""
INSERT INTO tasks (
id, seq, priority, goal, payload, status,
created_at, updated_at, max_retries, parent_task_id, source
) VALUES (?, ?, ?, ?, ?, 'pending', ?, ?, ?, ?, ?)
""",
(
task_id,
task.get("seq"),
int(task.get("priority", 0)),
goal,
payload,
now,
now,
int(task.get("max_retries", 3)),
task.get("parent_task_id"),
source,
),
)
for step_seq, step in enumerate(task.get("steps") or [], start=1):
if not step.get("title"):
raise ValueError(f"task[{idx}].steps[{step_seq - 1}] missing required 'title'")
con.execute(
"""
INSERT INTO steps (id, task_id, seq, title, detail, status)
VALUES (?, ?, ?, ?, ?, 'pending')
""",
(
_new_id(),
task_id,
step.get("seq", step_seq),
step["title"],
step.get("detail"),
),
)
for sop in task.get("sop_items") or []:
key = sop.get("key")
description = sop.get("description")
if not key or not description:
raise ValueError(f"task[{idx}].sop_items missing 'key' or 'description'")
con.execute(
"""
INSERT INTO sop_checklist
(id, task_id, key, description, required, note)
VALUES (?, ?, ?, ?, ?, ?)
""",
(
_new_id(),
task_id,
key,
description,
1 if sop.get("required", True) else 0,
sop.get("note"),
),
)
created_ids.append(task_id)
con.execute("COMMIT")
except Exception:
con.execute("ROLLBACK")
raise
finally:
con.close()
return created_ids
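# Hedged seeding sketch using the shapes listed in the docstring (goal,
# step, and SOP texts are illustrative):
#
#   ids = seed_tasks(
#       db_path,
#       [{
#           "goal": "Scrape pricing page",
#           "priority": 5,
#           "steps": [{"title": "fetch HTML"}, {"title": "extract table"}],
#           "sop_items": [{"key": "evidence", "description": "save raw HTML"}],
#       }],
#   )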
def enqueue_task(
db_path: Path,
goal: str,
*,
steps: list[dict[str, Any]] | None = None,
sop_items: list[dict[str, Any]] | None = None,
payload: Any = None,
priority: int = 0,
parent_task_id: str | None = None,
source: str = "enqueue_tool",
) -> str:
"""Append a single task to an existing queue. Thin wrapper over seed_tasks."""
ids = seed_tasks(
db_path,
[
{
"goal": goal,
"steps": steps,
"sop_items": sop_items,
"payload": payload,
"priority": priority,
"parent_task_id": parent_task_id,
}
],
source=source,
)
return ids[0]
def _reclaim_stale_inner(con: sqlite3.Connection, *, stale_after_minutes: int) -> int:
"""Reclaim stale claims. Runs inside an existing open connection.
Two-step:
1. Tasks past max_retries go to 'failed' with last_error populated.
2. Remaining stale claims return to 'pending', retry_count++.
"""
cutoff_expr = f"datetime('now', '-{int(stale_after_minutes)} minutes')"
con.execute("BEGIN IMMEDIATE")
try:
con.execute(
f"""
UPDATE tasks
SET status = 'failed',
last_error = COALESCE(last_error, 'exceeded max_retries after stale claim'),
completed_at = datetime('now'),
updated_at = datetime('now')
WHERE status IN ('claimed', 'in_progress')
AND claimed_at IS NOT NULL
AND claimed_at < {cutoff_expr}
AND retry_count >= max_retries
"""
)
cur = con.execute(
f"""
UPDATE tasks
SET status = 'pending',
worker_id = NULL,
claim_token = NULL,
claimed_at = NULL,
started_at = NULL,
retry_count = retry_count + 1,
updated_at = datetime('now')
WHERE status IN ('claimed', 'in_progress')
AND claimed_at IS NOT NULL
AND claimed_at < {cutoff_expr}
AND retry_count < max_retries
"""
)
reclaimed = cur.rowcount or 0
con.execute("COMMIT")
return reclaimed
except Exception:
con.execute("ROLLBACK")
raise
def reclaim_stale(db_path: Path, stale_after_minutes: int = 15) -> int:
"""Public wrapper that opens its own connection."""
con = _connect(Path(db_path))
try:
return _reclaim_stale_inner(con, stale_after_minutes=stale_after_minutes)
finally:
con.close()
__all__ = [
"SCHEMA_VERSION",
"ensure_progress_db",
"ensure_all_colony_dbs",
"seed_tasks",
"enqueue_task",
"reclaim_stale",
]
-2
View File
@@ -2,8 +2,6 @@
import asyncio
import logging
import time
from dataclasses import dataclass, field
from enum import StrEnum
from typing import Any
+2 -7
View File
@@ -136,9 +136,7 @@ class StreamDecisionTracker:
self._run_locks[execution_id] = asyncio.Lock()
self._current_nodes[execution_id] = "unknown"
logger.debug(
f"Started run {run_id} for execution {execution_id} in stream {self.stream_id}"
)
logger.debug(f"Started run {run_id} for execution {execution_id} in stream {self.stream_id}")
return run_id
def end_run(
@@ -334,10 +332,7 @@ class StreamDecisionTracker:
"""
run = self._runs.get(execution_id)
if run is None:
logger.warning(
f"report_problem called but no run for execution {execution_id}: "
f"[{severity}] {description}"
)
logger.warning(f"report_problem called but no run for execution {execution_id}: [{severity}] {description}")
return ""
return run.add_problem(
+1 -2
View File
@@ -89,8 +89,7 @@ class WebhookServer:
)
await self._site.start()
logger.info(
f"Webhook server started on {self._config.host}:{self._config.port} "
f"with {len(self._routes)} route(s)"
f"Webhook server started on {self._config.host}:{self._config.port} with {len(self._routes)} route(s)"
)
async def stop(self) -> None:
+73 -29
View File
@@ -92,9 +92,7 @@ class Worker:
# result.json, data). Required when seed_conversation() is used —
# we deliberately do NOT fall back to CWD, which previously caused
# conversation parts to leak into the process working directory.
self._storage_path: Path | None = (
Path(storage_path) if storage_path is not None else None
)
self._storage_path: Path | None = Path(storage_path) if storage_path is not None else None
self._task_handle: asyncio.Task | None = None
self._started_at: float = 0.0
self._result: WorkerResult | None = None
@@ -147,20 +145,44 @@ class Worker:
self.status = WorkerStatus.RUNNING
self._started_at = time.monotonic()
# Scope browser profile (and any other CONTEXT_PARAMS) to this
# worker. asyncio.create_task() copies the parent's contextvars,
# so without this override every spawned worker inherits the
# queen's `profile=<queen_session_id>` and its browser_* tool
# calls end up driving the queen's Chrome tab group. Setting
# it here (inside the new Task's context) shadows the parent
# value without affecting the queen's ongoing calls.
try:
from framework.loader.tool_registry import ToolRegistry
from framework.tasks.scoping import session_task_list_id
ctx = self._context
agent_id = getattr(ctx, "agent_id", None) or self.id
list_id = getattr(ctx, "task_list_id", None) or session_task_list_id(agent_id, self.id)
ToolRegistry.set_execution_context(
profile=self.id,
agent_id=agent_id,
task_list_id=list_id,
colony_id=getattr(ctx, "colony_id", None),
picked_up_from=getattr(ctx, "picked_up_from", None),
)
except Exception:
logger.debug(
"Worker %s: failed to scope execution context",
self.id,
exc_info=True,
)
try:
result = await self._agent_loop.execute(self._context)
duration = time.monotonic() - self._started_at
if result.success:
self.status = WorkerStatus.COMPLETED
self._result = self._build_result(
result, duration, default_status="success"
)
self._result = self._build_result(result, duration, default_status="success")
else:
self.status = WorkerStatus.FAILED
self._result = self._build_result(
result, duration, default_status="failed"
)
self._result = self._build_result(result, duration, default_status="failed")
await self._emit_terminal_events(result)
@@ -176,13 +198,28 @@ class Worker:
except asyncio.CancelledError:
self.status = WorkerStatus.STOPPED
duration = time.monotonic() - self._started_at
self._result = WorkerResult(
error="Worker stopped by queen",
duration_seconds=duration,
status="stopped",
summary="Worker was cancelled before completion.",
)
await self._emit_terminal_events(None, force_status="stopped")
# Preserve any explicit report the worker's LLM already filed
# via ``report_to_parent`` before being cancelled — the caller
# cares about that payload even on a hard stop. Only fall back
# to the canned "stopped" message when no explicit report exists.
explicit = self._explicit_report
if explicit is not None:
self._result = WorkerResult(
error="Worker stopped by queen after reporting",
duration_seconds=duration,
status=explicit["status"],
summary=explicit["summary"],
data=explicit["data"],
)
await self._emit_terminal_events(None, force_status=explicit["status"])
else:
self._result = WorkerResult(
error="Worker stopped by queen",
duration_seconds=duration,
status="stopped",
summary="Worker was cancelled before completion.",
)
await self._emit_terminal_events(None, force_status="stopped")
return self._result
except Exception as exc:
@@ -292,11 +329,7 @@ class Worker:
# EXECUTION_COMPLETED / EXECUTION_FAILED (backwards-compat)
if agent_result is not None:
lifecycle_type = (
EventType.EXECUTION_COMPLETED
if agent_result.success
else EventType.EXECUTION_FAILED
)
lifecycle_type = EventType.EXECUTION_COMPLETED if agent_result.success else EventType.EXECUTION_FAILED
await self._event_bus.publish(
AgentEvent(
type=lifecycle_type,
@@ -309,11 +342,7 @@ class Worker:
"task": self.task,
"success": agent_result.success,
"error": agent_result.error,
"output_keys": (
list(agent_result.output.keys())
if agent_result.output
else []
),
"output_keys": (list(agent_result.output.keys()) if agent_result.output else []),
},
)
)
@@ -348,7 +377,23 @@ class Worker:
async def start_background(self) -> None:
"""Spawn the worker's run() as an asyncio background task."""
self._task_handle = asyncio.create_task(self.run())
self._task_handle = asyncio.create_task(self.run(), name=f"worker:{self.id}")
# Surface any exception that escapes run(); without this callback
# a crash here only becomes visible when stop() eventually awaits
# the handle (and is silently lost if stop() is never called).
self._task_handle.add_done_callback(self._on_task_done)
def _on_task_done(self, task: asyncio.Task) -> None:
if task.cancelled():
return
exc = task.exception()
if exc is not None:
logger.error(
"Worker '%s' background task crashed: %s",
self.id,
exc,
exc_info=exc,
)
async def stop(self) -> None:
"""Cancel the worker's background task, if any."""
@@ -388,8 +433,7 @@ class Worker:
"""
if self.status != WorkerStatus.PENDING:
raise RuntimeError(
f"seed_conversation must be called before start_background "
f"(worker {self.id} is {self.status})"
f"seed_conversation must be called before start_background (worker {self.id} is {self.status})"
)
# Write parts directly to the worker's on-disk conversation store
+1 -3
View File
@@ -50,9 +50,7 @@ class AnthropicProvider(LLMProvider):
# Delegate to LiteLLMProvider internally.
self.api_key = api_key or _get_api_key_from_credential_store()
if not self.api_key:
raise ValueError(
"Anthropic API key required. Set ANTHROPIC_API_KEY env var or pass api_key."
)
raise ValueError("Anthropic API key required. Set ANTHROPIC_API_KEY env var or pass api_key.")
self.model = model
+15 -29
View File
@@ -53,17 +53,9 @@ _TOKEN_REFRESH_BUFFER_SECS = 60
# Credentials file in ~/.hive/ (native implementation)
_ACCOUNTS_FILE = Path.home() / ".hive" / "antigravity-accounts.json"
_IDE_STATE_DB_MAC = (
Path.home()
/ "Library"
/ "Application Support"
/ "Antigravity"
/ "User"
/ "globalStorage"
/ "state.vscdb"
)
_IDE_STATE_DB_LINUX = (
Path.home() / ".config" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
Path.home() / "Library" / "Application Support" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
)
_IDE_STATE_DB_LINUX = Path.home() / ".config" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
_IDE_STATE_DB_KEY = "antigravityUnifiedStateSync.oauthToken"
_BASE_HEADERS: dict[str, str] = {
@@ -368,9 +360,7 @@ def _to_gemini_contents(
def _map_finish_reason(reason: str) -> str:
return {"STOP": "stop", "MAX_TOKENS": "max_tokens", "OTHER": "tool_use"}.get(
(reason or "").upper(), "stop"
)
return {"STOP": "stop", "MAX_TOKENS": "max_tokens", "OTHER": "tool_use"}.get((reason or "").upper(), "stop")
def _parse_complete_response(raw: dict[str, Any], model: str) -> LLMResponse:
@@ -538,8 +528,7 @@ class AntigravityProvider(LLMProvider):
return self._access_token
raise RuntimeError(
"No valid Antigravity credentials. "
"Run: uv run python core/antigravity_auth.py auth account add"
"No valid Antigravity credentials. Run: uv run python core/antigravity_auth.py auth account add"
)
# --- Request building -------------------------------------------------- #
@@ -593,11 +582,7 @@ class AntigravityProvider(LLMProvider):
token = self._ensure_token()
body_bytes = json.dumps(body).encode("utf-8")
path = (
"/v1internal:streamGenerateContent?alt=sse"
if streaming
else "/v1internal:generateContent"
)
path = "/v1internal:streamGenerateContent?alt=sse" if streaming else "/v1internal:generateContent"
headers = {
**_BASE_HEADERS,
"Authorization": f"Bearer {token}",
@@ -619,9 +604,7 @@ class AntigravityProvider(LLMProvider):
if result:
self._access_token, self._token_expires_at = result
headers["Authorization"] = f"Bearer {self._access_token}"
req2 = urllib.request.Request(
url, data=body_bytes, headers=headers, method="POST"
)
req2 = urllib.request.Request(url, data=body_bytes, headers=headers, method="POST")
try:
return urllib.request.urlopen(req2, timeout=120) # noqa: S310
except urllib.error.HTTPError as exc2:
@@ -642,9 +625,7 @@ class AntigravityProvider(LLMProvider):
last_exc = exc
continue
raise RuntimeError(
f"All Antigravity endpoints failed. Last error: {last_exc}"
) from last_exc
raise RuntimeError(f"All Antigravity endpoints failed. Last error: {last_exc}") from last_exc
# --- LLMProvider interface --------------------------------------------- #
@@ -672,10 +653,17 @@ class AntigravityProvider(LLMProvider):
system: str = "",
tools: list[Tool] | None = None,
max_tokens: int = 4096,
system_dynamic_suffix: str | None = None,
) -> AsyncIterator[StreamEvent]:
import asyncio # noqa: PLC0415
import concurrent.futures # noqa: PLC0415
# Antigravity (Google's proprietary endpoint) doesn't expose a
# cache_control hook. Concatenate the dynamic suffix so its shape
# matches the legacy single-string call site.
if system_dynamic_suffix:
system = f"{system}\n\n{system_dynamic_suffix}" if system else system_dynamic_suffix
loop = asyncio.get_running_loop()
queue: asyncio.Queue[StreamEvent | None] = asyncio.Queue()
@@ -683,9 +671,7 @@ class AntigravityProvider(LLMProvider):
try:
body = self._build_body(messages, system, tools, max_tokens)
http_resp = self._post(body, streaming=True)
for event in _parse_sse_stream(
http_resp, self.model, self._thought_sigs.__setitem__
):
for event in _parse_sse_stream(http_resp, self.model, self._thought_sigs.__setitem__):
loop.call_soon_threadsafe(queue.put_nowait, event)
except Exception as exc:
logger.error("Antigravity stream error: %s", exc)
+30 -88
View File
@@ -1,106 +1,48 @@
"""Model capability checks for LLM providers.
Vision support rules are derived from official vendor documentation:
- ZAI (z.ai): per docs.z.ai/guides/vlm, GLM-4.6V variants are vision; GLM-5/4.6/4.7 are text-only
- MiniMax: per platform.minimax.io/docs, minimax-vl-01 is vision; M2.x are text-only
- DeepSeek: per api-docs.deepseek.com, deepseek-vl2 is vision; chat/reasoner are text-only
- Cerebras: per inference-docs.cerebras.ai, no vision models at all
- Groq: per console.groq.com/docs/vision, vision capable; treat as supported by default
- Ollama/LM Studio/vLLM/llama.cpp: local runners denied by default; model names
don't reliably indicate vision support, so users must configure explicitly
Vision support is sourced from the curated ``model_catalog.json``. Each model
entry carries an optional ``supports_vision`` boolean; unknown models default
to vision-capable so hosted frontier models work out of the box. To toggle
support for a model, edit its catalog entry rather than this file.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
def _model_name(model: str) -> str:
"""Return the bare model name after stripping any 'provider/' prefix."""
if "/" in model:
return model.split("/", 1)[1]
return model
from framework.llm.model_catalog import model_supports_vision
# Step 1: explicit vision allow-list — these always support images regardless
# of what the provider-level rules say. Checked first so that e.g. glm-4.6v
# is allowed even though glm-4.6 is denied.
_VISION_ALLOW_BARE_PREFIXES: tuple[str, ...] = (
# ZAI/GLM vision models (docs.z.ai/guides/vlm)
"glm-4v", # GLM-4V series (legacy)
"glm-4.6v", # GLM-4.6V, GLM-4.6V-flash, GLM-4.6V-flashx
# DeepSeek vision models
"deepseek-vl", # deepseek-vl2, deepseek-vl2-small, deepseek-vl2-tiny
# MiniMax vision model
"minimax-vl", # minimax-vl-01
)
# Step 2: provider-level deny — every model from this provider is text-only.
_TEXT_ONLY_PROVIDER_PREFIXES: tuple[str, ...] = (
# Cerebras: inference-docs.cerebras.ai lists only text models
"cerebras/",
# Local runners: model names don't reliably indicate vision support
"ollama/",
"ollama_chat/",
"lm_studio/",
"vllm/",
"llamacpp/",
)
# Step 3: per-model deny — text-only models within otherwise mixed providers.
# Matched against the bare model name (provider prefix stripped, lower-cased).
# The vision allow-list above is checked first, so vision variants of the same
# family are already handled before these deny patterns are reached.
_TEXT_ONLY_MODEL_BARE_PREFIXES: tuple[str, ...] = (
# --- ZAI / GLM family ---
# text-only: glm-5, glm-4.6, glm-4.7, glm-4.5, zai-glm-*
# vision: glm-4v, glm-4.6v (caught by allow-list above)
"glm-5",
"glm-4.6", # bare glm-4.6 is text-only; glm-4.6v is caught by allow-list
"glm-4.7",
"glm-4.5",
"zai-glm",
# --- DeepSeek ---
# text-only: deepseek-chat, deepseek-coder, deepseek-reasoner
# vision: deepseek-vl2 (caught by allow-list above)
# Note: LiteLLM's deepseek handler may flatten content lists for some models;
# VL models are allowed through and rely on LiteLLM's native VL support.
"deepseek-chat",
"deepseek-coder",
"deepseek-reasoner",
# --- MiniMax ---
# text-only: minimax-m2.*, minimax-text-*, abab* (legacy)
# vision: minimax-vl-01 (caught by allow-list above)
"minimax-m2",
"minimax-text",
"abab",
)
if TYPE_CHECKING:
from framework.llm.provider import Tool
def supports_image_tool_results(model: str) -> bool:
"""Return whether *model* can receive image content in messages.
Used to gate both user-message images and tool-result image blocks.
Logic (checked in order):
1. Vision allow-list True (known vision model, skip all denies)
2. Provider deny False (entire provider is text-only)
3. Model deny False (specific text-only model within a mixed provider)
4. Default True (assume capable; unknown providers and models)
Thin wrapper over :func:`model_supports_vision` so existing call sites
keep working. Used to gate both user-message images and tool-result
image blocks. Empty model strings are treated as capable so the default
code path doesn't strip images before a provider is selected.
"""
model_lower = model.lower()
bare = _model_name(model_lower)
# 1. Explicit vision allow — takes priority over all denies
if any(bare.startswith(p) for p in _VISION_ALLOW_BARE_PREFIXES):
if not model:
return True
return model_supports_vision(model)
# 2. Provider-level deny (all models from this provider are text-only)
if any(model_lower.startswith(p) for p in _TEXT_ONLY_PROVIDER_PREFIXES):
return False
# 3. Per-model deny (text-only variants within mixed-capability families)
if any(bare.startswith(p) for p in _TEXT_ONLY_MODEL_BARE_PREFIXES):
return False
def filter_tools_for_model(tools: list[Tool], model: str) -> tuple[list[Tool], list[str]]:
"""Drop image-producing tools for text-only models.
# 5. Default: assume vision capable
# Covers: OpenAI, Anthropic, Google, Mistral, Kimi, and other hosted providers
return True
Returns ``(filtered_tools, hidden_names)``. For vision-capable models
(or when *model* is empty) the input list is returned unchanged and
``hidden_names`` is empty. For text-only models any tool with
``produces_image=True`` is removed so the LLM never sees it in its
schema; this avoids wasted calls and stale "screenshot failed" entries
in agent memory.
"""
if not model or supports_image_tool_results(model):
return list(tools), []
hidden = [t.name for t in tools if t.produces_image]
if not hidden:
return list(tools), []
kept = [t for t in tools if not t.produces_image]
return kept, hidden
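# Usage sketch (the tool list is illustrative; the model id is a
# text-only entry from the catalog below):
#
#   tools, hidden = filter_tools_for_model(all_tools, "groq/llama-3.1-8b-instant")
#   # `hidden` names every dropped produces_image=True tool — log it so a
#   # missing screenshot tool is explainable from the run's records.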
File diff suppressed because it is too large Load Diff
+6
View File
@@ -155,8 +155,11 @@ class MockLLMProvider(LLMProvider):
response_format: dict[str, Any] | None = None,
json_mode: bool = False,
max_retries: int | None = None,
system_dynamic_suffix: str | None = None,
) -> LLMResponse:
"""Async mock completion (no I/O, returns immediately)."""
if system_dynamic_suffix:
system = f"{system}\n\n{system_dynamic_suffix}" if system else system_dynamic_suffix
return self.complete(
messages=messages,
system=system,
@@ -173,6 +176,7 @@ class MockLLMProvider(LLMProvider):
system: str = "",
tools: list[Tool] | None = None,
max_tokens: int = 4096,
system_dynamic_suffix: str | None = None,
) -> AsyncIterator[StreamEvent]:
"""Stream a mock completion as word-level TextDeltaEvents.
@@ -180,6 +184,8 @@ class MockLLMProvider(LLMProvider):
TextDeltaEvent with an accumulating snapshot, exercising the full
streaming pipeline without any API calls.
"""
if system_dynamic_suffix:
system = f"{system}\n\n{system_dynamic_suffix}" if system else system_dynamic_suffix
content = self._generate_mock_response(system=system, json_mode=False)
words = content.split(" ")
accumulated = ""
+516
View File
@@ -0,0 +1,516 @@
{
"schema_version": 1,
"providers": {
"anthropic": {
"default_model": "claude-haiku-4-5-20251001",
"models": [
{
"id": "claude-haiku-4-5-20251001",
"label": "Haiku 4.5 - Fast + cheap",
"recommended": false,
"max_tokens": 64000,
"max_context_tokens": 136000,
"supports_vision": true
},
{
"id": "claude-sonnet-4-5-20250929",
"label": "Sonnet 4.5 - Best balance",
"recommended": false,
"max_tokens": 64000,
"max_context_tokens": 136000,
"supports_vision": true
},
{
"id": "claude-opus-4-6",
"label": "Opus 4.6 - Most capable",
"recommended": true,
"max_tokens": 128000,
"max_context_tokens": 872000,
"supports_vision": true
}
]
},
"openai": {
"default_model": "gpt-5.5",
"models": [
{
"id": "gpt-5.5",
"label": "GPT-5.5 - Frontier coding + reasoning",
"recommended": true,
"max_tokens": 128000,
"max_context_tokens": 1050000,
"pricing_usd_per_mtok": {
"input": 5.00,
"output": 30.00
},
"supports_vision": true
},
{
"id": "gpt-5.4",
"label": "GPT-5.4 - Previous flagship",
"recommended": false,
"max_tokens": 128000,
"max_context_tokens": 960000,
"supports_vision": true
},
{
"id": "gpt-5.4-mini",
"label": "GPT-5.4 Mini - Faster + cheaper",
"recommended": false,
"max_tokens": 128000,
"max_context_tokens": 400000,
"supports_vision": true
},
{
"id": "gpt-5.4-nano",
"label": "GPT-5.4 Nano - Cheapest high-volume",
"recommended": false,
"max_tokens": 128000,
"max_context_tokens": 400000,
"supports_vision": true
}
]
},
"gemini": {
"default_model": "gemini-3-flash-preview",
"models": [
{
"id": "gemini-3-flash-preview",
"label": "Gemini 3 Flash - Fast",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 240000,
"supports_vision": true
},
{
"id": "gemini-3.1-pro-preview-customtools",
"label": "Gemini 3.1 Pro - Best quality",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 240000,
"supports_vision": true
}
]
},
"groq": {
"default_model": "openai/gpt-oss-120b",
"models": [
{
"id": "openai/gpt-oss-120b",
"label": "GPT-OSS 120B - Best reasoning",
"recommended": true,
"max_tokens": 65536,
"max_context_tokens": 131072,
"supports_vision": false
},
{
"id": "openai/gpt-oss-20b",
"label": "GPT-OSS 20B - Fast + cheaper",
"recommended": false,
"max_tokens": 65536,
"max_context_tokens": 131072,
"supports_vision": false
},
{
"id": "llama-3.3-70b-versatile",
"label": "Llama 3.3 70B - General purpose",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 131072,
"supports_vision": false
},
{
"id": "llama-3.1-8b-instant",
"label": "Llama 3.1 8B - Fastest",
"recommended": false,
"max_tokens": 131072,
"max_context_tokens": 131072,
"supports_vision": false
}
]
},
"cerebras": {
"default_model": "gpt-oss-120b",
"models": [
{
"id": "gpt-oss-120b",
"label": "GPT-OSS 120B - Best production reasoning",
"recommended": true,
"max_tokens": 40960,
"max_context_tokens": 131072,
"supports_vision": false
},
{
"id": "zai-glm-4.7",
"label": "Z.ai GLM 4.7 - Strong coding preview",
"recommended": true,
"max_tokens": 40960,
"max_context_tokens": 131072,
"supports_vision": false
},
{
"id": "qwen-3-235b-a22b-instruct-2507",
"label": "Qwen 3 235B Instruct - Frontier preview",
"recommended": false,
"max_tokens": 40960,
"max_context_tokens": 131072,
"supports_vision": false
}
]
},
"minimax": {
"default_model": "MiniMax-M2.7",
"models": [
{
"id": "MiniMax-M2.7",
"label": "MiniMax M2.7 - Best coding quality",
"recommended": true,
"max_tokens": 40960,
"max_context_tokens": 180000,
"pricing_usd_per_mtok": {
"input": 0.30,
"output": 1.20
},
"supports_vision": false
},
{
"id": "MiniMax-M2.5",
"label": "MiniMax M2.5 - Strong value",
"recommended": false,
"max_tokens": 40960,
"max_context_tokens": 180000,
"supports_vision": false
}
]
},
"mistral": {
"default_model": "mistral-large-2512",
"models": [
{
"id": "mistral-large-2512",
"label": "Mistral Large 3 - Best quality",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 256000,
"supports_vision": true
},
{
"id": "mistral-medium-2508",
"label": "Mistral Medium 3.1 - Balanced",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 128000,
"supports_vision": true
},
{
"id": "mistral-small-2603",
"label": "Mistral Small 4 - Fast + capable",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 256000,
"supports_vision": true
},
{
"id": "codestral-2508",
"label": "Codestral - Coding specialist",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 128000,
"supports_vision": false
}
]
},
"together": {
"default_model": "deepseek-ai/DeepSeek-V3.1",
"models": [
{
"id": "deepseek-ai/DeepSeek-V3.1",
"label": "DeepSeek V3.1 - Best general coding",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 128000,
"supports_vision": false
},
{
"id": "Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8",
"label": "Qwen3 Coder 480B - Advanced coding",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 262144,
"supports_vision": false
},
{
"id": "openai/gpt-oss-120b",
"label": "GPT-OSS 120B - Strong reasoning",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 128000,
"supports_vision": false
},
{
"id": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"label": "Llama 3.3 70B Turbo - Fast baseline",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 131072,
"supports_vision": false
}
]
},
"deepseek": {
"default_model": "deepseek-v4-pro",
"models": [
{
"id": "deepseek-v4-pro",
"label": "DeepSeek V4 Pro - Most capable",
"recommended": true,
"max_tokens": 384000,
"max_context_tokens": 1000000,
"pricing_usd_per_mtok": {
"input": 1.74,
"output": 3.48,
"cache_read": 0.145
},
"supports_vision": false
},
{
"id": "deepseek-v4-flash",
"label": "DeepSeek V4 Flash - Fast + cheap",
"recommended": true,
"max_tokens": 384000,
"max_context_tokens": 1000000,
"pricing_usd_per_mtok": {
"input": 0.14,
"output": 0.28,
"cache_read": 0.028
},
"supports_vision": false
},
{
"id": "deepseek-reasoner",
"label": "DeepSeek Reasoner - Legacy (deprecating)",
"recommended": false,
"max_tokens": 64000,
"max_context_tokens": 128000,
"supports_vision": false
}
]
},
"kimi": {
"default_model": "kimi-k2.5",
"models": [
{
"id": "kimi-k2.5",
"label": "Kimi K2.5 - Best coding",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 200000,
"pricing_usd_per_mtok": {
"input": 0.60,
"output": 2.50,
"cache_read": 0.15
},
"supports_vision": true
}
]
},
"hive": {
"default_model": "queen",
"models": [
{
"id": "queen",
"label": "Queen - Hive native",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 180000,
"supports_vision": false
},
{
"id": "kimi-2.5",
"label": "Kimi 2.5 - Via Hive",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 240000,
"supports_vision": true
},
{
"id": "glm-5.1",
"label": "GLM-5.1 - Via Hive",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 180000,
"pricing_usd_per_mtok": {
"input": 1.40,
"output": 4.40,
"cache_read": 0.26,
"cache_creation": 0.0
},
"supports_vision": false
}
]
},
"openrouter": {
"default_model": "openai/gpt-5.4",
"models": [
{
"id": "openai/gpt-5.4",
"label": "GPT-5.4 - Best overall",
"recommended": true,
"max_tokens": 128000,
"max_context_tokens": 872000,
"supports_vision": true
},
{
"id": "anthropic/claude-sonnet-4.6",
"label": "Claude Sonnet 4.6 - Best coding balance",
"recommended": false,
"max_tokens": 64000,
"max_context_tokens": 872000,
"supports_vision": true
},
{
"id": "anthropic/claude-opus-4.6",
"label": "Claude Opus 4.6 - Most capable",
"recommended": false,
"max_tokens": 128000,
"max_context_tokens": 872000,
"supports_vision": true
},
{
"id": "google/gemini-3.1-pro-preview-customtools",
"label": "Gemini 3.1 Pro Preview - Long-context reasoning",
"recommended": false,
"max_tokens": 32768,
"max_context_tokens": 872000,
"supports_vision": true
},
{
"id": "qwen/qwen3.6-plus",
"label": "Qwen 3.6 Plus - Strong reasoning",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 240000,
"supports_vision": false
},
{
"id": "z-ai/glm-5v-turbo",
"label": "GLM-5V Turbo - Vision capable",
"recommended": true,
"max_tokens": 32768,
"max_context_tokens": 192000,
"supports_vision": true
},
{
"id": "z-ai/glm-5.1",
"label": "GLM-5.1 - Better but Slower",
"recommended": true,
"max_tokens": 40960,
"max_context_tokens": 192000,
"pricing_usd_per_mtok": {
"input": 1.40,
"output": 4.40,
"cache_read": 0.26,
"cache_creation": 0.0
},
"supports_vision": false
},
{
"id": "minimax/minimax-m2.7",
"label": "Minimax M2.7 - Minimax flagship",
"recommended": false,
"max_tokens": 40960,
"max_context_tokens": 180000,
"pricing_usd_per_mtok": {
"input": 0.30,
"output": 1.20
},
"supports_vision": false
},
{
"id": "xiaomi/mimo-v2-pro",
"label": "MiMo V2 Pro - Xiaomi multimodal",
"recommended": true,
"max_tokens": 64000,
"max_context_tokens": 240000,
"supports_vision": true
}
]
}
},
"presets": {
"claude_code": {
"provider": "anthropic",
"model": "claude-opus-4-6",
"max_tokens": 128000,
"max_context_tokens": 872000
},
"zai_code": {
"provider": "openai",
"api_key_env_var": "ZAI_API_KEY",
"model": "glm-5.1",
"max_tokens": 32768,
"max_context_tokens": 180000,
"api_base": "https://api.z.ai/api/coding/paas/v4"
},
"codex": {
"provider": "openai",
"model": "gpt-5.3-codex",
"max_tokens": 16384,
"max_context_tokens": 120000,
"api_base": "https://chatgpt.com/backend-api/codex"
},
"minimax_code": {
"provider": "minimax",
"api_key_env_var": "MINIMAX_API_KEY",
"model": "MiniMax-M2.7",
"max_tokens": 40960,
"max_context_tokens": 180800,
"api_base": "https://api.minimax.io/v1"
},
"kimi_code": {
"provider": "kimi",
"api_key_env_var": "KIMI_API_KEY",
"model": "kimi-k2.5",
"max_tokens": 32768,
"max_context_tokens": 240000,
"api_base": "https://api.kimi.com/coding"
},
"hive_llm": {
"provider": "hive",
"api_key_env_var": "HIVE_API_KEY",
"model": "queen",
"max_tokens": 32768,
"max_context_tokens": 180000,
"api_base": "https://api.adenhq.com",
"model_choices": [
{
"id": "queen",
"label": "queen",
"recommended": true
},
{
"id": "kimi-2.5",
"label": "kimi-2.5",
"recommended": false
},
{
"id": "glm-5.1",
"label": "glm-5.1",
"recommended": false
}
]
},
"antigravity": {
"provider": "openai",
"model": "gemini-3-flash",
"max_tokens": 32768,
"max_context_tokens": 1000000
},
"ollama_local": {
"provider": "ollama",
"max_tokens": 8192,
"max_context_tokens": 16384,
"api_base": "http://localhost:11434"
}
}
}
+274
View File
@@ -0,0 +1,274 @@
"""Shared curated model metadata loaded from ``model_catalog.json``."""
from __future__ import annotations
import copy
import json
from functools import lru_cache
from pathlib import Path
from typing import Any
MODEL_CATALOG_PATH = Path(__file__).with_name("model_catalog.json")
class ModelCatalogError(RuntimeError):
"""Raised when the curated model catalogue is missing or malformed."""
def _require_mapping(value: Any, path: str) -> dict[str, Any]:
if not isinstance(value, dict):
raise ModelCatalogError(f"{path} must be an object")
return value
def _require_list(value: Any, path: str) -> list[Any]:
if not isinstance(value, list):
raise ModelCatalogError(f"{path} must be an array")
return value
_PRICING_KEYS = ("input", "output", "cache_read", "cache_creation")
def _validate_pricing(value: Any, path: str) -> None:
"""Validate an optional ``pricing_usd_per_mtok`` block.
Keys are USD-per-million-tokens rates. ``input``/``output`` are required;
``cache_read``/``cache_creation`` are optional. All values must be
non-negative numbers. Used as a last-resort fallback when neither the
provider nor LiteLLM's catalog reports a cost.
"""
pricing = _require_mapping(value, path)
for key in ("input", "output"):
if key not in pricing:
raise ModelCatalogError(f"{path}.{key} is required")
for key, rate in pricing.items():
if key not in _PRICING_KEYS:
raise ModelCatalogError(f"{path}.{key} is not a recognized pricing field")
if not isinstance(rate, (int, float)) or isinstance(rate, bool) or rate < 0:
raise ModelCatalogError(f"{path}.{key} must be a non-negative number")
def _validate_model_catalog(data: dict[str, Any]) -> dict[str, Any]:
providers = _require_mapping(data.get("providers"), "providers")
for provider_id, provider_info in providers.items():
provider_path = f"providers.{provider_id}"
provider_map = _require_mapping(provider_info, provider_path)
default_model = provider_map.get("default_model")
if not isinstance(default_model, str) or not default_model.strip():
raise ModelCatalogError(f"{provider_path}.default_model must be a non-empty string")
models = _require_list(provider_map.get("models"), f"{provider_path}.models")
if not models:
raise ModelCatalogError(f"{provider_path}.models must not be empty")
seen_model_ids: set[str] = set()
default_found = False
for idx, model in enumerate(models):
model_path = f"{provider_path}.models[{idx}]"
model_map = _require_mapping(model, model_path)
model_id = model_map.get("id")
if not isinstance(model_id, str) or not model_id.strip():
raise ModelCatalogError(f"{model_path}.id must be a non-empty string")
if model_id in seen_model_ids:
raise ModelCatalogError(f"Duplicate model id {model_id!r} in {provider_path}.models")
seen_model_ids.add(model_id)
if model_id == default_model:
default_found = True
label = model_map.get("label")
if not isinstance(label, str) or not label.strip():
raise ModelCatalogError(f"{model_path}.label must be a non-empty string")
recommended = model_map.get("recommended")
if not isinstance(recommended, bool):
raise ModelCatalogError(f"{model_path}.recommended must be a boolean")
for key in ("max_tokens", "max_context_tokens"):
value = model_map.get(key)
if not isinstance(value, int) or value <= 0:
raise ModelCatalogError(f"{model_path}.{key} must be a positive integer")
pricing = model_map.get("pricing_usd_per_mtok")
if pricing is not None:
_validate_pricing(pricing, f"{model_path}.pricing_usd_per_mtok")
supports_vision = model_map.get("supports_vision")
if supports_vision is not None and not isinstance(supports_vision, bool):
raise ModelCatalogError(f"{model_path}.supports_vision must be a boolean when present")
if not default_found:
raise ModelCatalogError(
f"{provider_path}.default_model={default_model!r} is not present in {provider_path}.models"
)
presets = _require_mapping(data.get("presets"), "presets")
for preset_id, preset_info in presets.items():
preset_path = f"presets.{preset_id}"
preset_map = _require_mapping(preset_info, preset_path)
provider = preset_map.get("provider")
if not isinstance(provider, str) or not provider.strip():
raise ModelCatalogError(f"{preset_path}.provider must be a non-empty string")
model = preset_map.get("model")
if model is not None and (not isinstance(model, str) or not model.strip()):
raise ModelCatalogError(f"{preset_path}.model must be a non-empty string when present")
api_base = preset_map.get("api_base")
if api_base is not None and (not isinstance(api_base, str) or not api_base.strip()):
raise ModelCatalogError(f"{preset_path}.api_base must be a non-empty string when present")
api_key_env_var = preset_map.get("api_key_env_var")
if api_key_env_var is not None and (not isinstance(api_key_env_var, str) or not api_key_env_var.strip()):
raise ModelCatalogError(f"{preset_path}.api_key_env_var must be a non-empty string when present")
for key in ("max_tokens", "max_context_tokens"):
value = preset_map.get(key)
if not isinstance(value, int) or value <= 0:
raise ModelCatalogError(f"{preset_path}.{key} must be a positive integer")
model_choices = preset_map.get("model_choices")
if model_choices is not None:
for idx, choice in enumerate(_require_list(model_choices, f"{preset_path}.model_choices")):
choice_path = f"{preset_path}.model_choices[{idx}]"
choice_map = _require_mapping(choice, choice_path)
choice_id = choice_map.get("id")
if not isinstance(choice_id, str) or not choice_id.strip():
raise ModelCatalogError(f"{choice_path}.id must be a non-empty string")
label = choice_map.get("label")
if not isinstance(label, str) or not label.strip():
raise ModelCatalogError(f"{choice_path}.label must be a non-empty string")
recommended = choice_map.get("recommended")
if not isinstance(recommended, bool):
raise ModelCatalogError(f"{choice_path}.recommended must be a boolean")
return data
@lru_cache(maxsize=1)
def load_model_catalog() -> dict[str, Any]:
"""Load and validate the curated model catalogue."""
try:
raw = json.loads(MODEL_CATALOG_PATH.read_text(encoding="utf-8"))
except FileNotFoundError as exc:
raise ModelCatalogError(f"Model catalogue not found: {MODEL_CATALOG_PATH}") from exc
except json.JSONDecodeError as exc:
raise ModelCatalogError(f"Model catalogue JSON is invalid: {exc}") from exc
return _validate_model_catalog(_require_mapping(raw, "root"))
def get_models_catalogue() -> dict[str, list[dict[str, Any]]]:
"""Return provider -> model list."""
providers = load_model_catalog()["providers"]
return {provider_id: copy.deepcopy(provider_info["models"]) for provider_id, provider_info in providers.items()}
def get_default_models() -> dict[str, str]:
"""Return provider -> default model id."""
providers = load_model_catalog()["providers"]
return {provider_id: str(provider_info["default_model"]) for provider_id, provider_info in providers.items()}
def get_provider_models(provider: str) -> list[dict[str, Any]]:
"""Return the curated models for one provider."""
provider_info = load_model_catalog()["providers"].get(provider)
if not provider_info:
return []
return copy.deepcopy(provider_info["models"])
def get_default_model(provider: str) -> str | None:
"""Return the curated default model id for one provider."""
provider_info = load_model_catalog()["providers"].get(provider)
if not provider_info:
return None
return str(provider_info["default_model"])
def find_model(provider: str, model_id: str) -> dict[str, Any] | None:
"""Return one model entry for a provider, if present."""
for model in load_model_catalog()["providers"].get(provider, {}).get("models", []):
if model["id"] == model_id:
return copy.deepcopy(model)
return None
def find_model_any_provider(model_id: str) -> tuple[str, dict[str, Any]] | None:
"""Return the first curated provider/model entry matching a model id."""
for provider_id, provider_info in load_model_catalog()["providers"].items():
for model in provider_info["models"]:
if model["id"] == model_id:
return provider_id, copy.deepcopy(model)
return None
def get_model_limits(provider: str, model_id: str) -> tuple[int, int] | None:
"""Return ``(max_tokens, max_context_tokens)`` for one provider/model pair."""
model = find_model(provider, model_id)
if not model:
return None
return int(model["max_tokens"]), int(model["max_context_tokens"])
def get_model_pricing(model_id: str) -> dict[str, float] | None:
"""Return ``pricing_usd_per_mtok`` for a model id, searching all providers.
Returns ``None`` when the model is absent from the catalog or has no
pricing entry. Used by the cost-extraction fallback in ``litellm.py``
when the provider response and LiteLLM's catalog both come up empty.
"""
if not model_id:
return None
for provider_info in load_model_catalog()["providers"].values():
for model in provider_info["models"]:
if model["id"] == model_id:
pricing = model.get("pricing_usd_per_mtok")
if pricing is None:
return None
return {key: float(rate) for key, rate in pricing.items()}
return None
def model_supports_vision(model_id: str) -> bool:
"""Return whether *model_id* supports image inputs per the curated catalog.
Looks up the bare model id (and the provider-prefix-stripped form) in the
catalog. Returns the model's ``supports_vision`` flag when found, defaulting
to ``True`` for unknown models or when the flag is absent. The default assumes
vision capability for hosted providers, since modern frontier models support images
by default and the captioning fallback is more expensive than just letting
the provider handle the image.
"""
if not model_id:
return True
candidates = [model_id]
if "/" in model_id:
candidates.append(model_id.split("/", 1)[1])
for candidate in candidates:
for provider_info in load_model_catalog()["providers"].values():
for model in provider_info["models"]:
if model["id"] == candidate:
flag = model.get("supports_vision")
if isinstance(flag, bool):
return flag
return True
return True
def get_preset(preset_id: str) -> dict[str, Any] | None:
"""Return one preset entry."""
preset = load_model_catalog()["presets"].get(preset_id)
if not preset:
return None
return copy.deepcopy(preset)
def get_presets() -> dict[str, dict[str, Any]]:
"""Return all preset entries."""
return copy.deepcopy(load_model_catalog()["presets"])
+40 -2
View File
@@ -10,12 +10,24 @@ from typing import Any
@dataclass
class LLMResponse:
"""Response from an LLM call."""
"""Response from an LLM call.
``cached_tokens`` and ``cache_creation_tokens`` are subsets of
``input_tokens`` (providers report them inside ``prompt_tokens``).
Surface them for visibility; do not add to a total.
``cost_usd`` is the per-call USD cost when the provider / pricing table
can produce one (Anthropic, OpenAI, OpenRouter are supported). 0.0 when
unknown or unpriced; treat it as "unreported", not "free".
"""
content: str
model: str
input_tokens: int = 0
output_tokens: int = 0
cached_tokens: int = 0
cache_creation_tokens: int = 0
cost_usd: float = 0.0
stop_reason: str = ""
raw_response: Any = None
@@ -27,6 +39,15 @@ class Tool:
name: str
description: str
parameters: dict[str, Any] = field(default_factory=dict)
# If True, the tool may return ImageContent in its result. Text-only models
# (e.g. glm-5, deepseek-chat) have this hidden from their schema entirely.
produces_image: bool = False
# If True, this tool performs no filesystem/process/network writes and is
# safe to run concurrently with other safe-flagged tools inside the same
# assistant turn. Unsafe tools (writes, shell, browser actions) are always
# serialized after the safe batch. Default False - the conservative choice
# when a tool's behavior isn't explicitly vetted.
concurrency_safe: bool = False
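A hedged sketch of how a caller might honor the flag, assuming each pending call carries the tool's name; the orchestrator's actual batching logic is not part of this diff:

# Hedged sketch: safe tools fan out concurrently, unsafe tools stay ordered.
import asyncio

async def run_tool_calls(calls, tools_by_name, execute):
    safe = [c for c in calls if tools_by_name[c.name].concurrency_safe]
    unsafe = [c for c in calls if not tools_by_name[c.name].concurrency_safe]
    results = list(await asyncio.gather(*(execute(c) for c in safe)))
    for call in unsafe:  # writes, shell, and browser actions run one at a time
        results.append(await execute(call))
    return results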
@dataclass
@@ -101,19 +122,28 @@ class LLMProvider(ABC):
response_format: dict[str, Any] | None = None,
json_mode: bool = False,
max_retries: int | None = None,
system_dynamic_suffix: str | None = None,
) -> "LLMResponse":
"""Async version of complete(). Non-blocking on the event loop.
Default implementation offloads the sync complete() to a thread pool.
Subclasses SHOULD override for native async I/O.
``system_dynamic_suffix`` is an optional per-turn tail for providers
that honor ``cache_control`` (see LiteLLMProvider for semantics).
The default implementation concatenates it onto ``system`` since the
sync ``complete()`` path does not support the split.
"""
combined_system = system
if system_dynamic_suffix:
combined_system = f"{system}\n\n{system_dynamic_suffix}" if system else system_dynamic_suffix
loop = asyncio.get_running_loop()
return await loop.run_in_executor(
None,
partial(
self.complete,
messages=messages,
system=combined_system,
tools=tools,
max_tokens=max_tokens,
response_format=response_format,
@@ -128,6 +158,7 @@ class LLMProvider(ABC):
system: str = "",
tools: list[Tool] | None = None,
max_tokens: int = 4096,
system_dynamic_suffix: str | None = None,
) -> AsyncIterator["StreamEvent"]:
"""
Stream a completion as an async iterator of StreamEvents.
@@ -138,6 +169,9 @@ class LLMProvider(ABC):
Tool orchestration is the CALLER's responsibility:
- Caller detects ToolCallEvent, executes tool, adds result
to messages, calls stream() again.
``system_dynamic_suffix`` is forwarded to ``acomplete``; see its
docstring for the two-block split semantics.
"""
from framework.llm.stream_events import (
FinishEvent,
@@ -150,6 +184,7 @@ class LLMProvider(ABC):
system=system,
tools=tools,
max_tokens=max_tokens,
system_dynamic_suffix=system_dynamic_suffix,
)
yield TextDeltaEvent(content=response.content, snapshot=response.content)
yield TextEndEvent(full_text=response.content)
@@ -157,6 +192,9 @@ class LLMProvider(ABC):
stop_reason=response.stop_reason,
input_tokens=response.input_tokens,
output_tokens=response.output_tokens,
cached_tokens=response.cached_tokens,
cache_creation_tokens=response.cache_creation_tokens,
cost_usd=response.cost_usd,
model=response.model,
)
+11 -1
View File
@@ -65,13 +65,23 @@ class ReasoningDeltaEvent:
@dataclass(frozen=True)
class FinishEvent:
"""The LLM has finished generating."""
"""The LLM has finished generating.
``cached_tokens`` and ``cache_creation_tokens`` are subsets of
``input_tokens``; providers count both inside ``prompt_tokens`` already.
Surface them separately for visibility; never add to a total.
``cost_usd`` is the per-turn USD cost when the provider or LiteLLM's
pricing table supplies one; 0.0 means unreported (not free).
"""
type: Literal["finish"] = "finish"
stop_reason: str = ""
input_tokens: int = 0
output_tokens: int = 0
cached_tokens: int = 0
cache_creation_tokens: int = 0
cost_usd: float = 0.0
model: str = ""
+52 -44
View File
@@ -9,25 +9,23 @@ from datetime import UTC
from pathlib import Path
from typing import Any
from framework.config import get_hive_config, get_preferred_model
from framework.credentials.validation import (
ensure_credential_key_env as _ensure_credential_key_env,
)
from framework.host.agent_host import AgentHost, AgentRuntimeConfig
from framework.host.execution_manager import EntryPointSpec
from framework.llm.provider import LLMProvider, Tool
from framework.loader.preload_validation import run_preload_validation
from framework.loader.tool_registry import ToolRegistry
from framework.orchestrator import Goal
from framework.orchestrator.edge import (
    DEFAULT_MAX_TOKENS,
    EdgeCondition,
    EdgeSpec,
    GraphSpec,
)
from framework.orchestrator.node import NodeSpec
from framework.orchestrator.orchestrator import ExecutionResult
from framework.tools.flowchart_utils import generate_fallback_flowchart
logger = logging.getLogger(__name__)
@@ -555,18 +553,10 @@ def get_kimi_code_token() -> str | None:
# VSCode-style SQLite state database under the key
# "antigravityUnifiedStateSync.oauthToken" as a base64-encoded protobuf blob.
ANTIGRAVITY_IDE_STATE_DB = (
Path.home() / "Library" / "Application Support" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
)
# Linux fallback for the IDE state DB
ANTIGRAVITY_IDE_STATE_DB_LINUX = Path.home() / ".config" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
# Antigravity credentials stored by native OAuth implementation
ANTIGRAVITY_AUTH_FILE = Path.home() / ".hive" / "antigravity-accounts.json"
@@ -710,9 +700,7 @@ def _is_antigravity_token_expired(auth_data: dict) -> bool:
return True
elif isinstance(last_refresh_val, str):
try:
last_refresh_val = datetime.fromisoformat(last_refresh_val.replace("Z", "+00:00")).timestamp()
except (ValueError, TypeError):
return True
@@ -843,8 +831,7 @@ def get_antigravity_token() -> str | None:
return token_data["access_token"]
logger.warning(
"Antigravity token refresh failed. "
"Re-open the Antigravity IDE or run 'antigravity-auth accounts add'."
"Antigravity token refresh failed. Re-open the Antigravity IDE or run 'antigravity-auth accounts add'."
)
return access_token
@@ -1255,10 +1242,16 @@ class AgentLoader:
if tools_path.exists():
self._tool_registry.discover_from_module(tools_path)
# Per-agent env for MCP subprocesses. Stored on the registry so
# parallel workers in the same process don't clobber each other
# via the shared os.environ dict — the registry merges these
# into every MCPServerConfig.env at registration time.
self._tool_registry.set_mcp_extra_env(
{
"HIVE_AGENT_NAME": agent_path.name,
"HIVE_STORAGE_PATH": str(self._storage_path),
}
)
# MCP tools are loaded by McpRegistryStage in the pipeline during AgentHost.start()
@@ -1291,11 +1284,7 @@ class AgentLoader:
# Evict cached submodules first (e.g. deep_research_agent.nodes,
# deep_research_agent.agent) so the top-level reload picks up
# changes in the entire package — not just __init__.py.
stale = [name for name in sys.modules if name == package_name or name.startswith(f"{package_name}.")]
for name in stale:
del sys.modules[name]
@@ -1344,7 +1333,7 @@ class AgentLoader:
if not worker_jsons:
raise FileNotFoundError(f"No worker config found in {agent_path}")
from framework.orchestrator.edge import GraphSpec
from framework.orchestrator.goal import Constraint, Goal as GoalModel, SuccessCriterion
from framework.orchestrator.node import NodeSpec
@@ -1415,7 +1404,18 @@ class AgentLoader:
credential_store=credential_store,
)
runner._agent_default_skills = None
# Colony workers attached to a SQLite task queue get the
# colony-progress-tracker skill pre-activated so its full
# claim / step / SOP-gate protocol lands in the system prompt
# on turn 0, bypassing the progressive-disclosure catalog
# lookup. Triggered by the presence of ``input_data.db_path``
# in worker.json (written by fork_session_into_colony and
# backfilled by ensure_progress_db for pre-existing colonies).
_preactivate: list[str] = []
_input_data = first_worker.get("input_data") or {}
if isinstance(_input_data, dict) and _input_data.get("db_path"):
_preactivate.append("hive.colony-progress-tracker")
runner._agent_skills = _preactivate or None
return runner
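For reference, a hedged sketch of the worker.json fragment that flips the pre-activation on; only input_data.db_path is load-bearing, and the path value is illustrative:

{
  "input_data": {
    "db_path": "colonies/my_colony/progress.db"
  }
}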
def register_tool(
@@ -1503,6 +1503,7 @@ class AgentLoader:
from framework.pipeline.stages.mcp_registry import McpRegistryStage
from framework.pipeline.stages.skill_registry import SkillRegistryStage
from framework.skills.config import SkillsConfig
from framework.skills.discovery import ExtraScope
configure_logging(level="INFO", format="auto")
@@ -1545,11 +1546,23 @@ class AgentLoader:
default_skills=getattr(self, "_agent_default_skills", None),
skills=getattr(self, "_agent_skills", None),
),
# Surface the colony's flat ``skills/`` directory as a
# ``colony_ui`` extra scope so SKILL.md files written there
# by ``create_colony`` (or the HTTP routes) are picked up
# with correct provenance. The legacy nested
# ``<colony>/.hive/skills/`` path is still picked up via
# project-scope auto-discovery (project_root above).
extra_scope_dirs=[
ExtraScope(
directory=self.agent_path / "skills",
label="colony_ui",
priority=3,
)
],
),
]
# Merge user-configured stages from ~/.hive/configuration.json
from framework.config import get_hive_config
from framework.pipeline.registry import build_pipeline_from_config
hive_config = get_hive_config()
@@ -1562,9 +1575,7 @@ class AgentLoader:
if agent_json.exists():
try:
agent_pipeline = (
_json.loads(agent_json.read_text(encoding="utf-8"))
.get("pipeline", {})
.get("stages", [])
_json.loads(agent_json.read_text(encoding="utf-8")).get("pipeline", {}).get("stages", [])
)
if agent_pipeline:
agent_stages = build_pipeline_from_config(agent_pipeline)
@@ -1980,8 +1991,7 @@ class AgentLoader:
for sc in self.goal.success_criteria
],
constraints=[
{"id": c.id, "description": c.description, "type": c.constraint_type}
for c in self.goal.constraints
{"id": c.id, "description": c.description, "type": c.constraint_type} for c in self.goal.constraints
],
required_tools=sorted(required_tools),
has_tools_module=(self.agent_path / "tools.py").exists(),
@@ -2052,9 +2062,7 @@ class AgentLoader:
if api_key_env and not os.environ.get(api_key_env):
if api_key_env not in missing_credentials:
missing_credentials.append(api_key_env)
warnings.append(f"Agent has LLM nodes but {api_key_env} not set (model: {self.model})")
return ValidationResult(
valid=len(errors) == 0,
+92 -38
View File
@@ -17,14 +17,15 @@ from __future__ import annotations
import argparse
import asyncio
import json
import os
import shutil
import subprocess
import sys
import threading
from pathlib import Path
from typing import Any
from urllib import error as urlerror, parse as urlparse, request as urlrequest
# ---------------------------------------------------------------------------
# Public registration
# ---------------------------------------------------------------------------
@@ -85,6 +86,10 @@ def _register_open(subparsers: argparse._SubParsersAction) -> None:
def cmd_serve(args: argparse.Namespace) -> int:
"""Start the HTTP API server (the runtime hub)."""
import atexit
import logging
import signal
from aiohttp import web
_build_frontend()
@@ -94,16 +99,67 @@ def cmd_serve(args: argparse.Namespace) -> int:
if getattr(args, "debug", False):
configure_logging(level="DEBUG")
elif getattr(args, "verbose", False):
configure_logging(level="INFO")
else:
configure_logging(level="WARNING")
configure_logging(level="INFO")
# Last-resort MCP cleanup. Runs on any process exit path, including
# crashes — so hung MCP subprocesses don't outlive the server. The
# graceful shutdown path below also disconnects clients; atexit is
# belt-and-braces and no-ops if already cleaned.
def _atexit_cleanup_mcp() -> None:
try:
from framework.loader.mcp_connection_manager import MCPConnectionManager
MCPConnectionManager.get_instance().cleanup_all()
except Exception as exc: # noqa: BLE001
logging.getLogger(__name__).debug("atexit MCP cleanup failed: %s", exc)
atexit.register(_atexit_cleanup_mcp)
model = getattr(args, "model", None)
app = create_app(model=model)
async def run_server() -> None:
manager = app["manager"]
shutdown_event = asyncio.Event()
signal_count = {"n": 0}
def _request_shutdown(signame: str) -> None:
signal_count["n"] += 1
if signal_count["n"] == 1:
print(f"\nReceived {signame}, shutting down gracefully… (press Ctrl+C again to force quit)")
shutdown_event.set()
else:
# Second Ctrl+C (or SIGTERM) — the user is done waiting.
# Skip the graceful teardown and exit immediately. os._exit
# bypasses atexit handlers, so fire the MCP cleanup manually
# first to avoid leaking subprocesses.
print(f"\nReceived {signame} again — force quitting.")
try:
from framework.loader.mcp_connection_manager import (
MCPConnectionManager,
)
MCPConnectionManager.get_instance().cleanup_all()
except Exception: # noqa: BLE001
pass
os._exit(130)
# Register SIGTERM (and explicit SIGINT) so container orchestrators
# and plain Ctrl-C both route through the same graceful path —
# manager.shutdown_all() flushes state and disconnects MCP clients.
loop = asyncio.get_running_loop()
for signame in ("SIGINT", "SIGTERM"):
try:
loop.add_signal_handler(
getattr(signal, signame),
_request_shutdown,
signame,
)
except (NotImplementedError, AttributeError):
# Windows / restricted environments — fall back to default
# handlers (KeyboardInterrupt for SIGINT; SIGTERM kills).
pass
# Preload colonies specified via --colony
for colony_arg in getattr(args, "colony", []) or []:
@@ -112,9 +168,7 @@ def cmd_serve(args: argparse.Namespace) -> int:
print(f"Colony not found: {colony_arg}")
continue
try:
session = await manager.create_session_with_worker_colony(str(colony_path), model=model)
info = session.worker_info
name = info.name if info else session.colony_id
print(f"Loaded colony: {session.colony_id} ({name}) → session {session.id}")
@@ -145,7 +199,7 @@ def cmd_serve(args: argparse.Namespace) -> int:
_open_browser(dashboard_url)
try:
await shutdown_event.wait()
except asyncio.CancelledError:
pass
finally:
@@ -161,7 +215,13 @@ def cmd_serve(args: argparse.Namespace) -> int:
def cmd_open(args: argparse.Namespace) -> int:
"""Start the HTTP server and open the dashboard in the browser."""
_ping_hive_gateway_availability("hive-open")
# Don't block local startup on a best-effort analytics probe.
threading.Thread(
target=_ping_hive_gateway_availability,
args=("hive-open",),
daemon=True,
name="hive-open-gateway-ping",
).start()
args.open = True
return cmd_serve(args)
@@ -260,12 +320,14 @@ def cmd_queen_sessions(args: argparse.Namespace) -> int:
meta = json.loads(meta_path.read_text(encoding="utf-8"))
except Exception:
meta = {}
rows.append(
{
"session_id": session_dir.name,
"phase": meta.get("phase", "?"),
"agent_path": meta.get("agent_path", ""),
"colony_fork": bool(meta.get("colony_fork")),
}
)
if args.json:
print(json.dumps(rows, indent=2))
@@ -339,18 +401,18 @@ def cmd_colony_list(args: argparse.Namespace) -> int:
except Exception:
meta = {}
worker_count = sum(
1 for f in path.iterdir() if f.is_file() and f.suffix == ".json" and f.stem not in _RESERVED_JSON_STEMS
)
rows.append(
{
"name": path.name,
"queen_name": meta.get("queen_name", ""),
"queen_session_id": meta.get("queen_session_id", ""),
"workers": worker_count,
"created_at": meta.get("created_at", ""),
"path": str(path),
}
)
if args.json:
print(json.dumps(rows, indent=2))
@@ -363,9 +425,7 @@ def cmd_colony_list(args: argparse.Namespace) -> int:
print(f"{'NAME':<24} {'QUEEN':<28} {'WORKERS':<8} CREATED")
print("-" * 90)
for r in rows:
print(f"{r['name']:<24} {r['queen_name']:<28} {r['workers']:<8} {r['created_at'][:19]}")
return 0
@@ -592,9 +652,7 @@ def _http_get(url: str, timeout: float = 10.0) -> dict:
def _http_post(url: str, body: dict, timeout: float = 30.0) -> dict:
data = json.dumps(body).encode("utf-8")
req = urlrequest.Request(url, data=data, method="POST", headers={"Content-Type": "application/json"})
with urlrequest.urlopen(req, timeout=timeout) as r:
return json.loads(r.read().decode("utf-8"))
@@ -650,9 +708,7 @@ def _open_browser(url: str) -> None:
try:
if sys.platform == "darwin":
subprocess.Popen(["open", url], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
elif sys.platform == "win32":
subprocess.Popen(
["cmd", "/c", "start", "", url],
@@ -660,9 +716,7 @@ def _open_browser(url: str) -> None:
stderr=subprocess.DEVNULL,
)
elif sys.platform == "linux":
subprocess.Popen(["xdg-open", url], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
except Exception:
pass
+74 -19
View File
@@ -267,9 +267,7 @@ class MCPClient:
try:
response = self._http_client.get("/health")
response.raise_for_status()
logger.info(f"Connected to MCP server '{self.config.name}' via HTTP at {self.config.url}")
except Exception as e:
logger.warning(f"Health check failed for MCP server '{self.config.name}': {e}")
# Continue anyway, server might not have health endpoint
@@ -377,9 +375,8 @@ class MCPClient:
self._tools[tool.name] = tool
tool_names = list(self._tools.keys())
logger.info(f"Discovered {len(self._tools)} tools from '{self.config.name}'")
logger.debug(f"Discovered tools from '{self.config.name}': {tool_names}")
except Exception as e:
logger.error(f"Failed to discover tools from '{self.config.name}': {e}")
raise
@@ -464,8 +461,12 @@ class MCPClient:
)
if self.config.transport == "stdio":
with self._stdio_call_lock:
return self._run_async(self._call_tool_stdio_async(tool_name, arguments))
def _stdio_call() -> Any:
with self._stdio_call_lock:
return self._run_async(self._call_tool_stdio_async(tool_name, arguments))
return self._call_tool_with_retry(_stdio_call)
elif self.config.transport == "sse":
return self._call_tool_with_retry(
lambda: self._run_async(self._call_tool_stdio_async(tool_name, arguments))
@@ -475,10 +476,70 @@ class MCPClient:
else:
return self._call_tool_http(tool_name, arguments)
# Exceptions that indicate the STDIO session/subprocess is dead and
# needs a fresh connect(). Keep this narrow — we don't want to mask
# tool-level errors as transport errors.
_STDIO_DEAD_SESSION_ERRORS = (
BrokenPipeError,
ConnectionError,
ConnectionResetError,
EOFError,
)
def _is_stdio_dead_session_error(self, exc: BaseException) -> bool:
if isinstance(exc, self._STDIO_DEAD_SESSION_ERRORS):
return True
# mcp SDK frequently wraps transport errors in RuntimeError with a
# readable message — match on the common signals.
if isinstance(exc, RuntimeError):
msg = str(exc).lower()
for needle in (
"broken pipe",
"connection closed",
"connection reset",
"stream closed",
"session not initialized",
"transport closed",
"anyio.closedresourceerror",
"read operation was cancelled",
):
if needle in msg:
return True
return False
    def _call_tool_with_retry(self, call: Any) -> Any:
        """Retry once after reconnecting when the transport looks dead.
        Applies to all transports:
        - **stdio**: if the subprocess died (broken pipe, closed stream,
          session not initialized), tear it down and start a fresh one.
        - **sse / unix / http** (httpx-backed): same treatment for
          ``httpx.ConnectError`` / ``httpx.ReadTimeout``.
        """
        try:
            return call()
        except BaseException as original_error:
            if not self._is_stdio_dead_session_error(original_error):
                raise
            logger.warning(
                "Retrying MCP STDIO tool call after dead-session signal from '%s': %s",
                self.config.name,
                original_error,
            )
            try:
                self._reconnect()
            except Exception as reconnect_error:
                logger.warning(
                    "Reconnect failed for MCP STDIO server '%s': %s",
                    self.config.name,
                    reconnect_error,
                )
                raise original_error from reconnect_error
            try:
                return call()
            except BaseException as retry_error:
                raise original_error from retry_error
@@ -603,9 +664,7 @@ class MCPClient:
if self._session:
await self._session.__aexit__(None, None, None)
except asyncio.CancelledError:
logger.warning("MCP session cleanup was cancelled; proceeding with best-effort shutdown")
except Exception as e:
logger.warning(f"Error closing MCP session: {e}")
finally:
@@ -616,9 +675,7 @@ class MCPClient:
if self._stdio_context:
await self._stdio_context.__aexit__(None, None, None)
except asyncio.CancelledError:
logger.debug("STDIO context cleanup was cancelled; proceeding with best-effort shutdown")
except Exception as e:
msg = str(e).lower()
if "cancel scope" in msg or "different task" in msg:
@@ -659,9 +716,7 @@ class MCPClient:
# any exceptions that may occur if the loop stops between these calls.
if self._loop.is_running():
try:
cleanup_future = asyncio.run_coroutine_threadsafe(self._cleanup_stdio_async(), self._loop)
cleanup_future.result(timeout=self._CLEANUP_TIMEOUT)
cleanup_attempted = True
except TimeoutError:
@@ -74,8 +74,7 @@ class MCPConnectionManager:
if not should_connect:
if not transition_event.wait(timeout=_TRANSITION_TIMEOUT):
logger.warning(
"Timed out waiting for transition on MCP server '%s', "
"forcing cleanup and retrying",
"Timed out waiting for transition on MCP server '%s', forcing cleanup and retrying",
server_name,
)
with self._pool_lock:
@@ -99,10 +98,7 @@ class MCPConnectionManager:
current = self._transitions.get(server_name)
if current is transition_event:
self._transitions.pop(server_name, None)
if server_name not in self._pool and self._refcounts.get(server_name, 0) <= 0:
self._configs.pop(server_name, None)
transition_event.set()
raise
@@ -324,8 +320,7 @@ class MCPConnectionManager:
self._transitions.pop(server_name, None)
transition_event.set()
logger.info(
"Reconnected MCP server '%s' but refcount dropped to 0, "
"discarding new client",
"Reconnected MCP server '%s' but refcount dropped to 0, discarding new client",
server_name,
)
try:
@@ -336,9 +331,7 @@ class MCPConnectionManager:
server_name,
exc_info=True,
)
raise KeyError(f"MCP server '{server_name}' was fully released during reconnect")
self._pool[server_name] = new_client
self._configs[server_name] = config
@@ -380,8 +373,7 @@ class MCPConnectionManager:
all_resolved = all(event.wait(timeout=_TRANSITION_TIMEOUT) for event in pending)
if not all_resolved:
logger.warning(
"Timed out waiting for pending transitions during cleanup, "
"forcing cleanup of stuck transitions",
"Timed out waiting for pending transitions during cleanup, forcing cleanup of stuck transitions",
)
with self._pool_lock:
for sn, evt in list(self._transitions.items()):
+1 -3
View File
@@ -23,9 +23,7 @@ class MCPError(ValueError):
self.what = what
self.why = why
self.fix = fix
self.message = f"[{self.code.value}]\nWhat failed: {self.what}\nWhy: {self.why}\nFix: {self.fix}"
super().__init__(self.message)
+89 -5
View File
@@ -24,9 +24,7 @@ from framework.loader.mcp_errors import (
logger = logging.getLogger(__name__)
DEFAULT_INDEX_URL = "https://raw.githubusercontent.com/aden-hive/hive-mcp-registry/main/registry_index.json"
DEFAULT_REFRESH_INTERVAL_HOURS = 24
_LAST_FETCHED_FILENAME = "last_fetched"
_LEGACY_LAST_FETCHED_FILENAME = "last_fetched.json"
@@ -36,6 +34,32 @@ _DEFAULT_CONFIG = {
"refresh_interval_hours": DEFAULT_REFRESH_INTERVAL_HOURS,
}
# Default local MCP servers that ship with Hive. Seeded on first startup so
# fresh users get working file I/O, browser automation, and the hive tool
# suite without having to run `hive mcp add` manually. ``cwd`` is filled in
# at registration time with the absolute path to the ``tools/`` directory.
_DEFAULT_LOCAL_SERVERS: dict[str, dict[str, Any]] = {
"hive_tools": {
"description": "Hive tools: web search, email, CRM, calendar, and 100+ integrations",
"args": ["run", "python", "mcp_server.py", "--stdio"],
},
"gcu-tools": {
"description": "Browser automation: click, type, navigate, screenshot, snapshot",
"args": ["run", "python", "-m", "gcu.server", "--stdio"],
},
"files-tools": {
"description": "File I/O: read, write, edit, search, list, run commands",
"args": ["run", "python", "files_server.py", "--stdio"],
},
}
# Aliases that earlier versions of ensure_defaults wrote under the wrong name.
# When we see one of these stale entries, drop it before seeding the canonical
# name so the active agents (queen, credential_tester) can find their tools.
_STALE_DEFAULT_ALIASES: dict[str, str] = {
"hive_tools": "hive-tools",
}
class MCPRegistry:
"""Manages local MCP server state in ~/.hive/mcp_registry/."""
@@ -59,6 +83,67 @@ class MCPRegistry:
if not self._installed_path.exists():
self._write_json(self._installed_path, {"servers": {}})
def ensure_defaults(self) -> list[str]:
"""Seed the built-in local MCP servers (hive-tools, gcu-tools, files-tools).
Idempotent servers already present are left untouched. Skips seeding
entirely when the source-tree ``tools/`` directory cannot be located
(e.g. when Hive is installed from a wheel rather than a checkout).
Returns the list of names that were newly registered.
"""
self.initialize()
# parents: [0]=loader, [1]=framework, [2]=core, [3]=repo root
tools_dir = Path(__file__).resolve().parents[3] / "tools"
if not tools_dir.is_dir():
logger.debug(
"MCPRegistry.ensure_defaults: tools dir %s missing; skipping default seed",
tools_dir,
)
return []
cwd = str(tools_dir)
data = self._read_installed()
existing = data.get("servers", {})
added: list[str] = []
# Drop stale aliases (from earlier versions that wrote the wrong name).
# Only remove the alias when the canonical name isn't already installed,
# so we never clobber a hand-edited entry the user cares about.
mutated = False
for canonical, stale in _STALE_DEFAULT_ALIASES.items():
if stale in existing and canonical not in existing:
logger.info(
"MCPRegistry.ensure_defaults: removing stale alias '%s' (canonical: '%s')",
stale,
canonical,
)
del existing[stale]
mutated = True
if mutated:
self._write_installed(data)
for name, spec in _DEFAULT_LOCAL_SERVERS.items():
if name in existing:
continue
try:
self.add_local(
name=name,
transport="stdio",
command="uv",
args=list(spec["args"]),
cwd=cwd,
description=spec["description"],
)
added.append(name)
except MCPError as exc:
logger.warning("MCPRegistry.ensure_defaults: failed to seed '%s': %s", name, exc)
if added:
logger.info("MCPRegistry: seeded default local servers: %s", added)
return added
# ── Internal I/O ────────────────────────────────────────────────
def _read_installed(self) -> dict:
@@ -620,8 +705,7 @@ class MCPRegistry:
pinned_version = versions[name]
if installed_version != pinned_version:
logger.warning(
"Server '%s' version mismatch: installed=%s, pinned=%s. "
"Run: hive mcp update %s",
"Server '%s' version mismatch: installed=%s, pinned=%s. Run: hive mcp update %s",
name,
installed_version,
pinned_version,
+35 -30
View File
@@ -151,10 +151,7 @@ def _parse_key_value_pairs(values: list[str]) -> dict[str, str]:
result = {}
for item in values:
if "=" not in item:
raise ValueError(f"Invalid format: '{item}'. Expected KEY=VALUE.\nExample: --set JIRA_API_TOKEN=abc123")
key, _, value = item.partition("=")
if not key:
raise ValueError(f"Invalid format: '{item}'. Key cannot be empty.")
@@ -300,12 +297,8 @@ def register_mcp_commands(subparsers) -> None:
# ── install ──
install_p = mcp_sub.add_parser("install", help="Install a server from the registry")
install_p.add_argument("name", help="Server name in the registry")
install_p.add_argument("--version", dest="version", default=None, help="Pin to a specific version")
install_p.add_argument("--transport", default=None, help="Override default transport (stdio, http, unix, sse)")
install_p.set_defaults(func=cmd_mcp_install)
# ── add ──
@@ -342,9 +335,7 @@ def register_mcp_commands(subparsers) -> None:
# ── list ──
list_p = mcp_sub.add_parser("list", help="List servers")
list_p.add_argument("--available", action="store_true", help="Show available servers from registry")
list_p.add_argument("--json", dest="output_json", action="store_true", help="Output as JSON")
list_p.set_defaults(func=cmd_mcp_list)
@@ -364,9 +355,7 @@ def register_mcp_commands(subparsers) -> None:
metavar="KEY=VAL",
help="Set environment variable overrides",
)
config_p.add_argument("--set-header", dest="set_header", nargs="+", metavar="KEY=VAL", help="Set header overrides")
config_p.set_defaults(func=cmd_mcp_config)
# ── search ──
@@ -381,10 +370,15 @@ def register_mcp_commands(subparsers) -> None:
health_p.add_argument("--json", dest="output_json", action="store_true", help="Output as JSON")
health_p.set_defaults(func=cmd_mcp_health)
# ── init ──
init_p = mcp_sub.add_parser(
"init",
help="Initialize the local MCP registry and seed built-in servers",
)
init_p.set_defaults(func=cmd_mcp_init)
# ── update ──
update_p = mcp_sub.add_parser("update", help="Update installed servers or refresh the registry index")
update_p.add_argument(
"name",
nargs="?",
@@ -488,8 +482,7 @@ def _cmd_mcp_add_from_manifest(registry, manifest_path: str) -> int:
manifest = json.loads(path.read_text(encoding="utf-8"))
except json.JSONDecodeError as exc:
print(
f"Error: invalid JSON in {manifest_path}: {exc}\n"
f"Validate with: python -m json.tool {manifest_path}",
f"Error: invalid JSON in {manifest_path}: {exc}\nValidate with: python -m json.tool {manifest_path}",
file=sys.stderr,
)
return 1
@@ -688,8 +681,7 @@ def cmd_mcp_config(args) -> int:
server = registry.get_server(args.name)
if server is None:
print(
f"Error: server '{args.name}' is not installed.\n"
f"Run 'hive mcp list' to see installed servers.",
f"Error: server '{args.name}' is not installed.\nRun 'hive mcp list' to see installed servers.",
file=sys.stderr,
)
return 1
@@ -786,6 +778,23 @@ def cmd_mcp_health(args) -> int:
return 0
def cmd_mcp_init(args) -> int:
"""Initialize the local MCP registry and seed built-in local servers."""
registry = _get_registry()
try:
added = registry.ensure_defaults()
except Exception as exc:
print(f"Error: failed to initialize MCP registry: {exc}", file=sys.stderr)
return 1
if added:
for name in added:
print(f"✓ Registered {name}")
else:
print("✓ MCP registry already initialized (no changes)")
return 0
def cmd_mcp_update(args) -> int:
"""Update a single server, or refresh the index and update all registry servers."""
registry = _get_registry()
@@ -798,8 +807,7 @@ def cmd_mcp_update(args) -> int:
count = registry.update_index()
except Exception as exc:
print(
f"Error: failed to update registry index: {exc}\n"
f"Check your network connection and try again.",
f"Error: failed to update registry index: {exc}\nCheck your network connection and try again.",
file=sys.stderr,
)
return 1
@@ -808,9 +816,7 @@ def cmd_mcp_update(args) -> int:
# Step 2: update all installed registry servers (skip local/pinned)
installed = registry.list_installed()
registry_servers = [s for s in installed if s.get("source") == "registry" and not s.get("pinned")]
if not registry_servers:
return 0
@@ -838,8 +844,7 @@ def _cmd_mcp_update_server(name: str, registry=None) -> int:
server = registry.get_server(name)
if server is None:
print(
f"Error: server '{name}' is not installed.\n"
f"Run 'hive mcp install {name}' to install it.",
f"Error: server '{name}' is not installed.\nRun 'hive mcp install {name}' to install it.",
file=sys.stderr,
)
return 1
+1 -3
View File
@@ -98,9 +98,7 @@ def validate_credentials(
if not result.success:
# Preserve the original validation_result so callers can
# inspect which credentials are still missing.
exc = CredentialError("Credential setup incomplete. Run again after configuring the required credentials.")
if hasattr(e, "validation_result"):
exc.validation_result = e.validation_result # type: ignore[attr-defined]
if hasattr(e, "failed_cred_names"):
+257 -31
@@ -7,6 +7,7 @@ import inspect
import json
import logging
import os
import re
from collections.abc import Callable
from dataclasses import dataclass
from pathlib import Path
@@ -18,6 +19,16 @@ logger = logging.getLogger(__name__)
_INPUT_LOG_MAX_LEN = 500
# Tools whose names match this pattern are assumed to return ImageContent.
# Matched against the bare tool name (case-insensitive). Used to mark MCP
# tools with produces_image=True so they can be filtered out for text-only
# models before the schema is ever shown to the LLM (avoids wasted calls
# and "screenshot failed" entries polluting memory).
_IMAGE_TOOL_NAME_RE = re.compile(
r"(screenshot|screen_capture|capture_image|render_image|get_image|snapshot_image)",
re.IGNORECASE,
)
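A quick illustration of how the pattern classifies names (the sample names are invented; the behavior follows directly from the regex above):

```python
import re

_IMAGE_TOOL_NAME_RE = re.compile(
    r"(screenshot|screen_capture|capture_image|render_image|get_image|snapshot_image)",
    re.IGNORECASE,
)

# Case-insensitive substring match against the bare tool name.
assert _IMAGE_TOOL_NAME_RE.search("browser_screenshot")    # matches "screenshot"
assert _IMAGE_TOOL_NAME_RE.search("Get_Image_v2")          # matches "get_image"
assert not _IMAGE_TOOL_NAME_RE.search("browser_snapshot")  # "snapshot" alone is not listed
```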
# Per-execution context overrides. Each asyncio task (and thus each
# concurrent graph execution) gets its own copy, so there are no races
# when multiple ExecutionStreams run in parallel.
@@ -50,6 +61,33 @@ class ToolRegistry:
# and auto-injected at call time for tools that accept them.
CONTEXT_PARAMS = frozenset({"agent_id", "data_dir", "profile"})
# Tools that perform no filesystem/process/network writes and are safe
# to run concurrently with other safe tools in the same assistant turn.
# Unknown tools default to unsafe (serialized) - adding a name here is
# an explicit promise about that tool's side effects. Keep this list
# conservative: anything that mutates state, writes to disk, issues
# POST/PUT/DELETE requests, or drives a browser MUST NOT be listed.
CONCURRENCY_SAFE_TOOLS = frozenset(
{
# File system reads
"read_file",
"list_directory",
"grep",
"glob",
# Web reads
"web_search",
"web_fetch",
# Browser read-only snapshots (mutate-free observations)
"browser_screenshot",
"browser_snapshot",
"browser_console",
"browser_get_text",
# Background bash polling - reads output buffers only, does
# not touch the subprocess itself.
"bash_output",
}
)
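One way a scheduler can consume this flag (a sketch only, not the framework's actual dispatch loop; `Call` and `run_tool` are stand-ins) is to fan out a turn's safe calls with `asyncio.gather` and serialize everything else:

```python
import asyncio
from dataclasses import dataclass

SAFE = {"read_file", "grep", "web_fetch"}  # mirrors CONCURRENCY_SAFE_TOOLS

@dataclass
class Call:
    tool: str
    args: dict

async def run_tool(call: Call) -> object:
    ...  # stand-in for the real executor

async def run_turn(calls: list[Call]) -> list[object]:
    # Read-only calls fan out together; unknown or unsafe calls run
    # one at a time, preserving their original order.
    safe = [c for c in calls if c.tool in SAFE]
    unsafe = [c for c in calls if c.tool not in SAFE]
    results = list(await asyncio.gather(*(run_tool(c) for c in safe)))
    for c in unsafe:
        results.append(await run_tool(c))
    return results
```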
# Credential directory used for change detection
_CREDENTIAL_DIR = Path("~/.hive/credentials/credentials").expanduser()
@@ -66,9 +104,24 @@ class ToolRegistry:
self._mcp_cred_snapshot: set[str] = set() # Credential filenames at MCP load time
self._mcp_aden_key_snapshot: str | None = None # ADEN_API_KEY value at MCP load time
self._mcp_server_tools: dict[str, set[str]] = {} # server name -> tool names
# tool name -> owning MCPClient (for force-kill on timeout)
self._mcp_tool_clients: dict[str, Any] = {}
# Per-agent env injected into every MCP server config.env. Kept
# here (not on the process-wide os.environ) so parallel workers
# in the same interpreter don't clobber each other's identity.
self._mcp_extra_env: dict[str, str] = {}
# Agent dir for re-loading registry MCP after credential resync.
self._mcp_registry_agent_path: Path | None = None
def set_mcp_extra_env(self, env: dict[str, str]) -> None:
"""Attach per-agent env vars to every MCPServerConfig this registry builds.
Use this instead of mutating ``os.environ`` -- the global env dict
is shared across all workers in a single interpreter, so writes
from one worker race with MCP spawns from another.
"""
self._mcp_extra_env = dict(env)
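Intended usage, sketched (the env var name is hypothetical): each worker stamps its identity on its own registry rather than on the process environment:

```python
# Two workers in one interpreter keep separate identities because the
# env travels with each registry, not with os.environ.
registry_a = ToolRegistry()
registry_a.set_mcp_extra_env({"HIVE_WORKER_ID": "worker-a"})  # hypothetical var

registry_b = ToolRegistry()
registry_b.set_mcp_extra_env({"HIVE_WORKER_ID": "worker-b"})
# Every MCPServerConfig built by registry_a now carries worker-a's env,
# merged underneath the server's own env keys (see the merge below).
```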
def register(
self,
name: str,
@@ -137,6 +190,7 @@ class ToolRegistry:
"properties": properties,
"required": required,
},
concurrency_safe=tool_name in self.CONCURRENCY_SAFE_TOOLS,
)
def executor(inputs: dict) -> Any:
@@ -203,10 +257,7 @@ class ToolRegistry:
str(e),
)
return {
"error": (
f"Invalid JSON response from tool '{tool_name}': "
f"{str(e)}"
),
"error": (f"Invalid JSON response from tool '{tool_name}': {str(e)}"),
"raw_content": result.content,
}
return result
@@ -326,6 +377,9 @@ class ToolRegistry:
is_error=True,
)
# Expose force-kill hook so the timeout handler can tear down a
# hung MCP subprocess (asyncio.wait_for alone cannot).
executor.kill_for_tool = registry_ref.kill_mcp_for_tool # type: ignore[attr-defined]
return executor
def get_registered_names(self) -> list[str]:
@@ -372,15 +426,13 @@ class ToolRegistry:
"""Resolve cwd and script paths for MCP stdio config (Windows compatibility).
Use this when building MCPServerConfig from a config file (e.g. in
list_agent_tools, discover_mcp_tools) so hive-tools and other servers
list_agent_tools, discover_mcp_tools) so hive_tools and other servers
work on Windows. Call with base_dir = directory containing the config.
"""
registry = ToolRegistry()
return registry._resolve_mcp_server_config(server_config, base_dir)
def _resolve_mcp_server_config(
self, server_config: dict[str, Any], base_dir: Path
) -> dict[str, Any]:
def _resolve_mcp_server_config(self, server_config: dict[str, Any], base_dir: Path) -> dict[str, Any]:
"""Resolve cwd and script paths for MCP stdio servers (Windows compatibility).
On Windows, passing cwd to subprocess can cause WinError 267. We use cwd=None
@@ -445,12 +497,22 @@ class ToolRegistry:
config["cwd"] = str(resolved_cwd)
return config
# For coder_tools_server, inject --project-root so writes go to the expected workspace
# For coder_tools_server, inject --project-root so reads land
# in the expected workspace (hive repo, for framework skills
# and docs), and inject --write-root so writes land under
# ~/.hive/workspace/ instead of polluting the git checkout
# with queen-authored skills, ledgers, and scripts. Without
# the split, every ``write_file`` call from the queen landed
# in the hive repo root.
if script_name and "coder_tools" in script_name:
project_root = str(resolved_cwd.parent.resolve())
args = list(args)
if "--project-root" not in args:
args.extend(["--project-root", project_root])
if "--write-root" not in args:
_write_root = Path.home() / ".hive" / "workspace"
_write_root.mkdir(parents=True, exist_ok=True)
args.extend(["--write-root", str(_write_root)])
config["args"] = args
if os.name == "nt":
@@ -495,8 +557,7 @@ class ToolRegistry:
server_list = [{"name": name, **cfg} for name, cfg in config.items()]
resolved_server_list = [
self._resolve_mcp_server_config(server_config, base_dir)
for server_config in server_list
self._resolve_mcp_server_config(server_config, base_dir) for server_config in server_list
]
# Ordered first-wins for duplicate tool names across servers; keep tools.py tools.
self.load_registry_servers(
@@ -510,6 +571,8 @@ class ToolRegistry:
self._mcp_cred_snapshot = self._snapshot_credentials()
self._mcp_aden_key_snapshot = os.environ.get("ADEN_API_KEY")
self._log_registry_snapshot("after load_mcp_config")
def _register_mcp_server_with_retry(
self,
server_config: dict[str, Any],
@@ -518,8 +581,18 @@ class ToolRegistry:
tool_cap: int | None = None,
log_collisions: bool = False,
) -> tuple[bool, int, str | None]:
"""Register a single MCP server with one retry for transient failures."""
"""Register a single MCP server with one retry for transient failures.
When ``preserve_existing_tools=True`` and the server's tools are
already present from a prior registration, ``register_mcp_server``
returns ``count=0`` because every tool was shadowed. That's a
no-op success, not a failure -- don't retry / warn in that case.
Otherwise a duplicate-init path (e.g. a worker spawn re-loading
the MCP servers the queen already registered) spams shadow
warnings, sleeps 2s, and retries for no reason.
"""
name = server_config.get("name", "unknown")
already_loaded = bool(self._mcp_server_tools.get(name))
last_error: str | None = None
for attempt in range(2):
@@ -532,6 +605,10 @@ class ToolRegistry:
)
if count > 0:
return True, count, None
if already_loaded and preserve_existing_tools:
# All tools shadowed by the prior registration of
# the same server — nothing to do, server is usable.
return True, 0, None
last_error = "registered 0 tools"
except Exception as exc:
last_error = str(exc)
@@ -644,13 +721,17 @@ class ToolRegistry:
from framework.loader.mcp_client import MCPClient, MCPServerConfig
from framework.loader.mcp_connection_manager import MCPConnectionManager
# Build config object
# Build config object. Merge per-agent env on top of the
# server's own env so MCP subprocesses receive the identity
# of the worker that spawned them (instead of whichever
# worker most recently wrote to os.environ).
merged_env = {**self._mcp_extra_env, **(server_config.get("env") or {})}
config = MCPServerConfig(
name=server_config["name"],
transport=server_config["transport"],
command=server_config.get("command"),
args=server_config.get("args", []),
env=server_config.get("env", {}),
env=merged_env,
cwd=server_config.get("cwd"),
url=server_config.get("url"),
headers=server_config.get("headers", {}),
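Note the merge order above: the server's own ``env`` is spread last, so on a key collision an explicit per-server value beats the per-agent default. A tiny demonstration with invented values:

```python
extra = {"API_BASE": "https://worker-a.internal", "WORKER": "a"}  # per-agent env
server_env = {"API_BASE": "https://override.example"}             # server config env
merged = {**extra, **server_env}
assert merged == {
    "API_BASE": "https://override.example",  # server config wins the collision
    "WORKER": "a",                           # per-agent value fills the gap
}
```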
@@ -676,22 +757,37 @@ class ToolRegistry:
server_name = server_config["name"]
if server_name not in self._mcp_server_tools:
self._mcp_server_tools[server_name] = set()
# Build admission gate: only admit MCP tools that are either
# (a) credential-backed *and* have a configured account, or
# (b) credential-less *and* listed in the verified manifest.
# Servers that don't expose `__aden_verified_manifest` (third-party
# MCP servers) bypass the gate entirely — preserves prior behavior.
admit = self._build_mcp_admission_gate(client)
count = 0
admitted_names: list[str] = []
for mcp_tool in client.list_tools():
if not admit(mcp_tool.name):
continue
if tool_cap is not None and count >= tool_cap:
break
if preserve_existing_tools and mcp_tool.name in self._tools:
if log_collisions:
origin_server = (
self._find_mcp_origin_server_for_tool(mcp_tool.name) or "<existing>"
)
logger.warning(
"MCP tool '%s' from '%s' shadowed by '%s' (loaded first)",
mcp_tool.name,
server_name,
origin_server,
)
origin_server = self._find_mcp_origin_server_for_tool(mcp_tool.name) or "<existing>"
# Don't warn when a server is being re-registered
# by itself — that's a redundant-init case (e.g.
# the same tool_registry seeing the same server
# twice via pooled reconnect), not a real
# cross-server shadow worth flagging.
if origin_server != server_name:
logger.warning(
"MCP tool '%s' from '%s' shadowed by '%s' (loaded first)",
mcp_tool.name,
server_name,
origin_server,
)
# Skip registration; do not update MCP tool bookkeeping for this server.
continue
@@ -714,17 +810,11 @@ class ToolRegistry:
base_context.update(exec_ctx)
# Only inject context params the tool accepts
filtered_context = {
k: v for k, v in base_context.items() if k in tool_params
}
filtered_context = {k: v for k, v in base_context.items() if k in tool_params}
# Strip context params from LLM inputs — the framework
# values are authoritative (prevents the LLM from passing
# e.g. data_dir="/data" and overriding the real path).
clean_inputs = {
k: v
for k, v in inputs.items()
if k not in registry_ref.CONTEXT_PARAMS
}
clean_inputs = {k: v for k, v in inputs.items() if k not in registry_ref.CONTEXT_PARAMS}
merged_inputs = {**clean_inputs, **filtered_context}
result = client_ref.call_tool(tool_name, merged_inputs)
# MCP client already extracts content (returns str
@@ -757,7 +847,9 @@ class ToolRegistry:
make_mcp_executor(client, mcp_tool.name, self, tool_params),
)
self._mcp_tool_names.add(mcp_tool.name)
self._mcp_tool_clients[mcp_tool.name] = client
self._mcp_server_tools[server_name].add(mcp_tool.name)
admitted_names.append(mcp_tool.name)
count += 1
logger.info(
@@ -769,6 +861,12 @@ class ToolRegistry:
"skipped_reason": None,
},
)
logger.info(
"MCP server '%s' admitted %d tool(s): %s",
config.name,
len(admitted_names),
sorted(admitted_names),
)
return count
except Exception as e:
@@ -794,6 +892,104 @@ class ToolRegistry:
return server_name
return None
def _log_registry_snapshot(self, context: str) -> None:
"""Emit a one-line summary of the current tool registry.
Called after every tool-list mutation (initial load + resync) so that
operators can correlate "what tools does the queen have right now"
with credential changes and MCP server lifecycle events. Per-server
contents are already logged by `register_mcp_server`; this is just the
rollup so the resync path also gets a single anchor line.
"""
per_server_counts = {server: len(names) for server, names in self._mcp_server_tools.items()}
non_mcp_count = len(self._tools) - len(self._mcp_tool_names)
logger.info(
"ToolRegistry snapshot (%s): total=%d, mcp=%d, non_mcp=%d, per_server=%s",
context,
len(self._tools),
len(self._mcp_tool_names),
non_mcp_count,
per_server_counts,
)
_MCP_VERIFIED_MANIFEST_TOOL = "__aden_verified_manifest"
def _build_mcp_admission_gate(self, client: Any) -> Callable[[str], bool]:
"""Build a per-server predicate that filters MCP tools at registration.
Rules:
* The sentinel manifest tool itself is never admitted.
* Credential-backed tools (provider in `tool_provider_map`) are
admitted only when at least one account exists for that provider.
* Credential-less tools are admitted only when they appear in the
server's verified manifest.
* Servers that don't expose a manifest bypass the verified gate
entirely (third-party MCP servers behave as before).
"""
verified_names: set[str] = set()
manifest_present = False
# Only probe the sentinel when the server actually advertises it.
# Calling ``__aden_verified_manifest`` unconditionally on every
# MCP server at registration time (a) causes a bogus tool call
# round-trip to every third-party server, (b) pollutes any
# call-capturing fakes in tests, and (c) risks side effects on
# servers that eagerly execute unknown tool names. Listing is
# cheap and cached by the client; this keeps the manifest gate
# active for aden-flavoured servers without penalising others.
sentinel_advertised = False
try:
for t in client.list_tools():
if getattr(t, "name", None) == self._MCP_VERIFIED_MANIFEST_TOOL:
sentinel_advertised = True
break
except Exception:
sentinel_advertised = False
if sentinel_advertised:
try:
raw = client.call_tool(self._MCP_VERIFIED_MANIFEST_TOOL, {})
parsed: Any = raw
if isinstance(raw, str):
try:
parsed = json.loads(raw)
except json.JSONDecodeError:
parsed = None
# Only treat the response as a manifest when it's a list
# of strings. A malformed response shouldn't flip the gate
# on and silently hide every real tool from the server.
if isinstance(parsed, list) and all(isinstance(n, str) for n in parsed):
verified_names = set(parsed)
manifest_present = True
except Exception:
# Server advertised the sentinel but errored when called
# — treat as no manifest; fall back to third-party bypass.
pass
tool_provider_map: dict[str, str] = {}
live_providers: set[str] = set()
try:
from aden_tools.credentials.store_adapter import CredentialStoreAdapter
adapter = CredentialStoreAdapter.default()
tool_provider_map = adapter.get_tool_provider_map()
live_providers = {a.get("provider", "") for a in adapter.get_all_account_info() if a.get("provider")}
except Exception:
logger.debug("Credential snapshot unavailable for MCP gate", exc_info=True)
def admit(tool_name: str) -> bool:
if tool_name == self._MCP_VERIFIED_MANIFEST_TOOL:
return False
provider = tool_provider_map.get(tool_name)
if provider:
# Credentialed tool — needs an account.
return provider in live_providers
if not manifest_present:
# Third-party MCP server: preserve legacy "admit everything".
return True
return tool_name in verified_names
return admit
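To make the rules concrete, here is the same predicate logic replayed against invented inputs (tool and provider names are illustrative only):

```python
tool_provider_map = {"gmail_send": "gmail", "slack_post": "slack"}
live_providers = {"gmail"}            # only gmail has a configured account
manifest_present = True
verified_names = {"web_search"}

def admit(tool_name: str) -> bool:
    provider = tool_provider_map.get(tool_name)
    if provider:
        return provider in live_providers   # credentialed: needs an account
    if not manifest_present:
        return True                         # third-party bypass
    return tool_name in verified_names      # credential-less: must be verified

assert admit("gmail_send")        # credentialed, account exists
assert not admit("slack_post")    # credentialed, no slack account
assert admit("web_search")        # credential-less, in the manifest
assert not admit("random_tool")   # credential-less, not in the manifest
```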
def _convert_mcp_tool_to_framework_tool(self, mcp_tool: Any) -> Tool:
"""
Convert an MCP tool to a framework Tool.
@@ -823,6 +1019,8 @@ class ToolRegistry:
"properties": properties,
"required": required,
},
produces_image=bool(_IMAGE_TOOL_NAME_RE.search(mcp_tool.name or "")),
concurrency_safe=mcp_tool.name in self.CONCURRENCY_SAFE_TOOLS,
)
return tool
@@ -970,6 +1168,7 @@ class ToolRegistry:
self.reload_registry_mcp_servers_after_resync()
logger.info("MCP server resync complete")
self._log_registry_snapshot("after resync_mcp_servers_if_needed")
return True
def cleanup(self) -> None:
@@ -996,6 +1195,33 @@ class ToolRegistry:
self._mcp_clients.clear()
self._mcp_client_servers.clear()
self._mcp_managed_clients.clear()
self._mcp_tool_clients.clear()
def kill_mcp_for_tool(self, tool_name: str) -> bool:
"""Force-disconnect the MCP client that owns *tool_name*.
Called from the timeout handler in ``execute_tool`` when a tool
call hangs. Plain ``asyncio.wait_for`` cancellation cannot stop
a sync executor running inside a thread pool (and therefore
cannot stop the MCP subprocess), so we reach through to the
client here and tear it down. The next ``call_tool`` triggers
an automatic reconnect.
Returns True if a client was found and disconnect was attempted.
"""
client = self._mcp_tool_clients.get(tool_name)
if client is None:
return False
try:
logger.warning(
"Force-disconnecting MCP client for hung tool '%s' on server '%s'",
tool_name,
getattr(client.config, "name", "?"),
)
client.disconnect()
except Exception as exc:
logger.warning("Error force-disconnecting MCP client for '%s': %s", tool_name, exc)
return True
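The consuming side lives in the tool-timeout handler; sketched below under assumptions (`executor` is the sync callable registered above, while `call_with_timeout` and `timeout_s` are illustrative shapes, not the real `execute_tool`):

```python
import asyncio

async def call_with_timeout(executor, tool_name: str, inputs: dict, timeout_s: float):
    try:
        return await asyncio.wait_for(asyncio.to_thread(executor, inputs), timeout_s)
    except asyncio.TimeoutError:
        # wait_for only cancels the awaitable -- a sync executor stuck in a
        # worker thread (and its MCP subprocess) keeps running. Reach for the
        # hook attached at registration time and tear the client down.
        kill = getattr(executor, "kill_for_tool", None)
        if kill is not None:
            kill(tool_name)
        raise
```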
def __del__(self):
"""Destructor to ensure cleanup."""
+14 -2
@@ -7,21 +7,33 @@ Lazy imports to avoid circular dependencies with graph/event_loop/*.
def __getattr__(name: str):
if name in ("GraphContext",):
from framework.orchestrator.context import GraphContext
return GraphContext
if name in ("DEFAULT_MAX_TOKENS", "EdgeCondition", "EdgeSpec", "GraphSpec"):
from framework.orchestrator import edge as _e
return getattr(_e, name)
if name in ("Orchestrator", "ExecutionResult"):
from framework.orchestrator import orchestrator as _o
return getattr(_o, name)
if name in ("Constraint", "Goal", "GoalStatus", "SuccessCriterion"):
from framework.orchestrator import goal as _g
return getattr(_g, name)
if name in ("DataBuffer", "NodeContext", "NodeProtocol", "NodeResult", "NodeSpec"):
from framework.orchestrator import node as _n
return getattr(_n, name)
if name in ("NodeWorker", "Activation", "FanOutTag", "FanOutTracker",
"WorkerCompletion", "WorkerLifecycle"):
if name in (
"NodeWorker",
"Activation",
"FanOutTag",
"FanOutTracker",
"WorkerCompletion",
"WorkerLifecycle",
):
from framework.orchestrator import node_worker as _nw
return getattr(_nw, name)
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
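This is the PEP 562 module-level ``__getattr__`` pattern; a stripped-down version for reference (module and attribute names are placeholders):

```python
# some_package/__init__.py -- attributes resolve lazily on first access,
# so importing the package never pulls in the heavy submodule (and no
# circular import fires at package-import time).
def __getattr__(name: str):
    if name == "HeavyThing":
        from some_package import _heavy  # placeholder submodule
        return _heavy.HeavyThing
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```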
@@ -50,11 +50,7 @@ class CheckpointConfig:
Returns:
True if should check for old checkpoints and prune them
"""
return (
self.enabled
and self.prune_every_n_nodes > 0
and nodes_executed % self.prune_every_n_nodes == 0
)
return self.enabled and self.prune_every_n_nodes > 0 and nodes_executed % self.prune_every_n_nodes == 0
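Worked example of the cadence (a sketch: the method name and constructor fields are inferred from the snippet above, and the values are invented). With ``prune_every_n_nodes=25``, pruning triggers on nodes 25, 50, 75, and so on:

```python
cfg = CheckpointConfig(enabled=True, prune_every_n_nodes=25)
assert cfg.should_prune(nodes_executed=24) is False
assert cfg.should_prune(nodes_executed=25) is True   # every 25th node
assert cfg.should_prune(nodes_executed=50) is True
```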
# Default configuration for most agents
+3 -1
@@ -89,10 +89,12 @@ class ActiveNodeClientIO(NodeClientIO):
self._input_result = None
if self._event_bus is not None:
# `prompt` is consumed by the caller separately (callers emit
# it as a text delta when needed). The event only carries the
# structured questions payload for widget rendering.
await self._event_bus.emit_client_input_requested(
stream_id=self.node_id,
node_id=self.node_id,
prompt=prompt,
execution_id=self._execution_id or None,
)
+1 -3
@@ -175,9 +175,7 @@ def _resolve_available_tools(
return always_tools
declared = set(node_spec.tools)
declared_tools = [
t for t in tools if t.name in declared and t.name not in _ALWAYS_AVAILABLE_TOOLS
]
declared_tools = [t for t in tools if t.name in declared and t.name not in _ALWAYS_AVAILABLE_TOOLS]
return always_tools + declared_tools
@@ -169,11 +169,7 @@ class ContextHandoff:
key_hint = ""
if output_keys:
key_hint = (
"\nThe following output keys are especially important: "
+ ", ".join(output_keys)
+ ".\n"
)
key_hint = "\nThe following output keys are especially important: " + ", ".join(output_keys) + ".\n"
system_prompt = (
"You are a concise summarizer. Given the conversation below, "
+5 -14
@@ -186,8 +186,7 @@ class EdgeSpec(BaseModel):
expr_vars = {
k: repr(context[k])
for k in context
if k not in ("output", "buffer", "result", "true", "false")
and k in self.condition_expr
if k not in ("output", "buffer", "result", "true", "false") and k in self.condition_expr
}
logger.info(
" Edge %s: condition '%s'%s (vars: %s)",
@@ -333,12 +332,8 @@ class GraphSpec(BaseModel):
default_factory=dict,
description="Named entry points for resuming execution. Format: {name: node_id}",
)
terminal_nodes: list[str] = Field(
default_factory=list, description="IDs of nodes that end execution"
)
pause_nodes: list[str] = Field(
default_factory=list, description="IDs of nodes that pause execution for HITL input"
)
terminal_nodes: list[str] = Field(default_factory=list, description="IDs of nodes that end execution")
pause_nodes: list[str] = Field(default_factory=list, description="IDs of nodes that pause execution for HITL input")
# Components
nodes: list[Any] = Field( # NodeSpec, but avoiding circular import
@@ -347,9 +342,7 @@ class GraphSpec(BaseModel):
edges: list[EdgeSpec] = Field(default_factory=list, description="All edge specifications")
# Data buffer keys
buffer_keys: list[str] = Field(
default_factory=list, description="Keys available in data buffer"
)
buffer_keys: list[str] = Field(default_factory=list, description="Keys available in data buffer")
# Default LLM settings
default_model: str = "claude-haiku-4-5-20251001"
@@ -557,9 +550,7 @@ class GraphSpec(BaseModel):
fan_outs = self.detect_fan_out_nodes()
for source_id, targets in fan_outs.items():
event_loop_targets = [
t
for t in targets
if self.get_node(t) and getattr(self.get_node(t), "node_type", "") == "event_loop"
t for t in targets if self.get_node(t) and getattr(self.get_node(t), "node_type", "") == "event_loop"
]
if len(event_loop_targets) > 1:
seen_keys: dict[str, str] = {}
+141 -155
@@ -1,12 +1,19 @@
"""Browser automation best-practices prompt.
This module provides ``GCU_BROWSER_SYSTEM_PROMPT`` -- a canonical set of
This module provides ``GCU_BROWSER_SYSTEM_PROMPT``, a canonical set of
browser automation guidelines that can be included in any node's system
prompt that uses browser tools from the gcu-tools MCP server.
Browser tools are registered via the global MCP registry (gcu-tools).
Nodes that need browser access declare ``tools: {policy: "all"}`` in their
agent.json config.
Note: the canonical source of truth for browser automation guidance is
the ``browser-automation`` preset skill at
``core/framework/skills/_preset_skills/browser-automation/SKILL.md``.
Activate that skill for the full decision tree. This module holds a
compact subset suitable for direct inlining into a node's system prompt
when a skill activation is not desired.
"""
GCU_BROWSER_SYSTEM_PROMPT = """\
@@ -14,172 +21,151 @@ GCU_BROWSER_SYSTEM_PROMPT = """\
Follow these rules for reliable, efficient browser interaction.
## Reading Pages
- ALWAYS prefer `browser_snapshot` over `browser_get_text("body")` --
it returns a compact ~1-5 KB accessibility tree vs 100+ KB of raw HTML.
- Interaction tools (`browser_click`, `browser_type`, `browser_fill`,
`browser_scroll`, etc.) return a page snapshot automatically in their
result. Use it to decide your next action -- do NOT call
`browser_snapshot` separately after every action.
Only call `browser_snapshot` when you need a fresh view without
performing an action, or after setting `auto_snapshot=false`.
- Do NOT use `browser_screenshot` to read text -- use
`browser_snapshot` for that (compact, searchable, fast).
- DO use `browser_screenshot` when you need visual context:
charts, images, canvas elements, layout verification, or when
the snapshot doesn't capture what you need.
- Only fall back to `browser_get_text` for extracting specific
small elements by CSS selector.
## Pick the right reading tool
## Navigation & Waiting
- `browser_navigate` and `browser_open` already wait for the page to
load (`domcontentloaded`). Do NOT call `browser_wait` with no
arguments after navigation -- it wastes time.
Only use `browser_wait` when you need a *specific element* or *text*
to appear (pass `selector` or `text`).
- NEVER re-navigate to the same URL after scrolling --
this resets your scroll position and loses loaded content.
- **`browser_snapshot`** -- compact accessibility tree. Fast, cheap, good
for static / text-heavy pages where the DOM matches what's visually
rendered (docs, forms, search results, settings pages).
- **`browser_screenshot`** -- visual capture + scale metadata. Use when
the snapshot does not show the thing you need, when refs look stale,
or when you need visual position/layout to act. This is common on
complex SPAs (LinkedIn, X / Twitter, Reddit, Gmail, Notion, Slack,
Discord), shadow DOM, and virtual scrolling.
Use snapshot first for structure and ordinary controls; switch to
screenshot when snapshot can't find or verify the target. Interaction
tools (`browser_click`, `browser_type`, `browser_type_focused`,
`browser_fill`, `browser_scroll`) wait 0.5 s for the page to settle
after a successful action, then attach a fresh snapshot under the
`snapshot` key of their result -- so don't call `browser_snapshot`
separately after an interaction unless you need a newer view. Tune
with `auto_snapshot_mode`: `"default"` (full tree) is the default;
`"simple"` trims unnamed structural nodes; `"interactive"` returns
only controls (tightest token footprint); `"off"` skips the capture
entirely use when batching several interactions.
Only fall back to `browser_get_text` for extracting small elements by
CSS selector.
## Coordinates
Every browser tool that takes or returns coordinates operates in
**fractions of the viewport (0..1 for both axes)**. Read a target's
proportional position off `browser_screenshot` -- "this button is
~35% from the left, ~20% from the top" → pass `(0.35, 0.20)`.
`browser_get_rect` and `browser_shadow_query` return `rect.cx` /
`rect.cy` as fractions in the same space. The tools handle the
fraction CSS-px multiplication internally; you do not need to
track image pixels, DPR, or any scale factor.
Why fractions: every vision model (Claude, GPT-4o, Gemini, local
VLMs) resizes or tiles images differently before the model sees the
pixels. Proportions survive every such transform; pixel coordinates
only "work" per-model and break when you swap backends.
Avoid raw `browser_evaluate` + `getBoundingClientRect()` for coord
lookup -- that returns CSS px and will be wrong when fed to click
tools. Prefer `browser_get_rect` / `browser_shadow_query`, which
return fractions.
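An illustrative end-to-end pass in fraction space (the numbers are invented; tool names as above):

```
browser_screenshot()                    # target looks ~40% from left, ~62% from top
browser_click_coordinate(0.40, 0.62)    # pass the fractions straight through
rect = browser_get_rect("#submit")      # rect.cx / rect.cy come back as fractions
browser_hover_coordinate(rect.cx, rect.cy)
```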
## Rich-text editors (X, LinkedIn DMs, Gmail, Reddit, Slack, Discord)
Click the input area first with `browser_click_coordinate` or
`browser_click(selector)` BEFORE typing. React / Draft.js / Lexical /
ProseMirror only register input as "real" after a native pointer-
sourced focus event; JS `.focus()` is not enough. Without a real click
first, the editor stays empty and the send button stays disabled.
`browser_type` does this automatically when you have a selector -- it
clicks the element, then inserts text via CDP `Input.insertText`.
For shadow-DOM inputs where selectors can't reach, use
`browser_click_coordinate` to focus, then `browser_type_focused(text=...)`
to type into the active element. Before clicking send, verify the
submit button's `disabled` / `aria-disabled` state via `browser_evaluate`.
## Shadow DOM
Sites like LinkedIn messaging (`#interop-outlet`), Reddit (faceplate
Web Components), and some X elements live inside shadow roots.
`document.querySelector` and `wait_for_selector` do **not** see into
shadow roots. But `browser_click_coordinate` **does** CDP hit
testing walks shadow roots natively, so coordinate-based operations
reach shadow elements transparently.
**Shadow-heavy site workflow:**
1. `browser_screenshot()` → visual image
2. Identify target visually → pixel `(x, y)` read straight off the image
3. `browser_click_coordinate(x, y)` → lands via native hit test;
inputs get focused regardless of shadow depth
4. Type via `browser_type_focused` (no selector needed -- types into the
already-focused element), or `browser_type` if you have a selector
For selector-style access when you know the shadow path:
`browser_shadow_query("#interop-outlet >>> #msg-overlay >>> p")`
returns a CSS-px rect you can feed directly to click tools.
## Navigation & waiting
- `browser_navigate(wait_until="load")` returns when the page fires
load. On SPAs (LinkedIn especially -- 4-5 seconds), add a 2-3 s sleep
after to let React/Vue hydrate before querying for chrome elements.
- Never re-navigate to the same URL after scrolling -- resets scroll.
- Use `timeout_ms=20000` for heavy SPAs.
- `wait_for_selector` / `wait_for_text` resolve in milliseconds when
the element is already in the DOM no need to sleep if you can
express the wait condition.
## Keyboard shortcuts
`browser_press("a", modifiers=["ctrl"])` for Ctrl+A. Accepted
modifiers: `"alt"`, `"ctrl"`/`"control"`, `"meta"`/`"cmd"`,
`"shift"`. The tool dispatches the modifier key first, then the main
key with `code` and `windowsVirtualKeyCode` populated (Chrome's
shortcut dispatcher requires both), then releases in reverse order.
## Scrolling
- Use large scroll amounts (~2000) when loading more content --
sites like twitter and linkedin have lazy loading for paging.
- The scroll result includes a snapshot automatically -- no need to call
`browser_snapshot` separately.
## Batching Actions
- You can call multiple tools in a single turn -- they execute in parallel.
ALWAYS batch independent actions together. Examples:
- Fill multiple form fields in one turn.
- Navigate + snapshot in one turn.
- Click + scroll if targeting different elements.
- When batching, set `auto_snapshot=false` on all but the last action
to avoid redundant snapshots.
- Aim for 3-5 tool calls per turn minimum. One tool call per turn is
wasteful.
- Use large amounts (~2000 px) for lazy-loaded sites (X, LinkedIn).
- Scroll result includes a snapshot -- don't call `browser_snapshot`
separately.
## Error Recovery
- If a tool fails, retry once with the same approach.
- If it fails a second time, STOP retrying and switch approach.
- If `browser_snapshot` fails -- try `browser_get_text` with a
specific small selector as fallback.
- If `browser_open` fails or page seems stale -- `browser_stop`,
then `browser_start`, then retry.
## Batching
## Tab Management
- Multiple tool calls per turn execute in parallel. Batch independent
actions together: fill multiple fields, navigate + snapshot,
different-target click + scroll.
- Set `auto_snapshot=false` on all but the last when batching.
- Aim for 3-5 tool calls per turn minimum.
**Close tabs as soon as you are done with them** -- not only at the end of the task.
After reading or extracting data from a tab, close it immediately.
## Tab management
**Decision rules:**
- Finished reading/extracting from a tab? → `browser_close(target_id=...)`
- Completed a multi-tab workflow? → `browser_close_finished()` to clean up all your tabs
- More than 3 tabs open? → stop and close finished ones before opening more
- Popup appeared that you didn't need? → close it immediately
Close tabs as soon as you're done with them — not only at the end of
the task. `browser_close(target_id=...)` for one, `browser_close_finished()`
for a full cleanup. Never accumulate more than 3 open tabs.
`browser_tabs` reports an `origin` field: `"agent"` (you own it, close
when done), `"popup"` (close after extracting), `"startup"`/`"user"`
(leave alone).
**Origin awareness:** `browser_tabs` returns an `origin` field for each tab:
- `"agent"` you opened it; you own it; close it when done
- `"popup"` opened by a link or script; close after extracting what you need
- `"startup"` or `"user"` leave these alone unless the task requires it
## Login & auth walls
**Cleanup tools:**
- `browser_close(target_id=...)` -- close one specific tab
- `browser_close_finished()` -- close all your agent/popup tabs (safe: leaves startup/user tabs)
- `browser_close_all()` -- close everything except the active tab (use only for full reset)
Report the auth wall and stop -- do NOT attempt to log in. Dismiss
cookie consent banners if they block content.
**Multi-tab workflow pattern:**
1. Open background tabs with `browser_open(url=..., background=true)` to stay on current tab
2. Process each tab and close it with `browser_close` when done
3. When the full workflow completes, call `browser_close_finished()` to confirm cleanup
4. Check `browser_tabs` at any point -- it shows `origin` and `age_seconds` per tab
## Error recovery
Never accumulate tabs. Treat every tab you open as a resource you must free.
- Retry once on failure, then switch approach.
- If `browser_snapshot` fails, try `browser_get_text` with a narrow
selector as fallback.
- If `browser_open` fails or the page seems stale, `browser_stop` →
`browser_start` → retry.
## Shadow DOM & Overlays
## `browser_evaluate`
Some sites (LinkedIn messaging, etc.) render content inside closed shadow roots that are
invisible to regular DOM queries and `browser_snapshot` coordinates.
**Detecting shadow DOM**: `document.elementFromPoint(x, y)` returns a zero-height host element
(e.g. `#interop-outlet`) for the entire overlay area — this is normal, not a bug.
`document.body.innerText` and `document.querySelectorAll` return nothing for shadow content.
`browser_snapshot` CAN read shadow DOM text but cannot return coordinates.
**Querying into shadow DOM:**
```
browser_shadow_query("#interop-outlet >>> #msg-overlay >>> p")
```
Uses `>>>` to pierce shadow roots. Returns `rect` in CSS pixels and `physicalRect` ready for
`browser_click_coordinate` / `browser_hover_coordinate`.
**Getting physical rect for any element (including shadow DOM):**
```
browser_get_rect(selector="#interop-outlet >>> .msg-convo-wrapper", pierce_shadow=true)
```
**Manual JS traversal when selector is dynamic:**
```js
const shadow = document.getElementById('interop-outlet').shadowRoot;
const convo = shadow.querySelector('#ember37');
const rect = convo.querySelector('p').getBoundingClientRect();
// rect is in CSS pixels -- multiply by DPR for physical pixels
```
Pass this as a multi-statement script to `browser_evaluate`; it wraps automatically in an IIFE.
Use `JSON.stringify(rect)` to serialize the result.
## Coordinate System
There are THREE coordinate spaces. Using the wrong one causes clicks/hovers to land in the
wrong place.
| Space | Used by | How to get |
|---|---|---|
| Physical pixels | `browser_click_coordinate` | `browser_coords` → `physical_x/y` |
| CSS pixels | `getBoundingClientRect()`, `elementFromPoint` | `browser_coords` → `css_x/y` |
| Screenshot pixels | What you see in the 800px image | Raw position in screenshot |
**Converting screenshot → physical**: `browser_coords(x, y)` → use `physical_x/y`.
**Converting CSS → physical**: multiply by `window.devicePixelRatio` (typically 1.6 on HiDPI).
**Never** pass raw `getBoundingClientRect()` values to `browser_hover_coordinate` without
multiplying by DPR first.
## Screenshots
Screenshot data is base64-encoded PNG. To view it:
```
run_command("echo '<base64_data>' | base64 -d > /tmp/screenshot.png")
```
Then use `read_file("/tmp/screenshot.png")` to view the image.
Always use `full_page=false` (default) unless you specifically need the full scrolled page.
## JavaScript Evaluation
`browser_evaluate` wraps your script in an IIFE automatically:
- Single expression (`document.title`) → wrapped with `return`
- Multi-statement or contains `;`/`\n` → wrapped without return (add explicit `return` yourself)
- Already an IIFE → run as-is
**Avoid**: complex closures with `return` inside `for` loops -- Chrome CDP returns `null`.
**Use instead**: `Array.from(...).map(...).join(...)` chains, or build result objects and
`JSON.stringify()` them.
**For shadow DOM traversal with dynamic selectors**, write the full JS path:
```js
const s = document.getElementById('interop-outlet').shadowRoot;
const el = s.querySelector('.msg-convo-wrapper');
return JSON.stringify(el.getBoundingClientRect());
```
## Login & Auth Walls
- If you see a "Log in" or "Sign up" prompt instead of expected
content, report the auth wall immediately -- do NOT attempt to log in.
- Check for cookie consent banners and dismiss them if they block content.
## Efficiency
- Minimize tool calls -- combine actions where possible.
- When a snapshot result is saved to a spillover file, use
`run_command` with grep to extract specific data rather than
re-reading the full file.
- Call `set_output` in the same turn as your last browser action
when possible -- don't waste a turn.
Use for reading state inside a shadow root that standard tools don't
handle, for one-shot site-specific actions, or to measure layout the
tools don't expose. Do NOT use it on a strict-CSP site (LinkedIn,
some X surfaces) with `innerHTML` -- Trusted Types silently drops the
assignment. Always use `createElement` + `appendChild` + `setAttribute`
for DOM injection on those sites. `style.cssText`, `textContent`, and
`.value` assignments are fine.
"""

Some files were not shown because too many files have changed in this diff.