Compare commits

...

407 Commits

Author SHA1 Message Date
Timothy 736756b257 chore: fix test 2026-03-20 21:22:29 -07:00
Timothy 90efe7009d chore: lint 2026-03-20 21:13:22 -07:00
Timothy 4adb369bde chore: lint 2026-03-20 21:12:03 -07:00
Timothy d4a30eb2f3 feat: image model fallback 2026-03-20 20:18:07 -07:00
Timothy 94bb4a2984 Merge branch 'main' into feat/image-capabilities 2026-03-20 18:42:55 -07:00
Timothy 648bad26ed feat: user input image content 2026-03-20 18:40:28 -07:00
RichardTang-Aden f0c7470f3d Merge pull request #6663 from sundaram2021/fix/missing-minimax-option-on-windows
fix: minimax option in powershell quickstart
2026-03-20 17:00:11 -07:00
RichardTang-Aden fe533b72a6 Merge pull request #6648 from levxn/main
Antigravity subscription support as an LLM provider
2026-03-20 16:52:38 -07:00
Richard Tang e581767cab chore: ruff lint 2026-03-20 16:50:50 -07:00
Richard Tang 0663ee5950 feat: validate the existing credentials before auth 2026-03-20 16:45:56 -07:00
Richard Tang 4b97baa34b feat: native google oauth for antigravity support 2026-03-20 16:40:15 -07:00
levxn a89296d397 lint fix 2026-03-21 02:35:09 +05:30
Levin d568912ba2 Merge branch 'aden-hive:main' into main 2026-03-21 01:32:13 +05:30
Levin c4d7980058 Merge pull request #1 from levxn/subscription/antigravity
Subscription/antigravity
2026-03-21 01:30:27 +05:30
Timothy @aden 8549fe8238 Merge pull request #6635 from vakrahul/fix/skill-structured-errors-6366
feat: structured skill error codes and diagnostics (closes #6366)
2026-03-20 12:45:35 -07:00
levxn 2b8d85bb95 fixing tool calling issue: Antigravity models expect a thought_signature in functionCall parts, otherwise requests fail with a 400 invalid-arguments error 2026-03-20 23:26:50 +05:30
levxn 07f7801166 test v1 2026-03-20 22:32:30 +05:30
Levin 1f12a45151 Merge branch 'aden-hive:main' into main 2026-03-20 22:01:22 +05:30
Arshad Uzzama Shaik 936e02e8e6 fix(security): prevent symlink-based sandbox escape in get_secure_path (closes #1167) (#5635)
* fix(security): prevent symlink-based sandbox escape in get_secure_path (closes #1167)

* style: apply ruff formatting to tools to satisfy CI

---------

Co-authored-by: Arshad Shaik <arshad.shaik@violetis.ai>
2026-03-20 19:16:47 +08:00
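The commit above describes closing a symlink-based sandbox escape in get_secure_path. A minimal sketch of the general containment technique (resolve symlinks, then verify the result stays under the sandbox root); the function signature and error type here are illustrative assumptions, not the repository's actual API:

```python
# Hedged sketch of symlink-safe path containment. resolve() follows
# symlinks, so a link pointing outside the sandbox is rejected before
# any file access happens. Names are illustrative only.
from pathlib import Path


def get_secure_path(user_path: str, sandbox_root: str) -> Path:
    root = Path(sandbox_root).resolve()
    candidate = (root / user_path).resolve()
    try:
        candidate.relative_to(root)  # raises ValueError if outside root
    except ValueError:
        raise PermissionError(f"path escapes sandbox: {user_path}")
    return candidate
```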
Hundao d59fe1e109 fix(graph): remove dead check_constraint placeholder (#6660)
Never called anywhere in the codebase. Constraints are enforced
via prompt context, not runtime validation.
2026-03-20 18:44:18 +08:00
Sundaram Kumar Jha 274318d3e5 fix: minimax option in powershell quickstart 2026-03-20 15:33:26 +05:30
Anurag Kumar 0f0884c2e0 fix(tools): handle non-HTML content and add PDF URL support (#438)
* feat(tools): add URL support to pdf_read tool

Enable pdf_read to accept both local file paths and HTTP/HTTPS URLs.
Downloads PDF content to temporary file when URL is provided, validates
content-type, and cleans up automatically after extraction.

- Detect URL inputs (http:// or https://)
- Download PDF with httpx (60s timeout)
- Validate Content-Type is application/pdf
- Use temporary file for URL-based PDFs
- Automatic cleanup in finally block
- Maintains backward compatibility with local paths

Completes the workflow: web_scrape error on PDF → pdf_read from URL

* test(tools): Add test coverage for new features in web_scrape and pdf_read tools

* style: fix lint issues in pdf_read URL support

---------

Co-authored-by: Anurag <anuragkr-codes@users.noreply.github.com>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-20 16:36:25 +08:00
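The pdf_read changes above spell out a concrete flow: detect URL input, download with httpx under a 60s timeout, validate the Content-Type, extract from a temporary file, and clean up in a finally block. A minimal sketch of that flow, assuming a simple path-or-URL entry point; the function and extractor names are hypothetical:

```python
# Hedged sketch of the URL-handling flow described in the commit body.
# extract_text_from_pdf is a placeholder for the tool's real extractor.
import os
import tempfile

import httpx


def read_pdf(path_or_url: str) -> str:
    tmp_path = None
    try:
        if path_or_url.startswith(("http://", "https://")):
            resp = httpx.get(path_or_url, timeout=60.0, follow_redirects=True)
            resp.raise_for_status()
            if "application/pdf" not in resp.headers.get("content-type", ""):
                raise ValueError("URL did not return a PDF")
            fd, tmp_path = tempfile.mkstemp(suffix=".pdf")
            with os.fdopen(fd, "wb") as fh:
                fh.write(resp.content)
            local_path = tmp_path
        else:
            local_path = path_or_url  # backward-compatible local file path
        return extract_text_from_pdf(local_path)  # hypothetical extractor
    finally:
        if tmp_path and os.path.exists(tmp_path):
            os.unlink(tmp_path)  # automatic cleanup for URL-based PDFs
```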
Timothy @aden 764012c598 Merge pull request #6652 from aden-hive/feature/absolutely-parallel
fix: parallel subagent execution display, session resume bugs, and GCU termination
2026-03-19 20:21:47 -07:00
Timothy fd4dc1a69a fix: google_sheets JSON parse error before credentials check
Move _get_client() before JSON deserialization so missing-credentials
errors aren't masked by input validation. Wrap json.loads in try/except
for non-JSON string inputs.
2026-03-19 20:13:18 -07:00
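The ordering fix above (resolve credentials before parsing input, and parse string input defensively) can be illustrated with a small sketch; _get_client and the tool signature are assumptions taken from the commit message, not the actual code:

```python
# Hedged sketch: get the client first so a missing-credentials error is
# not masked by a JSON parse error, then wrap json.loads for non-JSON
# string inputs.
import json


def google_sheets_append(rows, spreadsheet_id: str):
    client = _get_client()  # raises a clear missing-credentials error first
    if isinstance(rows, str):
        try:
            rows = json.loads(rows)
        except json.JSONDecodeError:
            raise ValueError("rows must be a JSON array or a list of lists")
    return client.append(spreadsheet_id, rows)
```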
Timothy 377cd39c2a chore: lint 2026-03-19 20:07:42 -07:00
Timothy e92caeef24 fix: line too long in google_sheets_tool
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 20:06:31 -07:00
Timothy @aden b7e6226478 Update asset link in README.md 2026-03-19 19:41:19 -07:00
Timothy a995818db2 fix: subagent bubble boundary 2026-03-19 17:57:33 -07:00
Timothy 0772b4d300 feat: better subagent interleave logic 2026-03-19 16:58:34 -07:00
Timothy 684e0d8dc6 fix: no memory consolidation for worker 2026-03-19 16:58:00 -07:00
Timothy d284c5d790 feat: parallel execution display 2026-03-19 15:25:21 -07:00
Timothy 7a9b9666c4 fix: refresh system prompt with preamble 2026-03-19 15:25:04 -07:00
Timothy a852cb91bf fix: non-blocking memory consolidation 2026-03-19 15:24:30 -07:00
Timothy 2f21e9eb4b fix: session reload preamble 2026-03-19 15:24:12 -07:00
Timothy 8390ef8731 fix: google sheet tool support json string input 2026-03-19 15:23:31 -07:00
levxn 8d21479c24 fixing lint errors 2026-03-20 02:34:58 +05:30
levxn 965dec3ba1 fixing errors, finalising credential fetch (client id and secret) properly in fallback paths 2026-03-20 02:32:42 +05:30
Timothy d4b54446be Merge branch 'main' into feat/image-capabilities 2026-03-19 11:12:33 -07:00
Levin 7992b862c2 Merge branch 'aden-hive:main' into main 2026-03-19 22:16:10 +05:30
Ananya Verma 44b3e0eaa2 Configure pytest to ignore DeprecationWarning (#1727)
Add pytest configuration to ignore specific warnings.
2026-03-19 23:17:50 +08:00
levxn f480fc2b94 oauth creds for antigravity picked properly 2026-03-19 20:26:42 +05:30
vakrahul 2844dbf19f feat: structured skill error codes and diagnostics (closes #6366) 2026-03-19 13:18:18 +05:30
Timothy @aden 22b7e4b0c3 Merge pull request #6624 from aden-hive/feature/agent-skills
feat: agent skills system and observability improvements
2026-03-18 20:28:34 -07:00
Timothy 5413833a69 fix: tool test 2026-03-18 20:20:32 -07:00
bryan 02e1a4584a fix: autolaunch gui (windows) 2026-03-18 20:15:25 -07:00
Timothy 520840b1dd fix: no immediate run digest 2026-03-18 20:14:20 -07:00
bryan ee96147336 feat: autolaunch gui (mac) 2026-03-18 20:11:03 -07:00
Timothy 705cef4dc1 fix: context window display 2026-03-18 20:05:48 -07:00
Timothy ab26e64122 Merge remote-tracking branch 'origin/main' into feature/agent-skills 2026-03-18 19:41:39 -07:00
Timothy @aden f365e219cb Merge pull request #6615 from aden-hive/feat/worker-llm
feat: support separate LLM model for worker agents
2026-03-18 19:41:06 -07:00
Timothy 01621881c2 chore: lint 2026-03-18 19:40:41 -07:00
Timothy f7639f8572 fix: realtime context display 2026-03-18 19:29:31 -07:00
Timothy fc643060ce fix: better message bubble handling 2026-03-18 17:49:55 -07:00
Timothy 9aebeb181e feat: compaction debugger 2026-03-18 17:42:10 -07:00
Timothy acbbfaaa79 feat: compaction debug 2026-03-18 17:41:22 -07:00
Timothy bf170bce10 feat: enable mcp server reuse by default 2026-03-18 17:30:31 -07:00
Timothy 0a090d058b Merge remote-tracking branch 'origin/main' into feature/agent-skills 2026-03-18 17:11:12 -07:00
Timothy @aden 47bfadaad9 Merge pull request #6622 from aden-hive/fix/resume-empty-message
Fix empty queen message bubbles on session resume
2026-03-18 16:55:50 -07:00
Timothy d968dcd44c Merge branch 'main' into feature/agent-skills 2026-03-18 16:53:42 -07:00
Timothy @aden 6fdaa9ea50 Merge pull request #6534 from VasuBansal7576/codex/mcp-connection-manager-6348-draft
feat: add shared MCP connection manager
2026-03-18 16:52:44 -07:00
Timothy @aden 4d251fbdc2 Merge pull request #6531 from VasuBansal7576/codex/mcp-transports-6347-single
feat: add unix and sse MCP transports
2026-03-18 16:38:17 -07:00
Timothy 6acceed288 feat: hive debugger 2026-03-18 16:26:55 -07:00
Richard Tang 8dd1d6e3aa chore: lint 2026-03-18 16:01:32 -07:00
Timothy 1da28644a6 Merge branch 'main' into feature/agent-skills 2026-03-18 15:38:49 -07:00
Timothy 6452fe7fef fix: discord bot 2026-03-18 15:34:08 -07:00
Richard Tang acff008bd2 fix: empty message render 2026-03-18 15:26:56 -07:00
Timothy 651d6850a1 fix: bounty tracker change 2026-03-18 14:49:21 -07:00
Timothy c7fdc92594 fix: bounty script 2026-03-18 14:27:24 -07:00
Richard Tang 43602a8801 fix: trim to remove empty message 2026-03-18 13:55:57 -07:00
Timothy @aden 3da04265a6 Merge pull request #6566 from levxn/skills/context-protection
feat(skills): AS-9 and AS-10 — skill directory allowlisting and context protection for activated skills
2026-03-18 13:51:25 -07:00
Timothy @aden 4c98f0d2d0 Merge pull request #6564 from levxn/skills/resource-loading
feat(skills): AS-6 tier 3 resource loading — base_dir in catalog XML and skill dirs wired through execution stack
2026-03-18 13:50:54 -07:00
bryan d84c3364d0 chore: update to pass make test 2026-03-18 13:20:56 -07:00
Timothy @aden ae921f6cee Merge pull request #6619 from aden-hive/fix/claude-code-subscription-support
fix(llm): restore Claude Code subscription OAuth support
2026-03-18 13:08:27 -07:00
Timothy 6b506a1c08 chore: lint 2026-03-18 13:05:00 -07:00
Timothy 0c9f4fa97e fix(llm): restore Claude Code subscription (OAuth) support after Anthropic API change
Anthropic tightened OAuth validation on 2026-03-17, requiring a
specific User-Agent header and a billing integrity system block for
subscription-authenticated requests. Without these, all OAuth calls
return HTTP 400 with a generic "Error" message.

Changes:
- Add billing integrity system block (SHA-256 hash derived from first
  user message content) prepended to system messages on OAuth requests
- Set User-Agent to claude-code/<version> for OAuth sessions
- Fix OAuth header patch to detect tokens in x-api-key (not just
  Authorization) and add required beta/browser-access headers
- Set litellm.drop_params=True to prevent unsupported params like
  stream_options from leaking to Anthropic (causes 400)
- Skip stream_options entirely for Anthropic models
- Honour LITELLM_LOG env var for debug logging instead of hardcoding
  LiteLLM logger to WARNING
2026-03-18 13:02:24 -07:00
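Two of the safer changes listed above (dropping unsupported params and honoring LITELLM_LOG) translate to a short configuration sketch; this omits the OAuth and billing-block details entirely, and the logger name is an assumption:

```python
# Hedged sketch: prevent params such as stream_options from leaking to
# providers that reject them, and take the log level from LITELLM_LOG
# instead of hardcoding WARNING.
import logging
import os

import litellm

litellm.drop_params = True  # unsupported params cause HTTP 400 at Anthropic

level_name = os.environ.get("LITELLM_LOG", "WARNING").upper()
logging.getLogger("LiteLLM").setLevel(getattr(logging, level_name, logging.WARNING))
```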
Richard Tang 95e30bc607 chore: remove old queen history endpoint 2026-03-18 12:43:30 -07:00
bryan 0f1f0090b0 chore: linter update 2026-03-18 12:41:01 -07:00
bryan c0da3bec02 feat: strip image content for non-vision models 2026-03-18 12:40:30 -07:00
bryan 9dadb5264d feat: add screenshot image passthrough to LLM 2026-03-18 12:40:18 -07:00
bryan e39e6a75cc feat: add ref system for aria snapshots 2026-03-18 12:36:51 -07:00
Richard Tang 23c66d1059 feat: worker model loading 2026-03-18 12:14:02 -07:00
Richard Tang b9d529d94e feat: support separate worker llm setup 2026-03-18 11:19:44 -07:00
Bryan @ Aden 1c9b09fb78 Merge pull request #6602 from sundaram2021/cleanup/remove-commit-message-txt
micro-fix: remove unnecessary commit message file
2026-03-18 17:40:50 +00:00
Timothy @aden 9fb14f23d2 Merge pull request #6526 from sundaram2021/feature/openrouter-api-key-support
feat openrouter api key support
2026-03-18 10:15:40 -07:00
Sundaram Kumar Jha 4795dc4f68 chore: clean useless commit message file 2026-03-18 16:45:10 +05:30
Sundaram Kumar Jha acf0f804c5 style(llm): apply ruff formatting 2026-03-18 10:54:06 +05:30
Sundaram Kumar Jha 4e2951854b fix(openrouter): harden quickstart setup and model validation 2026-03-18 10:39:58 +05:30
Sundaram Kumar Jha 80dfb429d7 refactor(review): remove out-of-scope PR changes 2026-03-18 10:39:48 +05:30
Timothy @aden 9c0ba77e22 Replace demo image with GitHub asset link
Updated README to include new asset link and removed demo image.
2026-03-17 20:59:14 -07:00
Timothy @aden 46b4651073 Merge pull request #6589 from aden-hive/fix/data-disclosure-gaps
Fix data disclosure gaps, add worker run digests, clean up deprecated tools
2026-03-17 20:46:12 -07:00
Timothy 86dd5246c6 Merge remote-tracking branch 'origin/fix/resume-with-scheduler' into fix/data-disclosure-gaps 2026-03-17 20:44:28 -07:00
Timothy a1227c88ee Merge remote-tracking branch 'origin/fix/resume-with-scheduler' into fix/data-disclosure-gaps 2026-03-17 20:42:25 -07:00
Timothy 535d7ab568 fix: worker digest sub event 2026-03-17 20:41:56 -07:00
Richard Tang af10494b31 chore: ruff lint 2026-03-17 20:41:08 -07:00
Richard Tang 39c1042827 fix: fall back to queen-only session when worker load fails on cold restore 2026-03-17 20:38:41 -07:00
Richard Tang 16e7dc11f4 fix: don't overwrite meta in queen creation 2026-03-17 20:27:39 -07:00
Richard Tang 7a27babefd feat: track and resume the session by phase 2026-03-17 20:22:54 -07:00
Timothy d53ae9d51d fix: deprecated tests 2026-03-17 20:20:21 -07:00
Timothy 910cf7727d Merge remote-tracking branch 'origin/fix/resume-with-scheduler' into fix/data-disclosure-gaps 2026-03-17 20:14:25 -07:00
Timothy 1698605f15 chore: lint 2026-03-17 19:59:23 -07:00
Timothy eda124a123 chore: lint 2026-03-17 19:58:08 -07:00
Timothy 15e9ce8d2f Merge remote-tracking branch 'origin/feature/session-digest' into fix/data-disclosure-gaps 2026-03-17 19:45:07 -07:00
Timothy c01dd603d7 fix: digest invocation 2026-03-17 19:44:22 -07:00
Timothy 9d5157d69f feat: queen subscribe to worker digest 2026-03-17 19:23:43 -07:00
Timothy d78795bdf5 Merge remote-tracking branch 'origin/feature/session-digest' into fix/data-disclosure-gaps 2026-03-17 19:15:22 -07:00
Timothy ff2b7f473e fix: subagent execution 2026-03-17 19:15:07 -07:00
Timothy 73c9a91811 feat: add worker memory consolidation hooks 2026-03-17 19:14:07 -07:00
Timothy 27b765d902 Merge branch 'feature/session-digest' into fix/data-disclosure-gaps 2026-03-17 18:32:20 -07:00
Timothy fddba419be fix: minor issues 2026-03-17 18:30:57 -07:00
Timothy f42d6308e8 Merge branch 'main' into fix/data-disclosure-gaps 2026-03-17 17:50:36 -07:00
Timothy c167002754 fix: data disclosure gaps 2026-03-17 17:50:08 -07:00
Timothy @aden ea26ee7d0c Merge pull request #6568 from aden-hive/feature/node-focus-prompt
Inject execution-scope preamble into worker node system prompts
2026-03-17 17:38:49 -07:00
Richard Tang 5280e908b2 feat: change the agent last active time 2026-03-17 17:35:01 -07:00
RichardTang-Aden 1c5dd8c664 Merge pull request #5178 from Schlaflied/feat/sdr-agent-template
feat(templates): add SDR Agent sample template
2026-03-17 16:05:45 -07:00
Richard Tang 3aca153be5 fix: add missing flowchart and terminal nodes 2026-03-17 16:03:29 -07:00
Timothy 65c8e1653c chore: lint 2026-03-17 15:31:36 -07:00
Timothy 58e4fa918c feat: make worker node aware of boundaries 2026-03-17 15:28:41 -07:00
Timothy 3af13d3f90 feat: session digest for run scoped diary 2026-03-17 14:25:32 -07:00
levxn b799789dbe fixing lint 2026-03-18 02:15:58 +05:30
levxn 2cd73dfccc implements AS-9 and AS-10 2026-03-18 02:06:51 +05:30
levxn 57d77d5479 fixing lint 2026-03-18 01:32:24 +05:30
levxn 5814021773 skills trust gate merged properly into resource loading branch 2026-03-18 01:18:20 +05:30
levxn 4f4cc9c8ce halfway done commit 2026-03-18 00:59:35 +05:30
Timothy d9c840eee5 chore: resolve merge conflicts with feature/agent-skills
Integrate SkillsManager refactor from base branch. Trust gating (AS-13)
is now wired into SkillsManager._do_load() instead of inline in runner.py,
with the interactive flag passed through SkillsManagerConfig.
2026-03-17 11:55:11 -07:00
Timothy @aden d2eb86e534 Merge pull request #6540 from sundaram2021/fix/make-windows-compatibility
fix make test compatibility on windows
2026-03-17 11:41:32 -07:00
Timothy 03842353e4 Merge branch 'main' into feature/openrouter-api-key-support 2026-03-17 11:21:53 -07:00
Schlaflied 48747e20af fix: remove personal oauth credential entries from .gitignore
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:53:16 -04:00
Schlaflied 58af593af6 revert: remove unrelated changes from previous commit
Restore .claude/settings.json and revert .gitignore change
that were accidentally included in the sdr-agent refactor commit.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:52:44 -04:00
Schlaflied 450575a927 refactor(sdr-agent): reuse agent.start() in tui command and fix mock mode
- Replace duplicated setup code in tui command with agent.start(mock_mode=mock)
- Fix mock mode to use MockLLMProvider instead of llm=None
- Add demo_contacts.json sample data for template testing
- Untrack .claude/settings.json and add to .gitignore

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:52:10 -04:00
Schlaflied eac2bb19b2 fix(sdr-agent): fix agent runtime lifecycle and mcp config
- Replace self._executor with self._agent_runtime (AgentRuntime | None)
- Import AgentRuntime for proper type annotation
- Add missing await self._agent_runtime.start() in start() — runtime
  was created but never started, causing silent failures at runtime
- Add self._agent_runtime = None reset in stop() for clean restart
- Remove redundant self._graph is None guard in trigger_and_wait()
- Update mcp_servers.json with hive-tools server config
- Add credential file patterns to .gitignore

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:50:29 -04:00
Schlaflied 756a815bf0 feat(templates): add SDR Agent sample template 2026-03-17 13:50:05 -04:00
mma2027 23a7b080eb test: add comprehensive test suite for safe_eval (#4015)
* test: add comprehensive test suite for safe_eval sandboxed evaluator

Adds 113 tests across 14 test classes covering the full surface area of
the safe_eval expression evaluator used by edge conditions:

- Literals, data structures, arithmetic, unary/binary/boolean operators
- Short-circuit semantics for `and`/`or` (including guard patterns)
- Ternary expressions, variable lookup, subscript/attribute access
- Whitelisted function and method calls
- Security boundaries (private attrs, disallowed AST nodes, blocked builtins)
- Real-world EdgeSpec.condition_expr patterns from graph executor usage

* style: fix import sort order

---------

Co-authored-by: mma2027 <mma2027@users.noreply.github.com>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-18 01:01:31 +08:00
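The test-suite description above lists the behaviors covered for the safe_eval expression evaluator. A couple of pytest-style cases in that spirit; the import path, call signature, and exception type are assumptions made for illustration:

```python
# Hedged sketch of two of the documented behaviors: short-circuit guard
# patterns and blocked private-attribute access.
import pytest

from hive.graph.safe_eval import safe_eval  # assumed module path


def test_short_circuit_guard_pattern():
    # right operand must not be evaluated when the left side is falsy
    ctx = {"result": None}
    assert safe_eval("result and result['status'] == 'ok'", ctx) in (None, False)


def test_private_attribute_access_is_blocked():
    with pytest.raises(Exception):
        safe_eval("obj.__class__", {"obj": object()})
```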
mma2027 bf39bcdec9 fixed race condition deadlock, missing short-circuit eval, unhandled format exceptions (#4012) 2026-03-18 00:36:54 +08:00
Richard Tang 0276632491 Merge branch 'feat/graph-improvements' 2026-03-17 07:34:10 -07:00
RichardTang-Aden ae2993d0d1 Merge pull request #6528 from Antiarin/feat/trigger-nodes-in-draft-graph
Restore trigger nodes in the new flowchart
2026-03-16 20:54:36 -07:00
RichardTang-Aden d14d71f760 Merge pull request #6549 from aden-hive/staging
release 0.7.2
2026-03-16 20:44:47 -07:00
Richard Tang ef6efc2f55 chore: lint and dead code 2026-03-16 20:44:03 -07:00
Antiarin 738641d35f fix: correct trigger target, label, and SSE event data
- Add name and entry_node to all trigger SSE events (TRIGGER_AVAILABLE,
  TRIGGER_ACTIVATED, TRIGGER_DEACTIVATED) so frontend gets correct data
  immediately instead of guessing
- Use ep.entry_node from backend in polling instead of guessing first
  non-trigger node
- Compute cronToLabel from trigger config during polling so pill labels
  show human-readable schedule
- Fix AsyncMock for event_bus.publish in tests
2026-03-17 09:07:10 +05:30
Antiarin 22f5534f08 fix: ensure Queen calls remove_trigger when user asks to remove scheduler
Added explicit prompt guidance requiring the Queen to call the
remove_trigger tool instead of just saying "it's removed."
2026-03-17 09:07:10 +05:30
Antiarin b79e7eca73 feat: live update trigger pill and detail panel on save
- Handle trigger_updated SSE event to update graph node label and
  config in real time when cron or task is saved
- Use cronToLabel for human-readable schedule display in detail panel
- Add "Saved" button feedback for Save Cron and Save Task (2s toast)
- Update trigger pill label to reflect new schedule on cron save
2026-03-17 09:07:10 +05:30
Antiarin 28250dc45e feat: support cron editing via trigger update API
- Extend PATCH /triggers/{id} to accept trigger_config with cron
  validation via croniter and active timer restart
- Add TRIGGER_UPDATED SSE event so frontend updates in real time
- Update frontend API client to use updateTrigger with config support
- Add tests for task update, cron restart, and invalid cron rejection
2026-03-17 09:07:10 +05:30
Antiarin fe5df6a87a feat: restore trigger node rendering in DraftGraph
Trigger nodes (scheduler, webhook, etc.) stopped appearing after the
v0.7.0 refactor because DraftGraph had no trigger awareness.

- Extract shared utilities (cssVar, truncateLabel, trigger colors/icons,
  useTriggerColors, cronToLabel) into lib/graphUtils.ts
- Render trigger pills above the draft flowchart with pill shape, icons,
  countdown timers, active/inactive status, and click handling
- Draw dashed edges from trigger pills to the correct draft node using
  flowchartMap lookup
- Name all trigger layout constants, fix countdown text color bug
- Include trigger pill extent in SVG viewBox width

Closes #6344
2026-03-17 09:07:10 +05:30
Richard Tang 07e4b593dd fix: write config when change model with existing key 2026-03-16 20:23:20 -07:00
Timothy 497591bf3b Merge remote-tracking branch 'origin/feat/hive-llm-support' into staging 2026-03-16 19:49:21 -07:00
Timothy a2a3e334d6 Merge branch 'feature/node-node-comm-by-file' into staging 2026-03-16 19:48:45 -07:00
Timothy 1ccbfaf800 Merge branch 'feature/agent-skills' into staging 2026-03-16 19:48:36 -07:00
Timothy a9afa0555c chore: lint 2026-03-16 19:43:19 -07:00
Timothy 83b2183cf0 Merge branch 'feature/agent-skills' into feature/node-node-comm-by-file 2026-03-16 19:37:46 -07:00
bryan c2dea88398 refactor: active node always displaying 2026-03-16 19:30:44 -07:00
Timothy f49e7a760e fix: skill memory keys breaking unrestricted node permissions
Only extend read_keys/write_keys with skill memory keys when the
list was already non-empty (restricted). An empty list means "allow
all" — adding _-prefixed skill keys to an empty list accidentally
activated the permission check and blocked legitimate reads.
2026-03-16 19:27:48 -07:00
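The guard described above (empty permission list means "allow all", so skill keys are only appended to an already-restricted list) is easy to sketch; field and function names here mirror the commit message rather than the real code:

```python
# Hedged sketch of the permission guard: extending an empty list would
# silently switch the node into restricted mode and block legitimate reads.
def extend_with_skill_keys(read_keys: list[str], skill_keys: list[str]) -> list[str]:
    if not read_keys:
        return read_keys  # empty == unrestricted; leave it alone
    return read_keys + [k for k in skill_keys if k not in read_keys]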
bryan dc95c88da0 chore: linter update 2026-03-16 19:22:51 -07:00
Timothy 6e0255ebec fix: lint E501 line-too-long and auto-format 2026-03-16 19:21:27 -07:00
bryan b51e688d1a feat: transition when loading 2026-03-16 19:17:16 -07:00
Timothy 379d3df46b feat: file path first data passing 2026-03-16 19:14:45 -07:00
bryan b77a3031fe refactor: update flowchart.json for templates 2026-03-16 17:27:28 -07:00
bryan c10eea04ec refactor: update graph node colors 2026-03-16 17:26:57 -07:00
Richard Tang 491a3f24da chore: Suppress noisy LiteLLM INFO logs 2026-03-16 16:45:23 -07:00
Timothy c7d70e0fb1 fix: skill injection, tool call timeout 2026-03-16 16:26:16 -07:00
Richard Tang d59f8e99cb chore: prompt users to go to discord for hive key 2026-03-16 16:09:47 -07:00
Richard Tang 0a91b49417 feat: add validation and config for baseURL 2026-03-16 16:07:13 -07:00
Timothy ced64541b9 Merge remote-tracking branch 'origin/main' into feature/agent-skills 2026-03-16 15:45:00 -07:00
levxn 88253883a3 tier 3 resource loading 2026-03-17 03:30:58 +05:30
Timothy 3c30cfe02b Merge branch 'chore/fix-workspace-queen-message' into feature/agent-skills 2026-03-16 14:52:03 -07:00
Timothy 0d6267bcf1 fix: add delegation notice 2026-03-16 14:49:33 -07:00
Richard Tang b47175d1df feat: add hive llm spec in the quickstart 2026-03-16 14:10:30 -07:00
Timothy 6f23a30eed fix: skill lifecycle to runtime 2026-03-16 13:46:49 -07:00
Sundaram Kumar Jha ff7b5c7e27 fix: prepend ~/.local/bin to PATH so uv is found in Git Bash on Windows 2026-03-17 01:28:25 +05:30
bryan 69f0ff7ac9 chore: linter update 2026-03-16 12:22:29 -07:00
bryan c3f13c50eb docs: remove stale iso 5807 references 2026-03-16 12:22:01 -07:00
bryan 5477408d40 chore: code quality updates 2026-03-16 12:18:46 -07:00
bryan 9fad385ddf fix: return staging phase for disk-loaded agents to prevent false planning loader 2026-03-16 12:14:20 -07:00
bryan cf44ee1d9b refactor: remove AgentGraph, extract shared types, add resizable graph panel 2026-03-16 12:13:56 -07:00
bryan 4ab33a39d6 chore: add generated flowchart.json for template agents 2026-03-16 12:13:29 -07:00
bryan ae19121802 test: add tests for flowchart_utils classification and remap 2026-03-16 12:13:16 -07:00
bryan b518525418 docs: update flowchart schema for 9 types with new color palette 2026-03-16 12:13:06 -07:00
bryan ac3fe38b33 refactor: remove dead shape cases and update imports 2026-03-16 12:12:50 -07:00
bryan 3c6a30fcae refactor: trim queen prompt to 9 flowchart types with dark theme colors 2026-03-16 12:12:35 -07:00
bryan 2ced873fb5 refactor: extract flowchart utils into dedicated module with fallback generation 2026-03-16 12:12:17 -07:00
levxn 6ed6e5b286 lint fixes 2026-03-17 00:32:14 +05:30
Vasu Bansal 30bb0ad5d8 style: format MCP connection manager 2026-03-16 23:46:44 +05:30
Vasu Bansal cb0845f5ba fix: wrap MCP manager cleanup condition 2026-03-16 23:41:36 +05:30
Levin ce2525b59c Merge branch 'aden-hive:main' into skills/trust-gating 2026-03-16 23:39:27 +05:30
levxn 1f77ec3831 fixed bug introduced with change in executor.py, AS-13 along with upstream's AS-1,2,3,4,5 2026-03-16 23:38:45 +05:30
Timothy @aden ab995d8b96 Merge pull request #6530 from aden-hive/chore/fix-workspace-queen-message
fix(micro-fix): queen message display
2026-03-16 10:52:57 -07:00
Vasu Bansal 6ab5aa8004 style: format mcp client
Apply ruff formatting to satisfy CI on the MCP transport changes.
2026-03-16 23:19:49 +05:30
Vasu Bansal 4449cd8ee8 feat: add shared MCP connection manager 2026-03-16 23:10:26 +05:30
Vasu Bansal 8b60c03a0a feat: add unix and sse MCP transports
Implements unix socket and SSE MCP transports, adds reconnect-once retry for unix/SSE, and adds focused unit coverage.
2026-03-16 23:03:44 +05:30
Timothy c2e560fc07 fix: queen message display 2026-03-16 10:30:05 -07:00
Timothy 19f7ae862e fix: skill loading log 2026-03-16 10:14:33 -07:00
Timothy 5e9f74744a fix: google sheet tools account param 2026-03-16 10:14:05 -07:00
Levin 0e98023e40 Merge branch 'aden-hive:main' into skills/trust-gating 2026-03-16 22:23:57 +05:30
Timothy 7787179a5a Merge branch 'main' into feature/agent-skills 2026-03-16 09:14:29 -07:00
Timothy @aden b63205b91a Merge pull request #6010 from Antiarin/feat/notion-tool-docs-and-improvements
feat: add Notion tool README, improve tool logic, and expand test coverage
2026-03-16 08:36:11 -07:00
Timothy @aden 347bccb9ee Merge branch 'main' into feat/notion-tool-docs-and-improvements 2026-03-16 08:10:43 -07:00
Sundaram Kumar Jha 22bb07f00e chore: resolve merge conflict 2026-03-16 19:59:57 +05:30
Sundaram Kumar Jha 660f883197 style(core): apply ruff formatting to satisfy CI lint 2026-03-16 19:57:21 +05:30
Timothy @aden 9d83f0298f Merge pull request #6385 from Waryjustice/fix/google-sheets-credentials-orphan
fix: make state.json progress writes atomic in GraphExecutor
2026-03-16 07:25:13 -07:00
Sundaram Kumar Jha 988de80b66 Merge branch 'main' into feature/openrouter-api-key-support 2026-03-16 19:51:04 +05:30
Sundaram Kumar Jha dc6aa226ee feat(openrouter): validate model readiness and harden tool-call handling
- add OpenRouter chat completion validation to key checks for quickstart flows

- improve OpenRouter compat parsing to convert plain textual tool calls into real tool events

- prevent tool-call text from leaking into assistant responses

- add regression tests for OpenRouter key checks and LiteLLM tool compat parsing
2026-03-16 19:39:11 +05:30
levxn 48a54b4ee2 implements AS-13, trusted gating for project level skills 2026-03-16 17:45:33 +05:30
Hundao 7f7e8b4dff docs: update Windows guidance to reflect native support (#6519)
quickstart.ps1 and hive.ps1 provide full native Windows support.
Update README, CONTRIBUTING, and environment-setup docs to stop
recommending WSL as the primary path. Also add Windows alternatives
for make check/test commands in CONTRIBUTING.md.

Fixes #3835
Fixes #3839
2026-03-16 15:52:42 +08:00
Sundaram Kumar Jha f48a7380f5 Add command sanitizer module and enhance command validation (#6217)
* feat(tools): add command sanitizer module with blocklists for shell injection prevention

* fix(tools): validate commands in execute_command_tool before execution

* fix(tools): validate commands in coder_tools_server run_command before execution

* test(tools): add 109 tests for command sanitizer covering safe, blocked, and edge cases

* fix(tools): normalize executable sanitizer matching

Replace the multi-character .strip() usage flagged by Ruff B005 with explicit .exe suffix normalization in sanitizer paths, preserving blocking behavior for executable names.

Also apply the same normalization in coder_tools_server fallback sanitizer and clean a test-file formatting lint issue.

* fix(tools): harden command sanitizer handling

Normalize executable path matching, tighten python -c detection, and remove the duplicated coder_tools_server fallback by importing the shared sanitizer reliably.

Document the shell=True limitation in the command runners and add regression tests for absolute executable paths plus quoted python -c forms.
2026-03-16 14:46:53 +08:00
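The PR above describes a blocklist-based sanitizer that validates commands before execution, with .exe suffix normalization. A minimal sketch of that shape; the blocked names and error type are illustrative, not the module's real rules:

```python
# Hedged sketch of a blocklist-style command sanitizer with the .exe
# normalization mentioned in the commit body.
import shlex

BLOCKED_EXECUTABLES = {"rm", "mkfs", "shutdown", "reboot"}  # illustrative


def validate_command(command: str) -> None:
    tokens = shlex.split(command)
    if not tokens:
        raise ValueError("empty command")
    exe = tokens[0].rsplit("/", 1)[-1].lower()
    if exe.endswith(".exe"):  # normalize Windows-style executable names
        exe = exe[: -len(".exe")]
    if exe in BLOCKED_EXECUTABLES:
        raise ValueError(f"blocked executable: {exe}")
```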
Gaurav Singh 3c7f129d86 fix(executor): enforce branch timeout and memory conflict strategy in parallel execution (#6504)
ParallelExecutionConfig.branch_timeout_seconds and memory_conflict_strategy
were declared but never read by any code. This caused branches to run
indefinitely and memory conflicts to go undetected.

Changes:
- Wrap parallel branch tasks with asyncio.wait_for() using configured timeout
- Switch asyncio.gather to return_exceptions=True so one timeout doesn't cancel siblings
- Handle asyncio.TimeoutError in result processing loop
- Implement last_wins/first_wins/error memory conflict strategies
- Track which branch wrote which key during fan-out for conflict detection
- Add 6 new tests covering timeout and conflict scenarios

Closes #5706
2026-03-16 14:31:09 +08:00
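The timeout wiring described above maps directly onto asyncio primitives: wrap each branch in asyncio.wait_for with the configured timeout and gather with return_exceptions=True so one timeout does not cancel siblings. A sketch under those assumptions; run_branch is a placeholder for the executor's branch coroutine:

```python
# Hedged sketch of branch_timeout_seconds enforcement in parallel fan-out.
import asyncio


async def run_branches(branches, branch_timeout_seconds: float):
    tasks = [
        asyncio.wait_for(run_branch(branch), timeout=branch_timeout_seconds)
        for branch in branches
    ]
    # return_exceptions=True keeps sibling branches running when one times out
    results = await asyncio.gather(*tasks, return_exceptions=True)
    for branch, result in zip(branches, results):
        if isinstance(result, asyncio.TimeoutError):
            # handled in the result-processing loop instead of crashing the run
            print(f"branch {branch} timed out after {branch_timeout_seconds}s")
    return results
```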
RichardTang-Aden 4533b27aa1 Merge pull request #6249 from aden-hive/fix/episodic-memory-access
fix: deduplicate queen memory tools into shared list
2026-03-15 20:26:29 -07:00
Richard Tang 3adf268c29 chore: ruff lint 2026-03-15 20:25:21 -07:00
Richard Tang ac8579900f Merge remote-tracking branch 'origin/main' into fix/episodic-memory-access 2026-03-15 20:23:13 -07:00
Richard Tang abbaaa68f3 Merge remote-tracking branch 'origin/main' 2026-03-15 20:19:32 -07:00
Richard Tang 11089093ef chore: remove deprecated step in quickstart 2026-03-15 20:05:23 -07:00
RichardTang-Aden 99b7cb07d5 Merge pull request #6300 from Nupreeth/docs/notion-tool-readme
docs(notion): add Notion tool README
2026-03-15 20:03:17 -07:00
RichardTang-Aden 70d61ae67a Merge pull request #6389 from saschabuehrle/micro-fix/issue-6015-step-numbering
micro-fix: remove vestigial duplicate Step 3 header in quickstart.sh
2026-03-15 20:01:36 -07:00
Richard Tang dd054815a3 docs: update product image 2026-03-15 19:56:17 -07:00
Timothy 8e5eaae9dd chore(micro-fix): windows string ops compatibility fix 2026-03-15 17:05:41 -07:00
Hundao 2d0128eb5c fix: declare croniter dependency and fail loudly on missing import (#6405)
croniter is used for cron-based timer entry points but was never
declared in pyproject.toml. A fresh install would silently skip
all cron triggers. Add croniter>=1.4.0 to dependencies and raise
RuntimeError instead of silently continuing on ImportError.

Fixes #5353
2026-03-15 18:29:05 +08:00
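The fail-loud behavior described above is a small change in shape; a sketch of the import guard, assuming the module previously swallowed the ImportError:

```python
# Hedged sketch: raise RuntimeError naming the missing dependency instead
# of silently skipping all cron triggers.
try:
    from croniter import croniter
except ImportError as exc:
    raise RuntimeError(
        "croniter is required for cron-based timer triggers; "
        "install it with `pip install 'croniter>=1.4.0'`"
    ) from exc
```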
Milton Adina 06f1d4dcef docs: add Windows quickstart.ps1 instructions to getting-started.md (#5668)
- Add Windows (PowerShell) section alongside Linux/macOS
- Reference .\quickstart.ps1 for native Windows users
- Add Set-ExecutionPolicy note for script execution
- Link to environment-setup.md for WSL alternatives
2026-03-15 18:05:39 +08:00
Gowtham Tadikamalla 0e7b11b5b2 fix(llm): warn when litellm monkey-patches fail to apply due to ImportError (#5757)
Closes #5753

_patch_litellm_anthropic_oauth and _patch_litellm_metadata_nonetype
silently return when litellm internal modules change. This adds
logger.warning() calls so operators are alerted when patches cannot be
applied, instead of encountering cryptic 401 or TypeError at runtime.

Co-authored-by: GowthamT-1610 <gowthamt@umd.edu>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-15 17:59:36 +08:00
kalp patel 291b78f934 fix: prune ~/.hive/failed_requests/ to prevent unbounded disk growth (#5725)
Add MAX_FAILED_REQUEST_DUMPS = 50 cap and _prune_failed_request_dumps()
helper. After each _dump_failed_request() call the oldest files beyond
the cap are deleted so the directory never grows without bound.

Fixes #5696
2026-03-15 17:33:46 +08:00
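The pruning helper described above can be sketched in a few lines; the dump directory matches the commit message, while the sort key and filenames are assumptions:

```python
# Hedged sketch of _prune_failed_request_dumps: after each dump, delete the
# oldest files beyond the cap so the directory never grows without bound.
from pathlib import Path

MAX_FAILED_REQUEST_DUMPS = 50


def _prune_failed_request_dumps(
    dump_dir: Path = Path.home() / ".hive" / "failed_requests",
) -> None:
    if not dump_dir.is_dir():
        return
    dumps = sorted(dump_dir.iterdir(), key=lambda p: p.stat().st_mtime)
    for stale in dumps[:-MAX_FAILED_REQUEST_DUMPS]:
        stale.unlink(missing_ok=True)
```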
Vaibhav Kumar e196a03972 Fix LLMJudge OpenAI fallback to use LiteLLM provider (#5674) 2026-03-15 17:22:37 +08:00
Ishan Chaurasia a0abe2685d fix: preserve custom session ids in runtime logs (#6241)
* fix: preserve custom session ids in runtime logs

Treat any execution stored under sessions/<id> as a session-backed run so custom IDs stay visible in worker-session browsing and unified log APIs. Add regression coverage for custom IDs across executor path selection, log directory creation, and API listing.

Made-with: Cursor

* fix: ignore stray session directories in listing

Keep the session_ prefix as the fast path for worker session discovery, but allow custom IDs when a backing state.json exists. This avoids ghost directories in the UI while preserving the custom session ID support from the original fix.

Made-with: Cursor
2026-03-15 16:08:54 +08:00
SRI LIKHITA ADRU e8f642c8b6 fix(credentials): aden_api_key delete returns 404 when not found, san… (#6340)
* fix(credentials): aden_api_key delete returns 404 when not found, sanitize 500 errors

* style: restore warning log for unexpected delete errors

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-15 15:56:32 +08:00
Abhilash Puli 6260f628eb feat(tools): add HuggingFace inference, embedding, and endpoint tools (#6132)
* feat(tools): add HuggingFace inference, embedding, and endpoint tools

* fix: resolve ruff E501 lint issues

* style: fix formatting and restore Hub API error message

* style: format test file

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-15 15:44:18 +08:00
Sundaram Kumar Jha 4a4f17ed40 fix quickstart guide for windows (#6264)
* fix(windows): verify uv is runnable before launch

* fix(windows): use validated uv path for kimi health check

* fix(windows): dedupe uv discovery and keep quickstart scoped

* chore: refresh uv lockfile
2026-03-15 15:19:15 +08:00
Fernando Mano 36dcf2025b Feature: #5871 - Improve developer agent logging: simplify terminal output (#6388) 2026-03-15 15:13:22 +08:00
Aryan Nandanwar 85c70c94e6 fix: queen bee multiple response error resolved (#5962)
* fix: queen bee multiple response error resolved

* fix: queen bee multiple response error resolved updates

* fix: added chatmsg.phas and reconsileoptimizeuser

* fix:cleaned up blank lines

* style: fix formatting in workspace.tsx

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-15 15:07:24 +08:00
saschabuehrle 336e82ba22 micro-fix: remove vestigial duplicate Step 3 header in quickstart.sh (fixes #6015) 2026-03-14 18:07:59 +01:00
Sundaram Kumar Jha a7b6b080ab chore(lockfiles): refresh generated lockfiles
- update frontend package-lock metadata after frontend validation
- refresh uv.lock editable package version for the current workspace state
2026-03-14 20:50:51 +05:30
Sundaram Kumar Jha 9202cbd4d4 fix(openrouter): stabilize quickstart and tool execution
- add cross-platform OpenRouter quickstart setup, config fallbacks, and key validation
- harden LiteLLM/OpenRouter tool execution, duplicate question handling, and worker loading UX
- add backend and frontend regression coverage for OpenRouter flows
2026-03-14 20:48:58 +05:30
Waryjustice f2ddd1051d fix: make state.json progress writes atomic
Use atomic_write for GraphExecutor._write_progress and log persistence failures instead of silently swallowing exceptions. Add regression tests for atomic write usage and warning logs on write failure.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-14 18:52:25 +05:30
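The atomic-write pattern referenced above (write to a temp file, then rename into place) is sketched below; the helper name matches the commit message, but the real implementation and its logging are not shown in this log:

```python
# Hedged sketch of atomic_write: readers never observe a partially written
# state.json because os.replace swaps the file in a single step.
import json
import os
import tempfile


def atomic_write(path: str, data: dict) -> None:
    dir_name = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as fh:
            json.dump(data, fh)
            fh.flush()
            os.fsync(fh.fileno())
        os.replace(tmp_path, path)  # atomic on POSIX and Windows
    except OSError:
        os.unlink(tmp_path)
        raise
```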
Aaryann Chandola 2dd60c8d52 Merge branch 'aden-hive:main' into feat/notion-tool-docs-and-improvements 2026-03-14 10:58:01 +05:30
Richard Tang ff01c1fd99 chore: release v0.7.1 — Chrome-native GCU, browser isolation, dummy agent tests
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-13 20:39:46 -07:00
RichardTang-Aden 421b25fdb7 Merge pull request #6313 from prasoonmhwr/bugFix/add_tab_ui
bugFix: micro-fix add tab UI
2026-03-13 20:29:30 -07:00
Richard Tang 795c3c33e2 docs: readme update 2026-03-13 20:26:44 -07:00
RichardTang-Aden 97821f4d80 Merge pull request #6346 from aden-hive/fix/session-resume-new-agent
fix: save json path for the new agent update meta.json when loaded worker
2026-03-13 20:19:48 -07:00
RichardTang-Aden 505e1e30fd Merge branch 'main' into fix/session-resume-new-agent 2026-03-13 20:19:36 -07:00
Timothy 3fb2b285fb chore: add star history widget 2026-03-13 20:17:35 -07:00
RichardTang-Aden a76109840c Merge pull request #6345 from aden-hive/feat/gcu-updates
feat: GCU browser cleanup, draft loading state, and inner_turn message fix
2026-03-13 20:16:38 -07:00
Timothy 1db8484402 Merge branch 'main' into feature/agent-skills 2026-03-13 20:05:47 -07:00
RichardTang-Aden 39212350ba Merge pull request #6342 from aden-hive/ci/level-2-dummy-agent-testing
Add Level 2 dummy agent end-to-end tests
2026-03-13 19:42:34 -07:00
Richard Tang f3399fe95b chore: ruff lint 2026-03-13 19:39:44 -07:00
Richard Tang d02e1155ed feat: dummy agent tests 2026-03-13 19:39:14 -07:00
bryan 7ede3ba171 feat: queen upsert fix 2026-03-13 19:34:26 -07:00
Timothy cdaec8a837 feat: agent skills 2026-03-13 18:56:34 -07:00
Richard Tang 2272491cf5 chore: remove dead code 2026-03-13 18:10:43 -07:00
RichardTang-Aden bb38cb974f Merge pull request #6333 from aden-hive/fix/new-agent-resume
Fix: new agent resume and GCU browser improvements
2026-03-13 17:20:49 -07:00
bryan 635d2976f4 feat: show loading spinner in draft panel during planning phase 2026-03-13 16:40:33 -07:00
bryan 4e1525880d feat: clean up browser profile after top-level GCU node execution 2026-03-13 16:40:20 -07:00
Richard Tang b80559df68 chore: ruff lint 2026-03-13 16:38:50 -07:00
RichardTang-Aden 08d93ef90a Merge pull request #6331 from RichardTang-Aden/main
fix: generate worker mcp.json correctly in initialize_agent_package
2026-03-13 15:35:18 -07:00
Richard Tang 22bf035522 chore: fix lint 2026-03-13 15:35:01 -07:00
Richard Tang 15944a42ab fix: generate worker mcp file correctly 2026-03-13 15:30:28 -07:00
Richard Tang 8440ec70ba chore: document the difference between runner mode run() and start() 2026-03-13 15:28:18 -07:00
Timothy eacf2520cf chore: skills prd 2026-03-13 15:22:09 -07:00
Richard Tang def4f62a51 fix: update meta.json when loaded worker 2026-03-13 14:05:57 -07:00
bryan b0c5bcd210 chore: update tab management guidelines and add concurrent subagent patterns 2026-03-13 14:04:40 -07:00
bryan 2fe1343343 feat: inject unique browser profile per GCU subagent 2026-03-13 14:03:21 -07:00
bryan de0dcff50f feat: add tab origin/age metadata and per-subagent profile isolation 2026-03-13 14:02:15 -07:00
Richard Tang 20427e213a fix: update meta.json when loaded worker 2026-03-13 13:52:15 -07:00
bryan 1fb5c6337a fix: anchor worker monitoring to queen's session ID on cold-restore 2026-03-13 12:50:50 -07:00
Timothy @aden 1e74f194a1 Update authors in MCP Server Registry document 2026-03-13 12:15:50 -07:00
Timothy 08157d2bd6 chore(docs): bounty program - standard 2026-03-13 12:10:21 -07:00
Timothy ef036257a9 docs(mcp): MCP integration PRD 2026-03-13 11:56:33 -07:00
Timothy 16ce984c74 chore: add default context limit on windows quickstart 2026-03-13 10:04:49 -07:00
bryan 1e8b5b96eb Merge branch 'main' into feat/gcu-updates 2026-03-13 09:26:06 -07:00
Prasoon Mahawar 094ba89f19 Merge branch 'main' of https://github.com/prasoonmhwr/hive into bugFix/add_tab_ui 2026-03-13 18:59:44 +05:30
Prasoon Mahawar 7008c9f310 bugFix: UI overflow issue when creating multiple agents – “Add tab” dropdown partially hidden 2026-03-13 18:58:38 +05:30
Prasoon Mahawar 94d7cbacc2 Revert "bugFix: Clipboard write in SystemPromptTab lacks error handling and may show false Copied feedback"
This reverts commit bddc2b413a.
2026-03-13 18:55:52 +05:30
Prasoon Mahawar bddc2b413a bugFix: Clipboard write in SystemPromptTab lacks error handling and may show false Copied feedback 2026-03-13 18:23:36 +05:30
Nupreeth 48c8fb7fff docs(notion): add Notion tool README 2026-03-13 12:03:48 +05:30
RichardTang-Aden 52b1a3f472 Merge pull request #6282 from aden-hive/feat/refactor-session
Refactor session lifecycle with flowchart planning and triggers
2026-03-12 21:15:10 -07:00
Richard Tang 079e00c8f7 Merge remote-tracking branch 'origin/main' into feat/refactor-session 2026-03-12 21:13:15 -07:00
Richard Tang 60bba38941 chore: ruff lint 2026-03-12 21:01:47 -07:00
Richard Tang ea8e7b11c6 Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 20:54:08 -07:00
Richard Tang 3dc2b25b01 fix: adding the trigger helpers 2026-03-12 20:53:45 -07:00
bryan 543b90b34f chore: tooltip update 2026-03-12 20:50:39 -07:00
Richard Tang 2ad78ec8a2 Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 20:48:09 -07:00
Timothy 412658e9f2 fix: remove subagent shapes 2026-03-12 20:46:09 -07:00
Richard Tang 9bfddec322 fix: missing _FLOWCHART_TYPES reference 2026-03-12 20:43:03 -07:00
Timothy bbd9c10169 fix: decision node cannot have subagents 2026-03-12 20:36:04 -07:00
Richard Tang 51fdc4ddde fix: always new session for new agent 2026-03-12 20:34:42 -07:00
Richard Tang 04685d33ca fix: solve the problem from merge conflict 2026-03-12 20:28:25 -07:00
Richard Tang 729a0e0cec fix: resolve merge conflict 2026-03-12 20:23:58 -07:00
bryan 2bcb0cacee added pause/run button 2026-03-12 20:15:25 -07:00
Timothy 44bf191f53 fix: no orphaned node by bfs 2026-03-12 20:04:00 -07:00
Richard Tang 993b31f19b Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 20:00:45 -07:00
Richard Tang 41b3b9619f Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feature/flowchart-linked-experimental 2026-03-12 19:45:45 -07:00
Richard Tang 2a4fe4020c feat: force the planning agent to ask questions 2026-03-12 19:45:07 -07:00
Ishan Chaurasia 9d1f268078 fix(server): honor session_id in one-step session creation (#6233)
Align POST /api/sessions behavior across queen-only and one-step worker creation so callers can rely on deterministic session IDs. Add a regression test covering the forwarded session_id contract.

Made-with: Cursor
2026-03-13 10:43:12 +08:00
bryan 2185e127b1 style: coder tools formatting and template quote fixes 2026-03-12 19:39:53 -07:00
bryan 99ed885fd0 fix: add cached_tokens to finish event test assertion 2026-03-12 19:39:53 -07:00
bryan d8a390a685 feat: flowchart rendering in DraftGraph with node shapes and layout 2026-03-12 19:39:53 -07:00
bryan f50cf1735b feat: CSS variable theming for agent graph components 2026-03-12 19:39:53 -07:00
bryan 04eb57f54e feat: auto-load worker on cold restore when queen resumes 2026-03-12 19:39:53 -07:00
bryan 7378408eb8 feat: add flowchart type system and draft-to-graph dissolution 2026-03-12 19:39:53 -07:00
bryan cf05420417 style: formatting and import cleanup across framework modules 2026-03-12 19:38:55 -07:00
Timothy f5ed4c7d43 fix: validate orphaned gcu node 2026-03-12 19:38:44 -07:00
Timothy 5547432b6e fix: queen defaults to global max context tokens 2026-03-12 19:29:14 -07:00
Ishan Chaurasia 336557d7c7 fix: pass browser_wait text as data (#6235)
Pass browser_wait text through Playwright's function argument channel so quoted and multiline strings do not break the generated wait expression. Add a regression test covering text that previously would have been interpolated unsafely.

Made-with: Cursor
2026-03-13 10:08:16 +08:00
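The "text as data" fix above relies on Playwright's argument channel rather than string interpolation. A minimal sketch of that pattern; the tool's wrapper name and timeout default are assumptions:

```python
# Hedged sketch: the wait text is passed as the `arg` parameter of
# wait_for_function, so quotes and newlines in the text cannot break or
# inject into the generated JavaScript expression.
async def browser_wait_for_text(page, text: str, timeout_ms: int = 30_000):
    await page.wait_for_function(
        "text => document.body && document.body.innerText.includes(text)",
        arg=text,
        timeout=timeout_ms,
    )
```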
Timothy 87c172227c fix: mandate flowchart topology correction 2026-03-12 19:03:46 -07:00
Richard Tang c2c4929de8 feat: remove the phase in the label 2026-03-12 18:55:24 -07:00
Timothy a978338738 fix: allow replanning 2026-03-12 18:54:01 -07:00
Timothy 8eb59b1f66 fix: mandate usage of ask tools and change pending behavior 2026-03-12 18:34:15 -07:00
Richard Tang f9d5f95936 Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 18:32:26 -07:00
Timothy 651e99ffe3 Merge branch 'feature/multiple-asks' into feature/flowchart-linked-experimental 2026-03-12 17:57:11 -07:00
Timothy 2564f1b948 feat: allow multiple questions 2026-03-12 17:56:58 -07:00
Richard Tang c01cd528d2 feat: planning phase prompt improvements 2026-03-12 17:44:06 -07:00
bryan 2434c86cdf docs: clarify two-step escalation relay protocol in queen prompt 2026-03-12 16:50:17 -07:00
Timothy bc194ee4e9 Merge branch 'main' into feature/flowchart-linked-experimental 2026-03-12 16:50:17 -07:00
bryan c4a5e621aa docs: update GCU prompt with popup tracking and close_all guidance 2026-03-12 16:50:06 -07:00
bryan 0f5b83d86a feat: add browser_close_all tool for bulk tab cleanup 2026-03-12 16:49:55 -07:00
bryan b5aadcd51e feat: auto-track popup pages and improve session startup logging 2026-03-12 16:49:46 -07:00
bryan 290d2f6823 feat: add --no-startup-window to Chrome launch flags 2026-03-12 16:49:36 -07:00
Timothy @aden 2bac100c03 Merge pull request #6283 from vincentjiang777/main
docs: rename and expand contributing guidelines
2026-03-12 16:46:59 -07:00
Timothy @aden 425d37f868 Merge branch 'main' into main 2026-03-12 16:44:29 -07:00
Vincent Jiang 99b127e2da docs: revert filename to CONTRIBUTING.md for GitHub compliance
Changed HOW_TO_CONTRIBUTE.md back to CONTRIBUTING.md to comply with
GitHub's standard for contributing guidelines files.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-03-12 16:42:42 -07:00
Timothy 43b759bf61 fix: ensure flowchart existence 2026-03-12 16:40:18 -07:00
Vincent Jiang 20d8d52f12 docs: rename and expand contributing guidelines
Renamed CONTRIBUTING.md to HOW_TO_CONTRIBUTE.md and significantly expanded
the documentation with detailed sections on development setup, OS support,
tooling requirements, performance metrics, and contribution workflows.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-03-12 16:29:13 -07:00
Richard Tang 944567dc31 chore: ruff lint 2026-03-12 16:23:13 -07:00
nightcityblade 7e09588e4e fix: reject path-like agent names in hive dispatch --agents (#6211)
Validate that agent names passed to --agents do not contain path
separators. Previously, passing 'exports/my_agent' would result in
the doubled path 'exports/exports/my_agent' with a confusing error.
Now a clear error message is shown suggesting the correct usage.

Fixes #6208

Co-authored-by: nightcityblade <nightcityblade@gmail.com>
2026-03-12 16:22:37 -07:00
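The validation described above rejects path-like agent names before the CLI builds a doubled exports/ path. A sketch of that check; the function name and error wording are illustrative:

```python
# Hedged sketch: agent names passed to `hive dispatch --agents` must be
# bare names, not paths, so exports/exports/my_agent can never be built.
import os


def validate_agent_name(name: str) -> str:
    if os.sep in name or (os.altsep and os.altsep in name):
        raise ValueError(
            f"invalid agent name {name!r}: pass the agent name only "
            "(e.g. 'my_agent'), not a path under exports/"
        )
    return name
```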
Priyanka Bhallamudi 7bf69d2263 fix: read nodes from graph object in discovery.py for correct node count (#6227)
Co-authored-by: Lakshmi Priyanka Bhallamudi <priyanka@Lakshmis-MacBook-Air.local>
2026-03-12 16:22:37 -07:00
bryan 99d2b0c003 chore: update readme 2026-03-12 16:22:37 -07:00
bryan 8868416baa chore: update the tests and readme 2026-03-12 16:22:37 -07:00
bryan 405b120674 feat: fixed google credentials to use the google oauth credential 2026-03-12 16:22:37 -07:00
Trisha 66a7b43199 [bug:6117:docs]: fix inconsistent configuration and troubleshooting guidance (#6118) 2026-03-12 16:22:36 -07:00
Trisha a8f9d83723 docs: fix typos and awkward copy (#6115)
* [bug:6109:README]: fix typos and awkward copy

* trigger ci

* rerun checks
2026-03-12 16:22:36 -07:00
bryan d95d5804ca fix: align the credential functions to be the same 2026-03-12 16:22:36 -07:00
Richard Tang 674cf05601 feat: track the number of runs 2026-03-12 15:19:13 -07:00
Timothy 86349c78d0 Merge branch 'feature/guardrails' into feature/flowchart-linked-experimental 2026-03-12 15:11:12 -07:00
Timothy 2232f49191 fix: queen flowcharting behavior 2026-03-12 15:10:32 -07:00
Richard Tang 6fa71fa27d feat: track queen phase by message 2026-03-12 14:58:35 -07:00
Vincent Jiang 1ac9ba69d6 docs: replace recipe examples with 100 sample agent prompts
Replace individual recipe READMEs with a comprehensive collection of 100 real-world agent prompt examples across marketing, sales, operations, engineering, and finance. This provides users with a broader range of use case inspiration in a single, organized reference document.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-03-12 14:46:09 -07:00
Vincent Jiang 9e16be8f03 docs: replace recipe examples with 100 sample agent prompts
Replace individual recipe READMEs with a comprehensive collection of 100 real-world agent prompt examples across marketing, sales, operations, engineering, and finance. This provides users with a broader range of use case inspiration in a single, organized reference document.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-03-12 14:44:32 -07:00
Richard Tang 8c7065ad37 refactor: remove the parts conversion logic 2026-03-12 14:36:27 -07:00
Richard Tang a18ed5bbe6 feat: restore queen phase 2026-03-12 14:29:01 -07:00
bryan 9f3339650d chore: linter update 2026-03-12 14:27:17 -07:00
bryan d5e5d3e83d feat: add subagent activity tracking to queen status and instructions 2026-03-12 14:26:49 -07:00
bryan 5ea27dda09 refactor: update GCU system prompt for auto-snapshots and batching 2026-03-12 14:26:38 -07:00
bryan 6f9066ef20 feat: return auto-snapshot from browser interaction tools 2026-03-12 14:26:24 -07:00
bryan c37185732a feat: kill orphaned Chrome processes on GCU server shutdown 2026-03-12 14:26:05 -07:00
bryan 0c900fb50e refactor: clean session startup and add page lifecycle management 2026-03-12 14:25:16 -07:00
bryan 4d3ac28878 feat: launch Chrome on macOS via open -n to coexist with user's browser 2026-03-12 14:24:55 -07:00
bryan 270c1f8c50 fix: use lazy %-formatting in subagent completion log to avoid f-string in logger 2026-03-12 14:24:30 -07:00
bryan 3d0859d06a fix: stop clearing credentials_required on modal close to prevent infinite loop 2026-03-12 14:24:14 -07:00
Timothy 8f55170c1e fix: compaction ratio reporting 2026-03-12 14:17:42 -07:00
Richard Tang ed3d4bfe33 feat: resume cold session from event logs 2026-03-12 14:07:57 -07:00
Timothy 31a98a5f95 feat: cached token handling 2026-03-12 14:03:58 -07:00
Timothy 7667b773f2 fix: 18x tool discovery efficiency by progressive disclosure 2026-03-12 13:12:43 -07:00
Timothy 49560260de fix: token counts 2026-03-12 11:52:08 -07:00
Richard Tang 596ce9878d feat: unique run id 2026-03-12 11:09:36 -07:00
Timothy 1cc75f89bd feat: replanning 2026-03-12 09:55:42 -07:00
bryan ffe47c0f71 fix: credential modal eating errors, banner stays open 2026-03-12 09:41:53 -07:00
Timothy bb3c69cff1 fix: proper guardrail on combined context window 2026-03-12 09:37:17 -07:00
Timothy 70d11f537e feat: merge subagent nodes 2026-03-12 09:06:41 -07:00
Timothy b15dd2f623 fix: better logging 2026-03-12 09:03:29 -07:00
Timothy ce308312ae fix: usage tracking 2026-03-12 08:56:33 -07:00
bryan bf4652db4b fix: share event bus so tool events are visible to parent 2026-03-12 08:41:34 -07:00
bryan 2acd526b71 feat: dynamic viewport sizing and suppress Chrome warning bar 2026-03-12 08:40:49 -07:00
bryan df71834e4b refactor: switch from Playwright browser to system Chrome via CDP 2026-03-12 08:39:43 -07:00
nightcityblade f757c724cc fix: reject path-like agent names in hive dispatch --agents (#6211)
Validate that agent names passed to --agents do not contain path
separators. Previously, passing 'exports/my_agent' would result in
the doubled path 'exports/exports/my_agent' with a confusing error.
Now a clear error message is shown suggesting the correct usage.

Fixes #6208

Co-authored-by: nightcityblade <nightcityblade@gmail.com>
2026-03-12 21:11:02 +08:00
Priyanka Bhallamudi a4c758403e fix: read nodes from graph object in discovery.py for correct node count (#6227)
Co-authored-by: Lakshmi Priyanka Bhallamudi <priyanka@Lakshmis-MacBook-Air.local>
2026-03-12 18:34:47 +08:00
Timothy bc3c5a5899 fix: allow memory tool to be used in all phases 2026-03-11 20:10:24 -07:00
Timothy a67563850b feat: flowchart reconciliation 2026-03-11 19:58:27 -07:00
Bryan @ Aden b48465b778 Merge pull request #6230 from aden-hive/feat/google-doc-credential-alignment
micro-fix: Feat/google doc credential alignment
2026-03-12 02:52:03 +00:00
bryan d3baaaab24 chore: update readme 2026-03-11 19:48:00 -07:00
Timothy c764b4dc3b Merge branch 'main' into feature/flowchart-linked-experimental 2026-03-11 19:12:51 -07:00
bryan ad6077bd7b chore: update the tests and readme 2026-03-11 19:12:38 -07:00
Timothy ce2a91b1c0 feat: flowchart mapping 2026-03-11 19:12:25 -07:00
bryan c2e7afeb5e feat: fixed google credentials to use the google oauth credential 2026-03-11 19:12:25 -07:00
Timothy 0c9680ca89 feat: dissolution graph structure 2026-03-11 18:38:17 -07:00
Richard Tang 726016d24a fix: remove the duplicated session logic 2026-03-11 17:11:03 -07:00
Richard Tang 4895cea08a chore: lint and micro-fix 2026-03-11 16:55:29 -07:00
Richard Tang c9723a3ff2 feat(wip): always resume the previous session 2026-03-11 16:48:31 -07:00
Richard Tang 6cb73a6fea refactor: remove the remaining old trigger format and change the trigger format in examples to the latest format 2026-03-11 16:13:37 -07:00
Richard Tang 0c7f43f595 refactor: remove reference of the unused session judge 2026-03-11 16:01:00 -07:00
Richard Tang ea5cfcc5d6 refactor: remove the unused session judge 2026-03-11 15:57:19 -07:00
Richard Tang 34e85019c3 feat: stop supporting the old scheduler 2026-03-11 15:54:48 -07:00
Timothy 8011b72673 fix: flowchart display 2026-03-11 15:41:55 -07:00
RichardTang-Aden d87dfca1ab Merge pull request #6075 from aden-hive/fix/credential-function-alignment
fix: align the credential functions to be the same
2026-03-11 15:11:57 -07:00
Richard Tang c979dba958 fix: reference error from the rename 2026-03-11 14:33:42 -07:00
Richard Tang b4caa045e1 Merge remote-tracking branch 'origin/main' into feat/agent-trigger 2026-03-11 14:32:36 -07:00
Timothy b0fd4bc356 fix: draft flowchart display 2026-03-11 11:05:33 -07:00
Trisha a79d7de482 [bug:6117:docs]: fix inconsistent configuration and troubleshooting guidance (#6118) 2026-03-11 14:41:54 +08:00
Trisha e5e57302fa docs: fix typos and awkward copy (#6115)
* [bug:6109:README]: fix typos and awkward copy

* trigger ci

* rerun checks
2026-03-11 14:38:37 +08:00
Emmanuel Nwanguma c69cf1aea5 test(security): add comprehensive unit tests for 7 security scanning tools (#6151)
* test(security): add comprehensive unit tests for 7 security scanning tools

Add dedicated test files for all security scanning tools:
- test_dns_security_scanner.py (12 tests)
- test_http_headers_scanner.py (13 tests)
- test_ssl_tls_scanner.py (14 tests)
- test_subdomain_enumerator.py (15 tests)
- test_port_scanner.py (17 tests)
- test_tech_stack_detector.py (20 tests)
- test_risk_scorer.py (24 tests)

Total: 115 new tests covering:
- Input validation and cleaning
- Connection error handling
- Core scanning logic with mocked responses
- Grade/risk calculation
- Edge cases

Fixes #5920

* fix(tests): strengthen weak assertions in security scanner tests

- SSL scanner: replace always-true `or` assertions with specific checks
  that verify hostname stripping actually happened
- Port scanner: verify timeout clamp value, not just absence of error
- DNS scanner: remove unused helper method

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-11 13:29:11 +08:00
Emmanuel Nwanguma 2f4cd8c36f fix(credentials): improve exception handling in key_storage.py (#6153)
Replace bare except Exception: clauses with specific exception handling:

- delete_aden_api_key(): Catch FileNotFoundError, PermissionError at debug
  level; log unexpected errors at WARNING with exc_info=True
- _read_credential_key_file(): Catch FileNotFoundError, PermissionError at
  debug level; log unexpected errors at WARNING with exc_info=True
- _read_aden_from_encrypted_store(): Catch FileNotFoundError, PermissionError,
  KeyError at debug level; log unexpected errors at WARNING with exc_info=True

This makes credential issues easier to diagnose by:
- Logging unexpected errors at WARNING level (visible in production)
- Including full stack traces with exc_info=True
- Keeping expected failures (file not found, permissions) at debug level

Fixes #5931
2026-03-11 13:05:10 +08:00
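The exception-handling pattern described above keeps expected failures quiet and surfaces unexpected ones with a stack trace. A sketch of that shape for one of the named functions; the path handling is simplified and illustrative:

```python
# Hedged sketch: FileNotFoundError/PermissionError stay at debug level,
# anything unexpected is logged at WARNING with exc_info=True.
import logging
from pathlib import Path

logger = logging.getLogger(__name__)


def delete_aden_api_key(key_path: Path) -> bool:
    try:
        key_path.unlink()
        return True
    except (FileNotFoundError, PermissionError) as exc:
        logger.debug("expected failure deleting key file: %s", exc)
        return False
    except Exception:
        logger.warning("unexpected error deleting key file", exc_info=True)
        return False
```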
Aaryann Chandola 6f571e6d00 [BUG] fix: use ReplaceFileW for atomic writes on Windows to preserve ACLs (#5849)
* [BUG] fix: use ReplaceFileW for atomic writes on Windows to preserve ACLs

* fix: ensure atomic_replace checks for Windows API availability
2026-03-11 12:59:14 +08:00
Emmanuel Nwanguma 31bc84106f test: add API integration tests for hubspot, intercom, google_docs tools (#6167)
>>
>> Resolves #5921
>>
>> - test_hubspot_tool.py: 51 tests covering 15 MCP tools
>> - test_intercom_tool.py: 50 tests covering 11 MCP tools
>> - test_google_docs_tool.py: 57 tests covering 11 MCP tools
2026-03-11 12:55:03 +08:00
Timothy bdd6194203 feature: hive flowchart at planning phase 2026-03-10 19:54:02 -07:00
RichardTang-Aden fd79dceb0f Merge pull request #6166 from aden-hive/fix/subagent-reply-stall
micro-fix: update escalation tests for new ESCALATION_REQUESTED flow
2026-03-10 19:47:00 -07:00
RichardTang-Aden 738e469d96 Merge pull request #6165 from aden-hive/feature/provider-moonshotai-kimi
feat: support MoonShot AI Kimi subscription
2026-03-10 19:39:25 -07:00
Timothy 80ccbcc827 chore: lint 2026-03-10 19:37:18 -07:00
RichardTang-Aden 08fac31a9d Merge pull request #6159 from aden-hive/fix/subagent-reply-stall
fix: route subagent report_to_parent escalations to queen instead of user
2026-03-10 18:24:33 -07:00
Timothy 7c47e367de feat: support moonshotai kimi subscription 2026-03-10 18:03:44 -07:00
Aaryann Chandola e82133741c Merge branch 'aden-hive:main' into feat/notion-tool-docs-and-improvements 2026-03-11 04:23:20 +05:30
Antiarin 5076278dcb feat(notion): register Notion tool in verified and unverified registration functions
- Added the Notion tool registration to the _register_verified function.
- Removed the Notion tool registration from the _register_unverified function to ensure proper handling.
2026-03-11 02:45:51 +05:30
Antiarin 2398e04e11 docs(notion): add README for Notion tool with setup instructions and usage examples
- Introduced a comprehensive README.md for the Notion tool.
- Included setup instructions for the Notion API token and credential store configuration.
- Documented available tools and their functionalities.
- Provided usage examples for searching, creating, updating, and managing pages and databases.
2026-03-11 02:45:41 +05:30
Antiarin d00f321627 test(notion): add comprehensive tests for error handling and credential store in Notion tool
- Implemented tests for HTTP error codes, timeouts, and generic exceptions in _request.
- Added tests to verify the use of credential store when provided.
- Enhanced tests for notion_search to include filter types and page size clamping.
- Updated test assertions for successful responses from notion_get_page.
2026-03-11 02:45:30 +05:30
Antiarin e76b6cb575 feat(notion): enhance Notion tool functionality with new block types and improved page creation
- Added BlockType enum for various Notion block types.
- Updated notion_create_page to allow specifying parent_page_id and title_property.
- Enhanced notion_query_database to support sorting and pagination.
- Introduced notion_create_database for creating databases under a parent page.
- Improved error handling for required parameters in page and database creation.
2026-03-11 02:45:12 +05:30
bryan 4ad0d0e077 fix: align the credential functions to be the same 2026-03-09 10:14:21 -07:00
bryan cba0ec110f fix: linter update 2026-03-08 19:37:57 -07:00
bryan 0256e0c944 Merge branch 'main' into feat/agent-trigger 2026-03-08 19:28:36 -07:00
bryan 4d9d0362a0 fixes to make the timer trigger properly 2026-03-08 18:44:42 -07:00
bryan f474d0bc8e Merge branch 'main' into feat/agent-trigger 2026-03-08 16:59:14 -07:00
bryan 6a0681b9aa feat: fixing phase 4, continuing to test 2026-03-08 16:52:00 -07:00
bryan c7e634851b feat: phase 4 of trigger plan 2026-03-06 19:21:32 -08:00
bryan cdb7155960 feat: phase 3 of trigger plan 2026-03-06 18:07:26 -08:00
bryan 3f7790c26a feat: phase 2 of trigger plan 2026-03-06 17:22:57 -08:00
bryan 5676b115f4 Merge branch 'feat/queen-responsibility' into feat/agent-trigger 2026-03-06 16:58:06 -08:00
bryan 61c59d57e8 feat: phase 1 of trigger plan 2026-03-06 15:11:36 -08:00
314 changed files with 44470 additions and 11430 deletions
@@ -0,0 +1,78 @@
name: Standard Bounty
description: A bounty task for general framework contributions (not integration-specific)
title: "[Bounty]: "
labels: []
body:
- type: markdown
attributes:
value: |
## Standard Bounty
This issue is part of the [Bounty Program](../../docs/bounty-program/README.md).
**Claim this bounty** by commenting below — a maintainer will assign you within 24 hours.
- type: dropdown
id: bounty-size
attributes:
label: Bounty Size
options:
- "Small (10 pts)"
- "Medium (30 pts)"
- "Large (75 pts)"
- "Extreme (150 pts)"
validations:
required: true
- type: dropdown
id: difficulty
attributes:
label: Difficulty
options:
- Easy
- Medium
- Hard
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: What needs to be done to complete this bounty.
placeholder: |
Describe the specific task, including:
- What the contributor needs to do
- Links to relevant files in the repo
- Any context or motivation for the change
validations:
required: true
- type: textarea
id: acceptance-criteria
attributes:
label: Acceptance Criteria
description: What "done" looks like. The PR must meet all criteria.
placeholder: |
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] CI passes
validations:
required: true
- type: textarea
id: relevant-files
attributes:
label: Relevant Files
description: Links to files or directories related to this bounty.
placeholder: |
- `path/to/file.py`
- `path/to/directory/`
- type: textarea
id: resources
attributes:
label: Resources
description: Links to docs, issues, or external references that will help.
placeholder: |
- Related issue: #XXXX
- Docs: https://...
+14 -4
@@ -2,14 +2,22 @@ name: Bounty completed
description: Awards points and notifies Discord when a bounty PR is merged
on:
pull_request:
pull_request_target:
types: [closed]
workflow_dispatch:
inputs:
pr_number:
description: "PR number to process (for missed bounties)"
required: true
type: number
jobs:
bounty-notify:
if: >
github.event.pull_request.merged == true &&
contains(join(github.event.pull_request.labels.*.name, ','), 'bounty:')
github.event_name == 'workflow_dispatch' ||
(github.event.pull_request.merged == true &&
contains(join(github.event.pull_request.labels.*.name, ','), 'bounty:'))
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
@@ -32,6 +40,8 @@ jobs:
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
DISCORD_WEBHOOK_URL: ${{ secrets.DISCORD_BOUNTY_WEBHOOK_URL }}
BOT_API_URL: ${{ secrets.BOT_API_URL }}
BOT_API_KEY: ${{ secrets.BOT_API_KEY }}
LURKR_API_KEY: ${{ secrets.LURKR_API_KEY }}
LURKR_GUILD_ID: ${{ secrets.LURKR_GUILD_ID }}
PR_NUMBER: ${{ github.event.pull_request.number }}
PR_NUMBER: ${{ inputs.pr_number || github.event.pull_request.number }}
-126
@@ -1,126 +0,0 @@
name: Link Discord account
description: Auto-creates a PR to add contributor to contributors.yml when a link-discord issue is opened
on:
issues:
types: [opened]
jobs:
link-discord:
if: contains(github.event.issue.labels.*.name, 'link-discord')
runs-on: ubuntu-latest
timeout-minutes: 2
permissions:
contents: write
issues: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Parse issue and update contributors.yml
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const issue = context.payload.issue;
const githubUsername = issue.user.login;
// Parse the issue body for form fields
const body = issue.body || '';
// Extract Discord ID — look for the numeric value after the "Discord User ID" heading
const discordMatch = body.match(/### Discord User ID\s*\n\s*(\d{17,20})/);
if (!discordMatch) {
await github.rest.issues.createComment({
...context.repo,
issue_number: issue.number,
body: `Could not find a valid Discord ID in the issue body. Please make sure you entered a numeric ID (17-20 digits), not a username.\n\nExample: \`123456789012345678\``
});
await github.rest.issues.update({
...context.repo,
issue_number: issue.number,
state: 'closed',
state_reason: 'not_planned'
});
return;
}
const discordId = discordMatch[1];
// Extract display name (optional)
const nameMatch = body.match(/### Display Name \(optional\)\s*\n\s*(.+)/);
const displayName = nameMatch ? nameMatch[1].trim() : '';
// Check if user already exists
const yml = fs.readFileSync('contributors.yml', 'utf-8');
if (yml.includes(`github: ${githubUsername}`)) {
await github.rest.issues.createComment({
...context.repo,
issue_number: issue.number,
body: `@${githubUsername} is already in \`contributors.yml\`. If you need to update your Discord ID, please edit the file directly via PR.`
});
await github.rest.issues.update({
...context.repo,
issue_number: issue.number,
state: 'closed',
state_reason: 'completed'
});
return;
}
// Append entry to contributors.yml
let entry = ` - github: ${githubUsername}\n discord: "${discordId}"`;
if (displayName && displayName !== '_No response_') {
entry += `\n name: ${displayName}`;
}
entry += '\n';
const updated = yml.trimEnd() + '\n' + entry;
fs.writeFileSync('contributors.yml', updated);
// Set outputs for commit step
core.exportVariable('GITHUB_USERNAME', githubUsername);
core.exportVariable('DISCORD_ID', discordId);
core.exportVariable('ISSUE_NUMBER', issue.number.toString());
- name: Create PR
run: |
# Check if there are changes
if git diff --quiet contributors.yml; then
echo "No changes to contributors.yml"
exit 0
fi
BRANCH="docs/link-discord-${GITHUB_USERNAME}"
git config user.name "github-actions[bot]"
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git checkout -b "$BRANCH"
git add contributors.yml
git commit -m "docs: link @${GITHUB_USERNAME} to Discord"
git push origin "$BRANCH"
gh pr create \
--title "docs: link @${GITHUB_USERNAME} to Discord" \
--body "Adds @${GITHUB_USERNAME} (Discord \`${DISCORD_ID}\`) to \`contributors.yml\` for bounty XP tracking.
Closes #${ISSUE_NUMBER}" \
--base main \
--head "$BRANCH" \
--label "link-discord"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Notify on issue
uses: actions/github-script@v7
with:
script: |
const username = process.env.GITHUB_USERNAME;
const issueNumber = parseInt(process.env.ISSUE_NUMBER);
await github.rest.issues.createComment({
...context.repo,
issue_number: issueNumber,
body: `A PR has been created to link your account. A maintainer will merge it shortly — once merged, you'll receive XP and Discord pings when your bounty PRs are merged.`
});
+2
@@ -35,6 +35,8 @@ jobs:
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
DISCORD_WEBHOOK_URL: ${{ secrets.DISCORD_BOUNTY_WEBHOOK_URL }}
BOT_API_URL: ${{ secrets.BOT_API_URL }}
BOT_API_KEY: ${{ secrets.BOT_API_KEY }}
LURKR_API_KEY: ${{ secrets.LURKR_API_KEY }}
LURKR_GUILD_ID: ${{ secrets.LURKR_GUILD_ID }}
SINCE_DATE: ${{ github.event.inputs.since_date || '' }}
-1
@@ -68,7 +68,6 @@ temp/
exports/*
.claude/settings.local.json
.claude/skills/ship-it/
.venv
+150 -27
@@ -1,17 +1,149 @@
# Release Notes
## v0.7.1
**Release Date:** March 13, 2026
**Tag:** v0.7.1
### Chrome-Native Browser Control
v0.7.1 replaces Playwright with direct Chrome DevTools Protocol (CDP) integration. The GCU now launches the user's system Chrome via `open -n` on macOS, connects over CDP, and manages browser lifecycle end-to-end -- no extra browser binary required.
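To make the approach concrete, here is a minimal sketch (not the GCU's actual code) of launching a separate system Chrome and driving it over CDP. The Chrome app name, port 9222, the crude startup wait, and the `websockets` dependency are assumptions for illustration only:

```python
# Minimal sketch of the CDP idea, not the GCU implementation.
# Assumptions: macOS, Chrome installed, port 9222 free, `pip install websockets`.
import asyncio
import json
import subprocess
import time
import urllib.request

import websockets


async def main() -> None:
    # Launch a *separate* Chrome instance so the user's own tabs stay untouched.
    subprocess.run([
        "open", "-n", "-a", "Google Chrome", "--args",
        "--remote-debugging-port=9222",
        "--user-data-dir=/tmp/hive-cdp-demo",
        "--no-startup-window",
    ], check=True)
    time.sleep(2)  # crude wait for the DevTools endpoint to come up

    # The /json/version endpoint exposes the browser-level WebSocket URL.
    with urllib.request.urlopen("http://localhost:9222/json/version") as resp:
        browser_ws = json.load(resp)["webSocketDebuggerUrl"]

    async with websockets.connect(browser_ws) as ws:
        # Open a page via the Target domain -- no Playwright involved.
        await ws.send(json.dumps({
            "id": 1,
            "method": "Target.createTarget",
            "params": {"url": "https://example.com"},
        }))
        print(await ws.recv())  # CDP reply for the createTarget command


asyncio.run(main())
```

Everything Playwright used to provide is expressed this way: JSON commands over the DevTools WebSocket against the user's own Chrome binary.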
---
### Highlights
#### System Chrome via CDP
The entire GCU browser stack has been rewritten:
- **Chrome finder & launcher** -- New `chrome_finder.py` discovers installed Chrome and `chrome_launcher.py` manages process lifecycle with `--remote-debugging-port`
- **Coexist with user's browser** -- `open -n` on macOS launches a separate Chrome instance so the user's tabs stay untouched
- **Dynamic viewport sizing** -- Viewport auto-sizes to the available display area, suppressing Chrome warning bars
- **Orphan cleanup** -- Chrome processes are killed on GCU server shutdown to prevent leaks
- **`--no-startup-window`** -- Chrome launches headlessly by default until a page is needed
#### Per-Subagent Browser Isolation
Each GCU subagent gets its own Chrome user-data directory, preventing cookie/session cross-contamination:
- Unique browser profiles injected per subagent
- Profiles cleaned up after top-level GCU node execution
- Tab origin and age metadata tracked per subagent
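A rough sketch of the isolation idea under stated assumptions (the real profile layout and cleanup hooks live in the GCU, not here):

```python
# Hypothetical sketch: give each subagent its own user-data dir, clean up afterwards.
import shutil
import tempfile


def make_profile(subagent_id: str) -> str:
    """Create an isolated Chrome user-data directory for one subagent."""
    return tempfile.mkdtemp(prefix=f"hive-gcu-{subagent_id}-")


def cleanup_profile(path: str) -> None:
    """Remove the profile after the top-level GCU node finishes."""
    shutil.rmtree(path, ignore_errors=True)


profile = make_profile("researcher-1")
# ... pass `--user-data-dir={profile}` when launching that subagent's Chrome ...
cleanup_profile(profile)
```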
#### Dummy Agent Testing Framework
A comprehensive test suite for validating agent graph patterns without LLM calls:
- 8 test modules covering echo, pipeline, branch, parallel merge, retry, feedback loop, worker, and GCU subagent patterns
- Shared fixtures and a `run_all.py` runner for CI integration
- Subagent lifecycle tests
---
### What's New
#### GCU Browser
- **Switch from Playwright to system Chrome via CDP** -- Direct CDP connection replaces Playwright dependency. (@bryanadenhq)
- **Chrome finder and launcher modules** -- `chrome_finder.py` and `chrome_launcher.py` for cross-platform Chrome discovery and process management. (@bryanadenhq)
- **Dynamic viewport sizing** -- Auto-size viewport and suppress Chrome warning bar. (@bryanadenhq)
- **Per-subagent browser profile isolation** -- Unique user-data directories per subagent with cleanup. (@bryanadenhq)
- **Tab origin/age metadata** -- Track which subagent opened each tab and when. (@bryanadenhq)
- **`browser_close_all` tool** -- Bulk tab cleanup for agents managing many pages. (@bryanadenhq)
- **Auto-track popup pages** -- Popups are automatically captured and tracked. (@bryanadenhq)
- **Auto-snapshot from browser interactions** -- Browser interaction tools return screenshots automatically. (@bryanadenhq)
- **Kill orphaned Chrome processes** -- GCU server shutdown cleans up lingering Chrome instances. (@bryanadenhq)
- **`--no-startup-window` Chrome flag** -- Prevent empty window on launch. (@bryanadenhq)
- **Launch Chrome via `open -n` on macOS** -- Coexist with the user's running browser. (@bryanadenhq)
#### Framework & Runtime
- **Session resume fix for new agents** -- Correctly resume sessions when a new agent is loaded. (@bryanadenhq)
- **Queen upsert fix** -- Prevent duplicate queen entries on session restore. (@bryanadenhq)
- **Anchor worker monitoring to queen's session ID on cold-restore** -- Worker monitors reconnect to the correct queen after restart. (@bryanadenhq)
- **Update meta.json when loading workers** -- Worker metadata stays in sync with runtime state. (@RichardTang-Aden)
- **Generate worker MCP file correctly** -- Fix MCP config generation for spawned workers. (@RichardTang-Aden)
- **Share event bus so tool events are visible to parent** -- Tool execution events propagate up to parent graphs. (@bryanadenhq)
- **Subagent activity tracking in queen status** -- Queen instructions include live subagent status. (@bryanadenhq)
- **GCU system prompt updates** -- Auto-snapshots, batching, popup tracking, and close_all guidance. (@bryanadenhq)
#### Frontend
- **Loading spinner in draft panel** -- Shows spinner during planning phase instead of blank panel. (@bryanadenhq)
- **Fix credential modal errors** -- Modal no longer eats errors; banner stays visible. (@bryanadenhq)
- **Fix credentials_required loop** -- Stop clearing the flag on modal close to prevent infinite re-prompting. (@bryanadenhq)
- **Fix "Add tab" dropdown overflow** -- Dropdown no longer hidden when many agents are open. (@prasoonmhwr)
#### Testing
- **Dummy agent test framework** -- 8 test modules (echo, pipeline, branch, parallel merge, retry, feedback loop, worker, GCU subagent) with shared fixtures and CI runner. (@bryanadenhq)
- **Subagent lifecycle tests** -- Validate subagent spawn and completion flows. (@bryanadenhq)
#### Documentation & Infrastructure
- **MCP integration PRD** -- Product requirements for MCP server registry. (@TimothyZhang7)
- **Skills registry PRD** -- Product requirements for skill registry system. (@bryanadenhq)
- **Bounty program updates** -- Standard bounty issue template and updated contributor guide. (@bryanadenhq)
- **Windows quickstart** -- Add default context limit for PowerShell setup. (@bryanadenhq)
- **Remove deprecated files** -- Clean up `setup_mcp.py`, `verify_mcp.py`, `antigravity-setup.md`, and `setup-antigravity-mcp.sh`. (@bryanadenhq)
---
### Bug Fixes
- Fix credential modal eating errors and banner staying open
- Stop clearing `credentials_required` on modal close to prevent infinite loop
- Share event bus so tool events are visible to parent graph
- Use lazy %-formatting in subagent completion log to avoid f-string in logger
- Anchor worker monitoring to queen's session ID on cold-restore
- Update meta.json when loading workers
- Generate worker MCP file correctly
- Fix "Add tab" dropdown partially hidden when creating multiple agents
---
### Community Contributors
- **Prasoon Mahawar** (@prasoonmhwr) -- Fix UI overflow on agent tab dropdown
- **Richard Tang** (@RichardTang-Aden) -- Worker MCP generation and meta.json fixes
---
### Upgrading
```bash
git pull origin main
uv sync
```
The Playwright dependency is no longer required for GCU browser operations. Chrome must be installed on the host system.
---
## v0.7.0
**Release Date:** March 5, 2026
**Tag:** v0.7.0
Session management refactor release.
---
## v0.5.1
**Release Date:** February 18, 2026
**Tag:** v0.5.1
## The Hive Gets a Brain
### The Hive Gets a Brain
v0.5.1 is our most ambitious release yet. Hive agents can now **build other agents** -- the new Hive Coder meta-agent writes, tests, and fixes agent packages from natural language. The runtime grows multi-graph support so one session can orchestrate multiple agents simultaneously. The TUI gets a complete overhaul with an in-app agent picker, live streaming, and seamless escalation to the Coder. And we're now provider-agnostic: Claude Code subscriptions, OpenAI-compatible endpoints, and any LiteLLM-supported model work out of the box.
---
## Highlights
### Highlights
### Hive Coder -- The Agent That Builds Agents
#### Hive Coder -- The Agent That Builds Agents
A native meta-agent that lives inside the framework at `core/framework/agents/hive_coder/`. Give it a natural-language specification and it produces a complete agent package -- goal definition, node prompts, edge routing, MCP tool wiring, tests, and all boilerplate files.
@@ -30,7 +162,7 @@ The Coder ships with:
- **Coder Tools MCP server** -- file I/O, fuzzy-match editing, git snapshots, and sandboxed shell execution (`tools/coder_tools_server.py`)
- **Test generation** -- structural tests for forever-alive agents that don't hang on `runner.run()`
### Multi-Graph Agent Runtime
#### Multi-Graph Agent Runtime
`AgentRuntime` now supports loading, managing, and switching between multiple agent graphs within a single session. Six new lifecycle tools give agents (and the TUI) full control:
@@ -44,7 +176,7 @@ await runtime.add_graph("exports/deep_research_agent")
The Hive Coder uses multi-graph internally -- when you escalate from a worker agent, the Coder loads as a separate graph while the worker stays alive in the background.
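As a sketch of what that looks like in practice (only `add_graph`/`remove_graph` and the export path shown above come from these notes; the helper function and argument forms are assumptions):

```python
# Sketch only, assuming a running session and an AgentRuntime instance.
async def reshape_session(runtime) -> None:  # runtime: an AgentRuntime instance
    # Load two agent graphs into the same session.
    await runtime.add_graph("exports/deep_research_agent")
    await runtime.add_graph("exports/email_inbox_agent")
    # Later, drop one without tearing down the session or the other graph.
    await runtime.remove_graph("deep_research_agent")
```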
### TUI Revamp
#### TUI Revamp
The Terminal UI gets a ground-up rebuild with five major additions:
@@ -54,7 +186,7 @@ The Terminal UI gets a ground-up rebuild with five major additions:
- **PDF attachments** -- `/attach` and `/detach` commands with native OS file dialog (macOS, Linux, Windows)
- **Multi-graph commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>` for managing agent graphs in-session
### Provider-Agnostic LLM Support
#### Provider-Agnostic LLM Support
Hive is no longer Anthropic-only. v0.5.1 adds first-class support for:
@@ -66,9 +198,9 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
## What's New
### What's New
### Architecture & Runtime
#### Architecture & Runtime
- **Hive Coder meta-agent** -- Natural-language agent builder with reference docs, guardian watchdog, and `hive code` CLI command. (@TimothyZhang7)
- **Multi-graph agent sessions** -- `add_graph`/`remove_graph` on AgentRuntime with 6 lifecycle tools (`load_agent`, `unload_agent`, `start_agent`, `restart_agent`, `list_agents`, `get_user_presence`). (@TimothyZhang7)
@@ -79,7 +211,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
- **Pre-start confirmation prompt** -- Interactive prompt before agent execution allowing credential updates or abort. (@RichardTang-Aden)
- **Event bus multi-graph support** -- `graph_id` on events, `filter_graph` on subscriptions, `ESCALATION_REQUESTED` event type, `exclude_own_graph` filter. (@TimothyZhang7)
### TUI Improvements
#### TUI Improvements
- **In-app agent picker** (Ctrl+A) -- Tabbed modal for browsing agents with metadata badges (nodes, tools, sessions, tags). (@TimothyZhang7)
- **Runtime-optional TUI startup** -- Launches without a pre-loaded agent, shows agent picker on startup. (@TimothyZhang7)
@@ -89,7 +221,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
- **Multi-graph TUI commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>`. (@TimothyZhang7)
- **Agent Guardian watchdog** -- Event-driven monitor that catches secondary agent failures and triggers automatic remediation, with `--no-guardian` CLI flag. (@TimothyZhang7)
### New Tool Integrations
#### New Tool Integrations
| Tool | Description | Contributor |
| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
@@ -99,7 +231,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
| **Google Docs** | Document creation, reading, and editing with OAuth credential support | @haliaeetusvocifer |
| **Gmail enhancements** | Expanded mail operations for inbox management | @bryanadenhq |
### Infrastructure
#### Infrastructure
- **Default node type → `event_loop`** -- `NodeSpec.node_type` defaults to `"event_loop"` instead of `"llm_tool_use"`. (@TimothyZhang7)
- **Default `max_node_visits` → 0 (unlimited)** -- Nodes default to unlimited visits, reducing friction for feedback loops and forever-alive agents. (@TimothyZhang7)
@@ -112,7 +244,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
## Bug Fixes
### Bug Fixes
- Flush WIP accumulator outputs on cancel/failure so edge conditions see correct values on resume
- Stall detection state preserved across resume (no more resets on checkpoint restore)
@@ -125,13 +257,13 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
- Fix email agent version conflicts (@RichardTang-Aden)
- Fix coder tool timeouts (120s for tests, 300s cap for commands)
## Documentation
### Documentation
- Clarify installation and prevent root pip install misuse (@paarths-collab)
---
## Agent Updates
### Agent Updates
- **Email Inbox Management** -- Consolidate `gmail_inbox_guardian` and `inbox_management` into a single unified agent with updated prompts and config. (@RichardTang-Aden, @bryanadenhq)
- **Job Hunter** -- Updated node prompts, config, and agent metadata; added PDF resume selection. (@bryanadenhq)
@@ -141,7 +273,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
## Breaking Changes
### Breaking Changes
- **Deprecated node types raise `RuntimeError`** -- `llm_tool_use`, `llm_generate`, `function`, `router`, `human_input` now fail instead of warning. Migrate to `event_loop`.
- **`NodeSpec.node_type` defaults to `"event_loop"`** (was `"llm_tool_use"`)
@@ -150,7 +282,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
## Community Contributors
### Community Contributors
A huge thank you to everyone who contributed to this release:
@@ -165,14 +297,14 @@ A huge thank you to everyone who contributed to this release:
---
## Upgrading
### Upgrading
```bash
git pull origin main
uv sync
```
### Migration Guide
#### Migration Guide
If your agents use deprecated node types, update them:
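As a hedged sketch, the change can be as small as swapping the `node_type` value; the `id` field and any other `NodeSpec` arguments shown here are illustrative, not the exact example from the notes:

```python
from framework.graph.node import NodeSpec  # import path as used elsewhere in this repo

# Before (deprecated node types now raise RuntimeError):
#   NodeSpec(id="summarize", node_type="llm_tool_use", ...)

# After: rely on the new default ...
spec = NodeSpec(id="summarize")  # node_type now defaults to "event_loop"

# ... or spell it out explicitly:
spec = NodeSpec(id="summarize", node_type="event_loop")
```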
@@ -196,12 +328,3 @@ hive code
# Or from TUI -- press Ctrl+E to escalate
hive tui
```
---
## What's Next
- **Agent-to-agent communication** -- one agent's output triggers another agent's entry point
- **Cost visibility** -- detailed runtime log of LLM costs per node and per session
- **Persistent webhook subscriptions** -- survive agent restarts without re-registering
- **Remote agent deployment** -- run agents as long-lived services with HTTP APIs
+1013 -51
File diff suppressed because it is too large
+19 -12
@@ -1,24 +1,31 @@
.PHONY: lint format check test install-hooks help frontend-install frontend-dev frontend-build
.PHONY: lint format check test test-tools test-live test-all install-hooks help frontend-install frontend-dev frontend-build
# ── Ensure uv is findable in Git Bash on Windows ──────────────────────────────
# uv installs to ~/.local/bin on Windows/Linux/macOS. Git Bash may not include
# this in PATH by default, so we prepend it here.
export PATH := $(HOME)/.local/bin:$(PATH)
# ── Targets ───────────────────────────────────────────────────────────────────
help: ## Show this help
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \
awk 'BEGIN {FS = ":.*?## "}; {printf " \033[36m%-15s\033[0m %s\n", $$1, $$2}'
lint: ## Run ruff linter and formatter (with auto-fix)
cd core && ruff check --fix .
cd tools && ruff check --fix .
cd core && ruff format .
cd tools && ruff format .
cd core && uv run ruff check --fix .
cd tools && uv run ruff check --fix .
cd core && uv run ruff format .
cd tools && uv run ruff format .
format: ## Run ruff formatter
cd core && ruff format .
cd tools && ruff format .
cd core && uv run ruff format .
cd tools && uv run ruff format .
check: ## Run all checks without modifying files (CI-safe)
cd core && ruff check .
cd tools && ruff check .
cd core && ruff format --check .
cd tools && ruff format --check .
cd core && uv run ruff check .
cd tools && uv run ruff check .
cd core && uv run ruff format --check .
cd tools && uv run ruff format --check .
test: ## Run all tests (core + tools, excludes live)
cd core && uv run python -m pytest tests/ -v
@@ -46,4 +53,4 @@ frontend-dev: ## Start frontend dev server
cd core/frontend && npm run dev
frontend-build: ## Build frontend for production
cd core/frontend && npm run build
cd core/frontend && npm run build
+23 -18
@@ -27,7 +27,7 @@
<img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
<img src="https://img.shields.io/badge/Headless-Development-purple?style=flat-square" alt="Headless" />
<img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
<img src="https://img.shields.io/badge/Production--Ready-red?style=flat-square" alt="Production" />
<img src="https://img.shields.io/badge/Browser-Use-red?style=flat-square" alt="Browser Use" />
</p>
<p align="center">
<img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />
@@ -37,15 +37,17 @@
## Overview
Build autonomous, reliable, self-improving AI agents without hardcoding workflows. Define your goal through conversation with hive coding agent(queen), and the framework generates a node graph with dynamically created connection code. When things break, the framework captures failure data, evolves the agent through the coding agent, and redeploys. Built-in human-in-the-loop nodes, credential management, and real-time monitoring give you control without sacrificing adaptability.
Generate a swarm of worker agents with a coding agent (queen) that controls them. Define your goal through conversation with the hive queen, and the framework generates a node graph with dynamically created connection code. When things break, the framework captures failure data, evolves the agent through the coding agent, and redeploys. Built-in human-in-the-loop nodes, browser use, credential management, and real-time monitoring give you control without sacrificing adaptability.
Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.
[![Hive Demo](https://img.youtube.com/vi/XDOG9fOaLjU/maxresdefault.jpg)](https://www.youtube.com/watch?v=XDOG9fOaLjU)
https://github.com/user-attachments/assets/bf10edc3-06ba-48b6-98ba-d069b15fb69d
## Who Is Hive For?
Hive is designed for developers and teams who want to build **production-grade AI agents** without manually wiring complex workflows.
Hive is designed for developers and teams who want to build many **autonomous AI agents** fast without manually wiring complex workflows.
Hive is a good fit if you:
@@ -84,7 +86,7 @@ Use Hive when you need:
- An LLM provider that powers the agents
- **ripgrep (optional, recommended on Windows):** The `search_files` tool uses ripgrep for faster file search. If not installed, a Python fallback is used. On Windows: `winget install BurntSushi.ripgrep` or `scoop install ripgrep`
> **Note for Windows Users:** It is strongly recommended to use **WSL (Windows Subsystem for Linux)** or **Git Bash** to run this framework. Some core automation scripts may not execute correctly in standard Command Prompt or PowerShell.
> **Windows Users:** Native Windows is supported via `quickstart.ps1` and `hive.ps1`. Run these in PowerShell 5.1+. WSL is also an option but not required.
### Installation
@@ -111,39 +113,36 @@ This sets up:
- **LLM provider** - Interactive default model configuration
- All required Python dependencies with `uv`
- At last, it will initiate the open hive interface in your browser
- Finally, it will open the Hive interface in your browser
> **Tip:** To reopen the dashboard later, run `hive open` from the project directory.
<img width="2500" height="1214" alt="home-screen" src="https://github.com/user-attachments/assets/134d897f-5e75-4874-b00b-e0505f6b45c4" />
### Build Your First Agent
Type the agent you want to build in the home input box
Type the agent you want to build in the home input box. The queen is going to ask you questions and work out a solution with you.
<img width="2500" height="1214" alt="Image" src="https://github.com/user-attachments/assets/1ce19141-a78b-46f5-8d64-dbf987e048f4" />
### Use Template Agents
Click "Try a sample agent" and check the templates. You can run a templates directly or choose to build your version on top of the existing template.
Click "Try a sample agent" and check the templates. You can run a template directly or choose to build your version on top of the existing template.
### Run Agents
Now you can run an agent by selectiing the agent (either an existing agent or example agent). You can click the Run button on the top left, or talk to the queen agent and it can run the agent for you.
Now you can run an agent by selecting the agent (either an existing agent or example agent). You can click the Run button on the top left, or talk to the queen agent and it can run the agent for you.
<img width="2500" height="1214" alt="Image" src="https://github.com/user-attachments/assets/71c38206-2ad5-49aa-bde8-6698d0bc55f5" />
<img width="2549" height="1174" alt="Screenshot 2026-03-12 at 9 27 36PM" src="https://github.com/user-attachments/assets/7c7d30fa-9ceb-4c23-95af-b1caa405547d" />
## Features
- **Browser-Use** - Control the browser on your computer to achieve hard tasks
- **Parallel Execution** - Execute the generated graph in parallel. This way you can have multiple agent compelteing the jobs for you
- **Parallel Execution** - Execute the generated graph in parallel. This way you can have multiple agents completing the jobs for you
- **[Goal-Driven Generation](docs/key_concepts/goals_outcome.md)** - Define objectives in natural language; the coding agent generates the agent graph and connection code to achieve them
- **[Adaptiveness](docs/key_concepts/evolution.md)** - Framework captures failures, calibrates according to the objectives, and evolves the agent graph
- **[Dynamic Node Connections](docs/key_concepts/graph.md)** - No predefined edges; connection code is generated by any capable LLM based on your goals
- **SDK-Wrapped Nodes** - Every node gets shared memory, local RLM memory, monitoring, tools, and LLM access out of the box
- **[Human-in-the-Loop](docs/key_concepts/graph.md#human-in-the-loop)** - Intervention nodes that pause execution for human input with configurable timeouts and escalation
- **Real-time Observability** - WebSocket streaming for live monitoring of agent execution, decisions, and node-to-node communication
- **Production-Ready** - Self-hostable, built for scale and reliability
## Integration
@@ -392,10 +391,6 @@ Hive generates your entire agent system from natural language goals using a codi
Yes, Hive is fully open-source under the Apache License 2.0. We actively encourage community contributions and collaboration.
**Q: Can Hive handle complex, production-scale use cases?**
Yes. Hive is explicitly designed for production environments with features like automatic failure recovery, real-time observability, cost controls, and horizontal scaling support. The framework handles both simple automations and complex multi-agent workflows.
**Q: Does Hive support human-in-the-loop workflows?**
Yes, Hive fully supports [human-in-the-loop](docs/key_concepts/graph.md#human-in-the-loop) workflows through intervention nodes that pause execution for human input. These include configurable timeouts and escalation policies, allowing seamless collaboration between human experts and AI agents.
@@ -420,6 +415,16 @@ Visit [docs.adenhq.com](https://docs.adenhq.com/) for complete guides, API refer
Contributions are welcome! Fork the repository, create your feature branch, implement your changes, and submit a pull request. See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
## Star History
<a href="https://star-history.com/#aden-hive/hive&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date" />
</picture>
</a>
---
<p align="center">
+2 -2
@@ -39,8 +39,8 @@ We consider security research conducted in accordance with this policy to be:
## Security Best Practices for Users
1. **Keep Updated**: Always run the latest version
2. **Secure Configuration**: Review `config.yaml` settings, especially in production
3. **Environment Variables**: Never commit `.env` files or `config.yaml` with secrets
2. **Secure Configuration**: Review your `~/.hive/configuration.json`, `.mcp.json`, and environment variable settings, especially in production
3. **Environment Variables**: Never commit `.env` files or any configuration files that contain secrets
4. **Network Security**: Use HTTPS in production, configure firewalls appropriately
5. **Database Security**: Use strong passwords, limit network access
-31
@@ -1,31 +0,0 @@
perf: reduce subprocess spawning in quickstart scripts (#4427)
## Problem
Windows process creation (CreateProcess) is 10-100x slower than Linux fork/exec.
The quickstart scripts were spawning 4+ separate `uv run python -c "import X"`
processes to verify imports, adding ~600ms overhead on Windows.
## Solution
Consolidated all import checks into a single batch script that checks multiple
modules in one subprocess call, reducing spawn overhead by ~75%.
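A standalone sketch of that idea, assuming nothing about the actual `scripts/check_requirements.py`:

```python
#!/usr/bin/env python3
"""Hypothetical batched import checker: one interpreter verifies many modules."""
import importlib.util
import sys


def check(modules: list[str]) -> int:
    # find_spec avoids importing the module; None means it is not installed.
    missing = [m for m in modules if importlib.util.find_spec(m) is None]
    for name in missing:
        print(f"missing: {name}", file=sys.stderr)
    return 1 if missing else 0


if __name__ == "__main__":
    # e.g. `python check_imports.py httpx pydantic litellm` -- one process, N checks
    sys.exit(check(sys.argv[1:]))
```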
## Changes
- **New**: `scripts/check_requirements.py` - Batched import checker
- **New**: `scripts/test_check_requirements.py` - Test suite
- **New**: `scripts/benchmark_quickstart.ps1` - Performance benchmark tool
- **Modified**: `quickstart.ps1` - Updated import verification (2 sections)
- **Modified**: `quickstart.sh` - Updated import verification
## Performance Impact
**Benchmark results on Windows:**
- Before: ~19.8 seconds for import checks
- After: ~4.9 seconds for import checks
- **Improvement: 14.9 seconds saved (75.2% faster)**
## Testing
- ✅ All functional tests pass (`scripts/test_check_requirements.py`)
- ✅ Quickstart scripts work correctly on Windows
- ✅ Error handling verified (invalid imports reported correctly)
- ✅ Performance benchmark confirms 75%+ improvement
Fixes #4427
-27
@@ -1,27 +0,0 @@
# Identity mapping: GitHub username -> Discord ID
#
# This file links GitHub accounts to Discord accounts for the
# Integration Bounty Program. When a bounty PR is merged, the
# GitHub Action uses this file to ping the contributor on Discord.
#
# HOW TO ADD YOURSELF:
# Open a "Link Discord Account" issue:
# https://github.com/aden-hive/hive/issues/new?template=link-discord.yml
# A GitHub Action will automatically add your entry here.
#
# To find your Discord ID:
# 1. Open Discord Settings > Advanced > Enable Developer Mode
# 2. Right-click your name > Copy User ID
#
# Format:
# - github: your-github-username
# discord: "your-discord-id" # quotes required (it's a number)
# name: Your Display Name # optional
contributors:
# - github: example-user
# discord: "123456789012345678"
# name: Example User
- github: TimothyZhang7
discord: "408460790061072384"
name: Timothy@Aden
+583
@@ -0,0 +1,583 @@
#!/usr/bin/env python3
"""Antigravity authentication CLI.
Implements OAuth2 flow for Google's Antigravity Code Assist gateway.
Credentials are stored in ~/.hive/antigravity-accounts.json.
Usage:
python -m antigravity_auth auth account add
python -m antigravity_auth auth account list
python -m antigravity_auth auth account remove <email>
"""
from __future__ import annotations
import argparse
import json
import logging
import os
import secrets
import socket
import sys
import time
import urllib.parse
import urllib.request
import webbrowser
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path
from typing import Any
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger(__name__)
# OAuth endpoints
_OAUTH_AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"
_OAUTH_TOKEN_URL = "https://oauth2.googleapis.com/token"
# Scopes for Antigravity/Cloud Code Assist
_OAUTH_SCOPES = [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/userinfo.email",
"https://www.googleapis.com/auth/userinfo.profile",
]
# Credentials file path in ~/.hive/
_ACCOUNTS_FILE = Path.home() / ".hive" / "antigravity-accounts.json"
# Default project ID
_DEFAULT_PROJECT_ID = "rising-fact-p41fc"
_DEFAULT_REDIRECT_PORT = 51121
# OAuth credentials fetched from the opencode-antigravity-auth project.
# This project reverse-engineered and published the public OAuth credentials
# for Google's Antigravity/Cloud Code Assist API.
# Source: https://github.com/NoeFabris/opencode-antigravity-auth
_CREDENTIALS_URL = (
"https://raw.githubusercontent.com/NoeFabris/opencode-antigravity-auth/dev/src/constants.ts"
)
# Cached credentials fetched from public source
_cached_client_id: str | None = None
_cached_client_secret: str | None = None
def _fetch_credentials_from_public_source() -> tuple[str | None, str | None]:
"""Fetch OAuth client ID and secret from the public npm package source on GitHub."""
global _cached_client_id, _cached_client_secret
if _cached_client_id and _cached_client_secret:
return _cached_client_id, _cached_client_secret
try:
req = urllib.request.Request(
_CREDENTIALS_URL, headers={"User-Agent": "Hive-Antigravity-Auth/1.0"}
)
with urllib.request.urlopen(req, timeout=10) as resp:
content = resp.read().decode("utf-8")
import re
id_match = re.search(r'ANTIGRAVITY_CLIENT_ID\s*=\s*"([^"]+)"', content)
secret_match = re.search(r'ANTIGRAVITY_CLIENT_SECRET\s*=\s*"([^"]+)"', content)
if id_match:
_cached_client_id = id_match.group(1)
if secret_match:
_cached_client_secret = secret_match.group(1)
return _cached_client_id, _cached_client_secret
except Exception as e:
logger.debug(f"Failed to fetch credentials from public source: {e}")
return None, None
def get_client_id() -> str:
"""Get OAuth client ID from env, config, or public source."""
env_id = os.environ.get("ANTIGRAVITY_CLIENT_ID")
if env_id:
return env_id
# Try hive config
hive_cfg = Path.home() / ".hive" / "configuration.json"
if hive_cfg.exists():
try:
with open(hive_cfg) as f:
cfg = json.load(f)
cfg_id = cfg.get("llm", {}).get("antigravity_client_id")
if cfg_id:
return cfg_id
except Exception:
pass
# Fetch from public source
client_id, _ = _fetch_credentials_from_public_source()
if client_id:
return client_id
raise RuntimeError("Could not obtain Antigravity OAuth client ID")
def get_client_secret() -> str | None:
"""Get OAuth client secret from env, config, or public source."""
secret = os.environ.get("ANTIGRAVITY_CLIENT_SECRET")
if secret:
return secret
# Try to read from hive config
hive_cfg = Path.home() / ".hive" / "configuration.json"
if hive_cfg.exists():
try:
with open(hive_cfg) as f:
cfg = json.load(f)
secret = cfg.get("llm", {}).get("antigravity_client_secret")
if secret:
return secret
except Exception:
pass
# Fetch from public source (npm package on GitHub)
_, secret = _fetch_credentials_from_public_source()
return secret
def find_free_port() -> int:
"""Find an available local port."""
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind(("", 0))
s.listen(1)
return s.getsockname()[1]
class OAuthCallbackHandler(BaseHTTPRequestHandler):
"""Handle OAuth callback from browser."""
auth_code: str | None = None
state: str | None = None
error: str | None = None
def log_message(self, format: str, *args: Any) -> None:
pass # Suppress default logging
def do_GET(self) -> None:
parsed = urllib.parse.urlparse(self.path)
if parsed.path == "/oauth-callback":
query = urllib.parse.parse_qs(parsed.query)
if "error" in query:
OAuthCallbackHandler.error = query["error"][0]  # class attribute, mirroring auth_code/state
self._send_response("Authentication failed. You can close this window.")
return
if "code" in query and "state" in query:
OAuthCallbackHandler.auth_code = query["code"][0]
OAuthCallbackHandler.state = query["state"][0]
self._send_response(
"Authentication successful! You can close this window "
"and return to the terminal."
)
return
self._send_response("Waiting for authentication...")
def _send_response(self, message: str) -> None:
self.send_response(200)
self.send_header("Content-Type", "text/html")
self.end_headers()
html = f"""<!DOCTYPE html>
<html>
<head><title>Antigravity Auth</title></head>
<body style="font-family: system-ui; display: flex; align-items: center;
justify-content: center; height: 100vh; margin: 0; background: #1a1a2e;
color: #eee;">
<div style="text-align: center;">
<h2>{message}</h2>
</div>
</body>
</html>"""
self.wfile.write(html.encode())
def wait_for_callback(port: int, timeout: int = 300) -> tuple[str | None, str | None, str | None]:
"""Start local server and wait for OAuth callback."""
server = HTTPServer(("localhost", port), OAuthCallbackHandler)
server.timeout = 1
start = time.time()
while time.time() - start < timeout:
if OAuthCallbackHandler.auth_code or OAuthCallbackHandler.error:
return (
OAuthCallbackHandler.auth_code,
OAuthCallbackHandler.state,
OAuthCallbackHandler.error,
)
server.handle_request()
return None, None, "timeout"
def exchange_code_for_tokens(
code: str, redirect_uri: str, client_id: str, client_secret: str | None
) -> dict[str, Any] | None:
"""Exchange authorization code for tokens."""
data = {
"code": code,
"client_id": client_id,
"redirect_uri": redirect_uri,
"grant_type": "authorization_code",
}
if client_secret:
data["client_secret"] = client_secret
body = urllib.parse.urlencode(data).encode()
req = urllib.request.Request(
_OAUTH_TOKEN_URL,
data=body,
headers={"Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=30) as resp:
return json.loads(resp.read())
except Exception as e:
logger.error(f"Token exchange failed: {e}")
return None
def get_user_email(access_token: str) -> str | None:
"""Get user email from Google API."""
req = urllib.request.Request(
"https://www.googleapis.com/oauth2/v2/userinfo",
headers={"Authorization": f"Bearer {access_token}"},
)
try:
with urllib.request.urlopen(req, timeout=10) as resp:
data = json.loads(resp.read())
return data.get("email")
except Exception:
return None
def load_accounts() -> dict[str, Any]:
"""Load existing accounts from file."""
if not _ACCOUNTS_FILE.exists():
return {"schemaVersion": 4, "accounts": []}
try:
with open(_ACCOUNTS_FILE) as f:
return json.load(f)
except Exception:
return {"schemaVersion": 4, "accounts": []}
def save_accounts(data: dict[str, Any]) -> None:
"""Save accounts to file."""
_ACCOUNTS_FILE.parent.mkdir(parents=True, exist_ok=True)
with open(_ACCOUNTS_FILE, "w") as f:
json.dump(data, f, indent=2)
logger.info(f"Saved credentials to {_ACCOUNTS_FILE}")
def validate_credentials(access_token: str, project_id: str = _DEFAULT_PROJECT_ID) -> bool:
"""Test if credentials work by making a simple API call to Antigravity.
Returns True if credentials are valid, False otherwise.
"""
endpoint = "https://daily-cloudcode-pa.sandbox.googleapis.com"
body = {
"project": project_id,
"model": "gemini-3-flash",
"request": {
"contents": [{"role": "user", "parts": [{"text": "hi"}]}],
"generationConfig": {"maxOutputTokens": 10},
},
"requestType": "agent",
"userAgent": "antigravity",
"requestId": "validation-test",
}
headers = {
"Authorization": f"Bearer {access_token}",
"Content-Type": "application/json",
"User-Agent": (
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
"AppleWebKit/537.36 (KHTML, like Gecko) Antigravity/1.18.3"
),
"X-Goog-Api-Client": "google-cloud-sdk vscode_cloudshelleditor/0.1",
}
try:
req = urllib.request.Request(
f"{endpoint}/v1internal:generateContent",
data=json.dumps(body).encode("utf-8"),
headers=headers,
method="POST",
)
with urllib.request.urlopen(req, timeout=30) as resp:
json.loads(resp.read())
return True
except Exception:
return False
def refresh_access_token(
refresh_token: str, client_id: str, client_secret: str | None
) -> dict | None:
"""Refresh the access token using the refresh token."""
data = {
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"client_id": client_id,
}
if client_secret:
data["client_secret"] = client_secret
body = urllib.parse.urlencode(data).encode()
req = urllib.request.Request(
_OAUTH_TOKEN_URL,
data=body,
headers={"Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=30) as resp:
return json.loads(resp.read())
except Exception as e:
logger.debug(f"Token refresh failed: {e}")
return None
def cmd_account_add(args: argparse.Namespace) -> int:
"""Add a new Antigravity account via OAuth2.
First checks if valid credentials already exist. If so, validates them
and skips OAuth if they work. Otherwise, proceeds with OAuth flow.
"""
client_id = get_client_id()
client_secret = get_client_secret()
# Check if credentials already exist
accounts_data = load_accounts()
accounts = accounts_data.get("accounts", [])
if accounts:
account = next((a for a in accounts if a.get("enabled", True) is not False), accounts[0])
access_token = account.get("access")
refresh_token_str = account.get("refresh", "")
refresh_token = refresh_token_str.split("|")[0] if refresh_token_str else None
project_id = (
refresh_token_str.split("|")[1] if "|" in refresh_token_str else _DEFAULT_PROJECT_ID
)
email = account.get("email", "unknown")
expires_ms = account.get("expires", 0)
expires_at = expires_ms / 1000.0 if expires_ms else 0.0
# Check if token is expired or near expiry
if access_token and expires_at and time.time() < expires_at - 60:
# Token still valid, test it
logger.info(f"Found existing credentials for: {email}")
logger.info("Validating existing credentials...")
if validate_credentials(access_token, project_id):
logger.info("✓ Credentials valid! Skipping OAuth.")
return 0
else:
logger.info("Credentials failed validation, refreshing...")
elif refresh_token:
logger.info(f"Found expired credentials for: {email}")
logger.info("Attempting token refresh...")
tokens = refresh_access_token(refresh_token, client_id, client_secret)
if tokens:
new_access = tokens.get("access_token")
expires_in = tokens.get("expires_in", 3600)
if new_access:
# Update the account
account["access"] = new_access
account["expires"] = int((time.time() + expires_in) * 1000)
accounts_data["last_refresh"] = time.strftime(
"%Y-%m-%dT%H:%M:%SZ", time.gmtime()
)
save_accounts(accounts_data)
# Validate the refreshed token
logger.info("Validating refreshed credentials...")
if validate_credentials(new_access, project_id):
logger.info("✓ Credentials refreshed and validated!")
return 0
else:
logger.info("Refreshed token failed validation, proceeding with OAuth...")
else:
logger.info("Token refresh failed, proceeding with OAuth...")
# No valid credentials, proceed with OAuth
if not client_secret:
logger.warning(
"No client secret configured. Token refresh may fail.\n"
"Set ANTIGRAVITY_CLIENT_SECRET env var or add "
"'antigravity_client_secret' to ~/.hive/configuration.json"
)
# Use fixed port and path matching Google's expected OAuth redirect URI
port = _DEFAULT_REDIRECT_PORT
redirect_uri = f"http://localhost:{port}/oauth-callback"
# Generate state for CSRF protection
state = secrets.token_urlsafe(16)
# Build authorization URL
params = {
"client_id": client_id,
"redirect_uri": redirect_uri,
"response_type": "code",
"scope": " ".join(_OAUTH_SCOPES),
"state": state,
"access_type": "offline",
"prompt": "consent",
}
auth_url = f"{_OAUTH_AUTH_URL}?{urllib.parse.urlencode(params)}"
logger.info("Opening browser for authentication...")
logger.info(f"If the browser doesn't open, visit: {auth_url}\n")
# Open browser
webbrowser.open(auth_url)
# Wait for callback
logger.info(f"Listening for callback on port {port}...")
code, received_state, error = wait_for_callback(port)
if error:
logger.error(f"Authentication failed: {error}")
return 1
if not code:
logger.error("No authorization code received")
return 1
if received_state != state:
logger.error("State mismatch - possible CSRF attack")
return 1
# Exchange code for tokens
logger.info("Exchanging authorization code for tokens...")
tokens = exchange_code_for_tokens(code, redirect_uri, client_id, client_secret)
if not tokens:
return 1
access_token = tokens.get("access_token")
refresh_token = tokens.get("refresh_token")
expires_in = tokens.get("expires_in", 3600)
if not access_token:
logger.error("No access token in response")
return 1
# Get user email
email = get_user_email(access_token)
if email:
logger.info(f"Authenticated as: {email}")
# Load existing accounts and add/update
accounts_data = load_accounts()
accounts = accounts_data.get("accounts", [])
# Build new account entry (V4 schema)
expires_ms = int((time.time() + expires_in) * 1000)
refresh_entry = f"{refresh_token}|{_DEFAULT_PROJECT_ID}"
new_account = {
"access": access_token,
"refresh": refresh_entry,
"expires": expires_ms,
"email": email,
"enabled": True,
}
# Update existing account or add new one
existing_idx = next((i for i, a in enumerate(accounts) if a.get("email") == email), None)
if existing_idx is not None:
accounts[existing_idx] = new_account
logger.info(f"Updated existing account: {email}")
else:
accounts.append(new_account)
logger.info(f"Added new account: {email}")
accounts_data["accounts"] = accounts
accounts_data["schemaVersion"] = 4
accounts_data["last_refresh"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
save_accounts(accounts_data)
logger.info("\n✓ Authentication complete!")
return 0
def cmd_account_list(args: argparse.Namespace) -> int:
"""List all stored accounts."""
data = load_accounts()
accounts = data.get("accounts", [])
if not accounts:
logger.info("No accounts configured.")
logger.info("Run 'antigravity auth account add' to add one.")
return 0
logger.info("Configured accounts:\n")
for i, account in enumerate(accounts, 1):
email = account.get("email", "unknown")
enabled = "enabled" if account.get("enabled", True) else "disabled"
logger.info(f" {i}. {email} ({enabled})")
return 0
def cmd_account_remove(args: argparse.Namespace) -> int:
"""Remove an account by email."""
email = args.email
data = load_accounts()
accounts = data.get("accounts", [])
original_len = len(accounts)
accounts = [a for a in accounts if a.get("email") != email]
if len(accounts) == original_len:
logger.error(f"No account found with email: {email}")
return 1
data["accounts"] = accounts
save_accounts(data)
logger.info(f"Removed account: {email}")
return 0
def main() -> int:
parser = argparse.ArgumentParser(
description="Antigravity authentication CLI",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
subparsers = parser.add_subparsers(dest="command", help="Commands")
# auth account add
auth_parser = subparsers.add_parser("auth", help="Authentication commands")
auth_subparsers = auth_parser.add_subparsers(dest="auth_command")
account_parser = auth_subparsers.add_parser("account", help="Account management")
account_subparsers = account_parser.add_subparsers(dest="account_command")
add_parser = account_subparsers.add_parser("add", help="Add a new account via OAuth2")
add_parser.set_defaults(func=cmd_account_add)
list_parser = account_subparsers.add_parser("list", help="List configured accounts")
list_parser.set_defaults(func=cmd_account_list)
remove_parser = account_subparsers.add_parser("remove", help="Remove an account")
remove_parser.add_argument("email", help="Email of account to remove")
remove_parser.set_defaults(func=cmd_account_remove)
args = parser.parse_args()
if hasattr(args, "func"):
return args.func(args)
parser.print_help()
return 0
if __name__ == "__main__":
sys.exit(main())
-740
@@ -1,740 +0,0 @@
#!/usr/bin/env python3
"""
EventLoopNode WebSocket Demo
Real LLM, real FileConversationStore, real EventBus.
Streams EventLoopNode execution to a browser via WebSocket.
Usage:
cd /home/timothy/oss/hive/core
python demos/event_loop_wss_demo.py
Then open http://localhost:8765 in your browser.
"""
import asyncio
import json
import logging
import sys
import tempfile
from http import HTTPStatus
from pathlib import Path
import httpx
import websockets
from bs4 import BeautifulSoup
from websockets.http11 import Request, Response
# Add core, tools, and hive root to path
_CORE_DIR = Path(__file__).resolve().parent.parent
_HIVE_DIR = _CORE_DIR.parent
sys.path.insert(0, str(_CORE_DIR)) # framework.*
sys.path.insert(0, str(_HIVE_DIR / "tools" / "src")) # aden_tools.*
sys.path.insert(0, str(_HIVE_DIR)) # core.framework.* (for aden_tools imports)
import os # noqa: E402
from aden_tools.credentials import CREDENTIAL_SPECS, CredentialStoreAdapter # noqa: E402
from core.framework.credentials import CredentialStore # noqa: E402
from framework.credentials.storage import ( # noqa: E402
CompositeStorage,
EncryptedFileStorage,
EnvVarStorage,
)
from framework.graph.event_loop_node import EventLoopNode, LoopConfig # noqa: E402
from framework.graph.node import NodeContext, NodeSpec, SharedMemory # noqa: E402
from framework.llm.litellm import LiteLLMProvider # noqa: E402
from framework.llm.provider import Tool # noqa: E402
from framework.runner.tool_registry import ToolRegistry # noqa: E402
from framework.runtime.core import Runtime # noqa: E402
from framework.runtime.event_bus import EventBus, EventType # noqa: E402
from framework.storage.conversation_store import FileConversationStore # noqa: E402
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
logger = logging.getLogger("demo")
# -------------------------------------------------------------------------
# Persistent state (shared across WebSocket connections)
# -------------------------------------------------------------------------
STORE_DIR = Path(tempfile.mkdtemp(prefix="hive_demo_"))
STORE = FileConversationStore(STORE_DIR / "conversation")
RUNTIME = Runtime(STORE_DIR / "runtime")
LLM = LiteLLMProvider(model="claude-sonnet-4-5-20250929")
# -------------------------------------------------------------------------
# Tool Registry — real tools via ToolRegistry (same pattern as GraphExecutor)
# -------------------------------------------------------------------------
TOOL_REGISTRY = ToolRegistry()
# Credential store: Aden sync (OAuth2 tokens) + encrypted files + env var fallback
_env_mapping = {name: spec.env_var for name, spec in CREDENTIAL_SPECS.items()}
_local_storage = CompositeStorage(
primary=EncryptedFileStorage(),
fallbacks=[EnvVarStorage(env_mapping=_env_mapping)],
)
if os.environ.get("ADEN_API_KEY"):
try:
from framework.credentials.aden import ( # noqa: E402
AdenCachedStorage,
AdenClientConfig,
AdenCredentialClient,
AdenSyncProvider,
)
_client = AdenCredentialClient(AdenClientConfig(base_url="https://api.adenhq.com"))
_provider = AdenSyncProvider(client=_client)
_storage = AdenCachedStorage(
local_storage=_local_storage,
aden_provider=_provider,
)
_cred_store = CredentialStore(storage=_storage, providers=[_provider], auto_refresh=True)
_synced = _provider.sync_all(_cred_store)
logger.info("Synced %d credentials from Aden", _synced)
except Exception as e:
logger.warning("Aden sync unavailable: %s", e)
_cred_store = CredentialStore(storage=_local_storage)
else:
logger.info("ADEN_API_KEY not set, using local credential storage")
_cred_store = CredentialStore(storage=_local_storage)
CREDENTIALS = CredentialStoreAdapter(_cred_store)
# Debug: log which credentials resolved
for _name in ["brave_search", "hubspot", "anthropic"]:
_val = CREDENTIALS.get(_name)
if _val:
logger.debug("credential %s: OK (len=%d)", _name, len(_val))
else:
logger.debug("credential %s: not found", _name)
# --- web_search (Brave Search API) ---
TOOL_REGISTRY.register(
name="web_search",
tool=Tool(
name="web_search",
description=(
"Search the web for current information. "
"Returns titles, URLs, and snippets from search results."
),
parameters={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query (1-500 characters)",
},
"num_results": {
"type": "integer",
"description": "Number of results to return (1-20, default 10)",
},
},
"required": ["query"],
},
),
executor=lambda inputs: _exec_web_search(inputs),
)
def _exec_web_search(inputs: dict) -> dict:
api_key = CREDENTIALS.get("brave_search")
if not api_key:
return {"error": "brave_search credential not configured"}
query = inputs.get("query", "")
num_results = min(inputs.get("num_results", 10), 20)
resp = httpx.get(
"https://api.search.brave.com/res/v1/web/search",
params={"q": query, "count": num_results},
headers={"X-Subscription-Token": api_key, "Accept": "application/json"},
timeout=30.0,
)
if resp.status_code != 200:
return {"error": f"Brave API HTTP {resp.status_code}"}
data = resp.json()
results = [
{
"title": item.get("title", ""),
"url": item.get("url", ""),
"snippet": item.get("description", ""),
}
for item in data.get("web", {}).get("results", [])[:num_results]
]
return {"query": query, "results": results, "total": len(results)}
# --- web_scrape (httpx + BeautifulSoup, no playwright for sync compat) ---
TOOL_REGISTRY.register(
name="web_scrape",
tool=Tool(
name="web_scrape",
description=(
"Scrape and extract text content from a webpage URL. "
"Returns the page title and main text content."
),
parameters={
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "URL of the webpage to scrape",
},
"max_length": {
"type": "integer",
"description": "Maximum text length (default 50000)",
},
},
"required": ["url"],
},
),
executor=lambda inputs: _exec_web_scrape(inputs),
)
_SCRAPE_HEADERS = {
"User-Agent": (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/131.0.0.0 Safari/537.36"
),
"Accept": "text/html,application/xhtml+xml",
}
def _exec_web_scrape(inputs: dict) -> dict:
url = inputs.get("url", "")
max_length = max(1000, min(inputs.get("max_length", 50000), 500000))
if not url.startswith(("http://", "https://")):
url = "https://" + url
try:
resp = httpx.get(url, timeout=30.0, follow_redirects=True, headers=_SCRAPE_HEADERS)
if resp.status_code != 200:
return {"error": f"HTTP {resp.status_code}"}
soup = BeautifulSoup(resp.text, "html.parser")
for tag in soup(["script", "style", "nav", "footer", "header", "aside", "noscript"]):
tag.decompose()
title = soup.title.get_text(strip=True) if soup.title else ""
main = (
soup.find("article")
or soup.find("main")
or soup.find(attrs={"role": "main"})
or soup.find("body")
)
text = main.get_text(separator=" ", strip=True) if main else ""
text = " ".join(text.split())
if len(text) > max_length:
text = text[:max_length] + "..."
return {"url": url, "title": title, "content": text, "length": len(text)}
except httpx.TimeoutException:
return {"error": "Request timed out"}
except Exception as e:
return {"error": f"Scrape failed: {e}"}
# --- HubSpot CRM tools (optional, requires HUBSPOT_ACCESS_TOKEN) ---
_HUBSPOT_API = "https://api.hubapi.com"
def _hubspot_headers() -> dict | None:
token = CREDENTIALS.get("hubspot")
if token:
logger.debug("HubSpot token: %s...%s (len=%d)", token[:8], token[-4:], len(token))
else:
logger.debug("HubSpot token: not found")
if not token:
return None
return {
"Authorization": f"Bearer {token}",
"Content-Type": "application/json",
"Accept": "application/json",
}
def _exec_hubspot_search(inputs: dict) -> dict:
headers = _hubspot_headers()
if not headers:
return {"error": "HUBSPOT_ACCESS_TOKEN not set"}
object_type = inputs.get("object_type", "contacts")
query = inputs.get("query", "")
limit = min(inputs.get("limit", 10), 100)
body: dict = {"limit": limit}
if query:
body["query"] = query
try:
resp = httpx.post(
f"{_HUBSPOT_API}/crm/v3/objects/{object_type}/search",
headers=headers,
json=body,
timeout=30.0,
)
if resp.status_code != 200:
return {"error": f"HubSpot API HTTP {resp.status_code}: {resp.text[:200]}"}
return resp.json()
except httpx.TimeoutException:
return {"error": "Request timed out"}
except Exception as e:
return {"error": f"HubSpot error: {e}"}
TOOL_REGISTRY.register(
name="hubspot_search",
tool=Tool(
name="hubspot_search",
description=(
"Search HubSpot CRM objects (contacts, companies, or deals). "
"Returns matching records with their properties."
),
parameters={
"type": "object",
"properties": {
"object_type": {
"type": "string",
"description": "CRM object type: 'contacts', 'companies', or 'deals'",
},
"query": {
"type": "string",
"description": "Search query (name, email, domain, etc.)",
},
"limit": {
"type": "integer",
"description": "Max results (1-100, default 10)",
},
},
"required": ["object_type"],
},
),
executor=lambda inputs: _exec_hubspot_search(inputs),
)
logger.info(
"ToolRegistry loaded: %s",
", ".join(TOOL_REGISTRY.get_registered_names()),
)
# -------------------------------------------------------------------------
# HTML page (embedded)
# -------------------------------------------------------------------------
HTML_PAGE = ( # noqa: E501
"""<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>EventLoopNode Live Demo</title>
<style>
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
font-family: 'SF Mono', 'Fira Code', monospace;
background: #0d1117; color: #c9d1d9;
height: 100vh; display: flex; flex-direction: column;
}
header {
background: #161b22; padding: 12px 20px;
border-bottom: 1px solid #30363d;
display: flex; align-items: center; gap: 16px;
}
header h1 { font-size: 16px; color: #58a6ff; font-weight: 600; }
.status {
font-size: 12px; padding: 3px 10px; border-radius: 12px;
background: #21262d; color: #8b949e;
}
.status.running { background: #1a4b2e; color: #3fb950; }
.status.done { background: #1a3a5c; color: #58a6ff; }
.status.error { background: #4b1a1a; color: #f85149; }
.chat { flex: 1; overflow-y: auto; padding: 16px; }
.msg {
margin: 8px 0; padding: 10px 14px; border-radius: 8px;
line-height: 1.6; white-space: pre-wrap; word-wrap: break-word;
}
.msg.user { background: #1a3a5c; color: #58a6ff; }
.msg.assistant { background: #161b22; color: #c9d1d9; }
.msg.event {
background: transparent; color: #8b949e; font-size: 11px;
padding: 4px 14px; border-left: 3px solid #30363d;
}
.msg.event.loop { border-left-color: #58a6ff; }
.msg.event.tool { border-left-color: #d29922; }
.msg.event.stall { border-left-color: #f85149; }
.input-bar {
padding: 12px 16px; background: #161b22;
border-top: 1px solid #30363d; display: flex; gap: 8px;
}
.input-bar input {
flex: 1; background: #0d1117; border: 1px solid #30363d;
color: #c9d1d9; padding: 8px 12px; border-radius: 6px;
font-family: inherit; font-size: 14px; outline: none;
}
.input-bar input:focus { border-color: #58a6ff; }
.input-bar button {
background: #238636; color: #fff; border: none;
padding: 8px 20px; border-radius: 6px; cursor: pointer;
font-family: inherit; font-weight: 600;
}
.input-bar button:hover { background: #2ea043; }
.input-bar button:disabled {
background: #21262d; color: #484f58; cursor: not-allowed;
}
.input-bar button.clear { background: #da3633; }
.input-bar button.clear:hover { background: #f85149; }
</style>
</head>
<body>
<header>
<h1>EventLoopNode Live</h1>
<span id="status" class="status">Idle</span>
<span id="iter" class="status" style="display:none">Step 0</span>
</header>
<div id="chat" class="chat"></div>
<div class="input-bar">
<input id="input" type="text"
placeholder="Ask anything..." autofocus />
<button id="go" onclick="run()">Send</button>
<button class="clear"
onclick="clearConversation()">Clear</button>
</div>
<script>
let ws = null;
let currentAssistantEl = null;
let iterCount = 0;
const chat = document.getElementById('chat');
const status = document.getElementById('status');
const iterEl = document.getElementById('iter');
const goBtn = document.getElementById('go');
const inputEl = document.getElementById('input');
inputEl.addEventListener('keydown', e => {
if (e.key === 'Enter') run();
});
function setStatus(text, cls) {
status.textContent = text;
status.className = 'status ' + cls;
}
function addMsg(text, cls) {
const el = document.createElement('div');
el.className = 'msg ' + cls;
el.textContent = text;
chat.appendChild(el);
chat.scrollTop = chat.scrollHeight;
return el;
}
function connect() {
ws = new WebSocket('ws://' + location.host + '/ws');
ws.onopen = () => {
setStatus('Ready', 'done');
goBtn.disabled = false;
};
ws.onmessage = handleEvent;
ws.onerror = () => { setStatus('Error', 'error'); };
ws.onclose = () => {
setStatus('Reconnecting...', '');
goBtn.disabled = true;
setTimeout(connect, 2000);
};
}
function handleEvent(msg) {
const evt = JSON.parse(msg.data);
if (evt.type === 'llm_text_delta') {
if (currentAssistantEl) {
currentAssistantEl.textContent += evt.content;
chat.scrollTop = chat.scrollHeight;
}
}
else if (evt.type === 'ready') {
setStatus('Ready', 'done');
if (currentAssistantEl && !currentAssistantEl.textContent)
currentAssistantEl.remove();
goBtn.disabled = false;
}
else if (evt.type === 'node_loop_iteration') {
iterCount = evt.iteration || (iterCount + 1);
iterEl.textContent = 'Step ' + iterCount;
iterEl.style.display = '';
}
else if (evt.type === 'tool_call_started') {
var info = evt.tool_name + '('
+ JSON.stringify(evt.tool_input).slice(0, 120) + ')';
addMsg('TOOL ' + info, 'event tool');
}
else if (evt.type === 'tool_call_completed') {
var preview = (evt.result || '').slice(0, 200);
var cls = evt.is_error ? 'stall' : 'tool';
addMsg('RESULT ' + evt.tool_name + ': ' + preview,
'event ' + cls);
currentAssistantEl = addMsg('', 'assistant');
}
else if (evt.type === 'result') {
setStatus('Session ended', evt.success ? 'done' : 'error');
if (evt.error) addMsg('ERROR ' + evt.error, 'event stall');
if (currentAssistantEl && !currentAssistantEl.textContent)
currentAssistantEl.remove();
goBtn.disabled = false;
}
else if (evt.type === 'node_stalled') {
addMsg('STALLED ' + evt.reason, 'event stall');
}
else if (evt.type === 'cleared') {
chat.innerHTML = '';
iterCount = 0;
iterEl.textContent = 'Step 0';
iterEl.style.display = 'none';
setStatus('Ready', 'done');
goBtn.disabled = false;
}
}
function run() {
const text = inputEl.value.trim();
if (!text || !ws || ws.readyState !== 1) return;
addMsg(text, 'user');
currentAssistantEl = addMsg('', 'assistant');
inputEl.value = '';
setStatus('Running', 'running');
goBtn.disabled = true;
ws.send(JSON.stringify({ topic: text }));
}
function clearConversation() {
if (ws && ws.readyState === 1) {
ws.send(JSON.stringify({ command: 'clear' }));
}
}
connect();
</script>
</body>
</html>"""
)
# -------------------------------------------------------------------------
# WebSocket handler
# -------------------------------------------------------------------------
async def handle_ws(websocket):
"""Persistent WebSocket: long-lived EventLoopNode with client_facing blocking."""
global STORE
# -- Event forwarding (WebSocket ← EventBus) ----------------------------
bus = EventBus()
async def forward_event(event):
try:
payload = {"type": event.type.value, **event.data}
if event.node_id:
payload["node_id"] = event.node_id
await websocket.send(json.dumps(payload))
except Exception:
pass
bus.subscribe(
event_types=[
EventType.NODE_LOOP_STARTED,
EventType.NODE_LOOP_ITERATION,
EventType.NODE_LOOP_COMPLETED,
EventType.LLM_TEXT_DELTA,
EventType.TOOL_CALL_STARTED,
EventType.TOOL_CALL_COMPLETED,
EventType.NODE_STALLED,
],
handler=forward_event,
)
# -- Per-connection state -----------------------------------------------
node = None
loop_task = None
tools = list(TOOL_REGISTRY.get_tools().values())
tool_executor = TOOL_REGISTRY.get_executor()
node_spec = NodeSpec(
id="assistant",
name="Chat Assistant",
description="A conversational assistant that remembers context across messages",
node_type="event_loop",
client_facing=True,
system_prompt=(
"You are a helpful assistant with access to tools. "
"You can search the web, scrape webpages, and query HubSpot CRM. "
"Use tools when the user asks for current information or external data. "
"You have full conversation history, so you can reference previous messages."
),
)
# -- Ready callback: subscribe to CLIENT_INPUT_REQUESTED on the bus ---
async def on_input_requested(event):
try:
await websocket.send(json.dumps({"type": "ready"}))
except Exception:
pass
bus.subscribe(
event_types=[EventType.CLIENT_INPUT_REQUESTED],
handler=on_input_requested,
)
async def start_loop(first_message: str):
"""Create an EventLoopNode and run it as a background task."""
nonlocal node, loop_task
memory = SharedMemory()
ctx = NodeContext(
runtime=RUNTIME,
node_id="assistant",
node_spec=node_spec,
memory=memory,
input_data={},
llm=LLM,
available_tools=tools,
)
node = EventLoopNode(
event_bus=bus,
config=LoopConfig(max_iterations=10_000, max_history_tokens=32_000),
conversation_store=STORE,
tool_executor=tool_executor,
)
await node.inject_event(first_message)
async def _run():
try:
result = await node.execute(ctx)
try:
await websocket.send(
json.dumps(
{
"type": "result",
"success": result.success,
"output": result.output,
"error": result.error,
"tokens": result.tokens_used,
}
)
)
except Exception:
pass
logger.info(f"Loop ended: success={result.success}, tokens={result.tokens_used}")
except websockets.exceptions.ConnectionClosed:
logger.info("Loop stopped: WebSocket closed")
except Exception as e:
logger.exception("Loop error")
try:
await websocket.send(
json.dumps(
{
"type": "result",
"success": False,
"error": str(e),
"output": {},
}
)
)
except Exception:
pass
loop_task = asyncio.create_task(_run())
async def stop_loop():
"""Signal the node and wait for the loop task to finish."""
nonlocal node, loop_task
if loop_task and not loop_task.done():
if node:
node.signal_shutdown()
try:
await asyncio.wait_for(loop_task, timeout=5.0)
except (TimeoutError, asyncio.CancelledError):
loop_task.cancel()
node = None
loop_task = None
# -- Message loop (runs for the lifetime of this WebSocket) -------------
try:
async for raw in websocket:
try:
msg = json.loads(raw)
except Exception:
continue
# Clear command
if msg.get("command") == "clear":
import shutil
await stop_loop()
await STORE.close()
conv_dir = STORE_DIR / "conversation"
if conv_dir.exists():
shutil.rmtree(conv_dir)
STORE = FileConversationStore(conv_dir)
await websocket.send(json.dumps({"type": "cleared"}))
logger.info("Conversation cleared")
continue
topic = msg.get("topic", "")
if not topic:
continue
if node is None:
# First message — spin up the loop
logger.info(f"Starting persistent loop: {topic}")
await start_loop(topic)
else:
# Subsequent message — inject into the running loop
logger.info(f"Injecting message: {topic}")
await node.inject_event(topic)
except websockets.exceptions.ConnectionClosed:
pass
finally:
await stop_loop()
logger.info("WebSocket closed, loop stopped")
# -------------------------------------------------------------------------
# HTTP handler for serving the HTML page
# -------------------------------------------------------------------------
async def process_request(connection, request: Request):
"""Serve HTML on GET /, upgrade to WebSocket on /ws."""
if request.path == "/ws":
return None # let websockets handle the upgrade
# Serve the HTML page for any other path
return Response(
HTTPStatus.OK,
"OK",
websockets.Headers({"Content-Type": "text/html; charset=utf-8"}),
HTML_PAGE.encode(),
)
# -------------------------------------------------------------------------
# Main
# -------------------------------------------------------------------------
async def main():
port = 8765
async with websockets.serve(
handle_ws,
"0.0.0.0",
port,
process_request=process_request,
):
logger.info(f"Demo running at http://localhost:{port}")
logger.info("Open in your browser and enter a topic to research.")
await asyncio.Future() # run forever
if __name__ == "__main__":
asyncio.run(main())
File diff suppressed because it is too large.
-930
View File
@@ -1,930 +0,0 @@
#!/usr/bin/env python3
"""
Two-Node ContextHandoff Demo
Demonstrates ContextHandoff between two EventLoopNode instances:
Node A (Researcher) → ContextHandoff → Node B (Analyst)
Real LLM, real FileConversationStore, real EventBus.
Streams both nodes to a browser via WebSocket.
Usage:
cd /home/timothy/oss/hive/core
python demos/handoff_demo.py
Then open http://localhost:8766 in your browser.
"""
import asyncio
import json
import logging
import sys
import tempfile
from http import HTTPStatus
from pathlib import Path
import httpx
import websockets
from bs4 import BeautifulSoup
from websockets.http11 import Request, Response
# Add core, tools, and hive root to path
_CORE_DIR = Path(__file__).resolve().parent.parent
_HIVE_DIR = _CORE_DIR.parent
sys.path.insert(0, str(_CORE_DIR)) # framework.*
sys.path.insert(0, str(_HIVE_DIR / "tools" / "src")) # aden_tools.*
sys.path.insert(0, str(_HIVE_DIR)) # core.framework.* (for aden_tools imports)
from aden_tools.credentials import CREDENTIAL_SPECS, CredentialStoreAdapter # noqa: E402
from core.framework.credentials import CredentialStore # noqa: E402
from framework.credentials.storage import ( # noqa: E402
CompositeStorage,
EncryptedFileStorage,
EnvVarStorage,
)
from framework.graph.context_handoff import ContextHandoff # noqa: E402
from framework.graph.conversation import NodeConversation # noqa: E402
from framework.graph.event_loop_node import EventLoopNode, LoopConfig # noqa: E402
from framework.graph.node import NodeContext, NodeSpec, SharedMemory # noqa: E402
from framework.llm.litellm import LiteLLMProvider # noqa: E402
from framework.llm.provider import Tool # noqa: E402
from framework.runner.tool_registry import ToolRegistry # noqa: E402
from framework.runtime.core import Runtime # noqa: E402
from framework.runtime.event_bus import EventBus, EventType # noqa: E402
from framework.storage.conversation_store import FileConversationStore # noqa: E402
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
logger = logging.getLogger("handoff_demo")
# -------------------------------------------------------------------------
# Persistent state
# -------------------------------------------------------------------------
STORE_DIR = Path(tempfile.mkdtemp(prefix="hive_handoff_"))
RUNTIME = Runtime(STORE_DIR / "runtime")
LLM = LiteLLMProvider(model="claude-sonnet-4-5-20250929")
# -------------------------------------------------------------------------
# Credentials
# -------------------------------------------------------------------------
# Composite credential store: encrypted files (primary) + env vars (fallback)
_env_mapping = {name: spec.env_var for name, spec in CREDENTIAL_SPECS.items()}
_composite = CompositeStorage(
primary=EncryptedFileStorage(),
fallbacks=[EnvVarStorage(env_mapping=_env_mapping)],
)
CREDENTIALS = CredentialStoreAdapter(CredentialStore(storage=_composite))
for _name in ["brave_search", "hubspot"]:
_val = CREDENTIALS.get(_name)
if _val:
logger.debug("credential %s: OK (len=%d)", _name, len(_val))
else:
logger.debug("credential %s: not found", _name)
# -------------------------------------------------------------------------
# Tool Registry — web_search + web_scrape for Node A (Researcher)
# -------------------------------------------------------------------------
TOOL_REGISTRY = ToolRegistry()
def _exec_web_search(inputs: dict) -> dict:
api_key = CREDENTIALS.get("brave_search")
if not api_key:
return {"error": "brave_search credential not configured"}
query = inputs.get("query", "")
num_results = min(inputs.get("num_results", 10), 20)
resp = httpx.get(
"https://api.search.brave.com/res/v1/web/search",
params={"q": query, "count": num_results},
headers={
"X-Subscription-Token": api_key,
"Accept": "application/json",
},
timeout=30.0,
)
if resp.status_code != 200:
return {"error": f"Brave API HTTP {resp.status_code}"}
data = resp.json()
results = [
{
"title": item.get("title", ""),
"url": item.get("url", ""),
"snippet": item.get("description", ""),
}
for item in data.get("web", {}).get("results", [])[:num_results]
]
return {"query": query, "results": results, "total": len(results)}
TOOL_REGISTRY.register(
name="web_search",
tool=Tool(
name="web_search",
description=(
"Search the web for current information. "
"Returns titles, URLs, and snippets from search results."
),
parameters={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query (1-500 characters)",
},
"num_results": {
"type": "integer",
"description": "Number of results (1-20, default 10)",
},
},
"required": ["query"],
},
),
executor=lambda inputs: _exec_web_search(inputs),
)
_SCRAPE_HEADERS = {
"User-Agent": (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/131.0.0.0 Safari/537.36"
),
"Accept": "text/html,application/xhtml+xml",
}
def _exec_web_scrape(inputs: dict) -> dict:
url = inputs.get("url", "")
max_length = max(1000, min(inputs.get("max_length", 50000), 500000))
if not url.startswith(("http://", "https://")):
url = "https://" + url
try:
resp = httpx.get(
url,
timeout=30.0,
follow_redirects=True,
headers=_SCRAPE_HEADERS,
)
if resp.status_code != 200:
return {"error": f"HTTP {resp.status_code}"}
soup = BeautifulSoup(resp.text, "html.parser")
for tag in soup(["script", "style", "nav", "footer", "header", "aside", "noscript"]):
tag.decompose()
title = soup.title.get_text(strip=True) if soup.title else ""
main = (
soup.find("article")
or soup.find("main")
or soup.find(attrs={"role": "main"})
or soup.find("body")
)
text = main.get_text(separator=" ", strip=True) if main else ""
text = " ".join(text.split())
if len(text) > max_length:
text = text[:max_length] + "..."
return {
"url": url,
"title": title,
"content": text,
"length": len(text),
}
except httpx.TimeoutException:
return {"error": "Request timed out"}
except Exception as e:
return {"error": f"Scrape failed: {e}"}
TOOL_REGISTRY.register(
name="web_scrape",
tool=Tool(
name="web_scrape",
description=(
"Scrape and extract text content from a webpage URL. "
"Returns the page title and main text content."
),
parameters={
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "URL of the webpage to scrape",
},
"max_length": {
"type": "integer",
"description": "Maximum text length (default 50000)",
},
},
"required": ["url"],
},
),
executor=lambda inputs: _exec_web_scrape(inputs),
)
logger.info(
"ToolRegistry loaded: %s",
", ".join(TOOL_REGISTRY.get_registered_names()),
)
# -------------------------------------------------------------------------
# Node Specs
# -------------------------------------------------------------------------
RESEARCHER_SPEC = NodeSpec(
id="researcher",
name="Researcher",
description="Researches a topic using web search and scraping tools",
node_type="event_loop",
input_keys=["topic"],
output_keys=["research_summary"],
system_prompt=(
"You are a thorough research assistant. Your job is to research "
"the given topic using the web_search and web_scrape tools.\n\n"
"1. Search for relevant information on the topic\n"
"2. Scrape 1-2 of the most promising URLs for details\n"
"3. Synthesize your findings into a comprehensive summary\n"
"4. Use set_output with key='research_summary' to save your "
"findings\n\n"
"Be thorough but efficient. Aim for 2-4 search/scrape calls, "
"then summarize and set_output."
),
)
ANALYST_SPEC = NodeSpec(
id="analyst",
name="Analyst",
description="Analyzes research findings and provides insights",
node_type="event_loop",
input_keys=["context"],
output_keys=["analysis"],
system_prompt=(
"You are a strategic analyst. You receive research findings from "
"a previous researcher and must:\n\n"
"1. Identify key themes and patterns\n"
"2. Assess the reliability and significance of the findings\n"
"3. Provide actionable insights and recommendations\n"
"4. Use set_output with key='analysis' to save your analysis\n\n"
"Be concise but insightful. Focus on what matters most."
),
)
# -------------------------------------------------------------------------
# HTML page
# -------------------------------------------------------------------------
HTML_PAGE = ( # noqa: E501
"""<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>ContextHandoff Demo</title>
<style>
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font-family: 'SF Mono', 'Fira Code', monospace;
background: #0d1117;
color: #c9d1d9;
height: 100vh;
display: flex;
flex-direction: column;
}
header {
background: #161b22;
padding: 12px 20px;
border-bottom: 1px solid #30363d;
display: flex;
align-items: center;
gap: 16px;
}
header h1 {
font-size: 16px;
color: #58a6ff;
font-weight: 600;
}
.badge {
font-size: 12px;
padding: 3px 10px;
border-radius: 12px;
background: #21262d;
color: #8b949e;
}
.badge.researcher {
background: #1a3a5c;
color: #58a6ff;
}
.badge.analyst {
background: #1a4b2e;
color: #3fb950;
}
.badge.handoff {
background: #3d1f00;
color: #d29922;
}
.badge.done {
background: #21262d;
color: #8b949e;
}
.badge.error {
background: #4b1a1a;
color: #f85149;
}
.chat {
flex: 1;
overflow-y: auto;
padding: 16px;
}
.msg {
margin: 8px 0;
padding: 10px 14px;
border-radius: 8px;
line-height: 1.6;
white-space: pre-wrap;
word-wrap: break-word;
}
.msg.user {
background: #1a3a5c;
color: #58a6ff;
}
.msg.assistant {
background: #161b22;
color: #c9d1d9;
}
.msg.assistant.analyst-msg {
border-left: 3px solid #3fb950;
}
.msg.event {
background: transparent;
color: #8b949e;
font-size: 11px;
padding: 4px 14px;
border-left: 3px solid #30363d;
}
.msg.event.loop {
border-left-color: #58a6ff;
}
.msg.event.tool {
border-left-color: #d29922;
}
.msg.event.stall {
border-left-color: #f85149;
}
.handoff-banner {
margin: 16px 0;
padding: 16px;
background: #1c1200;
border: 1px solid #d29922;
border-radius: 8px;
text-align: center;
}
.handoff-banner h3 {
color: #d29922;
font-size: 14px;
margin-bottom: 8px;
}
.handoff-banner p, .result-banner p {
color: #8b949e;
font-size: 12px;
line-height: 1.5;
max-height: 200px;
overflow-y: auto;
white-space: pre-wrap;
text-align: left;
}
.result-banner {
margin: 16px 0;
padding: 16px;
background: #0a2614;
border: 1px solid #3fb950;
border-radius: 8px;
}
.result-banner h3 {
color: #3fb950;
font-size: 14px;
margin-bottom: 8px;
text-align: center;
}
.result-banner .label {
color: #58a6ff;
font-size: 11px;
font-weight: 600;
margin-top: 10px;
margin-bottom: 2px;
}
.result-banner .tokens {
color: #484f58;
font-size: 11px;
text-align: center;
margin-top: 10px;
}
.input-bar {
padding: 12px 16px;
background: #161b22;
border-top: 1px solid #30363d;
display: flex;
gap: 8px;
}
.input-bar input {
flex: 1;
background: #0d1117;
border: 1px solid #30363d;
color: #c9d1d9;
padding: 8px 12px;
border-radius: 6px;
font-family: inherit;
font-size: 14px;
outline: none;
}
.input-bar input:focus {
border-color: #58a6ff;
}
.input-bar button {
background: #238636;
color: #fff;
border: none;
padding: 8px 20px;
border-radius: 6px;
cursor: pointer;
font-family: inherit;
font-weight: 600;
}
.input-bar button:hover {
background: #2ea043;
}
.input-bar button:disabled {
background: #21262d;
color: #484f58;
cursor: not-allowed;
}
</style>
</head>
<body>
<header>
<h1>ContextHandoff Demo</h1>
<span id="phase" class="badge">Idle</span>
<span id="iter" class="badge" style="display:none">Step 0</span>
</header>
<div id="chat" class="chat"></div>
<div class="input-bar">
<input id="input" type="text"
placeholder="Enter a research topic..." autofocus />
<button id="go" onclick="run()">Research</button>
</div>
<script>
let ws = null;
let currentAssistantEl = null;
let iterCount = 0;
let currentPhase = 'idle';
const chat = document.getElementById('chat');
const phase = document.getElementById('phase');
const iterEl = document.getElementById('iter');
const goBtn = document.getElementById('go');
const inputEl = document.getElementById('input');
inputEl.addEventListener('keydown', e => {
if (e.key === 'Enter') run();
});
function setPhase(text, cls) {
phase.textContent = text;
phase.className = 'badge ' + cls;
currentPhase = cls;
}
function addMsg(text, cls) {
const el = document.createElement('div');
el.className = 'msg ' + cls;
el.textContent = text;
chat.appendChild(el);
chat.scrollTop = chat.scrollHeight;
return el;
}
function addHandoffBanner(summary) {
const banner = document.createElement('div');
banner.className = 'handoff-banner';
const h3 = document.createElement('h3');
h3.textContent = 'Context Handoff: Researcher -> Analyst';
const p = document.createElement('p');
p.textContent = summary || 'Passing research context...';
banner.appendChild(h3);
banner.appendChild(p);
chat.appendChild(banner);
chat.scrollTop = chat.scrollHeight;
}
function addResultBanner(researcher, analyst, tokens) {
const banner = document.createElement('div');
banner.className = 'result-banner';
const h3 = document.createElement('h3');
h3.textContent = 'Pipeline Complete';
banner.appendChild(h3);
if (researcher && researcher.research_summary) {
const lbl = document.createElement('div');
lbl.className = 'label';
lbl.textContent = 'RESEARCH SUMMARY';
banner.appendChild(lbl);
const p = document.createElement('p');
p.textContent = researcher.research_summary;
banner.appendChild(p);
}
if (analyst && analyst.analysis) {
const lbl = document.createElement('div');
lbl.className = 'label';
lbl.textContent = 'ANALYSIS';
lbl.style.color = '#3fb950';
banner.appendChild(lbl);
const p = document.createElement('p');
p.textContent = analyst.analysis;
banner.appendChild(p);
}
if (tokens) {
const t = document.createElement('div');
t.className = 'tokens';
t.textContent = 'Total tokens: ' + tokens.toLocaleString();
banner.appendChild(t);
}
chat.appendChild(banner);
chat.scrollTop = chat.scrollHeight;
}
function connect() {
ws = new WebSocket('ws://' + location.host + '/ws');
ws.onopen = () => {
setPhase('Ready', 'done');
goBtn.disabled = false;
};
ws.onmessage = handleEvent;
ws.onerror = () => { setPhase('Error', 'error'); };
ws.onclose = () => {
setPhase('Reconnecting...', '');
goBtn.disabled = true;
setTimeout(connect, 2000);
};
}
function handleEvent(msg) {
const evt = JSON.parse(msg.data);
if (evt.type === 'phase') {
if (evt.phase === 'researcher') {
setPhase('Researcher', 'researcher');
} else if (evt.phase === 'handoff') {
setPhase('Handoff', 'handoff');
} else if (evt.phase === 'analyst') {
setPhase('Analyst', 'analyst');
}
iterCount = 0;
iterEl.style.display = 'none';
}
else if (evt.type === 'llm_text_delta') {
if (currentAssistantEl) {
currentAssistantEl.textContent += evt.content;
chat.scrollTop = chat.scrollHeight;
}
}
else if (evt.type === 'node_loop_iteration') {
iterCount = evt.iteration || (iterCount + 1);
iterEl.textContent = 'Step ' + iterCount;
iterEl.style.display = '';
}
else if (evt.type === 'tool_call_started') {
var info = evt.tool_name + '('
+ JSON.stringify(evt.tool_input).slice(0, 120) + ')';
addMsg('TOOL ' + info, 'event tool');
}
else if (evt.type === 'tool_call_completed') {
var preview = (evt.result || '').slice(0, 200);
var cls = evt.is_error ? 'stall' : 'tool';
addMsg(
'RESULT ' + evt.tool_name + ': ' + preview,
'event ' + cls
);
var assistCls = currentPhase === 'analyst'
? 'assistant analyst-msg' : 'assistant';
currentAssistantEl = addMsg('', assistCls);
}
else if (evt.type === 'handoff_context') {
addHandoffBanner(evt.summary);
var assistCls = 'assistant analyst-msg';
currentAssistantEl = addMsg('', assistCls);
}
else if (evt.type === 'node_result') {
if (evt.node_id === 'researcher') {
if (currentAssistantEl
&& !currentAssistantEl.textContent) {
currentAssistantEl.remove();
}
}
}
else if (evt.type === 'done') {
setPhase('Done', 'done');
iterEl.style.display = 'none';
if (currentAssistantEl
&& !currentAssistantEl.textContent) {
currentAssistantEl.remove();
}
currentAssistantEl = null;
addResultBanner(
evt.researcher, evt.analyst, evt.total_tokens
);
goBtn.disabled = false;
inputEl.placeholder = 'Enter another topic...';
}
else if (evt.type === 'error') {
setPhase('Error', 'error');
addMsg('ERROR ' + evt.message, 'event stall');
goBtn.disabled = false;
}
else if (evt.type === 'node_stalled') {
addMsg('STALLED ' + evt.reason, 'event stall');
}
}
function run() {
const text = inputEl.value.trim();
if (!text || !ws || ws.readyState !== 1) return;
chat.innerHTML = '';
addMsg(text, 'user');
currentAssistantEl = addMsg('', 'assistant');
inputEl.value = '';
goBtn.disabled = true;
ws.send(JSON.stringify({ topic: text }));
}
connect();
</script>
</body>
</html>"""
)
# -------------------------------------------------------------------------
# WebSocket handler — sequential Node A → Handoff → Node B
# -------------------------------------------------------------------------
async def handle_ws(websocket):
"""Run the two-node handoff pipeline per user message."""
try:
async for raw in websocket:
try:
msg = json.loads(raw)
except Exception:
continue
topic = msg.get("topic", "")
if not topic:
continue
logger.info(f"Starting handoff pipeline for: {topic}")
try:
await _run_pipeline(websocket, topic)
except websockets.exceptions.ConnectionClosed:
logger.info("WebSocket closed during pipeline")
return
except Exception as e:
logger.exception("Pipeline error")
try:
await websocket.send(json.dumps({"type": "error", "message": str(e)}))
except Exception:
pass
except websockets.exceptions.ConnectionClosed:
pass
async def _run_pipeline(websocket, topic: str):
"""Execute: Node A (research) → ContextHandoff → Node B (analysis)."""
import shutil
# Fresh stores for each run
run_dir = Path(tempfile.mkdtemp(prefix="hive_run_", dir=STORE_DIR))
store_a = FileConversationStore(run_dir / "node_a")
store_b = FileConversationStore(run_dir / "node_b")
# Shared event bus
bus = EventBus()
async def forward_event(event):
try:
payload = {"type": event.type.value, **event.data}
if event.node_id:
payload["node_id"] = event.node_id
await websocket.send(json.dumps(payload))
except Exception:
pass
bus.subscribe(
event_types=[
EventType.NODE_LOOP_STARTED,
EventType.NODE_LOOP_ITERATION,
EventType.NODE_LOOP_COMPLETED,
EventType.LLM_TEXT_DELTA,
EventType.TOOL_CALL_STARTED,
EventType.TOOL_CALL_COMPLETED,
EventType.NODE_STALLED,
],
handler=forward_event,
)
tools = list(TOOL_REGISTRY.get_tools().values())
tool_executor = TOOL_REGISTRY.get_executor()
# ---- Phase 1: Researcher ------------------------------------------------
await websocket.send(json.dumps({"type": "phase", "phase": "researcher"}))
node_a = EventLoopNode(
event_bus=bus,
judge=None, # implicit judge: accept when output_keys filled
config=LoopConfig(
max_iterations=20,
max_tool_calls_per_turn=30,
max_history_tokens=32_000,
),
conversation_store=store_a,
tool_executor=tool_executor,
)
ctx_a = NodeContext(
runtime=RUNTIME,
node_id="researcher",
node_spec=RESEARCHER_SPEC,
memory=SharedMemory(),
input_data={"topic": topic},
llm=LLM,
available_tools=tools,
)
result_a = await node_a.execute(ctx_a)
logger.info(
"Researcher done: success=%s, tokens=%s",
result_a.success,
result_a.tokens_used,
)
await websocket.send(
json.dumps(
{
"type": "node_result",
"node_id": "researcher",
"success": result_a.success,
"output": result_a.output,
}
)
)
if not result_a.success:
await websocket.send(
json.dumps(
{
"type": "error",
"message": f"Researcher failed: {result_a.error}",
}
)
)
return
# ---- Phase 2: Context Handoff -------------------------------------------
await websocket.send(json.dumps({"type": "phase", "phase": "handoff"}))
# Restore the researcher's conversation from store
conversation_a = await NodeConversation.restore(store_a)
if conversation_a is None:
await websocket.send(
json.dumps(
{
"type": "error",
"message": "Failed to restore researcher conversation",
}
)
)
return
handoff_engine = ContextHandoff(llm=LLM)
handoff_context = handoff_engine.summarize_conversation(
conversation=conversation_a,
node_id="researcher",
output_keys=["research_summary"],
)
formatted_handoff = ContextHandoff.format_as_input(handoff_context)
logger.info(
"Handoff: %d turns, ~%d tokens, keys=%s",
handoff_context.turn_count,
handoff_context.total_tokens_used,
list(handoff_context.key_outputs.keys()),
)
# Send handoff context to browser
await websocket.send(
json.dumps(
{
"type": "handoff_context",
"summary": handoff_context.summary[:500],
"turn_count": handoff_context.turn_count,
"tokens": handoff_context.total_tokens_used,
"key_outputs": handoff_context.key_outputs,
}
)
)
# ---- Phase 3: Analyst ---------------------------------------------------
await websocket.send(json.dumps({"type": "phase", "phase": "analyst"}))
node_b = EventLoopNode(
event_bus=bus,
judge=None, # implicit judge
config=LoopConfig(
max_iterations=10,
max_tool_calls_per_turn=30,
max_history_tokens=32_000,
),
conversation_store=store_b,
)
ctx_b = NodeContext(
runtime=RUNTIME,
node_id="analyst",
node_spec=ANALYST_SPEC,
memory=SharedMemory(),
input_data={"context": formatted_handoff},
llm=LLM,
available_tools=[],
)
result_b = await node_b.execute(ctx_b)
logger.info(
"Analyst done: success=%s, tokens=%s",
result_b.success,
result_b.tokens_used,
)
# ---- Done ---------------------------------------------------------------
await websocket.send(
json.dumps(
{
"type": "done",
"researcher": result_a.output,
"analyst": result_b.output,
"total_tokens": ((result_a.tokens_used or 0) + (result_b.tokens_used or 0)),
}
)
)
# Clean up temp stores
try:
shutil.rmtree(run_dir)
except Exception:
pass
# -------------------------------------------------------------------------
# HTTP handler
# -------------------------------------------------------------------------
async def process_request(connection, request: Request):
"""Serve HTML on GET /, upgrade to WebSocket on /ws."""
if request.path == "/ws":
return None
return Response(
HTTPStatus.OK,
"OK",
websockets.Headers({"Content-Type": "text/html; charset=utf-8"}),
HTML_PAGE.encode(),
)
# -------------------------------------------------------------------------
# Main
# -------------------------------------------------------------------------
async def main():
port = 8766
async with websockets.serve(
handle_ws,
"0.0.0.0",
port,
process_request=process_request,
):
logger.info(f"Handoff demo at http://localhost:{port}")
logger.info("Enter a research topic to start the pipeline.")
await asyncio.Future()
if __name__ == "__main__":
asyncio.run(main())
File diff suppressed because it is too large.
@@ -1,8 +1,6 @@
"""CLI entry point for Credential Tester agent."""
import asyncio
import logging
import sys
import click
@@ -10,13 +8,14 @@ from .agent import CredentialTesterAgent
def setup_logging(verbose=False, debug=False):
from framework.observability import configure_logging
if debug:
level, fmt = logging.DEBUG, "%(asctime)s %(name)s: %(message)s"
configure_logging(level="DEBUG")
elif verbose:
level, fmt = logging.INFO, "%(message)s"
configure_logging(level="INFO")
else:
level, fmt = logging.WARNING, "%(levelname)s: %(message)s"
logging.basicConfig(level=level, format=fmt, stream=sys.stderr)
configure_logging(level="WARNING")
def pick_account(agent: CredentialTesterAgent) -> dict | None:
@@ -19,6 +19,7 @@ from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING
from framework.config import get_max_context_tokens
from framework.graph import Goal, NodeSpec, SuccessCriterion
from framework.graph.checkpoint_config import CheckpointConfig
from framework.graph.edge import GraphSpec
@@ -455,7 +456,6 @@ identity_prompt = (
loop_config = {
"max_iterations": 50,
"max_tool_calls_per_turn": 30,
"max_history_tokens": 32000,
}
# ---------------------------------------------------------------------------
@@ -541,7 +541,7 @@ class CredentialTesterAgent:
loop_config={
"max_iterations": 50,
"max_tool_calls_per_turn": 30,
"max_history_tokens": 32000,
"max_context_tokens": get_max_context_tokens(),
},
conversation_mode="continuous",
identity_prompt=(
+78 -20
View File
@@ -16,31 +16,63 @@ class AgentEntry:
description: str
category: str
session_count: int = 0
run_count: int = 0
node_count: int = 0
tool_count: int = 0
tags: list[str] = field(default_factory=list)
last_active: str | None = None
def _get_last_active(agent_name: str) -> str | None:
"""Return the most recent updated_at timestamp across all sessions."""
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if not sessions_dir.exists():
return None
def _get_last_active(agent_path: Path) -> str | None:
"""Return the most recent updated_at timestamp across all sessions.
Checks both worker sessions (``~/.hive/agents/{name}/sessions/``) and
queen sessions (``~/.hive/queen/session/``) whose ``meta.json`` references
the same *agent_path*.
"""
from datetime import datetime
agent_name = agent_path.name
latest: str | None = None
for session_dir in sessions_dir.iterdir():
if not session_dir.is_dir() or not session_dir.name.startswith("session_"):
continue
state_file = session_dir / "state.json"
if not state_file.exists():
continue
try:
data = json.loads(state_file.read_text(encoding="utf-8"))
ts = data.get("timestamps", {}).get("updated_at")
if ts and (latest is None or ts > latest):
latest = ts
except Exception:
continue
# 1. Worker sessions
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if sessions_dir.exists():
for session_dir in sessions_dir.iterdir():
if not session_dir.is_dir() or not session_dir.name.startswith("session_"):
continue
state_file = session_dir / "state.json"
if not state_file.exists():
continue
try:
data = json.loads(state_file.read_text(encoding="utf-8"))
ts = data.get("timestamps", {}).get("updated_at")
if ts and (latest is None or ts > latest):
latest = ts
except Exception:
continue
# 2. Queen sessions
queen_sessions_dir = Path.home() / ".hive" / "queen" / "session"
if queen_sessions_dir.exists():
resolved = agent_path.resolve()
for d in queen_sessions_dir.iterdir():
if not d.is_dir():
continue
meta_file = d / "meta.json"
if not meta_file.exists():
continue
try:
meta = json.loads(meta_file.read_text(encoding="utf-8"))
stored = meta.get("agent_path")
if not stored or Path(stored).resolve() != resolved:
continue
ts = datetime.fromtimestamp(d.stat().st_mtime).isoformat()
if latest is None or ts > latest:
latest = ts
except Exception:
continue
return latest
@@ -52,6 +84,31 @@ def _count_sessions(agent_name: str) -> int:
return sum(1 for d in sessions_dir.iterdir() if d.is_dir() and d.name.startswith("session_"))
def _count_runs(agent_name: str) -> int:
"""Count unique run_ids across all sessions for an agent."""
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if not sessions_dir.exists():
return 0
run_ids: set[str] = set()
for session_dir in sessions_dir.iterdir():
if not session_dir.is_dir() or not session_dir.name.startswith("session_"):
continue
# runs.jsonl lives inside workspace subdirectories
for runs_file in session_dir.rglob("runs.jsonl"):
try:
for line in runs_file.read_text(encoding="utf-8").splitlines():
line = line.strip()
if not line:
continue
record = json.loads(line)
rid = record.get("run_id")
if rid:
run_ids.add(rid)
except Exception:
continue
return len(run_ids)
def _extract_agent_stats(agent_path: Path) -> tuple[int, int, list[str]]:
"""Extract node count, tool count, and tags from an agent directory.
@@ -79,7 +136,7 @@ def _extract_agent_stats(agent_path: Path) -> tuple[int, int, list[str]]:
if agent_json.exists():
try:
data = json.loads(agent_json.read_text(encoding="utf-8"))
json_nodes = data.get("nodes", [])
json_nodes = data.get("graph", {}).get("nodes", []) or data.get("nodes", [])
if node_count == 0:
node_count = len(json_nodes)
tools: set[str] = set()
@@ -139,10 +196,11 @@ def discover_agents() -> dict[str, list[AgentEntry]]:
description=desc,
category=category,
session_count=_count_sessions(path.name),
run_count=_count_runs(path.name),
node_count=node_count,
tool_count=tool_count,
tags=tags,
last_active=_get_last_active(path.name),
last_active=_get_last_active(path),
)
)
if entries:
+1 -3
View File
@@ -14,8 +14,7 @@ queen_goal = Goal(
id="queen-manager",
name="Queen Manager",
description=(
"Manage the worker agent lifecycle and serve as the user's primary "
"interactive interface. Triage health escalations from the judge."
"Manage the worker agent lifecycle and serve as the user's primary interactive interface."
),
success_criteria=[],
constraints=[],
@@ -35,6 +34,5 @@ queen_graph = GraphSpec(
loop_config={
"max_iterations": 999_999,
"max_tool_calls_per_turn": 30,
"max_history_tokens": 32000,
},
)
+445 -202
View File
@@ -62,6 +62,12 @@ _SHARED_TOOLS = [
"get_agent_checkpoint",
]
# Episodic memory tools — available in every queen phase.
_QUEEN_MEMORY_TOOLS = [
"write_to_diary",
"recall_diary",
]
# Queen phase-specific tool sets.
# Planning phase: read-only exploration + design, no write tools.
@@ -77,18 +83,26 @@ _QUEEN_PLANNING_TOOLS = [
"list_agent_sessions",
"list_agent_checkpoints",
"get_agent_checkpoint",
# Draft graph (visual-only, no code) — new planning workflow
"save_agent_draft",
"confirm_and_build",
# Scaffold + transition to building (requires confirm_and_build first)
"initialize_and_build_agent",
# Load existing agent (after user confirms)
"load_built_agent",
]
] + _QUEEN_MEMORY_TOOLS
# Building phase: full coding + agent construction tools.
_QUEEN_BUILDING_TOOLS = _SHARED_TOOLS + [
"load_built_agent",
"list_credentials",
"replan_agent",
"write_to_diary", # Episodic memory — available in all phases
]
_QUEEN_BUILDING_TOOLS = (
_SHARED_TOOLS
+ [
"load_built_agent",
"list_credentials",
"replan_agent",
"save_agent_draft", # Re-draft during building → auto-dissolves + updates flowchart
]
+ _QUEEN_MEMORY_TOOLS
)
# Staging phase: agent loaded but not yet running — inspect, configure, launch.
_QUEEN_STAGING_TOOLS = [
@@ -105,7 +119,11 @@ _QUEEN_STAGING_TOOLS = [
"stop_worker_and_edit",
"stop_worker_and_plan",
"write_to_diary", # Episodic memory — available in all phases
]
# Trigger management
"set_trigger",
"remove_trigger",
"list_triggers",
] + _QUEEN_MEMORY_TOOLS
# Running phase: worker is executing — monitor and control.
_QUEEN_RUNNING_TOOLS = [
@@ -121,12 +139,16 @@ _QUEEN_RUNNING_TOOLS = [
"stop_worker_and_edit",
"stop_worker_and_plan",
"get_worker_status",
"run_agent_with_input",
"inject_worker_message",
# Monitoring
"get_worker_health_summary",
"notify_operator",
"set_trigger",
"remove_trigger",
"list_triggers",
"write_to_diary", # Episodic memory — available in all phases
]
] + _QUEEN_MEMORY_TOOLS
# ---------------------------------------------------------------------------
@@ -168,12 +190,8 @@ search_files, or list_directory — those are YOUR tools, not theirs.
)
_planning_knowledge = """\
**A responsible engineer doesn't jump into building. First, \
understand the problem and be transparent about what the framework can and cannot do.**
Use the user's selection (or their custom description if they chose "Other") \
as context when shaping the goal below. If the user already described \
what they want before this step, skip the question and proceed directly.
**Be responsible: understand the problem by asking practical qualifying questions \
and be transparent about what the framework can and cannot do.**
# Core Mandates (Planning)
- **DO NOT propose a complete goal on your own.** Instead, \
@@ -185,45 +203,33 @@ docs. Always run list_agent_tools() to see what actually exists.
# Tool Discovery (MANDATORY before designing)
Before designing any agent, run list_agent_tools() with NO arguments \
to see ALL available tools (names + descriptions, grouped by category). \
ONLY use tools from this list in your node definitions. \
Before designing any agent, discover tools progressively: start compact, then drill into \
what you need. ONLY use tools from this list in your node definitions. \
NEVER guess or fabricate tool names from memory.
list_agent_tools() # ALWAYS call this first (simple mode)
list_agent_tools(group="google", output_schema="full") # drill into a provider
list_agent_tools() # Step 1: provider summary
list_agent_tools(group="google", output_schema="summary") # Step 2: service breakdown
list_agent_tools(group="google", service="gmail") # Step 3: tool names
list_agent_tools( # Step 4: full detail
group="google", service="gmail", output_schema="full"
)
NEVER skip the first call. Always start with the full list \
so you know what providers and tools exist before drilling in. \
Simple mode truncates long descriptions; use group + "full" to \
get the complete description and input_schema for the tools you need.
Step 1 is MANDATORY. Returns provider names, tool counts, and credential availability; very compact. \
Step 2 breaks a provider into services (e.g. google → gmail/calendar/sheets/drive). Only do this \
for providers that are relevant to the task. \
Step 3 gets tool names for a specific service: no descriptions, minimal tokens. \
Step 4 is only for services you plan to actually use. \
Use credentials="available" at any step to filter to tools whose credentials are already configured.
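For illustration only, the four discovery calls could be written out as plain tool-call payloads, as in the sketch below. The argument names come from the examples above; the payload shape itself is an assumption, since the planner issues these as agent tools rather than Python functions.
```
# Illustrative sketch: the progressive discovery sequence as tool-call payloads.
# Argument names mirror the examples above; the payload shape is an assumption.
discovery_steps = [
    {"tool": "list_agent_tools", "args": {}},  # Step 1: provider summary (MANDATORY)
    {"tool": "list_agent_tools", "args": {"group": "google", "output_schema": "summary"}},  # Step 2: services
    {"tool": "list_agent_tools", "args": {"group": "google", "service": "gmail"}},  # Step 3: tool names
    {"tool": "list_agent_tools", "args": {"group": "google", "service": "gmail", "output_schema": "full"}},  # Step 4: full detail
    {"tool": "list_agent_tools", "args": {"credentials": "available"}},  # optional filter at any step
]
```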
# Discovery & Design Workflow
## 1: Fast Discovery (3-6 Turns)
## 1: Discovery (3-6 Turns)
**The core principle**: Discovery should feel like progress, not paperwork. \
The stakeholder should walk away feeling like you understood them faster \
than anyone else would have.
**Communication style**: Be concise. Say less. Mean more. Impatient stakeholders \
don't want a wall of text — they want to know you get it. Every sentence you say \
should either move the conversation forward or prove you understood something. \
If it does neither, cut it.
**Ask Question Rules: Respect Their Time.** Every question must earn its place by:
1. **Preventing a costly wrong turn**: you're about to build the wrong thing
2. **Unlocking a shortcut**: their answer lets you simplify the design
3. **Surfacing a dealbreaker**: there's a constraint that changes everything
4. **Providing options**: offer options with your questions where possible, \
but always allow the user to type something beyond the options.
If a question doesn't do one of these, don't ask it. Make an assumption, state it, and move on.
---
### 1.1: Let Them Talk, But Listen Like a Solution Architect
Ask questions that help the user bridge the goal and the solution. \
When the stakeholder describes what they want, mentally construct:
- **The pain**: What about today's situation is broken, slow, or missing?
@@ -234,57 +240,6 @@ When the stakeholder describes what they want, mentally construct:
---
### 1.2: Use Domain Knowledge to Fill In the Blanks
You have broad knowledge of how systems work. Use it aggressively.
If they say "I need a research agent," you already know it probably involves: \
search, summarization, source tracking, and iteration. Don't ask about each — \
use them as your starting mental model and let their specifics override your defaults.
If they say "I need to monitor files and alert me," you know this probably involves: \
watch patterns, triggers, notifications, and state tracking.
---
### 1.3: Play Back a Proposed Model (Not a List of Questions)
After listening, present a **concrete picture** of what you think they need. \
Make it specific enough that they can spot what's wrong. \
Use an ASCII diagram to show the user the picture where it helps.
**Pattern: "Here's what I heard — tell me where I'm off"**
> "OK here's how I'm picturing this: [User type] needs to [core action]. \
Right now they're [current painful workflow]. \
What you want is [proposed solution that replaces the pain].
> The way I'd structure this: [key entities] connected by [key relationships], \
with the main flow being [trigger → steps → outcome].
> For the MVP, I'd focus on [the one thing that delivers the most value] \
and hold off on [things that can wait].
> Before I start: [1-2 specific questions you genuinely can't infer]."
---
### 1.4: Ask Only What You Cannot Infer
Your questions should be **narrow, specific, and consequential**. \
Never ask what you could answer yourself.
**Good questions** (high-stakes, can't infer):
- "Who's the primary user — you or your end customers?"
- "Is this replacing a spreadsheet, or is there literally nothing today?"
- "Does this need to integrate with anything, or standalone?"
- "Is there existing data to migrate, or starting fresh?"
**Bad questions** (low-stakes, inferable):
- "What should happen if there's an error?" *(handle gracefully, obviously)*
- "Should it have search?" *(if there's a list, yes)*
- "How should we handle permissions?" *(follow standard patterns)*
- "What tools should I use?" *(your call, not theirs)*
---
## 2: Capability Assessment & Gap Analysis
**After the user responds, assess fit and gaps together.** Be honest and specific. \
@@ -299,70 +254,137 @@ Present a short **Framework Fit Assessment**:
- **Gaps/Deal-breakers**: Only list genuinely missing capabilities after checking \
both list_agent_tools() and built-in features like GCU
## 3: Design Graph and Propose
### Credential Check (MANDATORY)
Act like an experienced AI solution architect. Design the agent architecture:
- Goal: id, name, description, 3-5 success criteria, 2-4 constraints
- Nodes: **3-6 nodes** (HARD RULE: never fewer than 3, never more than 6). \
2 nodes is ALWAYS wrong: it means you under-decomposed the task. \
Use as many nodes as the use case requires, but don't create nodes without \
tools; merge them into nodes that do real work.
- Edges: on_success for linear, conditional for routing
- Lifecycle: ALWAYS have terminal_nodes (see the sketch after this list)
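A minimal sketch of that structure, assuming a hypothetical 3-node gather → work → review agent. The Goal and NodeSpec fields mirror the constructors visible elsewhere in this diff; edge wiring into a GraphSpec is omitted because its schema is not shown here.
```
# Minimal sketch only: a hypothetical 3-node design (gather → work → review).
# Goal/NodeSpec fields mirror constructors seen elsewhere in this diff;
# anything beyond those fields is an assumption.
from framework.graph import Goal, NodeSpec

goal = Goal(
    id="digest-agent",
    name="Digest Agent",
    description="Gather sources, draft a digest, and review it before delivery.",
    success_criteria=[],  # a real design lists 3-5 criteria
    constraints=[],       # and 2-4 constraints
)

nodes = [
    NodeSpec(id="gather", name="Gather", description="Collect source material",
             node_type="event_loop", input_keys=["topic"], output_keys=["sources"]),
    NodeSpec(id="work", name="Work", description="Draft the digest from the sources",
             node_type="event_loop", input_keys=["sources"], output_keys=["draft"]),
    NodeSpec(id="review", name="Review", description="Validate the draft and deliver it",
             node_type="event_loop", input_keys=["draft"], output_keys=["final_report"]),
]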
The summary from list_agent_tools() includes `credentials_required` and \
`credentials_available` per provider. **Before designing the graph**, check \
which providers the design will need and whether credentials are available.
**MERGE nodes when:**
- Node has NO tools (pure LLM reasoning) → merge into predecessor/successor
- Node sets only 1 trivial output → collapse into predecessor
For each provider whose tools you plan to use and where \
`credentials_available` is false:
- Tell the user which credential is missing and what it's needed for
- Ask if they have access to set it up (e.g., API key, OAuth, service account)
- If they don't have access, adjust the design to work without that provider \
or suggest alternatives
**SEPARATE nodes when:**
- Fundamentally different tool sets (e.g., search vs. write vs. validate)
- Fan-out parallelism (parallel branches MUST be separate)
- Different failure/retry semantics (e.g., gather can retry, transform cannot)
- Distinct phases of work (e.g., research, transform, validate, deliver)
- A node would need more than ~5 tools → split by responsibility
**Do NOT proceed to the design step with tools that require unavailable \
credentials without the user acknowledging it.** Finding out at runtime that \
credentials are missing wastes everyone's time. Surface this early.
**Typical patterns (queen manages all user interaction):**
- 3 nodes: `gather → work → review`
- 4 nodes: `gather → analyze → transform → review`
- 5 nodes: `gather → research → transform → validate → deliver`
- WRONG: 2 nodes where everything is crammed into one giant node
- WRONG: 7 nodes where half have no tools and just do LLM reasoning
Example:
> "The design needs Google Sheets tools, but the `google` credential isn't \
configured yet. Do you have a Google service account or OAuth credentials \
you can set up? If not, I can use CSV file output instead."
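A minimal sketch of that pre-check, assuming the step-1 summary exposes `credentials_required` / `credentials_available` flags per provider as described above (the exact summary shape is an assumption):
```
# Hedged sketch: find the providers a draft design needs whose credentials are
# not yet configured, so the gap can be surfaced before designing.
summary = [  # assumed shape of the step-1 list_agent_tools() output
    {"provider": "google", "credentials_required": True, "credentials_available": False},
    {"provider": "brave_search", "credentials_required": True, "credentials_available": True},
]
needed = {"google"}  # providers the draft design would rely on

missing = [
    p["provider"]
    for p in summary
    if p["provider"] in needed and p["credentials_required"] and not p["credentials_available"]
]
for provider in missing:
    print(f"The design needs {provider} tools, but its credential isn't configured yet.")
```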
Read reference agents before designing:
list_agents()
read_file("exports/deep_research_agent/agent.py")
read_file("exports/deep_research_agent/nodes/__init__.py")
## 3: Design flowchart
Present the design to the user. Lead with a large ASCII graph inside \
a code block so it renders in monospace. Make it visually prominent: \
use box-drawing characters and clear flow arrows:
Act like an experienced AI solution architect. Design the agent architecture \
in the flowchart.
The flowchart is the shared canvas. Every structural change should be \
visible to the user immediately. The draft captures business logic \
(node purposes, data flow, tools) without requiring executable code. \
Include in each node: id, name, description, planned tools, \
input/output keys, and success criteria as high-level hints.
Each node is auto-classified into a flowchart symbol type with a unique \
color. You can override auto-detection by setting `flowchart_type` \
explicitly on a node. Available types:
- **start** (sage green, stadium): Entry point / trigger
- **terminal** (dusty red, stadium): End of flow
- **process** (blue-gray, rectangle): Standard processing step
- **decision** (warm amber, diamond): Conditional branching
- **io** (dusty purple, parallelogram): External data input/output
- **document** (steel blue, wavy rect): Report or document generation
- **database** (muted teal, cylinder): Database or data store
- **subprocess** (dark cyan, subroutine): Delegated sub-agent / predefined process
- **browser** (deep blue, hexagon): GCU browser automation / sub-agent \
delegation. At build time, browser nodes are dissolved into the parent \
node's sub_agents list. Use for any GCU or sub-agent leaf node.
Auto-detection works well for most cases: first node → start, nodes with \
no outgoing edges → terminal, nodes with multiple conditional outgoing \
edges → decision, GCU nodes → browser, nodes mentioning "database" → \
database, nodes mentioning "report/document" → document, I/O tools like \
send_email → io. Everything else defaults to process. Set flowchart_type \
explicitly only when auto-detection would be wrong.
## Decision Nodes — Planning-Only Conditional Branching
Decision nodes (amber diamonds) are **planning-only** visual elements. They \
let you show explicit conditional logic in the flowchart so the user can see \
and approve branching behavior. At `confirm_and_build()`, decision nodes are \
automatically **dissolved** into the runtime graph:
- The decision clause is merged into the predecessor node's `success_criteria`
- The yes/no edges are rewired as the predecessor's `on_success`/`on_failure` edges
- The original flowchart (with decision diamonds) is preserved for display
**When to use decision nodes:**
- When a workflow has a meaningful condition that determines the next step \
(e.g., "Did we find enough results?", "Is the data valid?", "Amount > $100?")
- When the branching logic is important for the user to understand and approve
- When different outcomes lead to genuinely different processing paths
**How to create a decision node:**
- Set `flowchart_type: "decision"` on the node
- Set `decision_clause` to the condition text (e.g., "Data passes validation?")
- Add two outgoing edges with `label: "Yes"` and `label: "No"` pointing \
to the respective target nodes
**Good flowcharts display conditions explicitly.** During planning, the user \
sees the full flowchart with decision diamonds. This is different from the \
building/running phase where conditions are embedded inside node criteria. \
The flowchart is the user-facing contract; make branching logic visible.
Example with a decision node:
```
gather
  subagent: gcu_search
  input: user_request
  tools: load_data, save_data
    │ on_success
    ▼
work
  subagent: gcu_interact
  tools: load_data, save_data
    │ on_success
    ▼
review
  tools: save_data, serve_file_to_user
    │ on_failure
    └──► back to gather

gather ──► [Valid data?] ──Yes──► transform ──► deliver
                          └──No──► notify_user
```
In the draft: the `[Valid data?]` node has `flowchart_type: "decision"`, \
`decision_clause: "Data passes validation checks?"`, with labeled yes/no edges.
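As a concrete sketch, the decision node above might be expressed in the save_agent_draft() call roughly like this. The dict-based node/edge shape and the "source"/"target" keys are assumptions; flowchart_type, decision_clause, and the Yes/No edge labels come from the rules above:
```
# Hedged sketch of draft input for the decision branch (remaining nodes omitted)
nodes = [
    {"id": "gather", "name": "Gather", "tools": ["load_data", "save_data"]},
    {"id": "valid_data", "flowchart_type": "decision",
     "decision_clause": "Data passes validation checks?"},
    {"id": "transform", "name": "Transform"},
    {"id": "notify_user", "name": "Notify User"},
]
edges = [
    {"source": "gather", "target": "valid_data"},
    {"source": "valid_data", "target": "transform", "label": "Yes"},
    {"source": "valid_data", "target": "notify_user", "label": "No"},
]
save_agent_draft(agent_name="my_agent", goal="...", nodes=nodes, edges=edges)
```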
## Sub-Agent Nodes — Planning-Only Delegation
Sub-agent nodes (dark teal subroutines) are **planning-only** visual elements \
that show which nodes delegate to sub-agents. At `confirm_and_build()`, \
sub-agent nodes are **dissolved** into their parent node:
- The sub-agent node's ID is added to the predecessor's `sub_agents` list
- The sub-agent node and its connecting edge are removed
- At runtime, the parent node can invoke the sub-agent via `delegate_to_sub_agent`
**Rules for sub-agent nodes (INCLUDING GCU nodes):**
- GCU nodes are auto-detected as `flowchart_type: "browser"` (hexagon)
- Connect from the managing parent node to the sub-agent node
- Sub-agent nodes must be **leaf nodes**: NO outgoing edges to other nodes
- At build time, browser/GCU nodes are dissolved into the parent's \
`sub_agents` list, just like decision nodes are dissolved into criteria
**CRITICAL: GCU nodes (`node_type: "gcu"`) are ALWAYS sub-agents.** \
They MUST NOT appear in the linear flow. NEVER chain GCU nodes \
sequentially (A → gcu1 → gcu2 → B is WRONG). Instead, attach them \
as leaves to the parent that orchestrates them:
```
WRONG: intake → gcu_find_prospect → gcu_scan_mutuals → check_results
WRONG: decision_node → gcu_node (as a yes/no branch)
RIGHT: intake (sub_agents: [gcu_find, gcu_scan]) → check_results
```
The parent node delegates to its GCU sub-agents and collects results. \
The main flow continues from the parent, not from the GCU node. \
GCU nodes MUST NOT be children of decision nodes: decision nodes \
dissolve at build time, which would leave the GCU as a dangling \
workflow step.
**How to show delegation in the flowchart:**
```
research ──► (deep_searcher)       browser/GCU node, leaf
research ──► [Enough results?]     decision node
```
After dissolution: `research` node gets `sub_agents: ["deep_searcher"]` \
and `success_criteria: "Enough results?"`.
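A hedged sketch of the same idea as draft input (dict shapes assumed): the GCU node is a leaf attached to its parent, and dissolution moves it into the parent's sub_agents.
```
# GCU sub-agent as a leaf node in the draft (shapes assumed)
nodes = [
    {"id": "research", "name": "Research"},
    {"id": "deep_searcher", "node_type": "gcu"},   # auto-detected as "browser"
]
edges = [
    {"source": "research", "target": "deep_searcher"},  # leaf: no outgoing edges
]
# After confirm_and_build(), "deep_searcher" leaves the flow and the runtime
# node becomes roughly: research.sub_agents == ["deep_searcher"]
```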
If the worker agent starts from some initial input, that is okay. \
The queen (you) owns intake: you gather user requirements, then call \
@@ -371,18 +393,25 @@ When building the agent, design the entry node's `input_keys` to \
match what the queen will provide at run time. Worker nodes should \
use `escalate` for blockers.
Follow the graph with a brief summary of each node's purpose. \
Get user approval before implementing.
## 4: Get User Confirmation (MANDATORY GATE)
## 4: Get User Confirmation by ask_user
**This is a hard boundary between planning and building.** \
You MUST get explicit user approval before ANY code is generated.
**WAIT for user response.** You MUST get explicit user approval before \
calling `initialize_and_build_agent`.
- If **Proceed**: Move to implementing (call `initialize_and_build_agent`)
- If **Adjust scope**: Discuss what to change, update your notes, re-assess if needed
- If **More questions**: Answer them honestly, then ask again
- If **Reconsider**: Discuss alternatives. If they decide to proceed anyway, \
that's their informed choice
1. Call ask_user() with options like \
["Approve and build", "Adjust the design", "I have questions"]
2. **WAIT for user response.** Do NOT proceed without it.
3. Handle the response:
- If **Approve / Proceed**: Call confirm_and_build(), then \
initialize_and_build_agent(agent_name, nodes)
- If **Adjust scope**: Discuss changes, update the draft with \
save_agent_draft() again, and re-ask
- If **More questions**: Answer them honestly, then ask again
- If **Reconsider**: Discuss alternatives. If they decide to proceed, \
that's their informed choice
**NEVER call initialize_and_build_agent without first calling \
confirm_and_build().** The system will block the transition if you try.
"""
_building_knowledge = """\
@@ -410,11 +439,10 @@ hashline=True for anchors in results
- undo_changes(path?) restore from git snapshot
## Meta-Agent
- list_agent_tools(server_config_path?, output_schema?, group?) discover \
available tools grouped by category. output_schema: "simple" (default, \
descriptions truncated to ~200 chars) or "full" (complete descriptions + \
input_schema). group: "all" (default) or a provider like "google". \
Call FIRST before designing.
- list_agent_tools(group?, service?, output_schema?, credentials?) discover tools \
progressively: no args=provider summary; group+output_schema="summary"=service breakdown; \
group+service=tool names; group+service+output_schema="full"=full details. \
credentials="available" filters to configured tools. Call FIRST before designing.
- validate_agent_package(agent_name) run ALL validation checks in one call \
(class validation, runner load, tool validation, tests). Call after building.
- list_agents() list all agent packages in exports/ with session counts
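To make the progressive list_agent_tools() flow above concrete, a typical discovery sequence might look like this (the group/service values are illustrative only):
```
list_agent_tools()                                          # provider summary
list_agent_tools(group="google", output_schema="summary")   # per-service breakdown
list_agent_tools(group="google", service="gmail")           # tool names
list_agent_tools(group="google", service="gmail",
                 output_schema="full")                      # full descriptions + schemas
list_agent_tools(credentials="available")                   # only tools with configured credentials
```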
@@ -440,7 +468,9 @@ When a user says "my agent is failing" or "debug this agent":
## 5. Implement
**Please make sure you have proposed the design to the user before implementing**
**You should only reach this step after the user has approved the draft design \
in the planning phase. The draft metadata will pre-populate descriptions, \
goals, success criteria, and node metadata in the generated files.**
Call `initialize_and_build_agent(agent_name, nodes)` to generate all package \
files. The agent_name must be snake_case (e.g., "my_agent"). Pass node names \
@@ -467,8 +497,8 @@ nodes/__init__.py
- Goal description, success criteria values, constraint values, edge \
definitions, identity_prompt in agent.py
- CLI options in __main__.py
- For async entry points (timers/webhooks), add AsyncEntryPointSpec \
and AgentRuntimeConfig to agent.py
- For triggers (timers/webhooks), add entries to triggers.json in the \
agent's export directory
Do NOT modify or rewrite:
- Import statements at top of agent.py (they are correct)
@@ -503,12 +533,15 @@ _package_builder_knowledge = _shared_building_knowledge + _planning_knowledge +
_queen_identity_planning = """\
You are an experienced, responsible and curious Solution Architect. \
"Queen" is the internal alias. \
You ask smart questions to guide the user to the solution. \
You are in the PLANNING phase; your job is to either: \
(a) understand what the user wants and design a new agent, or \
(b) diagnose issues with an existing agent, discuss a fix plan with the user, \
then transition to building to implement. \
You have read-only tools for exploration but no write/edit tools. \
Focus on conversation, research, and design.\
Focus on conversation, research, and design. \
You MUST use ask_user / ask_user_multiple tools for ALL questions; \
never ask questions in plain text without calling the tool.\
"""
_queen_identity_building = """\
@@ -551,24 +584,45 @@ but no write/edit tools.
- run_command(command, cwd?, timeout?) Read-only commands only (grep, ls, git log). \
Never use this to write files, run scripts, or modify the filesystem transition \
to BUILDING phase for that.
- list_agent_tools(server_config_path?, output_schema?, group?) \
Discover available tools for design
- list_agent_tools(server_config_path?, output_schema?, group?, credentials?) \
Discover available tools for design (summary → names → full)
- list_agents() See existing agent packages for reference
- list_agent_sessions(agent_name, status?, limit?) Inspect past runs of an agent
- list_agent_checkpoints(agent_name, session_id) View execution history
- get_agent_checkpoint(agent_name, session_id, checkpoint_id?) Load a checkpoint
- initialize_and_build_agent(agent_name?, nodes?) With agent_name: scaffold a \
new agent and transition to BUILDING phase. Without agent_name: transition to \
BUILDING to fix the currently loaded agent (requires a loaded worker).
## Draft Graph Workflow (new agents)
- save_agent_draft(agent_name, goal, nodes, edges?, terminal_nodes?, ...) \
Create an ISO 5807 color-coded flowchart draft. No code is generated. Each \
node is auto-classified into a standard flowchart symbol (process, decision, \
document, database, subprocess, etc.) with unique shapes and colors. Set \
flowchart_type on a node to override. Nodes need only an id. \
Use decision nodes (flowchart_type: "decision", with decision_clause and \
labeled yes/no edges) to make conditional branching explicit. \
GCU/sub-agent nodes (node_type: "gcu") are auto-detected as browser \
hexagons connect them as leaf nodes to their parent.
- confirm_and_build() Record user confirmation of the draft. Dissolves \
planning-only nodes (decision → predecessor criteria; browser/GCU → \
predecessor sub_agents list). Call this ONLY after the user explicitly \
approves via ask_user.
- initialize_and_build_agent(agent_name?, nodes?) Scaffold the agent package \
and transition to BUILDING phase. For new agents, this REQUIRES \
save_agent_draft() + confirm_and_build() first. The draft metadata is used to \
pre-populate the generated files. Without agent_name: transition to BUILDING \
to fix the currently loaded agent (no draft required).
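Putting the draft workflow together, a plausible end-to-end call sequence for a new agent looks roughly like this (argument values are illustrative):
```
save_agent_draft(agent_name="my_agent", goal="...", nodes=nodes, edges=edges)
# ...iterate on the draft with the user, re-calling save_agent_draft() as needed...
ask_user("Ready to build this agent?", ["Approve and build", "Adjust the design"])
# ...user answers "Approve and build"...
confirm_and_build()                       # records approval, dissolves planning-only nodes
initialize_and_build_agent("my_agent", nodes=["gather", "work", "review"])
```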
## Loading existing agents
- load_built_agent(agent_path) Load an existing agent and switch to STAGING \
phase. Only use this when the user explicitly asks to work with an existing agent \
(e.g. "load my_agent", "run the research agent"). Confirm with the user first.
Focus on understanding requirements and proposing an agent architecture \
with ASCII graph art. Use ask_user to get user approval, then call \
initialize_and_build_agent to begin building. If the user wants to work with \
an existing agent instead, use load_built_agent after confirming. \
If you are diagnosing an existing agent, call initialize_and_build_agent() \
## Workflow summary
1. Understand requirements → discover tools → design graph
2. Call save_agent_draft() to create visual draft → present to user
3. Call ask_user() to get explicit approval
4. Call confirm_and_build() to record approval
5. Call initialize_and_build_agent() to scaffold and start building
For diagnosis of existing agents, call initialize_and_build_agent() \
(no args) after agreeing on a fix plan with the user.
"""
@@ -583,6 +637,15 @@ list_agents, list_agent_sessions, \
list_agent_checkpoints, get_agent_checkpoint
- load_built_agent(agent_path) Load the agent and switch to STAGING phase
- list_credentials(credential_id?) List authorized credentials
- save_agent_draft(...) **Re-draft the flowchart during building.** When \
called during building, planning-only nodes (decision, browser/GCU) are \
dissolved automatically; no re-confirmation needed. The user sees the \
updated flowchart immediately. Use this when you make structural changes \
(add/remove nodes, change edges) so the flowchart stays in sync.
- replan_agent() Switch back to PLANNING phase. The previous draft is \
restored (with decision/browser nodes intact) so you can edit it. Use \
when the user wants to change integrations, swap tools, rethink the \
flow, or discuss any design changes before you build them.
When you finish building an agent, call load_built_agent(path) to stage it.
"""
@@ -598,6 +661,9 @@ The agent is loaded and ready to run. You can inspect it and launch it:
- stop_worker_and_plan() Go to PLANNING phase to discuss changes with the user \
first (DEFAULT for most modification requests)
- stop_worker_and_edit() Go to BUILDING phase for immediate, specific fixes
- set_trigger(trigger_id, trigger_type?, trigger_config?) Activate a trigger (timer)
- remove_trigger(trigger_id) Deactivate a trigger
- list_triggers() List all triggers and their active/inactive status
You do NOT have write tools. To modify the agent, prefer \
stop_worker_and_plan() unless the user gave a specific instruction.
@@ -620,6 +686,15 @@ with the user first (DEFAULT for most modification requests)
You do NOT have write tools. To modify the agent, prefer \
stop_worker_and_plan() unless the user gave a specific instruction. \
To just stop without modifying, call stop_worker().
- stop_worker_and_edit() Stop the worker and switch back to BUILDING phase
- set_trigger(trigger_id, trigger_type?, trigger_config?) Activate a trigger (timer)
- remove_trigger(trigger_id) Deactivate a trigger
- list_triggers() List all triggers and their active/inactive status
You do NOT have write tools or agent construction tools. \
If you need to modify the agent, call stop_worker_and_edit() to switch back \
to BUILDING phase. To stop the worker and ask the user what to do next, call \
stop_worker() to return to STAGING phase.
"""
# -- Behavior shared across all phases --
@@ -627,25 +702,66 @@ To just stop without modifying, call stop_worker().
_queen_behavior_always = """
# Behavior
## CRITICAL RULE — ask_user tool
## Images attached by the user
Users can attach images directly to their chat messages. When you see an \
image in the conversation, analyze it using your native vision capability; \
do NOT say you cannot see images or that you lack access to files. The image \
is embedded in the message; no tool call is needed to view it. Describe what \
you see, answer questions about it, and use the visual content to inform your \
response just as you would text.
## CRITICAL RULE — ask_user / ask_user_multiple
Every response that ends with a question, a prompt, or expects user \
input MUST finish with a call to ask_user(prompt, options). \
input MUST finish with a call to ask_user or ask_user_multiple. \
The system CANNOT detect that you are waiting for \
input unless you call ask_user. You MUST call ask_user as the LAST \
input unless you call one of these tools. You MUST call it as the LAST \
action in your response.
NEVER end a response with a question in text without calling ask_user. \
NEVER rely on the user seeing your text and replying; call ask_user.
NEVER rely on the user seeing your text and replying; call ask_user. \
NEVER list options as text bullets; the tool renders interactive buttons.
**When you have 2+ questions**, use ask_user_multiple instead of ask_user. \
This renders all questions at once so the user answers in one interaction \
instead of going back and forth. ALWAYS prefer ask_user_multiple when \
you need to clarify multiple things. \
**IMPORTANT: When using ask_user_multiple, do NOT repeat the questions \
in your text response.** The widget renders the questions with options; \
duplicating them in text wastes the user's time and delays the widget \
appearing. Keep your text to a brief context/intro sentence only.
Always provide 2-4 short options that cover the most likely answers. \
The user can always type a custom response.
Examples:
- ask_user("What do you need?",
["Build a new agent", "Run the loaded worker", "Help with code"])
- ask_user("Which pattern?",
["Simple 3-node", "Rich with feedback", "Custom"])
### WRONG — never do this:
```
I need a few details:
- Documentation Source: Where should the agent look?
- Trigger: Should the agent poll or get a URL?
- Review Channel: Slack, Email, or Sheets?
Which of these would you like to define first?
1. Documentation source
2. Trigger
3. Review channel
```
This lists questions as plain text with NO tool call; the user has no \
interactive widget and the system doesn't know you're waiting for input.
### RIGHT — always do this:
Write a brief intro (1-2 sentences), then call the tool:
- ask_user_multiple(questions=[
{"id": "docs", "prompt": "Where should the agent find answers?",
"options": ["GitHub repo", "Documentation website", "Internal wiki"]},
{"id": "trigger", "prompt": "How should questions be discovered?",
"options": ["Poll search automatically", "I provide a URL"]},
{"id": "review", "prompt": "Where to send drafted responses?",
"options": ["Slack", "Email", "Google Sheets"]}
])
Examples (single question):
- ask_user("Ready to proceed?",
["Yes, go ahead", "Let me change something"])
@@ -690,9 +806,26 @@ You are in planning mode. Your job is to:
3. Discover available tools with list_agent_tools()
4. Assess framework fit and gaps
5. Consider multiple approaches and their trade-offs
6. Design the agent graph and present it as ASCII art
7. Use ask_user to get explicit user approval and clarify the approach
8. Call initialize_and_build_agent(agent_name, nodes) to scaffold and start building
6. Design the agent graph; call save_agent_draft() **as soon as you have a \
rough shape**, even before finalizing all details
7. **Iterate on the draft interactively**: every time the user gives feedback \
that changes the structure, call save_agent_draft() again so they see the \
update in real-time. The flowchart is a live collaboration tool.
8. When the design is stable, use ask_user to get explicit approval
9. Call confirm_and_build() after the user approves
10. Call initialize_and_build_agent(agent_name, nodes) to scaffold and start building
**The flowchart is your shared whiteboard.** Don't describe changes in text \
and then ask "should I update the draft?"; just update it. If the user says \
"add a validation step," immediately call save_agent_draft() with the new \
node added. If they say "remove that," update and re-draft. The user should \
see every structural change reflected in the visualizer as you discuss it.
**CRITICAL: Planning → Building boundary.** You MUST get explicit user \
confirmation before moving to building. The sequence is:
save_agent_draft() → iterate with user → ask_user() → confirm_and_build() → \
initialize_and_build_agent()
Skipping any of these steps will be blocked by the system.
Remember: DO NOT write or edit any files yet. This is a read-only exploration \
and planning phase. You have read-only tools but no write/edit tools in this \
@@ -726,6 +859,11 @@ You keep a diary. Use write_to_diary() when something worth remembering \
happens: a pipeline went live, the user shared something important, a goal \
was reached or abandoned. Write in first person, as you actually experienced \
it. One or two paragraphs is enough.
Use recall_diary() to look up past diary entries when the user asks about \
previous sessions ("what happened yesterday?", "what did we work on last \
week?") or when you need past context to make a decision. You can filter by \
keyword and control how far back to search.
"""
_queen_behavior_always = _queen_behavior_always + _queen_memory_instructions
@@ -745,6 +883,41 @@ run_agent_with_input(task) (if in staging) or load then run (if in building)
subtasks to justify delegation.
- Building, modifying, or configuring agents is ALWAYS your job. Never \
delegate agent construction to the worker, even as a "research" subtask.
## Keeping the flowchart in sync during building
When you make structural changes to the agent (add/remove/rename nodes, \
change edges, modify sub-agent assignments), call save_agent_draft() to \
update the flowchart. During building, this auto-dissolves planning-only \
nodes without needing user re-confirmation. The user sees the updated \
flowchart immediately.
- **Minor changes** (add a node, rename, adjust edges): call \
save_agent_draft() with the updated graph and keep building.
- **User wants to discuss, redesign, or change integrations/tools**: call \
replan_agent(). The previous draft is restored so you can edit it with \
the user. After they approve, confirm_and_build() → continue building.
**When to call replan_agent():** Changing which tools or integrations a \
node uses, swapping data sources, rethinking the flow, or any time the \
user says "replan", "go back", "let's redesign", "change the approach", \
"use a different tool/API", etc. Do NOT stay in building to handle these \
switch to planning so the user can review and approve the new design.
## CRITICAL — Graph topology errors require replanning, not code edits
If you discover that the agent graph has structural problems (GCU nodes \
in the linear flow, missing edges, wrong node connections, incorrect \
sub-agent assignments), you MUST call replan_agent() and fix the draft. \
Do NOT attempt to fix topology by editing agent.py directly. The graph \
structure is defined by the draft → dissolution → code-gen pipeline. \
Editing code to rewire nodes bypasses the flowchart and creates drift \
between what the user sees and what the code does.
**WRONG:** "Let me fix agent.py to remove GCU nodes from edges..."
**RIGHT:** Call replan_agent(), fix the draft with save_agent_draft(), \
get user approval, then confirm_and_build() the corrected code is \
generated automatically.
"""
# -- STAGING phase behavior --
@@ -822,6 +995,33 @@ Use stop_worker_and_edit() only when:
- The user gave a specific, concrete instruction ("add save_data to the gather node")
- You already discussed the fix in a previous planning session
- The change is trivial and unambiguous (rename, toggle a flag)
## Trigger Management
Use list_triggers() to see available triggers from the loaded worker.
Use set_trigger(trigger_id) to activate a timer. Once active, triggers \
fire periodically and inject [TRIGGER: ...] messages so you can decide \
whether to call run_agent_with_input(task).
### When the user says "Enable trigger <id>" (or clicks Enable in the UI):
1. Call get_worker_status(focus="memory") to check if the worker has \
saved configuration (rules, preferences, settings from a prior run).
2. If memory contains saved config: compose a task string from it \
(e.g. "Process inbox emails using saved rules") and call \
set_trigger(trigger_id, task="...") immediately. Tell the user the \
trigger is now active and what schedule it uses. Do NOT ask them to \
provide the task; you derive it from memory.
3. If memory is empty (no prior run): tell the user the agent needs to \
run once first so its configuration can be saved. Offer to run it now. \
Once the worker finishes, enable the trigger.
4. If the user just provided config this session (rules/task context \
already in conversation): use that directly, no memory lookup needed. \
Enable the trigger immediately.
Never ask "what should the task be?" when enabling a trigger for an \
agent with a clear purpose. The task string is a brief description of \
what the worker does, derived from its saved state or your current context.
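For example, a plausible enable flow when memory already holds saved configuration (task text illustrative):
```
get_worker_status(focus="memory")
# → memory shows saved inbox rules from a prior run
set_trigger("daily-check", task="Process inbox emails using saved rules")
# → tell the user the trigger is active and what schedule it uses
```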
"""
# -- RUNNING phase behavior --
@@ -836,12 +1036,24 @@ NOT ask the user directly.
You wake up when:
- The user explicitly addresses you
- A worker escalation arrives (`[WORKER_ESCALATION_REQUEST]`)
- An escalation ticket arrives from the judge
- The worker finishes (`[WORKER_TERMINAL]`)
If the user asks for progress, call get_worker_status() ONCE and report. \
If the summary mentions issues, follow up with get_worker_status(focus="issues").
## Subagent delegations (browser automation, GCU)
When the worker delegates to a subagent (e.g., GCU browser automation), expect it \
to take 2-5 minutes. During this time:
- Progress will show 0%; this is NORMAL. The subagent only calls set_output at the end.
- Check get_worker_status(focus="full") for "subagent_activity"; this shows the \
subagent's latest reasoning text and confirms it is making real progress.
- Do NOT conclude the subagent is stuck just because progress is 0% or because \
you see repeated browser_click/browser_snapshot calls; that is the expected \
pattern for web scraping.
- Only intervene if: the subagent has been running for 5+ minutes with no new \
subagent_activity updates, OR the judge escalates.
## Handling worker termination ([WORKER_TERMINAL])
When you receive a `[WORKER_TERMINAL]` event, the worker has finished:
@@ -870,19 +1082,30 @@ IMPORTANT: Only auto-handle if the user has NOT explicitly told you how to handl
escalations. If the user gave you instructions (e.g., "just retry on errors", \
"skip any auth issues"), follow those instructions instead.
CRITICAL escalation relay protocol:
When an escalation requires user input (auth blocks, human review), the worker \
or its subagent is BLOCKED and waiting for your response. You MUST follow this \
exact two-step sequence:
Step 1: call ask_user() to get the user's answer.
Step 2: call inject_worker_message() with the user's answer IMMEDIATELY after.
If you skip Step 2, the worker/subagent stays blocked FOREVER and the task hangs. \
NEVER respond to the user without also calling inject_worker_message() to unblock \
the worker. Even if the user says "skip" or "cancel", you must still relay that \
decision via inject_worker_message() so the worker can clean up.
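A minimal sketch of the two-step relay (the exact inject_worker_message argument shape is an assumption):
```
# Step 1 — get the user's decision
ask_user("The worker hit a Google auth block. How should it proceed?",
         ["Provide credentials", "Skip this task", "Stop and edit agent"])
# Step 2 — immediately relay the answer so the blocked worker can continue
inject_worker_message("User decision: skip this task and move on to the next item.")
```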
**Auth blocks / credential issues:**
- ALWAYS ask the user (unless user explicitly told you how to handle this).
- The worker cannot proceed without valid credentials.
- Explain which credential is missing or invalid.
- Use ask_user to get guidance: "Provide credentials", "Skip this task", "Stop and edit agent"
- Use inject_worker_message() to relay user decisions back to the worker.
- Step 1: ask_user for guidance: "Provide credentials", "Skip this task", "Stop and edit agent"
- Step 2: inject_worker_message() with the user's response to unblock the worker.
**Need human review / approval:**
- ALWAYS ask the user (unless user explicitly told you how to handle this).
- The worker is explicitly requesting human judgment.
- Present the context clearly (what decision is needed, what are the options).
- Use ask_user with the actual decision options.
- Use inject_worker_message() to relay user decisions back to the worker.
- Step 1: ask_user with the actual decision options.
- Step 2: inject_worker_message() with the user's decision to unblock the worker.
**Errors / unexpected failures:**
- Explain what went wrong in plain terms.
@@ -890,6 +1113,7 @@ escalations. If the user gave you instructions (e.g., "just retry on errors", \
- Or offer: "Diagnose the issue" use stop_worker_and_plan() to investigate first.
- Or offer: "Retry as-is", "Skip this task", "Abort run"
- (Skip asking if user explicitly told you to auto-retry or auto-skip errors.)
- If the escalation had wait_for_response: inject_worker_message() with the decision.
**Informational / progress updates:**
- Acknowledge briefly and let the worker continue.
@@ -914,6 +1138,23 @@ When the user asks to fix, change, modify, or update the loaded worker \
**Default: use stop_worker_and_plan().** Most modification requests need \
discussion first. Only use stop_worker_and_edit() when the user gave a \
specific, unambiguous instruction or you already agreed on the fix.
## Trigger Handling
You will receive [TRIGGER: ...] messages when a scheduled timer fires. \
These are framework-level signals, not user messages.
Rules:
- Check get_worker_status() before calling run_agent_with_input(task). If the worker \
is already RUNNING, decide: skip this trigger, or note it for after completion.
- When multiple [TRIGGER] messages arrive at once, read them all before acting. \
Batch your response; do not call run_agent_with_input() once per trigger.
- If a trigger fires but the task no longer makes sense (e.g., user changed \
config since last run), skip it and inform the user.
- Never disable a trigger without telling the user. Use remove_trigger() only \
when explicitly asked or when the trigger is clearly obsolete.
- When the user asks to remove or disable a trigger, you MUST call remove_trigger(trigger_id). \
Never just say "it's removed" without actually calling the tool.
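A minimal sketch of handling a single timer firing (task text illustrative):
```
# [TRIGGER: daily-check] received
get_worker_status()                               # worker idle → safe to start a run
run_agent_with_input("Run the daily check process")
# If several [TRIGGER] messages arrived together, start ONE run that covers them.
```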
"""
# -- Backward-compatible composed versions (used by queen_node.system_prompt default) --
@@ -931,8 +1172,10 @@ _queen_tools_docs = (
+ "\n\n### RUNNING phase (worker is executing)\n"
+ _queen_tools_running.strip()
+ "\n\n### Phase transitions\n"
"- initialize_and_build_agent(agent_name?, nodes?) → with name: scaffolds package; "
"without name: switches to BUILDING for existing agent\n"
"- save_agent_draft(...) → creates visual-only draft graph (stays in PLANNING)\n"
"- confirm_and_build() → records user approval of draft (stays in PLANNING)\n"
"- initialize_and_build_agent(agent_name?, nodes?) → scaffolds package + switches to "
"BUILDING (requires draft + confirmation for new agents)\n"
"- replan_agent() → switches back to PLANNING phase (only when user explicitly requests)\n"
"- load_built_agent(path) → switches to STAGING phase\n"
"- run_agent_with_input(task) → starts worker, switches to RUNNING phase\n"
@@ -975,8 +1218,8 @@ ticket_triage_node = NodeSpec(
id="ticket_triage",
name="Ticket Triage",
description=(
"Queen's triage node. Receives an EscalationTicket from the Health Judge "
"via event-driven entry point and decides: dismiss or notify the operator."
"Queen's triage node. Receives an EscalationTicket via event-driven "
"entry point and decides: dismiss or notify the operator."
),
node_type="event_loop",
client_facing=True, # Operator can chat with queen once connected (Ctrl+Q)
@@ -990,8 +1233,8 @@ ticket_triage_node = NodeSpec(
),
tools=["notify_operator"],
system_prompt="""\
You are the Queen. The Worker Health Judge has escalated a worker \
issue to you. The ticket is in your memory under key "ticket". Read it carefully.
You are the Queen. A worker health issue has been escalated to you. \
The ticket is in your memory under key "ticket". Read it carefully.
## Dismiss criteria — do NOT call notify_operator:
- severity is "low" AND steps_since_last_accept < 8
@@ -1030,7 +1273,7 @@ queen_node = NodeSpec(
description=(
"User's primary interactive interface with full coding capability. "
"Can build agents directly or delegate to the worker. Manages the "
"worker agent lifecycle and triages health escalations from the judge."
"worker agent lifecycle."
),
node_type="event_loop",
client_facing=True,
@@ -50,6 +50,23 @@ def read_episodic_memory(d: date | None = None) -> str:
return path.read_text(encoding="utf-8").strip() if path.exists() else ""
def _find_recent_episodic(lookback: int = 7) -> tuple[date, str] | None:
"""Find the most recent non-empty episodic memory within *lookback* days."""
from datetime import timedelta
today = date.today()
for offset in range(lookback):
d = today - timedelta(days=offset)
content = read_episodic_memory(d)
if content:
return d, content
return None
# Budget (in characters) for episodic memory in the system prompt.
_EPISODIC_CHAR_BUDGET = 6_000
def format_for_injection() -> str:
"""Format cross-session memory for system prompt injection.
@@ -57,7 +74,7 @@ def format_for_injection() -> str:
session with only the seed template).
"""
semantic = read_semantic_memory()
episodic = read_episodic_memory()
recent = _find_recent_episodic()
# Suppress injection if semantic is still just the seed template
if semantic and semantic.startswith("# My Understanding of the User\n\n*No sessions"):
@@ -66,9 +83,18 @@ def format_for_injection() -> str:
parts: list[str] = []
if semantic:
parts.append(semantic)
if episodic:
today_str = date.today().strftime("%B %-d, %Y")
parts.append(f"## Today — {today_str}\n\n{episodic}")
if recent:
d, content = recent
# Trim oversized episodic entries to keep the prompt manageable
if len(content) > _EPISODIC_CHAR_BUDGET:
content = content[:_EPISODIC_CHAR_BUDGET] + "\n\n…(truncated)"
today = date.today()
if d == today:
label = f"## Today — {d.strftime('%B %-d, %Y')}"
else:
label = f"## {d.strftime('%B %-d, %Y')}"
parts.append(f"{label}\n\n{content}")
if not parts:
return ""
@@ -100,7 +126,8 @@ def append_episodic_entry(content: str) -> None:
"""
ep_path = episodic_memory_path()
ep_path.parent.mkdir(parents=True, exist_ok=True)
today_str = date.today().strftime("%B %-d, %Y")
today = date.today()
today_str = f"{today.strftime('%B')} {today.day}, {today.year}"
timestamp = datetime.now().strftime("%H:%M")
if not ep_path.exists():
header = f"# {today_str}\n\n"
@@ -299,7 +326,8 @@ async def consolidate_queen_memory(
existing_semantic = read_semantic_memory()
today_journal = read_episodic_memory()
today_str = date.today().strftime("%B %-d, %Y")
today = date.today()
today_str = f"{today.strftime('%B')} {today.day}, {today.year}"
adapt_path = session_dir / "data" / "adapt.md"
user_msg = (
@@ -27,7 +27,9 @@
## GCU Errors
15. **Manually wiring browser tools on event_loop nodes** — Use `node_type="gcu"` which auto-includes browser tools. Do NOT manually list browser tool names.
16. **Using GCU nodes as regular graph nodes** — GCU nodes are subagents only. They must ONLY appear in `sub_agents=["gcu-node-id"]` and be invoked via `delegate_to_sub_agent()`. Never connect via edges or use as entry/terminal nodes.
17. **Reusing the same GCU node ID for parallel tasks** — Each concurrent browser task needs a distinct GCU node ID (e.g. `gcu-site-a`, `gcu-site-b`). Two `delegate_to_sub_agent` calls with the same `agent_id` share a browser profile and will interfere with each other's pages.
18. **Passing `profile=` in GCU tool calls** — Profile isolation for parallel subagents is automatic. The framework injects a unique profile per subagent via an asyncio `ContextVar`. Hardcoding `profile="default"` in a GCU system prompt breaks this isolation.
## Worker Agent Errors
17. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
18. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.
19. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
20. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.
@@ -180,7 +180,7 @@ terminal_nodes = [] # Forever-alive
# Module-level vars read by AgentRunner.load()
conversation_mode = "continuous"
identity_prompt = "You are a helpful agent."
loop_config = {"max_iterations": 100, "max_tool_calls_per_turn": 20, "max_history_tokens": 32000}
loop_config = {"max_iterations": 100, "max_tool_calls_per_turn": 20, "max_context_tokens": 32000}
class MyAgent:
@@ -332,81 +332,46 @@ class MyAgent:
default_agent = MyAgent()
```
## agent.py — Async Entry Points Variant
## triggers.json — Timer and Webhook Triggers
When an agent needs timers, webhooks, or event-driven triggers, add
`async_entry_points` and optionally `runtime_config` as module-level variables.
These are IN ADDITION to the standard variables above.
When an agent needs timers, webhooks, or event-driven triggers, create a
`triggers.json` file in the agent's directory (alongside `agent.py`).
The queen loads these at session start and the user can manage them via
the `set_trigger` / `remove_trigger` tools at runtime.
```python
# Additional imports for async entry points
from framework.graph.edge import GraphSpec, AsyncEntryPointSpec
from framework.runtime.agent_runtime import (
AgentRuntime, AgentRuntimeConfig, create_agent_runtime,
)
# ... (goal, nodes, edges, entry_node, entry_points, etc. as above) ...
# Async entry points — event-driven triggers
async_entry_points = [
# Timer with cron: daily at 9am
AsyncEntryPointSpec(
id="daily-check",
name="Daily Check",
entry_node="process-node",
trigger_type="timer",
trigger_config={"cron": "0 9 * * *"},
isolation_level="shared",
max_concurrent=1,
),
# Timer with fixed interval: every 20 minutes
AsyncEntryPointSpec(
id="scheduled-check",
name="Scheduled Check",
entry_node="process-node",
trigger_type="timer",
trigger_config={"interval_minutes": 20, "run_immediately": False},
isolation_level="shared",
max_concurrent=1,
),
# Event: reacts to webhook events
AsyncEntryPointSpec(
id="webhook-event",
name="Webhook Event Handler",
entry_node="process-node",
trigger_type="event",
trigger_config={"event_types": ["webhook_received"]},
isolation_level="shared",
max_concurrent=10,
),
```json
[
  {
    "id": "daily-check",
    "name": "Daily Check",
    "trigger_type": "timer",
    "trigger_config": {"cron": "0 9 * * *"},
    "task": "Run the daily check process"
  },
  {
    "id": "scheduled-check",
    "name": "Scheduled Check",
    "trigger_type": "timer",
    "trigger_config": {"interval_minutes": 20},
    "task": "Run the scheduled check"
  },
  {
    "id": "webhook-event",
    "name": "Webhook Event Handler",
    "trigger_type": "webhook",
    "trigger_config": {"event_types": ["webhook_received"]},
    "task": "Process incoming webhook event"
  }
]
# Webhook server config (only needed if using webhooks)
runtime_config = AgentRuntimeConfig(
webhook_host="127.0.0.1",
webhook_port=8080,
webhook_routes=[
{
"source_id": "my-source",
"path": "/webhooks/my-source",
"methods": ["POST"],
},
],
)
```
**Key rules for async entry points:**
- `async_entry_points` is a list of `AsyncEntryPointSpec` (NOT `EntryPointSpec`)
- `runtime_config` is `AgentRuntimeConfig` (NOT `RuntimeConfig` from config.py)
- Valid trigger_types: `timer`, `event`, `webhook`, `manual`, `api`
- Valid isolation_levels: `isolated`, `shared`, `synchronized`
**Key rules for triggers.json:**
- Valid trigger_types: `timer`, `webhook`
- Timer trigger_config (cron): `{"cron": "0 9 * * *"}` — standard 5-field cron expression
- Timer trigger_config (interval): `{"interval_minutes": float, "run_immediately": bool}`
- Event trigger_config: `{"event_types": ["webhook_received"], "filter_stream": "...", "filter_node": "..."}`
- Use `isolation_level="shared"` for async entry points that need to read
the primary session's memory (e.g., user-configured rules)
- The `_build_graph()` method passes `async_entry_points` to GraphSpec
- Reference: `exports/gmail_inbox_guardian/agent.py`
- Timer trigger_config (interval): `{"interval_minutes": float}`
- Each trigger must have a unique `id`
- The `task` field describes what the worker should do when the trigger fires
- Triggers are persisted back to `triggers.json` when modified via queen tools
## __init__.py
@@ -453,21 +418,6 @@ __all__ = [
]
```
**If the agent uses async entry points**, also import and export:
```python
from .agent import (
...,
async_entry_points,
runtime_config, # Only if using webhooks
)
__all__ = [
...,
"async_entry_points",
"runtime_config",
]
```
## __main__.py
```python
@@ -31,8 +31,7 @@ module-level variables via `getattr()`:
| `conversation_mode` | no | not passed | Isolated mode (no context carryover) |
| `identity_prompt` | no | not passed | No agent-level identity |
| `loop_config` | no | `{}` | No iteration limits |
| `async_entry_points` | no | `[]` | No async triggers (timers, webhooks, events) |
| `runtime_config` | no | `None` | No webhook server |
| `triggers.json` (file) | no | not present | No triggers (timers, webhooks) |
**CRITICAL:** `__init__.py` MUST import and re-export ALL of these from
`agent.py`. Missing exports silently fall back to defaults, causing
@@ -226,7 +225,7 @@ Only three valid keys:
loop_config = {
"max_iterations": 100, # Max LLM turns per node visit
"max_tool_calls_per_turn": 20, # Max tool calls per LLM response
"max_history_tokens": 32000, # Triggers conversation compaction
"max_context_tokens": 32000, # Triggers conversation compaction
}
```
**INVALID keys** (do NOT use): `"strategy"`, `"mode"`, `"timeout"`,
@@ -257,44 +256,28 @@ Multiple ON_SUCCESS edges from same source → parallel execution via asyncio.ga
Judge is the SOLE acceptance mechanism — no ad-hoc framework gating.
## Async Entry Points (Webhooks, Timers, Events)
## Triggers (Timers, Webhooks)
For agents that react to external events, use `AsyncEntryPointSpec`:
For agents that react to external events, create a `triggers.json` file
in the agent's export directory:
```python
from framework.graph.edge import AsyncEntryPointSpec
from framework.runtime.agent_runtime import AgentRuntimeConfig
# Timer trigger (cron or interval)
async_entry_points = [
AsyncEntryPointSpec(
id="daily-check",
name="Daily Check",
entry_node="process",
trigger_type="timer",
trigger_config={"cron": "0 9 * * *"}, # daily at 9am
isolation_level="shared",
)
```json
[
  {
    "id": "daily-check",
    "name": "Daily Check",
    "trigger_type": "timer",
    "trigger_config": {"cron": "0 9 * * *"},
    "task": "Run the daily check process"
  }
]
# Webhook server (optional)
runtime_config = AgentRuntimeConfig(
webhook_host="127.0.0.1",
webhook_port=8080,
webhook_routes=[{"source_id": "gmail", "path": "/webhooks/gmail", "methods": ["POST"]}],
)
```
### Key Fields
- `trigger_type`: `"timer"`, `"event"`, `"webhook"`, `"manual"`
- `trigger_type`: `"timer"` or `"webhook"`
- `trigger_config`: `{"cron": "0 9 * * *"}` or `{"interval_minutes": 20}`
- `isolation_level`: `"shared"` (recommended), `"isolated"`, `"synchronized"`
- `event_types`: For event triggers, e.g., `["webhook_received"]`
### Exports Required
Both `async_entry_points` and `runtime_config` must be exported from `__init__.py`.
See `exports/gmail_inbox_guardian/agent.py` for complete example.
- `task`: describes what the worker should do when the trigger fires
- Triggers can also be created/removed at runtime via `set_trigger` / `remove_trigger` queen tools
## Tool Discovery
@@ -109,9 +109,48 @@ Key rules to bake into GCU node prompts:
- Keep tool calls per turn ≤10
- Tab isolation: when browser is already running, use `browser_open(background=true)` and pass `target_id` to every call
## Multiple Concurrent GCU Subagents
When a task can be parallelized across multiple sites or profiles, declare a distinct GCU
node for each and invoke them all in the same LLM turn. The framework batches all
`delegate_to_sub_agent` calls made in one turn and runs them with `asyncio.gather`, so
they execute concurrently — not sequentially.
**Each GCU subagent automatically gets its own isolated browser context** — no `profile=`
argument is needed in tool calls. The framework derives a unique profile from the subagent's
node ID and instance counter and injects it via an asyncio `ContextVar` before the subagent
runs.
### Example: three sites in parallel
```python
# Three distinct GCU nodes
gcu_site_a = NodeSpec(id="gcu-site-a", node_type="gcu", ...)
gcu_site_b = NodeSpec(id="gcu-site-b", node_type="gcu", ...)
gcu_site_c = NodeSpec(id="gcu-site-c", node_type="gcu", ...)
orchestrator = NodeSpec(
    id="orchestrator",
    node_type="event_loop",
    sub_agents=["gcu-site-a", "gcu-site-b", "gcu-site-c"],
    system_prompt="""\
Call all three subagents in a single response to run them in parallel:
delegate_to_sub_agent(agent_id="gcu-site-a", task="Scrape prices from site A")
delegate_to_sub_agent(agent_id="gcu-site-b", task="Scrape prices from site B")
delegate_to_sub_agent(agent_id="gcu-site-c", task="Scrape prices from site C")
""",
)
```
**Rules:**
- Use distinct node IDs for each concurrent task — sharing an ID shares the browser context.
- The GCU node prompts do not need to mention `profile=`; isolation is automatic.
- Cleanup is automatic at session end, but GCU nodes can call `browser_stop()` explicitly
if they want to release resources mid-run.
## GCU Anti-Patterns
- Using `browser_screenshot` to read text (use `browser_snapshot`)
- Using `browser_screenshot` to read text (use `browser_snapshot` instead; screenshots are for visual context only)
- Re-navigating after scrolling (resets scroll position)
- Attempting login on auth walls
- Forgetting `target_id` in multi-tab scenarios
@@ -1,8 +1,8 @@
"""Queen's ticket receiver entry point.
When the Worker Health Judge emits a WORKER_ESCALATION_TICKET event on the
shared EventBus, this entry point fires and routes to the ``ticket_triage``
node, where the Queen deliberates and decides whether to notify the operator.
When a WORKER_ESCALATION_TICKET event is emitted on the shared EventBus,
this entry point fires and routes to the ``ticket_triage`` node, where the
Queen deliberates and decides whether to notify the operator.
Isolation level is ``isolated``; the queen's triage memory is kept separate
from the worker's shared memory. Each ticket triage runs in its own context.
@@ -0,0 +1,286 @@
"""Worker per-run digest (run diary).
Storage layout:
~/.hive/agents/{agent_name}/runs/{run_id}/digest.md
Each completed or failed worker run gets one digest file. The queen reads
these via get_worker_status(focus='diary') before digging into live runtime
logs; the diary is a cheap, persistent record that survives across sessions.
"""
from __future__ import annotations
import logging
import traceback
from collections import Counter
from datetime import datetime
from pathlib import Path
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from framework.runtime.event_bus import AgentEvent, EventBus
logger = logging.getLogger(__name__)
_DIGEST_SYSTEM = """\
You maintain run digests for a worker agent.
A run digest is a concise, factual record of a single task execution.
Write 3-6 sentences covering:
- What the worker was asked to do (the task/goal)
- What approach it took and what tools it used
- What the outcome was (success, partial, or failure and why if relevant)
- Any notable issues, retries, or escalations to the queen
Write in third person past tense. Be direct and specific.
Omit routine tool invocations unless the result matters.
Output only the digest prose no headings, no code fences.
"""
def _worker_runs_dir(agent_name: str) -> Path:
return Path.home() / ".hive" / "agents" / agent_name / "runs"
def digest_path(agent_name: str, run_id: str) -> Path:
return _worker_runs_dir(agent_name) / run_id / "digest.md"
def _collect_run_events(bus: EventBus, run_id: str, limit: int = 2000) -> list[AgentEvent]:
"""Collect all events belonging to *run_id* from the bus history.
Strategy: find the EXECUTION_STARTED event that carries ``run_id``,
extract its ``execution_id``, then query the bus by that execution_id.
This works because TOOL_CALL_*, EDGE_TRAVERSED, NODE_STALLED etc. carry
execution_id but not run_id.
Falls back to a full-scan run_id filter when EXECUTION_STARTED is not
found (e.g. bus was rotated).
"""
from framework.runtime.event_bus import EventType
# Pass 1: find execution_id via EXECUTION_STARTED with matching run_id
started = bus.get_history(event_type=EventType.EXECUTION_STARTED, limit=limit)
exec_id: str | None = None
for e in started:
if getattr(e, "run_id", None) == run_id and e.execution_id:
exec_id = e.execution_id
break
if exec_id:
return bus.get_history(execution_id=exec_id, limit=limit)
# Fallback: scan all events and match by run_id attribute
return [e for e in bus.get_history(limit=limit) if getattr(e, "run_id", None) == run_id]
def _build_run_context(
events: list[AgentEvent],
outcome_event: AgentEvent | None,
) -> str:
"""Assemble a plain-text run context string for the digest LLM call."""
from framework.runtime.event_bus import EventType
# Reverse so events are in chronological order
events_chron = list(reversed(events))
lines: list[str] = []
# Task input from EXECUTION_STARTED
started = [e for e in events_chron if e.type == EventType.EXECUTION_STARTED]
if started:
inp = started[0].data.get("input", {})
if inp:
lines.append(f"Task input: {str(inp)[:400]}")
# Duration (elapsed so far if no outcome yet)
ref_ts = outcome_event.timestamp if outcome_event else datetime.utcnow()
if started:
elapsed = (ref_ts - started[0].timestamp).total_seconds()
m, s = divmod(int(elapsed), 60)
lines.append(f"Duration so far: {m}m {s}s" if m else f"Duration so far: {s}s")
# Outcome
if outcome_event is None:
lines.append("Status: still running (mid-run snapshot)")
elif outcome_event.type == EventType.EXECUTION_COMPLETED:
out = outcome_event.data.get("output", {})
out_str = f"Outcome: completed. Output: {str(out)[:300]}"
lines.append(out_str if out else "Outcome: completed.")
else:
err = outcome_event.data.get("error", "")
lines.append(f"Outcome: failed. Error: {str(err)[:300]}" if err else "Outcome: failed.")
# Node path (edge traversals)
edges = [e for e in events_chron if e.type == EventType.EDGE_TRAVERSED]
if edges:
parts = [
f"{e.data.get('source_node', '?')}->{e.data.get('target_node', '?')}"
for e in edges[-20:]
]
lines.append(f"Node path: {', '.join(parts)}")
# Tools used
tool_events = [e for e in events_chron if e.type == EventType.TOOL_CALL_COMPLETED]
if tool_events:
names = [e.data.get("tool_name", "?") for e in tool_events]
counts = Counter(names)
summary = ", ".join(f"{name}×{n}" if n > 1 else name for name, n in counts.most_common())
lines.append(f"Tools used: {summary}")
# Note any tool errors
errors = [e for e in tool_events if e.data.get("is_error")]
if errors:
err_names = Counter(e.data.get("tool_name", "?") for e in errors)
lines.append(f"Tool errors: {dict(err_names)}")
# Issues
issue_map = {
EventType.NODE_STALLED: "stall",
EventType.NODE_TOOL_DOOM_LOOP: "doom loop",
EventType.CONSTRAINT_VIOLATION: "constraint violation",
EventType.NODE_RETRY: "retry",
}
issue_parts: list[str] = []
for evt_type, label in issue_map.items():
n = sum(1 for e in events_chron if e.type == evt_type)
if n:
issue_parts.append(f"{n} {label}(s)")
if issue_parts:
lines.append(f"Issues: {', '.join(issue_parts)}")
# Escalations to queen
escalations = [e for e in events_chron if e.type == EventType.ESCALATION_REQUESTED]
if escalations:
lines.append(f"Escalations to queen: {len(escalations)}")
# Final LLM output snippet (last LLM_TEXT_DELTA snapshot)
text_events = [e for e in reversed(events_chron) if e.type == EventType.LLM_TEXT_DELTA]
if text_events:
snapshot = text_events[0].data.get("snapshot", "") or ""
if snapshot:
lines.append(f"Final LLM output: {snapshot[-400:].strip()}")
return "\n".join(lines)
async def consolidate_worker_run(
agent_name: str,
run_id: str,
outcome_event: AgentEvent | None,
bus: EventBus,
llm: Any,
) -> None:
"""Write (or overwrite) the digest for a worker run.
Called fire-and-forget either:
- After EXECUTION_COMPLETED / EXECUTION_FAILED (outcome_event set, final write)
- Periodically during a run on a cooldown timer (outcome_event=None, mid-run snapshot)
The digest file is always overwritten so each call produces the freshest view.
The final completion/failure call supersedes any mid-run snapshot.
Args:
agent_name: Worker agent directory name (determines storage path).
run_id: The run ID.
outcome_event: EXECUTION_COMPLETED or EXECUTION_FAILED event, or None for
a mid-run snapshot.
bus: The session EventBus (shared queen + worker).
llm: LLMProvider with an acomplete() method.
"""
try:
events = _collect_run_events(bus, run_id)
run_context = _build_run_context(events, outcome_event)
if not run_context:
logger.debug("worker_memory: no events for run %s, skipping digest", run_id)
return
is_final = outcome_event is not None
logger.info(
"worker_memory: generating %s digest for run %s ...",
"final" if is_final else "mid-run",
run_id,
)
from framework.agents.queen.config import default_config
resp = await llm.acomplete(
messages=[{"role": "user", "content": run_context}],
system=_DIGEST_SYSTEM,
max_tokens=min(default_config.max_tokens, 512),
)
digest_text = (resp.content or "").strip()
if not digest_text:
logger.warning("worker_memory: LLM returned empty digest for run %s", run_id)
return
path = digest_path(agent_name, run_id)
path.parent.mkdir(parents=True, exist_ok=True)
from framework.runtime.event_bus import EventType
ts = (outcome_event.timestamp if outcome_event else datetime.utcnow()).strftime(
"%Y-%m-%d %H:%M"
)
if outcome_event is None:
status = "running"
elif outcome_event.type == EventType.EXECUTION_COMPLETED:
status = "completed"
else:
status = "failed"
path.write_text(
f"# {run_id}\n\n**{ts}** | {status}\n\n{digest_text}\n",
encoding="utf-8",
)
logger.info(
"worker_memory: %s digest written for run %s (%d chars)",
status,
run_id,
len(digest_text),
)
except Exception:
tb = traceback.format_exc()
logger.exception("worker_memory: digest failed for run %s", run_id)
# Persist the error so it's findable without log access
error_path = _worker_runs_dir(agent_name) / run_id / "digest_error.txt"
try:
error_path.parent.mkdir(parents=True, exist_ok=True)
error_path.write_text(
f"run_id: {run_id}\ntime: {datetime.now().isoformat()}\n\n{tb}",
encoding="utf-8",
)
except Exception:
pass
def read_recent_digests(agent_name: str, max_runs: int = 5) -> list[tuple[str, str]]:
"""Return recent run digests as [(run_id, content), ...], newest first.
Args:
agent_name: Worker agent directory name.
max_runs: Maximum number of digests to return.
Returns:
List of (run_id, digest_content) tuples, ordered newest first.
"""
runs_dir = _worker_runs_dir(agent_name)
if not runs_dir.exists():
return []
digest_files = sorted(
runs_dir.glob("*/digest.md"),
key=lambda p: p.stat().st_mtime,
reverse=True,
)[:max_runs]
result: list[tuple[str, str]] = []
for f in digest_files:
try:
content = f.read_text(encoding="utf-8").strip()
if content:
result.append((f.parent.name, content))
except OSError:
continue
return result
@@ -89,6 +89,16 @@ def main():
register_testing_commands(subparsers)
# Register skill commands (skill list, skill trust, ...)
from framework.skills.cli import register_skill_commands
register_skill_commands(subparsers)
# Register debugger commands (debugger)
from framework.debugger.cli import register_debugger_commands
register_debugger_commands(subparsers)
args = parser.parse_args()
if hasattr(args, "func"):
@@ -19,6 +19,10 @@ from framework.graph.edge import DEFAULT_MAX_TOKENS
# ---------------------------------------------------------------------------
HIVE_CONFIG_FILE = Path.home() / ".hive" / "configuration.json"
# Hive LLM router endpoint (Anthropic-compatible).
# litellm's Anthropic handler appends /v1/messages, so this is just the base host.
HIVE_LLM_ENDPOINT = "https://api.adenhq.com"
logger = logging.getLogger(__name__)
@@ -47,15 +51,174 @@ def get_preferred_model() -> str:
"""Return the user's preferred LLM model string (e.g. 'anthropic/claude-sonnet-4-20250514')."""
llm = get_hive_config().get("llm", {})
if llm.get("provider") and llm.get("model"):
return f"{llm['provider']}/{llm['model']}"
provider = str(llm["provider"])
model = str(llm["model"]).strip()
# OpenRouter quickstart stores raw model IDs; tolerate pasted "openrouter/<id>" too.
if provider.lower() == "openrouter" and model.lower().startswith("openrouter/"):
model = model[len("openrouter/") :]
if model:
return f"{provider}/{model}"
return "anthropic/claude-sonnet-4-20250514"
def get_preferred_worker_model() -> str | None:
"""Return the user's preferred worker LLM model, or None if not configured.
Reads from the ``worker_llm`` section of ~/.hive/configuration.json.
Returns None when no worker-specific model is set, so callers can
fall back to the default (queen) model via ``get_preferred_model()``.
"""
worker_llm = get_hive_config().get("worker_llm", {})
if worker_llm.get("provider") and worker_llm.get("model"):
provider = str(worker_llm["provider"])
model = str(worker_llm["model"]).strip()
if provider.lower() == "openrouter" and model.lower().startswith("openrouter/"):
model = model[len("openrouter/") :]
if model:
return f"{provider}/{model}"
return None
def get_worker_api_key() -> str | None:
"""Return the API key for the worker LLM, falling back to the default key."""
worker_llm = get_hive_config().get("worker_llm", {})
if not worker_llm:
return get_api_key()
# Worker-specific subscription / env var
if worker_llm.get("use_claude_code_subscription"):
try:
from framework.runner.runner import get_claude_code_token
token = get_claude_code_token()
if token:
return token
except ImportError:
pass
if worker_llm.get("use_codex_subscription"):
try:
from framework.runner.runner import get_codex_token
token = get_codex_token()
if token:
return token
except ImportError:
pass
if worker_llm.get("use_kimi_code_subscription"):
try:
from framework.runner.runner import get_kimi_code_token
token = get_kimi_code_token()
if token:
return token
except ImportError:
pass
if worker_llm.get("use_antigravity_subscription"):
try:
from framework.runner.runner import get_antigravity_token
token = get_antigravity_token()
if token:
return token
except ImportError:
pass
api_key_env_var = worker_llm.get("api_key_env_var")
if api_key_env_var:
return os.environ.get(api_key_env_var)
# Fall back to default key
return get_api_key()
def get_worker_api_base() -> str | None:
"""Return the api_base for the worker LLM, falling back to the default."""
worker_llm = get_hive_config().get("worker_llm", {})
if not worker_llm:
return get_api_base()
if worker_llm.get("use_codex_subscription"):
return "https://chatgpt.com/backend-api/codex"
if worker_llm.get("use_kimi_code_subscription"):
return "https://api.kimi.com/coding"
if worker_llm.get("use_antigravity_subscription"):
# Antigravity uses AntigravityProvider directly — no api_base needed.
return None
if worker_llm.get("api_base"):
return worker_llm["api_base"]
if str(worker_llm.get("provider", "")).lower() == "openrouter":
return OPENROUTER_API_BASE
return None
def get_worker_llm_extra_kwargs() -> dict[str, Any]:
"""Return extra kwargs for the worker LLM provider."""
worker_llm = get_hive_config().get("worker_llm", {})
if not worker_llm:
return get_llm_extra_kwargs()
if worker_llm.get("use_claude_code_subscription"):
api_key = get_worker_api_key()
if api_key:
return {
"extra_headers": {"authorization": f"Bearer {api_key}"},
}
if worker_llm.get("use_codex_subscription"):
api_key = get_worker_api_key()
if api_key:
headers: dict[str, str] = {
"Authorization": f"Bearer {api_key}",
"User-Agent": "CodexBar",
}
try:
from framework.runner.runner import get_codex_account_id
account_id = get_codex_account_id()
if account_id:
headers["ChatGPT-Account-Id"] = account_id
except ImportError:
pass
return {
"extra_headers": headers,
"store": False,
"allowed_openai_params": ["store"],
}
return {}
def get_worker_max_tokens() -> int:
"""Return max_tokens for the worker LLM, falling back to default."""
worker_llm = get_hive_config().get("worker_llm", {})
if worker_llm and "max_tokens" in worker_llm:
return worker_llm["max_tokens"]
return get_max_tokens()
def get_worker_max_context_tokens() -> int:
"""Return max_context_tokens for the worker LLM, falling back to default."""
worker_llm = get_hive_config().get("worker_llm", {})
if worker_llm and "max_context_tokens" in worker_llm:
return worker_llm["max_context_tokens"]
return get_max_context_tokens()
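For reference, a sketch of the `worker_llm` block in `~/.hive/configuration.json` that the getters above read; the concrete values are illustrative, only the key names come from the code.

```python
# Hypothetical configuration content, shown as a Python dict for brevity.
worker_llm_example = {
    "worker_llm": {
        "provider": "openrouter",
        # A pasted "openrouter/<id>" prefix is tolerated and stripped.
        "model": "openrouter/anthropic/claude-sonnet-4-20250514",
        "api_key_env_var": "OPENROUTER_API_KEY",
        "max_tokens": 8192,
        "max_context_tokens": 64000,
    }
}
# With this block: get_preferred_worker_model() -> "openrouter/anthropic/claude-sonnet-4-20250514",
# get_worker_api_base() -> OPENROUTER_API_BASE, and the key is read from $OPENROUTER_API_KEY.
```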
def get_max_tokens() -> int:
"""Return the configured max_tokens, falling back to DEFAULT_MAX_TOKENS."""
return get_hive_config().get("llm", {}).get("max_tokens", DEFAULT_MAX_TOKENS)
DEFAULT_MAX_CONTEXT_TOKENS = 32_000
OPENROUTER_API_BASE = "https://openrouter.ai/api/v1"
def get_max_context_tokens() -> int:
"""Return the configured max_context_tokens, falling back to DEFAULT_MAX_CONTEXT_TOKENS."""
return get_hive_config().get("llm", {}).get("max_context_tokens", DEFAULT_MAX_CONTEXT_TOKENS)
def get_api_key() -> str | None:
"""Return the API key, supporting env var, Claude Code subscription, Codex, and ZAI Code.
@@ -90,6 +253,28 @@ def get_api_key() -> str | None:
except ImportError:
pass
# Kimi Code subscription: read API key from ~/.kimi/config.toml
if llm.get("use_kimi_code_subscription"):
try:
from framework.runner.runner import get_kimi_code_token
token = get_kimi_code_token()
if token:
return token
except ImportError:
pass
# Antigravity subscription: read OAuth token from accounts JSON
if llm.get("use_antigravity_subscription"):
try:
from framework.runner.runner import get_antigravity_token
token = get_antigravity_token()
if token:
return token
except ImportError:
pass
# Standard env-var path (covers ZAI Code and all API-key providers)
api_key_env_var = llm.get("api_key_env_var")
if api_key_env_var:
@@ -97,18 +282,116 @@ def get_api_key() -> str | None:
return None
# OAuth credentials for Antigravity are fetched from the opencode-antigravity-auth project.
# This project reverse-engineered and published the public OAuth credentials
# for Google's Antigravity/Cloud Code Assist API.
# Source: https://github.com/NoeFabris/opencode-antigravity-auth
_ANTIGRAVITY_CREDENTIALS_URL = (
"https://raw.githubusercontent.com/NoeFabris/opencode-antigravity-auth/dev/src/constants.ts"
)
_antigravity_credentials_cache: tuple[str | None, str | None] = (None, None)
def _fetch_antigravity_credentials() -> tuple[str | None, str | None]:
"""Fetch OAuth client ID and secret from the public npm package source on GitHub."""
global _antigravity_credentials_cache
if _antigravity_credentials_cache[0] and _antigravity_credentials_cache[1]:
return _antigravity_credentials_cache
import re
import urllib.request
try:
req = urllib.request.Request(
_ANTIGRAVITY_CREDENTIALS_URL, headers={"User-Agent": "Hive/1.0"}
)
with urllib.request.urlopen(req, timeout=10) as resp:
content = resp.read().decode("utf-8")
id_match = re.search(r'ANTIGRAVITY_CLIENT_ID\s*=\s*"([^"]+)"', content)
secret_match = re.search(r'ANTIGRAVITY_CLIENT_SECRET\s*=\s*"([^"]+)"', content)
client_id = id_match.group(1) if id_match else None
client_secret = secret_match.group(1) if secret_match else None
if client_id and client_secret:
_antigravity_credentials_cache = (client_id, client_secret)
return client_id, client_secret
except Exception as e:
logger.debug("Failed to fetch Antigravity credentials from public source: %s", e)
return None, None
def get_antigravity_client_id() -> str:
"""Return the Antigravity OAuth application client ID.
Checked in order:
1. ``ANTIGRAVITY_CLIENT_ID`` environment variable
2. ``llm.antigravity_client_id`` in ~/.hive/configuration.json
3. Fetch from public source (opencode-antigravity-auth project on GitHub)
"""
env = os.environ.get("ANTIGRAVITY_CLIENT_ID")
if env:
return env
cfg_val = get_hive_config().get("llm", {}).get("antigravity_client_id")
if cfg_val:
return cfg_val
# Fetch from public source
client_id, _ = _fetch_antigravity_credentials()
if client_id:
return client_id
raise RuntimeError("Could not obtain Antigravity OAuth client ID")
def get_antigravity_client_secret() -> str | None:
"""Return the Antigravity OAuth client secret.
Checked in order:
1. ``ANTIGRAVITY_CLIENT_SECRET`` environment variable
2. ``llm.antigravity_client_secret`` in ~/.hive/configuration.json
3. Fetch from public source (opencode-antigravity-auth project on GitHub)
Returns None when not found; token refresh will be skipped and
the caller must use whatever access token is already available.
"""
env = os.environ.get("ANTIGRAVITY_CLIENT_SECRET")
if env:
return env
cfg_val = get_hive_config().get("llm", {}).get("antigravity_client_secret") or None
if cfg_val:
return cfg_val
# Fetch from public source
_, secret = _fetch_antigravity_credentials()
return secret
def get_gcu_enabled() -> bool:
"""Return whether GCU (browser automation) is enabled in user config."""
return get_hive_config().get("gcu_enabled", True)
def get_gcu_viewport_scale() -> float:
"""Return GCU viewport scale factor (0.1-1.0), default 0.8."""
scale = get_hive_config().get("gcu_viewport_scale", 0.8)
if isinstance(scale, (int, float)) and 0.1 <= scale <= 1.0:
return float(scale)
return 0.8
def get_api_base() -> str | None:
"""Return the api_base URL for OpenAI-compatible endpoints, if configured."""
llm = get_hive_config().get("llm", {})
if llm.get("use_codex_subscription"):
# Codex subscription routes through the ChatGPT backend, not api.openai.com.
return "https://chatgpt.com/backend-api/codex"
if llm.get("use_kimi_code_subscription"):
# Kimi Code uses an Anthropic-compatible endpoint (no /v1 suffix).
return "https://api.kimi.com/coding"
if llm.get("use_antigravity_subscription"):
# Antigravity uses AntigravityProvider directly — no api_base needed.
return None
if llm.get("api_base"):
return llm["api_base"]
if str(llm.get("provider", "")).lower() == "openrouter":
return OPENROUTER_API_BASE
return None
def get_llm_extra_kwargs() -> dict[str, Any]:
@@ -164,6 +447,7 @@ class RuntimeConfig:
model: str = field(default_factory=get_preferred_model)
temperature: float = 0.7
max_tokens: int = field(default_factory=get_max_tokens)
max_context_tokens: int = field(default_factory=get_max_context_tokens)
api_key: str | None = field(default_factory=get_api_key)
api_base: str | None = field(default_factory=get_api_base)
extra_kwargs: dict[str, Any] = field(default_factory=get_llm_extra_kwargs)
+25 -7
View File
@@ -142,17 +142,27 @@ def save_aden_api_key(key: str) -> None:
os.environ[ADEN_ENV_VAR] = key
def delete_aden_api_key() -> bool:
"""Remove ADEN_API_KEY from the encrypted store and ``os.environ``.
Returns True if the key existed and was deleted, False otherwise.
"""
deleted = False
try:
from .storage import EncryptedFileStorage
storage = EncryptedFileStorage()
deleted = storage.delete(ADEN_CREDENTIAL_ID)
except (FileNotFoundError, PermissionError) as e:
logger.debug("Could not delete %s from encrypted store: %s", ADEN_CREDENTIAL_ID, e)
except Exception:
logger.warning(
"Unexpected error deleting %s from encrypted store",
ADEN_CREDENTIAL_ID,
exc_info=True,
)
os.environ.pop(ADEN_ENV_VAR, None)
return deleted
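A small hypothetical caller showing how the new boolean return can be surfaced:

```python
# Illustrative only — assumes delete_aden_api_key() above is importable.
if delete_aden_api_key():
    print("Aden API key removed from the encrypted store.")
else:
    print("No stored Aden API key found; nothing to remove.")
```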
# ---------------------------------------------------------------------------
@@ -167,8 +177,10 @@ def _read_credential_key_file() -> str | None:
value = CREDENTIAL_KEY_PATH.read_text(encoding="utf-8").strip()
if value:
return value
except (FileNotFoundError, PermissionError) as e:
logger.debug("Could not read %s: %s", CREDENTIAL_KEY_PATH, e)
except Exception:
logger.warning("Unexpected error reading %s", CREDENTIAL_KEY_PATH, exc_info=True)
return None
@@ -196,6 +208,12 @@ def _read_aden_from_encrypted_store() -> str | None:
cred = storage.load(ADEN_CREDENTIAL_ID)
if cred:
return cred.get_key("api_key")
except (FileNotFoundError, PermissionError, KeyError) as e:
logger.debug("Could not load %s from encrypted store: %s", ADEN_CREDENTIAL_ID, e)
except Exception:
logger.warning(
"Unexpected error loading %s from encrypted store",
ADEN_CREDENTIAL_ID,
exc_info=True,
)
return None
+10
View File
@@ -51,6 +51,16 @@ def ensure_credential_key_env() -> None:
if found and value:
os.environ[var_name] = value
logger.debug("Loaded %s from shell config", var_name)
# Also load the currently configured LLM env var even if it's not in CREDENTIAL_SPECS.
# This keeps quickstart-written keys available to fresh processes on Unix shells.
from framework.config import get_hive_config
llm_env_var = str(get_hive_config().get("llm", {}).get("api_key_env_var", "")).strip()
if llm_env_var and not os.environ.get(llm_env_var):
found, value = check_env_var_in_shell_config(llm_env_var)
if found and value:
os.environ[llm_env_var] = value
logger.debug("Loaded configured LLM env var %s from shell config", llm_env_var)
except ImportError:
pass
View File
+76
View File
@@ -0,0 +1,76 @@
"""CLI command for the LLM debug log viewer."""
import argparse
import subprocess
import sys
from pathlib import Path
_SCRIPT = Path(__file__).resolve().parents[3] / "scripts" / "llm_debug_log_visualizer.py"
def register_debugger_commands(subparsers: argparse._SubParsersAction) -> None:
"""Register the ``hive debugger`` command."""
parser = subparsers.add_parser(
"debugger",
help="Open the LLM debug log viewer",
description=(
"Start a local server that lets you browse LLM debug sessions "
"recorded in ~/.hive/llm_logs. Sessions are loaded on demand so "
"the browser stays responsive."
),
)
parser.add_argument(
"--session",
help="Execution ID to select initially.",
)
parser.add_argument(
"--port",
type=int,
default=0,
help="Port for the local server (0 = auto-pick a free port).",
)
parser.add_argument(
"--logs-dir",
help="Directory containing JSONL log files (default: ~/.hive/llm_logs).",
)
parser.add_argument(
"--limit-files",
type=int,
default=None,
help="Maximum number of newest log files to scan (default: 200).",
)
parser.add_argument(
"--output",
help="Write a static HTML file instead of starting a server.",
)
parser.add_argument(
"--no-open",
action="store_true",
help="Start the server but do not open a browser.",
)
parser.add_argument(
"--include-tests",
action="store_true",
help="Show test/mock sessions (hidden by default).",
)
parser.set_defaults(func=cmd_debugger)
def cmd_debugger(args: argparse.Namespace) -> int:
"""Launch the LLM debug log visualizer."""
cmd: list[str] = [sys.executable, str(_SCRIPT)]
if args.session:
cmd += ["--session", args.session]
if args.port:
cmd += ["--port", str(args.port)]
if args.logs_dir:
cmd += ["--logs-dir", args.logs_dir]
if args.limit_files is not None:
cmd += ["--limit-files", str(args.limit_files)]
if args.output:
cmd += ["--output", args.output]
if args.no_open:
cmd.append("--no-open")
if args.include_tests:
cmd.append("--include-tests")
return subprocess.call(cmd)
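A minimal usage sketch, assuming the standard `hive` argparse wiring shown elsewhere in this PR:

```python
import argparse

# Build a parser, register the new command, then dispatch it.
parser = argparse.ArgumentParser(prog="hive")
subparsers = parser.add_subparsers()
register_debugger_commands(subparsers)

args = parser.parse_args(["debugger", "--port", "8123", "--no-open"])
exit_code = args.func(args)  # runs scripts/llm_debug_log_visualizer.py via subprocess
```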
+45 -11
View File
@@ -33,10 +33,20 @@ class Message:
is_transition_marker: bool = False
# True when this message is real human input (from /chat), not a system prompt
is_client_input: bool = False
# Optional image content blocks (e.g. from browser_screenshot)
image_content: list[dict[str, Any]] | None = None
# True when message contains an activated skill body (AS-10: never prune)
is_skill_content: bool = False
def to_llm_dict(self) -> dict[str, Any]:
"""Convert to OpenAI-format message dict."""
if self.role == "user":
if self.image_content:
blocks: list[dict[str, Any]] = []
if self.content:
blocks.append({"type": "text", "text": self.content})
blocks.extend(self.image_content)
return {"role": "user", "content": blocks}
return {"role": "user", "content": self.content}
if self.role == "assistant":
@@ -47,6 +57,15 @@ class Message:
# role == "tool"
content = f"ERROR: {self.content}" if self.is_error else self.content
if self.image_content:
# Multimodal tool result: text + image content blocks
blocks: list[dict[str, Any]] = [{"type": "text", "text": content}]
blocks.extend(self.image_content)
return {
"role": "tool",
"tool_call_id": self.tool_use_id,
"content": blocks,
}
return {
"role": "tool",
"tool_call_id": self.tool_use_id,
@@ -72,6 +91,8 @@ class Message:
d["is_transition_marker"] = self.is_transition_marker
if self.is_client_input:
d["is_client_input"] = self.is_client_input
if self.image_content is not None:
d["image_content"] = self.image_content
return d
@classmethod
@@ -87,6 +108,7 @@ class Message:
phase_id=data.get("phase_id"),
is_transition_marker=data.get("is_transition_marker", False),
is_client_input=data.get("is_client_input", False),
image_content=data.get("image_content"),
)
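A sketch of the new multimodal path; the keyword arguments and the OpenAI-style `image_url` block are assumptions for illustration, since `to_llm_dict()` simply forwards whatever dicts are placed in `image_content`:

```python
# Illustrative only — field names taken from the snippets above.
msg = Message(
    seq=0,
    role="user",
    content="What does this chart show?",
    image_content=[{"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}],
)
assert msg.to_llm_dict() == {
    "role": "user",
    "content": [
        {"type": "text", "text": "What does this chart show?"},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}},
    ],
}
```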
@@ -307,13 +329,13 @@ class NodeConversation:
def __init__(
self,
system_prompt: str = "",
max_context_tokens: int = 32000,
compaction_threshold: float = 0.8,
output_keys: list[str] | None = None,
store: ConversationStore | None = None,
) -> None:
self._system_prompt = system_prompt
self._max_context_tokens = max_context_tokens
self._compaction_threshold = compaction_threshold
self._output_keys = output_keys
self._store = store
@@ -373,6 +395,7 @@ class NodeConversation:
*,
is_transition_marker: bool = False,
is_client_input: bool = False,
image_content: list[dict[str, Any]] | None = None,
) -> Message:
msg = Message(
seq=self._next_seq,
@@ -381,6 +404,7 @@ class NodeConversation:
phase_id=self._current_phase,
is_transition_marker=is_transition_marker,
is_client_input=is_client_input,
image_content=image_content,
)
self._messages.append(msg)
self._next_seq += 1
@@ -409,6 +433,8 @@ class NodeConversation:
tool_use_id: str,
content: str,
is_error: bool = False,
image_content: list[dict[str, Any]] | None = None,
is_skill_content: bool = False,
) -> Message:
msg = Message(
seq=self._next_seq,
@@ -417,6 +443,8 @@ class NodeConversation:
tool_use_id=tool_use_id,
is_error=is_error,
phase_id=self._current_phase,
image_content=image_content,
is_skill_content=is_skill_content,
)
self._messages.append(msg)
self._next_seq += 1
@@ -525,16 +553,16 @@ class NodeConversation:
self._last_api_input_tokens = actual_input_tokens
def usage_ratio(self) -> float:
"""Current token usage as a fraction of *max_context_tokens*.
Returns 0.0 when ``max_context_tokens`` is zero (unlimited).
"""
if self._max_context_tokens <= 0:
return 0.0
return self.estimate_tokens() / self._max_context_tokens
def needs_compaction(self) -> bool:
return self.estimate_tokens() >= self._max_context_tokens * self._compaction_threshold
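A worked example of the renamed budget fields (illustrative values):

```python
max_context_tokens = 32_000
compaction_threshold = 0.8
estimated = 26_000  # hypothetical estimate_tokens() result

usage_ratio = estimated / max_context_tokens                       # 0.8125
needs_compaction = estimated >= max_context_tokens * compaction_threshold
print(usage_ratio, needs_compaction)  # 0.8125 True; compaction kicks in at 25,600 tokens
```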
# --- Output-key extraction ---------------------------------------------
@@ -610,8 +638,15 @@ class NodeConversation:
continue
if msg.is_error:
continue # never prune errors
if msg.is_skill_content:
continue # never prune activated skill instructions (AS-10)
if msg.content.startswith("[Pruned tool result"):
continue # already pruned
# Tiny results (set_output acks, confirmations) — pruning
# saves negligible space but makes the LLM think the call
# failed, causing costly retries.
if len(msg.content) < 100:
continue
# Phase-aware: protect current phase messages
if self._current_phase and msg.phase_id == self._current_phase:
@@ -901,8 +936,7 @@ class NodeConversation:
full_path = str((spill_path / conv_filename).resolve())
ref_parts.append(
f"[Previous conversation saved to '{full_path}'. "
f"Use load_data('{conv_filename}') to review if needed.]"
)
elif not collapsed_msgs:
ref_parts.append("[Previous freeform messages compacted.]")
@@ -1029,7 +1063,7 @@ class NodeConversation:
await self._store.write_meta(
{
"system_prompt": self._system_prompt,
"max_context_tokens": self._max_context_tokens,
"compaction_threshold": self._compaction_threshold,
"output_keys": self._output_keys,
}
@@ -1062,7 +1096,7 @@ class NodeConversation:
conv = cls(
system_prompt=meta.get("system_prompt", ""),
max_context_tokens=meta.get("max_context_tokens", 32000),
compaction_threshold=meta.get("compaction_threshold", 0.8),
output_keys=meta.get("output_keys"),
store=store,
+3 -3
View File
@@ -37,7 +37,7 @@ async def evaluate_phase_completion(
phase_description: str,
success_criteria: str,
accumulator_state: dict[str, Any],
max_context_tokens: int = 8_196,
) -> PhaseVerdict:
"""Level 2 judge: read the conversation and evaluate quality.
@@ -50,7 +50,7 @@ async def evaluate_phase_completion(
phase_description: Description of the phase
success_criteria: Natural-language criteria for phase completion
accumulator_state: Current output key values
max_context_tokens: Main conversation token budget (judge gets 20%)
Returns:
PhaseVerdict with action and optional feedback
@@ -89,7 +89,7 @@ FEEDBACK: (reason if RETRY, empty if ACCEPT)"""
response = await llm.acomplete(
messages=[{"role": "user", "content": user_prompt}],
system=system_prompt,
max_tokens=max(1024, max_context_tokens // 5),
max_retries=1,
)
if not response.content or not response.content.strip():
+13 -85
View File
@@ -322,7 +322,11 @@ class AsyncEntryPointSpec(BaseModel):
id: str = Field(description="Unique identifier for this entry point")
name: str = Field(description="Human-readable name")
entry_node: str = Field(
default="",
description="Deprecated: Node ID to start execution from. "
"Triggers are graph-level; worker always enters at GraphSpec.entry_node.",
)
trigger_type: str = Field(
default="manual",
description="How this entry point is triggered: webhook, api, timer, event, manual",
@@ -331,6 +335,10 @@ class AsyncEntryPointSpec(BaseModel):
default_factory=dict,
description="Trigger-specific configuration (e.g., webhook URL, timer interval)",
)
task: str = Field(
default="",
description="Worker task string when this trigger fires autonomously",
)
isolation_level: str = Field(
default="shared", description="State isolation: isolated, shared, or synchronized"
)
@@ -368,28 +376,8 @@ class GraphSpec(BaseModel):
edges=[...],
)
For multi-entry-point agents (concurrent streams):
GraphSpec(
id="support-agent-graph",
goal_id="support-001",
entry_node="process-webhook", # Default entry
async_entry_points=[
AsyncEntryPointSpec(
id="webhook",
name="Zendesk Webhook",
entry_node="process-webhook",
trigger_type="webhook",
),
AsyncEntryPointSpec(
id="api",
name="API Handler",
entry_node="process-request",
trigger_type="api",
),
],
nodes=[...],
edges=[...],
)
Triggers (timer, webhook, event) are now defined in ``triggers.json``
alongside the agent directory, not embedded in the graph spec.
"""
id: str
@@ -402,12 +390,6 @@ class GraphSpec(BaseModel):
default_factory=dict,
description="Named entry points for resuming execution. Format: {name: node_id}",
)
async_entry_points: list[AsyncEntryPointSpec] = Field(
default_factory=list,
description=(
"Asynchronous entry points for concurrent execution streams (used with AgentRuntime)"
),
)
terminal_nodes: list[str] = Field(
default_factory=list, description="IDs of nodes that end execution"
)
@@ -486,17 +468,6 @@ class GraphSpec(BaseModel):
return node
return None
def has_async_entry_points(self) -> bool:
"""Check if this graph uses async entry points (multi-stream execution)."""
return len(self.async_entry_points) > 0
def get_async_entry_point(self, entry_point_id: str) -> AsyncEntryPointSpec | None:
"""Get an async entry point by ID."""
for ep in self.async_entry_points:
if ep.id == entry_point_id:
return ep
return None
def get_outgoing_edges(self, node_id: str) -> list[EdgeSpec]:
"""Get all edges leaving a node, sorted by priority."""
edges = [e for e in self.edges if e.source == node_id]
@@ -587,37 +558,6 @@ class GraphSpec(BaseModel):
if not self.get_node(self.entry_node):
errors.append(f"Entry node '{self.entry_node}' not found")
# Check async entry points
seen_entry_ids = set()
for entry_point in self.async_entry_points:
# Check for duplicate IDs
if entry_point.id in seen_entry_ids:
errors.append(f"Duplicate async entry point ID: '{entry_point.id}'")
seen_entry_ids.add(entry_point.id)
# Check entry node exists
if not self.get_node(entry_point.entry_node):
errors.append(
f"Async entry point '{entry_point.id}' references "
f"missing node '{entry_point.entry_node}'"
)
# Validate isolation level
valid_isolation = {"isolated", "shared", "synchronized"}
if entry_point.isolation_level not in valid_isolation:
errors.append(
f"Async entry point '{entry_point.id}' has invalid isolation_level "
f"'{entry_point.isolation_level}'. Valid: {valid_isolation}"
)
# Validate trigger type
valid_triggers = {"webhook", "api", "timer", "event", "manual"}
if entry_point.trigger_type not in valid_triggers:
errors.append(
f"Async entry point '{entry_point.id}' has invalid trigger_type "
f"'{entry_point.trigger_type}'. Valid: {valid_triggers}"
)
# Check terminal nodes exist
for term in self.terminal_nodes:
if not self.get_node(term):
@@ -646,10 +586,6 @@ class GraphSpec(BaseModel):
for entry_point_node in self.entry_points.values():
to_visit.append(entry_point_node)
# Add all async entry points as valid starting points
for async_entry in self.async_entry_points:
to_visit.append(async_entry.entry_node)
# Traverse from all entry points
while to_visit:
current = to_visit.pop()
@@ -666,18 +602,10 @@ class GraphSpec(BaseModel):
for sub_agent_id in sub_agents:
reachable.add(sub_agent_id)
# Build set of async entry point nodes for quick lookup
async_entry_nodes = {ep.entry_node for ep in self.async_entry_points}
for node in self.nodes:
if node.id not in reachable:
# Skip if node is a pause node, entry point target, or async entry
# (pause/resume architecture and async entry points make reachable)
if (
node.id in self.pause_nodes
or node.id in self.entry_points.values()
or node.id in async_entry_nodes
):
# Skip if node is a pause node or entry point target
if node.id in self.pause_nodes or node.id in self.entry_points.values():
continue
errors.append(f"Node '{node.id}' is unreachable from entry")
File diff suppressed because it is too large
+177 -25
View File
@@ -27,11 +27,24 @@ from framework.graph.node import (
SharedMemory,
)
from framework.graph.validator import OutputValidator
from framework.llm.provider import LLMProvider, Tool
from framework.llm.provider import LLMProvider, Tool, ToolUse
from framework.observability import set_trace_context
from framework.runtime.core import Runtime
from framework.schemas.checkpoint import Checkpoint
from framework.storage.checkpoint_store import CheckpointStore
from framework.utils.io import atomic_write
logger = logging.getLogger(__name__)
def _default_max_context_tokens() -> int:
"""Resolve max_context_tokens from global config, falling back to 32000."""
try:
from framework.config import get_max_context_tokens
return get_max_context_tokens()
except Exception:
return 32_000
@dataclass
@@ -138,6 +151,10 @@ class GraphExecutor:
tool_provider_map: dict[str, str] | None = None,
dynamic_tools_provider: Callable | None = None,
dynamic_prompt_provider: Callable | None = None,
iteration_metadata_provider: Callable | None = None,
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
skill_dirs: list[str] | None = None,
):
"""
Initialize the executor.
@@ -163,6 +180,9 @@ class GraphExecutor:
tool list (for mode switching)
dynamic_prompt_provider: Optional callback returning current
system prompt (for phase switching)
skills_catalog_prompt: Available skills catalog for system prompt
protocols_prompt: Default skill operational protocols for system prompt
skill_dirs: Skill base directories for Tier 3 resource access
"""
self.runtime = runtime
self.llm = llm
@@ -183,6 +203,22 @@ class GraphExecutor:
self.tool_provider_map = tool_provider_map
self.dynamic_tools_provider = dynamic_tools_provider
self.dynamic_prompt_provider = dynamic_prompt_provider
self.iteration_metadata_provider = iteration_metadata_provider
self.skills_catalog_prompt = skills_catalog_prompt
self.protocols_prompt = protocols_prompt
self.skill_dirs: list[str] = skill_dirs or []
if protocols_prompt:
self.logger.info(
"GraphExecutor[%s] received protocols_prompt (%d chars)",
stream_id,
len(protocols_prompt),
)
else:
self.logger.warning(
"GraphExecutor[%s] received EMPTY protocols_prompt",
stream_id,
)
# Parallel execution settings
self.enable_parallel_execution = enable_parallel_execution
@@ -212,11 +248,11 @@ class GraphExecutor:
"""
if not self._storage_path:
return
state_path = self._storage_path / "state.json"
try:
import json as _json
from datetime import datetime
if state_path.exists():
state_data = _json.loads(state_path.read_text(encoding="utf-8"))
else:
@@ -239,9 +275,14 @@ class GraphExecutor:
state_data["memory"] = memory_snapshot
state_data["memory_keys"] = list(memory_snapshot.keys())
with atomic_write(state_path, encoding="utf-8") as f:
_json.dump(state_data, f, indent=2)
except Exception:
logger.warning(
"Failed to persist progress state to %s",
state_path,
exc_info=True,
)
def _validate_tools(self, graph: GraphSpec) -> list[str]:
"""
@@ -330,7 +371,7 @@ class GraphExecutor:
_depth,
)
else:
max_tokens = getattr(conversation, "_max_context_tokens", 32000)
target_tokens = max_tokens // 2
target_chars = target_tokens * 4
@@ -403,6 +444,14 @@ class GraphExecutor:
)
return s1 + "\n\n" + s2
def _get_runtime_log_session_id(self) -> str:
"""Return the session-backed execution ID for runtime logging, if any."""
if not self._storage_path:
return ""
if self._storage_path.parent.name != "sessions":
return ""
return self._storage_path.name
async def execute(
self,
graph: GraphSpec,
@@ -696,10 +745,7 @@ class GraphExecutor:
)
if self.runtime_logger:
# Extract session_id from storage_path if available (for unified sessions)
session_id = self._get_runtime_log_session_id()
self.runtime_logger.start_run(goal_id=goal.id, session_id=session_id)
self.logger.info(f"🚀 Starting execution: {goal.name}")
@@ -925,6 +971,33 @@ class GraphExecutor:
self.logger.info(" Executing...")
result = await node_impl.execute(ctx)
# GCU tab cleanup: stop the browser profile after a top-level GCU node
# finishes so tabs don't accumulate. Mirrors the subagent cleanup in
# EventLoopNode._execute_subagent().
if node_spec.node_type == "gcu" and self.tool_executor is not None:
try:
from gcu.browser.session import (
_active_profile as _gcu_profile_var,
)
_gcu_profile = _gcu_profile_var.get()
_stop_use = ToolUse(
id="gcu-cleanup",
name="browser_stop",
input={"profile": _gcu_profile},
)
_stop_result = self.tool_executor(_stop_use)
if asyncio.iscoroutine(_stop_result) or asyncio.isfuture(_stop_result):
await _stop_result
except ImportError:
pass # GCU not installed
except Exception as _gcu_exc:
logger.warning(
"GCU browser_stop failed for profile %r: %s",
_gcu_profile,
_gcu_exc,
)
# Emit node-completed event (skip event_loop nodes)
if self._event_bus and node_spec.node_type != "event_loop":
await self._event_bus.emit_node_loop_completed(
@@ -1350,6 +1423,7 @@ class GraphExecutor:
next_spec = graph.get_node(current_node_id)
if next_spec and next_spec.node_type == "event_loop":
from framework.graph.prompt_composer import (
EXECUTION_SCOPE_PREAMBLE,
build_accounts_prompt,
build_narrative,
build_transition_marker,
@@ -1389,9 +1463,14 @@ class GraphExecutor:
)
# Compose new system prompt (Layer 1 + 2 + 3 + accounts)
# Prepend scope preamble to focus so the LLM stays
# within this node's responsibility.
_focus = next_spec.system_prompt
if next_spec.output_keys and _focus:
_focus = f"{EXECUTION_SCOPE_PREAMBLE}\n\n{_focus}"
new_system = compose_system_prompt(
identity_prompt=getattr(graph, "identity_prompt", None),
focus_prompt=_focus,
narrative=narrative,
accounts_prompt=_node_accounts,
)
@@ -1753,10 +1832,34 @@ class GraphExecutor:
if node_spec.tools:
available_tools = [t for t in self.tools if t.name in node_spec.tools]
# Create scoped memory view.
# When permissions are restricted (non-empty key lists), auto-include
# _-prefixed keys used by default skill protocols so agents can read/write
# operational state (e.g. _working_notes, _batch_ledger) regardless of
# what the node declares. When key lists are empty (unrestricted), leave
# unchanged — empty means "allow all".
read_keys = list(node_spec.input_keys)
write_keys = list(node_spec.output_keys)
# Only extend lists that were already restricted (non-empty).
# Empty means "allow all" — adding keys would accidentally
# activate the permission check and block legitimate reads/writes.
if read_keys or write_keys:
from framework.skills.defaults import SHARED_MEMORY_KEYS as _skill_keys
existing_underscore = [k for k in memory._data if k.startswith("_")]
extra_keys = set(_skill_keys) | set(existing_underscore)
# Only inject into read_keys when it was already non-empty — an empty
# read_keys means "allow all reads" and injecting skill keys would
# inadvertently restrict reads to skill keys only.
for k in extra_keys:
if read_keys and k not in read_keys:
read_keys.append(k)
if write_keys and k not in write_keys:
write_keys.append(k)
scoped_memory = memory.with_permissions(
read_keys=read_keys,
write_keys=write_keys,
)
# Build per-node accounts prompt (filtered to this node's tools)
@@ -1799,6 +1902,10 @@ class GraphExecutor:
shared_node_registry=self.node_registry, # For subagent escalation routing
dynamic_tools_provider=self.dynamic_tools_provider,
dynamic_prompt_provider=self.dynamic_prompt_provider,
iteration_metadata_provider=self.iteration_metadata_provider,
skills_catalog_prompt=self.skills_catalog_prompt,
protocols_prompt=self.protocols_prompt,
skill_dirs=self.skill_dirs,
)
VALID_NODE_TYPES = {
@@ -1872,7 +1979,7 @@ class GraphExecutor:
max_tool_calls_per_turn=lc.get("max_tool_calls_per_turn", 30),
tool_call_overflow_margin=lc.get("tool_call_overflow_margin", 0.5),
stall_detection_threshold=lc.get("stall_detection_threshold", 3),
max_context_tokens=lc.get("max_context_tokens", _default_max_context_tokens()),
max_tool_result_chars=lc.get("max_tool_result_chars", 30_000),
spillover_dir=spillover,
hooks=lc.get("hooks", {}),
@@ -2039,6 +2146,10 @@ class GraphExecutor:
edge=edge,
)
# Track which branch wrote which key for memory conflict detection
fanout_written_keys: dict[str, str] = {} # key -> branch_id that wrote it
fanout_keys_lock = asyncio.Lock()
self.logger.info(f" ⑂ Fan-out: executing {len(branches)} branches in parallel")
for branch in branches.values():
target_spec = graph.get_node(branch.node_id)
@@ -2130,8 +2241,31 @@ class GraphExecutor:
)
if result.success:
# Write outputs to shared memory with conflict detection
conflict_strategy = self._parallel_config.memory_conflict_strategy
for key, value in result.output.items():
async with fanout_keys_lock:
prior_branch = fanout_written_keys.get(key)
if prior_branch and prior_branch != branch.branch_id:
if conflict_strategy == "error":
raise RuntimeError(
f"Memory conflict: key '{key}' already written "
f"by branch '{prior_branch}', "
f"conflicting write from '{branch.branch_id}'"
)
elif conflict_strategy == "first_wins":
self.logger.debug(
f" ⚠ Skipping write to '{key}' "
f"(first_wins: already set by {prior_branch})"
)
continue
else:
# last_wins (default): write and log
self.logger.debug(
f" ⚠ Key '{key}' overwritten "
f"(last_wins: {prior_branch} -> {branch.branch_id})"
)
fanout_written_keys[key] = branch.branch_id
await memory.write_async(key, value)
branch.result = result
@@ -2178,9 +2312,11 @@ class GraphExecutor:
return branch, e
# Execute all branches concurrently
# Execute all branches concurrently with per-branch timeout
timeout = self._parallel_config.branch_timeout_seconds
branch_list = list(branches.values())
tasks = [asyncio.wait_for(execute_single_branch(b), timeout=timeout) for b in branch_list]
results = await asyncio.gather(*tasks, return_exceptions=True)
# Process results
total_tokens = 0
@@ -2188,17 +2324,33 @@ class GraphExecutor:
branch_results: dict[str, NodeResult] = {}
failed_branches: list[ParallelBranch] = []
for i, result in enumerate(results):
branch = branch_list[i]
if isinstance(result, asyncio.TimeoutError):
# Branch timed out
branch.status = "timed_out"
branch.error = f"Branch timed out after {timeout}s"
self.logger.warning(
f" ⏱ Branch {graph.get_node(branch.node_id).name}: "
f"timed out after {timeout}s"
)
path.append(branch.node_id)
failed_branches.append(branch)
elif isinstance(result, Exception):
path.append(branch.node_id)
failed_branches.append(branch)
else:
returned_branch, node_result = result
path.append(returned_branch.node_id)
if node_result is None or isinstance(node_result, Exception):
failed_branches.append(returned_branch)
elif not node_result.success:
failed_branches.append(returned_branch)
else:
total_tokens += node_result.tokens_used
total_latency += node_result.latency_ms
branch_results[returned_branch.branch_id] = node_result
# Handle failures based on config
if failed_branches:
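An isolated sketch of the three `memory_conflict_strategy` behaviours, separate from the executor code above:

```python
# Illustrative helper, not executor code: decide whether a fan-out branch's
# write to a shared-memory key should proceed.
def resolve_conflict(strategy: str, key: str, prior_branch: str | None, branch_id: str) -> bool:
    """Return True when this branch's write should proceed."""
    if prior_branch is None or prior_branch == branch_id:
        return True
    if strategy == "error":
        raise RuntimeError(f"Memory conflict on '{key}': {prior_branch} vs {branch_id}")
    if strategy == "first_wins":
        return False          # keep the earlier branch's value
    return True               # last_wins (default): overwrite and move on

print(resolve_conflict("first_wins", "summary", "branch-a", "branch-b"))  # False
print(resolve_conflict("last_wins", "summary", "branch-a", "branch-b"))   # True
```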
+56 -13
View File
@@ -37,24 +37,45 @@ Follow these rules for reliable, efficient browser interaction.
## Reading Pages
- ALWAYS prefer `browser_snapshot` over `browser_get_text("body")`;
it returns a compact ~1-5 KB accessibility tree vs 100+ KB of raw HTML.
- Use `browser_snapshot_aria` when you need full ARIA properties
for detailed element inspection.
- Do NOT use `browser_screenshot` for reading text content;
it produces huge base64 images with no searchable text.
- Interaction tools (`browser_click`, `browser_type`, `browser_fill`,
`browser_scroll`, etc.) return a page snapshot automatically in their
result. Use it to decide your next action; do NOT call
`browser_snapshot` separately after every action.
Only call `browser_snapshot` when you need a fresh view without
performing an action, or after setting `auto_snapshot=false`.
- Do NOT use `browser_screenshot` to read text; use
`browser_snapshot` for that (compact, searchable, fast).
- DO use `browser_screenshot` when you need visual context:
charts, images, canvas elements, layout verification, or when
the snapshot doesn't capture what you need.
- Only fall back to `browser_get_text` for extracting specific
small elements by CSS selector.
## Navigation & Waiting
- Always call `browser_wait` after navigation actions
(`browser_open`, `browser_navigate`, `browser_click` on links)
to let the page load.
- `browser_navigate` and `browser_open` already wait for the page to
load (`domcontentloaded`). Do NOT call `browser_wait` with no
arguments after navigation; it wastes time.
Only use `browser_wait` when you need a *specific element* or *text*
to appear (pass `selector` or `text`).
- NEVER re-navigate to the same URL after scrolling;
this resets your scroll position and loses loaded content.
## Scrolling
- Use large scroll amounts (~2000) when loading more content;
sites like Twitter and LinkedIn lazy-load content for paging.
- After scrolling, take a new `browser_snapshot` to see updated content.
- The scroll result includes a snapshot automatically; no need to call
`browser_snapshot` separately.
## Batching Actions
- You can call multiple tools in a single turn; they execute in parallel.
ALWAYS batch independent actions together. Examples:
- Fill multiple form fields in one turn.
- Navigate + snapshot in one turn.
- Click + scroll if targeting different elements.
- When batching, set `auto_snapshot=false` on all but the last action
to avoid redundant snapshots.
- Aim for 3-5 tool calls per turn minimum. One tool call per turn is
wasteful.
## Error Recovery
- If a tool fails, retry once with the same approach.
@@ -65,11 +86,33 @@ Follow these rules for reliable, efficient browser interaction.
then `browser_start`, then retry.
## Tab Management
- Use `browser_tabs` to list open tabs when managing multiple pages.
- Pass `target_id` to tools when operating on a specific tab.
- Open background tabs with `browser_open(url=..., background=true)`
to avoid losing your current context.
- Close tabs you no longer need with `browser_close` to free resources.
**Close tabs as soon as you are done with them**, not only at the end of the task.
After reading or extracting data from a tab, close it immediately.
**Decision rules:**
- Finished reading/extracting from a tab? → `browser_close(target_id=...)`
- Completed a multi-tab workflow? → `browser_close_finished()` to clean up all your tabs
- More than 3 tabs open? → stop and close finished ones before opening more
- Popup appeared that you didn't need? → close it immediately
**Origin awareness:** `browser_tabs` returns an `origin` field for each tab:
- `"agent"` → you opened it; you own it; close it when done
- `"popup"` → opened by a link or script; close after extracting what you need
- `"startup"` or `"user"` → leave these alone unless the task requires it
**Cleanup tools:**
- `browser_close(target_id=...)` → close one specific tab
- `browser_close_finished()` → close all your agent/popup tabs (safe: leaves startup/user tabs)
- `browser_close_all()` → close everything except the active tab (use only for full reset)
**Multi-tab workflow pattern:**
1. Open background tabs with `browser_open(url=..., background=true)` to stay on current tab
2. Process each tab and close it with `browser_close` when done
3. When the full workflow completes, call `browser_close_finished()` to confirm cleanup
4. Check `browser_tabs` at any point; it shows `origin` and `age_seconds` per tab
Never accumulate tabs. Treat every tab you open as a resource you must free.
## Login & Auth Walls
- If you see a "Log in" or "Sign up" prompt instead of expected
-8
View File
@@ -167,14 +167,6 @@ class Goal(BaseModel):
return met_weight >= total_weight * 0.9 # 90% threshold
def check_constraint(self, constraint_id: str, value: Any) -> bool:
"""Check if a specific constraint is satisfied."""
for c in self.constraints:
if c.id == constraint_id:
# This would be expanded with actual evaluation logic
return True
return True
def to_prompt_context(self) -> str:
"""Generate context string for LLM prompts.
+10
View File
@@ -565,6 +565,16 @@ class NodeContext:
# staging / running) without restarting the conversation.
dynamic_prompt_provider: Any = None # Callable[[], str] | None
# Skill system prompts — injected by the skill discovery pipeline
skills_catalog_prompt: str = "" # Available skills XML catalog
protocols_prompt: str = "" # Default skill operational protocols
skill_dirs: list[str] = field(default_factory=list) # Skill base dirs for resource access
# Per-iteration metadata provider — when set, EventLoopNode merges
# the returned dict into node_loop_iteration event data. Used by
# the queen to record the current phase per iteration.
iteration_metadata_provider: Any = None # Callable[[], dict] | None
@dataclass
class NodeResult:
+71 -4
View File
@@ -26,6 +26,16 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# Injected into every worker node's system prompt so the LLM understands
# it is one step in a multi-node pipeline and should not overreach.
EXECUTION_SCOPE_PREAMBLE = (
"EXECUTION SCOPE: You are one node in a multi-step workflow graph. "
"Focus ONLY on the task described in your instructions below. "
"Call set_output() for each of your declared output keys, then stop. "
"Do NOT attempt work that belongs to other nodes — the framework "
"routes data between nodes automatically."
)
def _with_datetime(prompt: str) -> str:
"""Append current datetime with local timezone to a system prompt."""
@@ -140,14 +150,24 @@ def compose_system_prompt(
focus_prompt: str | None,
narrative: str | None = None,
accounts_prompt: str | None = None,
skills_catalog_prompt: str | None = None,
protocols_prompt: str | None = None,
execution_preamble: str | None = None,
node_type_preamble: str | None = None,
) -> str:
"""Compose the three-layer system prompt.
"""Compose the multi-layer system prompt.
Args:
identity_prompt: Layer 1 static agent identity (from GraphSpec).
focus_prompt: Layer 3 per-node focus directive (from NodeSpec.system_prompt).
narrative: Layer 2 auto-generated from conversation state.
accounts_prompt: Connected accounts block (sits between identity and narrative).
skills_catalog_prompt: Available skills catalog XML (Agent Skills standard).
protocols_prompt: Default skill operational protocols section.
execution_preamble: EXECUTION_SCOPE_PREAMBLE for worker nodes
(prepended before focus so the LLM knows its pipeline scope).
node_type_preamble: Node-type-specific preamble, e.g. GCU browser
best-practices prompt (prepended before focus).
Returns:
Composed system prompt with all layers present, plus current datetime.
@@ -162,10 +182,27 @@ def compose_system_prompt(
if accounts_prompt:
parts.append(f"\n{accounts_prompt}")
# Skills catalog (discovered skills available for activation)
if skills_catalog_prompt:
parts.append(f"\n{skills_catalog_prompt}")
# Operational protocols (default skill behavioral guidance)
if protocols_prompt:
parts.append(f"\n{protocols_prompt}")
# Layer 2: Narrative (what's happened so far)
if narrative:
parts.append(f"\n--- Context (what has happened so far) ---\n{narrative}")
# Execution scope preamble (worker nodes — tells the LLM it is one
# step in a multi-node pipeline and should not overreach)
if execution_preamble:
parts.append(f"\n{execution_preamble}")
# Node-type preamble (e.g. GCU browser best-practices)
if node_type_preamble:
parts.append(f"\n{node_type_preamble}")
# Layer 3: Focus (current phase directive)
if focus_prompt:
parts.append(f"\n--- Current Focus ---\n{focus_prompt}")
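A minimal sketch of how the layers compose, with placeholder strings for each layer:

```python
# Illustrative only — assumes compose_system_prompt and EXECUTION_SCOPE_PREAMBLE
# from this module; all layer strings below are placeholders.
prompt = compose_system_prompt(
    identity_prompt="You are the research worker for Acme Corp.",        # Layer 1
    focus_prompt="Summarize the scraped articles into bullet points.",   # Layer 3
    narrative="Phase 1 collected 12 article URLs.",                      # Layer 2
    skills_catalog_prompt="<skills>...</skills>",
    protocols_prompt="## Operational protocols\n- Keep working notes in _working_notes.",
    execution_preamble=EXECUTION_SCOPE_PREAMBLE,
)
# Resulting order: identity -> skills catalog -> protocols -> narrative
# -> execution preamble -> "--- Current Focus ---" (accounts/node-type layers omitted here).
```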
@@ -255,7 +292,9 @@ def build_transition_marker(
sections.append(f"\nCompleted: {previous_node.name}")
sections.append(f" {previous_node.description}")
# Outputs in memory
# Outputs in memory — use file references for large values so the
# next node loads full data from disk instead of seeing truncated
# inline previews that look deceptively complete.
all_memory = memory.read_all()
if all_memory:
memory_lines: list[str] = []
@@ -263,7 +302,29 @@ def build_transition_marker(
if value is None:
continue
val_str = str(value)
if len(val_str) > 300:
if len(val_str) > 300 and data_dir:
# Auto-spill large transition values to data files
import json as _json
data_path = Path(data_dir)
data_path.mkdir(parents=True, exist_ok=True)
ext = ".json" if isinstance(value, (dict, list)) else ".txt"
filename = f"output_{key}{ext}"
try:
write_content = (
_json.dumps(value, indent=2, ensure_ascii=False)
if isinstance(value, (dict, list))
else str(value)
)
(data_path / filename).write_text(write_content, encoding="utf-8")
file_size = (data_path / filename).stat().st_size
val_str = (
f"[Saved to '{filename}' ({file_size:,} bytes). "
f"Use load_data(filename='{filename}') to access.]"
)
except Exception:
val_str = val_str[:300] + "..."
elif len(val_str) > 300:
val_str = val_str[:300] + "..."
memory_lines.append(f" {key}: {val_str}")
if memory_lines:
@@ -280,7 +341,7 @@ def build_transition_marker(
]
if file_lines:
sections.append(
"\nData files (use read_file to access):\n" + "\n".join(file_lines)
"\nData files (use load_data to access):\n" + "\n".join(file_lines)
)
# Agent working memory
@@ -294,6 +355,12 @@ def build_transition_marker(
# Next phase
sections.append(f"\nNow entering: {next_node.name}")
sections.append(f" {next_node.description}")
if next_node.output_keys:
sections.append(
f"\nYour ONLY job in this phase: complete the task above and call "
f"set_output() for {next_node.output_keys}. Do NOT do work that "
f"belongs to later phases."
)
# Reflection prompt (engineered metacognition)
sections.append(
+15 -3
View File
@@ -115,11 +115,23 @@ class SafeEvalVisitor(ast.NodeVisitor):
return True
def visit_BoolOp(self, node: ast.BoolOp) -> Any:
values = [self.visit(v) for v in node.values]
# Short-circuit evaluation to match Python semantics.
# Previously all operands were eagerly evaluated, which broke
# guard patterns like: ``x is not None and x.get("key")``
if isinstance(node.op, ast.And):
return all(values)
result = True
for v in node.values:
result = self.visit(v)
if not result:
return result
return result
elif isinstance(node.op, ast.Or):
return any(values)
result = False
for v in node.values:
result = self.visit(v)
if result:
return result
return result
raise ValueError(f"Boolean operator {type(node.op).__name__} is not allowed")
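A plain-Python illustration (not the framework class) of why eager evaluation breaks the guard expressions that the short-circuit fix above now supports:

```python
# Minimal sketch: `x is not None and x.get("key")` must stop at the first
# falsy operand, otherwise the second operand is evaluated on None and raises.
x = None

def eager_and(*thunks):
    values = [t() for t in thunks]      # evaluates every operand up front
    return all(values)

def short_circuit_and(*thunks):
    result = True
    for t in thunks:
        result = t()
        if not result:
            return result               # stop at the first falsy operand
    return result

print(short_circuit_and(lambda: x is not None, lambda: x.get("key")))  # False
# eager_and(lambda: x is not None, lambda: x.get("key"))  # AttributeError on NoneType
```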
def visit_IfExp(self, node: ast.IfExp) -> Any:
+706
View File
@@ -0,0 +1,706 @@
"""Antigravity (Google internal Cloud Code Assist) LLM provider.
Antigravity is Google's unified gateway API that routes requests to Gemini,
Claude, and GPT-OSS models through a single Gemini-style interface. It is
NOT the public ``generativelanguage.googleapis.com`` API.
Authentication uses Google OAuth2. Token refresh is done directly with the
OAuth client secret; no local proxy required.
Credential sources (checked in order):
1. ``~/.hive/antigravity-accounts.json`` (native OAuth implementation)
2. Antigravity IDE SQLite state DB (macOS / Linux)
"""
from __future__ import annotations
import json
import logging
import re
import time
import uuid
from collections.abc import AsyncIterator, Callable, Iterator
from pathlib import Path
from typing import Any
from framework.llm.provider import LLMProvider, LLMResponse, Tool
from framework.llm.stream_events import (
FinishEvent,
StreamErrorEvent,
StreamEvent,
TextDeltaEvent,
TextEndEvent,
ToolCallEvent,
)
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
_TOKEN_URL = "https://oauth2.googleapis.com/token"
# Fallback order: daily sandbox → autopush sandbox → production
_ENDPOINTS = [
"https://daily-cloudcode-pa.sandbox.googleapis.com",
"https://autopush-cloudcode-pa.sandbox.googleapis.com",
"https://cloudcode-pa.googleapis.com",
]
_DEFAULT_PROJECT_ID = "rising-fact-p41fc"
_TOKEN_REFRESH_BUFFER_SECS = 60
# Credentials file in ~/.hive/ (native implementation)
_ACCOUNTS_FILE = Path.home() / ".hive" / "antigravity-accounts.json"
_IDE_STATE_DB_MAC = (
Path.home()
/ "Library"
/ "Application Support"
/ "Antigravity"
/ "User"
/ "globalStorage"
/ "state.vscdb"
)
_IDE_STATE_DB_LINUX = (
Path.home() / ".config" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
)
_IDE_STATE_DB_KEY = "antigravityUnifiedStateSync.oauthToken"
_BASE_HEADERS: dict[str, str] = {
# Mimic the Antigravity Electron app so the API accepts the request.
"User-Agent": (
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 "
"(KHTML, like Gecko) Antigravity/1.18.3 Chrome/138.0.7204.235 "
"Electron/37.3.1 Safari/537.36"
),
"X-Goog-Api-Client": "google-cloud-sdk vscode_cloudshelleditor/0.1",
"Client-Metadata": '{"ideType":"ANTIGRAVITY","platform":"MACOS","pluginType":"GEMINI"}',
}
# ---------------------------------------------------------------------------
# Credential loading helpers
# ---------------------------------------------------------------------------
def _load_from_json_file() -> tuple[str | None, str | None, str, float]:
"""Read credentials from JSON accounts file.
Reads from ~/.hive/antigravity-accounts.json.
Returns ``(access_token | None, refresh_token | None, project_id, expires_at)``.
``expires_at`` is a Unix timestamp (seconds); 0.0 means unknown.
"""
if not _ACCOUNTS_FILE.exists():
return None, None, _DEFAULT_PROJECT_ID, 0.0
try:
with open(_ACCOUNTS_FILE, encoding="utf-8") as fh:
data = json.load(fh)
except (OSError, json.JSONDecodeError) as exc:
logger.debug("Failed to read Antigravity accounts file: %s", exc)
return None, None, _DEFAULT_PROJECT_ID, 0.0
accounts = data.get("accounts", [])
if not accounts:
return None, None, _DEFAULT_PROJECT_ID, 0.0
account = next((a for a in accounts if a.get("enabled", True) is not False), accounts[0])
schema_version = data.get("schemaVersion", 1)
if schema_version >= 4:
# V4 schema: refresh = "refreshToken|projectId[|managedProjectId]"
refresh_str = account.get("refresh", "")
parts = refresh_str.split("|") if refresh_str else []
refresh_token: str | None = parts[0] if parts else None
project_id = parts[1] if len(parts) >= 2 and parts[1] else _DEFAULT_PROJECT_ID
access_token: str | None = account.get("access")
expires_ms: int = account.get("expires", 0)
expires_at = float(expires_ms) / 1000.0 if expires_ms else 0.0
# Treat near-expiry tokens as absent so _ensure_token() triggers a refresh.
if access_token and expires_at and time.time() >= expires_at - _TOKEN_REFRESH_BUFFER_SECS:
access_token = None
expires_at = 0.0
return access_token, refresh_token, project_id, expires_at
else:
# V1-V3 schema: plain accessToken / refreshToken fields
access_token = account.get("accessToken")
refresh_token = account.get("refreshToken")
# Estimate expiry from last_refresh + 1 h
last_refresh_str: str | None = data.get("last_refresh")
expires_at = 0.0
if last_refresh_str:
try:
from datetime import datetime # noqa: PLC0415
ts = datetime.fromisoformat(last_refresh_str.replace("Z", "+00:00")).timestamp()
expires_at = ts + 3600.0
if time.time() >= expires_at - _TOKEN_REFRESH_BUFFER_SECS:
access_token = None
except (ValueError, TypeError):
pass
return access_token, refresh_token, _DEFAULT_PROJECT_ID, expires_at
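A hypothetical V4 accounts file, matching the fields parsed above (all token and project values are fake):

```python
# Sketch of ~/.hive/antigravity-accounts.json content, shown as a Python dict.
accounts_v4 = {
    "schemaVersion": 4,
    "accounts": [
        {
            "enabled": True,
            "access": "ya29.fake-access-token",
            "expires": 1789000000000,                            # milliseconds since epoch
            "refresh": "1//fake-refresh-token|my-gcp-project",   # "refreshToken|projectId"
        }
    ],
}
# _load_from_json_file() would return the access token (unless near expiry),
# the refresh token "1//fake-refresh-token", and project_id "my-gcp-project".
```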
def _load_from_ide_db() -> tuple[str | None, str | None, float]:
"""Extract ``(access_token, refresh_token, expires_at)`` from the IDE SQLite DB."""
import base64 # noqa: PLC0415
import sqlite3 # noqa: PLC0415
for db_path in (_IDE_STATE_DB_MAC, _IDE_STATE_DB_LINUX):
if not db_path.exists():
continue
try:
con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
try:
row = con.execute(
"SELECT value FROM ItemTable WHERE key = ?",
(_IDE_STATE_DB_KEY,),
).fetchone()
finally:
con.close()
if not row:
continue
blob = base64.b64decode(row[0])
candidates = re.findall(rb"[A-Za-z0-9+/=_\-]{40,}", blob)
access_token: str | None = None
refresh_token: str | None = None
for candidate in candidates:
try:
padded = candidate + b"=" * (-len(candidate) % 4)
inner = base64.urlsafe_b64decode(padded)
except Exception:
continue
if not access_token:
m = re.search(rb"ya29\.[A-Za-z0-9_\-\.]+", inner)
if m:
access_token = m.group(0).decode("ascii")
if not refresh_token:
m = re.search(rb"1//[A-Za-z0-9_\-\.]+", inner)
if m:
refresh_token = m.group(0).decode("ascii")
if access_token and refresh_token:
break
if access_token:
# Estimate expiry from DB mtime (IDE refreshes while running)
mtime = db_path.stat().st_mtime
expires_at = mtime + 3600.0
return access_token, refresh_token, expires_at
except Exception as exc:
logger.debug("Failed to read Antigravity IDE state DB: %s", exc)
continue
return None, None, 0.0
def _do_token_refresh(refresh_token: str) -> tuple[str, float] | None:
"""POST to Google OAuth endpoint and return ``(new_access_token, expires_at)``.
The client secret is sourced via ``get_antigravity_client_secret()`` (env var,
config file, or npm package fallback). When unavailable the refresh is attempted
without it; Google will reject it for web-app clients, but the npm fallback in
``get_antigravity_client_secret()`` should ensure the secret is found at runtime.
Returns None when the HTTP request fails.
"""
from framework.config import get_antigravity_client_secret # noqa: PLC0415
client_secret = get_antigravity_client_secret()
if not client_secret:
logger.debug(
"Antigravity client secret not configured — attempting refresh without it. "
"Set ANTIGRAVITY_CLIENT_SECRET or run quickstart to configure."
)
import urllib.error # noqa: PLC0415
import urllib.parse # noqa: PLC0415
import urllib.request # noqa: PLC0415
from framework.config import get_antigravity_client_id # noqa: PLC0415
params: dict[str, str] = {
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"client_id": get_antigravity_client_id(),
}
if client_secret:
params["client_secret"] = client_secret
body = urllib.parse.urlencode(params).encode("utf-8")
req = urllib.request.Request(
_TOKEN_URL,
data=body,
headers={"Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=15) as resp: # noqa: S310
payload = json.loads(resp.read())
access_token: str = payload["access_token"]
expires_in: int = payload.get("expires_in", 3600)
logger.debug("Antigravity token refreshed successfully")
return access_token, time.time() + expires_in
except Exception as exc:
logger.debug("Antigravity token refresh failed: %s", exc)
return None
# ---------------------------------------------------------------------------
# Message conversion helpers
# ---------------------------------------------------------------------------
def _clean_tool_name(name: str) -> str:
"""Sanitize a tool name for the Antigravity function-calling schema."""
name = re.sub(r"[/\s]", "_", name)
if name and not (name[0].isalpha() or name[0] == "_"):
name = "_" + name
return name[:64]
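# Illustrative (not part of the module): _clean_tool_name("web/search tool")
# would yield "web_search_tool", and a name starting with a digit gets a "_" prefix.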
def _to_gemini_contents(
messages: list[dict[str, Any]],
thought_sigs: dict[str, str] | None = None,
) -> list[dict[str, Any]]:
"""Convert OpenAI-format messages to Gemini-style ``contents`` array."""
# Pre-build a map tool_call_id → function_name from assistant messages.
# Tool result messages (role="tool") only carry tool_call_id, not the name,
# but Gemini requires functionResponse.name to match the functionCall.name.
tc_id_to_name: dict[str, str] = {}
for msg in messages:
if msg.get("role") == "assistant":
for tc in msg.get("tool_calls") or []:
tc_id = tc.get("id")
fn_name = tc.get("function", {}).get("name", "")
if tc_id and fn_name:
tc_id_to_name[tc_id] = fn_name
contents: list[dict[str, Any]] = []
# Consecutive tool-result messages must be batched into one user turn.
pending_tool_parts: list[dict[str, Any]] = []
def _flush_tool_results() -> None:
if pending_tool_parts:
contents.append({"role": "user", "parts": list(pending_tool_parts)})
pending_tool_parts.clear()
for msg in messages:
role = msg.get("role", "user")
content = msg.get("content")
if role == "system":
continue # Handled via systemInstruction, not in contents.
if role == "tool":
# OpenAI tool result → Gemini functionResponse part.
result_str = content if isinstance(content, str) else str(content or "")
tc_id = msg.get("tool_call_id", "")
# Look up function name from the pre-built map; fall back to msg.name.
fn_name = tc_id_to_name.get(tc_id) or msg.get("name", "")
pending_tool_parts.append(
{
"functionResponse": {
"name": fn_name,
"id": tc_id,
"response": {"content": result_str},
}
}
)
continue
_flush_tool_results()
gemini_role = "model" if role == "assistant" else "user"
parts: list[dict[str, Any]] = []
if isinstance(content, str) and content:
parts.append({"text": content})
elif isinstance(content, list):
for block in content:
if not isinstance(block, dict):
continue
if block.get("type") == "text":
text = block.get("text", "")
if text:
parts.append({"text": text})
# Other block types (image_url etc.) skipped.
# Assistant messages may carry OpenAI-style tool_calls.
for tc in msg.get("tool_calls") or []:
fn = tc.get("function", {})
try:
args = json.loads(fn.get("arguments", "{}") or "{}")
except (json.JSONDecodeError, TypeError):
args = {}
tc_id = tc.get("id", str(uuid.uuid4()))
fc_part: dict[str, Any] = {
"functionCall": {
"name": fn.get("name", ""),
"args": args,
"id": tc_id,
}
}
if thought_sigs:
sig = thought_sigs.get(tc_id, "")
if sig:
fc_part["thoughtSignature"] = sig # part-level, not inside functionCall
parts.append(fc_part)
if parts:
contents.append({"role": gemini_role, "parts": parts})
_flush_tool_results()
# Gemini requires the first turn to be a user turn. Drop any leading
# model messages so the API doesn't reject with a 400.
while contents and contents[0].get("role") == "model":
contents.pop(0)
return contents
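# Illustrative sketch (not part of the module): how an OpenAI-style tool-call
# round trip maps onto Gemini "contents" under the rules above. The message
# shapes below are hypothetical.
#
#   msgs = [
#       {"role": "user", "content": "Weather in Paris?"},
#       {"role": "assistant", "tool_calls": [
#           {"id": "tc1", "function": {"name": "get_weather",
#                                      "arguments": '{"city": "Paris"}'}}]},
#       {"role": "tool", "tool_call_id": "tc1", "content": "18 C, sunny"},
#   ]
#   _to_gemini_contents(msgs)
#   # -> [{"role": "user", "parts": [{"text": "Weather in Paris?"}]},
#   #     {"role": "model", "parts": [{"functionCall": {
#   #         "name": "get_weather", "args": {"city": "Paris"}, "id": "tc1"}}]},
#   #     {"role": "user", "parts": [{"functionResponse": {
#   #         "name": "get_weather", "id": "tc1",
#   #         "response": {"content": "18 C, sunny"}}}]}]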
# ---------------------------------------------------------------------------
# Response parsing helpers
# ---------------------------------------------------------------------------
def _map_finish_reason(reason: str) -> str:
return {"STOP": "stop", "MAX_TOKENS": "max_tokens", "OTHER": "tool_use"}.get(
(reason or "").upper(), "stop"
)
def _parse_complete_response(raw: dict[str, Any], model: str) -> LLMResponse:
"""Parse a non-streaming Antigravity response dict → LLMResponse."""
payload: dict[str, Any] = raw.get("response", raw)
candidates: list[dict[str, Any]] = payload.get("candidates", [])
usage: dict[str, Any] = payload.get("usageMetadata", {})
text_parts: list[str] = []
if candidates:
for part in candidates[0].get("content", {}).get("parts", []):
if "text" in part and not part.get("thought"):
text_parts.append(part["text"])
return LLMResponse(
content="".join(text_parts),
model=payload.get("modelVersion", model),
input_tokens=usage.get("promptTokenCount", 0),
output_tokens=usage.get("candidatesTokenCount", 0),
stop_reason=_map_finish_reason(candidates[0].get("finishReason", "") if candidates else ""),
raw_response=raw,
)
def _parse_sse_stream(
response: Any,
model: str,
on_thought_signature: Callable[[str, str], None] | None = None,
) -> Iterator[StreamEvent]:
"""Parse Antigravity SSE response line-by-line → StreamEvents.
Each SSE line looks like::
data: {"response": {"candidates": [...], "usageMetadata": {...}}, "traceId": "..."}
"""
accumulated = ""
input_tokens = 0
output_tokens = 0
finish_reason = ""
for raw_line in response:
line: str = raw_line.decode("utf-8", errors="replace").rstrip("\r\n")
if not line.startswith("data:"):
continue
data_str = line[5:].strip()
if not data_str or data_str == "[DONE]":
continue
try:
data: dict[str, Any] = json.loads(data_str)
except json.JSONDecodeError:
continue
# The outer envelope is {"response": {...}, "traceId": "..."}.
payload: dict[str, Any] = data.get("response", data)
usage = payload.get("usageMetadata", {})
if usage:
input_tokens = usage.get("promptTokenCount", input_tokens)
output_tokens = usage.get("candidatesTokenCount", output_tokens)
for candidate in payload.get("candidates", []):
fr = candidate.get("finishReason", "")
if fr:
finish_reason = fr
for part in candidate.get("content", {}).get("parts", []):
if "text" in part and not part.get("thought"):
delta: str = part["text"]
accumulated += delta
yield TextDeltaEvent(content=delta, snapshot=accumulated)
elif "functionCall" in part:
fc: dict[str, Any] = part["functionCall"]
tool_use_id = fc.get("id") or str(uuid.uuid4())
thought_sig = part.get("thoughtSignature", "") # sibling of functionCall
if thought_sig and on_thought_signature:
on_thought_signature(tool_use_id, thought_sig)
args = fc.get("args", {})
if isinstance(args, str):
try:
args = json.loads(args)
except json.JSONDecodeError:
args = {}
yield ToolCallEvent(
tool_use_id=tool_use_id,
tool_name=fc.get("name", ""),
tool_input=args,
)
if accumulated:
yield TextEndEvent(full_text=accumulated)
yield FinishEvent(
stop_reason=_map_finish_reason(finish_reason),
input_tokens=input_tokens,
output_tokens=output_tokens,
model=model,
)
# ---------------------------------------------------------------------------
# Provider
# ---------------------------------------------------------------------------
class AntigravityProvider(LLMProvider):
"""LLM provider for Google's internal Antigravity Code Assist gateway.
No local proxy required. Handles OAuth token refresh, Gemini-format
request/response conversion, and SSE streaming directly.
"""
def __init__(self, model: str = "gemini-3-flash") -> None:
# Strip any provider prefix ("openai/gemini-3-flash" → "gemini-3-flash").
if "/" in model:
model = model.split("/", 1)[1]
self.model = model
self._access_token: str | None = None
self._refresh_token: str | None = None
self._project_id: str = _DEFAULT_PROJECT_ID
self._token_expires_at: float = 0.0
self._thought_sigs: dict[str, str] = {} # tool_use_id → thoughtSignature
self._init_credentials()
# --- Credential management -------------------------------------------- #
def _init_credentials(self) -> None:
"""Load credentials from the best available source."""
access, refresh, project_id, expires_at = _load_from_json_file()
if refresh:
self._refresh_token = refresh
self._project_id = project_id
self._access_token = access
self._token_expires_at = expires_at
return
# Fall back to IDE state DB.
access, refresh, expires_at = _load_from_ide_db()
if access:
self._access_token = access
self._refresh_token = refresh
self._token_expires_at = expires_at
def has_credentials(self) -> bool:
"""Return True if any credential is available."""
return bool(self._access_token or self._refresh_token)
def _ensure_token(self) -> str:
"""Return a valid access token, refreshing via OAuth if needed."""
if (
self._access_token
and self._token_expires_at
and time.time() < self._token_expires_at - _TOKEN_REFRESH_BUFFER_SECS
):
return self._access_token
if self._refresh_token:
result = _do_token_refresh(self._refresh_token)
if result:
self._access_token, self._token_expires_at = result
return self._access_token
if self._access_token:
logger.warning("Using potentially stale Antigravity access token")
return self._access_token
raise RuntimeError(
"No valid Antigravity credentials. "
"Run: uv run python core/antigravity_auth.py auth account add"
)
# --- Request building -------------------------------------------------- #
def _build_body(
self,
messages: list[dict[str, Any]],
system: str,
tools: list[Tool] | None,
max_tokens: int,
) -> dict[str, Any]:
contents = _to_gemini_contents(messages, self._thought_sigs)
inner: dict[str, Any] = {
"contents": contents,
"generationConfig": {"maxOutputTokens": max_tokens},
}
if system:
inner["systemInstruction"] = {"parts": [{"text": system}]}
if tools:
inner["tools"] = [
{
"functionDeclarations": [
{
"name": _clean_tool_name(t.name),
"description": t.description,
"parameters": t.parameters
or {
"type": "object",
"properties": {},
},
}
for t in tools
]
}
]
return {
"project": self._project_id,
"model": self.model,
"request": inner,
"requestType": "agent",
"userAgent": "antigravity",
"requestId": f"agent-{uuid.uuid4()}",
}
# --- HTTP transport ---------------------------------------------------- #
def _post(self, body: dict[str, Any], *, streaming: bool) -> Any:
"""POST to the Antigravity endpoint, falling back through the endpoint list."""
import urllib.error # noqa: PLC0415
import urllib.request # noqa: PLC0415
token = self._ensure_token()
body_bytes = json.dumps(body).encode("utf-8")
path = (
"/v1internal:streamGenerateContent?alt=sse"
if streaming
else "/v1internal:generateContent"
)
headers = {
**_BASE_HEADERS,
"Authorization": f"Bearer {token}",
"Content-Type": "application/json",
}
if streaming:
headers["Accept"] = "text/event-stream"
last_exc: Exception | None = None
for base_url in _ENDPOINTS:
url = f"{base_url}{path}"
req = urllib.request.Request(url, data=body_bytes, headers=headers, method="POST")
try:
return urllib.request.urlopen(req, timeout=120) # noqa: S310
except urllib.error.HTTPError as exc:
if exc.code in (401, 403) and self._refresh_token:
# Token rejected — refresh once and retry this endpoint.
result = _do_token_refresh(self._refresh_token)
if result:
self._access_token, self._token_expires_at = result
headers["Authorization"] = f"Bearer {self._access_token}"
req2 = urllib.request.Request(
url, data=body_bytes, headers=headers, method="POST"
)
try:
return urllib.request.urlopen(req2, timeout=120) # noqa: S310
except urllib.error.HTTPError as exc2:
last_exc = exc2
continue
last_exc = exc
continue
elif exc.code >= 500:
last_exc = exc
continue
# Include the API response body in the exception for easier debugging.
try:
err_body = exc.read().decode("utf-8", errors="replace")
except Exception:
err_body = "(unreadable)"
raise RuntimeError(f"Antigravity HTTP {exc.code} from {url}: {err_body}") from exc
except (urllib.error.URLError, OSError) as exc:
last_exc = exc
continue
raise RuntimeError(
f"All Antigravity endpoints failed. Last error: {last_exc}"
) from last_exc
# --- LLMProvider interface --------------------------------------------- #
def complete(
self,
messages: list[dict[str, Any]],
system: str = "",
tools: list[Tool] | None = None,
max_tokens: int = 1024,
response_format: dict[str, Any] | None = None,
json_mode: bool = False,
max_retries: int | None = None,
) -> LLMResponse:
if json_mode:
suffix = "\n\nPlease respond with a valid JSON object."
system = (system + suffix) if system else suffix.strip()
body = self._build_body(messages, system, tools, max_tokens)
resp = self._post(body, streaming=False)
return _parse_complete_response(json.loads(resp.read()), self.model)
async def stream(
self,
messages: list[dict[str, Any]],
system: str = "",
tools: list[Tool] | None = None,
max_tokens: int = 4096,
) -> AsyncIterator[StreamEvent]:
import asyncio # noqa: PLC0415
import concurrent.futures # noqa: PLC0415
loop = asyncio.get_running_loop()
queue: asyncio.Queue[StreamEvent | None] = asyncio.Queue()
def _blocking_work() -> None:
try:
body = self._build_body(messages, system, tools, max_tokens)
http_resp = self._post(body, streaming=True)
for event in _parse_sse_stream(
http_resp, self.model, self._thought_sigs.__setitem__
):
loop.call_soon_threadsafe(queue.put_nowait, event)
except Exception as exc:
logger.error("Antigravity stream error: %s", exc)
loop.call_soon_threadsafe(queue.put_nowait, StreamErrorEvent(error=str(exc)))
finally:
loop.call_soon_threadsafe(queue.put_nowait, None) # sentinel
executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
fut = loop.run_in_executor(executor, _blocking_work)
try:
while True:
event = await queue.get()
if event is None:
break
yield event
finally:
await fut
executor.shutdown(wait=False)
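# Illustrative usage sketch (not part of the module). Assumes Antigravity
# credentials are already available via the JSON accounts file or the IDE
# state DB; the prompt text is hypothetical.
#
#   provider = AntigravityProvider("gemini-3-flash")
#   if provider.has_credentials():
#       resp = provider.complete(
#           [{"role": "user", "content": "Say hello in one sentence."}],
#           system="You are concise.",
#           max_tokens=64,
#       )
#       print(resp.content, resp.stop_reason)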
+106
@@ -0,0 +1,106 @@
"""Model capability checks for LLM providers.
Vision support rules are derived from official vendor documentation:
- ZAI (z.ai): docs.z.ai/guides/vlm - GLM-4.6V variants are vision; GLM-5/4.6/4.7 are text-only
- MiniMax: platform.minimax.io/docs - minimax-vl-01 is vision; M2.x are text-only
- DeepSeek: api-docs.deepseek.com - deepseek-vl2 is vision; chat/reasoner are text-only
- Cerebras: inference-docs.cerebras.ai - no vision models at all
- Groq: console.groq.com/docs/vision - vision capable; treat as supported by default
- Ollama/LM Studio/vLLM/llama.cpp: local runners denied by default; model names
don't reliably indicate vision support, so users must configure explicitly
"""
from __future__ import annotations
def _model_name(model: str) -> str:
"""Return the bare model name after stripping any 'provider/' prefix."""
if "/" in model:
return model.split("/", 1)[1]
return model
# Step 1: explicit vision allow-list — these always support images regardless
# of what the provider-level rules say. Checked first so that e.g. glm-4.6v
# is allowed even though glm-4.6 is denied.
_VISION_ALLOW_BARE_PREFIXES: tuple[str, ...] = (
# ZAI/GLM vision models (docs.z.ai/guides/vlm)
"glm-4v", # GLM-4V series (legacy)
"glm-4.6v", # GLM-4.6V, GLM-4.6V-flash, GLM-4.6V-flashx
# DeepSeek vision models
"deepseek-vl", # deepseek-vl2, deepseek-vl2-small, deepseek-vl2-tiny
# MiniMax vision model
"minimax-vl", # minimax-vl-01
)
# Step 2: provider-level deny — every model from this provider is text-only.
_TEXT_ONLY_PROVIDER_PREFIXES: tuple[str, ...] = (
# Cerebras: inference-docs.cerebras.ai lists only text models
"cerebras/",
# Local runners: model names don't reliably indicate vision support
"ollama/",
"ollama_chat/",
"lm_studio/",
"vllm/",
"llamacpp/",
)
# Step 3: per-model deny — text-only models within otherwise mixed providers.
# Matched against the bare model name (provider prefix stripped, lower-cased).
# The vision allow-list above is checked first, so vision variants of the same
# family are already handled before these deny patterns are reached.
_TEXT_ONLY_MODEL_BARE_PREFIXES: tuple[str, ...] = (
# --- ZAI / GLM family ---
# text-only: glm-5, glm-4.6, glm-4.7, glm-4.5, zai-glm-*
# vision: glm-4v, glm-4.6v (caught by allow-list above)
"glm-5",
"glm-4.6", # bare glm-4.6 is text-only; glm-4.6v is caught by allow-list
"glm-4.7",
"glm-4.5",
"zai-glm",
# --- DeepSeek ---
# text-only: deepseek-chat, deepseek-coder, deepseek-reasoner
# vision: deepseek-vl2 (caught by allow-list above)
# Note: LiteLLM's deepseek handler may flatten content lists for some models;
# VL models are allowed through and rely on LiteLLM's native VL support.
"deepseek-chat",
"deepseek-coder",
"deepseek-reasoner",
# --- MiniMax ---
# text-only: minimax-m2.*, minimax-text-*, abab* (legacy)
# vision: minimax-vl-01 (caught by allow-list above)
"minimax-m2",
"minimax-text",
"abab",
)
def supports_image_tool_results(model: str) -> bool:
"""Return whether *model* can receive image content in messages.
Used to gate both user-message images and tool-result image blocks.
Logic (checked in order):
1. Vision allow-list -> True (known vision model, skip all denies)
2. Provider deny -> False (entire provider is text-only)
3. Model deny -> False (specific text-only model within a mixed provider)
4. Default -> True (assume capable; unknown providers and models)
"""
model_lower = model.lower()
bare = _model_name(model_lower)
# 1. Explicit vision allow — takes priority over all denies
if any(bare.startswith(p) for p in _VISION_ALLOW_BARE_PREFIXES):
return True
# 2. Provider-level deny (all models from this provider are text-only)
if any(model_lower.startswith(p) for p in _TEXT_ONLY_PROVIDER_PREFIXES):
return False
# 3. Per-model deny (text-only variants within mixed-capability families)
if any(bare.startswith(p) for p in _TEXT_ONLY_MODEL_BARE_PREFIXES):
return False
# 4. Default: assume vision capable
# Covers: OpenAI, Anthropic, Google, Mistral, Kimi, and other hosted providers
return True
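# Illustrative sketch (not part of the module) of the precedence above; the
# provider-prefixed model strings are hypothetical examples.
#
#   assert supports_image_tool_results("glm-4.6v") is True                 # 1. allow-list
#   assert supports_image_tool_results("cerebras/llama3.1-8b") is False    # 2. provider deny
#   assert supports_image_tool_results("deepseek/deepseek-chat") is False  # 3. model deny
#   assert supports_image_tool_results("openai/gpt-4o") is True            # 4. default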
File diff suppressed because it is too large
+2
@@ -45,6 +45,8 @@ class ToolResult:
tool_use_id: str
content: str
is_error: bool = False
image_content: list[dict[str, Any]] | None = None
is_skill_content: bool = False # AS-10: marks activated skill body, protected from pruning
class LLMProvider(ABC):
+1
@@ -71,6 +71,7 @@ class FinishEvent:
stop_reason: str = ""
input_tokens: int = 0
output_tokens: int = 0
cached_tokens: int = 0
model: str = ""
+1 -33
@@ -1,33 +1 @@
"""Framework-level worker monitoring package.
Provides the Worker Health Judge: a reusable secondary graph that attaches to
any worker agent runtime and monitors its execution health via periodic log
inspection. Emits structured EscalationTickets when degradation is detected.
Usage::
from framework.monitoring import HEALTH_JUDGE_ENTRY_POINT, judge_goal, judge_graph
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools
# Register tools bound to the worker runtime's EventBus
monitoring_registry = ToolRegistry()
register_worker_monitoring_tools(monitoring_registry, worker_runtime._event_bus, storage_path)
# Load judge as secondary graph on the worker runtime
await worker_runtime.add_graph(
graph_id="judge",
graph=judge_graph,
goal=judge_goal,
entry_points={"health_check": HEALTH_JUDGE_ENTRY_POINT},
storage_subpath="graphs/judge",
)
"""
from .judge import HEALTH_JUDGE_ENTRY_POINT, judge_goal, judge_graph, judge_node
__all__ = [
"HEALTH_JUDGE_ENTRY_POINT",
"judge_goal",
"judge_graph",
"judge_node",
]
"""Framework-level worker monitoring package."""
-258
@@ -1,258 +0,0 @@
"""Worker Health Judge — framework-level reusable monitoring graph.
Attaches to any worker agent runtime as a secondary graph. Fires on a
2-minute timer, reads the worker's session logs via ``get_worker_health_summary``,
accumulates observations in a continuous conversation context, and emits a
structured ``EscalationTicket`` when it detects a degradation pattern.
Usage::
from framework.monitoring import judge_graph, judge_goal, HEALTH_JUDGE_ENTRY_POINT
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools
# Register tools bound to the worker runtime's event bus
monitoring_registry = ToolRegistry()
register_worker_monitoring_tools(
monitoring_registry, worker_runtime._event_bus, storage_path
)
monitoring_tools = list(monitoring_registry.get_tools().values())
monitoring_executor = monitoring_registry.get_executor()
# Load judge as secondary graph on the worker runtime
await worker_runtime.add_graph(
graph_id="judge",
graph=judge_graph,
goal=judge_goal,
entry_points={"health_check": HEALTH_JUDGE_ENTRY_POINT},
storage_subpath="graphs/judge",
)
Design:
- ``isolation_level="isolated"``: the judge has its own memory, not
polluting the worker's shared memory namespace.
- ``conversation_mode="continuous"``: the judge's conversation carries
across timer ticks. The conversation IS the judge's memory. It tracks
trends by referring to its own prior messages ("Last check I saw 47
steps; now 52; 5 new steps, 3 RETRY").
- No shared memory keys. No external state files.
"""
from __future__ import annotations
from framework.graph import Constraint, Goal, NodeSpec, SuccessCriterion
from framework.graph.edge import AsyncEntryPointSpec, GraphSpec
# ---------------------------------------------------------------------------
# Goal
# ---------------------------------------------------------------------------
judge_goal = Goal(
id="worker-health-monitor",
name="Worker Health Monitor",
description=(
"Periodically assess the health of the worker agent by reading its "
"execution logs. Detect degradation patterns (excessive retries, "
"stalls, doom loops) and emit structured EscalationTickets when the "
"worker needs attention."
),
success_criteria=[
SuccessCriterion(
id="accurate-detection",
description="Only escalates genuine degradation, not normal retry cycles",
metric="false_positive_rate",
target="low",
weight=0.5,
),
SuccessCriterion(
id="timely-detection",
description="Detects genuine stalls within 2 timer ticks (≤4 minutes)",
metric="detection_latency_minutes",
target="<=4",
weight=0.5,
),
],
constraints=[
Constraint(
id="conservative-escalation",
description=(
"Do not escalate on a single bad verdict or a brief stall. "
"Require clear patterns (10+ consecutive bad verdicts or 4+ minute stall) "
"before creating a ticket."
),
constraint_type="hard",
category="quality",
),
Constraint(
id="complete-ticket",
description=(
"Every EscalationTicket must have all required fields filled. "
"Do not emit partial or placeholder tickets."
),
constraint_type="hard",
category="correctness",
),
],
)
# ---------------------------------------------------------------------------
# Node
# ---------------------------------------------------------------------------
judge_node = NodeSpec(
id="judge",
name="Worker Health Judge",
description=(
"Autonomous health monitor for worker agents. Reads execution logs "
"on each timer tick, compares to prior observations (via conversation "
"history), and emits a structured EscalationTicket when a genuine "
"degradation pattern is detected."
),
node_type="event_loop",
client_facing=False, # Autonomous monitor, not interactive
max_node_visits=0, # Unbounded — runs on every timer tick
input_keys=[],
output_keys=["health_verdict"],
nullable_output_keys=["health_verdict"],
success_criteria=(
"A clear health verdict is produced each check: either 'healthy' with "
"a brief observation, or a complete EscalationTicket is emitted via "
"emit_escalation_ticket and health_verdict describes the issue."
),
tools=[
"get_worker_health_summary",
"emit_escalation_ticket",
],
system_prompt="""\
You are the Worker Health Judge. You run every 2 minutes alongside a worker \
agent to monitor its execution health.
# Your Role
You observe the worker's iteration patterns over time and escalate only when \
you see genuine degradation, not normal retry cycles. Your conversation history \
IS your memory. On each check, refer to your previous observations to track trends.
# Check Procedure
On each timer tick (every 2 minutes):
## Step 1: Read health snapshot
Call get_worker_health_summary() with no arguments to auto-discover the active \
session. This returns:
- worker_agent_id: the worker's agent name — use this for ticket identity fields
- worker_graph_id: the worker's primary graph ID — use this for ticket identity fields
- session_id: the session being monitored; use this for worker_session_id in tickets
- total_steps: how many log steps have been recorded
- recent_verdicts: list of recent ACCEPT/RETRY/CONTINUE verdicts
- steps_since_last_accept: consecutive non-ACCEPT steps
- stall_minutes: wall-clock since last step (null if active)
- evidence_snippet: recent LLM output
## Step 2: Compare to prior check
Look at your conversation history. What was total_steps last time?
- If total_steps is UNCHANGED from prior check AND prior check was also unchanged:
STALL confirmed (worker has produced no new iterations in 4+ minutes).
Escalate with severity="high" or "critical" depending on stall duration.
- If total_steps increased: worker is making progress. Examine verdicts.
## Step 3: Analyze verdict pattern
- Healthy: Mix of ACCEPT and RETRY, steps_since_last_accept < 5. No action.
- Warning: steps_since_last_accept is 5-9. Note it, no escalation yet.
- Degraded: steps_since_last_accept >= 10. Examine evidence_snippet.
- If evidence shows the agent is making real progress (complex reasoning,
exploring solutions, productive tool use): may be a hard problem. Note it.
- If evidence shows a loop (same error, same tool call, no new information):
Escalate with severity="medium" or "high".
- Critical: steps_since_last_accept >= 20, OR stall_minutes >= 4.
Escalate with severity="critical".
## Step 4: Decide
### If healthy:
set_output("health_verdict", "healthy: <brief observation>")
Done.
### If escalating:
Build an EscalationTicket JSON string with ALL required fields:
{
"worker_agent_id": "<worker_agent_id from get_worker_health_summary>",
"worker_session_id": "<session_id from get_worker_health_summary>",
"worker_node_id": "<worker_graph_id from get_worker_health_summary>",
"worker_graph_id": "<worker_graph_id from get_worker_health_summary>",
"severity": "<low|medium|high|critical>",
"cause": "<what you observed — concrete, specific>",
"judge_reasoning": "<why you decided to escalate, not just dismiss>",
"suggested_action": "<what you recommend: restart, human review, etc.>",
"recent_verdicts": [<list from get_worker_health_summary>],
"total_steps_checked": <int>,
"steps_since_last_accept": <int>,
"stall_minutes": <float or null>,
"evidence_snippet": "<from get_worker_health_summary>"
}
Call: emit_escalation_ticket(ticket_json=<the JSON string above>)
Then: set_output("health_verdict", "escalated: <one-line summary>")
# Severity Guide
- low: Mild concern, worth noting. 5-9 consecutive bad verdicts.
- medium: Clear degradation pattern. 10-15 bad verdicts or brief stall (1-2 min).
- high: Serious issue. 15+ bad verdicts or stall 2-4 minutes or clear doom loop.
- critical: Worker is definitively stuck. 20+ bad verdicts or stall > 4 minutes.
# Conservative Bias
You MUST resist the urge to escalate prematurely. Worker agents naturally retry.
A node may legitimately need 5-8 retries before succeeding. Do not escalate unless:
1. The pattern is clear and sustained across your observation window, AND
2. The evidence shows no genuine progress
One missed escalation is less costly than two false alarms. The Queen will filter \
further. But do not be passive; genuine stalls and doom loops must be caught.
# Rules
- Never escalate on the FIRST check unless stall_minutes > 4
- Always call get_worker_health_summary FIRST before deciding anything
- All ticket fields are REQUIRED; do not submit partial tickets
- After any emit_escalation_ticket call, always set_output to complete the check
""",
)
# ---------------------------------------------------------------------------
# Entry Point
# ---------------------------------------------------------------------------
HEALTH_JUDGE_ENTRY_POINT = AsyncEntryPointSpec(
id="health_check",
name="Worker Health Check",
entry_node="judge",
trigger_type="timer",
trigger_config={
"interval_minutes": 2,
"run_immediately": True, # Fire immediately to establish a baseline
},
isolation_level="isolated", # Own memory namespace, not polluting worker's
)
# ---------------------------------------------------------------------------
# Graph
# ---------------------------------------------------------------------------
judge_graph = GraphSpec(
id="judge-graph",
goal_id=judge_goal.id,
version="1.0.0",
entry_node="judge",
entry_points={"health_check": "judge"},
terminal_nodes=["judge"], # Judge node can terminate after each check
pause_nodes=[],
nodes=[judge_node],
edges=[],
conversation_mode="continuous", # Conversation persists across timer ticks
async_entry_points=[HEALTH_JUDGE_ENTRY_POINT],
loop_config={
"max_iterations": 10, # One check shouldn't take many turns
"max_tool_calls_per_turn": 3, # get_summary + optionally emit_ticket
"max_history_tokens": 16000, # Compact — judge only needs recent context
},
)
+6 -6
@@ -83,18 +83,18 @@ configure_logging(level="INFO", format="auto")
- Compact single-line format (easy to stream/parse)
- All trace context fields included automatically
### Human-Readable Format (Development)
### Human-Readable Format (Development / Terminal)
```
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] Starting agent execution
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] Processing input data [node_id:input-processor]
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] LLM call completed [latency_ms:1250] [tokens_used:450]
[INFO ] [agent:sales-agent] Starting agent execution
[INFO ] [agent:sales-agent] Processing input data [node_id:input-processor]
[INFO ] [agent:sales-agent] LLM call completed [latency_ms:1250] [tokens_used:450]
```
**Features:**
- Color-coded log levels
- Shortened IDs for readability (first 8 chars)
- Context prefix shows trace correlation
- Terminal output omits trace_id and execution_id for readability
- For full traceability (e.g. debugging), use `ENV=production` to get JSON file logs with trace_id and execution_id
## Trace Context Fields
+20 -15
@@ -4,8 +4,9 @@ Structured logging with automatic trace context propagation.
Key Features:
- Zero developer friction: Standard logger.info() calls get automatic context
- ContextVar-based propagation: Thread-safe and async-safe
- Dual output modes: JSON for production, human-readable for development
- Correlation IDs: trace_id follows entire request flow automatically
- Dual output modes: JSON for production (full trace_id/execution_id), human-readable for terminal
- Terminal omits trace_id/execution_id for readability
- Use ENV=production for file logs with full traceability
Architecture:
Runtime.start_run() -> Generates trace_id, sets context once
@@ -101,10 +102,11 @@ class StructuredFormatter(logging.Formatter):
class HumanReadableFormatter(logging.Formatter):
"""
Human-readable formatter for development.
Human-readable formatter for development (terminal output).
Provides colorized logs with trace context for local debugging.
Includes trace_id prefix for correlation - AUTOMATIC!
Provides colorized logs for local debugging. Omits trace_id and execution_id
from the terminal for readability; use ENV=production (JSON file logs) when
traceability is needed.
"""
COLORS = {
@@ -118,18 +120,11 @@ class HumanReadableFormatter(logging.Formatter):
def format(self, record: logging.LogRecord) -> str:
"""Format log record as human-readable string."""
# Get trace context - AUTOMATIC!
# Get trace context; omit trace_id and execution_id in terminal for readability
context = trace_context.get() or {}
trace_id = context.get("trace_id", "")
execution_id = context.get("execution_id", "")
agent_id = context.get("agent_id", "")
# Build context prefix
prefix_parts = []
if trace_id:
prefix_parts.append(f"trace:{trace_id[:8]}")
if execution_id:
prefix_parts.append(f"exec:{execution_id[-8:]}")
if agent_id:
prefix_parts.append(f"agent:{agent_id}")
@@ -148,8 +143,9 @@ class HumanReadableFormatter(logging.Formatter):
if record_event is not None:
event = f" [{record_event}]"
# Format message: [LEVEL] [trace context] message
return f"{color}[{level}]{reset} {context_prefix}{record.getMessage()}{event}"
timestamp = self.formatTime(record, "%Y-%m-%d %H:%M:%S")
# Format message: TIMESTAMP [LEVEL] [trace context] message
return f"{timestamp} {color}[{level}]{reset} {context_prefix}{record.getMessage()}{event}"
def configure_logging(
@@ -210,6 +206,15 @@ def configure_logging(
root_logger.addHandler(handler)
root_logger.setLevel(level.upper())
# Suppress noisy LiteLLM INFO logs (model/provider line + Provider List URL
# printed on every single completion call). Warnings and errors still show.
# Honour LITELLM_LOG env var so users can opt-in to debug output.
_litellm_level = os.getenv("LITELLM_LOG", "").upper()
if _litellm_level and hasattr(logging, _litellm_level):
logging.getLogger("LiteLLM").setLevel(getattr(logging, _litellm_level))
else:
logging.getLogger("LiteLLM").setLevel(logging.WARNING)
# When in JSON mode, configure known third-party loggers to use JSON formatter
# This ensures libraries like LiteLLM, httpcore also output clean JSON
if format == "json":
+26 -15
@@ -243,6 +243,8 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
action="store_true",
help="Open dashboard in browser after server starts",
)
serve_parser.add_argument("--verbose", "-v", action="store_true", help="Enable INFO log level")
serve_parser.add_argument("--debug", action="store_true", help="Enable DEBUG log level")
serve_parser.set_defaults(func=cmd_serve)
# open command (serve + auto-open browser)
@@ -280,6 +282,8 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
default=None,
help="LLM model for preloaded agents",
)
open_parser.add_argument("--verbose", "-v", action="store_true", help="Enable INFO log level")
open_parser.add_argument("--debug", action="store_true", help="Enable DEBUG log level")
open_parser.set_defaults(func=cmd_open)
@@ -375,18 +379,18 @@ def _prompt_before_start(agent_path: str, runner, model: str | None = None):
def cmd_run(args: argparse.Namespace) -> int:
"""Run an exported agent."""
import logging
from framework.credentials.models import CredentialError
from framework.observability import configure_logging
from framework.runner import AgentRunner
# Set logging level (quiet by default for cleaner output)
if args.quiet:
logging.basicConfig(level=logging.ERROR, format="%(message)s")
configure_logging(level="ERROR")
elif getattr(args, "verbose", False):
logging.basicConfig(level=logging.INFO, format="%(message)s")
configure_logging(level="INFO")
else:
logging.basicConfig(level=logging.WARNING, format="%(message)s")
configure_logging(level="WARNING")
# Load input context
context = {}
@@ -742,6 +746,17 @@ def cmd_dispatch(args: argparse.Namespace) -> int:
if args.agents:
# Use specific agents
for agent_name in args.agents:
# Guard against full paths: if the name contains path separators
# (e.g. "exports/my_agent"), joining it to agents_dir would double the path
agent_name_path = Path(agent_name)
if len(agent_name_path.parts) > 1:
print(
f"Error: --agents expects agent names, not paths. "
f"Use: --agents {agent_name_path.name} "
f"instead of --agents {agent_name}",
file=sys.stderr,
)
return 1
agent_path = agents_dir / agent_name
if not _is_valid_agent_dir(agent_path):
print(f"Agent not found: {agent_path}", file=sys.stderr)
@@ -907,16 +922,12 @@ def _format_natural_language_to_json(
def cmd_shell(args: argparse.Namespace) -> int:
"""Start an interactive agent session."""
import logging
from framework.credentials.models import CredentialError
from framework.observability import configure_logging
from framework.runner import AgentRunner
# Configure logging to show runtime visibility
logging.basicConfig(
level=logging.INFO,
format="%(message)s", # Simple format for clean output
)
configure_logging(level="INFO")
agents_dir = Path(args.agents_dir)
@@ -1614,18 +1625,18 @@ def _build_frontend() -> bool:
def cmd_serve(args: argparse.Namespace) -> int:
"""Start the HTTP API server."""
import logging
from aiohttp import web
_build_frontend()
from framework.observability import configure_logging
from framework.server.app import create_app
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
)
if getattr(args, "debug", False):
configure_logging(level="DEBUG")
else:
configure_logging(level="INFO")
model = getattr(args, "model", None)
app = create_app(model=model)
+168 -20
@@ -1,7 +1,7 @@
"""MCP Client for connecting to Model Context Protocol servers.
This module provides a client for connecting to MCP servers and invoking their tools.
Supports both STDIO and HTTP transports using the official MCP Python SDK.
Supports STDIO, HTTP, UNIX socket, and SSE transports using the official MCP Python SDK.
"""
import asyncio
@@ -22,7 +22,7 @@ class MCPServerConfig:
"""Configuration for an MCP server connection."""
name: str
transport: Literal["stdio", "http"]
transport: Literal["stdio", "http", "unix", "sse"]
# For STDIO transport
command: str | None = None
@@ -33,6 +33,7 @@ class MCPServerConfig:
# For HTTP transport
url: str | None = None
headers: dict[str, str] = field(default_factory=dict)
socket_path: str | None = None
# Optional metadata
description: str = ""
@@ -52,7 +53,7 @@ class MCPClient:
"""
Client for communicating with MCP servers.
Supports both STDIO and HTTP transports using the official MCP SDK.
Supports STDIO, HTTP, UNIX socket, and SSE transports using the official MCP SDK.
Manages the connection lifecycle and provides methods to list and invoke tools.
"""
@@ -68,6 +69,7 @@ class MCPClient:
self._read_stream = None
self._write_stream = None
self._stdio_context = None # Context manager for stdio_client
self._sse_context = None # Context manager for sse_client
self._errlog_handle = None # Track errlog file handle for cleanup
self._http_client: httpx.Client | None = None
self._tools: dict[str, MCPTool] = {}
@@ -141,6 +143,10 @@ class MCPClient:
self._connect_stdio()
elif self.config.transport == "http":
self._connect_http()
elif self.config.transport == "unix":
self._connect_unix()
elif self.config.transport == "sse":
self._connect_sse()
else:
raise ValueError(f"Unsupported transport: {self.config.transport}")
@@ -266,10 +272,94 @@ class MCPClient:
logger.warning(f"Health check failed for MCP server '{self.config.name}': {e}")
# Continue anyway, server might not have health endpoint
def _connect_unix(self) -> None:
"""Connect to MCP server via UNIX domain socket transport."""
if not self.config.url:
raise ValueError("url is required for UNIX transport")
if not self.config.socket_path:
raise ValueError("socket_path is required for UNIX transport")
self._http_client = httpx.Client(
base_url=self.config.url,
headers=self.config.headers,
timeout=30.0,
transport=httpx.HTTPTransport(uds=self.config.socket_path),
)
try:
response = self._http_client.get("/health")
response.raise_for_status()
logger.info(
"Connected to MCP server '%s' via UNIX socket at %s",
self.config.name,
self.config.socket_path,
)
except Exception as e:
logger.warning(f"Health check failed for MCP server '{self.config.name}': {e}")
# Continue anyway, server might not have health endpoint
def _connect_sse(self) -> None:
"""Connect to MCP server via SSE transport using MCP SDK with persistent session."""
if not self.config.url:
raise ValueError("url is required for SSE transport")
try:
loop_started = threading.Event()
connection_ready = threading.Event()
connection_error = []
def run_event_loop():
"""Run event loop in background thread."""
self._loop = asyncio.new_event_loop()
asyncio.set_event_loop(self._loop)
loop_started.set()
async def init_connection():
try:
from mcp import ClientSession
from mcp.client.sse import sse_client
self._sse_context = sse_client(
self.config.url,
headers=self.config.headers,
timeout=30.0,
)
(
self._read_stream,
self._write_stream,
) = await self._sse_context.__aenter__()
self._session = ClientSession(self._read_stream, self._write_stream)
await self._session.__aenter__()
await self._session.initialize()
connection_ready.set()
except Exception as e:
connection_error.append(e)
connection_ready.set()
self._loop.create_task(init_connection())
self._loop.run_forever()
self._loop_thread = threading.Thread(target=run_event_loop, daemon=True)
self._loop_thread.start()
loop_started.wait(timeout=5)
if not loop_started.is_set():
raise RuntimeError("Event loop failed to start")
connection_ready.wait(timeout=10)
if connection_error:
raise connection_error[0]
logger.info(f"Connected to MCP server '{self.config.name}' via SSE")
except Exception as e:
raise RuntimeError(f"Failed to connect to MCP server: {e}") from e
def _discover_tools(self) -> None:
"""Discover available tools from the MCP server."""
try:
if self.config.transport == "stdio":
if self.config.transport in {"stdio", "sse"}:
tools_list = self._run_async(self._list_tools_stdio_async())
else:
tools_list = self._list_tools_http()
@@ -371,9 +461,37 @@ class MCPClient:
if self.config.transport == "stdio":
with self._stdio_call_lock:
return self._run_async(self._call_tool_stdio_async(tool_name, arguments))
elif self.config.transport == "sse":
return self._call_tool_with_retry(
lambda: self._run_async(self._call_tool_stdio_async(tool_name, arguments))
)
elif self.config.transport == "unix":
return self._call_tool_with_retry(lambda: self._call_tool_http(tool_name, arguments))
else:
return self._call_tool_http(tool_name, arguments)
def _call_tool_with_retry(self, call: Any) -> Any:
"""Retry transient MCP transport failures once after reconnecting."""
if self.config.transport == "stdio":
return call()
if self.config.transport not in {"unix", "sse"}:
return call()
try:
return call()
except (httpx.ConnectError, httpx.ReadTimeout) as original_error:
logger.warning(
"Retrying MCP tool call after transport error from '%s': %s",
self.config.name,
original_error,
)
self._reconnect()
try:
return call()
except (httpx.ConnectError, httpx.ReadTimeout) as retry_error:
raise original_error from retry_error
async def _call_tool_stdio_async(self, tool_name: str, arguments: dict[str, Any]) -> Any:
"""Call tool via STDIO protocol using persistent session."""
if not self._session:
@@ -391,17 +509,30 @@ class MCPClient:
error_text = content_item.text
raise RuntimeError(f"MCP tool '{tool_name}' failed: {error_text}")
# Extract content
# Extract content — preserve image blocks alongside text
if result.content:
# MCP returns content as a list of content items
if len(result.content) > 0:
content_item = result.content[0]
# Check if it's a text content item
if hasattr(content_item, "text"):
return content_item.text
elif hasattr(content_item, "data"):
return content_item.data
return result.content
text_parts: list[str] = []
image_parts: list[dict[str, Any]] = []
for item in result.content:
if hasattr(item, "text"):
text_parts.append(item.text)
elif hasattr(item, "data") and hasattr(item, "mimeType"):
# MCP ImageContent — preserve as structured image block
image_parts.append(
{
"type": "image_url",
"image_url": {
"url": f"data:{item.mimeType};base64,{item.data}",
},
}
)
elif hasattr(item, "data"):
text_parts.append(str(item.data))
text = "\n".join(text_parts) if text_parts else ""
if image_parts:
return {"_text": text, "_images": image_parts}
return text if text else None
return None
@@ -433,18 +564,24 @@ class MCPClient:
except Exception as e:
raise RuntimeError(f"Failed to call tool via HTTP: {e}") from e
def _reconnect(self) -> None:
"""Reconnect to the configured MCP server."""
logger.info(f"Reconnecting to MCP server '{self.config.name}'...")
self.disconnect()
self.connect()
_CLEANUP_TIMEOUT = 10
_THREAD_JOIN_TIMEOUT = 12
async def _cleanup_stdio_async(self) -> None:
"""Async cleanup for STDIO session and context managers.
"""Async cleanup for persistent MCP session and context managers.
Cleanup order is critical:
- The session must be closed BEFORE the stdio_context because the session
depends on the streams provided by stdio_context.
- This mirrors the initialization order in _connect_stdio(), where
stdio_context is entered first (providing streams), then the session is
created with those streams and entered.
- The session must be closed BEFORE the transport context manager because the
session depends on the streams provided by that context.
- This mirrors the initialization order in _connect_stdio() / _connect_sse(),
where the transport context is entered first (providing streams), then the
session is created with those streams and entered.
- Do not change this ordering without carefully considering these dependencies.
"""
# First: close session (depends on stdio_context streams)
@@ -477,6 +614,16 @@ class MCPClient:
finally:
self._stdio_context = None
try:
if self._sse_context:
await self._sse_context.__aexit__(None, None, None)
except asyncio.CancelledError:
logger.debug("SSE context cleanup was cancelled; proceeding with best-effort shutdown")
except Exception as e:
logger.warning(f"Error closing SSE context: {e}")
finally:
self._sse_context = None
# Third: close errlog file handle if we opened one
if self._errlog_handle is not None:
try:
@@ -552,6 +699,7 @@ class MCPClient:
# Setting None to None is safe and ensures clean state.
self._session = None
self._stdio_context = None
self._sse_context = None
self._read_stream = None
self._write_stream = None
self._loop = None
@@ -0,0 +1,255 @@
"""Shared MCP client connection management."""
import logging
import threading
from typing import Any
import httpx
from framework.runner.mcp_client import MCPClient, MCPServerConfig
logger = logging.getLogger(__name__)
class MCPConnectionManager:
"""Process-wide MCP client pool keyed by server name."""
_instance = None
_lock = threading.Lock()
def __init__(self) -> None:
self._pool: dict[str, MCPClient] = {}
self._refcounts: dict[str, int] = {}
self._configs: dict[str, MCPServerConfig] = {}
self._pool_lock = threading.Lock()
# Transition events keep callers from racing a connect/reconnect/disconnect.
self._transitions: dict[str, threading.Event] = {}
@classmethod
def get_instance(cls) -> "MCPConnectionManager":
"""Return the process-level singleton instance."""
if cls._instance is None:
with cls._lock:
if cls._instance is None:
cls._instance = cls()
return cls._instance
@staticmethod
def _is_connected(client: MCPClient | None) -> bool:
return bool(client and getattr(client, "_connected", False))
def acquire(self, config: MCPServerConfig) -> MCPClient:
"""Get or create a shared connection and increment its refcount."""
server_name = config.name
while True:
should_connect = False
transition_event: threading.Event | None = None
with self._pool_lock:
client = self._pool.get(server_name)
if self._is_connected(client) and server_name not in self._transitions:
new_refcount = self._refcounts.get(server_name, 0) + 1
self._refcounts[server_name] = new_refcount
self._configs[server_name] = config
logger.debug(
"Reusing pooled connection for MCP server '%s' (refcount=%d)",
server_name,
new_refcount,
)
return client
transition_event = self._transitions.get(server_name)
if transition_event is None:
transition_event = threading.Event()
self._transitions[server_name] = transition_event
self._configs[server_name] = config
should_connect = True
if not should_connect:
transition_event.wait()
continue
client = MCPClient(config)
try:
client.connect()
except Exception:
with self._pool_lock:
current = self._transitions.get(server_name)
if current is transition_event:
self._transitions.pop(server_name, None)
if (
server_name not in self._pool
and self._refcounts.get(server_name, 0) <= 0
):
self._configs.pop(server_name, None)
transition_event.set()
raise
with self._pool_lock:
current = self._transitions.get(server_name)
if current is transition_event:
self._pool[server_name] = client
self._refcounts[server_name] = self._refcounts.get(server_name, 0) + 1
self._configs[server_name] = config
self._transitions.pop(server_name, None)
transition_event.set()
return client
client.disconnect()
def release(self, server_name: str) -> None:
"""Decrement refcount and disconnect when the last user releases."""
while True:
disconnect_client: MCPClient | None = None
transition_event: threading.Event | None = None
should_disconnect = False
with self._pool_lock:
transition_event = self._transitions.get(server_name)
if transition_event is None:
refcount = self._refcounts.get(server_name, 0)
if refcount <= 0:
return
if refcount > 1:
self._refcounts[server_name] = refcount - 1
return
disconnect_client = self._pool.pop(server_name, None)
self._refcounts.pop(server_name, None)
transition_event = threading.Event()
self._transitions[server_name] = transition_event
should_disconnect = True
if not should_disconnect:
transition_event.wait()
continue
try:
if disconnect_client is not None:
disconnect_client.disconnect()
finally:
with self._pool_lock:
current = self._transitions.get(server_name)
if current is transition_event:
self._transitions.pop(server_name, None)
transition_event.set()
return
def health_check(self, server_name: str) -> bool:
"""Return True when the pooled connection appears healthy."""
while True:
with self._pool_lock:
transition_event = self._transitions.get(server_name)
if transition_event is None:
client = self._pool.get(server_name)
config = self._configs.get(server_name)
break
transition_event.wait()
if client is None or config is None:
return False
try:
if config.transport == "stdio":
client.list_tools()
return True
if not config.url:
return False
client_kwargs: dict[str, Any] = {
"base_url": config.url,
"headers": config.headers,
"timeout": 5.0,
}
if config.transport == "unix":
if not config.socket_path:
return False
client_kwargs["transport"] = httpx.HTTPTransport(uds=config.socket_path)
with httpx.Client(**client_kwargs) as http_client:
response = http_client.get("/health")
response.raise_for_status()
return True
except Exception:
return False
def reconnect(self, server_name: str) -> MCPClient:
"""Force a disconnect and replace the pooled client with a fresh one."""
while True:
transition_event: threading.Event | None = None
old_client: MCPClient | None = None
with self._pool_lock:
transition_event = self._transitions.get(server_name)
if transition_event is None:
config = self._configs.get(server_name)
if config is None:
raise KeyError(f"Unknown MCP server: {server_name}")
old_client = self._pool.get(server_name)
refcount = self._refcounts.get(server_name, 0)
transition_event = threading.Event()
self._transitions[server_name] = transition_event
break
transition_event.wait()
if old_client is not None:
old_client.disconnect()
new_client = MCPClient(config)
try:
new_client.connect()
except Exception:
with self._pool_lock:
current = self._transitions.get(server_name)
if current is transition_event:
self._pool.pop(server_name, None)
self._transitions.pop(server_name, None)
transition_event.set()
raise
with self._pool_lock:
current = self._transitions.get(server_name)
if current is transition_event:
self._pool[server_name] = new_client
self._refcounts[server_name] = max(refcount, 1)
self._transitions.pop(server_name, None)
transition_event.set()
return new_client
new_client.disconnect()
return self.acquire(config)
def cleanup_all(self) -> None:
"""Disconnect all pooled clients and clear manager state."""
while True:
with self._pool_lock:
if self._transitions:
pending = list(self._transitions.values())
else:
cleanup_events = {name: threading.Event() for name in self._pool}
clients = list(self._pool.items())
self._transitions.update(cleanup_events)
self._pool.clear()
self._refcounts.clear()
self._configs.clear()
break
for event in pending:
event.wait()
for _server_name, client in clients:
try:
client.disconnect()
except Exception:
pass
with self._pool_lock:
for server_name, event in cleanup_events.items():
current = self._transitions.get(server_name)
if current is event:
self._transitions.pop(server_name, None)
event.set()
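# Illustrative usage sketch (not part of the module); the server name and
# command are hypothetical.
#
#   manager = MCPConnectionManager.get_instance()
#   config = MCPServerConfig(name="files", transport="stdio", command="mcp-files")
#   client = manager.acquire(config)   # connects, or reuses the pooled client
#   try:
#       tools = client.list_tools()
#   finally:
#       manager.release("files")       # disconnects once the last holder releases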
+4 -1
@@ -1,6 +1,6 @@
"""Pre-load validation for agent graphs.
Runs structural and credential checks before MCP servers are spawned.
Runs structural, credential, and skill-trust checks before MCP servers are spawned.
Fails fast with actionable error messages.
"""
@@ -169,6 +169,9 @@ def run_preload_validation(
1. Graph structure (includes GCU subagent-only checks) - non-recoverable
2. Credentials - potentially recoverable via interactive setup
Skill discovery and trust gating (AS-13) happen later in runner._setup()
so they have access to agent-level skill configuration.
Raises PreloadValidationError for structural issues.
Raises CredentialError for credential issues.
"""
+482 -81
@@ -9,14 +9,13 @@ from datetime import UTC
from pathlib import Path
from typing import TYPE_CHECKING, Any
from framework.config import get_hive_config, get_preferred_model
from framework.config import get_hive_config, get_max_context_tokens, get_preferred_model
from framework.credentials.validation import (
ensure_credential_key_env as _ensure_credential_key_env,
)
from framework.graph import Goal
from framework.graph.edge import (
DEFAULT_MAX_TOKENS,
AsyncEntryPointSpec,
EdgeCondition,
EdgeSpec,
GraphSpec,
@@ -29,6 +28,7 @@ from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import AgentRuntime, AgentRuntimeConfig, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from framework.runtime.runtime_log_store import RuntimeLogStore
from framework.tools.flowchart_utils import generate_fallback_flowchart
if TYPE_CHECKING:
from framework.runner.protocol import AgentMessage, CapabilityResponse
@@ -517,6 +517,354 @@ def get_codex_account_id() -> str | None:
return None
# ---------------------------------------------------------------------------
# Kimi Code subscription token helpers
# ---------------------------------------------------------------------------
def get_kimi_code_token() -> str | None:
"""Get the API key from a Kimi Code CLI installation.
Reads the API key from ``~/.kimi/config.toml``, which is created when
the user runs ``kimi /login`` in the Kimi Code CLI.
Returns:
The API key if available, None otherwise.
"""
import tomllib
config_path = Path.home() / ".kimi" / "config.toml"
if not config_path.exists():
return None
try:
with open(config_path, "rb") as f:
config = tomllib.load(f)
providers = config.get("providers", {})
# kimi-cli stores credentials under providers.kimi-for-coding
for provider_cfg in providers.values():
if isinstance(provider_cfg, dict):
key = provider_cfg.get("api_key")
if key:
return key
except Exception:
pass
return None
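# Illustrative ~/.kimi/config.toml layout this reads (an assumption based on
# the lookup above; the key value is hypothetical):
#
#   [providers.kimi-for-coding]
#   api_key = "sk-..."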
# ---------------------------------------------------------------------------
# Antigravity subscription token helpers
# ---------------------------------------------------------------------------
# Antigravity IDE (native macOS/Linux app) stores OAuth tokens in its
# VSCode-style SQLite state database under the key
# "antigravityUnifiedStateSync.oauthToken" as a base64-encoded protobuf blob.
ANTIGRAVITY_IDE_STATE_DB = (
Path.home()
/ "Library"
/ "Application Support"
/ "Antigravity"
/ "User"
/ "globalStorage"
/ "state.vscdb"
)
# Linux fallback for the IDE state DB
ANTIGRAVITY_IDE_STATE_DB_LINUX = (
Path.home() / ".config" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
)
# Antigravity credentials stored by native OAuth implementation
ANTIGRAVITY_AUTH_FILE = Path.home() / ".hive" / "antigravity-accounts.json"
ANTIGRAVITY_OAUTH_TOKEN_URL = "https://oauth2.googleapis.com/token"
_ANTIGRAVITY_TOKEN_LIFETIME_SECS = 3600 # Google access tokens expire in 1 hour
_ANTIGRAVITY_IDE_STATE_DB_KEY = "antigravityUnifiedStateSync.oauthToken"
def _read_antigravity_ide_credentials() -> dict | None:
"""Read credentials from the Antigravity IDE's SQLite state database.
The Antigravity desktop IDE (VSCode-based) stores its OAuth token as a
base64-encoded protobuf blob in a SQLite database. The access token is
a standard Google OAuth ``ya29.*`` bearer token.
Returns:
Dict with ``accessToken`` and optionally ``refreshToken`` keys,
plus ``_source: "ide"`` to skip file-based save on refresh.
Returns None if the database is absent or the key is not found.
"""
import re
import sqlite3
for db_path in (ANTIGRAVITY_IDE_STATE_DB, ANTIGRAVITY_IDE_STATE_DB_LINUX):
if not db_path.exists():
continue
try:
con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
try:
row = con.execute(
"SELECT value FROM ItemTable WHERE key = ?",
(_ANTIGRAVITY_IDE_STATE_DB_KEY,),
).fetchone()
finally:
con.close()
if not row:
continue
import base64
blob = base64.b64decode(row[0])
# The protobuf blob contains the access token (ya29.*) and
# refresh token (1//*) as length-prefixed UTF-8 strings.
# Decode the inner base64 layer and extract with regex.
inner_b64_candidates = re.findall(rb"[A-Za-z0-9+/=_\-]{40,}", blob)
access_token: str | None = None
refresh_token: str | None = None
for candidate in inner_b64_candidates:
try:
padded = candidate + b"=" * (-len(candidate) % 4)
inner = base64.urlsafe_b64decode(padded)
except Exception:
continue
if not access_token:
m = re.search(rb"ya29\.[A-Za-z0-9_\-\.]+", inner)
if m:
access_token = m.group(0).decode("ascii")
if not refresh_token:
m = re.search(rb"1//[A-Za-z0-9_\-\.]+", inner)
if m:
refresh_token = m.group(0).decode("ascii")
if access_token and refresh_token:
break
if access_token:
return {
"accounts": [
{
"accessToken": access_token,
"refreshToken": refresh_token or "",
}
],
"_source": "ide",
"_db_path": str(db_path),
}
except Exception as exc:
logger.debug("Failed to read Antigravity IDE state DB: %s", exc)
continue
return None
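A minimal sketch of the same two-layer decode on a synthetic blob; everything below is fabricated for illustration, since the real value is a protobuf message rather than a hand-built byte string.
import base64
import re

# Fabricated placeholder tokens; this byte string only reproduces the layout
# closely enough to exercise the regexes used above.
inner = b"ya29.FAKE_ACCESS_TOKEN_0000000000 1//FAKE_REFRESH_TOKEN_0000000000"
db_value = base64.b64encode(b"\x08\x96\x01" + base64.urlsafe_b64encode(inner) + b"\x10\x01")

blob = base64.b64decode(db_value)  # what the SQLite row value decodes to
for candidate in re.findall(rb"[A-Za-z0-9+/=_\-]{40,}", blob):
    decoded = base64.urlsafe_b64decode(candidate + b"=" * (-len(candidate) % 4))
    access = re.search(rb"ya29\.[A-Za-z0-9_\-\.]+", decoded)
    refresh = re.search(rb"1//[A-Za-z0-9_\-\.]+", decoded)
    if access and refresh:
        print(access.group(0).decode("ascii"), refresh.group(0).decode("ascii"))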
def _read_antigravity_credentials() -> dict | None:
"""Read Antigravity auth data from all supported credential sources.
Checks in order:
1. Antigravity IDE SQLite state database (native macOS/Linux app)
2. Native OAuth credentials file (~/.hive/antigravity-accounts.json)
Returns:
Auth data dict with an ``accounts`` list on success, None otherwise.
"""
# 1. Native Antigravity IDE (primary on macOS)
ide_creds = _read_antigravity_ide_credentials()
if ide_creds:
return ide_creds
# 2. Native OAuth credentials file
if ANTIGRAVITY_AUTH_FILE.exists():
try:
with open(ANTIGRAVITY_AUTH_FILE, encoding="utf-8") as f:
data = json.load(f)
accounts = data.get("accounts", [])
if accounts and isinstance(accounts[0], dict):
return data
except (json.JSONDecodeError, OSError):
pass
return None
def _is_antigravity_token_expired(auth_data: dict) -> bool:
"""Check whether the Antigravity access token is expired or near expiry.
For IDE-sourced credentials: uses the state DB's mtime as last_refresh
since the IDE keeps the DB fresh while it's running.
For JSON-sourced credentials: uses the ``last_refresh`` field or file mtime.
"""
import time
from datetime import datetime
now = time.time()
if auth_data.get("_source") == "ide":
# The IDE refreshes tokens automatically while running.
# Use the DB file's mtime as a proxy for when the token was last updated.
try:
db_path = Path(auth_data.get("_db_path", str(ANTIGRAVITY_IDE_STATE_DB)))
last_refresh: float = db_path.stat().st_mtime
except OSError:
return True
expires_at = last_refresh + _ANTIGRAVITY_TOKEN_LIFETIME_SECS
return now >= (expires_at - _TOKEN_REFRESH_BUFFER_SECS)
last_refresh_val: float | str | None = auth_data.get("last_refresh")
if last_refresh_val is None:
try:
last_refresh_val = ANTIGRAVITY_AUTH_FILE.stat().st_mtime
except OSError:
return True
elif isinstance(last_refresh_val, str):
try:
last_refresh_val = datetime.fromisoformat(
last_refresh_val.replace("Z", "+00:00")
).timestamp()
except (ValueError, TypeError):
return True
expires_at = float(last_refresh_val) + _ANTIGRAVITY_TOKEN_LIFETIME_SECS
return now >= (expires_at - _TOKEN_REFRESH_BUFFER_SECS)
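As a worked example of the check above: with the 3600-second lifetime and an assumed refresh buffer of 300 seconds (_TOKEN_REFRESH_BUFFER_SECS is defined elsewhere in this module; its exact value is not shown in this diff), a token last refreshed at t=0 has expires_at = 0 + 3600 = 3600, so any call from t >= 3300 reports the token as expired and triggers a refresh attempt.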
def _refresh_antigravity_token(refresh_token: str) -> dict | None:
"""Refresh the Antigravity access token via Google OAuth.
POSTs form-encoded ``grant_type=refresh_token`` to the Google token
endpoint using Antigravity's public OAuth client ID.
Returns:
Parsed response dict (containing ``access_token``) on success,
None on any error.
"""
import urllib.error
import urllib.parse
import urllib.request
from framework.config import get_antigravity_client_id, get_antigravity_client_secret
client_id = get_antigravity_client_id()
client_secret = get_antigravity_client_secret()
params: dict = {
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"client_id": client_id,
}
if client_secret:
params["client_secret"] = client_secret
data = urllib.parse.urlencode(params).encode("utf-8")
req = urllib.request.Request(
ANTIGRAVITY_OAUTH_TOKEN_URL,
data=data,
headers={"Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=15) as resp: # noqa: S310
return json.loads(resp.read())
except (urllib.error.URLError, json.JSONDecodeError, TimeoutError, OSError) as exc:
logger.debug("Antigravity token refresh failed: %s", exc)
return None
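On success the endpoint returns Google's standard OAuth token-response JSON; the sketch below uses fabricated values, and only access_token (plus refresh_token when present) is consumed by the callers in this file.
# Fabricated example response (shape per Google's OAuth 2.0 token endpoint):
_example_token_data = {
    "access_token": "ya29.EXAMPLE_NOT_REAL",
    "expires_in": 3599,
    "token_type": "Bearer",
}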
def _save_refreshed_antigravity_credentials(auth_data: dict, token_data: dict) -> None:
"""Write refreshed tokens back to the Antigravity JSON credentials file.
Skipped for IDE-sourced credentials (the IDE manages its own DB).
Updates ``accounts[0].accessToken`` (and ``refreshToken`` if present),
then persists ``last_refresh`` as an ISO-8601 UTC string.
"""
from datetime import datetime
# IDE manages its own state — we do not write back to its SQLite DB
if auth_data.get("_source") == "ide":
return
try:
accounts = auth_data.get("accounts", [])
if not accounts:
return
account = accounts[0]
account["accessToken"] = token_data["access_token"]
if "refresh_token" in token_data:
account["refreshToken"] = token_data["refresh_token"]
auth_data["accounts"] = accounts
auth_data["last_refresh"] = datetime.now(UTC).isoformat()
ANTIGRAVITY_AUTH_FILE.parent.mkdir(parents=True, exist_ok=True)
fd = os.open(ANTIGRAVITY_AUTH_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "w", encoding="utf-8") as f:
json.dump(auth_data, f, indent=2)
logger.debug("Antigravity credentials refreshed and saved")
except (OSError, KeyError) as exc:
logger.debug("Failed to save refreshed Antigravity credentials: %s", exc)
def get_antigravity_token() -> str | None:
"""Get the OAuth access token from an Antigravity subscription.
Credential sources checked in order:
1. Antigravity IDE SQLite state DB (native app, macOS/Linux)
2. antigravity-auth CLI JSON file
For IDE credentials the token is read directly (the IDE refreshes it
automatically while running). For JSON credentials an automatic OAuth
refresh is attempted when the token is near expiry.
Returns:
The ``ya29.*`` Google OAuth access token, or None if unavailable.
"""
auth_data = _read_antigravity_credentials()
if not auth_data:
return None
accounts = auth_data.get("accounts", [])
if not accounts:
return None
account = accounts[0]
access_token = account.get("accessToken")
if not access_token:
return None
if not _is_antigravity_token_expired(auth_data):
return access_token
# Token is expired or near expiry — attempt a refresh
refresh_token = account.get("refreshToken")
if not refresh_token:
logger.warning(
"Antigravity token expired and no refresh token available. "
"Re-open the Antigravity IDE to refresh, or run 'antigravity-auth accounts add'."
)
return access_token # return stale token; proxy may still accept it briefly
logger.info("Antigravity token expired or near expiry, refreshing...")
token_data = _refresh_antigravity_token(refresh_token)
if token_data and "access_token" in token_data:
_save_refreshed_antigravity_credentials(auth_data, token_data)
return token_data["access_token"]
logger.warning(
"Antigravity token refresh failed. "
"Re-open the Antigravity IDE or run 'antigravity-auth accounts add'."
)
return access_token
def _is_antigravity_proxy_available() -> bool:
"""Return True if antigravity-auth serve is running on localhost:8069."""
import socket
try:
with socket.create_connection(("localhost", 8069), timeout=0.5):
return True
except (OSError, TimeoutError):
return False
@dataclass
class AgentInfo:
"""Information about an exported agent."""
@@ -535,9 +883,6 @@ class AgentInfo:
constraints: list[dict]
required_tools: list[str]
has_tools_module: bool
# Multi-entry-point support
async_entry_points: list[dict] = field(default_factory=list)
is_multi_entry_point: bool = False
@dataclass
@@ -595,22 +940,6 @@ def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
)
edges.append(edge)
# Build AsyncEntryPointSpec objects for multi-entry-point support
async_entry_points = []
for aep_data in graph_data.get("async_entry_points", []):
async_entry_points.append(
AsyncEntryPointSpec(
id=aep_data["id"],
name=aep_data.get("name", aep_data["id"]),
entry_node=aep_data["entry_node"],
trigger_type=aep_data.get("trigger_type", "manual"),
trigger_config=aep_data.get("trigger_config", {}),
isolation_level=aep_data.get("isolation_level", "shared"),
priority=aep_data.get("priority", 0),
max_concurrent=aep_data.get("max_concurrent", 10),
)
)
# Build GraphSpec
graph = GraphSpec(
id=graph_data.get("id", "agent-graph"),
@@ -618,7 +947,6 @@ def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
version=graph_data.get("version", "1.0.0"),
entry_node=graph_data.get("entry_node", ""),
entry_points=graph_data.get("entry_points", {}), # Support pause/resume architecture
async_entry_points=async_entry_points, # Support multi-entry-point agents
terminal_nodes=graph_data.get("terminal_nodes", []),
pause_nodes=graph_data.get("pause_nodes", []), # Support pause/resume architecture
nodes=nodes,
@@ -770,8 +1098,6 @@ class AgentRunner:
# AgentRuntime — unified execution path for all agents
self._agent_runtime: AgentRuntime | None = None
self._uses_async_entry_points = self.graph.has_async_entry_points()
# Pre-load validation: structural checks + credentials.
# Fails fast with actionable guidance — no MCP noise on screen.
run_preload_validation(
@@ -891,10 +1217,32 @@ class AgentRunner:
if agent_config and hasattr(agent_config, "max_tokens"):
max_tokens = agent_config.max_tokens
logger.info(
"Agent default_config overrides max_tokens: %d "
"(configuration.json value ignored)",
max_tokens,
)
else:
hive_config = get_hive_config()
max_tokens = hive_config.get("llm", {}).get("max_tokens", DEFAULT_MAX_TOKENS)
# Resolve max_context_tokens with priority:
# 1. agent loop_config["max_context_tokens"] (explicit, wins silently)
# 2. agent default_config.max_context_tokens (logged)
# 3. configuration.json llm.max_context_tokens
# 4. hardcoded default (32_000)
agent_loop_config: dict = dict(getattr(agent_module, "loop_config", {}))
if "max_context_tokens" not in agent_loop_config:
if agent_config and hasattr(agent_config, "max_context_tokens"):
agent_loop_config["max_context_tokens"] = agent_config.max_context_tokens
logger.info(
"Agent default_config overrides max_context_tokens: %d"
" (configuration.json value ignored)",
agent_config.max_context_tokens,
)
else:
agent_loop_config["max_context_tokens"] = get_max_context_tokens()
# Read intro_message from agent metadata (shown on TUI load)
agent_metadata = getattr(agent_module, "metadata", None)
intro_message = ""
@@ -908,13 +1256,12 @@ class AgentRunner:
"version": "1.0.0",
"entry_node": getattr(agent_module, "entry_node", nodes[0].id),
"entry_points": getattr(agent_module, "entry_points", {}),
"async_entry_points": getattr(agent_module, "async_entry_points", []),
"terminal_nodes": getattr(agent_module, "terminal_nodes", []),
"pause_nodes": getattr(agent_module, "pause_nodes", []),
"nodes": nodes,
"edges": edges,
"max_tokens": max_tokens,
"loop_config": getattr(agent_module, "loop_config", {}),
"loop_config": agent_loop_config,
}
# Only pass optional fields if explicitly defined by the agent module
conversation_mode = getattr(agent_module, "conversation_mode", None)
@@ -926,6 +1273,12 @@ class AgentRunner:
graph = GraphSpec(**graph_kwargs)
# Generate flowchart.json if missing (for template/legacy agents)
generate_fallback_flowchart(graph, goal, agent_path)
# Read skill configuration from agent module
agent_default_skills = getattr(agent_module, "default_skills", None)
agent_skills = getattr(agent_module, "skills", None)
# Read runtime config (webhook settings, etc.) if defined
agent_runtime_config = getattr(agent_module, "runtime_config", None)
@@ -937,7 +1290,7 @@ class AgentRunner:
configure_fn = getattr(agent_module, "configure_for_account", None)
list_accts_fn = getattr(agent_module, "list_connected_accounts", None)
return cls(
runner = cls(
agent_path=agent_path,
graph=graph,
goal=goal,
@@ -953,6 +1306,10 @@ class AgentRunner:
list_accounts=list_accts_fn,
credential_store=credential_store,
)
# Stash skill config for use in _setup()
runner._agent_default_skills = agent_default_skills
runner._agent_skills = agent_skills
return runner
# Fallback: load from agent.json (legacy JSON-based agents)
agent_json_path = agent_path / "agent.json"
@@ -970,7 +1327,10 @@ class AgentRunner:
except json.JSONDecodeError as exc:
raise ValueError(f"Invalid JSON in agent export file: {agent_json_path}") from exc
return cls(
# Generate flowchart.json if missing (for legacy JSON-based agents)
generate_fallback_flowchart(graph, goal, agent_path)
runner = cls(
agent_path=agent_path,
graph=graph,
goal=goal,
@@ -981,6 +1341,9 @@ class AgentRunner:
skip_credential_validation=skip_credential_validation or False,
credential_store=credential_store,
)
runner._agent_default_skills = None
runner._agent_skills = None
return runner
def register_tool(
self,
@@ -1091,7 +1454,10 @@ class AgentRunner:
# Create LLM provider
# Uses LiteLLM which auto-detects the provider from model name
if self.mock_mode:
# Skip if already injected (e.g. worker agents with a pre-built LLM)
if self._llm is not None:
pass # LLM already configured externally
elif self.mock_mode:
# Use mock LLM for testing without real API calls
from framework.llm.mock import MockLLMProvider
@@ -1104,6 +1470,8 @@ class AgentRunner:
llm_config = config.get("llm", {})
use_claude_code = llm_config.get("use_claude_code_subscription", False)
use_codex = llm_config.get("use_codex_subscription", False)
use_kimi_code = llm_config.get("use_kimi_code_subscription", False)
use_antigravity = llm_config.get("use_antigravity_subscription", False)
api_base = llm_config.get("api_base")
api_key = None
@@ -1119,6 +1487,14 @@ class AgentRunner:
if not api_key:
print("Warning: Codex subscription configured but no token found.")
print("Run 'codex' to authenticate, then try again.")
elif use_kimi_code:
# Get API key from Kimi Code CLI config (~/.kimi/config.toml)
api_key = get_kimi_code_token()
if not api_key:
print("Warning: Kimi Code subscription configured but no key found.")
print("Run 'kimi /login' to authenticate, then try again.")
elif use_antigravity:
pass # AntigravityProvider handles credentials internally
if api_key and use_claude_code:
# Use litellm's built-in Anthropic OAuth support.
@@ -1149,6 +1525,27 @@ class AgentRunner:
store=False,
allowed_openai_params=["store"],
)
elif api_key and use_kimi_code:
# Kimi Code subscription uses the Kimi coding API (OpenAI-compatible).
# The api_base is set automatically by LiteLLMProvider for kimi/ models.
self._llm = LiteLLMProvider(
model=self.model,
api_key=api_key,
api_base=api_base,
)
elif use_antigravity:
# Direct OAuth to Google's internal Cloud Code Assist gateway.
# No local proxy required — AntigravityProvider handles token
# refresh and Gemini-format request/response conversion natively.
from framework.llm.antigravity import AntigravityProvider # noqa: PLC0415
provider = AntigravityProvider(model=self.model)
if not provider.has_credentials():
print(
"Warning: Antigravity credentials not found. "
"Run: uv run python core/antigravity_auth.py auth account add"
)
self._llm = provider
else:
# Local models (e.g. Ollama) don't need an API key
if self._is_local_model(self.model):
@@ -1275,6 +1672,20 @@ class AgentRunner:
except Exception:
pass # Best-effort — agent works without account info
# Skill configuration — the runtime handles discovery, loading, trust-gating and
# prompt rendering. The runner just builds the config.
from framework.skills.config import SkillsConfig
from framework.skills.manager import SkillsManagerConfig
skills_manager_config = SkillsManagerConfig(
skills_config=SkillsConfig.from_agent_vars(
default_skills=getattr(self, "_agent_default_skills", None),
skills=getattr(self, "_agent_skills", None),
),
project_root=self.agent_path,
interactive=self._interactive,
)
self._setup_agent_runtime(
tools,
tool_executor,
@@ -1282,6 +1693,7 @@ class AgentRunner:
accounts_data=accounts_data,
tool_provider_map=tool_provider_map,
event_bus=event_bus,
skills_manager_config=skills_manager_config,
)
def _get_api_key_env_var(self, model: str) -> str | None:
@@ -1302,6 +1714,8 @@ class AgentRunner:
return "MISTRAL_API_KEY"
elif model_lower.startswith("groq/"):
return "GROQ_API_KEY"
elif model_lower.startswith("openrouter/"):
return "OPENROUTER_API_KEY"
elif self._is_local_model(model_lower):
return None # Local models don't need an API key
elif model_lower.startswith("azure/"):
@@ -1314,6 +1728,10 @@ class AgentRunner:
return "TOGETHER_API_KEY"
elif model_lower.startswith("minimax/") or model_lower.startswith("minimax-"):
return "MINIMAX_API_KEY"
elif model_lower.startswith("kimi/"):
return "KIMI_API_KEY"
elif model_lower.startswith("hive/"):
return "HIVE_API_KEY"
else:
# Default: assume OpenAI-compatible
return "OPENAI_API_KEY"
@@ -1334,6 +1752,10 @@ class AgentRunner:
cred_id = "anthropic"
elif model_lower.startswith("minimax/") or model_lower.startswith("minimax-"):
cred_id = "minimax"
elif model_lower.startswith("kimi/"):
cred_id = "kimi"
elif model_lower.startswith("hive/"):
cred_id = "hive"
# Add more mappings as providers are added to LLM_CREDENTIALS
if cred_id is None:
@@ -1373,23 +1795,13 @@ class AgentRunner:
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
event_bus=None,
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
skill_dirs: list[str] | None = None,
skills_manager_config=None,
) -> None:
"""Set up multi-entry-point execution using AgentRuntime."""
# Convert AsyncEntryPointSpec to EntryPointSpec for AgentRuntime
entry_points = []
for async_ep in self.graph.async_entry_points:
ep = EntryPointSpec(
id=async_ep.id,
name=async_ep.name,
entry_node=async_ep.entry_node,
trigger_type=async_ep.trigger_type,
trigger_config=async_ep.trigger_config,
isolation_level=async_ep.isolation_level,
priority=async_ep.priority,
max_concurrent=async_ep.max_concurrent,
max_resurrections=async_ep.max_resurrections,
)
entry_points.append(ep)
# Always create a primary entry point for the graph's entry node.
# For multi-entry-point agents this ensures the primary path (e.g.
@@ -1446,26 +1858,37 @@ class AgentRunner:
accounts_data=accounts_data,
tool_provider_map=tool_provider_map,
event_bus=event_bus,
skills_manager_config=skills_manager_config,
)
# Pass intro_message through for TUI display
self._agent_runtime.intro_message = self.intro_message
# ------------------------------------------------------------------
# Execution modes
#
# run() One-shot, blocking execution for worker agents
# (headless CLI via ``hive run``). Validates, runs
# the graph to completion, and returns the result.
#
# start() / trigger() Long-lived runtime for the frontend (queen).
# start() boots the runtime; trigger() sends
# non-blocking execution requests. Used by the
# server session manager and API routes.
# ------------------------------------------------------------------
async def run(
self,
input_data: dict | None = None,
session_state: dict | None = None,
entry_point_id: str | None = None,
) -> ExecutionResult:
"""
Execute the agent with given input data.
"""One-shot execution for worker agents (headless CLI).
Validates credentials before execution. If any required credentials
are missing, returns an error result with instructions on how to
provide them.
Validates credentials, runs the graph to completion, and returns
the result. Used by ``hive run`` and programmatic callers.
For single-entry-point agents, this is the standard execution path.
For multi-entry-point agents, you can optionally specify which entry point to use.
For the frontend (queen), use start() + trigger() instead.
Args:
input_data: Input data for the agent (e.g., {"lead_id": "123"})
@@ -1591,7 +2014,12 @@ class AgentRunner:
# === Runtime API ===
async def start(self) -> None:
"""Start the agent runtime."""
"""Boot the agent runtime for the frontend (queen).
Pair with trigger() to send execution requests. Used by the
server session manager. For headless worker agents, use run()
instead.
"""
if self._agent_runtime is None:
self._setup()
@@ -1608,10 +2036,10 @@ class AgentRunner:
input_data: dict[str, Any],
correlation_id: str | None = None,
) -> str:
"""
Trigger execution at a specific entry point (non-blocking).
"""Send a non-blocking execution request to a running runtime.
Returns execution ID for tracking.
Used by the server API routes after start(). For headless
worker agents, use run() instead.
Args:
entry_point_id: Which entry point to trigger
@@ -1696,19 +2124,6 @@ class AgentRunner:
for edge in self.graph.edges
]
# Build async entry points info
async_entry_points_info = [
{
"id": ep.id,
"name": ep.name,
"entry_node": ep.entry_node,
"trigger_type": ep.trigger_type,
"isolation_level": ep.isolation_level,
"max_concurrent": ep.max_concurrent,
}
for ep in self.graph.async_entry_points
]
return AgentInfo(
name=self.graph.id,
description=self.graph.description,
@@ -1735,8 +2150,6 @@ class AgentRunner:
],
required_tools=sorted(required_tools),
has_tools_module=(self.agent_path / "tools.py").exists(),
async_entry_points=async_entry_points_info,
is_multi_entry_point=self._uses_async_entry_points,
)
def validate(self) -> ValidationResult:
@@ -2051,18 +2464,6 @@ Respond with JSON only:
trigger_type="manual",
isolation_level="shared",
)
for aep in runner.graph.async_entry_points:
entry_points[aep.id] = EntryPointSpec(
id=aep.id,
name=aep.name,
entry_node=aep.entry_node,
trigger_type=aep.trigger_type,
trigger_config=aep.trigger_config,
isolation_level=aep.isolation_level,
priority=aep.priority,
max_concurrent=aep.max_concurrent,
)
await runtime.add_graph(
graph_id=gid,
graph=runner.graph,
+60 -16
@@ -54,6 +54,8 @@ class ToolRegistry:
def __init__(self):
self._tools: dict[str, RegisteredTool] = {}
self._mcp_clients: list[Any] = [] # List of MCPClient instances
self._mcp_client_servers: dict[int, str] = {} # client id -> server name
self._mcp_managed_clients: set[int] = set() # client ids acquired from the manager
self._session_context: dict[str, Any] = {} # Auto-injected context for tools
self._provider_index: dict[str, set[str]] = {} # provider -> tool names
# MCP resync tracking
@@ -243,6 +245,13 @@ class ToolRegistry:
def _wrap_result(tool_use_id: str, result: Any) -> ToolResult:
if isinstance(result, ToolResult):
return result
# MCP client returns dict with _images when image content is present
if isinstance(result, dict) and "_images" in result:
return ToolResult(
tool_use_id=tool_use_id,
content=result.get("_text", ""),
image_content=result["_images"],
)
return ToolResult(
tool_use_id=tool_use_id,
content=json.dumps(result) if not isinstance(result, str) else result,
@@ -455,11 +464,23 @@ class ToolRegistry:
for server_config in server_list:
server_config = self._resolve_mcp_server_config(server_config, base_dir)
try:
self.register_mcp_server(server_config)
except Exception as e:
name = server_config.get("name", "unknown")
logger.warning(f"Failed to register MCP server '{name}': {e}")
for _attempt in range(2):
try:
self.register_mcp_server(server_config)
break
except Exception as e:
name = server_config.get("name", "unknown")
if _attempt == 0:
logger.warning(
"MCP server '%s' failed to register, retrying in 2s: %s",
name,
e,
)
import time
time.sleep(2)
else:
logger.warning("MCP server '%s' failed after retry: %s", name, e)
# Snapshot credential files and ADEN_API_KEY so we can detect mid-session changes
self._mcp_cred_snapshot = self._snapshot_credentials()
@@ -468,6 +489,7 @@ class ToolRegistry:
def register_mcp_server(
self,
server_config: dict[str, Any],
use_connection_manager: bool = True,
) -> int:
"""
Register an MCP server and discover its tools.
@@ -483,12 +505,14 @@ class ToolRegistry:
- url: Server URL (for http)
- headers: HTTP headers (for http)
- description: Server description (optional)
use_connection_manager: When True, reuse a shared client keyed by server name
Returns:
Number of tools registered from this server
"""
try:
from framework.runner.mcp_client import MCPClient, MCPServerConfig
from framework.runner.mcp_connection_manager import MCPConnectionManager
# Build config object
config = MCPServerConfig(
@@ -504,11 +528,18 @@ class ToolRegistry:
)
# Create and connect client
client = MCPClient(config)
client.connect()
if use_connection_manager:
client = MCPConnectionManager.get_instance().acquire(config)
else:
client = MCPClient(config)
client.connect()
# Store client for cleanup
self._mcp_clients.append(client)
client_id = id(client)
self._mcp_client_servers[client_id] = config.name
if use_connection_manager:
self._mcp_managed_clients.add(client_id)
# Register each tool
server_name = server_config["name"]
@@ -548,7 +579,9 @@ class ToolRegistry:
}
merged_inputs = {**clean_inputs, **filtered_context}
result = client_ref.call_tool(tool_name, merged_inputs)
# MCP tools return content array, extract the result
# MCP client already extracts content (returns str
# or {"_text": ..., "_images": ...} for image results).
# Handle legacy list format from HTTP transport.
if isinstance(result, list) and len(result) > 0:
if isinstance(result[0], dict) and "text" in result[0]:
return result[0]["text"]
@@ -708,12 +741,7 @@ class ToolRegistry:
logger.info("%s — resyncing MCP servers", reason)
# 1. Disconnect existing MCP clients
for client in self._mcp_clients:
try:
client.disconnect()
except Exception as e:
logger.warning(f"Error disconnecting MCP client during resync: {e}")
self._mcp_clients.clear()
self._cleanup_mcp_clients("during resync")
# 2. Remove MCP-registered tools
for name in self._mcp_tool_names:
@@ -728,12 +756,28 @@ class ToolRegistry:
def cleanup(self) -> None:
"""Clean up all MCP client connections."""
self._cleanup_mcp_clients()
def _cleanup_mcp_clients(self, context: str = "") -> None:
"""Disconnect or release all tracked MCP clients for this registry."""
if context:
context = f" {context}"
for client in self._mcp_clients:
client_id = id(client)
server_name = self._mcp_client_servers.get(client_id, client.config.name)
try:
client.disconnect()
if client_id in self._mcp_managed_clients:
from framework.runner.mcp_connection_manager import MCPConnectionManager
MCPConnectionManager.get_instance().release(server_name)
else:
client.disconnect()
except Exception as e:
logger.warning(f"Error disconnecting MCP client: {e}")
logger.warning(f"Error disconnecting MCP client{context}: {e}")
self._mcp_clients.clear()
self._mcp_client_servers.clear()
self._mcp_managed_clients.clear()
def __del__(self):
"""Destructor to ensure cleanup."""
+2 -2
@@ -454,11 +454,11 @@ An agent has requested handoff to the Hive Coder (via the `escalate` synthetic t
## Worker Health Monitoring
These events form the **judge → queen → operator** escalation pipeline.
These events form the **queen → operator** escalation pipeline.
### `worker_escalation_ticket`
The Worker Health Judge has detected a degradation pattern and is escalating to the Queen.
A worker degradation pattern has been detected and is being escalated to the Queen.
| Data Field | Type | Description |
| ---------- | ------ | ------------------------------------ |
+102 -12
@@ -8,6 +8,7 @@ while preserving the goal-driven approach.
import asyncio
import logging
import time
import uuid
from collections.abc import Callable
from dataclasses import dataclass, field
from datetime import datetime
@@ -28,6 +29,7 @@ if TYPE_CHECKING:
from framework.graph.edge import GraphSpec
from framework.graph.goal import Goal
from framework.llm.provider import LLMProvider, Tool
from framework.skills.manager import SkillsManagerConfig
logger = logging.getLogger(__name__)
@@ -131,6 +133,11 @@ class AgentRuntime:
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
event_bus: "EventBus | None" = None,
skills_manager_config: "SkillsManagerConfig | None" = None,
# Deprecated — pass skills_manager_config instead.
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
skill_dirs: list[str] | None = None,
):
"""
Initialize agent runtime.
@@ -152,7 +159,16 @@ class AgentRuntime:
event_bus: Optional external EventBus. If provided, the runtime shares
this bus instead of creating its own. Used by SessionManager to
share a single bus between queen, worker, and judge.
skills_catalog_prompt: Available skills catalog for system prompt
protocols_prompt: Default skill operational protocols for system prompt
skill_dirs: Skill base directories for Tier 3 resource access
skills_manager_config: Skill configuration; the runtime owns
discovery, loading, and prompt rendering internally.
skills_catalog_prompt: Deprecated. Pre-rendered skills catalog.
protocols_prompt: Deprecated. Pre-rendered operational protocols.
"""
from framework.skills.manager import SkillsManager
self.graph = graph
self.goal = goal
self._config = config or AgentRuntimeConfig()
@@ -160,6 +176,31 @@ class AgentRuntime:
self._checkpoint_config = checkpoint_config
self.accounts_prompt = accounts_prompt
# --- Skill lifecycle: runtime owns the SkillsManager ---
if skills_manager_config is not None:
# New path: config-driven, runtime handles loading
self._skills_manager = SkillsManager(skills_manager_config)
self._skills_manager.load()
elif skills_catalog_prompt or protocols_prompt:
# Legacy path: caller passed pre-rendered strings
import warnings
warnings.warn(
"Passing pre-rendered skills_catalog_prompt/protocols_prompt "
"is deprecated. Pass skills_manager_config instead.",
DeprecationWarning,
stacklevel=2,
)
self._skills_manager = SkillsManager.from_precomputed(
skills_catalog_prompt, protocols_prompt
)
else:
# Bare constructor: auto-load defaults
self._skills_manager = SkillsManager()
self._skills_manager.load()
self.skill_dirs: list[str] = self._skills_manager.allowlisted_dirs
# Primary graph identity
self._graph_id: str = graph_id or "primary"
@@ -215,6 +256,18 @@ class AgentRuntime:
# Optional greeting shown to user on TUI load (set by AgentRunner)
self.intro_message: str = ""
# ------------------------------------------------------------------
# Skill prompt accessors (read by ExecutionStream constructors)
# ------------------------------------------------------------------
@property
def skills_catalog_prompt(self) -> str:
return self._skills_manager.skills_catalog_prompt
@property
def protocols_prompt(self) -> str:
return self._skills_manager.protocols_prompt
def register_entry_point(self, spec: EntryPointSpec) -> None:
"""
Register a named entry point for the agent.
@@ -292,6 +345,9 @@ class AgentRuntime:
accounts_prompt=self._accounts_prompt,
accounts_data=self._accounts_data,
tool_provider_map=self._tool_provider_map,
skills_catalog_prompt=self.skills_catalog_prompt,
protocols_prompt=self.protocols_prompt,
skill_dirs=self.skill_dirs,
)
await stream.start()
self._streams[ep_id] = stream
@@ -392,18 +448,24 @@ class AgentRuntime:
tc = spec.trigger_config
cron_expr = tc.get("cron")
interval = tc.get("interval_minutes")
_raw_interval = tc.get("interval_minutes")
interval = float(_raw_interval) if _raw_interval is not None else None
run_immediately = tc.get("run_immediately", False)
if cron_expr:
# Cron expression mode — takes priority over interval_minutes
try:
from croniter import croniter
except ImportError as e:
raise RuntimeError(
"croniter is required for cron-based entry points. "
"Install it with: uv pip install croniter"
) from e
# Validate the expression upfront
try:
if not croniter.is_valid(cron_expr):
raise ValueError(f"Invalid cron expression: {cron_expr}")
except (ImportError, ValueError) as e:
except ValueError as e:
logger.warning(
"Entry point '%s' has invalid cron config: %s",
ep_id,
@@ -543,7 +605,7 @@ class AgentRuntime:
ep_id,
cron_expr,
run_immediately,
idle_timeout=tc.get("idle_timeout_seconds", 300),
idle_timeout=float(tc.get("idle_timeout_seconds", 300)),
)()
)
self._timer_tasks.append(task)
@@ -673,7 +735,7 @@ class AgentRuntime:
ep_id,
interval,
run_immediately,
idle_timeout=tc.get("idle_timeout_seconds", 300),
idle_timeout=float(tc.get("idle_timeout_seconds", 300)),
)()
)
self._timer_tasks.append(task)
@@ -822,7 +884,8 @@ class AgentRuntime:
if stream is None:
raise ValueError(f"Entry point '{entry_point_id}' not found")
return await stream.execute(input_data, correlation_id, session_state)
run_id = uuid.uuid4().hex[:12]
return await stream.execute(input_data, correlation_id, session_state, run_id=run_id)
async def trigger_and_wait(
self,
@@ -919,6 +982,9 @@ class AgentRuntime:
accounts_prompt=self._accounts_prompt,
accounts_data=self._accounts_data,
tool_provider_map=self._tool_provider_map,
skills_catalog_prompt=self.skills_catalog_prompt,
protocols_prompt=self.protocols_prompt,
skill_dirs=self.skill_dirs,
)
if self._running:
await stream.start()
@@ -997,7 +1063,8 @@ class AgentRuntime:
if spec.trigger_type != "timer":
continue
tc = spec.trigger_config
interval = tc.get("interval_minutes")
_raw_interval = tc.get("interval_minutes")
interval = float(_raw_interval) if _raw_interval is not None else None
run_immediately = tc.get("run_immediately", False)
if interval and interval > 0 and self._running:
@@ -1142,7 +1209,7 @@ class AgentRuntime:
ep_id,
interval,
run_immediately,
idle_timeout=tc.get("idle_timeout_seconds", 300),
idle_timeout=float(tc.get("idle_timeout_seconds", 300)),
)()
)
timer_tasks.append(task)
@@ -1359,8 +1426,8 @@ class AgentRuntime:
allowed_keys = set(entry_node.input_keys)
# Search primary graph's streams for an active session.
# Skip isolated streams (e.g. health judge) — they have their own
# session directories and must never be used as a shared session.
# Skip isolated streams — they have their own session directories
# and must never be used as a shared session.
all_streams: list[tuple[str, ExecutionStream]] = []
for _gid, reg in self._graphs.items():
for ep_id, stream in reg.streams.items():
@@ -1407,6 +1474,7 @@ class AgentRuntime:
graph_id: str | None = None,
*,
is_client_input: bool = False,
image_content: list[dict[str, Any]] | None = None,
) -> bool:
"""Inject user input into a running client-facing node.
@@ -1419,6 +1487,8 @@ class AgentRuntime:
graph_id: Optional graph to search first (defaults to active graph)
is_client_input: True when the message originates from a real
human user (e.g. /chat endpoint), False for external events.
image_content: Optional list of image content blocks (OpenAI
image_url format) to include alongside the text.
Returns:
True if input was delivered, False if no matching node found
@@ -1430,7 +1500,9 @@ class AgentRuntime:
target = graph_id or self._active_graph_id
if target in self._graphs:
for stream in self._graphs[target].streams.values():
if await stream.inject_input(node_id, content, is_client_input=is_client_input):
if await stream.inject_input(
node_id, content, is_client_input=is_client_input, image_content=image_content
):
return True
# Then search all other graphs
@@ -1438,7 +1510,9 @@ class AgentRuntime:
if gid == target:
continue
for stream in reg.streams.values():
if await stream.inject_input(node_id, content, is_client_input=is_client_input):
if await stream.inject_input(
node_id, content, is_client_input=is_client_input, image_content=image_content
):
return True
return False
@@ -1697,6 +1771,11 @@ def create_agent_runtime(
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
event_bus: "EventBus | None" = None,
skills_manager_config: "SkillsManagerConfig | None" = None,
# Deprecated — pass skills_manager_config instead.
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
skill_dirs: list[str] | None = None,
) -> AgentRuntime:
"""
Create and configure an AgentRuntime with entry points.
@@ -1723,6 +1802,13 @@ def create_agent_runtime(
accounts_data: Raw account data for per-node prompt generation.
tool_provider_map: Tool name to provider name mapping for account routing.
event_bus: Optional external EventBus to share with other components.
skills_catalog_prompt: Available skills catalog for system prompt.
protocols_prompt: Default skill operational protocols for system prompt.
skill_dirs: Skill base directories for Tier 3 resource access.
skills_manager_config: Skill configuration; the runtime owns
discovery, loading, and prompt rendering internally.
skills_catalog_prompt: Deprecated. Pre-rendered skills catalog.
protocols_prompt: Deprecated. Pre-rendered operational protocols.
Returns:
Configured AgentRuntime (not yet started)
@@ -1749,6 +1835,10 @@ def create_agent_runtime(
accounts_data=accounts_data,
tool_provider_map=tool_provider_map,
event_bus=event_bus,
skills_manager_config=skills_manager_config,
skills_catalog_prompt=skills_catalog_prompt,
protocols_prompt=protocols_prompt,
skill_dirs=skill_dirs,
)
for spec in entry_points:
+5 -5
@@ -1,4 +1,4 @@
"""EscalationTicket — structured schema for worker health judge escalations."""
"""EscalationTicket — structured schema for worker health escalations."""
from __future__ import annotations
@@ -10,10 +10,10 @@ from pydantic import BaseModel, Field
class EscalationTicket(BaseModel):
"""Structured escalation report emitted by the Worker Health Judge.
"""Structured escalation report for worker health monitoring.
The judge must fill every field before calling emit_escalation_ticket.
Pydantic validation rejects partial tickets, preventing impulsive escalation.
All fields must be filled before calling emit_escalation_ticket.
Pydantic validation rejects partial tickets.
"""
ticket_id: str = Field(default_factory=lambda: str(uuid4()))
@@ -25,7 +25,7 @@ class EscalationTicket(BaseModel):
worker_node_id: str
worker_graph_id: str
# Problem characterization (filled by judge via LLM deliberation)
# Problem characterization
severity: Literal["low", "medium", "high", "critical"]
cause: str # Human-readable: "Node has produced 18 RETRY verdicts..."
judge_reasoning: str # Judge's own deliberation chain
+193 -8
@@ -97,6 +97,7 @@ class EventType(StrEnum):
# Client I/O (client_facing=True nodes only)
CLIENT_OUTPUT_DELTA = "client_output_delta"
CLIENT_INPUT_REQUESTED = "client_input_requested"
CLIENT_INPUT_RECEIVED = "client_input_received"
# Internal node observability (client_facing=False nodes)
NODE_INTERNAL_OUTPUT = "node_internal_output"
@@ -104,7 +105,7 @@ class EventType(StrEnum):
NODE_STALLED = "node_stalled"
NODE_TOOL_DOOM_LOOP = "node_tool_doom_loop"
# Judge decisions
# Judge decisions (implicit judge in event loop nodes)
JUDGE_VERDICT = "judge_verdict"
# Output tracking
@@ -116,6 +117,7 @@ class EventType(StrEnum):
# Context management
CONTEXT_COMPACTED = "context_compacted"
CONTEXT_USAGE_UPDATED = "context_usage_updated"
# External triggers
WEBHOOK_RECEIVED = "webhook_received"
@@ -126,7 +128,7 @@ class EventType(StrEnum):
# Escalation (agent requests handoff to queen)
ESCALATION_REQUESTED = "escalation_requested"
# Worker health monitoring (judge → queen → operator)
# Worker health monitoring
WORKER_ESCALATION_TICKET = "worker_escalation_ticket"
QUEEN_INTERVENTION_REQUESTED = "queen_intervention_requested"
@@ -137,6 +139,12 @@ class EventType(StrEnum):
WORKER_LOADED = "worker_loaded"
CREDENTIALS_REQUIRED = "credentials_required"
# Draft graph (planning phase — lightweight graph preview)
DRAFT_GRAPH_UPDATED = "draft_graph_updated"
# Flowchart map updated (after reconciliation with runtime graph)
FLOWCHART_MAP_UPDATED = "flowchart_map_updated"
# Queen phase changes (building <-> staging <-> running)
QUEEN_PHASE_CHANGED = "queen_phase_changed"
@@ -146,6 +154,14 @@ class EventType(StrEnum):
# Subagent reports (one-way progress updates from sub-agents)
SUBAGENT_REPORT = "subagent_report"
# Trigger lifecycle (queen-level triggers / heartbeats)
TRIGGER_AVAILABLE = "trigger_available"
TRIGGER_ACTIVATED = "trigger_activated"
TRIGGER_DEACTIVATED = "trigger_deactivated"
TRIGGER_FIRED = "trigger_fired"
TRIGGER_REMOVED = "trigger_removed"
TRIGGER_UPDATED = "trigger_updated"
@dataclass
class AgentEvent:
@@ -159,10 +175,11 @@ class AgentEvent:
timestamp: datetime = field(default_factory=datetime.now)
correlation_id: str | None = None # For tracking related events
graph_id: str | None = None # Which graph emitted this event (multi-graph sessions)
run_id: str | None = None # Unique ID per trigger() invocation — used for run dividers
def to_dict(self) -> dict:
"""Convert to dictionary for serialization."""
return {
d = {
"type": self.type.value,
"stream_id": self.stream_id,
"node_id": self.node_id,
@@ -172,6 +189,9 @@ class AgentEvent:
"correlation_id": self.correlation_id,
"graph_id": self.graph_id,
}
if self.run_id is not None:
d["run_id"] = self.run_id
return d
# Type for event handlers
@@ -240,6 +260,128 @@ class EventBus:
self._semaphore = asyncio.Semaphore(max_concurrent_handlers)
self._subscription_counter = 0
self._lock = asyncio.Lock()
# Per-session persistent event log (always-on, survives restarts)
self._session_log: IO[str] | None = None
self._session_log_iteration_offset: int = 0
# Accumulator for client_output_delta snapshots — flushed on llm_turn_complete.
# Key: (stream_id, node_id, execution_id, iteration, inner_turn) → latest AgentEvent
self._pending_output_snapshots: dict[tuple, AgentEvent] = {}
def set_session_log(self, path: Path, *, iteration_offset: int = 0) -> None:
"""Enable per-session event persistence to a JSONL file.
Called once when the queen starts so that all events survive server
restarts and can be replayed to reconstruct the frontend state.
``iteration_offset`` is added to the ``iteration`` field in logged
events so that cold-resumed sessions produce monotonically increasing
iteration values preventing frontend message ID collisions between
the original run and resumed runs.
"""
if self._session_log is not None:
try:
self._session_log.close()
except Exception:
pass
path.parent.mkdir(parents=True, exist_ok=True)
self._session_log = open(path, "a", encoding="utf-8") # noqa: SIM115
self._session_log_iteration_offset = iteration_offset
logger.info("Session event log → %s (iteration_offset=%d)", path, iteration_offset)
def close_session_log(self) -> None:
"""Close the per-session event log file."""
# Flush any pending output snapshots before closing
self._flush_pending_snapshots()
if self._session_log is not None:
try:
self._session_log.close()
except Exception:
pass
self._session_log = None
# Event types that are high-frequency streaming deltas — accumulated rather
# than written individually to the session log.
_STREAMING_DELTA_TYPES = frozenset(
{
EventType.CLIENT_OUTPUT_DELTA,
EventType.LLM_TEXT_DELTA,
EventType.LLM_REASONING_DELTA,
}
)
def _write_session_log_event(self, event: AgentEvent) -> None:
"""Write an event to the per-session log with streaming coalescing.
Streaming deltas (client_output_delta, llm_text_delta) are accumulated
in memory. When llm_turn_complete fires, any pending snapshots for that
(stream_id, node_id, execution_id) are flushed as single consolidated
events before the turn-complete event itself is written.
Note: iteration offset is already applied in publish() before this is
called, so events here already have correct iteration values.
"""
if self._session_log is None:
return
if event.type in self._STREAMING_DELTA_TYPES:
# Accumulate — keep only the latest event (which carries the full snapshot)
key = (
event.stream_id,
event.node_id,
event.execution_id,
event.data.get("iteration"),
event.data.get("inner_turn", 0),
)
self._pending_output_snapshots[key] = event
return
# On turn-complete, flush accumulated snapshots for this stream first
if event.type == EventType.LLM_TURN_COMPLETE:
self._flush_pending_snapshots(
stream_id=event.stream_id,
node_id=event.node_id,
execution_id=event.execution_id,
)
line = json.dumps(event.to_dict(), default=str)
self._session_log.write(line + "\n")
self._session_log.flush()
def _flush_pending_snapshots(
self,
stream_id: str | None = None,
node_id: str | None = None,
execution_id: str | None = None,
) -> None:
"""Flush accumulated streaming snapshots to the session log.
When called with filters, only matching entries are flushed.
When called without filters (e.g. on close), everything is flushed.
"""
if self._session_log is None or not self._pending_output_snapshots:
return
to_flush: list[tuple] = []
for key, _evt in self._pending_output_snapshots.items():
if stream_id is not None:
k_stream, k_node, k_exec, _, _ = key
if k_stream != stream_id or k_node != node_id or k_exec != execution_id:
continue
to_flush.append(key)
for key in to_flush:
evt = self._pending_output_snapshots.pop(key)
try:
line = json.dumps(evt.to_dict(), default=str)
self._session_log.write(line + "\n")
except Exception:
pass
if to_flush:
try:
self._session_log.flush()
except Exception:
pass
def subscribe(
self,
@@ -305,6 +447,19 @@ class EventBus:
Args:
event: Event to publish
"""
# Apply iteration offset at the source so ALL consumers (SSE subscribers,
# event history, session log) see the same monotonically increasing
# iteration values. Without this, live SSE would use raw iterations
# while events.jsonl would use offset iterations, causing ID collisions
# on the frontend when replaying after cold resume.
if (
self._session_log_iteration_offset
and isinstance(event.data, dict)
and "iteration" in event.data
):
offset = self._session_log_iteration_offset
event.data = {**event.data, "iteration": event.data["iteration"] + offset}
# Add to history
async with self._lock:
self._event_history.append(event)
@@ -325,6 +480,15 @@ class EventBus:
except Exception:
pass # never break event delivery
# Per-session persistent log (always-on when set_session_log was called).
# Streaming deltas are coalesced: client_output_delta and llm_text_delta
# are accumulated and flushed as a single snapshot event on llm_turn_complete.
if self._session_log is not None:
try:
self._write_session_log_event(event)
except Exception:
pass # never break event delivery
# Find matching subscriptions
matching_handlers: list[EventHandler] = []
@@ -385,6 +549,7 @@ class EventBus:
execution_id: str,
input_data: dict[str, Any] | None = None,
correlation_id: str | None = None,
run_id: str | None = None,
) -> None:
"""Emit execution started event."""
await self.publish(
@@ -394,6 +559,7 @@ class EventBus:
execution_id=execution_id,
data={"input": input_data or {}},
correlation_id=correlation_id,
run_id=run_id,
)
)
@@ -403,6 +569,7 @@ class EventBus:
execution_id: str,
output: dict[str, Any] | None = None,
correlation_id: str | None = None,
run_id: str | None = None,
) -> None:
"""Emit execution completed event."""
await self.publish(
@@ -412,6 +579,7 @@ class EventBus:
execution_id=execution_id,
data={"output": output or {}},
correlation_id=correlation_id,
run_id=run_id,
)
)
@@ -421,6 +589,7 @@ class EventBus:
execution_id: str,
error: str,
correlation_id: str | None = None,
run_id: str | None = None,
) -> None:
"""Emit execution failed event."""
await self.publish(
@@ -430,6 +599,7 @@ class EventBus:
execution_id=execution_id,
data={"error": error},
correlation_id=correlation_id,
run_id=run_id,
)
)
@@ -521,15 +691,19 @@ class EventBus:
node_id: str,
iteration: int,
execution_id: str | None = None,
extra_data: dict[str, Any] | None = None,
) -> None:
"""Emit node loop iteration event."""
data: dict[str, Any] = {"iteration": iteration}
if extra_data:
data.update(extra_data)
await self.publish(
AgentEvent(
type=EventType.NODE_LOOP_ITERATION,
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={"iteration": iteration},
data=data,
)
)
@@ -578,6 +752,7 @@ class EventBus:
content: str,
snapshot: str,
execution_id: str | None = None,
inner_turn: int = 0,
) -> None:
"""Emit LLM text delta event."""
await self.publish(
@@ -586,7 +761,7 @@ class EventBus:
stream_id=stream_id,
node_id=node_id,
execution_id=execution_id,
data={"content": content, "snapshot": snapshot},
data={"content": content, "snapshot": snapshot, "inner_turn": inner_turn},
)
)
@@ -616,6 +791,7 @@ class EventBus:
model: str,
input_tokens: int,
output_tokens: int,
cached_tokens: int = 0,
execution_id: str | None = None,
iteration: int | None = None,
) -> None:
@@ -625,6 +801,7 @@ class EventBus:
"model": model,
"input_tokens": input_tokens,
"output_tokens": output_tokens,
"cached_tokens": cached_tokens,
}
if iteration is not None:
data["iteration"] = iteration
@@ -700,9 +877,10 @@ class EventBus:
snapshot: str,
execution_id: str | None = None,
iteration: int | None = None,
inner_turn: int = 0,
) -> None:
"""Emit client output delta event (client_facing=True nodes)."""
data: dict = {"content": content, "snapshot": snapshot}
data: dict = {"content": content, "snapshot": snapshot, "inner_turn": inner_turn}
if iteration is not None:
data["iteration"] = iteration
await self.publish(
@@ -722,16 +900,23 @@ class EventBus:
prompt: str = "",
execution_id: str | None = None,
options: list[str] | None = None,
questions: list[dict] | None = None,
) -> None:
"""Emit client input requested event (client_facing=True nodes).
Args:
options: Optional predefined choices for the user (1-3 items).
The frontend appends an "Other" free-text option automatically.
The frontend appends an "Other" free-text option
automatically.
questions: Optional list of question dicts for multi-question
batches (from ask_user_multiple). Each dict has id,
prompt, and optional options.
"""
data: dict[str, Any] = {"prompt": prompt}
if options:
data["options"] = options
if questions:
data["questions"] = questions
await self.publish(
AgentEvent(
type=EventType.CLIENT_INPUT_REQUESTED,
@@ -994,7 +1179,7 @@ class EventBus:
ticket: dict,
execution_id: str | None = None,
) -> None:
"""Emitted by health judge when worker shows a degradation pattern."""
"""Emitted when worker shows a degradation pattern."""
await self.publish(
AgentEvent(
type=EventType.WORKER_ESCALATION_TICKET,
+110 -4
@@ -127,6 +127,7 @@ class ExecutionContext:
input_data: dict[str, Any]
isolation_level: IsolationLevel
session_state: dict[str, Any] | None = None # For resuming from pause
run_id: str | None = None # Unique ID per trigger() invocation
started_at: datetime = field(default_factory=datetime.now)
completed_at: datetime | None = None
status: str = "pending" # pending, running, completed, failed, paused
@@ -185,6 +186,9 @@ class ExecutionStream:
accounts_prompt: str = "",
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
skill_dirs: list[str] | None = None,
):
"""
Initialize execution stream.
@@ -208,6 +212,9 @@ class ExecutionStream:
accounts_prompt: Connected accounts block for system prompt injection
accounts_data: Raw account data for per-node prompt generation
tool_provider_map: Tool name to provider name mapping for account routing
skills_catalog_prompt: Available skills catalog for system prompt
protocols_prompt: Default skill operational protocols for system prompt
skill_dirs: Skill base directories for Tier 3 resource access
"""
self.stream_id = stream_id
self.entry_spec = entry_spec
@@ -229,6 +236,22 @@ class ExecutionStream:
self._accounts_prompt = accounts_prompt
self._accounts_data = accounts_data
self._tool_provider_map = tool_provider_map
self._skills_catalog_prompt = skills_catalog_prompt
self._protocols_prompt = protocols_prompt
self._skill_dirs: list[str] = skill_dirs or []
_es_logger = logging.getLogger(__name__)
if protocols_prompt:
_es_logger.info(
"ExecutionStream[%s] received protocols_prompt (%d chars)",
stream_id,
len(protocols_prompt),
)
else:
_es_logger.warning(
"ExecutionStream[%s] received EMPTY protocols_prompt",
stream_id,
)
# Create stream-scoped runtime
self._runtime = StreamRuntime(
@@ -410,6 +433,7 @@ class ExecutionStream:
content: str,
*,
is_client_input: bool = False,
image_content: list[dict[str, Any]] | None = None,
) -> bool:
"""Inject user input into a running client-facing EventLoopNode.
@@ -421,7 +445,33 @@ class ExecutionStream:
for executor in self._active_executors.values():
node = executor.node_registry.get(node_id)
if node is not None and hasattr(node, "inject_event"):
await node.inject_event(content, is_client_input=is_client_input)
await node.inject_event(
content, is_client_input=is_client_input, image_content=image_content
)
return True
return False
async def inject_trigger(
self,
node_id: str,
trigger: Any,
) -> bool:
"""Inject a trigger event into a running queen EventLoopNode.
Searches active executors for a node matching ``node_id`` and calls
its ``inject_trigger()`` method to wake the queen.
Args:
node_id: The queen EventLoopNode ID.
trigger: A ``TriggerEvent`` instance (typed as Any to avoid
circular imports with graph layer).
Returns True if the trigger was delivered, False otherwise.
"""
for executor in self._active_executors.values():
node = executor.node_registry.get(node_id)
if node is not None and hasattr(node, "inject_trigger"):
await node.inject_trigger(trigger)
return True
return False
@@ -430,6 +480,7 @@ class ExecutionStream:
input_data: dict[str, Any],
correlation_id: str | None = None,
session_state: dict[str, Any] | None = None,
run_id: str | None = None,
) -> str:
"""
Queue an execution and return its ID.
@@ -440,6 +491,7 @@ class ExecutionStream:
input_data: Input data for this execution
correlation_id: Optional ID to correlate related executions
session_state: Optional session state to resume from (with paused_at, memory)
run_id: Unique ID for this trigger invocation (for run dividers)
Returns:
Execution ID for tracking
@@ -500,6 +552,7 @@ class ExecutionStream:
input_data=input_data,
isolation_level=self.entry_spec.get_isolation_level(),
session_state=session_state,
run_id=run_id,
)
async with self._lock:
@@ -575,7 +628,9 @@ class ExecutionStream:
execution_id=execution_id,
input_data=ctx.input_data,
correlation_id=ctx.correlation_id,
run_id=ctx.run_id,
)
self._write_run_event(execution_id, ctx.run_id, "run_started")
# Create execution-scoped memory
self._state_manager.create_memory(
@@ -645,6 +700,9 @@ class ExecutionStream:
accounts_prompt=self._accounts_prompt,
accounts_data=self._accounts_data,
tool_provider_map=self._tool_provider_map,
skills_catalog_prompt=self._skills_catalog_prompt,
protocols_prompt=self._protocols_prompt,
skill_dirs=self._skill_dirs,
)
# Track executor so inject_input() can reach EventLoopNode instances
self._active_executors[execution_id] = executor
@@ -740,6 +798,7 @@ class ExecutionStream:
execution_id=execution_id,
output=result.output,
correlation_id=ctx.correlation_id,
run_id=ctx.run_id,
)
elif result.paused_at:
# The executor returns paused_at on CancelledError but
@@ -757,8 +816,22 @@ class ExecutionStream:
execution_id=execution_id,
error=result.error or "Unknown error",
correlation_id=ctx.correlation_id,
run_id=ctx.run_id,
)
# Write run event for historical restoration
if result.success:
self._write_run_event(execution_id, ctx.run_id, "run_completed")
elif result.paused_at:
self._write_run_event(execution_id, ctx.run_id, "run_paused")
else:
self._write_run_event(
execution_id,
ctx.run_id,
"run_failed",
{"error": result.error or "Unknown error"},
)
logger.debug(f"Execution {execution_id} completed: success={result.success}")
except asyncio.CancelledError:
@@ -818,8 +891,10 @@ class ExecutionStream:
execution_id=execution_id,
error=cancel_reason,
correlation_id=ctx.correlation_id,
run_id=ctx.run_id,
)
self._write_run_event(execution_id, ctx.run_id, "run_cancelled")
# Don't re-raise - we've handled it and saved state
except Exception as e:
@@ -856,7 +931,9 @@ class ExecutionStream:
execution_id=execution_id,
error=str(e),
correlation_id=ctx.correlation_id,
run_id=ctx.run_id,
)
self._write_run_event(execution_id, ctx.run_id, "run_failed", {"error": str(e)})
finally:
# Clean up state
@@ -872,6 +949,36 @@ class ExecutionStream:
self._completion_events.pop(execution_id, None)
self._execution_tasks.pop(execution_id, None)
def _write_run_event(
self,
execution_id: str,
run_id: str | None,
event: str,
extra: dict[str, Any] | None = None,
) -> None:
"""Append a run lifecycle event to runs.jsonl for historical restoration."""
if not self._session_store or not run_id:
return
import json as _json
session_dir = self._session_store.get_session_path(execution_id)
runs_file = session_dir / "runs.jsonl"
now = datetime.now()
record = {
"run_id": run_id,
"event": event,
"timestamp": now.isoformat(),
"created_at": now.timestamp(),
}
if extra:
record.update(extra)
try:
runs_file.parent.mkdir(parents=True, exist_ok=True)
with open(runs_file, "a", encoding="utf-8") as f:
f.write(_json.dumps(record) + "\n")
except OSError:
pass # Non-critical — don't break execution
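Each call appends one JSON object per line to runs.jsonl; an illustrative record with fabricated values:
# {"run_id": "a1b2c3d4e5f6", "event": "run_failed",
#  "timestamp": "2026-03-20T21:00:00.000000", "created_at": 1774069200.0,
#  "error": "Unknown error"}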
async def _write_session_state(
self,
execution_id: str,
@@ -978,8 +1085,8 @@ class ExecutionStream:
def _create_modified_graph(self) -> "GraphSpec":
"""Create a graph with the entry point overridden.
Preserves the original graph's entry_points and async_entry_points
so that validation correctly considers ALL entry nodes reachable.
Preserves the original graph's entry_points so that validation
correctly considers ALL entry nodes reachable.
Each stream only executes from its own entry_node, but the full
graph must validate with all entry points accounted for.
"""
@@ -1004,7 +1111,6 @@ class ExecutionStream:
version=self.graph.version,
entry_node=self.entry_spec.entry_node, # Use our entry point
entry_points=merged_entry_points,
async_entry_points=self.graph.async_entry_points,
terminal_nodes=self.graph.terminal_nodes,
pause_nodes=self.graph.pause_nodes,
nodes=self.graph.nodes,
@@ -8,6 +8,7 @@ write. Errors are silently swallowed — this must never break the agent.
import json
import logging
import os
from datetime import datetime
from pathlib import Path
from typing import IO, Any
@@ -47,6 +48,9 @@ def log_llm_turn(
Never raises.
"""
try:
# Skip logging during test runs to avoid polluting real logs.
if os.environ.get("PYTEST_CURRENT_TEST") or os.environ.get("HIVE_DISABLE_LLM_LOGS"):
return
global _log_file, _log_ready # noqa: PLW0603
if not _log_ready:
_log_file = _open_log()
+25 -12
@@ -47,25 +47,34 @@ class RuntimeLogStore:
self._base_path = base_path
# Note: _runs_dir is determined per-run_id by _get_run_dir()
def _session_logs_dir(self, run_id: str) -> Path:
"""Return the unified session-backed logs directory for a run ID."""
is_runtime_logs = self._base_path.name == "runtime_logs"
root = self._base_path.parent if is_runtime_logs else self._base_path
return root / "sessions" / run_id / "logs"
def _legacy_run_dir(self, run_id: str) -> Path:
"""Return the deprecated standalone runs directory for a run ID."""
return self._base_path / "runs" / run_id
def _get_run_dir(self, run_id: str) -> Path:
"""Determine run directory path based on run_id format.
- New format (session_*): {storage_root}/sessions/{run_id}/logs/
- Session-backed runs: {storage_root}/sessions/{run_id}/logs/
- Old format (anything else): {base_path}/runs/{run_id}/ (deprecated)
"""
if run_id.startswith("session_"):
is_runtime_logs = self._base_path.name == "runtime_logs"
root = self._base_path.parent if is_runtime_logs else self._base_path
return root / "sessions" / run_id / "logs"
session_run_dir = self._session_logs_dir(run_id)
if session_run_dir.exists() or run_id.startswith("session_"):
return session_run_dir
import warnings
warnings.warn(
f"Reading logs from deprecated location for run_id={run_id}. "
"New sessions use unified storage at sessions/session_*/logs/",
"New sessions use unified storage at sessions/<session_id>/logs/",
DeprecationWarning,
stacklevel=3,
)
return self._base_path / "runs" / run_id
return self._legacy_run_dir(run_id)
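# Resolution sketch (paths illustrative): with a base_path ending in
# "runtime_logs", a session-backed run such as "session_20260320_abc" maps to
# <storage_root>/sessions/session_20260320_abc/logs, while a legacy id like
# "20260320T120000_ab12cd34" with no session directory on disk falls back to
# <base_path>/runs/<run_id> and emits the DeprecationWarning above.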
# -------------------------------------------------------------------
# Incremental write (sync — called from locked sections)
@@ -76,6 +85,10 @@ class RuntimeLogStore:
run_dir = self._get_run_dir(run_id)
run_dir.mkdir(parents=True, exist_ok=True)
def ensure_session_run_dir(self, run_id: str) -> None:
"""Create the unified session-backed log directory immediately."""
self._session_logs_dir(run_id).mkdir(parents=True, exist_ok=True)
def append_step(self, run_id: str, step: NodeStepLog) -> None:
"""Append one JSONL line to tool_logs.jsonl. Sync."""
path = self._get_run_dir(run_id) / "tool_logs.jsonl"
@@ -200,17 +213,17 @@ class RuntimeLogStore:
run_ids = []
# Scan new location: base_path/sessions/{session_id}/logs/
# Determine the correct base path for sessions
is_runtime_logs = self._base_path.name == "runtime_logs"
root = self._base_path.parent if is_runtime_logs else self._base_path
sessions_dir = root / "sessions"
if sessions_dir.exists():
for session_dir in sessions_dir.iterdir():
if session_dir.is_dir() and session_dir.name.startswith("session_"):
logs_dir = session_dir / "logs"
if logs_dir.exists() and logs_dir.is_dir():
run_ids.append(session_dir.name)
if not session_dir.is_dir():
continue
logs_dir = session_dir / "logs"
if logs_dir.exists() and logs_dir.is_dir():
run_ids.append(session_dir.name)
# Scan old location: base_path/runs/ (deprecated)
old_runs_dir = self._base_path / "runs"
+2 -1
@@ -66,15 +66,16 @@ class RuntimeLogger:
"""
if session_id:
self._run_id = session_id
self._store.ensure_session_run_dir(self._run_id)
else:
ts = datetime.now(UTC).strftime("%Y%m%dT%H%M%S")
short_uuid = uuid.uuid4().hex[:8]
self._run_id = f"{ts}_{short_uuid}"
self._store.ensure_run_dir(self._run_id)
self._goal_id = goal_id
self._started_at = datetime.now(UTC).isoformat()
self._logged_node_ids = set()
self._store.ensure_run_dir(self._run_id)
return self._run_id
def log_step(
@@ -17,7 +17,7 @@ from pathlib import Path
import pytest
from framework.graph import Goal
from framework.graph.edge import AsyncEntryPointSpec, EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.goal import Constraint, SuccessCriterion
from framework.graph.node import NodeSpec
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
@@ -101,30 +101,12 @@ def sample_graph():
),
]
async_entry_points = [
AsyncEntryPointSpec(
id="webhook",
name="Webhook Handler",
entry_node="process-webhook",
trigger_type="webhook",
isolation_level="shared",
),
AsyncEntryPointSpec(
id="api",
name="API Handler",
entry_node="process-api",
trigger_type="api",
isolation_level="shared",
),
]
return GraphSpec(
id="test-graph",
goal_id="test-goal",
version="1.0.0",
entry_node="process-webhook",
entry_points={"start": "process-webhook"},
async_entry_points=async_entry_points,
terminal_nodes=["complete"],
pause_nodes=[],
nodes=nodes,
@@ -504,108 +486,6 @@ class TestAgentRuntime:
# === GraphSpec Validation Tests ===
class TestGraphSpecValidation:
"""Tests for GraphSpec with async_entry_points."""
def test_has_async_entry_points(self, sample_graph):
"""Test checking for async entry points."""
assert sample_graph.has_async_entry_points() is True
# Graph without async entry points
simple_graph = GraphSpec(
id="simple",
goal_id="goal",
entry_node="start",
nodes=[],
edges=[],
)
assert simple_graph.has_async_entry_points() is False
def test_get_async_entry_point(self, sample_graph):
"""Test getting async entry point by ID."""
ep = sample_graph.get_async_entry_point("webhook")
assert ep is not None
assert ep.id == "webhook"
assert ep.entry_node == "process-webhook"
ep_not_found = sample_graph.get_async_entry_point("nonexistent")
assert ep_not_found is None
def test_validate_async_entry_points(self):
"""Test validation catches async entry point errors."""
nodes = [
NodeSpec(
id="valid-node",
name="Valid Node",
description="A valid node",
node_type="event_loop",
input_keys=[],
output_keys=[],
),
]
# Invalid entry node
graph = GraphSpec(
id="test",
goal_id="goal",
entry_node="valid-node",
async_entry_points=[
AsyncEntryPointSpec(
id="invalid",
name="Invalid",
entry_node="nonexistent-node",
trigger_type="webhook",
),
],
nodes=nodes,
edges=[],
)
errors = graph.validate()["errors"]
assert any("nonexistent-node" in e for e in errors)
# Invalid isolation level
graph2 = GraphSpec(
id="test",
goal_id="goal",
entry_node="valid-node",
async_entry_points=[
AsyncEntryPointSpec(
id="bad-isolation",
name="Bad Isolation",
entry_node="valid-node",
trigger_type="webhook",
isolation_level="invalid",
),
],
nodes=nodes,
edges=[],
)
errors2 = graph2.validate()["errors"]
assert any("isolation_level" in e for e in errors2)
# Invalid trigger type
graph3 = GraphSpec(
id="test",
goal_id="goal",
entry_node="valid-node",
async_entry_points=[
AsyncEntryPointSpec(
id="bad-trigger",
name="Bad Trigger",
entry_node="valid-node",
trigger_type="invalid_trigger",
),
],
nodes=nodes,
edges=[],
)
errors3 = graph3.validate()["errors"]
assert any("trigger_type" in e for e in errors3)
# === Integration Tests ===
@@ -0,0 +1,29 @@
"""Tests for custom session-backed runtime logging paths."""
from pathlib import Path
from unittest.mock import MagicMock
from framework.graph.executor import GraphExecutor
from framework.runtime.runtime_log_store import RuntimeLogStore
from framework.runtime.runtime_logger import RuntimeLogger
def test_graph_executor_uses_custom_session_dir_name_for_runtime_logs():
executor = GraphExecutor(
runtime=MagicMock(),
storage_path=Path("/tmp/test-agent/sessions/my-custom-session"),
)
assert executor._get_runtime_log_session_id() == "my-custom-session"
def test_runtime_logger_creates_session_log_dir_for_custom_session_id(tmp_path):
base = tmp_path / ".hive" / "agents" / "test_agent"
base.mkdir(parents=True)
store = RuntimeLogStore(base)
logger = RuntimeLogger(store=store, agent_id="test-agent")
run_id = logger.start_run(goal_id="goal-1", session_id="my-custom-session")
assert run_id == "my-custom-session"
assert (base / "sessions" / "my-custom-session" / "logs").is_dir()
@@ -483,7 +483,6 @@ class TestEventDrivenEntryPoints:
version="1.0.0",
entry_node="process-event",
entry_points={"start": "process-event"},
async_entry_points=[],
terminal_nodes=[],
pause_nodes=[],
nodes=nodes,
+22
@@ -0,0 +1,22 @@
"""Trigger definitions for queen-level heartbeats (timers, webhooks)."""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Any
@dataclass
class TriggerDefinition:
"""A registered trigger that can be activated on the queen runtime.
Trigger *definitions* come from the worker's ``triggers.json``.
Activation state is per-session (persisted in ``SessionState.active_triggers``).
"""
id: str
trigger_type: str # "timer" | "webhook"
trigger_config: dict[str, Any] = field(default_factory=dict)
description: str = ""
task: str = ""
active: bool = False
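A minimal sketch of loading one of these definitions from a worker's triggers.json. Field names come from the dataclass above and the import path matches the tests later in this changeset; the JSON layout itself is an assumption for illustration only.
import json
from framework.runtime.triggers import TriggerDefinition
# Hypothetical triggers.json entry; layout assumed for illustration only.
raw = json.loads(
    '{"id": "daily", "trigger_type": "timer",'
    ' "trigger_config": {"cron": "0 5 * * *"},'
    ' "description": "Daily report", "task": "Summarize yesterday"}'
)
tdef = TriggerDefinition(
    id=raw["id"],
    trigger_type=raw["trigger_type"],
    trigger_config=raw.get("trigger_config", {}),
    description=raw.get("description", ""),
    task=raw.get("task", ""),
)
# Activation state is not stored here; it lives in SessionState.active_triggers.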
+7

@@ -144,6 +144,13 @@ class SessionState(BaseModel):
checkpoint_enabled: bool = False
latest_checkpoint_id: str | None = None
# Trigger activation state (IDs of triggers the queen/user turned on)
active_triggers: list[str] = Field(default_factory=list)
# Per-trigger task strings (user overrides, keyed by trigger ID)
trigger_tasks: dict[str, str] = Field(default_factory=dict)
# True after first successful worker execution (gates trigger delivery on restart)
worker_configured: bool = Field(default=False)
model_config = {"extra": "allow"}
@computed_field
+23
@@ -94,6 +94,29 @@ def sessions_dir(session: Session) -> Path:
return Path.home() / ".hive" / "agents" / agent_name / "sessions"
def cold_sessions_dir(session_id: str) -> Path | None:
"""Resolve the worker sessions directory from disk for a cold/stopped session.
Reads agent_path from the queen session's meta.json to find the agent name,
then returns ~/.hive/agents/{agent_name}/sessions/.
Returns None if meta.json is missing or has no agent_path.
"""
import json
meta_path = Path.home() / ".hive" / "queen" / "session" / session_id / "meta.json"
if not meta_path.exists():
return None
try:
meta = json.loads(meta_path.read_text(encoding="utf-8"))
agent_path = meta.get("agent_path")
if not agent_path:
return None
agent_name = Path(agent_path).name
return Path.home() / ".hive" / "agents" / agent_name / "sessions"
except (json.JSONDecodeError, OSError):
return None
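# Illustrative resolution (paths assumed): if
# ~/.hive/queen/session/<session_id>/meta.json contains
# {"agent_path": "/home/me/agents/deep_research_agent"}, this returns
# ~/.hive/agents/deep_research_agent/sessions; a missing or malformed
# meta.json yields None.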
# Allowed CORS origins (localhost on any port)
_CORS_ORIGINS = {"http://localhost", "http://127.0.0.1"}
+42 -1
@@ -69,6 +69,7 @@ async def create_queen(
QueenPhaseState,
register_queen_lifecycle_tools,
)
from framework.tools.queen_memory_tools import register_queen_memory_tools
hive_home = Path.home() / ".hive"
@@ -90,6 +91,28 @@ async def create_queen(
phase_state = QueenPhaseState(phase=initial_phase, event_bus=session.event_bus)
session.phase_state = phase_state
# ---- Track ask rounds during planning ----------------------------
# Increment planning_ask_rounds each time the queen requests user
# input (ask_user or ask_user_multiple) while in the planning phase.
async def _track_planning_asks(event: AgentEvent) -> None:
if phase_state.phase != "planning":
return
# Only count explicit ask_user / ask_user_multiple calls, not
# auto-block (text-only turns emit CLIENT_INPUT_REQUESTED with
# an empty prompt and no options/questions).
data = event.data or {}
has_prompt = bool(data.get("prompt"))
has_questions = bool(data.get("questions"))
has_options = bool(data.get("options"))
if has_prompt or has_questions or has_options:
phase_state.planning_ask_rounds += 1
session.event_bus.subscribe(
[EventType.CLIENT_INPUT_REQUESTED],
_track_planning_asks,
filter_stream="queen",
)
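# Example of what counts as an ask round (payloads illustrative): an event
# with data={"prompt": "Which region?", "options": ["us", "eu"]} increments
# planning_ask_rounds; an auto-block event with data={} (no prompt, no
# questions, no options) is ignored.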
# ---- Lifecycle tools (always registered) --------------------------
register_queen_lifecycle_tools(
queen_registry,
@@ -100,6 +123,9 @@ async def create_queen(
phase_state=phase_state,
)
# ---- Episodic memory tools (always registered) ---------------------
register_queen_memory_tools(queen_registry)
# ---- Monitoring tools (only when worker is loaded) ----------------
if session.worker_runtime:
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools
@@ -110,6 +136,7 @@ async def create_queen(
session.worker_path,
stream_id="queen",
worker_graph_id=session.worker_runtime._graph_id,
default_session_id=session.id,
)
queen_tools = list(queen_registry.get_tools().values())
@@ -149,7 +176,8 @@ async def create_queen(
worker_identity = (
"\n\n# Worker Profile\n"
"No worker agent loaded. You are operating independently.\n"
"Handle all tasks directly using your coding tools."
"Design or build the agent to solve the user's problem "
"according to your current phase."
)
_planning_body = (
@@ -192,6 +220,16 @@ async def create_queen(
+ worker_identity
)
# ---- Default skill protocols -------------------------------------
try:
from framework.skills.manager import SkillsManager
_queen_skills_mgr = SkillsManager()
_queen_skills_mgr.load()
phase_state.protocols_prompt = _queen_skills_mgr.protocols_prompt
except Exception:
logger.debug("Queen skill loading failed (non-fatal)", exc_info=True)
# ---- Persona hook ------------------------------------------------
_session_llm = session.llm
_session_event_bus = session.event_bus
@@ -252,6 +290,7 @@ async def create_queen(
execution_id=session.id,
dynamic_tools_provider=phase_state.get_current_tools,
dynamic_prompt_provider=phase_state.get_current_prompt,
iteration_metadata_provider=lambda: {"phase": phase_state.phase},
)
session.queen_executor = executor
@@ -269,6 +308,8 @@ async def create_queen(
return
if phase_state.phase == "running":
if event.type == EventType.EXECUTION_COMPLETED:
# Mark worker as configured after first successful run
session.worker_configured = True
output = event.data.get("output", {})
output_summary = ""
if output:
+7 -2
@@ -103,7 +103,9 @@ async def handle_delete_credential(request: web.Request) -> web.Response:
if credential_id == "aden_api_key":
from framework.credentials.key_storage import delete_aden_api_key
delete_aden_api_key()
deleted = delete_aden_api_key()
if not deleted:
return web.json_response({"error": "Credential 'aden_api_key' not found"}, status=404)
return web.json_response({"deleted": True})
store = _get_store(request)
@@ -178,7 +180,10 @@ async def handle_check_agent(request: web.Request) -> web.Response:
)
except Exception as e:
logger.exception(f"Error checking agent credentials: {e}")
return web.json_response({"error": str(e)}, status=500)
return web.json_response(
{"error": "Internal server error while checking credentials"},
status=500,
)
def _status_to_dict(c) -> dict:
+60 -1
@@ -6,7 +6,7 @@ import logging
from aiohttp import web
from aiohttp.client_exceptions import ClientConnectionResetError as _AiohttpConnReset
from framework.runtime.event_bus import EventType
from framework.runtime.event_bus import AgentEvent, EventType
from framework.server.app import resolve_session
logger = logging.getLogger(__name__)
@@ -15,6 +15,7 @@ logger = logging.getLogger(__name__)
DEFAULT_EVENT_TYPES = [
EventType.CLIENT_OUTPUT_DELTA,
EventType.CLIENT_INPUT_REQUESTED,
EventType.CLIENT_INPUT_RECEIVED,
EventType.LLM_TEXT_DELTA,
EventType.TOOL_CALL_STARTED,
EventType.TOOL_CALL_COMPLETED,
@@ -36,10 +37,18 @@ DEFAULT_EVENT_TYPES = [
EventType.NODE_RETRY,
EventType.NODE_TOOL_DOOM_LOOP,
EventType.CONTEXT_COMPACTED,
EventType.CONTEXT_USAGE_UPDATED,
EventType.WORKER_LOADED,
EventType.CREDENTIALS_REQUIRED,
EventType.SUBAGENT_REPORT,
EventType.QUEEN_PHASE_CHANGED,
EventType.TRIGGER_AVAILABLE,
EventType.TRIGGER_ACTIVATED,
EventType.TRIGGER_DEACTIVATED,
EventType.TRIGGER_FIRED,
EventType.TRIGGER_REMOVED,
EventType.TRIGGER_UPDATED,
EventType.DRAFT_GRAPH_UPDATED,
]
# Keepalive interval in seconds
@@ -89,6 +98,7 @@ async def handle_events(request: web.Request) -> web.StreamResponse:
"execution_failed",
"execution_paused",
"client_input_requested",
"client_input_received",
"node_loop_iteration",
"node_loop_started",
"credentials_required",
@@ -142,6 +152,7 @@ async def handle_events(request: web.Request) -> web.StreamResponse:
EventType.CLIENT_OUTPUT_DELTA.value,
EventType.EXECUTION_STARTED.value,
EventType.CLIENT_INPUT_REQUESTED.value,
EventType.CLIENT_INPUT_RECEIVED.value,
}
event_type_values = {et.value for et in event_types}
replay_types = _REPLAY_TYPES & event_type_values
@@ -156,6 +167,54 @@ async def handle_events(request: web.Request) -> web.StreamResponse:
if replayed:
logger.info("SSE replayed %d buffered events for session='%s'", replayed, session.id)
# Inject a live-status snapshot so the frontend knows which nodes are
# currently running. This covers the case where the user navigated away
# and back — the localStorage snapshot is stale, and the ring-buffer
# replay may not include the original node_loop_started events.
worker_runtime = getattr(session, "worker_runtime", None)
if worker_runtime and getattr(worker_runtime, "is_running", False):
try:
for stream_info in worker_runtime.get_active_streams():
graph_id = stream_info.get("graph_id")
stream_id = stream_info.get("stream_id", "default")
for exec_id in stream_info.get("active_execution_ids", []):
# Synthesize execution_started so frontend sets workerRunState
synth_exec = AgentEvent(
type=EventType.EXECUTION_STARTED,
stream_id=stream_id,
execution_id=exec_id,
graph_id=graph_id,
data={"synthetic": True},
).to_dict()
try:
queue.put_nowait(synth_exec)
except asyncio.QueueFull:
pass
# Find the currently executing node via the executor
for _gid, reg in worker_runtime._graphs.items():
if _gid != graph_id:
continue
for _ep_id, stream in reg.streams.items():
for exec_id, executor in stream._active_executors.items():
current = getattr(executor, "current_node_id", None)
if current:
synth_node = AgentEvent(
type=EventType.NODE_LOOP_STARTED,
stream_id=stream_id,
node_id=current,
execution_id=exec_id,
graph_id=graph_id,
data={"synthetic": True},
).to_dict()
try:
queue.put_nowait(synth_node)
except asyncio.QueueFull:
pass
logger.info("SSE injected live-status snapshot for session='%s'", session.id)
except Exception:
logger.debug("Failed to inject live-status snapshot", exc_info=True)
event_count = 0
close_reason = "unknown"
try:
+22 -3
@@ -108,7 +108,10 @@ async def handle_chat(request: web.Request) -> web.Response:
The input box is permanently connected to the queen agent.
Worker input is handled separately via /worker-input.
Body: {"message": "hello"}
Body: {"message": "hello", "images": [{"type": "image_url", "image_url": {"url": "data:..."}}]}
The optional ``images`` field accepts a list of OpenAI-format image_url
content blocks. The frontend encodes images as base64 data URIs.
"""
session, err = resolve_session(request)
if err:
@@ -116,15 +119,31 @@ async def handle_chat(request: web.Request) -> web.Response:
body = await request.json()
message = body.get("message", "")
image_content = body.get("images") or None # list[dict] | None
if not message:
if not message and not image_content:
return web.json_response({"error": "message is required"}, status=400)
queen_executor = session.queen_executor
if queen_executor is not None:
node = queen_executor.node_registry.get("queen")
if node is not None and hasattr(node, "inject_event"):
await node.inject_event(message, is_client_input=True)
await node.inject_event(message, is_client_input=True, image_content=image_content)
# Publish to EventBus so the session event log captures user messages
from framework.runtime.event_bus import AgentEvent, EventType
await session.event_bus.publish(
AgentEvent(
type=EventType.CLIENT_INPUT_RECEIVED,
stream_id="queen",
node_id="queen",
execution_id=session.id,
data={
"content": message,
"image_count": len(image_content) if image_content else 0,
},
)
)
return web.json_response(
{
"status": "queen",
+80
@@ -2,6 +2,7 @@
import json
import logging
import time
from aiohttp import web
@@ -116,6 +117,20 @@ async def handle_list_nodes(request: web.Request) -> web.Response:
}
for ep in reg.entry_points.values()
]
# Append triggers from triggers.json (stored on session)
for t in getattr(session, "available_triggers", {}).values():
entry = {
"id": t.id,
"name": t.description or t.id,
"entry_node": graph.entry_node,
"trigger_type": t.trigger_type,
"trigger_config": t.trigger_config,
"task": t.task,
}
mono = getattr(session, "trigger_next_fire", {}).get(t.id)
if mono is not None:
entry["next_fire_in"] = max(0.0, mono - time.monotonic())
entry_points.append(entry)
return web.json_response(
{
"nodes": nodes,
@@ -234,8 +249,73 @@ async def handle_node_tools(request: web.Request) -> web.Response:
return web.json_response({"tools": tools_out})
async def handle_draft_graph(request: web.Request) -> web.Response:
"""Return the current draft graph from planning phase (if any)."""
session, err = resolve_session(request)
if err:
return err
phase_state = getattr(session, "phase_state", None)
if phase_state is None or phase_state.draft_graph is None:
return web.json_response({"draft": None})
return web.json_response({"draft": phase_state.draft_graph})
async def handle_flowchart_map(request: web.Request) -> web.Response:
"""Return the flowchart→runtime node mapping and the original (pre-dissolution) draft.
Available after confirm_and_build() dissolves decision nodes, or loaded
from the agent's flowchart.json file, or synthesized from the runtime graph.
"""
session, err = resolve_session(request)
if err:
return err
phase_state = getattr(session, "phase_state", None)
# Fast path: already in memory
if phase_state is not None and phase_state.original_draft_graph is not None:
return web.json_response(
{
"map": phase_state.flowchart_map,
"original_draft": phase_state.original_draft_graph,
}
)
# Try loading from flowchart.json in the agent folder
worker_path = getattr(session, "worker_path", None)
if worker_path is not None:
from pathlib import Path
target = Path(worker_path) / "flowchart.json"
if target.is_file():
try:
data = json.loads(target.read_text(encoding="utf-8"))
original_draft = data.get("original_draft")
fmap = data.get("flowchart_map")
# Cache in phase_state for future requests
if phase_state is not None and original_draft:
phase_state.original_draft_graph = original_draft
phase_state.flowchart_map = fmap
return web.json_response(
{
"map": fmap,
"original_draft": original_draft,
}
)
except Exception:
logger.warning("Failed to read flowchart.json from %s", worker_path)
return web.json_response({"map": None, "original_draft": None})
def register_routes(app: web.Application) -> None:
"""Register graph/node inspection routes."""
# Draft graph (planning phase — visual only, no loaded worker required)
app.router.add_get("/api/sessions/{session_id}/draft-graph", handle_draft_graph)
# Flowchart map (post-dissolution — maps runtime nodes to original draft nodes)
app.router.add_get("/api/sessions/{session_id}/flowchart-map", handle_flowchart_map)
# Session-primary routes
app.router.add_get("/api/sessions/{session_id}/graphs/{graph_id}/nodes", handle_list_nodes)
app.router.add_get(
+341 -86
@@ -9,8 +9,9 @@ Session-primary routes:
- DELETE /api/sessions/{session_id}/worker unload worker from session
- GET /api/sessions/{session_id}/stats runtime statistics
- GET /api/sessions/{session_id}/entry-points list entry points
- PATCH /api/sessions/{session_id}/triggers/{id} update trigger task
- GET /api/sessions/{session_id}/graphs list graph IDs
- GET /api/sessions/{session_id}/queen-messages queen conversation history
- GET /api/sessions/{session_id}/events/history persisted eventbus log (for replay)
Worker session browsing (persisted execution runs on disk):
- GET /api/sessions/{session_id}/worker-sessions list
@@ -22,15 +23,20 @@ Worker session browsing (persisted execution runs on disk):
"""
import asyncio
import contextlib
import json
import logging
import shutil
import subprocess
import sys
import time
from pathlib import Path
from aiohttp import web
from framework.server.app import (
cold_sessions_dir,
resolve_session,
safe_path_segment,
sessions_dir,
@@ -47,8 +53,11 @@ def _get_manager(request: web.Request) -> SessionManager:
def _session_to_live_dict(session) -> dict:
"""Serialize a live Session to the session-primary JSON shape."""
from framework.llm.capabilities import supports_image_tool_results
info = session.worker_info
phase_state = getattr(session, "phase_state", None)
queen_model: str = getattr(getattr(session, "runner", None), "model", "") or ""
return {
"session_id": session.id,
"worker_id": session.worker_id,
@@ -61,7 +70,10 @@ def _session_to_live_dict(session) -> dict:
"loaded_at": session.loaded_at,
"uptime_seconds": round(time.time() - session.loaded_at, 1),
"intro_message": getattr(session.runner, "intro_message", "") or "",
"queen_phase": phase_state.phase if phase_state else "planning",
"queen_phase": phase_state.phase
if phase_state
else ("staging" if session.worker_runtime else "planning"),
"queen_supports_images": supports_image_tool_results(queen_model) if queen_model else True,
}
@@ -140,6 +152,7 @@ async def handle_create_session(request: web.Request) -> web.Response:
session = await manager.create_session_with_worker(
agent_path,
agent_id=agent_id,
session_id=session_id,
model=model,
initial_prompt=initial_prompt,
queen_resume_from=queen_resume_from,
@@ -228,6 +241,22 @@ async def handle_get_live_session(request: web.Request) -> web.Response:
}
for ep in rt.get_entry_points()
]
# Append triggers from triggers.json (stored on session)
runner = getattr(session, "runner", None)
graph_entry = runner.graph.entry_node if runner else ""
for t in getattr(session, "available_triggers", {}).values():
entry = {
"id": t.id,
"name": t.description or t.id,
"entry_node": graph_entry,
"trigger_type": t.trigger_type,
"trigger_config": t.trigger_config,
"task": t.task,
}
mono = getattr(session, "trigger_next_fire", {}).get(t.id)
if mono is not None:
entry["next_fire_in"] = max(0.0, mono - time.monotonic())
data["entry_points"].append(entry)
data["graphs"] = session.worker_runtime.list_graphs()
return web.json_response(data)
@@ -351,23 +380,190 @@ async def handle_session_entry_points(request: web.Request) -> web.Response:
rt = session.worker_runtime
eps = rt.get_entry_points() if rt else []
entry_points = [
{
"id": ep.id,
"name": ep.name,
"entry_node": ep.entry_node,
"trigger_type": ep.trigger_type,
"trigger_config": ep.trigger_config,
**(
{"next_fire_in": nf}
if rt and (nf := rt.get_timer_next_fire_in(ep.id)) is not None
else {}
),
}
for ep in eps
]
# Append triggers from triggers.json (stored on session)
runner = getattr(session, "runner", None)
graph_entry = runner.graph.entry_node if runner else ""
for t in getattr(session, "available_triggers", {}).values():
entry = {
"id": t.id,
"name": t.description or t.id,
"entry_node": graph_entry,
"trigger_type": t.trigger_type,
"trigger_config": t.trigger_config,
"task": t.task,
}
mono = getattr(session, "trigger_next_fire", {}).get(t.id)
if mono is not None:
entry["next_fire_in"] = max(0.0, mono - time.monotonic())
entry_points.append(entry)
return web.json_response({"entry_points": entry_points})
async def handle_update_trigger_task(request: web.Request) -> web.Response:
"""PATCH /api/sessions/{session_id}/triggers/{trigger_id} — update trigger fields."""
session, err = resolve_session(request)
if err:
return err
trigger_id = request.match_info["trigger_id"]
available = getattr(session, "available_triggers", {})
tdef = available.get(trigger_id)
if tdef is None:
return web.json_response(
{"error": f"Trigger '{trigger_id}' not found"},
status=404,
)
try:
body = await request.json()
except Exception:
return web.json_response({"error": "Invalid JSON body"}, status=400)
updates: dict[str, object] = {}
if "task" in body:
task = body.get("task")
if not isinstance(task, str):
return web.json_response({"error": "'task' must be a string"}, status=400)
tdef.task = task
updates["task"] = tdef.task
trigger_config_update = body.get("trigger_config")
if trigger_config_update is not None:
if not isinstance(trigger_config_update, dict):
return web.json_response(
{"error": "'trigger_config' must be an object"},
status=400,
)
merged_trigger_config = dict(tdef.trigger_config)
merged_trigger_config.update(trigger_config_update)
if tdef.trigger_type == "timer":
cron_expr = merged_trigger_config.get("cron")
interval = merged_trigger_config.get("interval_minutes")
if cron_expr is not None and not isinstance(cron_expr, str):
return web.json_response(
{"error": "'trigger_config.cron' must be a string"},
status=400,
)
if cron_expr:
try:
from croniter import croniter
if not croniter.is_valid(cron_expr):
return web.json_response(
{"error": f"Invalid cron expression: {cron_expr}"},
status=400,
)
except ImportError:
return web.json_response(
{
"error": (
"croniter package not installed — cannot validate cron expression."
)
},
status=500,
)
merged_trigger_config.pop("interval_minutes", None)
elif interval is None:
return web.json_response(
{
"error": (
"Timer trigger needs 'cron' or 'interval_minutes' in trigger_config."
)
},
status=400,
)
elif not isinstance(interval, (int, float)) or interval <= 0:
return web.json_response(
{"error": "'trigger_config.interval_minutes' must be > 0"},
status=400,
)
tdef.trigger_config = merged_trigger_config
updates["trigger_config"] = tdef.trigger_config
if not updates:
return web.json_response(
{"error": "Provide at least one of 'task' or 'trigger_config'"},
status=400,
)
# Persist to session state and agent definition
from framework.tools.queen_lifecycle_tools import (
_persist_active_triggers,
_save_trigger_to_agent,
_start_trigger_timer,
_start_trigger_webhook,
)
if "trigger_config" in updates and trigger_id in getattr(session, "active_trigger_ids", set()):
task = session.active_timer_tasks.pop(trigger_id, None)
if task and not task.done():
task.cancel()
with contextlib.suppress(asyncio.CancelledError):
await task
getattr(session, "trigger_next_fire", {}).pop(trigger_id, None)
webhook_subs = getattr(session, "active_webhook_subs", {})
if sub_id := webhook_subs.pop(trigger_id, None):
with contextlib.suppress(Exception):
session.event_bus.unsubscribe(sub_id)
if tdef.trigger_type == "timer":
await _start_trigger_timer(session, trigger_id, tdef)
elif tdef.trigger_type == "webhook":
await _start_trigger_webhook(session, trigger_id, tdef)
if trigger_id in getattr(session, "active_trigger_ids", set()):
session_id = request.match_info["session_id"]
await _persist_active_triggers(session, session_id)
_save_trigger_to_agent(session, trigger_id, tdef)
# Emit SSE event so the frontend updates the graph and detail panel
bus = getattr(session, "event_bus", None)
if bus:
from framework.runtime.event_bus import AgentEvent, EventType
await bus.publish(
AgentEvent(
type=EventType.TRIGGER_UPDATED,
stream_id="queen",
data={
"trigger_id": trigger_id,
"task": tdef.task,
"trigger_config": tdef.trigger_config,
"trigger_type": tdef.trigger_type,
"name": tdef.description or trigger_id,
"entry_node": getattr(
getattr(getattr(session, "runner", None), "graph", None),
"entry_node",
None,
),
},
)
)
return web.json_response(
{
"entry_points": [
{
"id": ep.id,
"name": ep.name,
"entry_node": ep.entry_node,
"trigger_type": ep.trigger_type,
"trigger_config": ep.trigger_config,
**(
{"next_fire_in": nf}
if rt and (nf := rt.get_timer_next_fire_in(ep.id)) is not None
else {}
),
}
for ep in eps
]
"trigger_id": trigger_id,
"task": tdef.task,
"trigger_config": tdef.trigger_config,
}
)
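# Illustrative PATCH bodies (values made up): {"task": "New task"} updates
# only the task; {"trigger_config": {"cron": "0 6 * * *"}} is validated with
# croniter and, if the trigger is active, restarts its timer;
# {"trigger_config": {"cron": "not a cron"}} is rejected with 400.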
@@ -397,23 +593,28 @@ async def handle_list_worker_sessions(request: web.Request) -> web.Response:
"""List worker sessions on disk."""
session, err = resolve_session(request)
if err:
return err
if not session.worker_path:
return web.json_response({"sessions": []})
sess_dir = sessions_dir(session)
# Fall back to cold session lookup from disk
sid = request.match_info["session_id"]
sess_dir = cold_sessions_dir(sid)
if sess_dir is None:
return err
else:
if not session.worker_path:
return web.json_response({"sessions": []})
sess_dir = sessions_dir(session)
if not sess_dir.exists():
return web.json_response({"sessions": []})
sessions = []
for d in sorted(sess_dir.iterdir(), reverse=True):
if not d.is_dir() or not d.name.startswith("session_"):
if not d.is_dir():
continue
state_path = d / "state.json"
if not d.name.startswith("session_") and not state_path.exists():
continue
entry: dict = {"session_id": d.name}
state_path = d / "state.json"
if state_path.exists():
try:
state = json.loads(state_path.read_text(encoding="utf-8"))
@@ -564,48 +765,85 @@ async def handle_messages(request: web.Request) -> web.Response:
"""Get messages for a worker session."""
session, err = resolve_session(request)
if err:
return err
if not session.worker_path:
return web.json_response({"error": "No worker loaded"}, status=503)
# Fall back to cold session lookup from disk
sid = request.match_info["session_id"]
sess_dir = cold_sessions_dir(sid)
if sess_dir is None:
return err
else:
if not session.worker_path:
return web.json_response({"error": "No worker loaded"}, status=503)
sess_dir = sessions_dir(session)
ws_id = request.match_info.get("ws_id") or request.match_info.get("session_id", "")
ws_id = safe_path_segment(ws_id)
convs_dir = sessions_dir(session) / ws_id / "conversations"
convs_dir = sess_dir / ws_id / "conversations"
if not convs_dir.exists():
return web.json_response({"messages": []})
filter_node = request.query.get("node_id")
all_messages = []
for node_dir in convs_dir.iterdir():
if not node_dir.is_dir():
continue
if filter_node and node_dir.name != filter_node:
continue
parts_dir = node_dir / "parts"
def _collect_msg_parts(parts_dir: Path, node_id: str) -> None:
if not parts_dir.exists():
continue
return
for part_file in sorted(parts_dir.iterdir()):
if part_file.suffix != ".json":
continue
try:
part = json.loads(part_file.read_text(encoding="utf-8"))
part["_node_id"] = node_dir.name
part["_node_id"] = node_id
part.setdefault("created_at", part_file.stat().st_mtime)
all_messages.append(part)
except (json.JSONDecodeError, OSError):
continue
# Flat layout: conversations/parts/*.json
if not filter_node:
_collect_msg_parts(convs_dir / "parts", "worker")
# Node-based layout: conversations/<node_id>/parts/*.json
for node_dir in convs_dir.iterdir():
if not node_dir.is_dir() or node_dir.name == "parts":
continue
if filter_node and node_dir.name != filter_node:
continue
_collect_msg_parts(node_dir / "parts", node_dir.name)
# Merge run lifecycle markers from runs.jsonl (for historical dividers)
runs_file = sess_dir / ws_id / "runs.jsonl"
if runs_file.exists():
try:
for line in runs_file.read_text(encoding="utf-8").splitlines():
line = line.strip()
if not line:
continue
try:
record = json.loads(line)
all_messages.append(
{
"seq": -1,
"role": "system",
"content": "",
"_node_id": "_run_marker",
"is_run_marker": True,
"run_id": record.get("run_id"),
"run_event": record.get("event"),
"created_at": record.get("created_at", 0),
}
)
except json.JSONDecodeError:
continue
except OSError:
pass
all_messages.sort(key=lambda m: m.get("created_at", m.get("seq", 0)))
client_only = request.query.get("client_only", "").lower() in ("true", "1")
if client_only:
client_facing_nodes: set[str] = set()
if session.runner and hasattr(session.runner, "graph"):
if session and session.runner and hasattr(session.runner, "graph"):
for node in session.runner.graph.nodes:
if node.client_facing:
client_facing_nodes.add(node.id)
@@ -614,63 +852,51 @@ async def handle_messages(request: web.Request) -> web.Response:
all_messages = [
m
for m in all_messages
if not m.get("is_transition_marker")
and m["role"] != "tool"
and not (m["role"] == "assistant" and m.get("tool_calls"))
and (
(m["role"] == "user" and m.get("is_client_input"))
or (m["role"] == "assistant" and m.get("_node_id") in client_facing_nodes)
if m.get("is_run_marker")
or (
not m.get("is_transition_marker")
and m["role"] != "tool"
and not (m["role"] == "assistant" and m.get("tool_calls"))
and (
(m["role"] == "user" and m.get("is_client_input"))
or (m["role"] == "assistant" and m.get("_node_id") in client_facing_nodes)
)
)
]
return web.json_response({"messages": all_messages})
async def handle_queen_messages(request: web.Request) -> web.Response:
"""GET /api/sessions/{session_id}/queen-messages — get queen conversation.
async def handle_session_events_history(request: web.Request) -> web.Response:
"""GET /api/sessions/{session_id}/events/history — persisted eventbus log.
Reads directly from disk so it works for both live sessions and cold
(post-server-restart) sessions; no live session required.
Reads ``events.jsonl`` from the session directory on disk so it works for
both live sessions and cold (post-server-restart) sessions. The frontend
replays these events through ``sseEventToChatMessage`` to fully reconstruct
the UI state on resume.
"""
session_id = request.match_info["session_id"]
queen_dir = Path.home() / ".hive" / "queen" / "session" / session_id
convs_dir = queen_dir / "conversations"
if not convs_dir.exists():
return web.json_response({"messages": [], "session_id": session_id})
events_path = queen_dir / "events.jsonl"
if not events_path.exists():
return web.json_response({"events": [], "session_id": session_id})
all_messages: list[dict] = []
for node_dir in convs_dir.iterdir():
if not node_dir.is_dir():
continue
parts_dir = node_dir / "parts"
if not parts_dir.exists():
continue
for part_file in sorted(parts_dir.iterdir()):
if part_file.suffix != ".json":
continue
try:
part = json.loads(part_file.read_text(encoding="utf-8"))
part["_node_id"] = node_dir.name
# Use file mtime as created_at so frontend can order
# queen and worker messages chronologically.
part.setdefault("created_at", part_file.stat().st_mtime)
all_messages.append(part)
except (json.JSONDecodeError, OSError):
continue
events: list[dict] = []
try:
with open(events_path, encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line:
continue
try:
events.append(json.loads(line))
except json.JSONDecodeError:
continue
except OSError:
return web.json_response({"events": [], "session_id": session_id})
all_messages.sort(key=lambda m: m.get("created_at", m.get("seq", 0)))
# Filter to client-facing messages only
all_messages = [
m
for m in all_messages
if not m.get("is_transition_marker")
and m["role"] != "tool"
and not (m["role"] == "assistant" and m.get("tool_calls"))
]
return web.json_response({"messages": all_messages, "session_id": session_id})
return web.json_response({"events": events, "session_id": session_id})
async def handle_session_history(request: web.Request) -> web.Response:
@@ -746,6 +972,7 @@ async def handle_discover(request: web.Request) -> web.Response:
"description": entry.description,
"category": entry.category,
"session_count": entry.session_count,
"run_count": entry.run_count,
"node_count": entry.node_count,
"tool_count": entry.tool_count,
"tags": entry.tags,
@@ -757,6 +984,29 @@ async def handle_discover(request: web.Request) -> web.Response:
return web.json_response(result)
async def handle_reveal_session_folder(request: web.Request) -> web.Response:
"""POST /api/sessions/{session_id}/reveal — open session data folder in the OS file manager."""
manager: SessionManager = request.app["manager"]
session_id = request.match_info["session_id"]
session = manager.get_session(session_id)
storage_session_id = (session.queen_resume_from or session.id) if session else session_id
folder = Path.home() / ".hive" / "queen" / "session" / storage_session_id
folder.mkdir(parents=True, exist_ok=True)
try:
if sys.platform == "darwin":
subprocess.Popen(["open", str(folder)])
elif sys.platform == "win32":
subprocess.Popen(["explorer", str(folder)])
else:
subprocess.Popen(["xdg-open", str(folder)])
except Exception as exc:
return web.json_response({"error": str(exc)}, status=500)
return web.json_response({"path": str(folder)})
# ------------------------------------------------------------------
# Route registration
# ------------------------------------------------------------------
@@ -781,10 +1031,15 @@ def register_routes(app: web.Application) -> None:
app.router.add_delete("/api/sessions/{session_id}/worker", handle_unload_worker)
# Session info
app.router.add_post("/api/sessions/{session_id}/reveal", handle_reveal_session_folder)
app.router.add_get("/api/sessions/{session_id}/stats", handle_session_stats)
app.router.add_get("/api/sessions/{session_id}/entry-points", handle_session_entry_points)
app.router.add_patch(
"/api/sessions/{session_id}/triggers/{trigger_id}", handle_update_trigger_task
)
app.router.add_get("/api/sessions/{session_id}/graphs", handle_session_graphs)
app.router.add_get("/api/sessions/{session_id}/queen-messages", handle_queen_messages)
app.router.add_get("/api/sessions/{session_id}/events/history", handle_session_events_history)
# Worker session browsing (session-primary)
app.router.add_get("/api/sessions/{session_id}/worker-sessions", handle_list_worker_sessions)
File diff suppressed because it is too large
+153 -5
@@ -5,6 +5,7 @@ Uses aiohttp TestClient with mocked sessions to test all endpoints
without requiring actual LLM calls or agent loading.
"""
import asyncio
import json
from dataclasses import dataclass, field
from pathlib import Path
@@ -13,9 +14,13 @@ from unittest.mock import AsyncMock, MagicMock
import pytest
from aiohttp.test_utils import TestClient, TestServer
from framework.runtime.triggers import TriggerDefinition
from framework.server.app import create_app
from framework.server.session_manager import Session
REPO_ROOT = Path(__file__).resolve().parents[4]
EXAMPLE_AGENT_PATH = REPO_ROOT / "examples" / "templates" / "deep_research_agent"
# ---------------------------------------------------------------------------
# Mock helpers
# ---------------------------------------------------------------------------
@@ -169,6 +174,7 @@ def _make_session(
runner.intro_message = "Test intro"
mock_event_bus = MagicMock()
mock_event_bus.publish = AsyncMock()
mock_llm = MagicMock()
queen_executor = _make_queen_executor() if with_queen else None
@@ -207,11 +213,8 @@ def tmp_agent_dir(tmp_path, monkeypatch):
return tmp_path, agent_name, base
@pytest.fixture
def sample_session(tmp_agent_dir):
"""Create a sample session with state.json, checkpoints, and conversations."""
tmp_path, agent_name, base = tmp_agent_dir
session_id = "session_20260220_120000_abc12345"
def _write_sample_session(base: Path, session_id: str):
"""Create a sample worker session on disk."""
session_dir = base / "sessions" / session_id
# state.json
@@ -292,6 +295,20 @@ def sample_session(tmp_agent_dir):
return session_id, session_dir, state
@pytest.fixture
def sample_session(tmp_agent_dir):
"""Create a sample session with state.json, checkpoints, and conversations."""
_tmp_path, _agent_name, base = tmp_agent_dir
return _write_sample_session(base, "session_20260220_120000_abc12345")
@pytest.fixture
def custom_id_session(tmp_agent_dir):
"""Create a sample session that uses a custom non-session_* ID."""
_tmp_path, _agent_name, base = tmp_agent_dir
return _write_sample_session(base, "my-custom-session")
def _make_app_with_session(session):
"""Create an aiohttp app with a pre-loaded session."""
app = create_app()
@@ -347,6 +364,35 @@ class TestHealth:
class TestSessionCRUD:
@pytest.mark.asyncio
async def test_create_session_with_worker_forwards_session_id(self):
app = create_app()
manager = app["manager"]
manager.create_session_with_worker = AsyncMock(
return_value=_make_session(agent_id="my-custom-session")
)
async with TestClient(TestServer(app)) as client:
resp = await client.post(
"/api/sessions",
json={
"session_id": "my-custom-session",
"agent_path": str(EXAMPLE_AGENT_PATH),
},
)
data = await resp.json()
assert resp.status == 201
assert data["session_id"] == "my-custom-session"
manager.create_session_with_worker.assert_awaited_once_with(
str(EXAMPLE_AGENT_PATH.resolve()),
agent_id=None,
session_id="my-custom-session",
model=None,
initial_prompt=None,
queen_resume_from=None,
)
@pytest.mark.asyncio
async def test_list_sessions_empty(self):
app = create_app()
@@ -441,6 +487,70 @@ class TestSessionCRUD:
data = await resp.json()
assert "primary" in data["graphs"]
@pytest.mark.asyncio
async def test_update_trigger_task(self, tmp_path):
session = _make_session(tmp_dir=tmp_path)
session.available_triggers["daily"] = TriggerDefinition(
id="daily",
trigger_type="timer",
trigger_config={"cron": "0 5 * * *"},
task="Old task",
)
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.patch(
"/api/sessions/test_agent/triggers/daily",
json={"task": "New task"},
)
assert resp.status == 200
data = await resp.json()
assert data["task"] == "New task"
assert data["trigger_config"]["cron"] == "0 5 * * *"
assert session.available_triggers["daily"].task == "New task"
@pytest.mark.asyncio
async def test_update_trigger_cron_restarts_active_timer(self, tmp_path):
session = _make_session(tmp_dir=tmp_path)
session.available_triggers["daily"] = TriggerDefinition(
id="daily",
trigger_type="timer",
trigger_config={"cron": "0 5 * * *"},
task="Run task",
active=True,
)
session.active_trigger_ids.add("daily")
session.active_timer_tasks["daily"] = asyncio.create_task(asyncio.sleep(60))
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.patch(
"/api/sessions/test_agent/triggers/daily",
json={"trigger_config": {"cron": "0 6 * * *"}},
)
assert resp.status == 200
data = await resp.json()
assert data["trigger_config"]["cron"] == "0 6 * * *"
assert "daily" in session.active_timer_tasks
assert session.active_timer_tasks["daily"] is not None
assert session.available_triggers["daily"].trigger_config["cron"] == "0 6 * * *"
session.active_timer_tasks["daily"].cancel()
@pytest.mark.asyncio
async def test_update_trigger_cron_rejects_invalid_expression(self, tmp_path):
session = _make_session(tmp_dir=tmp_path)
session.available_triggers["daily"] = TriggerDefinition(
id="daily",
trigger_type="timer",
trigger_config={"cron": "0 5 * * *"},
task="Run task",
)
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.patch(
"/api/sessions/test_agent/triggers/daily",
json={"trigger_config": {"cron": "not a cron"}},
)
assert resp.status == 400
class TestExecution:
@pytest.mark.asyncio
@@ -767,6 +877,22 @@ class TestWorkerSessions:
assert data["sessions"][0]["status"] == "paused"
assert data["sessions"][0]["steps"] == 5
@pytest.mark.asyncio
async def test_list_sessions_includes_custom_id(self, custom_id_session, tmp_agent_dir):
session_id, session_dir, state = custom_id_session
tmp_path, agent_name, base = tmp_agent_dir
session = _make_session(tmp_dir=tmp_path / ".hive" / "agents" / agent_name)
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.get("/api/sessions/test_agent/worker-sessions")
assert resp.status == 200
data = await resp.json()
assert len(data["sessions"]) == 1
assert data["sessions"][0]["session_id"] == session_id
assert data["sessions"][0]["status"] == "paused"
@pytest.mark.asyncio
async def test_list_sessions_empty(self, tmp_agent_dir):
tmp_path, agent_name, base = tmp_agent_dir
@@ -1284,6 +1410,28 @@ class TestLogs:
assert len(data["logs"]) >= 1
assert data["logs"][0]["run_id"] == session_id
@pytest.mark.asyncio
async def test_logs_list_summaries_with_custom_id(self, custom_id_session, tmp_agent_dir):
session_id, session_dir, state = custom_id_session
tmp_path, agent_name, base = tmp_agent_dir
from framework.runtime.runtime_log_store import RuntimeLogStore
log_store = RuntimeLogStore(base)
session = _make_session(
tmp_dir=tmp_path / ".hive" / "agents" / agent_name,
log_store=log_store,
)
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.get("/api/sessions/test_agent/logs")
assert resp.status == 200
data = await resp.json()
assert "logs" in data
assert len(data["logs"]) >= 1
assert data["logs"][0]["run_id"] == session_id
@pytest.mark.asyncio
async def test_logs_session_summary(self, sample_session, tmp_agent_dir):
session_id, session_dir, state = sample_session
+35
@@ -0,0 +1,35 @@
"""Hive Agent Skills — discovery, parsing, trust gating, and injection of SKILL.md packages.
Implements the open Agent Skills standard (agentskills.io) for portable
skill discovery and activation, plus built-in default skills for runtime
operational discipline, and AS-13 trust gating for project-scope skills.
"""
from framework.skills.catalog import SkillCatalog
from framework.skills.config import DefaultSkillConfig, SkillsConfig
from framework.skills.defaults import DefaultSkillManager
from framework.skills.discovery import DiscoveryConfig, SkillDiscovery
from framework.skills.manager import SkillsManager, SkillsManagerConfig
from framework.skills.models import TrustStatus
from framework.skills.parser import ParsedSkill, parse_skill_md
from framework.skills.skill_errors import SkillError, SkillErrorCode, log_skill_error
from framework.skills.trust import TrustedRepoStore, TrustGate
__all__ = [
"DefaultSkillConfig",
"DefaultSkillManager",
"DiscoveryConfig",
"ParsedSkill",
"SkillCatalog",
"SkillDiscovery",
"SkillsConfig",
"SkillsManager",
"SkillsManagerConfig",
"TrustGate",
"TrustedRepoStore",
"TrustStatus",
"parse_skill_md",
"SkillError",
"SkillErrorCode",
"log_skill_error",
]
@@ -0,0 +1,24 @@
---
name: hive.batch-ledger
description: Track per-item status when processing collections to prevent skipped or duplicated items.
metadata:
author: hive
type: default-skill
---
## Operational Protocol: Batch Progress Ledger
When processing a collection of items, maintain a batch ledger in `_batch_ledger`.
Initialize when you identify the batch:
- `_batch_total`: total item count
- `_batch_ledger`: JSON with per-item status
Per-item statuses: pending → in_progress → completed|failed|skipped
- Set `in_progress` BEFORE processing
- Set final status AFTER processing with 1-line result_summary
- Include error reason for failed/skipped items
- Update aggregate counts after each item
- NEVER remove items from the ledger
- If resuming, skip items already marked completed
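One possible ledger shape, written as a Python-style dict purely for illustration; the protocol above fixes only the key names `_batch_total` and `_batch_ledger`, not the inner structure.
_batch_total = 3
_batch_ledger = {
    "items": {
        "invoice-001": {"status": "completed", "result_summary": "parsed, 12 line items"},
        "invoice-002": {"status": "in_progress"},
        "invoice-003": {"status": "pending"},
    },
    "counts": {"completed": 1, "in_progress": 1, "pending": 1, "failed": 0, "skipped": 0},
}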
@@ -0,0 +1,22 @@
---
name: hive.context-preservation
description: Proactively preserve critical information before automatic context pruning destroys it.
metadata:
author: hive
type: default-skill
---
## Operational Protocol: Context Preservation
You operate under a finite context window. Important information WILL be pruned.
Save-As-You-Go: After any tool call producing information you'll need later,
immediately extract key data into `_working_notes` or `_preserved_data`.
Do NOT rely on referring back to old tool results.
What to extract: URLs and key snippets (not full pages), relevant API fields
(not raw JSON), specific lines/values (not entire files), analysis results
(not raw data).
Before transitioning to the next phase/node, write a handoff summary to
`_handoff_context` with everything the next phase needs to know.
@@ -0,0 +1,18 @@
---
name: hive.error-recovery
description: Follow a structured recovery protocol when tool calls fail instead of blindly retrying or giving up.
metadata:
author: hive
type: default-skill
---
## Operational Protocol: Error Recovery
When a tool call fails:
1. Diagnose — record error in notes, classify as transient or structural
2. Decide — transient: retry once. Structural fixable: fix and retry.
Structural unfixable: record as failed, move to next item.
Blocking all progress: record escalation note.
3. Adapt — if same tool failed 3+ times, stop using it and find alternative.
Update plan in notes. Never silently drop the failed item.
@@ -0,0 +1,27 @@
---
name: hive.note-taking
description: Maintain structured working notes throughout execution to prevent information loss during context pruning.
metadata:
author: hive
type: default-skill
---
## Operational Protocol: Structured Note-Taking
Maintain structured working notes in shared memory key `_working_notes`.
Update at these checkpoints:
- After completing each discrete subtask or batch item
- After receiving new information that changes your plan
- Before any tool call that will produce substantial output
Structure:
### Objective — restate the goal
### Current Plan — numbered steps, mark completed with ✓
### Key Decisions — decisions made and WHY
### Working Data — intermediate results, extracted values
### Open Questions — uncertainties to verify
### Blockers — anything preventing progress
Update incrementally — do not rewrite from scratch each time.
@@ -0,0 +1,20 @@
---
name: hive.quality-monitor
description: Periodically self-assess output quality to catch degradation before the judge does.
metadata:
author: hive
type: default-skill
---
## Operational Protocol: Quality Self-Assessment
Every 5 iterations, self-assess:
1. On-task? Still working toward the stated objective?
2. Thorough? Cutting corners compared to earlier?
3. Non-repetitive? Producing new value or rehashing?
4. Consistent? Does the latest output contradict earlier decisions?
5. Complete? Tracking all items, or silently dropped some?
If degrading: write assessment to `_quality_log`, re-read `_working_notes`,
change approach explicitly. If acceptable: brief note in `_quality_log`.
@@ -0,0 +1,17 @@
---
name: hive.task-decomposition
description: Decompose complex tasks into explicit subtasks before diving in.
metadata:
author: hive
type: default-skill
---
## Operational Protocol: Task Decomposition
Before starting a complex task:
1. Decompose — break into numbered subtasks in `_working_notes` Current Plan
2. Estimate — relative effort per subtask (small/medium/large)
3. Execute — work through in order, mark ✓ when complete
4. Budget — if running low on iterations, prioritize by impact
5. Verify — before declaring done, every subtask must be ✓, skipped (with reason), or blocked
+116
@@ -0,0 +1,116 @@
"""Skill catalog — in-memory index with system prompt generation.
Builds the XML catalog injected into the system prompt for model-driven
skill activation per the Agent Skills standard.
"""
from __future__ import annotations
import logging
from xml.sax.saxutils import escape
from framework.skills.parser import ParsedSkill
from framework.skills.skill_errors import SkillErrorCode, log_skill_error
logger = logging.getLogger(__name__)
_BEHAVIORAL_INSTRUCTION = (
"The following skills provide specialized instructions for specific tasks.\n"
"When a task matches a skill's description, read the SKILL.md at the listed\n"
"location to load the full instructions before proceeding.\n"
"When a skill references relative paths, resolve them against the skill's\n"
"directory (the parent of SKILL.md) and use absolute paths in tool calls."
)
class SkillCatalog:
"""In-memory catalog of discovered skills."""
def __init__(self, skills: list[ParsedSkill] | None = None):
self._skills: dict[str, ParsedSkill] = {}
self._activated: set[str] = set()
if skills:
for skill in skills:
self.add(skill)
def add(self, skill: ParsedSkill) -> None:
"""Add a skill to the catalog."""
self._skills[skill.name] = skill
def get(self, name: str) -> ParsedSkill | None:
"""Look up a skill by name."""
return self._skills.get(name)
def mark_activated(self, name: str) -> None:
"""Mark a skill as activated in the current session."""
self._activated.add(name)
def is_activated(self, name: str) -> bool:
"""Check if a skill has been activated."""
return name in self._activated
@property
def skill_count(self) -> int:
return len(self._skills)
@property
def allowlisted_dirs(self) -> list[str]:
"""All skill base directories for file access allowlisting."""
return [skill.base_dir for skill in self._skills.values()]
def to_prompt(self) -> str:
"""Generate the catalog prompt for system prompt injection.
Returns empty string if no community/user skills are discovered
(default skills are handled separately by DefaultSkillManager).
"""
# Filter out framework-scope skills (default skills) — they're
# injected via the protocols prompt, not the catalog
community_skills = [s for s in self._skills.values() if s.source_scope != "framework"]
if not community_skills:
return ""
lines = ["<available_skills>"]
for skill in sorted(community_skills, key=lambda s: s.name):
lines.append(" <skill>")
lines.append(f" <name>{escape(skill.name)}</name>")
lines.append(f" <description>{escape(skill.description)}</description>")
lines.append(f" <location>{escape(skill.location)}</location>")
lines.append(f" <base_dir>{escape(skill.base_dir)}</base_dir>")
lines.append(" </skill>")
lines.append("</available_skills>")
xml_block = "\n".join(lines)
return f"{_BEHAVIORAL_INSTRUCTION}\n\n{xml_block}"
def build_pre_activated_prompt(self, skill_names: list[str]) -> str:
"""Build prompt content for pre-activated skills.
Pre-activated skills get their full SKILL.md body loaded into
the system prompt at startup (tier 2), bypassing model-driven
activation.
Returns empty string if no skills match.
"""
parts: list[str] = []
for name in skill_names:
skill = self.get(name)
if skill is None:
log_skill_error(
logger,
"warning",
SkillErrorCode.SKILL_NOT_FOUND,
what=f"Pre-activated skill '{name}' not found in catalog",
why="The skill was listed for pre-activation but was not discovered.",
fix=f"Check that a SKILL.md for '{name}' exists in a scanned directory.",
)
continue
if self.is_activated(name):
continue # Already activated, skip duplicate
self.mark_activated(name)
parts.append(f"--- Pre-Activated Skill: {skill.name} ---\n{skill.body}")
return "\n\n".join(parts)
+120
@@ -0,0 +1,120 @@
"""CLI commands for the Hive skill system.
Phase 1 commands (AS-13):
hive skill list list discovered skills across all scopes
hive skill trust <path> permanently trust a project repo's skills
Full CLI suite (CLI-1 through CLI-13) is Phase 2.
"""
from __future__ import annotations
import subprocess
import sys
from pathlib import Path
def register_skill_commands(subparsers) -> None:
"""Register the ``hive skill`` subcommand group."""
skill_parser = subparsers.add_parser("skill", help="Manage skills")
skill_sub = skill_parser.add_subparsers(dest="skill_command", required=True)
# hive skill list
list_parser = skill_sub.add_parser("list", help="List discovered skills across all scopes")
list_parser.add_argument(
"--project-dir",
default=None,
metavar="PATH",
help="Project directory to scan (default: current directory)",
)
list_parser.set_defaults(func=cmd_skill_list)
# hive skill trust
trust_parser = skill_sub.add_parser(
"trust",
help="Permanently trust a project repository so its skills load without prompting",
)
trust_parser.add_argument(
"project_path",
help="Path to the project directory (must contain a .git with a remote origin)",
)
trust_parser.set_defaults(func=cmd_skill_trust)
def cmd_skill_list(args) -> int:
"""List all discovered skills grouped by scope."""
from framework.skills.discovery import DiscoveryConfig, SkillDiscovery
project_dir = Path(args.project_dir).resolve() if args.project_dir else Path.cwd()
skills = SkillDiscovery(DiscoveryConfig(project_root=project_dir)).discover()
if not skills:
print("No skills discovered.")
return 0
scope_headers = {
"project": "PROJECT SKILLS",
"user": "USER SKILLS",
"framework": "FRAMEWORK SKILLS",
}
for scope in ("project", "user", "framework"):
scope_skills = [s for s in skills if s.source_scope == scope]
if not scope_skills:
continue
print(f"\n{scope_headers[scope]}")
print("" * 40)
for skill in scope_skills:
print(f"{skill.name}")
print(f" {skill.description}")
print(f" {skill.location}")
return 0
def cmd_skill_trust(args) -> int:
"""Permanently trust a project repository's skills."""
from framework.skills.trust import TrustedRepoStore, _normalize_remote_url
project_path = Path(args.project_path).resolve()
if not project_path.exists():
print(f"Error: path does not exist: {project_path}", file=sys.stderr)
return 1
if not (project_path / ".git").exists():
print(
f"Error: {project_path} is not a git repository (no .git directory).",
file=sys.stderr,
)
return 1
try:
result = subprocess.run(
["git", "-C", str(project_path), "remote", "get-url", "origin"],
capture_output=True,
text=True,
timeout=3,
)
if result.returncode != 0:
print(
"Error: no remote 'origin' configured in this repository.",
file=sys.stderr,
)
return 1
remote_url = result.stdout.strip()
except subprocess.TimeoutExpired:
print("Error: git remote lookup timed out.", file=sys.stderr)
return 1
except (FileNotFoundError, OSError) as e:
print(f"Error reading git remote: {e}", file=sys.stderr)
return 1
repo_key = _normalize_remote_url(remote_url)
store = TrustedRepoStore()
store.trust(repo_key, project_path=str(project_path))
print(f"✓ Trusted: {repo_key}")
print(" Stored in ~/.hive/trusted_repos.json")
print(" Skills from this repository will load without prompting in future runs.")
return 0
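A sketch of wiring these subcommands into an argparse entry point (the module path framework.skills.cli is assumed; the argument values are illustrative):
import argparse
from framework.skills.cli import register_skill_commands
parser = argparse.ArgumentParser(prog="hive")
subparsers = parser.add_subparsers(dest="command", required=True)
register_skill_commands(subparsers)
# Equivalent to: hive skill list --project-dir .
args = parser.parse_args(["skill", "list", "--project-dir", "."])
raise SystemExit(args.func(args))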
+100
@@ -0,0 +1,100 @@
"""Skill configuration dataclasses.
Handles agent-level skill configuration from module-level variables
(``default_skills`` and ``skills``).
"""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Any
@dataclass
class DefaultSkillConfig:
"""Configuration for a single default skill."""
enabled: bool = True
overrides: dict[str, Any] = field(default_factory=dict)
@classmethod
def from_dict(cls, data: dict[str, Any]) -> DefaultSkillConfig:
enabled = data.get("enabled", True)
overrides = {k: v for k, v in data.items() if k != "enabled"}
return cls(enabled=enabled, overrides=overrides)
@dataclass
class SkillsConfig:
"""Agent-level skill configuration.
Built from module-level variables in agent.py::
# Pre-activated community skills
skills = ["deep-research", "code-review"]
# Default skill configuration
default_skills = {
"hive.note-taking": {"enabled": True},
"hive.batch-ledger": {"enabled": True, "checkpoint_every_n": 10},
"hive.quality-monitor": {"enabled": False},
}
"""
# Per-default-skill config, keyed by skill name (e.g. "hive.note-taking")
default_skills: dict[str, DefaultSkillConfig] = field(default_factory=dict)
# Pre-activated community skills (by name)
skills: list[str] = field(default_factory=list)
# Master switch: disable all default skills at once
all_defaults_disabled: bool = False
def is_default_enabled(self, skill_name: str) -> bool:
"""Check if a specific default skill is enabled."""
if self.all_defaults_disabled:
return False
config = self.default_skills.get(skill_name)
if config is None:
return True # enabled by default
return config.enabled
def get_default_overrides(self, skill_name: str) -> dict[str, Any]:
"""Get skill-specific configuration overrides."""
config = self.default_skills.get(skill_name)
if config is None:
return {}
return config.overrides
@classmethod
def from_agent_vars(
cls,
default_skills: dict[str, Any] | None = None,
skills: list[str] | None = None,
) -> SkillsConfig:
"""Build config from agent module-level variables.
Args:
default_skills: Dict from agent module, e.g.
``{"hive.note-taking": {"enabled": True}}``
skills: List of pre-activated skill names from agent module
"""
all_disabled = False
parsed_defaults: dict[str, DefaultSkillConfig] = {}
if default_skills:
for name, config_dict in default_skills.items():
if name == "_all":
if isinstance(config_dict, dict) and not config_dict.get("enabled", True):
all_disabled = True
continue
if isinstance(config_dict, dict):
parsed_defaults[name] = DefaultSkillConfig.from_dict(config_dict)
elif isinstance(config_dict, bool):
parsed_defaults[name] = DefaultSkillConfig(enabled=config_dict)
return cls(
default_skills=parsed_defaults,
skills=list(skills or []),
all_defaults_disabled=all_disabled,
)
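A short sketch of how the config resolves enablement and overrides (skill names taken from the docstring above):
from framework.skills.config import SkillsConfig
cfg = SkillsConfig.from_agent_vars(
    default_skills={
        "hive.quality-monitor": {"enabled": False},
        "hive.batch-ledger": {"enabled": True, "checkpoint_every_n": 10},
    },
    skills=["deep-research"],
)
assert cfg.is_default_enabled("hive.note-taking")            # unlisted defaults stay enabled
assert not cfg.is_default_enabled("hive.quality-monitor")    # explicitly disabled
assert cfg.get_default_overrides("hive.batch-ledger") == {"checkpoint_every_n": 10}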
+200
@@ -0,0 +1,200 @@
"""DefaultSkillManager — load, configure, and inject built-in default skills.
Default skills are SKILL.md packages shipped with the framework that provide
runtime operational protocols (note-taking, batch tracking, error recovery, etc.).
"""
from __future__ import annotations
import logging
from pathlib import Path
from framework.skills.config import SkillsConfig
from framework.skills.parser import ParsedSkill, parse_skill_md
from framework.skills.skill_errors import SkillErrorCode, log_skill_error
logger = logging.getLogger(__name__)
# Default skills directory relative to this module
_DEFAULT_SKILLS_DIR = Path(__file__).parent / "_default_skills"
# Ordered list of default skills (name → directory)
SKILL_REGISTRY: dict[str, str] = {
"hive.note-taking": "note-taking",
"hive.batch-ledger": "batch-ledger",
"hive.context-preservation": "context-preservation",
"hive.quality-monitor": "quality-monitor",
"hive.error-recovery": "error-recovery",
"hive.task-decomposition": "task-decomposition",
}
# All shared memory keys used by default skills (for permission auto-inclusion)
SHARED_MEMORY_KEYS: list[str] = [
# note-taking
"_working_notes",
"_notes_updated_at",
# batch-ledger
"_batch_ledger",
"_batch_total",
"_batch_completed",
"_batch_failed",
# context-preservation
"_handoff_context",
"_preserved_data",
# quality-monitor
"_quality_log",
"_quality_degradation_count",
# error-recovery
"_error_log",
"_failed_tools",
"_escalation_needed",
# task-decomposition
"_subtasks",
"_iteration_budget_remaining",
]
class DefaultSkillManager:
"""Manages loading, configuration, and prompt generation for default skills."""
def __init__(self, config: SkillsConfig | None = None):
self._config = config or SkillsConfig()
self._skills: dict[str, ParsedSkill] = {}
self._loaded = False
self._error_count = 0
def load(self) -> None:
"""Load all enabled default skill SKILL.md files."""
if self._loaded:
return
error_count = 0
for skill_name, dir_name in SKILL_REGISTRY.items():
if not self._config.is_default_enabled(skill_name):
logger.info("Default skill '%s' disabled by config", skill_name)
continue
skill_path = _DEFAULT_SKILLS_DIR / dir_name / "SKILL.md"
if not skill_path.is_file():
log_skill_error(
logger,
"error",
SkillErrorCode.SKILL_NOT_FOUND,
what=f"Default skill SKILL.md not found: '{skill_path}'",
why=f"The framework skill '{skill_name}' is missing its SKILL.md file.",
fix="Reinstall the hive framework — this file is part of the package.",
)
error_count += 1
continue
parsed = parse_skill_md(skill_path, source_scope="framework")
if parsed is None:
log_skill_error(
logger,
"error",
SkillErrorCode.SKILL_PARSE_ERROR,
what=f"Failed to parse default skill '{skill_name}'",
why=f"parse_skill_md returned None for '{skill_path}'.",
fix="Reinstall the hive framework — this file may be corrupted.",
)
error_count += 1
continue
self._skills[skill_name] = parsed
self._loaded = True
self._error_count = error_count
def build_protocols_prompt(self) -> str:
"""Build the combined operational protocols section.
Extracts protocol sections from all enabled default skills and
combines them into a single ``## Operational Protocols`` block
for system prompt injection.
Returns empty string if all defaults are disabled.
"""
if not self._skills:
return ""
parts: list[str] = ["## Operational Protocols\n"]
for skill_name in SKILL_REGISTRY:
skill = self._skills.get(skill_name)
if skill is None:
continue
# Use the full body — each SKILL.md contains exactly one protocol section
parts.append(skill.body)
if len(parts) <= 1:
return ""
combined = "\n\n".join(parts)
# Token budget warning (approximate: 1 token ≈ 4 chars)
approx_tokens = len(combined) // 4
if approx_tokens > 2000:
logger.warning(
"Default skill protocols exceed 2000 token budget "
"(~%d tokens, %d chars). Consider trimming.",
approx_tokens,
len(combined),
)
return combined
def log_active_skills(self) -> None:
"""Log which default skills are active and their configuration."""
if not self._skills:
logger.info("Default skills: all disabled")
# DX-3: Per-skill structured startup log
for skill_name in SKILL_REGISTRY:
if skill_name in self._skills:
overrides = self._config.get_default_overrides(skill_name)
status = f"loaded overrides={overrides}" if overrides else "loaded"
elif not self._config.is_default_enabled(skill_name):
status = "disabled"
else:
status = "error"
logger.info(
"skill_startup name=%s scope=framework status=%s",
skill_name,
status,
)
# Original active skills log line (preserved for backward compatibility)
active = []
for skill_name in SKILL_REGISTRY:
if skill_name in self._skills:
overrides = self._config.get_default_overrides(skill_name)
if overrides:
active.append(f"{skill_name} ({overrides})")
else:
active.append(skill_name)
if active:
logger.info("Default skills active: %s", ", ".join(active))
# DX-3: Summary line with error count
total = len(SKILL_REGISTRY)
active_count = len(self._skills)
error_count = getattr(self, "_error_count", 0)
disabled_count = total - active_count - error_count
logger.info(
"Skills: %d default (%d active, %d disabled, %d error)",
total,
active_count,
disabled_count,
error_count,
)
@property
def active_skill_names(self) -> list[str]:
"""Names of all currently active default skills."""
return list(self._skills.keys())
@property
def active_skills(self) -> dict[str, ParsedSkill]:
"""All active default skills keyed by name."""
return dict(self._skills)
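A usage sketch, assuming the installed framework package ships the bundled _default_skills directory:
from framework.skills.config import SkillsConfig
from framework.skills.defaults import DefaultSkillManager
config = SkillsConfig.from_agent_vars(default_skills={"hive.quality-monitor": {"enabled": False}})
mgr = DefaultSkillManager(config=config)
mgr.load()                                  # parses SKILL.md for every enabled default skill
mgr.log_active_skills()                     # per-skill skill_startup lines plus summary
protocols = mgr.build_protocols_prompt()    # "## Operational Protocols" block, or "" if none loaded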
+186
@@ -0,0 +1,186 @@
"""Skill discovery — scan standard directories for SKILL.md files.
Implements the Agent Skills standard discovery paths plus Hive-specific
locations. Resolves name collisions deterministically.
"""
from __future__ import annotations
import logging
from dataclasses import dataclass
from pathlib import Path
from framework.skills.parser import ParsedSkill, parse_skill_md
from framework.skills.skill_errors import SkillErrorCode, log_skill_error
logger = logging.getLogger(__name__)
# Directories to skip during scanning
_SKIP_DIRS = frozenset(
{
".git",
"node_modules",
"__pycache__",
".venv",
"venv",
".mypy_cache",
".pytest_cache",
".ruff_cache",
}
)
# Scope priority (higher = takes precedence)
_SCOPE_PRIORITY = {
"framework": 0,
"user": 1,
"project": 2,
}
# Within the same scope, Hive-specific paths override cross-client paths.
# We encode this by scanning cross-client first, then Hive-specific (later wins).
@dataclass
class DiscoveryConfig:
"""Configuration for skill discovery."""
project_root: Path | None = None
skip_user_scope: bool = False
skip_framework_scope: bool = False
max_depth: int = 4
max_dirs: int = 2000
class SkillDiscovery:
"""Scans standard directories for SKILL.md files and resolves collisions."""
def __init__(self, config: DiscoveryConfig | None = None):
self._config = config or DiscoveryConfig()
def discover(self) -> list[ParsedSkill]:
"""Scan all scopes and return deduplicated skill list.
Scanning order (lowest to highest precedence):
1. Framework defaults
2. User cross-client (~/.agents/skills/)
3. User Hive-specific (~/.hive/skills/)
4. Project cross-client (<project>/.agents/skills/)
5. Project Hive-specific (<project>/.hive/skills/)
Later entries override earlier ones on name collision.
"""
all_skills: list[ParsedSkill] = []
# Framework scope (lowest precedence)
if not self._config.skip_framework_scope:
framework_dir = Path(__file__).parent / "_default_skills"
if framework_dir.is_dir():
all_skills.extend(self._scan_scope(framework_dir, "framework"))
# User scope
if not self._config.skip_user_scope:
home = Path.home()
# Cross-client (lower precedence within user scope)
user_agents = home / ".agents" / "skills"
if user_agents.is_dir():
all_skills.extend(self._scan_scope(user_agents, "user"))
# Hive-specific (higher precedence within user scope)
user_hive = home / ".hive" / "skills"
if user_hive.is_dir():
all_skills.extend(self._scan_scope(user_hive, "user"))
# Project scope (highest precedence)
if self._config.project_root:
root = self._config.project_root
# Cross-client
project_agents = root / ".agents" / "skills"
if project_agents.is_dir():
all_skills.extend(self._scan_scope(project_agents, "project"))
# Hive-specific
project_hive = root / ".hive" / "skills"
if project_hive.is_dir():
all_skills.extend(self._scan_scope(project_hive, "project"))
resolved = self._resolve_collisions(all_skills)
logger.info(
"Skill discovery: found %d skills (%d after dedup) across all scopes",
len(all_skills),
len(resolved),
)
return resolved
def _scan_scope(self, root: Path, scope: str) -> list[ParsedSkill]:
"""Scan a single directory for skill directories containing SKILL.md."""
skills: list[ParsedSkill] = []
dirs_scanned = 0
for skill_md in self._find_skill_files(root, depth=0):
if dirs_scanned >= self._config.max_dirs:
logger.warning(
"Hit max directory limit (%d) scanning %s",
self._config.max_dirs,
root,
)
break
parsed = parse_skill_md(skill_md, source_scope=scope)
if parsed is not None:
skills.append(parsed)
dirs_scanned += 1
return skills
def _find_skill_files(self, directory: Path, depth: int) -> list[Path]:
"""Recursively find SKILL.md files up to max_depth."""
if depth > self._config.max_depth:
return []
results: list[Path] = []
try:
entries = sorted(directory.iterdir())
except OSError:
return []
for entry in entries:
if not entry.is_dir():
continue
if entry.name in _SKIP_DIRS:
continue
skill_md = entry / "SKILL.md"
if skill_md.is_file():
results.append(skill_md)
else:
# Recurse into subdirectories
results.extend(self._find_skill_files(entry, depth + 1))
return results
def _resolve_collisions(self, skills: list[ParsedSkill]) -> list[ParsedSkill]:
"""Resolve name collisions deterministically.
Later entries in the list override earlier ones (because we scan
from lowest to highest precedence). On collision, log a warning.
"""
seen: dict[str, ParsedSkill] = {}
for skill in skills:
if skill.name in seen:
existing = seen[skill.name]
log_skill_error(
logger,
"warning",
SkillErrorCode.SKILL_COLLISION,
what=f"Skill name collision: '{skill.name}'",
why=f"'{skill.location}' overrides '{existing.location}'.",
fix="Rename one of the conflicting skill directories to use a unique name.",
)
seen[skill.name] = skill
return list(seen.values())
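A discovery sketch; the project path is hypothetical:
from pathlib import Path
from framework.skills.discovery import DiscoveryConfig, SkillDiscovery
discovery = SkillDiscovery(DiscoveryConfig(project_root=Path("/path/to/agent")))
for skill in discovery.discover():
    # Highest-precedence copy wins on name collisions, per _resolve_collisions
    print(skill.source_scope, skill.name, skill.location)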
+194
@@ -0,0 +1,194 @@
"""Unified skill lifecycle manager.
``SkillsManager`` is the single facade that owns skill discovery, loading,
and prompt rendering. The runtime creates one at startup and downstream
layers read the cached prompt strings.
Typical usage **config-driven** (runner passes configuration)::
config = SkillsManagerConfig(
skills_config=SkillsConfig.from_agent_vars(...),
project_root=agent_path,
)
mgr = SkillsManager(config)
mgr.load()
print(mgr.protocols_prompt) # default skill protocols
print(mgr.skills_catalog_prompt) # community skills XML
Typical usage **bare** (exported agents, SDK users)::
mgr = SkillsManager() # default config
mgr.load() # loads all 6 default skills, no community discovery
"""
from __future__ import annotations
import logging
from dataclasses import dataclass, field
from pathlib import Path
from framework.skills.config import SkillsConfig
logger = logging.getLogger(__name__)
@dataclass
class SkillsManagerConfig:
"""Everything the runtime needs to configure skills.
Attributes:
skills_config: Per-skill enable/disable and overrides.
project_root: Agent directory for community skill discovery.
When ``None``, community discovery is skipped.
skip_community_discovery: Explicitly skip community scanning
even when ``project_root`` is set.
interactive: Whether trust gating can prompt the user interactively.
When ``False``, untrusted project skills are silently skipped.
"""
skills_config: SkillsConfig = field(default_factory=SkillsConfig)
project_root: Path | None = None
skip_community_discovery: bool = False
interactive: bool = True
class SkillsManager:
"""Unified skill lifecycle: discovery → loading → prompt renderation.
The runtime creates one instance during init and owns it for the
lifetime of the process. Downstream layers (``ExecutionStream``,
``GraphExecutor``, ``NodeContext``, ``EventLoopNode``) receive the
cached prompt strings via property accessors.
"""
def __init__(self, config: SkillsManagerConfig | None = None) -> None:
self._config = config or SkillsManagerConfig()
self._loaded = False
self._catalog_prompt: str = ""
self._protocols_prompt: str = ""
self._allowlisted_dirs: list[str] = []
# ------------------------------------------------------------------
# Factory for backwards-compat bridge
# ------------------------------------------------------------------
@classmethod
def from_precomputed(
cls,
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
) -> SkillsManager:
"""Wrap pre-rendered prompt strings (legacy callers).
Returns a manager that skips discovery/loading and just returns
the provided strings. Used by the deprecation bridge in
``AgentRuntime`` when callers pass raw prompt strings.
"""
mgr = cls.__new__(cls)
mgr._config = SkillsManagerConfig()
mgr._loaded = True # skip load()
mgr._catalog_prompt = skills_catalog_prompt
mgr._protocols_prompt = protocols_prompt
mgr._allowlisted_dirs = []
return mgr
# ------------------------------------------------------------------
# Lifecycle
# ------------------------------------------------------------------
def load(self) -> None:
"""Discover, load, and cache skill prompts. Idempotent."""
if self._loaded:
return
self._loaded = True
try:
self._do_load()
except Exception:
logger.warning("Skill system init failed (non-fatal)", exc_info=True)
def _do_load(self) -> None:
"""Internal load — may raise; caller catches."""
from framework.skills.catalog import SkillCatalog
from framework.skills.defaults import DefaultSkillManager
from framework.skills.discovery import DiscoveryConfig, SkillDiscovery
skills_config = self._config.skills_config
# 1. Community skill discovery (when project_root is available)
catalog_prompt = ""
if self._config.project_root is not None and not self._config.skip_community_discovery:
from framework.skills.trust import TrustGate
discovery = SkillDiscovery(DiscoveryConfig(project_root=self._config.project_root))
discovered = discovery.discover()
# Trust-gate project-scope skills (AS-13)
discovered = TrustGate(interactive=self._config.interactive).filter_and_gate(
discovered, project_dir=self._config.project_root
)
catalog = SkillCatalog(discovered)
self._allowlisted_dirs = catalog.allowlisted_dirs
catalog_prompt = catalog.to_prompt()
# Pre-activated community skills
if skills_config.skills:
pre_activated = catalog.build_pre_activated_prompt(skills_config.skills)
if pre_activated:
if catalog_prompt:
catalog_prompt = f"{catalog_prompt}\n\n{pre_activated}"
else:
catalog_prompt = pre_activated
# 2. Default skills (always loaded unless explicitly disabled)
default_mgr = DefaultSkillManager(config=skills_config)
default_mgr.load()
default_mgr.log_active_skills()
protocols_prompt = default_mgr.build_protocols_prompt()
# DX-3: Community skill startup summary
if self._config.project_root is not None and not self._config.skip_community_discovery:
community_count = len(catalog._skills) if catalog_prompt else 0
pre_activated_count = len(skills_config.skills) if skills_config.skills else 0
logger.info(
"Skills: %d community (%d catalog, %d pre-activated)",
community_count,
community_count,
pre_activated_count,
)
# 3. Cache
self._catalog_prompt = catalog_prompt
self._protocols_prompt = protocols_prompt
if protocols_prompt:
logger.info(
"Skill system ready: protocols=%d chars, catalog=%d chars",
len(protocols_prompt),
len(catalog_prompt),
)
else:
logger.warning("Skill system produced empty protocols_prompt")
# ------------------------------------------------------------------
# Prompt accessors (consumed by downstream layers)
# ------------------------------------------------------------------
@property
def skills_catalog_prompt(self) -> str:
"""Community skills XML catalog for system prompt injection."""
return self._catalog_prompt
@property
def protocols_prompt(self) -> str:
"""Default skill operational protocols for system prompt injection."""
return self._protocols_prompt
@property
def allowlisted_dirs(self) -> list[str]:
"""Skill base directories for Tier 3 resource access (AS-6)."""
return self._allowlisted_dirs
@property
def is_loaded(self) -> bool:
return self._loaded
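A sketch of the legacy bridge path (the module path framework.skills.manager is assumed from the surrounding imports; the prompt strings are placeholders):
from framework.skills.manager import SkillsManager
# Wrap pre-rendered prompt strings; discovery and loading are skipped entirely
mgr = SkillsManager.from_precomputed(
    skills_catalog_prompt="<available_skills>...</available_skills>",
    protocols_prompt="## Operational Protocols\n...",
)
assert mgr.is_loaded
print(mgr.protocols_prompt)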
+52
@@ -0,0 +1,52 @@
"""Data models for the Hive skill system (Agent Skills standard)."""
from __future__ import annotations
from dataclasses import dataclass, field
from enum import StrEnum
from pathlib import Path
class SkillScope(StrEnum):
"""Where a skill was discovered."""
PROJECT = "project"
USER = "user"
FRAMEWORK = "framework"
class TrustStatus(StrEnum):
"""Trust state of a skill entry."""
TRUSTED = "trusted"
PENDING_CONSENT = "pending_consent"
DENIED = "denied"
@dataclass
class SkillEntry:
"""In-memory record for a discovered skill (PRD §4.2)."""
name: str
"""Skill name from SKILL.md frontmatter."""
description: str
"""Skill description from SKILL.md frontmatter."""
location: Path
"""Absolute path to SKILL.md."""
base_dir: Path
"""Parent directory of SKILL.md (skill root)."""
source_scope: SkillScope
"""Which scope this skill was found in."""
trust_status: TrustStatus = TrustStatus.TRUSTED
"""Trust state; project-scope skills start as PENDING_CONSENT before gating."""
# Optional frontmatter fields
license: str | None = None
compatibility: list[str] = field(default_factory=list)
allowed_tools: list[str] = field(default_factory=list)
metadata: dict = field(default_factory=dict)
+225
@@ -0,0 +1,225 @@
"""SKILL.md parser — extracts YAML frontmatter and markdown body.
Parses SKILL.md files per the Agent Skills standard (agentskills.io/specification).
Lenient validation: warns on non-critical issues, skips only on missing description
or completely unparseable YAML.
"""
from __future__ import annotations
import logging
import re
from dataclasses import dataclass
from pathlib import Path
from typing import Any
from framework.skills.skill_errors import SkillErrorCode, log_skill_error
logger = logging.getLogger(__name__)
# Maximum name length before a warning is logged
_MAX_NAME_LENGTH = 64
@dataclass
class ParsedSkill:
"""In-memory representation of a parsed SKILL.md file."""
name: str
description: str
location: str # absolute path to SKILL.md
base_dir: str # parent directory of SKILL.md
source_scope: str # "project", "user", or "framework"
body: str # markdown body after closing ---
# Optional frontmatter fields
license: str | None = None
compatibility: list[str] | None = None
metadata: dict[str, Any] | None = None
allowed_tools: list[str] | None = None
def _try_fix_yaml(raw: str) -> str:
"""Attempt to fix common YAML issues (unquoted colon values).
Some SKILL.md files written for other clients may contain unquoted
values with colons, e.g. ``description: Use for: research tasks``.
This wraps such values in quotes as a best-effort fixup.
"""
lines = raw.split("\n")
fixed = []
for line in lines:
# Match "key: value" where value contains an unquoted colon
m = re.match(r"^(\s*\w[\w-]*:\s*)(.+)$", line)
if m:
key_part, value_part = m.group(1), m.group(2)
# If value contains a colon and isn't already quoted
if ":" in value_part and not (value_part.startswith('"') or value_part.startswith("'")):
value_part = f'"{value_part}"'
fixed.append(f"{key_part}{value_part}")
else:
fixed.append(line)
return "\n".join(fixed)
def parse_skill_md(path: Path, source_scope: str = "project") -> ParsedSkill | None:
"""Parse a SKILL.md file into a ParsedSkill record.
Args:
path: Absolute path to the SKILL.md file.
source_scope: One of "project", "user", or "framework".
Returns:
ParsedSkill on success, None if the file is unparseable or
missing required fields (description).
"""
try:
content = path.read_text(encoding="utf-8")
except OSError as exc:
log_skill_error(
logger,
"error",
SkillErrorCode.SKILL_ACTIVATION_FAILED,
what=f"Failed to read '{path}'",
why=str(exc),
fix="Check the file exists and has read permissions.",
)
return None
if not content.strip():
log_skill_error(
logger,
"error",
SkillErrorCode.SKILL_PARSE_ERROR,
what=f"Invalid SKILL.md at '{path}'",
why="The file exists but contains no content.",
fix="Add valid YAML frontmatter and a markdown body to the SKILL.md.",
)
return None
# Split on --- delimiters (first two occurrences)
parts = content.split("---", 2)
if len(parts) < 3:
log_skill_error(
logger,
"error",
SkillErrorCode.SKILL_PARSE_ERROR,
what=f"Invalid SKILL.md at '{path}'",
why="Missing YAML frontmatter (---).",
fix="Wrap the frontmatter with --- on its own line at the top and bottom.",
)
return None
# parts[0] is content before first --- (should be empty or whitespace)
# parts[1] is the YAML frontmatter
# parts[2] is the markdown body
raw_yaml = parts[1].strip()
body = parts[2].strip()
if not raw_yaml:
log_skill_error(
logger,
"error",
SkillErrorCode.SKILL_PARSE_ERROR,
what=f"Invalid SKILL.md at '{path}'",
why="The --- delimiters are present but the YAML block is empty.",
fix="Add at least 'name' and 'description' fields to the frontmatter.",
)
return None
# Parse YAML
import yaml
frontmatter: dict[str, Any] | None = None
try:
frontmatter = yaml.safe_load(raw_yaml)
except yaml.YAMLError:
# Fallback: try fixing unquoted colon values
try:
fixed = _try_fix_yaml(raw_yaml)
frontmatter = yaml.safe_load(fixed)
log_skill_error(
logger,
"warning",
SkillErrorCode.SKILL_YAML_FIXUP,
what=f"Auto-fixed YAML in '{path}'",
why="Unquoted colon values detected in frontmatter.",
fix='Wrap values containing colons in quotes e.g. description: "Use for: research"',
)
except yaml.YAMLError as exc:
log_skill_error(
logger,
"error",
SkillErrorCode.SKILL_PARSE_ERROR,
what=f"Invalid SKILL.md at '{path}'",
why=str(exc),
fix="Validate the YAML frontmatter at https://yaml-online-parser.appspot.com/",
)
return None
if not isinstance(frontmatter, dict):
log_skill_error(
logger,
"error",
SkillErrorCode.SKILL_PARSE_ERROR,
what=f"Invalid SKILL.md at '{path}'",
why="YAML frontmatter is not a key-value mapping.",
fix="Ensure the frontmatter is valid YAML with key: value pairs.",
)
return None
# Required: description
description = frontmatter.get("description")
if not description or not str(description).strip():
log_skill_error(
logger,
"error",
SkillErrorCode.SKILL_MISSING_DESCRIPTION,
what=f"Missing 'description' in '{path}'",
why="The 'description' field is required but is absent or empty.",
fix="Add a non-empty 'description' field to the YAML frontmatter.",
)
return None
# Required: name (fallback to parent directory name)
name = frontmatter.get("name")
parent_dir_name = path.parent.name
if not name or not str(name).strip():
name = parent_dir_name
log_skill_error(
logger,
"warning",
SkillErrorCode.SKILL_NAME_MISMATCH,
what=f"Missing 'name' in '{path}' — using directory name '{name}'",
why="The 'name' field is absent from the YAML frontmatter.",
fix=f"Add 'name: {name}' to the frontmatter to make this explicit.",
)
else:
name = str(name).strip()
# Lenient warnings
if len(name) > _MAX_NAME_LENGTH:
logger.warning("Skill name exceeds %d chars in %s: '%s'", _MAX_NAME_LENGTH, path, name)
if name != parent_dir_name and not name.endswith(f".{parent_dir_name}"):
log_skill_error(
logger,
"warning",
SkillErrorCode.SKILL_NAME_MISMATCH,
what=f"Name mismatch in '{path}'",
why=f"Skill name '{name}' doesn't match directory '{parent_dir_name}'.",
fix=f"Rename the directory to '{name}' or set name to '{parent_dir_name}'.",
)
return ParsedSkill(
name=name,
description=str(description).strip(),
location=str(path.resolve()),
base_dir=str(path.parent.resolve()),
source_scope=source_scope,
body=body,
license=frontmatter.get("license"),
compatibility=frontmatter.get("compatibility"),
metadata=frontmatter.get("metadata"),
allowed_tools=frontmatter.get("allowed-tools"),
)
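A round-trip sketch of the parser against a temporary SKILL.md:
import tempfile
from pathlib import Path
from framework.skills.parser import parse_skill_md
with tempfile.TemporaryDirectory() as tmp:
    skill_dir = Path(tmp) / "my-skill"
    skill_dir.mkdir()
    (skill_dir / "SKILL.md").write_text(
        "---\n"
        "name: my-skill\n"
        "description: Example skill used to exercise the parser\n"
        "---\n"
        "## Instructions\nDo the thing.\n",
        encoding="utf-8",
    )
    parsed = parse_skill_md(skill_dir / "SKILL.md", source_scope="user")
    assert parsed is not None and parsed.name == "my-skill"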
+70
@@ -0,0 +1,70 @@
"""Structured error codes and diagnostics for the Hive skill system.
Implements DX-1 (structured error codes) and DX-2 (what/why/fix format)
from the skill system PRD §7.5.
"""
from __future__ import annotations
import logging
from enum import Enum
class SkillErrorCode(Enum):
"""Standardized error codes for skill system operations (DX-1)."""
SKILL_NOT_FOUND = "SKILL_NOT_FOUND"
SKILL_PARSE_ERROR = "SKILL_PARSE_ERROR"
SKILL_ACTIVATION_FAILED = "SKILL_ACTIVATION_FAILED"
SKILL_MISSING_DESCRIPTION = "SKILL_MISSING_DESCRIPTION"
SKILL_YAML_FIXUP = "SKILL_YAML_FIXUP"
SKILL_NAME_MISMATCH = "SKILL_NAME_MISMATCH"
SKILL_COLLISION = "SKILL_COLLISION"
class SkillError(Exception):
"""Structured exception for skill system errors (DX-2).
Raised in strict validation paths. Also used as the base
format contract for log_skill_error() log messages.
"""
def __init__(self, code: SkillErrorCode, what: str, why: str, fix: str):
self.code = code
self.what = what
self.why = why
self.fix = fix
self.message = (
f"[{self.code.value}]\nWhat failed: {self.what}\nWhy: {self.why}\nFix: {self.fix}"
)
super().__init__(self.message)
def log_skill_error(
logger: logging.Logger,
level: str,
code: SkillErrorCode,
what: str,
why: str,
fix: str,
) -> None:
"""Emit a structured skill diagnostic log with consistent format (DX-2).
Args:
logger: The module logger to emit to.
level: Log level string 'error', 'warning', or 'info'.
code: Structured error code.
what: What failed (specific skill name and path).
why: Root cause.
fix: Concrete next step for the developer.
"""
msg = f"[{code.value}] What failed: {what} | Why: {why} | Fix: {fix}"
getattr(logger, level)(
msg,
extra={
"skill_error_code": code.value,
"what": what,
"why": why,
"fix": fix,
},
)
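Both the logging helper and the exception share the what/why/fix contract; a small sketch with made-up paths and messages:
import logging
from framework.skills.skill_errors import SkillError, SkillErrorCode, log_skill_error
logger = logging.getLogger("example")
log_skill_error(
    logger,
    "warning",
    SkillErrorCode.SKILL_NAME_MISMATCH,
    what="Name mismatch in '/repo/.hive/skills/research/SKILL.md'",
    why="Skill name 'deep-research' doesn't match directory 'research'.",
    fix="Rename the directory to 'deep-research'.",
)
try:
    raise SkillError(
        SkillErrorCode.SKILL_PARSE_ERROR,
        what="Invalid SKILL.md",
        why="Missing YAML frontmatter.",
        fix="Wrap the frontmatter with --- delimiters.",
    )
except SkillError as err:
    print(err.code.value, err.fix)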
+477
@@ -0,0 +1,477 @@
"""Trust gating for project-level skills (PRD AS-13).
Project-level skills from untrusted repositories require explicit user consent
before their instructions are loaded into the agent's system prompt.
Framework and user-scope skills are always trusted.
Trusted repos are persisted at ~/.hive/trusted_repos.json.
"""
from __future__ import annotations
import json
import logging
import subprocess
import sys
from collections.abc import Callable
from dataclasses import dataclass
from datetime import UTC, datetime
from enum import StrEnum
from pathlib import Path
from urllib.parse import urlparse
from framework.skills.parser import ParsedSkill
logger = logging.getLogger(__name__)
# Env var to bypass trust gating in CI/headless pipelines (opt-in).
_ENV_TRUST_ALL = "HIVE_TRUST_PROJECT_SKILLS"
# Env var for comma-separated own-remote glob patterns (e.g. "github.com/myorg/*").
_ENV_OWN_REMOTES = "HIVE_OWN_REMOTES"
_TRUSTED_REPOS_PATH = Path.home() / ".hive" / "trusted_repos.json"
_NOTICE_SENTINEL_PATH = Path.home() / ".hive" / ".skill_trust_notice_shown"
# ---------------------------------------------------------------------------
# Trusted repo store
# ---------------------------------------------------------------------------
@dataclass
class TrustedRepoEntry:
repo_key: str
added_at: datetime
project_path: str = ""
class TrustedRepoStore:
"""Persists permanently-trusted repo keys to ~/.hive/trusted_repos.json."""
def __init__(self, path: Path | None = None) -> None:
self._path = path or _TRUSTED_REPOS_PATH
self._entries: dict[str, TrustedRepoEntry] = {}
self._loaded = False
def is_trusted(self, repo_key: str) -> bool:
self._ensure_loaded()
return repo_key in self._entries
def trust(self, repo_key: str, project_path: str = "") -> None:
self._ensure_loaded()
self._entries[repo_key] = TrustedRepoEntry(
repo_key=repo_key,
added_at=datetime.now(tz=UTC),
project_path=project_path,
)
self._save()
logger.info("skill_trust_store: trusted repo_key=%s", repo_key)
def revoke(self, repo_key: str) -> bool:
self._ensure_loaded()
if repo_key in self._entries:
del self._entries[repo_key]
self._save()
logger.info("skill_trust_store: revoked repo_key=%s", repo_key)
return True
return False
def list_entries(self) -> list[TrustedRepoEntry]:
self._ensure_loaded()
return list(self._entries.values())
def _ensure_loaded(self) -> None:
if not self._loaded:
self._load()
self._loaded = True
def _load(self) -> None:
try:
data = json.loads(self._path.read_text(encoding="utf-8"))
for raw in data.get("entries", []):
repo_key = raw.get("repo_key", "")
if not repo_key:
continue
try:
added_at = datetime.fromisoformat(raw["added_at"])
except (KeyError, ValueError):
added_at = datetime.now(tz=UTC)
self._entries[repo_key] = TrustedRepoEntry(
repo_key=repo_key,
added_at=added_at,
project_path=raw.get("project_path", ""),
)
except FileNotFoundError:
pass
except Exception as e:
logger.warning(
"skill_trust_store: could not read %s (%s); treating as empty",
self._path,
e,
)
def _save(self) -> None:
self._path.parent.mkdir(parents=True, exist_ok=True)
data = {
"version": 1,
"entries": [
{
"repo_key": e.repo_key,
"added_at": e.added_at.isoformat(),
"project_path": e.project_path,
}
for e in self._entries.values()
],
}
# Atomic write: write to .tmp then rename
tmp = self._path.with_suffix(".tmp")
tmp.write_text(json.dumps(data, indent=2), encoding="utf-8")
tmp.replace(self._path)
# ---------------------------------------------------------------------------
# Trust classification
# ---------------------------------------------------------------------------
class ProjectTrustClassification(StrEnum):
ALWAYS_TRUSTED = "always_trusted"
TRUSTED_BY_USER = "trusted_by_user"
UNTRUSTED = "untrusted"
class ProjectTrustDetector:
"""Classifies a project directory as trusted or untrusted.
Algorithm (PRD §4.1 trust note):
1. No project_dir → ALWAYS_TRUSTED
2. No .git directory → ALWAYS_TRUSTED (not a git repo)
3. No remote 'origin' → ALWAYS_TRUSTED (local-only repo)
4. Remote URL → repo_key; repo_key in TrustedRepoStore → TRUSTED_BY_USER
5. Localhost remote → ALWAYS_TRUSTED
6. ~/.hive/own_remotes match → ALWAYS_TRUSTED
7. HIVE_OWN_REMOTES env match → ALWAYS_TRUSTED
8. None of the above → UNTRUSTED
"""
def __init__(self, store: TrustedRepoStore | None = None) -> None:
self._store = store or TrustedRepoStore()
def classify(self, project_dir: Path | None) -> tuple[ProjectTrustClassification, str]:
"""Return (classification, repo_key).
repo_key is empty string for ALWAYS_TRUSTED cases without a remote.
"""
if project_dir is None or not project_dir.exists():
return ProjectTrustClassification.ALWAYS_TRUSTED, ""
if not (project_dir / ".git").exists():
return ProjectTrustClassification.ALWAYS_TRUSTED, ""
remote_url = self._get_remote_origin(project_dir)
if not remote_url:
return ProjectTrustClassification.ALWAYS_TRUSTED, ""
repo_key = _normalize_remote_url(remote_url)
# Explicitly trusted by user
if self._store.is_trusted(repo_key):
return ProjectTrustClassification.TRUSTED_BY_USER, repo_key
# Localhost remotes are always trusted
if _is_localhost_remote(remote_url):
return ProjectTrustClassification.ALWAYS_TRUSTED, repo_key
# User-configured own-remote patterns
if self._matches_own_remotes(repo_key):
return ProjectTrustClassification.ALWAYS_TRUSTED, repo_key
return ProjectTrustClassification.UNTRUSTED, repo_key
def _get_remote_origin(self, project_dir: Path) -> str:
"""Run git remote get-url origin. Returns empty string on any failure."""
try:
result = subprocess.run(
["git", "-C", str(project_dir), "remote", "get-url", "origin"],
capture_output=True,
text=True,
timeout=3,
)
if result.returncode == 0:
return result.stdout.strip()
except subprocess.TimeoutExpired:
logger.warning(
"skill_trust: git remote lookup timed out for %s; treating as trusted",
project_dir,
)
except (FileNotFoundError, OSError):
pass # git not found or other OS error
return ""
def _matches_own_remotes(self, repo_key: str) -> bool:
"""Check repo_key against user-configured own-remote glob patterns."""
import fnmatch
patterns: list[str] = []
# From env var
env_patterns = _ENV_OWN_REMOTES
import os
raw = os.environ.get(env_patterns, "")
if raw:
patterns.extend(p.strip() for p in raw.split(",") if p.strip())
# From ~/.hive/own_remotes file
own_remotes_file = Path.home() / ".hive" / "own_remotes"
if own_remotes_file.is_file():
try:
for line in own_remotes_file.read_text(encoding="utf-8").splitlines():
line = line.strip()
if line and not line.startswith("#"):
patterns.append(line)
except OSError:
pass
return any(fnmatch.fnmatch(repo_key, p) for p in patterns)
# ---------------------------------------------------------------------------
# URL helpers (public so CLI can reuse)
# ---------------------------------------------------------------------------
def _normalize_remote_url(url: str) -> str:
"""Normalize a git remote URL to a canonical ``host/org/repo`` key.
Examples:
git@github.com:org/repo.git → github.com/org/repo
https://github.com/org/repo → github.com/org/repo
ssh://git@github.com/org/repo.git → github.com/org/repo
"""
url = url.strip()
# SCP-style SSH: git@github.com:org/repo.git
if url.startswith("git@") and ":" in url and "://" not in url:
url = url[4:] # strip git@
url = url.replace(":", "/", 1)
elif "://" in url:
parsed = urlparse(url)
host = parsed.hostname or ""
path = parsed.path.lstrip("/")
url = f"{host}/{path}"
# Strip .git suffix
if url.endswith(".git"):
url = url[:-4]
return url.lower().strip("/")
def _is_localhost_remote(remote_url: str) -> bool:
"""Return True if the remote points to a local host."""
local_hosts = {"localhost", "127.0.0.1", "::1"}
try:
if "://" in remote_url:
parsed = urlparse(remote_url)
return (parsed.hostname or "").lower() in local_hosts
# SCP-style: git@localhost:org/repo
if "@" in remote_url:
host_part = remote_url.split("@", 1)[1].split(":")[0]
return host_part.lower() in local_hosts
except Exception:
pass
return False
# ---------------------------------------------------------------------------
# Trust gate
# ---------------------------------------------------------------------------
class TrustGate:
"""Filters skill list, running consent flow for untrusted project-scope skills.
Framework and user-scope skills are always allowed through.
Project-scope skills from untrusted repos require consent.
"""
def __init__(
self,
store: TrustedRepoStore | None = None,
detector: ProjectTrustDetector | None = None,
interactive: bool = True,
print_fn: Callable[[str], None] | None = None,
input_fn: Callable[[str], str] | None = None,
) -> None:
self._store = store or TrustedRepoStore()
self._detector = detector or ProjectTrustDetector(self._store)
self._interactive = interactive
self._print = print_fn or print
self._input = input_fn or input
def filter_and_gate(
self,
skills: list[ParsedSkill],
project_dir: Path | None,
) -> list[ParsedSkill]:
"""Return the subset of skills that are trusted for loading.
- Framework and user-scope skills: always included.
- Project-scope skills: classified; consent prompt shown if untrusted.
"""
import os
# Separate project skills from always-trusted scopes
always_trusted = [s for s in skills if s.source_scope != "project"]
project_skills = [s for s in skills if s.source_scope == "project"]
if not project_skills:
return always_trusted
# Env-var CI override: trust all project skills for this invocation
if os.environ.get(_ENV_TRUST_ALL, "").strip() == "1":
logger.info(
"skill_trust: %s=1 set; trusting %d project skill(s) without consent",
_ENV_TRUST_ALL,
len(project_skills),
)
return always_trusted + project_skills
classification, repo_key = self._detector.classify(project_dir)
if classification in (
ProjectTrustClassification.ALWAYS_TRUSTED,
ProjectTrustClassification.TRUSTED_BY_USER,
):
logger.info(
"skill_trust: project skills trusted classification=%s repo=%s count=%d",
classification,
repo_key or "(no remote)",
len(project_skills),
)
return always_trusted + project_skills
# UNTRUSTED — need consent
if not self._interactive or not sys.stdin.isatty():
logger.warning(
"skill_trust: skipping %d project-scope skill(s) from untrusted repo "
"'%s' (non-interactive mode). "
"To trust permanently run: hive skill trust %s",
len(project_skills),
repo_key,
project_dir or ".",
)
logger.info(
"skill_trust_decision repo=%s skills=%d decision=denied mode=headless",
repo_key,
len(project_skills),
)
return always_trusted
# Interactive consent flow
decision = self._run_consent_flow(project_skills, project_dir, repo_key)
logger.info(
"skill_trust_decision repo=%s skills=%d decision=%s mode=interactive",
repo_key,
len(project_skills),
decision,
)
if decision == "session":
return always_trusted + project_skills
if decision == "permanent":
self._store.trust(repo_key, project_path=str(project_dir or ""))
return always_trusted + project_skills
# denied
return always_trusted
def _run_consent_flow(
self,
project_skills: list[ParsedSkill],
project_dir: Path | None,
repo_key: str,
) -> str:
"""Show the security notice (once) and consent prompt.
Return 'session' | 'permanent' | 'denied'."""
from framework.credentials.setup import Colors
if not sys.stdout.isatty():
Colors.disable()
self._maybe_show_security_notice(Colors)
self._print_consent_prompt(project_skills, project_dir, repo_key, Colors)
return self._prompt_consent(Colors)
def _maybe_show_security_notice(self, Colors) -> None: # noqa: N803
"""Show the one-time security notice if not already shown (NFR-5)."""
if _NOTICE_SENTINEL_PATH.exists():
return
self._print("")
self._print(
f"{Colors.YELLOW}Security notice:{Colors.NC} Skills inject instructions "
"into the agent's system prompt."
)
self._print(
" Only load skills from sources you trust. "
"Registry skills at tier 'verified' or 'official' have been audited."
)
self._print("")
try:
_NOTICE_SENTINEL_PATH.parent.mkdir(parents=True, exist_ok=True)
_NOTICE_SENTINEL_PATH.touch()
except OSError:
pass
def _print_consent_prompt(
self,
project_skills: list[ParsedSkill],
project_dir: Path | None,
repo_key: str,
Colors, # noqa: N803
) -> None:
p = self._print
p("")
p(f"{Colors.YELLOW}{'=' * 60}{Colors.NC}")
p(f"{Colors.BOLD} SKILL TRUST REQUIRED{Colors.NC}")
p(f"{Colors.YELLOW}{'=' * 60}{Colors.NC}")
p("")
proj_label = str(project_dir) if project_dir else "this project"
p(
f" The project at {Colors.CYAN}{proj_label}{Colors.NC} wants to load "
f"{len(project_skills)} skill(s)"
)
p(" that will inject instructions into the agent's system prompt.")
if repo_key:
p(f" Source: {Colors.BOLD}{repo_key}{Colors.NC}")
p("")
p(" Skills requesting access:")
for skill in project_skills:
p(f" {Colors.CYAN}{Colors.NC} {Colors.BOLD}{skill.name}{Colors.NC}")
p(f' "{skill.description}"')
p(f" {Colors.DIM}{skill.location}{Colors.NC}")
p("")
p(" Options:")
p(f" {Colors.CYAN}1){Colors.NC} Trust this session only")
p(f" {Colors.CYAN}2){Colors.NC} Trust permanently — remember for future runs")
p(
f" {Colors.DIM}3) Deny"
f" — skip all project-scope skills from this repo{Colors.NC}"
)
p(f"{Colors.YELLOW}{'' * 60}{Colors.NC}")
def _prompt_consent(self, Colors) -> str: # noqa: N803
"""Prompt until a valid choice is entered. Returns 'session'|'permanent'|'denied'."""
mapping = {"1": "session", "2": "permanent", "3": "denied"}
while True:
try:
choice = self._input("Select option (1-3): ").strip()
if choice in mapping:
return mapping[choice]
except (KeyboardInterrupt, EOFError):
return "denied"
self._print(f"{Colors.RED}Invalid choice. Enter 1, 2, or 3.{Colors.NC}")
+28 -9
@@ -40,18 +40,31 @@ class LLMJudge:
def _get_fallback_provider(self) -> LLMProvider | None:
"""
- Auto-detects available API keys and returns the appropriate provider.
- Priority: OpenAI -> Anthropic.
Auto-detects available API keys and returns an appropriate provider.
Uses LiteLLM for OpenAI (framework has no framework.llm.openai module).
Priority:
1. OpenAI-compatible models via LiteLLM (OPENAI_API_KEY)
2. Anthropic via AnthropicProvider (ANTHROPIC_API_KEY)
"""
# OpenAI: use LiteLLM (the framework's standard multi-provider integration)
if os.environ.get("OPENAI_API_KEY"):
- from framework.llm.openai import OpenAIProvider
try:
from framework.llm.litellm import LiteLLMProvider
return OpenAIProvider(model="gpt-4o-mini")
return LiteLLMProvider(model="gpt-4o-mini")
except ImportError:
# LiteLLM is optional; fall through to Anthropic/None
pass
# Anthropic via dedicated provider (wraps LiteLLM internally)
if os.environ.get("ANTHROPIC_API_KEY"):
- from framework.llm.anthropic import AnthropicProvider
try:
from framework.llm.anthropic import AnthropicProvider
return AnthropicProvider(model="claude-3-haiku-20240307")
return AnthropicProvider(model="claude-haiku-4-5-20251001")
except Exception:
# If AnthropicProvider cannot be constructed, treat as no fallback
return None
return None
@@ -77,11 +90,16 @@ SUMMARY TO EVALUATE:
Respond with JSON: {{"passes": true/false, "explanation": "..."}}"""
try:
# Compute fallback provider once so we do not create multiple instances
fallback_provider = self._get_fallback_provider()
# 1. Use injected provider
if self._provider:
active_provider = self._provider
- # 2. Check if _get_client was MOCKED (legacy tests) or use Agnostic Fallback
- elif hasattr(self._get_client, "return_value") or not self._get_fallback_provider():
# 2. Legacy path: anthropic client mocked in tests takes precedence,
# or no fallback provider is available.
elif hasattr(self._get_client, "return_value") or fallback_provider is None:
# Use legacy Anthropic client (e.g. when tests mock _get_client, or no env keys set)
client = self._get_client()
response = client.messages.create(
model="claude-haiku-4-5-20251001",
@@ -90,7 +108,8 @@ Respond with JSON: {{"passes": true/false, "explanation": "..."}}"""
)
return self._parse_json_result(response.content[0].text.strip())
else:
- active_provider = self._get_fallback_provider()
# Use env-based fallback (LiteLLM or AnthropicProvider)
active_provider = fallback_provider
response = active_provider.complete(
messages=[{"role": "user", "content": prompt}],

Some files were not shown because too many files have changed in this diff.