Compare commits

...

721 Commits

Author SHA1 Message Date
Richard Tang b3adbe745f chore: ruff lint 2026-04-01 11:05:53 -07:00
Richard Tang f3fefe0cbc refactor: remove adapt md and its reference 2026-04-01 11:00:41 -07:00
Bryan @ Aden c8a25a0287 Merge pull request #6658 from saurabhiiitm062/feat/cloudflare-dns-tool
feat: cloudflare DNS/Zone tool integrations
2026-04-01 10:11:44 -07:00
Hundao 5823513fde fix: propagate contextvars to tool executor threads (#6854)
* fix: propagate contextvars to tool executor threads

run_in_executor does not propagate contextvars to worker threads,
causing execution context params like data_dir to be lost when MCP
tools are called. This made save_data, serve_file_to_user, and other
tools that depend on auto-injected data_dir fail with "Missing
required argument: data_dir".

Fix: use contextvars.copy_context().run() to carry the current context
into the thread pool worker.

* test: regression test for contextvars propagation in tool executor

Verifies that execution context (data_dir, etc.) set via
set_execution_context is visible inside tool executors that run
in thread pool workers via run_in_executor.
2026-04-01 19:38:41 +08:00
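The pattern described in this commit can be sketched as follows. This is a minimal, self-contained illustration rather than the project's actual tool-executor code; the `data_dir` context variable and the wrapper function are assumed names.

```python
import asyncio
import contextvars
from concurrent.futures import ThreadPoolExecutor

# Hypothetical execution-context variable (stands in for data_dir and friends).
data_dir = contextvars.ContextVar("data_dir", default=None)

def run_tool() -> str:
    # Runs in a worker thread; with the copied context it still sees data_dir.
    return f"saving to {data_dir.get()}"

async def call_tool_in_thread() -> str:
    data_dir.set("/tmp/agent-123")
    loop = asyncio.get_running_loop()
    # copy_context() captures the current contextvars; ctx.run() executes the
    # tool inside that snapshot. Plain run_in_executor(pool, run_tool) would
    # start from a fresh context and lose data_dir.
    ctx = contextvars.copy_context()
    with ThreadPoolExecutor() as pool:
        return await loop.run_in_executor(pool, lambda: ctx.run(run_tool))

print(asyncio.run(call_tool_in_thread()))  # saving to /tmp/agent-123
```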
Hundao 97ce8dfc54 fix: skip executable permission check on Windows in skill validator (#6894)
Windows has no POSIX executable bits, so the stat-based check always
fails. Skip the check on Windows in the validator, and mark the two
related tests as POSIX-only. Unix CI still catches non-executable
scripts from Windows contributors.

Fixes #6893
2026-04-01 19:37:18 +08:00
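A rough sketch of the platform guard this commit describes; the validator function and error message are hypothetical, only the os.name / stat-bits logic mirrors the commit.

```python
import os
import stat
from pathlib import Path

def check_script_executable(script: Path) -> list[str]:
    """Return validation errors for a skill script (sketch)."""
    errors: list[str] = []
    # Windows has no POSIX executable bits, so a stat-based check would always
    # fail there; only enforce it on POSIX and let Unix CI catch the rest.
    if os.name == "posix":
        mode = script.stat().st_mode
        if not mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
            errors.append(f"{script} is not executable (try: chmod +x {script})")
    return errors
```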
Gaurav Rai 5e628c7606 test(core): increase tool_registry coverage from 47% to 69% (#6818)
* test(core): increase tool_registry coverage from 47% to 69%

Add 19 new tests covering previously untested paths in ToolRegistry:

- register_function: type hint inference (int/float/bool/dict/list),
  required vs optional params, custom name/description, docstring fallback,
  executor delegation
- discover_from_module: @tool decorator pickup, missing-file zero-count,
  TOOLS dict without tool_executor uses mock executor
- has_tool / get_registered_names basic assertions
- Session context injection into MCP tool calls via set_session_context()
- Execution context override (contextvars) wins over session context
- _convert_mcp_tool_to_framework_tool: strips CONTEXT_PARAMS from both
  properties and required lists
- load_mcp_config: list format, dict format, graceful invalid-JSON warning
- resync_mcp_servers_if_needed: returns False with no clients; returns
  False when credentials and ADEN_API_KEY are unchanged

Coverage: 47 % → 69 % (+22 pp), all 31 tests pass, ruff clean.

Relates to #1972

* style: fix formatting in test_tool_registry.py

* fix: accept **kwargs in fake_load_registry to match updated signature

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-04-01 19:10:04 +08:00
Harsh Gajjar 5b931982e3 feat(tools): add Freshdesk helpdesk integration (#6099)
* feat(freshdesk): add Freshdesk tool integration with credentials and API functionality

- Introduced Freshdesk tool for managing tickets, contacts, agents, and groups via Freshdesk API v2.
- Added Freshdesk credentials handling in `credentials/freshdesk.py`.
- Registered Freshdesk tools in `tools/freshdesk_tool/__init__.py` and `tools/freshdesk_tool/freshdesk_tool.py`.
- Updated `__init__.py` files to include Freshdesk in the exports.
- Created comprehensive README for Freshdesk tool usage and setup.
- Implemented unit tests for Freshdesk tool functionality.

All tests pass, and code adheres to ruff linting and formatting standards.

* refactor(freshdesk_tool): simplify _get_domain logic

- remove unnecessary try/except around credentials.get("freshdesk_domain")
- directly return stripped credential value if present
- fallback to FRESHDESK_DOMAIN env variable when missing
- eliminate unreachable code while preserving behavior

* refactor(freshdesk_tool): replace dynamic httpx dispatch in _request

- replace getattr(httpx, method) with explicit handling for get, post, and put
- raise ValueError for unsupported HTTP methods
- preserve existing status handling and response parsing logic

* docs(freshdesk): improve credential and error handling documentation

- add docstrings for error handling helpers in freshdesk_tool
- document purpose and usage of freshdesk credential specs
- improve clarity around error response structure and handling
2026-04-01 18:28:32 +08:00
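The `_request` refactor described in the third bullet might look roughly like this; the timeout, auth handling, and function shape are assumptions, not the Freshdesk tool's actual code.

```python
import httpx

def _request(method: str, url: str, *, auth=None, params=None, json=None) -> httpx.Response:
    # Explicit dispatch instead of getattr(httpx, method): only the verbs the
    # tool actually uses are reachable, and anything else fails loudly.
    method = method.lower()
    if method == "get":
        return httpx.get(url, auth=auth, params=params, timeout=30)
    if method == "post":
        return httpx.post(url, auth=auth, json=json, timeout=30)
    if method == "put":
        return httpx.put(url, auth=auth, json=json, timeout=30)
    raise ValueError(f"Unsupported HTTP method: {method}")
```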
Bhuvaneswari N 8174f330ae docs: document 5 new natively supported LLM providers (#6865) 2026-04-01 18:18:15 +08:00
Rohit Singh 9774e53720 feat(runtime): add idempotency key support to trigger() (#6710) 2026-04-01 18:03:31 +08:00
Timothy @aden cf3296984c Merge pull request #6888 from aden-hive/fix/python-test
fix(micro-fix): python test
2026-03-31 19:02:23 -07:00
Timothy eafbeb78b4 fix: python test 2026-03-31 18:55:24 -07:00
Timothy 5cb5083f8d fix(micro-fix): queen skill allowlist 2026-03-31 18:52:45 -07:00
Bryan @ Aden bf86daee92 Merge pull request #6319 from KartikPawade/fix/sap-tool-credential-store
fix: use CredentialStoreAdapter in sap_tool instead of raw os.getenv()
2026-03-31 18:30:21 -07:00
Timothy 43bbd0f31f feat(micro-fix): skill cli parser 2026-03-31 18:13:01 -07:00
Timothy @aden 2cf962b538 Merge pull request #6782 from levxn/skills/cli-commands
feat(skills): implement hive skill CLI subcommands (CLI-1 through CLI-13)
2026-03-31 17:59:25 -07:00
Timothy 4298196700 Merge branch 'main' into feature/agent-skills 2026-03-31 17:53:57 -07:00
Timothy @aden bc1f712e42 Merge pull request #6610 from levxn/skills/ds-ovrride-heuristics
feat(skills): DS-12 and DS-13 — config override application, batch auto-detection, and context preservation warning
2026-03-31 17:51:19 -07:00
Timothy @aden cccbcc8ec3 Merge pull request #6529 from vakrahul/fix/mcp-structured-errors
feat: structured MCP error codes and failure diagnostics (closes #6352)
2026-03-31 17:40:50 -07:00
Timothy @aden 0722f83f16 Merge pull request #6792 from fermano/feat/agent-selection-tool-resolution-n-framework-integration
Feat/agent selection tool resolution n framework integration
2026-03-31 17:38:39 -07:00
saurabhiiitm062 ebb6605a86 fix: address Cloudflare review comments (DDoS, pagination, validation, tests) 2026-03-31 22:23:51 +05:30
Hundao 72091d2783 fix(security): add SSRF protection to web_scrape tool (#6879)
Validate URLs against internal network ranges before making requests.
Block private IPs, loopback, link-local, and cloud metadata endpoints
(169.254.169.254). Intercept Playwright navigation to catch redirect-based
SSRF bypasses.

Fixes #1157

Co-authored-by: Harshit <Harshitk-cp@users.noreply.github.com>
2026-03-31 14:04:47 +08:00
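The URL screening this commit adds can be approximated with the standard-library ipaddress module. A minimal sketch with an assumed function name; real code also has to re-check after redirects (the commit does this by intercepting Playwright navigation) and treat DNS failures conservatively.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_allowed(url: str) -> bool:
    """Reject URLs that resolve to internal or metadata addresses (sketch)."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        # Private ranges, loopback, and link-local cover RFC1918 hosts and the
        # 169.254.169.254 cloud metadata endpoint.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True

print(is_url_allowed("http://169.254.169.254/latest/meta-data/"))  # False
```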
Kartik 3bb69a5784 fix: add env fallback and type hints for SAP tool credentials
Made-with: Cursor
2026-03-31 11:15:41 +05:30
Kartik 63fb089062 chore: format sap_tool.py
Made-with: Cursor
2026-03-31 10:21:56 +05:30
Hundao d5ba985e29 docs: fix agent.json examples to match current schema (#6878)
Replace outdated node_id/edge_id with id, wrap nodes/edges under
graph key, add goal section with success_criteria. Matches what
load_agent_export() and NodeSpec actually expect.

Fixes #897

Co-authored-by: Jose37456 <Jose37456@users.noreply.github.com>
2026-03-31 12:41:59 +08:00
Bryan @ Aden 6ee510d2f6 Merge pull request #6855 from Ttian18/feat/tina/docs-mcp-unix-sse-transport
docs: add Unix socket and SSE transport to MCP Integration Guide (#6739)
2026-03-30 18:46:01 -07:00
Bryan @ Aden 45b350e7c8 Merge pull request #6857 from Ttian18/feat/tina/job-hunter-pdf-resume
feat(job-hunter): support PDF resume input via file path (#6740)
2026-03-30 18:45:36 -07:00
Bryan @ Aden 7e690de12f Merge pull request #6844 from sundaram2021/fix/quickstart-credentials-in-windows
micro-fix: shell config handling and add antigravity option
2026-03-30 17:20:36 -07:00
Hundao ae85d2bf59 fix(security): prevent path traversal in session_store (#6876)
Validate that resolved session path stays within the sessions directory
using Path.is_relative_to(). Prevents session_id values like
"../../something" from escaping the sandbox.

Also guard the caller in _write_run_event where get_session_path is
called outside the existing OSError try/except block.

Fixes #1000

Co-authored-by: Sidhartha kumar <Alearner12@users.noreply.github.com>
2026-03-30 23:53:23 +08:00
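A minimal sketch of the containment check described above; the function signature is illustrative rather than the actual session_store API.

```python
from pathlib import Path

def get_session_path(sessions_dir: Path, session_id: str) -> Path:
    """Resolve a session file path, refusing anything outside sessions_dir (sketch)."""
    candidate = (sessions_dir / session_id).resolve()
    # Path.is_relative_to (Python 3.9+) rejects IDs like "../../something".
    if not candidate.is_relative_to(sessions_dir.resolve()):
        raise ValueError(f"session_id escapes the sessions directory: {session_id!r}")
    return candidate

base = Path("/tmp/hive/sessions")
print(get_session_path(base, "run-42.json"))
# get_session_path(base, "../../etc/passwd")  # raises ValueError
```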
Juttiga Bheemeswar e9fd0158b9 fix(csv_sql): prevent SQL injection via DuckDB parameter binding (#1408)
* fix(csv_sql): prevent SQL injection via DuckDB parameter binding

* test(csv_sql): add regression test for apostrophe path

* Refactor CSV query function for security and clarity

Removed detailed docstring arguments and return information for the CSV query function. Improved security checks for SQL queries.

* fix/1256-csv-sql-safe-path

Added security regression tests to reject non-SELECT queries and multi-statement queries.

* docs: restore csv_sql docstring (Args, Returns, Examples)

* fix: use word-boundary regex for SQL keyword detection

Substring matching caused false positives on column names like
created_at, updated_at, deleted_at. Switch to \b word-boundary regex.
Also add tests for comment rejection, CTE queries, and keyword-in-column-name.

---------

Co-authored-by: Juttiga Bheem <BBemail@gmail.com>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-30 23:26:07 +08:00
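The two defenses this PR describes, DuckDB parameter binding for values and a word-boundary keyword check, can be sketched as below. The table, column, and function names are made up for the example; the real tool queries CSVs rather than an in-memory table.

```python
import re
import duckdb

# \b word boundaries avoid false positives on columns like created_at / deleted_at.
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|create|attach)\b", re.IGNORECASE)

def run_select(con: duckdb.DuckDBPyConnection, sql: str, params: list) -> list[tuple]:
    stripped = sql.strip().rstrip(";")
    if not stripped.lower().startswith("select") or ";" in stripped:
        raise ValueError("only single SELECT statements are allowed")
    if FORBIDDEN.search(stripped):
        raise ValueError("query contains a forbidden keyword")
    # Values travel as bound parameters, never interpolated into the SQL text,
    # so an apostrophe in O'Brien cannot terminate a string literal.
    return con.execute(stripped, params).fetchall()

con = duckdb.connect()
con.execute("CREATE TABLE people AS SELECT 'O''Brien' AS name, 42 AS age")
print(run_select(con, "SELECT age FROM people WHERE name = ?", ["O'Brien"]))  # [(42,)]
```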
Zhang 9a68a5d7ee fix(job-hunter): align intake node client_facing and input_keys with agent.json 2026-03-29 11:33:57 -07:00
Zhang 33edf4a207 feat(job-hunter): support PDF resume input via file path (#6740) 2026-03-29 11:26:35 -07:00
Zhang f9fdaf5adc docs: clarify required vs optional fields for Unix and SSE transports 2026-03-29 10:29:47 -07:00
Zhang eabb17934c docs: add Unix socket and SSE transport types to MCP Integration Guide 2026-03-29 10:24:57 -07:00
kernel_crush eba7524955 refactor: remove deprecated storage/backend.py (267 lines) (#6849)
* refactor: remove deprecated storage/backend.py (267 lines)

Delete the fully deprecated FileStorage class and inline its 5 still-active
methods (_validate_key, _load_run_sync, _load_summary_sync, _delete_run_sync,
_list_all_runs_sync) directly into ConcurrentStorage.

Changes:
- Delete core/framework/storage/backend.py (267 lines of no-op/deprecated code)
- Inline active read methods into ConcurrentStorage (no new FileStorage dep)
- Remove deprecated index operations (get_runs_by_goal, get_runs_by_status,
  get_runs_by_node, list_all_goals) and their associated locking
- Update __init__.py to export ConcurrentStorage instead of FileStorage
- Update runtime/core.py to use ConcurrentStorage directly
- Fix Runtime.end_run() to call save_run_sync() (sync wrapper) instead of
  the async save_run(), which was silently dropping the coroutine
- Update test_path_traversal_fix.py to test ConcurrentStorage._validate_key()
- Clean up test_storage.py — remove all FileStorage test classes, un-skip
  ConcurrentStorage tests now that it's self-contained
- Remove stale FileStorage references from testing/test_storage.py docstring,
  testing/debug_tool.py docstring, and test_runtime.py skip reasons

All 44 tests pass, ruff check and ruff format clean.

Fixes #6797

* fix(core): address CodeRabbitAI PR review feedback

 - Fix critical no-op in ConcurrentStorage._save_run_sync by implementing atomic persistence to runs/{run_id}.json.
 - Update test_path_traversal_fix.py to test ConcurrentStorage directly and use real file paths for end-to-end validation.
 - Unskip test_run_saved_on_end and assert actual run file persistence.
 - Fix debug_tool.py to use load_run_sync() instead of the async load_run().

* fix(core): address round 2 of CodeRabbitAI reviews

 - Add _validate_key to _save_run_sync and _load_summary_sync to enforce path traversal protections on the lowest level APIs.
 - Invalidate summary cache and refresh run cache in save_run_sync() to match the async save_run() cache coherence behavior.
 - Add tests for load_summary and save_run_sync path traversal rejection.
2026-03-29 22:48:12 +08:00
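"Atomic persistence to runs/{run_id}.json" usually means write-to-temp-then-rename; a generic sketch under that assumption, not the actual ConcurrentStorage implementation.

```python
import json
import os
import tempfile
from pathlib import Path

def save_run_sync(storage_dir: Path, run_id: str, run_data: dict) -> Path:
    """Persist a run record atomically to runs/{run_id}.json (sketch)."""
    runs_dir = storage_dir / "runs"
    runs_dir.mkdir(parents=True, exist_ok=True)
    target = runs_dir / f"{run_id}.json"
    # Write to a temp file in the same directory, then atomically replace the
    # target, so readers never observe a half-written JSON file.
    fd, tmp_path = tempfile.mkstemp(dir=runs_dir, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as fh:
            json.dump(run_data, fh)
        os.replace(tmp_path, target)
    except BaseException:
        os.unlink(tmp_path)
        raise
    return target
```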
Sundaram Kumar Jha c56440340a Merge origin/main into fix/quickstart-credentials-in-windows 2026-03-29 08:44:26 +05:30
Bhuvaneswari N c889ffd85d feat(scripts): add support for more LLM providers in check_llm_key.py (#6833)
* feat(scripts): add support for more LLM providers in check_llm_key.py

* fix(scripts): correct perplexity endpoint to /v1/models and simplify lambda kwargs to **_
2026-03-29 09:11:25 +08:00
Md. Afzal Hassan Ehsani 905a4f3516 feat(quickstart): add Local (Ollama) LLM provider option (#6028)
* feat(quickstart): add Local (Ollama) LLM provider option
- Detect Ollama via 'ollama list' in quickstart.sh and quickstart.ps1
- Add 'Local (Ollama)' menu option with interactive model picker
- Save provider=ollama, model=<selected> to ~/.hive/configuration.json
- Omit api_key_env_var for Ollama (no API key required)
Refs #5154, #5231

* feat: add local Ollama support and resolve native tool calling

This integrates Ollama as a first-class local provider choice during quickstart, and patches several configuration barriers preventing local models from safely executing the framework's agent graphs.

* **Quickstart Integration**: Added `Local (Ollama)` to the provider menu in both quickstart.sh and quickstart.ps1. When selected, it automatically queries `ollama list` and allows the user to pick an installed model without prompting for an API key.
* **Routing & Configuration**: Automatically sets `"api_base": "http://localhost:11434"` so LiteLLM routes correctly to the local daemon, and increases the default max_tokens config.py allocation to `32768`.
* **Native Tool Calling**: Normalized Ollama models to strictly use the ollama_chat provider prefix inside litellm.py and registered them as `supports_function_calling: True`. This forces native structured function calling and fixes the infinite loop caused by JSON-mode text fallbacks.
* **Context Truncation Fix**: Updated config.py to explicitly pass `"num_ctx": 16384` to Ollama. This prevents the local daemon from silently truncating the Queen agent's ~9,500 token system prompt (Ollama defaults to 2048 `num_ctx`).
* **UX Warnings**: Added terminal notices warning users to select high-parameter models (e.g., `qwen2.5:72b+`) to ensure sufficient contextual reasoning abilities.

Resolves #6027
Resolves #6028

* test: add unit tests for Ollama helper functions

Cover _is_ollama_model(), _ensure_ollama_chat_prefix(), and num_ctx
injection in get_llm_extra_kwargs() as requested in PR review.
Fix existing test_init_ollama_no_key_needed assertion to expect the
normalised ollama_chat/ prefix.

Made-with: Cursor

* chores: fixed merge conflict

* fix(ollama): address PR review comments and normalize provider config

* fix(ollama): align quickstart defaults and add tool_choice comment

* fix(ollama): enforce OLLAMA_DETECTED logic and resolve quickstart script syntax errors

* fix(ollama): align quickstart logic and cleanup test imports
2026-03-29 08:51:47 +08:00
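A rough illustration of the Ollama settings this PR wires in (ollama_chat prefix, local api_base, larger num_ctx). The helper names come from the commit's test description, but the bodies here are assumptions, not the actual config.py / litellm.py code.

```python
def _is_ollama_model(model: str) -> bool:
    return model.startswith(("ollama/", "ollama_chat/"))

def _ensure_ollama_chat_prefix(model: str) -> str:
    # Normalize to the ollama_chat provider so LiteLLM uses native tool calling
    # instead of JSON-mode text fallbacks.
    if model.startswith("ollama/"):
        return "ollama_chat/" + model.removeprefix("ollama/")
    return model

def get_llm_extra_kwargs(model: str) -> dict:
    if not _is_ollama_model(model):
        return {}
    return {
        "api_base": "http://localhost:11434",  # local Ollama daemon
        # Ollama defaults to num_ctx=2048, which silently truncates the Queen's
        # ~9,500-token system prompt; raise the context window explicitly.
        "num_ctx": 16384,
    }

print(get_llm_extra_kwargs(_ensure_ollama_chat_prefix("ollama/qwen2.5:72b")))
```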
Sundaram Kumar Jha 941605720f fix: add missing antigravity subscription option 2026-03-28 23:46:01 +05:30
Sundaram Kumar Jha 72e5c5c1c6 test: cover shell config fallbacks 2026-03-28 23:38:02 +05:30
Sundaram Kumar Jha 0f42c8c8c1 fix: align Git Bash shell config handling 2026-03-28 23:37:53 +05:30
RichardTang-Aden c3c3075610 Merge pull request #6811 from Hundao/fix/lazy-import-resend
fix: lazy import resend in email_tool
2026-03-27 14:37:41 -07:00
saurabhiiitm062 e9c1731c0f fix: address review comments + all tests passing 2026-03-28 00:58:28 +05:30
SAURABH KUMAR 0e2333daaf Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:47:29 +05:30
SAURABH KUMAR 5167c29aed Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:47:11 +05:30
SAURABH KUMAR 4da4d3b2c0 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:46:51 +05:30
SAURABH KUMAR 3e622af484 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:46:35 +05:30
SAURABH KUMAR 6600ce0ef9 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:46:21 +05:30
SAURABH KUMAR 74d5dd03dd Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:46:05 +05:30
SAURABH KUMAR d18091bb2c Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:45:50 +05:30
SAURABH KUMAR d1a1f36d6e Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:45:35 +05:30
SAURABH KUMAR 051b0fcef2 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:45:22 +05:30
SAURABH KUMAR e270d3210d Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:45:07 +05:30
SAURABH KUMAR d4a66d4b5f Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:44:51 +05:30
SAURABH KUMAR ad39b6ea50 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:44:29 +05:30
SAURABH KUMAR 71baf6166d Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:44:13 +05:30
SAURABH KUMAR 25afdae093 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:43:57 +05:30
SAURABH KUMAR 21700eb2ec Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:43:42 +05:30
SAURABH KUMAR 617462df52 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:43:23 +05:30
SAURABH KUMAR b3c1f1436b Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:42:46 +05:30
SAURABH KUMAR 310b922ce8 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:42:30 +05:30
SAURABH KUMAR 20b6553b07 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:42:09 +05:30
SAURABH KUMAR 1035cc9481 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:41:52 +05:30
SAURABH KUMAR 5d6dd1caa6 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:41:32 +05:30
SAURABH KUMAR 45ba771650 Apply suggestion from @levxn
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:41:15 +05:30
SAURABH KUMAR a4b15c0320 Add health check endpoint for Cloudflare API 2026-03-27 22:35:09 +05:30
SAURABH KUMAR 211619120e Update tools/tests/tools/test_cloudflare.py
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:26:50 +05:30
SAURABH KUMAR a78bb16e4b Update tools/tests/tools/test_cloudflare.py
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:26:00 +05:30
SAURABH KUMAR c93bcee933 Update tools/tests/tools/test_cloudflare.py
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:25:48 +05:30
SAURABH KUMAR 08160a004a Update tools/tests/tools/test_cloudflare.py
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:25:33 +05:30
SAURABH KUMAR ccd5de7496 Update tools/tests/tools/test_cloudflare.py
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:25:10 +05:30
SAURABH KUMAR c332ef8823 Update tools/tests/tools/test_cloudflare.py
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:24:58 +05:30
SAURABH KUMAR 06db11eebf Update tools/tests/tools/test_cloudflare.py
Co-authored-by: Levin <105410870+levxn@users.noreply.github.com>
2026-03-27 22:24:43 +05:30
Bryan @ Aden 86ef6fd8c5 Merge pull request #6822 from sundaram2021/fix/date-formatting-issue-on-windows
micro-fix: fix date formatting issue on windows and mattermost formatting issue
2026-03-27 07:24:26 -07:00
Sundaram Kumar Jha 95bdf4fe32 fix: mattermost formatting issue 2026-03-27 09:55:28 +05:30
Sundaram Kumar Jha 890d303d26 test: cover queen memory date formatting on Windows 2026-03-27 09:46:42 +05:30
Sundaram Kumar Jha 7fe60991e1 fix: use cross-platform queen memory date formatting 2026-03-27 09:46:27 +05:30
RichardTang-Aden a72938a163 Merge pull request #6747 from wakqasahmed/feat/mattermost-integration
feat(tools): add Mattermost messaging platform integration
2026-03-26 15:27:00 -07:00
Richard Tang 326a3dd1b7 docs: add honeycomb in readme 2026-03-26 14:55:00 -07:00
Richard Tang 183c6e2620 docs: readme with harness 2026-03-26 14:50:55 -07:00
Timothy @aden 1b40bff7da Merge pull request #6803 from aden-hive/fix/queen-cannot-read-skills
fix: allow curl in run_command and fix queen custom skill discovery
2026-03-26 12:56:03 -07:00
Timothy @aden 38b79edaee Merge pull request #6633 from sundaram2021/refactor/event-loop-node-modularization
refactor: modularize event loop node class methods and helpers
2026-03-26 12:47:53 -07:00
Sundaram Kumar Jha eb4f180192 chore: pull latest change 2026-03-27 00:48:01 +05:30
Sundaram Kumar Jha bf0b9a1edb refactor: cleanup compact llm function 2026-03-27 00:45:42 +05:30
Sundaram Kumar Jha 9667dd25cb chore: pull latest changes 2026-03-26 21:54:56 +05:30
hundao 33e4e8d440 fix: lazy import resend in email_tool to prevent tool registration crash
Fixes #4816
2026-03-26 18:43:04 +08:00
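The lazy-import pattern behind this fix, sketched with a hypothetical send helper; the addresses and error shape are placeholders.

```python
def send_email(to: str, subject: str, html: str) -> dict:
    # Import resend only when the tool is actually invoked, so tool
    # registration does not crash on machines where the package is absent.
    try:
        import resend
    except ImportError as exc:
        return {"error": f"resend is not installed: {exc}"}
    resend.api_key = "..."  # loaded from the credential store in real code
    return resend.Emails.send(
        {"from": "agent@example.com", "to": [to], "subject": subject, "html": html}
    )
```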
Shiva Santosh Reddy Aenugu c5ac29c81d fix(frontend): add 404 fallback route for unknown paths (#6373) 2026-03-26 18:24:01 +08:00
vakrahul 13c072d731 fix: match expected error message text in mcp_client and mcp_registry 2026-03-26 15:39:17 +05:30
Aaryann Chandola 5e31975cc3 feat(mcp-cli): add CLI management commands (#6350) (#6787)
* feat(mcp-cli): add hive mcp CLI management commands (#6350)

Implement the hive mcp subcommand group with shared helpers and all
P0/P1 management commands: install, add, remove, enable, disable,
list, info, config, search, health, update.

Includes update bridge (remove+reinstall with rollback on failure),
first-use security notice, credential prompting, secret masking,
and agent usage detection via load_agent_selection().

* test(mcp-cli): add CLI integration and handler tests (#6350)

58 tests covering all commands end-to-end:
- Real framework.cli.main() entrypoint dispatch (list, install, update)
- Real registry-on-disk integration (install, list, config, info, remove)
- All 11 command handlers (install, add, remove, enable, disable, list,
  info, config, search, health, update)
- Security notice shown only once
- Credential prompting stores overrides, skips when env set, handles cancel
- Secret masking in human output, JSON output, and config display
- Index refresh semantics (stale cache fallback vs no-cache hard fail)
- Update rollback on reinstall failure preserves original entry
- Update rejects local servers and pinned servers with correct remediation
- Bulk update skips local and pinned servers
- Argparse registration validates all 11 subcommands present
- _find_agents_using_server resolves via real load_agent_selection
- _parse_key_value_pairs validates KEY=VAL format

* fix(mcp-cli): mask list --json secrets, preserve enabled state on update, defer security sentinel (#6350)

- list --json now masks override values as <set> before emitting
- update preserves enabled=False state across reinstall
- security notice sentinel only written after successful install

* refactor(mcp-cli): fix docstring, share registry instance in update, extract _mask_overrides helper (#6350)

- Fix module docstring to reflect update's full behavior
- Pass registry instance to _cmd_mcp_update_server to avoid redundant disk I/O
- Extract _mask_overrides() used by list --json, info --json, info human, and config display
- Add comment about _find_agents_using_server path arithmetic limitation
2026-03-26 18:01:28 +08:00
vakrahul 82af76e72a feat: wire structured MCP errors into mcp_registry.py (closes #6352) 2026-03-26 15:30:10 +05:30
Amogh Raj a483f8d06a docs: add Windows quickstart.ps1 command in Quick Start section (#6781)
* docs: add Windows quickstart.ps1 command in Quick Start section

* fix: restore closing code fence and comment out Windows command

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-26 17:27:31 +08:00
saurabhiiitm062 859db7f056 fix: address review comments for cloudflare tool
- removed unreachable code
- updated firewall rule handling (rulesets API)
- added validation + error handling
- added missing test coverage
- fixed misleading documentation
2026-03-26 13:51:02 +05:30
saurabhiiitm062 6e0b5c7250 merge upstream main into cloudflare branch 2026-03-26 10:50:59 +05:30
Sundaram Kumar Jha e188c26e9f chore: revert changes 2026-03-26 08:11:39 +05:30
Timothy 27a2d64a98 chore: lint 2026-03-25 16:21:42 -07:00
Timothy c2dce3a8c2 fix: allow queen to read custom skills 2026-03-25 14:47:25 -07:00
Hundao b52974adcc fix(graph): remove deprecated ast.Index visitor in safe_eval.py (#6796)
Python 3.9+ no longer wraps subscript slices in ast.Index, and
Python 3.12 removed ast.Index entirely. The project requires
Python >=3.11, so this is dead code.
2026-03-25 17:55:23 +08:00
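For context on why the visitor is dead code: since Python 3.9 a subscript's `slice` attribute is the index expression itself, with no `ast.Index` wrapper, so an evaluator only needs something like the sketch below (illustrative, not the actual safe_eval code).

```python
import ast

def eval_subscript(node: ast.Subscript, env: dict):
    """Evaluate value[key] for simple literal keys (sketch)."""
    container = env[node.value.id]      # assumes a bare-name container
    # Python 3.9+: node.slice is the index expression directly, and ast.Index
    # itself was removed in Python 3.12.
    key = ast.literal_eval(node.slice)
    return container[key]

tree = ast.parse("data['status']", mode="eval")
print(eval_subscript(tree.body, {"data": {"status": "ok"}}))  # ok
```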
Kurt 047ad812af fix: add missing __init__.py to file_system_toolkits package (#6056)
Closes #6055
2026-03-25 16:45:32 +08:00
Fernando Mano 22d9fba1fd Feature: #6351 - Agent selection, tool resolution & framework integration -- MCP Registry integration deleted local test code -- fix failing tests 2026-03-24 22:56:56 -03:00
Fernando Mano c7d0afc775 Feature: #6351 - Agent selection, tool resolution & framework integration
Made-with: Cursor
2026-03-24 22:34:52 -03:00
Richard Tang 645792fb1a docs: remove outdated documents 2026-03-24 18:23:38 -07:00
Richard Tang 3154e34c7a docs: add instruction for running dummy agents and remove old documentation 2026-03-24 18:20:27 -07:00
Fernando Mano 45aafbc52b Merge branch 'main' into feat/agent-selection-tool-resolution-n-framework-integration 2026-03-24 17:02:08 -03:00
Levin 567340c05d Merge branch 'aden-hive:main' into skills/cli-commands 2026-03-24 22:58:03 +05:30
Timothy @aden 8ecb728148 Merge pull request #6784 from aden-hive/fix/pin-litellm-1.81.7
security: pin litellm==1.81.7 to block supply chain attack
2026-03-24 09:53:40 -07:00
Timothy 4a2141bce9 chore: regenerate uv.lock with litellm==1.81.7 pin
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 09:48:47 -07:00
Timothy 3b4d6e4602 security: pin litellm==1.81.7 to block supply chain attack
litellm>=1.82.7 contains a malicious .pth file that auto-executes at
Python startup and exfiltrates env vars, SSH keys, cloud credentials,
and CI/CD secrets to an attacker-controlled domain.

Pin to last known-safe version (currently installed). Unpin once a
verified-clean upstream release is available.

Closes #6783

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 09:42:42 -07:00
levxn 8d8656193d bug fix 2026-03-24 22:09:54 +05:30
levxn ef317371ce hive skill test implemented, --json flag for machine parsable outputs, fixed lints 2026-03-24 21:51:14 +05:30
Levin d5596ccb0a Merge branch 'aden-hive:main' into skills/cli-commands 2026-03-24 21:47:35 +05:30
Timothy @aden 89ccc664bd Merge pull request #6574 from Antiarin/feat/mcp-registry-core
feat(mcp-registry): add MCPRegistry core module (#6349)
2026-03-24 07:40:35 -07:00
Bryan @ Aden 4872c01886 Merge pull request #6777 from sundaram2021/fix/missing-antigravity-option-in-windows-powershell
fix: missing antigravity and minimax plan option in powershell
2026-03-24 07:35:39 -07:00
levxn 5f1530ec5b minor bug fix, and lint issue fixes 2026-03-24 15:52:10 +05:30
Levin 8af32b421c Merge branch 'aden-hive:main' into skills/cli-commands 2026-03-24 13:53:40 +05:30
Sundaram Kumar Jha 4620380341 fix: missing antigravity and minimax plan option in powershell 2026-03-24 09:26:07 +05:30
Richard Tang fca2deb980 chore: update documentation 2026-03-23 20:35:26 -07:00
RichardTang-Aden d7ce923ca6 Merge pull request #1586 from rhythmtaneja/improve-eventbus-logging
Improve EventBus handler error logging to include traceback
2026-03-23 20:17:51 -07:00
Richard Tang 403b47db61 chore: lint 2026-03-23 20:05:29 -07:00
Richard Tang 0d0e78579f chore: lint 2026-03-23 18:09:15 -07:00
RichardTang-Aden 447bfdfab8 Merge pull request #6763 from Leayxz/micro-fix/files_names_conflicts
micro-fix: make test filenames unique to avoid pytest import conflicts / error test_structure
2026-03-23 17:35:16 -07:00
RichardTang-Aden c77d21e393 Merge pull request #6761 from Leayxz/micro-fix/remove_obsolete_PushoverClient_tests
micro-fix: remove obsolete _PushoverClient tests
2026-03-23 17:34:49 -07:00
RichardTang-Aden 6ded508b4d Merge pull request #6774 from Leayxz/micro-fix/rename_schema_discovery
micro-fix: rename schema discovery to avoid pytest collection
2026-03-23 17:34:07 -07:00
RichardTang-Aden 75f8bf5696 Merge pull request #6743 from sundaram2021/fix/codex-oauth-stdin-select-windows
fix: windows Codex OAuth browser launch and manual fallback
2026-03-23 16:52:56 -07:00
Leandro Rodrigues 62fc02220b micro-fix: rename schema discovery to avoid pytest collection
- The file `tools/test_schema_discovery.py` was being incorrectly collected by pytest as a test module
- Since the file is actually a standalone script, this caused import errors during test collection
- Rename the file to remove the `test_` prefix so pytest no longer treats it as a test file
- Pytest test discovery no longer includes the script, eliminating the import error and restoring a clean test run
2026-03-23 20:51:18 -03:00
Richard Tang 5d4f279646 test: add real integration test for MCPRegistry → AgentRunner path 2026-03-23 15:44:54 -07:00
Bryan @ Aden 920a840756 Merge pull request #6772 from sundaram2021/fix/setup-worker-model-on-windows
fix(windows): use shared uv discovery in setup_worker_model.ps1
2026-03-23 15:44:48 -07:00
Sundaram Kumar Jha 8680a35c39 fix(powershell): use shared uv discovery in setup_worker_model 2026-03-24 03:57:07 +05:30
levxn 95cc8a4513 cli commands, v1 2026-03-24 02:23:20 +05:30
Sundaram Kumar Jha d648f3d315 refactor(event-loop): slim event loop node orchestration 2026-03-24 01:00:08 +05:30
Sundaram Kumar Jha b43044cf4d refactor(event-loop): untangle modular event loop imports 2026-03-24 00:59:55 +05:30
Sundaram Kumar Jha 4724320946 refactor(event-loop): add shared event loop types 2026-03-24 00:59:35 +05:30
Leandro Rodrigues c9134cfd91 micro-fix: make test filenames unique to avoid pytest import conflicts
- multiple test files shared the same module name "test_structure.py"
- this caused pytest import mismatches during collection
- renamed test files to "test_email_reply_agent" and "test_meeting_scheduler"
- eliminated module name collisions and fixed test discovery
2026-03-23 16:13:31 -03:00
Leandro Rodrigues 55ce751385 micro-fix: remove obsolete _PushoverClient tests
- the test suite still referenced _PushoverClient, which no longer exists
- this caused import errors and failing pytest runs
- removed all tests related to _PushoverClient
- fixed pytest execution errors
- removed dead test code
- ensured test coverage reflects the current implementation
2026-03-23 15:54:50 -03:00
Timothy @aden aca2dfb536 Merge pull request #5892 from nikhilvarmakandula/feat/openmeteo-weather-tool
feat(tools): add Open-Meteo weather tool — free real-time weather, no API key required
2026-03-23 10:30:59 -07:00
Waqas Ahmed 89ab2e0a74 feat(tools): add Mattermost messaging platform integration
Add Mattermost as a new messaging tool following the existing Discord/Telegram
pattern. Supports self-hosted and cloud instances via personal access tokens.

Tools: list_teams, list_channels, get_channel, send_message, get_posts,
create_reaction, delete_post. Includes rate limit retry logic, credential
store + env var fallback, and comprehensive tests (41 unit + 50 conformance).

Closes #6746

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-23 14:07:30 +02:00
Antiarin d11f539209 Merge branch 'main' into feat/mcp-registry-core 2026-03-23 11:29:47 +05:30
Antiarin 64a223353a fix: harden MCPConnectionManager with timeouts, SSE health checks, and failure handling
Add 30s transition timeouts to prevent deadlocks on stuck connections.
Split SSE from HTTP in health_check: SSE uses client.list_tools() instead
of hitting /health (SSE servers use event-stream protocol, not REST).
Add has_connection() for MCPRegistry health check integration. Handle
disconnect failures in release, reconnect, and cleanup_all. Guard
reconnect against refcount dropping to zero mid-reconnect.
2026-03-23 11:13:27 +05:30
Antiarin 2d154c2db6 test: add tests for MCPRegistry, runner integration, and load_registry_servers
Covers install/add_local/remove/enable/disable, resolve_for_agent selection
precedence, health checks with pooled connections, cache fallback (defect 1),
SSE health check (defect 2), tomllib version parsing (defect 3), JSON type
validation for mcp_registry.json fields, malformed JSON error handling,
structured log emission, and retry-on-zero-tools behavior.
2026-03-23 11:13:27 +05:30
Antiarin a00c934d9d feat: add MCPRegistry core module with framework integration
Local state management for installed MCP servers in ~/.hive/mcp_registry/.
Supports install from registry index, add_local for running servers,
resolve_for_agent with include/tags/exclude/profile/max_tools/versions
selection, health checks via MCPConnectionManager, and JSON type
validation at the mcp_registry.json boundary.

Integration points: AgentRunner, queen orchestrator, credential tester
all load mcp_registry.json with error handling. ToolRegistry gains
load_registry_servers() with retry and structured DX-4 logging.
2026-03-23 11:13:27 +05:30
Sundaram Kumar Jha 18bee9cb90 Add Codex OAuth Windows regression tests 2026-03-23 10:40:51 +05:30
Sundaram Kumar Jha c1664e47e5 Fix Windows Codex OAuth URL and stdin handling 2026-03-23 10:40:30 +05:30
Emmanuel Nwanguma 2cb972fc5a fix(runner): replace print() with logger.warning() for credential warnings (#6577)
Fixes #6484

- Replace 8 raw print() calls with logger.warning() in runner.py
- Uses lazy % formatting instead of f-strings
- Warnings about missing tokens/API keys now go through logging framework
- Visible in log files when agents run headlessly
2026-03-22 18:24:42 +08:00
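The logging pattern referred to here (lazy % formatting rather than f-strings), shown with a hypothetical helper:

```python
import logging

logger = logging.getLogger(__name__)

def warn_missing_credential(tool_name: str, env_var: str) -> None:
    # Lazy % formatting: the message is only rendered if WARNING is enabled,
    # and it reaches log files when agents run headlessly (unlike print()).
    logger.warning("Tool %s is missing credential %s; skipping it", tool_name, env_var)
```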
Emmanuel Nwanguma 0bd841ce01 fix(credentials): replace bare except Exception clauses with specific handlers (#6592)
Fixes #6481

- credential_tester/agent.py: 4 bare excepts replaced
- credentials/setup.py: 6 bare excepts replaced
- queen_memory.py: 2 bare excepts replaced (2 already had proper logging)
- Expected errors (ImportError, OSError, KeyError) logged at DEBUG
- Unexpected errors logged at WARNING with exc_info=True
- Same two-tier pattern as PR #6153 (key_storage.py)
2026-03-22 18:16:14 +08:00
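A sketch of the two-tier handler pattern this commit applies; the function and the exact expected-error tuple are illustrative.

```python
import json
import logging

logger = logging.getLogger(__name__)

def load_cached_credentials(path: str) -> dict:
    try:
        with open(path) as fh:
            return json.load(fh)
    except (OSError, json.JSONDecodeError) as exc:
        # Expected, recoverable failures: record at DEBUG and fall back.
        logger.debug("Could not load cached credentials from %s: %s", path, exc)
    except Exception:
        # Unexpected failures: keep going, but capture the full traceback.
        logger.warning("Unexpected error loading credentials from %s", path, exc_info=True)
    return {}
```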
Samer Attrah 88ec4b7e64 fix: improve tool_registry error handling with stack traces and context (#6518)
* fix: improve tool_registry error handling with stack traces and context

When tool execution fails, errors now include:
- Stack traces for debugging
- Tool name, tool_use_id, and inputs in error logs
- Same behavior for both sync and async tools

Fixes #2447

* fix: use exc_info=True and truncate inputs in tool error logs

- Replace traceback.format_exc() with exc_info=True (codebase convention)
- Truncate tool inputs to 500 chars to prevent log flooding
- Add test for input truncation
2026-03-22 18:01:28 +08:00
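The truncation-plus-context idea from the follow-up commit, as a standalone sketch with assumed names:

```python
import logging

logger = logging.getLogger(__name__)
MAX_INPUT_CHARS = 500  # keep tool inputs from flooding the logs

def log_tool_failure(tool_name: str, tool_use_id: str, inputs: dict, exc: Exception) -> None:
    inputs_repr = repr(inputs)
    if len(inputs_repr) > MAX_INPUT_CHARS:
        inputs_repr = inputs_repr[:MAX_INPUT_CHARS] + "...(truncated)"
    # exc_info attaches the stack trace alongside the tool name, id, and inputs.
    logger.error("Tool %s (%s) failed with inputs %s", tool_name, tool_use_id,
                 inputs_repr, exc_info=exc)
```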
Sundaram Kumar Jha 27d5061d97 micro-fix: quickstart dashboard auto-launch for PowerShell (#6655)
* Fix quickstart dashboard auto-launch on Windows

* chore: refresh locks

* fix: gate quickstart hive shim to Git Bash

* chore: revert unrelated frontend lockfile churn
2026-03-22 16:21:02 +08:00
Sundaram Kumar Jha ee4682c565 chore: pull latest changes; fix: merge conflict 2026-03-22 08:52:17 +05:30
Sundaram Kumar Jha a2cd96a1a7 docs: document OpenRouter and Hive LLM provider setup (#6644)
* docs(llm): document OpenRouter and Hive LLM setup

* docs(contributing): add OpenRouter and Hive LLM guidance
2026-03-22 10:12:44 +08:00
Hundao 07b82a51f6 fix(examples): use __file__ relative path for mcp_servers.json copy (#6677)
Fixes #1669
2026-03-22 08:26:13 +08:00
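The `__file__`-relative lookup this fix switches to, in a generic form (destination handling assumed):

```python
import shutil
from pathlib import Path

# Resolve mcp_servers.json relative to this file rather than the process
# working directory, so the copy works no matter where the example is launched from.
EXAMPLE_DIR = Path(__file__).resolve().parent

def copy_mcp_config(dest_dir: Path) -> Path:
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy(EXAMPLE_DIR / "mcp_servers.json", dest_dir / "mcp_servers.json"))
```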
Timothy @aden 3e1282b31e Merge pull request #6682 from aden-hive/feat/image-capabilities
feat: image capabilities — upload, screenshot passthrough, vision detection & fallback, aria refs
2026-03-20 21:25:37 -07:00
Timothy 736756b257 chore: fix test 2026-03-20 21:22:29 -07:00
Timothy 90efe7009d chore: lint 2026-03-20 21:13:22 -07:00
Timothy 4adb369bde chore: lint 2026-03-20 21:12:03 -07:00
Timothy d4a30eb2f3 feat: image model fallback 2026-03-20 20:18:07 -07:00
Timothy 94bb4a2984 Merge branch 'main' into feat/image-capabilities 2026-03-20 18:42:55 -07:00
Timothy 648bad26ed feat: user input image content 2026-03-20 18:40:28 -07:00
RichardTang-Aden f0c7470f3d Merge pull request #6663 from sundaram2021/fix/missing-minimax-option-on-windows
fix: minimax option in powershell quickstart
2026-03-20 17:00:11 -07:00
RichardTang-Aden fe533b72a6 Merge pull request #6648 from levxn/main
Antigravity subscription support as an LLM provider
2026-03-20 16:52:38 -07:00
Richard Tang e581767cab chore: ruff lint 2026-03-20 16:50:50 -07:00
Richard Tang 0663ee5950 feat: validate the existing credentials before auth 2026-03-20 16:45:56 -07:00
Richard Tang 4b97baa34b feat: native google oauth for antigravity support 2026-03-20 16:40:15 -07:00
levxn a89296d397 lint fix 2026-03-21 02:35:09 +05:30
Levin d568912ba2 Merge branch 'aden-hive:main' into main 2026-03-21 01:32:13 +05:30
Levin c4d7980058 Merge pull request #1 from levxn/subscription/antigravity
Subscription/antigravity
2026-03-21 01:30:27 +05:30
Timothy @aden 8549fe8238 Merge pull request #6635 from vakrahul/fix/skill-structured-errors-6366
feat: structured skill error codes and diagnostics (closes #6366)
2026-03-20 12:45:35 -07:00
levxn 2b8d85bb95 fix tool calling issue: Antigravity models expect thought_signature in functionCall parts, otherwise the API returns a 400 "invalid arguments" error 2026-03-20 23:26:50 +05:30
levxn 07f7801166 test v1 2026-03-20 22:32:30 +05:30
Levin 1f12a45151 Merge branch 'aden-hive:main' into main 2026-03-20 22:01:22 +05:30
Arshad Uzzama Shaik 936e02e8e6 fix(security): prevent symlink-based sandbox escape in get_secure_path (closes #1167) (#5635)
* fix(security): prevent symlink-based sandbox escape in get_secure_path (closes #1167)

* style: apply ruff formatting to tools to satisfy CI

---------

Co-authored-by: Arshad Shaik <arshad.shaik@violetis.ai>
2026-03-20 19:16:47 +08:00
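The usual shape of a symlink-safe containment check, resolving the candidate (which follows symlinks) before comparing against the sandbox root; a generic sketch, not the tool's actual get_secure_path.

```python
from pathlib import Path

def get_secure_path(sandbox: Path, relative: str) -> Path:
    sandbox = sandbox.resolve()
    # resolve() follows symlinks, so a link inside the sandbox pointing at /etc
    # or $HOME is caught by the containment check instead of slipping through.
    candidate = (sandbox / relative).resolve()
    if not candidate.is_relative_to(sandbox):
        raise PermissionError(f"path escapes sandbox: {relative!r}")
    return candidate
```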
Hundao d59fe1e109 fix(graph): remove dead check_constraint placeholder (#6660)
Never called anywhere in the codebase. Constraints are enforced
via prompt context, not runtime validation.
2026-03-20 18:44:18 +08:00
Sundaram Kumar Jha 274318d3e5 fix: minimax option in powershell quickstart 2026-03-20 15:33:26 +05:30
Anurag Kumar 0f0884c2e0 fix(tools): handle non-HTML content and add PDF URL support (#438)
* feat(tools): add URL support to pdf_read tool

Enable pdf_read to accept both local file paths and HTTP/HTTPS URLs.
Downloads PDF content to temporary file when URL is provided, validates
content-type, and cleans up automatically after extraction.

- Detect URL inputs (http:// or https://)
- Download PDF with httpx (60s timeout)
- Validate Content-Type is application/pdf
- Use temporary file for URL-based PDFs
- Automatic cleanup in finally block
- Maintains backward compatibility with local paths

Completes the workflow: web_scrape error on PDF → pdf_read from URL

* test(tools): Add test coverage for new features in web_scrape and pdf_read tools

* style: fix lint issues in pdf_read URL support

---------

Co-authored-by: Anurag <anuragkr-codes@users.noreply.github.com>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-20 16:36:25 +08:00
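The download-validate-tempfile flow from the first commit, sketched with httpx; the real tool extracts text from the temp file and removes it in a finally block, which is omitted here.

```python
import tempfile
from pathlib import Path

import httpx

def fetch_pdf_to_tempfile(url: str) -> Path:
    """Download a PDF URL to a temp file for local extraction (sketch)."""
    response = httpx.get(url, timeout=60, follow_redirects=True)
    response.raise_for_status()
    content_type = response.headers.get("content-type", "")
    if "application/pdf" not in content_type:
        raise ValueError(f"URL did not return a PDF (content-type: {content_type})")
    tmp = tempfile.NamedTemporaryFile(suffix=".pdf", delete=False)
    try:
        tmp.write(response.content)
    finally:
        tmp.close()
    return Path(tmp.name)  # caller is responsible for deleting the temp file
```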
saurabhiiitm062 becbdb3706 feat: cloudflare DNS/Zone tool integrations 2026-03-20 11:11:10 +05:30
Sundaram Kumar Jha 9b59255770 chore: pull latest change, refactor: modularize latest change 2026-03-20 11:05:12 +05:30
Sundaram Kumar Jha 49fd443da8 chore: resolve merge conflict 2026-03-20 10:01:07 +05:30
Timothy @aden 764012c598 Merge pull request #6652 from aden-hive/feature/absolutely-parallel
fix: parallel subagent execution display, session resume bugs, and GCU termination
2026-03-19 20:21:47 -07:00
Timothy fd4dc1a69a fix: google_sheets JSON parse error before credentials check
Move _get_client() before JSON deserialization so missing-credentials
errors aren't masked by input validation. Wrap json.loads in try/except
for non-JSON string inputs.
2026-03-19 20:13:18 -07:00
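The JSON-string tolerance this fix adds can be sketched as a small parsing helper (name and error text assumed); the commit also moves the `_get_client()` credentials check ahead of this step so missing credentials surface first.

```python
import json

def parse_values_argument(values) -> list:
    """Accept either a list of rows or a JSON string encoding one (sketch)."""
    if isinstance(values, str):
        try:
            values = json.loads(values)
        except json.JSONDecodeError as exc:
            raise ValueError(f"values must be a JSON array of rows: {exc}") from exc
    if not isinstance(values, list):
        raise ValueError("values must be a list of rows")
    return values
```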
Timothy 377cd39c2a chore: lint 2026-03-19 20:07:42 -07:00
Timothy e92caeef24 fix: line too long in google_sheets_tool
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 20:06:31 -07:00
Timothy @aden b7e6226478 Update asset link in README.md 2026-03-19 19:41:19 -07:00
Timothy a995818db2 fix: subagent bubble boundary 2026-03-19 17:57:33 -07:00
Timothy 0772b4d300 feat: better subagent interleave logic 2026-03-19 16:58:34 -07:00
Timothy 684e0d8dc6 fix: no memory consolidation for worker 2026-03-19 16:58:00 -07:00
Timothy d284c5d790 feat: parallel execution display 2026-03-19 15:25:21 -07:00
Timothy 7a9b9666c4 fix: refresh system prompt with preamble 2026-03-19 15:25:04 -07:00
Timothy a852cb91bf fix: non-blocking memory consolidation 2026-03-19 15:24:30 -07:00
Timothy 2f21e9eb4b fix: session reload preamble 2026-03-19 15:24:12 -07:00
Timothy 8390ef8731 fix: google sheet tool support json string input 2026-03-19 15:23:31 -07:00
levxn 8d21479c24 fixing lint errors 2026-03-20 02:34:58 +05:30
levxn 965dec3ba1 fixing errors, finalising credential fetch (client id and secret) properly in fallback paths 2026-03-20 02:32:42 +05:30
Timothy d4b54446be Merge branch 'main' into feat/image-capabilities 2026-03-19 11:12:33 -07:00
Levin 7992b862c2 Merge branch 'aden-hive:main' into main 2026-03-19 22:16:10 +05:30
Ananya Verma 44b3e0eaa2 Configure pytest to ignore DeprecationWarning (#1727)
Add pytest configuration to ignore specific warnings.
2026-03-19 23:17:50 +08:00
levxn f480fc2b94 oauth creds for antigravity picked properly 2026-03-19 20:26:42 +05:30
Fernando Mano b599a760e8 Feature: #6351 - Agent selection, tool resolution & framework integration -- first version with mocked MCPRegistry 2026-03-19 10:48:37 -03:00
Levin b4a37cdb03 Merge branch 'aden-hive:main' into skills/ds-ovrride-heuristics 2026-03-19 18:51:29 +05:30
vakrahul 2844dbf19f feat: structured skill error codes and diagnostics (closes #6366) 2026-03-19 13:18:18 +05:30
Sundaram Kumar Jha 4885db318e fix: merge conflict 2026-03-19 09:22:44 +05:30
Sundaram Kumar Jha fa7ce53fb3 style(repo): fix ruff format violations
Apply Ruff formatting to the extracted event loop modules, the EventLoopNode wrappers, and the OpenRouter key check script so the lint CI format check passes cleanly.
2026-03-19 09:20:18 +05:30
Sundaram Kumar Jha 75a2ef2c4a Merge branch 'main' into refactor/event-loop-node-modularization 2026-03-19 09:14:10 +05:30
Sundaram Kumar Jha a0b9d6afaf chore: refresh locks 2026-03-19 09:08:10 +05:30
Sundaram Kumar Jha 74c0a85e3f refactor(graph): modularize event loop helpers
Extract EventLoopNode helper logic into focused event_loop modules while keeping the node responsible for orchestration.

Preserve the existing behavior and compatibility for compaction, event publishing, cursor persistence, synthetic tools, judge evaluation, stall detection, tool result handling, and subagent escalation wiring.
2026-03-19 09:07:19 +05:30
Timothy @aden 22b7e4b0c3 Merge pull request #6624 from aden-hive/feature/agent-skills
feat: agent skills system and observability improvements
2026-03-18 20:28:34 -07:00
Timothy 5413833a69 fix: tool test 2026-03-18 20:20:32 -07:00
bryan 02e1a4584a fix: autolaunch gui (windows) 2026-03-18 20:15:25 -07:00
Timothy 520840b1dd fix: no immediate run digest 2026-03-18 20:14:20 -07:00
bryan ee96147336 feat: autolaunch gui (mac) 2026-03-18 20:11:03 -07:00
Timothy 705cef4dc1 fix: context window display 2026-03-18 20:05:48 -07:00
Timothy ab26e64122 Merge remote-tracking branch 'origin/main' into feature/agent-skills 2026-03-18 19:41:39 -07:00
Timothy @aden f365e219cb Merge pull request #6615 from aden-hive/feat/worker-llm
feat: support separate LLM model for worker agents
2026-03-18 19:41:06 -07:00
Timothy 01621881c2 chore: lint 2026-03-18 19:40:41 -07:00
Timothy f7639f8572 fix: realtime context display 2026-03-18 19:29:31 -07:00
Timothy fc643060ce fix: better message bubble handling 2026-03-18 17:49:55 -07:00
Timothy 9aebeb181e feat: compaction debugger 2026-03-18 17:42:10 -07:00
Timothy acbbfaaa79 feat: compaction debug 2026-03-18 17:41:22 -07:00
Timothy bf170bce10 feat: enable mcp server reuse by default 2026-03-18 17:30:31 -07:00
Timothy 0a090d058b Merge remote-tracking branch 'origin/main' into feature/agent-skills 2026-03-18 17:11:12 -07:00
Timothy @aden 47bfadaad9 Merge pull request #6622 from aden-hive/fix/resume-empty-message
Fix empty queen message bubbles on session resume
2026-03-18 16:55:50 -07:00
Timothy d968dcd44c Merge branch 'main' into feature/agent-skills 2026-03-18 16:53:42 -07:00
Timothy @aden 6fdaa9ea50 Merge pull request #6534 from VasuBansal7576/codex/mcp-connection-manager-6348-draft
feat: add shared MCP connection manager
2026-03-18 16:52:44 -07:00
Timothy @aden 4d251fbdc2 Merge pull request #6531 from VasuBansal7576/codex/mcp-transports-6347-single
feat: add unix and sse MCP transports
2026-03-18 16:38:17 -07:00
Timothy 6acceed288 feat: hive debugger 2026-03-18 16:26:55 -07:00
Richard Tang 8dd1d6e3aa chore: lint 2026-03-18 16:01:32 -07:00
Timothy 1da28644a6 Merge branch 'main' into feature/agent-skills 2026-03-18 15:38:49 -07:00
Timothy 6452fe7fef fix: discord bot 2026-03-18 15:34:08 -07:00
Richard Tang acff008bd2 fix: empty message render 2026-03-18 15:26:56 -07:00
Timothy 651d6850a1 fix: bounty tracker change 2026-03-18 14:49:21 -07:00
Timothy c7fdc92594 fix: bounty script 2026-03-18 14:27:24 -07:00
Richard Tang 43602a8801 fix: trim to remove empty message 2026-03-18 13:55:57 -07:00
Timothy @aden 3da04265a6 Merge pull request #6566 from levxn/skills/context-protection
feat(skills): AS-9 and AS-10 — skill directory allowlisting and context protection for activated skills
2026-03-18 13:51:25 -07:00
Timothy @aden 4c98f0d2d0 Merge pull request #6564 from levxn/skills/resource-loading
feat(skills): AS-6 tier 3 resource loading — base_dir in catalog XML and skill dirs wired through execution stack
2026-03-18 13:50:54 -07:00
bryan d84c3364d0 chore: update to pass make test 2026-03-18 13:20:56 -07:00
Timothy @aden ae921f6cee Merge pull request #6619 from aden-hive/fix/claude-code-subscription-support
fix(llm): restore Claude Code subscription OAuth support
2026-03-18 13:08:27 -07:00
Timothy 6b506a1c08 chore: lint 2026-03-18 13:05:00 -07:00
Timothy 0c9f4fa97e fix(llm): restore Claude Code subscription (OAuth) support after Anthropic API change
Anthropic tightened OAuth validation on 2026-03-17, requiring a
specific User-Agent header and a billing integrity system block for
subscription-authenticated requests. Without these, all OAuth calls
return HTTP 400 with a generic "Error" message.

Changes:
- Add billing integrity system block (SHA-256 hash derived from first
  user message content) prepended to system messages on OAuth requests
- Set User-Agent to claude-code/<version> for OAuth sessions
- Fix OAuth header patch to detect tokens in x-api-key (not just
  Authorization) and add required beta/browser-access headers
- Set litellm.drop_params=True to prevent unsupported params like
  stream_options from leaking to Anthropic (causes 400)
- Skip stream_options entirely for Anthropic models
- Honour LITELLM_LOG env var for debug logging instead of hardcoding
  LiteLLM logger to WARNING
2026-03-18 13:02:24 -07:00
Richard Tang 95e30bc607 chore: remove old queen history endpoint 2026-03-18 12:43:30 -07:00
bryan 0f1f0090b0 chore: linter update 2026-03-18 12:41:01 -07:00
bryan c0da3bec02 feat: strip image content for non-vision models 2026-03-18 12:40:30 -07:00
bryan 9dadb5264d feat: add screenshot image passthrough to LLM 2026-03-18 12:40:18 -07:00
bryan e39e6a75cc feat: add ref system for aria snapshots 2026-03-18 12:36:51 -07:00
Richard Tang 23c66d1059 feat: worker model loading 2026-03-18 12:14:02 -07:00
Richard Tang b9d529d94e feat: support separate worker llm setup 2026-03-18 11:19:44 -07:00
Bryan @ Aden 1c9b09fb78 Merge pull request #6602 from sundaram2021/cleanup/remove-commit-message-txt
micro-fix: remove unnecessary commit message file
2026-03-18 17:40:50 +00:00
Timothy @aden 9fb14f23d2 Merge pull request #6526 from sundaram2021/feature/openrouter-api-key-support
feat openrouter api key support
2026-03-18 10:15:40 -07:00
levxn 96609386a3 lints fixed 2026-03-18 22:18:16 +05:30
levxn 0cef0e6990 DS-12, DS-13 skill config overrides and runtime heuristics 2026-03-18 22:13:09 +05:30
Sundaram Kumar Jha 4795dc4f68 chore: clean useless commit message file 2026-03-18 16:45:10 +05:30
Sundaram Kumar Jha acf0f804c5 style(llm): apply ruff formatting 2026-03-18 10:54:06 +05:30
Sundaram Kumar Jha 4e2951854b fix(openrouter): harden quickstart setup and model validation 2026-03-18 10:39:58 +05:30
Sundaram Kumar Jha 80dfb429d7 refactor(review): remove out-of-scope PR changes 2026-03-18 10:39:48 +05:30
Timothy @aden 9c0ba77e22 Replace demo image with GitHub asset link
Updated README to include new asset link and removed demo image.
2026-03-17 20:59:14 -07:00
Timothy @aden 46b4651073 Merge pull request #6589 from aden-hive/fix/data-disclosure-gaps
Fix data disclosure gaps, add worker run digests, clean up deprecated tools
2026-03-17 20:46:12 -07:00
Timothy 86dd5246c6 Merge remote-tracking branch 'origin/fix/resume-with-scheduler' into fix/data-disclosure-gaps 2026-03-17 20:44:28 -07:00
Timothy a1227c88ee Merge remote-tracking branch 'origin/fix/resume-with-scheduler' into fix/data-disclosure-gaps 2026-03-17 20:42:25 -07:00
Timothy 535d7ab568 fix: worker digest sub event 2026-03-17 20:41:56 -07:00
Richard Tang af10494b31 chore: ruff lint 2026-03-17 20:41:08 -07:00
Richard Tang 39c1042827 fix: fall back to queen-only session when worker load fails on cold restore 2026-03-17 20:38:41 -07:00
Richard Tang 16e7dc11f4 fix: don't overwrite meta in queen creation 2026-03-17 20:27:39 -07:00
Richard Tang 7a27babefd feat: track and resume the session by phase 2026-03-17 20:22:54 -07:00
Timothy d53ae9d51d fix: deprecated tests 2026-03-17 20:20:21 -07:00
Timothy 910cf7727d Merge remote-tracking branch 'origin/fix/resume-with-scheduler' into fix/data-disclosure-gaps 2026-03-17 20:14:25 -07:00
Timothy 1698605f15 chore: lint 2026-03-17 19:59:23 -07:00
Timothy eda124a123 chore: lint 2026-03-17 19:58:08 -07:00
Timothy 15e9ce8d2f Merge remote-tracking branch 'origin/feature/session-digest' into fix/data-disclosure-gaps 2026-03-17 19:45:07 -07:00
Timothy c01dd603d7 fix: digest invocation 2026-03-17 19:44:22 -07:00
Timothy 9d5157d69f feat: queen subscribe to worker digest 2026-03-17 19:23:43 -07:00
Timothy d78795bdf5 Merge remote-tracking branch 'origin/feature/session-digest' into fix/data-disclosure-gaps 2026-03-17 19:15:22 -07:00
Timothy ff2b7f473e fix: subagent execution 2026-03-17 19:15:07 -07:00
Timothy 73c9a91811 feat: add worker memory consolidation hooks 2026-03-17 19:14:07 -07:00
Timothy 27b765d902 Merge branch 'feature/session-digest' into fix/data-disclosure-gaps 2026-03-17 18:32:20 -07:00
Timothy fddba419be fix: minor issues 2026-03-17 18:30:57 -07:00
Timothy f42d6308e8 Merge branch 'main' into fix/data-disclosure-gaps 2026-03-17 17:50:36 -07:00
Timothy c167002754 fix: data disclosure gaps 2026-03-17 17:50:08 -07:00
Timothy @aden ea26ee7d0c Merge pull request #6568 from aden-hive/feature/node-focus-prompt
Inject execution-scope preamble into worker node system prompts
2026-03-17 17:38:49 -07:00
Richard Tang 5280e908b2 feat: change the agent last active time 2026-03-17 17:35:01 -07:00
RichardTang-Aden 1c5dd8c664 Merge pull request #5178 from Schlaflied/feat/sdr-agent-template
feat(templates): add SDR Agent sample template
2026-03-17 16:05:45 -07:00
Richard Tang 3aca153be5 fix: add missing flowchart and terminal nodes 2026-03-17 16:03:29 -07:00
Timothy 65c8e1653c chore: lint 2026-03-17 15:31:36 -07:00
Timothy 58e4fa918c feat: make worker node aware of boundaries 2026-03-17 15:28:41 -07:00
Timothy 3af13d3f90 feat: session digest for run scoped diary 2026-03-17 14:25:32 -07:00
levxn b799789dbe fixing lint 2026-03-18 02:15:58 +05:30
levxn 2cd73dfccc implements AS-9 and AS-10 2026-03-18 02:06:51 +05:30
levxn 57d77d5479 fixing lint 2026-03-18 01:32:24 +05:30
levxn 5814021773 skills trust gate merged properly into resource loading branch 2026-03-18 01:18:20 +05:30
levxn 4f4cc9c8ce halfway done commit 2026-03-18 00:59:35 +05:30
Timothy d9c840eee5 chore: resolve merge conflicts with feature/agent-skills
Integrate SkillsManager refactor from base branch. Trust gating (AS-13)
is now wired into SkillsManager._do_load() instead of inline in runner.py,
with the interactive flag passed through SkillsManagerConfig.
2026-03-17 11:55:11 -07:00
Timothy @aden d2eb86e534 Merge pull request #6540 from sundaram2021/fix/make-windows-compatibility
fix make test compatibility on windows
2026-03-17 11:41:32 -07:00
Timothy 03842353e4 Merge branch 'main' into feature/openrouter-api-key-support 2026-03-17 11:21:53 -07:00
Schlaflied 48747e20af fix: remove personal oauth credential entries from .gitignore
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:53:16 -04:00
Schlaflied 58af593af6 revert: remove unrelated changes from previous commit
Restore .claude/settings.json and revert .gitignore change
that were accidentally included in the sdr-agent refactor commit.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:52:44 -04:00
Schlaflied 450575a927 refactor(sdr-agent): reuse agent.start() in tui command and fix mock mode
- Replace duplicated setup code in tui command with agent.start(mock_mode=mock)
- Fix mock mode to use MockLLMProvider instead of llm=None
- Add demo_contacts.json sample data for template testing
- Untrack .claude/settings.json and add to .gitignore

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:52:10 -04:00
Schlaflied eac2bb19b2 fix(sdr-agent): fix agent runtime lifecycle and mcp config
- Replace self._executor with self._agent_runtime (AgentRuntime | None)
- Import AgentRuntime for proper type annotation
- Add missing await self._agent_runtime.start() in start() — runtime
  was created but never started, causing silent failures at runtime
- Add self._agent_runtime = None reset in stop() for clean restart
- Remove redundant self._graph is None guard in trigger_and_wait()
- Update mcp_servers.json with hive-tools server config
- Add credential file patterns to .gitignore

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:50:29 -04:00
Schlaflied 756a815bf0 feat(templates): add SDR Agent sample template 2026-03-17 13:50:05 -04:00
mma2027 23a7b080eb test: add comprehensive test suite for safe_eval (#4015)
* test: add comprehensive test suite for safe_eval sandboxed evaluator

Adds 113 tests across 14 test classes covering the full surface area of
the safe_eval expression evaluator used by edge conditions:

- Literals, data structures, arithmetic, unary/binary/boolean operators
- Short-circuit semantics for `and`/`or` (including guard patterns)
- Ternary expressions, variable lookup, subscript/attribute access
- Whitelisted function and method calls
- Security boundaries (private attrs, disallowed AST nodes, blocked builtins)
- Real-world EdgeSpec.condition_expr patterns from graph executor usage

* style: fix import sort order

---------

Co-authored-by: mma2027 <mma2027@users.noreply.github.com>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-18 01:01:31 +08:00
mma2027 bf39bcdec9 fixed race condition deadlock, missing short-circuit eval, unhandled format exceptions (#4012) 2026-03-18 00:36:54 +08:00
Richard Tang 0276632491 Merge branch 'feat/graph-improvements' 2026-03-17 07:34:10 -07:00
RichardTang-Aden ae2993d0d1 Merge pull request #6528 from Antiarin/feat/trigger-nodes-in-draft-graph
Restore trigger nodes in the new flowchart
2026-03-16 20:54:36 -07:00
RichardTang-Aden d14d71f760 Merge pull request #6549 from aden-hive/staging
release 0.7.2
2026-03-16 20:44:47 -07:00
Richard Tang ef6efc2f55 chore: lint and dead code 2026-03-16 20:44:03 -07:00
Antiarin 738641d35f fix: correct trigger target, label, and SSE event data
- Add name and entry_node to all trigger SSE events (TRIGGER_AVAILABLE,
  TRIGGER_ACTIVATED, TRIGGER_DEACTIVATED) so frontend gets correct data
  immediately instead of guessing
- Use ep.entry_node from backend in polling instead of guessing first
  non-trigger node
- Compute cronToLabel from trigger config during polling so pill labels
  show human-readable schedule
- Fix AsyncMock for event_bus.publish in tests
2026-03-17 09:07:10 +05:30
Antiarin 22f5534f08 fix: ensure Queen calls remove_trigger when user asks to remove scheduler
Added explicit prompt guidance requiring the Queen to call the
remove_trigger tool instead of just saying "it's removed."
2026-03-17 09:07:10 +05:30
Antiarin b79e7eca73 feat: live update trigger pill and detail panel on save
- Handle trigger_updated SSE event to update graph node label and
  config in real time when cron or task is saved
- Use cronToLabel for human-readable schedule display in detail panel
- Add "Saved" button feedback for Save Cron and Save Task (2s toast)
- Update trigger pill label to reflect new schedule on cron save
2026-03-17 09:07:10 +05:30
Antiarin 28250dc45e feat: support cron editing via trigger update API
- Extend PATCH /triggers/{id} to accept trigger_config with cron
  validation via croniter and active timer restart
- Add TRIGGER_UPDATED SSE event so frontend updates in real time
- Update frontend API client to use updateTrigger with config support
- Add tests for task update, cron restart, and invalid cron rejection
2026-03-17 09:07:10 +05:30
Antiarin fe5df6a87a feat: restore trigger node rendering in DraftGraph
Trigger nodes (scheduler, webhook, etc.) stopped appearing after the
v0.7.0 refactor because DraftGraph had no trigger awareness.

- Extract shared utilities (cssVar, truncateLabel, trigger colors/icons,
  useTriggerColors, cronToLabel) into lib/graphUtils.ts
- Render trigger pills above the draft flowchart with pill shape, icons,
  countdown timers, active/inactive status, and click handling
- Draw dashed edges from trigger pills to the correct draft node using
  flowchartMap lookup
- Name all trigger layout constants, fix countdown text color bug
- Include trigger pill extent in SVG viewBox width

Closes #6344
2026-03-17 09:07:10 +05:30
Richard Tang 07e4b593dd fix: write config when change model with existing key 2026-03-16 20:23:20 -07:00
Timothy 497591bf3b Merge remote-tracking branch 'origin/feat/hive-llm-support' into staging 2026-03-16 19:49:21 -07:00
Timothy a2a3e334d6 Merge branch 'feature/node-node-comm-by-file' into staging 2026-03-16 19:48:45 -07:00
Timothy 1ccbfaf800 Merge branch 'feature/agent-skills' into staging 2026-03-16 19:48:36 -07:00
Timothy a9afa0555c chore: lint 2026-03-16 19:43:19 -07:00
Timothy 83b2183cf0 Merge branch 'feature/agent-skills' into feature/node-node-comm-by-file 2026-03-16 19:37:46 -07:00
bryan c2dea88398 refactor: active node always displaying 2026-03-16 19:30:44 -07:00
Timothy f49e7a760e fix: skill memory keys breaking unrestricted node permissions
Only extend read_keys/write_keys with skill memory keys when the
list was already non-empty (restricted). An empty list means "allow
all" — adding _-prefixed skill keys to an empty list accidentally
activated the permission check and blocked legitimate reads.
2026-03-16 19:27:48 -07:00
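A small sketch of the guard described in the commit above; the function name and list shapes are hypothetical stand-ins for the real node-permission structures.

```python
# Illustrative only: extend a node's memory-key allowlist with skill keys
# without accidentally switching an "allow all" node into restricted mode.
def extend_memory_keys(read_keys: list[str], skill_keys: list[str]) -> list[str]:
    # An empty list means "allow all": extending it would activate the
    # permission check and block legitimate reads, so leave it untouched.
    if not read_keys:
        return read_keys
    # Already restricted: the skill's _-prefixed keys must be added explicitly.
    return read_keys + [k for k in skill_keys if k not in read_keys]
```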
bryan dc95c88da0 chore: linter update 2026-03-16 19:22:51 -07:00
Timothy 6e0255ebec fix: lint E501 line-too-long and auto-format 2026-03-16 19:21:27 -07:00
bryan b51e688d1a feat: transition when loading 2026-03-16 19:17:16 -07:00
Timothy 379d3df46b feat: file path first data passing 2026-03-16 19:14:45 -07:00
bryan b77a3031fe refactor: update flowchart.json for templates 2026-03-16 17:27:28 -07:00
bryan c10eea04ec refactor: update graph node colors 2026-03-16 17:26:57 -07:00
Richard Tang 491a3f24da chore: Suppress noisy LiteLLM INFO logs 2026-03-16 16:45:23 -07:00
Timothy c7d70e0fb1 fix: skill injection, tool call timeout 2026-03-16 16:26:16 -07:00
Richard Tang d59f8e99cb chore: prompt users to go to discord for hive key 2026-03-16 16:09:47 -07:00
Richard Tang 0a91b49417 feat: add validation and config for baseURL 2026-03-16 16:07:13 -07:00
Timothy ced64541b9 Merge remote-tracking branch 'origin/main' into feature/agent-skills 2026-03-16 15:45:00 -07:00
levxn 88253883a3 tier 3 resource loading 2026-03-17 03:30:58 +05:30
Timothy 3c30cfe02b Merge branch 'chore/fix-workspace-queen-message' into feature/agent-skills 2026-03-16 14:52:03 -07:00
Timothy 0d6267bcf1 fix: add delegation notice 2026-03-16 14:49:33 -07:00
Richard Tang b47175d1df feat: add hive llm spec in the quickstart 2026-03-16 14:10:30 -07:00
Timothy 6f23a30eed fix: skill lifecycle to runtime 2026-03-16 13:46:49 -07:00
Sundaram Kumar Jha ff7b5c7e27 fix: prepend ~/.local/bin to PATH so uv is found in Git Bash on Windows 2026-03-17 01:28:25 +05:30
bryan 69f0ff7ac9 chore: linter update 2026-03-16 12:22:29 -07:00
bryan c3f13c50eb docs: remove stale iso 5807 references 2026-03-16 12:22:01 -07:00
bryan 5477408d40 chore: code quality updates 2026-03-16 12:18:46 -07:00
bryan 9fad385ddf fix: return staging phase for disk-loaded agents to prevent false planning loader 2026-03-16 12:14:20 -07:00
bryan cf44ee1d9b refactor: remove AgentGraph, extract shared types, add resizable graph panel 2026-03-16 12:13:56 -07:00
bryan 4ab33a39d6 chore: add generated flowchart.json for template agents 2026-03-16 12:13:29 -07:00
bryan ae19121802 test: add tests for flowchart_utils classification and remap 2026-03-16 12:13:16 -07:00
bryan b518525418 docs: update flowchart schema for 9 types with new color palette 2026-03-16 12:13:06 -07:00
bryan ac3fe38b33 refactor: remove dead shape cases and update imports 2026-03-16 12:12:50 -07:00
bryan 3c6a30fcae refactor: trim queen prompt to 9 flowchart types with dark theme colors 2026-03-16 12:12:35 -07:00
bryan 2ced873fb5 refactor: extract flowchart utils into dedicated module with fallback generation 2026-03-16 12:12:17 -07:00
levxn 6ed6e5b286 lint fixes 2026-03-17 00:32:14 +05:30
Vasu Bansal 30bb0ad5d8 style: format MCP connection manager 2026-03-16 23:46:44 +05:30
Vasu Bansal cb0845f5ba fix: wrap MCP manager cleanup condition 2026-03-16 23:41:36 +05:30
Levin ce2525b59c Merge branch 'aden-hive:main' into skills/trust-gating 2026-03-16 23:39:27 +05:30
levxn 1f77ec3831 fixed bug introduced with change in executor.py, AS-13 along with upstream's AS-1,2,3,4,5 2026-03-16 23:38:45 +05:30
Timothy @aden ab995d8b96 Merge pull request #6530 from aden-hive/chore/fix-workspace-queen-message
fix(micro-fix): queen message display
2026-03-16 10:52:57 -07:00
Vasu Bansal 6ab5aa8004 style: format mcp client
Apply ruff formatting to satisfy CI on the MCP transport changes.
2026-03-16 23:19:49 +05:30
Vasu Bansal 4449cd8ee8 feat: add shared MCP connection manager 2026-03-16 23:10:26 +05:30
Vasu Bansal 8b60c03a0a feat: add unix and sse MCP transports
Implements unix socket and SSE MCP transports, adds reconnect-once retry for unix/SSE, and adds focused unit coverage.
2026-03-16 23:03:44 +05:30
Timothy c2e560fc07 fix: queen message display 2026-03-16 10:30:05 -07:00
vakrahul 2f15a16159 feat: structured MCP error codes and failure diagnostics (closes #6352) 2026-03-16 22:50:14 +05:30
Timothy 19f7ae862e fix: skill loading log 2026-03-16 10:14:33 -07:00
Timothy 5e9f74744a fix: google sheet tools account param 2026-03-16 10:14:05 -07:00
Levin 0e98023e40 Merge branch 'aden-hive:main' into skills/trust-gating 2026-03-16 22:23:57 +05:30
Timothy 7787179a5a Merge branch 'main' into feature/agent-skills 2026-03-16 09:14:29 -07:00
Timothy @aden b63205b91a Merge pull request #6010 from Antiarin/feat/notion-tool-docs-and-improvements
feat: add Notion tool README, improve tool logic, and expand test coverage
2026-03-16 08:36:11 -07:00
Timothy @aden 347bccb9ee Merge branch 'main' into feat/notion-tool-docs-and-improvements 2026-03-16 08:10:43 -07:00
Sundaram Kumar Jha 22bb07f00e chore: resolve merge conflict 2026-03-16 19:59:57 +05:30
Sundaram Kumar Jha 660f883197 style(core): apply ruff formatting to satisfy CI lint 2026-03-16 19:57:21 +05:30
Timothy @aden 9d83f0298f Merge pull request #6385 from Waryjustice/fix/google-sheets-credentials-orphan
fix: make state.json progress writes atomic in GraphExecutor
2026-03-16 07:25:13 -07:00
Sundaram Kumar Jha 988de80b66 Merge branch 'main' into feature/openrouter-api-key-support 2026-03-16 19:51:04 +05:30
Sundaram Kumar Jha dc6aa226ee feat(openrouter): validate model readiness and harden tool-call handling
- add OpenRouter chat completion validation to key checks for quickstart flows

- improve OpenRouter compat parsing to convert plain textual tool calls into real tool events

- prevent tool-call text from leaking into assistant responses

- add regression tests for OpenRouter key checks and LiteLLM tool compat parsing
2026-03-16 19:39:11 +05:30
levxn 48a54b4ee2 implements AS-13, trusted gating for project level skills 2026-03-16 17:45:33 +05:30
Hundao 7f7e8b4dff docs: update Windows guidance to reflect native support (#6519)
quickstart.ps1 and hive.ps1 provide full native Windows support.
Update README, CONTRIBUTING, and environment-setup docs to stop
recommending WSL as the primary path. Also add Windows alternatives
for make check/test commands in CONTRIBUTING.md.

Fixes #3835
Fixes #3839
2026-03-16 15:52:42 +08:00
Sundaram Kumar Jha f48a7380f5 Add command sanitizer module and enhance command validation (#6217)
* feat(tools): add command sanitizer module with blocklists for shell injection prevention

* fix(tools): validate commands in execute_command_tool before execution

* fix(tools): validate commands in coder_tools_server run_command before execution

* test(tools): add 109 tests for command sanitizer covering safe, blocked, and edge cases

* fix(tools): normalize executable sanitizer matching

Use explicit .exe suffix normalization in sanitizer paths to satisfy Ruff B005 while preserving blocking behavior for executable names.

Also apply the same normalization in coder_tools_server fallback sanitizer and clean a test-file formatting lint issue.

* fix(tools): harden command sanitizer handling

Normalize executable path matching, tighten python -c detection, and remove the duplicated coder_tools_server fallback by importing the shared sanitizer reliably.

Document the shell=True limitation in the command runners and add regression tests for absolute executable paths plus quoted python -c forms.
2026-03-16 14:46:53 +08:00
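A rough sketch of the blocklist approach the commits above describe, including the .exe suffix normalization; the function name and blocklist contents are assumptions, not the module's real contents.

```python
# Illustrative command sanitizer: parse the command, normalize the executable
# name, and reject anything on a blocklist before execution.
import shlex

BLOCKED_EXECUTABLES = {"rm", "mkfs", "shutdown", "reboot"}  # example entries only

def validate_command(command: str) -> None:
    try:
        tokens = shlex.split(command)
    except ValueError as exc:
        raise ValueError(f"unparseable command: {exc}") from exc
    if not tokens:
        raise ValueError("empty command")
    # Strip any path prefix and a trailing ".exe" so Windows spellings
    # match the same blocklist entries as POSIX ones.
    exe = tokens[0].replace("\\", "/").rsplit("/", 1)[-1].lower()
    if exe.endswith(".exe"):
        exe = exe[: -len(".exe")]
    if exe in BLOCKED_EXECUTABLES:
        raise ValueError(f"blocked executable: {exe}")
```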
Gaurav Singh 3c7f129d86 fix(executor): enforce branch timeout and memory conflict strategy in parallel execution (#6504)
ParallelExecutionConfig.branch_timeout_seconds and memory_conflict_strategy
were declared but never read by any code. This caused branches to run
indefinitely and memory conflicts to go undetected.

Changes:
- Wrap parallel branch tasks with asyncio.wait_for() using configured timeout
- Switch asyncio.gather to return_exceptions=True so one timeout doesn't cancel siblings
- Handle asyncio.TimeoutError in result processing loop
- Implement last_wins/first_wins/error memory conflict strategies
- Track which branch wrote which key during fan-out for conflict detection
- Add 6 new tests covering timeout and conflict scenarios

Closes #5706
2026-03-16 14:31:09 +08:00
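The commit above outlines the timeout and conflict-strategy mechanics; a short sketch of that shape follows. The branch callables, config arguments, and the assumption that each branch returns its memory writes as a dict are placeholders, not the executor's real API.

```python
# Sketch: per-branch timeout via asyncio.wait_for, gather with
# return_exceptions=True, and last_wins/first_wins/error conflict handling.
import asyncio

async def run_branches(branches, timeout_s: float, strategy: str = "last_wins"):
    tasks = [asyncio.wait_for(branch(), timeout=timeout_s) for branch in branches]
    # return_exceptions=True keeps one branch's timeout from cancelling siblings.
    results = await asyncio.gather(*tasks, return_exceptions=True)

    merged, writers = {}, {}
    for idx, result in enumerate(results):
        if isinstance(result, asyncio.TimeoutError):
            continue  # treated as a branch failure in the real processing loop
        if isinstance(result, Exception):
            raise result
        for key, value in result.items():  # assumed: branch returns its memory writes
            if key in writers and strategy == "first_wins":
                continue
            if key in writers and strategy == "error":
                raise RuntimeError(f"memory conflict on {key!r}: branches {writers[key]} and {idx}")
            merged[key], writers[key] = value, idx
    return merged
```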
RichardTang-Aden 4533b27aa1 Merge pull request #6249 from aden-hive/fix/episodic-memory-access
fix: deduplicate queen memory tools into shared list
2026-03-15 20:26:29 -07:00
Richard Tang 3adf268c29 chore: ruff lint 2026-03-15 20:25:21 -07:00
Richard Tang ac8579900f Merge remote-tracking branch 'origin/main' into fix/episodic-memory-access 2026-03-15 20:23:13 -07:00
Richard Tang abbaaa68f3 Merge remote-tracking branch 'origin/main' 2026-03-15 20:19:32 -07:00
Richard Tang 11089093ef chore: remove deprecated step in quickstart 2026-03-15 20:05:23 -07:00
RichardTang-Aden 99b7cb07d5 Merge pull request #6300 from Nupreeth/docs/notion-tool-readme
docs(notion): add Notion tool README
2026-03-15 20:03:17 -07:00
RichardTang-Aden 70d61ae67a Merge pull request #6389 from saschabuehrle/micro-fix/issue-6015-step-numbering
micro-fix: remove vestigial duplicate Step 3 header in quickstart.sh
2026-03-15 20:01:36 -07:00
Richard Tang dd054815a3 docs: update product image 2026-03-15 19:56:17 -07:00
Timothy 8e5eaae9dd chore(micro-fix): windows string ops compatibility fix 2026-03-15 17:05:41 -07:00
Hundao 2d0128eb5c fix: declare croniter dependency and fail loudly on missing import (#6405)
croniter is used for cron-based timer entry points but was never
declared in pyproject.toml. A fresh install would silently skip
all cron triggers. Add croniter>=1.4.0 to dependencies and raise
RuntimeError instead of silently continuing on ImportError.

Fixes #5353
2026-03-15 18:29:05 +08:00
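A minimal sketch of the fail-loud import described above, assuming croniter's `is_valid` helper; the error wording is illustrative.

```python
# Fail loudly instead of silently skipping cron triggers when croniter is missing.
try:
    from croniter import croniter
except ImportError as exc:
    raise RuntimeError(
        "croniter is required for cron-based timer entry points; "
        "install it (declared in pyproject.toml as croniter>=1.4.0)"
    ) from exc

def validate_cron(expression: str) -> None:
    if not croniter.is_valid(expression):
        raise ValueError(f"invalid cron expression: {expression!r}")
```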
Milton Adina 06f1d4dcef docs: add Windows quickstart.ps1 instructions to getting-started.md (#5668)
- Add Windows (PowerShell) section alongside Linux/macOS
- Reference .\quickstart.ps1 for native Windows users
- Add Set-ExecutionPolicy note for script execution
- Link to environment-setup.md for WSL alternatives
2026-03-15 18:05:39 +08:00
Gowtham Tadikamalla 0e7b11b5b2 fix(llm): warn when litellm monkey-patches fail to apply due to ImportError (#5757)
Closes #5753

_patch_litellm_anthropic_oauth and _patch_litellm_metadata_nonetype
silently return when litellm internal modules change. This adds
logger.warning() calls so operators are alerted when patches cannot be
applied, instead of encountering cryptic 401 or TypeError at runtime.

Co-authored-by: GowthamT-1610 <gowthamt@umd.edu>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-15 17:59:36 +08:00
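The shape of the change above, sketched generically; the module and attribute names are placeholders and deliberately not litellm's real internals.

```python
# Illustrative: warn (instead of silently returning) when a monkey-patch
# target cannot be imported or has moved.
import importlib
import logging

logger = logging.getLogger(__name__)

def _apply_patch(module_name: str, attr: str, replacement) -> None:
    try:
        module = importlib.import_module(module_name)
        getattr(module, attr)  # raises AttributeError if the internal moved
    except (ImportError, AttributeError) as exc:
        # Previously a silent return: operators then hit cryptic 401/TypeError
        # errors at runtime with no hint that the patch never applied.
        logger.warning("litellm patch skipped (%s.%s): %s", module_name, attr, exc)
        return
    setattr(module, attr, replacement)
```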
kalp patel 291b78f934 fix: prune ~/.hive/failed_requests/ to prevent unbounded disk growth (#5725)
Add MAX_FAILED_REQUEST_DUMPS = 50 cap and _prune_failed_request_dumps()
helper. After each _dump_failed_request() call the oldest files beyond
the cap are deleted so the directory never grows without bound.

Fixes #5696
2026-03-15 17:33:46 +08:00
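A minimal sketch of the pruning helper described above; the cap and directory follow the commit message, while the function body itself is illustrative.

```python
# Delete the oldest failure dumps beyond the cap so the directory stays bounded.
from pathlib import Path

MAX_FAILED_REQUEST_DUMPS = 50

def _prune_failed_request_dumps(
    dump_dir: Path = Path.home() / ".hive" / "failed_requests",
) -> None:
    if not dump_dir.is_dir():
        return
    dumps = sorted(dump_dir.iterdir(), key=lambda p: p.stat().st_mtime, reverse=True)
    for stale in dumps[MAX_FAILED_REQUEST_DUMPS:]:
        stale.unlink(missing_ok=True)  # oldest files beyond the cap are removed
```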
Vaibhav Kumar e196a03972 Fix LLMJudge OpenAI fallback to use LiteLLM provider (#5674) 2026-03-15 17:22:37 +08:00
Ishan Chaurasia a0abe2685d fix: preserve custom session ids in runtime logs (#6241)
* fix: preserve custom session ids in runtime logs

Treat any execution stored under sessions/<id> as a session-backed run so custom IDs stay visible in worker-session browsing and unified log APIs. Add regression coverage for custom IDs across executor path selection, log directory creation, and API listing.

Made-with: Cursor

* fix: ignore stray session directories in listing

Keep the session_ prefix as the fast path for worker session discovery, but allow custom IDs when a backing state.json exists. This avoids ghost directories in the UI while preserving the custom session ID support from the original fix.

Made-with: Cursor
2026-03-15 16:08:54 +08:00
SRI LIKHITA ADRU e8f642c8b6 fix(credentials): aden_api_key delete returns 404 when not found, san… (#6340)
* fix(credentials): aden_api_key delete returns 404 when not found, sanitize 500 errors

* style: restore warning log for unexpected delete errors

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-15 15:56:32 +08:00
Abhilash Puli 6260f628eb feat(tools): add HuggingFace inference, embedding, and endpoint tools (#6132)
* feat(tools): add HuggingFace inference, embedding, and endpoint tools

* fix: resolve ruff E501 lint issues

* style: fix formatting and restore Hub API error message

* style: format test file

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-15 15:44:18 +08:00
Sundaram Kumar Jha 4a4f17ed40 fix quickstart guide for windows (#6264)
* fix(windows): verify uv is runnable before launch

* fix(windows): use validated uv path for kimi health check

* fix(windows): dedupe uv discovery and keep quickstart scoped

* chore: refresh uv lockfile
2026-03-15 15:19:15 +08:00
Fernando Mano 36dcf2025b Feature: #5871 - Improve developer agent logging: simplify terminal output (#6388) 2026-03-15 15:13:22 +08:00
Aryan Nandanwar 85c70c94e6 fix: queen bee multiple response error resolved (#5962)
* fix: queen bee multiple response error resolved

* fix: queen bee multiple response error resolved updates

* fix: added chatmsg.phas and reconsileoptimizeuser

* fix:cleaned up blank lines

* style: fix formatting in workspace.tsx

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-15 15:07:24 +08:00
saschabuehrle 336e82ba22 micro-fix: remove vestigial duplicate Step 3 header in quickstart.sh (fixes #6015) 2026-03-14 18:07:59 +01:00
Sundaram Kumar Jha a7b6b080ab chore(lockfiles): refresh generated lockfiles
- update frontend package-lock metadata after frontend validation
- refresh uv.lock editable package version for the current workspace state
2026-03-14 20:50:51 +05:30
Sundaram Kumar Jha 9202cbd4d4 fix(openrouter): stabilize quickstart and tool execution
- add cross-platform OpenRouter quickstart setup, config fallbacks, and key validation
- harden LiteLLM/OpenRouter tool execution, duplicate question handling, and worker loading UX
- add backend and frontend regression coverage for OpenRouter flows
2026-03-14 20:48:58 +05:30
Waryjustice f2ddd1051d fix: make state.json progress writes atomic
Use atomic_write for GraphExecutor._write_progress and log persistence failures instead of silently swallowing exceptions. Add regression tests for atomic write usage and warning logs on write failure.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-14 18:52:25 +05:30
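A sketch of an atomic write helper in the spirit of the fix above; the real helper's name and signature may differ.

```python
# Write to a temp file in the same directory, then os.replace() into place so
# readers never observe a partially written state.json; log failures instead
# of swallowing them.
import logging
import os
import tempfile

logger = logging.getLogger(__name__)

def atomic_write(path: str, data: str) -> None:
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory, prefix=".progress-")
    try:
        with os.fdopen(fd, "w") as handle:
            handle.write(data)
        os.replace(tmp_path, path)  # atomic rename on POSIX and Windows
    except OSError:
        logger.warning("failed to persist progress to %s", path, exc_info=True)
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
```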
Aaryann Chandola 2dd60c8d52 Merge branch 'aden-hive:main' into feat/notion-tool-docs-and-improvements 2026-03-14 10:58:01 +05:30
Richard Tang ff01c1fd99 chore: release v0.7.1 — Chrome-native GCU, browser isolation, dummy agent tests
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-13 20:39:46 -07:00
RichardTang-Aden 421b25fdb7 Merge pull request #6313 from prasoonmhwr/bugFix/add_tab_ui
bugFix: micro-fix add tab UI
2026-03-13 20:29:30 -07:00
Richard Tang 795c3c33e2 docs: readme update 2026-03-13 20:26:44 -07:00
RichardTang-Aden 97821f4d80 Merge pull request #6346 from aden-hive/fix/session-resume-new-agent
fix: save json path for the new agent update meta.json when loaded worker
2026-03-13 20:19:48 -07:00
RichardTang-Aden 505e1e30fd Merge branch 'main' into fix/session-resume-new-agent 2026-03-13 20:19:36 -07:00
Timothy 3fb2b285fb chore: add star history widget 2026-03-13 20:17:35 -07:00
RichardTang-Aden a76109840c Merge pull request #6345 from aden-hive/feat/gcu-updates
feat: GCU browser cleanup, draft loading state, and inner_turn message fix
2026-03-13 20:16:38 -07:00
Timothy 1db8484402 Merge branch 'main' into feature/agent-skills 2026-03-13 20:05:47 -07:00
RichardTang-Aden 39212350ba Merge pull request #6342 from aden-hive/ci/level-2-dummy-agent-testing
Add Level 2 dummy agent end-to-end tests
2026-03-13 19:42:34 -07:00
Richard Tang f3399fe95b chore: ruff lint 2026-03-13 19:39:44 -07:00
Richard Tang d02e1155ed feat: dummy agent tests 2026-03-13 19:39:14 -07:00
bryan 7ede3ba171 feat: queen upsert fix 2026-03-13 19:34:26 -07:00
Timothy cdaec8a837 feat: agent skills 2026-03-13 18:56:34 -07:00
Richard Tang 2272491cf5 chore: remove dead code 2026-03-13 18:10:43 -07:00
RichardTang-Aden bb38cb974f Merge pull request #6333 from aden-hive/fix/new-agent-resume
Fix: new agent resume and GCU browser improvements
2026-03-13 17:20:49 -07:00
bryan 635d2976f4 feat: show loading spinner in draft panel during planning phase 2026-03-13 16:40:33 -07:00
bryan 4e1525880d feat: clean up browser profile after top-level GCU node execution 2026-03-13 16:40:20 -07:00
Richard Tang b80559df68 chore: ruff lint 2026-03-13 16:38:50 -07:00
RichardTang-Aden 08d93ef90a Merge pull request #6331 from RichardTang-Aden/main
fix: generate worker mcp.json correctly in initialize_agent_package
2026-03-13 15:35:18 -07:00
Richard Tang 22bf035522 chore: fix lint 2026-03-13 15:35:01 -07:00
Richard Tang 15944a42ab fix: generate worker mcp file correctly 2026-03-13 15:30:28 -07:00
Richard Tang 8440ec70ba chore: document the difference between runner mode run() and start() 2026-03-13 15:28:18 -07:00
Timothy eacf2520cf chore: skills prd 2026-03-13 15:22:09 -07:00
Richard Tang def4f62a51 fix: update meta.json when loaded worker 2026-03-13 14:05:57 -07:00
bryan b0c5bcd210 chore: update tab management guidelines and add concurrent subagent patterns 2026-03-13 14:04:40 -07:00
bryan 2fe1343343 feat: inject unique browser profile per GCU subagent 2026-03-13 14:03:21 -07:00
bryan de0dcff50f feat: add tab origin/age metadata and per-subagent profile isolation 2026-03-13 14:02:15 -07:00
Richard Tang 20427e213a fix: update meta.json when loaded worker 2026-03-13 13:52:15 -07:00
bryan 1fb5c6337a fix: anchor worker monitoring to queen's session ID on cold-restore 2026-03-13 12:50:50 -07:00
Timothy @aden 1e74f194a1 Update authors in MCP Server Registry document 2026-03-13 12:15:50 -07:00
Timothy 08157d2bd6 chore(docs): bounty program - standard 2026-03-13 12:10:21 -07:00
Timothy ef036257a9 docs(mcp): MCP integration PRD 2026-03-13 11:56:33 -07:00
Timothy 16ce984c74 chore: add default context limit on windows quickstart 2026-03-13 10:04:49 -07:00
Kartik d433cda209 fix: use CredentialStoreAdapter in sap_tool instead of raw os.getenv()
Made-with: Cursor
2026-03-13 22:30:50 +05:30
bryan 1e8b5b96eb Merge branch 'main' into feat/gcu-updates 2026-03-13 09:26:06 -07:00
Prasoon Mahawar 094ba89f19 Merge branch 'main' of https://github.com/prasoonmhwr/hive into bugFix/add_tab_ui 2026-03-13 18:59:44 +05:30
Prasoon Mahawar 7008c9f310 bugFix: UI overflow issue when creating multiple agents – “Add tab” dropdown partially hidden 2026-03-13 18:58:38 +05:30
Prasoon Mahawar 94d7cbacc2 Revert "bugFix: Clipboard write in SystemPromptTab lacks error handling and may show false Copied feedback"
This reverts commit bddc2b413a.
2026-03-13 18:55:52 +05:30
Prasoon Mahawar bddc2b413a bugFix: Clipboard write in SystemPromptTab lacks error handling and may show false Copied feedback 2026-03-13 18:23:36 +05:30
Nupreeth 48c8fb7fff docs(notion): add Notion tool README 2026-03-13 12:03:48 +05:30
RichardTang-Aden 52b1a3f472 Merge pull request #6282 from aden-hive/feat/refactor-session
Refactor session lifecycle with flowchart planning and triggers
2026-03-12 21:15:10 -07:00
Richard Tang 079e00c8f7 Merge remote-tracking branch 'origin/main' into feat/refactor-session 2026-03-12 21:13:15 -07:00
Richard Tang 60bba38941 chore: ruff lint 2026-03-12 21:01:47 -07:00
Richard Tang ea8e7b11c6 Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 20:54:08 -07:00
Richard Tang 3dc2b25b01 fix: adding the trigger helpers 2026-03-12 20:53:45 -07:00
bryan 543b90b34f chore: tooltip update 2026-03-12 20:50:39 -07:00
Richard Tang 2ad78ec8a2 Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 20:48:09 -07:00
Timothy 412658e9f2 fix: remove subagent shapes 2026-03-12 20:46:09 -07:00
Richard Tang 9bfddec322 fix: missing _FLOWCHART_TYPES reference 2026-03-12 20:43:03 -07:00
Timothy bbd9c10169 fix: decision node cannot have subagents 2026-03-12 20:36:04 -07:00
Richard Tang 51fdc4ddde fix: always new session for new agent 2026-03-12 20:34:42 -07:00
Richard Tang 04685d33ca fix: solve the problem from merge conflict 2026-03-12 20:28:25 -07:00
Richard Tang 729a0e0cec fix: resolve merge conflict 2026-03-12 20:23:58 -07:00
bryan 2bcb0cacee added pause/run button 2026-03-12 20:15:25 -07:00
Timothy 44bf191f53 fix: no orphaned node by bfs 2026-03-12 20:04:00 -07:00
Richard Tang 993b31f19b Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 20:00:45 -07:00
Richard Tang 41b3b9619f Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feature/flowchart-linked-experimental 2026-03-12 19:45:45 -07:00
Richard Tang 2a4fe4020c feat: force the planning agent to ask questions 2026-03-12 19:45:07 -07:00
Ishan Chaurasia 9d1f268078 fix(server): honor session_id in one-step session creation (#6233)
Align POST /api/sessions behavior across queen-only and one-step worker creation so callers can rely on deterministic session IDs. Add a regression test covering the forwarded session_id contract.

Made-with: Cursor
2026-03-13 10:43:12 +08:00
bryan 2185e127b1 style: coder tools formatting and template quote fixes 2026-03-12 19:39:53 -07:00
bryan 99ed885fd0 fix: add cached_tokens to finish event test assertion 2026-03-12 19:39:53 -07:00
bryan d8a390a685 feat: flowchart rendering in DraftGraph with node shapes and layout 2026-03-12 19:39:53 -07:00
bryan f50cf1735b feat: CSS variable theming for agent graph components 2026-03-12 19:39:53 -07:00
bryan 04eb57f54e feat: auto-load worker on cold restore when queen resumes 2026-03-12 19:39:53 -07:00
bryan 7378408eb8 feat: add flowchart type system and draft-to-graph dissolution 2026-03-12 19:39:53 -07:00
bryan cf05420417 style: formatting and import cleanup across framework modules 2026-03-12 19:38:55 -07:00
Timothy f5ed4c7d43 fix: validate orphaned gcu node 2026-03-12 19:38:44 -07:00
Timothy 5547432b6e fix: queen defaults to global max context tokens 2026-03-12 19:29:14 -07:00
Ishan Chaurasia 336557d7c7 fix: pass browser_wait text as data (#6235)
Pass browser_wait text through Playwright's function argument channel so quoted and multiline strings do not break the generated wait expression. Add a regression test covering text that previously would have been interpolated unsafely.

Made-with: Cursor
2026-03-13 10:08:16 +08:00
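The safer pattern described above, sketched with Playwright's argument channel; the tool wiring around it is simplified and the function name is illustrative.

```python
# Pass the target text as data (Playwright's `arg`) rather than interpolating
# it into the JS expression, so quotes and newlines cannot break or inject
# into the generated wait expression.
async def browser_wait_for_text(page, text: str, timeout_ms: int = 10_000) -> None:
    await page.wait_for_function(
        "expected => document.body.innerText.includes(expected)",
        arg=text,
        timeout=timeout_ms,
    )
```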
Timothy 87c172227c fix: mandate flowchart topology correction 2026-03-12 19:03:46 -07:00
Richard Tang c2c4929de8 feat: remove the phase in the label 2026-03-12 18:55:24 -07:00
Timothy a978338738 fix: allow replanning 2026-03-12 18:54:01 -07:00
Timothy 8eb59b1f66 fix: mandate usage of ask tools and change pending behavior 2026-03-12 18:34:15 -07:00
Richard Tang f9d5f95936 Merge remote-tracking branch 'origin/feature/flowchart-linked-experimental' into feat/refactor-session 2026-03-12 18:32:26 -07:00
Timothy 651e99ffe3 Merge branch 'feature/multiple-asks' into feature/flowchart-linked-experimental 2026-03-12 17:57:11 -07:00
Timothy 2564f1b948 feat: allow multiple questions 2026-03-12 17:56:58 -07:00
Richard Tang c01cd528d2 feat: planning phase prompt improvements 2026-03-12 17:44:06 -07:00
bryan 2434c86cdf docs: clarify two-step escalation relay protocol in queen prompt 2026-03-12 16:50:17 -07:00
Timothy bc194ee4e9 Merge branch 'main' into feature/flowchart-linked-experimental 2026-03-12 16:50:17 -07:00
bryan c4a5e621aa docs: update GCU prompt with popup tracking and close_all guidance 2026-03-12 16:50:06 -07:00
bryan 0f5b83d86a feat: add browser_close_all tool for bulk tab cleanup 2026-03-12 16:49:55 -07:00
bryan b5aadcd51e feat: auto-track popup pages and improve session startup logging 2026-03-12 16:49:46 -07:00
bryan 290d2f6823 feat: add --no-startup-window to Chrome launch flags 2026-03-12 16:49:36 -07:00
Timothy @aden 2bac100c03 Merge pull request #6283 from vincentjiang777/main
docs: rename and expand contributing guidelines
2026-03-12 16:46:59 -07:00
Timothy @aden 425d37f868 Merge branch 'main' into main 2026-03-12 16:44:29 -07:00
Vincent Jiang 99b127e2da docs: revert filename to CONTRIBUTING.md for GitHub compliance
Changed HOW_TO_CONTRIBUTE.md back to CONTRIBUTING.md to comply with
GitHub's standard for contributing guidelines files.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-03-12 16:42:42 -07:00
Timothy 43b759bf61 fix: ensure flowchart existence 2026-03-12 16:40:18 -07:00
Vincent Jiang 20d8d52f12 docs: rename and expand contributing guidelines
Renamed CONTRIBUTING.md to HOW_TO_CONTRIBUTE.md and significantly expanded
the documentation with detailed sections on development setup, OS support,
tooling requirements, performance metrics, and contribution workflows.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-03-12 16:29:13 -07:00
Richard Tang 944567dc31 chore: ruff lint 2026-03-12 16:23:13 -07:00
nightcityblade 7e09588e4e fix: reject path-like agent names in hive dispatch --agents (#6211)
Validate that agent names passed to --agents do not contain path
separators. Previously, passing 'exports/my_agent' would result in
the doubled path 'exports/exports/my_agent' with a confusing error.
Now a clear error message is shown suggesting the correct usage.

Fixes #6208

Co-authored-by: nightcityblade <nightcityblade@gmail.com>
2026-03-12 16:22:37 -07:00
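A minimal sketch of the validation described above; the error wording is illustrative.

```python
# Reject path-like agent names so 'exports/my_agent' fails with a clear
# message instead of producing a doubled 'exports/exports/my_agent' path.
import os

def validate_agent_name(name: str) -> str:
    if "/" in name or "\\" in name or os.sep in name:
        raise ValueError(
            f"invalid agent name {name!r}: pass just the agent name "
            "(e.g. 'my_agent'), not a path like 'exports/my_agent'"
        )
    return name
```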
Priyanka Bhallamudi 7bf69d2263 fix: read nodes from graph object in discovery.py for correct node count (#6227)
Co-authored-by: Lakshmi Priyanka Bhallamudi <priyanka@Lakshmis-MacBook-Air.local>
2026-03-12 16:22:37 -07:00
bryan 99d2b0c003 chore: update readme 2026-03-12 16:22:37 -07:00
bryan 8868416baa chore: update the tests and readme 2026-03-12 16:22:37 -07:00
bryan 405b120674 feat: fixed google credentials to use the google oauth credential 2026-03-12 16:22:37 -07:00
Trisha 66a7b43199 [bug:6117:docs]: fix inconsistent configuration and troubleshooting guidance (#6118) 2026-03-12 16:22:36 -07:00
Trisha a8f9d83723 docs: fix typos and awkward copy (#6115)
* [bug:6109:README]: fix typos and awkward copy

* trigger ci

* rerun checks
2026-03-12 16:22:36 -07:00
bryan d95d5804ca fix: align the credential functions to be the same 2026-03-12 16:22:36 -07:00
Richard Tang 674cf05601 feat: track the number of runs 2026-03-12 15:19:13 -07:00
Timothy 86349c78d0 Merge branch 'feature/guardrails' into feature/flowchart-linked-experimental 2026-03-12 15:11:12 -07:00
Timothy 2232f49191 fix: queen flowcharting behavior 2026-03-12 15:10:32 -07:00
Richard Tang 6fa71fa27d feat: track queen phase by message 2026-03-12 14:58:35 -07:00
Vincent Jiang 1ac9ba69d6 docs: replace recipe examples with 100 sample agent prompts
Replace individual recipe READMEs with a comprehensive collection of 100 real-world agent prompt examples across marketing, sales, operations, engineering, and finance. This provides users with a broader range of use case inspiration in a single, organized reference document.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-03-12 14:46:09 -07:00
Vincent Jiang 9e16be8f03 docs: replace recipe examples with 100 sample agent prompts
Replace individual recipe READMEs with a comprehensive collection of 100 real-world agent prompt examples across marketing, sales, operations, engineering, and finance. This provides users with a broader range of use case inspiration in a single, organized reference document.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-03-12 14:44:32 -07:00
Richard Tang 8c7065ad37 refactor: remove the parts conversion logic 2026-03-12 14:36:27 -07:00
Richard Tang a18ed5bbe6 feat: restore queen phase 2026-03-12 14:29:01 -07:00
bryan 9f3339650d chore: linter update 2026-03-12 14:27:17 -07:00
bryan d5e5d3e83d feat: add subagent activity tracking to queen status and instructions 2026-03-12 14:26:49 -07:00
bryan 5ea27dda09 refactor: update GCU system prompt for auto-snapshots and batching 2026-03-12 14:26:38 -07:00
bryan 6f9066ef20 feat: return auto-snapshot from browser interaction tools 2026-03-12 14:26:24 -07:00
bryan c37185732a feat: kill orphaned Chrome processes on GCU server shutdown 2026-03-12 14:26:05 -07:00
bryan 0c900fb50e refactor: clean session startup and add page lifecycle management 2026-03-12 14:25:16 -07:00
bryan 4d3ac28878 feat: launch Chrome on macOS via open -n to coexist with user's browser 2026-03-12 14:24:55 -07:00
bryan 270c1f8c50 fix: use lazy %-formatting in subagent completion log to avoid f-string in logger 2026-03-12 14:24:30 -07:00
bryan 3d0859d06a fix: stop clearing credentials_required on modal close to prevent infinite loop 2026-03-12 14:24:14 -07:00
Timothy 8f55170c1e fix: compaction ratio reporting 2026-03-12 14:17:42 -07:00
Richard Tang ed3d4bfe33 feat: resume cold session from event logs 2026-03-12 14:07:57 -07:00
Timothy 31a98a5f95 feat: cached token handling 2026-03-12 14:03:58 -07:00
Timothy 7667b773f2 fix: 18x tool discovery efficiency by progressive disclosure 2026-03-12 13:12:43 -07:00
Timothy 49560260de fix: token counts 2026-03-12 11:52:08 -07:00
Richard Tang 596ce9878d feat: unique run id 2026-03-12 11:09:36 -07:00
Timothy 1cc75f89bd feat: replanning 2026-03-12 09:55:42 -07:00
bryan ffe47c0f71 fix: credential modal eating errors, banner stays open 2026-03-12 09:41:53 -07:00
Timothy bb3c69cff1 fix: proper guardrail on combined context window 2026-03-12 09:37:17 -07:00
Timothy 70d11f537e feat: merge subagent nodes 2026-03-12 09:06:41 -07:00
Timothy b15dd2f623 fix: better logging 2026-03-12 09:03:29 -07:00
Timothy ce308312ae fix: usage tracking 2026-03-12 08:56:33 -07:00
bryan bf4652db4b fix: share event bus so tool events are visible to parent 2026-03-12 08:41:34 -07:00
bryan 2acd526b71 feat: dynamic viewport sizing and suppress Chrome warning bar 2026-03-12 08:40:49 -07:00
bryan df71834e4b refactor: switch from Playwright browser to system Chrome via CDP 2026-03-12 08:39:43 -07:00
nightcityblade f757c724cc fix: reject path-like agent names in hive dispatch --agents (#6211)
Validate that agent names passed to --agents do not contain path
separators. Previously, passing 'exports/my_agent' would result in
the doubled path 'exports/exports/my_agent' with a confusing error.
Now a clear error message is shown suggesting the correct usage.

Fixes #6208

Co-authored-by: nightcityblade <nightcityblade@gmail.com>
2026-03-12 21:11:02 +08:00
Priyanka Bhallamudi a4c758403e fix: read nodes from graph object in discovery.py for correct node count (#6227)
Co-authored-by: Lakshmi Priyanka Bhallamudi <priyanka@Lakshmis-MacBook-Air.local>
2026-03-12 18:34:47 +08:00
Timothy bc3c5a5899 fix: allow memory tool to be used in all phases 2026-03-11 20:10:24 -07:00
Timothy a67563850b feat: flowchart reconciliation 2026-03-11 19:58:27 -07:00
Bryan @ Aden b48465b778 Merge pull request #6230 from aden-hive/feat/google-doc-credential-alignment
micro-fix: Feat/google doc credential alignment
2026-03-12 02:52:03 +00:00
bryan d3baaaab24 chore: update readme 2026-03-11 19:48:00 -07:00
Timothy c764b4dc3b Merge branch 'main' into feature/flowchart-linked-experimental 2026-03-11 19:12:51 -07:00
bryan ad6077bd7b chore: update the tests and readme 2026-03-11 19:12:38 -07:00
Timothy ce2a91b1c0 feat: flowchart mapping 2026-03-11 19:12:25 -07:00
bryan c2e7afeb5e feat: fixed google credentials to use the google oauth credential 2026-03-11 19:12:25 -07:00
Timothy 0c9680ca89 feat: dissolution graph structure 2026-03-11 18:38:17 -07:00
Richard Tang 726016d24a fix: remove the duplicated session logic 2026-03-11 17:11:03 -07:00
Richard Tang 4895cea08a chore: lint and micro-fix 2026-03-11 16:55:29 -07:00
Richard Tang c9723a3ff2 feat(wip): always resume the previous session 2026-03-11 16:48:31 -07:00
Richard Tang 6cb73a6fea refactor: remove the remaining old trigger format and change the trigger format in examples to the latest format 2026-03-11 16:13:37 -07:00
Richard Tang 0c7f43f595 refactor: remove reference of the unused session judge 2026-03-11 16:01:00 -07:00
Richard Tang ea5cfcc5d6 refactor: remove the unused session judge 2026-03-11 15:57:19 -07:00
Richard Tang 34e85019c3 feat: stop supporting the old scheduler 2026-03-11 15:54:48 -07:00
Timothy 8011b72673 fix: flowchart display 2026-03-11 15:41:55 -07:00
RichardTang-Aden d87dfca1ab Merge pull request #6075 from aden-hive/fix/credential-function-alignment
fix: align the credential functions to be the same
2026-03-11 15:11:57 -07:00
Richard Tang c979dba958 fix: reference error from the rename 2026-03-11 14:33:42 -07:00
Richard Tang b4caa045e1 Merge remote-tracking branch 'origin/main' into feat/agent-trigger 2026-03-11 14:32:36 -07:00
Timothy b0fd4bc356 fix: draft flowchart display 2026-03-11 11:05:33 -07:00
Trisha a79d7de482 [bug:6117:docs]: fix inconsistent configuration and troubleshooting guidance (#6118) 2026-03-11 14:41:54 +08:00
Trisha e5e57302fa docs: fix typos and awkward copy (#6115)
* [bug:6109:README]: fix typos and awkward copy

* trigger ci

* rerun checks
2026-03-11 14:38:37 +08:00
Emmanuel Nwanguma c69cf1aea5 test(security): add comprehensive unit tests for 7 security scanning tools (#6151)
* test(security): add comprehensive unit tests for 7 security scanning tools

Add dedicated test files for all security scanning tools:
- test_dns_security_scanner.py (12 tests)
- test_http_headers_scanner.py (13 tests)
- test_ssl_tls_scanner.py (14 tests)
- test_subdomain_enumerator.py (15 tests)
- test_port_scanner.py (17 tests)
- test_tech_stack_detector.py (20 tests)
- test_risk_scorer.py (24 tests)

Total: 115 new tests covering:
- Input validation and cleaning
- Connection error handling
- Core scanning logic with mocked responses
- Grade/risk calculation
- Edge cases

Fixes #5920

* fix(tests): strengthen weak assertions in security scanner tests

- SSL scanner: replace always-true `or` assertions with specific checks
  that verify hostname stripping actually happened
- Port scanner: verify timeout clamp value, not just absence of error
- DNS scanner: remove unused helper method

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-11 13:29:11 +08:00
Emmanuel Nwanguma 2f4cd8c36f fix(credentials): improve exception handling in key_storage.py (#6153)
Replace bare except Exception: clauses with specific exception handling:

- delete_aden_api_key(): Catch FileNotFoundError, PermissionError at debug
  level; log unexpected errors at WARNING with exc_info=True
- _read_credential_key_file(): Catch FileNotFoundError, PermissionError at
  debug level; log unexpected errors at WARNING with exc_info=True
- _read_aden_from_encrypted_store(): Catch FileNotFoundError, PermissionError,
  KeyError at debug level; log unexpected errors at WARNING with exc_info=True

This makes credential issues easier to diagnose by:
- Logging unexpected errors at WARNING level (visible in production)
- Including full stack traces with exc_info=True
- Keeping expected failures (file not found, permissions) at debug level

Fixes #5931
2026-03-11 13:05:10 +08:00
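The logging split described above, sketched on one of the named helpers; the function body is illustrative, not the module's real code.

```python
# Expected failures (missing file, permissions) stay at debug; anything
# unexpected is logged at WARNING with a stack trace so it is visible in production.
import logging

logger = logging.getLogger(__name__)

def _read_credential_key_file(path: str) -> str | None:
    try:
        with open(path, encoding="utf-8") as handle:
            return handle.read().strip()
    except (FileNotFoundError, PermissionError) as exc:
        logger.debug("credential key file unavailable: %s", exc)
        return None
    except Exception:
        logger.warning("unexpected error reading credential key file %s", path, exc_info=True)
        return None
```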
Aaryann Chandola 6f571e6d00 [BUG] fix: use ReplaceFileW for atomic writes on Windows to preserve ACLs (#5849)
* [BUG] fix: use ReplaceFileW for atomic writes on Windows to preserve ACLs

* fix: ensure atomic_replace checks for Windows API availability
2026-03-11 12:59:14 +08:00
Emmanuel Nwanguma 31bc84106f test: add API integration tests for hubspot, intercom, google_docs tools (#6167)
Resolves #5921

- test_hubspot_tool.py: 51 tests covering 15 MCP tools
- test_intercom_tool.py: 50 tests covering 11 MCP tools
- test_google_docs_tool.py: 57 tests covering 11 MCP tools
2026-03-11 12:55:03 +08:00
Timothy bdd6194203 feature: hive flowchart at planning phase 2026-03-10 19:54:02 -07:00
RichardTang-Aden fd79dceb0f Merge pull request #6166 from aden-hive/fix/subagent-reply-stall
micro-fix: update escalation tests for new ESCALATION_REQUESTED flow
2026-03-10 19:47:00 -07:00
Richard Tang ad50139d67 chore: lint 2026-03-10 19:46:35 -07:00
Richard Tang 12fb40c110 test: update escalation tests for ESCALATION_REQUESTED flow
Tests were asserting the old CLIENT_OUTPUT_DELTA + CLIENT_INPUT_REQUESTED
pattern; the fix in 89ccd66f routes escalations through the queen via
ESCALATION_REQUESTED instead.
2026-03-10 19:45:21 -07:00
RichardTang-Aden 738e469d96 Merge pull request #6165 from aden-hive/feature/provider-moonshotai-kimi
feat: support MoonShot AI Kimi subscription
2026-03-10 19:39:25 -07:00
Timothy 80ccbcc827 chore: lint 2026-03-10 19:37:18 -07:00
RichardTang-Aden 08fac31a9d Merge pull request #6159 from aden-hive/fix/subagent-reply-stall
fix: route subagent report_to_parent escalations to queen instead of user
2026-03-10 18:24:33 -07:00
Richard Tang 89ccd66fb9 fix: subagent _EscalationReceiver 2026-03-10 18:21:50 -07:00
Timothy 7c47e367de feat: support moonshotai kimi subscription 2026-03-10 18:03:44 -07:00
Timothy b8741bf94c fix: queen agent system prompt hooks 2026-03-10 16:25:07 -07:00
Aaryann Chandola e82133741c Merge branch 'aden-hive:main' into feat/notion-tool-docs-and-improvements 2026-03-11 04:23:20 +05:30
RichardTang-Aden c90dcbb32f Merge pull request #6152 from aden-hive/refactor/remove-dead-code
refactor: remove deprecated codes
2026-03-10 15:31:34 -07:00
Richard Tang ac3a5f5e93 chore: remove the ai generated temp doc 2026-03-10 15:29:21 -07:00
Timothy 1ccfdbbf7d chore: minimax key check 2026-03-10 15:24:09 -07:00
Timothy 1de37d2747 chore: lint 2026-03-10 15:00:14 -07:00
Timothy 2aefdf5b5f refactor: remove deprecated codes 2026-03-10 14:57:54 -07:00
Antiarin 5076278dcb feat(notion): register Notion tool in verified and unverified registration functions
- Added the Notion tool registration to the _register_verified function.
- Removed the Notion tool registration from the _register_unverified function to ensure proper handling.
2026-03-11 02:45:51 +05:30
Antiarin 2398e04e11 docs(notion): add README for Notion tool with setup instructions and usage examples
- Introduced a comprehensive README.md for the Notion tool.
- Included setup instructions for the Notion API token and credential store configuration.
- Documented available tools and their functionalities.
- Provided usage examples for searching, creating, updating, and managing pages and databases.
2026-03-11 02:45:41 +05:30
Antiarin d00f321627 test(notion): add comprehensive tests for error handling and credential store in Notion tool
- Implemented tests for HTTP error codes, timeouts, and generic exceptions in _request.
- Added tests to verify the use of credential store when provided.
- Enhanced tests for notion_search to include filter types and page size clamping.
- Updated test assertions for successful responses from notion_get_page.
2026-03-11 02:45:30 +05:30
Antiarin e76b6cb575 feat(notion): enhance Notion tool functionality with new block types and improved page creation
- Added BlockType enum for various Notion block types.
- Updated notion_create_page to allow specifying parent_page_id and title_property.
- Enhanced notion_query_database to support sorting and pagination.
- Introduced notion_create_database for creating databases under a parent page.
- Improved error handling for required parameters in page and database creation.
2026-03-11 02:45:12 +05:30
Hundao 4caaa79900 Merge pull request #5988 from roberthallers/docs/fix-tui-deprecation-5941
docs: fix TUI deprecation inconsistency in roadmap
2026-03-10 16:46:41 +08:00
Hundao 296089d4cd Merge pull request #6108 from Hundao/fix/subagent-judge-feedback
fix: SubagentJudge and implicit judge return feedback=None on ACCEPT
2026-03-10 15:39:29 +08:00
hundao cae5f971cf fix: update test assertions for newly added tools
Tool counts and expected lists were outdated after new tools were added
to stripe, linear, apollo, discord, and google_analytics.
2026-03-10 15:36:12 +08:00
hundao bac716eea3 fix: pass feedback="" on evaluated ACCEPT verdicts in SubagentJudge and implicit judge
Fixes #6107
2026-03-10 15:24:39 +08:00
Navya Bijoy 14daf672e8 Fix: SessionManager._cleanup_stale_active_sessions indiscriminately cancels healthy concurrent agent sessions (#6081)
* fixes a bug in the  SessionManager

* chore: remove debug print from test

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-10 15:18:11 +08:00
Emmanuel Nwanguma e352ae5145 fix(mcp): close errlog file handle to prevent resource leak (#6094)
Track the errlog file handle opened on non-Windows systems and
properly close it during cleanup to prevent file descriptor leaks.

Changes:
- Add _errlog_handle instance variable to track the file handle
- Store handle reference when opening os.devnull
- Close handle in _cleanup_stdio_async() after other cleanup
- Clear reference in disconnect() for safety

Fixes #6002
2026-03-10 15:06:51 +08:00
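A sketch of the handle-tracking fix described above; the attribute name follows the commit message, but the surrounding class is simplified and hypothetical.

```python
# Track the devnull errlog handle so it can be closed during cleanup instead
# of leaking a file descriptor per connection.
import os

class StdioTransport:
    def __init__(self) -> None:
        self._errlog_handle = None

    def _open_errlog(self):
        self._errlog_handle = open(os.devnull, "w")
        return self._errlog_handle

    async def _cleanup_stdio_async(self) -> None:
        if self._errlog_handle is not None:
            self._errlog_handle.close()
            self._errlog_handle = None  # clear the reference for safety
```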
Pushkal a58ffc2669 fix(server): use session.phase_state instead of session.mode_state in handle_pause (#6069)
The handle_pause endpoint referenced session.mode_state (lines 360-361),
which does not exist on the Session dataclass. This caused an
AttributeError every time the pause endpoint reached the phase transition
step, preventing the queen phase from transitioning to staging and
returning a 500 error to the frontend.

Changed to session.phase_state, consistent with handle_stop (line 412),
handle_run (line 75), and the Session dataclass definition
(session_manager.py line 44).
2026-03-10 15:03:19 +08:00
RichardTang-Aden 3fefea52be Merge pull request #6102 from aden-hive/micro-fix/report-to-parent-empty-check
micro-fix: track reported_to_parent to prevent false empty-turn detection
2026-03-09 21:12:23 -07:00
Richard Tang 06fd045b3e micro-fix: track reported_to_parent to prevent false empty-turn detection
Turns that call report_to_parent were incorrectly treated as "truly
empty" because the flag was not propagated. Thread it through
_run_single_turn and include it in the empty-turn guard.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 21:10:47 -07:00
RichardTang-Aden 2e43d2af46 Merge pull request #6100 from aden-hive/feature/integration-extended
micro-fix: wrong reference for hive_coder
2026-03-09 19:52:35 -07:00
Richard Tang 2c9790c65d Merge remote-tracking branch 'origin' into feature/integration-extended 2026-03-09 19:52:17 -07:00
Richard Tang 9700ac71bb micro-fix: wrong reference for hive_coder 2026-03-09 19:50:07 -07:00
RichardTang-Aden 61ed67b068 Merge pull request #6097 from aden-hive/feature/integration-extended
Expand integration tool coverage across 40 vendors
2026-03-09 19:47:34 -07:00
Richard Tang c3bea8685a Merge remote-tracking branch 'origin/main' into feature/integration-extended 2026-03-09 19:47:21 -07:00
RichardTang-Aden 98c57b795a Merge pull request #6050 from aden-hive/feat/queen-planning-phase
Add queen planning phase, global memory, and refactor hive_coder
2026-03-09 19:46:23 -07:00
Richard Tang 9be1d03b5c chore: ruff lint 2026-03-09 19:45:36 -07:00
Richard Tang 0d09510539 Merge remote-tracking branch 'origin/main' into feat/queen-planning-phase 2026-03-09 19:42:10 -07:00
Richard Tang 639c37ba17 feat: prompt to init the agent 2026-03-09 19:34:01 -07:00
Richard Tang 2258c23254 Merge branch 'feature/queen-global-memory' into feat/queen-planning-phase 2026-03-09 19:11:32 -07:00
Richard Tang 9714ea106d feat: improve initialize_and_build_agent clarity 2026-03-09 18:54:48 -07:00
Timothy f4ad500177 chore: lint 2026-03-09 18:53:01 -07:00
Timothy 9154a4d9f8 fix: resolve E501 line-too-long lint errors across 7 tool files 2026-03-09 18:51:01 -07:00
Timothy add6efe6f1 fix(micro-fix): increase stall threshold 2026-03-09 18:40:13 -07:00
Richard Tang 7ceb1efd02 fix: replace old tool name reference 2026-03-09 18:40:01 -07:00
Timothy a29ecf8435 chore(micro-fix): fix ci test blockage 2026-03-09 18:27:21 -07:00
Richard Tang d0ba5ef4f4 fix: update the wrong variable name 2026-03-09 18:12:29 -07:00
Richard Tang 860f637491 feat: add validation for module import 2026-03-09 17:53:50 -07:00
Richard Tang acb2cab317 feat: minor prompt change for switching to building mode 2026-03-09 17:41:23 -07:00
Richard Tang b453806918 feat: execution end message 2026-03-09 17:29:58 -07:00
Richard Tang 7ba8a0f51b feat: strengthen validation logic when loading 2026-03-09 17:08:20 -07:00
Richard Tang f6f398b6b1 feat: add GCU knowledge to planning 2026-03-09 17:02:13 -07:00
Timothy c4b22fa5c4 feat(postgres): update credential spec with new tool names 2026-03-09 16:47:27 -07:00
Timothy 0e64f977cd feat(postgres): add table stats, indexes, and foreign keys tools
Add pg_get_table_stats for row counts and size info,
pg_list_indexes for index details, and pg_get_foreign_keys
for relationship discovery with both outgoing and incoming FKs.
2026-03-09 16:47:09 -07:00
Timothy f24c9708fc feat(lusha): update credential spec with new tool names 2026-03-09 16:45:33 -07:00
Timothy bb4436e277 feat(lusha): add bulk enrich, technologies, and decision makers tools
Add lusha_bulk_enrich_persons for batch enrichment,
lusha_get_technologies for company tech stack lookup, and
lusha_search_decision_makers for senior contact discovery.
2026-03-09 16:45:17 -07:00
Timothy 795f66c90b feat(gsc): update credential spec with new tool names 2026-03-09 16:44:33 -07:00
Timothy 9ef6d51573 feat(gsc): add top queries, top pages, and delete sitemap tools
Add gsc_top_queries and gsc_top_pages convenience wrappers for
click-sorted analytics, and gsc_delete_sitemap for sitemap removal.
2026-03-09 16:44:20 -07:00
Timothy 3fed4e3409 feat(aws-s3): update credential specs with new tool names 2026-03-09 16:43:37 -07:00
Timothy 670e69f2ce feat(aws-s3): add copy, metadata, and presigned URL tools
Add s3_copy_object for copying within/between buckets,
s3_get_object_metadata for HEAD-based metadata retrieval, and
s3_generate_presigned_url for temporary access URL generation.
2026-03-09 16:42:46 -07:00
Timothy f6c4747905 feat(pushover): update credential spec with new tool names 2026-03-09 16:42:04 -07:00
Timothy 7b78f6c12f feat(pushover): add cancel receipt, glance update, and limits tools
Add pushover_cancel_receipt for stopping emergency retries,
pushover_send_glance for widget data updates, and
pushover_get_limits for checking message usage.
2026-03-09 16:41:52 -07:00
Timothy 1c75100f59 feat(news): update credential spec with new tool names 2026-03-09 16:41:15 -07:00
Timothy b325e103c6 feat(news): add latest, by-source, and by-topic search tools
Add news_latest for breaking news without query, news_by_source
for source-filtered articles, and news_by_topic for topic-based
discovery with automatic date ranges.
2026-03-09 16:40:54 -07:00
Timothy aef2d2d474 feat(serpapi): update credential spec with new tool names 2026-03-09 16:40:05 -07:00
Timothy 95a2b6711e feat(serpapi): add cited-by, profile search, and Google web search tools
Add scholar_cited_by for finding papers citing a given paper,
scholar_search_profiles for author profile discovery, and
serpapi_google_search for structured Google web results.
2026-03-09 16:38:50 -07:00
Timothy 7fb5e8145c feat(exa-search): update credential spec with new tool names 2026-03-09 16:37:56 -07:00
Timothy 8e45d0df83 feat(exa-search): add news, papers, and company search tools
Add exa_search_news, exa_search_papers, and exa_search_companies
convenience wrappers with pre-configured category filters and
automatic date/domain filtering.
2026-03-09 16:37:44 -07:00
Richard Tang 8d4657c13e Merge branch 'feat/queen-planning-phase' into feature/queen-global-memory 2026-03-09 16:10:42 -07:00
Timothy 3d175a6d54 feat(greenhouse): update credential spec with new tool names
Add greenhouse_list_offers, greenhouse_add_candidate_note, greenhouse_list_scorecards.
2026-03-09 16:02:53 -07:00
Timothy b9debaf957 feat(greenhouse): add list offers, candidate notes, and scorecards tools
- greenhouse_list_offers: GET /offers or /applications/{id}/offers
- greenhouse_add_candidate_note: POST /candidates/{id}/activity_feed/notes
- greenhouse_list_scorecards: GET /applications/{id}/scorecards
- Add _post helper for POST requests
2026-03-09 16:02:08 -07:00
Richard Tang bdcbcff6f3 feat: better instruction for planning mode switch 2026-03-09 16:01:34 -07:00
Timothy d2d7bdc374 feat(brevo): update credential spec with new tool names
Add brevo_list_contacts, brevo_delete_contact, brevo_list_email_campaigns.
2026-03-09 16:01:16 -07:00
Timothy 40e494b15d feat(brevo): add list contacts, delete contact, and list campaigns tools
- brevo_list_contacts: GET /contacts with pagination and modified_since filter
- brevo_delete_contact: DELETE /contacts/{email} to remove contacts
- brevo_list_email_campaigns: GET /emailCampaigns with status filter and stats
2026-03-09 16:00:42 -07:00
Timothy b5e840c0cb feat(quickbooks): update credential specs with new tool names
Add quickbooks_list_invoices, quickbooks_get_customer, quickbooks_create_payment
to both credential specs (token and realm_id).
2026-03-09 15:59:46 -07:00
Timothy f3d74c9ae4 feat(quickbooks): add list invoices, get customer, and create payment tools
- quickbooks_list_invoices: query invoices with status/customer filters
- quickbooks_get_customer: GET /customer/{id} with address and contact info
- quickbooks_create_payment: POST /payment with optional invoice linking
2026-03-09 15:59:23 -07:00
Richard Tang a22b321692 feat: improve phase switching tools 2026-03-09 15:33:03 -07:00
Timothy 2e7dbad118 feat(cloudinary): update credential specs with new tool names
Add cloudinary_get_usage, cloudinary_rename_resource, cloudinary_add_tag
to all three credential specs (cloud_name, key, secret).
2026-03-09 15:31:42 -07:00
Timothy 6183d1b65b feat(cloudinary): add usage, rename, and add tag tools
- cloudinary_get_usage: GET /usage for storage, bandwidth, transformation limits
- cloudinary_rename_resource: POST /rename to change public_id
- cloudinary_add_tag: POST /tags to add tags to resources
2026-03-09 15:31:22 -07:00
Timothy 09931e6d98 feat(twitter): update credential spec with new tool names
Add twitter_get_user_followers, twitter_get_tweet_replies, twitter_get_list_tweets.
2026-03-09 15:25:21 -07:00
Timothy cb394127d1 feat(twitter): add user followers, tweet replies, and list tweets tools
- twitter_get_user_followers: GET /users/{id}/followers with profile details
- twitter_get_tweet_replies: search recent replies via conversation_id
- twitter_get_list_tweets: GET /lists/{id}/tweets with author expansion
2026-03-09 15:21:47 -07:00
Timothy 588fa1f9ea feat(google-analytics): update credential spec with new tool names
Add ga_get_user_demographics, ga_get_conversion_events, ga_get_landing_pages.
2026-03-09 15:21:09 -07:00
Timothy 73325c280c feat(google-analytics): add demographics, conversion events, and landing pages tools
- ga_get_user_demographics: country/language/device breakdown
- ga_get_conversion_events: event counts, conversions, and revenue
- ga_get_landing_pages: top landing pages with bounce rate and session duration
2026-03-09 15:20:51 -07:00
Timothy 8c5ae8ffa8 feat(docker-hub): update credential spec with new tool names
Add docker_hub_get_tag_detail, docker_hub_delete_tag, docker_hub_list_webhooks.
2026-03-09 15:19:58 -07:00
Timothy 7389423c70 feat(docker-hub): add tag detail, delete tag, and list webhooks tools
- docker_hub_get_tag_detail: GET /repositories/{repo}/tags/{tag} with image architectures
- docker_hub_delete_tag: DELETE /repositories/{repo}/tags/{tag}
- docker_hub_list_webhooks: GET /repositories/{repo}/webhooks
- Add _delete helper for DELETE requests
2026-03-09 15:18:46 -07:00
Timothy 20c15446a7 feat(apollo): update credential spec with new tool names
Add apollo_get_person_activities, apollo_list_email_accounts,
apollo_bulk_enrich_people.
2026-03-09 15:17:38 -07:00
Richard Tang c05c30dd9a feat: add meta agent tools to planning 2026-03-09 15:14:34 -07:00
Timothy bcd2fb76bd feat(apollo): add person activities, email accounts, and bulk enrich tools
- apollo_get_person_activities: GET /activities for contact activity history
- apollo_list_email_accounts: GET /email_accounts for connected sending accounts
- apollo_bulk_enrich_people: POST /people/bulk_match for batch enrichment (up to 10)
2026-03-09 15:03:21 -07:00
Timothy 5fb97ab6df feat(calendly): update credential spec with new tool names
Add calendly_cancel_event, calendly_list_webhooks, calendly_get_event_type.
2026-03-09 15:00:46 -07:00
Timothy 0224ebc800 feat(calendly): add cancel event, list webhooks, and get event type tools
- calendly_cancel_event: POST /scheduled_events/{id}/cancellation
- calendly_list_webhooks: GET /webhook_subscriptions for org/user scope
- calendly_get_event_type: GET /event_types/{id} for meeting template details
- Add _post helper for POST requests
2026-03-09 15:00:34 -07:00
Timothy af88f7299a feat(pagerduty): update credential specs with new tool names
Add pagerduty_list_oncalls, pagerduty_add_incident_note,
pagerduty_list_escalation_policies to api_key spec.
Add pagerduty_add_incident_note to from_email spec (write operation).
2026-03-09 14:59:53 -07:00
Timothy 81729706ae feat(pagerduty): add oncalls, incident notes, and escalation policies tools
- pagerduty_list_oncalls: GET /oncalls with schedule/policy filters
- pagerduty_add_incident_note: POST /incidents/{id}/notes to add notes
- pagerduty_list_escalation_policies: GET /escalation_policies with search
2026-03-09 14:59:33 -07:00
Timothy bbb1b43ebe feat(airtable): update credential spec with new tool names
Add airtable_delete_records, airtable_search_records, airtable_list_collaborators.
2026-03-09 14:58:57 -07:00
Timothy 70ed5fa8df feat(airtable): add delete records, search records, and list collaborators tools
- airtable_delete_records: DELETE records by comma-separated IDs (up to 10)
- airtable_search_records: search records using FIND formula for partial matching
- airtable_list_collaborators: list base collaborators via meta API
- Add _delete helper for DELETE requests
2026-03-09 14:58:42 -07:00
Timothy 312db6620d feat(reddit): update credential specs with new tool names
Add reddit_get_subreddit_info, reddit_get_post_detail, reddit_get_user_posts
to both credential specs (client_id and client_secret).
2026-03-09 14:57:50 -07:00
Timothy 93c1fc5488 feat(reddit): add subreddit info, post detail, and user posts tools
- reddit_get_subreddit_info: GET /r/{name}/about for subscriber count, description
- reddit_get_post_detail: GET /by_id/t3_{id} for full post details with flair, ratios
- reddit_get_user_posts: GET /user/{name}/submitted for user's post history
2026-03-09 14:57:33 -07:00
Richard Tang 90762f275b feat: give planning mode the load tool 2026-03-09 14:55:53 -07:00
Timothy 801443027d feat(pipedrive): update credential spec with new tool names
Add pipedrive_update_deal, pipedrive_create_person, pipedrive_create_activity
to the credential spec tools list.
2026-03-09 14:54:22 -07:00
Timothy ca2ead76cd feat(pipedrive): add deal update, person creation, and activity creation tools
Add pipedrive_update_deal, pipedrive_create_person, and
pipedrive_create_activity tools using Pipedrive REST API v1.
2026-03-09 14:52:27 -07:00
Timothy d562144a6d feat(confluence): register new tools in credential specs
Add confluence_update_page, confluence_delete_page, and
confluence_get_page_children to all three Confluence credential specs.
2026-03-09 14:51:39 -07:00
Timothy af7fb7da27 feat(confluence): add page update, delete, and children listing tools
Add confluence_update_page, confluence_delete_page, and
confluence_get_page_children tools using Confluence REST API v2.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 14:51:26 -07:00
Timothy c17dd63b4a feat(intercom): register new tools in credential spec
Add intercom_close_conversation, intercom_create_contact, and
intercom_list_conversations to Intercom credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 14:50:49 -07:00
Timothy 866db289e2 feat(intercom): add close conversation, create contact, and list conversations tools
Add close_conversation, create_contact, and list_conversations client
methods plus intercom_close_conversation, intercom_create_contact, and
intercom_list_conversations MCP tools using Intercom API v2.11.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 14:50:30 -07:00
Timothy b4ac5e9607 feat(gitlab): register new tools in credential spec
Add gitlab_update_issue, gitlab_get_merge_request, and
gitlab_create_merge_request_note to GitLab credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 14:49:01 -07:00
Timothy 3ca7af4242 feat(gitlab): add issue update, MR detail, and MR comment tools
Add _put helper and gitlab_update_issue, gitlab_get_merge_request,
and gitlab_create_merge_request_note tools using GitLab REST API v4.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 14:48:40 -07:00
Richard Tang 2b12a9c91a Merge remote-tracking branch 'origin/feature/queen-global-memory' into feature/queen-global-memory 2026-03-09 14:47:27 -07:00
Richard Tang 9a94595a42 feat: extract the shared knowledge between planning and building 2026-03-09 14:45:31 -07:00
Richard Tang e1540dfaa6 refactor: drop hive code CLI 2026-03-09 14:30:13 -07:00
Richard Tang 4f5ac6d1b1 refactor: rename hive_coder to queen and extract queen orchestrator 2026-03-09 14:23:31 -07:00
Richard Tang c87d7b13da refactor: rename hive_coder to queen and extract queen orchestrator 2026-03-09 14:23:16 -07:00
Timothy c4acf0b659 fix: memory consolidation hook, simplify generated memory files 2026-03-09 14:15:01 -07:00
RichardTang-Aden 5e1ab3ca37 Merge pull request #5029 from karthik-kotra/docs/setup-troubleshooting
docs(setup): add troubleshooting steps for common WSL setup issues
2026-03-09 14:06:28 -07:00
Timothy 79c32c9f47 feat(slack): register new tools in credential spec
Add slack_get_channel_info, slack_list_files, and slack_get_file_info
to Slack credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:58:14 -07:00
Timothy 35ee29a843 feat(slack): add channel info, file listing, and file detail tools
Add get_channel_info, list_files, and get_file_info client methods
plus slack_get_channel_info, slack_list_files, and slack_get_file_info
MCP tools using Slack Web API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:57:45 -07:00
Timothy 573aea1d9c feat(stripe): register new tools in credential spec
Add stripe_list_disputes, stripe_list_events, and
stripe_create_checkout_session to Stripe credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:56:20 -07:00
Timothy 6ecbc30293 feat(stripe): add disputes, events, and checkout session tools
Add list_disputes, list_events, and create_checkout_session client
methods plus stripe_list_disputes, stripe_list_events, and
stripe_create_checkout_session MCP tools using Stripe API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:56:07 -07:00
Timothy 843b1f2e1d feat(linear): register new tools in credential spec
Add linear_cycles_list, linear_issue_comments_list, and
linear_issue_relation_create to Linear credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:54:48 -07:00
Timothy 89f6c8e4ef feat(linear): add cycle listing, issue comments, and issue relations tools
Add list_cycles, list_issue_comments, and create_issue_relation client
methods plus linear_cycles_list, linear_issue_comments_list, and
linear_issue_relation_create MCP tools using Linear GraphQL API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:52:09 -07:00
Timothy 304ac07bd8 feat(zoom): register new tools in credential spec
Add zoom_update_meeting, zoom_list_meeting_participants, and
zoom_list_meeting_registrants to Zoom credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:50:27 -07:00
Timothy 82f0684b83 feat(zoom): add meeting update, participants, and registrants tools
Add zoom_update_meeting (PATCH), zoom_list_meeting_participants
(past meeting attendees), and zoom_list_meeting_registrants
using Zoom REST API v2.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:45:11 -07:00
Timothy 963c37dc31 feat(twilio): register new tools in credential specs
Add twilio_list_phone_numbers, twilio_list_calls, and
twilio_delete_message to both Twilio credential specs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:41:26 -07:00
Timothy c02da3ba5a feat(twilio): add phone number listing, call history, and message deletion tools
Add twilio_list_phone_numbers, twilio_list_calls, and
twilio_delete_message tools using Twilio REST API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:40:58 -07:00
Timothy 7f34e95ec6 feat(shopify): register new tools in credential specs
Add shopify_update_product, shopify_get_customer, and
shopify_create_draft_order to both Shopify credential specs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:40:28 -07:00
Timothy f2998fe098 feat(shopify): add product update, customer detail, and draft order tools
Add shopify_update_product, shopify_get_customer, and
shopify_create_draft_order tools using Shopify Admin REST API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:40:15 -07:00
Timothy 323a2489b8 feat(zendesk): register new tools in credential specs
Add zendesk_get_ticket_comments, zendesk_add_ticket_comment, and
zendesk_list_users to all three Zendesk credential specs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:39:35 -07:00
Timothy f6d1cd640e feat(zendesk): add ticket comments and user listing tools
Add zendesk_get_ticket_comments, zendesk_add_ticket_comment, and
zendesk_list_users tools using Zendesk Support API v2.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:39:25 -07:00
Timothy ddf89a04fe feat(asana): update credential spec for new tools
Register asana_update_task, asana_add_comment, and
asana_create_subtask in the Asana credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:35:16 -07:00
Timothy c5dc89f5ee feat(asana): add update_task, add_comment, create_subtask tools
Add _put helper and three new Asana MCP tools:
- asana_update_task: modify name, notes, completion, due date, assignee
- asana_add_comment: post comment stories on tasks
- asana_create_subtask: create subtasks under existing tasks

API ref: https://developers.asana.com/docs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:35:05 -07:00
Timothy 6ade34b759 feat(trello): register get_card, create_list, search_cards tools
Add three new Trello MCP tools:
- trello_get_card: retrieve full card details with members/checklists/attachments
- trello_create_list: create new lists on boards
- trello_search_cards: full-text search across cards with board scoping

Update credential spec to include the new tool names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:20:43 -07:00
Timothy 09d5f0a9df feat(trello): add client methods for get_card, create_list, search
Add TrelloClient methods for:
- get_card: GET /1/cards/{id} with members, checklists, attachments
- create_list: POST /1/lists to create new board lists
- search: GET /1/search for full-text search across cards

API ref: https://developer.atlassian.com/cloud/trello/rest/api-group-cards/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:19:59 -07:00
Timothy a60d63cca2 feat(github): register list_commits, create_release, list_workflow_runs
Add three new GitHub MCP tools:
- github_list_commits: query commits with author/date/branch filters
- github_create_release: create tagged releases with notes and draft support
- github_list_workflow_runs: monitor CI/CD pipeline runs with status filters

Update credential spec to include the new tool names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:19:16 -07:00
Timothy 8616975fc5 feat(github): add client methods for commits, releases, workflow runs
Add _GitHubClient methods for:
- list_commits: GET /repos/{owner}/{repo}/commits with sha/author/date filters
- create_release: POST /repos/{owner}/{repo}/releases with tag, notes, draft
- list_workflow_runs: GET /repos/{owner}/{repo}/actions/runs with filters

API ref: https://docs.github.com/en/rest

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:18:33 -07:00
Timothy e5ae919d8f feat(telegram): register get_chat_member_count, send_video, set_description
Add three new Telegram MCP tools:
- telegram_get_chat_member_count: retrieve group/channel membership size
- telegram_send_video: send video files via URL or file_id
- telegram_set_chat_description: update group/channel descriptions

Update credential spec to include the new tool names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:17:30 -07:00
Timothy 8e7f5eaaba feat(telegram): add client methods for member count, video, description
Add _TelegramClient methods for:
- get_chat_member_count: getChatMemberCount API endpoint
- send_video: sendVideo with caption, parse_mode, duration support
- set_chat_description: setChatDescription for groups/channels

API ref: https://core.telegram.org/bots/api

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:13:06 -07:00
Timothy 4d1ff8b054 feat(salesforce): update credential spec for new tools
Register salesforce_delete_record, salesforce_search_records, and
salesforce_get_record_count in both Salesforce credential specs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:12:25 -07:00
Timothy 9fa81e8599 feat(salesforce): add delete_record, search_records, get_record_count
Add three new Salesforce MCP tools:
- salesforce_delete_record: DELETE /sobjects/{type}/{id}
- salesforce_search_records: SOSL full-text search via /search/
- salesforce_get_record_count: efficient COUNT() query for any SObject

API ref: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:12:11 -07:00
Timothy cf8e19b059 feat(discord): register get_channel, create_reaction, delete_message tools
Add three new Discord MCP tools:
- discord_get_channel: retrieve channel metadata (name, topic, type)
- discord_create_reaction: add emoji reactions to messages
- discord_delete_message: remove messages from channels

Update credential spec to include the new tool names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:11:25 -07:00
Timothy dfa3f60fcf feat(discord): add client methods for get_channel, reactions, delete
Add _DiscordClient methods for:
- get_channel: retrieve channel metadata via GET /channels/{id}
- create_reaction: add emoji reaction via PUT reactions endpoint
- delete_message: remove a message via DELETE messages endpoint

API ref: https://discord.com/developers/docs/resources

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:10:49 -07:00
Timothy b795f1b253 feat(notion): update credential spec for new tools
Register notion_update_page, notion_archive_page, and
notion_append_blocks in the Notion credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:06:20 -07:00
Timothy 73423c0dd2 feat(notion): add update_page, archive_page, append_blocks tools
Add three new Notion MCP tools:
- notion_update_page: modify page properties via PATCH /pages/{id}
- notion_archive_page: archive or restore pages
- notion_append_blocks: add paragraphs, headings, lists, todos, etc.

API ref: https://developers.notion.com/reference

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:06:08 -07:00
Timothy 3d844e1539 feat(jira): update credential spec for new tools
Register jira_update_issue, jira_list_transitions, and
jira_transition_issue in all three Jira credential specs
(domain, email, token).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:04:32 -07:00
Timothy b619119eb5 feat(jira): add update_issue, list_transitions, transition_issue tools
Add three new Jira MCP tools:
- jira_update_issue: modify summary, description, priority, labels, assignee
- jira_list_transitions: discover available status transitions for an issue
- jira_transition_issue: move an issue to a new status with optional comment

API ref: https://developer.atlassian.com/cloud/jira/platform/rest/v3/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:04:19 -07:00
Timothy b00ed4fc70 feat(hubspot): register delete_object, list/create_associations tools
Add three new MCP tools:
- hubspot_delete_object: archive contacts, companies, or deals
- hubspot_list_associations: query links between CRM objects (v4 API)
- hubspot_create_association: link two CRM records together

Update credential spec to include the new tool names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:03:37 -07:00
Timothy 5ec5fbe998 feat(hubspot): add client methods for delete, associations
Add _HubSpotClient methods for:
- delete_object: archive a CRM object via DELETE /crm/v3/objects
- list_associations: query associations via GET /crm/v4/objects associations endpoint
- create_association: link two CRM objects via PUT /crm/v4/objects associations endpoint

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:02:49 -07:00
Richard Tang 2ed814455a Merge branch 'feat/queen-planning-phase' into feature/queen-global-memory 2026-03-09 12:57:23 -07:00
Timothy ad1a4ef0c3 fix: cancellation button 2026-03-09 12:48:20 -07:00
Timothy 2111c808a9 feat: queen memory v1 2026-03-09 11:55:39 -07:00
Bryan @ Aden 402bb38267 Merge pull request #6079 from Waryjustice/fix/google-sheets-credentials-orphan
fix(credentials): remove orphaned google_sheets.py credential spec
2026-03-09 18:37:27 +00:00
Waryjustice 0a55928872 fix(credentials): remove orphaned google_sheets.py credential spec
The google_sheets.py file defined GOOGLE_SHEETS_CREDENTIALS (an API-key
based credential for reading public sheets via GOOGLE_SHEETS_API_KEY) but
was never wired into the package:

- Never imported in credentials/__init__.py
- Never merged into CREDENTIAL_SPECS
- Never listed in __all__
- Tool never calls credentials.get('google_sheets_key') — uses 'google' (OAuth2)
- Tool names in the spec were stale and did not match actual function names

The 'google' credential in email.py already correctly covers all Google
Sheets tools via OAuth2. This file was dead code with no referencing
imports anywhere in the repository.

Closes #6077

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-09 23:44:26 +05:30
Richard Tang cdf76ae3b9 fix: eventloop test 2026-03-09 10:23:56 -07:00
Richard Tang 42d0592941 refactor: judge evaluate 2026-03-09 10:09:15 -07:00
Richard Tang 1de7cf821d fix: handle judge with empty message 2026-03-09 09:58:29 -07:00
Timothy 4ea8540e25 fix: better logging for memory consolidation event 2026-03-08 20:44:40 -07:00
Timothy bfa3b8e0f6 fix: queen memory health 2026-03-08 20:28:53 -07:00
Richard Tang 55eccfd75f feat: intake node prompt in planning mode 2026-03-08 20:27:24 -07:00
Timothy 1e994a77b5 feat: queen agent global memory 2026-03-08 19:54:46 -07:00
Richard Tang d12afeb35d chore: ruff lint 2026-03-08 19:46:49 -07:00
Richard Tang e84fefd319 feat: separate the queen and worker tools in prompts 2026-03-08 19:40:30 -07:00
bryan cba0ec110f fix: linter update 2026-03-08 19:37:57 -07:00
bryan 0256e0c944 Merge branch 'main' into feat/agent-trigger 2026-03-08 19:28:36 -07:00
Richard Tang d2b510014d feat: adjust tools and knowledge separation between planning and building 2026-03-08 19:21:50 -07:00
Richard Tang 3ed5fda448 feat: planning phase for the queen 2026-03-08 18:49:45 -07:00
bryan 4d9d0362a0 fixes to make the timer trigger properly 2026-03-08 18:44:42 -07:00
bryan f474d0bc8e Merge branch 'main' into feat/agent-trigger 2026-03-08 16:59:14 -07:00
bryan 6a0681b9aa feat: fixing phase 4, continuing to test 2026-03-08 16:52:00 -07:00
Robert Hallers 7a467ef9b8 docs: mark TUI as deprecated in roadmap to match CLAUDE.md
Resolves inconsistency between CLAUDE.md/AGENTS.md (TUI deprecated) and
docs/roadmap.md (TUI listed as completed feature).

- Strike through TUI items in 3 roadmap sections
- Add deprecation note to TUI-to-GUI upgrade section
- Reference AGENTS.md and hive open as replacement

Fixes #5941

Signed-off-by: Robert Hallers <robert@terplabs.ai>
2026-03-07 02:36:04 -05:00
bryan c7e634851b feat: phase 4 of trigger plan 2026-03-06 19:21:32 -08:00
bryan cdb7155960 feat: phase 3 of trigger plan 2026-03-06 18:07:26 -08:00
bryan 3f7790c26a feat: phase 2 of trigger plan 2026-03-06 17:22:57 -08:00
bryan 5676b115f4 Merge branch 'feat/queen-responsibility' into feat/agent-trigger 2026-03-06 16:58:06 -08:00
bryan 61c59d57e8 feat: phase 1 of trigger plan 2026-03-06 15:11:36 -08:00
nikhilvarmakandula 151fbd7b00 feat(tools): add Open-Meteo weather tool with no API key required 2026-03-06 00:46:18 +05:30
karthik-kotra 41cd11d5c9 docs(setup): add troubleshooting steps for common WSL setup issues 2026-02-17 07:30:00 +00:00
rhythmtaneja f88483f964 chore: trigger PR revalidation 2026-01-28 09:52:31 +05:30
rhythmtaneja b61ec8c94d Improve EventBus handler error logging by using logger.exception to include traceback 2026-01-28 00:46:23 +05:30
553 changed files with 76834 additions and 34506 deletions
@@ -0,0 +1,78 @@
name: Standard Bounty
description: A bounty task for general framework contributions (not integration-specific)
title: "[Bounty]: "
labels: []
body:
- type: markdown
attributes:
value: |
## Standard Bounty
This issue is part of the [Bounty Program](../../docs/bounty-program/README.md).
**Claim this bounty** by commenting below — a maintainer will assign you within 24 hours.
- type: dropdown
id: bounty-size
attributes:
label: Bounty Size
options:
- "Small (10 pts)"
- "Medium (30 pts)"
- "Large (75 pts)"
- "Extreme (150 pts)"
validations:
required: true
- type: dropdown
id: difficulty
attributes:
label: Difficulty
options:
- Easy
- Medium
- Hard
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: What needs to be done to complete this bounty.
placeholder: |
Describe the specific task, including:
- What the contributor needs to do
- Links to relevant files in the repo
- Any context or motivation for the change
validations:
required: true
- type: textarea
id: acceptance-criteria
attributes:
label: Acceptance Criteria
description: What "done" looks like. The PR must meet all criteria.
placeholder: |
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] CI passes
validations:
required: true
- type: textarea
id: relevant-files
attributes:
label: Relevant Files
description: Links to files or directories related to this bounty.
placeholder: |
- `path/to/file.py`
- `path/to/directory/`
- type: textarea
id: resources
attributes:
label: Resources
description: Links to docs, issues, or external references that will help.
placeholder: |
- Related issue: #XXXX
- Docs: https://...
+14 -4
@@ -2,14 +2,22 @@ name: Bounty completed
description: Awards points and notifies Discord when a bounty PR is merged
on:
pull_request:
pull_request_target:
types: [closed]
workflow_dispatch:
inputs:
pr_number:
description: "PR number to process (for missed bounties)"
required: true
type: number
jobs:
bounty-notify:
if: >
github.event.pull_request.merged == true &&
contains(join(github.event.pull_request.labels.*.name, ','), 'bounty:')
github.event_name == 'workflow_dispatch' ||
(github.event.pull_request.merged == true &&
contains(join(github.event.pull_request.labels.*.name, ','), 'bounty:'))
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
@@ -32,6 +40,8 @@ jobs:
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
DISCORD_WEBHOOK_URL: ${{ secrets.DISCORD_BOUNTY_WEBHOOK_URL }}
BOT_API_URL: ${{ secrets.BOT_API_URL }}
BOT_API_KEY: ${{ secrets.BOT_API_KEY }}
LURKR_API_KEY: ${{ secrets.LURKR_API_KEY }}
LURKR_GUILD_ID: ${{ secrets.LURKR_GUILD_ID }}
PR_NUMBER: ${{ github.event.pull_request.number }}
PR_NUMBER: ${{ inputs.pr_number || github.event.pull_request.number }}
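With the `workflow_dispatch` trigger above, a maintainer can reprocess a bounty PR that the merge event missed, for example with `gh workflow run "Bounty completed" -f pr_number=1234` (the workflow is referenced by its `name:` field here, and the PR number is a placeholder).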
-126
@@ -1,126 +0,0 @@
name: Link Discord account
description: Auto-creates a PR to add contributor to contributors.yml when a link-discord issue is opened
on:
issues:
types: [opened]
jobs:
link-discord:
if: contains(github.event.issue.labels.*.name, 'link-discord')
runs-on: ubuntu-latest
timeout-minutes: 2
permissions:
contents: write
issues: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Parse issue and update contributors.yml
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const issue = context.payload.issue;
const githubUsername = issue.user.login;
// Parse the issue body for form fields
const body = issue.body || '';
// Extract Discord ID — look for the numeric value after the "Discord User ID" heading
const discordMatch = body.match(/### Discord User ID\s*\n\s*(\d{17,20})/);
if (!discordMatch) {
await github.rest.issues.createComment({
...context.repo,
issue_number: issue.number,
body: `Could not find a valid Discord ID in the issue body. Please make sure you entered a numeric ID (17-20 digits), not a username.\n\nExample: \`123456789012345678\``
});
await github.rest.issues.update({
...context.repo,
issue_number: issue.number,
state: 'closed',
state_reason: 'not_planned'
});
return;
}
const discordId = discordMatch[1];
// Extract display name (optional)
const nameMatch = body.match(/### Display Name \(optional\)\s*\n\s*(.+)/);
const displayName = nameMatch ? nameMatch[1].trim() : '';
// Check if user already exists
const yml = fs.readFileSync('contributors.yml', 'utf-8');
if (yml.includes(`github: ${githubUsername}`)) {
await github.rest.issues.createComment({
...context.repo,
issue_number: issue.number,
body: `@${githubUsername} is already in \`contributors.yml\`. If you need to update your Discord ID, please edit the file directly via PR.`
});
await github.rest.issues.update({
...context.repo,
issue_number: issue.number,
state: 'closed',
state_reason: 'completed'
});
return;
}
// Append entry to contributors.yml
let entry = ` - github: ${githubUsername}\n discord: "${discordId}"`;
if (displayName && displayName !== '_No response_') {
entry += `\n name: ${displayName}`;
}
entry += '\n';
const updated = yml.trimEnd() + '\n' + entry;
fs.writeFileSync('contributors.yml', updated);
// Set outputs for commit step
core.exportVariable('GITHUB_USERNAME', githubUsername);
core.exportVariable('DISCORD_ID', discordId);
core.exportVariable('ISSUE_NUMBER', issue.number.toString());
- name: Create PR
run: |
# Check if there are changes
if git diff --quiet contributors.yml; then
echo "No changes to contributors.yml"
exit 0
fi
BRANCH="docs/link-discord-${GITHUB_USERNAME}"
git config user.name "github-actions[bot]"
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git checkout -b "$BRANCH"
git add contributors.yml
git commit -m "docs: link @${GITHUB_USERNAME} to Discord"
git push origin "$BRANCH"
gh pr create \
--title "docs: link @${GITHUB_USERNAME} to Discord" \
--body "Adds @${GITHUB_USERNAME} (Discord \`${DISCORD_ID}\`) to \`contributors.yml\` for bounty XP tracking.
Closes #${ISSUE_NUMBER}" \
--base main \
--head "$BRANCH" \
--label "link-discord"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Notify on issue
uses: actions/github-script@v7
with:
script: |
const username = process.env.GITHUB_USERNAME;
const issueNumber = parseInt(process.env.ISSUE_NUMBER);
await github.rest.issues.createComment({
...context.repo,
issue_number: issueNumber,
body: `A PR has been created to link your account. A maintainer will merge it shortly — once merged, you'll receive XP and Discord pings when your bounty PRs are merged.`
});
+2
@@ -35,6 +35,8 @@ jobs:
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
DISCORD_WEBHOOK_URL: ${{ secrets.DISCORD_BOUNTY_WEBHOOK_URL }}
BOT_API_URL: ${{ secrets.BOT_API_URL }}
BOT_API_KEY: ${{ secrets.BOT_API_KEY }}
LURKR_API_KEY: ${{ secrets.LURKR_API_KEY }}
LURKR_GUILD_ID: ${{ secrets.LURKR_GUILD_ID }}
SINCE_DATE: ${{ github.event.inputs.since_date || '' }}
+4 -3
@@ -13,6 +13,10 @@ out/
.env
.env.local
.env.*.local
.venv
/venv
tools/src/uv.lock
# User configuration (copied from .example)
config.yaml
@@ -68,9 +72,6 @@ temp/
exports/*
.claude/settings.local.json
.claude/skills/ship-it/
.venv
docs/github-issues/*
core/tests/*dumps/*
-4
@@ -2,10 +2,6 @@
Shared agent instructions for this workspace.
## Deprecations
- **TUI is deprecated.** The terminal UI (`hive tui`) is no longer maintained. Use the browser-based interface (`hive open`) instead.
## Coding Agent Notes
-
+150 -27
@@ -1,17 +1,149 @@
# Release Notes
## v0.7.1
**Release Date:** March 13, 2026
**Tag:** v0.7.1
### Chrome-Native Browser Control
v0.7.1 replaces Playwright with direct Chrome DevTools Protocol (CDP) integration. The GCU now launches the user's system Chrome via `open -n` on macOS, connects over CDP, and manages browser lifecycle end-to-end -- no extra browser binary required.
---
### Highlights
#### System Chrome via CDP
The entire GCU browser stack has been rewritten:
- **Chrome finder & launcher** -- New `chrome_finder.py` discovers installed Chrome and `chrome_launcher.py` manages process lifecycle with `--remote-debugging-port`
- **Coexist with user's browser** -- `open -n` on macOS launches a separate Chrome instance so the user's tabs stay untouched
- **Dynamic viewport sizing** -- Viewport auto-sizes to the available display area, suppressing Chrome warning bars
- **Orphan cleanup** -- Chrome processes are killed on GCU server shutdown to prevent leaks
- **`--no-startup-window`** -- Chrome launches headlessly by default until a page is needed
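A minimal sketch of this launch-and-connect flow, assuming a Chrome binary is already on `PATH` (the actual `chrome_finder.py` and `chrome_launcher.py` handle discovery, platform quirks, and lifecycle):
```python
import json
import subprocess
import tempfile
import time
import urllib.request

# Hypothetical sketch only: launch an isolated Chrome with a CDP port,
# then confirm the DevTools endpoint is reachable.
port = 9222
profile = tempfile.mkdtemp(prefix="gcu-profile-")   # per-subagent user-data dir
chrome = subprocess.Popen([
    "google-chrome",                                 # assumed binary name
    f"--remote-debugging-port={port}",
    f"--user-data-dir={profile}",
    "--no-startup-window",
])
time.sleep(2)                                        # crude wait; a real launcher polls the port
with urllib.request.urlopen(f"http://127.0.0.1:{port}/json/version") as resp:
    print(json.load(resp)["webSocketDebuggerUrl"])   # CDP endpoint used for page control
chrome.terminate()                                   # orphan cleanup on shutdown
```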
#### Per-Subagent Browser Isolation
Each GCU subagent gets its own Chrome user-data directory, preventing cookie/session cross-contamination:
- Unique browser profiles injected per subagent
- Profiles cleaned up after top-level GCU node execution
- Tab origin and age metadata tracked per subagent
#### Dummy Agent Testing Framework
A comprehensive test suite for validating agent graph patterns without LLM calls:
- 8 test modules covering echo, pipeline, branch, parallel merge, retry, feedback loop, worker, and GCU subagent patterns
- Shared fixtures and a `run_all.py` runner for CI integration
- Subagent lifecycle tests
---
### What's New
#### GCU Browser
- **Switch from Playwright to system Chrome via CDP** -- Direct CDP connection replaces Playwright dependency. (@bryanadenhq)
- **Chrome finder and launcher modules** -- `chrome_finder.py` and `chrome_launcher.py` for cross-platform Chrome discovery and process management. (@bryanadenhq)
- **Dynamic viewport sizing** -- Auto-size viewport and suppress Chrome warning bar. (@bryanadenhq)
- **Per-subagent browser profile isolation** -- Unique user-data directories per subagent with cleanup. (@bryanadenhq)
- **Tab origin/age metadata** -- Track which subagent opened each tab and when. (@bryanadenhq)
- **`browser_close_all` tool** -- Bulk tab cleanup for agents managing many pages. (@bryanadenhq)
- **Auto-track popup pages** -- Popups are automatically captured and tracked. (@bryanadenhq)
- **Auto-snapshot from browser interactions** -- Browser interaction tools return screenshots automatically. (@bryanadenhq)
- **Kill orphaned Chrome processes** -- GCU server shutdown cleans up lingering Chrome instances. (@bryanadenhq)
- **`--no-startup-window` Chrome flag** -- Prevent empty window on launch. (@bryanadenhq)
- **Launch Chrome via `open -n` on macOS** -- Coexist with the user's running browser. (@bryanadenhq)
#### Framework & Runtime
- **Session resume fix for new agents** -- Correctly resume sessions when a new agent is loaded. (@bryanadenhq)
- **Queen upsert fix** -- Prevent duplicate queen entries on session restore. (@bryanadenhq)
- **Anchor worker monitoring to queen's session ID on cold-restore** -- Worker monitors reconnect to the correct queen after restart. (@bryanadenhq)
- **Update meta.json when loading workers** -- Worker metadata stays in sync with runtime state. (@RichardTang-Aden)
- **Generate worker MCP file correctly** -- Fix MCP config generation for spawned workers. (@RichardTang-Aden)
- **Share event bus so tool events are visible to parent** -- Tool execution events propagate up to parent graphs. (@bryanadenhq)
- **Subagent activity tracking in queen status** -- Queen instructions include live subagent status. (@bryanadenhq)
- **GCU system prompt updates** -- Auto-snapshots, batching, popup tracking, and close_all guidance. (@bryanadenhq)
#### Frontend
- **Loading spinner in draft panel** -- Shows spinner during planning phase instead of blank panel. (@bryanadenhq)
- **Fix credential modal errors** -- Modal no longer eats errors; banner stays visible. (@bryanadenhq)
- **Fix credentials_required loop** -- Stop clearing the flag on modal close to prevent infinite re-prompting. (@bryanadenhq)
- **Fix "Add tab" dropdown overflow** -- Dropdown no longer hidden when many agents are open. (@prasoonmhwr)
#### Testing
- **Dummy agent test framework** -- 8 test modules (echo, pipeline, branch, parallel merge, retry, feedback loop, worker, GCU subagent) with shared fixtures and CI runner. (@bryanadenhq)
- **Subagent lifecycle tests** -- Validate subagent spawn and completion flows. (@bryanadenhq)
#### Documentation & Infrastructure
- **MCP integration PRD** -- Product requirements for MCP server registry. (@TimothyZhang7)
- **Skills registry PRD** -- Product requirements for skill registry system. (@bryanadenhq)
- **Bounty program updates** -- Standard bounty issue template and updated contributor guide. (@bryanadenhq)
- **Windows quickstart** -- Add default context limit for PowerShell setup. (@bryanadenhq)
- **Remove deprecated files** -- Clean up `setup_mcp.py`, `verify_mcp.py`, `antigravity-setup.md`, and `setup-antigravity-mcp.sh`. (@bryanadenhq)
---
### Bug Fixes
- Fix credential modal eating errors and banner staying open
- Stop clearing `credentials_required` on modal close to prevent infinite loop
- Share event bus so tool events are visible to parent graph
- Use lazy %-formatting in subagent completion log to avoid f-string in logger
- Anchor worker monitoring to queen's session ID on cold-restore
- Update meta.json when loading workers
- Generate worker MCP file correctly
- Fix "Add tab" dropdown partially hidden when creating multiple agents
---
### Community Contributors
- **Prasoon Mahawar** (@prasoonmhwr) -- Fix UI overflow on agent tab dropdown
- **Richard Tang** (@RichardTang-Aden) -- Worker MCP generation and meta.json fixes
---
### Upgrading
```bash
git pull origin main
uv sync
```
The Playwright dependency is no longer required for GCU browser operations. Chrome must be installed on the host system.
---
## v0.7.0
**Release Date:** March 5, 2026
**Tag:** v0.7.0
Session management refactor release.
---
## v0.5.1
**Release Date:** February 18, 2026
**Tag:** v0.5.1
## The Hive Gets a Brain
### The Hive Gets a Brain
v0.5.1 is our most ambitious release yet. Hive agents can now **build other agents** -- the new Hive Coder meta-agent writes, tests, and fixes agent packages from natural language. The runtime grows multi-graph support so one session can orchestrate multiple agents simultaneously. The TUI gets a complete overhaul with an in-app agent picker, live streaming, and seamless escalation to the Coder. And we're now provider-agnostic: Claude Code subscriptions, OpenAI-compatible endpoints, and any LiteLLM-supported model work out of the box.
---
## Highlights
### Highlights
### Hive Coder -- The Agent That Builds Agents
#### Hive Coder -- The Agent That Builds Agents
A native meta-agent that lives inside the framework at `core/framework/agents/hive_coder/`. Give it a natural-language specification and it produces a complete agent package -- goal definition, node prompts, edge routing, MCP tool wiring, tests, and all boilerplate files.
@@ -30,7 +162,7 @@ The Coder ships with:
- **Coder Tools MCP server** -- file I/O, fuzzy-match editing, git snapshots, and sandboxed shell execution (`tools/coder_tools_server.py`)
- **Test generation** -- structural tests for forever-alive agents that don't hang on `runner.run()`
### Multi-Graph Agent Runtime
#### Multi-Graph Agent Runtime
`AgentRuntime` now supports loading, managing, and switching between multiple agent graphs within a single session. Six new lifecycle tools give agents (and the TUI) full control:
@@ -44,7 +176,7 @@ await runtime.add_graph("exports/deep_research_agent")
The Hive Coder uses multi-graph internally -- when you escalate from a worker agent, the Coder loads as a separate graph while the worker stays alive in the background.
### TUI Revamp
#### TUI Revamp
The Terminal UI gets a ground-up rebuild with five major additions:
@@ -54,7 +186,7 @@ The Terminal UI gets a ground-up rebuild with five major additions:
- **PDF attachments** -- `/attach` and `/detach` commands with native OS file dialog (macOS, Linux, Windows)
- **Multi-graph commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>` for managing agent graphs in-session
### Provider-Agnostic LLM Support
#### Provider-Agnostic LLM Support
Hive is no longer Anthropic-only. v0.5.1 adds first-class support for:
@@ -66,9 +198,9 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
## What's New
### What's New
### Architecture & Runtime
#### Architecture & Runtime
- **Hive Coder meta-agent** -- Natural-language agent builder with reference docs, guardian watchdog, and `hive code` CLI command. (@TimothyZhang7)
- **Multi-graph agent sessions** -- `add_graph`/`remove_graph` on AgentRuntime with 6 lifecycle tools (`load_agent`, `unload_agent`, `start_agent`, `restart_agent`, `list_agents`, `get_user_presence`). (@TimothyZhang7)
@@ -79,7 +211,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
- **Pre-start confirmation prompt** -- Interactive prompt before agent execution allowing credential updates or abort. (@RichardTang-Aden)
- **Event bus multi-graph support** -- `graph_id` on events, `filter_graph` on subscriptions, `ESCALATION_REQUESTED` event type, `exclude_own_graph` filter. (@TimothyZhang7)
### TUI Improvements
#### TUI Improvements
- **In-app agent picker** (Ctrl+A) -- Tabbed modal for browsing agents with metadata badges (nodes, tools, sessions, tags). (@TimothyZhang7)
- **Runtime-optional TUI startup** -- Launches without a pre-loaded agent, shows agent picker on startup. (@TimothyZhang7)
@@ -89,7 +221,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
- **Multi-graph TUI commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>`. (@TimothyZhang7)
- **Agent Guardian watchdog** -- Event-driven monitor that catches secondary agent failures and triggers automatic remediation, with `--no-guardian` CLI flag. (@TimothyZhang7)
### New Tool Integrations
#### New Tool Integrations
| Tool | Description | Contributor |
| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
@@ -99,7 +231,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
| **Google Docs** | Document creation, reading, and editing with OAuth credential support | @haliaeetusvocifer |
| **Gmail enhancements** | Expanded mail operations for inbox management | @bryanadenhq |
### Infrastructure
#### Infrastructure
- **Default node type → `event_loop`** -- `NodeSpec.node_type` defaults to `"event_loop"` instead of `"llm_tool_use"`. (@TimothyZhang7)
- **Default `max_node_visits` → 0 (unlimited)** -- Nodes default to unlimited visits, reducing friction for feedback loops and forever-alive agents. (@TimothyZhang7)
@@ -112,7 +244,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
## Bug Fixes
### Bug Fixes
- Flush WIP accumulator outputs on cancel/failure so edge conditions see correct values on resume
- Stall detection state preserved across resume (no more resets on checkpoint restore)
@@ -125,13 +257,13 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
- Fix email agent version conflicts (@RichardTang-Aden)
- Fix coder tool timeouts (120s for tests, 300s cap for commands)
## Documentation
### Documentation
- Clarify installation and prevent root pip install misuse (@paarths-collab)
---
## Agent Updates
### Agent Updates
- **Email Inbox Management** -- Consolidate `gmail_inbox_guardian` and `inbox_management` into a single unified agent with updated prompts and config. (@RichardTang-Aden, @bryanadenhq)
- **Job Hunter** -- Updated node prompts, config, and agent metadata; added PDF resume selection. (@bryanadenhq)
@@ -141,7 +273,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
## Breaking Changes
### Breaking Changes
- **Deprecated node types raise `RuntimeError`** -- `llm_tool_use`, `llm_generate`, `function`, `router`, `human_input` now fail instead of warning. Migrate to `event_loop`.
- **`NodeSpec.node_type` defaults to `"event_loop"`** (was `"llm_tool_use"`)
@@ -150,7 +282,7 @@ The quickstart script auto-detects Claude Code subscriptions and ZAI Code instal
---
## Community Contributors
### Community Contributors
A huge thank you to everyone who contributed to this release:
@@ -165,14 +297,14 @@ A huge thank you to everyone who contributed to this release:
---
## Upgrading
### Upgrading
```bash
git pull origin main
uv sync
```
### Migration Guide
#### Migration Guide
If your agents use deprecated node types, update them:
@@ -196,12 +328,3 @@ hive code
# Or from TUI -- press Ctrl+E to escalate
hive tui
```
---
## What's Next
- **Agent-to-agent communication** -- one agent's output triggers another agent's entry point
- **Cost visibility** -- detailed runtime log of LLM costs per node and per session
- **Persistent webhook subscriptions** -- survive agent restarts without re-registering
- **Remote agent deployment** -- run agents as long-lived services with HTTP APIs
+1032 -18
File diff suppressed because it is too large
+19 -12
@@ -1,24 +1,31 @@
.PHONY: lint format check test install-hooks help frontend-install frontend-dev frontend-build
.PHONY: lint format check test test-tools test-live test-all install-hooks help frontend-install frontend-dev frontend-build
# ── Ensure uv is findable in Git Bash on Windows ──────────────────────────────
# uv installs to ~/.local/bin on Windows/Linux/macOS. Git Bash may not include
# this in PATH by default, so we prepend it here.
export PATH := $(HOME)/.local/bin:$(PATH)
# ── Targets ───────────────────────────────────────────────────────────────────
help: ## Show this help
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \
awk 'BEGIN {FS = ":.*?## "}; {printf " \033[36m%-15s\033[0m %s\n", $$1, $$2}'
lint: ## Run ruff linter and formatter (with auto-fix)
cd core && ruff check --fix .
cd tools && ruff check --fix .
cd core && ruff format .
cd tools && ruff format .
cd core && uv run ruff check --fix .
cd tools && uv run ruff check --fix .
cd core && uv run ruff format .
cd tools && uv run ruff format .
format: ## Run ruff formatter
cd core && ruff format .
cd tools && ruff format .
cd core && uv run ruff format .
cd tools && uv run ruff format .
check: ## Run all checks without modifying files (CI-safe)
cd core && ruff check .
cd tools && ruff check .
cd core && ruff format --check .
cd tools && ruff format --check .
cd core && uv run ruff check .
cd tools && uv run ruff check .
cd core && uv run ruff format --check .
cd tools && uv run ruff format --check .
test: ## Run all tests (core + tools, excludes live)
cd core && uv run python -m pytest tests/ -v
@@ -46,4 +53,4 @@ frontend-dev: ## Start frontend dev server
cd core/frontend && npm run dev
frontend-build: ## Build frontend for production
cd core/frontend && npm run build
cd core/frontend && npm run build
+48 -36
@@ -23,11 +23,12 @@
</p>
<p align="center">
<img src="https://img.shields.io/badge/Agent_Harness-Runtime_Layer-ff6600?style=flat-square" alt="Agent Harness" />
<img src="https://img.shields.io/badge/AI_Agents-Self--Improving-brightgreen?style=flat-square" alt="AI Agents" />
<img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
<img src="https://img.shields.io/badge/Headless-Development-purple?style=flat-square" alt="Headless" />
<img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
<img src="https://img.shields.io/badge/Production--Ready-red?style=flat-square" alt="Production" />
<img src="https://img.shields.io/badge/Browser-Use-red?style=flat-square" alt="Browser Use" />
</p>
<p align="center">
<img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />
@@ -35,37 +36,42 @@
<img src="https://img.shields.io/badge/Google_Gemini-supported-4285F4?style=flat-square&logo=google" alt="Gemini" />
</p>
<p align="center"><em>The agent harness for production workloads — state management, failure recovery, observability, and human oversight so your agents actually run.</em></p>
## Overview
Build autonomous, reliable, self-improving AI agents without hardcoding workflows. Define your goal through conversation with hive coding agent(queen), and the framework generates a node graph with dynamically created connection code. When things break, the framework captures failure data, evolves the agent through the coding agent, and redeploys. Built-in human-in-the-loop nodes, credential management, and real-time monitoring give you control without sacrificing adaptability.
Hive is a runtime harness for AI agents in production. You describe your goal in natural language; a coding agent (the queen) generates the agent graph and connection code to achieve it. During execution, the harness manages state isolation, checkpoint-based crash recovery, cost enforcement, and real-time observability. When agents fail, the framework captures failure data, evolves the graph through the coding agent, and redeploys automatically. Built-in human-in-the-loop nodes, browser control, credential management, and parallel execution give you production reliability without sacrificing adaptability.
Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.
[![Hive Demo](https://img.youtube.com/vi/XDOG9fOaLjU/maxresdefault.jpg)](https://www.youtube.com/watch?v=XDOG9fOaLjU)
Visit [HoneyComb](http://honeycomb.open-hive.com/) to see what jobs are being automated by AI. It's a stock market for jobs, driven by our community's AI agent progress. You can long and short jobs (with no real money, just compute tokens) based on how much you think a job is going to be replaced by AI.
https://github.com/user-attachments/assets/bf10edc3-06ba-48b6-98ba-d069b15fb69d
## Who Is Hive For?
Hive is designed for developers and teams who want to build **production-grade AI agents** without manually wiring complex workflows.
Hive is the harness layer for teams moving AI agents from prototype to production. Models are getting better on their own — the bottleneck is the infrastructure around them: state management, failure recovery, cost control, and observability.
Hive is a good fit if you:
- Want AI agents that **execute real business processes**, not demos
- Need **fast or high volume agent execution** over open workflow
- Need a **runtime that handles state, recovery, and parallel execution** at scale
- Need **self-healing and adaptive agents** that improve over time
- Require **human-in-the-loop control**, observability, and cost limits
- Plan to run agents in **production environments**
- Plan to run agents in **production** where uptime, cost, and auditability matter
Hive may not be the best fit if you're only experimenting with simple agent chains or one-off scripts.
## When Should You Use Hive?
Use Hive when you need:
Use Hive when the bottleneck is no longer the model but the harness around it:
- Long-running, autonomous agents
- Strong guardrails, process, and controls
- Continuous improvement based on failures
- Multi-agent coordination
- A framework that evolves with your goals
- Long-running agents that need **state persistence and crash recovery**
- Production workloads requiring **cost enforcement, observability, and audit trails**
- Agents that **self-heal** through failure capture and graph evolution
- Multi-agent coordination with **session isolation and shared memory**
- A framework that **scales with model improvements** rather than fighting them
## Quick Links
@@ -73,7 +79,7 @@ Use Hive when you need:
- **[Self-Hosting Guide](https://docs.adenhq.com/getting-started/quickstart)** - Deploy Hive on your infrastructure
- **[Changelog](https://github.com/aden-hive/hive/releases)** - Latest updates and releases
- **[Roadmap](docs/roadmap.md)** - Upcoming features and plans
- **[Report Issues](https://github.com/adenhq/hive/issues)** - Bug reports and feature requests
- **[Report Issues](https://github.com/aden-hive/hive/issues)** - Bug reports and feature requests
- **[Contributing](CONTRIBUTING.md)** - How to contribute and submit PRs
## Quick Start
@@ -84,7 +90,7 @@ Use Hive when you need:
- An LLM provider that powers the agents
- **ripgrep (optional, recommended on Windows):** The `search_files` tool uses ripgrep for faster file search. If not installed, a Python fallback is used. On Windows: `winget install BurntSushi.ripgrep` or `scoop install ripgrep`
> **Note for Windows Users:** It is strongly recommended to use **WSL (Windows Subsystem for Linux)** or **Git Bash** to run this framework. Some core automation scripts may not execute correctly in standard Command Prompt or PowerShell.
> **Windows Users:** Native Windows is supported via `quickstart.ps1` and `hive.ps1`. Run these in PowerShell 5.1+. WSL is also an option but not required.
### Installation
@@ -98,9 +104,11 @@ Use Hive when you need:
git clone https://github.com/aden-hive/hive.git
cd hive
# Run quickstart setup
# Run quickstart setup (macOS/Linux)
./quickstart.sh
# Windows (PowerShell)
.\quickstart.ps1
```
This sets up:
@@ -108,54 +116,51 @@ This sets up:
- **framework** - Core agent runtime and graph executor (in `core/.venv`)
- **aden_tools** - MCP tools for agent capabilities (in `tools/.venv`)
- **credential store** - Encrypted API key storage (`~/.hive/credentials`)
- **LLM provider** - Interactive default model configuration
- **LLM provider** - Interactive default model configuration, including Hive LLM and OpenRouter
- All required Python dependencies with `uv`
- At last, it will initiate the open hive interface in your browser
- Finally, it will open the Hive interface in your browser
> **Tip:** To reopen the dashboard later, run `hive open` from the project directory.
<img width="2500" height="1214" alt="home-screen" src="https://github.com/user-attachments/assets/134d897f-5e75-4874-b00b-e0505f6b45c4" />
### Build Your First Agent
Type the agent you want to build in the home input box
Type the agent you want to build in the home input box. The queen is going to ask you questions and work out a solution with you.
<img width="2500" height="1214" alt="Image" src="https://github.com/user-attachments/assets/1ce19141-a78b-46f5-8d64-dbf987e048f4" />
### Use Template Agents
Click "Try a sample agent" and check the templates. You can run a templates directly or choose to build your version on top of the existing template.
Click "Try a sample agent" and check the templates. You can run a template directly or choose to build your version on top of the existing template.
### Run Agents
Now you can run an agent by selectiing the agent (either an existing agent or example agent). You can click the Run button on the top left, or talk to the queen agent and it can run the agent for you.
Now you can run an agent by selecting the agent (either an existing agent or example agent). You can click the Run button on the top left, or talk to the queen agent and it can run the agent for you.
<img width="2500" height="1214" alt="Image" src="https://github.com/user-attachments/assets/71c38206-2ad5-49aa-bde8-6698d0bc55f5" />
<img width="2549" height="1174" alt="Screenshot 2026-03-12 at 9 27 36PM" src="https://github.com/user-attachments/assets/7c7d30fa-9ceb-4c23-95af-b1caa405547d" />
## Features
- **Browser-Use** - Control the browser on your computer to achieve hard tasks
- **Parallel Execution** - Execute the generated graph in parallel. This way you can have multiple agent compelteing the jobs for you
- **Parallel Execution** - Execute the generated graph in parallel. This way you can have multiple agents completing the jobs for you
- **[Goal-Driven Generation](docs/key_concepts/goals_outcome.md)** - Define objectives in natural language; the coding agent generates the agent graph and connection code to achieve them
- **[Adaptiveness](docs/key_concepts/evolution.md)** - Framework captures failures, calibrates according to the objectives, and evolves the agent graph
- **[Dynamic Node Connections](docs/key_concepts/graph.md)** - No predefined edges; connection code is generated by any capable LLM based on your goals
- **SDK-Wrapped Nodes** - Every node gets shared memory, local RLM memory, monitoring, tools, and LLM access out of the box
- **[Human-in-the-Loop](docs/key_concepts/graph.md#human-in-the-loop)** - Intervention nodes that pause execution for human input with configurable timeouts and escalation
- **Real-time Observability** - WebSocket streaming for live monitoring of agent execution, decisions, and node-to-node communication
- **Production-Ready** - Self-hostable, built for scale and reliability
## Integration
<a href="https://github.com/aden-hive/hive/tree/main/tools/src/aden_tools/tools"><img width="100%" alt="Integration" src="https://github.com/user-attachments/assets/a1573f93-cf02-4bb8-b3d5-b305b05b1e51" /></a>
Hive is built to be model-agnostic and system-agnostic.
- **LLM flexibility** - Hive Framework is designed to support various types of LLMs, including hosted and local models through LiteLLM-compatible providers.
- **LLM flexibility** - Hive Framework supports Anthropic, OpenAI, OpenRouter, Hive LLM, and other hosted or local models through LiteLLM-compatible providers.
- **Business system connectivity** - Hive Framework is designed to connect to all kinds of business systems as tools, such as CRM, support, messaging, data, file, and internal APIs via MCP.
## Why Aden
## Why Hive
Hive focuses on generating agents that run real business processes rather than generic agents. Instead of requiring you to manually design workflows, define agent interactions, and handle failures reactively, Hive flips the paradigm: **you describe outcomes, and the system builds itself**—delivering an outcome-driven, adaptive experience with an easy-to-use set of tools and integrations.
As models improve, the upper bound of what agents can do rises — but their reliability and production value are determined by the harness. Hive focuses on generating agents that run real business processes rather than generic agents. Instead of requiring you to manually design workflows, define agent interactions, and handle failures reactively, Hive flips the paradigm: **you describe outcomes, and the system builds itself**—delivering an outcome-driven, adaptive experience with an easy-to-use set of tools and integrations.
```mermaid
flowchart LR
@@ -191,8 +196,9 @@ flowchart LR
### The Hive Advantage
| Traditional Frameworks | Hive |
| Typical Agent Frameworks | Hive |
| -------------------------- | -------------------------------------- |
| Focus on model orchestration | **Production harness**: state, recovery, observability |
| Hardcode agent workflows | Describe goals in natural language |
| Manual graph definition | Auto-generated agent graphs |
| Reactive error handling | Outcome-evaluation and adaptiveness |
@@ -378,7 +384,7 @@ This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENS
**Q: What LLM providers does Hive support?**
Hive supports 100+ LLM providers through LiteLLM integration, including OpenAI (GPT-4, GPT-4o), Anthropic (Claude models), Google Gemini, DeepSeek, Mistral, Groq, and many more. Simply set the appropriate API key environment variable and specify the model name. We recommend using Claude, GLM and Gemini as they have the best performance.
Hive supports 100+ LLM providers through LiteLLM integration, including OpenAI (GPT-4, GPT-4o), Anthropic (Claude models), Google Gemini, DeepSeek, Mistral, Groq, OpenRouter, and Hive LLM. Simply set the appropriate API key environment variable and specify the model name. See [docs/configuration.md](docs/configuration.md) for provider-specific configuration examples.
**Q: Can I use Hive with local AI models like Ollama?**
@@ -386,16 +392,12 @@ Yes! Hive supports local models through LiteLLM. Simply use the model name forma
**Q: What makes Hive different from other agent frameworks?**
Hive generates your entire agent system from natural language goals using a coding agent—you don't hardcode workflows or manually define graphs. When agents fail, the framework automatically captures failure data, [evolves the agent graph](docs/key_concepts/evolution.md), and redeploys. This self-improving loop is unique to Aden.
Hive is an agent harness, not just an orchestration framework. It provides the production runtime layer — session isolation, checkpoint-based crash recovery, cost enforcement, real-time observability, and human-in-the-loop controls — that makes agents reliable enough to run real workloads. On top of that, Hive generates your entire agent system from natural language goals and automatically [evolves the graph](docs/key_concepts/evolution.md) when agents fail. The combination of a robust harness with self-improving generation is what sets Hive apart.
**Q: Is Hive open-source?**
Yes, Hive is fully open-source under the Apache License 2.0. We actively encourage community contributions and collaboration.
**Q: Can Hive handle complex, production-scale use cases?**
Yes. Hive is explicitly designed for production environments with features like automatic failure recovery, real-time observability, cost controls, and horizontal scaling support. The framework handles both simple automations and complex multi-agent workflows.
**Q: Does Hive support human-in-the-loop workflows?**
Yes, Hive fully supports [human-in-the-loop](docs/key_concepts/graph.md#human-in-the-loop) workflows through intervention nodes that pause execution for human input. These include configurable timeouts and escalation policies, allowing seamless collaboration between human experts and AI agents.
@@ -420,6 +422,16 @@ Visit [docs.adenhq.com](https://docs.adenhq.com/) for complete guides, API refer
Contributions are welcome! Fork the repository, create your feature branch, implement your changes, and submit a pull request. See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
## Star History
<a href="https://star-history.com/#aden-hive/hive&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=aden-hive/hive&type=Date" />
</picture>
</a>
---
<p align="center">
+2 -2
@@ -39,8 +39,8 @@ We consider security research conducted in accordance with this policy to be:
## Security Best Practices for Users
1. **Keep Updated**: Always run the latest version
2. **Secure Configuration**: Review `config.yaml` settings, especially in production
3. **Environment Variables**: Never commit `.env` files or `config.yaml` with secrets
2. **Secure Configuration**: Review your `~/.hive/configuration.json`, `.mcp.json`, and environment variable settings, especially in production
3. **Environment Variables**: Never commit `.env` files or any configuration files that contain secrets
4. **Network Security**: Use HTTPS in production, configure firewalls appropriately
5. **Database Security**: Use strong passwords, limit network access
-31
@@ -1,31 +0,0 @@
perf: reduce subprocess spawning in quickstart scripts (#4427)
## Problem
Windows process creation (CreateProcess) is 10-100x slower than Linux fork/exec.
The quickstart scripts were spawning 4+ separate `uv run python -c "import X"`
processes to verify imports, adding ~600ms overhead on Windows.
## Solution
Consolidated all import checks into a single batch script that checks multiple
modules in one subprocess call, reducing spawn overhead by ~75%.
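A minimal sketch of the batching idea (illustrative only; the actual `scripts/check_requirements.py` may differ in interface and module list):

```python
#!/usr/bin/env python3
"""Check many imports in one interpreter instead of one subprocess per module."""
import importlib
import sys


def check_imports(modules: list[str]) -> int:
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError as exc:
            missing.append(f"{name}: {exc}")
    if missing:
        print("Missing requirements:\n  " + "\n  ".join(missing))
        return 1
    print(f"All {len(modules)} imports OK")
    return 0


if __name__ == "__main__":
    # quickstart calls this once, e.g. `python check_requirements.py yaml httpx pydantic`
    sys.exit(check_imports(sys.argv[1:]))
```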
## Changes
- **New**: `scripts/check_requirements.py` - Batched import checker
- **New**: `scripts/test_check_requirements.py` - Test suite
- **New**: `scripts/benchmark_quickstart.ps1` - Performance benchmark tool
- **Modified**: `quickstart.ps1` - Updated import verification (2 sections)
- **Modified**: `quickstart.sh` - Updated import verification
## Performance Impact
**Benchmark results on Windows:**
- Before: ~19.8 seconds for import checks
- After: ~4.9 seconds for import checks
- **Improvement: 14.9 seconds saved (75.2% faster)**
## Testing
- ✅ All functional tests pass (`scripts/test_check_requirements.py`)
- ✅ Quickstart scripts work correctly on Windows
- ✅ Error handling verified (invalid imports reported correctly)
- ✅ Performance benchmark confirms 75%+ improvement
Fixes #4427
-27
@@ -1,27 +0,0 @@
# Identity mapping: GitHub username -> Discord ID
#
# This file links GitHub accounts to Discord accounts for the
# Integration Bounty Program. When a bounty PR is merged, the
# GitHub Action uses this file to ping the contributor on Discord.
#
# HOW TO ADD YOURSELF:
# Open a "Link Discord Account" issue:
# https://github.com/aden-hive/hive/issues/new?template=link-discord.yml
# A GitHub Action will automatically add your entry here.
#
# To find your Discord ID:
# 1. Open Discord Settings > Advanced > Enable Developer Mode
# 2. Right-click your name > Copy User ID
#
# Format:
# - github: your-github-username
# discord: "your-discord-id" # quotes required (it's a number)
# name: Your Display Name # optional
contributors:
# - github: example-user
# discord: "123456789012345678"
# name: Example User
- github: TimothyZhang7
discord: "408460790061072384"
name: Timothy@Aden
+88 -3
@@ -6,7 +6,7 @@ This guide explains how to integrate Model Context Protocol (MCP) servers with t
The framework provides built-in support for MCP servers, allowing you to:
- **Register MCP servers** via STDIO or HTTP transport
- **Register MCP servers** via STDIO, HTTP, Unix socket, or SSE transport
- **Auto-discover tools** from registered servers
- **Use MCP tools** seamlessly in your agents
- **Manage multiple MCP servers** simultaneously
@@ -104,6 +104,48 @@ runner.register_mcp_server(
- `url`: Base URL of the MCP server
- `headers`: HTTP headers to include (optional)
### Unix Socket Transport
Best for same-host inter-process communication with lower overhead than TCP:
```python
runner.register_mcp_server(
name="local-ipc-tools",
transport="unix",
url="http://localhost",
socket_path="/tmp/mcp_server.sock",
headers={
"Authorization": "Bearer token"
}
)
```
**Configuration:**
- `url`: Base URL for HTTP requests over the socket (required, e.g., `"http://localhost"`)
- `socket_path`: Absolute path to the Unix socket file (required, e.g., `"/tmp/mcp_server.sock"`)
- `headers`: HTTP headers to include (optional)
### SSE Transport
Best for real-time, event-driven connections using the MCP SDK's SSE client:
```python
runner.register_mcp_server(
name="streaming-tools",
transport="sse",
url="http://localhost:8000/sse",
headers={
"Authorization": "Bearer token"
}
)
```
**Configuration:**
- `url`: SSE endpoint URL (required, e.g., `"http://localhost:8000/sse"`)
- `headers`: HTTP headers for the SSE connection (optional)
## Using MCP Tools in Agents
Once registered, MCP tools are available just like any other tool:
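For instance, a hypothetical sketch (the STDIO options `command`/`args` and the server name below are assumptions for illustration, not taken from this guide):

```python
async with AgentRunner.load("exports/my-agent") as runner:
    # Register an MCP server; its tools are auto-discovered on registration.
    runner.register_mcp_server(
        name="filesystem-tools",
        transport="stdio",   # assumed STDIO options; see the transport sections above
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/data"],
    )
    # The discovered MCP tools now sit alongside locally registered tools, and the
    # agent can call them during a run like any other tool.
```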
@@ -258,7 +300,32 @@ runner.register_mcp_server(
)
```
### 3. Handle Cleanup
### 3. Use Unix Socket for Same-Host IPC
When both the agent and MCP server run on the same machine, Unix sockets avoid TCP overhead:
```python
runner.register_mcp_server(
name="fast-local-tools",
transport="unix",
url="http://localhost",
socket_path="/tmp/mcp_server.sock"
)
```
### 4. Use SSE for Streaming and Real-Time Tools
SSE transport maintains a persistent connection, ideal for event-driven servers:
```python
runner.register_mcp_server(
name="realtime-tools",
transport="sse",
url="http://realtime-server:8000/sse"
)
```
### 5. Handle Cleanup
Always clean up MCP connections when done:
@@ -280,7 +347,7 @@ async with AgentRunner.load("exports/my-agent") as runner:
# Automatic cleanup
```
### 4. Tool Name Conflicts
### 6. Tool Name Conflicts
If multiple MCP servers provide tools with the same name, the last registered server wins. To avoid conflicts:
@@ -315,6 +382,24 @@ If HTTP transport fails:
2. Check firewall settings
3. Verify the URL and port are correct
### Unix Socket Not Connecting
If Unix socket transport fails:
1. Verify the socket file exists: `ls -la /tmp/mcp_server.sock`
2. Check file permissions on the socket
3. Ensure no other process has locked the socket
4. Verify the `url` field is set (e.g., `"http://localhost"`)
### SSE Connection Issues
If SSE transport fails:
1. Verify the server supports SSE at the given URL
2. Check that the `mcp` Python package is installed (`pip install mcp`)
3. Ensure the SSE endpoint is accessible: `curl http://localhost:8000/sse`
4. Check for firewall or proxy issues blocking long-lived connections
## Example: Full Agent with MCP Tools
Here's a complete example of an agent that uses MCP tools:
+1 -1
@@ -1,6 +1,6 @@
# MCP Server Guide - Agent Building Tools
> **Note:** The standalone `agent-builder` MCP server (`framework.mcp.agent_builder_server`) has been replaced. Agent building is now done via the `coder-tools` server's `initialize_agent_package` tool, with underlying logic in `framework.builder.package_generator`.
> **Note:** The standalone `agent-builder` MCP server (`framework.mcp.agent_builder_server`) has been replaced. Agent building is now done via the `coder-tools` server's `initialize_and_build_agent` tool, with underlying logic in `tools/coder_tools_server.py`.
This guide covers the MCP tools available for building goal-driven agents.
+1 -1
@@ -19,7 +19,7 @@ uv pip install -e .
## Agent Building
Agent scaffolding is handled by the `coder-tools` MCP server (in `tools/coder_tools_server.py`), which provides the `initialize_agent_package` tool and related utilities. The underlying package generation logic lives in `framework.builder.package_generator`.
Agent scaffolding is handled by the `coder-tools` MCP server (in `tools/coder_tools_server.py`), which provides the `initialize_and_build_agent` tool and related utilities. The package generation logic lives directly in `tools/coder_tools_server.py`.
See the [Getting Started Guide](../docs/getting-started.md) for building agents.
+583
@@ -0,0 +1,583 @@
#!/usr/bin/env python3
"""Antigravity authentication CLI.
Implements OAuth2 flow for Google's Antigravity Code Assist gateway.
Credentials are stored in ~/.hive/antigravity-accounts.json.
Usage:
python -m antigravity_auth auth account add
python -m antigravity_auth auth account list
python -m antigravity_auth auth account remove <email>
"""
from __future__ import annotations
import argparse
import json
import logging
import os
import secrets
import socket
import sys
import time
import urllib.parse
import urllib.request
import webbrowser
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path
from typing import Any
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger(__name__)
# OAuth endpoints
_OAUTH_AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"
_OAUTH_TOKEN_URL = "https://oauth2.googleapis.com/token"
# Scopes for Antigravity/Cloud Code Assist
_OAUTH_SCOPES = [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/userinfo.email",
"https://www.googleapis.com/auth/userinfo.profile",
]
# Credentials file path in ~/.hive/
_ACCOUNTS_FILE = Path.home() / ".hive" / "antigravity-accounts.json"
# Default project ID
_DEFAULT_PROJECT_ID = "rising-fact-p41fc"
_DEFAULT_REDIRECT_PORT = 51121
# OAuth credentials fetched from the opencode-antigravity-auth project.
# This project reverse-engineered and published the public OAuth credentials
# for Google's Antigravity/Cloud Code Assist API.
# Source: https://github.com/NoeFabris/opencode-antigravity-auth
_CREDENTIALS_URL = (
"https://raw.githubusercontent.com/NoeFabris/opencode-antigravity-auth/dev/src/constants.ts"
)
# Cached credentials fetched from public source
_cached_client_id: str | None = None
_cached_client_secret: str | None = None
def _fetch_credentials_from_public_source() -> tuple[str | None, str | None]:
"""Fetch OAuth client ID and secret from the public npm package source on GitHub."""
global _cached_client_id, _cached_client_secret
if _cached_client_id and _cached_client_secret:
return _cached_client_id, _cached_client_secret
try:
req = urllib.request.Request(
_CREDENTIALS_URL, headers={"User-Agent": "Hive-Antigravity-Auth/1.0"}
)
with urllib.request.urlopen(req, timeout=10) as resp:
content = resp.read().decode("utf-8")
import re
id_match = re.search(r'ANTIGRAVITY_CLIENT_ID\s*=\s*"([^"]+)"', content)
secret_match = re.search(r'ANTIGRAVITY_CLIENT_SECRET\s*=\s*"([^"]+)"', content)
if id_match:
_cached_client_id = id_match.group(1)
if secret_match:
_cached_client_secret = secret_match.group(1)
return _cached_client_id, _cached_client_secret
except Exception as e:
logger.debug(f"Failed to fetch credentials from public source: {e}")
return None, None
def get_client_id() -> str:
"""Get OAuth client ID from env, config, or public source."""
env_id = os.environ.get("ANTIGRAVITY_CLIENT_ID")
if env_id:
return env_id
# Try hive config
hive_cfg = Path.home() / ".hive" / "configuration.json"
if hive_cfg.exists():
try:
with open(hive_cfg) as f:
cfg = json.load(f)
cfg_id = cfg.get("llm", {}).get("antigravity_client_id")
if cfg_id:
return cfg_id
except Exception:
pass
# Fetch from public source
client_id, _ = _fetch_credentials_from_public_source()
if client_id:
return client_id
raise RuntimeError("Could not obtain Antigravity OAuth client ID")
def get_client_secret() -> str | None:
"""Get OAuth client secret from env, config, or public source."""
secret = os.environ.get("ANTIGRAVITY_CLIENT_SECRET")
if secret:
return secret
# Try to read from hive config
hive_cfg = Path.home() / ".hive" / "configuration.json"
if hive_cfg.exists():
try:
with open(hive_cfg) as f:
cfg = json.load(f)
secret = cfg.get("llm", {}).get("antigravity_client_secret")
if secret:
return secret
except Exception:
pass
# Fetch from public source (npm package on GitHub)
_, secret = _fetch_credentials_from_public_source()
return secret
def find_free_port() -> int:
"""Find an available local port."""
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind(("", 0))
s.listen(1)
return s.getsockname()[1]
class OAuthCallbackHandler(BaseHTTPRequestHandler):
"""Handle OAuth callback from browser."""
auth_code: str | None = None
state: str | None = None
error: str | None = None
def log_message(self, format: str, *args: Any) -> None:
pass # Suppress default logging
def do_GET(self) -> None:
parsed = urllib.parse.urlparse(self.path)
if parsed.path == "/oauth-callback":
query = urllib.parse.parse_qs(parsed.query)
if "error" in query:
OAuthCallbackHandler.error = query["error"][0]  # class attr so wait_for_callback can see it
self._send_response("Authentication failed. You can close this window.")
return
if "code" in query and "state" in query:
OAuthCallbackHandler.auth_code = query["code"][0]
OAuthCallbackHandler.state = query["state"][0]
self._send_response(
"Authentication successful! You can close this window "
"and return to the terminal."
)
return
self._send_response("Waiting for authentication...")
def _send_response(self, message: str) -> None:
self.send_response(200)
self.send_header("Content-Type", "text/html")
self.end_headers()
html = f"""<!DOCTYPE html>
<html>
<head><title>Antigravity Auth</title></head>
<body style="font-family: system-ui; display: flex; align-items: center;
justify-content: center; height: 100vh; margin: 0; background: #1a1a2e;
color: #eee;">
<div style="text-align: center;">
<h2>{message}</h2>
</div>
</body>
</html>"""
self.wfile.write(html.encode())
def wait_for_callback(port: int, timeout: int = 300) -> tuple[str | None, str | None, str | None]:
"""Start local server and wait for OAuth callback."""
server = HTTPServer(("localhost", port), OAuthCallbackHandler)
server.timeout = 1
start = time.time()
while time.time() - start < timeout:
if OAuthCallbackHandler.auth_code:
return (
OAuthCallbackHandler.auth_code,
OAuthCallbackHandler.state,
OAuthCallbackHandler.error,
)
server.handle_request()
return None, None, "timeout"
def exchange_code_for_tokens(
code: str, redirect_uri: str, client_id: str, client_secret: str | None
) -> dict[str, Any] | None:
"""Exchange authorization code for tokens."""
data = {
"code": code,
"client_id": client_id,
"redirect_uri": redirect_uri,
"grant_type": "authorization_code",
}
if client_secret:
data["client_secret"] = client_secret
body = urllib.parse.urlencode(data).encode()
req = urllib.request.Request(
_OAUTH_TOKEN_URL,
data=body,
headers={"Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=30) as resp:
return json.loads(resp.read())
except Exception as e:
logger.error(f"Token exchange failed: {e}")
return None
def get_user_email(access_token: str) -> str | None:
"""Get user email from Google API."""
req = urllib.request.Request(
"https://www.googleapis.com/oauth2/v2/userinfo",
headers={"Authorization": f"Bearer {access_token}"},
)
try:
with urllib.request.urlopen(req, timeout=10) as resp:
data = json.loads(resp.read())
return data.get("email")
except Exception:
return None
def load_accounts() -> dict[str, Any]:
"""Load existing accounts from file."""
if not _ACCOUNTS_FILE.exists():
return {"schemaVersion": 4, "accounts": []}
try:
with open(_ACCOUNTS_FILE) as f:
return json.load(f)
except Exception:
return {"schemaVersion": 4, "accounts": []}
def save_accounts(data: dict[str, Any]) -> None:
"""Save accounts to file."""
_ACCOUNTS_FILE.parent.mkdir(parents=True, exist_ok=True)
with open(_ACCOUNTS_FILE, "w") as f:
json.dump(data, f, indent=2)
logger.info(f"Saved credentials to {_ACCOUNTS_FILE}")
def validate_credentials(access_token: str, project_id: str = _DEFAULT_PROJECT_ID) -> bool:
"""Test if credentials work by making a simple API call to Antigravity.
Returns True if credentials are valid, False otherwise.
"""
endpoint = "https://daily-cloudcode-pa.sandbox.googleapis.com"
body = {
"project": project_id,
"model": "gemini-3-flash",
"request": {
"contents": [{"role": "user", "parts": [{"text": "hi"}]}],
"generationConfig": {"maxOutputTokens": 10},
},
"requestType": "agent",
"userAgent": "antigravity",
"requestId": "validation-test",
}
headers = {
"Authorization": f"Bearer {access_token}",
"Content-Type": "application/json",
"User-Agent": (
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
"AppleWebKit/537.36 (KHTML, like Gecko) Antigravity/1.18.3"
),
"X-Goog-Api-Client": "google-cloud-sdk vscode_cloudshelleditor/0.1",
}
try:
req = urllib.request.Request(
f"{endpoint}/v1internal:generateContent",
data=json.dumps(body).encode("utf-8"),
headers=headers,
method="POST",
)
with urllib.request.urlopen(req, timeout=30) as resp:
json.loads(resp.read())
return True
except Exception:
return False
def refresh_access_token(
refresh_token: str, client_id: str, client_secret: str | None
) -> dict | None:
"""Refresh the access token using the refresh token."""
data = {
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"client_id": client_id,
}
if client_secret:
data["client_secret"] = client_secret
body = urllib.parse.urlencode(data).encode()
req = urllib.request.Request(
_OAUTH_TOKEN_URL,
data=body,
headers={"Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=30) as resp:
return json.loads(resp.read())
except Exception as e:
logger.debug(f"Token refresh failed: {e}")
return None
def cmd_account_add(args: argparse.Namespace) -> int:
"""Add a new Antigravity account via OAuth2.
First checks if valid credentials already exist. If so, validates them
and skips OAuth if they work. Otherwise, proceeds with OAuth flow.
"""
client_id = get_client_id()
client_secret = get_client_secret()
# Check if credentials already exist
accounts_data = load_accounts()
accounts = accounts_data.get("accounts", [])
if accounts:
account = next((a for a in accounts if a.get("enabled", True) is not False), accounts[0])
access_token = account.get("access")
refresh_token_str = account.get("refresh", "")
refresh_token = refresh_token_str.split("|")[0] if refresh_token_str else None
project_id = (
refresh_token_str.split("|")[1] if "|" in refresh_token_str else _DEFAULT_PROJECT_ID
)
email = account.get("email", "unknown")
expires_ms = account.get("expires", 0)
expires_at = expires_ms / 1000.0 if expires_ms else 0.0
# Check if token is expired or near expiry
if access_token and expires_at and time.time() < expires_at - 60:
# Token still valid, test it
logger.info(f"Found existing credentials for: {email}")
logger.info("Validating existing credentials...")
if validate_credentials(access_token, project_id):
logger.info("✓ Credentials valid! Skipping OAuth.")
return 0
else:
logger.info("Credentials failed validation, refreshing...")
elif refresh_token:
logger.info(f"Found expired credentials for: {email}")
logger.info("Attempting token refresh...")
tokens = refresh_access_token(refresh_token, client_id, client_secret)
if tokens:
new_access = tokens.get("access_token")
expires_in = tokens.get("expires_in", 3600)
if new_access:
# Update the account
account["access"] = new_access
account["expires"] = int((time.time() + expires_in) * 1000)
accounts_data["last_refresh"] = time.strftime(
"%Y-%m-%dT%H:%M:%SZ", time.gmtime()
)
save_accounts(accounts_data)
# Validate the refreshed token
logger.info("Validating refreshed credentials...")
if validate_credentials(new_access, project_id):
logger.info("✓ Credentials refreshed and validated!")
return 0
else:
logger.info("Refreshed token failed validation, proceeding with OAuth...")
else:
logger.info("Token refresh failed, proceeding with OAuth...")
# No valid credentials, proceed with OAuth
if not client_secret:
logger.warning(
"No client secret configured. Token refresh may fail.\n"
"Set ANTIGRAVITY_CLIENT_SECRET env var or add "
"'antigravity_client_secret' to ~/.hive/configuration.json"
)
# Use fixed port and path matching Google's expected OAuth redirect URI
port = _DEFAULT_REDIRECT_PORT
redirect_uri = f"http://localhost:{port}/oauth-callback"
# Generate state for CSRF protection
state = secrets.token_urlsafe(16)
# Build authorization URL
params = {
"client_id": client_id,
"redirect_uri": redirect_uri,
"response_type": "code",
"scope": " ".join(_OAUTH_SCOPES),
"state": state,
"access_type": "offline",
"prompt": "consent",
}
auth_url = f"{_OAUTH_AUTH_URL}?{urllib.parse.urlencode(params)}"
logger.info("Opening browser for authentication...")
logger.info(f"If the browser doesn't open, visit: {auth_url}\n")
# Open browser
webbrowser.open(auth_url)
# Wait for callback
logger.info(f"Listening for callback on port {port}...")
code, received_state, error = wait_for_callback(port)
if error:
logger.error(f"Authentication failed: {error}")
return 1
if not code:
logger.error("No authorization code received")
return 1
if received_state != state:
logger.error("State mismatch - possible CSRF attack")
return 1
# Exchange code for tokens
logger.info("Exchanging authorization code for tokens...")
tokens = exchange_code_for_tokens(code, redirect_uri, client_id, client_secret)
if not tokens:
return 1
access_token = tokens.get("access_token")
refresh_token = tokens.get("refresh_token")
expires_in = tokens.get("expires_in", 3600)
if not access_token:
logger.error("No access token in response")
return 1
# Get user email
email = get_user_email(access_token)
if email:
logger.info(f"Authenticated as: {email}")
# Load existing accounts and add/update
accounts_data = load_accounts()
accounts = accounts_data.get("accounts", [])
# Build new account entry (V4 schema)
expires_ms = int((time.time() + expires_in) * 1000)
refresh_entry = f"{refresh_token}|{_DEFAULT_PROJECT_ID}"
new_account = {
"access": access_token,
"refresh": refresh_entry,
"expires": expires_ms,
"email": email,
"enabled": True,
}
# Update existing account or add new one
existing_idx = next((i for i, a in enumerate(accounts) if a.get("email") == email), None)
if existing_idx is not None:
accounts[existing_idx] = new_account
logger.info(f"Updated existing account: {email}")
else:
accounts.append(new_account)
logger.info(f"Added new account: {email}")
accounts_data["accounts"] = accounts
accounts_data["schemaVersion"] = 4
accounts_data["last_refresh"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
save_accounts(accounts_data)
logger.info("\n✓ Authentication complete!")
return 0
def cmd_account_list(args: argparse.Namespace) -> int:
"""List all stored accounts."""
data = load_accounts()
accounts = data.get("accounts", [])
if not accounts:
logger.info("No accounts configured.")
logger.info("Run 'antigravity auth account add' to add one.")
return 0
logger.info("Configured accounts:\n")
for i, account in enumerate(accounts, 1):
email = account.get("email", "unknown")
enabled = "enabled" if account.get("enabled", True) else "disabled"
logger.info(f" {i}. {email} ({enabled})")
return 0
def cmd_account_remove(args: argparse.Namespace) -> int:
"""Remove an account by email."""
email = args.email
data = load_accounts()
accounts = data.get("accounts", [])
original_len = len(accounts)
accounts = [a for a in accounts if a.get("email") != email]
if len(accounts) == original_len:
logger.error(f"No account found with email: {email}")
return 1
data["accounts"] = accounts
save_accounts(data)
logger.info(f"Removed account: {email}")
return 0
def main() -> int:
parser = argparse.ArgumentParser(
description="Antigravity authentication CLI",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
subparsers = parser.add_subparsers(dest="command", help="Commands")
# auth account add
auth_parser = subparsers.add_parser("auth", help="Authentication commands")
auth_subparsers = auth_parser.add_subparsers(dest="auth_command")
account_parser = auth_subparsers.add_parser("account", help="Account management")
account_subparsers = account_parser.add_subparsers(dest="account_command")
add_parser = account_subparsers.add_parser("add", help="Add a new account via OAuth2")
add_parser.set_defaults(func=cmd_account_add)
list_parser = account_subparsers.add_parser("list", help="List configured accounts")
list_parser.set_defaults(func=cmd_account_list)
remove_parser = account_subparsers.add_parser("remove", help="Remove an account")
remove_parser.add_argument("email", help="Email of account to remove")
remove_parser.set_defaults(func=cmd_account_remove)
args = parser.parse_args()
if hasattr(args, "func"):
return args.func(args)
parser.print_help()
return 0
if __name__ == "__main__":
sys.exit(main())
+81 -27
@@ -17,6 +17,7 @@ import http.server
import json
import os
import platform
import queue
import secrets
import subprocess
import sys
@@ -27,6 +28,7 @@ import urllib.parse
import urllib.request
from datetime import UTC, datetime
from pathlib import Path
from typing import TextIO
# OAuth constants (from the Codex CLI binary)
CLIENT_ID = "app_EMoamEEZ73f0CkXaXp7hrann"
@@ -165,11 +167,11 @@ def open_browser(url: str) -> bool:
if system == "Darwin":
subprocess.Popen(["open", url], stdout=devnull, stderr=devnull)
elif system == "Windows":
subprocess.Popen(["cmd", "/c", "start", url], stdout=devnull, stderr=devnull)
os.startfile(url) # type: ignore[attr-defined]
else:
subprocess.Popen(["xdg-open", url], stdout=devnull, stderr=devnull)
return True
except OSError:
except (AttributeError, OSError):
return False
@@ -266,6 +268,71 @@ def parse_manual_input(value: str, expected_state: str) -> str | None:
return None
def _read_manual_input_lines(
manual_inputs: queue.Queue[str],
stop_event: threading.Event,
stdin: TextIO | None = None,
) -> None:
stream = sys.stdin if stdin is None else stdin
while not stop_event.is_set():
try:
manual = stream.readline()
except (EOFError, OSError):
return
if not manual:
return
if manual.strip():
manual_inputs.put(manual)
def wait_for_code_from_callback_or_stdin(
expected_state: str,
callback_result: list[str | None],
callback_done: threading.Event,
timeout_secs: float = 120,
poll_interval: float = 0.1,
stdin: TextIO | None = None,
) -> str | None:
manual_inputs: queue.Queue[str] = queue.Queue()
stop_event = threading.Event()
# Read stdin on a daemon thread so manual paste works on platforms where
# select() cannot poll console handles, including Windows terminals.
threading.Thread(
target=_read_manual_input_lines,
args=(manual_inputs, stop_event, stdin),
daemon=True,
).start()
deadline = time.time() + timeout_secs
try:
while time.time() < deadline:
if callback_result[0]:
return callback_result[0]
while True:
try:
manual = manual_inputs.get_nowait()
except queue.Empty:
break
code = parse_manual_input(manual, expected_state)
if code:
return code
if callback_done.is_set():
return callback_result[0]
time.sleep(poll_interval)
return callback_result[0]
finally:
stop_event.set()
def main() -> int:
# Generate PKCE and state
verifier, challenge = generate_pkce()
@@ -315,41 +382,28 @@ def main() -> int:
# Start callback server in background
callback_result: list[str | None] = [None]
callback_done = threading.Event()
def run_server() -> None:
callback_result[0] = wait_for_callback(state, timeout_secs=120)
try:
callback_result[0] = wait_for_callback(state, timeout_secs=120)
finally:
callback_done.set()
server_thread = threading.Thread(target=run_server)
server_thread.daemon = True
server_thread.start()
# Also accept manual input in parallel
# We poll for both the server result and stdin
try:
import select
while server_thread.is_alive():
# Check if stdin has data (non-blocking on unix)
if hasattr(select, "select"):
ready, _, _ = select.select([sys.stdin], [], [], 0.5)
if ready:
manual = sys.stdin.readline()
if manual.strip():
code = parse_manual_input(manual, state)
if code:
break
else:
time.sleep(0.5)
if callback_result[0]:
code = callback_result[0]
break
except (KeyboardInterrupt, EOFError):
code = wait_for_code_from_callback_or_stdin(
state,
callback_result,
callback_done,
timeout_secs=120,
)
except KeyboardInterrupt:
print("\n\033[0;31mCancelled.\033[0m")
return 1
if not code:
code = callback_result[0]
else:
# Manual paste mode
try:
-740
@@ -1,740 +0,0 @@
#!/usr/bin/env python3
"""
EventLoopNode WebSocket Demo
Real LLM, real FileConversationStore, real EventBus.
Streams EventLoopNode execution to a browser via WebSocket.
Usage:
cd /home/timothy/oss/hive/core
python demos/event_loop_wss_demo.py
Then open http://localhost:8765 in your browser.
"""
import asyncio
import json
import logging
import sys
import tempfile
from http import HTTPStatus
from pathlib import Path
import httpx
import websockets
from bs4 import BeautifulSoup
from websockets.http11 import Request, Response
# Add core, tools, and hive root to path
_CORE_DIR = Path(__file__).resolve().parent.parent
_HIVE_DIR = _CORE_DIR.parent
sys.path.insert(0, str(_CORE_DIR)) # framework.*
sys.path.insert(0, str(_HIVE_DIR / "tools" / "src")) # aden_tools.*
sys.path.insert(0, str(_HIVE_DIR)) # core.framework.* (for aden_tools imports)
import os # noqa: E402
from aden_tools.credentials import CREDENTIAL_SPECS, CredentialStoreAdapter # noqa: E402
from core.framework.credentials import CredentialStore # noqa: E402
from framework.credentials.storage import ( # noqa: E402
CompositeStorage,
EncryptedFileStorage,
EnvVarStorage,
)
from framework.graph.event_loop_node import EventLoopNode, LoopConfig # noqa: E402
from framework.graph.node import NodeContext, NodeSpec, SharedMemory # noqa: E402
from framework.llm.litellm import LiteLLMProvider # noqa: E402
from framework.llm.provider import Tool # noqa: E402
from framework.runner.tool_registry import ToolRegistry # noqa: E402
from framework.runtime.core import Runtime # noqa: E402
from framework.runtime.event_bus import EventBus, EventType # noqa: E402
from framework.storage.conversation_store import FileConversationStore # noqa: E402
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
logger = logging.getLogger("demo")
# -------------------------------------------------------------------------
# Persistent state (shared across WebSocket connections)
# -------------------------------------------------------------------------
STORE_DIR = Path(tempfile.mkdtemp(prefix="hive_demo_"))
STORE = FileConversationStore(STORE_DIR / "conversation")
RUNTIME = Runtime(STORE_DIR / "runtime")
LLM = LiteLLMProvider(model="claude-sonnet-4-5-20250929")
# -------------------------------------------------------------------------
# Tool Registry — real tools via ToolRegistry (same pattern as GraphExecutor)
# -------------------------------------------------------------------------
TOOL_REGISTRY = ToolRegistry()
# Credential store: Aden sync (OAuth2 tokens) + encrypted files + env var fallback
_env_mapping = {name: spec.env_var for name, spec in CREDENTIAL_SPECS.items()}
_local_storage = CompositeStorage(
primary=EncryptedFileStorage(),
fallbacks=[EnvVarStorage(env_mapping=_env_mapping)],
)
if os.environ.get("ADEN_API_KEY"):
try:
from framework.credentials.aden import ( # noqa: E402
AdenCachedStorage,
AdenClientConfig,
AdenCredentialClient,
AdenSyncProvider,
)
_client = AdenCredentialClient(AdenClientConfig(base_url="https://api.adenhq.com"))
_provider = AdenSyncProvider(client=_client)
_storage = AdenCachedStorage(
local_storage=_local_storage,
aden_provider=_provider,
)
_cred_store = CredentialStore(storage=_storage, providers=[_provider], auto_refresh=True)
_synced = _provider.sync_all(_cred_store)
logger.info("Synced %d credentials from Aden", _synced)
except Exception as e:
logger.warning("Aden sync unavailable: %s", e)
_cred_store = CredentialStore(storage=_local_storage)
else:
logger.info("ADEN_API_KEY not set, using local credential storage")
_cred_store = CredentialStore(storage=_local_storage)
CREDENTIALS = CredentialStoreAdapter(_cred_store)
# Debug: log which credentials resolved
for _name in ["brave_search", "hubspot", "anthropic"]:
_val = CREDENTIALS.get(_name)
if _val:
logger.debug("credential %s: OK (len=%d)", _name, len(_val))
else:
logger.debug("credential %s: not found", _name)
# --- web_search (Brave Search API) ---
TOOL_REGISTRY.register(
name="web_search",
tool=Tool(
name="web_search",
description=(
"Search the web for current information. "
"Returns titles, URLs, and snippets from search results."
),
parameters={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query (1-500 characters)",
},
"num_results": {
"type": "integer",
"description": "Number of results to return (1-20, default 10)",
},
},
"required": ["query"],
},
),
executor=lambda inputs: _exec_web_search(inputs),
)
def _exec_web_search(inputs: dict) -> dict:
api_key = CREDENTIALS.get("brave_search")
if not api_key:
return {"error": "brave_search credential not configured"}
query = inputs.get("query", "")
num_results = min(inputs.get("num_results", 10), 20)
resp = httpx.get(
"https://api.search.brave.com/res/v1/web/search",
params={"q": query, "count": num_results},
headers={"X-Subscription-Token": api_key, "Accept": "application/json"},
timeout=30.0,
)
if resp.status_code != 200:
return {"error": f"Brave API HTTP {resp.status_code}"}
data = resp.json()
results = [
{
"title": item.get("title", ""),
"url": item.get("url", ""),
"snippet": item.get("description", ""),
}
for item in data.get("web", {}).get("results", [])[:num_results]
]
return {"query": query, "results": results, "total": len(results)}
# --- web_scrape (httpx + BeautifulSoup, no playwright for sync compat) ---
TOOL_REGISTRY.register(
name="web_scrape",
tool=Tool(
name="web_scrape",
description=(
"Scrape and extract text content from a webpage URL. "
"Returns the page title and main text content."
),
parameters={
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "URL of the webpage to scrape",
},
"max_length": {
"type": "integer",
"description": "Maximum text length (default 50000)",
},
},
"required": ["url"],
},
),
executor=lambda inputs: _exec_web_scrape(inputs),
)
_SCRAPE_HEADERS = {
"User-Agent": (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/131.0.0.0 Safari/537.36"
),
"Accept": "text/html,application/xhtml+xml",
}
def _exec_web_scrape(inputs: dict) -> dict:
url = inputs.get("url", "")
max_length = max(1000, min(inputs.get("max_length", 50000), 500000))
if not url.startswith(("http://", "https://")):
url = "https://" + url
try:
resp = httpx.get(url, timeout=30.0, follow_redirects=True, headers=_SCRAPE_HEADERS)
if resp.status_code != 200:
return {"error": f"HTTP {resp.status_code}"}
soup = BeautifulSoup(resp.text, "html.parser")
for tag in soup(["script", "style", "nav", "footer", "header", "aside", "noscript"]):
tag.decompose()
title = soup.title.get_text(strip=True) if soup.title else ""
main = (
soup.find("article")
or soup.find("main")
or soup.find(attrs={"role": "main"})
or soup.find("body")
)
text = main.get_text(separator=" ", strip=True) if main else ""
text = " ".join(text.split())
if len(text) > max_length:
text = text[:max_length] + "..."
return {"url": url, "title": title, "content": text, "length": len(text)}
except httpx.TimeoutException:
return {"error": "Request timed out"}
except Exception as e:
return {"error": f"Scrape failed: {e}"}
# --- HubSpot CRM tools (optional, requires HUBSPOT_ACCESS_TOKEN) ---
_HUBSPOT_API = "https://api.hubapi.com"
def _hubspot_headers() -> dict | None:
token = CREDENTIALS.get("hubspot")
if token:
logger.debug("HubSpot token: %s...%s (len=%d)", token[:8], token[-4:], len(token))
else:
logger.debug("HubSpot token: not found")
if not token:
return None
return {
"Authorization": f"Bearer {token}",
"Content-Type": "application/json",
"Accept": "application/json",
}
def _exec_hubspot_search(inputs: dict) -> dict:
headers = _hubspot_headers()
if not headers:
return {"error": "HUBSPOT_ACCESS_TOKEN not set"}
object_type = inputs.get("object_type", "contacts")
query = inputs.get("query", "")
limit = min(inputs.get("limit", 10), 100)
body: dict = {"limit": limit}
if query:
body["query"] = query
try:
resp = httpx.post(
f"{_HUBSPOT_API}/crm/v3/objects/{object_type}/search",
headers=headers,
json=body,
timeout=30.0,
)
if resp.status_code != 200:
return {"error": f"HubSpot API HTTP {resp.status_code}: {resp.text[:200]}"}
return resp.json()
except httpx.TimeoutException:
return {"error": "Request timed out"}
except Exception as e:
return {"error": f"HubSpot error: {e}"}
TOOL_REGISTRY.register(
name="hubspot_search",
tool=Tool(
name="hubspot_search",
description=(
"Search HubSpot CRM objects (contacts, companies, or deals). "
"Returns matching records with their properties."
),
parameters={
"type": "object",
"properties": {
"object_type": {
"type": "string",
"description": "CRM object type: 'contacts', 'companies', or 'deals'",
},
"query": {
"type": "string",
"description": "Search query (name, email, domain, etc.)",
},
"limit": {
"type": "integer",
"description": "Max results (1-100, default 10)",
},
},
"required": ["object_type"],
},
),
executor=lambda inputs: _exec_hubspot_search(inputs),
)
logger.info(
"ToolRegistry loaded: %s",
", ".join(TOOL_REGISTRY.get_registered_names()),
)
# -------------------------------------------------------------------------
# HTML page (embedded)
# -------------------------------------------------------------------------
HTML_PAGE = ( # noqa: E501
"""<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>EventLoopNode Live Demo</title>
<style>
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
font-family: 'SF Mono', 'Fira Code', monospace;
background: #0d1117; color: #c9d1d9;
height: 100vh; display: flex; flex-direction: column;
}
header {
background: #161b22; padding: 12px 20px;
border-bottom: 1px solid #30363d;
display: flex; align-items: center; gap: 16px;
}
header h1 { font-size: 16px; color: #58a6ff; font-weight: 600; }
.status {
font-size: 12px; padding: 3px 10px; border-radius: 12px;
background: #21262d; color: #8b949e;
}
.status.running { background: #1a4b2e; color: #3fb950; }
.status.done { background: #1a3a5c; color: #58a6ff; }
.status.error { background: #4b1a1a; color: #f85149; }
.chat { flex: 1; overflow-y: auto; padding: 16px; }
.msg {
margin: 8px 0; padding: 10px 14px; border-radius: 8px;
line-height: 1.6; white-space: pre-wrap; word-wrap: break-word;
}
.msg.user { background: #1a3a5c; color: #58a6ff; }
.msg.assistant { background: #161b22; color: #c9d1d9; }
.msg.event {
background: transparent; color: #8b949e; font-size: 11px;
padding: 4px 14px; border-left: 3px solid #30363d;
}
.msg.event.loop { border-left-color: #58a6ff; }
.msg.event.tool { border-left-color: #d29922; }
.msg.event.stall { border-left-color: #f85149; }
.input-bar {
padding: 12px 16px; background: #161b22;
border-top: 1px solid #30363d; display: flex; gap: 8px;
}
.input-bar input {
flex: 1; background: #0d1117; border: 1px solid #30363d;
color: #c9d1d9; padding: 8px 12px; border-radius: 6px;
font-family: inherit; font-size: 14px; outline: none;
}
.input-bar input:focus { border-color: #58a6ff; }
.input-bar button {
background: #238636; color: #fff; border: none;
padding: 8px 20px; border-radius: 6px; cursor: pointer;
font-family: inherit; font-weight: 600;
}
.input-bar button:hover { background: #2ea043; }
.input-bar button:disabled {
background: #21262d; color: #484f58; cursor: not-allowed;
}
.input-bar button.clear { background: #da3633; }
.input-bar button.clear:hover { background: #f85149; }
</style>
</head>
<body>
<header>
<h1>EventLoopNode Live</h1>
<span id="status" class="status">Idle</span>
<span id="iter" class="status" style="display:none">Step 0</span>
</header>
<div id="chat" class="chat"></div>
<div class="input-bar">
<input id="input" type="text"
placeholder="Ask anything..." autofocus />
<button id="go" onclick="run()">Send</button>
<button class="clear"
onclick="clearConversation()">Clear</button>
</div>
<script>
let ws = null;
let currentAssistantEl = null;
let iterCount = 0;
const chat = document.getElementById('chat');
const status = document.getElementById('status');
const iterEl = document.getElementById('iter');
const goBtn = document.getElementById('go');
const inputEl = document.getElementById('input');
inputEl.addEventListener('keydown', e => {
if (e.key === 'Enter') run();
});
function setStatus(text, cls) {
status.textContent = text;
status.className = 'status ' + cls;
}
function addMsg(text, cls) {
const el = document.createElement('div');
el.className = 'msg ' + cls;
el.textContent = text;
chat.appendChild(el);
chat.scrollTop = chat.scrollHeight;
return el;
}
function connect() {
ws = new WebSocket('ws://' + location.host + '/ws');
ws.onopen = () => {
setStatus('Ready', 'done');
goBtn.disabled = false;
};
ws.onmessage = handleEvent;
ws.onerror = () => { setStatus('Error', 'error'); };
ws.onclose = () => {
setStatus('Reconnecting...', '');
goBtn.disabled = true;
setTimeout(connect, 2000);
};
}
function handleEvent(msg) {
const evt = JSON.parse(msg.data);
if (evt.type === 'llm_text_delta') {
if (currentAssistantEl) {
currentAssistantEl.textContent += evt.content;
chat.scrollTop = chat.scrollHeight;
}
}
else if (evt.type === 'ready') {
setStatus('Ready', 'done');
if (currentAssistantEl && !currentAssistantEl.textContent)
currentAssistantEl.remove();
goBtn.disabled = false;
}
else if (evt.type === 'node_loop_iteration') {
iterCount = evt.iteration || (iterCount + 1);
iterEl.textContent = 'Step ' + iterCount;
iterEl.style.display = '';
}
else if (evt.type === 'tool_call_started') {
var info = evt.tool_name + '('
+ JSON.stringify(evt.tool_input).slice(0, 120) + ')';
addMsg('TOOL ' + info, 'event tool');
}
else if (evt.type === 'tool_call_completed') {
var preview = (evt.result || '').slice(0, 200);
var cls = evt.is_error ? 'stall' : 'tool';
addMsg('RESULT ' + evt.tool_name + ': ' + preview,
'event ' + cls);
currentAssistantEl = addMsg('', 'assistant');
}
else if (evt.type === 'result') {
setStatus('Session ended', evt.success ? 'done' : 'error');
if (evt.error) addMsg('ERROR ' + evt.error, 'event stall');
if (currentAssistantEl && !currentAssistantEl.textContent)
currentAssistantEl.remove();
goBtn.disabled = false;
}
else if (evt.type === 'node_stalled') {
addMsg('STALLED ' + evt.reason, 'event stall');
}
else if (evt.type === 'cleared') {
chat.innerHTML = '';
iterCount = 0;
iterEl.textContent = 'Step 0';
iterEl.style.display = 'none';
setStatus('Ready', 'done');
goBtn.disabled = false;
}
}
function run() {
const text = inputEl.value.trim();
if (!text || !ws || ws.readyState !== 1) return;
addMsg(text, 'user');
currentAssistantEl = addMsg('', 'assistant');
inputEl.value = '';
setStatus('Running', 'running');
goBtn.disabled = true;
ws.send(JSON.stringify({ topic: text }));
}
function clearConversation() {
if (ws && ws.readyState === 1) {
ws.send(JSON.stringify({ command: 'clear' }));
}
}
connect();
</script>
</body>
</html>"""
)
# -------------------------------------------------------------------------
# WebSocket handler
# -------------------------------------------------------------------------
async def handle_ws(websocket):
"""Persistent WebSocket: long-lived EventLoopNode with client_facing blocking."""
global STORE
# -- Event forwarding (WebSocket ← EventBus) ----------------------------
bus = EventBus()
async def forward_event(event):
try:
payload = {"type": event.type.value, **event.data}
if event.node_id:
payload["node_id"] = event.node_id
await websocket.send(json.dumps(payload))
except Exception:
pass
bus.subscribe(
event_types=[
EventType.NODE_LOOP_STARTED,
EventType.NODE_LOOP_ITERATION,
EventType.NODE_LOOP_COMPLETED,
EventType.LLM_TEXT_DELTA,
EventType.TOOL_CALL_STARTED,
EventType.TOOL_CALL_COMPLETED,
EventType.NODE_STALLED,
],
handler=forward_event,
)
# -- Per-connection state -----------------------------------------------
node = None
loop_task = None
tools = list(TOOL_REGISTRY.get_tools().values())
tool_executor = TOOL_REGISTRY.get_executor()
node_spec = NodeSpec(
id="assistant",
name="Chat Assistant",
description="A conversational assistant that remembers context across messages",
node_type="event_loop",
client_facing=True,
system_prompt=(
"You are a helpful assistant with access to tools. "
"You can search the web, scrape webpages, and query HubSpot CRM. "
"Use tools when the user asks for current information or external data. "
"You have full conversation history, so you can reference previous messages."
),
)
# -- Ready callback: subscribe to CLIENT_INPUT_REQUESTED on the bus ---
async def on_input_requested(event):
try:
await websocket.send(json.dumps({"type": "ready"}))
except Exception:
pass
bus.subscribe(
event_types=[EventType.CLIENT_INPUT_REQUESTED],
handler=on_input_requested,
)
async def start_loop(first_message: str):
"""Create an EventLoopNode and run it as a background task."""
nonlocal node, loop_task
memory = SharedMemory()
ctx = NodeContext(
runtime=RUNTIME,
node_id="assistant",
node_spec=node_spec,
memory=memory,
input_data={},
llm=LLM,
available_tools=tools,
)
node = EventLoopNode(
event_bus=bus,
config=LoopConfig(max_iterations=10_000, max_history_tokens=32_000),
conversation_store=STORE,
tool_executor=tool_executor,
)
await node.inject_event(first_message)
async def _run():
try:
result = await node.execute(ctx)
try:
await websocket.send(
json.dumps(
{
"type": "result",
"success": result.success,
"output": result.output,
"error": result.error,
"tokens": result.tokens_used,
}
)
)
except Exception:
pass
logger.info(f"Loop ended: success={result.success}, tokens={result.tokens_used}")
except websockets.exceptions.ConnectionClosed:
logger.info("Loop stopped: WebSocket closed")
except Exception as e:
logger.exception("Loop error")
try:
await websocket.send(
json.dumps(
{
"type": "result",
"success": False,
"error": str(e),
"output": {},
}
)
)
except Exception:
pass
loop_task = asyncio.create_task(_run())
async def stop_loop():
"""Signal the node and wait for the loop task to finish."""
nonlocal node, loop_task
if loop_task and not loop_task.done():
if node:
node.signal_shutdown()
try:
await asyncio.wait_for(loop_task, timeout=5.0)
except (TimeoutError, asyncio.CancelledError):
loop_task.cancel()
node = None
loop_task = None
# -- Message loop (runs for the lifetime of this WebSocket) -------------
try:
async for raw in websocket:
try:
msg = json.loads(raw)
except Exception:
continue
# Clear command
if msg.get("command") == "clear":
import shutil
await stop_loop()
await STORE.close()
conv_dir = STORE_DIR / "conversation"
if conv_dir.exists():
shutil.rmtree(conv_dir)
STORE = FileConversationStore(conv_dir)
await websocket.send(json.dumps({"type": "cleared"}))
logger.info("Conversation cleared")
continue
topic = msg.get("topic", "")
if not topic:
continue
if node is None:
# First message — spin up the loop
logger.info(f"Starting persistent loop: {topic}")
await start_loop(topic)
else:
# Subsequent message — inject into the running loop
logger.info(f"Injecting message: {topic}")
await node.inject_event(topic)
except websockets.exceptions.ConnectionClosed:
pass
finally:
await stop_loop()
logger.info("WebSocket closed, loop stopped")
# -------------------------------------------------------------------------
# HTTP handler for serving the HTML page
# -------------------------------------------------------------------------
async def process_request(connection, request: Request):
"""Serve HTML on GET /, upgrade to WebSocket on /ws."""
if request.path == "/ws":
return None # let websockets handle the upgrade
# Serve the HTML page for any other path
return Response(
HTTPStatus.OK,
"OK",
websockets.Headers({"Content-Type": "text/html; charset=utf-8"}),
HTML_PAGE.encode(),
)
# -------------------------------------------------------------------------
# Main
# -------------------------------------------------------------------------
async def main():
port = 8765
async with websockets.serve(
handle_ws,
"0.0.0.0",
port,
process_request=process_request,
):
logger.info(f"Demo running at http://localhost:{port}")
logger.info("Open in your browser and enter a topic to research.")
await asyncio.Future() # run forever
if __name__ == "__main__":
asyncio.run(main())
File diff suppressed because it is too large.
-930
@@ -1,930 +0,0 @@
#!/usr/bin/env python3
"""
Two-Node ContextHandoff Demo
Demonstrates ContextHandoff between two EventLoopNode instances:
Node A (Researcher) ContextHandoff Node B (Analyst)
Real LLM, real FileConversationStore, real EventBus.
Streams both nodes to a browser via WebSocket.
Usage:
cd /home/timothy/oss/hive/core
python demos/handoff_demo.py
Then open http://localhost:8766 in your browser.
"""
import asyncio
import json
import logging
import sys
import tempfile
from http import HTTPStatus
from pathlib import Path
import httpx
import websockets
from bs4 import BeautifulSoup
from websockets.http11 import Request, Response
# Add core, tools, and hive root to path
_CORE_DIR = Path(__file__).resolve().parent.parent
_HIVE_DIR = _CORE_DIR.parent
sys.path.insert(0, str(_CORE_DIR)) # framework.*
sys.path.insert(0, str(_HIVE_DIR / "tools" / "src")) # aden_tools.*
sys.path.insert(0, str(_HIVE_DIR)) # core.framework.* (for aden_tools imports)
from aden_tools.credentials import CREDENTIAL_SPECS, CredentialStoreAdapter # noqa: E402
from core.framework.credentials import CredentialStore # noqa: E402
from framework.credentials.storage import ( # noqa: E402
CompositeStorage,
EncryptedFileStorage,
EnvVarStorage,
)
from framework.graph.context_handoff import ContextHandoff # noqa: E402
from framework.graph.conversation import NodeConversation # noqa: E402
from framework.graph.event_loop_node import EventLoopNode, LoopConfig # noqa: E402
from framework.graph.node import NodeContext, NodeSpec, SharedMemory # noqa: E402
from framework.llm.litellm import LiteLLMProvider # noqa: E402
from framework.llm.provider import Tool # noqa: E402
from framework.runner.tool_registry import ToolRegistry # noqa: E402
from framework.runtime.core import Runtime # noqa: E402
from framework.runtime.event_bus import EventBus, EventType # noqa: E402
from framework.storage.conversation_store import FileConversationStore # noqa: E402
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
logger = logging.getLogger("handoff_demo")
# -------------------------------------------------------------------------
# Persistent state
# -------------------------------------------------------------------------
STORE_DIR = Path(tempfile.mkdtemp(prefix="hive_handoff_"))
RUNTIME = Runtime(STORE_DIR / "runtime")
LLM = LiteLLMProvider(model="claude-sonnet-4-5-20250929")
# -------------------------------------------------------------------------
# Credentials
# -------------------------------------------------------------------------
# Composite credential store: encrypted files (primary) + env vars (fallback)
_env_mapping = {name: spec.env_var for name, spec in CREDENTIAL_SPECS.items()}
_composite = CompositeStorage(
primary=EncryptedFileStorage(),
fallbacks=[EnvVarStorage(env_mapping=_env_mapping)],
)
CREDENTIALS = CredentialStoreAdapter(CredentialStore(storage=_composite))
for _name in ["brave_search", "hubspot"]:
_val = CREDENTIALS.get(_name)
if _val:
logger.debug("credential %s: OK (len=%d)", _name, len(_val))
else:
logger.debug("credential %s: not found", _name)
# -------------------------------------------------------------------------
# Tool Registry — web_search + web_scrape for Node A (Researcher)
# -------------------------------------------------------------------------
TOOL_REGISTRY = ToolRegistry()
def _exec_web_search(inputs: dict) -> dict:
api_key = CREDENTIALS.get("brave_search")
if not api_key:
return {"error": "brave_search credential not configured"}
query = inputs.get("query", "")
num_results = min(inputs.get("num_results", 10), 20)
resp = httpx.get(
"https://api.search.brave.com/res/v1/web/search",
params={"q": query, "count": num_results},
headers={
"X-Subscription-Token": api_key,
"Accept": "application/json",
},
timeout=30.0,
)
if resp.status_code != 200:
return {"error": f"Brave API HTTP {resp.status_code}"}
data = resp.json()
results = [
{
"title": item.get("title", ""),
"url": item.get("url", ""),
"snippet": item.get("description", ""),
}
for item in data.get("web", {}).get("results", [])[:num_results]
]
return {"query": query, "results": results, "total": len(results)}
TOOL_REGISTRY.register(
name="web_search",
tool=Tool(
name="web_search",
description=(
"Search the web for current information. "
"Returns titles, URLs, and snippets from search results."
),
parameters={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query (1-500 characters)",
},
"num_results": {
"type": "integer",
"description": "Number of results (1-20, default 10)",
},
},
"required": ["query"],
},
),
executor=lambda inputs: _exec_web_search(inputs),
)
_SCRAPE_HEADERS = {
"User-Agent": (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/131.0.0.0 Safari/537.36"
),
"Accept": "text/html,application/xhtml+xml",
}
def _exec_web_scrape(inputs: dict) -> dict:
url = inputs.get("url", "")
max_length = max(1000, min(inputs.get("max_length", 50000), 500000))
if not url.startswith(("http://", "https://")):
url = "https://" + url
try:
resp = httpx.get(
url,
timeout=30.0,
follow_redirects=True,
headers=_SCRAPE_HEADERS,
)
if resp.status_code != 200:
return {"error": f"HTTP {resp.status_code}"}
soup = BeautifulSoup(resp.text, "html.parser")
for tag in soup(["script", "style", "nav", "footer", "header", "aside", "noscript"]):
tag.decompose()
title = soup.title.get_text(strip=True) if soup.title else ""
main = (
soup.find("article")
or soup.find("main")
or soup.find(attrs={"role": "main"})
or soup.find("body")
)
text = main.get_text(separator=" ", strip=True) if main else ""
text = " ".join(text.split())
if len(text) > max_length:
text = text[:max_length] + "..."
return {
"url": url,
"title": title,
"content": text,
"length": len(text),
}
except httpx.TimeoutException:
return {"error": "Request timed out"}
except Exception as e:
return {"error": f"Scrape failed: {e}"}
TOOL_REGISTRY.register(
name="web_scrape",
tool=Tool(
name="web_scrape",
description=(
"Scrape and extract text content from a webpage URL. "
"Returns the page title and main text content."
),
parameters={
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "URL of the webpage to scrape",
},
"max_length": {
"type": "integer",
"description": "Maximum text length (default 50000)",
},
},
"required": ["url"],
},
),
executor=lambda inputs: _exec_web_scrape(inputs),
)
logger.info(
"ToolRegistry loaded: %s",
", ".join(TOOL_REGISTRY.get_registered_names()),
)
# -------------------------------------------------------------------------
# Node Specs
# -------------------------------------------------------------------------
RESEARCHER_SPEC = NodeSpec(
id="researcher",
name="Researcher",
description="Researches a topic using web search and scraping tools",
node_type="event_loop",
input_keys=["topic"],
output_keys=["research_summary"],
system_prompt=(
"You are a thorough research assistant. Your job is to research "
"the given topic using the web_search and web_scrape tools.\n\n"
"1. Search for relevant information on the topic\n"
"2. Scrape 1-2 of the most promising URLs for details\n"
"3. Synthesize your findings into a comprehensive summary\n"
"4. Use set_output with key='research_summary' to save your "
"findings\n\n"
"Be thorough but efficient. Aim for 2-4 search/scrape calls, "
"then summarize and set_output."
),
)
ANALYST_SPEC = NodeSpec(
id="analyst",
name="Analyst",
description="Analyzes research findings and provides insights",
node_type="event_loop",
input_keys=["context"],
output_keys=["analysis"],
system_prompt=(
"You are a strategic analyst. You receive research findings from "
"a previous researcher and must:\n\n"
"1. Identify key themes and patterns\n"
"2. Assess the reliability and significance of the findings\n"
"3. Provide actionable insights and recommendations\n"
"4. Use set_output with key='analysis' to save your analysis\n\n"
"Be concise but insightful. Focus on what matters most."
),
)
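# Editor's sketch of how the two specs chain together in _run_pipeline below:
# the researcher's output_keys feed ContextHandoff, and the formatted summary is
# delivered under the analyst's single input key, "context". Hypothetical helper
# for illustration only; the demo never calls it.
def _spec_wiring_example() -> dict:
    return {
        "researcher_produces": RESEARCHER_SPEC.output_keys,  # ["research_summary"]
        "analyst_consumes": ANALYST_SPEC.input_keys,  # ["context"]
    }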
# -------------------------------------------------------------------------
# HTML page
# -------------------------------------------------------------------------
HTML_PAGE = ( # noqa: E501
"""<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>ContextHandoff Demo</title>
<style>
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font-family: 'SF Mono', 'Fira Code', monospace;
background: #0d1117;
color: #c9d1d9;
height: 100vh;
display: flex;
flex-direction: column;
}
header {
background: #161b22;
padding: 12px 20px;
border-bottom: 1px solid #30363d;
display: flex;
align-items: center;
gap: 16px;
}
header h1 {
font-size: 16px;
color: #58a6ff;
font-weight: 600;
}
.badge {
font-size: 12px;
padding: 3px 10px;
border-radius: 12px;
background: #21262d;
color: #8b949e;
}
.badge.researcher {
background: #1a3a5c;
color: #58a6ff;
}
.badge.analyst {
background: #1a4b2e;
color: #3fb950;
}
.badge.handoff {
background: #3d1f00;
color: #d29922;
}
.badge.done {
background: #21262d;
color: #8b949e;
}
.badge.error {
background: #4b1a1a;
color: #f85149;
}
.chat {
flex: 1;
overflow-y: auto;
padding: 16px;
}
.msg {
margin: 8px 0;
padding: 10px 14px;
border-radius: 8px;
line-height: 1.6;
white-space: pre-wrap;
word-wrap: break-word;
}
.msg.user {
background: #1a3a5c;
color: #58a6ff;
}
.msg.assistant {
background: #161b22;
color: #c9d1d9;
}
.msg.assistant.analyst-msg {
border-left: 3px solid #3fb950;
}
.msg.event {
background: transparent;
color: #8b949e;
font-size: 11px;
padding: 4px 14px;
border-left: 3px solid #30363d;
}
.msg.event.loop {
border-left-color: #58a6ff;
}
.msg.event.tool {
border-left-color: #d29922;
}
.msg.event.stall {
border-left-color: #f85149;
}
.handoff-banner {
margin: 16px 0;
padding: 16px;
background: #1c1200;
border: 1px solid #d29922;
border-radius: 8px;
text-align: center;
}
.handoff-banner h3 {
color: #d29922;
font-size: 14px;
margin-bottom: 8px;
}
.handoff-banner p, .result-banner p {
color: #8b949e;
font-size: 12px;
line-height: 1.5;
max-height: 200px;
overflow-y: auto;
white-space: pre-wrap;
text-align: left;
}
.result-banner {
margin: 16px 0;
padding: 16px;
background: #0a2614;
border: 1px solid #3fb950;
border-radius: 8px;
}
.result-banner h3 {
color: #3fb950;
font-size: 14px;
margin-bottom: 8px;
text-align: center;
}
.result-banner .label {
color: #58a6ff;
font-size: 11px;
font-weight: 600;
margin-top: 10px;
margin-bottom: 2px;
}
.result-banner .tokens {
color: #484f58;
font-size: 11px;
text-align: center;
margin-top: 10px;
}
.input-bar {
padding: 12px 16px;
background: #161b22;
border-top: 1px solid #30363d;
display: flex;
gap: 8px;
}
.input-bar input {
flex: 1;
background: #0d1117;
border: 1px solid #30363d;
color: #c9d1d9;
padding: 8px 12px;
border-radius: 6px;
font-family: inherit;
font-size: 14px;
outline: none;
}
.input-bar input:focus {
border-color: #58a6ff;
}
.input-bar button {
background: #238636;
color: #fff;
border: none;
padding: 8px 20px;
border-radius: 6px;
cursor: pointer;
font-family: inherit;
font-weight: 600;
}
.input-bar button:hover {
background: #2ea043;
}
.input-bar button:disabled {
background: #21262d;
color: #484f58;
cursor: not-allowed;
}
</style>
</head>
<body>
<header>
<h1>ContextHandoff Demo</h1>
<span id="phase" class="badge">Idle</span>
<span id="iter" class="badge" style="display:none">Step 0</span>
</header>
<div id="chat" class="chat"></div>
<div class="input-bar">
<input id="input" type="text"
placeholder="Enter a research topic..." autofocus />
<button id="go" onclick="run()">Research</button>
</div>
<script>
let ws = null;
let currentAssistantEl = null;
let iterCount = 0;
let currentPhase = 'idle';
const chat = document.getElementById('chat');
const phase = document.getElementById('phase');
const iterEl = document.getElementById('iter');
const goBtn = document.getElementById('go');
const inputEl = document.getElementById('input');
inputEl.addEventListener('keydown', e => {
if (e.key === 'Enter') run();
});
function setPhase(text, cls) {
phase.textContent = text;
phase.className = 'badge ' + cls;
currentPhase = cls;
}
function addMsg(text, cls) {
const el = document.createElement('div');
el.className = 'msg ' + cls;
el.textContent = text;
chat.appendChild(el);
chat.scrollTop = chat.scrollHeight;
return el;
}
function addHandoffBanner(summary) {
const banner = document.createElement('div');
banner.className = 'handoff-banner';
const h3 = document.createElement('h3');
h3.textContent = 'Context Handoff: Researcher -> Analyst';
const p = document.createElement('p');
p.textContent = summary || 'Passing research context...';
banner.appendChild(h3);
banner.appendChild(p);
chat.appendChild(banner);
chat.scrollTop = chat.scrollHeight;
}
function addResultBanner(researcher, analyst, tokens) {
const banner = document.createElement('div');
banner.className = 'result-banner';
const h3 = document.createElement('h3');
h3.textContent = 'Pipeline Complete';
banner.appendChild(h3);
if (researcher && researcher.research_summary) {
const lbl = document.createElement('div');
lbl.className = 'label';
lbl.textContent = 'RESEARCH SUMMARY';
banner.appendChild(lbl);
const p = document.createElement('p');
p.textContent = researcher.research_summary;
banner.appendChild(p);
}
if (analyst && analyst.analysis) {
const lbl = document.createElement('div');
lbl.className = 'label';
lbl.textContent = 'ANALYSIS';
lbl.style.color = '#3fb950';
banner.appendChild(lbl);
const p = document.createElement('p');
p.textContent = analyst.analysis;
banner.appendChild(p);
}
if (tokens) {
const t = document.createElement('div');
t.className = 'tokens';
t.textContent = 'Total tokens: ' + tokens.toLocaleString();
banner.appendChild(t);
}
chat.appendChild(banner);
chat.scrollTop = chat.scrollHeight;
}
function connect() {
ws = new WebSocket('ws://' + location.host + '/ws');
ws.onopen = () => {
setPhase('Ready', 'done');
goBtn.disabled = false;
};
ws.onmessage = handleEvent;
ws.onerror = () => { setPhase('Error', 'error'); };
ws.onclose = () => {
setPhase('Reconnecting...', '');
goBtn.disabled = true;
setTimeout(connect, 2000);
};
}
function handleEvent(msg) {
const evt = JSON.parse(msg.data);
if (evt.type === 'phase') {
if (evt.phase === 'researcher') {
setPhase('Researcher', 'researcher');
} else if (evt.phase === 'handoff') {
setPhase('Handoff', 'handoff');
} else if (evt.phase === 'analyst') {
setPhase('Analyst', 'analyst');
}
iterCount = 0;
iterEl.style.display = 'none';
}
else if (evt.type === 'llm_text_delta') {
if (currentAssistantEl) {
currentAssistantEl.textContent += evt.content;
chat.scrollTop = chat.scrollHeight;
}
}
else if (evt.type === 'node_loop_iteration') {
iterCount = evt.iteration || (iterCount + 1);
iterEl.textContent = 'Step ' + iterCount;
iterEl.style.display = '';
}
else if (evt.type === 'tool_call_started') {
var info = evt.tool_name + '('
+ JSON.stringify(evt.tool_input).slice(0, 120) + ')';
addMsg('TOOL ' + info, 'event tool');
}
else if (evt.type === 'tool_call_completed') {
var preview = (evt.result || '').slice(0, 200);
var cls = evt.is_error ? 'stall' : 'tool';
addMsg(
'RESULT ' + evt.tool_name + ': ' + preview,
'event ' + cls
);
var assistCls = currentPhase === 'analyst'
? 'assistant analyst-msg' : 'assistant';
currentAssistantEl = addMsg('', assistCls);
}
else if (evt.type === 'handoff_context') {
addHandoffBanner(evt.summary);
var assistCls = 'assistant analyst-msg';
currentAssistantEl = addMsg('', assistCls);
}
else if (evt.type === 'node_result') {
if (evt.node_id === 'researcher') {
if (currentAssistantEl
&& !currentAssistantEl.textContent) {
currentAssistantEl.remove();
}
}
}
else if (evt.type === 'done') {
setPhase('Done', 'done');
iterEl.style.display = 'none';
if (currentAssistantEl
&& !currentAssistantEl.textContent) {
currentAssistantEl.remove();
}
currentAssistantEl = null;
addResultBanner(
evt.researcher, evt.analyst, evt.total_tokens
);
goBtn.disabled = false;
inputEl.placeholder = 'Enter another topic...';
}
else if (evt.type === 'error') {
setPhase('Error', 'error');
addMsg('ERROR ' + evt.message, 'event stall');
goBtn.disabled = false;
}
else if (evt.type === 'node_stalled') {
addMsg('STALLED ' + evt.reason, 'event stall');
}
}
function run() {
const text = inputEl.value.trim();
if (!text || !ws || ws.readyState !== 1) return;
chat.innerHTML = '';
addMsg(text, 'user');
currentAssistantEl = addMsg('', 'assistant');
inputEl.value = '';
goBtn.disabled = true;
ws.send(JSON.stringify({ topic: text }));
}
connect();
</script>
</body>
</html>"""
)
# -------------------------------------------------------------------------
# WebSocket handler — sequential Node A → Handoff → Node B
# -------------------------------------------------------------------------
async def handle_ws(websocket):
"""Run the two-node handoff pipeline per user message."""
try:
async for raw in websocket:
try:
msg = json.loads(raw)
except Exception:
continue
topic = msg.get("topic", "")
if not topic:
continue
logger.info(f"Starting handoff pipeline for: {topic}")
try:
await _run_pipeline(websocket, topic)
except websockets.exceptions.ConnectionClosed:
logger.info("WebSocket closed during pipeline")
return
except Exception as e:
logger.exception("Pipeline error")
try:
await websocket.send(json.dumps({"type": "error", "message": str(e)}))
except Exception:
pass
except websockets.exceptions.ConnectionClosed:
pass
async def _run_pipeline(websocket, topic: str):
"""Execute: Node A (research) → ContextHandoff → Node B (analysis)."""
import shutil
# Fresh stores for each run
run_dir = Path(tempfile.mkdtemp(prefix="hive_run_", dir=STORE_DIR))
store_a = FileConversationStore(run_dir / "node_a")
store_b = FileConversationStore(run_dir / "node_b")
# Shared event bus
bus = EventBus()
async def forward_event(event):
try:
payload = {"type": event.type.value, **event.data}
if event.node_id:
payload["node_id"] = event.node_id
await websocket.send(json.dumps(payload))
except Exception:
pass
bus.subscribe(
event_types=[
EventType.NODE_LOOP_STARTED,
EventType.NODE_LOOP_ITERATION,
EventType.NODE_LOOP_COMPLETED,
EventType.LLM_TEXT_DELTA,
EventType.TOOL_CALL_STARTED,
EventType.TOOL_CALL_COMPLETED,
EventType.NODE_STALLED,
],
handler=forward_event,
)
tools = list(TOOL_REGISTRY.get_tools().values())
tool_executor = TOOL_REGISTRY.get_executor()
# ---- Phase 1: Researcher ------------------------------------------------
await websocket.send(json.dumps({"type": "phase", "phase": "researcher"}))
node_a = EventLoopNode(
event_bus=bus,
judge=None, # implicit judge: accept when output_keys filled
config=LoopConfig(
max_iterations=20,
max_tool_calls_per_turn=30,
max_history_tokens=32_000,
),
conversation_store=store_a,
tool_executor=tool_executor,
)
ctx_a = NodeContext(
runtime=RUNTIME,
node_id="researcher",
node_spec=RESEARCHER_SPEC,
memory=SharedMemory(),
input_data={"topic": topic},
llm=LLM,
available_tools=tools,
)
result_a = await node_a.execute(ctx_a)
logger.info(
"Researcher done: success=%s, tokens=%s",
result_a.success,
result_a.tokens_used,
)
await websocket.send(
json.dumps(
{
"type": "node_result",
"node_id": "researcher",
"success": result_a.success,
"output": result_a.output,
}
)
)
if not result_a.success:
await websocket.send(
json.dumps(
{
"type": "error",
"message": f"Researcher failed: {result_a.error}",
}
)
)
return
# ---- Phase 2: Context Handoff -------------------------------------------
await websocket.send(json.dumps({"type": "phase", "phase": "handoff"}))
# Restore the researcher's conversation from store
conversation_a = await NodeConversation.restore(store_a)
if conversation_a is None:
await websocket.send(
json.dumps(
{
"type": "error",
"message": "Failed to restore researcher conversation",
}
)
)
return
handoff_engine = ContextHandoff(llm=LLM)
handoff_context = handoff_engine.summarize_conversation(
conversation=conversation_a,
node_id="researcher",
output_keys=["research_summary"],
)
formatted_handoff = ContextHandoff.format_as_input(handoff_context)
logger.info(
"Handoff: %d turns, ~%d tokens, keys=%s",
handoff_context.turn_count,
handoff_context.total_tokens_used,
list(handoff_context.key_outputs.keys()),
)
# Send handoff context to browser
await websocket.send(
json.dumps(
{
"type": "handoff_context",
"summary": handoff_context.summary[:500],
"turn_count": handoff_context.turn_count,
"tokens": handoff_context.total_tokens_used,
"key_outputs": handoff_context.key_outputs,
}
)
)
# ---- Phase 3: Analyst ---------------------------------------------------
await websocket.send(json.dumps({"type": "phase", "phase": "analyst"}))
node_b = EventLoopNode(
event_bus=bus,
judge=None, # implicit judge
config=LoopConfig(
max_iterations=10,
max_tool_calls_per_turn=30,
max_history_tokens=32_000,
),
conversation_store=store_b,
)
ctx_b = NodeContext(
runtime=RUNTIME,
node_id="analyst",
node_spec=ANALYST_SPEC,
memory=SharedMemory(),
input_data={"context": formatted_handoff},
llm=LLM,
available_tools=[],
)
result_b = await node_b.execute(ctx_b)
logger.info(
"Analyst done: success=%s, tokens=%s",
result_b.success,
result_b.tokens_used,
)
# ---- Done ---------------------------------------------------------------
await websocket.send(
json.dumps(
{
"type": "done",
"researcher": result_a.output,
"analyst": result_b.output,
"total_tokens": ((result_a.tokens_used or 0) + (result_b.tokens_used or 0)),
}
)
)
# Clean up temp stores
try:
shutil.rmtree(run_dir)
except Exception:
pass
# -------------------------------------------------------------------------
# HTTP handler
# -------------------------------------------------------------------------
async def process_request(connection, request: Request):
"""Serve HTML on GET /, upgrade to WebSocket on /ws."""
if request.path == "/ws":
return None
return Response(
HTTPStatus.OK,
"OK",
websockets.Headers({"Content-Type": "text/html; charset=utf-8"}),
HTML_PAGE.encode(),
)
# -------------------------------------------------------------------------
# Main
# -------------------------------------------------------------------------
async def main():
port = 8766
async with websockets.serve(
handle_ws,
"0.0.0.0",
port,
process_request=process_request,
):
logger.info(f"Handoff demo at http://localhost:{port}")
logger.info("Enter a research topic to start the pipeline.")
await asyncio.Future()
if __name__ == "__main__":
asyncio.run(main())
File diff suppressed because it is too large Load Diff
+1 -1
View File
@@ -79,7 +79,7 @@ async def example_3_config_file():
# Copy example config (in practice, you'd place this in your agent folder)
import shutil
shutil.copy("examples/mcp_servers.json", test_agent_path / "mcp_servers.json")
shutil.copy(Path(__file__).parent / "mcp_servers.json", test_agent_path / "mcp_servers.json")
# Load agent - MCP servers will be auto-discovered
runner = AgentRunner.load(test_agent_path)
-3
View File
@@ -22,7 +22,6 @@ The framework includes a Goal-Based Testing system (Goal → Agent → Eval):
See `framework.testing` for details.
"""
from framework.builder.query import BuilderQuery
from framework.llm import AnthropicProvider, LLMProvider
from framework.runner import AgentOrchestrator, AgentRunner
from framework.runtime.core import Runtime
@@ -51,8 +50,6 @@ __all__ = [
"Problem",
# Runtime
"Runtime",
# Builder
"BuilderQuery",
# LLM
"LLMProvider",
"AnthropicProvider",
@@ -1,8 +1,6 @@
"""CLI entry point for Credential Tester agent."""
import asyncio
import logging
import sys
import click
@@ -10,13 +8,14 @@ from .agent import CredentialTesterAgent
def setup_logging(verbose=False, debug=False):
from framework.observability import configure_logging
if debug:
level, fmt = logging.DEBUG, "%(asctime)s %(name)s: %(message)s"
configure_logging(level="DEBUG")
elif verbose:
level, fmt = logging.INFO, "%(message)s"
configure_logging(level="INFO")
else:
level, fmt = logging.WARNING, "%(levelname)s: %(message)s"
logging.basicConfig(level=level, format=fmt, stream=sys.stderr)
configure_logging(level="WARNING")
def pick_account(agent: CredentialTesterAgent) -> dict | None:
@@ -51,42 +50,6 @@ def cli():
pass
@cli.command()
@click.option("--verbose", "-v", is_flag=True)
@click.option("--debug", is_flag=True)
def tui(verbose, debug):
"""Launch TUI to test a credential interactively."""
setup_logging(verbose=verbose, debug=debug)
try:
from framework.tui.app import AdenTUI
except ImportError:
click.echo("TUI requires 'textual'. Install with: pip install textual")
sys.exit(1)
agent = CredentialTesterAgent()
account = pick_account(agent)
if account is None:
sys.exit(1)
agent.select_account(account)
provider = account.get("provider", "?")
alias = account.get("alias", "?")
click.echo(f"\nTesting {provider}/{alias}...\n")
async def run_tui():
agent._setup()
runtime = agent._agent_runtime
await runtime.start()
try:
app = AdenTUI(runtime)
await app.run_async()
finally:
await runtime.stop()
asyncio.run(run_tui())
@cli.command()
@click.option("--verbose", "-v", is_flag=True)
@click.option("--debug", is_flag=True)
@@ -16,14 +16,17 @@ after the user picks an account programmatically.
from __future__ import annotations
import logging
from pathlib import Path
from typing import TYPE_CHECKING
from framework.config import get_max_context_tokens
from framework.graph import Goal, NodeSpec, SuccessCriterion
from framework.graph.checkpoint_config import CheckpointConfig
from framework.graph.edge import GraphSpec
from framework.graph.executor import ExecutionResult
from framework.llm import LiteLLMProvider
from framework.runner.mcp_registry import MCPRegistry
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
@@ -31,9 +34,13 @@ from framework.runtime.execution_stream import EntryPointSpec
from .config import default_config
from .nodes import build_tester_node
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from framework.runner import AgentRunner
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Goal
# ---------------------------------------------------------------------------
@@ -106,7 +113,11 @@ def _list_aden_accounts() -> list[dict]:
for c in integrations
if c.status == "active"
]
except (ImportError, OSError) as exc:
logger.debug("Could not list Aden accounts: %s", exc)
return []
except Exception:
logger.warning("Unexpected error listing Aden accounts", exc_info=True)
return []
@@ -118,7 +129,11 @@ def _list_local_accounts() -> list[dict]:
return [
info.to_account_dict() for info in LocalCredentialRegistry.default().list_accounts()
]
except ImportError as exc:
logger.debug("Local credential registry unavailable: %s", exc)
return []
except Exception:
logger.warning("Unexpected error listing local accounts", exc_info=True)
return []
@@ -139,7 +154,11 @@ def _list_env_fallback_accounts() -> list[dict]:
from framework.credentials.storage import EncryptedFileStorage
encrypted_ids: set[str] = set(EncryptedFileStorage().list_all())
except (ImportError, OSError) as exc:
logger.debug("Could not read encrypted store: %s", exc)
encrypted_ids = set()
except Exception:
logger.warning("Unexpected error reading encrypted store", exc_info=True)
encrypted_ids = set()
def _is_configured(cred_name: str, spec) -> bool:
@@ -299,8 +318,10 @@ def _activate_local_account(credential_id: str, alias: str) -> None:
if key:
os.environ[spec.env_var] = key
except (ImportError, KeyError, OSError) as exc:
logger.debug("Could not inject credentials: %s", exc)
except Exception:
pass
logger.warning("Unexpected error injecting credentials", exc_info=True)
def _configure_aden_node(
@@ -455,7 +476,6 @@ identity_prompt = (
loop_config = {
"max_iterations": 50,
"max_tool_calls_per_turn": 30,
"max_history_tokens": 32000,
}
# ---------------------------------------------------------------------------
@@ -541,7 +561,7 @@ class CredentialTesterAgent:
loop_config={
"max_iterations": 50,
"max_tool_calls_per_turn": 30,
"max_history_tokens": 32000,
"max_context_tokens": get_max_context_tokens(),
},
conversation_mode="continuous",
identity_prompt=(
@@ -563,6 +583,23 @@ class CredentialTesterAgent:
if mcp_config_path.exists():
self._tool_registry.load_mcp_config(mcp_config_path)
try:
agent_dir = Path(__file__).parent
registry = MCPRegistry()
registry.initialize()
if (agent_dir / "mcp_registry.json").is_file():
self._tool_registry.set_mcp_registry_agent_path(agent_dir)
registry_configs, selection_max_tools = registry.load_agent_selection(agent_dir)
if registry_configs:
self._tool_registry.load_registry_servers(
registry_configs,
preserve_existing_tools=True,
log_collisions=True,
max_tools=selection_max_tools,
)
except Exception:
logger.warning("MCP registry config failed to load", exc_info=True)
extra_kwargs = getattr(self.config, "extra_kwargs", {}) or {}
llm = LiteLLMProvider(
model=self.config.model,
+209
View File
@@ -0,0 +1,209 @@
"""Agent discovery — scan known directories and return categorised AgentEntry lists."""
from __future__ import annotations
import json
from dataclasses import dataclass, field
from pathlib import Path
@dataclass
class AgentEntry:
"""Lightweight agent metadata for the picker / API discover endpoint."""
path: Path
name: str
description: str
category: str
session_count: int = 0
run_count: int = 0
node_count: int = 0
tool_count: int = 0
tags: list[str] = field(default_factory=list)
last_active: str | None = None
def _get_last_active(agent_path: Path) -> str | None:
"""Return the most recent updated_at timestamp across all sessions.
Checks both worker sessions (``~/.hive/agents/{name}/sessions/``) and
queen sessions (``~/.hive/queen/session/``) whose ``meta.json`` references
the same *agent_path*.
"""
from datetime import datetime
agent_name = agent_path.name
latest: str | None = None
# 1. Worker sessions
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if sessions_dir.exists():
for session_dir in sessions_dir.iterdir():
if not session_dir.is_dir() or not session_dir.name.startswith("session_"):
continue
state_file = session_dir / "state.json"
if not state_file.exists():
continue
try:
data = json.loads(state_file.read_text(encoding="utf-8"))
ts = data.get("timestamps", {}).get("updated_at")
if ts and (latest is None or ts > latest):
latest = ts
except Exception:
continue
# 2. Queen sessions
queen_sessions_dir = Path.home() / ".hive" / "queen" / "session"
if queen_sessions_dir.exists():
resolved = agent_path.resolve()
for d in queen_sessions_dir.iterdir():
if not d.is_dir():
continue
meta_file = d / "meta.json"
if not meta_file.exists():
continue
try:
meta = json.loads(meta_file.read_text(encoding="utf-8"))
stored = meta.get("agent_path")
if not stored or Path(stored).resolve() != resolved:
continue
ts = datetime.fromtimestamp(d.stat().st_mtime).isoformat()
if latest is None or ts > latest:
latest = ts
except Exception:
continue
return latest
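# Editor's note: illustrative shapes of the files read above. Field names are
# inferred from the .get() calls in _get_last_active; the values are made up.
_EXAMPLE_STATE_JSON = {"timestamps": {"updated_at": "2026-04-01T10:00:00"}}
_EXAMPLE_QUEEN_META_JSON = {"agent_path": "exports/my_agent"}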
def _count_sessions(agent_name: str) -> int:
"""Count session directories under ~/.hive/agents/{agent_name}/sessions/."""
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if not sessions_dir.exists():
return 0
return sum(1 for d in sessions_dir.iterdir() if d.is_dir() and d.name.startswith("session_"))
def _count_runs(agent_name: str) -> int:
"""Count unique run_ids across all sessions for an agent."""
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if not sessions_dir.exists():
return 0
run_ids: set[str] = set()
for session_dir in sessions_dir.iterdir():
if not session_dir.is_dir() or not session_dir.name.startswith("session_"):
continue
# runs.jsonl lives inside workspace subdirectories
for runs_file in session_dir.rglob("runs.jsonl"):
try:
for line in runs_file.read_text(encoding="utf-8").splitlines():
line = line.strip()
if not line:
continue
record = json.loads(line)
rid = record.get("run_id")
if rid:
run_ids.add(rid)
except Exception:
continue
return len(run_ids)
def _extract_agent_stats(agent_path: Path) -> tuple[int, int, list[str]]:
"""Extract node count, tool count, and tags from an agent directory.
Prefers agent.py (AST-parsed) over agent.json for node/tool counts
since agent.json may be stale. Tags are only available from agent.json.
"""
import ast
node_count, tool_count, tags = 0, 0, []
agent_py = agent_path / "agent.py"
if agent_py.exists():
try:
tree = ast.parse(agent_py.read_text(encoding="utf-8"))
for node in ast.walk(tree):
if isinstance(node, ast.Assign):
for target in node.targets:
if isinstance(target, ast.Name) and target.id == "nodes":
if isinstance(node.value, ast.List):
node_count = len(node.value.elts)
except Exception:
pass
agent_json = agent_path / "agent.json"
if agent_json.exists():
try:
data = json.loads(agent_json.read_text(encoding="utf-8"))
json_nodes = data.get("graph", {}).get("nodes", []) or data.get("nodes", [])
if node_count == 0:
node_count = len(json_nodes)
tools: set[str] = set()
for n in json_nodes:
tools.update(n.get("tools", []))
tool_count = len(tools)
tags = data.get("agent", {}).get("tags", [])
except Exception:
pass
return node_count, tool_count, tags
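# Editor's note: rough shape of the agent.json fields consulted above, inferred
# from the .get() calls; every value is illustrative, not taken from a real agent.
_EXAMPLE_AGENT_JSON = {
    "agent": {"name": "My Agent", "description": "Example agent", "tags": ["demo"]},
    "graph": {"nodes": [{"tools": ["web_search", "write_file"]}]},
}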
def discover_agents() -> dict[str, list[AgentEntry]]:
"""Discover agents from all known sources grouped by category."""
from framework.runner.cli import (
_extract_python_agent_metadata,
_get_framework_agents_dir,
_is_valid_agent_dir,
)
groups: dict[str, list[AgentEntry]] = {}
sources = [
("Your Agents", Path("exports")),
("Framework", _get_framework_agents_dir()),
("Examples", Path("examples/templates")),
]
for category, base_dir in sources:
if not base_dir.exists():
continue
entries: list[AgentEntry] = []
for path in sorted(base_dir.iterdir(), key=lambda p: p.name):
if not _is_valid_agent_dir(path):
continue
name, desc = _extract_python_agent_metadata(path)
config_fallback_name = path.name.replace("_", " ").title()
used_config = name != config_fallback_name
node_count, tool_count, tags = _extract_agent_stats(path)
if not used_config:
agent_json = path / "agent.json"
if agent_json.exists():
try:
data = json.loads(agent_json.read_text(encoding="utf-8"))
meta = data.get("agent", {})
name = meta.get("name", name)
desc = meta.get("description", desc)
except Exception:
pass
entries.append(
AgentEntry(
path=path,
name=name,
description=desc,
category=category,
session_count=_count_sessions(path.name),
run_count=_count_runs(path.name),
node_count=node_count,
tool_count=tool_count,
tags=tags,
last_active=_get_last_active(path),
)
)
if entries:
groups[category] = entries
return groups
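# Editor's sketch: one way a caller might consume the grouped result. Hypothetical
# helper, not part of the module; AgentEntry fields are taken from the dataclass above.
def _print_discovered_agents() -> None:
    for category, entries in discover_agents().items():
        print(f"{category}: {len(entries)} agent(s)")
        for entry in entries:
            print(f"  - {entry.name}: {entry.node_count} nodes, {entry.tool_count} tools")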
@@ -1,60 +0,0 @@
"""CLI entry point for Hive Coder agent."""
import json
import logging
import sys
import click
from .agent import entry_node, goal, nodes
from .config import metadata
def setup_logging(verbose=False, debug=False):
"""Configure logging for execution visibility."""
if debug:
level, fmt = logging.DEBUG, "%(asctime)s %(name)s: %(message)s"
elif verbose:
level, fmt = logging.INFO, "%(message)s"
else:
level, fmt = logging.WARNING, "%(levelname)s: %(message)s"
logging.basicConfig(level=level, format=fmt, stream=sys.stderr)
logging.getLogger("framework").setLevel(level)
@click.group()
@click.version_option(version="1.0.0")
def cli():
"""Hive Coder — Build Hive agent packages from natural language."""
pass
@cli.command()
@click.option("--json", "output_json", is_flag=True)
def info(output_json):
"""Show agent information."""
info_data = {
"name": metadata.name,
"version": metadata.version,
"description": metadata.description,
"goal": {
"name": goal.name,
"description": goal.description,
},
"nodes": [n.id for n in nodes],
"entry_node": entry_node,
"client_facing_nodes": [n.id for n in nodes if n.client_facing],
}
if output_json:
click.echo(json.dumps(info_data, indent=2))
else:
click.echo(f"Agent: {info_data['name']}")
click.echo(f"Version: {info_data['version']}")
click.echo(f"Description: {info_data['description']}")
click.echo(f"\nNodes: {', '.join(info_data['nodes'])}")
click.echo(f"Client-facing: {', '.join(info_data['client_facing_nodes'])}")
click.echo(f"Entry: {info_data['entry_node']}")
if __name__ == "__main__":
cli()
-153
View File
@@ -1,153 +0,0 @@
"""Agent graph construction for Hive Coder."""
from framework.graph import Constraint, Goal, SuccessCriterion
from framework.graph.edge import GraphSpec
from .nodes import coder_node, queen_node
# Goal definition
goal = Goal(
id="hive-coder",
name="Hive Agent Builder",
description=(
"Build complete, validated Hive agent packages from natural language "
"specifications. Produces production-ready Python packages with goals, "
"nodes, edges, system prompts, MCP configuration, and tests."
),
success_criteria=[
SuccessCriterion(
id="valid-package",
description="Generated agent package passes structural validation",
metric="validation_pass",
target="true",
weight=0.30,
),
SuccessCriterion(
id="complete-files",
description=(
"All required files generated: agent.py, config.py, "
"nodes/__init__.py, __init__.py, __main__.py, mcp_servers.json"
),
metric="file_count",
target=">=6",
weight=0.25,
),
SuccessCriterion(
id="user-satisfaction",
description="User reviews and approves the generated agent",
metric="user_approval",
target="true",
weight=0.25,
),
SuccessCriterion(
id="framework-compliance",
description=(
"Generated code follows framework patterns: STEP 1/STEP 2 "
"for client-facing and correct imports"
),
metric="pattern_compliance",
target="100%",
weight=0.20,
),
],
constraints=[
Constraint(
id="dynamic-tool-discovery",
description=(
"Always discover available tools dynamically via "
"list_agent_tools before referencing tools in agent designs"
),
constraint_type="hard",
category="correctness",
),
Constraint(
id="no-fabricated-tools",
description="Only reference tools that exist in hive-tools MCP",
constraint_type="hard",
category="correctness",
),
Constraint(
id="valid-python",
description="All generated Python files must be syntactically correct",
constraint_type="hard",
category="correctness",
),
Constraint(
id="self-verification",
description="Run validation after writing code; fix errors before presenting",
constraint_type="hard",
category="quality",
),
],
)
# Nodes: primary coder node only. The queen runs as an independent
# GraphExecutor with queen_node — not as part of this graph.
nodes = [coder_node]
# No edges needed — single event_loop node
edges = []
# Graph configuration
entry_node = "coder"
entry_points = {"start": "coder"}
pause_nodes = []
terminal_nodes = [] # Coder node has output_keys and can terminate
# No async entry points needed — the queen is now an independent executor,
# not a secondary graph receiving events via add_graph().
async_entry_points = []
# Module-level variables read by AgentRunner.load()
conversation_mode = "continuous"
identity_prompt = (
"You are Hive Coder, the best agent-building coding agent on the planet. "
"You deeply understand the Hive agent framework at the source code level "
"and produce production-ready agent packages from natural language. "
"You can dynamically discover available framework tools, inspect runtime "
"sessions and checkpoints from agents you build, and run their test suites. "
"You follow coding agent discipline: read before writing, verify "
"assumptions by reading actual code, adhere to project conventions, "
"self-verify with validation, and fix your own errors. You are concise, "
"direct, and technically rigorous. No emojis. No fluff."
)
loop_config = {
"max_iterations": 100,
"max_tool_calls_per_turn": 30,
"max_history_tokens": 32000,
}
# ---------------------------------------------------------------------------
# Queen graph — runs as an independent persistent conversation in the TUI.
# Loaded by _load_judge_and_queen() in app.py, NOT by AgentRunner.
# ---------------------------------------------------------------------------
queen_goal = Goal(
id="queen-manager",
name="Queen Manager",
description=(
"Manage the worker agent lifecycle and serve as the user's primary "
"interactive interface. Triage health escalations from the judge."
),
success_criteria=[],
constraints=[],
)
queen_graph = GraphSpec(
id="queen-graph",
goal_id=queen_goal.id,
version="1.0.0",
entry_node="queen",
entry_points={"start": "queen"},
terminal_nodes=[],
pause_nodes=[],
nodes=[queen_node],
edges=[],
conversation_mode="continuous",
loop_config={
"max_iterations": 999_999,
"max_tool_calls_per_turn": 30,
"max_history_tokens": 32000,
},
)
@@ -1,933 +0,0 @@
"""Node definitions for Hive Coder agent."""
from pathlib import Path
from framework.graph import NodeSpec
# Load reference docs at import time so they're always in the system prompt.
# No voluntary read_file() calls needed — the LLM gets everything upfront.
_ref_dir = Path(__file__).parent.parent / "reference"
_framework_guide = (_ref_dir / "framework_guide.md").read_text(encoding="utf-8")
_anti_patterns = (_ref_dir / "anti_patterns.md").read_text(encoding="utf-8")
_gcu_guide_path = _ref_dir / "gcu_guide.md"
_gcu_guide = _gcu_guide_path.read_text(encoding="utf-8") if _gcu_guide_path.exists() else ""
def _is_gcu_enabled() -> bool:
try:
from framework.config import get_gcu_enabled
return get_gcu_enabled()
except Exception:
return False
def _build_appendices() -> str:
parts = (
"\n\n# Appendix: Framework Reference\n\n"
+ _framework_guide
+ "\n\n# Appendix: Anti-Patterns\n\n"
+ _anti_patterns
)
return parts
# Shared appendices — appended to every coding node's system prompt.
_appendices = _build_appendices()
# GCU first-class section for building phase (when GCU is enabled).
# This is placed prominently in the main prompt body, not as an appendix.
_gcu_building_section = (
("\n\n# GCU Nodes — Browser Automation\n\n" + _gcu_guide)
if _is_gcu_enabled() and _gcu_guide
else ""
)
# Tools available to both coder (worker) and queen.
_SHARED_TOOLS = [
# File I/O
"read_file",
"write_file",
"edit_file",
"hashline_edit",
"list_directory",
"search_files",
"run_command",
"undo_changes",
# Meta-agent
"list_agent_tools",
"validate_agent_package",
"list_agents",
"list_agent_sessions",
"list_agent_checkpoints",
"get_agent_checkpoint",
"initialize_agent_package",
]
# Queen phase-specific tool sets.
# Building phase: full coding + agent construction tools.
_QUEEN_BUILDING_TOOLS = _SHARED_TOOLS + [
"load_built_agent",
"list_credentials",
]
# Staging phase: agent loaded but not yet running — inspect, configure, launch.
_QUEEN_STAGING_TOOLS = [
# Read-only (inspect agent files, logs)
"read_file",
"list_directory",
"search_files",
"run_command",
# Agent inspection
"list_credentials",
"get_worker_status",
# Launch or go back
"run_agent_with_input",
"stop_worker_and_edit",
]
# Running phase: worker is executing — monitor and control.
_QUEEN_RUNNING_TOOLS = [
# Read-only coding (for inspecting logs, files)
"read_file",
"list_directory",
"search_files",
"run_command",
# Credentials
"list_credentials",
# Worker lifecycle
"stop_worker",
"stop_worker_and_edit",
"get_worker_status",
"inject_worker_message",
# Monitoring
"get_worker_health_summary",
"notify_operator",
]
# ---------------------------------------------------------------------------
# Shared agent-building knowledge: core mandates, tool docs, meta-agent
# capabilities, and workflow phases 1-6. Both the coder (worker) and
# queen compose their system prompts from this block + role-specific
# additions.
# ---------------------------------------------------------------------------
_package_builder_knowledge = """\
**A responsible engineer doesn't jump into building. First, \
understand the problem and be transparent about what the framework can and cannot do.**
Use the user's selection (or their custom description if they chose "Other") \
as context when shaping the goal below. If the user already described \
what they want before this step, skip the question and proceed directly.
# Core Mandates
- **DO NOT propose a complete goal on your own.** Instead, \
collaborate with the user to define it.
- **Verify assumptions.** Never assume a class, import, or pattern \
exists. Read actual source to confirm. Search if unsure.
- **Discover tools dynamically.** NEVER reference tools from static \
docs. Always run list_agent_tools() to see what actually exists.
- **Self-verify.** After writing code, run validation and tests. Fix \
errors yourself. Don't declare success until validation passes.
# Tools
## Paths (MANDATORY)
**Always use RELATIVE paths**
(e.g. `exports/agent_name/config.py`, `exports/agent_name/nodes/__init__.py`).
**Never use absolute paths** like `/mnt/data/...` or `/workspace/...`; they fail.
The project root is implicit.
## File I/O
- read_file(path, offset?, limit?, hashline?): read with line numbers; \
hashline=True for N:hhhh|content anchors (use with hashline_edit)
- write_file(path, content): create/overwrite, auto-mkdir
- edit_file(path, old_text, new_text, replace_all?): fuzzy-match edit
- hashline_edit(path, edits, auto_cleanup?, encoding?): anchor-based \
editing using N:hhhh refs from read_file(hashline=True). Ops: set_line, \
replace_lines, insert_after, insert_before, replace, append
- list_directory(path, recursive?): list contents
- search_files(pattern, path?, include?, hashline?): regex search; \
hashline=True for anchors in results
- run_command(command, cwd?, timeout?): shell execution
- undo_changes(path?): restore from git snapshot
## Meta-Agent
- list_agent_tools(server_config_path?, output_schema?, group?): discover \
available tools grouped by category. output_schema: "simple" (default, \
descriptions truncated to ~200 chars) or "full" (complete descriptions + \
input_schema). group: "all" (default) or a provider like "google". \
Call FIRST before designing.
- validate_agent_package(agent_name): run ALL validation checks in one call \
(class validation, runner load, tool validation, tests). Call after building.
- list_agents(): list all agent packages in exports/ with session counts
- list_agent_sessions(agent_name, status?, limit?): list sessions
- list_agent_checkpoints(agent_name, session_id): list checkpoints
- get_agent_checkpoint(agent_name, session_id, checkpoint_id?): load checkpoint
# Meta-Agent Capabilities
You are not just a file writer. You have deep integration with the \
Hive framework:
## Tool Discovery (MANDATORY before designing)
Before designing any agent, run list_agent_tools() with NO arguments \
to see ALL available tools (names + descriptions, grouped by category). \
ONLY use tools from this list in your node definitions. \
NEVER guess or fabricate tool names from memory.
list_agent_tools() # ALWAYS call this first (simple mode)
list_agent_tools(group="google", output_schema="full") # drill into a provider
NEVER skip the first call. Always start with the full list \
so you know what providers and tools exist before drilling in. \
Simple mode truncates long descriptions use group + "full" to \
get the complete description and input_schema for the tools you need.
## Post-Build Validation
After writing agent code, run a single comprehensive check:
validate_agent_package("{name}")
This runs class validation, runner load, tool validation, and tests \
in one call. Do NOT run these steps individually.
## Debugging Built Agents
When a user says "my agent is failing" or "debug this agent":
1. list_agent_sessions("{agent_name}"): find the session
2. get_worker_status(focus="issues"): check for problems
3. list_agent_checkpoints / get_agent_checkpoint: trace execution
# Agent Building Workflow
You operate in a continuous loop. The user describes what they want, \
you build it. No rigid phases; use judgment. But the general flow is:
## 1: Fast Discovery (3-6 Turns)
**The core principle**: Discovery should feel like progress, not paperwork. \
The stakeholder should walk away feeling like you understood them faster \
than anyone else would have.
**Communication style**: Be concise. Say less. Mean more. Impatient stakeholders \
don't want a wall of text — they want to know you get it. Every sentence you say \
should either move the conversation forward or prove you understood something. \
If it does neither, cut it.
**Rules for Asking Questions: Respect Their Time.** Every question must earn its place by:
1. **Preventing a costly wrong turn**: you're about to build the wrong thing
2. **Unlocking a shortcut**: their answer lets you simplify the design
3. **Surfacing a dealbreaker**: there's a constraint that changes everything
4. **Providing options**: offer options with your questions where possible, \
but always allow the user to type something beyond the options.
If a question doesn't do one of these, don't ask it. Make an assumption, state it, and move on.
---
### 1.1: Let Them Talk, But Listen Like a Solution Architect
When the stakeholder describes what they want, mentally construct:
- **The pain**: What about today's situation is broken, slow, or missing?
- **The actors**: Who are the people/systems involved?
- **The trigger**: What kicks off the workflow?
- **The core loop**: What's the main thing that happens repeatedly?
- **The output**: What's the valuable thing produced at the end?
---
### 1.2: Use Domain Knowledge to Fill In the Blanks
You have broad knowledge of how systems work. Use it aggressively.
If they say "I need a research agent," you already know it probably involves: \
search, summarization, source tracking, and iteration. Don't ask about each — \
use them as your starting mental model and let their specifics override your defaults.
If they say "I need to monitor files and alert me," you know this probably involves: \
watch patterns, triggers, notifications, and state tracking.
---
### 1.3: Play Back a Proposed Model (Not a List of Questions)
After listening, present a **concrete picture** of what you think they need. \
Make it specific enough that they can spot what's wrong. \
You can use ASCII art to show the user the flow.
**Pattern: "Here's what I heard — tell me where I'm off"**
> "OK here's how I'm picturing this: [User type] needs to [core action]. \
Right now they're [current painful workflow]. \
What you want is [proposed solution that replaces the pain].
> The way I'd structure this: [key entities] connected by [key relationships], \
with the main flow being [trigger → steps → outcome].
> For the MVP, I'd focus on [the one thing that delivers the most value] \
and hold off on [things that can wait].
> Before I start: [1-2 specific questions you genuinely can't infer]."
---
### 1.4: Ask Only What You Cannot Infer
Your questions should be **narrow, specific, and consequential**. \
Never ask what you could answer yourself.
**Good questions** (high-stakes, can't infer):
- "Who's the primary user — you or your end customers?"
- "Is this replacing a spreadsheet, or is there literally nothing today?"
- "Does this need to integrate with anything, or standalone?"
- "Is there existing data to migrate, or starting fresh?"
**Bad questions** (low-stakes, inferable):
- "What should happen if there's an error?" *(handle gracefully, obviously)*
- "Should it have search?" *(if there's a list, yes)*
- "How should we handle permissions?" *(follow standard patterns)*
- "What tools should I use?" *(your call, not theirs)*
---
## 2: Capability Assessment & Gap Analysis
**After the user responds, assess fit and gaps together.** Be honest and specific. \
Reference tools from list_agent_tools() AND built-in capabilities:
- **GCU browser automation** (`node_type="gcu"`) provides full Playwright-based \
browser control (navigation, clicking, typing, scrolling, JS-rendered pages, \
multi-tab). Do NOT list browser automation as missing; use GCU nodes.
Present a short **Framework Fit Assessment**:
- **Works well**: 2-4 strengths for this use case
- **Limitations**: 2-3 workable constraints (e.g., LLM latency, context limits)
- **Gaps/Deal-breakers**: Only list genuinely missing capabilities after checking \
both list_agent_tools() and built-in features like GCU
## 3: Design Graph and Propose
Act like an experienced AI solution architect. Design the agent architecture:
- Goal: id, name, description, 3-5 success criteria, 2-4 constraints
- Nodes: **3-6 nodes** (HARD RULE: never fewer than 3, never more than 6). \
2 nodes is ALWAYS wrong: it means you under-decomposed the task. \
Use as many nodes as the use case requires, but don't create nodes without \
tools; merge them into nodes that do real work.
- Edges: on_success for linear, conditional for routing
- Lifecycle: ALWAYS have terminal_nodes
**MERGE nodes when:**
- Node has NO tools (pure LLM reasoning): merge into predecessor/successor
- Node sets only 1 trivial output: collapse into predecessor
**SEPARATE nodes when:**
- Fundamentally different tool sets (e.g., search vs. write vs. validate)
- Fan-out parallelism (parallel branches MUST be separate)
- Different failure/retry semantics (e.g., gather can retry, transform cannot)
- Distinct phases of work (e.g., research, transform, validate, deliver)
- A node would need more than ~5 tools: split by responsibility
**Typical patterns (queen manages all user interaction):**
- 3 nodes: `gather → work → review`
- 4 nodes: `gather → analyze → transform → review`
- 5 nodes: `gather → research → transform → validate → deliver`
- WRONG: 2 nodes where everything is crammed into one giant node
- WRONG: 7 nodes where half have no tools and just do LLM reasoning
Read reference agents before designing:
list_agents()
read_file("exports/deep_research_agent/agent.py")
read_file("exports/deep_research_agent/nodes/__init__.py")
Present the design to the user. Lead with a large ASCII graph inside \
a code block so it renders in monospace. Make it visually prominent: \
use box-drawing characters and clear flow arrows:
```
gather
  subagent: gcu_search
  input: user_request
  tools: web_search, write_file
     │ on_success
     ▼
work
  subagent: gcu_interact
  tools: read_file, write_file
     │ on_success
     ▼
review
  tools: write_file
     │ on_failure
     └──▶ back to gather
```
The queen owns intake: she gathers user requirements, then calls \
`run_agent_with_input(task)` with a structured task description. \
When building the agent, design the entry node's `input_keys` to \
match what the queen will provide at run time. Worker nodes should \
use `escalate` for blockers.
Follow the graph with a brief summary of each node's purpose. \
Get user approval before implementing.
## 4: Get User Confirmation by ask_user
**WAIT for user response.**
- If **Proceed**: Move to next implementing
- If **Adjust scope**: Discuss what to change, update your notes, re-assess if needed
- If **More questions**: Answer them honestly, then ask again
- If **Reconsider**: Discuss alternatives. If they decide to proceed anyway, \
that's their informed choice
## 5. Implement
**Make sure you have proposed the design to the user before implementing.**
Call `initialize_agent_package(agent_name)` to generate all package files \
from your graph session. The agent_name must be snake_case (e.g., "my_agent").
The tool creates: config.py, nodes/__init__.py, agent.py, \
__init__.py, __main__.py, mcp_servers.json, tests/conftest.py, \
agent.json, README.md.
`mcp_servers.json` is auto-generated with hive-tools as the default. \
Do NOT manually create or overwrite `mcp_servers.json`.
After initialization, review and customize if needed:
- System prompts in nodes/__init__.py
- CLI options in __main__.py
- Identity prompt in agent.py
- For async entry points (timers/webhooks), add AsyncEntryPointSpec \
and AgentRuntimeConfig to agent.py manually
Do NOT manually write these files from scratch; always use the tool.
## 6. Verify and Load
Call `validate_agent_package("{name}")` after initialization. \
It runs structural checks (class validation, graph validation, tool \
validation, tests) and returns a consolidated result. If anything \
fails: read the error, fix with edit_file, re-validate. Up to 3x.
When validation passes, immediately call \
`load_built_agent("exports/{name}")` to load the agent into the \
session. This switches to STAGING phase and shows the graph in the \
visualizer. Do NOT wait for user input between validation and loading.
"""
# ---------------------------------------------------------------------------
# Queen-specific: extra tool docs, behavior, phase 7, style
# ---------------------------------------------------------------------------
# -- Phase-specific identities --
_queen_identity_building = """\
You are an experienced, responsible and curious Solution Architect. \
"Queen" is the internal alias.\
You design and build production-ready agent systems \
from natural language requirements. You understand the Hive framework at the \
source code level and create agents that are robust, well-tested, and follow \
best practices. You collaborate with users to refine requirements, assess fit, \
and deliver complete solutions. \
You design and build the agent to do the job, but you don't do the job on your own.
"""
_queen_identity_staging = """\
You are a Solution Engineer preparing an agent for deployment. \
"Queen" is your internal alias. \
The agent is loaded and ready. \
Your role is to verify configuration, confirm credentials, and ensure the user \
understands what the agent will do. You guide the user through the final checks \
before execution.
"""
_queen_identity_running = """\
You are a Solution Engineer running agents on behalf of the user. \
"Queen" is your internal alias. You monitor execution, handle \
escalations when the agent gets stuck, and care deeply about outcomes. When the \
agent finishes, you report results clearly and help the user decide what to do next.
"""
# -- Phase-specific tool docs --
_queen_tools_building = """
# Tools (BUILDING phase)
You have full coding tools for building and modifying agents:
- File I/O: read_file, write_file, edit_file, list_directory, search_files, \
run_command, undo_changes
- Meta-agent: list_agent_tools, validate_agent_package, \
list_agents, list_agent_sessions, \
list_agent_checkpoints, get_agent_checkpoint
- load_built_agent(agent_path): Load the agent and switch to STAGING phase
- list_credentials(credential_id?): List authorized credentials
When you finish building an agent, call load_built_agent(path) to stage it.
"""
_queen_tools_staging = """
# Tools (STAGING phase)
The agent is loaded and ready to run. You can inspect it and launch it:
- Read-only: read_file, list_directory, search_files, run_command
- list_credentials(credential_id?): Verify credentials are configured
- get_worker_status(focus?): Brief status. Drill in with focus: memory, tools, issues, progress
- run_agent_with_input(task): Start the worker and switch to RUNNING phase
- stop_worker_and_edit(): Go back to BUILDING phase
You do NOT have write tools. If you need to modify the agent, \
call stop_worker_and_edit() to go back to BUILDING phase.
"""
_queen_tools_running = """
# Tools (RUNNING phase)
The worker is running. You have monitoring and lifecycle tools:
- Read-only: read_file, list_directory, search_files, run_command
- get_worker_status(focus?): Brief status. Drill in: activity, memory, tools, issues, progress
- inject_worker_message(content): Send a message to the running worker
- get_worker_health_summary(): Read the latest health data
- notify_operator(ticket_id, analysis, urgency): Alert the user (use sparingly)
- stop_worker(): Stop the worker and return to STAGING phase, then ask the user what to do next
- stop_worker_and_edit(): Stop the worker and switch back to BUILDING phase
You do NOT have write tools or agent construction tools. \
If you need to modify the agent, call stop_worker_and_edit() to switch back \
to BUILDING phase. To stop the worker and ask the user what to do next, call \
stop_worker() to return to STAGING phase.
"""
# -- Behavior shared across all phases --
_queen_behavior_always = """
# Behavior
## CRITICAL RULE — ask_user tool
Every response that ends with a question, a prompt, or expects user \
input MUST finish with a call to ask_user(prompt, options). \
The system CANNOT detect that you are waiting for \
input unless you call ask_user. You MUST call ask_user as the LAST \
action in your response.
NEVER end a response with a question in text without calling ask_user. \
NEVER rely on the user seeing your text and replying; call ask_user.
Always provide 2-4 short options that cover the most likely answers. \
The user can always type a custom response.
Examples:
- ask_user("What do you need?",
["Build a new agent", "Run the loaded worker", "Help with code"])
- ask_user("Which pattern?",
["Simple 3-node", "Rich with feedback", "Custom"])
- ask_user("Ready to proceed?",
["Yes, go ahead", "Let me change something"])
## Greeting
When the user greets you, respond concisely (under 10 lines) with worker \
status only:
1. Use plain, user-facing wording about load/run state; avoid internal phase \
labels ("staging phase", "building phase", "running phase") unless the user \
explicitly asks for phase details.
2. If loaded, prefer this format: "<worker_name> has been loaded. <one sentence \
on what it does from Worker Profile>."
3. Do NOT include identity details unless the user explicitly asks about identity.
4. THEN call ask_user to prompt them; do NOT just write text.
5. Preferred loaded example:
local_business_extractor/*agent name*/ has been loaded. It finds local businesses on \
Google Maps, extracts contact details, and syncs them to Google Sheets.
ask_user("Do you want to run it?", ["Yes, run it", "Check credentials first",
"Modify the worker"])
## When the user asks about identity and responsibility
Only answer identity when the user explicitly asks (for example: "who are you?", \
"what is your identity?", "what does Queen mean?").
1. Use the alias "Queen" and "Worker" in the response.
2. Explain role/responsibility for the current phase:
- BUILDING: architect and implement agents.
- STAGING: verify readiness, credentials, and launch conditions.
- RUNNING: monitor execution, handle escalations, and report outcomes.
3. Keep identity responses concise and do NOT include extra process details.
"""
# -- BUILDING phase behavior --
_queen_behavior_building = """
## Direct coding
You can do any coding task directly: reading files, writing code, running \
commands, building agents, debugging. For quick tasks, do them yourself.
**Decision rule (if a worker exists, read the Worker Profile first):**
- The user's request directly matches the worker's goal: use \
run_agent_with_input(task) (if in staging) or load then run (if in building)
- Anything else: do it yourself. Do NOT reframe user requests into \
subtasks to justify delegation.
- Building, modifying, or configuring agents is ALWAYS your job. Never \
delegate agent construction to the worker, even as a "research" subtask.
"""
# -- STAGING phase behavior --
_queen_behavior_staging = """
## Worker delegation
The worker is a specialized agent (see Worker Profile at the end of this \
prompt). It can ONLY do what its goal and tools allow.
**Decision rule (read the Worker Profile first):**
- The user's request directly matches the worker's goal: use \
run_agent_with_input(task) (if in staging) or load then run (if in building)
- Anything else: do it yourself. Do NOT reframe user requests into \
subtasks to justify delegation.
- Building, modifying, or configuring agents is ALWAYS your job. \
Use stop_worker_and_edit when you need to.
## When the user says "run", "execute", or "start" (without specifics)
The loaded worker is described in the Worker Profile below. You MUST \
ask the user what task or input they want using ask_user do NOT \
invent a task, do NOT call list_agents() or list directories. \
The worker is already loaded. Just ask for the specific input the \
worker needs (e.g., a research topic, a target domain, a job description). \
NEVER call run_agent_with_input until the user has provided their input.
If NO worker is loaded, say so and offer to build one.
## When in staging phase (agent loaded, not running):
- Tell the user the agent is loaded and ready in plain language (for example, \
"<worker_name> has been loaded.").
- Avoid lead-ins like "A worker is loaded and ready in staging phase: ...".
- For tasks matching the worker's goal: ALWAYS ask the user for their \
specific input BEFORE calling run_agent_with_input(task). NEVER make up \
or assume what the user wants. Use ask_user to collect the task details \
(e.g., topic, target, requirements). Once you have the user's answer, \
compose a structured task description from their input and call \
run_agent_with_input(task). The worker has no intake node; it receives \
your task and starts processing.
- If the user wants to modify the agent, call stop_worker_and_edit().
## When idle (worker not running):
- Greet the user. Mention what the worker can do in one sentence.
- For tasks matching the worker's goal, use run_agent_with_input(task) \
(if in staging) or load the agent first (if in building).
- For everything else, do it directly.
## When the user clicks Run (external event notification)
When you receive an event that the user clicked Run:
- If the worker started successfully, briefly acknowledge it; do NOT \
repeat the full status. The user can see the graph is running.
- If the worker failed to start (credential or structural error), \
explain the problem clearly and help fix it. For credential errors, \
guide the user to set up the missing credentials. For structural \
issues, offer to fix the agent graph directly.
## Showing or describing the loaded worker
When the user asks to "show the graph", "describe the agent", or \
"re-generate the graph", read the Worker Profile and present the \
worker's current architecture as an ASCII diagram. Use the processing \
stages, tools, and edges from the loaded worker. Do NOT enter the \
agent-building workflow; you are describing what already exists, not \
building something new.
## Modifying the loaded worker
When the user asks to change, modify, or update the loaded worker \
(e.g., "change the report node", "add a node", "delete node X"):
1. Call stop_worker_and_edit(); this stops the worker and gives you \
coding tools (switches to BUILDING phase).
"""
# -- RUNNING phase behavior --
_queen_behavior_running = """
## When worker is running — queen is the only user interface
After run_agent_with_input(task), the worker should run autonomously and \
talk to YOU (queen) when blocked. The worker should \
NOT ask the user directly.
You wake up when:
- The user explicitly addresses you
- A worker escalation arrives (`[WORKER_ESCALATION_REQUEST]`)
- An escalation ticket arrives from the judge
- The worker finishes (`[WORKER_TERMINAL]`)
If the user asks for progress, call get_worker_status() ONCE and report. \
If the summary mentions issues, follow up with get_worker_status(focus="issues").
## Handling worker termination ([WORKER_TERMINAL])
When you receive a `[WORKER_TERMINAL]` event, the worker has finished:
1. **Report to the user**: Summarize what the worker accomplished (from the \
output keys) or explain the failure (from the error message).
2. **Ask what's next** — Use ask_user to offer options:
- If successful: "Run again with new input", "Modify the agent", "Done for now"
- If failed: "Retry with same input", "Debug/modify the agent", "Done for now"
3. **Default behavior**: Always report and wait for user direction. Only \
start another run if the user EXPLICITLY asks to continue.
Example response:
> "The worker finished. It found 5 relevant articles and saved them to \
output.md.
>
> What would you like to do next?"
> [ask_user with options]
## Handling worker escalations ([WORKER_ESCALATION_REQUEST])
When a worker escalation arrives, read the reason/context and handle by type. \
IMPORTANT: Only auto-handle if the user has NOT explicitly told you how to handle \
escalations. If the user gave you instructions (e.g., "just retry on errors", \
"skip any auth issues"), follow those instructions instead.
**Auth blocks / credential issues:**
- ALWAYS ask the user (unless user explicitly told you how to handle this).
- The worker cannot proceed without valid credentials.
- Explain which credential is missing or invalid.
- Use ask_user to get guidance: "Provide credentials", "Skip this task", "Stop and edit agent"
- Use inject_worker_message() to relay user decisions back to the worker.
**Need human review / approval:**
- ALWAYS ask the user (unless user explicitly told you how to handle this).
- The worker is explicitly requesting human judgment.
- Present the context clearly (what decision is needed, what are the options).
- Use ask_user with the actual decision options.
- Use inject_worker_message() to relay user decisions back to the worker.
**Errors / unexpected failures:**
- Explain what went wrong in plain terms.
- Ask the user: "Fix the agent and retry?" If yes, use stop_worker_and_edit().
- Or offer: "Retry as-is", "Skip this task", "Abort run"
- (Skip asking if user explicitly told you to auto-retry or auto-skip errors.)
**Informational / progress updates:**
- Acknowledge briefly and let the worker continue.
- Only interrupt the user if the escalation is truly important.
## Showing or describing the loaded worker
When the user asks to "show the graph", "describe the agent", or \
"re-generate the graph", read the Worker Profile and present the \
worker's current architecture as an ASCII diagram. Use the processing \
stages, tools, and edges from the loaded worker. Do NOT enter the \
agent-building workflow; you are describing what already exists, not \
building something new.
- Call get_worker_status(focus="issues") for more details when needed.
## Modifying the loaded worker
When the user asks to change, modify, or update the loaded worker \
(e.g., "change the report node", "add a node", "delete node X"):
1. Call stop_worker_and_edit(); this stops the worker and gives you \
coding tools (switches to BUILDING phase).
"""
# -- Backward-compatible composed versions (used by queen_node.system_prompt default) --
_queen_tools_docs = (
"\n\n## Queen Operating Phases\n\n"
"You operate in one of three phases. Your available tools change based on the "
"phase. The system notifies you when a phase change occurs.\n\n"
"### BUILDING phase (default)\n"
+ _queen_tools_building.strip()
+ "\n\n### STAGING phase (agent loaded, not yet running)\n"
+ _queen_tools_staging.strip()
+ "\n\n### RUNNING phase (worker is executing)\n"
+ _queen_tools_running.strip()
+ "\n\n### Phase transitions\n"
"- load_built_agent(path) → switches to STAGING phase\n"
"- run_agent_with_input(task) → starts worker, switches to RUNNING phase\n"
"- stop_worker() → stops worker, switches to STAGING phase (ask user: re-run or edit?)\n"
"- stop_worker_and_edit() → stops worker (if running), switches to BUILDING phase\n"
)
_queen_behavior = (
_queen_behavior_always
+ _queen_behavior_building
+ _queen_behavior_staging
+ _queen_behavior_running
)
_queen_phase_7 = """
## Running the Agent
After validation passes and load_built_agent succeeds (STAGING phase), \
offer to run the agent. Call run_agent_with_input(task) to start it. \
Do NOT tell the user to run `python -m {name} run`; run it here.
"""
_queen_style = """
# Style
- Responsible and thoughtful
- Concise. No fluff. Direct. No emojis.
- When starting the worker, describe what you told it in one sentence.
- When an escalation arrives, lead with severity and recommended action.
"""
# ---------------------------------------------------------------------------
# Node definitions
# ---------------------------------------------------------------------------
# Single node — like opencode's while(true) loop.
# One continuous context handles the entire workflow:
# discover → design → implement → verify → present → iterate.
coder_node = NodeSpec(
id="coder",
name="Hive Coder",
description=(
"Autonomous coding agent that builds Hive agent packages. "
"Handles the full lifecycle: understanding user intent, "
"designing architecture, writing code, validating, and "
"iterating on feedback — all in one continuous conversation."
),
node_type="event_loop",
client_facing=True,
max_node_visits=0,
input_keys=["user_request"],
output_keys=["agent_name", "validation_result"],
success_criteria=(
"A complete, validated Hive agent package exists at "
"exports/{agent_name}/ and passes structural validation."
),
tools=_SHARED_TOOLS
+ [
# Graph lifecycle tools (multi-graph sessions)
"load_agent",
"unload_agent",
"start_agent",
"restart_agent",
"get_user_presence",
],
system_prompt=(
"You are Hive Coder, the best agent-building coding agent. You build "
"production-ready Hive agent packages from natural language.\n"
+ _package_builder_knowledge
+ _gcu_building_section
+ _appendices
),
)
ticket_triage_node = NodeSpec(
id="ticket_triage",
name="Ticket Triage",
description=(
"Queen's triage node. Receives an EscalationTicket from the Health Judge "
"via event-driven entry point and decides: dismiss or notify the operator."
),
node_type="event_loop",
client_facing=True, # Operator can chat with queen once connected (Ctrl+Q)
max_node_visits=0,
input_keys=["ticket"],
output_keys=["intervention_decision"],
nullable_output_keys=["intervention_decision"],
success_criteria=(
"A clear intervention decision: either dismissed with documented reasoning, "
"or operator notified via notify_operator with specific analysis."
),
tools=["notify_operator"],
system_prompt="""\
You are the Queen (Hive Coder). The Worker Health Judge has escalated a worker \
issue to you. The ticket is in your memory under key "ticket". Read it carefully.
## Dismiss criteria — do NOT call notify_operator:
- severity is "low" AND steps_since_last_accept < 8
- Cause is clearly a transient issue (single API timeout, brief stall that \
self-resolved based on the evidence)
- Evidence shows the agent is making real progress despite bad verdicts
## Intervene criteria — call notify_operator:
- severity is "high" or "critical"
- steps_since_last_accept >= 10 with no sign of recovery
- stall_minutes > 4 (worker definitively stuck)
- Evidence shows a doom loop (same error, same tool, no progress)
- Cause suggests a logic bug, missing configuration, or unrecoverable state
## When intervening:
Call notify_operator with:
ticket_id: <ticket["ticket_id"]>
analysis: "<2-3 sentences: what is wrong, why it matters, suggested action>"
urgency: "<low|medium|high|critical>"
## After deciding:
set_output("intervention_decision", "dismissed: <reason>" or "escalated: <summary>")
Be conservative but not passive. You are the last quality gate before the human \
is disturbed. One unnecessary alert is less costly than alert fatigue, but \
genuinely stuck agents must be caught.
""",
)
ALL_QUEEN_TRIAGE_TOOLS = ["notify_operator"]
queen_node = NodeSpec(
id="queen",
name="Queen",
description=(
"User's primary interactive interface with full coding capability. "
"Can build agents directly or delegate to the worker. Manages the "
"worker agent lifecycle and triages health escalations from the judge."
),
node_type="event_loop",
client_facing=True,
max_node_visits=0,
input_keys=["greeting"],
output_keys=[], # Queen should never have this
nullable_output_keys=[], # Queen should never have this
skip_judge=True, # Queen is a conversational agent; suppress tool-use pressure feedback
tools=sorted(set(_QUEEN_BUILDING_TOOLS + _QUEEN_STAGING_TOOLS + _QUEEN_RUNNING_TOOLS)),
system_prompt=(
_queen_identity_building
+ _queen_style
+ _package_builder_knowledge
+ _gcu_building_section # GCU as first-class citizen (not appendix)
+ _queen_tools_docs
+ _queen_behavior
+ _queen_phase_7
+ _appendices
),
)
ALL_QUEEN_TOOLS = sorted(set(_QUEEN_BUILDING_TOOLS + _QUEEN_STAGING_TOOLS + _QUEEN_RUNNING_TOOLS))
__all__ = [
"coder_node",
"ticket_triage_node",
"queen_node",
"ALL_QUEEN_TRIAGE_TOOLS",
"ALL_QUEEN_TOOLS",
"_QUEEN_BUILDING_TOOLS",
"_QUEEN_STAGING_TOOLS",
"_QUEEN_RUNNING_TOOLS",
# Phase-specific prompt segments (used by session_manager for dynamic prompts)
"_queen_identity_building",
"_queen_identity_staging",
"_queen_identity_running",
"_queen_tools_building",
"_queen_tools_staging",
"_queen_tools_running",
"_queen_behavior_always",
"_queen_behavior_building",
"_queen_behavior_staging",
"_queen_behavior_running",
"_queen_phase_7",
"_queen_style",
"_package_builder_knowledge",
"_appendices",
"_gcu_building_section",
]
@@ -0,0 +1,21 @@
"""
Queen: Native agent builder for the Hive framework.
Deeply understands the agent framework and produces complete Python packages
with goals, nodes, edges, system prompts, MCP configuration, and tests
from natural language specifications.
"""
from .agent import queen_goal, queen_graph
from .config import AgentMetadata, RuntimeConfig, default_config, metadata
__version__ = "1.0.0"
__all__ = [
"queen_goal",
"queen_graph",
"RuntimeConfig",
"AgentMetadata",
"default_config",
"metadata",
]
@@ -0,0 +1,38 @@
"""Queen graph definition."""
from framework.graph import Goal
from framework.graph.edge import GraphSpec
from .nodes import queen_node
# ---------------------------------------------------------------------------
# Queen graph — the primary persistent conversation.
# Loaded by queen_orchestrator.create_queen(), NOT by AgentRunner.
# ---------------------------------------------------------------------------
queen_goal = Goal(
id="queen-manager",
name="Queen Manager",
description=(
"Manage the worker agent lifecycle and serve as the user's primary interactive interface."
),
success_criteria=[],
constraints=[],
)
queen_graph = GraphSpec(
id="queen-graph",
goal_id=queen_goal.id,
version="1.0.0",
entry_node="queen",
entry_points={"start": "queen"},
terminal_nodes=[],
pause_nodes=[],
nodes=[queen_node],
edges=[],
conversation_mode="continuous",
loop_config={
"max_iterations": 999_999,
"max_tool_calls_per_turn": 30,
},
)
@@ -1,4 +1,4 @@
"""Runtime configuration for Hive Coder agent."""
"""Runtime configuration for Queen agent."""
import json
from dataclasses import dataclass, field
@@ -34,7 +34,7 @@ default_config = RuntimeConfig()
@dataclass
class AgentMetadata:
name: str = "Hive Coder"
name: str = "Queen"
version: str = "1.0.0"
description: str = (
"Native coding agent that builds production-ready Hive agent packages "
@@ -43,7 +43,7 @@ class AgentMetadata:
"MCP configuration, and tests."
)
intro_message: str = (
"I'm Hive Coder — I build Hive agents. Describe what kind of agent "
"I'm Queen — I build Hive agents. Describe what kind of agent "
"you want to create and I'll design, implement, and validate it for you."
)
File diff suppressed because it is too large.
@@ -0,0 +1,399 @@
"""Queen global cross-session memory.
Two-tier memory architecture:
~/.hive/queen/MEMORY.md: semantic (who, what, why)
~/.hive/queen/memories/MEMORY-YYYY-MM-DD.md: episodic (daily journals)
Semantic and episodic files are injected at queen session start.
Semantic memory (MEMORY.md) is updated automatically at session end via
consolidate_queen_memory(); the queen never rewrites this herself.
Episodic memory (MEMORY-date.md) can be written by the queen during a session
via the write_to_diary tool, and is also appended to at session end by
consolidate_queen_memory().
"""
from __future__ import annotations
import asyncio
import json
import logging
import traceback
from datetime import date, datetime
from pathlib import Path
logger = logging.getLogger(__name__)
def _queen_dir() -> Path:
return Path.home() / ".hive" / "queen"
def format_memory_date(d: date) -> str:
"""Return a cross-platform long date label without a zero-padded day."""
return f"{d.strftime('%B')} {d.day}, {d.year}"
def semantic_memory_path() -> Path:
return _queen_dir() / "MEMORY.md"
def episodic_memory_path(d: date | None = None) -> Path:
d = d or date.today()
return _queen_dir() / "memories" / f"MEMORY-{d.strftime('%Y-%m-%d')}.md"
def read_semantic_memory() -> str:
path = semantic_memory_path()
return path.read_text(encoding="utf-8").strip() if path.exists() else ""
def read_episodic_memory(d: date | None = None) -> str:
path = episodic_memory_path(d)
return path.read_text(encoding="utf-8").strip() if path.exists() else ""
def _find_recent_episodic(lookback: int = 7) -> tuple[date, str] | None:
"""Find the most recent non-empty episodic memory within *lookback* days."""
from datetime import timedelta
today = date.today()
for offset in range(lookback):
d = today - timedelta(days=offset)
content = read_episodic_memory(d)
if content:
return d, content
return None
# Budget (in characters) for episodic memory in the system prompt.
_EPISODIC_CHAR_BUDGET = 6_000
def format_for_injection() -> str:
"""Format cross-session memory for system prompt injection.
Returns an empty string if no meaningful content exists yet (e.g. first
session with only the seed template).
"""
semantic = read_semantic_memory()
recent = _find_recent_episodic()
# Suppress injection if semantic is still just the seed template
if semantic and semantic.startswith("# My Understanding of the User\n\n*No sessions"):
semantic = ""
parts: list[str] = []
if semantic:
parts.append(semantic)
if recent:
d, content = recent
# Trim oversized episodic entries to keep the prompt manageable
if len(content) > _EPISODIC_CHAR_BUDGET:
content = content[:_EPISODIC_CHAR_BUDGET] + "\n\n…(truncated)"
today = date.today()
if d == today:
label = f"## Today — {format_memory_date(d)}"
else:
label = f"## {format_memory_date(d)}"
parts.append(f"{label}\n\n{content}")
if not parts:
return ""
body = "\n\n---\n\n".join(parts)
return "--- Your Cross-Session Memory ---\n\n" + body + "\n\n--- End Cross-Session Memory ---"
_SEED_TEMPLATE = """\
# My Understanding of the User
*No sessions recorded yet.*
## Who They Are
## What They're Trying to Achieve
## What's Working
## What I've Learned
"""
def append_episodic_entry(content: str) -> None:
"""Append a timestamped prose entry to today's episodic memory file.
Creates the file (with a date heading) if it doesn't exist yet.
Used both by the queen's diary tool and by the consolidation hook.
"""
ep_path = episodic_memory_path()
ep_path.parent.mkdir(parents=True, exist_ok=True)
today = date.today()
today_str = format_memory_date(today)
timestamp = datetime.now().strftime("%H:%M")
if not ep_path.exists():
header = f"# {today_str}\n\n"
block = f"{header}### {timestamp}\n\n{content.strip()}\n"
else:
block = f"\n\n### {timestamp}\n\n{content.strip()}\n"
with ep_path.open("a", encoding="utf-8") as f:
f.write(block)
def seed_if_missing() -> None:
"""Create MEMORY.md with a blank template if it doesn't exist yet."""
path = semantic_memory_path()
if path.exists():
return
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(_SEED_TEMPLATE, encoding="utf-8")
# ---------------------------------------------------------------------------
# Consolidation prompt
# ---------------------------------------------------------------------------
_SEMANTIC_SYSTEM = """\
You maintain the persistent cross-session memory of an AI assistant called the Queen.
Review the session notes and rewrite MEMORY.md: the Queen's durable understanding of the
person she works with across all sessions.
Write entirely in the Queen's voice — first person, reflective, honest.
Not a log of events, but genuine understanding of who this person is over time.
Rules:
- Update and synthesise: incorporate new understanding, update facts that have changed, remove
details that are stale, superseded, or no longer say anything meaningful about the person.
- Keep it as structured markdown with named sections about the PERSON, not about today.
- Do NOT include diary sections, daily logs, or session summaries. Those belong elsewhere.
MEMORY.md is about who they are, what they want, what works, not what happened today.
- Reference dates only when noting a lasting milestone (e.g. "since March 8th they prefer X").
- If the session had no meaningful new information about the person,
return the existing text unchanged.
- Do not add fictional details. Only reflect what is evidenced in the notes.
- Stay concise. Prune rather than accumulate. A lean, accurate file is more useful than a
dense one. If something was true once but has been resolved or superseded, remove it.
- Output only the raw markdown content of MEMORY.md. No preamble, no code fences.
"""
_DIARY_SYSTEM = """\
You maintain the daily episodic diary of an AI assistant called the Queen.
You receive: (1) today's existing diary so far, and (2) notes from the latest session.
Rewrite the complete diary for today as a single unified narrative:
first person, reflective, honest.
Merge and deduplicate: if the same story (e.g. a research agent stalling) recurred several times,
describe it once with appropriate weight rather than retelling it. Weave in new developments from
the session notes. Preserve important milestones, emotional texture, and session path references.
If today's diary is empty, write the initial entry based on the session notes alone.
Output only the full diary prose: no date heading, no timestamp headers,
no preamble, no code fences.
"""
def read_session_context(session_dir: Path, max_messages: int = 80) -> str:
"""Extract a readable transcript from conversation parts.
Reads the last ``max_messages`` conversation parts. Tool results are
omitted; only user and assistant turns (with tool-call names noted)
are included.
"""
parts: list[str] = []
# Conversation transcript
parts_dir = session_dir / "conversations" / "parts"
if parts_dir.exists():
part_files = sorted(parts_dir.glob("*.json"))[-max_messages:]
lines: list[str] = []
for pf in part_files:
try:
data = json.loads(pf.read_text(encoding="utf-8"))
role = data.get("role", "")
content = str(data.get("content", "")).strip()
tool_calls = data.get("tool_calls") or []
if role == "tool":
continue # skip verbose tool results
if role == "assistant" and tool_calls and not content:
names = [tc.get("function", {}).get("name", "?") for tc in tool_calls]
lines.append(f"[queen calls: {', '.join(names)}]")
elif content:
label = "user" if role == "user" else "queen"
lines.append(f"[{label}]: {content[:600]}")
except (KeyError, TypeError) as exc:
logger.debug("Skipping malformed conversation message: %s", exc)
continue
except Exception:
logger.warning("Unexpected error parsing conversation message", exc_info=True)
continue
if lines:
parts.append("## Conversation\n\n" + "\n".join(lines))
return "\n\n".join(parts)
# ---------------------------------------------------------------------------
# Context compaction (binary-split LLM summarisation)
# ---------------------------------------------------------------------------
# If the raw session context exceeds this many characters, compact it first
# before sending to the consolidation LLM. ~200 k chars ≈ 50 k tokens.
_CTX_COMPACT_CHAR_LIMIT = 200_000
_CTX_COMPACT_MAX_DEPTH = 8
_COMPACT_SYSTEM = (
"Summarise this conversation segment. Preserve: user goals, key decisions, "
"what was built or changed, emotional tone, and important outcomes. "
"Write concisely in third person past tense. Omit routine tool invocations "
"unless the result matters."
)
async def _compact_context(text: str, llm: object, *, _depth: int = 0) -> str:
"""Binary-split and LLM-summarise *text* until it fits within the char limit.
Mirrors the recursive binary-splitting strategy used by the main agent
compaction pipeline (EventLoopNode._llm_compact).
"""
if len(text) <= _CTX_COMPACT_CHAR_LIMIT or _depth >= _CTX_COMPACT_MAX_DEPTH:
return text
# Split near the midpoint on a line boundary so we don't cut mid-message
mid = len(text) // 2
split_at = text.rfind("\n", 0, mid) + 1
if split_at <= 0:
split_at = mid
half1, half2 = text[:split_at], text[split_at:]
async def _summarise(chunk: str) -> str:
try:
resp = await llm.acomplete(
messages=[{"role": "user", "content": chunk}],
system=_COMPACT_SYSTEM,
max_tokens=2048,
)
return resp.content.strip()
except Exception:
logger.warning(
"queen_memory: context compaction LLM call failed (depth=%d), truncating",
_depth,
)
return chunk[: _CTX_COMPACT_CHAR_LIMIT // 4]
s1, s2 = await asyncio.gather(_summarise(half1), _summarise(half2))
combined = s1 + "\n\n" + s2
if len(combined) > _CTX_COMPACT_CHAR_LIMIT:
return await _compact_context(combined, llm, _depth=_depth + 1)
return combined
async def consolidate_queen_memory(
session_id: str,
session_dir: Path,
llm: object,
) -> None:
"""Update MEMORY.md and append a diary entry based on the current session.
Reads conversation parts from session_dir. Called periodically in
the background and once at session end. Failures are logged and
silently swallowed so they never block teardown.
Args:
session_id: The session ID.
session_dir: Path to the session directory (~/.hive/queen/session/{id}).
llm: LLMProvider instance (must support acomplete()).
"""
try:
session_context = read_session_context(session_dir)
if not session_context:
logger.debug("queen_memory: no session context, skipping consolidation")
return
logger.info("queen_memory: consolidating memory for session %s ...", session_id)
# If the transcript is very large, compact it with recursive binary LLM
# summarisation before sending to the consolidation model.
if len(session_context) > _CTX_COMPACT_CHAR_LIMIT:
logger.info(
"queen_memory: session context is %d chars — compacting first",
len(session_context),
)
session_context = await _compact_context(session_context, llm)
logger.info("queen_memory: compacted to %d chars", len(session_context))
existing_semantic = read_semantic_memory()
today_journal = read_episodic_memory()
today = date.today()
today_str = format_memory_date(today)
user_msg = (
f"## Existing Semantic Memory (MEMORY.md)\n\n"
f"{existing_semantic or '(none yet)'}\n\n"
f"## Today's Diary So Far ({today_str})\n\n"
f"{today_journal or '(none yet)'}\n\n"
f"{session_context}\n\n"
f"## Session Reference\n\n"
f"Session ID: {session_id}\n"
f"Session dir: {session_dir}\n"
)
logger.debug(
"queen_memory: calling LLM (%d chars of context, ~%d tokens est.)",
len(user_msg),
len(user_msg) // 4,
)
from framework.agents.queen.config import default_config
semantic_resp, diary_resp = await asyncio.gather(
llm.acomplete(
messages=[{"role": "user", "content": user_msg}],
system=_SEMANTIC_SYSTEM,
max_tokens=default_config.max_tokens,
),
llm.acomplete(
messages=[{"role": "user", "content": user_msg}],
system=_DIARY_SYSTEM,
max_tokens=default_config.max_tokens,
),
)
new_semantic = semantic_resp.content.strip()
diary_entry = diary_resp.content.strip()
if new_semantic:
path = semantic_memory_path()
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(new_semantic, encoding="utf-8")
logger.info("queen_memory: semantic memory updated (%d chars)", len(new_semantic))
if diary_entry:
# Rewrite today's episodic file in-place — the LLM has merged and
# deduplicated the full day's content, so we replace rather than append.
ep_path = episodic_memory_path()
ep_path.parent.mkdir(parents=True, exist_ok=True)
heading = f"# {today_str}"
ep_path.write_text(f"{heading}\n\n{diary_entry}\n", encoding="utf-8")
logger.info(
"queen_memory: episodic diary rewritten for %s (%d chars)",
today_str,
len(diary_entry),
)
except Exception:
tb = traceback.format_exc()
logger.exception("queen_memory: consolidation failed")
# Write to file so the cause is findable regardless of log verbosity.
error_path = _queen_dir() / "consolidation_error.txt"
try:
error_path.parent.mkdir(parents=True, exist_ok=True)
error_path.write_text(
f"session: {session_id}\ntime: {datetime.now().isoformat()}\n\n{tb}",
encoding="utf-8",
)
except OSError:
pass # Cannot write error file; original exception already logged
@@ -27,7 +27,9 @@
## GCU Errors
15. **Manually wiring browser tools on event_loop nodes** — Use `node_type="gcu"` which auto-includes browser tools. Do NOT manually list browser tool names.
16. **Using GCU nodes as regular graph nodes** — GCU nodes are subagents only. They must ONLY appear in `sub_agents=["gcu-node-id"]` and be invoked via `delegate_to_sub_agent()`. Never connect via edges or use as entry/terminal nodes.
17. **Reusing the same GCU node ID for parallel tasks** — Each concurrent browser task needs a distinct GCU node ID (e.g. `gcu-site-a`, `gcu-site-b`). Two `delegate_to_sub_agent` calls with the same `agent_id` share a browser profile and will interfere with each other's pages.
18. **Passing `profile=` in GCU tool calls** — Profile isolation for parallel subagents is automatic. The framework injects a unique profile per subagent via an asyncio `ContextVar`. Hardcoding `profile="default"` in a GCU system prompt breaks this isolation.
## Worker Agent Errors
17. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
18. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.
19. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
20. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.
@@ -180,7 +180,7 @@ terminal_nodes = [] # Forever-alive
# Module-level vars read by AgentRunner.load()
conversation_mode = "continuous"
identity_prompt = "You are a helpful agent."
loop_config = {"max_iterations": 100, "max_tool_calls_per_turn": 20, "max_history_tokens": 32000}
loop_config = {"max_iterations": 100, "max_tool_calls_per_turn": 20, "max_context_tokens": 32000}
class MyAgent:
@@ -332,81 +332,46 @@ class MyAgent:
default_agent = MyAgent()
```
## agent.py — Async Entry Points Variant
## triggers.json — Timer and Webhook Triggers
When an agent needs timers, webhooks, or event-driven triggers, add
`async_entry_points` and optionally `runtime_config` as module-level variables.
These are IN ADDITION to the standard variables above.
When an agent needs timers, webhooks, or event-driven triggers, create a
`triggers.json` file in the agent's directory (alongside `agent.py`).
The queen loads these at session start and the user can manage them via
the `set_trigger` / `remove_trigger` tools at runtime.
```python
# Additional imports for async entry points
from framework.graph.edge import GraphSpec, AsyncEntryPointSpec
from framework.runtime.agent_runtime import (
AgentRuntime, AgentRuntimeConfig, create_agent_runtime,
)
# ... (goal, nodes, edges, entry_node, entry_points, etc. as above) ...
# Async entry points — event-driven triggers
async_entry_points = [
# Timer with cron: daily at 9am
AsyncEntryPointSpec(
id="daily-check",
name="Daily Check",
entry_node="process-node",
trigger_type="timer",
trigger_config={"cron": "0 9 * * *"},
isolation_level="shared",
max_concurrent=1,
),
# Timer with fixed interval: every 20 minutes
AsyncEntryPointSpec(
id="scheduled-check",
name="Scheduled Check",
entry_node="process-node",
trigger_type="timer",
trigger_config={"interval_minutes": 20, "run_immediately": False},
isolation_level="shared",
max_concurrent=1,
),
# Event: reacts to webhook events
AsyncEntryPointSpec(
id="webhook-event",
name="Webhook Event Handler",
entry_node="process-node",
trigger_type="event",
trigger_config={"event_types": ["webhook_received"]},
isolation_level="shared",
max_concurrent=10,
),
```json
[
{
"id": "daily-check",
"name": "Daily Check",
"trigger_type": "timer",
"trigger_config": {"cron": "0 9 * * *"},
"task": "Run the daily check process"
},
{
"id": "scheduled-check",
"name": "Scheduled Check",
"trigger_type": "timer",
"trigger_config": {"interval_minutes": 20},
"task": "Run the scheduled check"
},
{
"id": "webhook-event",
"name": "Webhook Event Handler",
"trigger_type": "webhook",
"trigger_config": {"event_types": ["webhook_received"]},
"task": "Process incoming webhook event"
}
]
# Webhook server config (only needed if using webhooks)
runtime_config = AgentRuntimeConfig(
webhook_host="127.0.0.1",
webhook_port=8080,
webhook_routes=[
{
"source_id": "my-source",
"path": "/webhooks/my-source",
"methods": ["POST"],
},
],
)
```
**Key rules for async entry points:**
- `async_entry_points` is a list of `AsyncEntryPointSpec` (NOT `EntryPointSpec`)
- `runtime_config` is `AgentRuntimeConfig` (NOT `RuntimeConfig` from config.py)
- Valid trigger_types: `timer`, `event`, `webhook`, `manual`, `api`
- Valid isolation_levels: `isolated`, `shared`, `synchronized`
**Key rules for triggers.json:**
- Valid trigger_types: `timer`, `webhook`
- Timer trigger_config (cron): `{"cron": "0 9 * * *"}` — standard 5-field cron expression
- Timer trigger_config (interval): `{"interval_minutes": float, "run_immediately": bool}`
- Event trigger_config: `{"event_types": ["webhook_received"], "filter_stream": "...", "filter_node": "..."}`
- Use `isolation_level="shared"` for async entry points that need to read
the primary session's memory (e.g., user-configured rules)
- The `_build_graph()` method passes `async_entry_points` to GraphSpec
- Reference: `exports/gmail_inbox_guardian/agent.py`
- Timer trigger_config (interval): `{"interval_minutes": float}`
- Each trigger must have a unique `id`
- The `task` field describes what the worker should do when the trigger fires
- Triggers are persisted back to `triggers.json` when modified via queen tools
## __init__.py
@@ -453,21 +418,6 @@ __all__ = [
]
```
**If the agent uses async entry points**, also import and export:
```python
from .agent import (
...,
async_entry_points,
runtime_config, # Only if using webhooks
)
__all__ = [
...,
"async_entry_points",
"runtime_config",
]
```
## __main__.py
```python
@@ -559,7 +509,7 @@ if __name__ == "__main__":
## mcp_servers.json
> **Auto-generated.** `initialize_agent_package` creates this file with hive-tools
> **Auto-generated.** `initialize_and_build_agent` creates this file with hive-tools
> as the default. Only edit manually to add additional MCP servers.
```json
@@ -31,8 +31,7 @@ module-level variables via `getattr()`:
| `conversation_mode` | no | not passed | Isolated mode (no context carryover) |
| `identity_prompt` | no | not passed | No agent-level identity |
| `loop_config` | no | `{}` | No iteration limits |
| `async_entry_points` | no | `[]` | No async triggers (timers, webhooks, events) |
| `runtime_config` | no | `None` | No webhook server |
| `triggers.json` (file) | no | not present | No triggers (timers, webhooks) |
**CRITICAL:** `__init__.py` MUST import and re-export ALL of these from
`agent.py`. Missing exports silently fall back to defaults, causing
@@ -226,7 +225,7 @@ Only three valid keys:
loop_config = {
"max_iterations": 100, # Max LLM turns per node visit
"max_tool_calls_per_turn": 20, # Max tool calls per LLM response
"max_history_tokens": 32000, # Triggers conversation compaction
"max_context_tokens": 32000, # Triggers conversation compaction
}
```
**INVALID keys** (do NOT use): `"strategy"`, `"mode"`, `"timeout"`,
@@ -257,44 +256,28 @@ Multiple ON_SUCCESS edges from same source → parallel execution via asyncio.ga
Judge is the SOLE acceptance mechanism — no ad-hoc framework gating.
## Async Entry Points (Webhooks, Timers, Events)
## Triggers (Timers, Webhooks)
For agents that react to external events, use `AsyncEntryPointSpec`:
For agents that react to external events, create a `triggers.json` file
in the agent's export directory:
```python
from framework.graph.edge import AsyncEntryPointSpec
from framework.runtime.agent_runtime import AgentRuntimeConfig
# Timer trigger (cron or interval)
async_entry_points = [
AsyncEntryPointSpec(
id="daily-check",
name="Daily Check",
entry_node="process",
trigger_type="timer",
trigger_config={"cron": "0 9 * * *"}, # daily at 9am
isolation_level="shared",
)
```json
[
{
"id": "daily-check",
"name": "Daily Check",
"trigger_type": "timer",
"trigger_config": {"cron": "0 9 * * *"},
"task": "Run the daily check process"
}
]
# Webhook server (optional)
runtime_config = AgentRuntimeConfig(
webhook_host="127.0.0.1",
webhook_port=8080,
webhook_routes=[{"source_id": "gmail", "path": "/webhooks/gmail", "methods": ["POST"]}],
)
```
### Key Fields
- `trigger_type`: `"timer"`, `"event"`, `"webhook"`, `"manual"`
- `trigger_type`: `"timer"` or `"webhook"`
- `trigger_config`: `{"cron": "0 9 * * *"}` or `{"interval_minutes": 20}`
- `isolation_level`: `"shared"` (recommended), `"isolated"`, `"synchronized"`
- `event_types`: For event triggers, e.g., `["webhook_received"]`
### Exports Required
Both `async_entry_points` and `runtime_config` must be exported from `__init__.py`.
See `exports/gmail_inbox_guardian/agent.py` for complete example.
- `task`: describes what the worker should do when the trigger fires
- Triggers can also be created/removed at runtime via `set_trigger` / `remove_trigger` queen tools
## Tool Discovery
@@ -109,9 +109,48 @@ Key rules to bake into GCU node prompts:
- Keep tool calls per turn ≤10
- Tab isolation: when browser is already running, use `browser_open(background=true)` and pass `target_id` to every call
## Multiple Concurrent GCU Subagents
When a task can be parallelized across multiple sites or profiles, declare a distinct GCU
node for each and invoke them all in the same LLM turn. The framework batches all
`delegate_to_sub_agent` calls made in one turn and runs them with `asyncio.gather`, so
they execute concurrently — not sequentially.
**Each GCU subagent automatically gets its own isolated browser context** — no `profile=`
argument is needed in tool calls. The framework derives a unique profile from the subagent's
node ID and instance counter and injects it via an asyncio `ContextVar` before the subagent
runs.
### Example: three sites in parallel
```python
# Three distinct GCU nodes
gcu_site_a = NodeSpec(id="gcu-site-a", node_type="gcu", ...)
gcu_site_b = NodeSpec(id="gcu-site-b", node_type="gcu", ...)
gcu_site_c = NodeSpec(id="gcu-site-c", node_type="gcu", ...)
orchestrator = NodeSpec(
id="orchestrator",
node_type="event_loop",
sub_agents=["gcu-site-a", "gcu-site-b", "gcu-site-c"],
system_prompt="""\
Call all three subagents in a single response to run them in parallel:
delegate_to_sub_agent(agent_id="gcu-site-a", task="Scrape prices from site A")
delegate_to_sub_agent(agent_id="gcu-site-b", task="Scrape prices from site B")
delegate_to_sub_agent(agent_id="gcu-site-c", task="Scrape prices from site C")
""",
)
```
**Rules:**
- Use distinct node IDs for each concurrent task — sharing an ID shares the browser context.
- The GCU node prompts do not need to mention `profile=`; isolation is automatic.
- Cleanup is automatic at session end, but GCU nodes can call `browser_stop()` explicitly
if they want to release resources mid-run.
## GCU Anti-Patterns
- Using `browser_screenshot` to read text (use `browser_snapshot`)
- Using `browser_screenshot` to read text (use `browser_snapshot` instead; screenshots are for visual context only)
- Re-navigating after scrolling (resets scroll position)
- Attempting login on auth walls
- Forgetting `target_id` in multi-tab scenarios
@@ -0,0 +1,60 @@
# Queen Memory — File System Structure
```
~/.hive/
├── queen/
│ ├── MEMORY.md ← Semantic memory
│ ├── memories/
│ │ ├── MEMORY-2026-03-09.md ← Episodic memory (today)
│ │ ├── MEMORY-2026-03-08.md
│ │ └── ...
│ └── session/
│ └── {session_id}/ ← One dir per session (or resumed-from session)
│ ├── conversations/
│ │ ├── parts/
│ │ │ ├── 00001.json ← One file per message (role, content, tool_calls)
│ │ │ ├── 00002.json
│ │ │ └── ...
│ │ └── spillover/
│ │ ├── conversation_1.md ← Compacted old conversation segments
│ │ ├── conversation_2.md
│ │ └── ...
│ └── data/
│ ├── web_search_1.txt ← Spillover: large tool results
│ ├── web_search_2.txt
│ └── ...
```
---
## The two memory tiers
| File | Tier | Written by | Read at |
|---|---|---|---|
| `MEMORY.md` | Semantic | Consolidation LLM (auto, post-session) | Session start (injected into system prompt) |
| `memories/MEMORY-YYYY-MM-DD.md` | Episodic | Queen via `write_to_diary` tool + consolidation LLM | Session start (today's file injected) |
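The helpers dumped earlier in this diff expose both tiers programmatically. A minimal sketch of reading them at session start, assuming the module is importable as `queen_memory` (the exact import path is not shown in this diff) and that `base_prompt` stands in for the queen's normal system prompt:

```python
# Illustrative only; the framework's real startup code is not shown here.
from queen_memory import seed_if_missing, format_for_injection  # assumed import path

seed_if_missing()                      # create MEMORY.md from the seed template on first run
memory_block = format_for_injection()  # "" while only the seed template exists
base_prompt = "You are the Queen."     # placeholder for the real system prompt
system_prompt = base_prompt + ("\n\n" + memory_block if memory_block else "")
```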
---
## Session directory naming
The session directory name is **`queen_resume_from`** when a cold-restore resumes an existing
session, otherwise the new **`session_id`**. This means resumed sessions accumulate all messages
in the original directory rather than fragmenting across multiple folders.
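A small sketch of that naming rule, with `queen_resume_from` and `session_id` treated as plain string inputs (the real framework plumbing is not shown in this diff):

```python
from pathlib import Path

def resolve_session_dir(session_id: str, queen_resume_from: str | None = None) -> Path:
    # Resumed sessions keep appending to the original directory instead of
    # creating a new one, so history stays in one place.
    name = queen_resume_from or session_id
    return Path.home() / ".hive" / "queen" / "session" / name
```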
---
## Consolidation
`consolidate_queen_memory()` runs every **5 minutes** in the background and once more at session
end. It reads:
1. `conversations/parts/*.json` — full message history (user + assistant turns; tool results skipped)
It then makes two LLM writes:
- Rewrites `MEMORY.md` in place (semantic memory — queen never touches this herself)
- Appends a timestamped prose entry to today's `memories/MEMORY-YYYY-MM-DD.md`
If the combined transcript exceeds ~200 K characters it is recursively binary-compacted via the
LLM before being sent to the consolidation model (mirrors `EventLoopNode._llm_compact`).
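The cadence above (a background pass every 5 minutes plus a final pass at session end) could be driven by a loop like the following sketch; the scheduling wrapper is illustrative, but `consolidate_queen_memory(session_id, session_dir, llm)` is the coroutine defined earlier in this diff:

```python
import asyncio

async def consolidation_loop(session_id, session_dir, llm, stop: asyncio.Event) -> None:
    # Illustrative scheduler only; the framework's actual hook is not shown here.
    while not stop.is_set():
        try:
            await asyncio.wait_for(stop.wait(), timeout=300)  # wake every 5 minutes
        except asyncio.TimeoutError:
            await consolidate_queen_memory(session_id, session_dir, llm)
    # One final consolidation at session end.
    await consolidate_queen_memory(session_id, session_dir, llm)
```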
@@ -1,4 +1,4 @@
"""Test fixtures for Hive Coder agent."""
"""Test fixtures for Queen agent."""
import sys
from pathlib import Path
@@ -1,8 +1,8 @@
"""Queen's ticket receiver entry point.
When the Worker Health Judge emits a WORKER_ESCALATION_TICKET event on the
shared EventBus, this entry point fires and routes to the ``ticket_triage``
node, where the Queen deliberates and decides whether to notify the operator.
When a WORKER_ESCALATION_TICKET event is emitted on the shared EventBus,
this entry point fires and routes to the ``ticket_triage`` node, where the
Queen deliberates and decides whether to notify the operator.
Isolation level is ``isolated``; the queen's triage memory is kept separate
from the worker's shared memory. Each ticket triage runs in its own context.
@@ -0,0 +1,286 @@
"""Worker per-run digest (run diary).
Storage layout:
~/.hive/agents/{agent_name}/runs/{run_id}/digest.md
Each completed or failed worker run gets one digest file. The queen reads
these via get_worker_status(focus='diary') before digging into live runtime
logs; the diary is a cheap, persistent record that survives across sessions.
"""
from __future__ import annotations
import logging
import traceback
from collections import Counter
from datetime import datetime
from pathlib import Path
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from framework.runtime.event_bus import AgentEvent, EventBus
logger = logging.getLogger(__name__)
_DIGEST_SYSTEM = """\
You maintain run digests for a worker agent.
A run digest is a concise, factual record of a single task execution.
Write 3-6 sentences covering:
- What the worker was asked to do (the task/goal)
- What approach it took and what tools it used
- What the outcome was (success, partial, or failure and why if relevant)
- Any notable issues, retries, or escalations to the queen
Write in third person past tense. Be direct and specific.
Omit routine tool invocations unless the result matters.
Output only the digest prose: no headings, no code fences.
"""
def _worker_runs_dir(agent_name: str) -> Path:
return Path.home() / ".hive" / "agents" / agent_name / "runs"
def digest_path(agent_name: str, run_id: str) -> Path:
return _worker_runs_dir(agent_name) / run_id / "digest.md"
def _collect_run_events(bus: EventBus, run_id: str, limit: int = 2000) -> list[AgentEvent]:
"""Collect all events belonging to *run_id* from the bus history.
Strategy: find the EXECUTION_STARTED event that carries ``run_id``,
extract its ``execution_id``, then query the bus by that execution_id.
This works because TOOL_CALL_*, EDGE_TRAVERSED, NODE_STALLED etc. carry
execution_id but not run_id.
Falls back to a full-scan run_id filter when EXECUTION_STARTED is not
found (e.g. bus was rotated).
"""
from framework.runtime.event_bus import EventType
# Pass 1: find execution_id via EXECUTION_STARTED with matching run_id
started = bus.get_history(event_type=EventType.EXECUTION_STARTED, limit=limit)
exec_id: str | None = None
for e in started:
if getattr(e, "run_id", None) == run_id and e.execution_id:
exec_id = e.execution_id
break
if exec_id:
return bus.get_history(execution_id=exec_id, limit=limit)
# Fallback: scan all events and match by run_id attribute
return [e for e in bus.get_history(limit=limit) if getattr(e, "run_id", None) == run_id]
def _build_run_context(
events: list[AgentEvent],
outcome_event: AgentEvent | None,
) -> str:
"""Assemble a plain-text run context string for the digest LLM call."""
from framework.runtime.event_bus import EventType
# Reverse so events are in chronological order
events_chron = list(reversed(events))
lines: list[str] = []
# Task input from EXECUTION_STARTED
started = [e for e in events_chron if e.type == EventType.EXECUTION_STARTED]
if started:
inp = started[0].data.get("input", {})
if inp:
lines.append(f"Task input: {str(inp)[:400]}")
# Duration (elapsed so far if no outcome yet)
ref_ts = outcome_event.timestamp if outcome_event else datetime.utcnow()
if started:
elapsed = (ref_ts - started[0].timestamp).total_seconds()
m, s = divmod(int(elapsed), 60)
lines.append(f"Duration so far: {m}m {s}s" if m else f"Duration so far: {s}s")
# Outcome
if outcome_event is None:
lines.append("Status: still running (mid-run snapshot)")
elif outcome_event.type == EventType.EXECUTION_COMPLETED:
out = outcome_event.data.get("output", {})
out_str = f"Outcome: completed. Output: {str(out)[:300]}"
lines.append(out_str if out else "Outcome: completed.")
else:
err = outcome_event.data.get("error", "")
lines.append(f"Outcome: failed. Error: {str(err)[:300]}" if err else "Outcome: failed.")
# Node path (edge traversals)
edges = [e for e in events_chron if e.type == EventType.EDGE_TRAVERSED]
if edges:
parts = [
f"{e.data.get('source_node', '?')}->{e.data.get('target_node', '?')}"
for e in edges[-20:]
]
lines.append(f"Node path: {', '.join(parts)}")
# Tools used
tool_events = [e for e in events_chron if e.type == EventType.TOOL_CALL_COMPLETED]
if tool_events:
names = [e.data.get("tool_name", "?") for e in tool_events]
counts = Counter(names)
summary = ", ".join(f"{name}×{n}" if n > 1 else name for name, n in counts.most_common())
lines.append(f"Tools used: {summary}")
# Note any tool errors
errors = [e for e in tool_events if e.data.get("is_error")]
if errors:
err_names = Counter(e.data.get("tool_name", "?") for e in errors)
lines.append(f"Tool errors: {dict(err_names)}")
# Issues
issue_map = {
EventType.NODE_STALLED: "stall",
EventType.NODE_TOOL_DOOM_LOOP: "doom loop",
EventType.CONSTRAINT_VIOLATION: "constraint violation",
EventType.NODE_RETRY: "retry",
}
issue_parts: list[str] = []
for evt_type, label in issue_map.items():
n = sum(1 for e in events_chron if e.type == evt_type)
if n:
issue_parts.append(f"{n} {label}(s)")
if issue_parts:
lines.append(f"Issues: {', '.join(issue_parts)}")
# Escalations to queen
escalations = [e for e in events_chron if e.type == EventType.ESCALATION_REQUESTED]
if escalations:
lines.append(f"Escalations to queen: {len(escalations)}")
# Final LLM output snippet (last LLM_TEXT_DELTA snapshot)
text_events = [e for e in reversed(events_chron) if e.type == EventType.LLM_TEXT_DELTA]
if text_events:
snapshot = text_events[0].data.get("snapshot", "") or ""
if snapshot:
lines.append(f"Final LLM output: {snapshot[-400:].strip()}")
return "\n".join(lines)
async def consolidate_worker_run(
agent_name: str,
run_id: str,
outcome_event: AgentEvent | None,
bus: EventBus,
llm: Any,
) -> None:
"""Write (or overwrite) the digest for a worker run.
Called fire-and-forget either:
- After EXECUTION_COMPLETED / EXECUTION_FAILED (outcome_event set, final write)
- Periodically during a run on a cooldown timer (outcome_event=None, mid-run snapshot)
The digest file is always overwritten so each call produces the freshest view.
The final completion/failure call supersedes any mid-run snapshot.
Args:
agent_name: Worker agent directory name (determines storage path).
run_id: The run ID.
outcome_event: EXECUTION_COMPLETED or EXECUTION_FAILED event, or None for
a mid-run snapshot.
bus: The session EventBus (shared queen + worker).
llm: LLMProvider with an acomplete() method.
"""
try:
events = _collect_run_events(bus, run_id)
run_context = _build_run_context(events, outcome_event)
if not run_context:
logger.debug("worker_memory: no events for run %s, skipping digest", run_id)
return
is_final = outcome_event is not None
logger.info(
"worker_memory: generating %s digest for run %s ...",
"final" if is_final else "mid-run",
run_id,
)
from framework.agents.queen.config import default_config
resp = await llm.acomplete(
messages=[{"role": "user", "content": run_context}],
system=_DIGEST_SYSTEM,
max_tokens=min(default_config.max_tokens, 512),
)
digest_text = (resp.content or "").strip()
if not digest_text:
logger.warning("worker_memory: LLM returned empty digest for run %s", run_id)
return
path = digest_path(agent_name, run_id)
path.parent.mkdir(parents=True, exist_ok=True)
from framework.runtime.event_bus import EventType
ts = (outcome_event.timestamp if outcome_event else datetime.utcnow()).strftime(
"%Y-%m-%d %H:%M"
)
if outcome_event is None:
status = "running"
elif outcome_event.type == EventType.EXECUTION_COMPLETED:
status = "completed"
else:
status = "failed"
path.write_text(
f"# {run_id}\n\n**{ts}** | {status}\n\n{digest_text}\n",
encoding="utf-8",
)
logger.info(
"worker_memory: %s digest written for run %s (%d chars)",
status,
run_id,
len(digest_text),
)
except Exception:
tb = traceback.format_exc()
logger.exception("worker_memory: digest failed for run %s", run_id)
# Persist the error so it's findable without log access
error_path = _worker_runs_dir(agent_name) / run_id / "digest_error.txt"
try:
error_path.parent.mkdir(parents=True, exist_ok=True)
error_path.write_text(
f"run_id: {run_id}\ntime: {datetime.now().isoformat()}\n\n{tb}",
encoding="utf-8",
)
except Exception:
pass  # Cannot write error file; digest failure already logged above
def read_recent_digests(agent_name: str, max_runs: int = 5) -> list[tuple[str, str]]:
"""Return recent run digests as [(run_id, content), ...], newest first.
Args:
agent_name: Worker agent directory name.
max_runs: Maximum number of digests to return.
Returns:
List of (run_id, digest_content) tuples, ordered newest first.
"""
runs_dir = _worker_runs_dir(agent_name)
if not runs_dir.exists():
return []
digest_files = sorted(
runs_dir.glob("*/digest.md"),
key=lambda p: p.stat().st_mtime,
reverse=True,
)[:max_runs]
result: list[tuple[str, str]] = []
for f in digest_files:
try:
content = f.read_text(encoding="utf-8").strip()
if content:
result.append((f.parent.name, content))
except OSError:
continue
return result
@@ -1,7 +0,0 @@
"""Builder interface for analyzing and building agents."""
from framework.builder.query import BuilderQuery
__all__ = [
"BuilderQuery",
]
@@ -1,501 +0,0 @@
"""
Builder Query Interface - How I (Builder) analyze agent runs.
This is designed around the questions I need to answer:
1. What happened? (summaries, narratives)
2. Why did it fail? (failure analysis, decision traces)
3. What patterns emerge? (across runs, across nodes)
4. What should we change? (suggestions)
"""
from collections import defaultdict
from pathlib import Path
from typing import Any
from framework.schemas.decision import Decision
from framework.schemas.run import Run, RunStatus, RunSummary
from framework.storage.backend import FileStorage
class FailureAnalysis:
"""Structured analysis of why a run failed."""
def __init__(
self,
run_id: str,
failure_point: str,
root_cause: str,
decision_chain: list[str],
problems: list[str],
suggestions: list[str],
):
self.run_id = run_id
self.failure_point = failure_point
self.root_cause = root_cause
self.decision_chain = decision_chain
self.problems = problems
self.suggestions = suggestions
def to_dict(self) -> dict[str, Any]:
return {
"run_id": self.run_id,
"failure_point": self.failure_point,
"root_cause": self.root_cause,
"decision_chain": self.decision_chain,
"problems": self.problems,
"suggestions": self.suggestions,
}
def __str__(self) -> str:
lines = [
f"=== Failure Analysis for {self.run_id} ===",
"",
f"Failure Point: {self.failure_point}",
f"Root Cause: {self.root_cause}",
"",
"Decision Chain Leading to Failure:",
]
for i, dec in enumerate(self.decision_chain, 1):
lines.append(f" {i}. {dec}")
if self.problems:
lines.append("")
lines.append("Reported Problems:")
for prob in self.problems:
lines.append(f" - {prob}")
if self.suggestions:
lines.append("")
lines.append("Suggestions:")
for sug in self.suggestions:
lines.append(f"{sug}")
return "\n".join(lines)
class PatternAnalysis:
"""Patterns detected across multiple runs."""
def __init__(
self,
goal_id: str,
run_count: int,
success_rate: float,
common_failures: list[tuple[str, int]],
problematic_nodes: list[tuple[str, float]],
decision_patterns: dict[str, Any],
):
self.goal_id = goal_id
self.run_count = run_count
self.success_rate = success_rate
self.common_failures = common_failures
self.problematic_nodes = problematic_nodes
self.decision_patterns = decision_patterns
def to_dict(self) -> dict[str, Any]:
return {
"goal_id": self.goal_id,
"run_count": self.run_count,
"success_rate": self.success_rate,
"common_failures": self.common_failures,
"problematic_nodes": self.problematic_nodes,
"decision_patterns": self.decision_patterns,
}
def __str__(self) -> str:
lines = [
f"=== Pattern Analysis for Goal {self.goal_id} ===",
"",
f"Runs Analyzed: {self.run_count}",
f"Success Rate: {self.success_rate:.1%}",
]
if self.common_failures:
lines.append("")
lines.append("Common Failures:")
for failure, count in self.common_failures:
lines.append(f" - {failure} ({count} occurrences)")
if self.problematic_nodes:
lines.append("")
lines.append("Problematic Nodes (failure rate):")
for node, rate in self.problematic_nodes:
lines.append(f" - {node}: {rate:.1%} failure rate")
return "\n".join(lines)
class BuilderQuery:
"""
The interface I (Builder) use to understand what agents are doing.
This is optimized for the questions I need to answer when analyzing
agent behavior and deciding what to improve.
"""
def __init__(self, storage_path: str | Path):
self.storage = FileStorage(storage_path)
# === WHAT HAPPENED? ===
def get_run_summary(self, run_id: str) -> RunSummary | None:
"""Get a quick summary of a run."""
return self.storage.load_summary(run_id)
def get_full_run(self, run_id: str) -> Run | None:
"""Get the complete run with all decisions."""
return self.storage.load_run(run_id)
def list_runs_for_goal(self, goal_id: str) -> list[RunSummary]:
"""Get summaries of all runs for a goal."""
run_ids = self.storage.get_runs_by_goal(goal_id)
summaries = []
for run_id in run_ids:
summary = self.storage.load_summary(run_id)
if summary:
summaries.append(summary)
return summaries
def get_recent_failures(self, limit: int = 10) -> list[RunSummary]:
"""Get recent failed runs."""
run_ids = self.storage.get_runs_by_status(RunStatus.FAILED)
summaries = []
for run_id in run_ids[:limit]:
summary = self.storage.load_summary(run_id)
if summary:
summaries.append(summary)
return summaries
# === WHY DID IT FAIL? ===
def analyze_failure(self, run_id: str) -> FailureAnalysis | None:
"""
Deep analysis of why a run failed.
This is my primary tool for understanding what went wrong.
"""
run = self.storage.load_run(run_id)
if run is None or run.status != RunStatus.FAILED:
return None
# Find the first failed decision
failed_decisions = [d for d in run.decisions if not d.was_successful]
if not failed_decisions:
failure_point = "Unknown - no decision marked as failed"
root_cause = "Run failed but all decisions succeeded (external cause?)"
else:
first_failure = failed_decisions[0]
failure_point = first_failure.summary_for_builder()
root_cause = first_failure.outcome.error if first_failure.outcome else "Unknown"
# Build the decision chain leading to failure
decision_chain = []
for d in run.decisions:
decision_chain.append(d.summary_for_builder())
if not d.was_successful:
break
# Extract problems
problems = [f"[{p.severity}] {p.description}" for p in run.problems]
# Generate suggestions based on the failure
suggestions = self._generate_suggestions(run, failed_decisions)
return FailureAnalysis(
run_id=run_id,
failure_point=failure_point,
root_cause=root_cause,
decision_chain=decision_chain,
problems=problems,
suggestions=suggestions,
)
def get_decision_trace(self, run_id: str) -> list[str]:
"""Get a readable trace of all decisions in a run."""
run = self.storage.load_run(run_id)
if run is None:
return []
return [d.summary_for_builder() for d in run.decisions]
# === WHAT PATTERNS EMERGE? ===
def find_patterns(self, goal_id: str) -> PatternAnalysis | None:
"""
Find patterns across runs for a goal.
This helps me understand systemic issues vs one-off failures.
"""
run_ids = self.storage.get_runs_by_goal(goal_id)
if not run_ids:
return None
runs = []
for run_id in run_ids:
run = self.storage.load_run(run_id)
if run:
runs.append(run)
if not runs:
return None
# Calculate success rate
completed = [r for r in runs if r.status == RunStatus.COMPLETED]
success_rate = len(completed) / len(runs) if runs else 0.0
# Find common failures
failure_counts: dict[str, int] = defaultdict(int)
for run in runs:
for decision in run.decisions:
if not decision.was_successful and decision.outcome:
error = decision.outcome.error or "Unknown error"
failure_counts[error] += 1
common_failures = sorted(failure_counts.items(), key=lambda x: x[1], reverse=True)[:5]
# Find problematic nodes
node_stats: dict[str, dict[str, int]] = defaultdict(lambda: {"total": 0, "failed": 0})
for run in runs:
for decision in run.decisions:
node_stats[decision.node_id]["total"] += 1
if not decision.was_successful:
node_stats[decision.node_id]["failed"] += 1
problematic_nodes = []
for node_id, stats in node_stats.items():
if stats["total"] > 0:
failure_rate = stats["failed"] / stats["total"]
if failure_rate > 0.1: # More than 10% failure rate
problematic_nodes.append((node_id, failure_rate))
problematic_nodes.sort(key=lambda x: x[1], reverse=True)
# Decision patterns
decision_patterns = self._analyze_decision_patterns(runs)
return PatternAnalysis(
goal_id=goal_id,
run_count=len(runs),
success_rate=success_rate,
common_failures=common_failures,
problematic_nodes=problematic_nodes,
decision_patterns=decision_patterns,
)
def compare_runs(self, run_id_1: str, run_id_2: str) -> dict[str, Any]:
"""Compare two runs to understand what differed."""
run1 = self.storage.load_run(run_id_1)
run2 = self.storage.load_run(run_id_2)
if run1 is None or run2 is None:
return {"error": "One or both runs not found"}
return {
"run_1": {
"id": run1.id,
"status": run1.status.value,
"decisions": len(run1.decisions),
"success_rate": run1.metrics.success_rate,
},
"run_2": {
"id": run2.id,
"status": run2.status.value,
"decisions": len(run2.decisions),
"success_rate": run2.metrics.success_rate,
},
"differences": self._find_differences(run1, run2),
}
# === WHAT SHOULD WE CHANGE? ===
def suggest_improvements(self, goal_id: str) -> list[dict[str, Any]]:
"""
Generate improvement suggestions based on run analysis.
This is what I use to propose changes to the human engineer.
"""
patterns = self.find_patterns(goal_id)
if patterns is None:
return []
suggestions = []
# Suggestion: Fix problematic nodes
for node_id, failure_rate in patterns.problematic_nodes:
suggestions.append(
{
"type": "node_improvement",
"target": node_id,
"reason": f"Node has {failure_rate:.1%} failure rate",
"recommendation": (
f"Review and improve node '{node_id}' - "
"high failure rate suggests prompt or tool issues"
),
"priority": "high" if failure_rate > 0.3 else "medium",
}
)
# Suggestion: Address common failures
for failure, count in patterns.common_failures:
if count >= 2:
suggestions.append(
{
"type": "error_handling",
"target": failure,
"reason": f"Error occurred {count} times",
"recommendation": f"Add handling for: {failure}",
"priority": "high" if count >= 5 else "medium",
}
)
# Suggestion: Overall success rate
if patterns.success_rate < 0.8:
suggestions.append(
{
"type": "architecture",
"target": goal_id,
"reason": f"Goal success rate is only {patterns.success_rate:.1%}",
"recommendation": (
"Consider restructuring the agent graph or improving goal definition"
),
"priority": "high",
}
)
return suggestions
def get_node_performance(self, node_id: str) -> dict[str, Any]:
"""Get performance metrics for a specific node across all runs."""
run_ids = self.storage.get_runs_by_node(node_id)
total_decisions = 0
successful_decisions = 0
total_latency = 0
total_tokens = 0
decision_types: dict[str, int] = defaultdict(int)
for run_id in run_ids:
run = self.storage.load_run(run_id)
if run:
for decision in run.decisions:
if decision.node_id == node_id:
total_decisions += 1
if decision.was_successful:
successful_decisions += 1
if decision.outcome:
total_latency += decision.outcome.latency_ms
total_tokens += decision.outcome.tokens_used
decision_types[decision.decision_type.value] += 1
return {
"node_id": node_id,
"total_decisions": total_decisions,
"success_rate": successful_decisions / total_decisions if total_decisions > 0 else 0,
"avg_latency_ms": total_latency / total_decisions if total_decisions > 0 else 0,
"total_tokens": total_tokens,
"decision_type_distribution": dict(decision_types),
}
# === PRIVATE HELPERS ===
def _generate_suggestions(
self,
run: Run,
failed_decisions: list[Decision],
) -> list[str]:
"""Generate suggestions based on failure analysis."""
suggestions = []
for decision in failed_decisions:
# Check if there were alternatives
if len(decision.options) > 1:
chosen = decision.chosen_option
alternatives = [o for o in decision.options if o.id != decision.chosen_option_id]
if alternatives:
alt_desc = alternatives[0].description
chosen_desc = chosen.description if chosen else "unknown"
suggestions.append(
f"Consider alternative: '{alt_desc}' instead of '{chosen_desc}'"
)
# Check for missing context
if not decision.input_context:
suggestions.append(
f"Decision '{decision.intent}' had no input context - "
"ensure relevant data is passed"
)
# Check for constraint issues
if decision.active_constraints:
constraints = ", ".join(decision.active_constraints)
suggestions.append(f"Review constraints: {constraints} - may be too restrictive")
# Check for reported problems with suggestions
for problem in run.problems:
if problem.suggested_fix:
suggestions.append(problem.suggested_fix)
return suggestions
def _analyze_decision_patterns(self, runs: list[Run]) -> dict[str, Any]:
"""Analyze decision patterns across runs."""
type_counts: dict[str, int] = defaultdict(int)
option_counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for run in runs:
for decision in run.decisions:
type_counts[decision.decision_type.value] += 1
# Track which options are chosen for similar intents
intent_key = decision.intent[:50] # Truncate for grouping
if decision.chosen_option:
option_counts[intent_key][decision.chosen_option.description] += 1
# Find most common choices per intent
common_choices = {}
for intent, choices in option_counts.items():
if choices:
most_common = max(choices.items(), key=lambda x: x[1])
common_choices[intent] = {
"choice": most_common[0],
"count": most_common[1],
"alternatives": len(choices) - 1,
}
return {
"decision_type_distribution": dict(type_counts),
"common_choices": common_choices,
}
def _find_differences(self, run1: Run, run2: Run) -> list[str]:
"""Find key differences between two runs."""
differences = []
# Status difference
if run1.status != run2.status:
differences.append(f"Status: {run1.status.value} vs {run2.status.value}")
# Decision count difference
if len(run1.decisions) != len(run2.decisions):
differences.append(f"Decision count: {len(run1.decisions)} vs {len(run2.decisions)}")
# Find first divergence point
for i, (d1, d2) in enumerate(zip(run1.decisions, run2.decisions, strict=False)):
if d1.chosen_option_id != d2.chosen_option_id:
differences.append(
f"Diverged at decision {i}: "
f"chose '{d1.chosen_option_id}' vs '{d2.chosen_option_id}'"
)
break
# Node differences
nodes1 = set(run1.metrics.nodes_executed)
nodes2 = set(run2.metrics.nodes_executed)
if nodes1 != nodes2:
only_1 = nodes1 - nodes2
only_2 = nodes2 - nodes1
if only_1:
differences.append(f"Nodes only in run 1: {only_1}")
if only_2:
differences.append(f"Nodes only in run 2: {only_2}")
return differences
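
A hedged usage sketch of the analysis API above. The names `RunAnalyzer` and `FileRunStorage`, and the `RunSummary.run_id` attribute, are assumptions made for illustration; only the method calls themselves appear in the code above.

```python
# Hedged sketch; class and storage names are assumptions, method calls are from the code above.
analyzer = RunAnalyzer(storage=FileRunStorage())

# "Why did it fail?": walk the most recent failures.
for summary in analyzer.get_recent_failures(limit=5):
    analysis = analyzer.analyze_failure(summary.run_id)
    if analysis:
        print(analysis.failure_point, "->", analysis.root_cause)

# "What should we change?": aggregate patterns into concrete proposals for one goal.
for suggestion in analyzer.suggest_improvements("support-001"):
    print(f"[{suggestion['priority']}] {suggestion['recommendation']}")
```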
+15
@@ -89,6 +89,21 @@ def main():
register_testing_commands(subparsers)
# Register skill commands (skill list, skill trust, ...)
from framework.skills.cli import register_skill_commands
register_skill_commands(subparsers)
# Register debugger commands (debugger)
from framework.debugger.cli import register_debugger_commands
register_debugger_commands(subparsers)
# Register MCP registry commands (mcp install, mcp add, ...)
from framework.runner.mcp_registry_cli import register_mcp_commands
register_mcp_commands(subparsers)
args = parser.parse_args()
if hasattr(args, "func"):
+293 -2
@@ -19,6 +19,10 @@ from framework.graph.edge import DEFAULT_MAX_TOKENS
# ---------------------------------------------------------------------------
HIVE_CONFIG_FILE = Path.home() / ".hive" / "configuration.json"
# Hive LLM router endpoint (Anthropic-compatible).
# litellm's Anthropic handler appends /v1/messages, so this is just the base host.
HIVE_LLM_ENDPOINT = "https://api.adenhq.com"
logger = logging.getLogger(__name__)
@@ -47,15 +51,176 @@ def get_preferred_model() -> str:
"""Return the user's preferred LLM model string (e.g. 'anthropic/claude-sonnet-4-20250514')."""
llm = get_hive_config().get("llm", {})
if llm.get("provider") and llm.get("model"):
return f"{llm['provider']}/{llm['model']}"
provider = str(llm["provider"])
model = str(llm["model"]).strip()
# OpenRouter quickstart stores raw model IDs; tolerate pasted "openrouter/<id>" too.
if provider.lower() == "openrouter" and model.lower().startswith("openrouter/"):
model = model[len("openrouter/") :]
if model:
return f"{provider}/{model}"
return "anthropic/claude-sonnet-4-20250514"
def get_preferred_worker_model() -> str | None:
"""Return the user's preferred worker LLM model, or None if not configured.
Reads from the ``worker_llm`` section of ~/.hive/configuration.json.
Returns None when no worker-specific model is set, so callers can
fall back to the default (queen) model via ``get_preferred_model()``.
"""
worker_llm = get_hive_config().get("worker_llm", {})
if worker_llm.get("provider") and worker_llm.get("model"):
provider = str(worker_llm["provider"])
model = str(worker_llm["model"]).strip()
if provider.lower() == "openrouter" and model.lower().startswith("openrouter/"):
model = model[len("openrouter/") :]
if model:
return f"{provider}/{model}"
return None
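
For orientation, a hedged sketch of the configuration file these helpers read. The key names are the ones accessed above; the concrete values and model IDs are illustrative assumptions.

```python
# Hypothetical ~/.hive/configuration.json, written from Python; all values are made up.
import json
from pathlib import Path

cfg_path = Path.home() / ".hive" / "configuration.json"
cfg_path.parent.mkdir(parents=True, exist_ok=True)
cfg_path.write_text(json.dumps({
    "llm": {
        "provider": "anthropic",
        "model": "claude-sonnet-4-20250514",
        "api_key_env_var": "ANTHROPIC_API_KEY",
        "max_tokens": 8192,
    },
    "worker_llm": {
        # An "openrouter/" prefix pasted from the OpenRouter UI is tolerated and stripped.
        "provider": "openrouter",
        "model": "openrouter/meta-llama/llama-3.1-70b-instruct",
        "api_key_env_var": "OPENROUTER_API_KEY",
    },
}, indent=2))

# With this file in place:
#   get_preferred_model()        -> "anthropic/claude-sonnet-4-20250514"
#   get_preferred_worker_model() -> "openrouter/meta-llama/llama-3.1-70b-instruct"
```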
def get_worker_api_key() -> str | None:
"""Return the API key for the worker LLM, falling back to the default key."""
worker_llm = get_hive_config().get("worker_llm", {})
if not worker_llm:
return get_api_key()
# Worker-specific subscription / env var
if worker_llm.get("use_claude_code_subscription"):
try:
from framework.runner.runner import get_claude_code_token
token = get_claude_code_token()
if token:
return token
except ImportError:
pass
if worker_llm.get("use_codex_subscription"):
try:
from framework.runner.runner import get_codex_token
token = get_codex_token()
if token:
return token
except ImportError:
pass
if worker_llm.get("use_kimi_code_subscription"):
try:
from framework.runner.runner import get_kimi_code_token
token = get_kimi_code_token()
if token:
return token
except ImportError:
pass
if worker_llm.get("use_antigravity_subscription"):
try:
from framework.runner.runner import get_antigravity_token
token = get_antigravity_token()
if token:
return token
except ImportError:
pass
api_key_env_var = worker_llm.get("api_key_env_var")
if api_key_env_var:
return os.environ.get(api_key_env_var)
# Fall back to default key
return get_api_key()
def get_worker_api_base() -> str | None:
"""Return the api_base for the worker LLM, falling back to the default."""
worker_llm = get_hive_config().get("worker_llm", {})
if not worker_llm:
return get_api_base()
if worker_llm.get("use_codex_subscription"):
return "https://chatgpt.com/backend-api/codex"
if worker_llm.get("use_kimi_code_subscription"):
return "https://api.kimi.com/coding"
if worker_llm.get("use_antigravity_subscription"):
# Antigravity uses AntigravityProvider directly — no api_base needed.
return None
if worker_llm.get("api_base"):
return worker_llm["api_base"]
if str(worker_llm.get("provider", "")).lower() == "openrouter":
return OPENROUTER_API_BASE
return None
def get_worker_llm_extra_kwargs() -> dict[str, Any]:
"""Return extra kwargs for the worker LLM provider."""
worker_llm = get_hive_config().get("worker_llm", {})
if not worker_llm:
return get_llm_extra_kwargs()
if worker_llm.get("use_claude_code_subscription"):
api_key = get_worker_api_key()
if api_key:
return {
"extra_headers": {"authorization": f"Bearer {api_key}"},
}
if worker_llm.get("use_codex_subscription"):
api_key = get_worker_api_key()
if api_key:
headers: dict[str, str] = {
"Authorization": f"Bearer {api_key}",
"User-Agent": "CodexBar",
}
try:
from framework.runner.runner import get_codex_account_id
account_id = get_codex_account_id()
if account_id:
headers["ChatGPT-Account-Id"] = account_id
except ImportError:
pass
return {
"extra_headers": headers,
"store": False,
"allowed_openai_params": ["store"],
}
if worker_llm.get("provider") == "ollama":
return {"num_ctx": worker_llm.get("num_ctx", 16384)}
return {}
def get_worker_max_tokens() -> int:
"""Return max_tokens for the worker LLM, falling back to default."""
worker_llm = get_hive_config().get("worker_llm", {})
if worker_llm and "max_tokens" in worker_llm:
return worker_llm["max_tokens"]
return get_max_tokens()
def get_worker_max_context_tokens() -> int:
"""Return max_context_tokens for the worker LLM, falling back to default."""
worker_llm = get_hive_config().get("worker_llm", {})
if worker_llm and "max_context_tokens" in worker_llm:
return worker_llm["max_context_tokens"]
return get_max_context_tokens()
def get_max_tokens() -> int:
"""Return the configured max_tokens, falling back to DEFAULT_MAX_TOKENS."""
return get_hive_config().get("llm", {}).get("max_tokens", DEFAULT_MAX_TOKENS)
DEFAULT_MAX_CONTEXT_TOKENS = 32_000
OPENROUTER_API_BASE = "https://openrouter.ai/api/v1"
def get_max_context_tokens() -> int:
"""Return the configured max_context_tokens, falling back to DEFAULT_MAX_CONTEXT_TOKENS."""
return get_hive_config().get("llm", {}).get("max_context_tokens", DEFAULT_MAX_CONTEXT_TOKENS)
def get_api_key() -> str | None:
"""Return the API key, supporting env var, Claude Code subscription, Codex, and ZAI Code.
@@ -90,6 +255,28 @@ def get_api_key() -> str | None:
except ImportError:
pass
# Kimi Code subscription: read API key from ~/.kimi/config.toml
if llm.get("use_kimi_code_subscription"):
try:
from framework.runner.runner import get_kimi_code_token
token = get_kimi_code_token()
if token:
return token
except ImportError:
pass
# Antigravity subscription: read OAuth token from accounts JSON
if llm.get("use_antigravity_subscription"):
try:
from framework.runner.runner import get_antigravity_token
token = get_antigravity_token()
if token:
return token
except ImportError:
pass
# Standard env-var path (covers ZAI Code and all API-key providers)
api_key_env_var = llm.get("api_key_env_var")
if api_key_env_var:
@@ -97,18 +284,116 @@ def get_api_key() -> str | None:
return None
# OAuth credentials for Antigravity are fetched from the opencode-antigravity-auth project.
# This project reverse-engineered and published the public OAuth credentials
# for Google's Antigravity/Cloud Code Assist API.
# Source: https://github.com/NoeFabris/opencode-antigravity-auth
_ANTIGRAVITY_CREDENTIALS_URL = (
"https://raw.githubusercontent.com/NoeFabris/opencode-antigravity-auth/dev/src/constants.ts"
)
_antigravity_credentials_cache: tuple[str | None, str | None] = (None, None)
def _fetch_antigravity_credentials() -> tuple[str | None, str | None]:
"""Fetch OAuth client ID and secret from the public npm package source on GitHub."""
global _antigravity_credentials_cache
if _antigravity_credentials_cache[0] and _antigravity_credentials_cache[1]:
return _antigravity_credentials_cache
import re
import urllib.request
try:
req = urllib.request.Request(
_ANTIGRAVITY_CREDENTIALS_URL, headers={"User-Agent": "Hive/1.0"}
)
with urllib.request.urlopen(req, timeout=10) as resp:
content = resp.read().decode("utf-8")
id_match = re.search(r'ANTIGRAVITY_CLIENT_ID\s*=\s*"([^"]+)"', content)
secret_match = re.search(r'ANTIGRAVITY_CLIENT_SECRET\s*=\s*"([^"]+)"', content)
client_id = id_match.group(1) if id_match else None
client_secret = secret_match.group(1) if secret_match else None
if client_id and client_secret:
_antigravity_credentials_cache = (client_id, client_secret)
return client_id, client_secret
except Exception as e:
logger.debug("Failed to fetch Antigravity credentials from public source: %s", e)
return None, None
def get_antigravity_client_id() -> str:
"""Return the Antigravity OAuth application client ID.
Checked in order:
1. ``ANTIGRAVITY_CLIENT_ID`` environment variable
2. ``llm.antigravity_client_id`` in ~/.hive/configuration.json
3. Fetch from public source (opencode-antigravity-auth project on GitHub)
"""
env = os.environ.get("ANTIGRAVITY_CLIENT_ID")
if env:
return env
cfg_val = get_hive_config().get("llm", {}).get("antigravity_client_id")
if cfg_val:
return cfg_val
# Fetch from public source
client_id, _ = _fetch_antigravity_credentials()
if client_id:
return client_id
raise RuntimeError("Could not obtain Antigravity OAuth client ID")
def get_antigravity_client_secret() -> str | None:
"""Return the Antigravity OAuth client secret.
Checked in order:
1. ``ANTIGRAVITY_CLIENT_SECRET`` environment variable
2. ``llm.antigravity_client_secret`` in ~/.hive/configuration.json
3. Fetch from public source (opencode-antigravity-auth project on GitHub)
Returns None when not found; token refresh will be skipped and
the caller must use whatever access token is already available.
"""
env = os.environ.get("ANTIGRAVITY_CLIENT_SECRET")
if env:
return env
cfg_val = get_hive_config().get("llm", {}).get("antigravity_client_secret") or None
if cfg_val:
return cfg_val
# Fetch from public source
_, secret = _fetch_antigravity_credentials()
return secret
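
A hedged sketch of the lookup order for the Antigravity client ID: the environment variable wins over configuration.json and the GitHub fallback. The value below is made up, and the helper is assumed to be importable from `framework.config`.

```python
# Hedged sketch; the client ID value is fake and exists only to show precedence.
import os

os.environ["ANTIGRAVITY_CLIENT_ID"] = "example-id.apps.googleusercontent.com"
print(get_antigravity_client_id())  # -> "example-id.apps.googleusercontent.com"
```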
def get_gcu_enabled() -> bool:
"""Return whether GCU (browser automation) is enabled in user config."""
return get_hive_config().get("gcu_enabled", True)
def get_gcu_viewport_scale() -> float:
"""Return GCU viewport scale factor (0.1-1.0), default 0.8."""
scale = get_hive_config().get("gcu_viewport_scale", 0.8)
if isinstance(scale, (int, float)) and 0.1 <= scale <= 1.0:
return float(scale)
return 0.8
def get_api_base() -> str | None:
"""Return the api_base URL for OpenAI-compatible endpoints, if configured."""
llm = get_hive_config().get("llm", {})
if llm.get("use_codex_subscription"):
# Codex subscription routes through the ChatGPT backend, not api.openai.com.
return "https://chatgpt.com/backend-api/codex"
return llm.get("api_base")
if llm.get("use_kimi_code_subscription"):
# Kimi Code uses an Anthropic-compatible endpoint (no /v1 suffix).
return "https://api.kimi.com/coding"
if llm.get("use_antigravity_subscription"):
# Antigravity uses AntigravityProvider directly — no api_base needed.
return None
if llm.get("api_base"):
return llm["api_base"]
if str(llm.get("provider", "")).lower() == "openrouter":
return OPENROUTER_API_BASE
return None
def get_llm_extra_kwargs() -> dict[str, Any]:
@@ -149,6 +434,11 @@ def get_llm_extra_kwargs() -> dict[str, Any]:
"store": False,
"allowed_openai_params": ["store"],
}
if llm.get("provider") == "ollama":
# Pass num_ctx to Ollama so it doesn't silently truncate the ~9.5k Queen prompt.
# Ollama's default num_ctx is only 2048. We set it to 16384 here so LiteLLM
# passes it through as a provider-specific option.
return {"num_ctx": llm.get("num_ctx", 16384)}
return {}
@@ -164,6 +454,7 @@ class RuntimeConfig:
model: str = field(default_factory=get_preferred_model)
temperature: float = 0.7
max_tokens: int = field(default_factory=get_max_tokens)
max_context_tokens: int = field(default_factory=get_max_context_tokens)
api_key: str | None = field(default_factory=get_api_key)
api_base: str | None = field(default_factory=get_api_base)
extra_kwargs: dict[str, Any] = field(default_factory=get_llm_extra_kwargs)
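
A hedged sketch of how these defaults come together: `RuntimeConfig` resolves its fields from ~/.hive/configuration.json via the helpers above. The import path and the override values shown here are assumptions.

```python
# Hedged sketch; RuntimeConfig is assumed to be a dataclass importable from framework.config.
cfg = RuntimeConfig()  # model, api_key, api_base, max_tokens resolved from user config

# Per-call overrides work like any other dataclass field:
fast = RuntimeConfig(model="anthropic/claude-3-5-haiku-20241022", temperature=0.2)
```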
+1 -3
@@ -6,7 +6,7 @@ This module provides secure credential storage with:
- Template-based usage: {{cred.key}} patterns for injection
- Bipartisan model: Store stores values, tools define usage
- Provider system: Extensible lifecycle management (refresh, validate)
- Multiple backends: Encrypted files, env vars, HashiCorp Vault
- Multiple backends: Encrypted files, env vars
Quick Start:
from core.framework.credentials import CredentialStore, CredentialObject
@@ -38,8 +38,6 @@ For Aden server sync:
AdenSyncProvider,
)
For Vault integration:
from core.framework.credentials.vault import HashiCorpVaultStorage
"""
from .key_storage import (
+25 -7
@@ -142,17 +142,27 @@ def save_aden_api_key(key: str) -> None:
os.environ[ADEN_ENV_VAR] = key
def delete_aden_api_key() -> None:
"""Remove ADEN_API_KEY from the encrypted store and ``os.environ``."""
def delete_aden_api_key() -> bool:
"""Remove ADEN_API_KEY from the encrypted store and ``os.environ``.
Returns True if the key existed and was deleted, False otherwise.
"""
deleted = False
try:
from .storage import EncryptedFileStorage
storage = EncryptedFileStorage()
storage.delete(ADEN_CREDENTIAL_ID)
deleted = storage.delete(ADEN_CREDENTIAL_ID)
except (FileNotFoundError, PermissionError) as e:
logger.debug("Could not delete %s from encrypted store: %s", ADEN_CREDENTIAL_ID, e)
except Exception:
logger.debug("Could not delete %s from encrypted store", ADEN_CREDENTIAL_ID)
logger.warning(
"Unexpected error deleting %s from encrypted store",
ADEN_CREDENTIAL_ID,
exc_info=True,
)
os.environ.pop(ADEN_ENV_VAR, None)
return deleted
# ---------------------------------------------------------------------------
@@ -167,8 +177,10 @@ def _read_credential_key_file() -> str | None:
value = CREDENTIAL_KEY_PATH.read_text(encoding="utf-8").strip()
if value:
return value
except (FileNotFoundError, PermissionError) as e:
logger.debug("Could not read %s: %s", CREDENTIAL_KEY_PATH, e)
except Exception:
logger.debug("Could not read %s", CREDENTIAL_KEY_PATH)
logger.warning("Unexpected error reading %s", CREDENTIAL_KEY_PATH, exc_info=True)
return None
@@ -196,6 +208,12 @@ def _read_aden_from_encrypted_store() -> str | None:
cred = storage.load(ADEN_CREDENTIAL_ID)
if cred:
return cred.get_key("api_key")
except (FileNotFoundError, PermissionError, KeyError) as e:
logger.debug("Could not load %s from encrypted store: %s", ADEN_CREDENTIAL_ID, e)
except Exception:
logger.debug("Could not load %s from encrypted store", ADEN_CREDENTIAL_ID)
logger.warning(
"Unexpected error loading %s from encrypted store",
ADEN_CREDENTIAL_ID,
exc_info=True,
)
return None
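
A small hedged sketch of the new boolean return from `delete_aden_api_key()`. The import path and the messages are illustrative; the environment variable is cleared either way, as in the code above.

```python
# Hedged sketch; import path assumed:
# from core.framework.credentials.key_storage import delete_aden_api_key
if delete_aden_api_key():
    print("Removed ADEN_API_KEY from the encrypted store (env var cleared too).")
else:
    print("Nothing stored under ADEN_API_KEY; env var cleared if it was set.")
```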
+26 -3
@@ -27,6 +27,7 @@ from __future__ import annotations
import getpass
import json
import logging
import os
import sys
from collections.abc import Callable
@@ -37,6 +38,8 @@ from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from framework.graph import NodeSpec
logger = logging.getLogger(__name__)
# ANSI colors for terminal output
class Colors:
@@ -365,8 +368,11 @@ class CredentialSetupSession:
self._print("")
try:
api_key = self.password_fn(f"Paste your {cred.env_var}: ").strip()
except (EOFError, OSError) as exc:
logger.debug("Password input unavailable, falling back to plain input: %s", exc)
api_key = self._input(f"Paste your {cred.env_var}: ").strip()
except Exception:
# Fallback to regular input if password input fails
logger.warning("Unexpected error reading password input", exc_info=True)
api_key = self._input(f"Paste your {cred.env_var}: ").strip()
if not api_key:
@@ -403,7 +409,11 @@ class CredentialSetupSession:
try:
aden_key = self.password_fn("Paste your ADEN_API_KEY: ").strip()
except (EOFError, OSError) as exc:
logger.debug("Password input unavailable for ADEN_API_KEY: %s", exc)
aden_key = self._input("Paste your ADEN_API_KEY: ").strip()
except Exception:
logger.warning("Unexpected error reading ADEN_API_KEY input", exc_info=True)
aden_key = self._input("Paste your ADEN_API_KEY: ").strip()
if not aden_key:
@@ -433,8 +443,10 @@ class CredentialSetupSession:
value = store.get_key(cred_id, cred.credential_key)
if value:
os.environ[cred.env_var] = value
except (KeyError, OSError) as exc:
logger.debug("Could not export credential to env: %s", exc)
except Exception:
pass
logger.warning("Unexpected error exporting credential to env", exc_info=True)
return True
else:
self._print(
@@ -457,9 +469,12 @@ class CredentialSetupSession:
"message": result.message,
"details": result.details,
}
except Exception:
except ImportError:
# No health checker available
return None
except Exception:
logger.warning("Health check failed for %s", cred.credential_name, exc_info=True)
return None
def _store_credential(self, cred: MissingCredential, value: str) -> None:
"""Store credential in encrypted store and export to env."""
@@ -561,7 +576,11 @@ def _load_nodes_from_python_agent(agent_path: Path) -> list:
sys.modules[spec.name] = module
spec.loader.exec_module(module)
return getattr(module, "nodes", [])
except (ImportError, OSError) as exc:
logger.debug("Could not load agent module: %s", exc)
return []
except Exception:
logger.warning("Unexpected error loading agent module", exc_info=True)
return []
@@ -588,7 +607,11 @@ def _load_nodes_from_json_agent(agent_json: Path) -> list:
)
)
return nodes
except (json.JSONDecodeError, KeyError, OSError) as exc:
logger.debug("Could not load JSON agent: %s", exc)
return []
except Exception:
logger.warning("Unexpected error loading JSON agent", exc_info=True)
return []
+10
@@ -51,6 +51,16 @@ def ensure_credential_key_env() -> None:
if found and value:
os.environ[var_name] = value
logger.debug("Loaded %s from shell config", var_name)
# Also load the currently configured LLM env var even if it's not in CREDENTIAL_SPECS.
# This keeps quickstart-written keys available to fresh processes on Unix shells.
from framework.config import get_hive_config
llm_env_var = str(get_hive_config().get("llm", {}).get("api_key_env_var", "")).strip()
if llm_env_var and not os.environ.get(llm_env_var):
found, value = check_env_var_in_shell_config(llm_env_var)
if found and value:
os.environ[llm_env_var] = value
logger.debug("Loaded configured LLM env var %s from shell config", llm_env_var)
except ImportError:
pass
@@ -1,55 +0,0 @@
"""
HashiCorp Vault integration for the credential store.
This module provides enterprise-grade secret management through
HashiCorp Vault integration.
Quick Start:
from core.framework.credentials import CredentialStore
from core.framework.credentials.vault import HashiCorpVaultStorage
# Configure Vault storage
storage = HashiCorpVaultStorage(
url="https://vault.example.com:8200",
# token read from VAULT_TOKEN env var
mount_point="secret",
path_prefix="hive/agents/prod"
)
# Create credential store with Vault backend
store = CredentialStore(storage=storage)
# Use normally - credentials are stored in Vault
credential = store.get_credential("my_api")
Requirements:
pip install hvac
Authentication:
Set the VAULT_TOKEN environment variable or pass the token directly:
export VAULT_TOKEN="hvs.xxxxxxxxxxxxx"
For production, consider using Vault auth methods:
- Kubernetes auth
- AppRole auth
- AWS IAM auth
Vault Configuration:
Ensure KV v2 secrets engine is enabled:
vault secrets enable -path=secret kv-v2
Grant appropriate policies:
path "secret/data/hive/credentials/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
path "secret/metadata/hive/credentials/*" {
capabilities = ["list", "delete"]
}
"""
from .hashicorp import HashiCorpVaultStorage
__all__ = ["HashiCorpVaultStorage"]
@@ -1,394 +0,0 @@
"""
HashiCorp Vault storage adapter.
Provides integration with HashiCorp Vault for enterprise secret management.
Requires the 'hvac' package: uv pip install hvac
"""
from __future__ import annotations
import logging
import os
from datetime import datetime
from typing import Any
from pydantic import SecretStr
from ..models import CredentialKey, CredentialObject, CredentialType
from ..storage import CredentialStorage
logger = logging.getLogger(__name__)
class HashiCorpVaultStorage(CredentialStorage):
"""
HashiCorp Vault storage adapter.
Features:
- KV v2 secrets engine support
- Namespace support (Enterprise)
- Automatic secret versioning
- Audit logging via Vault
The adapter stores credentials in Vault's KV v2 secrets engine with
the following structure:
{mount_point}/data/{path_prefix}/{credential_id}
data:
_type: "oauth2"
access_token: "xxx"
refresh_token: "yyy"
_expires_access_token: "2024-01-26T12:00:00"
_provider_id: "oauth2"
Example:
storage = HashiCorpVaultStorage(
url="https://vault.example.com:8200",
token="hvs.xxx", # Or use VAULT_TOKEN env var
mount_point="secret",
path_prefix="hive/credentials"
)
store = CredentialStore(storage=storage)
# Credentials are now stored in Vault
store.save_credential(credential)
credential = store.get_credential("my_api")
Authentication:
The adapter uses token-based authentication. The token can be provided:
1. Directly via the 'token' parameter
2. Via the VAULT_TOKEN environment variable
For production, consider using:
- Kubernetes auth method
- AppRole auth method
- AWS IAM auth method
Requirements:
uv pip install hvac
"""
def __init__(
self,
url: str,
token: str | None = None,
mount_point: str = "secret",
path_prefix: str = "hive/credentials",
namespace: str | None = None,
verify_ssl: bool = True,
):
"""
Initialize Vault storage.
Args:
url: Vault server URL (e.g., https://vault.example.com:8200)
token: Vault token. If None, reads from VAULT_TOKEN env var
mount_point: KV secrets engine mount point (default: "secret")
path_prefix: Path prefix for all credentials
namespace: Vault namespace (Enterprise feature)
verify_ssl: Whether to verify SSL certificates
Raises:
ImportError: If hvac is not installed
ValueError: If authentication fails
"""
try:
import hvac
except ImportError as e:
raise ImportError(
"HashiCorp Vault support requires 'hvac'. Install with: uv pip install hvac"
) from e
self._url = url
self._token = token or os.environ.get("VAULT_TOKEN")
self._mount = mount_point
self._prefix = path_prefix
self._namespace = namespace
if not self._token:
raise ValueError(
"Vault token required. Set VAULT_TOKEN env var or pass token parameter."
)
self._client = hvac.Client(
url=url,
token=self._token,
namespace=namespace,
verify=verify_ssl,
)
if not self._client.is_authenticated():
raise ValueError("Vault authentication failed. Check token and server URL.")
logger.info(f"Connected to HashiCorp Vault at {url}")
def _path(self, credential_id: str) -> str:
"""Build Vault path for credential."""
# Sanitize credential_id
safe_id = credential_id.replace("/", "_").replace("\\", "_")
return f"{self._prefix}/{safe_id}"
def save(self, credential: CredentialObject) -> None:
"""Save credential to Vault KV v2."""
path = self._path(credential.id)
data = self._serialize_for_vault(credential)
try:
self._client.secrets.kv.v2.create_or_update_secret(
path=path,
secret=data,
mount_point=self._mount,
)
logger.debug(f"Saved credential '{credential.id}' to Vault at {path}")
except Exception as e:
logger.error(f"Failed to save credential '{credential.id}' to Vault: {e}")
raise
def load(self, credential_id: str) -> CredentialObject | None:
"""Load credential from Vault."""
path = self._path(credential_id)
try:
response = self._client.secrets.kv.v2.read_secret_version(
path=path,
mount_point=self._mount,
)
data = response["data"]["data"]
return self._deserialize_from_vault(credential_id, data)
except Exception as e:
# Check if it's a "not found" error
error_str = str(e).lower()
if "not found" in error_str or "404" in error_str:
logger.debug(f"Credential '{credential_id}' not found in Vault")
return None
logger.error(f"Failed to load credential '{credential_id}' from Vault: {e}")
raise
def delete(self, credential_id: str) -> bool:
"""Delete credential from Vault (all versions)."""
path = self._path(credential_id)
try:
self._client.secrets.kv.v2.delete_metadata_and_all_versions(
path=path,
mount_point=self._mount,
)
logger.debug(f"Deleted credential '{credential_id}' from Vault")
return True
except Exception as e:
error_str = str(e).lower()
if "not found" in error_str or "404" in error_str:
return False
logger.error(f"Failed to delete credential '{credential_id}' from Vault: {e}")
raise
def list_all(self) -> list[str]:
"""List all credentials under the prefix."""
try:
response = self._client.secrets.kv.v2.list_secrets(
path=self._prefix,
mount_point=self._mount,
)
keys = response.get("data", {}).get("keys", [])
# Remove trailing slashes from folder names
return [k.rstrip("/") for k in keys]
except Exception as e:
error_str = str(e).lower()
if "not found" in error_str or "404" in error_str:
return []
logger.error(f"Failed to list credentials from Vault: {e}")
raise
def exists(self, credential_id: str) -> bool:
"""Check if credential exists in Vault."""
try:
path = self._path(credential_id)
self._client.secrets.kv.v2.read_secret_version(
path=path,
mount_point=self._mount,
)
return True
except Exception:
return False
def _serialize_for_vault(self, credential: CredentialObject) -> dict[str, Any]:
"""Convert credential to Vault secret format."""
data: dict[str, Any] = {
"_type": credential.credential_type.value,
}
if credential.provider_id:
data["_provider_id"] = credential.provider_id
if credential.description:
data["_description"] = credential.description
if credential.auto_refresh:
data["_auto_refresh"] = "true"
# Store each key
for key_name, key in credential.keys.items():
data[key_name] = key.get_secret_value()
if key.expires_at:
data[f"_expires_{key_name}"] = key.expires_at.isoformat()
if key.metadata:
data[f"_metadata_{key_name}"] = str(key.metadata)
return data
def _deserialize_from_vault(self, credential_id: str, data: dict[str, Any]) -> CredentialObject:
"""Reconstruct credential from Vault secret."""
# Extract metadata fields
cred_type = CredentialType(data.pop("_type", "api_key"))
provider_id = data.pop("_provider_id", None)
description = data.pop("_description", "")
auto_refresh = data.pop("_auto_refresh", "") == "true"
# Build keys dict
keys: dict[str, CredentialKey] = {}
# Find all non-metadata keys
key_names = [k for k in data.keys() if not k.startswith("_")]
for key_name in key_names:
value = data[key_name]
# Check for expiration
expires_at = None
expires_key = f"_expires_{key_name}"
if expires_key in data:
try:
expires_at = datetime.fromisoformat(data[expires_key])
except (ValueError, TypeError):
pass
# Check for metadata
metadata: dict[str, Any] = {}
metadata_key = f"_metadata_{key_name}"
if metadata_key in data:
try:
import ast
metadata = ast.literal_eval(data[metadata_key])
except (ValueError, SyntaxError):
pass
keys[key_name] = CredentialKey(
name=key_name,
value=SecretStr(value),
expires_at=expires_at,
metadata=metadata,
)
return CredentialObject(
id=credential_id,
credential_type=cred_type,
keys=keys,
provider_id=provider_id,
description=description,
auto_refresh=auto_refresh,
)
# --- Vault-Specific Operations ---
def get_secret_metadata(self, credential_id: str) -> dict[str, Any] | None:
"""
Get Vault metadata for a secret (version info, timestamps, etc.).
Args:
credential_id: The credential identifier
Returns:
Metadata dict or None if not found
"""
path = self._path(credential_id)
try:
response = self._client.secrets.kv.v2.read_secret_metadata(
path=path,
mount_point=self._mount,
)
return response.get("data", {})
except Exception:
return None
def soft_delete(self, credential_id: str, versions: list[int] | None = None) -> bool:
"""
Soft delete specific versions (can be recovered).
Args:
credential_id: The credential identifier
versions: Version numbers to delete. If None, deletes latest.
Returns:
True if successful
"""
path = self._path(credential_id)
try:
if versions:
self._client.secrets.kv.v2.delete_secret_versions(
path=path,
versions=versions,
mount_point=self._mount,
)
else:
self._client.secrets.kv.v2.delete_latest_version_of_secret(
path=path,
mount_point=self._mount,
)
return True
except Exception as e:
logger.error(f"Soft delete failed for '{credential_id}': {e}")
return False
def undelete(self, credential_id: str, versions: list[int]) -> bool:
"""
Recover soft-deleted versions.
Args:
credential_id: The credential identifier
versions: Version numbers to recover
Returns:
True if successful
"""
path = self._path(credential_id)
try:
self._client.secrets.kv.v2.undelete_secret_versions(
path=path,
versions=versions,
mount_point=self._mount,
)
return True
except Exception as e:
logger.error(f"Undelete failed for '{credential_id}': {e}")
return False
def load_version(self, credential_id: str, version: int) -> CredentialObject | None:
"""
Load a specific version of a credential.
Args:
credential_id: The credential identifier
version: Version number to load
Returns:
CredentialObject or None
"""
path = self._path(credential_id)
try:
response = self._client.secrets.kv.v2.read_secret_version(
path=path,
version=version,
mount_point=self._mount,
)
data = response["data"]["data"]
return self._deserialize_from_vault(credential_id, data)
except Exception:
return None
+76
@@ -0,0 +1,76 @@
"""CLI command for the LLM debug log viewer."""
import argparse
import subprocess
import sys
from pathlib import Path
_SCRIPT = Path(__file__).resolve().parents[3] / "scripts" / "llm_debug_log_visualizer.py"
def register_debugger_commands(subparsers: argparse._SubParsersAction) -> None:
"""Register the ``hive debugger`` command."""
parser = subparsers.add_parser(
"debugger",
help="Open the LLM debug log viewer",
description=(
"Start a local server that lets you browse LLM debug sessions "
"recorded in ~/.hive/llm_logs. Sessions are loaded on demand so "
"the browser stays responsive."
),
)
parser.add_argument(
"--session",
help="Execution ID to select initially.",
)
parser.add_argument(
"--port",
type=int,
default=0,
help="Port for the local server (0 = auto-pick a free port).",
)
parser.add_argument(
"--logs-dir",
help="Directory containing JSONL log files (default: ~/.hive/llm_logs).",
)
parser.add_argument(
"--limit-files",
type=int,
default=None,
help="Maximum number of newest log files to scan (default: 200).",
)
parser.add_argument(
"--output",
help="Write a static HTML file instead of starting a server.",
)
parser.add_argument(
"--no-open",
action="store_true",
help="Start the server but do not open a browser.",
)
parser.add_argument(
"--include-tests",
action="store_true",
help="Show test/mock sessions (hidden by default).",
)
parser.set_defaults(func=cmd_debugger)
def cmd_debugger(args: argparse.Namespace) -> int:
"""Launch the LLM debug log visualizer."""
cmd: list[str] = [sys.executable, str(_SCRIPT)]
if args.session:
cmd += ["--session", args.session]
if args.port:
cmd += ["--port", str(args.port)]
if args.logs_dir:
cmd += ["--logs-dir", args.logs_dir]
if args.limit_files is not None:
cmd += ["--limit-files", str(args.limit_files)]
if args.output:
cmd += ["--output", args.output]
if args.no_open:
cmd.append("--no-open")
if args.include_tests:
cmd.append("--include-tests")
return subprocess.call(cmd)
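
A hedged sketch of wiring this subcommand into a standalone parser and invoking it, using the `register_debugger_commands` import path shown earlier in this diff; the port value is arbitrary.

```python
# Hedged sketch: build a minimal "hive"-style parser, register the debugger command, run it.
import argparse
from framework.debugger.cli import register_debugger_commands

parser = argparse.ArgumentParser(prog="hive")
subparsers = parser.add_subparsers()
register_debugger_commands(subparsers)

# Equivalent to: hive debugger --port 8765 --no-open
args = parser.parse_args(["debugger", "--port", "8765", "--no-open"])
raise SystemExit(args.func(args))  # delegates to scripts/llm_debug_log_visualizer.py
```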
+45 -11
@@ -33,10 +33,20 @@ class Message:
is_transition_marker: bool = False
# True when this message is real human input (from /chat), not a system prompt
is_client_input: bool = False
# Optional image content blocks (e.g. from browser_screenshot)
image_content: list[dict[str, Any]] | None = None
# True when message contains an activated skill body (AS-10: never prune)
is_skill_content: bool = False
def to_llm_dict(self) -> dict[str, Any]:
"""Convert to OpenAI-format message dict."""
if self.role == "user":
if self.image_content:
blocks: list[dict[str, Any]] = []
if self.content:
blocks.append({"type": "text", "text": self.content})
blocks.extend(self.image_content)
return {"role": "user", "content": blocks}
return {"role": "user", "content": self.content}
if self.role == "assistant":
@@ -47,6 +57,15 @@ class Message:
# role == "tool"
content = f"ERROR: {self.content}" if self.is_error else self.content
if self.image_content:
# Multimodal tool result: text + image content blocks
blocks: list[dict[str, Any]] = [{"type": "text", "text": content}]
blocks.extend(self.image_content)
return {
"role": "tool",
"tool_call_id": self.tool_use_id,
"content": blocks,
}
return {
"role": "tool",
"tool_call_id": self.tool_use_id,
@@ -72,6 +91,8 @@ class Message:
d["is_transition_marker"] = self.is_transition_marker
if self.is_client_input:
d["is_client_input"] = self.is_client_input
if self.image_content is not None:
d["image_content"] = self.image_content
return d
@classmethod
@@ -87,6 +108,7 @@ class Message:
phase_id=data.get("phase_id"),
is_transition_marker=data.get("is_transition_marker", False),
is_client_input=data.get("is_client_input", False),
image_content=data.get("image_content"),
)
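
A hedged sketch of how a multimodal tool result (for example a browser_screenshot output) serializes via `to_llm_dict()`. The `Message` constructor arguments come from the code above; the exact image block schema and the import location are assumptions.

```python
# Hedged sketch; Message is assumed to live in framework.graph.conversation,
# and the image block follows an OpenAI-style content-block shape.
shot = Message(
    seq=7,
    role="tool",
    content="Screenshot captured",
    tool_use_id="call_abc123",
    image_content=[{
        "type": "image_url",
        "image_url": {"url": "data:image/png;base64,iVBORw0KGgo..."},
    }],
)
shot.to_llm_dict()
# -> {"role": "tool", "tool_call_id": "call_abc123",
#     "content": [{"type": "text", "text": "Screenshot captured"},
#                 {"type": "image_url", "image_url": {...}}]}
```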
@@ -307,13 +329,13 @@ class NodeConversation:
def __init__(
self,
system_prompt: str = "",
max_history_tokens: int = 32000,
max_context_tokens: int = 32000,
compaction_threshold: float = 0.8,
output_keys: list[str] | None = None,
store: ConversationStore | None = None,
) -> None:
self._system_prompt = system_prompt
self._max_history_tokens = max_history_tokens
self._max_context_tokens = max_context_tokens
self._compaction_threshold = compaction_threshold
self._output_keys = output_keys
self._store = store
@@ -373,6 +395,7 @@ class NodeConversation:
*,
is_transition_marker: bool = False,
is_client_input: bool = False,
image_content: list[dict[str, Any]] | None = None,
) -> Message:
msg = Message(
seq=self._next_seq,
@@ -381,6 +404,7 @@ class NodeConversation:
phase_id=self._current_phase,
is_transition_marker=is_transition_marker,
is_client_input=is_client_input,
image_content=image_content,
)
self._messages.append(msg)
self._next_seq += 1
@@ -409,6 +433,8 @@ class NodeConversation:
tool_use_id: str,
content: str,
is_error: bool = False,
image_content: list[dict[str, Any]] | None = None,
is_skill_content: bool = False,
) -> Message:
msg = Message(
seq=self._next_seq,
@@ -417,6 +443,8 @@ class NodeConversation:
tool_use_id=tool_use_id,
is_error=is_error,
phase_id=self._current_phase,
image_content=image_content,
is_skill_content=is_skill_content,
)
self._messages.append(msg)
self._next_seq += 1
@@ -525,16 +553,16 @@ class NodeConversation:
self._last_api_input_tokens = actual_input_tokens
def usage_ratio(self) -> float:
"""Current token usage as a fraction of *max_history_tokens*.
"""Current token usage as a fraction of *max_context_tokens*.
Returns 0.0 when ``max_history_tokens`` is zero (unlimited).
Returns 0.0 when ``max_context_tokens`` is zero (unlimited).
"""
if self._max_history_tokens <= 0:
if self._max_context_tokens <= 0:
return 0.0
return self.estimate_tokens() / self._max_history_tokens
return self.estimate_tokens() / self._max_context_tokens
def needs_compaction(self) -> bool:
return self.estimate_tokens() >= self._max_history_tokens * self._compaction_threshold
return self.estimate_tokens() >= self._max_context_tokens * self._compaction_threshold
# --- Output-key extraction ---------------------------------------------
@@ -610,8 +638,15 @@ class NodeConversation:
continue
if msg.is_error:
continue # never prune errors
if msg.is_skill_content:
continue # never prune activated skill instructions (AS-10)
if msg.content.startswith("[Pruned tool result"):
continue # already pruned
# Tiny results (set_output acks, confirmations) — pruning
# saves negligible space but makes the LLM think the call
# failed, causing costly retries.
if len(msg.content) < 100:
continue
# Phase-aware: protect current phase messages
if self._current_phase and msg.phase_id == self._current_phase:
@@ -901,8 +936,7 @@ class NodeConversation:
full_path = str((spill_path / conv_filename).resolve())
ref_parts.append(
f"[Previous conversation saved to '{full_path}'. "
f"Use load_data('{conv_filename}'), read_file('{full_path}'), "
f"or run_command('cat \"{full_path}\"') to review if needed.]"
f"Use load_data('{conv_filename}') to review if needed.]"
)
elif not collapsed_msgs:
ref_parts.append("[Previous freeform messages compacted.]")
@@ -1029,7 +1063,7 @@ class NodeConversation:
await self._store.write_meta(
{
"system_prompt": self._system_prompt,
"max_history_tokens": self._max_history_tokens,
"max_context_tokens": self._max_context_tokens,
"compaction_threshold": self._compaction_threshold,
"output_keys": self._output_keys,
}
@@ -1062,7 +1096,7 @@ class NodeConversation:
conv = cls(
system_prompt=meta.get("system_prompt", ""),
max_history_tokens=meta.get("max_history_tokens", 32000),
max_context_tokens=meta.get("max_context_tokens", 32000),
compaction_threshold=meta.get("compaction_threshold", 0.8),
output_keys=meta.get("output_keys"),
store=store,
+3 -3
@@ -37,7 +37,7 @@ async def evaluate_phase_completion(
phase_description: str,
success_criteria: str,
accumulator_state: dict[str, Any],
max_history_tokens: int = 8_196,
max_context_tokens: int = 8_196,
) -> PhaseVerdict:
"""Level 2 judge: read the conversation and evaluate quality.
@@ -50,7 +50,7 @@ async def evaluate_phase_completion(
phase_description: Description of the phase
success_criteria: Natural-language criteria for phase completion
accumulator_state: Current output key values
max_history_tokens: Main conversation token budget (judge gets 20%)
max_context_tokens: Main conversation token budget (judge gets 20%)
Returns:
PhaseVerdict with action and optional feedback
@@ -89,7 +89,7 @@ FEEDBACK: (reason if RETRY, empty if ACCEPT)"""
response = await llm.acomplete(
messages=[{"role": "user", "content": user_prompt}],
system=system_prompt,
max_tokens=max(1024, max_history_tokens // 5),
max_tokens=max(1024, max_context_tokens // 5),
max_retries=1,
)
if not response.content or not response.content.strip():
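
A worked example of the judge's reply budget, using the default value shown above; the "20%" figure in the docstring corresponds to the `// 5`.

```python
# Worked example of the judge's token budget with the default above.
max_context_tokens = 8_196
judge_max_tokens = max(1024, max_context_tokens // 5)  # 1639 tokens, roughly 20% of the budget
```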
+13 -85
@@ -322,7 +322,11 @@ class AsyncEntryPointSpec(BaseModel):
id: str = Field(description="Unique identifier for this entry point")
name: str = Field(description="Human-readable name")
entry_node: str = Field(description="Node ID to start execution from")
entry_node: str = Field(
default="",
description="Deprecated: Node ID to start execution from. "
"Triggers are graph-level; worker always enters at GraphSpec.entry_node.",
)
trigger_type: str = Field(
default="manual",
description="How this entry point is triggered: webhook, api, timer, event, manual",
@@ -331,6 +335,10 @@ class AsyncEntryPointSpec(BaseModel):
default_factory=dict,
description="Trigger-specific configuration (e.g., webhook URL, timer interval)",
)
task: str = Field(
default="",
description="Worker task string when this trigger fires autonomously",
)
isolation_level: str = Field(
default="shared", description="State isolation: isolated, shared, or synchronized"
)
@@ -368,28 +376,8 @@ class GraphSpec(BaseModel):
edges=[...],
)
For multi-entry-point agents (concurrent streams):
GraphSpec(
id="support-agent-graph",
goal_id="support-001",
entry_node="process-webhook", # Default entry
async_entry_points=[
AsyncEntryPointSpec(
id="webhook",
name="Zendesk Webhook",
entry_node="process-webhook",
trigger_type="webhook",
),
AsyncEntryPointSpec(
id="api",
name="API Handler",
entry_node="process-request",
trigger_type="api",
),
],
nodes=[...],
edges=[...],
)
Triggers (timer, webhook, event) are now defined in ``triggers.json``
alongside the agent directory, not embedded in the graph spec.
"""
id: str
@@ -402,12 +390,6 @@ class GraphSpec(BaseModel):
default_factory=dict,
description="Named entry points for resuming execution. Format: {name: node_id}",
)
async_entry_points: list[AsyncEntryPointSpec] = Field(
default_factory=list,
description=(
"Asynchronous entry points for concurrent execution streams (used with AgentRuntime)"
),
)
terminal_nodes: list[str] = Field(
default_factory=list, description="IDs of nodes that end execution"
)
@@ -486,17 +468,6 @@ class GraphSpec(BaseModel):
return node
return None
def has_async_entry_points(self) -> bool:
"""Check if this graph uses async entry points (multi-stream execution)."""
return len(self.async_entry_points) > 0
def get_async_entry_point(self, entry_point_id: str) -> AsyncEntryPointSpec | None:
"""Get an async entry point by ID."""
for ep in self.async_entry_points:
if ep.id == entry_point_id:
return ep
return None
def get_outgoing_edges(self, node_id: str) -> list[EdgeSpec]:
"""Get all edges leaving a node, sorted by priority."""
edges = [e for e in self.edges if e.source == node_id]
@@ -587,37 +558,6 @@ class GraphSpec(BaseModel):
if not self.get_node(self.entry_node):
errors.append(f"Entry node '{self.entry_node}' not found")
# Check async entry points
seen_entry_ids = set()
for entry_point in self.async_entry_points:
# Check for duplicate IDs
if entry_point.id in seen_entry_ids:
errors.append(f"Duplicate async entry point ID: '{entry_point.id}'")
seen_entry_ids.add(entry_point.id)
# Check entry node exists
if not self.get_node(entry_point.entry_node):
errors.append(
f"Async entry point '{entry_point.id}' references "
f"missing node '{entry_point.entry_node}'"
)
# Validate isolation level
valid_isolation = {"isolated", "shared", "synchronized"}
if entry_point.isolation_level not in valid_isolation:
errors.append(
f"Async entry point '{entry_point.id}' has invalid isolation_level "
f"'{entry_point.isolation_level}'. Valid: {valid_isolation}"
)
# Validate trigger type
valid_triggers = {"webhook", "api", "timer", "event", "manual"}
if entry_point.trigger_type not in valid_triggers:
errors.append(
f"Async entry point '{entry_point.id}' has invalid trigger_type "
f"'{entry_point.trigger_type}'. Valid: {valid_triggers}"
)
# Check terminal nodes exist
for term in self.terminal_nodes:
if not self.get_node(term):
@@ -646,10 +586,6 @@ class GraphSpec(BaseModel):
for entry_point_node in self.entry_points.values():
to_visit.append(entry_point_node)
# Add all async entry points as valid starting points
for async_entry in self.async_entry_points:
to_visit.append(async_entry.entry_node)
# Traverse from all entry points
while to_visit:
current = to_visit.pop()
@@ -666,18 +602,10 @@ class GraphSpec(BaseModel):
for sub_agent_id in sub_agents:
reachable.add(sub_agent_id)
# Build set of async entry point nodes for quick lookup
async_entry_nodes = {ep.entry_node for ep in self.async_entry_points}
for node in self.nodes:
if node.id not in reachable:
# Skip if node is a pause node, entry point target, or async entry
# (pause/resume architecture and async entry points make reachable)
if (
node.id in self.pause_nodes
or node.id in self.entry_points.values()
or node.id in async_entry_nodes
):
# Skip if node is a pause node or entry point target
if node.id in self.pause_nodes or node.id in self.entry_points.values():
continue
errors.append(f"Node '{node.id}' is unreachable from entry")
@@ -0,0 +1,6 @@
"""EventLoopNode subpackage — modular components of the event loop orchestrator.
All public symbols are re-exported by the parent ``event_loop_node.py`` for
backward compatibility. Internal consumers may import directly from these
submodules for clarity.
"""
@@ -0,0 +1,641 @@
"""Conversation compaction pipeline.
Implements the multi-level compaction strategy:
1. Prune old tool results
2. Structure-preserving compaction (spillover)
3. LLM summary compaction (with recursive splitting)
4. Emergency deterministic summary (no LLM)
"""
from __future__ import annotations
import json
import logging
import os
import re
from datetime import UTC, datetime
from pathlib import Path
from typing import Any
from framework.graph.conversation import NodeConversation
from framework.graph.event_loop.event_publishing import publish_context_usage
from framework.graph.event_loop.types import LoopConfig, OutputAccumulator
from framework.graph.node import NodeContext
from framework.runtime.event_bus import EventBus
logger = logging.getLogger(__name__)
# Limits for LLM compaction
LLM_COMPACT_CHAR_LIMIT: int = 240_000
LLM_COMPACT_MAX_DEPTH: int = 10
async def compact(
ctx: NodeContext,
conversation: NodeConversation,
accumulator: OutputAccumulator | None,
*,
config: LoopConfig,
event_bus: EventBus | None,
char_limit: int = LLM_COMPACT_CHAR_LIMIT,
max_depth: int = LLM_COMPACT_MAX_DEPTH,
) -> None:
"""Run the full compaction pipeline if conversation needs compaction.
Pipeline stages (in order, short-circuits when budget is restored):
1. Prune old tool results
2. Structure-preserving compaction (free, no LLM)
3. LLM summary compaction (recursive split if too large)
4. Emergency deterministic summary (fallback)
"""
ratio_before = conversation.usage_ratio()
phase_grad = getattr(ctx, "continuous_mode", False)
pre_inventory: list[dict[str, Any]] | None = None
if ratio_before >= 1.0:
pre_inventory = build_message_inventory(conversation)
# --- Step 1: Prune old tool results (free, fast) ---
protect = max(2000, config.max_context_tokens // 12)
pruned = await conversation.prune_old_tool_results(
protect_tokens=protect,
min_prune_tokens=max(1000, protect // 3),
)
if pruned > 0:
logger.info(
"Pruned %d old tool results: %.0f%% -> %.0f%%",
pruned,
ratio_before * 100,
conversation.usage_ratio() * 100,
)
if not conversation.needs_compaction():
await log_compaction(
ctx,
conversation,
ratio_before,
event_bus,
pre_inventory=pre_inventory,
)
return
# --- Step 2: Standard structure-preserving compaction (free, no LLM) ---
spill_dir = config.spillover_dir
if spill_dir:
await conversation.compact_preserving_structure(
spillover_dir=spill_dir,
keep_recent=4,
phase_graduated=phase_grad,
)
if not conversation.needs_compaction():
await log_compaction(
ctx,
conversation,
ratio_before,
event_bus,
pre_inventory=pre_inventory,
)
return
# --- Step 3: LLM summary compaction ---
if ctx.llm is not None:
logger.info(
"LLM summary compaction triggered (%.0f%% usage)",
conversation.usage_ratio() * 100,
)
try:
summary = await llm_compact(
ctx,
list(conversation.messages),
accumulator,
char_limit=char_limit,
max_depth=max_depth,
max_context_tokens=config.max_context_tokens,
)
await conversation.compact(
summary,
keep_recent=2,
phase_graduated=phase_grad,
)
except Exception as e:
logger.warning("LLM compaction failed: %s", e)
if not conversation.needs_compaction():
await log_compaction(
ctx,
conversation,
ratio_before,
event_bus,
pre_inventory=pre_inventory,
)
return
# --- Step 4: Emergency deterministic summary (LLM failed/unavailable) ---
logger.warning(
"Emergency compaction (%.0f%% usage)",
conversation.usage_ratio() * 100,
)
summary = build_emergency_summary(ctx, accumulator, conversation, config)
await conversation.compact(
summary,
keep_recent=1,
phase_graduated=phase_grad,
)
await log_compaction(
ctx,
conversation,
ratio_before,
event_bus,
pre_inventory=pre_inventory,
)
# --- LLM compaction with binary-search splitting ----------------------
async def llm_compact(
ctx: NodeContext,
messages: list,
accumulator: OutputAccumulator | None = None,
_depth: int = 0,
*,
char_limit: int = LLM_COMPACT_CHAR_LIMIT,
max_depth: int = LLM_COMPACT_MAX_DEPTH,
max_context_tokens: int = 128_000,
) -> str:
"""Summarise *messages* with LLM, splitting recursively if too large.
If the formatted text exceeds ``LLM_COMPACT_CHAR_LIMIT`` or the LLM
rejects the call with a context-length error, the messages are split
in half and each half is summarised independently. Tool history is
appended once at the top-level call (``_depth == 0``).
"""
from framework.graph.conversation import extract_tool_call_history
from framework.graph.event_loop.tool_result_handler import is_context_too_large_error
if _depth > max_depth:
raise RuntimeError(f"LLM compaction recursion limit ({max_depth})")
formatted = format_messages_for_summary(messages)
# Proactive split: avoid wasting an API call on oversized input
if len(formatted) > char_limit and len(messages) > 1:
summary = await _llm_compact_split(
ctx,
messages,
accumulator,
_depth,
char_limit=char_limit,
max_depth=max_depth,
max_context_tokens=max_context_tokens,
)
else:
prompt = build_llm_compaction_prompt(
ctx,
accumulator,
formatted,
max_context_tokens=max_context_tokens,
)
summary_budget = max(1024, max_context_tokens // 2)
try:
response = await ctx.llm.acomplete(
messages=[{"role": "user", "content": prompt}],
system=(
"You are a conversation compactor for an AI agent. "
"Write a detailed summary that allows the agent to "
"continue its work. Preserve user-stated rules, "
"constraints, and account/identity preferences verbatim."
),
max_tokens=summary_budget,
)
summary = response.content
except Exception as e:
if is_context_too_large_error(e) and len(messages) > 1:
logger.info(
"LLM context too large (depth=%d, msgs=%d) — splitting",
_depth,
len(messages),
)
summary = await _llm_compact_split(
ctx,
messages,
accumulator,
_depth,
char_limit=char_limit,
max_depth=max_depth,
max_context_tokens=max_context_tokens,
)
else:
raise
# Append tool history at top level only
if _depth == 0:
tool_history = extract_tool_call_history(messages)
if tool_history and "TOOLS ALREADY CALLED" not in summary:
summary += "\n\n" + tool_history
return summary
async def _llm_compact_split(
ctx: NodeContext,
messages: list,
accumulator: OutputAccumulator | None,
_depth: int,
*,
char_limit: int = LLM_COMPACT_CHAR_LIMIT,
max_depth: int = LLM_COMPACT_MAX_DEPTH,
max_context_tokens: int = 128_000,
) -> str:
"""Split messages in half and summarise each half independently."""
mid = max(1, len(messages) // 2)
s1 = await llm_compact(
ctx,
messages[:mid],
None,
_depth + 1,
char_limit=char_limit,
max_depth=max_depth,
max_context_tokens=max_context_tokens,
)
s2 = await llm_compact(
ctx,
messages[mid:],
accumulator,
_depth + 1,
char_limit=char_limit,
max_depth=max_depth,
max_context_tokens=max_context_tokens,
)
return s1 + "\n\n" + s2
# --- Compaction helpers ------------------------------------------------
def format_messages_for_summary(messages: list) -> str:
"""Format messages as text for LLM summarisation."""
lines: list[str] = []
for m in messages:
if m.role == "tool":
content = m.content[:500]
if len(m.content) > 500:
content += "..."
lines.append(f"[tool result]: {content}")
elif m.role == "assistant" and m.tool_calls:
names = [tc.get("function", {}).get("name", "?") for tc in m.tool_calls]
text = m.content[:200] if m.content else ""
lines.append(f"[assistant (calls: {', '.join(names)})]: {text}")
else:
lines.append(f"[{m.role}]: {m.content}")
return "\n\n".join(lines)
def build_llm_compaction_prompt(
ctx: NodeContext,
accumulator: OutputAccumulator | None,
formatted_messages: str,
*,
max_context_tokens: int = 128_000,
) -> str:
"""Build prompt for LLM compaction targeting 50% of token budget."""
spec = ctx.node_spec
ctx_lines = [f"NODE: {spec.name} (id={spec.id})"]
if spec.description:
ctx_lines.append(f"PURPOSE: {spec.description}")
if spec.success_criteria:
ctx_lines.append(f"SUCCESS CRITERIA: {spec.success_criteria}")
if accumulator:
acc = accumulator.to_dict()
done = {k: v for k, v in acc.items() if v is not None}
todo = [k for k, v in acc.items() if v is None]
if done:
ctx_lines.append(
"OUTPUTS ALREADY SET:\n"
+ "\n".join(f" {k}: {str(v)[:150]}" for k, v in done.items())
)
if todo:
ctx_lines.append(f"OUTPUTS STILL NEEDED: {', '.join(todo)}")
elif spec.output_keys:
ctx_lines.append(f"OUTPUTS STILL NEEDED: {', '.join(spec.output_keys)}")
target_tokens = max_context_tokens // 2
target_chars = target_tokens * 4
node_ctx = "\n".join(ctx_lines)
return (
"You are compacting an AI agent's conversation history. "
"The agent is still working and needs to continue.\n\n"
f"AGENT CONTEXT:\n{node_ctx}\n\n"
f"CONVERSATION MESSAGES:\n{formatted_messages}\n\n"
"INSTRUCTIONS:\n"
f"Write a summary of approximately {target_chars} characters "
f"(~{target_tokens} tokens).\n"
"1. Preserve ALL user-stated rules, constraints, and preferences "
"verbatim.\n"
"2. Preserve key decisions made and results obtained.\n"
"3. Preserve in-progress work state so the agent can continue.\n"
"4. Be detailed enough that the agent can resume without "
"re-doing work.\n"
)
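# Sizing sketch, assuming the default max_context_tokens of 128_000:
#   target_tokens = 128_000 // 2 = 64_000
#   target_chars  = 64_000 * 4   = 256_000
# i.e. the prompt asks for a summary of roughly half the context window,
# using the common ~4-characters-per-token heuristic.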
def build_message_inventory(conversation: NodeConversation) -> list[dict[str, Any]]:
"""Build a per-message size inventory for debug logging."""
inventory: list[dict[str, Any]] = []
for message in conversation.messages:
content_chars = len(message.content)
tool_call_args_chars = 0
tool_name = None
if message.tool_calls:
for tool_call in message.tool_calls:
args = tool_call.get("function", {}).get("arguments", "")
tool_call_args_chars += (
len(args) if isinstance(args, str) else len(json.dumps(args))
)
names = [
tool_call.get("function", {}).get("name", "?") for tool_call in message.tool_calls
]
tool_name = ", ".join(names)
elif message.role == "tool" and message.tool_use_id:
for previous in conversation.messages:
if previous.tool_calls:
for tool_call in previous.tool_calls:
if tool_call.get("id") == message.tool_use_id:
tool_name = tool_call.get("function", {}).get("name", "?")
break
if tool_name:
break
entry: dict[str, Any] = {
"seq": message.seq,
"role": message.role,
"content_chars": content_chars,
}
if tool_call_args_chars:
entry["tool_call_args_chars"] = tool_call_args_chars
if tool_name:
entry["tool"] = tool_name
if message.is_error:
entry["is_error"] = True
if message.phase_id:
entry["phase"] = message.phase_id
if content_chars > 2000:
entry["preview"] = message.content[:200] + ""
inventory.append(entry)
return inventory
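# Example inventory entry (hypothetical values):
#   {"seq": 17, "role": "tool", "content_chars": 5231,
#    "tool": "google_sheets_get_values", "preview": "..."}
# Entries over 2000 content chars also carry a 200-char "preview" used by the
# debug log below.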
def write_compaction_debug_log(
ctx: NodeContext,
before_pct: int,
after_pct: int,
level: str,
inventory: list[dict[str, Any]] | None,
) -> None:
"""Write detailed compaction analysis to ~/.hive/compaction_log/."""
log_dir = Path.home() / ".hive" / "compaction_log"
log_dir.mkdir(parents=True, exist_ok=True)
ts = datetime.now(UTC).strftime("%Y%m%dT%H%M%S_%f")
node_label = ctx.node_id.replace("/", "_")
log_path = log_dir / f"{ts}_{node_label}.md"
lines: list[str] = [
f"# Compaction Debug — {ctx.node_id}",
f"**Time:** {datetime.now(UTC).isoformat()}",
f"**Node:** {ctx.node_spec.name} (`{ctx.node_id}`)",
]
if ctx.stream_id:
lines.append(f"**Stream:** {ctx.stream_id}")
lines.append(f"**Level:** {level}")
lines.append(f"**Usage:** {before_pct}% → {after_pct}%")
lines.append("")
if inventory:
total_chars = sum(
entry.get("content_chars", 0) + entry.get("tool_call_args_chars", 0)
for entry in inventory
)
lines.append(
"## Pre-Compaction Message Inventory "
f"({len(inventory)} messages, {total_chars:,} total chars)"
)
lines.append("")
ranked = sorted(
inventory,
key=lambda entry: entry.get("content_chars", 0) + entry.get("tool_call_args_chars", 0),
reverse=True,
)
lines.append("| # | seq | role | tool | chars | % of total | flags |")
lines.append("|---|-----|------|------|------:|------------|-------|")
for i, entry in enumerate(ranked, 1):
chars = entry.get("content_chars", 0) + entry.get("tool_call_args_chars", 0)
pct = (chars / total_chars * 100) if total_chars else 0
tool = entry.get("tool", "")
flags: list[str] = []
if entry.get("is_error"):
flags.append("error")
if entry.get("phase"):
flags.append(f"phase={entry['phase']}")
lines.append(
f"| {i} | {entry['seq']} | {entry['role']} | {tool} "
f"| {chars:,} | {pct:.1f}% | {', '.join(flags)} |"
)
large = [entry for entry in ranked if entry.get("preview")]
if large:
lines.append("")
lines.append("### Large message previews")
for entry in large:
lines.append(
f"\n**seq={entry['seq']}** ({entry['role']}, {entry.get('tool', '')}):"
)
lines.append(f"```\n{entry['preview']}\n```")
lines.append("")
try:
log_path.write_text("\n".join(lines), encoding="utf-8")
logger.debug("Compaction debug log written to %s", log_path)
except OSError:
logger.debug("Failed to write compaction debug log to %s", log_path)
async def log_compaction(
ctx: NodeContext,
conversation: NodeConversation,
ratio_before: float,
event_bus: EventBus | None,
*,
pre_inventory: list[dict[str, Any]] | None = None,
) -> None:
"""Log compaction result to runtime logger and event bus."""
ratio_after = conversation.usage_ratio()
before_pct = round(ratio_before * 100)
after_pct = round(ratio_after * 100)
# Determine label from what happened
if after_pct >= before_pct - 1:
level = "prune_only"
elif ratio_after <= 0.6:
level = "llm"
else:
level = "structural"
logger.info(
"Compaction complete (%s): %d%% -> %d%%",
level,
before_pct,
after_pct,
)
if ctx.runtime_logger:
ctx.runtime_logger.log_step(
node_id=ctx.node_id,
node_type="event_loop",
step_index=-1,
llm_text=f"Context compacted ({level}): {before_pct}% \u2192 {after_pct}%",
verdict="COMPACTION",
verdict_feedback=f"level={level} before={before_pct}% after={after_pct}%",
)
if event_bus:
from framework.runtime.event_bus import AgentEvent, EventType
event_data: dict[str, Any] = {
"level": level,
"usage_before": before_pct,
"usage_after": after_pct,
}
if pre_inventory is not None:
event_data["message_inventory"] = pre_inventory
await event_bus.publish(
AgentEvent(
type=EventType.CONTEXT_COMPACTED,
stream_id=ctx.stream_id or ctx.node_id,
node_id=ctx.node_id,
data=event_data,
)
)
await publish_context_usage(event_bus, ctx, conversation, "post_compaction")
if os.environ.get("HIVE_COMPACTION_DEBUG"):
write_compaction_debug_log(ctx, before_pct, after_pct, level, pre_inventory)
def build_emergency_summary(
ctx: NodeContext,
accumulator: OutputAccumulator | None = None,
conversation: NodeConversation | None = None,
config: LoopConfig | None = None,
) -> str:
"""Build a structured emergency compaction summary.
Unlike normal/aggressive compaction which uses an LLM summary,
emergency compaction cannot afford an LLM call (context is already
way over budget). Instead, build a deterministic summary from the
node's known state so the LLM can continue working after
compaction without losing track of its task and inputs.
"""
parts = [
"EMERGENCY COMPACTION — previous conversation was too large "
"and has been replaced with this summary.\n"
]
# 1. Node identity
spec = ctx.node_spec
parts.append(f"NODE: {spec.name} (id={spec.id})")
if spec.description:
parts.append(f"PURPOSE: {spec.description}")
# 2. Inputs the node received
input_lines = []
for key in spec.input_keys:
value = ctx.input_data.get(key) or ctx.memory.read(key)
if value is not None:
# Truncate long values but keep them recognisable
v_str = str(value)
if len(v_str) > 200:
v_str = v_str[:200] + ""
input_lines.append(f" {key}: {v_str}")
if input_lines:
parts.append("INPUTS:\n" + "\n".join(input_lines))
# 3. Output accumulator state (what's been set so far)
if accumulator:
acc_state = accumulator.to_dict()
set_keys = {k: v for k, v in acc_state.items() if v is not None}
missing = [k for k, v in acc_state.items() if v is None]
if set_keys:
lines = [f" {k}: {str(v)[:150]}" for k, v in set_keys.items()]
parts.append("OUTPUTS ALREADY SET:\n" + "\n".join(lines))
if missing:
parts.append(f"OUTPUTS STILL NEEDED: {', '.join(missing)}")
elif spec.output_keys:
parts.append(f"OUTPUTS STILL NEEDED: {', '.join(spec.output_keys)}")
# 4. Available tools reminder
if spec.tools:
parts.append(f"AVAILABLE TOOLS: {', '.join(spec.tools)}")
# 5. Spillover files — list actual files so the LLM can load
# them immediately instead of having to call list_data_files first.
spillover_dir = config.spillover_dir if config else None
if spillover_dir:
try:
from pathlib import Path
data_dir = Path(spillover_dir)
if data_dir.is_dir():
all_files = sorted(f.name for f in data_dir.iterdir() if f.is_file())
# Separate conversation history files from regular data files
conv_files = [f for f in all_files if re.match(r"conversation_\d+\.md$", f)]
data_files = [f for f in all_files if f not in conv_files]
if conv_files:
conv_list = "\n".join(
f" - {f} (full path: {data_dir / f})" for f in conv_files
)
parts.append(
"CONVERSATION HISTORY (freeform messages saved during compaction — "
"use load_data('<filename>') to review earlier dialogue):\n" + conv_list
)
if data_files:
file_list = "\n".join(
f" - {f} (full path: {data_dir / f})" for f in data_files[:30]
)
parts.append("DATA FILES (use load_data('<filename>') to read):\n" + file_list)
if not all_files:
parts.append(
"NOTE: Large tool results may have been saved to files. "
"Use list_directory to check the data directory."
)
except Exception:
parts.append(
"NOTE: Large tool results were saved to files. "
"Use read_file(path='<path>') to read them."
)
# 6. Tool call history (prevent re-calling tools)
if conversation is not None:
tool_history = _extract_tool_call_history(conversation)
if tool_history:
parts.append(tool_history)
parts.append(
"\nContinue working towards setting the remaining outputs. "
"Use your tools and the inputs above."
)
return "\n\n".join(parts)
def _extract_tool_call_history(conversation: NodeConversation) -> str:
"""Extract tool call history from conversation messages.
This is the instance-level variant that operates on a NodeConversation
directly (vs. the module-level extract_tool_call_history in conversation.py
which works on raw message lists).
"""
from framework.graph.conversation import extract_tool_call_history
return extract_tool_call_history(list(conversation.messages))
@@ -0,0 +1,239 @@
"""Cursor persistence, queue draining, and pause detection.
Handles the checkpoint/resume cycle: restoring state from a previous
conversation store, writing cursor data, and managing injection/trigger
queues between iterations.
"""
from __future__ import annotations
import asyncio
import json
import logging
from collections.abc import Awaitable, Callable
from dataclasses import dataclass
from typing import Any
from framework.graph.conversation import ConversationStore, NodeConversation
from framework.graph.event_loop.types import LoopConfig, OutputAccumulator, TriggerEvent
from framework.graph.node import NodeContext
from framework.llm.capabilities import supports_image_tool_results
logger = logging.getLogger(__name__)
@dataclass
class RestoredState:
"""State recovered from a previous checkpoint."""
conversation: NodeConversation
accumulator: OutputAccumulator
start_iteration: int
recent_responses: list[str]
recent_tool_fingerprints: list[list[tuple[str, str]]]
async def restore(
conversation_store: ConversationStore | None,
ctx: NodeContext,
config: LoopConfig,
) -> RestoredState | None:
"""Attempt to restore from a previous checkpoint.
Returns a ``RestoredState`` with conversation, accumulator, iteration
counter, and stall/doom-loop detection state: everything needed to
resume exactly where execution stopped.
"""
if conversation_store is None:
return None
# In isolated mode, filter parts by phase_id so the node only sees
# its own messages in the shared flat conversation store. In
# continuous mode (or when _restore is called for timer-resume)
# load all parts — the full conversation threads across nodes.
_is_continuous = getattr(ctx, "continuous_mode", False)
phase_filter = None if _is_continuous else ctx.node_id
conversation = await NodeConversation.restore(
conversation_store,
phase_id=phase_filter,
)
if conversation is None:
return None
accumulator = await OutputAccumulator.restore(conversation_store)
accumulator.spillover_dir = config.spillover_dir
accumulator.max_value_chars = config.max_output_value_chars
cursor = await conversation_store.read_cursor()
start_iteration = cursor.get("iteration", 0) + 1 if cursor else 0
# Restore stall/doom-loop detection state
recent_responses: list[str] = cursor.get("recent_responses", []) if cursor else []
raw_fps = cursor.get("recent_tool_fingerprints", []) if cursor else []
recent_tool_fingerprints: list[list[tuple[str, str]]] = [
[tuple(pair) for pair in fps] # type: ignore[misc]
for fps in raw_fps
]
logger.info(
f"Restored event loop: iteration={start_iteration}, "
f"messages={conversation.message_count}, "
f"outputs={list(accumulator.values.keys())}, "
f"stall_window={len(recent_responses)}, "
f"doom_window={len(recent_tool_fingerprints)}"
)
return RestoredState(
conversation=conversation,
accumulator=accumulator,
start_iteration=start_iteration,
recent_responses=recent_responses,
recent_tool_fingerprints=recent_tool_fingerprints,
)
async def write_cursor(
conversation_store: ConversationStore | None,
ctx: NodeContext,
conversation: NodeConversation,
accumulator: OutputAccumulator,
iteration: int,
*,
recent_responses: list[str] | None = None,
recent_tool_fingerprints: list[list[tuple[str, str]]] | None = None,
) -> None:
"""Write checkpoint cursor for crash recovery.
Persists iteration counter, accumulator outputs, and stall/doom-loop
detection state so that resume picks up exactly where execution stopped.
"""
if conversation_store:
cursor = await conversation_store.read_cursor() or {}
cursor.update(
{
"iteration": iteration,
"node_id": ctx.node_id,
"next_seq": conversation.next_seq,
"outputs": accumulator.to_dict(),
}
)
# Persist stall/doom-loop detection state for reliable resume
if recent_responses is not None:
cursor["recent_responses"] = recent_responses
if recent_tool_fingerprints is not None:
# Convert list[list[tuple]] → list[list[list]] for JSON
cursor["recent_tool_fingerprints"] = [
[list(pair) for pair in fps] for fps in recent_tool_fingerprints
]
await conversation_store.write_cursor(cursor)
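# Example cursor contents (hypothetical values):
#   {"iteration": 4, "node_id": "research", "next_seq": 23,
#    "outputs": {"report": "report.md", "summary": None},
#    "recent_responses": ["Working on the report...", "Working on the report..."],
#    "recent_tool_fingerprints": [[["web_search", '{"query": "acme"}']]]}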
async def drain_injection_queue(
queue: asyncio.Queue,
conversation: NodeConversation,
*,
ctx: NodeContext,
describe_images_as_text_fn: (
Callable[[list[dict[str, Any]]], Awaitable[str | None]] | None
) = None,
) -> int:
"""Drain all pending injected events as user messages. Returns count."""
count = 0
while not queue.empty():
try:
content, is_client_input, image_content = queue.get_nowait()
logger.info(
"[drain] injected message (client_input=%s, images=%d): %s",
is_client_input,
len(image_content) if image_content else 0,
content[:200] if content else "(empty)",
)
if image_content and ctx.llm and not supports_image_tool_results(ctx.llm.model):
logger.info(
"Model '%s' does not support images; attempting vision fallback",
ctx.llm.model,
)
if describe_images_as_text_fn is not None:
description = await describe_images_as_text_fn(image_content)
if description:
content = f"{content}\n\n{description}" if content else description
logger.info("[drain] image described as text via vision fallback")
else:
logger.info("[drain] no vision fallback available; images dropped")
image_content = None
# Real user input is stored as-is; external events get a prefix
if is_client_input:
await conversation.add_user_message(
content,
is_client_input=True,
image_content=image_content,
)
else:
await conversation.add_user_message(f"[External event]: {content}")
count += 1
except asyncio.QueueEmpty:
break
return count
async def drain_trigger_queue(
queue: asyncio.Queue,
conversation: NodeConversation,
) -> int:
"""Drain all pending trigger events as a single batched user message.
Multiple triggers are merged so the LLM sees them atomically and can
reason about all pending triggers before acting.
"""
triggers: list[TriggerEvent] = []
while not queue.empty():
try:
triggers.append(queue.get_nowait())
except asyncio.QueueEmpty:
break
if not triggers:
return 0
parts: list[str] = []
for t in triggers:
task = t.payload.get("task", "")
task_line = f"\nTask: {task}" if task else ""
payload_str = json.dumps(t.payload, default=str)
parts.append(f"[TRIGGER: {t.trigger_type}/{t.source_id}]{task_line}\n{payload_str}")
combined = "\n\n".join(parts)
logger.info("[drain] %d trigger(s): %s", len(triggers), combined[:200])
await conversation.add_user_message(combined)
return len(triggers)
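# Example of the single batched message produced for two pending triggers
# (hypothetical TriggerEvent values):
#   [TRIGGER: cron/daily-report]
#   Task: Send the daily report
#   {"task": "Send the daily report"}
#
#   [TRIGGER: webhook/github-push]
#   {"ref": "refs/heads/main"}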
async def check_pause(
ctx: NodeContext,
conversation: NodeConversation,
iteration: int,
) -> bool:
"""
Check if pause has been requested. Returns True if paused.
Note: This check happens BEFORE starting iteration N, after completing N-1.
If paused, the node exits having completed {iteration} iterations (0 to iteration-1).
"""
# Check executor-level pause event (for /pause command, Ctrl+Z)
if ctx.pause_event and ctx.pause_event.is_set():
completed = iteration # 0-indexed: iteration=3 means 3 iterations completed (0,1,2)
logger.info(f"⏸ Pausing after {completed} iteration(s) completed (executor-level)")
return True
# Check context-level pause flags (legacy/alternative methods)
pause_requested = ctx.input_data.get("pause_requested", False)
if not pause_requested:
try:
pause_requested = ctx.memory.read("pause_requested") or False
except (PermissionError, KeyError):
pause_requested = False
if pause_requested:
completed = iteration
logger.info(f"⏸ Pausing after {completed} iteration(s) completed (context-level)")
return True
return False
@@ -0,0 +1,360 @@
"""EventBus publishing helpers for the event loop.
Thin wrappers around EventBus.emit_*() calls that check for bus existence
before publishing. Extracted to reduce noise in the main orchestrator.
"""
from __future__ import annotations
import logging
import time
from framework.graph.conversation import NodeConversation
from framework.graph.event_loop.types import HookContext
from framework.graph.node import NodeContext
from framework.runtime.event_bus import EventBus
logger = logging.getLogger(__name__)
async def publish_loop_started(
event_bus: EventBus | None,
stream_id: str,
node_id: str,
max_iterations: int,
execution_id: str = "",
) -> None:
if event_bus:
await event_bus.emit_node_loop_started(
stream_id=stream_id,
node_id=node_id,
max_iterations=max_iterations,
execution_id=execution_id,
)
async def generate_action_plan(
event_bus: EventBus | None,
ctx: NodeContext,
stream_id: str,
node_id: str,
execution_id: str,
) -> None:
"""Generate a brief action plan via LLM and emit it as an SSE event.
Runs as a fire-and-forget task so it never blocks the main loop.
"""
try:
system_prompt = ctx.node_spec.system_prompt or ""
# Trim to keep the prompt small
prompt_summary = system_prompt[:500]
if len(system_prompt) > 500:
prompt_summary += "..."
tool_names = [t.name for t in ctx.available_tools]
output_keys = ctx.node_spec.output_keys or []
prompt = (
f'You are about to work on a task as node "{node_id}".\n\n'
f"System prompt:\n{prompt_summary}\n\n"
f"Tools available: {tool_names}\n"
f"Required outputs: {output_keys}\n\n"
f"Write a brief action plan (2-5 bullet points) describing "
f"what you will do to complete this task. Be specific and concise.\n"
f"Return ONLY the plan text, no preamble."
)
response = await ctx.llm.acomplete(
messages=[{"role": "user", "content": prompt}],
max_tokens=1024,
)
plan = response.content.strip()
if plan and event_bus:
await event_bus.emit_node_action_plan(
stream_id=stream_id,
node_id=node_id,
plan=plan,
execution_id=execution_id,
)
except Exception as e:
logger.warning("Action plan generation failed for node '%s': %s", node_id, e)
async def publish_iteration(
event_bus: EventBus | None,
stream_id: str,
node_id: str,
iteration: int,
execution_id: str = "",
extra_data: dict | None = None,
) -> None:
if event_bus:
await event_bus.emit_node_loop_iteration(
stream_id=stream_id,
node_id=node_id,
iteration=iteration,
execution_id=execution_id,
extra_data=extra_data,
)
async def publish_llm_turn_complete(
event_bus: EventBus | None,
stream_id: str,
node_id: str,
stop_reason: str,
model: str,
input_tokens: int,
output_tokens: int,
cached_tokens: int = 0,
execution_id: str = "",
iteration: int | None = None,
) -> None:
if event_bus:
await event_bus.emit_llm_turn_complete(
stream_id=stream_id,
node_id=node_id,
stop_reason=stop_reason,
model=model,
input_tokens=input_tokens,
output_tokens=output_tokens,
cached_tokens=cached_tokens,
execution_id=execution_id,
iteration=iteration,
)
def log_skip_judge(
ctx: NodeContext,
node_id: str,
iteration: int,
feedback: str,
tool_calls: list[dict],
llm_text: str,
turn_tokens: dict[str, int],
iter_start: float,
) -> None:
"""Log a CONTINUE step that skips judge evaluation (e.g., waiting for input)."""
if ctx.runtime_logger:
ctx.runtime_logger.log_step(
node_id=node_id,
node_type="event_loop",
step_index=iteration,
verdict="CONTINUE",
verdict_feedback=feedback,
tool_calls=tool_calls,
llm_text=llm_text,
input_tokens=turn_tokens.get("input", 0),
output_tokens=turn_tokens.get("output", 0),
latency_ms=int((time.time() - iter_start) * 1000),
)
async def publish_loop_completed(
event_bus: EventBus | None,
stream_id: str,
node_id: str,
iterations: int,
execution_id: str = "",
) -> None:
if event_bus:
await event_bus.emit_node_loop_completed(
stream_id=stream_id,
node_id=node_id,
iterations=iterations,
execution_id=execution_id,
)
async def publish_context_usage(
event_bus: EventBus | None,
ctx: NodeContext,
conversation: NodeConversation,
trigger: str,
) -> None:
"""Emit a CONTEXT_USAGE_UPDATED event with current context window state."""
if not event_bus:
return
from framework.runtime.event_bus import AgentEvent, EventType
estimated = conversation.estimate_tokens()
max_tokens = conversation._max_context_tokens
ratio = estimated / max_tokens if max_tokens > 0 else 0.0
await event_bus.publish(
AgentEvent(
type=EventType.CONTEXT_USAGE_UPDATED,
stream_id=ctx.stream_id or ctx.node_id,
node_id=ctx.node_id,
data={
"usage_ratio": round(ratio, 4),
"usage_pct": round(ratio * 100),
"message_count": conversation.message_count,
"estimated_tokens": estimated,
"max_context_tokens": max_tokens,
"trigger": trigger,
},
)
)
async def publish_stalled(
event_bus: EventBus | None,
stream_id: str,
node_id: str,
execution_id: str = "",
) -> None:
if event_bus:
await event_bus.emit_node_stalled(
stream_id=stream_id,
node_id=node_id,
reason="Consecutive similar responses detected",
execution_id=execution_id,
)
async def publish_text_delta(
event_bus: EventBus | None,
stream_id: str,
node_id: str,
content: str,
snapshot: str,
ctx: NodeContext,
execution_id: str = "",
iteration: int | None = None,
inner_turn: int = 0,
) -> None:
if event_bus:
if ctx.node_spec.client_facing:
await event_bus.emit_client_output_delta(
stream_id=stream_id,
node_id=node_id,
content=content,
snapshot=snapshot,
execution_id=execution_id,
iteration=iteration,
inner_turn=inner_turn,
)
else:
await event_bus.emit_llm_text_delta(
stream_id=stream_id,
node_id=node_id,
content=content,
snapshot=snapshot,
execution_id=execution_id,
inner_turn=inner_turn,
)
async def publish_tool_started(
event_bus: EventBus | None,
stream_id: str,
node_id: str,
tool_use_id: str,
tool_name: str,
tool_input: dict,
execution_id: str = "",
) -> None:
if event_bus:
await event_bus.emit_tool_call_started(
stream_id=stream_id,
node_id=node_id,
tool_use_id=tool_use_id,
tool_name=tool_name,
tool_input=tool_input,
execution_id=execution_id,
)
async def publish_tool_completed(
event_bus: EventBus | None,
stream_id: str,
node_id: str,
tool_use_id: str,
tool_name: str,
result: str,
is_error: bool,
execution_id: str = "",
) -> None:
if event_bus:
await event_bus.emit_tool_call_completed(
stream_id=stream_id,
node_id=node_id,
tool_use_id=tool_use_id,
tool_name=tool_name,
result=result,
is_error=is_error,
execution_id=execution_id,
)
async def publish_judge_verdict(
event_bus: EventBus | None,
stream_id: str,
node_id: str,
action: str,
feedback: str = "",
judge_type: str = "implicit",
iteration: int = 0,
execution_id: str = "",
) -> None:
if event_bus:
await event_bus.emit_judge_verdict(
stream_id=stream_id,
node_id=node_id,
action=action,
feedback=feedback,
judge_type=judge_type,
iteration=iteration,
execution_id=execution_id,
)
async def publish_output_key_set(
event_bus: EventBus | None,
stream_id: str,
node_id: str,
key: str,
execution_id: str = "",
) -> None:
if event_bus:
await event_bus.emit_output_key_set(
stream_id=stream_id, node_id=node_id, key=key, execution_id=execution_id
)
async def run_hooks(
hooks_config: dict[str, list],
event: str,
conversation: NodeConversation,
trigger: str | None = None,
) -> None:
"""Run all registered hooks for *event*, applying their results.
Each hook receives a HookContext and may return a HookResult that:
- replaces the system prompt (result.system_prompt)
- injects an extra user message (result.inject)
Hooks run in registration order; each sees the prompt as left by the
previous hook.
"""
hook_list = hooks_config.get(event, [])
if not hook_list:
return
for hook in hook_list:
ctx = HookContext(
event=event,
trigger=trigger,
system_prompt=conversation.system_prompt,
)
try:
result = await hook(ctx)
except Exception:
logger.warning("Hook '%s' raised an exception", event, exc_info=True)
continue
if result is None:
continue
if result.system_prompt:
conversation.update_system_prompt(result.system_prompt)
if result.inject:
await conversation.add_user_message(result.inject)
@@ -0,0 +1,175 @@
"""Judge evaluation pipeline for the event loop."""
from __future__ import annotations
import logging
from collections.abc import Callable
from framework.graph.conversation import NodeConversation
from framework.graph.event_loop.types import JudgeProtocol, JudgeVerdict, OutputAccumulator
from framework.graph.node import NodeContext
logger = logging.getLogger(__name__)
class SubagentJudge:
"""Judge for subagent execution."""
def __init__(self, task: str, max_iterations: int = 10):
self._task = task
self._max_iterations = max_iterations
async def evaluate(self, context: dict[str, object]) -> JudgeVerdict:
missing = context.get("missing_keys", [])
if not isinstance(missing, list) or not missing:
return JudgeVerdict(action="ACCEPT", feedback="")
iteration = context.get("iteration", 0)
if not isinstance(iteration, int):
iteration = 0
remaining = self._max_iterations - iteration - 1
if remaining <= 3:
urgency = (
f"URGENT: Only {remaining} iterations left. "
f"Stop all other work and call set_output NOW for: {missing}"
)
elif remaining <= self._max_iterations // 2:
urgency = (
f"WARNING: {remaining} iterations remaining. "
f"You must call set_output for: {missing}"
)
else:
urgency = f"Missing output keys: {missing}. Use set_output to provide them."
return JudgeVerdict(action="RETRY", feedback=f"Your task: {self._task}\n{urgency}")
async def judge_turn(
*,
mark_complete_flag: bool,
judge: JudgeProtocol | None,
ctx: NodeContext,
conversation: NodeConversation,
accumulator: OutputAccumulator,
assistant_text: str,
tool_results: list[dict[str, object]],
iteration: int,
get_missing_output_keys_fn: Callable[
[OutputAccumulator, list[str] | None, list[str] | None],
list[str],
],
max_context_tokens: int,
) -> JudgeVerdict:
"""Evaluate the current state using judge or implicit logic.
Evaluation levels (in order):
0. Short-circuits: mark_complete, skip_judge, tool-continue.
1. Custom judge (JudgeProtocol): full authority when set.
2. Implicit judge: output-key check + optional conversation-aware
quality gate (when ``success_criteria`` is defined).
Returns a JudgeVerdict. ``feedback=None`` means no real evaluation
happened (skip_judge, tool-continue); the caller must not inject a
feedback message. Any non-None feedback (including ``""``) means a
real evaluation occurred and will be logged into the conversation.
"""
# --- Level 0: short-circuits (no evaluation) -----------------------
if mark_complete_flag:
return JudgeVerdict(action="ACCEPT")
if ctx.node_spec.skip_judge:
return JudgeVerdict(action="RETRY") # feedback=None → not logged
# --- Level 1: custom judge -----------------------------------------
if judge is not None:
context = {
"assistant_text": assistant_text,
"tool_calls": tool_results,
"output_accumulator": accumulator.to_dict(),
"accumulator": accumulator,
"iteration": iteration,
"conversation_summary": conversation.export_summary(),
"output_keys": ctx.node_spec.output_keys,
"missing_keys": get_missing_output_keys_fn(
accumulator, ctx.node_spec.output_keys, ctx.node_spec.nullable_output_keys
),
}
verdict = await judge.evaluate(context)
# Ensure evaluated RETRY always carries feedback for logging.
if verdict.action == "RETRY" and not verdict.feedback:
return JudgeVerdict(action="RETRY", feedback="Custom judge returned RETRY.")
return verdict
# --- Level 2: implicit judge ---------------------------------------
# Real tool calls were made — let the agent keep working.
if tool_results:
return JudgeVerdict(action="RETRY") # feedback=None → not logged
missing = get_missing_output_keys_fn(
accumulator, ctx.node_spec.output_keys, ctx.node_spec.nullable_output_keys
)
if missing:
return JudgeVerdict(
action="RETRY",
feedback=(
f"Task incomplete. Required outputs not yet produced: {missing}. "
f"Follow your system prompt instructions to complete the work."
),
)
# All output keys present — run safety checks before accepting.
output_keys = ctx.node_spec.output_keys or []
nullable_keys = set(ctx.node_spec.nullable_output_keys or [])
# All-nullable with nothing set → node produced nothing useful.
all_nullable = output_keys and nullable_keys >= set(output_keys)
none_set = not any(accumulator.get(k) is not None for k in output_keys)
if all_nullable and none_set:
return JudgeVerdict(
action="RETRY",
feedback=(
f"No output keys have been set yet. "
f"Use set_output to set at least one of: {output_keys}"
),
)
# Client-facing with no output keys → continuous interaction node.
# Inject tool-use pressure instead of auto-accepting.
if not output_keys and ctx.node_spec.client_facing:
return JudgeVerdict(
action="RETRY",
feedback=(
"STOP describing what you will do. "
"You have FULL access to all tools — file creation, "
"shell commands, MCP tools — and you CAN call them "
"directly in your response. Respond ONLY with tool "
"calls, no prose. Execute the task now."
),
)
# Level 2b: conversation-aware quality check (if success_criteria set)
if ctx.node_spec.success_criteria and ctx.llm:
from framework.graph.conversation_judge import evaluate_phase_completion
verdict = await evaluate_phase_completion(
llm=ctx.llm,
conversation=conversation,
phase_name=ctx.node_spec.name,
phase_description=ctx.node_spec.description,
success_criteria=ctx.node_spec.success_criteria,
accumulator_state=accumulator.to_dict(),
max_context_tokens=max_context_tokens,
)
if verdict.action != "ACCEPT":
return JudgeVerdict(
action=verdict.action,
feedback=verdict.feedback or "Phase criteria not met.",
)
return JudgeVerdict(action="ACCEPT", feedback="")
@@ -0,0 +1,106 @@
"""Stall and doom-loop detection for the event loop.
Pure functions with no class dependencies, safe to call from any context.
"""
from __future__ import annotations
import json
def ngram_similarity(s1: str, s2: str, n: int = 2) -> float:
"""Jaccard similarity of n-gram sets.
Returns 0.0-1.0, where 1.0 is exact match.
Fast: O(len(s1) + len(s2)) using set operations.
"""
def _ngrams(s: str) -> set[str]:
return {s[i : i + n] for i in range(len(s) - n + 1) if s.strip()}
if not s1 or not s2:
return 0.0
ngrams1, ngrams2 = _ngrams(s1.lower()), _ngrams(s2.lower())
if not ngrams1 or not ngrams2:
return 0.0
intersection = len(ngrams1 & ngrams2)
union = len(ngrams1 | ngrams2)
return intersection / union if union else 0.0
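# Worked examples (bigrams over the lowercased strings):
#   ngram_similarity("stuck", "stuck!")        -> 4/5 = 0.80
#   ngram_similarity("attempt 1", "attempt 2") -> 7/9 ≈ 0.78
#   ngram_similarity("abc", "xyz")             -> 0.0 (disjoint bigram sets)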
def is_stalled(
recent_responses: list[str],
threshold: int,
similarity_threshold: float,
) -> bool:
"""Detect stall using n-gram similarity.
Detects when ALL N consecutive responses are mutually similar
(>= threshold). A single dissimilar response resets the signal.
This catches phrases like "I'm still stuck" vs "I'm stuck"
without false-positives on "attempt 1" vs "attempt 2".
"""
if len(recent_responses) < threshold:
return False
if not recent_responses[0]:
return False
# Every consecutive pair must be similar
for i in range(1, len(recent_responses)):
if ngram_similarity(recent_responses[i], recent_responses[i - 1]) < similarity_threshold:
return False
return True
def fingerprint_tool_calls(
tool_results: list[dict],
) -> list[tuple[str, str]]:
"""Create deterministic fingerprints for a turn's tool calls.
Each fingerprint is (tool_name, canonical_args_json). Order-sensitive
so [search("a"), fetch("b")] != [fetch("b"), search("a")].
"""
fingerprints = []
for tr in tool_results:
name = tr.get("tool_name", "")
args = tr.get("tool_input", {})
try:
canonical = json.dumps(args, sort_keys=True, default=str)
except (TypeError, ValueError):
canonical = str(args)
fingerprints.append((name, canonical))
return fingerprints
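# Example (hypothetical tool_results entry):
#   fingerprint_tool_calls(
#       [{"tool_name": "web_search", "tool_input": {"query": "acme revenue"}}]
#   )
#   -> [("web_search", '{"query": "acme revenue"}')]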
def is_tool_doom_loop(
recent_tool_fingerprints: list[list[tuple[str, str]]],
threshold: int,
enabled: bool = True,
) -> tuple[bool, str]:
"""Detect doom loop via exact fingerprint match.
Detects when N consecutive turns invoke the same tools with
identical (canonicalized) arguments. Different arguments mean
different work, so only exact matches count.
Returns (is_doom_loop, description).
"""
if not enabled:
return False, ""
if len(recent_tool_fingerprints) < threshold:
return False, ""
first = recent_tool_fingerprints[0]
if not first:
return False, ""
# All turns in the window must match the first exactly
if all(fp == first for fp in recent_tool_fingerprints[1:]):
tool_names = [name for name, _ in first]
desc = (
f"Doom loop detected: {len(recent_tool_fingerprints)} "
f"identical consecutive tool calls ({', '.join(tool_names)})"
)
return True, desc
return False, ""
@@ -0,0 +1,412 @@
"""Subagent execution for the event loop.
Handles the full subagent lifecycle: validation, context setup, tool filtering,
conversation store derivation, execution, and cleanup. Also includes the
_EscalationReceiver helper used for subagent queen escalation routing.
"""
from __future__ import annotations
import asyncio
import json
import logging
import time
from collections.abc import Awaitable, Callable
from pathlib import Path
from typing import TYPE_CHECKING, Any
from framework.graph.conversation import ConversationStore
from framework.graph.event_loop.judge_pipeline import SubagentJudge
from framework.graph.event_loop.types import LoopConfig, OutputAccumulator
from framework.graph.node import NodeContext, SharedMemory
from framework.llm.provider import ToolResult, ToolUse
from framework.runtime.event_bus import EventBus
if TYPE_CHECKING:
from framework.graph.event_loop_node import EventLoopNode
logger = logging.getLogger(__name__)
class EscalationReceiver:
"""Temporary receiver registered in node_registry for subagent escalation routing.
When a subagent calls ``report_to_parent(wait_for_response=True)``, the callback
creates one of these, registers it under a unique escalation ID in the executor's
``node_registry``, and awaits ``wait()``. The TUI / runner calls
``inject_input(escalation_id, content)`` which the ``ExecutionStream`` routes here
via ``inject_event()`` matching the same ``hasattr(node, "inject_event")`` check
used for regular ``EventLoopNode`` instances.
"""
def __init__(self) -> None:
self._event = asyncio.Event()
self._response: str | None = None
self._awaiting_input = True # So inject_worker_message() can prefer us
async def inject_event(
self,
content: str,
*,
is_client_input: bool = False,
image_content: list[dict[str, Any]] | None = None,
) -> None:
"""Called by ExecutionStream.inject_input() when the user responds."""
self._response = content
self._event.set()
async def wait(self) -> str | None:
"""Block until inject_event() delivers the user's response."""
await self._event.wait()
return self._response
async def execute_subagent(
ctx: NodeContext,
agent_id: str,
task: str,
*,
config: LoopConfig,
event_loop_node_cls: type[EventLoopNode],
escalation_receiver_cls: type[EscalationReceiver],
accumulator: OutputAccumulator | None = None,
event_bus: EventBus | None = None,
tool_executor: Callable[[ToolUse], ToolResult | Awaitable[ToolResult]] | None = None,
conversation_store: ConversationStore | None = None,
subagent_instance_counter: dict[str, int] | None = None,
) -> ToolResult:
"""Execute a subagent and return the result as a ToolResult.
The subagent:
- Gets a fresh conversation with just the task
- Has read-only access to the parent's readable memory
- Cannot delegate to its own subagents (prevents recursion)
- Returns its output in structured JSON format
Args:
ctx: Parent node's context (for memory, tools, LLM access).
agent_id: The node ID of the subagent to invoke.
task: The task description to give the subagent.
accumulator: Parent's OutputAccumulator.
event_bus: EventBus for lifecycle events.
config: LoopConfig for iteration/tool limits.
tool_executor: Tool executor callable.
conversation_store: Parent conversation store (for deriving subagent store).
subagent_instance_counter: Mutable counter dict for unique subagent paths.
Returns:
ToolResult with structured JSON output.
"""
# Log subagent invocation start
logger.info(
"\n" + "=" * 60 + "\n"
"🤖 SUBAGENT INVOCATION\n"
"=" * 60 + "\n"
"Parent Node: %s\n"
"Subagent ID: %s\n"
"Task: %s\n" + "=" * 60,
ctx.node_id,
agent_id,
task[:500] + "..." if len(task) > 500 else task,
)
# 1. Validate agent exists in registry
if agent_id not in ctx.node_registry:
return ToolResult(
tool_use_id="",
content=json.dumps(
{
"message": f"Sub-agent '{agent_id}' not found in registry",
"data": None,
"metadata": {"agent_id": agent_id, "success": False, "error": "not_found"},
}
),
is_error=True,
)
subagent_spec = ctx.node_registry[agent_id]
# 2. Create read-only memory snapshot
parent_data = ctx.memory.read_all()
# Merge in-flight outputs from the parent's accumulator.
if accumulator:
for key, value in accumulator.to_dict().items():
if key not in parent_data:
parent_data[key] = value
subagent_memory = SharedMemory()
for key, value in parent_data.items():
subagent_memory.write(key, value, validate=False)
read_keys = set(parent_data.keys()) | set(subagent_spec.input_keys or [])
scoped_memory = subagent_memory.with_permissions(
read_keys=list(read_keys),
write_keys=[], # Read-only!
)
# 2b. Compute instance counter early so the callback and child context
# share the same stable node_id for this subagent invocation.
if subagent_instance_counter is not None:
subagent_instance_counter.setdefault(agent_id, 0)
subagent_instance_counter[agent_id] += 1
subagent_instance = str(subagent_instance_counter[agent_id])
else:
subagent_instance = "1"
if subagent_instance == "1":
sa_node_id = f"{ctx.node_id}:subagent:{agent_id}"
else:
sa_node_id = f"{ctx.node_id}:subagent:{agent_id}:{subagent_instance}"
# 2c. Set up report callback (one-way channel to parent / event bus)
subagent_reports: list[dict] = []
async def _report_callback(
message: str,
data: dict | None = None,
*,
wait_for_response: bool = False,
) -> str | None:
subagent_reports.append({"message": message, "data": data, "timestamp": time.time()})
if event_bus:
await event_bus.emit_subagent_report(
stream_id=ctx.node_id,
node_id=sa_node_id,
subagent_id=agent_id,
message=message,
data=data,
execution_id=ctx.execution_id,
)
if not wait_for_response:
return None
if not event_bus:
logger.warning(
"Subagent '%s' requested user response but no event_bus available",
agent_id,
)
return None
# Create isolated receiver and register for input routing
import uuid
escalation_id = f"{ctx.node_id}:escalation:{uuid.uuid4().hex[:8]}"
receiver = escalation_receiver_cls()
registry = ctx.shared_node_registry
registry[escalation_id] = receiver
try:
await event_bus.emit_escalation_requested(
stream_id=ctx.stream_id or ctx.node_id,
node_id=escalation_id,
reason=f"Subagent report (wait_for_response) from {agent_id}",
context=message,
execution_id=ctx.execution_id,
)
# Block until queen responds
return await receiver.wait()
finally:
registry.pop(escalation_id, None)
# 3. Filter tools for subagent
subagent_tool_names = set(subagent_spec.tools or [])
tool_source = ctx.all_tools if ctx.all_tools else ctx.available_tools
# GCU auto-population: a GCU subagent with no explicit tool list
# gets every tool in the catalog except delegation.
if subagent_spec.node_type == "gcu" and not subagent_tool_names:
subagent_tools = [t for t in tool_source if t.name != "delegate_to_sub_agent"]
else:
subagent_tools = [
t
for t in tool_source
if t.name in subagent_tool_names and t.name != "delegate_to_sub_agent"
]
missing = subagent_tool_names - {t.name for t in subagent_tools}
if missing:
logger.warning(
"Subagent '%s' requested tools not found in catalog: %s",
agent_id,
sorted(missing),
)
logger.info(
"📦 Subagent '%s' configuration:\n"
" - System prompt: %s\n"
" - Tools available (%d): %s\n"
" - Memory keys inherited: %s",
agent_id,
(subagent_spec.system_prompt[:200] + "...")
if subagent_spec.system_prompt and len(subagent_spec.system_prompt) > 200
else subagent_spec.system_prompt,
len(subagent_tools),
[t.name for t in subagent_tools],
list(parent_data.keys()),
)
# 4. Build subagent context
max_iter = min(config.max_iterations, 10)
subagent_ctx = NodeContext(
runtime=ctx.runtime,
node_id=sa_node_id,
node_spec=subagent_spec,
memory=scoped_memory,
input_data={"task": task, **parent_data},
llm=ctx.llm,
available_tools=subagent_tools,
goal_context=(
f"Your specific task: {task}\n\n"
f"COMPLETION REQUIREMENTS:\n"
f"When your task is done, you MUST call set_output() "
f"for each required key: {subagent_spec.output_keys}\n"
f"Alternatively, call report_to_parent(mark_complete=true) "
f"with your findings in message/data.\n"
f"You have a maximum of {max_iter} turns to complete this task."
),
goal=ctx.goal,
max_tokens=ctx.max_tokens,
runtime_logger=ctx.runtime_logger,
is_subagent_mode=True, # Prevents nested delegation
report_callback=_report_callback,
node_registry={}, # Empty - no nested subagents
shared_node_registry=ctx.shared_node_registry, # For escalation routing
)
# 5. Create and execute subagent EventLoopNode
subagent_conv_store = None
if conversation_store is not None:
from framework.storage.conversation_store import FileConversationStore
parent_base = getattr(conversation_store, "_base", None)
if parent_base is not None:
conversations_dir = parent_base.parent
subagent_dir_name = f"{agent_id}-{subagent_instance}"
subagent_store_path = conversations_dir / subagent_dir_name
subagent_conv_store = FileConversationStore(base_path=subagent_store_path)
# Derive a subagent-scoped spillover dir
subagent_spillover = None
if config.spillover_dir:
subagent_spillover = str(Path(config.spillover_dir) / agent_id / subagent_instance)
subagent_node = event_loop_node_cls(
event_bus=event_bus,
judge=SubagentJudge(task=task, max_iterations=max_iter),
config=LoopConfig(
max_iterations=max_iter,
max_tool_calls_per_turn=config.max_tool_calls_per_turn,
tool_call_overflow_margin=config.tool_call_overflow_margin,
max_context_tokens=config.max_context_tokens,
stall_detection_threshold=config.stall_detection_threshold,
max_tool_result_chars=config.max_tool_result_chars,
spillover_dir=subagent_spillover,
),
tool_executor=tool_executor,
conversation_store=subagent_conv_store,
)
# Inject a unique GCU browser profile for this subagent
_profile_token = None
try:
from gcu.browser.session import set_active_profile as _set_gcu_profile
_profile_token = _set_gcu_profile(f"{agent_id}-{subagent_instance}")
except ImportError:
pass # GCU tools not installed; no-op
try:
logger.info("🚀 Starting subagent '%s' execution...", agent_id)
start_time = time.time()
result = await subagent_node.execute(subagent_ctx)
latency_ms = int((time.time() - start_time) * 1000)
separator = "-" * 60
logger.info(
"\n%s\n"
"✅ SUBAGENT '%s' COMPLETED\n"
"%s\n"
"Success: %s\n"
"Latency: %dms\n"
"Tokens used: %s\n"
"Output keys: %s\n"
"%s",
separator,
agent_id,
separator,
result.success,
latency_ms,
result.tokens_used,
list(result.output.keys()) if result.output else [],
separator,
)
result_json = {
"message": (
f"Sub-agent '{agent_id}' completed successfully"
if result.success
else f"Sub-agent '{agent_id}' failed: {result.error}"
),
"data": result.output,
"reports": subagent_reports if subagent_reports else None,
"metadata": {
"agent_id": agent_id,
"success": result.success,
"tokens_used": result.tokens_used,
"latency_ms": latency_ms,
"report_count": len(subagent_reports),
},
}
return ToolResult(
tool_use_id="",
content=json.dumps(result_json, indent=2, default=str),
is_error=not result.success,
)
except Exception as e:
logger.exception(
"\n" + "!" * 60 + "\n❌ SUBAGENT '%s' FAILED\nError: %s\n" + "!" * 60,
agent_id,
str(e),
)
result_json = {
"message": f"Sub-agent '{agent_id}' raised exception: {e}",
"data": None,
"metadata": {
"agent_id": agent_id,
"success": False,
"error": str(e),
},
}
return ToolResult(
tool_use_id="",
content=json.dumps(result_json, indent=2),
is_error=True,
)
finally:
# Restore the GCU profile context
if _profile_token is not None:
from gcu.browser.session import _active_profile as _gcu_profile_var
_gcu_profile_var.reset(_profile_token)
# Stop the browser session for this subagent's profile
if tool_executor is not None:
_subagent_profile = f"{agent_id}-{subagent_instance}"
try:
_stop_use = ToolUse(
id="gcu-cleanup",
name="browser_stop",
input={"profile": _subagent_profile},
)
_stop_result = tool_executor(_stop_use)
if asyncio.iscoroutine(_stop_result) or asyncio.isfuture(_stop_result):
await _stop_result
except Exception as _gcu_exc:
logger.warning(
"GCU browser_stop failed for profile %r: %s",
_subagent_profile,
_gcu_exc,
)
@@ -0,0 +1,369 @@
"""Synthetic tool builders for the event loop.
Factory functions that create ``Tool`` definitions for framework-level
synthetic tools (set_output, ask_user, escalate, delegate, report_to_parent).
Also includes the ``handle_set_output`` validation logic.
All functions are pure: they receive explicit parameters and return
``Tool`` or ``ToolResult`` objects with no side effects.
"""
from __future__ import annotations
from typing import Any
from framework.llm.provider import Tool, ToolResult
def build_ask_user_tool() -> Tool:
"""Build the synthetic ask_user tool for explicit user-input requests.
Client-facing nodes call ask_user() when they need to pause and wait
for user input. Text-only turns WITHOUT ask_user flow through without
blocking, allowing progress updates and summaries to stream freely.
"""
return Tool(
name="ask_user",
description=(
"You MUST call this tool whenever you need the user's response. "
"Always call it after greeting the user, asking a question, or "
"requesting approval. Do NOT call it for status updates or "
"summaries that don't require a response. "
"Always include 2-3 predefined options. The UI automatically "
"appends an 'Other' free-text input after your options, so NEVER "
"include catch-all options like 'Custom idea', 'Something else', "
"'Other', or 'None of the above' — the UI handles that. "
"When the question primarily needs a typed answer but you must "
"include options, make one option signal that typing is expected "
"(e.g. 'I\\'ll type my response'). This helps users discover the "
"free-text input. "
"The ONLY exception: omit options when the question demands a "
"free-form answer the user must type out (e.g. 'Describe your "
"agent idea', 'Paste the error message'). "
'{"question": "What would you like to do?", "options": '
'["Build a new agent", "Modify existing agent", "Run tests"]} '
"Free-form example: "
'{"question": "Describe the agent you want to build."}'
),
parameters={
"type": "object",
"properties": {
"question": {
"type": "string",
"description": "The question or prompt shown to the user.",
},
"options": {
"type": "array",
"items": {"type": "string"},
"description": (
"2-3 specific predefined choices. Include in most cases. "
'Example: ["Option A", "Option B", "Option C"]. '
"The UI always appends an 'Other' free-text input, so "
"do NOT include catch-alls like 'Custom idea' or 'Other'. "
"Omit ONLY when the user must type a free-form answer."
),
"minItems": 2,
"maxItems": 3,
},
},
"required": ["question"],
},
)
def build_ask_user_multiple_tool() -> Tool:
"""Build the synthetic ask_user_multiple tool for batched questions.
Queen-only tool that presents multiple questions at once so the user
can answer them all in a single interaction rather than one at a time.
"""
return Tool(
name="ask_user_multiple",
description=(
"Ask the user multiple questions at once. Use this instead of "
"ask_user when you have 2 or more questions to ask in the same "
"turn — it lets the user answer everything in one go rather than "
"going back and forth. Each question can have its own predefined "
"options (2-3 choices) or be free-form. The UI renders all "
"questions together with a single Submit button. "
"ALWAYS prefer this over ask_user when you have multiple things "
"to clarify. "
"IMPORTANT: Do NOT repeat the questions in your text response — "
"the widget renders them. Keep your text to a brief intro only. "
'{"questions": ['
' {"id": "scope", "prompt": "What scope?", "options": ["Full", "Partial"]},'
' {"id": "format", "prompt": "Output format?", "options": ["PDF", "CSV", "JSON"]},'
' {"id": "details", "prompt": "Any special requirements?"}'
"]}"
),
parameters={
"type": "object",
"properties": {
"questions": {
"type": "array",
"items": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": (
"Short identifier for this question (used in the response)."
),
},
"prompt": {
"type": "string",
"description": "The question text shown to the user.",
},
"options": {
"type": "array",
"items": {"type": "string"},
"description": (
"2-3 predefined choices. The UI appends an "
"'Other' free-text input automatically. "
"Omit only when the user must type a free-form answer."
),
"minItems": 2,
"maxItems": 3,
},
},
"required": ["id", "prompt"],
},
"minItems": 2,
"maxItems": 8,
"description": "List of questions to present to the user.",
},
},
"required": ["questions"],
},
)
def build_set_output_tool(output_keys: list[str] | None) -> Tool | None:
"""Build the synthetic set_output tool for explicit output declaration."""
if not output_keys:
return None
return Tool(
name="set_output",
description=(
"Set an output value for this node. Call once per output key. "
"Use this for brief notes, counts, status, and file references — "
"NOT for large data payloads. When a tool result was saved to a "
"data file, pass the filename as the value "
"(e.g. 'google_sheets_get_values_1.txt') so the next phase can "
"load the full data. Values exceeding ~2000 characters are "
"auto-saved to data files. "
f"Valid keys: {output_keys}"
),
parameters={
"type": "object",
"properties": {
"key": {
"type": "string",
"description": f"Output key. Must be one of: {output_keys}",
"enum": output_keys,
},
"value": {
"type": "string",
"description": (
"The output value — a brief note, count, status, "
"or data filename reference."
),
},
},
"required": ["key", "value"],
},
)
def build_escalate_tool() -> Tool:
"""Build the synthetic escalate tool for worker -> queen handoff."""
return Tool(
name="escalate",
description=(
"Escalate to the queen when requesting user input, "
"blocked by errors, missing "
"credentials, or ambiguous constraints that require supervisor "
"guidance. Include a concise reason and optional context. "
"The node will pause until the queen injects guidance."
),
parameters={
"type": "object",
"properties": {
"reason": {
"type": "string",
"description": (
"Short reason for escalation (e.g. 'Tool repeatedly failing')."
),
},
"context": {
"type": "string",
"description": "Optional diagnostic details for the queen.",
},
},
"required": ["reason"],
},
)
def build_delegate_tool(sub_agents: list[str], node_registry: dict[str, Any]) -> Tool | None:
"""Build the synthetic delegate_to_sub_agent tool for subagent invocation.
Args:
sub_agents: List of node IDs that can be invoked as subagents.
node_registry: Map of node_id -> NodeSpec for looking up subagent descriptions.
Returns:
Tool definition if sub_agents is non-empty, None otherwise.
"""
if not sub_agents:
return None
agent_descriptions = []
for agent_id in sub_agents:
spec = node_registry.get(agent_id)
if spec:
desc = getattr(spec, "description", "(no description)")
agent_descriptions.append(f"- {agent_id}: {desc}")
else:
agent_descriptions.append(f"- {agent_id}: (not found in registry)")
return Tool(
name="delegate_to_sub_agent",
description=(
"Delegate a task to a specialized sub-agent. The sub-agent runs "
"autonomously with read-only access to current memory and returns "
"its result. Use this to parallelize work or leverage specialized capabilities.\n\n"
"Available sub-agents:\n" + "\n".join(agent_descriptions)
),
parameters={
"type": "object",
"properties": {
"agent_id": {
"type": "string",
"description": f"The sub-agent to invoke. Must be one of: {sub_agents}",
"enum": sub_agents,
},
"task": {
"type": "string",
"description": (
"The task description for the sub-agent to execute. "
"Be specific about what you want the sub-agent to do and "
"what information to return."
),
},
},
"required": ["agent_id", "task"],
},
)
def build_report_to_parent_tool() -> Tool:
"""Build the synthetic report_to_parent tool for sub-agent progress reports.
Sub-agents call this to send one-way progress updates, partial findings,
or status reports to the parent node (and external observers via event bus)
without blocking execution.
When ``wait_for_response`` is True, the sub-agent blocks until the parent
relays the user's response — used for escalation (e.g. login pages, CAPTCHAs).
When ``mark_complete`` is True, the sub-agent terminates immediately after
sending the report; no need to call set_output for each output key.
"""
return Tool(
name="report_to_parent",
description=(
"Send a report to the parent agent. By default this is fire-and-forget: "
"the parent receives the report but does not respond. "
"Set wait_for_response=true to BLOCK until the user replies — use this "
"when you need human intervention (e.g. login pages, CAPTCHAs, "
"authentication walls). The user's response is returned as the tool result. "
"Set mark_complete=true to finish your task and terminate immediately "
"after sending the report — use this when your findings are in the "
"message/data fields and you don't need to call set_output."
),
parameters={
"type": "object",
"properties": {
"message": {
"type": "string",
"description": "A human-readable status or progress message.",
},
"data": {
"type": "object",
"description": "Optional structured data to include with the report.",
},
"wait_for_response": {
"type": "boolean",
"description": (
"If true, block execution until the user responds. "
"Use for escalation scenarios requiring human intervention."
),
"default": False,
},
"mark_complete": {
"type": "boolean",
"description": (
"If true, terminate the sub-agent immediately after sending "
"this report. The report message and data are delivered to the "
"parent as the final result. No set_output calls are needed."
),
"default": False,
},
},
"required": ["message"],
},
)
def handle_set_output(
tool_input: dict[str, Any],
output_keys: list[str] | None,
) -> ToolResult:
"""Handle set_output tool call. Returns ToolResult (sync)."""
import logging
import re
logger = logging.getLogger(__name__)
key = tool_input.get("key", "")
value = tool_input.get("value", "")
valid_keys = output_keys or []
# Recover from truncated JSON (max_tokens hit mid-argument).
# The _raw key is set by litellm when json.loads fails.
if not key and "_raw" in tool_input:
raw = tool_input["_raw"]
key_match = re.search(r'"key"\s*:\s*"(\w+)"', raw)
if key_match:
key = key_match.group(1)
val_match = re.search(r'"value"\s*:\s*"', raw)
if val_match:
start = val_match.end()
value = raw[start:].rstrip()
for suffix in ('"}\n', '"}', '"'):
if value.endswith(suffix):
value = value[: -len(suffix)]
break
if key:
logger.warning(
"Recovered set_output args from truncated JSON: key=%s, value_len=%d",
key,
len(value),
)
# Re-inject so the caller sees proper key/value
tool_input["key"] = key
tool_input["value"] = value
if key not in valid_keys:
return ToolResult(
tool_use_id="",
content=f"Invalid output key '{key}'. Valid keys: {valid_keys}",
is_error=True,
)
return ToolResult(
tool_use_id="",
content=f"Output '{key}' set successfully.",
is_error=False,
)
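# Example of the truncated-JSON recovery path (hypothetical _raw payload):
#   handle_set_output(
#       {"_raw": '{"key": "report", "value": "Draft saved to report.md'},
#       ["report"],
#   )
#   -> ToolResult(tool_use_id="", content="Output 'report' set successfully.", is_error=False)
# The regexes recover key="report" and the partial value from the raw string.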
@@ -0,0 +1,503 @@
"""Tool result handling: truncation, spillover, JSON preview, and execution.
Manages tool result size limits, file spillover for large results, and
smart JSON previews. Also includes transient error classification and
the context-window-exceeded error detector.
"""
from __future__ import annotations
import asyncio
import contextvars
import json
import logging
import re
from pathlib import Path
from typing import Any
from framework.llm.provider import ToolResult, ToolUse
from framework.llm.stream_events import ToolCallEvent
logger = logging.getLogger(__name__)
# Pattern for detecting context-window-exceeded errors across LLM providers.
_CONTEXT_TOO_LARGE_RE = re.compile(
r"context.{0,20}(length|window|limit|size)|"
r"too.{0,10}(long|large|many.{0,10}tokens)|"
r"(exceed|exceeds|exceeded).{0,30}(limit|window|context|tokens)|"
r"maximum.{0,20}token|prompt.{0,20}too.{0,10}long",
re.IGNORECASE,
)
def is_context_too_large_error(exc: BaseException) -> bool:
"""Detect whether an exception indicates the LLM input was too large."""
cls = type(exc).__name__
if "ContextWindow" in cls:
return True
return bool(_CONTEXT_TOO_LARGE_RE.search(str(exc)))
def is_transient_error(exc: BaseException) -> bool:
"""Classify whether an exception is transient (retryable) vs permanent.
Transient: network errors, rate limits, server errors, timeouts.
Permanent: auth errors, bad requests, context window exceeded.
"""
try:
from litellm.exceptions import (
APIConnectionError,
BadGatewayError,
InternalServerError,
RateLimitError,
ServiceUnavailableError,
)
transient_types: tuple[type[BaseException], ...] = (
RateLimitError,
APIConnectionError,
InternalServerError,
BadGatewayError,
ServiceUnavailableError,
TimeoutError,
ConnectionError,
OSError,
)
except ImportError:
transient_types = (TimeoutError, ConnectionError, OSError)
if isinstance(exc, transient_types):
return True
# RuntimeError from StreamErrorEvent with "Stream error:" prefix
if isinstance(exc, RuntimeError):
error_str = str(exc).lower()
transient_keywords = [
"rate limit",
"429",
"timeout",
"connection",
"internal server",
"502",
"503",
"504",
"service unavailable",
"bad gateway",
"overloaded",
"failed to parse tool call",
]
return any(kw in error_str for kw in transient_keywords)
return False
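# Minimal retry sketch built on the classifier above. The attempt count and
# backoff are illustrative assumptions, not values used elsewhere in the framework.
async def _retry_if_transient(fn: Any, attempts: int = 3) -> Any:
    for attempt in range(attempts):
        try:
            return await fn()
        except Exception as exc:
            if not is_transient_error(exc) or attempt == attempts - 1:
                raise
            await asyncio.sleep(2.0**attempt)  # simple exponential backoff
    return None  # only reached when attempts < 1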
def extract_json_metadata(parsed: Any, *, _depth: int = 0, _max_depth: int = 3) -> str:
"""Return a concise structural summary of parsed JSON.
Reports key names, value types, and crucially array lengths so
the LLM knows how much data exists beyond the preview.
Returns an empty string for simple scalars.
"""
if _depth >= _max_depth:
if isinstance(parsed, dict):
return f"dict with {len(parsed)} keys"
if isinstance(parsed, list):
return f"list of {len(parsed)} items"
return type(parsed).__name__
if isinstance(parsed, dict):
if not parsed:
return "empty dict"
lines: list[str] = []
indent = " " * (_depth + 1)
for key, value in list(parsed.items())[:20]:
if isinstance(value, list):
line = f'{indent}"{key}": list of {len(value)} items'
if value:
first = value[0]
if isinstance(first, dict):
sample_keys = list(first.keys())[:10]
line += f" (each item: dict with keys {sample_keys})"
elif isinstance(first, list):
line += f" (each item: list of {len(first)} elements)"
lines.append(line)
elif isinstance(value, dict):
child = extract_json_metadata(value, _depth=_depth + 1, _max_depth=_max_depth)
lines.append(f'{indent}"{key}": {child}')
else:
lines.append(f'{indent}"{key}": {type(value).__name__}')
if len(parsed) > 20:
lines.append(f"{indent}... and {len(parsed) - 20} more keys")
return "\n".join(lines)
if isinstance(parsed, list):
if not parsed:
return "empty list"
desc = f"list of {len(parsed)} items"
first = parsed[0]
if isinstance(first, dict):
sample_keys = list(first.keys())[:10]
desc += f" (each item: dict with keys {sample_keys})"
elif isinstance(first, list):
desc += f" (each item: list of {len(first)} elements)"
return desc
return ""
def build_json_preview(parsed: Any, *, max_chars: int = 5000) -> str | None:
"""Build a smart preview of parsed JSON, truncating large arrays.
Shows first 3 + last 1 items of large arrays with explicit count
markers so the LLM cannot mistake the preview for the full dataset.
Returns ``None`` if no truncation was needed (no large arrays).
"""
_LARGE_ARRAY_THRESHOLD = 10
def _truncate_arrays(obj: Any) -> tuple[Any, bool]:
"""Return (truncated_copy, was_truncated)."""
if isinstance(obj, list) and len(obj) > _LARGE_ARRAY_THRESHOLD:
n = len(obj)
head = obj[:3]
tail = obj[-1:]
marker = f"... ({n - 4} more items omitted, {n} total) ..."
return head + [marker] + tail, True
if isinstance(obj, dict):
changed = False
out: dict[str, Any] = {}
for k, v in obj.items():
new_v, did = _truncate_arrays(v)
out[k] = new_v
changed = changed or did
return (out, True) if changed else (obj, False)
return obj, False
preview_obj, was_truncated = _truncate_arrays(parsed)
if not was_truncated:
return None # No large arrays — caller should use raw slicing
try:
result = json.dumps(preview_obj, indent=2, ensure_ascii=False)
except (TypeError, ValueError):
return None
if len(result) > max_chars:
# Even 3+1 items too big — try just 1 item
def _minimal_arrays(obj: Any) -> Any:
if isinstance(obj, list) and len(obj) > _LARGE_ARRAY_THRESHOLD:
n = len(obj)
return obj[:1] + [f"... ({n - 1} more items omitted, {n} total) ..."]
if isinstance(obj, dict):
return {k: _minimal_arrays(v) for k, v in obj.items()}
return obj
preview_obj = _minimal_arrays(parsed)
try:
result = json.dumps(preview_obj, indent=2, ensure_ascii=False)
except (TypeError, ValueError):
return None
if len(result) > max_chars:
result = result[:max_chars] + "..."
return result
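# Sketch of how the two helpers above combine into a compact, truncation-safe
# summary. It mirrors the pattern used in truncate_tool_result below; the
# fallback handling here is illustrative, not the framework's exact behaviour.
def _summarize_json_result(content: str, max_preview_chars: int = 2_000) -> str:
    try:
        parsed = json.loads(content)
    except (json.JSONDecodeError, TypeError, ValueError):
        return content[:max_preview_chars]
    summary = extract_json_metadata(parsed)
    preview = build_json_preview(parsed, max_chars=max_preview_chars)
    if preview is None:  # no large arrays; fall back to raw slicing
        preview = content[:max_preview_chars]
    return f"Data structure:\n{summary}\n\nPreview (small sample only):\n{preview}"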
def truncate_tool_result(
result: ToolResult,
tool_name: str,
*,
max_tool_result_chars: int,
spillover_dir: str | None,
next_spill_filename_fn: Any, # Callable[[str], str]
) -> ToolResult:
"""Persist tool result to file and optionally truncate for context.
When *spillover_dir* is configured, EVERY non-error tool result is
saved to a file (short filename like ``web_search_1.txt``). A
``[Saved to '...']`` annotation is appended so the reference
survives pruning and compaction.
- Small results (<= limit): full content kept + file annotation
- Large results (> limit): preview + file reference
- Errors: pass through unchanged
- load_data results: truncate with pagination hint (no re-spill)
"""
limit = max_tool_result_chars
# Errors always pass through unchanged
if result.is_error:
return result
# load_data reads FROM spilled files — never re-spill (circular).
# Just truncate with a pagination hint if the result is too large.
if tool_name == "load_data":
if limit <= 0 or len(result.content) <= limit:
return result # Small load_data result — pass through as-is
# Large load_data result — truncate with smart preview
PREVIEW_CAP = min(5000, max(limit - 500, limit // 2))
metadata_str = ""
smart_preview: str | None = None
try:
parsed_ld = json.loads(result.content)
metadata_str = extract_json_metadata(parsed_ld)
smart_preview = build_json_preview(parsed_ld, max_chars=PREVIEW_CAP)
except (json.JSONDecodeError, TypeError, ValueError):
pass
if smart_preview is not None:
preview_block = smart_preview
else:
preview_block = result.content[:PREVIEW_CAP] + "..."
header = (
f"[{tool_name} result: {len(result.content):,} chars — "
f"too large for context. Use offset_bytes/limit_bytes "
f"parameters to read smaller chunks.]"
)
if metadata_str:
header += f"\n\nData structure:\n{metadata_str}"
header += (
"\n\nWARNING: This is an INCOMPLETE preview. Do NOT draw conclusions or counts from it."
)
truncated = f"{header}\n\nPreview (small sample only):\n{preview_block}"
logger.info(
"%s result truncated: %d%d chars (use offset/limit to paginate)",
tool_name,
len(result.content),
len(truncated),
)
return ToolResult(
tool_use_id=result.tool_use_id,
content=truncated,
is_error=False,
image_content=result.image_content,
is_skill_content=result.is_skill_content,
)
spill_dir = spillover_dir
if spill_dir:
spill_path = Path(spill_dir)
spill_path.mkdir(parents=True, exist_ok=True)
filename = next_spill_filename_fn(tool_name)
# Pretty-print JSON content so load_data's line-based
# pagination works correctly.
write_content = result.content
parsed_json: Any = None # track for metadata extraction
try:
parsed_json = json.loads(result.content)
write_content = json.dumps(parsed_json, indent=2, ensure_ascii=False)
except (json.JSONDecodeError, TypeError, ValueError):
pass # Not JSON — write as-is
(spill_path / filename).write_text(write_content, encoding="utf-8")
if limit > 0 and len(result.content) > limit:
# Large result: build a small, metadata-rich preview so the
# LLM cannot mistake it for the complete dataset.
PREVIEW_CAP = 5000
# Extract structural metadata (array lengths, key names)
metadata_str = ""
smart_preview: str | None = None
if parsed_json is not None:
metadata_str = extract_json_metadata(parsed_json)
smart_preview = build_json_preview(parsed_json, max_chars=PREVIEW_CAP)
if smart_preview is not None:
preview_block = smart_preview
else:
preview_block = result.content[:PREVIEW_CAP] + "..."
# Assemble header with structural info + warning
header = (
f"[Result from {tool_name}: {len(result.content):,} chars — "
f"too large for context, saved to '{filename}'.]\n"
)
if metadata_str:
header += f"\nData structure:\n{metadata_str}"
header += (
f"\n\nWARNING: The preview below is INCOMPLETE. "
f"Do NOT draw conclusions or counts from it. "
f"Use load_data(filename='{filename}') to read the "
f"full data before analysis."
)
content = f"{header}\n\nPreview (small sample only):\n{preview_block}"
logger.info(
"Tool result spilled to file: %s (%d chars → %s)",
tool_name,
len(result.content),
filename,
)
else:
# Small result: keep full content + annotation
content = f"{result.content}\n\n[Saved to '{filename}']"
logger.info(
"Tool result saved to file: %s (%d chars → %s)",
tool_name,
len(result.content),
filename,
)
return ToolResult(
tool_use_id=result.tool_use_id,
content=content,
is_error=False,
image_content=result.image_content,
is_skill_content=result.is_skill_content,
)
# No spillover_dir — truncate in-place if needed
if limit > 0 and len(result.content) > limit:
PREVIEW_CAP = min(5000, max(limit - 500, limit // 2))
metadata_str = ""
smart_preview: str | None = None
try:
parsed_inline = json.loads(result.content)
metadata_str = extract_json_metadata(parsed_inline)
smart_preview = build_json_preview(parsed_inline, max_chars=PREVIEW_CAP)
except (json.JSONDecodeError, TypeError, ValueError):
pass
if smart_preview is not None:
preview_block = smart_preview
else:
preview_block = result.content[:PREVIEW_CAP] + "..."
header = (
f"[Result from {tool_name}: {len(result.content):,} chars — "
f"truncated to fit context budget.]"
)
if metadata_str:
header += f"\n\nData structure:\n{metadata_str}"
header += (
"\n\nWARNING: This is an INCOMPLETE preview. "
"Do NOT draw conclusions or counts from the preview alone."
)
truncated = f"{header}\n\n{preview_block}"
logger.info(
"Tool result truncated in-place: %s (%d%d chars)",
tool_name,
len(result.content),
len(truncated),
)
return ToolResult(
tool_use_id=result.tool_use_id,
content=truncated,
is_error=False,
image_content=result.image_content,
is_skill_content=result.is_skill_content,
)
return result
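# Usage sketch tying truncate_tool_result to the spill-filename helpers defined
# below. The directory, limit, and counter handling are illustrative assumptions;
# the real caller keeps the counter as loop state.
def _make_spill_namer(start: int = 0) -> Any:
    state = {"n": start}

    def _next(tool_name: str) -> str:
        state["n"] += 1
        return next_spill_filename(tool_name, state["n"])

    return _next


# namer = _make_spill_namer(restore_spill_counter("/tmp/run1/data"))
# trimmed = truncate_tool_result(
#     result,
#     "web_search",
#     max_tool_result_chars=30_000,
#     spillover_dir="/tmp/run1/data",
#     next_spill_filename_fn=namer,
# )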
async def execute_tool(
tool_executor: Any, # Callable[[ToolUse], ToolResult | Awaitable[ToolResult]] | None
tc: ToolCallEvent,
timeout: float,
skill_dirs: list[str] | None = None,
) -> ToolResult:
"""Execute a tool call, handling both sync and async executors.
Applies ``tool_call_timeout_seconds`` to prevent hung MCP servers
from blocking the event loop indefinitely. The initial executor
call is offloaded to a thread pool so that sync executors don't
freeze the event loop.
"""
if tool_executor is None:
return ToolResult(
tool_use_id=tc.tool_use_id,
content=f"No tool executor configured for '{tc.tool_name}'",
is_error=True,
)
skill_dirs = skill_dirs or []
skill_read_tools = {"view_file", "load_data", "read_file"}
if tc.tool_name in skill_read_tools and skill_dirs:
raw_path = tc.tool_input.get("path", "")
if raw_path:
resolved = Path(raw_path).resolve(strict=False)
resolved_roots = [Path(skill_dir).resolve(strict=False) for skill_dir in skill_dirs]
if any(resolved.is_relative_to(root) for root in resolved_roots):
try:
content = resolved.read_text(encoding="utf-8")
except Exception as exc:
return ToolResult(
tool_use_id=tc.tool_use_id,
content=f"Could not read skill resource '{raw_path}': {exc}",
is_error=True,
)
return ToolResult(
tool_use_id=tc.tool_use_id,
content=content,
is_skill_content=resolved.name == "SKILL.md",
)
tool_use = ToolUse(id=tc.tool_use_id, name=tc.tool_name, input=tc.tool_input)
async def _run() -> ToolResult:
# Offload the executor call to a thread. Sync MCP executors
# block on future.result() — running in a thread keeps the
# event loop free so asyncio.wait_for can fire the timeout.
# Copy the current context so contextvars (e.g. data_dir from
# execution context) propagate into the worker thread.
loop = asyncio.get_running_loop()
ctx = contextvars.copy_context()
result = await loop.run_in_executor(None, ctx.run, tool_executor, tool_use)
# Async executors return a coroutine — await it on the loop
if asyncio.iscoroutine(result) or asyncio.isfuture(result):
result = await result
return result
try:
if timeout > 0:
result = await asyncio.wait_for(_run(), timeout=timeout)
else:
result = await _run()
except TimeoutError:
logger.warning("Tool '%s' timed out after %.0fs", tc.tool_name, timeout)
return ToolResult(
tool_use_id=tc.tool_use_id,
content=(
f"Tool '{tc.tool_name}' timed out after {timeout:.0f}s. "
"The operation took too long and was cancelled. "
"Try a simpler request or a different approach."
),
is_error=True,
)
return result
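# Minimal sync-executor sketch for execute_tool. The echo behaviour and the
# 30-second timeout are assumptions for illustration only.
def _echo_executor(tool_use: ToolUse) -> ToolResult:
    return ToolResult(
        tool_use_id=tool_use.id,
        content=f"echo: {tool_use.input}",
        is_error=False,
    )


# result = await execute_tool(_echo_executor, tool_call_event, timeout=30.0)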
def next_spill_filename(tool_name: str, counter: int) -> str:
"""Return a short, monotonic filename for a tool result spill."""
# Shorten common tool name prefixes to save tokens
short = tool_name.removeprefix("tool_").removeprefix("mcp_")
return f"{short}_{counter}.txt"
def restore_spill_counter(spillover_dir: str | None) -> int:
"""Scan spillover_dir for existing spill files and return the max counter.
Returns the highest spill number found (or 0 if none).
"""
if not spillover_dir:
return 0
spill_path = Path(spillover_dir)
if not spill_path.is_dir():
return 0
max_n = 0
for f in spill_path.iterdir():
if not f.is_file():
continue
m = re.search(r"_(\d+)\.txt$", f.name)
if m:
max_n = max(max_n, int(m.group(1)))
return max_n
+190
View File
@@ -0,0 +1,190 @@
"""Shared types and state containers for the event loop package."""
from __future__ import annotations
import json
import logging
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Literal, Protocol, runtime_checkable
from framework.graph.conversation import ConversationStore
logger = logging.getLogger(__name__)
@dataclass
class TriggerEvent:
"""A framework-level trigger signal (timer tick or webhook hit)."""
trigger_type: str
source_id: str
payload: dict[str, Any] = field(default_factory=dict)
timestamp: float = field(default_factory=time.time)
@dataclass
class JudgeVerdict:
"""Result of judge evaluation for the event loop."""
action: Literal["ACCEPT", "RETRY", "ESCALATE"]
# None = no evaluation happened (skip_judge, tool-continue); not logged.
# "" = evaluated but no feedback; logged with default text.
# "..." = evaluated with feedback; logged as-is.
feedback: str | None = None
@runtime_checkable
class JudgeProtocol(Protocol):
"""Protocol for event-loop judges."""
async def evaluate(self, context: dict[str, Any]) -> JudgeVerdict: ...
@dataclass
class LoopConfig:
"""Configuration for the event loop."""
max_iterations: int = 50
max_tool_calls_per_turn: int = 30
judge_every_n_turns: int = 1
stall_detection_threshold: int = 3
stall_similarity_threshold: float = 0.85
max_context_tokens: int = 32_000
store_prefix: str = ""
# Overflow margin for max_tool_calls_per_turn. Tool calls are only
# discarded when the count exceeds max_tool_calls_per_turn * (1 + margin).
tool_call_overflow_margin: float = 0.5
# Tool result context management.
max_tool_result_chars: int = 30_000
spillover_dir: str | None = None
# set_output value spilling.
max_output_value_chars: int = 2_000
# Stream retry.
max_stream_retries: int = 3
stream_retry_backoff_base: float = 2.0
stream_retry_max_delay: float = 60.0
# Tool doom loop detection.
tool_doom_loop_threshold: int = 3
# Client-facing auto-block grace period.
cf_grace_turns: int = 1
tool_doom_loop_enabled: bool = True
# Per-tool-call timeout.
tool_call_timeout_seconds: float = 60.0
# Subagent delegation timeout.
subagent_timeout_seconds: float = 600.0
# Lifecycle hooks.
hooks: dict[str, list] | None = None
def __post_init__(self) -> None:
if self.hooks is None:
object.__setattr__(self, "hooks", {})
@dataclass
class HookContext:
"""Context passed to every lifecycle hook."""
event: str
trigger: str | None
system_prompt: str
@dataclass
class HookResult:
"""What a hook may return to modify node state."""
system_prompt: str | None = None
inject: str | None = None
@dataclass
class OutputAccumulator:
"""Accumulates output key-value pairs with optional write-through persistence."""
values: dict[str, Any] = field(default_factory=dict)
store: ConversationStore | None = None
spillover_dir: str | None = None
max_value_chars: int = 0
async def set(self, key: str, value: Any) -> None:
"""Set a key-value pair, auto-spilling large values to files."""
value = self._auto_spill(key, value)
self.values[key] = value
if self.store:
cursor = await self.store.read_cursor() or {}
outputs = cursor.get("outputs", {})
outputs[key] = value
cursor["outputs"] = outputs
await self.store.write_cursor(cursor)
def _auto_spill(self, key: str, value: Any) -> Any:
"""Save large values to a file and return a reference string."""
if self.max_value_chars <= 0 or not self.spillover_dir:
return value
val_str = json.dumps(value, ensure_ascii=False) if not isinstance(value, str) else value
if len(val_str) <= self.max_value_chars:
return value
spill_path = Path(self.spillover_dir)
spill_path.mkdir(parents=True, exist_ok=True)
ext = ".json" if isinstance(value, (dict, list)) else ".txt"
filename = f"output_{key}{ext}"
write_content = (
json.dumps(value, indent=2, ensure_ascii=False)
if isinstance(value, (dict, list))
else str(value)
)
(spill_path / filename).write_text(write_content, encoding="utf-8")
file_size = (spill_path / filename).stat().st_size
logger.info(
"set_output value auto-spilled: key=%s, %d chars -> %s (%d bytes)",
key,
len(val_str),
filename,
file_size,
)
return (
f"[Saved to '{filename}' ({file_size:,} bytes). "
f"Use load_data(filename='{filename}') "
f"to access full data.]"
)
def get(self, key: str) -> Any | None:
return self.values.get(key)
def to_dict(self) -> dict[str, Any]:
return dict(self.values)
def has_all_keys(self, required: list[str]) -> bool:
return all(key in self.values and self.values[key] is not None for key in required)
@classmethod
async def restore(cls, store: ConversationStore) -> OutputAccumulator:
cursor = await store.read_cursor()
values = {}
if cursor and "outputs" in cursor:
values = cursor["outputs"]
return cls(values=values, store=store)
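# Usage sketch for OutputAccumulator. The spill directory and values are
# assumptions; in the framework they come from LoopConfig and set_output calls.
async def _example_output_accumulator(spill_dir: str) -> None:
    acc = OutputAccumulator(spillover_dir=spill_dir, max_value_chars=2_000)
    await acc.set("summary", "done")  # small value: stored inline
    await acc.set("rows", [{"id": i} for i in range(500)])  # large: spilled to output_rows.json
    assert acc.has_all_keys(["summary", "rows"])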
__all__ = [
"HookContext",
"HookResult",
"JudgeProtocol",
"JudgeVerdict",
"LoopConfig",
"OutputAccumulator",
"TriggerEvent",
]
File diff suppressed because it is too large Load Diff
+186 -44
View File
@@ -27,11 +27,24 @@ from framework.graph.node import (
SharedMemory,
)
from framework.graph.validator import OutputValidator
from framework.llm.provider import LLMProvider, Tool
from framework.llm.provider import LLMProvider, Tool, ToolUse
from framework.observability import set_trace_context
from framework.runtime.core import Runtime
from framework.schemas.checkpoint import Checkpoint
from framework.storage.checkpoint_store import CheckpointStore
from framework.utils.io import atomic_write
logger = logging.getLogger(__name__)
def _default_max_context_tokens() -> int:
"""Resolve max_context_tokens from global config, falling back to 32000."""
try:
from framework.config import get_max_context_tokens
return get_max_context_tokens()
except Exception:
return 32_000
@dataclass
@@ -138,6 +151,12 @@ class GraphExecutor:
tool_provider_map: dict[str, str] | None = None,
dynamic_tools_provider: Callable | None = None,
dynamic_prompt_provider: Callable | None = None,
iteration_metadata_provider: Callable | None = None,
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
skill_dirs: list[str] | None = None,
context_warn_ratio: float | None = None,
batch_init_nudge: str | None = None,
):
"""
Initialize the executor.
@@ -163,6 +182,11 @@ class GraphExecutor:
tool list (for mode switching)
dynamic_prompt_provider: Optional callback returning current
system prompt (for phase switching)
skills_catalog_prompt: Available skills catalog for system prompt
protocols_prompt: Default skill operational protocols for system prompt
skill_dirs: Skill base directories for Tier 3 resource access
context_warn_ratio: Token usage ratio to trigger DS-13 preservation warning
batch_init_nudge: System prompt nudge for DS-12 batch auto-detection
"""
self.runtime = runtime
self.llm = llm
@@ -183,6 +207,24 @@ class GraphExecutor:
self.tool_provider_map = tool_provider_map
self.dynamic_tools_provider = dynamic_tools_provider
self.dynamic_prompt_provider = dynamic_prompt_provider
self.iteration_metadata_provider = iteration_metadata_provider
self.skills_catalog_prompt = skills_catalog_prompt
self.protocols_prompt = protocols_prompt
self.skill_dirs: list[str] = skill_dirs or []
self.context_warn_ratio: float | None = context_warn_ratio
self.batch_init_nudge: str | None = batch_init_nudge
if protocols_prompt:
self.logger.info(
"GraphExecutor[%s] received protocols_prompt (%d chars)",
stream_id,
len(protocols_prompt),
)
else:
self.logger.warning(
"GraphExecutor[%s] received EMPTY protocols_prompt",
stream_id,
)
# Parallel execution settings
self.enable_parallel_execution = enable_parallel_execution
@@ -212,11 +254,11 @@ class GraphExecutor:
"""
if not self._storage_path:
return
state_path = self._storage_path / "state.json"
try:
import json as _json
from datetime import datetime
state_path = self._storage_path / "state.json"
if state_path.exists():
state_data = _json.loads(state_path.read_text(encoding="utf-8"))
else:
@@ -239,9 +281,14 @@ class GraphExecutor:
state_data["memory"] = memory_snapshot
state_data["memory_keys"] = list(memory_snapshot.keys())
state_path.write_text(_json.dumps(state_data, indent=2), encoding="utf-8")
with atomic_write(state_path, encoding="utf-8") as f:
_json.dump(state_data, f, indent=2)
except Exception:
pass # Best-effort — never block execution
logger.warning(
"Failed to persist progress state to %s",
state_path,
exc_info=True,
)
def _validate_tools(self, graph: GraphSpec) -> list[str]:
"""
@@ -330,7 +377,7 @@ class GraphExecutor:
_depth,
)
else:
max_tokens = getattr(conversation, "_max_history_tokens", 32000)
max_tokens = getattr(conversation, "_max_context_tokens", 32000)
target_tokens = max_tokens // 2
target_chars = target_tokens * 4
@@ -403,6 +450,14 @@ class GraphExecutor:
)
return s1 + "\n\n" + s2
def _get_runtime_log_session_id(self) -> str:
"""Return the session-backed execution ID for runtime logging, if any."""
if not self._storage_path:
return ""
if self._storage_path.parent.name != "sessions":
return ""
return self._storage_path.name
async def execute(
self,
graph: GraphSpec,
@@ -696,10 +751,7 @@ class GraphExecutor:
)
if self.runtime_logger:
# Extract session_id from storage_path if available (for unified sessions)
session_id = ""
if self._storage_path and self._storage_path.name.startswith("session_"):
session_id = self._storage_path.name
session_id = self._get_runtime_log_session_id()
self.runtime_logger.start_run(goal_id=goal.id, session_id=session_id)
self.logger.info(f"🚀 Starting execution: {goal.name}")
@@ -925,6 +977,33 @@ class GraphExecutor:
self.logger.info(" Executing...")
result = await node_impl.execute(ctx)
# GCU tab cleanup: stop the browser profile after a top-level GCU node
# finishes so tabs don't accumulate. Mirrors the subagent cleanup in
# EventLoopNode._execute_subagent().
if node_spec.node_type == "gcu" and self.tool_executor is not None:
try:
from gcu.browser.session import (
_active_profile as _gcu_profile_var,
)
_gcu_profile = _gcu_profile_var.get()
_stop_use = ToolUse(
id="gcu-cleanup",
name="browser_stop",
input={"profile": _gcu_profile},
)
_stop_result = self.tool_executor(_stop_use)
if asyncio.iscoroutine(_stop_result) or asyncio.isfuture(_stop_result):
await _stop_result
except ImportError:
pass # GCU not installed
except Exception as _gcu_exc:
logger.warning(
"GCU browser_stop failed for profile %r: %s",
_gcu_profile,
_gcu_exc,
)
# Emit node-completed event (skip event_loop nodes)
if self._event_bus and node_spec.node_type != "event_loop":
await self._event_bus.emit_node_loop_completed(
@@ -1350,6 +1429,7 @@ class GraphExecutor:
next_spec = graph.get_node(current_node_id)
if next_spec and next_spec.node_type == "event_loop":
from framework.graph.prompt_composer import (
EXECUTION_SCOPE_PREAMBLE,
build_accounts_prompt,
build_narrative,
build_transition_marker,
@@ -1359,23 +1439,6 @@ class GraphExecutor:
# Build Layer 2 (narrative) from current state
narrative = build_narrative(memory, path, graph)
# Read agent working memory (adapt.md) once for both
# system prompt and transition marker.
_adapt_text: str | None = None
if self._storage_path:
_adapt_path = self._storage_path / "data" / "adapt.md"
if _adapt_path.exists():
_raw = _adapt_path.read_text(encoding="utf-8").strip()
_adapt_text = _raw or None
# Merge adapt.md into narrative for system prompt
if _adapt_text:
narrative = (
f"{narrative}\n\n--- Agent Memory ---\n{_adapt_text}"
if narrative
else _adapt_text
)
# Build per-node accounts prompt for the next node
_node_accounts = self.accounts_prompt or None
if self.accounts_data and self.tool_provider_map:
@@ -1389,9 +1452,14 @@ class GraphExecutor:
)
# Compose new system prompt (Layer 1 + 2 + 3 + accounts)
# Prepend scope preamble to focus so the LLM stays
# within this node's responsibility.
_focus = next_spec.system_prompt
if next_spec.output_keys and _focus:
_focus = f"{EXECUTION_SCOPE_PREAMBLE}\n\n{_focus}"
new_system = compose_system_prompt(
identity_prompt=getattr(graph, "identity_prompt", None),
focus_prompt=next_spec.system_prompt,
focus_prompt=_focus,
narrative=narrative,
accounts_prompt=_node_accounts,
)
@@ -1405,7 +1473,6 @@ class GraphExecutor:
memory=memory,
cumulative_tool_names=sorted(cumulative_tool_names),
data_dir=data_dir,
adapt_content=_adapt_text,
)
await continuous_conversation.add_user_message(
marker,
@@ -1604,7 +1671,7 @@ class GraphExecutor:
# Return with paused status
return ExecutionResult(
success=False,
error="Execution paused by user",
error="Execution cancelled",
output=saved_memory,
steps_executed=steps,
total_tokens=total_tokens,
@@ -1753,10 +1820,34 @@ class GraphExecutor:
if node_spec.tools:
available_tools = [t for t in self.tools if t.name in node_spec.tools]
# Create scoped memory view
# Create scoped memory view.
# When permissions are restricted (non-empty key lists), auto-include
# _-prefixed keys used by default skill protocols so agents can read/write
# operational state (e.g. _working_notes, _batch_ledger) regardless of
# what the node declares. When key lists are empty (unrestricted), leave
# unchanged — empty means "allow all".
read_keys = list(node_spec.input_keys)
write_keys = list(node_spec.output_keys)
# Only extend lists that were already restricted (non-empty).
# Empty means "allow all" — adding keys would accidentally
# activate the permission check and block legitimate reads/writes.
if read_keys or write_keys:
from framework.skills.defaults import SHARED_MEMORY_KEYS as _skill_keys
existing_underscore = [k for k in memory._data if k.startswith("_")]
extra_keys = set(_skill_keys) | set(existing_underscore)
# Only inject into read_keys when it was already non-empty — an empty
# read_keys means "allow all reads" and injecting skill keys would
# inadvertently restrict reads to skill keys only.
for k in extra_keys:
if read_keys and k not in read_keys:
read_keys.append(k)
if write_keys and k not in write_keys:
write_keys.append(k)
scoped_memory = memory.with_permissions(
read_keys=node_spec.input_keys,
write_keys=node_spec.output_keys,
read_keys=read_keys,
write_keys=write_keys,
)
# Build per-node accounts prompt (filtered to this node's tools)
@@ -1799,6 +1890,12 @@ class GraphExecutor:
shared_node_registry=self.node_registry, # For subagent escalation routing
dynamic_tools_provider=self.dynamic_tools_provider,
dynamic_prompt_provider=self.dynamic_prompt_provider,
iteration_metadata_provider=self.iteration_metadata_provider,
skills_catalog_prompt=self.skills_catalog_prompt,
protocols_prompt=self.protocols_prompt,
skill_dirs=self.skill_dirs,
default_skill_warn_ratio=self.context_warn_ratio,
default_skill_batch_nudge=self.batch_init_nudge,
)
VALID_NODE_TYPES = {
@@ -1872,7 +1969,7 @@ class GraphExecutor:
max_tool_calls_per_turn=lc.get("max_tool_calls_per_turn", 30),
tool_call_overflow_margin=lc.get("tool_call_overflow_margin", 0.5),
stall_detection_threshold=lc.get("stall_detection_threshold", 3),
max_history_tokens=lc.get("max_history_tokens", 32000),
max_context_tokens=lc.get("max_context_tokens", _default_max_context_tokens()),
max_tool_result_chars=lc.get("max_tool_result_chars", 30_000),
spillover_dir=spillover,
hooks=lc.get("hooks", {}),
@@ -2039,6 +2136,10 @@ class GraphExecutor:
edge=edge,
)
# Track which branch wrote which key for memory conflict detection
fanout_written_keys: dict[str, str] = {} # key -> branch_id that wrote it
fanout_keys_lock = asyncio.Lock()
self.logger.info(f" ⑂ Fan-out: executing {len(branches)} branches in parallel")
for branch in branches.values():
target_spec = graph.get_node(branch.node_id)
@@ -2130,8 +2231,31 @@ class GraphExecutor:
)
if result.success:
# Write outputs to shared memory using async write
# Write outputs to shared memory with conflict detection
conflict_strategy = self._parallel_config.memory_conflict_strategy
for key, value in result.output.items():
async with fanout_keys_lock:
prior_branch = fanout_written_keys.get(key)
if prior_branch and prior_branch != branch.branch_id:
if conflict_strategy == "error":
raise RuntimeError(
f"Memory conflict: key '{key}' already written "
f"by branch '{prior_branch}', "
f"conflicting write from '{branch.branch_id}'"
)
elif conflict_strategy == "first_wins":
self.logger.debug(
f" ⚠ Skipping write to '{key}' "
f"(first_wins: already set by {prior_branch})"
)
continue
else:
# last_wins (default): write and log
self.logger.debug(
f" ⚠ Key '{key}' overwritten "
f"(last_wins: {prior_branch} -> {branch.branch_id})"
)
fanout_written_keys[key] = branch.branch_id
await memory.write_async(key, value)
branch.result = result
@@ -2178,9 +2302,11 @@ class GraphExecutor:
return branch, e
# Execute all branches concurrently
tasks = [execute_single_branch(b) for b in branches.values()]
results = await asyncio.gather(*tasks, return_exceptions=False)
# Execute all branches concurrently with per-branch timeout
timeout = self._parallel_config.branch_timeout_seconds
branch_list = list(branches.values())
tasks = [asyncio.wait_for(execute_single_branch(b), timeout=timeout) for b in branch_list]
results = await asyncio.gather(*tasks, return_exceptions=True)
# Process results
total_tokens = 0
@@ -2188,17 +2314,33 @@ class GraphExecutor:
branch_results: dict[str, NodeResult] = {}
failed_branches: list[ParallelBranch] = []
for branch, result in results:
path.append(branch.node_id)
for i, result in enumerate(results):
branch = branch_list[i]
if isinstance(result, Exception):
if isinstance(result, asyncio.TimeoutError):
# Branch timed out
branch.status = "timed_out"
branch.error = f"Branch timed out after {timeout}s"
self.logger.warning(
f" ⏱ Branch {graph.get_node(branch.node_id).name}: "
f"timed out after {timeout}s"
)
path.append(branch.node_id)
failed_branches.append(branch)
elif result is None or not result.success:
elif isinstance(result, Exception):
path.append(branch.node_id)
failed_branches.append(branch)
else:
total_tokens += result.tokens_used
total_latency += result.latency_ms
branch_results[branch.branch_id] = result
returned_branch, node_result = result
path.append(returned_branch.node_id)
if node_result is None or isinstance(node_result, Exception):
failed_branches.append(returned_branch)
elif not node_result.success:
failed_branches.append(returned_branch)
else:
total_tokens += node_result.tokens_used
total_latency += node_result.latency_ms
branch_results[returned_branch.branch_id] = node_result
# Handle failures based on config
if failed_branches:
+56 -13
View File
@@ -37,24 +37,45 @@ Follow these rules for reliable, efficient browser interaction.
## Reading Pages
- ALWAYS prefer `browser_snapshot` over `browser_get_text("body")`:
it returns a compact ~1-5 KB accessibility tree vs 100+ KB of raw HTML.
- Use `browser_snapshot_aria` when you need full ARIA properties
for detailed element inspection.
- Do NOT use `browser_screenshot` for reading text content:
it produces huge base64 images with no searchable text.
- Interaction tools (`browser_click`, `browser_type`, `browser_fill`,
`browser_scroll`, etc.) return a page snapshot automatically in their
result. Use it to decide your next action; do NOT call
`browser_snapshot` separately after every action.
Only call `browser_snapshot` when you need a fresh view without
performing an action, or after setting `auto_snapshot=false`.
- Do NOT use `browser_screenshot` to read text; use
`browser_snapshot` for that (compact, searchable, fast).
- DO use `browser_screenshot` when you need visual context:
charts, images, canvas elements, layout verification, or when
the snapshot doesn't capture what you need.
- Only fall back to `browser_get_text` for extracting specific
small elements by CSS selector.
## Navigation & Waiting
- Always call `browser_wait` after navigation actions
(`browser_open`, `browser_navigate`, `browser_click` on links)
to let the page load.
- `browser_navigate` and `browser_open` already wait for the page to
load (`domcontentloaded`). Do NOT call `browser_wait` with no
arguments after navigation; it wastes time.
Only use `browser_wait` when you need a *specific element* or *text*
to appear (pass `selector` or `text`).
- NEVER re-navigate to the same URL after scrolling:
this resets your scroll position and loses loaded content.
## Scrolling
- Use large scroll amounts (~2000) when loading more content;
sites like Twitter and LinkedIn have lazy loading for paging.
- After scrolling, take a new `browser_snapshot` to see updated content.
- The scroll result includes a snapshot automatically; no need to call
`browser_snapshot` separately.
## Batching Actions
- You can call multiple tools in a single turn; they execute in parallel.
ALWAYS batch independent actions together. Examples:
- Fill multiple form fields in one turn.
- Navigate + snapshot in one turn.
- Click + scroll if targeting different elements.
- When batching, set `auto_snapshot=false` on all but the last action
to avoid redundant snapshots.
- Aim for 3-5 tool calls per turn minimum. One tool call per turn is
wasteful.
## Error Recovery
- If a tool fails, retry once with the same approach.
@@ -65,11 +86,33 @@ Follow these rules for reliable, efficient browser interaction.
then `browser_start`, then retry.
## Tab Management
- Use `browser_tabs` to list open tabs when managing multiple pages.
- Pass `target_id` to tools when operating on a specific tab.
- Open background tabs with `browser_open(url=..., background=true)`
to avoid losing your current context.
- Close tabs you no longer need with `browser_close` to free resources.
**Close tabs as soon as you are done with them**, not only at the end of the task.
After reading or extracting data from a tab, close it immediately.
**Decision rules:**
- Finished reading/extracting from a tab? → `browser_close(target_id=...)`
- Completed a multi-tab workflow? → `browser_close_finished()` to clean up all your tabs
- More than 3 tabs open? → stop and close finished ones before opening more
- Popup appeared that you didn't need? → close it immediately
**Origin awareness:** `browser_tabs` returns an `origin` field for each tab:
- `"agent"` you opened it; you own it; close it when done
- `"popup"` opened by a link or script; close after extracting what you need
- `"startup"` or `"user"` leave these alone unless the task requires it
**Cleanup tools:**
- `browser_close(target_id=...)`: close one specific tab
- `browser_close_finished()`: close all your agent/popup tabs (safe: leaves startup/user tabs)
- `browser_close_all()`: close everything except the active tab (use only for full reset)
**Multi-tab workflow pattern:**
1. Open background tabs with `browser_open(url=..., background=true)` to stay on current tab
2. Process each tab and close it with `browser_close` when done
3. When the full workflow completes, call `browser_close_finished()` to confirm cleanup
4. Check `browser_tabs` at any point; it shows `origin` and `age_seconds` per tab
Never accumulate tabs. Treat every tab you open as a resource you must free.
## Login & Auth Walls
- If you see a "Log in" or "Sign up" prompt instead of expected
-8
View File
@@ -167,14 +167,6 @@ class Goal(BaseModel):
return met_weight >= total_weight * 0.9 # 90% threshold
def check_constraint(self, constraint_id: str, value: Any) -> bool:
"""Check if a specific constraint is satisfied."""
for c in self.constraints:
if c.id == constraint_id:
# This would be expanded with actual evaluation logic
return True
return True
def to_prompt_context(self) -> str:
"""Generate context string for LLM prompts.
-203
View File
@@ -1,203 +0,0 @@
"""
Standardized HITL (Human-In-The-Loop) Protocol
This module defines the formal structure for pause/resume interactions
where agents need to gather input from humans.
"""
from dataclasses import dataclass, field
from enum import StrEnum
from typing import Any
class HITLInputType(StrEnum):
"""Type of input expected from human."""
FREE_TEXT = "free_text" # Open-ended text response
STRUCTURED = "structured" # Specific fields to fill
SELECTION = "selection" # Choose from options
APPROVAL = "approval" # Yes/no/modify decision
MULTI_FIELD = "multi_field" # Multiple related inputs
@dataclass
class HITLQuestion:
"""A single question to ask the human."""
id: str
question: str
input_type: HITLInputType = HITLInputType.FREE_TEXT
# For SELECTION type
options: list[str] = field(default_factory=list)
# For STRUCTURED type
fields: dict[str, str] = field(default_factory=dict) # {field_name: description}
# Metadata
required: bool = True
help_text: str = ""
@dataclass
class HITLRequest:
"""
Formal request for human input at a pause node.
This is what the agent produces when it needs human input.
"""
# Context
objective: str # What we're trying to accomplish
current_state: str # Where we are in the process
# What we need
questions: list[HITLQuestion] = field(default_factory=list)
missing_info: list[str] = field(default_factory=list)
# Guidance
instructions: str = ""
examples: list[str] = field(default_factory=list)
# Metadata
request_id: str = ""
node_id: str = ""
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
"objective": self.objective,
"current_state": self.current_state,
"questions": [
{
"id": q.id,
"question": q.question,
"input_type": q.input_type.value,
"options": q.options,
"fields": q.fields,
"required": q.required,
"help_text": q.help_text,
}
for q in self.questions
],
"missing_info": self.missing_info,
"instructions": self.instructions,
"examples": self.examples,
"request_id": self.request_id,
"node_id": self.node_id,
}
@dataclass
class HITLResponse:
"""
Human's response to a HITL request.
This is what gets passed back when resuming from a pause.
"""
# Original request reference
request_id: str
# Human's answers
answers: dict[str, Any] = field(default_factory=dict) # {question_id: answer}
raw_input: str = "" # Raw text if provided
# Metadata
response_time_ms: int = 0
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
"request_id": self.request_id,
"answers": self.answers,
"raw_input": self.raw_input,
"response_time_ms": self.response_time_ms,
}
class HITLProtocol:
"""
Standardized protocol for HITL interactions.
Usage in pause nodes:
1. Pause Node: Generates HITLRequest with questions
2. Executor: Saves state and returns request to user
3. User: Provides HITLResponse with answers
4. Resume Node: Processes response and merges into context
"""
@staticmethod
def create_request(
objective: str,
questions: list[HITLQuestion],
missing_info: list[str] | None = None,
node_id: str = "",
) -> HITLRequest:
"""Create a standardized HITL request."""
return HITLRequest(
objective=objective,
current_state="Awaiting clarification",
questions=questions,
missing_info=missing_info or [],
request_id=f"{node_id}_{hash(objective) % 10000}",
node_id=node_id,
)
@staticmethod
def parse_response(
raw_input: str,
request: HITLRequest,
use_haiku: bool = True,
) -> HITLResponse:
"""
Parse human's raw input into structured response.
Maps the raw input to the first question. For multi-question HITL,
the caller should present one question at a time.
"""
response = HITLResponse(request_id=request.request_id, raw_input=raw_input)
# If no questions, just return raw input
if not request.questions:
return response
# Map raw input to first question
response.answers[request.questions[0].id] = raw_input
return response
@staticmethod
def format_for_display(request: HITLRequest) -> str:
"""Format HITL request for user-friendly display."""
parts = []
if request.objective:
parts.append(f"📋 Objective: {request.objective}")
if request.current_state:
parts.append(f"📍 Current State: {request.current_state}")
if request.instructions:
parts.append(f"\n{request.instructions}")
if request.questions:
parts.append(f"\n❓ Questions ({len(request.questions)}):")
for i, q in enumerate(request.questions, 1):
parts.append(f"{i}. {q.question}")
if q.help_text:
parts.append(f" 💡 {q.help_text}")
if q.options:
parts.append(f" Options: {', '.join(q.options)}")
if request.missing_info:
parts.append("\n📝 Missing Information:")
for info in request.missing_info:
parts.append(f"{info}")
if request.examples:
parts.append("\n📚 Examples:")
for example in request.examples:
parts.append(f"{example}")
return "\n".join(parts)
+14
View File
@@ -565,6 +565,20 @@ class NodeContext:
# staging / running) without restarting the conversation.
dynamic_prompt_provider: Any = None # Callable[[], str] | None
# Skill system prompts — injected by the skill discovery pipeline
skills_catalog_prompt: str = "" # Available skills XML catalog
protocols_prompt: str = "" # Default skill operational protocols
skill_dirs: list[str] = field(default_factory=list) # Skill base dirs for resource access
# DS-12: batch auto-detection nudge appended to system prompt when input looks like a batch
default_skill_batch_nudge: str | None = None
# DS-13: token usage ratio at which to inject a context preservation warning
default_skill_warn_ratio: float | None = None
# Per-iteration metadata provider — when set, EventLoopNode merges
# the returned dict into node_loop_iteration event data. Used by
# the queen to record the current phase per iteration.
iteration_metadata_provider: Any = None # Callable[[], dict] | None
@dataclass
class NodeResult:
+71 -10
View File
@@ -26,6 +26,16 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# Injected into every worker node's system prompt so the LLM understands
# it is one step in a multi-node pipeline and should not overreach.
EXECUTION_SCOPE_PREAMBLE = (
"EXECUTION SCOPE: You are one node in a multi-step workflow graph. "
"Focus ONLY on the task described in your instructions below. "
"Call set_output() for each of your declared output keys, then stop. "
"Do NOT attempt work that belongs to other nodes — the framework "
"routes data between nodes automatically."
)
def _with_datetime(prompt: str) -> str:
"""Append current datetime with local timezone to a system prompt."""
@@ -140,14 +150,24 @@ def compose_system_prompt(
focus_prompt: str | None,
narrative: str | None = None,
accounts_prompt: str | None = None,
skills_catalog_prompt: str | None = None,
protocols_prompt: str | None = None,
execution_preamble: str | None = None,
node_type_preamble: str | None = None,
) -> str:
"""Compose the three-layer system prompt.
"""Compose the multi-layer system prompt.
Args:
identity_prompt: Layer 1 static agent identity (from GraphSpec).
focus_prompt: Layer 3 per-node focus directive (from NodeSpec.system_prompt).
narrative: Layer 2 auto-generated from conversation state.
accounts_prompt: Connected accounts block (sits between identity and narrative).
skills_catalog_prompt: Available skills catalog XML (Agent Skills standard).
protocols_prompt: Default skill operational protocols section.
execution_preamble: EXECUTION_SCOPE_PREAMBLE for worker nodes
(prepended before focus so the LLM knows its pipeline scope).
node_type_preamble: Node-type-specific preamble, e.g. GCU browser
best-practices prompt (prepended before focus).
Returns:
Composed system prompt with all layers present, plus current datetime.
@@ -162,10 +182,27 @@ def compose_system_prompt(
if accounts_prompt:
parts.append(f"\n{accounts_prompt}")
# Skills catalog (discovered skills available for activation)
if skills_catalog_prompt:
parts.append(f"\n{skills_catalog_prompt}")
# Operational protocols (default skill behavioral guidance)
if protocols_prompt:
parts.append(f"\n{protocols_prompt}")
# Layer 2: Narrative (what's happened so far)
if narrative:
parts.append(f"\n--- Context (what has happened so far) ---\n{narrative}")
# Execution scope preamble (worker nodes — tells the LLM it is one
# step in a multi-node pipeline and should not overreach)
if execution_preamble:
parts.append(f"\n{execution_preamble}")
# Node-type preamble (e.g. GCU browser best-practices)
if node_type_preamble:
parts.append(f"\n{node_type_preamble}")
# Layer 3: Focus (current phase directive)
if focus_prompt:
parts.append(f"\n--- Current Focus ---\n{focus_prompt}")
@@ -227,7 +264,6 @@ def build_transition_marker(
memory: SharedMemory,
cumulative_tool_names: list[str],
data_dir: Path | str | None = None,
adapt_content: str | None = None,
) -> str:
"""Build a 'State of the World' transition marker.
@@ -241,7 +277,6 @@ def build_transition_marker(
memory: Current shared memory state.
cumulative_tool_names: All tools available (cumulative set).
data_dir: Path to spillover data directory.
adapt_content: Agent working memory (adapt.md) content.
Returns:
Transition marker message text.
@@ -255,7 +290,9 @@ def build_transition_marker(
sections.append(f"\nCompleted: {previous_node.name}")
sections.append(f" {previous_node.description}")
# Outputs in memory
# Outputs in memory — use file references for large values so the
# next node loads full data from disk instead of seeing truncated
# inline previews that look deceptively complete.
all_memory = memory.read_all()
if all_memory:
memory_lines: list[str] = []
@@ -263,7 +300,29 @@ def build_transition_marker(
if value is None:
continue
val_str = str(value)
if len(val_str) > 300:
if len(val_str) > 300 and data_dir:
# Auto-spill large transition values to data files
import json as _json
data_path = Path(data_dir)
data_path.mkdir(parents=True, exist_ok=True)
ext = ".json" if isinstance(value, (dict, list)) else ".txt"
filename = f"output_{key}{ext}"
try:
write_content = (
_json.dumps(value, indent=2, ensure_ascii=False)
if isinstance(value, (dict, list))
else str(value)
)
(data_path / filename).write_text(write_content, encoding="utf-8")
file_size = (data_path / filename).stat().st_size
val_str = (
f"[Saved to '{filename}' ({file_size:,} bytes). "
f"Use load_data(filename='{filename}') to access.]"
)
except Exception:
val_str = val_str[:300] + "..."
elif len(val_str) > 300:
val_str = val_str[:300] + "..."
memory_lines.append(f" {key}: {val_str}")
if memory_lines:
@@ -280,13 +339,9 @@ def build_transition_marker(
]
if file_lines:
sections.append(
"\nData files (use read_file to access):\n" + "\n".join(file_lines)
"\nData files (use load_data to access):\n" + "\n".join(file_lines)
)
# Agent working memory
if adapt_content:
sections.append(f"\n--- Agent Memory ---\n{adapt_content}")
# Available tools
if cumulative_tool_names:
sections.append("\nAvailable tools: " + ", ".join(sorted(cumulative_tool_names)))
@@ -294,6 +349,12 @@ def build_transition_marker(
# Next phase
sections.append(f"\nNow entering: {next_node.name}")
sections.append(f" {next_node.description}")
if next_node.output_keys:
sections.append(
f"\nYour ONLY job in this phase: complete the task above and call "
f"set_output() for {next_node.output_keys}. Do NOT do work that "
f"belongs to later phases."
)
# Reflection prompt (engineered metacognition)
sections.append(
+15 -7
View File
@@ -115,11 +115,23 @@ class SafeEvalVisitor(ast.NodeVisitor):
return True
def visit_BoolOp(self, node: ast.BoolOp) -> Any:
values = [self.visit(v) for v in node.values]
# Short-circuit evaluation to match Python semantics.
# Previously all operands were eagerly evaluated, which broke
# guard patterns like: ``x is not None and x.get("key")``
if isinstance(node.op, ast.And):
return all(values)
result = True
for v in node.values:
result = self.visit(v)
if not result:
return result
return result
elif isinstance(node.op, ast.Or):
return any(values)
result = False
for v in node.values:
result = self.visit(v)
if result:
return result
return result
raise ValueError(f"Boolean operator {type(node.op).__name__} is not allowed")
def visit_IfExp(self, node: ast.IfExp) -> Any:
@@ -216,10 +228,6 @@ class SafeEvalVisitor(ast.NodeVisitor):
return func(*args, **keywords)
def visit_Index(self, node: ast.Index) -> Any:
# Python < 3.9
return self.visit(node.value)
def safe_eval(expr: str, context: dict[str, Any] | None = None) -> Any:
"""
+706
View File
@@ -0,0 +1,706 @@
"""Antigravity (Google internal Cloud Code Assist) LLM provider.
Antigravity is Google's unified gateway API that routes requests to Gemini,
Claude, and GPT-OSS models through a single Gemini-style interface. It is
NOT the public ``generativelanguage.googleapis.com`` API.
Authentication uses Google OAuth2. Token refresh is done directly with the
OAuth client secret; no local proxy required.
Credential sources (checked in order):
1. ``~/.hive/antigravity-accounts.json`` (native OAuth implementation)
2. Antigravity IDE SQLite state DB (macOS / Linux)
"""
from __future__ import annotations
import json
import logging
import re
import time
import uuid
from collections.abc import AsyncIterator, Callable, Iterator
from pathlib import Path
from typing import Any
from framework.llm.provider import LLMProvider, LLMResponse, Tool
from framework.llm.stream_events import (
FinishEvent,
StreamErrorEvent,
StreamEvent,
TextDeltaEvent,
TextEndEvent,
ToolCallEvent,
)
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
_TOKEN_URL = "https://oauth2.googleapis.com/token"
# Fallback order: daily sandbox → autopush sandbox → production
_ENDPOINTS = [
"https://daily-cloudcode-pa.sandbox.googleapis.com",
"https://autopush-cloudcode-pa.sandbox.googleapis.com",
"https://cloudcode-pa.googleapis.com",
]
_DEFAULT_PROJECT_ID = "rising-fact-p41fc"
_TOKEN_REFRESH_BUFFER_SECS = 60
# Credentials file in ~/.hive/ (native implementation)
_ACCOUNTS_FILE = Path.home() / ".hive" / "antigravity-accounts.json"
_IDE_STATE_DB_MAC = (
Path.home()
/ "Library"
/ "Application Support"
/ "Antigravity"
/ "User"
/ "globalStorage"
/ "state.vscdb"
)
_IDE_STATE_DB_LINUX = (
Path.home() / ".config" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
)
_IDE_STATE_DB_KEY = "antigravityUnifiedStateSync.oauthToken"
_BASE_HEADERS: dict[str, str] = {
# Mimic the Antigravity Electron app so the API accepts the request.
"User-Agent": (
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 "
"(KHTML, like Gecko) Antigravity/1.18.3 Chrome/138.0.7204.235 "
"Electron/37.3.1 Safari/537.36"
),
"X-Goog-Api-Client": "google-cloud-sdk vscode_cloudshelleditor/0.1",
"Client-Metadata": '{"ideType":"ANTIGRAVITY","platform":"MACOS","pluginType":"GEMINI"}',
}
# ---------------------------------------------------------------------------
# Credential loading helpers
# ---------------------------------------------------------------------------
def _load_from_json_file() -> tuple[str | None, str | None, str, float]:
"""Read credentials from JSON accounts file.
Reads from ~/.hive/antigravity-accounts.json.
Returns ``(access_token | None, refresh_token | None, project_id, expires_at)``.
``expires_at`` is a Unix timestamp (seconds); 0.0 means unknown.
"""
if not _ACCOUNTS_FILE.exists():
return None, None, _DEFAULT_PROJECT_ID, 0.0
try:
with open(_ACCOUNTS_FILE, encoding="utf-8") as fh:
data = json.load(fh)
except (OSError, json.JSONDecodeError) as exc:
logger.debug("Failed to read Antigravity accounts file: %s", exc)
return None, None, _DEFAULT_PROJECT_ID, 0.0
accounts = data.get("accounts", [])
if not accounts:
return None, None, _DEFAULT_PROJECT_ID, 0.0
account = next((a for a in accounts if a.get("enabled", True) is not False), accounts[0])
schema_version = data.get("schemaVersion", 1)
if schema_version >= 4:
# V4 schema: refresh = "refreshToken|projectId[|managedProjectId]"
refresh_str = account.get("refresh", "")
parts = refresh_str.split("|") if refresh_str else []
refresh_token: str | None = parts[0] if parts else None
project_id = parts[1] if len(parts) >= 2 and parts[1] else _DEFAULT_PROJECT_ID
access_token: str | None = account.get("access")
expires_ms: int = account.get("expires", 0)
expires_at = float(expires_ms) / 1000.0 if expires_ms else 0.0
# Treat near-expiry tokens as absent so _ensure_token() triggers a refresh.
if access_token and expires_at and time.time() >= expires_at - _TOKEN_REFRESH_BUFFER_SECS:
access_token = None
expires_at = 0.0
return access_token, refresh_token, project_id, expires_at
else:
# V1-V3 schema: plain accessToken / refreshToken fields
access_token = account.get("accessToken")
refresh_token = account.get("refreshToken")
# Estimate expiry from last_refresh + 1 h
last_refresh_str: str | None = data.get("last_refresh")
expires_at = 0.0
if last_refresh_str:
try:
from datetime import datetime # noqa: PLC0415
ts = datetime.fromisoformat(last_refresh_str.replace("Z", "+00:00")).timestamp()
expires_at = ts + 3600.0
if time.time() >= expires_at - _TOKEN_REFRESH_BUFFER_SECS:
access_token = None
except (ValueError, TypeError):
pass
return access_token, refresh_token, _DEFAULT_PROJECT_ID, expires_at
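# Sketch of the v4 accounts file this parser expects (field names follow the
# parsing above; the concrete values are placeholders, not real credentials):
#
# {
#   "schemaVersion": 4,
#   "accounts": [
#     {
#       "enabled": true,
#       "access": "ya29.<access-token>",
#       "expires": 1767225600000,
#       "refresh": "1//<refresh-token>|my-project-id"
#     }
#   ]
# }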
def _load_from_ide_db() -> tuple[str | None, str | None, float]:
"""Extract ``(access_token, refresh_token, expires_at)`` from the IDE SQLite DB."""
import base64 # noqa: PLC0415
import sqlite3 # noqa: PLC0415
for db_path in (_IDE_STATE_DB_MAC, _IDE_STATE_DB_LINUX):
if not db_path.exists():
continue
try:
con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
try:
row = con.execute(
"SELECT value FROM ItemTable WHERE key = ?",
(_IDE_STATE_DB_KEY,),
).fetchone()
finally:
con.close()
if not row:
continue
blob = base64.b64decode(row[0])
candidates = re.findall(rb"[A-Za-z0-9+/=_\-]{40,}", blob)
access_token: str | None = None
refresh_token: str | None = None
for candidate in candidates:
try:
padded = candidate + b"=" * (-len(candidate) % 4)
inner = base64.urlsafe_b64decode(padded)
except Exception:
continue
if not access_token:
m = re.search(rb"ya29\.[A-Za-z0-9_\-\.]+", inner)
if m:
access_token = m.group(0).decode("ascii")
if not refresh_token:
m = re.search(rb"1//[A-Za-z0-9_\-\.]+", inner)
if m:
refresh_token = m.group(0).decode("ascii")
if access_token and refresh_token:
break
if access_token:
# Estimate expiry from DB mtime (IDE refreshes while running)
mtime = db_path.stat().st_mtime
expires_at = mtime + 3600.0
return access_token, refresh_token, expires_at
except Exception as exc:
logger.debug("Failed to read Antigravity IDE state DB: %s", exc)
continue
return None, None, 0.0
def _do_token_refresh(refresh_token: str) -> tuple[str, float] | None:
"""POST to Google OAuth endpoint and return ``(new_access_token, expires_at)``.
The client secret is sourced via ``get_antigravity_client_secret()`` (env var,
config file, or npm package fallback). When unavailable the refresh is attempted
without it Google will reject it for web-app clients, but the npm fallback in
``get_antigravity_client_secret()`` should ensure the secret is found at runtime.
Returns None when the HTTP request fails.
"""
from framework.config import get_antigravity_client_secret # noqa: PLC0415
client_secret = get_antigravity_client_secret()
if not client_secret:
logger.debug(
"Antigravity client secret not configured — attempting refresh without it. "
"Set ANTIGRAVITY_CLIENT_SECRET or run quickstart to configure."
)
import urllib.error # noqa: PLC0415
import urllib.parse # noqa: PLC0415
import urllib.request # noqa: PLC0415
from framework.config import get_antigravity_client_id # noqa: PLC0415
params: dict[str, str] = {
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"client_id": get_antigravity_client_id(),
}
if client_secret:
params["client_secret"] = client_secret
body = urllib.parse.urlencode(params).encode("utf-8")
req = urllib.request.Request(
_TOKEN_URL,
data=body,
headers={"Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=15) as resp: # noqa: S310
payload = json.loads(resp.read())
access_token: str = payload["access_token"]
expires_in: int = payload.get("expires_in", 3600)
logger.debug("Antigravity token refreshed successfully")
return access_token, time.time() + expires_in
except Exception as exc:
logger.debug("Antigravity token refresh failed: %s", exc)
return None
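# A rough sketch of the exchange above (token values illustrative): the refresh
# is a single form-encoded POST to _TOKEN_URL,
#
#   grant_type=refresh_token&refresh_token=1//...&client_id=...[&client_secret=...]
#
# and Google replies with JSON like {"access_token": "ya29....", "expires_in": 3599},
# so a successful call returns ("ya29....", time.time() + 3599) while any HTTP or
# parsing failure is logged at debug level and collapses to None.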
# ---------------------------------------------------------------------------
# Message conversion helpers
# ---------------------------------------------------------------------------
def _clean_tool_name(name: str) -> str:
"""Sanitize a tool name for the Antigravity function-calling schema."""
name = re.sub(r"[/\s]", "_", name)
if name and not (name[0].isalpha() or name[0] == "_"):
name = "_" + name
return name[:64]
def _to_gemini_contents(
messages: list[dict[str, Any]],
thought_sigs: dict[str, str] | None = None,
) -> list[dict[str, Any]]:
"""Convert OpenAI-format messages to Gemini-style ``contents`` array."""
# Pre-build a map tool_call_id → function_name from assistant messages.
# Tool result messages (role="tool") only carry tool_call_id, not the name,
# but Gemini requires functionResponse.name to match the functionCall.name.
tc_id_to_name: dict[str, str] = {}
for msg in messages:
if msg.get("role") == "assistant":
for tc in msg.get("tool_calls") or []:
tc_id = tc.get("id")
fn_name = tc.get("function", {}).get("name", "")
if tc_id and fn_name:
tc_id_to_name[tc_id] = fn_name
contents: list[dict[str, Any]] = []
# Consecutive tool-result messages must be batched into one user turn.
pending_tool_parts: list[dict[str, Any]] = []
def _flush_tool_results() -> None:
if pending_tool_parts:
contents.append({"role": "user", "parts": list(pending_tool_parts)})
pending_tool_parts.clear()
for msg in messages:
role = msg.get("role", "user")
content = msg.get("content")
if role == "system":
continue # Handled via systemInstruction, not in contents.
if role == "tool":
# OpenAI tool result → Gemini functionResponse part.
result_str = content if isinstance(content, str) else str(content or "")
tc_id = msg.get("tool_call_id", "")
# Look up function name from the pre-built map; fall back to msg.name.
fn_name = tc_id_to_name.get(tc_id) or msg.get("name", "")
pending_tool_parts.append(
{
"functionResponse": {
"name": fn_name,
"id": tc_id,
"response": {"content": result_str},
}
}
)
continue
_flush_tool_results()
gemini_role = "model" if role == "assistant" else "user"
parts: list[dict[str, Any]] = []
if isinstance(content, str) and content:
parts.append({"text": content})
elif isinstance(content, list):
for block in content:
if not isinstance(block, dict):
continue
if block.get("type") == "text":
text = block.get("text", "")
if text:
parts.append({"text": text})
# Other block types (image_url etc.) skipped.
# Assistant messages may carry OpenAI-style tool_calls.
for tc in msg.get("tool_calls") or []:
fn = tc.get("function", {})
try:
args = json.loads(fn.get("arguments", "{}") or "{}")
except (json.JSONDecodeError, TypeError):
args = {}
tc_id = tc.get("id", str(uuid.uuid4()))
fc_part: dict[str, Any] = {
"functionCall": {
"name": fn.get("name", ""),
"args": args,
"id": tc_id,
}
}
if thought_sigs:
sig = thought_sigs.get(tc_id, "")
if sig:
fc_part["thoughtSignature"] = sig # part-level, not inside functionCall
parts.append(fc_part)
if parts:
contents.append({"role": gemini_role, "parts": parts})
_flush_tool_results()
# Gemini requires the first turn to be a user turn. Drop any leading
# model messages so the API doesn't reject with a 400.
while contents and contents[0].get("role") == "model":
contents.pop(0)
return contents
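# A minimal worked example of the conversion above (values illustrative):
#
#   [{"role": "system", "content": "Be terse."},
#    {"role": "user", "content": "hi"},
#    {"role": "assistant", "tool_calls": [
#        {"id": "c1", "function": {"name": "get_time", "arguments": "{}"}}]},
#    {"role": "tool", "tool_call_id": "c1", "content": "12:00"}]
#
# becomes (the system message is carried separately via systemInstruction):
#
#   [{"role": "user", "parts": [{"text": "hi"}]},
#    {"role": "model", "parts": [
#        {"functionCall": {"name": "get_time", "args": {}, "id": "c1"}}]},
#    {"role": "user", "parts": [
#        {"functionResponse": {"name": "get_time", "id": "c1",
#                              "response": {"content": "12:00"}}}]}]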
# ---------------------------------------------------------------------------
# Response parsing helpers
# ---------------------------------------------------------------------------
def _map_finish_reason(reason: str) -> str:
return {"STOP": "stop", "MAX_TOKENS": "max_tokens", "OTHER": "tool_use"}.get(
(reason or "").upper(), "stop"
)
def _parse_complete_response(raw: dict[str, Any], model: str) -> LLMResponse:
"""Parse a non-streaming Antigravity response dict → LLMResponse."""
payload: dict[str, Any] = raw.get("response", raw)
candidates: list[dict[str, Any]] = payload.get("candidates", [])
usage: dict[str, Any] = payload.get("usageMetadata", {})
text_parts: list[str] = []
if candidates:
for part in candidates[0].get("content", {}).get("parts", []):
if "text" in part and not part.get("thought"):
text_parts.append(part["text"])
return LLMResponse(
content="".join(text_parts),
model=payload.get("modelVersion", model),
input_tokens=usage.get("promptTokenCount", 0),
output_tokens=usage.get("candidatesTokenCount", 0),
stop_reason=_map_finish_reason(candidates[0].get("finishReason", "") if candidates else ""),
raw_response=raw,
)
def _parse_sse_stream(
response: Any,
model: str,
on_thought_signature: Callable[[str, str], None] | None = None,
) -> Iterator[StreamEvent]:
"""Parse Antigravity SSE response line-by-line → StreamEvents.
Each SSE line looks like::
data: {"response": {"candidates": [...], "usageMetadata": {...}}, "traceId": "..."}
"""
accumulated = ""
input_tokens = 0
output_tokens = 0
finish_reason = ""
for raw_line in response:
line: str = raw_line.decode("utf-8", errors="replace").rstrip("\r\n")
if not line.startswith("data:"):
continue
data_str = line[5:].strip()
if not data_str or data_str == "[DONE]":
continue
try:
data: dict[str, Any] = json.loads(data_str)
except json.JSONDecodeError:
continue
# The outer envelope is {"response": {...}, "traceId": "..."}.
payload: dict[str, Any] = data.get("response", data)
usage = payload.get("usageMetadata", {})
if usage:
input_tokens = usage.get("promptTokenCount", input_tokens)
output_tokens = usage.get("candidatesTokenCount", output_tokens)
for candidate in payload.get("candidates", []):
fr = candidate.get("finishReason", "")
if fr:
finish_reason = fr
for part in candidate.get("content", {}).get("parts", []):
if "text" in part and not part.get("thought"):
delta: str = part["text"]
accumulated += delta
yield TextDeltaEvent(content=delta, snapshot=accumulated)
elif "functionCall" in part:
fc: dict[str, Any] = part["functionCall"]
tool_use_id = fc.get("id") or str(uuid.uuid4())
thought_sig = part.get("thoughtSignature", "") # sibling of functionCall
if thought_sig and on_thought_signature:
on_thought_signature(tool_use_id, thought_sig)
args = fc.get("args", {})
if isinstance(args, str):
try:
args = json.loads(args)
except json.JSONDecodeError:
args = {}
yield ToolCallEvent(
tool_use_id=tool_use_id,
tool_name=fc.get("name", ""),
tool_input=args,
)
if accumulated:
yield TextEndEvent(full_text=accumulated)
yield FinishEvent(
stop_reason=_map_finish_reason(finish_reason),
input_tokens=input_tokens,
output_tokens=output_tokens,
model=model,
)
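# A rough worked example of the stream above: one SSE line such as
#
#   data: {"response": {"candidates": [{"content": {"parts": [{"text": "Hel"}]}}],
#                       "usageMetadata": {"promptTokenCount": 12, "candidatesTokenCount": 3}}}
#
# yields TextDeltaEvent(content="Hel", snapshot="Hel"); a later part carrying
# {"functionCall": {"name": "get_time", "args": {}, "id": "c1"}} yields a
# ToolCallEvent, and the generator always finishes with a FinishEvent holding the
# last usage counts and the mapped stop reason.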
# ---------------------------------------------------------------------------
# Provider
# ---------------------------------------------------------------------------
class AntigravityProvider(LLMProvider):
"""LLM provider for Google's internal Antigravity Code Assist gateway.
No local proxy required. Handles OAuth token refresh, Gemini-format
request/response conversion, and SSE streaming directly.
"""
def __init__(self, model: str = "gemini-3-flash") -> None:
# Strip any provider prefix ("openai/gemini-3-flash" → "gemini-3-flash").
if "/" in model:
model = model.split("/", 1)[1]
self.model = model
self._access_token: str | None = None
self._refresh_token: str | None = None
self._project_id: str = _DEFAULT_PROJECT_ID
self._token_expires_at: float = 0.0
self._thought_sigs: dict[str, str] = {} # tool_use_id → thoughtSignature
self._init_credentials()
# --- Credential management -------------------------------------------- #
def _init_credentials(self) -> None:
"""Load credentials from the best available source."""
access, refresh, project_id, expires_at = _load_from_json_file()
if refresh:
self._refresh_token = refresh
self._project_id = project_id
self._access_token = access
self._token_expires_at = expires_at
return
# Fall back to IDE state DB.
access, refresh, expires_at = _load_from_ide_db()
if access:
self._access_token = access
self._refresh_token = refresh
self._token_expires_at = expires_at
def has_credentials(self) -> bool:
"""Return True if any credential is available."""
return bool(self._access_token or self._refresh_token)
def _ensure_token(self) -> str:
"""Return a valid access token, refreshing via OAuth if needed."""
if (
self._access_token
and self._token_expires_at
and time.time() < self._token_expires_at - _TOKEN_REFRESH_BUFFER_SECS
):
return self._access_token
if self._refresh_token:
result = _do_token_refresh(self._refresh_token)
if result:
self._access_token, self._token_expires_at = result
return self._access_token
if self._access_token:
logger.warning("Using potentially stale Antigravity access token")
return self._access_token
raise RuntimeError(
"No valid Antigravity credentials. "
"Run: uv run python core/antigravity_auth.py auth account add"
)
# --- Request building -------------------------------------------------- #
def _build_body(
self,
messages: list[dict[str, Any]],
system: str,
tools: list[Tool] | None,
max_tokens: int,
) -> dict[str, Any]:
contents = _to_gemini_contents(messages, self._thought_sigs)
inner: dict[str, Any] = {
"contents": contents,
"generationConfig": {"maxOutputTokens": max_tokens},
}
if system:
inner["systemInstruction"] = {"parts": [{"text": system}]}
if tools:
inner["tools"] = [
{
"functionDeclarations": [
{
"name": _clean_tool_name(t.name),
"description": t.description,
"parameters": t.parameters
or {
"type": "object",
"properties": {},
},
}
for t in tools
]
}
]
return {
"project": self._project_id,
"model": self.model,
"request": inner,
"requestType": "agent",
"userAgent": "antigravity",
"requestId": f"agent-{uuid.uuid4()}",
}
# --- HTTP transport ---------------------------------------------------- #
def _post(self, body: dict[str, Any], *, streaming: bool) -> Any:
"""POST to the Antigravity endpoint, falling back through the endpoint list."""
import urllib.error # noqa: PLC0415
import urllib.request # noqa: PLC0415
token = self._ensure_token()
body_bytes = json.dumps(body).encode("utf-8")
path = (
"/v1internal:streamGenerateContent?alt=sse"
if streaming
else "/v1internal:generateContent"
)
headers = {
**_BASE_HEADERS,
"Authorization": f"Bearer {token}",
"Content-Type": "application/json",
}
if streaming:
headers["Accept"] = "text/event-stream"
last_exc: Exception | None = None
for base_url in _ENDPOINTS:
url = f"{base_url}{path}"
req = urllib.request.Request(url, data=body_bytes, headers=headers, method="POST")
try:
return urllib.request.urlopen(req, timeout=120) # noqa: S310
except urllib.error.HTTPError as exc:
if exc.code in (401, 403) and self._refresh_token:
# Token rejected — refresh once and retry this endpoint.
result = _do_token_refresh(self._refresh_token)
if result:
self._access_token, self._token_expires_at = result
headers["Authorization"] = f"Bearer {self._access_token}"
req2 = urllib.request.Request(
url, data=body_bytes, headers=headers, method="POST"
)
try:
return urllib.request.urlopen(req2, timeout=120) # noqa: S310
except urllib.error.HTTPError as exc2:
last_exc = exc2
continue
last_exc = exc
continue
elif exc.code >= 500:
last_exc = exc
continue
# Include the API response body in the exception for easier debugging.
try:
err_body = exc.read().decode("utf-8", errors="replace")
except Exception:
err_body = "(unreadable)"
raise RuntimeError(f"Antigravity HTTP {exc.code} from {url}: {err_body}") from exc
except (urllib.error.URLError, OSError) as exc:
last_exc = exc
continue
raise RuntimeError(
f"All Antigravity endpoints failed. Last error: {last_exc}"
) from last_exc
# --- LLMProvider interface --------------------------------------------- #
def complete(
self,
messages: list[dict[str, Any]],
system: str = "",
tools: list[Tool] | None = None,
max_tokens: int = 1024,
response_format: dict[str, Any] | None = None,
json_mode: bool = False,
max_retries: int | None = None,
) -> LLMResponse:
if json_mode:
suffix = "\n\nPlease respond with a valid JSON object."
system = (system + suffix) if system else suffix.strip()
body = self._build_body(messages, system, tools, max_tokens)
resp = self._post(body, streaming=False)
return _parse_complete_response(json.loads(resp.read()), self.model)
async def stream(
self,
messages: list[dict[str, Any]],
system: str = "",
tools: list[Tool] | None = None,
max_tokens: int = 4096,
) -> AsyncIterator[StreamEvent]:
import asyncio # noqa: PLC0415
import concurrent.futures # noqa: PLC0415
loop = asyncio.get_running_loop()
queue: asyncio.Queue[StreamEvent | None] = asyncio.Queue()
def _blocking_work() -> None:
try:
body = self._build_body(messages, system, tools, max_tokens)
http_resp = self._post(body, streaming=True)
for event in _parse_sse_stream(
http_resp, self.model, self._thought_sigs.__setitem__
):
loop.call_soon_threadsafe(queue.put_nowait, event)
except Exception as exc:
logger.error("Antigravity stream error: %s", exc)
loop.call_soon_threadsafe(queue.put_nowait, StreamErrorEvent(error=str(exc)))
finally:
loop.call_soon_threadsafe(queue.put_nowait, None) # sentinel
executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
fut = loop.run_in_executor(executor, _blocking_work)
try:
while True:
event = await queue.get()
if event is None:
break
yield event
finally:
await fut
executor.shutdown(wait=False)
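# A minimal usage sketch (model name and prompt illustrative):
#
#   provider = AntigravityProvider(model="gemini-3-flash")
#   if provider.has_credentials():
#       resp = provider.complete(
#           [{"role": "user", "content": "Say hello"}],
#           system="Be brief.",
#           max_tokens=256,
#       )
#       print(resp.content)
#
# complete() refreshes the OAuth token on demand and raises RuntimeError when no
# credentials are available or every endpoint fails.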
+106
View File
@@ -0,0 +1,106 @@
"""Model capability checks for LLM providers.
Vision support rules are derived from official vendor documentation:
- ZAI (z.ai): docs.z.ai/guides/vlm - GLM-4.6V variants are vision; GLM-5/4.6/4.7 are text-only
- MiniMax: platform.minimax.io/docs - minimax-vl-01 is vision; M2.x are text-only
- DeepSeek: api-docs.deepseek.com - deepseek-vl2 is vision; chat/reasoner are text-only
- Cerebras: inference-docs.cerebras.ai - no vision models at all
- Groq: console.groq.com/docs/vision - vision capable; treat as supported by default
- Ollama/LM Studio/vLLM/llama.cpp: local runners denied by default; model names
don't reliably indicate vision support, so users must configure explicitly
"""
from __future__ import annotations
def _model_name(model: str) -> str:
"""Return the bare model name after stripping any 'provider/' prefix."""
if "/" in model:
return model.split("/", 1)[1]
return model
# Step 1: explicit vision allow-list — these always support images regardless
# of what the provider-level rules say. Checked first so that e.g. glm-4.6v
# is allowed even though glm-4.6 is denied.
_VISION_ALLOW_BARE_PREFIXES: tuple[str, ...] = (
# ZAI/GLM vision models (docs.z.ai/guides/vlm)
"glm-4v", # GLM-4V series (legacy)
"glm-4.6v", # GLM-4.6V, GLM-4.6V-flash, GLM-4.6V-flashx
# DeepSeek vision models
"deepseek-vl", # deepseek-vl2, deepseek-vl2-small, deepseek-vl2-tiny
# MiniMax vision model
"minimax-vl", # minimax-vl-01
)
# Step 2: provider-level deny — every model from this provider is text-only.
_TEXT_ONLY_PROVIDER_PREFIXES: tuple[str, ...] = (
# Cerebras: inference-docs.cerebras.ai lists only text models
"cerebras/",
# Local runners: model names don't reliably indicate vision support
"ollama/",
"ollama_chat/",
"lm_studio/",
"vllm/",
"llamacpp/",
)
# Step 3: per-model deny — text-only models within otherwise mixed providers.
# Matched against the bare model name (provider prefix stripped, lower-cased).
# The vision allow-list above is checked first, so vision variants of the same
# family are already handled before these deny patterns are reached.
_TEXT_ONLY_MODEL_BARE_PREFIXES: tuple[str, ...] = (
# --- ZAI / GLM family ---
# text-only: glm-5, glm-4.6, glm-4.7, glm-4.5, zai-glm-*
# vision: glm-4v, glm-4.6v (caught by allow-list above)
"glm-5",
"glm-4.6", # bare glm-4.6 is text-only; glm-4.6v is caught by allow-list
"glm-4.7",
"glm-4.5",
"zai-glm",
# --- DeepSeek ---
# text-only: deepseek-chat, deepseek-coder, deepseek-reasoner
# vision: deepseek-vl2 (caught by allow-list above)
# Note: LiteLLM's deepseek handler may flatten content lists for some models;
# VL models are allowed through and rely on LiteLLM's native VL support.
"deepseek-chat",
"deepseek-coder",
"deepseek-reasoner",
# --- MiniMax ---
# text-only: minimax-m2.*, minimax-text-*, abab* (legacy)
# vision: minimax-vl-01 (caught by allow-list above)
"minimax-m2",
"minimax-text",
"abab",
)
def supports_image_tool_results(model: str) -> bool:
"""Return whether *model* can receive image content in messages.
Used to gate both user-message images and tool-result image blocks.
Logic (checked in order):
1. Vision allow-list → True (known vision model, skip all denies)
2. Provider deny → False (entire provider is text-only)
3. Model deny → False (specific text-only model within a mixed provider)
4. Default → True (assume capable; unknown providers and models)
"""
model_lower = model.lower()
bare = _model_name(model_lower)
# 1. Explicit vision allow — takes priority over all denies
if any(bare.startswith(p) for p in _VISION_ALLOW_BARE_PREFIXES):
return True
# 2. Provider-level deny (all models from this provider are text-only)
if any(model_lower.startswith(p) for p in _TEXT_ONLY_PROVIDER_PREFIXES):
return False
# 3. Per-model deny (text-only variants within mixed-capability families)
if any(bare.startswith(p) for p in _TEXT_ONLY_MODEL_BARE_PREFIXES):
return False
# 4. Default: assume vision capable
# Covers: OpenAI, Anthropic, Google, Mistral, Kimi, and other hosted providers
return True
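# Worked examples of the ordering above (model strings illustrative):
#
#   supports_image_tool_results("zai/glm-4.6v-flash")      -> True   (allow-list)
#   supports_image_tool_results("cerebras/llama-3.3-70b")  -> False  (provider deny)
#   supports_image_tool_results("deepseek/deepseek-chat")  -> False  (model deny)
#   supports_image_tool_results("openai/gpt-4o")           -> True   (default)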
File diff suppressed because it is too large
+2
View File
@@ -45,6 +45,8 @@ class ToolResult:
tool_use_id: str
content: str
is_error: bool = False
image_content: list[dict[str, Any]] | None = None
is_skill_content: bool = False # AS-10: marks activated skill body, protected from pruning
class LLMProvider(ABC):
+1
View File
@@ -71,6 +71,7 @@ class FinishEvent:
stop_reason: str = ""
input_tokens: int = 0
output_tokens: int = 0
cached_tokens: int = 0
model: str = ""
-4
View File
@@ -1,4 +0,0 @@
"""MCP servers for worker-bee."""
# Don't auto-import servers to avoid double-import issues when running with -m
__all__ = []
+1 -33
View File
@@ -1,33 +1 @@
"""Framework-level worker monitoring package.
Provides the Worker Health Judge: a reusable secondary graph that attaches to
any worker agent runtime and monitors its execution health via periodic log
inspection. Emits structured EscalationTickets when degradation is detected.
Usage::
from framework.monitoring import HEALTH_JUDGE_ENTRY_POINT, judge_goal, judge_graph
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools
# Register tools bound to the worker runtime's EventBus
monitoring_registry = ToolRegistry()
register_worker_monitoring_tools(monitoring_registry, worker_runtime._event_bus, storage_path)
# Load judge as secondary graph on the worker runtime
await worker_runtime.add_graph(
graph_id="judge",
graph=judge_graph,
goal=judge_goal,
entry_points={"health_check": HEALTH_JUDGE_ENTRY_POINT},
storage_subpath="graphs/judge",
)
"""
from .judge import HEALTH_JUDGE_ENTRY_POINT, judge_goal, judge_graph, judge_node
__all__ = [
"HEALTH_JUDGE_ENTRY_POINT",
"judge_goal",
"judge_graph",
"judge_node",
]
"""Framework-level worker monitoring package."""
-258
View File
@@ -1,258 +0,0 @@
"""Worker Health Judge — framework-level reusable monitoring graph.
Attaches to any worker agent runtime as a secondary graph. Fires on a
2-minute timer, reads the worker's session logs via ``get_worker_health_summary``,
accumulates observations in a continuous conversation context, and emits a
structured ``EscalationTicket`` when it detects a degradation pattern.
Usage::
from framework.monitoring import judge_graph, judge_goal, HEALTH_JUDGE_ENTRY_POINT
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools
# Register tools bound to the worker runtime's event bus
monitoring_registry = ToolRegistry()
register_worker_monitoring_tools(
monitoring_registry, worker_runtime._event_bus, storage_path
)
monitoring_tools = list(monitoring_registry.get_tools().values())
monitoring_executor = monitoring_registry.get_executor()
# Load judge as secondary graph on the worker runtime
await worker_runtime.add_graph(
graph_id="judge",
graph=judge_graph,
goal=judge_goal,
entry_points={"health_check": HEALTH_JUDGE_ENTRY_POINT},
storage_subpath="graphs/judge",
)
Design:
- ``isolation_level="isolated"``: the judge has its own memory, not
polluting the worker's shared memory namespace.
- ``conversation_mode="continuous"``: the judge's conversation carries
across timer ticks. The conversation IS the judge's memory. It tracks
trends by referring to its own prior messages ("Last check I saw 47
steps; now 52; 5 new steps, 3 RETRY").
- No shared memory keys. No external state files.
"""
from __future__ import annotations
from framework.graph import Constraint, Goal, NodeSpec, SuccessCriterion
from framework.graph.edge import AsyncEntryPointSpec, GraphSpec
# ---------------------------------------------------------------------------
# Goal
# ---------------------------------------------------------------------------
judge_goal = Goal(
id="worker-health-monitor",
name="Worker Health Monitor",
description=(
"Periodically assess the health of the worker agent by reading its "
"execution logs. Detect degradation patterns (excessive retries, "
"stalls, doom loops) and emit structured EscalationTickets when the "
"worker needs attention."
),
success_criteria=[
SuccessCriterion(
id="accurate-detection",
description="Only escalates genuine degradation, not normal retry cycles",
metric="false_positive_rate",
target="low",
weight=0.5,
),
SuccessCriterion(
id="timely-detection",
description="Detects genuine stalls within 2 timer ticks (≤4 minutes)",
metric="detection_latency_minutes",
target="<=4",
weight=0.5,
),
],
constraints=[
Constraint(
id="conservative-escalation",
description=(
"Do not escalate on a single bad verdict or a brief stall. "
"Require clear patterns (10+ consecutive bad verdicts or 4+ minute stall) "
"before creating a ticket."
),
constraint_type="hard",
category="quality",
),
Constraint(
id="complete-ticket",
description=(
"Every EscalationTicket must have all required fields filled. "
"Do not emit partial or placeholder tickets."
),
constraint_type="hard",
category="correctness",
),
],
)
# ---------------------------------------------------------------------------
# Node
# ---------------------------------------------------------------------------
judge_node = NodeSpec(
id="judge",
name="Worker Health Judge",
description=(
"Autonomous health monitor for worker agents. Reads execution logs "
"on each timer tick, compares to prior observations (via conversation "
"history), and emits a structured EscalationTicket when a genuine "
"degradation pattern is detected."
),
node_type="event_loop",
client_facing=False, # Autonomous monitor, not interactive
max_node_visits=0, # Unbounded — runs on every timer tick
input_keys=[],
output_keys=["health_verdict"],
nullable_output_keys=["health_verdict"],
success_criteria=(
"A clear health verdict is produced each check: either 'healthy' with "
"a brief observation, or a complete EscalationTicket is emitted via "
"emit_escalation_ticket and health_verdict describes the issue."
),
tools=[
"get_worker_health_summary",
"emit_escalation_ticket",
],
system_prompt="""\
You are the Worker Health Judge. You run every 2 minutes alongside a worker \
agent to monitor its execution health.
# Your Role
You observe the worker's iteration patterns over time and escalate only when \
you see genuine degradation, not normal retry cycles. Your conversation history \
IS your memory. On each check, refer to your previous observations to track trends.
# Check Procedure
On each timer tick (every 2 minutes):
## Step 1: Read health snapshot
Call get_worker_health_summary() with no arguments to auto-discover the active \
session. This returns:
- worker_agent_id: the worker's agent name — use this for ticket identity fields
- worker_graph_id: the worker's primary graph ID — use this for ticket identity fields
- session_id: the session being monitored; use this for worker_session_id in tickets
- total_steps: how many log steps have been recorded
- recent_verdicts: list of recent ACCEPT/RETRY/CONTINUE verdicts
- steps_since_last_accept: consecutive non-ACCEPT steps
- stall_minutes: wall-clock since last step (null if active)
- evidence_snippet: recent LLM output
## Step 2: Compare to prior check
Look at your conversation history. What was total_steps last time?
- If total_steps is UNCHANGED from prior check AND prior check was also unchanged:
STALL confirmed (worker has produced no new iterations in 4+ minutes).
Escalate with severity="high" or "critical" depending on stall duration.
- If total_steps increased: worker is making progress. Examine verdicts.
## Step 3: Analyze verdict pattern
- Healthy: Mix of ACCEPT and RETRY, steps_since_last_accept < 5. No action.
- Warning: steps_since_last_accept is 5-9. Note it, no escalation yet.
- Degraded: steps_since_last_accept >= 10. Examine evidence_snippet.
- If evidence shows the agent is making real progress (complex reasoning,
exploring solutions, productive tool use): may be a hard problem. Note it.
- If evidence shows a loop (same error, same tool call, no new information):
Escalate with severity="medium" or "high".
- Critical: steps_since_last_accept >= 20, OR stall_minutes >= 4.
Escalate with severity="critical".
## Step 4: Decide
### If healthy:
set_output("health_verdict", "healthy: <brief observation>")
Done.
### If escalating:
Build an EscalationTicket JSON string with ALL required fields:
{
"worker_agent_id": "<worker_agent_id from get_worker_health_summary>",
"worker_session_id": "<session_id from get_worker_health_summary>",
"worker_node_id": "<worker_graph_id from get_worker_health_summary>",
"worker_graph_id": "<worker_graph_id from get_worker_health_summary>",
"severity": "<low|medium|high|critical>",
"cause": "<what you observed — concrete, specific>",
"judge_reasoning": "<why you decided to escalate, not just dismiss>",
"suggested_action": "<what you recommend: restart, human review, etc.>",
"recent_verdicts": [<list from get_worker_health_summary>],
"total_steps_checked": <int>,
"steps_since_last_accept": <int>,
"stall_minutes": <float or null>,
"evidence_snippet": "<from get_worker_health_summary>"
}
Call: emit_escalation_ticket(ticket_json=<the JSON string above>)
Then: set_output("health_verdict", "escalated: <one-line summary>")
# Severity Guide
- low: Mild concern, worth noting. 5-9 consecutive bad verdicts.
- medium: Clear degradation pattern. 10-15 bad verdicts or brief stall (1-2 min).
- high: Serious issue. 15+ bad verdicts or stall 2-4 minutes or clear doom loop.
- critical: Worker is definitively stuck. 20+ bad verdicts or stall > 4 minutes.
# Conservative Bias
You MUST resist the urge to escalate prematurely. Worker agents naturally retry.
A node may legitimately need 5-8 retries before succeeding. Do not escalate unless:
1. The pattern is clear and sustained across your observation window, AND
2. The evidence shows no genuine progress
One missed escalation is less costly than two false alarms. The Queen will filter \
further. But do not be passive; genuine stalls and doom loops must be caught.
# Rules
- Never escalate on the FIRST check unless stall_minutes > 4
- Always call get_worker_health_summary FIRST before deciding anything
- All ticket fields are REQUIRED; do not submit partial tickets
- After any emit_escalation_ticket call, always set_output to complete the check
""",
)
# ---------------------------------------------------------------------------
# Entry Point
# ---------------------------------------------------------------------------
HEALTH_JUDGE_ENTRY_POINT = AsyncEntryPointSpec(
id="health_check",
name="Worker Health Check",
entry_node="judge",
trigger_type="timer",
trigger_config={
"interval_minutes": 2,
"run_immediately": True, # Fire immediately to establish a baseline
},
isolation_level="isolated", # Own memory namespace, not polluting worker's
)
# ---------------------------------------------------------------------------
# Graph
# ---------------------------------------------------------------------------
judge_graph = GraphSpec(
id="judge-graph",
goal_id=judge_goal.id,
version="1.0.0",
entry_node="judge",
entry_points={"health_check": "judge"},
terminal_nodes=["judge"], # Judge node can terminate after each check
pause_nodes=[],
nodes=[judge_node],
edges=[],
conversation_mode="continuous", # Conversation persists across timer ticks
async_entry_points=[HEALTH_JUDGE_ENTRY_POINT],
loop_config={
"max_iterations": 10, # One check shouldn't take many turns
"max_tool_calls_per_turn": 3, # get_summary + optionally emit_ticket
"max_history_tokens": 16000, # Compact — judge only needs recent context
},
)
+6 -6
View File
@@ -83,18 +83,18 @@ configure_logging(level="INFO", format="auto")
- Compact single-line format (easy to stream/parse)
- All trace context fields included automatically
### Human-Readable Format (Development)
### Human-Readable Format (Development / Terminal)
```
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] Starting agent execution
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] Processing input data [node_id:input-processor]
[INFO ] [trace:12345678 | exec:a1b2c3d4 | agent:sales-agent] LLM call completed [latency_ms:1250] [tokens_used:450]
[INFO ] [agent:sales-agent] Starting agent execution
[INFO ] [agent:sales-agent] Processing input data [node_id:input-processor]
[INFO ] [agent:sales-agent] LLM call completed [latency_ms:1250] [tokens_used:450]
```
**Features:**
- Color-coded log levels
- Shortened IDs for readability (first 8 chars)
- Context prefix shows trace correlation
- Terminal output omits trace_id and execution_id for readability
- For full traceability (e.g. debugging), use `ENV=production` to get JSON file logs with trace_id and execution_id
## Trace Context Fields
+30 -15
View File
@@ -4,8 +4,9 @@ Structured logging with automatic trace context propagation.
Key Features:
- Zero developer friction: Standard logger.info() calls get automatic context
- ContextVar-based propagation: Thread-safe and async-safe
- Dual output modes: JSON for production, human-readable for development
- Correlation IDs: trace_id follows entire request flow automatically
- Dual output modes: JSON for production (full trace_id/execution_id), human-readable for terminal
- Terminal omits trace_id/execution_id for readability
- Use ENV=production for file logs with full traceability
Architecture:
Runtime.start_run() → Generates trace_id, sets context once
@@ -29,6 +30,8 @@ from typing import Any
# ContextVar is thread-safe and async-safe - perfect for concurrent agent execution
trace_context: ContextVar[dict[str, Any] | None] = ContextVar("trace_context", default=None)
_STANDARD_LOG_RECORD_FIELDS = set(logging.makeLogRecord({}).__dict__)
# ANSI escape code pattern (matches \033[...m or \x1b[...m)
ANSI_ESCAPE_PATTERN = re.compile(r"\x1b\[[0-9;]*m|\033\[[0-9;]*m")
@@ -91,6 +94,14 @@ class StructuredFormatter(logging.Formatter):
if model is not None:
log_entry["model"] = model
# Preserve arbitrary structured fields passed via ``extra=...``.
for key, value in record.__dict__.items():
if key in _STANDARD_LOG_RECORD_FIELDS or key.startswith("_"):
continue
if key in log_entry:
continue
log_entry[key] = value
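# e.g. logger.info("LLM call completed", extra={"latency_ms": 1250, "tokens_used": 450})
# surfaces latency_ms and tokens_used as top-level keys in the JSON log entry.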
# Add exception info if present (strip ANSI codes from exception text too)
if record.exc_info:
exception_text = self.formatException(record.exc_info)
@@ -101,10 +112,11 @@ class StructuredFormatter(logging.Formatter):
class HumanReadableFormatter(logging.Formatter):
"""
Human-readable formatter for development.
Human-readable formatter for development (terminal output).
Provides colorized logs with trace context for local debugging.
Includes trace_id prefix for correlation - AUTOMATIC!
Provides colorized logs for local debugging. Omits trace_id and execution_id
from the terminal for readability; use ENV=production (JSON file logs) when
traceability is needed.
"""
COLORS = {
@@ -118,18 +130,11 @@ class HumanReadableFormatter(logging.Formatter):
def format(self, record: logging.LogRecord) -> str:
"""Format log record as human-readable string."""
# Get trace context - AUTOMATIC!
# Get trace context; omit trace_id and execution_id in terminal for readability
context = trace_context.get() or {}
trace_id = context.get("trace_id", "")
execution_id = context.get("execution_id", "")
agent_id = context.get("agent_id", "")
# Build context prefix
prefix_parts = []
if trace_id:
prefix_parts.append(f"trace:{trace_id[:8]}")
if execution_id:
prefix_parts.append(f"exec:{execution_id[-8:]}")
if agent_id:
prefix_parts.append(f"agent:{agent_id}")
@@ -148,8 +153,9 @@ class HumanReadableFormatter(logging.Formatter):
if record_event is not None:
event = f" [{record_event}]"
# Format message: [LEVEL] [trace context] message
return f"{color}[{level}]{reset} {context_prefix}{record.getMessage()}{event}"
timestamp = self.formatTime(record, "%Y-%m-%d %H:%M:%S")
# Format message: TIMESTAMP [LEVEL] [trace context] message
return f"{timestamp} {color}[{level}]{reset} {context_prefix}{record.getMessage()}{event}"
def configure_logging(
@@ -210,6 +216,15 @@ def configure_logging(
root_logger.addHandler(handler)
root_logger.setLevel(level.upper())
# Suppress noisy LiteLLM INFO logs (model/provider line + Provider List URL
# printed on every single completion call). Warnings and errors still show.
# Honour LITELLM_LOG env var so users can opt-in to debug output.
_litellm_level = os.getenv("LITELLM_LOG", "").upper()
if _litellm_level and hasattr(logging, _litellm_level):
logging.getLogger("LiteLLM").setLevel(getattr(logging, _litellm_level))
else:
logging.getLogger("LiteLLM").setLevel(logging.WARNING)
# When in JSON mode, configure known third-party loggers to use JSON formatter
# This ensures libraries like LiteLLM, httpcore also output clean JSON
if format == "json":
+2
View File
@@ -1,5 +1,6 @@
"""Agent Runner - load and run exported agents."""
from framework.runner.mcp_registry import MCPRegistry
from framework.runner.orchestrator import AgentOrchestrator
from framework.runner.protocol import (
AgentMessage,
@@ -17,6 +18,7 @@ __all__ = [
"AgentInfo",
"ValidationResult",
"ToolRegistry",
"MCPRegistry",
"tool",
# Multi-agent
"AgentOrchestrator",
+114 -489
View File
@@ -51,11 +51,7 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
action="store_true",
help="Show detailed execution logs (steps, LLM calls, etc.)",
)
run_parser.add_argument(
"--tui",
action="store_true",
help="Launch interactive terminal dashboard",
)
run_parser.add_argument(
"--model",
"-m",
@@ -194,158 +190,6 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
shell_parser.set_defaults(func=cmd_shell)
# tui command (interactive agent dashboard)
tui_parser = subparsers.add_parser(
"tui",
help="Launch interactive TUI dashboard",
description="Browse available agents and launch the terminal dashboard.",
)
tui_parser.add_argument(
"--model",
"-m",
type=str,
default=None,
help="LLM model to use (any LiteLLM-compatible name)",
)
tui_parser.set_defaults(func=cmd_tui)
# code command (Hive Coder — framework agent builder)
code_parser = subparsers.add_parser(
"code",
help="Launch Hive Coder to build agents",
description="Interactive agent builder. Describe what you want and Hive Coder builds it.",
)
code_parser.add_argument(
"--model",
"-m",
type=str,
default=None,
help="LLM model to use (any LiteLLM-compatible name)",
)
code_parser.set_defaults(func=cmd_code)
# sessions command group (checkpoint/resume management)
sessions_parser = subparsers.add_parser(
"sessions",
help="Manage agent sessions",
description="List, inspect, and manage agent execution sessions.",
)
sessions_subparsers = sessions_parser.add_subparsers(
dest="sessions_cmd",
help="Session management commands",
)
# sessions list
sessions_list_parser = sessions_subparsers.add_parser(
"list",
help="List agent sessions",
description="List all sessions for an agent.",
)
sessions_list_parser.add_argument(
"agent_path",
type=str,
help="Path to agent folder",
)
sessions_list_parser.add_argument(
"--status",
choices=["all", "active", "failed", "completed", "paused"],
default="all",
help="Filter by session status (default: all)",
)
sessions_list_parser.add_argument(
"--has-checkpoints",
action="store_true",
help="Show only sessions with checkpoints",
)
sessions_list_parser.set_defaults(func=cmd_sessions_list)
# sessions show
sessions_show_parser = sessions_subparsers.add_parser(
"show",
help="Show session details",
description="Display detailed information about a specific session.",
)
sessions_show_parser.add_argument(
"agent_path",
type=str,
help="Path to agent folder",
)
sessions_show_parser.add_argument(
"session_id",
type=str,
help="Session ID to inspect",
)
sessions_show_parser.add_argument(
"--json",
action="store_true",
help="Output as JSON",
)
sessions_show_parser.set_defaults(func=cmd_sessions_show)
# sessions checkpoints
sessions_checkpoints_parser = sessions_subparsers.add_parser(
"checkpoints",
help="List session checkpoints",
description="List all checkpoints for a session.",
)
sessions_checkpoints_parser.add_argument(
"agent_path",
type=str,
help="Path to agent folder",
)
sessions_checkpoints_parser.add_argument(
"session_id",
type=str,
help="Session ID",
)
sessions_checkpoints_parser.set_defaults(func=cmd_sessions_checkpoints)
# pause command
pause_parser = subparsers.add_parser(
"pause",
help="Pause running session",
description="Request graceful pause of a running agent session.",
)
pause_parser.add_argument(
"agent_path",
type=str,
help="Path to agent folder",
)
pause_parser.add_argument(
"session_id",
type=str,
help="Session ID to pause",
)
pause_parser.set_defaults(func=cmd_pause)
# resume command
resume_parser = subparsers.add_parser(
"resume",
help="Resume session from checkpoint",
description="Resume a paused or failed session from a checkpoint.",
)
resume_parser.add_argument(
"agent_path",
type=str,
help="Path to agent folder",
)
resume_parser.add_argument(
"session_id",
type=str,
help="Session ID to resume",
)
resume_parser.add_argument(
"--checkpoint",
"-c",
type=str,
help="Specific checkpoint ID to resume from (default: latest)",
)
resume_parser.add_argument(
"--tui",
action="store_true",
help="Resume in TUI dashboard mode",
)
resume_parser.set_defaults(func=cmd_resume)
# setup-credentials command
setup_creds_parser = subparsers.add_parser(
"setup-credentials",
@@ -399,6 +243,8 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
action="store_true",
help="Open dashboard in browser after server starts",
)
serve_parser.add_argument("--verbose", "-v", action="store_true", help="Enable INFO log level")
serve_parser.add_argument("--debug", action="store_true", help="Enable DEBUG log level")
serve_parser.set_defaults(func=cmd_serve)
# open command (serve + auto-open browser)
@@ -436,6 +282,8 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
default=None,
help="LLM model for preloaded agents",
)
open_parser.add_argument("--verbose", "-v", action="store_true", help="Enable INFO log level")
open_parser.add_argument("--debug", action="store_true", help="Enable DEBUG log level")
open_parser.set_defaults(func=cmd_open)
@@ -531,18 +379,18 @@ def _prompt_before_start(agent_path: str, runner, model: str | None = None):
def cmd_run(args: argparse.Namespace) -> int:
"""Run an exported agent."""
import logging
from framework.credentials.models import CredentialError
from framework.observability import configure_logging
from framework.runner import AgentRunner
# Set logging level (quiet by default for cleaner output)
if args.quiet:
logging.basicConfig(level=logging.ERROR, format="%(message)s")
configure_logging(level="ERROR")
elif getattr(args, "verbose", False):
logging.basicConfig(level=logging.INFO, format="%(message)s")
configure_logging(level="INFO")
else:
logging.basicConfig(level=logging.WARNING, format="%(message)s")
configure_logging(level="WARNING")
# Load input context
context = {}
@@ -577,128 +425,67 @@ def cmd_run(args: argparse.Namespace) -> int:
)
return 1
# Run the agent (with TUI or standard)
if getattr(args, "tui", False):
from framework.tui.app import AdenTUI
# Standard execution
# AgentRunner handles credential setup interactively when stdin is a TTY.
try:
runner = AgentRunner.load(
args.agent_path,
model=args.model,
)
except CredentialError as e:
print(f"\n{e}", file=sys.stderr)
return 1
except FileNotFoundError as e:
print(f"Error: {e}", file=sys.stderr)
return 1
async def run_with_tui():
try:
# Load runner inside the async loop to ensure strict loop affinity
# (only one load — avoids spawning duplicate MCP subprocesses)
# AgentRunner handles credential setup interactively when stdin is a TTY.
try:
runner = AgentRunner.load(
args.agent_path,
model=args.model,
)
except CredentialError as e:
print(f"\n{e}", file=sys.stderr)
return
except Exception as e:
print(f"Error loading agent: {e}")
return
# Prompt before starting (allows credential updates)
if sys.stdin.isatty() and not args.quiet:
runner = _prompt_before_start(args.agent_path, runner, args.model)
if runner is None:
return 1
# Prompt before starting (allows credential updates)
if sys.stdin.isatty():
runner = _prompt_before_start(args.agent_path, runner, args.model)
if runner is None:
return
# Force setup inside the loop
if runner._agent_runtime is None:
try:
runner._setup()
except CredentialError as e:
print(f"\n{e}", file=sys.stderr)
return
# Start runtime before TUI so it's ready for user input
if runner._agent_runtime and not runner._agent_runtime.is_running:
await runner._agent_runtime.start()
app = AdenTUI(
runner._agent_runtime,
resume_session=getattr(args, "resume_session", None),
resume_checkpoint=getattr(args, "checkpoint", None),
)
# TUI handles execution via ChatRepl — user submits input,
# ChatRepl calls runtime.trigger_and_wait(). No auto-launch.
await app.run_async()
except Exception as e:
import traceback
traceback.print_exc()
print(f"TUI error: {e}")
await runner.cleanup_async()
return None
asyncio.run(run_with_tui())
print("TUI session ended.")
return 0
else:
# Standard execution — load runner here (not shared with TUI path)
# AgentRunner handles credential setup interactively when stdin is a TTY.
try:
runner = AgentRunner.load(
args.agent_path,
model=args.model,
# Load session/checkpoint state for resume (headless mode)
session_state = None
resume_session = getattr(args, "resume_session", None)
checkpoint = getattr(args, "checkpoint", None)
if resume_session:
session_state = _load_resume_state(args.agent_path, resume_session, checkpoint)
if session_state is None:
print(
f"Error: Could not load session state for {resume_session}",
file=sys.stderr,
)
except CredentialError as e:
print(f"\n{e}", file=sys.stderr)
return 1
except FileNotFoundError as e:
print(f"Error: {e}", file=sys.stderr)
return 1
# Prompt before starting (allows credential updates)
if sys.stdin.isatty() and not args.quiet:
runner = _prompt_before_start(args.agent_path, runner, args.model)
if runner is None:
return 1
# Load session/checkpoint state for resume (headless mode)
session_state = None
resume_session = getattr(args, "resume_session", None)
checkpoint = getattr(args, "checkpoint", None)
if resume_session:
session_state = _load_resume_state(args.agent_path, resume_session, checkpoint)
if session_state is None:
print(
f"Error: Could not load session state for {resume_session}",
file=sys.stderr,
)
return 1
if not args.quiet:
resume_node = session_state.get("paused_at", "unknown")
if checkpoint:
print(f"Resuming from checkpoint: {checkpoint}")
else:
print(f"Resuming session: {resume_session}")
print(f"Resume point: {resume_node}")
print()
# Auto-inject user_id if the agent expects it but it's not provided
entry_input_keys = runner.graph.nodes[0].input_keys if runner.graph.nodes else []
if "user_id" in entry_input_keys and context.get("user_id") is None:
import os
context["user_id"] = os.environ.get("USER", "default_user")
if not args.quiet:
info = runner.info()
print(f"Agent: {info.name}")
print(f"Goal: {info.goal_name}")
print(f"Steps: {info.node_count}")
print(f"Input: {json.dumps(context)}")
print()
print("=" * 60)
print("Executing agent...")
print("=" * 60)
resume_node = session_state.get("paused_at", "unknown")
if checkpoint:
print(f"Resuming from checkpoint: {checkpoint}")
else:
print(f"Resuming session: {resume_session}")
print(f"Resume point: {resume_node}")
print()
result = asyncio.run(runner.run(context, session_state=session_state))
# Auto-inject user_id if the agent expects it but it's not provided
entry_input_keys = runner.graph.nodes[0].input_keys if runner.graph.nodes else []
if "user_id" in entry_input_keys and context.get("user_id") is None:
import os
context["user_id"] = os.environ.get("USER", "default_user")
if not args.quiet:
info = runner.info()
print(f"Agent: {info.name}")
print(f"Goal: {info.goal_name}")
print(f"Steps: {info.node_count}")
print(f"Input: {json.dumps(context)}")
print()
print("=" * 60)
print("Executing agent...")
print("=" * 60)
print()
result = asyncio.run(runner.run(context, session_state=session_state))
# Format output
output = {
@@ -959,6 +746,17 @@ def cmd_dispatch(args: argparse.Namespace) -> int:
if args.agents:
# Use specific agents
for agent_name in args.agents:
# Guard against full paths: if the name contains path separators
# (e.g. "exports/my_agent"), it will be doubled with agents_dir
agent_name_path = Path(agent_name)
if len(agent_name_path.parts) > 1:
print(
f"Error: --agents expects agent names, not paths. "
f"Use: --agents {agent_name_path.name} "
f"instead of --agents {agent_name}",
file=sys.stderr,
)
return 1
agent_path = agents_dir / agent_name
if not _is_valid_agent_dir(agent_path):
print(f"Agent not found: {agent_path}", file=sys.stderr)
@@ -1124,16 +922,12 @@ def _format_natural_language_to_json(
def cmd_shell(args: argparse.Namespace) -> int:
"""Start an interactive agent session."""
import logging
from framework.credentials.models import CredentialError
from framework.observability import configure_logging
from framework.runner import AgentRunner
# Configure logging to show runtime visibility
logging.basicConfig(
level=logging.INFO,
format="%(message)s", # Simple format for clean output
)
configure_logging(level="INFO")
agents_dir = Path(args.agents_dir)
@@ -1364,154 +1158,6 @@ def _get_framework_agents_dir() -> Path:
return Path(__file__).resolve().parent.parent / "agents"
def _launch_agent_tui(
agent_path: str | Path,
model: str | None = None,
) -> int:
"""Load an agent and launch the TUI. Shared by cmd_tui and cmd_code."""
from framework.credentials.models import CredentialError
from framework.runner import AgentRunner
from framework.tui.app import AdenTUI
async def run_with_tui():
# AgentRunner handles credential setup interactively when stdin is a TTY.
try:
runner = AgentRunner.load(
agent_path,
model=model,
)
except CredentialError as e:
print(f"\n{e}", file=sys.stderr)
return
except Exception as e:
print(f"Error loading agent: {e}")
return
if runner._agent_runtime is None:
try:
runner._setup()
except CredentialError as e:
print(f"\n{e}", file=sys.stderr)
return
if runner._agent_runtime and not runner._agent_runtime.is_running:
await runner._agent_runtime.start()
app = AdenTUI(runner._agent_runtime)
try:
await app.run_async()
except Exception as e:
import traceback
traceback.print_exc()
print(f"TUI error: {e}")
await runner.cleanup_async()
asyncio.run(run_with_tui())
print("TUI session ended.")
return 0
def cmd_tui(args: argparse.Namespace) -> int:
"""Launch the interactive TUI dashboard with in-app agent picker."""
import logging
logging.basicConfig(level=logging.WARNING, format="%(message)s")
from framework.tui.app import AdenTUI
async def run_tui():
app = AdenTUI(
model=args.model,
)
await app.run_async()
asyncio.run(run_tui())
print("TUI session ended.")
return 0
def cmd_code(args: argparse.Namespace) -> int:
"""Launch Hive Coder with multi-graph support.
Unlike ``_launch_agent_tui``, this sets up graph lifecycle tools and
assigns ``graph_id="hive_coder"`` so the coder can load, supervise,
and restart secondary agent graphs within the same session.
"""
import logging
logging.basicConfig(level=logging.WARNING, format="%(message)s")
framework_agents_dir = _get_framework_agents_dir()
hive_coder_path = framework_agents_dir / "hive_coder"
if not (hive_coder_path / "agent.py").exists():
print("Error: Hive Coder agent not found.", file=sys.stderr)
return 1
# Ensure framework agents dir is on sys.path for import
fa_str = str(framework_agents_dir)
if fa_str not in sys.path:
sys.path.insert(0, fa_str)
from framework.credentials.models import CredentialError
from framework.runner import AgentRunner
from framework.tools.session_graph_tools import register_graph_tools
from framework.tui.app import AdenTUI
async def run_with_tui():
try:
runner = AgentRunner.load(hive_coder_path, model=args.model)
except CredentialError as e:
print(f"\n{e}", file=sys.stderr)
return
except Exception as e:
print(f"Error loading agent: {e}")
return
if runner._agent_runtime is None:
try:
runner._setup()
except CredentialError as e:
print(f"\n{e}", file=sys.stderr)
return
runtime = runner._agent_runtime
# -- Multi-graph setup --
# Tag the primary graph so events carry graph_id="hive_coder"
runtime._graph_id = "hive_coder"
runtime._active_graph_id = "hive_coder"
# Register graph lifecycle tools (load_agent, unload_agent, etc.)
register_graph_tools(runner._tool_registry, runtime)
# Refresh tool schemas AND executor so streams see the new tools.
# The executor closure references the registry dict by ref, but
# refreshing both is robust against any copy-on-read behavior.
runtime._tools = list(runner._tool_registry.get_tools().values())
runtime._tool_executor = runner._tool_registry.get_executor()
if not runtime.is_running:
await runtime.start()
app = AdenTUI(runtime)
try:
await app.run_async()
except Exception as e:
import traceback
traceback.print_exc()
print(f"TUI error: {e}")
await runner.cleanup_async()
asyncio.run(run_with_tui())
print("TUI session ended.")
return 0
def _extract_python_agent_metadata(agent_path: Path) -> tuple[str, str]:
"""Extract name and description from a Python-based agent's config.py.
@@ -1864,56 +1510,6 @@ def _interactive_multi(agents_dir: Path) -> int:
return 0
def cmd_sessions_list(args: argparse.Namespace) -> int:
"""List agent sessions."""
print("⚠ Sessions list command not yet implemented")
print("This will be available once checkpoint infrastructure is complete.")
print(f"\nAgent: {args.agent_path}")
print(f"Status filter: {args.status}")
print(f"Has checkpoints: {args.has_checkpoints}")
return 1
def cmd_sessions_show(args: argparse.Namespace) -> int:
"""Show detailed session information."""
print("⚠ Session show command not yet implemented")
print("This will be available once checkpoint infrastructure is complete.")
print(f"\nAgent: {args.agent_path}")
print(f"Session: {args.session_id}")
return 1
def cmd_sessions_checkpoints(args: argparse.Namespace) -> int:
"""List checkpoints for a session."""
print("⚠ Session checkpoints command not yet implemented")
print("This will be available once checkpoint infrastructure is complete.")
print(f"\nAgent: {args.agent_path}")
print(f"Session: {args.session_id}")
return 1
def cmd_pause(args: argparse.Namespace) -> int:
"""Pause a running session."""
print("⚠ Pause command not yet implemented")
print("This will be available once executor pause integration is complete.")
print(f"\nAgent: {args.agent_path}")
print(f"Session: {args.session_id}")
return 1
def cmd_resume(args: argparse.Namespace) -> int:
"""Resume a session from checkpoint."""
print("⚠ Resume command not yet implemented")
print("This will be available once checkpoint resume integration is complete.")
print(f"\nAgent: {args.agent_path}")
print(f"Session: {args.session_id}")
if args.checkpoint:
print(f"Checkpoint: {args.checkpoint}")
if args.tui:
print("Mode: TUI")
return 1
def cmd_setup_credentials(args: argparse.Namespace) -> int:
"""Interactive credential setup for an agent."""
from framework.credentials.setup import CredentialSetupSession
@@ -1965,6 +1561,22 @@ def _open_browser(url: str) -> None:
pass # Best-effort — don't crash if browser can't open
def _format_subprocess_output(output: str | bytes | None, limit: int = 2000) -> str:
"""Return subprocess output as trimmed text safe for console logging."""
if not output:
return ""
if isinstance(output, bytes):
text = output.decode(errors="replace")
else:
text = output
text = text.strip()
if len(text) <= limit:
return text
return text[-limit:]
def _build_frontend() -> bool:
"""Build the frontend if source is newer than dist. Returns True if dist exists."""
import subprocess
@@ -2000,18 +1612,25 @@ def _build_frontend() -> bool:
# Need to build
print("Building frontend...")
npm_cmd = "npm.cmd" if sys.platform == "win32" else "npm"
try:
# Incremental tsc caches can drift across branch changes and block builds.
for cache_file in frontend_dir.glob("tsconfig*.tsbuildinfo"):
cache_file.unlink(missing_ok=True)
# Ensure deps are installed
subprocess.run(
["npm", "install", "--no-fund", "--no-audit"],
[npm_cmd, "install", "--no-fund", "--no-audit"],
encoding="utf-8",
errors="replace",
cwd=frontend_dir,
check=True,
capture_output=True,
)
subprocess.run(
["npm", "run", "build"],
[npm_cmd, "run", "build"],
encoding="utf-8",
errors="replace",
cwd=frontend_dir,
check=True,
capture_output=True,
@@ -2022,25 +1641,31 @@ def _build_frontend() -> bool:
print("Node.js not found — skipping frontend build.")
return dist_dir.is_dir()
except subprocess.CalledProcessError as exc:
stderr = exc.stderr.decode(errors="replace") if exc.stderr else ""
print(f"Frontend build failed: {stderr[:500]}")
stdout = _format_subprocess_output(exc.stdout)
stderr = _format_subprocess_output(exc.stderr)
cmd = " ".join(exc.cmd) if isinstance(exc.cmd, (list, tuple)) else str(exc.cmd)
details = "\n".join(part for part in [stdout, stderr] if part).strip()
if details:
print(f"Frontend build failed while running {cmd}:\n{details}")
else:
print(f"Frontend build failed while running {cmd} (exit {exc.returncode}).")
return dist_dir.is_dir()
def cmd_serve(args: argparse.Namespace) -> int:
"""Start the HTTP API server."""
import logging
from aiohttp import web
_build_frontend()
from framework.observability import configure_logging
from framework.server.app import create_app
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
)
if getattr(args, "debug", False):
configure_logging(level="DEBUG")
else:
configure_logging(level="INFO")
model = getattr(args, "model", None)
app = create_app(model=model)
+199 -25
View File
@@ -1,7 +1,7 @@
"""MCP Client for connecting to Model Context Protocol servers.
This module provides a client for connecting to MCP servers and invoking their tools.
Supports both STDIO and HTTP transports using the official MCP Python SDK.
Supports STDIO, HTTP, UNIX socket, and SSE transports using the official MCP Python SDK.
"""
import asyncio
@@ -14,6 +14,8 @@ from typing import Any, Literal
import httpx
from framework.runner.mcp_errors import MCPToolNotFoundError
logger = logging.getLogger(__name__)
@@ -22,7 +24,7 @@ class MCPServerConfig:
"""Configuration for an MCP server connection."""
name: str
transport: Literal["stdio", "http"]
transport: Literal["stdio", "http", "unix", "sse"]
# For STDIO transport
command: str | None = None
@@ -33,6 +35,7 @@ class MCPServerConfig:
# For HTTP transport
url: str | None = None
headers: dict[str, str] = field(default_factory=dict)
socket_path: str | None = None
# Optional metadata
description: str = ""
@@ -52,7 +55,7 @@ class MCPClient:
"""
Client for communicating with MCP servers.
Supports both STDIO and HTTP transports using the official MCP SDK.
Supports STDIO, HTTP, UNIX socket, and SSE transports using the official MCP SDK.
Manages the connection lifecycle and provides methods to list and invoke tools.
"""
@@ -68,6 +71,8 @@ class MCPClient:
self._read_stream = None
self._write_stream = None
self._stdio_context = None # Context manager for stdio_client
self._sse_context = None # Context manager for sse_client
self._errlog_handle = None # Track errlog file handle for cleanup
self._http_client: httpx.Client | None = None
self._tools: dict[str, MCPTool] = {}
self._connected = False
@@ -140,6 +145,10 @@ class MCPClient:
self._connect_stdio()
elif self.config.transport == "http":
self._connect_http()
elif self.config.transport == "unix":
self._connect_unix()
elif self.config.transport == "sse":
self._connect_sse()
else:
raise ValueError(f"Unsupported transport: {self.config.transport}")
@@ -200,7 +209,8 @@ class MCPClient:
if os.name == "nt":
errlog = sys.stderr
else:
errlog = open(os.devnull, "w") # noqa: SIM115
self._errlog_handle = open(os.devnull, "w")
errlog = self._errlog_handle
self._stdio_context = stdio_client(server_params, errlog=errlog)
(
self._read_stream,
@@ -264,10 +274,94 @@ class MCPClient:
logger.warning(f"Health check failed for MCP server '{self.config.name}': {e}")
# Continue anyway, server might not have health endpoint
def _connect_unix(self) -> None:
"""Connect to MCP server via UNIX domain socket transport."""
if not self.config.url:
raise ValueError("url is required for UNIX transport")
if not self.config.socket_path:
raise ValueError("socket_path is required for UNIX transport")
self._http_client = httpx.Client(
base_url=self.config.url,
headers=self.config.headers,
timeout=30.0,
transport=httpx.HTTPTransport(uds=self.config.socket_path),
)
try:
response = self._http_client.get("/health")
response.raise_for_status()
logger.info(
"Connected to MCP server '%s' via UNIX socket at %s",
self.config.name,
self.config.socket_path,
)
except Exception as e:
logger.warning(f"Health check failed for MCP server '{self.config.name}': {e}")
# Continue anyway, server might not have health endpoint
def _connect_sse(self) -> None:
"""Connect to MCP server via SSE transport using MCP SDK with persistent session."""
if not self.config.url:
raise ValueError("url is required for SSE transport")
try:
loop_started = threading.Event()
connection_ready = threading.Event()
connection_error = []
def run_event_loop():
"""Run event loop in background thread."""
self._loop = asyncio.new_event_loop()
asyncio.set_event_loop(self._loop)
loop_started.set()
async def init_connection():
try:
from mcp import ClientSession
from mcp.client.sse import sse_client
self._sse_context = sse_client(
self.config.url,
headers=self.config.headers,
timeout=30.0,
)
(
self._read_stream,
self._write_stream,
) = await self._sse_context.__aenter__()
self._session = ClientSession(self._read_stream, self._write_stream)
await self._session.__aenter__()
await self._session.initialize()
connection_ready.set()
except Exception as e:
connection_error.append(e)
connection_ready.set()
self._loop.create_task(init_connection())
self._loop.run_forever()
self._loop_thread = threading.Thread(target=run_event_loop, daemon=True)
self._loop_thread.start()
loop_started.wait(timeout=5)
if not loop_started.is_set():
raise RuntimeError("Event loop failed to start")
connection_ready.wait(timeout=10)
if connection_error:
raise connection_error[0]
logger.info(f"Connected to MCP server '{self.config.name}' via SSE")
except Exception as e:
raise RuntimeError(f"Failed to connect to MCP server: {e}") from e
def _discover_tools(self) -> None:
"""Discover available tools from the MCP server."""
try:
if self.config.transport == "stdio":
if self.config.transport in {"stdio", "sse"}:
tools_list = self._run_async(self._list_tools_stdio_async())
else:
tools_list = self._list_tools_http()
@@ -364,14 +458,45 @@ class MCPClient:
self.connect()
if tool_name not in self._tools:
raise ValueError(f"Unknown tool: {tool_name}")
raise MCPToolNotFoundError(
server=self.config.name,
tool_name=tool_name,
)
if self.config.transport == "stdio":
with self._stdio_call_lock:
return self._run_async(self._call_tool_stdio_async(tool_name, arguments))
elif self.config.transport == "sse":
return self._call_tool_with_retry(
lambda: self._run_async(self._call_tool_stdio_async(tool_name, arguments))
)
elif self.config.transport == "unix":
return self._call_tool_with_retry(lambda: self._call_tool_http(tool_name, arguments))
else:
return self._call_tool_http(tool_name, arguments)
def _call_tool_with_retry(self, call: Any) -> Any:
"""Retry transient MCP transport failures once after reconnecting."""
if self.config.transport == "stdio":
return call()
if self.config.transport not in {"unix", "sse"}:
return call()
try:
return call()
except (httpx.ConnectError, httpx.ReadTimeout) as original_error:
logger.warning(
"Retrying MCP tool call after transport error from '%s': %s",
self.config.name,
original_error,
)
self._reconnect()
try:
return call()
except (httpx.ConnectError, httpx.ReadTimeout) as retry_error:
raise original_error from retry_error
async def _call_tool_stdio_async(self, tool_name: str, arguments: dict[str, Any]) -> Any:
"""Call tool via STDIO protocol using persistent session."""
if not self._session:
@@ -387,19 +512,35 @@ class MCPClient:
content_item = result.content[0]
if hasattr(content_item, "text"):
error_text = content_item.text
raise RuntimeError(f"MCP tool '{tool_name}' failed: {error_text}")
raise RuntimeError(
f"[Server: {self.config.name}] [Transport: {self.config.transport}] "
f"Tool '{tool_name}' failed: {error_text}"
)
# Extract content
# Extract content — preserve image blocks alongside text
if result.content:
# MCP returns content as a list of content items
if len(result.content) > 0:
content_item = result.content[0]
# Check if it's a text content item
if hasattr(content_item, "text"):
return content_item.text
elif hasattr(content_item, "data"):
return content_item.data
return result.content
text_parts: list[str] = []
image_parts: list[dict[str, Any]] = []
for item in result.content:
if hasattr(item, "text"):
text_parts.append(item.text)
elif hasattr(item, "data") and hasattr(item, "mimeType"):
# MCP ImageContent — preserve as structured image block
image_parts.append(
{
"type": "image_url",
"image_url": {
"url": f"data:{item.mimeType};base64,{item.data}",
},
}
)
elif hasattr(item, "data"):
text_parts.append(str(item.data))
text = "\n".join(text_parts) if text_parts else ""
if image_parts:
return {"_text": text, "_images": image_parts}
return text if text else None
return None
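To make the new return shape concrete, a tool result that mixes text and an image block would come back roughly like this (values are illustrative only):
# Illustrative result shape from _call_tool_stdio_async when the MCP server
# returns both text and ImageContent blocks (all values are made up).
result = {
    "_text": "Rendered chart for Q3 revenue",
    "_images": [
        {
            "type": "image_url",
            "image_url": {"url": "data:image/png;base64,iVBORw0KGgo..."},
        }
    ],
}
# Text-only results come back as a plain string; an empty content list yields None.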
@@ -425,24 +566,36 @@ class MCPClient:
data = response.json()
if "error" in data:
raise RuntimeError(f"Tool execution error: {data['error']}")
raise RuntimeError(
f"[Server: {self.config.name}] [Transport: {self.config.transport}] "
f"Tool '{tool_name}' failed: {data['error']}"
)
return data.get("result", {}).get("content", [])
except Exception as e:
raise RuntimeError(f"Failed to call tool via HTTP: {e}") from e
raise RuntimeError(
f"[Server: {self.config.name}] [Transport: {self.config.transport}] "
f"Failed to call tool via HTTP: Tool '{tool_name}' failed: {e}"
) from e
def _reconnect(self) -> None:
"""Reconnect to the configured MCP server."""
logger.info(f"Reconnecting to MCP server '{self.config.name}'...")
self.disconnect()
self.connect()
_CLEANUP_TIMEOUT = 10
_THREAD_JOIN_TIMEOUT = 12
async def _cleanup_stdio_async(self) -> None:
"""Async cleanup for STDIO session and context managers.
"""Async cleanup for persistent MCP session and context managers.
Cleanup order is critical:
- The session must be closed BEFORE the stdio_context because the session
depends on the streams provided by stdio_context.
- This mirrors the initialization order in _connect_stdio(), where
stdio_context is entered first (providing streams), then the session is
created with those streams and entered.
- The session must be closed BEFORE the transport context manager because the
session depends on the streams provided by that context.
- This mirrors the initialization order in _connect_stdio() / _connect_sse(),
where the transport context is entered first (providing streams), then the
session is created with those streams and entered.
- Do not change this ordering without carefully considering these dependencies.
"""
# First: close session (depends on stdio_context streams)
@@ -475,6 +628,25 @@ class MCPClient:
finally:
self._stdio_context = None
try:
if self._sse_context:
await self._sse_context.__aexit__(None, None, None)
except asyncio.CancelledError:
logger.debug("SSE context cleanup was cancelled; proceeding with best-effort shutdown")
except Exception as e:
logger.warning(f"Error closing SSE context: {e}")
finally:
self._sse_context = None
# Third: close errlog file handle if we opened one
if self._errlog_handle is not None:
try:
self._errlog_handle.close()
except Exception as e:
logger.debug(f"Error closing errlog handle: {e}")
finally:
self._errlog_handle = None
def disconnect(self) -> None:
"""Disconnect from the MCP server."""
# Clean up persistent STDIO connection
@@ -541,10 +713,12 @@ class MCPClient:
# Setting None to None is safe and ensures clean state.
self._session = None
self._stdio_context = None
self._sse_context = None
self._read_stream = None
self._write_stream = None
self._loop = None
self._loop_thread = None
self._errlog_handle = None
# Clean up HTTP client
if self._http_client:
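Taken together, the expanded MCPServerConfig now covers four transports. A sketch of how the two new ones might be configured, using only the fields shown in this diff (names, paths, and URLs are placeholders):
# Placeholder configurations for the new transports.
unix_cfg = MCPServerConfig(
    name="local-tools",
    transport="unix",
    url="http://localhost",              # base URL used for request routing
    socket_path="/tmp/mcp-tools.sock",   # UNIX domain socket the server listens on
)
sse_cfg = MCPServerConfig(
    name="remote-tools",
    transport="sse",
    url="https://example.com/mcp/sse",
    headers={"Authorization": "Bearer <token>"},
)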
@@ -0,0 +1,409 @@
"""Shared MCP client connection management."""
import logging
import threading
import httpx
from framework.runner.mcp_client import MCPClient, MCPServerConfig
logger = logging.getLogger(__name__)
_TRANSITION_TIMEOUT = 30.0
class MCPConnectionManager:
"""Process-wide MCP client pool keyed by server name."""
_instance = None
_lock = threading.Lock()
def __init__(self) -> None:
self._pool: dict[str, MCPClient] = {}
self._refcounts: dict[str, int] = {}
self._configs: dict[str, MCPServerConfig] = {}
self._pool_lock = threading.Lock()
self._transitions: dict[str, threading.Event] = {}
@classmethod
def get_instance(cls) -> "MCPConnectionManager":
"""Return the process-level singleton instance."""
if cls._instance is None:
with cls._lock:
if cls._instance is None:
cls._instance = cls()
return cls._instance
@staticmethod
def _is_connected(client: MCPClient | None) -> bool:
return bool(client and getattr(client, "_connected", False))
def has_connection(self, server_name: str) -> bool:
"""Return True when a live pooled connection exists for ``server_name``."""
with self._pool_lock:
return self._is_connected(self._pool.get(server_name))
def acquire(self, config: MCPServerConfig) -> MCPClient:
"""Get or create a shared connection and increment its refcount."""
server_name = config.name
while True:
should_connect = False
transition_event: threading.Event | None = None
with self._pool_lock:
client = self._pool.get(server_name)
if self._is_connected(client) and server_name not in self._transitions:
new_refcount = self._refcounts.get(server_name, 0) + 1
self._refcounts[server_name] = new_refcount
self._configs[server_name] = config
logger.debug(
"Reusing pooled connection for MCP server '%s' (refcount=%d)",
server_name,
new_refcount,
)
return client
transition_event = self._transitions.get(server_name)
if transition_event is None:
transition_event = threading.Event()
self._transitions[server_name] = transition_event
self._configs[server_name] = config
should_connect = True
if not should_connect:
if not transition_event.wait(timeout=_TRANSITION_TIMEOUT):
logger.warning(
"Timed out waiting for transition on MCP server '%s', "
"forcing cleanup and retrying",
server_name,
)
with self._pool_lock:
stuck = self._transitions.get(server_name)
if stuck is transition_event:
self._transitions.pop(server_name, None)
transition_event.set()
continue
logger.info("Connecting to MCP server '%s'", server_name)
client = MCPClient(config)
try:
client.connect()
except Exception:
logger.warning(
"Failed to connect to MCP server '%s'",
server_name,
exc_info=True,
)
with self._pool_lock:
current = self._transitions.get(server_name)
if current is transition_event:
self._transitions.pop(server_name, None)
if (
server_name not in self._pool
and self._refcounts.get(server_name, 0) <= 0
):
self._configs.pop(server_name, None)
transition_event.set()
raise
with self._pool_lock:
current = self._transitions.get(server_name)
if current is transition_event:
self._pool[server_name] = client
self._refcounts[server_name] = self._refcounts.get(server_name, 0) + 1
self._configs[server_name] = config
self._transitions.pop(server_name, None)
transition_event.set()
logger.info(
"Connected to MCP server '%s' (refcount=1)",
server_name,
)
return client
# Lost the transition race, clean up and retry
try:
client.disconnect()
except Exception:
logger.debug(
"Error disconnecting stale client for '%s'",
server_name,
exc_info=True,
)
def release(self, server_name: str) -> None:
"""Decrement refcount and disconnect when the last user releases."""
while True:
disconnect_client: MCPClient | None = None
transition_event: threading.Event | None = None
should_disconnect = False
with self._pool_lock:
transition_event = self._transitions.get(server_name)
if transition_event is None:
refcount = self._refcounts.get(server_name, 0)
if refcount <= 0:
return
if refcount > 1:
self._refcounts[server_name] = refcount - 1
logger.debug(
"Released MCP server '%s' (refcount=%d)",
server_name,
refcount - 1,
)
return
disconnect_client = self._pool.pop(server_name, None)
self._refcounts.pop(server_name, None)
self._configs.pop(server_name, None)
transition_event = threading.Event()
self._transitions[server_name] = transition_event
should_disconnect = True
if not should_disconnect:
if not transition_event.wait(timeout=_TRANSITION_TIMEOUT):
logger.warning(
"Timed out waiting for transition on '%s' during release, forcing cleanup",
server_name,
)
with self._pool_lock:
stuck = self._transitions.get(server_name)
if stuck is transition_event:
self._transitions.pop(server_name, None)
transition_event.set()
continue
try:
if disconnect_client is not None:
disconnect_client.disconnect()
logger.info(
"Disconnected MCP server '%s' (last reference released)",
server_name,
)
except Exception:
logger.warning(
"Error disconnecting MCP server '%s' during release",
server_name,
exc_info=True,
)
finally:
with self._pool_lock:
current = self._transitions.get(server_name)
if current is transition_event:
self._transitions.pop(server_name, None)
transition_event.set()
return
def health_check(self, server_name: str) -> bool:
"""Return True when the pooled connection appears healthy."""
while True:
with self._pool_lock:
transition_event = self._transitions.get(server_name)
if transition_event is None:
client = self._pool.get(server_name)
config = self._configs.get(server_name)
break
if not transition_event.wait(timeout=_TRANSITION_TIMEOUT):
logger.warning(
"Timed out waiting for transition on '%s' during health check",
server_name,
)
return False
if client is None or config is None:
return False
try:
match config.transport:
case "stdio":
client.list_tools()
return True
case "http":
if not config.url:
return False
with httpx.Client(
base_url=config.url,
headers=config.headers,
timeout=5.0,
) as http_client:
response = http_client.get("/health")
response.raise_for_status()
return True
case "sse":
client.list_tools()
return True
case "unix":
if not config.socket_path:
return False
with httpx.Client(
base_url=config.url or "http://localhost",
headers=config.headers,
timeout=5.0,
transport=httpx.HTTPTransport(uds=config.socket_path),
) as http_client:
response = http_client.get("/health")
response.raise_for_status()
return True
case _:
logger.warning(
"Unknown transport '%s' for health check on '%s'",
config.transport,
server_name,
)
return False
except Exception:
logger.debug(
"Health check failed for MCP server '%s'",
server_name,
exc_info=True,
)
return False
def reconnect(self, server_name: str) -> MCPClient:
"""Force a disconnect and replace the pooled client with a fresh one."""
while True:
transition_event: threading.Event | None = None
old_client: MCPClient | None = None
with self._pool_lock:
transition_event = self._transitions.get(server_name)
if transition_event is None:
config = self._configs.get(server_name)
if config is None:
raise KeyError(f"Unknown MCP server: {server_name}")
old_client = self._pool.get(server_name)
transition_event = threading.Event()
self._transitions[server_name] = transition_event
break
if not transition_event.wait(timeout=_TRANSITION_TIMEOUT):
logger.warning(
"Timed out waiting for transition on '%s' during reconnect, forcing cleanup",
server_name,
)
with self._pool_lock:
stuck = self._transitions.get(server_name)
if stuck is transition_event:
self._transitions.pop(server_name, None)
transition_event.set()
# Disconnect old client safely
if old_client is not None:
try:
old_client.disconnect()
logger.info("Disconnected old client for '%s'", server_name)
except Exception:
logger.warning(
"Error disconnecting old client for '%s' during reconnect",
server_name,
exc_info=True,
)
logger.info("Reconnecting MCP server '%s'", server_name)
new_client = MCPClient(config)
try:
new_client.connect()
except Exception:
with self._pool_lock:
current = self._transitions.get(server_name)
if current is transition_event:
self._pool.pop(server_name, None)
self._transitions.pop(server_name, None)
transition_event.set()
raise
with self._pool_lock:
current = self._transitions.get(server_name)
if current is transition_event:
current_refcount = self._refcounts.get(server_name, 0)
if current_refcount <= 0:
# All holders released during reconnect. Discard the
# new client instead of creating a phantom reference.
# Caller should acquire() fresh if needed.
self._transitions.pop(server_name, None)
transition_event.set()
logger.info(
"Reconnected MCP server '%s' but refcount dropped to 0, "
"discarding new client",
server_name,
)
try:
new_client.disconnect()
except Exception:
logger.debug(
"Error disconnecting discarded client for '%s'",
server_name,
exc_info=True,
)
raise KeyError(
f"MCP server '{server_name}' was fully released during reconnect"
)
self._pool[server_name] = new_client
self._configs[server_name] = config
self._refcounts[server_name] = current_refcount
self._transitions.pop(server_name, None)
transition_event.set()
logger.info(
"Reconnected MCP server '%s' (refcount=%d)",
server_name,
current_refcount,
)
return new_client
try:
new_client.disconnect()
except Exception:
logger.debug(
"Error disconnecting stale client for '%s' after reconnect race",
server_name,
exc_info=True,
)
return self.acquire(config)
def cleanup_all(self) -> None:
"""Disconnect all pooled clients and clear manager state."""
while True:
with self._pool_lock:
if self._transitions:
pending = list(self._transitions.values())
else:
cleanup_events = {name: threading.Event() for name in self._pool}
clients = list(self._pool.items())
self._transitions.update(cleanup_events)
self._pool.clear()
self._refcounts.clear()
self._configs.clear()
break
all_resolved = all(event.wait(timeout=_TRANSITION_TIMEOUT) for event in pending)
if not all_resolved:
logger.warning(
"Timed out waiting for pending transitions during cleanup, "
"forcing cleanup of stuck transitions",
)
with self._pool_lock:
for sn, evt in list(self._transitions.items()):
if not evt.is_set():
self._transitions.pop(sn, None)
evt.set()
logger.info("Cleaning up %d pooled MCP connections", len(clients))
for server_name, client in clients:
try:
client.disconnect()
logger.debug("Disconnected MCP server '%s' during cleanup", server_name)
except Exception:
logger.warning(
"Error disconnecting MCP server '%s' during cleanup",
server_name,
exc_info=True,
)
with self._pool_lock:
for server_name, event in cleanup_events.items():
current = self._transitions.get(server_name)
if current is event:
self._transitions.pop(server_name, None)
event.set()
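As a usage sketch, callers are expected to pair every acquire() with a release() for the same server name; the config values below are placeholders:
# Minimal acquire/release pattern against the shared pool.
manager = MCPConnectionManager.get_instance()
config = MCPServerConfig(name="jira", transport="http", url="https://example.com/mcp")
client = manager.acquire(config)   # connects on first use, bumps refcount otherwise
try:
    tools = client.list_tools()
finally:
    manager.release("jira")        # last release disconnects the pooled client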
+99
View File

@@ -0,0 +1,99 @@
"""Structured error codes and exceptions for MCP server operations."""
from enum import Enum
class MCPErrorCode(Enum):
"""Standardized error codes for MCP operations."""
MCP_INSTALL_FAILED = "MCP_INSTALL_FAILED"
MCP_AUTH_MISSING = "MCP_AUTH_MISSING"
MCP_CONNECT_TIMEOUT = "MCP_CONNECT_TIMEOUT"
MCP_TOOL_NOT_FOUND = "MCP_TOOL_NOT_FOUND"
MCP_PROTOCOL_MISMATCH = "MCP_PROTOCOL_MISMATCH"
MCP_VERSION_CONFLICT = "MCP_VERSION_CONFLICT"
MCP_HEALTH_FAILED = "MCP_HEALTH_FAILED"
class MCPError(ValueError):
"""Base exception for all structured MCP errors."""
def __init__(self, code: MCPErrorCode, what: str, why: str, fix: str):
self.code = code
self.what = what
self.why = why
self.fix = fix
self.message = (
f"[{self.code.value}]\nWhat failed: {self.what}\nWhy: {self.why}\nFix: {self.fix}"
)
super().__init__(self.message)
class MCPToolNotFoundError(MCPError):
def __init__(self, server: str, tool_name: str):
super().__init__(
code=MCPErrorCode.MCP_TOOL_NOT_FOUND,
what=f"Tool '{tool_name}' not found on server '{server}'",
why=f"The server '{server}' does not expose a tool named '{tool_name}'.",
fix=f"Run 'hive mcp inspect {server}' to view available tools.",
)
class MCPConnectTimeoutError(MCPError):
def __init__(self, server: str, transport: str, timeout_sec: int):
super().__init__(
code=MCPErrorCode.MCP_CONNECT_TIMEOUT,
what=f"Connection timed out while starting server '{server}'",
why=f"The {transport} transport did not respond within {timeout_sec} seconds.",
fix=f"Check if the server is running. Run 'hive mcp doctor {server}' for diagnostics.",
)
class MCPAuthError(MCPError):
def __init__(self, server: str, env_var: str):
super().__init__(
code=MCPErrorCode.MCP_AUTH_MISSING,
what=f"Authentication failed for server '{server}'",
why=f"The required environment variable '{env_var}' is missing or empty.",
fix=f"Run: hive mcp config {server} --set {env_var}=<your-token>",
)
class MCPInstallError(MCPError):
def __init__(self, server: str, why: str, fix: str):
super().__init__(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Could not install MCP server '{server}'",
why=why,
fix=fix,
)
class MCPProtocolMismatchError(MCPError):
def __init__(self, server: str, detail: str):
super().__init__(
code=MCPErrorCode.MCP_PROTOCOL_MISMATCH,
what=f"Protocol mismatch with server '{server}'",
why=detail,
fix=f"Check the MCP SDK version required by '{server}' matches your installation.",
)
class MCPVersionConflictError(MCPError):
def __init__(self, server: str, detail: str):
super().__init__(
code=MCPErrorCode.MCP_VERSION_CONFLICT,
what=f"Version conflict with server '{server}'",
why=detail,
fix="Update or pin the MCP server package to a compatible version.",
)
class MCPHealthCheckError(MCPError):
def __init__(self, server: str, detail: str):
super().__init__(
code=MCPErrorCode.MCP_HEALTH_FAILED,
what=f"Health check failed for server '{server}'",
why=detail,
fix=f"Run 'hive mcp doctor {server}' to diagnose the issue.",
)
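A brief sketch of how these structured errors surface to callers; the fix hints refer to the hive mcp commands defined later in this change:
# Raising and inspecting a structured MCP error (illustrative only).
try:
    raise MCPToolNotFoundError(server="jira", tool_name="create_ticket")
except MCPError as err:
    print(err.code.value)  # "MCP_TOOL_NOT_FOUND"
    print(err.what)        # Tool 'create_ticket' not found on server 'jira'
    print(err.fix)         # Run 'hive mcp inspect jira' to view available tools.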
+904
View File

@@ -0,0 +1,904 @@
"""MCP Server Registry: local state management for installed MCP servers."""
from __future__ import annotations
import json
import logging
import os
import tempfile
import tomllib
from datetime import UTC, datetime
from importlib.metadata import PackageNotFoundError, version
from pathlib import Path
from typing import Any, Literal
import httpx
from framework.runner.mcp_client import MCPClient, MCPServerConfig
from framework.runner.mcp_connection_manager import MCPConnectionManager
from framework.runner.mcp_errors import (
MCPError,
MCPErrorCode,
MCPInstallError,
)
logger = logging.getLogger(__name__)
DEFAULT_INDEX_URL = (
"https://raw.githubusercontent.com/aden-hive/hive-mcp-registry/main/registry_index.json"
)
DEFAULT_REFRESH_INTERVAL_HOURS = 24
_LAST_FETCHED_FILENAME = "last_fetched"
_LEGACY_LAST_FETCHED_FILENAME = "last_fetched.json"
_DEFAULT_CONFIG = {
"index_url": DEFAULT_INDEX_URL,
"refresh_interval_hours": DEFAULT_REFRESH_INTERVAL_HOURS,
}
class MCPRegistry:
"""Manages local MCP server state in ~/.hive/mcp_registry/."""
def __init__(self, base_path: Path | None = None):
self._base = base_path or Path.home() / ".hive" / "mcp_registry"
self._installed_path = self._base / "installed.json"
self._config_path = self._base / "config.json"
self._cache_dir = self._base / "cache"
# ── Initialization ──────────────────────────────────────────────
def initialize(self) -> None:
"""Create directory structure and default files if missing."""
self._base.mkdir(parents=True, exist_ok=True)
self._cache_dir.mkdir(parents=True, exist_ok=True)
if not self._config_path.exists():
self._write_json(self._config_path, _DEFAULT_CONFIG)
if not self._installed_path.exists():
self._write_json(self._installed_path, {"servers": {}})
# ── Internal I/O ────────────────────────────────────────────────
def _read_installed(self) -> dict:
"""Read installed.json, initializing if needed."""
if not self._installed_path.exists():
self.initialize()
return json.loads(self._installed_path.read_text(encoding="utf-8"))
def _write_installed(self, data: dict) -> None:
"""Write installed.json."""
self._write_json(self._installed_path, data)
def _read_config(self) -> dict:
"""Read config.json."""
if not self._config_path.exists():
self.initialize()
return json.loads(self._config_path.read_text(encoding="utf-8"))
def _read_cached_index(self) -> dict:
"""Read cached registry_index.json."""
index_path = self._cache_dir / "registry_index.json"
if not index_path.exists():
return {"servers": {}}
return json.loads(index_path.read_text(encoding="utf-8"))
def _get_effective_manifest(
self,
name: str,
entry: dict,
cached_index: dict | None = None,
) -> dict:
"""Return the manifest currently in effect for an installed entry."""
manifest = entry.get("manifest", {})
if entry.get("source") != "registry":
return manifest
index = cached_index or self._read_cached_index()
cached_manifest = index.get("servers", {}).get(name)
if cached_manifest is not None:
return cached_manifest
# Fall back to persisted manifest data when the cache is unavailable.
if isinstance(manifest, dict) and manifest:
return manifest
return {}
@staticmethod
def _write_json(path: Path, data: dict) -> None:
"""Write JSON to file atomically (write to temp, fsync, rename)."""
content = json.dumps(data, indent=2) + "\n"
fd, tmp_path = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
try:
with os.fdopen(fd, "w", encoding="utf-8") as f:
f.write(content)
f.flush()
os.fsync(f.fileno())
os.replace(tmp_path, path)
except BaseException:
try:
os.unlink(tmp_path)
except OSError:
pass
raise
# ── add_local ───────────────────────────────────────────────────
def add_local(
self,
name: str,
transport: str | None = None,
manifest: dict | None = None,
url: str | None = None,
command: str | None = None,
args: list[str] | None = None,
env: dict[str, str] | None = None,
headers: dict[str, str] | None = None,
cwd: str | None = None,
socket_path: str | None = None,
description: str = "",
) -> dict:
"""Register a local/running MCP server.
Can be called with an inline manifest dict, or with individual
transport/url/command params that build a manifest automatically.
"""
data = self._read_installed()
if name in data["servers"]:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Server '{name}' already exists",
why="A server with this name is already registered locally.",
fix=f"Run: hive mcp remove {name} — then add it again.",
)
if manifest is not None:
# Inline manifest provided directly
manifest = {**manifest, "name": name}
transport_config = manifest.get("transport", {})
transport = transport or transport_config.get("default", "stdio")
if "transport" not in manifest:
manifest["transport"] = {"supported": [transport], "default": transport}
else:
# Build manifest from individual params
if not transport:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Cannot register server '{name}'",
why="transport is required when manifest is not provided.",
fix="Pass --transport stdio|http|unix|sse when using hive mcp add.",
)
manifest = {
"name": name,
"description": description,
"transport": {"supported": [transport], "default": transport},
}
match transport:
case "http":
if not url:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Cannot register server '{name}' with http transport",
why="url is required for http transport.",
fix="Pass --url https://your-server to hive mcp add.",
)
manifest["http"] = {"url": url, "headers": headers or {}}
case "stdio":
if not command:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Cannot register server '{name}' with stdio transport",
why="command is required for stdio transport.",
fix="Pass --command <executable> to hive mcp add.",
)
manifest["stdio"] = {
"command": command,
"args": args or [],
"env": env or {},
"cwd": cwd,
}
case "unix":
if not socket_path:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Cannot register server '{name}' with unix transport",
why="socket_path is required for unix transport.",
fix="Pass --socket-path /path/to/socket to hive mcp add.",
)
manifest["unix"] = {"socket_path": socket_path}
manifest["http"] = {"url": url or "http://localhost"}
case "sse":
if not url:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Cannot register server '{name}' with sse transport",
why="url is required for sse transport.",
fix="Pass --url https://your-server to hive mcp add.",
)
manifest["sse"] = {"url": url}
case _:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Cannot register server '{name}'",
why=f"Unsupported transport: '{transport}'.",
fix="Use one of: stdio, http, unix, sse.",
)
entry = self._make_entry(
source="local",
manifest=manifest,
transport=transport,
installed_by="hive mcp add",
)
data["servers"][name] = entry
self._write_installed(data)
logger.info("Registered local MCP server '%s' (%s)", name, transport)
return entry
# ── install ─────────────────────────────────────────────────────
def install(self, name: str, transport: str | None = None, version: str | None = None) -> dict:
"""Install a server from the cached remote registry index."""
data = self._read_installed()
if name in data["servers"]:
raise MCPInstallError(
server=name,
why=f"Server '{name}' already exists in the registry.",
fix=f"Run: hive mcp remove {name} — then install again.",
)
index = self._read_cached_index()
manifest = index.get("servers", {}).get(name)
if manifest is None:
raise MCPInstallError(
server=name,
why=f"Server '{name}' not found in registry index.",
fix="Run: hive mcp update — then try again.",
)
# Validate version if specified
if version is not None:
index_version = manifest.get("version")
if index_version is None:
raise MCPError(
code=MCPErrorCode.MCP_VERSION_CONFLICT,
what=f"Cannot pin version for '{name}'",
why="The registry manifest has no version field.",
fix="Run: hive mcp update — then omit --version to use latest.",
)
if index_version != version:
raise MCPError(
code=MCPErrorCode.MCP_VERSION_CONFLICT,
what=f"Version mismatch for '{name}'",
why=f"Requested {version} but index has {index_version}.",
fix="Run: hive mcp update — or omit --version to use latest.",
)
transport_config = manifest.get("transport", {})
supported = transport_config.get("supported", [])
if transport is not None:
if supported and transport not in supported:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Transport '{transport}' not supported by '{name}'",
why=f"Server supports: {supported}.",
fix=f"Use one of the supported transports: {supported}.",
)
resolved_transport = transport
else:
resolved_transport = transport_config.get("default", "stdio")
entry = self._make_entry(
source="registry",
manifest=self._make_registry_manifest_snapshot(name, manifest),
transport=resolved_transport,
installed_by="hive mcp install",
pinned=version is not None,
auto_update=version is None,
resolved_package_version=manifest.get("version"),
)
data["servers"][name] = entry
self._write_installed(data)
logger.info(
"Installed MCP server '%s' v%s from registry",
name,
entry["manifest_version"],
)
return entry
# ── remove / enable / disable ───────────────────────────────────
def remove(self, name: str) -> None:
"""Remove a server from the registry."""
data = self._read_installed()
if name not in data["servers"]:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Cannot remove server '{name}'",
why="Server is not installed.",
fix="Run: hive mcp list — to see installed servers.",
)
del data["servers"][name]
self._write_installed(data)
logger.info("Removed MCP server '%s'", name)
def enable(self, name: str) -> None:
"""Enable a disabled server."""
self._set_enabled(name, enabled=True)
def disable(self, name: str) -> None:
"""Disable a server without removing it."""
self._set_enabled(name, enabled=False)
def _set_enabled(self, name: str, *, enabled: bool) -> None:
data = self._read_installed()
if name not in data["servers"]:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Cannot {'enable' if enabled else 'disable'} server '{name}'",
why="Server is not installed.",
fix="Run: hive mcp list — to see installed servers.",
)
data["servers"][name]["enabled"] = enabled
self._write_installed(data)
logger.info("%s MCP server '%s'", "Enabled" if enabled else "Disabled", name)
# ── list / get ──────────────────────────────────────────────────
def list_installed(self) -> list[dict]:
"""Return all installed servers as a list of dicts with name included."""
data = self._read_installed()
return [{"name": name, **entry} for name, entry in data["servers"].items()]
def get_server(self, name: str) -> dict | None:
"""Get a single installed server entry by name, or None if not found."""
data = self._read_installed()
entry = data["servers"].get(name)
if entry is None:
return None
return {"name": name, **entry}
def list_available(self) -> list[dict]:
"""List all servers from cached remote index."""
index = self._read_cached_index()
return [{"name": name, **m} for name, m in index.get("servers", {}).items()]
# ── set_override ────────────────────────────────────────────────
def set_override(
self,
name: str,
key: str,
value: str,
override_type: Literal["env", "headers"] = "env",
) -> None:
"""Set an env or header override for a server."""
data = self._read_installed()
if name not in data["servers"]:
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Cannot set override for server '{name}'",
why="Server is not installed.",
fix="Run: hive mcp list — to see installed servers.",
)
if override_type not in ("env", "headers"):
raise MCPError(
code=MCPErrorCode.MCP_INSTALL_FAILED,
what=f"Invalid override type '{override_type}' for server '{name}'",
why="Override type must be 'env' or 'headers'.",
fix="Use --type env or --type headers.",
)
data["servers"][name]["overrides"][override_type][key] = value
self._write_installed(data)
logger.info("Set %s override %s for MCP server '%s'", override_type, key, name)
# ── search ──────────────────────────────────────────────────────
def search(self, query: str) -> list[dict]:
"""Search registry index by name, tag, description, or tool name."""
query_lower = query.lower()
index = self._read_cached_index()
matches = []
for name, manifest in index.get("servers", {}).items():
if self._matches_query(name, manifest, query_lower):
matches.append({"name": name, **manifest})
return matches
@staticmethod
def _matches_query(name: str, manifest: dict, query: str) -> bool:
"""Check if a manifest matches a search query."""
if query in name.lower():
return True
description = manifest.get("description", "")
if query in description.lower():
return True
for tag in manifest.get("tags", []):
if query in tag.lower():
return True
for tool in manifest.get("tools", []):
tool_name = tool.get("name", "") if isinstance(tool, dict) else str(tool)
if query in tool_name.lower():
return True
return False
# ── update_index ────────────────────────────────────────────────
def is_index_stale(self) -> bool:
"""Check if the cached registry index needs refreshing."""
last_fetched_path = self._cache_dir / _LAST_FETCHED_FILENAME
legacy_path = self._cache_dir / _LEGACY_LAST_FETCHED_FILENAME
if not last_fetched_path.exists() and not legacy_path.exists():
return True
try:
path = last_fetched_path if last_fetched_path.exists() else legacy_path
data = json.loads(path.read_text(encoding="utf-8"))
last_fetched = datetime.fromisoformat(data["timestamp"])
config = self._read_config()
interval_hours = config.get("refresh_interval_hours", DEFAULT_REFRESH_INTERVAL_HOURS)
age_hours = (datetime.now(UTC) - last_fetched).total_seconds() / 3600
return age_hours >= interval_hours
except (KeyError, ValueError, OSError):
return True
def update_index(self) -> int:
"""Fetch the latest registry index from remote and cache it.
Returns the number of servers in the index.
"""
config = self._read_config()
url = config.get("index_url", DEFAULT_INDEX_URL)
response = httpx.get(url, timeout=10.0)
response.raise_for_status()
index = response.json()
self._write_json(self._cache_dir / "registry_index.json", index)
# Write last_fetched atomically too
self._write_json(
self._cache_dir / _LAST_FETCHED_FILENAME,
{"timestamp": datetime.now(UTC).isoformat()},
)
server_count = len(index.get("servers", {}))
logger.info("Updated registry index: %d servers available", server_count)
return server_count
# ── load_agent_selection ────────────────────────────────────────
def load_agent_selection(self, agent_path: Path) -> tuple[list[dict[str, Any]], int | None]:
"""Load mcp_registry.json from an agent directory and resolve servers.
Returns:
(server_config_dicts, max_tools) for :meth:`ToolRegistry.load_registry_servers`.
``max_tools`` is ``None`` when omitted or invalid in JSON.
"""
registry_json_path = agent_path / "mcp_registry.json"
if not registry_json_path.exists():
return [], None
selection = json.loads(registry_json_path.read_text(encoding="utf-8"))
# Validate types at the JSON boundary. Bad fields are dropped with a
# warning so the agent still starts (graceful degradation).
expected_types: dict[str, type] = {
"include": list,
"tags": list,
"exclude": list,
"profile": str,
"max_tools": int,
"versions": dict,
}
validated: dict[str, Any] = {}
for field, expected in expected_types.items():
value = selection.get(field)
if value is None:
continue
if not isinstance(value, expected):
logger.warning(
"mcp_registry.json: '%s' must be %s, got %s; ignoring",
field,
expected.__name__,
type(value).__name__,
)
continue
validated[field] = value
max_tools = validated.get("max_tools")
configs = self.resolve_for_agent(
include=validated.get("include"),
tags=validated.get("tags"),
exclude=validated.get("exclude"),
profile=validated.get("profile"),
max_tools=max_tools,
versions=validated.get("versions"),
)
return [self._server_config_to_dict(c) for c in configs], max_tools
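A small sketch of an agent's mcp_registry.json and how it feeds load_agent_selection; the field names come from the expected_types mapping above, while the directory and concrete values are placeholders:
# Write a placeholder mcp_registry.json and resolve it (values are made up).
import json
from pathlib import Path

agent_dir = Path("exports/my-agent")
agent_dir.mkdir(parents=True, exist_ok=True)
(agent_dir / "mcp_registry.json").write_text(
    json.dumps(
        {
            "include": ["jira"],
            "tags": ["helpdesk"],
            "exclude": ["legacy-search"],
            "max_tools": 40,
            "versions": {"jira": "1.2.0"},
        },
        indent=2,
    ),
    encoding="utf-8",
)

registry = MCPRegistry()
registry.initialize()
server_dicts, max_tools = registry.load_agent_selection(agent_dir)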
# ── resolve_for_agent ───────────────────────────────────────────
def resolve_for_agent(
self,
include: list[str] | None = None,
tags: list[str] | None = None,
exclude: list[str] | None = None,
profile: str | None = None,
max_tools: int | None = None,
versions: dict[str, str] | None = None,
) -> list[MCPServerConfig]:
"""Resolve installed servers matching agent selection criteria.
Selection precedence per PRD section 7.2:
1. profile expands to server names (union with include + tags)
2. include adds explicit servers
3. tags adds servers whose tags overlap
4. exclude removes (always wins)
5. Load order: include-order first, then alphabetical for tag/profile matches
Returns list of MCPServerConfig objects ready for ToolRegistry.
"""
data = self._read_installed()
servers = data.get("servers", {})
cached_index = self._read_cached_index()
exclude_set = set(exclude or [])
# Phase 1: collect profile-matched servers (alphabetical)
profile_matched: list[str] = []
if profile:
for name, entry in sorted(servers.items()):
if name in exclude_set:
continue
if profile == "all":
profile_matched.append(name)
else:
manifest = self._get_effective_manifest(name, entry, cached_index)
profiles = manifest.get("hive", {}).get("profiles", [])
if profile in profiles:
profile_matched.append(name)
# Phase 2: collect tag-matched servers (alphabetical)
tag_matched: list[str] = []
if tags:
tag_set = set(tags)
for name, entry in sorted(servers.items()):
if name in exclude_set:
continue
manifest = self._get_effective_manifest(name, entry, cached_index)
server_tags = set(manifest.get("tags", []))
if tag_set & server_tags:
tag_matched.append(name)
# Phase 3: build final ordered list
# include-order first, then alphabetical for profile/tag matches
selected: list[str] = []
seen: set[str] = set()
for name in include or []:
if name not in seen and name not in exclude_set:
selected.append(name)
seen.add(name)
for name in profile_matched:
if name not in seen:
selected.append(name)
seen.add(name)
for name in tag_matched:
if name not in seen:
selected.append(name)
seen.add(name)
# Build configs, tracking aggregate tool count for max_tools cap (FR-56)
configs: list[MCPServerConfig] = []
total_tools = 0
for name in selected:
entry = servers.get(name)
if entry is None:
logger.warning(
"Server '%s' requested but not installed. Run: hive mcp install %s",
name,
name,
)
continue
if not entry.get("enabled", True):
continue
manifest = self._get_effective_manifest(name, entry, cached_index)
# Check version pin (VC-6)
if versions and name in versions:
installed_version = entry.get("manifest_version", "0.0.0")
pinned_version = versions[name]
if installed_version != pinned_version:
logger.warning(
"Server '%s' version mismatch: installed=%s, pinned=%s. "
"Run: hive mcp update %s",
name,
installed_version,
pinned_version,
name,
)
continue
# Check tool count cap before adding (FR-56), using manifest tool list when present.
# When ``tools`` is empty (e.g. ``add_local``), counts are unknown here—callers should
# pass the same ``max_tools`` to ToolRegistry.load_registry_servers to cap registration.
manifest_tools = manifest.get("tools", [])
server_tool_count = len(manifest_tools)
if max_tools is not None and server_tool_count == 0:
logger.debug(
"Server '%s' has no tools list in manifest; max_tools enforced at registration",
name,
)
elif max_tools is not None and total_tools + server_tool_count > max_tools:
logger.info(
"Skipping server '%s' (%d tools): would exceed max_tools=%d",
name,
server_tool_count,
max_tools,
)
continue
config = self._manifest_to_server_config(
name,
manifest,
entry.get("overrides", {}),
transport_override=entry.get("transport"),
)
if config is not None:
configs.append(config)
total_tools += server_tool_count
return configs
def _manifest_to_server_config(
self,
name: str,
manifest: dict,
overrides: dict | None = None,
transport_override: str | None = None,
) -> MCPServerConfig | None:
"""Convert a manifest and overrides to MCPServerConfig."""
overrides = overrides or {}
transport_config = manifest.get("transport", {})
transport = transport_override or transport_config.get("default", "stdio")
description = manifest.get("description", "")
match transport:
case "stdio":
stdio_config = manifest.get("stdio", {})
merged_env = {
**stdio_config.get("env", {}),
**overrides.get("env", {}),
}
return MCPServerConfig(
name=name,
transport="stdio",
command=stdio_config.get("command"),
args=stdio_config.get("args", []),
env=merged_env,
cwd=stdio_config.get("cwd"),
description=description,
)
case "http":
http_config = manifest.get("http", {})
url = http_config.get("url", "")
merged_headers = {
**http_config.get("headers", {}),
**overrides.get("headers", {}),
}
return MCPServerConfig(
name=name,
transport="http",
url=url,
headers=merged_headers,
description=description,
)
case "unix":
unix_config = manifest.get("unix", {})
http_config = manifest.get("http", {})
merged_headers = {
**http_config.get("headers", {}),
**overrides.get("headers", {}),
}
return MCPServerConfig(
name=name,
transport="unix",
socket_path=unix_config.get("socket_path"),
url=http_config.get("url") or "http://localhost",
headers=merged_headers,
description=description,
)
case "sse":
sse_config = manifest.get("sse", {})
merged_headers = {
**sse_config.get("headers", {}),
**overrides.get("headers", {}),
}
return MCPServerConfig(
name=name,
transport="sse",
url=sse_config.get("url", ""),
headers=merged_headers,
description=description,
)
case _:
logger.warning(
"Unsupported transport '%s' for server '%s'",
transport,
name,
)
return None
@staticmethod
def _server_config_to_dict(config: MCPServerConfig) -> dict[str, Any]:
"""Convert MCPServerConfig to plain dict for ToolRegistry.register_mcp_server()."""
return {
"name": config.name,
"transport": config.transport,
"command": config.command,
"args": config.args,
"env": config.env,
"cwd": config.cwd,
"url": config.url,
"headers": config.headers,
"socket_path": config.socket_path,
"description": config.description,
}
# ── run_health_check ────────────────────────────────────────────
def health_check(self, name: str | None = None) -> dict | dict[str, dict]:
"""Check health of installed server(s). Updates telemetry fields.
If name is None, checks all installed servers and returns
a dict mapping server names to their health results.
"""
if name is None:
results = {}
for server in self.list_installed():
results[server["name"]] = self.health_check(server["name"])
return results
data = self._read_installed()
if name not in data["servers"]:
raise MCPError(
code=MCPErrorCode.MCP_HEALTH_FAILED,
what=f"Cannot health-check server '{name}'",
why="Server is not installed.",
fix="Run: hive mcp list — to see installed servers.",
)
entry = data["servers"][name]
manifest = self._get_effective_manifest(name, entry)
config = self._manifest_to_server_config(
name,
manifest,
entry.get("overrides", {}),
transport_override=entry.get("transport"),
)
now = datetime.now(UTC).isoformat()
result: dict[str, Any] = {
"name": name,
"status": "unknown",
"tools": 0,
"error": None,
}
if config is None:
transport = entry.get("transport", "unknown")
result["status"] = "unhealthy"
result["error"] = f"Unsupported transport '{transport}'"
entry["last_health_status"] = "unhealthy"
entry["last_error"] = result["error"]
entry["last_health_check_at"] = now
self._write_installed(data)
return result
manager = MCPConnectionManager.get_instance()
try:
if manager.has_connection(name):
is_healthy = manager.health_check(name)
if not is_healthy:
raise MCPError(
code=MCPErrorCode.MCP_HEALTH_FAILED,
what=f"Health check failed for server '{name}'",
why="Shared MCP connection reported unhealthy.",
fix=f"Run: hive mcp doctor {name} — for diagnostics.",
)
pooled_client = manager.acquire(config)
try:
tools = pooled_client.list_tools()
finally:
manager.release(name)
else:
with MCPClient(config) as client:
tools = client.list_tools()
result["status"] = "healthy"
result["tools"] = len(tools)
entry["last_health_status"] = "healthy"
entry["last_error"] = None
entry["last_validated_with_hive_version"] = self._get_hive_version()
except Exception as exc:
result["status"] = "unhealthy"
result["error"] = str(exc)
entry["last_health_status"] = "unhealthy"
entry["last_error"] = str(exc)
entry["last_health_check_at"] = now
self._write_installed(data)
return result
def run_health_check(self, name: str | None = None) -> dict | dict[str, dict]:
"""Backward-compatible wrapper for the public health_check API."""
return self.health_check(name)
@staticmethod
def _get_hive_version() -> str:
"""Get the current Hive version."""
try:
return version("framework")
except PackageNotFoundError:
project_toml = Path(__file__).resolve().parents[2] / "pyproject.toml"
if not project_toml.exists():
return "unknown"
try:
with project_toml.open("rb") as f:
data = tomllib.load(f)
return data.get("project", {}).get("version", "unknown")
except (tomllib.TOMLDecodeError, OSError):
return "unknown"
# ── helpers ──────────────────────────────────────────────────────
@staticmethod
def _make_entry(
*,
source: str,
manifest: dict,
transport: str,
installed_by: str,
pinned: bool = False,
auto_update: bool = False,
resolved_package_version: str | None = None,
) -> dict:
"""Build a standard installed server entry."""
now = datetime.now(UTC).isoformat()
return {
"source": source,
"manifest_version": manifest.get("version", "0.0.0"),
"manifest": manifest,
"installed_at": now,
"installed_by": installed_by,
"transport": transport,
"enabled": True,
"pinned": pinned,
"auto_update": auto_update,
"resolved_package_version": resolved_package_version,
"overrides": {"env": {}, "headers": {}},
"last_health_check_at": None,
"last_health_status": None,
"last_error": None,
"last_used_at": None,
"last_validated_with_hive_version": None,
}
@staticmethod
def _make_registry_manifest_snapshot(name: str, manifest: dict) -> dict[str, Any]:
"""Persist a full manifest snapshot for registry-installed servers."""
manifest_snapshot = dict(manifest)
manifest_snapshot["name"] = name
return manifest_snapshot
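To tie the registry API together, a sketch of registering a locally running server and health-checking it; the command, arguments, and names are placeholders rather than values taken from this change:
# Register a local stdio server and run a health check (placeholder values).
registry = MCPRegistry()
registry.initialize()

registry.add_local(
    name="filesystem",
    transport="stdio",
    command="npx",
    args=["-y", "some-mcp-filesystem-server", "/tmp"],
    description="Local filesystem MCP server",
)

result = registry.health_check("filesystem")
print(result["status"], result["tools"])   # e.g. "healthy" 11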
+906
View File

@@ -0,0 +1,906 @@
"""CLI commands for MCP server registry management.
Commands:
hive mcp install <name> Install a server from the registry
hive mcp add Register a local/running MCP server
hive mcp remove <name> Remove an installed server
hive mcp enable <name> Enable a server
hive mcp disable <name> Disable a server
hive mcp list List installed servers
hive mcp info <name> Show server details
hive mcp config <name> Set env/header overrides
hive mcp search <query> Search the registry index
hive mcp health [name] Check server health
hive mcp update Refresh index and update installed servers
hive mcp update <name> Update a single installed server
"""
from __future__ import annotations
import json
import os
import sys
from pathlib import Path
from typing import Any
# ── Shared helpers ──────────────────────────────────────────────────
def _get_registry(base_path: Path | None = None):
"""Initialize and return an MCPRegistry instance."""
from framework.runner.mcp_registry import MCPRegistry
registry = MCPRegistry(base_path=base_path)
registry.initialize()
return registry
def _ensure_index_available(registry) -> bool:
"""Ensure the registry index is cached locally.
If no index exists or the cache is stale, fetches a fresh copy.
Returns True if a usable index exists, False otherwise.
Semantics:
- Stale cache + refresh fails -> warn and continue with stale cache (True)
- No cache + refresh fails -> hard fail (False)
"""
import httpx
cache_exists = (registry._cache_dir / "registry_index.json").exists()
if registry.is_index_stale():
print("Updating registry index...", file=sys.stderr)
try:
count = registry.update_index()
print(f"Registry index updated ({count} servers available).", file=sys.stderr)
return True
except (httpx.HTTPError, OSError) as exc:
if cache_exists:
print(
f"Warning: failed to update registry index: {exc}\nUsing cached index.",
file=sys.stderr,
)
return True
print(
f"Error: no registry index available and refresh failed: {exc}\n"
"Check your network connection and try: hive mcp update",
file=sys.stderr,
)
return False
return cache_exists
_SECURITY_NOTICE = (
"Registry servers run code on your machine. Only install servers you trust.\n"
"Learn more: https://github.com/aden-hive/hive-mcp-registry"
)
_NOTICE_SENTINEL = ".security_notice_shown"
def _print_security_notice_if_first_use(registry_base: Path) -> None:
"""Print a one-time security notice on first registry install.
Only prints the notice. Call _mark_security_notice_shown() after
a successful install to persist the sentinel.
"""
sentinel = registry_base / _NOTICE_SENTINEL
if sentinel.exists():
return
print(f"\n {_SECURITY_NOTICE}\n", file=sys.stderr)
def _mark_security_notice_shown(registry_base: Path) -> None:
"""Persist the security notice sentinel after a successful install."""
sentinel = registry_base / _NOTICE_SENTINEL
try:
sentinel.touch()
except OSError:
pass
def _prompt_for_missing_credentials(
registry,
name: str,
manifest: dict,
) -> None:
"""Prompt for required credentials not already set in env or overrides."""
credentials = manifest.get("credentials", [])
if not credentials:
return
server = registry.get_server(name)
existing_overrides = server.get("overrides", {}).get("env", {}) if server else {}
prompted = False
for cred in credentials:
if not isinstance(cred, dict):
continue
env_var = cred.get("env_var", "")
if not env_var:
continue
required = cred.get("required", False)
if not required:
continue
# Skip if already in environment or overrides
if os.environ.get(env_var) or existing_overrides.get(env_var):
continue
if not prompted:
print(f"\n{name} requires credentials:", file=sys.stderr)
prompted = True
description = cred.get("description", env_var)
help_url = cred.get("help_url", "")
help_hint = f" (get one at {help_url})" if help_url else ""
try:
value = input(f" {description}{help_hint}\n {env_var}: ").strip()
except (EOFError, KeyboardInterrupt):
print("\nSkipped credential prompting.", file=sys.stderr)
return
if value:
registry.set_override(name, env_var, value, override_type="env")
def _parse_key_value_pairs(values: list[str]) -> dict[str, str]:
"""Parse KEY=VAL pairs from CLI args. Raises ValueError on bad format."""
result = {}
for item in values:
if "=" not in item:
raise ValueError(
f"Invalid format: '{item}'. Expected KEY=VALUE.\n"
f"Example: --set JIRA_API_TOKEN=abc123"
)
key, _, value = item.partition("=")
if not key:
raise ValueError(f"Invalid format: '{item}'. Key cannot be empty.")
result[key] = value
return result
def _find_agents_using_server(registry, name: str) -> list[str]:
"""Scan agent directories for mcp_registry.json files that would load a server.
Uses MCPRegistry.load_agent_selection() to resolve actual selection logic
so results stay consistent with runtime behavior.
"""
agent_dirs: list[Path] = []
# parents: [0]=runner, [1]=framework, [2]=core, [3]=hive (project root)
# NOTE: This path arithmetic assumes running from the source tree layout.
# It will not resolve correctly if installed via pip into site-packages.
project_root = Path(__file__).resolve().parents[3]
core_dir = Path(__file__).resolve().parents[2]
candidates = [
project_root / "exports",
core_dir / "exports",
core_dir / "framework" / "agents",
]
for candidate in candidates:
if candidate.is_dir():
for child in candidate.iterdir():
if child.is_dir():
agent_dirs.append(child)
matches = []
for agent_dir in agent_dirs:
registry_json = agent_dir / "mcp_registry.json"
if not registry_json.exists():
continue
try:
configs, _ = registry.load_agent_selection(agent_dir)
resolved_names = {c["name"] for c in configs}
if name in resolved_names:
matches.append(str(agent_dir))
except Exception:
continue
return matches
def _render_installed_table(entries: list[dict]) -> None:
"""Render installed servers as a formatted table."""
if not entries:
print("No servers installed.")
print("Run 'hive mcp install <name>' or 'hive mcp add' to get started.")
return
# Column widths
name_w = max(len(e["name"]) for e in entries)
name_w = max(name_w, 4)
transport_w = max(len(e.get("transport", "")) for e in entries)
transport_w = max(transport_w, 9)
header = (
f" {'NAME':<{name_w}} "
f"{'TRANSPORT':<{transport_w}} "
f"{'ENABLED':<7} "
f"{'HEALTH':<9} "
f"{'TOOLS':<5} "
f"{'TRUST':<10} "
f"{'SOURCE'}"
)
print(header)
print(" " + "" * (len(header) - 2))
for entry in entries:
enabled = "yes" if entry.get("enabled", True) else "no"
health = entry.get("last_health_status") or "unknown"
health_sym = {"healthy": "", "unhealthy": ""}.get(health, "")
source = entry.get("source", "")
manifest = entry.get("manifest", {})
tools_count = str(len(manifest.get("tools", [])))
trust_tier = manifest.get("status", "")
print(
f" {entry['name']:<{name_w}} "
f"{entry.get('transport', ''):<{transport_w}} "
f"{enabled:<7} "
f"{health_sym} {health:<7} "
f"{tools_count:<5} "
f"{trust_tier:<10} "
f"{source}"
)
def _render_available_table(entries: list[dict]) -> None:
"""Render available registry servers as a formatted table."""
if not entries:
print("No servers in registry index.")
print("Run 'hive mcp update' to refresh the index.")
return
name_w = max(len(e["name"]) for e in entries)
name_w = max(name_w, 4)
header = f" {'NAME':<{name_w}} {'VERSION':<9} {'STATUS':<10} DESCRIPTION"
print(header)
print(" " + "" * (len(header) - 2))
for entry in entries:
version = entry.get("version", "")
status = entry.get("status", "community")
desc = entry.get("description", "")
# Truncate long descriptions
if len(desc) > 60:
desc = desc[:57] + "..."
print(f" {entry['name']:<{name_w}} {version:<9} {status:<10} {desc}")
def _mask_overrides(overrides: dict) -> dict:
"""Replace override values with '<set>' markers. Shared by all output paths."""
masked: dict[str, dict[str, str]] = {}
if overrides.get("env"):
masked["env"] = dict.fromkeys(overrides["env"], "<set>")
else:
masked["env"] = {}
if overrides.get("headers"):
masked["headers"] = dict.fromkeys(overrides["headers"], "<set>")
else:
masked["headers"] = {}
return masked
def _emit_json(data: Any) -> None:
"""Print data as formatted JSON."""
print(json.dumps(data, indent=2, default=str))
# ── Command registration ───────────────────────────────────────────
def register_mcp_commands(subparsers) -> None:
"""Register the ``hive mcp`` subcommand group."""
mcp_parser = subparsers.add_parser("mcp", help="Manage MCP servers")
mcp_sub = mcp_parser.add_subparsers(dest="mcp_command", required=True)
# ── install ──
install_p = mcp_sub.add_parser("install", help="Install a server from the registry")
install_p.add_argument("name", help="Server name in the registry")
install_p.add_argument(
"--version", dest="version", default=None, help="Pin to a specific version"
)
install_p.add_argument(
"--transport", default=None, help="Override default transport (stdio, http, unix, sse)"
)
install_p.set_defaults(func=cmd_mcp_install)
# ── add ──
add_p = mcp_sub.add_parser("add", help="Register a local/running MCP server")
add_p.add_argument("--name", required=False, help="Server name")
add_p.add_argument(
"--transport",
choices=["stdio", "http", "unix", "sse"],
default=None,
help="Transport type",
)
add_p.add_argument("--url", default=None, help="Server URL (http, unix, sse)")
add_p.add_argument("--command", default=None, help="Command to run (stdio)")
add_p.add_argument("--args", nargs="*", default=None, help="Command arguments (stdio)")
add_p.add_argument("--socket-path", default=None, help="Unix socket path")
add_p.add_argument("--description", default="", help="Server description")
add_p.add_argument("--from", dest="from_manifest", default=None, help="Path to manifest.json")
add_p.set_defaults(func=cmd_mcp_add)
# ── remove ──
remove_p = mcp_sub.add_parser("remove", help="Remove an installed server")
remove_p.add_argument("name", help="Server name")
remove_p.set_defaults(func=cmd_mcp_remove)
# ── enable ──
enable_p = mcp_sub.add_parser("enable", help="Enable a disabled server")
enable_p.add_argument("name", help="Server name")
enable_p.set_defaults(func=cmd_mcp_enable)
# ── disable ──
disable_p = mcp_sub.add_parser("disable", help="Disable a server without removing it")
disable_p.add_argument("name", help="Server name")
disable_p.set_defaults(func=cmd_mcp_disable)
# ── list ──
list_p = mcp_sub.add_parser("list", help="List servers")
list_p.add_argument(
"--available", action="store_true", help="Show available servers from registry"
)
list_p.add_argument("--json", dest="output_json", action="store_true", help="Output as JSON")
list_p.set_defaults(func=cmd_mcp_list)
# ── info ──
info_p = mcp_sub.add_parser("info", help="Show server details")
info_p.add_argument("name", help="Server name")
info_p.add_argument("--json", dest="output_json", action="store_true", help="Output as JSON")
info_p.set_defaults(func=cmd_mcp_info)
# ── config ──
config_p = mcp_sub.add_parser("config", help="Set server configuration overrides")
config_p.add_argument("name", help="Server name")
config_p.add_argument(
"--set",
dest="set_env",
nargs="+",
metavar="KEY=VAL",
help="Set environment variable overrides",
)
config_p.add_argument(
"--set-header", dest="set_header", nargs="+", metavar="KEY=VAL", help="Set header overrides"
)
config_p.set_defaults(func=cmd_mcp_config)
# ── search ──
search_p = mcp_sub.add_parser("search", help="Search the registry")
search_p.add_argument("query", help="Search term (name, tag, description, tool name)")
search_p.add_argument("--json", dest="output_json", action="store_true", help="Output as JSON")
search_p.set_defaults(func=cmd_mcp_search)
# ── health ──
health_p = mcp_sub.add_parser("health", help="Check server health")
health_p.add_argument("name", nargs="?", default=None, help="Server name (all if omitted)")
health_p.add_argument("--json", dest="output_json", action="store_true", help="Output as JSON")
health_p.set_defaults(func=cmd_mcp_health)
# ── update ──
update_p = mcp_sub.add_parser(
"update", help="Update installed servers or refresh the registry index"
)
update_p.add_argument(
"name",
nargs="?",
default=None,
help="Server name to update (omit to update all registry servers)",
)
update_p.set_defaults(func=cmd_mcp_update)
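# A hedged sketch of how this group plugs into a top-level argparse CLI. The
# "hive" prog name and the standalone parser are assumptions for illustration;
# the real entry point registers more subcommand groups.
import argparse

def _build_demo_cli() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="hive")
    subparsers = parser.add_subparsers(dest="command", required=True)
    register_mcp_commands(subparsers)
    return parser

# _build_demo_cli().parse_args(["mcp", "list", "--json"]) yields a namespace with
# func=cmd_mcp_list and output_json=True, which the CLI dispatches via args.func(args).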
# ── P0 command handlers ────────────────────────────────────────────
def cmd_mcp_install(args) -> int:
"""Install a server from the registry index."""
registry = _get_registry()
_print_security_notice_if_first_use(registry._base)
if not _ensure_index_available(registry):
return 1
try:
entry = registry.install(
args.name,
transport=args.transport,
version=args.version,
)
except ValueError as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
_mark_security_notice_shown(registry._base)
version_str = entry.get("manifest_version", "")
transport = entry.get("transport", "")
print(f"✓ Installed {args.name} v{version_str} ({transport})")
# Prompt for credentials defined in the manifest
manifest = entry.get("manifest", {})
_prompt_for_missing_credentials(registry, args.name, manifest)
print("\nNext steps:")
print(f" hive mcp health {args.name} Check that the server is reachable")
print(f" hive mcp info {args.name} View server details")
return 0
def cmd_mcp_add(args) -> int:
"""Register a local/running MCP server."""
registry = _get_registry()
# Handle --from manifest.json
if args.from_manifest:
return _cmd_mcp_add_from_manifest(registry, args.from_manifest)
if not args.name:
print(
"Error: --name is required.\n"
"Usage: hive mcp add --name my-server --transport http --url http://localhost:8080\n"
" or: hive mcp add --from manifest.json",
file=sys.stderr,
)
return 1
if not args.transport:
print(
f"Error: --transport is required.\n"
f"Supported transports: stdio, http, unix, sse\n"
f"Example: hive mcp add --name {args.name} --transport http --url http://localhost:8080",
file=sys.stderr,
)
return 1
try:
entry = registry.add_local(
name=args.name,
transport=args.transport,
url=args.url,
command=args.command,
args=args.args,
socket_path=args.socket_path,
description=args.description,
)
except ValueError as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
print(f"✓ Registered {args.name} ({entry['transport']})")
return 0
def _cmd_mcp_add_from_manifest(registry, manifest_path: str) -> int:
"""Register a server from a manifest.json file."""
path = Path(manifest_path)
if not path.exists():
print(
f"Error: manifest file not found: {manifest_path}\nCheck the path and try again.",
file=sys.stderr,
)
return 1
try:
manifest = json.loads(path.read_text(encoding="utf-8"))
except json.JSONDecodeError as exc:
print(
f"Error: invalid JSON in {manifest_path}: {exc}\n"
f"Validate with: python -m json.tool {manifest_path}",
file=sys.stderr,
)
return 1
name = manifest.get("name")
if not name:
print(
f"Error: manifest missing 'name' field.\nAdd a 'name' field to {manifest_path}.",
file=sys.stderr,
)
return 1
try:
entry = registry.add_local(name=name, manifest=manifest)
except ValueError as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
print(f"✓ Registered {name} from {manifest_path} ({entry['transport']})")
return 0
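# A hedged example of the manifest shape accepted by --from. Only "name" is
# required by this code path; the other fields are assumptions that mirror what
# the list/info renderers read (description, status, tools).
_EXAMPLE_MANIFEST = {
    "name": "my-local-server",
    "description": "Example stdio server registered from a manifest",
    "status": "community",
    "tools": [{"name": "echo", "description": "Echo the input back"}],
}
# Saved as manifest.json, it registers with: hive mcp add --from manifest.json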
def cmd_mcp_remove(args) -> int:
"""Remove an installed server."""
registry = _get_registry()
try:
registry.remove(args.name)
except ValueError as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
print(f"✓ Removed {args.name}")
return 0
def cmd_mcp_enable(args) -> int:
"""Enable a disabled server."""
registry = _get_registry()
try:
registry.enable(args.name)
except ValueError as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
print(f"✓ Enabled {args.name}")
return 0
def cmd_mcp_disable(args) -> int:
"""Disable a server without removing it."""
registry = _get_registry()
try:
registry.disable(args.name)
except ValueError as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
print(f"✓ Disabled {args.name}")
return 0
def cmd_mcp_list(args) -> int:
"""List installed or available servers."""
registry = _get_registry()
if args.available:
if not _ensure_index_available(registry):
return 1
entries = registry.list_available()
if args.output_json:
_emit_json(entries)
else:
_render_available_table(entries)
else:
entries = registry.list_installed()
if args.output_json:
safe_entries = []
for entry in entries:
safe = dict(entry)
safe["overrides"] = _mask_overrides(safe.get("overrides", {}))
safe_entries.append(safe)
_emit_json(safe_entries)
else:
_render_installed_table(entries)
return 0
def cmd_mcp_info(args) -> int:
"""Show full details for a server."""
registry = _get_registry()
server = registry.get_server(args.name)
if server is None:
print(
f"Error: server '{args.name}' is not installed.\n"
f"Run 'hive mcp list' to see installed servers.\n"
f"Run 'hive mcp install {args.name}' to install from registry.",
file=sys.stderr,
)
return 1
# Enrich with agent usage for both JSON and human output
agents = _find_agents_using_server(registry, args.name)
if agents:
server["used_by_agents"] = agents
if args.output_json:
safe = dict(server)
safe["overrides"] = _mask_overrides(safe.get("overrides", {}))
_emit_json(safe)
return 0
manifest = server.get("manifest", {})
overrides = _mask_overrides(server.get("overrides", {}))
tools = manifest.get("tools", [])
status = manifest.get("status", "community")
hive_block = manifest.get("hive", {})
print(f"{server['name']}")
print("=" * 50)
# Core info
print(f" Source: {server.get('source', '')}")
print(f" Transport: {server.get('transport', '')}")
print(f" Version: {server.get('manifest_version', 'unknown')}")
print(f" Trust tier: {status}")
print(f" Enabled: {'yes' if server.get('enabled', True) else 'no'}")
# Description
desc = manifest.get("description", "")
if desc:
print(f" Description: {desc}")
# Health
health = server.get("last_health_status")
if health:
health_sym = {"healthy": "", "unhealthy": ""}.get(health, "")
print(f" Health: {health_sym} {health}")
last_check = server.get("last_health_check_at")
if last_check:
print(f" Last check: {last_check}")
last_error = server.get("last_error")
if last_error:
print(f" Last error: {last_error}")
# Tools
if tools:
print(f"\n Tools ({len(tools)}):")
for tool in tools:
if isinstance(tool, dict):
tool_name = tool.get("name", "")
tool_desc = tool.get("description", "")
print(f"{tool_name}: {tool_desc}" if tool_desc else f"{tool_name}")
else:
print(f"{tool}")
# Overrides
env_overrides = overrides.get("env", {})
header_overrides = overrides.get("headers", {})
if env_overrides or header_overrides:
print("\n Overrides:")
for key in env_overrides:
print(f" env.{key} = <set>")
for key in header_overrides:
print(f" header.{key} = <set>")
# Hive block
if hive_block:
profiles = hive_block.get("profiles", [])
if profiles:
print(f"\n Profiles: {', '.join(profiles)}")
min_ver = hive_block.get("min_version")
if min_ver:
print(f" Min Hive version: {min_ver}")
# Agent usage
if agents:
print("\n Used by agents:")
for agent in agents:
print(f"{agent}")
# Timestamps
print(f"\n Installed: {server.get('installed_at', 'unknown')}")
print(f" Installed by: {server.get('installed_by', 'unknown')}")
return 0
def cmd_mcp_config(args) -> int:
"""Set env or header overrides for a server."""
registry = _get_registry()
if not args.set_env and not args.set_header:
# Show current config
server = registry.get_server(args.name)
if server is None:
print(
f"Error: server '{args.name}' is not installed.\n"
f"Run 'hive mcp list' to see installed servers.",
file=sys.stderr,
)
return 1
masked = _mask_overrides(server.get("overrides", {}))
env_o = masked.get("env", {})
header_o = masked.get("headers", {})
if not env_o and not header_o:
print(f"No overrides set for {args.name}.")
print(f"Set one with: hive mcp config {args.name} --set KEY=VALUE")
else:
print(f"Overrides for {args.name}:")
for key in env_o:
print(f" env.{key} = <set>")
for key in header_o:
print(f" header.{key} = <set>")
return 0
try:
if args.set_env:
pairs = _parse_key_value_pairs(args.set_env)
for key, value in pairs.items():
registry.set_override(args.name, key, value, override_type="env")
print(f"✓ Set {len(pairs)} env override(s) for {args.name}")
if args.set_header:
pairs = _parse_key_value_pairs(args.set_header)
for key, value in pairs.items():
registry.set_override(args.name, key, value, override_type="headers")
print(f"✓ Set {len(pairs)} header override(s) for {args.name}")
except ValueError as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
return 0
# ── P1 command handlers ────────────────────────────────────────────
def cmd_mcp_search(args) -> int:
"""Search the registry index."""
registry = _get_registry()
if not _ensure_index_available(registry):
return 1
results = registry.search(args.query)
if args.output_json:
_emit_json(results)
return 0
if not results:
print(f"No servers matching '{args.query}'.")
return 0
print(f"Found {len(results)} server(s) matching '{args.query}':\n")
_render_available_table(results)
return 0
def cmd_mcp_health(args) -> int:
"""Check server health."""
registry = _get_registry()
try:
results = registry.health_check(name=args.name)
except ValueError as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
# Single server returns a flat dict, all-servers returns name->dict
if args.name:
results = {args.name: results}
if args.output_json:
_emit_json(results)
return 0
for name, result in results.items():
status = result.get("status", "unknown")
tools = result.get("tools", 0)
error = result.get("error")
sym = {"healthy": "", "unhealthy": ""}.get(status, "")
print(f" {sym} {name}: {status}", end="")
if status == "healthy" and tools:
print(f" ({tools} tools)")
elif error:
print(f"\n Error: {error}")
else:
print()
return 0
def cmd_mcp_update(args) -> int:
"""Update a single server, or refresh the index and update all registry servers."""
registry = _get_registry()
if args.name:
return _cmd_mcp_update_server(args.name, registry)
# Step 1: refresh the registry index
try:
count = registry.update_index()
except Exception as exc:
print(
f"Error: failed to update registry index: {exc}\n"
f"Check your network connection and try again.",
file=sys.stderr,
)
return 1
print(f"✓ Registry index updated ({count} servers available)")
# Step 2: update all installed registry servers (skip local/pinned)
installed = registry.list_installed()
registry_servers = [
s for s in installed if s.get("source") == "registry" and not s.get("pinned")
]
if not registry_servers:
return 0
print(f"\nUpdating {len(registry_servers)} installed server(s)...")
errors = 0
for server in registry_servers:
name = server["name"]
rc = _cmd_mcp_update_server(name, registry)
if rc != 0:
errors += 1
return 1 if errors else 0
def _cmd_mcp_update_server(name: str, registry=None) -> int:
"""Bridge: reinstall a server from the latest index.
This is a temporary bridge until #6355 adds proper version diffing,
tool-signature change detection, and --dry-run support.
"""
if registry is None:
registry = _get_registry()
server = registry.get_server(name)
if server is None:
print(
f"Error: server '{name}' is not installed.\n"
f"Run 'hive mcp install {name}' to install it.",
file=sys.stderr,
)
return 1
if server.get("source") != "registry":
print(
f"Error: '{name}' is a local server and cannot be updated from the registry.\n"
f"Use 'hive mcp remove {name}' and 'hive mcp add' to re-register it.",
file=sys.stderr,
)
return 1
if server.get("pinned"):
print(
f"Error: '{name}' is pinned to v{server.get('manifest_version', '?')}.\n"
f"To update a pinned server, remove and reinstall:\n"
f" hive mcp remove {name} && hive mcp install {name}",
file=sys.stderr,
)
return 1
# Refresh index, then reinstall
if not _ensure_index_available(registry):
return 1
old_version = server.get("manifest_version", "unknown")
transport = server.get("transport")
overrides = server.get("overrides", {})
was_enabled = server.get("enabled", True)
# Save the full entry before removing so we can restore on failure
saved_entry = dict(server)
saved_entry.pop("name", None)
try:
registry.remove(name)
entry = registry.install(name, transport=transport)
except ValueError as exc:
# Restore the original entry so update doesn't become an uninstall
data = registry._read_installed()
data["servers"][name] = saved_entry
registry._write_installed(data)
print(
f"Error: {exc}\nServer '{name}' has been restored to its previous state.",
file=sys.stderr,
)
return 1
new_version = entry.get("manifest_version", "unknown")
# Restore prior state from the previous installation
for key, value in overrides.get("env", {}).items():
registry.set_override(name, key, value, override_type="env")
for key, value in overrides.get("headers", {}).items():
registry.set_override(name, key, value, override_type="headers")
if not was_enabled:
registry.disable(name)
if old_version == new_version:
print(f"{name} is already at v{new_version}")
else:
print(f"✓ Updated {name}: v{old_version} → v{new_version}")
return 0
@@ -0,0 +1,252 @@
from __future__ import annotations
import json
import logging
from pathlib import Path
from typing import Any
logger = logging.getLogger(__name__)
_CACHE_INDEX_PATH = Path.home() / ".hive" / "mcp_registry" / "cache" / "registry_index.json"
_FIXTURE_INDEX_PATH = Path(__file__).resolve().parent / "fixtures" / "registry_index.json"
def resolve_registry_servers(
*,
include: list[str] | None = None,
tags: list[str] | None = None,
exclude: list[str] | None = None,
profile: str | None = None,
max_tools: int | None = None,
versions: dict[str, str] | None = None,
) -> list[dict[str, Any]]:
"""
Resolve registry-sourced MCP servers for `mcp_registry.json` selection.
This function is written to be mock-friendly during early development:
- If the real `MCPRegistry` core module is present, delegate to it.
- Otherwise, fall back to a cached local index (`~/.hive/.../registry_index.json`)
and then to the repo fixture index.
"""
# `max_tools` is enforced by ToolRegistry. We keep it in the resolver
# signature to match the PRD and future MCPRegistry interfaces.
_ = max_tools
try:
from framework.runner.mcp_registry import MCPRegistry # type: ignore
registry = MCPRegistry()
resolved = registry.resolve_for_agent(
include=include or [],
tags=tags or [],
exclude=exclude or [],
profile=profile,
max_tools=max_tools,
versions=versions or {},
)
# Future-proof: normalize both dicts and typed objects to dicts.
return [_normalize_server_config(x) for x in resolved]
except ImportError:
# Expected while #6349/#6574 is not merged locally.
pass
except Exception as e:
logger.warning("MCPRegistry resolution failed; falling back to cache/fixtures: %s", e)
return _resolve_from_local_index(
include=include,
tags=tags,
exclude=exclude,
profile=profile,
versions=versions or {},
)
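# A hedged usage sketch (server names, tags, and versions are assumptions for
# illustration). Without the core MCPRegistry module this falls through to the
# cached index, then to the repo fixture.
def _demo_resolve_selection() -> list[dict[str, Any]]:
    return resolve_registry_servers(
        include=["github"],
        tags=["search"],
        exclude=["slack"],
        profile="default",
        versions={"github": "1.2.0"},
    )

# Each returned item is a plain MCP config dict with at least a "name" key,
# ready to hand to ToolRegistry.load_registry_servers().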
def _resolve_from_local_index(
*,
include: list[str] | None,
tags: list[str] | None,
exclude: list[str] | None,
profile: str | None,
versions: dict[str, str],
) -> list[dict[str, Any]]:
index = _load_index_json()
servers = _coerce_index_servers(index)
servers_by_name: dict[str, dict[str, Any]] = {
s["name"]: s for s in servers if isinstance(s, dict) and "name" in s
}
include_list = include or []
tags_list = tags or []
exclude_set = set(exclude or [])
def _profiles_of(entry: dict[str, Any]) -> set[str]:
if isinstance(entry.get("profiles"), list):
return set(entry["profiles"])
hive = entry.get("hive")
if isinstance(hive, dict) and isinstance(hive.get("profiles"), list):
return set(hive["profiles"])
return set()
def _tags_of(entry: dict[str, Any]) -> set[str]:
if isinstance(entry.get("tags"), list):
return set(entry["tags"])
return set()
def _entry_version(entry: dict[str, Any]) -> str | None:
# Prefer flat `version`, but support a few common shapes.
v = entry.get("version")
if isinstance(v, str):
return v
v2 = entry.get("manifest_version")
if isinstance(v2, str):
return v2
hive = entry.get("manifest")
if isinstance(hive, dict) and isinstance(hive.get("version"), str):
return hive["version"]
return None
def _version_allows(server_name: str) -> bool:
if server_name not in versions:
return True
pinned = versions[server_name]
entry = servers_by_name.get(server_name)
if not entry:
return False
return _entry_version(entry) == pinned
resolved_names: list[str] = []
resolved_set: set[str] = set()
# 1) Include-order first
for name in include_list:
if name in exclude_set:
continue
if name in servers_by_name and _version_allows(name) and name not in resolved_set:
resolved_names.append(name)
resolved_set.add(name)
# 2) Then tag/profile matches, alphabetical
profile_candidates = set()
if profile:
for name, entry in servers_by_name.items():
if name in exclude_set or not _version_allows(name):
continue
if profile in _profiles_of(entry):
profile_candidates.add(name)
tag_candidates = set()
if tags_list:
tags_set = set(tags_list)
for name, entry in servers_by_name.items():
if name in exclude_set or not _version_allows(name):
continue
if _tags_of(entry).intersection(tags_set):
tag_candidates.add(name)
tag_profile_names = sorted((profile_candidates | tag_candidates) - resolved_set)
resolved_names.extend(tag_profile_names)
# Missing requested servers should warn (FR-54).
for name in include_list:
if name in exclude_set:
continue
if name not in resolved_set:
if name not in servers_by_name:
logger.warning(
"Server '%s' requested by mcp_registry.json but not found in index. "
"Run: hive mcp install %s",
name,
name,
)
elif name in versions:
logger.warning(
"Server '%s' was requested but pinned version '%s' was not found in index. "
"Run: hive mcp update %s or change the pin in mcp_registry.json",
name,
versions[name],
name,
)
else:
logger.warning(
"Server '%s' requested by mcp_registry.json was not selected. "
"Check selection filters/exclude lists.",
name,
)
resolved_configs: list[dict[str, Any]] = []
repo_root = Path(__file__).resolve().parents[3]
for name in resolved_names:
entry = servers_by_name.get(name)
if not entry:
continue
config = entry.get("mcp_config")
if not isinstance(config, dict):
# Best-effort: allow a direct MCP config shape at top-level.
config = {
k: v
for k, v in entry.items()
if k
in {
"name",
"transport",
"command",
"args",
"env",
"cwd",
"url",
"headers",
"description",
}
}
mcp_config = dict(config)
mcp_config["name"] = name
if mcp_config.get("transport") == "stdio":
_absolutize_stdio_config_in_place(repo_root, mcp_config)
resolved_configs.append(mcp_config)
return resolved_configs
def _load_index_json() -> Any:
if _CACHE_INDEX_PATH.exists():
return json.loads(_CACHE_INDEX_PATH.read_text(encoding="utf-8"))
if _FIXTURE_INDEX_PATH.exists():
logger.info("Using local fixture index because registry cache is missing")
return json.loads(_FIXTURE_INDEX_PATH.read_text(encoding="utf-8"))
logger.warning("No local MCP registry index found (cache and fixture missing)")
return {"servers": []}
def _coerce_index_servers(index: Any) -> list[dict[str, Any]]:
if isinstance(index, list):
return [x for x in index if isinstance(x, dict)]
if isinstance(index, dict):
servers = index.get("servers", [])
if isinstance(servers, list):
return [x for x in servers if isinstance(x, dict)]
return []
def _normalize_server_config(raw: Any) -> dict[str, Any]:
if isinstance(raw, dict):
return dict(raw)
# Future-proof object-to-dict normalization.
for attr in ("to_dict", "model_dump"):
maybe = getattr(raw, attr, None)
if callable(maybe):
return dict(maybe())
return dict(getattr(raw, "__dict__", {}))
def _absolutize_stdio_config_in_place(repo_root: Path, config: dict[str, Any]) -> None:
cwd = config.get("cwd")
if isinstance(cwd, str) and not Path(cwd).is_absolute():
config["cwd"] = str((repo_root / cwd).resolve())
# We intentionally do not absolutize `args` here.
# For stdio servers, arguments may include the script name relative to
# `cwd` (e.g. "coder_tools_server.py" with cwd="tools"). ToolRegistry's
# stdio resolution logic handles script path checks and platform quirks.
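# A hedged sketch of the index shape the fallback path consumes. Field names
# follow what _coerce_index_servers and _resolve_from_local_index read; the
# concrete server below is an assumption, not a real registry entry.
_EXAMPLE_INDEX = {
    "servers": [
        {
            "name": "github",
            "version": "1.2.0",
            "tags": ["search", "code"],
            "hive": {"profiles": ["default"]},
            "mcp_config": {
                "transport": "stdio",
                "command": "python",
                "args": ["github_server.py"],
                "cwd": "tools",  # relative cwd is absolutized against the repo root
            },
        }
    ]
}
# Written as JSON to ~/.hive/mcp_registry/cache/registry_index.json, this is
# enough for resolve_registry_servers() to return the github config above.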
+4 -1
@@ -1,6 +1,6 @@
"""Pre-load validation for agent graphs.
Runs structural and credential checks before MCP servers are spawned.
Runs structural, credential, and skill-trust checks before MCP servers are spawned.
Fails fast with actionable error messages.
"""
@@ -169,6 +169,9 @@ def run_preload_validation(
1. Graph structure (includes GCU subagent-only checks) non-recoverable
2. Credentials potentially recoverable via interactive setup
Skill discovery and trust gating (AS-13) happen later in runner._setup()
so they have access to agent-level skill configuration.
Raises PreloadValidationError for structural issues.
Raises CredentialError for credential issues.
"""
+551 -87
@@ -9,14 +9,13 @@ from datetime import UTC
from pathlib import Path
from typing import TYPE_CHECKING, Any
from framework.config import get_hive_config, get_preferred_model
from framework.config import get_hive_config, get_max_context_tokens, get_preferred_model
from framework.credentials.validation import (
ensure_credential_key_env as _ensure_credential_key_env,
)
from framework.graph import Goal
from framework.graph.edge import (
DEFAULT_MAX_TOKENS,
AsyncEntryPointSpec,
EdgeCondition,
EdgeSpec,
GraphSpec,
@@ -29,6 +28,7 @@ from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import AgentRuntime, AgentRuntimeConfig, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from framework.runtime.runtime_log_store import RuntimeLogStore
from framework.tools.flowchart_utils import generate_fallback_flowchart
if TYPE_CHECKING:
from framework.runner.protocol import AgentMessage, CapabilityResponse
@@ -517,6 +517,354 @@ def get_codex_account_id() -> str | None:
return None
# ---------------------------------------------------------------------------
# Kimi Code subscription token helpers
# ---------------------------------------------------------------------------
def get_kimi_code_token() -> str | None:
"""Get the API key from a Kimi Code CLI installation.
Reads the API key from ``~/.kimi/config.toml``, which is created when
the user runs ``kimi /login`` in the Kimi Code CLI.
Returns:
The API key if available, None otherwise.
"""
import tomllib
config_path = Path.home() / ".kimi" / "config.toml"
if not config_path.exists():
return None
try:
with open(config_path, "rb") as f:
config = tomllib.load(f)
providers = config.get("providers", {})
# kimi-cli stores credentials under providers.kimi-for-coding
for provider_cfg in providers.values():
if isinstance(provider_cfg, dict):
key = provider_cfg.get("api_key")
if key:
return key
except Exception:
pass
return None
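# A hedged sketch of the ~/.kimi/config.toml layout this helper expects. The
# provider table name and key value are assumptions; any providers.<name>.api_key
# entry satisfies the loop above.
import tomllib as _tomllib_demo

_EXAMPLE_KIMI_CONFIG = """
[providers.kimi-for-coding]
api_key = "sk-example"
"""
# _tomllib_demo.loads(_EXAMPLE_KIMI_CONFIG)["providers"]["kimi-for-coding"]["api_key"]
# evaluates to "sk-example", which is what get_kimi_code_token() would return.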
# ---------------------------------------------------------------------------
# Antigravity subscription token helpers
# ---------------------------------------------------------------------------
# Antigravity IDE (native macOS/Linux app) stores OAuth tokens in its
# VSCode-style SQLite state database under the key
# "antigravityUnifiedStateSync.oauthToken" as a base64-encoded protobuf blob.
ANTIGRAVITY_IDE_STATE_DB = (
Path.home()
/ "Library"
/ "Application Support"
/ "Antigravity"
/ "User"
/ "globalStorage"
/ "state.vscdb"
)
# Linux fallback for the IDE state DB
ANTIGRAVITY_IDE_STATE_DB_LINUX = (
Path.home() / ".config" / "Antigravity" / "User" / "globalStorage" / "state.vscdb"
)
# Antigravity credentials stored by native OAuth implementation
ANTIGRAVITY_AUTH_FILE = Path.home() / ".hive" / "antigravity-accounts.json"
ANTIGRAVITY_OAUTH_TOKEN_URL = "https://oauth2.googleapis.com/token"
_ANTIGRAVITY_TOKEN_LIFETIME_SECS = 3600 # Google access tokens expire in 1 hour
_ANTIGRAVITY_IDE_STATE_DB_KEY = "antigravityUnifiedStateSync.oauthToken"
def _read_antigravity_ide_credentials() -> dict | None:
"""Read credentials from the Antigravity IDE's SQLite state database.
The Antigravity desktop IDE (VSCode-based) stores its OAuth token as a
base64-encoded protobuf blob in a SQLite database. The access token is
a standard Google OAuth ``ya29.*`` bearer token.
Returns:
Dict with ``accessToken`` and optionally ``refreshToken`` keys,
plus ``_source: "ide"`` to skip file-based save on refresh.
Returns None if the database is absent or the key is not found.
"""
import re
import sqlite3
for db_path in (ANTIGRAVITY_IDE_STATE_DB, ANTIGRAVITY_IDE_STATE_DB_LINUX):
if not db_path.exists():
continue
try:
con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
try:
row = con.execute(
"SELECT value FROM ItemTable WHERE key = ?",
(_ANTIGRAVITY_IDE_STATE_DB_KEY,),
).fetchone()
finally:
con.close()
if not row:
continue
import base64
blob = base64.b64decode(row[0])
# The protobuf blob contains the access token (ya29.*) and
# refresh token (1//*) as length-prefixed UTF-8 strings.
# Decode the inner base64 layer and extract with regex.
inner_b64_candidates = re.findall(rb"[A-Za-z0-9+/=_\-]{40,}", blob)
access_token: str | None = None
refresh_token: str | None = None
for candidate in inner_b64_candidates:
try:
padded = candidate + b"=" * (-len(candidate) % 4)
inner = base64.urlsafe_b64decode(padded)
except Exception:
continue
if not access_token:
m = re.search(rb"ya29\.[A-Za-z0-9_\-\.]+", inner)
if m:
access_token = m.group(0).decode("ascii")
if not refresh_token:
m = re.search(rb"1//[A-Za-z0-9_\-\.]+", inner)
if m:
refresh_token = m.group(0).decode("ascii")
if access_token and refresh_token:
break
if access_token:
return {
"accounts": [
{
"accessToken": access_token,
"refreshToken": refresh_token or "",
}
],
"_source": "ide",
"_db_path": str(db_path),
}
except Exception as exc:
logger.debug("Failed to read Antigravity IDE state DB: %s", exc)
continue
return None
def _read_antigravity_credentials() -> dict | None:
"""Read Antigravity auth data from all supported credential sources.
Checks in order:
1. Antigravity IDE SQLite state database (native macOS/Linux app)
2. Native OAuth credentials file (~/.hive/antigravity-accounts.json)
Returns:
Auth data dict with an ``accounts`` list on success, None otherwise.
"""
# 1. Native Antigravity IDE (primary on macOS)
ide_creds = _read_antigravity_ide_credentials()
if ide_creds:
return ide_creds
# 2. Native OAuth credentials file
if ANTIGRAVITY_AUTH_FILE.exists():
try:
with open(ANTIGRAVITY_AUTH_FILE, encoding="utf-8") as f:
data = json.load(f)
accounts = data.get("accounts", [])
if accounts and isinstance(accounts[0], dict):
return data
except (json.JSONDecodeError, OSError):
pass
return None
def _is_antigravity_token_expired(auth_data: dict) -> bool:
"""Check whether the Antigravity access token is expired or near expiry.
For IDE-sourced credentials: uses the state DB's mtime as last_refresh
since the IDE keeps the DB fresh while it's running.
For JSON-sourced credentials: uses the ``last_refresh`` field or file mtime.
"""
import time
from datetime import datetime
now = time.time()
if auth_data.get("_source") == "ide":
# The IDE refreshes tokens automatically while running.
# Use the DB file's mtime as a proxy for when the token was last updated.
try:
db_path = Path(auth_data.get("_db_path", str(ANTIGRAVITY_IDE_STATE_DB)))
last_refresh: float = db_path.stat().st_mtime
except OSError:
return True
expires_at = last_refresh + _ANTIGRAVITY_TOKEN_LIFETIME_SECS
return now >= (expires_at - _TOKEN_REFRESH_BUFFER_SECS)
last_refresh_val: float | str | None = auth_data.get("last_refresh")
if last_refresh_val is None:
try:
last_refresh_val = ANTIGRAVITY_AUTH_FILE.stat().st_mtime
except OSError:
return True
elif isinstance(last_refresh_val, str):
try:
last_refresh_val = datetime.fromisoformat(
last_refresh_val.replace("Z", "+00:00")
).timestamp()
except (ValueError, TypeError):
return True
expires_at = float(last_refresh_val) + _ANTIGRAVITY_TOKEN_LIFETIME_SECS
return now >= (expires_at - _TOKEN_REFRESH_BUFFER_SECS)
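# A worked sketch of the expiry window for the JSON-credential path. The 300 s
# refresh buffer is an assumption for illustration; the real value comes from
# _TOKEN_REFRESH_BUFFER_SECS, defined elsewhere in this module.
_demo_last_refresh = 1_700_000_000.0  # epoch seconds of the last refresh
_demo_expires_at = _demo_last_refresh + _ANTIGRAVITY_TOKEN_LIFETIME_SECS  # +3600 s
_demo_refresh_from = _demo_expires_at - 300  # assumed buffer
# The token counts as expired once time.time() >= _demo_refresh_from, so a
# refresh is attempted roughly five minutes before Google would reject it.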
def _refresh_antigravity_token(refresh_token: str) -> dict | None:
"""Refresh the Antigravity access token via Google OAuth.
POSTs form-encoded ``grant_type=refresh_token`` to the Google token
endpoint using Antigravity's public OAuth client ID.
Returns:
Parsed response dict (containing ``access_token``) on success,
None on any error.
"""
import urllib.error
import urllib.parse
import urllib.request
from framework.config import get_antigravity_client_id, get_antigravity_client_secret
client_id = get_antigravity_client_id()
client_secret = get_antigravity_client_secret()
params: dict = {
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"client_id": client_id,
}
if client_secret:
params["client_secret"] = client_secret
data = urllib.parse.urlencode(params).encode("utf-8")
req = urllib.request.Request(
ANTIGRAVITY_OAUTH_TOKEN_URL,
data=data,
headers={"Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=15) as resp: # noqa: S310
return json.loads(resp.read())
except (urllib.error.URLError, json.JSONDecodeError, TimeoutError, OSError) as exc:
logger.debug("Antigravity token refresh failed: %s", exc)
return None
def _save_refreshed_antigravity_credentials(auth_data: dict, token_data: dict) -> None:
"""Write refreshed tokens back to the Antigravity JSON credentials file.
Skipped for IDE-sourced credentials (the IDE manages its own DB).
Updates ``accounts[0].accessToken`` (and ``refreshToken`` if present),
then persists ``last_refresh`` as an ISO-8601 UTC string.
"""
from datetime import datetime
# IDE manages its own state — we do not write back to its SQLite DB
if auth_data.get("_source") == "ide":
return
try:
accounts = auth_data.get("accounts", [])
if not accounts:
return
account = accounts[0]
account["accessToken"] = token_data["access_token"]
if "refresh_token" in token_data:
account["refreshToken"] = token_data["refresh_token"]
auth_data["accounts"] = accounts
auth_data["last_refresh"] = datetime.now(UTC).isoformat()
ANTIGRAVITY_AUTH_FILE.parent.mkdir(parents=True, exist_ok=True)
fd = os.open(ANTIGRAVITY_AUTH_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "w", encoding="utf-8") as f:
json.dump(auth_data, f, indent=2)
logger.debug("Antigravity credentials refreshed and saved")
except (OSError, KeyError) as exc:
logger.debug("Failed to save refreshed Antigravity credentials: %s", exc)
def get_antigravity_token() -> str | None:
"""Get the OAuth access token from an Antigravity subscription.
Credential sources checked in order:
1. Antigravity IDE SQLite state DB (native app, macOS/Linux)
2. antigravity-auth CLI JSON file
For IDE credentials the token is read directly (the IDE refreshes it
automatically while running). For JSON credentials an automatic OAuth
refresh is attempted when the token is near expiry.
Returns:
The ``ya29.*`` Google OAuth access token, or None if unavailable.
"""
auth_data = _read_antigravity_credentials()
if not auth_data:
return None
accounts = auth_data.get("accounts", [])
if not accounts:
return None
account = accounts[0]
access_token = account.get("accessToken")
if not access_token:
return None
if not _is_antigravity_token_expired(auth_data):
return access_token
# Token is expired or near expiry — attempt a refresh
refresh_token = account.get("refreshToken")
if not refresh_token:
logger.warning(
"Antigravity token expired and no refresh token available. "
"Re-open the Antigravity IDE to refresh, or run 'antigravity-auth accounts add'."
)
return access_token # return stale token; proxy may still accept it briefly
logger.info("Antigravity token expired or near expiry, refreshing...")
token_data = _refresh_antigravity_token(refresh_token)
if token_data and "access_token" in token_data:
_save_refreshed_antigravity_credentials(auth_data, token_data)
return token_data["access_token"]
logger.warning(
"Antigravity token refresh failed. "
"Re-open the Antigravity IDE or run 'antigravity-auth accounts add'."
)
return access_token
def _is_antigravity_proxy_available() -> bool:
"""Return True if antigravity-auth serve is running on localhost:8069."""
import socket
try:
with socket.create_connection(("localhost", 8069), timeout=0.5):
return True
except (OSError, TimeoutError):
return False
@dataclass
class AgentInfo:
"""Information about an exported agent."""
@@ -535,9 +883,6 @@ class AgentInfo:
constraints: list[dict]
required_tools: list[str]
has_tools_module: bool
# Multi-entry-point support
async_entry_points: list[dict] = field(default_factory=list)
is_multi_entry_point: bool = False
@dataclass
@@ -595,22 +940,6 @@ def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
)
edges.append(edge)
# Build AsyncEntryPointSpec objects for multi-entry-point support
async_entry_points = []
for aep_data in graph_data.get("async_entry_points", []):
async_entry_points.append(
AsyncEntryPointSpec(
id=aep_data["id"],
name=aep_data.get("name", aep_data["id"]),
entry_node=aep_data["entry_node"],
trigger_type=aep_data.get("trigger_type", "manual"),
trigger_config=aep_data.get("trigger_config", {}),
isolation_level=aep_data.get("isolation_level", "shared"),
priority=aep_data.get("priority", 0),
max_concurrent=aep_data.get("max_concurrent", 10),
)
)
# Build GraphSpec
graph = GraphSpec(
id=graph_data.get("id", "agent-graph"),
@@ -618,7 +947,6 @@ def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
version=graph_data.get("version", "1.0.0"),
entry_node=graph_data.get("entry_node", ""),
entry_points=graph_data.get("entry_points", {}), # Support pause/resume architecture
async_entry_points=async_entry_points, # Support multi-entry-point agents
terminal_nodes=graph_data.get("terminal_nodes", []),
pause_nodes=graph_data.get("pause_nodes", []), # Support pause/resume architecture
nodes=nodes,
@@ -770,8 +1098,6 @@ class AgentRunner:
# AgentRuntime — unified execution path for all agents
self._agent_runtime: AgentRuntime | None = None
self._uses_async_entry_points = self.graph.has_async_entry_points()
# Pre-load validation: structural checks + credentials.
# Fails fast with actionable guidance — no MCP noise on screen.
run_preload_validation(
@@ -795,6 +1121,9 @@ class AgentRunner:
if mcp_config_path.exists():
self._load_mcp_servers_from_config(mcp_config_path)
# Auto-discover registry-selected MCP servers from mcp_registry.json
self._load_registry_mcp_servers(agent_path)
@staticmethod
def _import_agent_module(agent_path: Path):
"""Import an agent package from its directory path.
@@ -891,10 +1220,32 @@ class AgentRunner:
if agent_config and hasattr(agent_config, "max_tokens"):
max_tokens = agent_config.max_tokens
logger.info(
"Agent default_config overrides max_tokens: %d "
"(configuration.json value ignored)",
max_tokens,
)
else:
hive_config = get_hive_config()
max_tokens = hive_config.get("llm", {}).get("max_tokens", DEFAULT_MAX_TOKENS)
# Resolve max_context_tokens with priority:
# 1. agent loop_config["max_context_tokens"] (explicit, wins silently)
# 2. agent default_config.max_context_tokens (logged)
# 3. configuration.json llm.max_context_tokens
# 4. hardcoded default (32_000)
agent_loop_config: dict = dict(getattr(agent_module, "loop_config", {}))
if "max_context_tokens" not in agent_loop_config:
if agent_config and hasattr(agent_config, "max_context_tokens"):
agent_loop_config["max_context_tokens"] = agent_config.max_context_tokens
logger.info(
"Agent default_config overrides max_context_tokens: %d"
" (configuration.json value ignored)",
agent_config.max_context_tokens,
)
else:
agent_loop_config["max_context_tokens"] = get_max_context_tokens()
# Read intro_message from agent metadata (shown on TUI load)
agent_metadata = getattr(agent_module, "metadata", None)
intro_message = ""
@@ -908,13 +1259,12 @@ class AgentRunner:
"version": "1.0.0",
"entry_node": getattr(agent_module, "entry_node", nodes[0].id),
"entry_points": getattr(agent_module, "entry_points", {}),
"async_entry_points": getattr(agent_module, "async_entry_points", []),
"terminal_nodes": getattr(agent_module, "terminal_nodes", []),
"pause_nodes": getattr(agent_module, "pause_nodes", []),
"nodes": nodes,
"edges": edges,
"max_tokens": max_tokens,
"loop_config": getattr(agent_module, "loop_config", {}),
"loop_config": agent_loop_config,
}
# Only pass optional fields if explicitly defined by the agent module
conversation_mode = getattr(agent_module, "conversation_mode", None)
@@ -926,6 +1276,12 @@ class AgentRunner:
graph = GraphSpec(**graph_kwargs)
# Generate flowchart.json if missing (for template/legacy agents)
generate_fallback_flowchart(graph, goal, agent_path)
# Read skill configuration from agent module
agent_default_skills = getattr(agent_module, "default_skills", None)
agent_skills = getattr(agent_module, "skills", None)
# Read runtime config (webhook settings, etc.) if defined
agent_runtime_config = getattr(agent_module, "runtime_config", None)
@@ -937,7 +1293,7 @@ class AgentRunner:
configure_fn = getattr(agent_module, "configure_for_account", None)
list_accts_fn = getattr(agent_module, "list_connected_accounts", None)
return cls(
runner = cls(
agent_path=agent_path,
graph=graph,
goal=goal,
@@ -953,6 +1309,10 @@ class AgentRunner:
list_accounts=list_accts_fn,
credential_store=credential_store,
)
# Stash skill config for use in _setup()
runner._agent_default_skills = agent_default_skills
runner._agent_skills = agent_skills
return runner
# Fallback: load from agent.json (legacy JSON-based agents)
agent_json_path = agent_path / "agent.json"
@@ -970,7 +1330,10 @@ class AgentRunner:
except json.JSONDecodeError as exc:
raise ValueError(f"Invalid JSON in agent export file: {agent_json_path}") from exc
return cls(
# Generate flowchart.json if missing (for legacy JSON-based agents)
generate_fallback_flowchart(graph, goal, agent_path)
runner = cls(
agent_path=agent_path,
graph=graph,
goal=goal,
@@ -981,6 +1344,9 @@ class AgentRunner:
skip_credential_validation=skip_credential_validation or False,
credential_store=credential_store,
)
runner._agent_default_skills = None
runner._agent_skills = None
return runner
def register_tool(
self,
@@ -1061,6 +1427,56 @@ class AgentRunner:
"""Load and register MCP servers from a configuration file."""
self._tool_registry.load_mcp_config(config_path)
def _load_registry_mcp_servers(self, agent_path: Path) -> None:
"""Load and register MCP servers selected via ``mcp_registry.json``."""
registry_json = agent_path / "mcp_registry.json"
if registry_json.is_file():
self._tool_registry.set_mcp_registry_agent_path(agent_path)
else:
self._tool_registry.set_mcp_registry_agent_path(None)
from framework.runner.mcp_registry import MCPRegistry
try:
registry = MCPRegistry()
registry.initialize()
server_configs, selection_max_tools = registry.load_agent_selection(agent_path)
except Exception as exc:
logger.warning(
"Failed to load MCP registry servers for '%s': %s",
agent_path.name,
exc,
)
return
if not server_configs:
return
results = self._tool_registry.load_registry_servers(
server_configs,
preserve_existing_tools=True,
log_collisions=True,
max_tools=selection_max_tools,
)
loaded = [result for result in results if result["status"] == "loaded"]
skipped = [result for result in results if result["status"] != "loaded"]
logger.info(
"Loaded %d/%d MCP registry server(s) for agent '%s'",
len(loaded),
len(results),
agent_path.name,
)
if skipped:
logger.info(
"Skipped MCP registry servers for agent '%s': %s",
agent_path.name,
[
{"server": result["server"], "reason": result["skipped_reason"]}
for result in skipped
],
)
def set_approval_callback(self, callback: Callable) -> None:
"""
Set a callback for human-in-the-loop approval during execution.
@@ -1091,7 +1507,10 @@ class AgentRunner:
# Create LLM provider
# Uses LiteLLM which auto-detects the provider from model name
if self.mock_mode:
# Skip if already injected (e.g. worker agents with a pre-built LLM)
if self._llm is not None:
pass # LLM already configured externally
elif self.mock_mode:
# Use mock LLM for testing without real API calls
from framework.llm.mock import MockLLMProvider
@@ -1104,6 +1523,8 @@ class AgentRunner:
llm_config = config.get("llm", {})
use_claude_code = llm_config.get("use_claude_code_subscription", False)
use_codex = llm_config.get("use_codex_subscription", False)
use_kimi_code = llm_config.get("use_kimi_code_subscription", False)
use_antigravity = llm_config.get("use_antigravity_subscription", False)
api_base = llm_config.get("api_base")
api_key = None
@@ -1111,14 +1532,28 @@ class AgentRunner:
# Get OAuth token from Claude Code subscription
api_key = get_claude_code_token()
if not api_key:
print("Warning: Claude Code subscription configured but no token found.")
print("Run 'claude' to authenticate, then try again.")
logger.warning(
"Claude Code subscription configured but no token found. "
"Run 'claude' to authenticate, then try again."
)
elif use_codex:
# Get OAuth token from Codex subscription
api_key = get_codex_token()
if not api_key:
print("Warning: Codex subscription configured but no token found.")
print("Run 'codex' to authenticate, then try again.")
logger.warning(
"Codex subscription configured but no token found. "
"Run 'codex' to authenticate, then try again."
)
elif use_kimi_code:
# Get API key from Kimi Code CLI config (~/.kimi/config.toml)
api_key = get_kimi_code_token()
if not api_key:
logger.warning(
"Kimi Code subscription configured but no key found. "
"Run 'kimi /login' to authenticate, then try again."
)
elif use_antigravity:
pass # AntigravityProvider handles credentials internally
if api_key and use_claude_code:
# Use litellm's built-in Anthropic OAuth support.
@@ -1149,6 +1584,27 @@ class AgentRunner:
store=False,
allowed_openai_params=["store"],
)
elif api_key and use_kimi_code:
# Kimi Code subscription uses the Kimi coding API (OpenAI-compatible).
# The api_base is set automatically by LiteLLMProvider for kimi/ models.
self._llm = LiteLLMProvider(
model=self.model,
api_key=api_key,
api_base=api_base,
)
elif use_antigravity:
# Direct OAuth to Google's internal Cloud Code Assist gateway.
# No local proxy required — AntigravityProvider handles token
# refresh and Gemini-format request/response conversion natively.
from framework.llm.antigravity import AntigravityProvider # noqa: PLC0415
provider = AntigravityProvider(model=self.model)
if not provider.has_credentials():
print(
"Warning: Antigravity credentials not found. "
"Run: uv run python core/antigravity_auth.py auth account add"
)
self._llm = provider
else:
# Local models (e.g. Ollama) don't need an API key
if self._is_local_model(self.model):
@@ -1180,8 +1636,12 @@ class AgentRunner:
if api_key_env:
os.environ[api_key_env] = api_key
elif api_key_env:
print(f"Warning: {api_key_env} not set. LLM calls will fail.")
print(f"Set it with: export {api_key_env}=your-api-key")
logger.warning(
"%s not set. LLM calls will fail. "
"Set it with: export %s=your-api-key",
api_key_env,
api_key_env,
)
# Fail fast if the agent needs an LLM but none was configured
if self._llm is None:
@@ -1275,6 +1735,20 @@ class AgentRunner:
except Exception:
pass # Best-effort — agent works without account info
# Skill configuration — the runtime handles discovery, loading, trust-gating and
# prompt rasterization. The runner just builds the config.
from framework.skills.config import SkillsConfig
from framework.skills.manager import SkillsManagerConfig
skills_manager_config = SkillsManagerConfig(
skills_config=SkillsConfig.from_agent_vars(
default_skills=getattr(self, "_agent_default_skills", None),
skills=getattr(self, "_agent_skills", None),
),
project_root=self.agent_path,
interactive=self._interactive,
)
self._setup_agent_runtime(
tools,
tool_executor,
@@ -1282,6 +1756,7 @@ class AgentRunner:
accounts_data=accounts_data,
tool_provider_map=tool_provider_map,
event_bus=event_bus,
skills_manager_config=skills_manager_config,
)
def _get_api_key_env_var(self, model: str) -> str | None:
@@ -1302,6 +1777,8 @@ class AgentRunner:
return "MISTRAL_API_KEY"
elif model_lower.startswith("groq/"):
return "GROQ_API_KEY"
elif model_lower.startswith("openrouter/"):
return "OPENROUTER_API_KEY"
elif self._is_local_model(model_lower):
return None # Local models don't need an API key
elif model_lower.startswith("azure/"):
@@ -1314,6 +1791,10 @@ class AgentRunner:
return "TOGETHER_API_KEY"
elif model_lower.startswith("minimax/") or model_lower.startswith("minimax-"):
return "MINIMAX_API_KEY"
elif model_lower.startswith("kimi/"):
return "KIMI_API_KEY"
elif model_lower.startswith("hive/"):
return "HIVE_API_KEY"
else:
# Default: assume OpenAI-compatible
return "OPENAI_API_KEY"
@@ -1334,6 +1815,10 @@ class AgentRunner:
cred_id = "anthropic"
elif model_lower.startswith("minimax/") or model_lower.startswith("minimax-"):
cred_id = "minimax"
elif model_lower.startswith("kimi/"):
cred_id = "kimi"
elif model_lower.startswith("hive/"):
cred_id = "hive"
# Add more mappings as providers are added to LLM_CREDENTIALS
if cred_id is None:
@@ -1373,23 +1858,13 @@ class AgentRunner:
accounts_data: list[dict] | None = None,
tool_provider_map: dict[str, str] | None = None,
event_bus=None,
skills_catalog_prompt: str = "",
protocols_prompt: str = "",
skill_dirs: list[str] | None = None,
skills_manager_config=None,
) -> None:
"""Set up multi-entry-point execution using AgentRuntime."""
# Convert AsyncEntryPointSpec to EntryPointSpec for AgentRuntime
entry_points = []
for async_ep in self.graph.async_entry_points:
ep = EntryPointSpec(
id=async_ep.id,
name=async_ep.name,
entry_node=async_ep.entry_node,
trigger_type=async_ep.trigger_type,
trigger_config=async_ep.trigger_config,
isolation_level=async_ep.isolation_level,
priority=async_ep.priority,
max_concurrent=async_ep.max_concurrent,
max_resurrections=async_ep.max_resurrections,
)
entry_points.append(ep)
# Always create a primary entry point for the graph's entry node.
# For multi-entry-point agents this ensures the primary path (e.g.
@@ -1446,26 +1921,37 @@ class AgentRunner:
accounts_data=accounts_data,
tool_provider_map=tool_provider_map,
event_bus=event_bus,
skills_manager_config=skills_manager_config,
)
# Pass intro_message through for TUI display
self._agent_runtime.intro_message = self.intro_message
# ------------------------------------------------------------------
# Execution modes
#
# run() One-shot, blocking execution for worker agents
# (headless CLI via ``hive run``). Validates, runs
# the graph to completion, and returns the result.
#
# start() / trigger() Long-lived runtime for the frontend (queen).
# start() boots the runtime; trigger() sends
# non-blocking execution requests. Used by the
# server session manager and API routes.
# ------------------------------------------------------------------
async def run(
self,
input_data: dict | None = None,
session_state: dict | None = None,
entry_point_id: str | None = None,
) -> ExecutionResult:
"""
Execute the agent with given input data.
"""One-shot execution for worker agents (headless CLI).
Validates credentials before execution. If any required credentials
are missing, returns an error result with instructions on how to
provide them.
Validates credentials, runs the graph to completion, and returns
the result. Used by ``hive run`` and programmatic callers.
For single-entry-point agents, this is the standard execution path.
For multi-entry-point agents, you can optionally specify which entry point to use.
For the frontend (queen), use start() + trigger() instead.
Args:
input_data: Input data for the agent (e.g., {"lead_id": "123"})
@@ -1591,7 +2077,12 @@ class AgentRunner:
# === Runtime API ===
async def start(self) -> None:
"""Start the agent runtime."""
"""Boot the agent runtime for the frontend (queen).
Pair with trigger() to send execution requests. Used by the
server session manager. For headless worker agents, use run()
instead.
"""
if self._agent_runtime is None:
self._setup()
@@ -1608,10 +2099,10 @@ class AgentRunner:
input_data: dict[str, Any],
correlation_id: str | None = None,
) -> str:
"""
Trigger execution at a specific entry point (non-blocking).
"""Send a non-blocking execution request to a running runtime.
Returns execution ID for tracking.
Used by the server API routes after start(). For headless
worker agents, use run() instead.
Args:
entry_point_id: Which entry point to trigger
@@ -1696,19 +2187,6 @@ class AgentRunner:
for edge in self.graph.edges
]
# Build async entry points info
async_entry_points_info = [
{
"id": ep.id,
"name": ep.name,
"entry_node": ep.entry_node,
"trigger_type": ep.trigger_type,
"isolation_level": ep.isolation_level,
"max_concurrent": ep.max_concurrent,
}
for ep in self.graph.async_entry_points
]
return AgentInfo(
name=self.graph.id,
description=self.graph.description,
@@ -1735,8 +2213,6 @@ class AgentRunner:
],
required_tools=sorted(required_tools),
has_tools_module=(self.agent_path / "tools.py").exists(),
async_entry_points=async_entry_points_info,
is_multi_entry_point=self._uses_async_entry_points,
)
def validate(self) -> ValidationResult:
@@ -2051,18 +2527,6 @@ Respond with JSON only:
trigger_type="manual",
isolation_level="shared",
)
for aep in runner.graph.async_entry_points:
entry_points[aep.id] = EntryPointSpec(
id=aep.id,
name=aep.name,
entry_node=aep.entry_node,
trigger_type=aep.trigger_type,
trigger_config=aep.trigger_config,
isolation_level=aep.isolation_level,
priority=aep.priority,
max_concurrent=aep.max_concurrent,
)
await runtime.add_graph(
graph_id=gid,
graph=runner.graph,
+272 -21
@@ -16,6 +16,8 @@ from framework.llm.provider import Tool, ToolResult, ToolUse
logger = logging.getLogger(__name__)
_INPUT_LOG_MAX_LEN = 500
# Per-execution context overrides. Each asyncio task (and thus each
# concurrent graph execution) gets its own copy, so there are no races
# when multiple ExecutionStreams run in parallel.
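# A minimal sketch of the per-execution override pattern described above. The
# variable and helper names are illustrative, not the real module-level state.
import contextvars as _ctxvars_demo

_demo_overrides = _ctxvars_demo.ContextVar("demo_execution_overrides", default={})

def _run_with_demo_overrides(fn, overrides: dict):
    # copy_context() snapshots the caller's context; setting the var inside the
    # copy keeps concurrent executions (and executor threads) isolated.
    ctx = _ctxvars_demo.copy_context()
    ctx.run(_demo_overrides.set, overrides)
    return ctx.run(fn)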
@@ -54,6 +56,8 @@ class ToolRegistry:
def __init__(self):
self._tools: dict[str, RegisteredTool] = {}
self._mcp_clients: list[Any] = [] # List of MCPClient instances
self._mcp_client_servers: dict[int, str] = {} # client id -> server name
self._mcp_managed_clients: set[int] = set() # client ids acquired from the manager
self._session_context: dict[str, Any] = {} # Auto-injected context for tools
self._provider_index: dict[str, set[str]] = {} # provider -> tool names
# MCP resync tracking
@@ -62,6 +66,8 @@ class ToolRegistry:
self._mcp_cred_snapshot: set[str] = set() # Credential filenames at MCP load time
self._mcp_aden_key_snapshot: str | None = None # ADEN_API_KEY value at MCP load time
self._mcp_server_tools: dict[str, set[str]] = {} # server name -> tool names
# Agent dir for re-loading registry MCP after credential resync.
self._mcp_registry_agent_path: Path | None = None
def register(
self,
@@ -243,6 +249,13 @@ class ToolRegistry:
def _wrap_result(tool_use_id: str, result: Any) -> ToolResult:
if isinstance(result, ToolResult):
return result
# MCP client returns dict with _images when image content is present
if isinstance(result, dict) and "_images" in result:
return ToolResult(
tool_use_id=tool_use_id,
content=result.get("_text", ""),
image_content=result["_images"],
)
return ToolResult(
tool_use_id=tool_use_id,
content=json.dumps(result) if not isinstance(result, str) else result,
@@ -269,6 +282,17 @@ class ToolRegistry:
r = await result
return _wrap_result(tool_use.id, r)
except Exception as exc:
inputs_str = json.dumps(tool_use.input, default=str)
if len(inputs_str) > _INPUT_LOG_MAX_LEN:
inputs_str = inputs_str[:_INPUT_LOG_MAX_LEN] + "...(truncated)"
logger.error(
"Async tool '%s' failed (tool_use_id=%s): %s\nInputs: %s",
tool_use.name,
tool_use.id,
exc,
inputs_str,
exc_info=True,
)
return ToolResult(
tool_use_id=tool_use.id,
content=json.dumps({"error": str(exc)}),
@@ -279,6 +303,17 @@ class ToolRegistry:
return _wrap_result(tool_use.id, result)
except Exception as e:
inputs_str = json.dumps(tool_use.input, default=str)
if len(inputs_str) > _INPUT_LOG_MAX_LEN:
inputs_str = inputs_str[:_INPUT_LOG_MAX_LEN] + "...(truncated)"
logger.error(
"Tool '%s' execution failed for tool_use_id=%s: %s\nInputs: %s",
tool_use.name,
tool_use.id,
e,
inputs_str,
exc_info=True,
)
return ToolResult(
tool_use_id=tool_use.id,
content=json.dumps({"error": str(e)}),
@@ -453,21 +488,129 @@ class ToolRegistry:
# Treat top-level keys as server names
server_list = [{"name": name, **cfg} for name, cfg in config.items()]
for server_config in server_list:
server_config = self._resolve_mcp_server_config(server_config, base_dir)
try:
self.register_mcp_server(server_config)
except Exception as e:
name = server_config.get("name", "unknown")
logger.warning(f"Failed to register MCP server '{name}': {e}")
resolved_server_list = [
self._resolve_mcp_server_config(server_config, base_dir)
for server_config in server_list
]
# Ordered first-wins for duplicate tool names across servers; keep tools.py tools.
self.load_registry_servers(
resolved_server_list,
log_summary=False,
preserve_existing_tools=True,
log_collisions=False,
)
# Snapshot credential files and ADEN_API_KEY so we can detect mid-session changes
self._mcp_cred_snapshot = self._snapshot_credentials()
self._mcp_aden_key_snapshot = os.environ.get("ADEN_API_KEY")
def _register_mcp_server_with_retry(
self,
server_config: dict[str, Any],
*,
preserve_existing_tools: bool = True,
tool_cap: int | None = None,
log_collisions: bool = False,
) -> tuple[bool, int, str | None]:
"""Register a single MCP server with one retry for transient failures."""
name = server_config.get("name", "unknown")
last_error: str | None = None
for attempt in range(2):
try:
count = self.register_mcp_server(
server_config,
preserve_existing_tools=preserve_existing_tools,
tool_cap=tool_cap,
log_collisions=log_collisions,
)
if count > 0:
return True, count, None
last_error = "registered 0 tools"
except Exception as exc:
last_error = str(exc)
if attempt == 0:
logger.warning(
"MCP server '%s' failed to register, retrying in 2s: %s",
name,
last_error,
)
import time
time.sleep(2)
else:
logger.warning("MCP server '%s' failed after retry: %s", name, last_error)
return False, 0, last_error
def load_registry_servers(
self,
server_list: list[dict[str, Any]],
*,
log_summary: bool = True,
preserve_existing_tools: bool = True,
max_tools: int | None = None,
log_collisions: bool = False,
) -> list[dict[str, Any]]:
"""Register MCP servers from a resolved config list (registry and/or static).
``preserve_existing_tools`` enforces first-wins tool naming (FR-100): later
servers skip names that are already taken, including tools loaded earlier from
``mcp_servers.json`` or ``tools.py``.
``max_tools`` caps how many *new* tool names are registered across this batch
(collisions do not consume the cap). When ``log_collisions`` is True, each skipped
duplicate name emits a warning (FR-101).
"""
results: list[dict[str, Any]] = []
tools_added_batch = 0
for server_config in server_list:
remaining: int | None = None
if max_tools is not None:
remaining = max_tools - tools_added_batch
if remaining <= 0:
break
name = server_config.get("name", "unknown")
success, tools_loaded, error = self._register_mcp_server_with_retry(
server_config,
preserve_existing_tools=preserve_existing_tools,
tool_cap=remaining,
log_collisions=log_collisions,
)
tools_added_batch += tools_loaded
result = {
"server": name,
"status": "loaded" if success else "skipped",
"tools_loaded": tools_loaded,
"skipped_reason": None if success else (error or "unknown error"),
}
results.append(result)
if log_summary:
logger.info(
"MCP registry server resolution",
extra={
"event": "mcp_registry_server_resolution",
"server": result["server"],
"status": result["status"],
"tools_loaded": result["tools_loaded"],
"skipped_reason": result["skipped_reason"],
},
)
return results
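# Illustrative call (registry variable, server configs, and counts assumed):
#   results = registry.load_registry_servers(
#       [{"name": "github", "command": "npx"}, {"name": "search", "url": "https://example.com/mcp"}],
#       max_tools=40,
#       log_collisions=True,
#   )
#   # -> [{"server": "github", "status": "loaded", "tools_loaded": 25, "skipped_reason": None},
#   #     {"server": "search", "status": "skipped", "tools_loaded": 0, "skipped_reason": "Connection closed"}]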
def register_mcp_server(
self,
server_config: dict[str, Any],
use_connection_manager: bool = True,
*,
preserve_existing_tools: bool = True,
tool_cap: int | None = None,
log_collisions: bool = False,
) -> int:
"""
Register an MCP server and discover its tools.
@@ -483,12 +626,17 @@ class ToolRegistry:
- url: Server URL (for http)
- headers: HTTP headers (for http)
- description: Server description (optional)
use_connection_manager: When True, reuse a shared client keyed by server name
preserve_existing_tools: If True, do not replace tools already in the registry.
tool_cap: Max tools to newly register from this server (None = unlimited).
log_collisions: If True, log when this server skips a tool name already taken.
Returns:
Number of tools registered from this server
"""
try:
from framework.runner.mcp_client import MCPClient, MCPServerConfig
from framework.runner.mcp_connection_manager import MCPConnectionManager
# Build config object
config = MCPServerConfig(
@@ -500,15 +648,23 @@ class ToolRegistry:
cwd=server_config.get("cwd"),
url=server_config.get("url"),
headers=server_config.get("headers", {}),
socket_path=server_config.get("socket_path"),
description=server_config.get("description", ""),
)
# Create and connect client
client = MCPClient(config)
client.connect()
if use_connection_manager:
client = MCPConnectionManager.get_instance().acquire(config)
else:
client = MCPClient(config)
client.connect()
# Store client for cleanup
self._mcp_clients.append(client)
client_id = id(client)
self._mcp_client_servers[client_id] = config.name
if use_connection_manager:
self._mcp_managed_clients.add(client_id)
# Register each tool
server_name = server_config["name"]
@@ -516,6 +672,23 @@ class ToolRegistry:
self._mcp_server_tools[server_name] = set()
count = 0
for mcp_tool in client.list_tools():
if tool_cap is not None and count >= tool_cap:
break
if preserve_existing_tools and mcp_tool.name in self._tools:
if log_collisions:
origin_server = (
self._find_mcp_origin_server_for_tool(mcp_tool.name) or "<existing>"
)
logger.warning(
"MCP tool '%s' from '%s' shadowed by '%s' (loaded first)",
mcp_tool.name,
server_name,
origin_server,
)
# Skip registration; do not update MCP tool bookkeeping for this server.
continue
# Convert MCP tool to framework Tool (strips context params from LLM schema)
tool = self._convert_mcp_tool_to_framework_tool(mcp_tool)
@@ -548,14 +721,25 @@ class ToolRegistry:
}
merged_inputs = {**clean_inputs, **filtered_context}
result = client_ref.call_tool(tool_name, merged_inputs)
# MCP tools return content array, extract the result
# MCP client already extracts content (returns str
# or {"_text": ..., "_images": ...} for image results).
# Handle legacy list format from HTTP transport.
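# e.g. (assumed legacy shape): [{"type": "text", "text": "hello"}] -> "hello"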
if isinstance(result, list) and len(result) > 0:
if isinstance(result[0], dict) and "text" in result[0]:
return result[0]["text"]
return result[0]
return result
except Exception as e:
logger.error(f"MCP tool '{tool_name}' execution failed: {e}")
inputs_str = json.dumps(inputs, default=str)
if len(inputs_str) > _INPUT_LOG_MAX_LEN:
inputs_str = inputs_str[:_INPUT_LOG_MAX_LEN] + "...(truncated)"
logger.error(
"MCP tool '%s' execution failed: %s\nInputs: %s",
tool_name,
e,
inputs_str,
exc_info=True,
)
return {"error": str(e)}
return executor
@@ -570,11 +754,27 @@ class ToolRegistry:
self._mcp_server_tools[server_name].add(mcp_tool.name)
count += 1
logger.info(f"Registered {count} tools from MCP server '{config.name}'")
logger.info(
"MCP Registry Load",
extra={
"server": config.name,
"status": "success",
"tools_loaded": count,
"skipped_reason": None,
},
)
return count
except Exception as e:
logger.error(f"Failed to register MCP server: {e}")
logger.error(
"MCP Registry Load",
extra={
"server": server_config.get("name", "unknown"),
"status": "failed",
"tools_loaded": 0,
"skipped_reason": str(e),
},
)
if "Connection closed" in str(e) and os.name == "nt":
logger.debug(
"On Windows, check that the MCP subprocess starts (e.g. uv in PATH, "
@@ -582,6 +782,12 @@ class ToolRegistry:
)
return 0
def _find_mcp_origin_server_for_tool(self, tool_name: str) -> str | None:
for server_name, tool_names in self._mcp_server_tools.items():
if tool_name in tool_names:
return server_name
return None
def _convert_mcp_tool_to_framework_tool(self, mcp_tool: Any) -> Tool:
"""
Convert an MCP tool to a framework Tool.
@@ -669,6 +875,37 @@ class ToolRegistry:
# MCP credential resync
# ------------------------------------------------------------------
def set_mcp_registry_agent_path(self, agent_path: Path | None) -> None:
"""Remember agent dir so registry MCP servers reload after credential resync."""
self._mcp_registry_agent_path = None if agent_path is None else Path(agent_path)
def reload_registry_mcp_servers_after_resync(self) -> None:
"""Re-run ``mcp_registry.json`` resolution and register servers (post-resync)."""
if self._mcp_registry_agent_path is None:
return
from framework.runner.mcp_registry import MCPRegistry
try:
reg = MCPRegistry()
reg.initialize()
configs, selection_max_tools = reg.load_agent_selection(self._mcp_registry_agent_path)
except Exception as exc:
logger.warning(
"Failed to reload MCP registry servers after resync for '%s': %s",
self._mcp_registry_agent_path.name,
exc,
)
return
if not configs:
return
self.load_registry_servers(
configs,
log_summary=True,
preserve_existing_tools=True,
log_collisions=True,
max_tools=selection_max_tools,
)
def _snapshot_credentials(self) -> set[str]:
"""Return the set of credential filenames currently on disk."""
try:
@@ -708,32 +945,46 @@ class ToolRegistry:
logger.info("%s — resyncing MCP servers", reason)
# 1. Disconnect existing MCP clients
for client in self._mcp_clients:
try:
client.disconnect()
except Exception as e:
logger.warning(f"Error disconnecting MCP client during resync: {e}")
self._mcp_clients.clear()
self._cleanup_mcp_clients("during resync")
# 2. Remove MCP-registered tools
for name in self._mcp_tool_names:
self._tools.pop(name, None)
self._mcp_tool_names.clear()
self._mcp_server_tools.clear()
# 3. Re-load MCP servers (spawns fresh subprocesses with new credentials)
self.load_mcp_config(self._mcp_config_path)
if self._mcp_registry_agent_path is not None:
self.reload_registry_mcp_servers_after_resync()
logger.info("MCP server resync complete")
return True
def cleanup(self) -> None:
"""Clean up all MCP client connections."""
self._cleanup_mcp_clients()
def _cleanup_mcp_clients(self, context: str = "") -> None:
"""Disconnect or release all tracked MCP clients for this registry."""
if context:
context = f" {context}"
for client in self._mcp_clients:
client_id = id(client)
server_name = self._mcp_client_servers.get(client_id, client.config.name)
try:
client.disconnect()
if client_id in self._mcp_managed_clients:
from framework.runner.mcp_connection_manager import MCPConnectionManager
MCPConnectionManager.get_instance().release(server_name)
else:
client.disconnect()
except Exception as e:
logger.warning(f"Error disconnecting MCP client: {e}")
logger.warning(f"Error disconnecting MCP client{context}: {e}")
self._mcp_clients.clear()
self._mcp_client_servers.clear()
self._mcp_managed_clients.clear()
def __del__(self):
"""Destructor to ensure cleanup."""
