Align POST /api/sessions behavior across queen-only and one-step worker creation so callers can rely on deterministic session IDs. Add a regression test covering the forwarded session_id contract.
Made-with: Cursor
Pass browser_wait text through Playwright's function argument channel so quoted and multiline strings do not break the generated wait expression. Add a regression test covering text that previously would have been interpolated unsafely.
Made-with: Cursor
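The quoting fix above can be sketched in plain Python. The helper names below are illustrative, not the tool's real API; the real change hands the text to Playwright's `arg=` parameter of `page.wait_for_function` so Playwright serializes it instead of the tool splicing it into JavaScript source:

```python
# Hedged sketch of the browser_wait fix. unsafe_expr/safe_wait_args are
# made-up names for illustration only.
def unsafe_expr(text: str) -> str:
    # Old approach: direct interpolation; quotes and newlines in `text`
    # corrupt the generated JavaScript expression.
    return f"() => document.body.innerText.includes('{text}')"

def safe_wait_args(text: str):
    # New approach: the expression is a fixed function of one argument,
    # and the raw text travels separately via Playwright's arg= channel,
    # e.g. page.wait_for_function(expr, arg=text).
    expr = "needle => document.body.innerText.includes(needle)"
    return expr, text
```

The regression test exercises exactly the inputs that broke the old path: text containing single quotes, double quotes, and newlines.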
Changed HOW_TO_CONTRIBUTE.md back to CONTRIBUTING.md to comply with
GitHub's standard for contributing guidelines files.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Renamed CONTRIBUTING.md to HOW_TO_CONTRIBUTE.md and significantly expanded
the documentation with detailed sections on development setup, OS support,
tooling requirements, performance metrics, and contribution workflows.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Validate that agent names passed to --agents do not contain path
separators. Previously, passing 'exports/my_agent' would result in
the doubled path 'exports/exports/my_agent' with a confusing error.
Now a clear error message is shown suggesting the correct usage.
Fixes #6208
Co-authored-by: nightcityblade <nightcityblade@gmail.com>
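A minimal sketch of the validation described above (the helper name and message wording are illustrative; only the separator check itself comes from the commit):

```python
def validate_agent_name(name: str) -> str:
    # Hypothetical helper: agent names must be bare names, not paths,
    # so a later exports/ prefix is not doubled into exports/exports/...
    if "/" in name or "\\" in name:
        raise ValueError(
            f"invalid agent name {name!r}: pass the bare agent name "
            "(e.g. 'my_agent'), not a path like 'exports/my_agent'"
        )
    return name
```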
Replace individual recipe READMEs with a comprehensive collection of 100 real-world agent prompt examples across marketing, sales, operations, engineering, and finance. This provides users with a broader range of use case inspiration in a single, organized reference document.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Replace bare except Exception: clauses with specific exception handling:
- delete_aden_api_key(): Catch FileNotFoundError, PermissionError at debug
level; log unexpected errors at WARNING with exc_info=True
- _read_credential_key_file(): Catch FileNotFoundError, PermissionError at
debug level; log unexpected errors at WARNING with exc_info=True
- _read_aden_from_encrypted_store(): Catch FileNotFoundError, PermissionError,
KeyError at debug level; log unexpected errors at WARNING with exc_info=True
This makes credential issues easier to diagnose by:
- Logging unexpected errors at WARNING level (visible in production)
- Including full stack traces with exc_info=True
- Keeping expected failures (file not found, permissions) at debug level
Fixes #5931
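The logging pattern all three functions now share can be sketched as follows (the function name is illustrative; the exception tiers and `exc_info=True` usage are from the commit):

```python
import logging

logger = logging.getLogger(__name__)

def read_credential_key_file(path: str):
    # Expected failures stay at DEBUG; anything unexpected is logged at
    # WARNING with a full traceback so it is visible in production.
    try:
        with open(path, "rb") as f:
            return f.read()
    except (FileNotFoundError, PermissionError) as exc:
        logger.debug("credential key file unavailable: %s", exc)
        return None
    except Exception:
        logger.warning("unexpected error reading %s", path, exc_info=True)
        return None
```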
Tests were asserting the old CLIENT_OUTPUT_DELTA + CLIENT_INPUT_REQUESTED
pattern; the fix in 89ccd66f routes escalations through the queen via
ESCALATION_REQUESTED instead.
Track the errlog file handle opened on non-Windows systems and
properly close it during cleanup to prevent file descriptor leaks.
Changes:
- Add _errlog_handle instance variable to track the file handle
- Store handle reference when opening os.devnull
- Close handle in _cleanup_stdio_async() after other cleanup
- Clear reference in disconnect() for safety
Fixes #6002
The handle_pause endpoint referenced session.mode_state (lines 360-361),
which does not exist on the Session dataclass. This caused an
AttributeError every time the pause endpoint reached the phase transition
step, preventing the queen phase from transitioning to staging and
returning a 500 error to the frontend.
Changed to session.phase_state, consistent with handle_stop (line 412),
handle_run (line 75), and the Session dataclass definition
(session_manager.py line 44).
Turns that call report_to_parent were incorrectly treated as "truly
empty" because the flag was not propagated. Thread it through
_run_single_turn and include it in the empty-turn guard.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add pg_get_table_stats for row counts and size info,
pg_list_indexes for index details, and pg_get_foreign_keys
for relationship discovery with both outgoing and incoming FKs.
Add lusha_bulk_enrich_persons for batch enrichment,
lusha_get_technologies for company tech stack lookup, and
lusha_search_decision_makers for senior contact discovery.
Add s3_copy_object for copying within/between buckets,
s3_get_object_metadata for HEAD-based metadata retrieval, and
s3_generate_presigned_url for temporary access URL generation.
Add pushover_cancel_receipt for stopping emergency retries,
pushover_send_glance for widget data updates, and
pushover_get_limits for checking message usage.
Add news_latest for breaking news without query, news_by_source
for source-filtered articles, and news_by_topic for topic-based
discovery with automatic date ranges.
Add scholar_cited_by for finding papers citing a given paper,
scholar_search_profiles for author profile discovery, and
serpapi_google_search for structured Google web results.
Add exa_search_news, exa_search_papers, and exa_search_companies
convenience wrappers with pre-configured category filters and
automatic date/domain filtering.
- greenhouse_list_offers: GET /offers or /applications/{id}/offers
- greenhouse_add_candidate_note: POST /candidates/{id}/activity_feed/notes
- greenhouse_list_scorecards: GET /applications/{id}/scorecards
- Add _post helper for POST requests
- brevo_list_contacts: GET /contacts with pagination and modified_since filter
- brevo_delete_contact: DELETE /contacts/{email} to remove contacts
- brevo_list_email_campaigns: GET /emailCampaigns with status filter and stats
- quickbooks_list_invoices: query invoices with status/customer filters
- quickbooks_get_customer: GET /customer/{id} with address and contact info
- quickbooks_create_payment: POST /payment with optional invoice linking
- cloudinary_get_usage: GET /usage for storage, bandwidth, transformation limits
- cloudinary_rename_resource: POST /rename to change public_id
- cloudinary_add_tag: POST /tags to add tags to resources
- twitter_get_user_followers: GET /users/{id}/followers with profile details
- twitter_get_tweet_replies: search recent replies via conversation_id
- twitter_get_list_tweets: GET /lists/{id}/tweets with author expansion
- apollo_get_person_activities: GET /activities for contact activity history
- apollo_list_email_accounts: GET /email_accounts for connected sending accounts
- apollo_bulk_enrich_people: POST /people/bulk_match for batch enrichment (up to 10)
- calendly_cancel_event: POST /scheduled_events/{id}/cancellation
- calendly_list_webhooks: GET /webhook_subscriptions for org/user scope
- calendly_get_event_type: GET /event_types/{id} for meeting template details
- Add _post helper for POST requests
- pagerduty_list_oncalls: GET /oncalls with schedule/policy filters
- pagerduty_add_incident_note: POST /incidents/{id}/notes to add notes
- pagerduty_list_escalation_policies: GET /escalation_policies with search
- airtable_delete_records: DELETE records by comma-separated IDs (up to 10)
- airtable_search_records: search records using FIND formula for partial matching
- airtable_list_collaborators: list base collaborators via meta API
- Add _delete helper for DELETE requests
- reddit_get_subreddit_info: GET /r/{name}/about for subscriber count, description
- reddit_get_post_detail: GET /by_id/t3_{id} for full post details with flair, ratios
- reddit_get_user_posts: GET /user/{name}/submitted for user's post history
Add confluence_update_page, confluence_delete_page, and
confluence_get_page_children tools using Confluence REST API v2.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add intercom_close_conversation, intercom_create_contact, and
intercom_list_conversations to Intercom credential spec.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add close_conversation, create_contact, and list_conversations client
methods plus intercom_close_conversation, intercom_create_contact, and
intercom_list_conversations MCP tools using Intercom API v2.11.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add gitlab_update_issue, gitlab_get_merge_request, and
gitlab_create_merge_request_note to GitLab credential spec.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add _put helper and gitlab_update_issue, gitlab_get_merge_request,
and gitlab_create_merge_request_note tools using GitLab REST API v4.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add slack_get_channel_info, slack_list_files, and slack_get_file_info
to Slack credential spec.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add get_channel_info, list_files, and get_file_info client methods
plus slack_get_channel_info, slack_list_files, and slack_get_file_info
MCP tools using Slack Web API.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add stripe_list_disputes, stripe_list_events, and
stripe_create_checkout_session to Stripe credential spec.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add list_disputes, list_events, and create_checkout_session client
methods plus stripe_list_disputes, stripe_list_events, and
stripe_create_checkout_session MCP tools using Stripe API.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add linear_cycles_list, linear_issue_comments_list, and
linear_issue_relation_create to Linear credential spec.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add list_cycles, list_issue_comments, and create_issue_relation client
methods plus linear_cycles_list, linear_issue_comments_list, and
linear_issue_relation_create MCP tools using Linear GraphQL API.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add zoom_update_meeting, zoom_list_meeting_participants, and
zoom_list_meeting_registrants to Zoom credential spec.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add zoom_update_meeting (PATCH), zoom_list_meeting_participants
(past meeting attendees), and zoom_list_meeting_registrants
using Zoom REST API v2.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add twilio_list_phone_numbers, twilio_list_calls, and
twilio_delete_message to both Twilio credential specs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add twilio_list_phone_numbers, twilio_list_calls, and
twilio_delete_message tools using Twilio REST API.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add shopify_update_product, shopify_get_customer, and
shopify_create_draft_order to both Shopify credential specs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add shopify_update_product, shopify_get_customer, and
shopify_create_draft_order tools using Shopify Admin REST API.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add zendesk_get_ticket_comments, zendesk_add_ticket_comment, and
zendesk_list_users to all three Zendesk credential specs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add zendesk_get_ticket_comments, zendesk_add_ticket_comment, and
zendesk_list_users tools using Zendesk Support API v2.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Register asana_update_task, asana_add_comment, and
asana_create_subtask in the Asana credential spec.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add _put helper and three new Asana MCP tools:
- asana_update_task: modify name, notes, completion, due date, assignee
- asana_add_comment: post comment stories on tasks
- asana_create_subtask: create subtasks under existing tasks
API ref: https://developers.asana.com/docs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add three new Trello MCP tools:
- trello_get_card: retrieve full card details with members/checklists/attachments
- trello_create_list: create new lists on boards
- trello_search_cards: full-text search across cards with board scoping
Update credential spec to include the new tool names.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add three new GitHub MCP tools:
- github_list_commits: query commits with author/date/branch filters
- github_create_release: create tagged releases with notes and draft support
- github_list_workflow_runs: monitor CI/CD pipeline runs with status filters
Update credential spec to include the new tool names.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add _GitHubClient methods for:
- list_commits: GET /repos/{owner}/{repo}/commits with sha/author/date filters
- create_release: POST /repos/{owner}/{repo}/releases with tag, notes, draft
- list_workflow_runs: GET /repos/{owner}/{repo}/actions/runs with filters
API ref: https://docs.github.com/en/rest
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add three new Telegram MCP tools:
- telegram_get_chat_member_count: retrieve group/channel membership size
- telegram_send_video: send video files via URL or file_id
- telegram_set_chat_description: update group/channel descriptions
Update credential spec to include the new tool names.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add _TelegramClient methods for:
- get_chat_member_count: getChatMemberCount API endpoint
- send_video: sendVideo with caption, parse_mode, duration support
- set_chat_description: setChatDescription for groups/channels
API ref: https://core.telegram.org/bots/api
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Register salesforce_delete_record, salesforce_search_records, and
salesforce_get_record_count in both Salesforce credential specs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add three new Salesforce MCP tools:
- salesforce_delete_record: DELETE /sobjects/{type}/{id}
- salesforce_search_records: SOSL full-text search via /search/
- salesforce_get_record_count: efficient COUNT() query for any SObject
API ref: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add three new Discord MCP tools:
- discord_get_channel: retrieve channel metadata (name, topic, type)
- discord_create_reaction: add emoji reactions to messages
- discord_delete_message: remove messages from channels
Update credential spec to include the new tool names.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add _DiscordClient methods for:
- get_channel: retrieve channel metadata via GET /channels/{id}
- create_reaction: add emoji reaction via PUT reactions endpoint
- delete_message: remove a message via DELETE messages endpoint
API ref: https://discord.com/developers/docs/resources
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Register notion_update_page, notion_archive_page, and
notion_append_blocks in the Notion credential spec.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add three new Notion MCP tools:
- notion_update_page: modify page properties via PATCH /pages/{id}
- notion_archive_page: archive or restore pages
- notion_append_blocks: add paragraphs, headings, lists, todos, etc.
API ref: https://developers.notion.com/reference
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Register jira_update_issue, jira_list_transitions, and
jira_transition_issue in all three Jira credential specs
(domain, email, token).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add three new Jira MCP tools:
- jira_update_issue: modify summary, description, priority, labels, assignee
- jira_list_transitions: discover available status transitions for an issue
- jira_transition_issue: move an issue to a new status with optional comment
API ref: https://developer.atlassian.com/cloud/jira/platform/rest/v3/
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add three new MCP tools:
- hubspot_delete_object: archive contacts, companies, or deals
- hubspot_list_associations: query links between CRM objects (v4 API)
- hubspot_create_association: link two CRM records together
Update credential spec to include the new tool names.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add _HubSpotClient methods for:
- delete_object: archive a CRM object via DELETE /crm/v3/objects
- list_associations: query associations via GET /crm/v4/objects associations endpoint
- create_association: link two CRM objects via PUT /crm/v4/objects associations endpoint
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The google_sheets.py file defined GOOGLE_SHEETS_CREDENTIALS (an API-key
based credential for reading public sheets via GOOGLE_SHEETS_API_KEY) but
was never wired into the package:
- Never imported in credentials/__init__.py
- Never merged into CREDENTIAL_SPECS
- Never listed in __all__
- Tool never calls credentials.get('google_sheets_key') — uses 'google' (OAuth2)
- Tool names in the spec were stale and did not match actual function names
The 'google' credential in email.py already correctly covers all Google
Sheets tools via OAuth2. This file was dead code with no referencing
imports anywhere in the repository.
Closes #6077
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Resolves inconsistency between CLAUDE.md/AGENTS.md (TUI deprecated) and
docs/roadmap.md (TUI listed as completed feature).
- Strike through TUI items in 3 roadmap sections
- Add deprecation note to TUI-to-GUI upgrade section
- Reference AGENTS.md and hive open as replacement
Fixes #5941
Signed-off-by: Robert Hallers <robert@terplabs.ai>
- Change get_chat client method from httpx.get+params to httpx.post+json
to avoid URL-encoding issues with @username chat IDs
- Remove {"success": True} normalization from delete_message,
send_chat_action, pin_message, and unpin_message MCP tools;
return raw Telegram API response consistently
- Update corresponding test mocks and assertions to match
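Why the GET-to-POST change matters can be shown without the real httpx client (request shapes below are illustrative): query-string encoding percent-encodes the `@` in `@username` chat IDs, while a JSON body carries it verbatim.

```python
import json
from urllib.parse import urlencode

def get_chat_query(chat_id: str) -> str:
    # Old shape: GET with query params; urlencode turns '@' into '%40'.
    return "getChat?" + urlencode({"chat_id": chat_id})

def get_chat_body(chat_id: str) -> bytes:
    # New shape: POST with a JSON body; the chat_id passes through as-is.
    return json.dumps({"chat_id": chat_id}).encode()
```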
Add 8 new operations to the Telegram Bot tool, bringing it from 2 to 10
operations. This covers message lifecycle (edit, delete, forward), media
(send photo), chat info (get chat), UX (typing indicators), and pin
management — making the tool practical for agent workflows beyond
fire-and-forget messaging.
New operations:
- telegram_edit_message: edit previously sent messages
- telegram_delete_message: delete messages
- telegram_forward_message: forward between chats
- telegram_send_photo: send photos via URL or file_id
- telegram_send_chat_action: show typing/uploading indicators
- telegram_get_chat: retrieve chat metadata
- telegram_pin_message: pin important messages
- telegram_unpin_message: unpin stale messages
Also includes input validation for chat actions, credential spec updates,
central registry wiring, and 31 new tests (52 total).
Closes #4808
Use is_file() instead of exists() to reject directories, and check
for empty content before passing to the JSON parser. Prevents raw
tracebacks on invalid agent.json inputs.
Fixes #5787
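Both guards can be sketched together (the loader name is illustrative; the `is_file()` and empty-content checks are from the commit):

```python
import json
from pathlib import Path

def load_agent_json(path_str: str) -> dict:
    # is_file() rejects directories that exists() would accept, and the
    # empty-content check avoids a raw traceback from the JSON parser.
    path = Path(path_str)
    if not path.is_file():
        raise ValueError(f"{path} is not a regular file")
    content = path.read_text()
    if not content.strip():
        raise ValueError(f"{path} is empty")
    return json.loads(content)
```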
- Replace joined.replace("\n", "\r\n") with re.sub(r"(?<!\r)\n", "\r\n", joined) to prevent \r\n in replace op new_content from becoming \r\r\n (fixed in both hashline_edit.py and file_ops.py)
- Track and report skipped large files in grep_search instead of silently skipping them
- Extract HASHLINE_MAX_FILE_BYTES constant to hashline.py as single source of truth, imported by view_file, grep_search, hashline_edit, and file_ops
- Add tests for CRLF replace op (both copies) and large file skip reporting
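The CRLF normalization above is small enough to show in isolation; the regex is taken directly from the change:

```python
import re

def to_crlf(text: str) -> str:
    # A plain text.replace("\n", "\r\n") turns an existing "\r\n" into
    # "\r\r\n"; the negative lookbehind converts only bare "\n".
    return re.sub(r"(?<!\r)\n", "\r\n", text)
```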
register_all_tools() now only loads verified (stable) tools by default.
Pass include_unverified=True to also load new/community integrations.
This prevents unverified tools from being loaded in production.
Also fixes duplicate register_brevo and register_pushover calls.
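The gating behavior can be sketched like this (the registry contents below are illustrative stand-ins; only the function name and flag come from the commit):

```python
VERIFIED_TOOLS = ["github", "slack", "jira"]   # placeholder examples
UNVERIFIED_TOOLS = ["mssql"]                   # placeholder example

def register_all_tools(include_unverified: bool = False) -> list:
    # Verified tools always load; community/unverified integrations are
    # opt-in, so production never picks them up by accident.
    tools = list(VERIFIED_TOOLS)
    if include_unverified:
        tools.extend(UNVERIFIED_TOOLS)
    return tools
```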
- Remove s3_tool (duplicate of aws_s3_tool), power_bi_tool (duplicate of
powerbi_tool), x_tool (duplicate of twitter_tool)
- Remove integrations/plaid (duplicate of plaid_tool), integrations/sap_s4hana
(duplicate of sap_tool), stray tools/mssql.py
- Add help key to credential error responses across 14 tool modules
- Fix health checker registry keys (calendly -> calendly_pat, lusha -> lusha_api_key)
- Add health_check_endpoint to calendly and lusha credential specs
- Fix Trello env var (TRELLO_TOKEN -> TRELLO_API_TOKEN) and remove duplicate
Trello specs from hubspot.py
- Add credential_group="aws" to AWS S3 and Redshift specs sharing env vars
- Update conftest UNREGISTERED_COMMUNITY_MODULES to only contain mssql_tool
Implements 5 tools via Tines REST API:
- tines_list_stories: List workflow stories with search/filter
- tines_get_story: Get story details including entry/exit agents
- tines_list_actions: List actions (agents) in stories
- tines_get_action: Get action details with sources/receivers
- tines_get_action_logs: Get action execution logs by level
Uses Bearer token auth with tenant domain.
Implements 4 tools via X API v2:
- twitter_search_tweets: Search recent tweets with query operators
- twitter_get_user: Get user profile by username
- twitter_get_user_tweets: Get user timeline
- twitter_get_tweet: Get tweet details by ID
Uses Bearer token auth (app-only, read access).
Implements 5 tools via QuickBooks Online API v3:
- quickbooks_query: Query entities with SQL-like syntax
- quickbooks_get_entity: Get entity by type and ID
- quickbooks_create_customer: Create customers
- quickbooks_create_invoice: Create invoices with line items
- quickbooks_get_company_info: Get company details
Uses OAuth 2.0 Bearer token auth. Supports sandbox mode.
Implements 5 tools via AWS S3 REST API:
- s3_list_buckets: List all buckets in the account
- s3_list_objects: List objects with prefix/delimiter filtering
- s3_get_object: Get object content and metadata
- s3_put_object: Upload text objects
- s3_delete_object: Delete objects
Uses AWS Signature V4 signing (no boto3 dependency).
Implements 5 tools via Calendly API v2:
- calendly_get_current_user: Get user URI and profile info
- calendly_list_event_types: List meeting templates
- calendly_list_scheduled_events: List booked meetings with date filters
- calendly_get_scheduled_event: Get event details by URI
- calendly_list_invitees: List invitees for an event
Uses Bearer token auth (Personal Access Token).
Implements 5 tools via PagerDuty REST API v2:
- pagerduty_list_incidents: List incidents with status/urgency/date filters
- pagerduty_get_incident: Get incident details by ID
- pagerduty_create_incident: Create incidents on a service
- pagerduty_update_incident: Acknowledge or resolve incidents
- pagerduty_list_services: List services with name search
Uses Token auth header, From header for write operations.
Implements 6 tools via Airtable Web API:
- airtable_list_records: List records with filters, sort, field selection
- airtable_get_record: Get a single record by ID
- airtable_create_records: Create up to 10 records per request
- airtable_update_records: Partial update up to 10 records per request
- airtable_list_bases: List accessible bases
- airtable_get_base_schema: Get table and field schema for a base
Uses Bearer token auth (Personal Access Token).
Implements 6 tools via MongoDB Atlas Data API:
- mongodb_find: Find documents with filters, projection, sort, limit
- mongodb_find_one: Find a single document
- mongodb_insert_one: Insert a document
- mongodb_update_one: Update a document with MongoDB operators
- mongodb_delete_one: Delete a document
- mongodb_aggregate: Run aggregation pipelines
Uses API key auth header. All endpoints are POST.
Implements 5 tools via Zendesk Support API v2:
- zendesk_list_tickets: List tickets with status/sort filters
- zendesk_get_ticket: Get ticket details by ID
- zendesk_create_ticket: Create tickets with priority/type/tags
- zendesk_update_ticket: Update ticket fields and add comments
- zendesk_search_tickets: Search tickets with Zendesk query syntax
Uses Basic auth (email/token:api_token).
6 tools: jira_search_issues, jira_get_issue, jira_create_issue,
jira_list_projects, jira_get_project, jira_add_comment. Uses Basic auth
with email + API token and Atlassian Document Format for text fields.
5 tools: cloudinary_upload, cloudinary_list_resources, cloudinary_get_resource,
cloudinary_delete_resource, cloudinary_search. Uses Basic auth with
API key/secret and supports image, video, and raw resource types.
- Implement 6 YouTube API tools (search videos, get video/channel details, list channel videos, get playlist items, search channels)
- Add YOUTUBE_API_KEY credential spec with help_url and description
- Register YouTube tool in tools/__init__.py
- Add comprehensive test coverage (18 tests) with mocking
- Add detailed README with setup instructions and examples
- Use httpx for HTTP requests to YouTube Data API v3
- Verified with real API integration testing
Implements #5603
The grace period logic for client-facing auto-blocks was placed after
_await_user_input(), which blocks forever since no inject_event is
scheduled for text-only turns. This caused
test_text_after_user_input_goes_to_judge to hang indefinitely, blocking
CI framework tests.
Move the grace period check before the blocking call so that within
the grace window, auto-blocks with missing outputs skip blocking
entirely and continue to the next LLM turn for judge RETRY pressure.
Also adds an _auto_missing check: nodes with no missing outputs
(e.g. queen monitoring with output_keys=[]) should still block
as their text-only output is legitimate conversation.
Fixes #5633
* fix: enforce 0600 permissions on OAuth token files
Credential files were written with default umask permissions.
Use os.open with explicit 0o600 mode to ensure token files
are always owner-read/write only, regardless of umask.
Fixes #5530
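The permission fix can be sketched as follows (the function name is illustrative; the `os.open` mode technique is from the commit):

```python
import os

def write_token_file(path: str, data: bytes) -> None:
    # os.open with an explicit mode creates the file as 0o600 (owner
    # read/write only); a plain open() would apply the process umask to
    # the default 0o666, which can leave group/other bits set.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
```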
* style: fix line too long in checkpoint_store.py
- Expose page parameter on search_people and search_companies
(client + MCP tool) enabling access beyond the first 50 results
- Add guard requiring at least one filter on both search endpoints
to prevent broad requests that burn API credits
- Add unit tests for pagination and empty filter validation
Add complete SAP S/4HANA integration with:
- Connector for OData API access
- Credential management following Hive patterns
- Unit tests with mocked responses
- Documentation and usage examples
Refs #3182
- Add S3Storage class with upload, download, list, delete operations
- Support IAM roles, environment variables, and credential store
- Implement retry logic with adaptive backoff
- Add MCP tools: s3_upload, s3_download, s3_list, s3_delete, s3_check_credentials
- Include comprehensive tests with moto mocking
- Add documentation for setup and IAM permissions
Closes #3012
- Replace non-existent CLI commands (calculate, interactive, analyze)
with actual commands (run, shell, info) in core/README.md
- Fix test-list argument from <goal_id> to <agent_path> in core/README.md
- Fix misleading docstring on MockProvider.complete_with_tools()
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
* micro-fix: fix wrong credential path and env var in docs
Both docs/configuration.md and docs/environment-setup.md reference a
non-existent ADEN_CREDENTIALS_PATH env var and wrong default path
(~/.aden/credentials). The actual env var is HIVE_CREDENTIAL_KEY and
the default path is ~/.hive/credentials (see storage.py:119,125).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* micro-fix: clarify HIVE_CREDENTIAL_KEY comment wording
Reword comment to avoid implying the env var controls the path.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Use Credentials.from_service_account_file() instead of mutating os.environ
- Remove unused dimensions param from _format_report_response
- Remove unused metrics param from _format_realtime_response
- Extract duplicated property_id/limit validation into _validate_inputs helper
- Add credential_group="google_cloud" to GA and BigQuery specs
- Update tests to mock Credentials class
Add read-only GA4 Data API v1 tools: ga_run_report, ga_get_realtime,
ga_get_top_pages, and ga_get_traffic_sources. Includes credential spec,
unit tests, and README.
Add IntercomHealthChecker (subclass of OAuthBearerHealthChecker) and
register it in HEALTH_CHECKERS so the credential registry completeness
test passes in CI.
- Pass assignee_type from intercom_assign_conversation tool function
through to _IntercomClient.assign_conversation() and into the API payload
- Add tests for assignee_type="team" passthrough at client and tool levels
- Add tool README with setup, usage examples, and error handling
Addresses PR #5171 review feedback from @bryanadenhq
Client-facing nodes auto-block on text-only turns (wait for user input).
This breaks weak models (Codex) that output text like "Understood" instead
of calling tools after user responds.
Add _cf_expecting_work state: after user input, text-only turns with
missing output keys skip auto-block and go to judge, which pushes the
LLM to call set_output. Tool calls reset the state back to presenting
mode (auto-block on next text-only).
No behavioral change for strong models (they always call tools after
user input, so the new code path is never triggered).
Judge feedback was saying "Use set_output tool to provide them" which
caused Codex to skip all work and call set_output directly. Changed to
"Follow your system prompt instructions to complete the work."
* Enhance ToolRegistry type inference for function parameters
- Add _infer_schema() helper to handle Union types (Union[T, U] and T | U)
- Support Optional[T] and Union[T, None] with correct optional flag
- Infer generic types: list[T] -> array with items schema, dict[K, V] -> object with additionalProperties
- Detect Pydantic BaseModel parameters and use model_json_schema()
- Correctly mark parameters as required/optional based on type annotations
- Add comprehensive test suite covering all type inference scenarios
- Maintain backward compatibility for unannotated parameters
* Fix asyncio.run crash in GraphBuilder.run_test
* Revert "Enhance ToolRegistry type inference for function parameters"
This reverts commit dacd0fa8b926e01d3f29e7c9b2ff5101b4a52c3b.
* feat(arxiv): implement search_papers and initial download_paper tools
* feat(arxiv): improve PDF download handling with temp files and validation (WIP)
Switch to NamedTemporaryFile for safer temp file handling
Force export.arxiv.org domain for PDF downloads
Add custom User-Agent header
Validate Content-Type to ensure PDF response
Improve error handling and cleanup logic
Add timeout to requests
Work in progress – download_paper still under refinement.
* feat(arxiv): replace NamedTemporaryFile with module-level TemporaryDirectory
Switch from NamedTemporaryFile(delete=False) to a shared _TEMP_DIR for
the lifetime of the server process. Scopes file lifetime to the session,
guarantees cleanup via atexit, and removes the need for manual file
handle management.
Expand README with full args/returns/error reference and implementation
notes explaining the temp storage design decision.
* test(arxiv): add comprehensive tests for search_papers and download_paper
fix(arxiv): return structured error instead of raising on invalid PDF content type
- Add full test coverage for search_papers (validation, success, id_list, errors)
- Add full test coverage for download_paper (success, network errors, invalid content, cleanup)
- Mock arxiv client and requests to isolate behavior
- Ensure partial files are cleaned up on failure
- Align download_paper behavior with tool contract (no exceptions, structured responses)
* style(tools): apply ruff formatting to arxiv tool and update lockfile
- Adds a new autonomous agent template that monitors competitor websites, news, and GitHub
- Implements a 7-node graph workflow to collect, aggregate, and analyze competitive data
- Generates a weekly structured HTML digest with key highlights and 30-day trends
- Utilizes existing web_scrape, web_search, and github MCP tools
- Addresses issue #4153. Closes #4153
* docs: fix CLI usage args for test-debug/test-list to match implementation
* docs: restore 'uv run' prefix to test commands
Reverts unintentional removal of 'uv run' in usage examples as requested in code review.
* chore: changes to .gitignore
* perf(json): add json.loads fast path + asyncio.to_thread for extract_json
Addresses maintainer feedback:
- json.loads candidate fast path in find_json_object (300x speedup source)
- asyncio.to_thread wrappers for both _extract_json call sites (unblocks event loop)
- Remove ~480 lines of over-engineered incremental parsing logic
Total: ~16 lines, zero duplication, zero API surface change
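The fast path described above can be sketched as follows. This is a minimal illustration of the idea, not the actual implementation: try `json.loads` on the whole candidate first, fall back to locating a `{...}` span only when that fails, and off-load the call with `asyncio.to_thread` so parsing never blocks the event loop. The slow-path scan here is deliberately simplified.

```python
import asyncio
import json

def find_json_object(text: str):
    """Return the first JSON object found in text, or None.

    Fast path: if the whole string is already valid JSON, one
    json.loads call avoids any character-by-character scanning.
    """
    candidate = text.strip()
    try:
        return json.loads(candidate)  # fast path: single C-level parse
    except json.JSONDecodeError:
        pass
    # Slow path (simplified sketch): try the outermost {...} span.
    start = candidate.find("{")
    end = candidate.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(candidate[start : end + 1])
        except json.JSONDecodeError:
            return None
    return None

async def extract_json(text: str):
    # Run parsing in a worker thread so large payloads
    # do not stall the event loop.
    return await asyncio.to_thread(find_json_object, text)
```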
* fix: simplify async JSON handling per maintainer feedback and align tests
* fix(test): replace tautology assertion in test_mismatched_then_valid
The original assertion `assert result is not None or result is None`
is always true. Replace with a meaningful type check.
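The difference is easy to see in miniature (values here are illustrative, not from the test under discussion): the tautology passes for any value, while a shape check actually constrains the result.

```python
# Tautology: true for every possible value of result, so it tests nothing.
result = {"ok": True}
assert result is not None or result is None  # always passes

# Meaningful replacement: pin down the type and shape actually expected.
assert isinstance(result, dict)
assert result.get("ok") is True
```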
---------
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
- Add brevo_tool with 6 MCP tools: brevo_send_email, brevo_send_sms,
brevo_create_contact, brevo_get_contact, brevo_update_contact,
brevo_get_email_stats
- Add CredentialSpec for BREVO_API_KEY in credentials/brevo.py
- Register brevo_tool in tools/__init__.py and credentials/__init__.py
- Add README with setup instructions and usage examples
- Add 34 unit tests covering all tools, validation and error handling
Closes #5127
- Bump framework version 0.5.0 → 0.5.1
- Add CHANGELOG.md with full release notes
Highlights: Hive Coder meta-agent, multi-graph runtime, TUI revamp,
subscription model support, 5 new tool integrations, deprecated node
type removal.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Bump framework version 0.5.0 → 0.5.1
- Add CHANGELOG.md with full release notes
Highlights: Hive Coder meta-agent, multi-graph runtime, TUI revamp,
subscription model support, 5 new tool integrations, deprecated node
type removal.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Previously the table listed ~20 of ~50 available tools. This expands
it to cover all tools, grouped into categories: File System, Data Files,
Web & Search, Communication, Productivity & CRM, Cloud & APIs,
Security, and Utilities.
All tool names verified against registered @mcp.tool() functions in source.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- search_people: replaced freetext searchText concatenation with proper
structured Lusha API filters (jobTitles, seniority as list[int],
departments, locations as dict, company_names, industry_ids, search_text)
- search_companies: added locations, company_names, search_text params;
made all params optional for flexible queries
- Pagination: exposed limit param (clamped 10-50 per Lusha API constraints)
on both search tools, replacing hardcoded size=25
- get_signals: changed ids from list[str] to list[int], removed internal
str-to-int conversion as Lusha IDs are always numeric
- seniority type corrected to list[int] (API rejects string-encoded values
despite OpenAPI spec suggesting strings — verified via live integration)
- Unit tests updated for all changes (19/19 pass)
Verified against live Lusha API: all 6 tools return correct responses.
- search_companies: replace names filter with mainIndustriesIds (numeric
industry IDs) per Lusha API schema. Parameter changed from
industry: str to industry_ids: list[int] | None.
- _get_api_key: return None instead of raising TypeError on unexpected
credential type. Lets _get_client handle it with the standard error dict
pattern used across all tools.
- Updated unit tests for new industry_ids parameter and added test for
non-string credential handling.
Lusha API rejects filters.companies.include.searchText (HTTP 400).
Replaced with valid 'names' field in search_companies and removed
redundant company searchText from search_people. Updated unit tests.
Implements AI-powered web search, content extraction, and research tools
via the Exa API for agent workflows.
Tools: exa_search, exa_find_similar, exa_get_contents, exa_answer
Follows existing tool pattern (web_search_tool, hubspot_tool, slack_tool):
- register_tools(mcp, credentials) with @mcp.tool() decorators
- Credential fallback: CredentialStoreAdapter -> EXA_API_KEY env var
- Error handling: always returns dicts, never raises
- Retry with exponential backoff on HTTP 429
Includes:
- Neural/keyword search with domain, date, and category filters
- Similar page discovery via neural embeddings
- Content extraction from up to 10 URLs per request
- Citation-backed answer generation
- CredentialSpec in credentials/search.py
- Comprehensive unit tests (21 tests)
- 500/500 integration CI tests passing
Fixes #4177
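The retry-with-exponential-backoff behavior can be sketched generically. This is an assumption-laden illustration, not the tool's actual code: the request callable and `sleep` are injected so the pattern is testable without network access, and the delay doubles each attempt.

```python
import time

def request_with_retry(send, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Call send() and retry on HTTP 429 with exponential backoff.

    `send` returns an object with a .status_code attribute (for
    example an httpx.Response). Delays: 1s, 2s, 4s, ...
    """
    for attempt in range(max_retries + 1):
        response = send()
        if response.status_code != 429:
            return response
        if attempt < max_retries:
            sleep(base_delay * (2 ** attempt))
    return response  # still rate-limited after exhausting retries
```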
- Fix export endpoint: /Export -> /ExportTo
- Add 202 Accepted response handling
- Add notifyOption to refresh_dataset API call
- Rename format parameter to export_format (avoid shadowing builtin)
- Add PNG support to export formats
- All critical API issues from review addressed
Terminals without extended key reporting (VS Code, Cursor) send
identical events for Enter and Shift+Enter, making it impossible
to insert newlines. Ctrl+J produces a distinct key event in all
terminals.
* feat(tools): add Razorpay payment processing integration
Add Razorpay MCP tool integration for payment processing, invoicing,
and refund management. Implements 6 MCP tools:
- razorpay_list_payments: List recent payments with filters (pagination, date range)
- razorpay_get_payment: Fetch detailed payment information by ID
- razorpay_create_payment_link: Create one-time payment links with shareable URLs
- razorpay_list_invoices: List invoices with status and type filtering
- razorpay_get_invoice: Fetch invoice details including line items
- razorpay_create_refund: Create full or partial refunds for payments
Features:
- Authentication via HTTP Basic Auth (RAZORPAY_API_KEY + RAZORPAY_API_SECRET)
- Credential spec in dedicated razorpay.py (follows repo pattern)
- Comprehensive error handling (401, 403, 404, 400, 429, 500, timeouts)
- Input validation (payment IDs, invoice IDs, amounts, currencies)
- Full test coverage (42 unit tests, 26 integration tests)
Closes #4404
* style: fix ruff I001 import order and W291 in tools
* fix: improve Razorpay credential tracking and validation
- Add razorpay_secret CredentialSpec with credential_group
- Fix amount=0 bug by using 'is not None' checks
- Add regex validation for payment/invoice IDs
* fix: use graceful credential handling instead of raising TypeError
Match codebase convention (calcom, lusha) - return None for non-string
credentials instead of raising TypeError, so the tool returns an error
dict instead of crashing.
---------
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
* feat(calendar): add Google Calendar integration with event management tools and health checks
* fix(calendar): align google_calendar_oauth credential spec with codebase pattern
Correct hyphenation and spelling in the product roadmap: change 'outcome oriented' to 'outcome-oriented' and fix 'Workder' to 'Worker' in the Deployment section.
Implements comprehensive Apify integration for web scraping and automation:
- Added 4 new tools: apify_run_actor, apify_get_dataset, apify_get_run, apify_search_actors
- Credential management for APIFY_API_TOKEN with health check
- Support for synchronous (wait=True) and asynchronous (wait=False) actor execution
- Actor ID validation and comprehensive error handling
- Full test coverage (26 tests passing)
- README with usage examples and documentation
Addresses #4510
The bullet character (•) cannot be displayed properly in PowerShell
on some Windows systems. Use ASCII dash (-) instead for compatibility.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add Zoho CRM MCP integration for lead/contact/account/deal workflows with notes support. Implements 5 MCP tools:
- zoho_crm_search: Search Leads/Contacts/Accounts/Deals by criteria or word with pagination
- zoho_crm_get_record: Fetch a single record by module and ID
- zoho_crm_create_record: Create records with pass-through field payloads
- zoho_crm_update_record: Update records by ID with partial field payloads
- zoho_crm_add_note: Create notes linked to CRM records via Parent_Id mapping
Features:
- Zoho OAuth2 provider added in core credentials (refresh-token flow)
- Zoho auth format: Authorization: Zoho-oauthtoken <token>
- Region/DC-aware routing using accounts domain/region + api_domain usage
- Persisted DC metadata on refresh (api_domain/accounts_domain/location)
- Credential spec and health check registration for zoho_crm
- Tool registration and allowed-tool list updates
- Normalized tool responses with retriable 429 handling
- README with setup, auth modes, usage, and testing instructions
- Comprehensive unit/integration coverage updates for tool, provider, and health checks
Validation:
- Scoped ruff lint/format checks passed
- Targeted test suite passed: 563 passed, 18 skipped
Closes #4418
* feat(tools): add Excel tool for reading/writing .xlsx/.xlsm files
Add excel_tool module with 7 MCP tools:
- excel_read: Read data from Excel files with pagination
- excel_write: Create new Excel files
- excel_append: Append rows to existing Excel files
- excel_info: Get file metadata (sheets, columns, rows)
- excel_sheet_list: List sheet names in a file
- excel_sql: Query Excel with SQL (DuckDB), multi-sheet support
- excel_search: Search values across sheets with match options
Includes 56 tests, openpyxl optional dependency, and documentation.
Fixes #2675
* fix(tools): address excel review feedback and stabilize tests
Implements geocoding, routing, and location intelligence via Google Maps
Platform Web Services APIs for logistics, delivery, and location-based
agent workflows.
Tools: maps_geocode, maps_reverse_geocode, maps_directions,
maps_distance_matrix, maps_place_details, maps_place_search
Includes:
- page_token support for paginated place search results
- GoogleMapsHealthChecker for credential validation
- Comprehensive unit tests (42 tool tests, 30 health check tests)
Closes #3179
* feat(vision): add GCP Vision API integration
* refactor(vision): move GCP Vision credentials to dedicated folder
* fix: clean up credentials imports and updated gitignore
* followed ruff alphabetic order for credentials
Brings in append_data tool, continuous conversation mode, conversation
judge, phase compaction, and prompt composer from the memory-inheritance
feature branch.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Shows clear success rate when some exclusions fail (e.g., "Only 2/3
exclusions added (67%)"). Helps users understand that performance
benefit may be reduced when not all paths are excluded.
- Calculates success rate as percentage
- Shows "X/Y added (Z%)" format for clarity
- Warns that performance benefit may be reduced
- Better visibility of partial failures
Improves user awareness of partial installation issues.
Adds Test-IsDefenderEnabled helper and rechecks Defender status and
paths before adding exclusions. Prevents adding ineffective exclusions
if Defender was disabled during user prompt (race condition).
- New helper: Test-IsDefenderEnabled() for quick boolean check
- Recheck Defender status immediately before Add operation
- Recheck paths to detect if added by another process
- Clear messages if status changed or already added
Fixes race condition where user could disable Defender or another
process could add exclusions during the prompt delay.
Validates that paths are within safe boundaries (project directory or
user AppData) before excluding them from Defender. Prevents accidental
or malicious exclusion of system directories.
- Adds safePrefixes validation (project dir + user AppData only)
- Checks each path against allowed prefixes
- Normalizes paths for consistent comparison
- Warns but processes non-existent paths (they may be created later)
Fixes potential security issue where modified script could exclude
system directories like C:\Windows or C:\Program Files.
- Subclass TextArea as ChatTextArea to intercept Enter key before
the base class swallows it (fixes submission not triggering)
- Remove event.shift access that raises AttributeError on Key events
- Make action_show_sessions directly call _submit_input instead of
just placing text in the widget
- Define repo root at top; lead with quick start (3 steps)
- Add 'What you get' and prerequisites in one place
- Full setup step-by-step; troubleshooting: problem → fix
- Manual MCP config as single section; verification optional
- Plain language, scannable structure, no duplicate sections
- Add scripts/setup-antigravity-mcp.sh to auto-detect repo root and write
~/.gemini/mcp.json with absolute paths (no manual path editing)
- Lead docs with Quick start (3 steps) and note ./ vs / for the script
- README: point to one-command setup; clarify script runs from repo folder
- Add step to run core/setup_mcp.sh first
- Document that IDE often loads ~/.claude/mcp.json, not project config
- Add Option A: copy to ~/.claude/mcp.json with absolute cwd paths
- Note that cwd schema warning in IDE is a false positive
- Renumber setup steps (1–5)
- Add .antigravity/mcp_config.json with agent-builder and tools MCP servers
- Add .antigravity/skills/ with symlinks to .claude/skills/ (5 skills)
- Add docs/antigravity-setup.md with setup and troubleshooting
- Update README.md with Antigravity IDE support section
- Update DEVELOPER.md and docs/contributing-lint-setup.md with Antigravity refs
Mirrors Cursor integration for consistent multi-IDE support.
Fixes #4445
Moved repeated inline imports (logging, json, re) to module-level:
- Eliminates import overhead on every method call
- Follows PEP 8 conventions
- Added module-level logger instance
- re is used at line 259 (re.search)
Changes:
- 4 lines added (imports + logger)
- 13 lines removed (inline imports)
- No functional changes
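The shape of the change, in a hypothetical function (names and regex here are illustrative only): imports and the logger live at module level per PEP 8, so no import machinery runs inside the hot path.

```python
# Before: each call re-executed import statements inside the method body.
#     def handle(self, raw):
#         import json, logging, re
#         ...
# After: module-level imports and a module-level logger.
import json
import logging
import re

logger = logging.getLogger(__name__)

def extract_status(raw: str):
    """Parse a JSON payload and pull a status token via regex."""
    data = json.loads(raw)
    match = re.search(r"status=(\w+)", data.get("message", ""))
    if match:
        logger.debug("parsed status %s", match.group(1))
        return match.group(1)
    return None
```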
Replace single-line Input widget with TextArea in chat REPL so
Shift+Enter inserts newlines and multiline paste works correctly.
Add Windows clipboard support (clip.exe) and xsel fallback for Linux.
Implements 5 tools as proposed in #3224:
- scholar_search: Search Google Scholar for academic papers
- scholar_get_citations: Get citation formats (MLA, APA, Chicago, etc.)
- scholar_get_author: Author profiles with h-index, i10-index, metrics
- patents_search: Search Google Patents with filters
- patents_get_details: Detailed patent information by publication number
Follows existing tool pattern (web_search_tool, hubspot_tool, slack_tool):
- register_tools(mcp, credentials) with @mcp.tool() decorators
- _SerpAPIClient internal class for HTTP calls via httpx
- Credential fallback: CredentialStoreAdapter -> SERPAPI_API_KEY env var
- Error handling: always returns dicts, never raises
25 unit tests + live integration tests verified.
697/697 full test suite passing.
Fixes #3224
Changes logger.info() with debug prefix to logger.debug() for
session state resume information. This prevents debug-level
information from appearing in production logs at INFO level.
- Removes redundant '🔍 Debug:' prefix
- Uses appropriate debug logging level
- Follows Python logging best practices
- Improves production log clarity
Addresses #4377
- Fix Getting Started steps 6/7 duplicated; renumber to 8 and 9
- Add command to run tools package tests (cd tools && uv run pytest)
Co-authored-by: Cursor <cursoragent@cursor.com>
- Keep both APOLLO_CREDENTIALS and AIRTABLE_CREDENTIALS
- Keep both apollo_tool and airtable_tool imports (alphabetical)
Co-authored-by: Cursor <cursoragent@cursor.com>
- Keep both APOLLO_CREDENTIALS and CALENDLY_CREDENTIALS
- Keep both apollo_tool and calendly_tool imports (alphabetical)
Co-authored-by: Cursor <cursoragent@cursor.com>
- Add 429 handling with retry_after from Retry-After header
- Add _request_with_retry (2 retries) for all API calls
- Update tests to use httpx.request
Co-authored-by: Cursor <cursoragent@cursor.com>
- Add 429 handling with retry_after from Retry-After header
- Add _request_with_retry (2 retries) for all API calls
- Validate get_availability date range <= 7 days
- Update tests to use httpx.request
Co-authored-by: Cursor <cursoragent@cursor.com>
- Add comprehensive resumable session functionality
- Immediate pause with Ctrl+Z and /pause command
- Auto-save state on quit
- Session management with /resume and /sessions commands
- Full memory and conversation history restoration
- See CHANGELOG.md for complete list of changes
Reorganized imports in __init__.py for clarity and consistency. Cleaned up formatting and comments in wikipedia_tool.py. Enhanced test_wikipedia_tool.py by improving patching targets, clarifying comments, and refining test structure for better maintainability.
Reorders imports in tools/__init__.py for clarity and groups web and PDF tools together. Updates Wikipedia tool tests to patch httpx.get using the correct import path, ensuring mocks work as intended. Removes unnecessary print statement in Wikipedia tool error handling.
Introduces a new 'search_wikipedia' tool for searching Wikipedia and retrieving article summaries using the public Wikipedia REST API. Updates documentation and tool registration, and adds unit tests for the new tool.
Implements Reddit API integration for community management and content monitoring.
Features:
- Search & Monitoring: search posts/comments, get subreddit feeds (new/hot), get posts/comments (6 tools)
- Content Creation: submit posts, reply, edit, delete comments (5 tools)
- User Engagement: get profiles, upvote, downvote, save posts (4 tools)
- Moderation: remove/approve posts, ban users (3 tools)
Implementation:
- OAuth 2.0 authentication via REDDIT_CREDENTIALS
- PRAW library for Reddit API integration
- Comprehensive error handling and validation
- Full test coverage (25 tests passing)
Resolves #3595
Add test_x_credentials_share_credential_group to verify all X credentials
share the 'x' credential group. Update test_credential_group_default_empty
to account for X credentials alongside existing Google exceptions.
- Added `x_send_dm` tool using v2 endpoint (`POST /dm_conversations/with/:id/messages`) for reliable 1:1 messaging.
- Fixed 403 Forbidden payload validation errors by simplifying DM payload structure.
- Enhanced `_handle_response` in `x_tool.py` to return raw API error details for 403/400 responses, aiding in permission debugging.
- Updated `demo_x_tools.py` to support standard `.env` variable names (e.g., `X_API_KEY`) and added user lookup for DM testing.
- Added unit tests covering new DM functionality and payload verification in `test_x_tool.py`.
- Audited credential handling: Read-only tools (Search/Mentions) correctly use Bearer Token, while Write tools (Post/Reply/Delete/DM) enforce OAuth 1.0a User Context.
Verified with live API tests (see PR description for logs).
Fixes #2692
Added steps to configure the upstream remote and sync the main branch
before creating a feature branch. This helps contributors avoid starting
from stale code and reduces merge conflicts.
Covers:
- ON_FAILURE edge followed when node fails after max retries
- Original termination behavior preserved when no ON_FAILURE edge exists
- ON_FAILURE edge not followed on success (only ON_SUCCESS fires)
- ON_FAILURE routing with max_retries=0 (no retries)
- Failure handler appears in execution path and node_visit_counts
- Implement 5 core functions for data warehouse querying
- Add boto3 integration with Redshift Data API
- Security: Read-only SELECT queries by default
- Full credential store support
- 26/26 tests passing (100% coverage)
- Complete documentation with examples
Resolve conflict in tools/mcp_server.py: take main's
CredentialStoreAdapter.default() which encapsulates the same
CompositeStorage logic our branch had inline.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix max_node_visits blocking executor retries: the visit count was
incremented on every loop iteration including retries, causing nodes
with max_node_visits=1 (default) to be skipped on retry. Added
_is_retry flag to distinguish retries from new visits via edge
traversal.
- Fix 20 UP042 lint errors: replace (str, Enum) with StrEnum across
14 files. Python 3.11+ StrEnum is preferred and enforced by ruff.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix(mcp): handle missing exports path in test generation tools
* fix(mcp): centralize agent path validation across test tools
* fix: remove duplicate if blocks and improve error hint message
---------
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
prompt_choice used return codes to pass selections. Combined with set -e,
non-zero returns (options 2-6) caused immediate script exit.
Fix: Use global variable PROMPT_CHOICE instead of return codes.
Addresses all blockers and suggestions from code review:
**Blockers fixed:**
1. Register tools in tools/__init__.py - Added import, registration call,
and all 13 tool names to return list
2. Add credential spec - Created GitHub entry in credentials/integrations.py
with env_var, tools list, help URL, and health check config
3. Move tests to correct location - Relocated from
tools/src/.../github_tool/tests/ to tools/tests/tools/test_github_tool.py
4. Removed .claude/settings.local.json from PR
**Security improvements:**
1. URL parameter sanitization - Added _sanitize_path_param() to reject
path traversal attempts (/ or ..) in owner, repo, branch, username params
2. Error message sanitization - Added _sanitize_error_message() to prevent
token leaks from httpx.RequestError exceptions
All 38 tests passing.
Add optional cc and bcc parameters to send_email and
send_budget_alert_email. Empty strings and whitespace-only values are
filtered out via _normalize_recipients to prevent invalid payloads.
Integrate a mail service to enable email notifications for budget alerts.
Closes #7.
New tools:
- send_email: general-purpose email sending with multi-provider support
- send_budget_alert_email: formatted budget alert notifications with
severity levels (INFO/WARNING/CRITICAL/EXCEEDED)
Architecture:
- Multi-provider pattern (matching web_search_tool), Resend as primary
- from_email resolved via explicit param or EMAIL_FROM env var
- Credential integration via CredentialManager with env var fallback
Also fixes: web_scrape_tool test mock missing content-type header
- Fix ScreenStackError crash by moving runtime init inside async context
- Implement selectable logging with TextArea widget
- Create interactive ChatREPL for agent execution
- Optimize 3-pane layout: logs/graph on left (60%), chat on right (40%)
- Add thread-safe event handling with call_from_thread
- Add TUI selection guide documentation
All features tested and working.
The TUI feature is fully functional with a minimal, stable interface:
- Header with title
- Central display area
- Footer with keybindings
This provides a foundation for future enhancements. Custom widgets
(LogPane, GraphOverview, ChatRepl) are available in the codebase
and can be integrated once Textual rendering issues on Windows are
resolved.
The --tui flag successfully launches the TUI dashboard for any agent:
hive run <agent_path> --tui
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Defer widget creation from __init__ to compose() and store references
as instance variables to prevent garbage collection and initialization
order issues. This resolves ScreenStackError during TUI startup.
Changes:
- Move LogPane, GraphOverview, ChatRepl creation to compose()
- Store widgets as instance variables (self.log_pane, etc)
- Restore Horizontal/Vertical container layout
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
The compose() method was missing the proper container structure
for the layout, which caused initialization to fail. Restored the
Horizontal/Vertical container layout with proper pane structure.
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Implement a Textual-based terminal UI for the hive framework that displays:
- Agent execution status and progress
- Log output in real-time
- Graph visualization of agent execution flow
- Interactive REPL for user input/feedback
Changes:
- Add core/framework/tui/ module with AdenTUI app and custom widgets
- Add LogPane widget for streaming log output
- Add GraphOverview widget for execution graph visualization
- Add ChatRepl widget for interactive user input
- Add TUILogHandler for capturing framework logs to TUI
- Update cli.py to support --tui flag for launching dashboard
- Update runner.py to enable AgentRuntime when TUI is active
- Fix missing Textual container imports (Horizontal, Vertical, Container)
- Fix race conditions in async TUI initialization
- Fix threading issues in app event handling
The TUI is launched via: hive run <agent_path> --tui
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Guard against empty parent_dir when path has no directory component (e.g., 'data.csv'). Prevents FileNotFoundError from os.makedirs(''). Adds test coverage for root-level file writes.
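The guard boils down to one truthiness check, sketched here in a hypothetical writer function: `os.path.dirname('data.csv')` is `''`, and `os.makedirs('')` raises `FileNotFoundError`, so directory creation must be skipped for bare filenames.

```python
import os

def write_text(path: str, content: str) -> None:
    """Create parent directories only when the path actually has one."""
    parent_dir = os.path.dirname(path)
    if parent_dir:  # '' for root-level files such as 'data.csv'
        os.makedirs(parent_dir, exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
```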
- Fix race condition: cache now updates only after successful write
- Fix cache invalidation: summary cache invalidated on save_run()
- Add 4 tests to verify the fixes
* Fixed Python and pip version mismatch with more robust detection #476
- Search for the Python version across all available interpreters (python3, python, and py -3), structured so new interpreters are easy to add.
- Ensure pip is resolved for the selected Python interpreter.
- Generalize variables such as PYTHON_VERSION for flexibility.
- Split PYTHON_VERSION into major and minor components for more robust comparison.
- Add clear documentation throughout the script.
Related to issue #476
* fix(setup): Code fixes raised during review by @Hundao
- Initialize PYTHON_CMD to an empty value, fixing the stale-value bug.
- Rename the generalized PYTHON_VERSION to REQUIRED_PYTHON_VERSION to avoid a name collision.
- Quote "${POSSIBLE_PYTHONS[@]}" so the 'py -3' entry works.
Pending:
- eval-related issues still open.
* fix(setup): Code fixes raised during review by @Hundao
- Removed eval altogether.
- Replaced 'py -3' with 'py' in POSSIBLE_PYTHONS; '-3' is re-appended after interpreter selection.
* fix(setup): Code fixes raised during review by @bryanadenhq
- Implemented an array and refactored the script; PYTHON_CMD is now used consistently throughout.
- Removed redundant code and adjusted the design slightly for readability (see screenshots).
- Standardized on 2>&1 redirection, fixing the inconsistent usage.
Resolved conflict in scripts/setup-python.sh by keeping upstream's
improved formatting with color codes and ${PYTHON_CMD} variable.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Restore MCP server configurations in .mcp.json with updated paths
for separate virtual environments (core/.venv and tools/.venv)
- Align Python version: change .python-version from 3.13 to 3.11
to match pyproject.toml target version
- Remove AGENTS.md as suggested (quickstart is sufficient)
- Document cross-package imports and separate venv architecture
in ENVIRONMENT_SETUP.md
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Resolved conflict in tools/pyproject.toml by keeping the expanded format
with sql dependency from main.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Skip closing an issue as duplicate of another that is already closed
(avoids circular closure when bot and human close in opposite order).
- Skip when duplicate target is self (same issue number).
- Extract testable helpers: isDupeComment, isDupeCommentOldEnough,
authorDisagreedWithDupe, getLastDupeComment, decideAutoClose.
- Add 23 unit tests (Bun) and run them in CI before auto-close step.
- Add scripts/AUTO_CLOSE_DUPLICATES_CROSS_VERIFY.md for impact summary.
Co-authored-by: Cursor <cursoragent@cursor.com>
Track retries, failed nodes, and execution quality (clean/degraded/failed) to expose retry metrics in ExecutionResult. This allows dashboards and monitoring to distinguish between clean success and degraded success with retries.
The condition_map in _load_from_dict was missing the llm_decide
mapping, causing goal-aware routing to break for exported agents.
When agents with LLM_DECIDE edges were exported and re-imported,
the edge conditions were silently defaulted to ON_SUCCESS instead
of preserving the LLM_DECIDE routing logic.
This fix ensures that agents exported with goal-aware routing edges
maintain their correct behavior after re-import.
- Add Content-Type validation to skip non-HTML content
- Simplify max_length validation using max() and min()
- Improve title extraction with cleaner code
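The `max()`/`min()` simplification referenced above replaces branchy if/elif validation with a single clamp expression (the bounds below are illustrative, not the tool's real limits):

```python
def clamp_max_length(value: int, lo: int = 100, hi: int = 50_000) -> int:
    """Clamp a requested max_length into [lo, hi] in one expression.

    max() enforces the floor, min() enforces the ceiling.
    """
    return max(lo, min(hi, value))
```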
The MCP integration example referenced WorkflowBuilder which doesn't exist.
Changed to GraphBuilder which is the correct class name.
Fixes import error when running: python core/examples/mcp_integration_example.py
* config: add .gitattributes for cross-platform line ending consistency
- Add comprehensive .gitattributes to normalize line endings
- Ensure shell scripts always use LF (required for Unix execution)
- Mark binary files explicitly to prevent corruption
- Eliminate CRLF warnings for Windows contributors
- Follow cross-platform best practices
This fixes persistent 'LF will be replaced by CRLF' warnings that
confuse Windows contributors during normal git operations.
Fixes #950
* fix: add trailing newline at end of file
Per review feedback from @Hundao
* feat(tools): add Google Custom Search as alternative to Brave Search
Adds google_search tool using Google Custom Search API as an alternative
to the existing web_search tool (Brave Search).
Changes:
- Add google_search_tool with full implementation
- Register Google credentials (GOOGLE_API_KEY, GOOGLE_CSE_ID)
- Register tool in tools/__init__.py
- Add README with setup instructions
Closes #793
* test(tools): add unit tests for google_search tool
Adds 7 tests mirroring web_search_tool test patterns:
- Missing API key error handling
- Missing CSE ID error handling
- Empty query validation
- Long query validation
- num_results clamping
- Default parameters
- Custom language/country parameters
All tests pass.
* refactor(tools): add multi-provider support to web_search tool
BREAKING CHANGE: None - backward compatible. Brave remains default.
- Add Google Custom Search as alternative provider in web_search
- Add 'provider' parameter: 'auto' (default), 'google', 'brave'
- Auto mode tries Brave first for backward compatibility
- Remove separate google_search_tool (consolidated into web_search)
- Update tests to cover multi-provider functionality (13 tests)
- Update README documentation
Users with BRAVE_SEARCH_API_KEY: No changes needed
Users with GOOGLE_API_KEY + GOOGLE_CSE_ID: Can use provider='google'
Users with both: Brave preferred by default, use provider='google' to force
Closes #793
* feat(tools): fixed readme
---------
Co-authored-by: Mustafa Abdat <abdamus@hilti.com>
The repository does not include docker-compose files, but multiple docs
claimed "Docker Compose deployment out of the box." This was left over
from a previous release.
Changes:
- README.md: Update FAQ to describe Python package deployment
- README.ko.md: Same update for Korean translation
- docs/configuration.md: Remove "Docker Compose Integration" section
and docker compose commands
- docs/quizzes: Update tasks that referenced docker-compose.yml
- .github/CODEOWNERS: Remove docker-compose*.yml entry
- scripts/setup.sh: Remove docker-compose.override.yml copy step
Fixes #923
Centralized _get_api_key in prompts.py to support OpenAI, Cerebras, and Groq via environment variables while maintaining Anthropic support through CredentialManager.
Previously, when exports/ was missing or empty, the bash glob
`exports/*/` would not match anything and the loop would silently
do nothing. The job would pass without actually validating anything,
which was misleading.
Now the job:
- Explicitly checks if exports/ directory exists
- Uses nullglob to handle empty directories properly
- Logs clear messages when skipping validation
- Reports the number of agents validated when successful
Fixes #887
The "Available Tools" table listed `execute_command` but the actual
registered name is `execute_command_tool`. This aligns the docs with
the runtime name in __init__.py and the tool's own README.
Fixes #901
Wrap json.loads() calls in try/except blocks for add_node() and update_node()
functions to match the error handling pattern used elsewhere in the file.
Fixes #907
- Add output_model field to NodeSpec for specifying Pydantic model
- Add max_validation_retries field (default: 2) for retry configuration
- Add validation_errors field to NodeResult for error tracking
- Implement validate_with_pydantic() in OutputValidator
- Implement format_validation_feedback() for LLM retry prompts
- Auto-generate JSON schema from Pydantic model for response_format
- Add retry loop that feeds validation errors back to LLM
- Add 28 comprehensive tests covering all new functionality
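The retry loop described above can be sketched as follows. A plain validator callable stands in for the Pydantic model, and every name here is illustrative rather than the framework's actual API:

```python
def run_with_validation(call_llm, validate, max_validation_retries: int = 2):
    """Call the LLM, validate its output, and feed validation errors back
    as corrective feedback for up to max_validation_retries extra attempts."""
    feedback = None
    errors: list[str] = []
    for _ in range(max_validation_retries + 1):
        output = call_llm(feedback)          # feedback is None on first try
        try:
            return validate(output), errors  # success: output plus error history
        except ValueError as exc:
            errors.append(str(exc))
            feedback = f"Previous output failed validation: {exc}. Please fix it."
    raise RuntimeError(f"Output still invalid after retries: {errors}")
```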
Allow LLMJudge to accept any LLMProvider instance instead of being
hardcoded to use Anthropic. This aligns with the framework's pluggable
LLM design and enables users to:
- Use the same LLM provider across their agent and tests
- Run tests with cheaper or local models
- Avoid requiring an Anthropic API key for testing
Backward compatible: existing code using LLMJudge() without arguments
continues to work by falling back to Anthropic.
Closes #477
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Fixes #755
Problem:
The stop() method had a critical race condition where _flush_pending() and
_batch_task competed for queue items, causing:
- Data loss during shutdown
- Queue items processed twice or lost
- Batch writer cancelled mid-write
Root Cause:
The method called _flush_pending() while _batch_task was still running.
Both operations drained the same queue simultaneously, leading to conflicts.
Solution:
Reordered shutdown sequence to:
1. Cancel batch task first
2. Wait for task completion (handles CancelledError with final flush)
3. Then flush any remaining items
This eliminates queue competition because:
- _batch_writer() flushes its current batch when cancelled
- After cancellation completes, _flush_pending() safely processes remaining items
- No race condition, no data loss
Changes:
- Moved batch task cancellation before _flush_pending()
- Ensures clean shutdown sequence
- Prevents queue drain conflicts
Testing:
- All 209 tests pass
- No duplicate flushes
- Clean shutdown guaranteed
Impact:
- Prevents data loss during graceful shutdown
- Eliminates race condition between flush operations
- Ensures all writes complete before stop returns
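The corrected shutdown ordering can be illustrated with a minimal, self-contained sketch — the class and method names are hypothetical, not the project's actual ones:

```python
import asyncio

class BatchLogger:
    """Sketch of the fix: cancel the batch task, await it (it stops
    consuming on CancelledError), and only then drain the queue."""
    def __init__(self):
        self.queue: asyncio.Queue = asyncio.Queue()
        self.written: list = []
        self._task: asyncio.Task | None = None

    async def start(self):
        self._task = asyncio.create_task(self._batch_writer())

    async def _batch_writer(self):
        try:
            while True:
                self.written.append(await self.queue.get())
        except asyncio.CancelledError:
            pass  # stop() performs the final flush after cancellation completes

    async def stop(self):
        if self._task:
            self._task.cancel()          # 1. cancel batch task first
            try:
                await self._task         # 2. wait for task completion
            except asyncio.CancelledError:
                pass
        while not self.queue.empty():    # 3. then flush remaining items
            self.written.append(self.queue.get_nowait())
```

Because the writer is fully stopped before the flush begins, the two can never drain the queue concurrently.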
- Treat run_coroutine_threadsafe race (RuntimeError) as expected: mark cleanup_attempted and log debug
- Mark cleanup_attempted on timeout/errors to avoid misleading fallback
- Add warning when loop thread fails to terminate within join timeout
- Make CancelledError best-effort (log, no re-raise) for session and stdio cleanup
Changes based on Copilot AI review (2 issues):
1. Thread join timeout was shorter than cleanup timeout (Issue #1):
- Changed _THREAD_JOIN_TIMEOUT from 5 to 12 seconds
- Must be >= cleanup timeout (10s) plus buffer for loop.stop()
- Prevents thread abandonment during active cleanup
2. Added detailed comment for redundant None assignments (Issue #2):
- Explained why we set _session/_stdio_context to None even if
_cleanup_stdio_async() already did it
- Documents the safety cases: timeout, failure, skip, cancellation
- Makes code intent clear for future maintainers
Changes based on Copilot AI review (3 issues):
1. Increased thread join timeout (Issue #1):
- Changed from 2 to 5 seconds
- Made proportional to cleanup timeout
- Defined as class constant _THREAD_JOIN_TIMEOUT
2. Handle asyncio.CancelledError explicitly (Issue #2):
- Added separate except clause for CancelledError
- Logs specific warning for cancelled cleanup
- Re-raises CancelledError as per asyncio best practices
- Added for both session and stdio_context cleanup
3. Increased cleanup timeout to match connection timeout (Issue #3):
- Changed from 5 to 10 seconds (matches _connect_stdio timeout)
- Defined as class constant _CLEANUP_TIMEOUT
- Prevents incomplete cleanup with slow MCP servers
Per Copilot AI review: distinguish timeout scenarios from actual
cleanup failures by catching TimeoutError separately. This helps
with debugging by providing clearer error messages.
Changes based on Copilot AI review (5 issues):
1. Simplified _cleanup_stdio_async():
- Used try/finally pattern for cleaner reference clearing
- References cleared in finally block (always executed)
2. Removed deprecated asyncio.get_event_loop():
- Removed complex temp loop pattern entirely
- Simplified fallback to just log warning and clear refs
3. Simplified fallback path (Issue #4):
- When loop exists but not running, resources are in undefined state
- Complex event loop manipulation removed
- Just log warning and proceed with reference clearing
- OS will reclaim resources on process exit
4. Handled race condition (Issue #5):
- Added comment documenting the inherent race condition
- Added try/except around loop.call_soon_threadsafe()
- Track cleanup_attempted flag for proper fallback handling
5. Added explanatory comments:
- Documented why redundant None assignments exist (safety)
- Explained race condition handling approach
Note: Test coverage suggestion (#3) acknowledged but deferred
to separate PR to keep this fix focused.
Changes based on Copilot AI review:
1. Fixed fallback path using temp event loop pattern:
- asyncio.run() may fail if there's already an event loop in current thread
- Now uses new_event_loop() + set_event_loop() + run_until_complete() pattern
- Preserves and restores original loop if one existed
2. Set references to None immediately after __aexit__:
- self._session = None after closing session
- self._stdio_context = None after closing context
- Prevents window where closed objects are still referenced
- Also clears on error to prevent reuse of broken objects
3. Added documentation for critical cleanup order:
- Session must close BEFORE stdio_context
- Session depends on streams provided by stdio_context
- Mirrors initialization order in _connect_stdio()
- Added warning comment to prevent future breakage
When self._loop exists but is not running or is closed (e.g., crashed,
stopped externally, or closed), the code now falls through to the
standard approach that properly handles both sync and async contexts.
Key changes:
- Added is_running() AND is_closed() checks before using run_coroutine_threadsafe()
- Removed separate else branch with asyncio.run() that didn't handle async context
- Now falls through to standard approach which:
- Detects if already in async context (get_running_loop)
- Uses separate thread with new event loop if in async context
- Uses asyncio.run() only when no event loop is running
Edge cases covered:
1. self._loop is None (sync context) -> uses asyncio.run()
2. self._loop is None (async context) -> uses thread with new loop
3. self._loop running normally -> uses run_coroutine_threadsafe()
4. self._loop stopped (sync context) -> falls through, uses asyncio.run()
5. self._loop stopped (async context) -> falls through, uses thread
6. self._loop closed (sync context) -> falls through, uses asyncio.run()
7. self._loop closed (async context) -> falls through, uses thread
Fixes #625
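The seven edge cases above reduce to a small decision table. This sketch (names hypothetical) captures the dispatch logic only, not the cleanup itself:

```python
def choose_cleanup_strategy(loop, in_async_context: bool) -> str:
    """Pick a cleanup path. `loop` is the client's dedicated event
    loop, or None if one was never created."""
    if loop is not None and loop.is_running() and not loop.is_closed():
        return "run_coroutine_threadsafe"  # healthy dedicated loop
    # Loop missing, stopped, or closed: fall through to the standard path.
    if in_async_context:
        return "thread_with_new_loop"      # must not block the caller's loop
    return "asyncio_run"                   # no loop running anywhere
```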
Calculate available_slots from running execution count instead of
accessing the private _value attribute of asyncio.Semaphore.
Private attributes may change between Python versions and are not
part of the public API.
Fixes #609
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
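The idea — derive available slots from an explicit running counter rather than `Semaphore._value` — can be sketched like this (class and attribute names are illustrative):

```python
import asyncio

class ExecutionPool:
    """Track capacity with a public counter instead of reading
    asyncio.Semaphore's private _value attribute."""
    def __init__(self, max_concurrent: int):
        self.max_concurrent = max_concurrent
        self._running = 0
        self._sem = asyncio.Semaphore(max_concurrent)

    @property
    def available_slots(self) -> int:
        return self.max_concurrent - self._running

    async def run(self, coro):
        async with self._sem:
            self._running += 1
            try:
                return await coro
            finally:
                self._running -= 1
```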
Replace direct dictionary access with .get() and explicit ValueError
to prevent KeyError when entry_point_id is not found in _streams dict.
Fixes #589
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Fixes #590
Previously, the code assumed `session_state["memory"]` was always a dict
when the key existed. If it was `None` or another non-dict type, this
would raise a TypeError during iteration.
Now we validate the type before iterating and log a warning if the
memory data is not a dict, preventing runtime crashes when resuming
from malformed session states.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
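A minimal sketch of the defensive restore described above, with a hypothetical helper name:

```python
import logging

logger = logging.getLogger(__name__)

def restore_memory(session_state: dict) -> dict:
    """Return the 'memory' mapping from a session state, tolerating
    None or non-dict values left behind by malformed saves."""
    memory = session_state.get("memory")
    if not isinstance(memory, dict):
        if memory is not None:
            logger.warning("Ignoring non-dict memory data: %r", type(memory))
        return {}
    return dict(memory)
```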
Fixes #599
The `callable` keyword in Python is a builtin function to check if something
is callable, NOT a type annotation. For type hints, we need `Callable` from
the typing module.
Changed:
- `tool_executor: callable` → `tool_executor: Callable[[ToolUse], ToolResult]`
Files updated:
- core/framework/llm/provider.py
- core/framework/llm/anthropic.py
- core/framework/llm/litellm.py
This fixes mypy/pyright type checking errors like:
"Variable annotation syntax is for types; callable is a function"
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
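The before/after distinction is easy to see in a tiny example. The `ToolUse`/`ToolResult` classes below are empty stand-ins for the real types:

```python
from typing import Callable

# Illustrative stand-ins for the framework's real ToolUse/ToolResult types.
class ToolUse: ...
class ToolResult: ...

# Wrong: `callable` is the builtin predicate, not a type.
#   def complete(tool_executor: callable): ...
# Right: annotate with typing.Callable and the full signature.
def complete(tool_executor: Callable[[ToolUse], ToolResult]) -> ToolResult:
    return tool_executor(ToolUse())
```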
Add 7 test methods to TestWebScrapeToolLinkConversion class to validate
the new URL conversion feature:
- test_relative_links_converted_to_absolute: ../page and page.html -> absolute
- test_root_relative_links_converted: /about -> absolute
- test_absolute_links_unchanged: https://external.com remains unchanged
- test_links_after_redirects: Uses final URL, not requested URL
- test_fragment_links_preserved: #section1 anchors work correctly
- test_query_parameters_preserved: ?id=123&sort=date retained
- test_empty_href_skipped: Empty text links filtered out
All tests use unittest.mock for HTTP response mocking to avoid live network calls.
Tests comprehensively validate the urljoin() implementation that converts all
relative URLs to absolute URLs based on the final response URL.
- Add urljoin import from urllib.parse
- Convert all extracted links to absolute URLs based on page base_url
- Use response.url as base_url to handle redirects correctly
- Fixes issue where relative links like '../page' were unusable
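The conversion described above boils down to `urllib.parse.urljoin` applied against the final response URL; the helper name here is illustrative:

```python
from urllib.parse import urljoin

def absolutize(base_url: str, hrefs: list[str]) -> list[str]:
    """Resolve scraped hrefs against the final response URL (so redirects
    are handled); absolute links pass through unchanged, empty hrefs are skipped."""
    return [urljoin(base_url, href) for href in hrefs if href]
```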
- Created MockLLMProvider class that generates placeholder JSON responses
- Updated AgentRunner._setup() to use MockLLMProvider when mock_mode=True
- Added MockLLMProvider to llm module exports
- Fixes issue where agents failed with 'LLM not available' in mock mode
The MockLLMProvider extracts expected output keys from system prompts
and generates mock JSON responses for structural validation without
making real LLM API calls. This enables:
- Testing agent structure without API keys
- Fast iteration on agent graphs
- CI/CD testing without credentials
- Zero-cost structural validation
Tested with simple agent - all nodes execute successfully in mock mode.
Added else branch to handle edge case where loop exists but is not running. Uses asyncio.run() as fallback to ensure cleanup happens even if the loop was stopped externally or due to an error.
Added _cleanup_stdio_async() method to properly call __aexit__() on session and stdio_context before stopping the event loop.
This prevents resource leaks, zombie processes, and unclosed file handles.
Add tests for file viewing functionality with max_size truncation, negative max_size, custom encoding, and invalid encoding scenarios to ensure proper error handling and behavior.
Previously, the hallucination detection in SharedMemory.write() and
OutputValidator.validate_no_hallucination() only checked the first 500
characters for code indicators. This allowed hallucinated code to bypass
detection by prefixing with innocuous text.
Changes:
- Add _contains_code_indicators() method to SharedMemory and OutputValidator
- Check entire string for strings under 10KB
- Use strategic sampling (start, 25%, 50%, 75%, end) for longer strings
- Expand code indicators to include JavaScript, SQL, and HTML/script patterns
- Add comprehensive test suite with 19 test cases
Fixes #443
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
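The sampling strategy can be sketched as below. The window size, the 10KB full-scan threshold, and the function name are taken from the description above; everything else is an assumption:

```python
def sample_for_indicators(text: str, window: int = 500,
                          full_scan_limit: int = 10_000) -> str:
    """Return the portions of `text` worth scanning for code indicators:
    the whole string when short, otherwise windows at the start, 25%, 50%,
    75%, and end, so an innocuous prefix cannot hide code later on."""
    if len(text) <= full_scan_limit:
        return text
    n = len(text)
    offsets = [0, n // 4, n // 2, (3 * n) // 4, max(0, n - window)]
    return "".join(text[o:o + window] for o in offsets)
```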
- Remove hardcoded max_retries_per_node = 3
- Use node_spec.max_retries for all retry logic
- Add comprehensive test suite (6 test cases)
- Allows per-node retry configuration as intended
Fixes #363
Add support for custom file encoding and size limits when viewing files. The max_size parameter prevents loading excessively large files by truncating content and adding a warning message when the limit is exceeded. Also includes validation for negative max_size values and checks if path is a file.
The cmd_list function stored node count as 'nodes' but tried to
access it as 'steps', causing a KeyError when listing agents.
Changed agent['steps'] to agent['nodes'] to match the dict key.
Fixes #202
- Update docs/getting-started.md to explain exports/ is created by users
- Remove references to non-existent support_ticket_agent example
- Update DEVELOPER.md with correct agent creation instructions
Update AnthropicProvider.complete to accept response_format and forward it to LiteLLMProvider.
Added unit test in test_litellm_provider.py to verify parameter forwarding.
Adds the `.venv` directory to the `.gitignore` file to prevent accidental commits.
Also, enhances the `scripts/setup-python.sh` script to include error handling for the `pip install` command, providing a more informative message if the upgrade fails.
Replace AnthropicProvider with LiteLLMProvider in AgentOrchestrator to
enable support for multiple LLM providers (OpenAI, Anthropic, Gemini, etc).
- LiteLLM auto-detects provider from model name
- LiteLLM auto-detects appropriate API key from environment
- Removes restrictive ANTHROPIC_API_KEY check
- Matches pattern used in AgentRunner
Closes #47
The Dockerfile.dev files lacked the 'production' stage alias that
docker-compose.yml expects, causing build failures. Added 'AS production'
to enable proper dev builds with hot reload.
Fixes #26
- Add jest.config.js with TypeScript support (ts-jest)
- Add tsconfig.test.json for test compilation
- Add supertest for HTTP endpoint testing
- Create test utilities for database mocking (PostgreSQL, MongoDB)
- Add example health endpoint test as template
Closes #13
- Created quizzes for various engineering tracks: Getting Started, Architecture Deep Dive, Build Your First Agent, Frontend Challenge, and DevOps Challenge.
- Added README for the quizzes directory to guide users through the challenges.
- Implemented a script to set GitHub repository topics for adenhq/hive using GitHub CLI.
```bash
gh issue close <number> --repo adenhq/hive --reason "not planned"
```
Report success with link to the issue.
## Important Guidelines
1. **Never close valid issues** - If there's any merit to the claim, don't close it
2. **Be respectful** - The reporter took time to file the issue
3. **Be technical** - Provide code references and evidence
4. **Be educational** - Help them understand, don't just dismiss
5. **Check twice** - Make sure you understand the code before declaring something invalid
6. **Consider edge cases** - Maybe their environment reveals a real issue
## Example Critiques
### Security Misunderstanding
> "The claim that secrets are exposed in plaintext misunderstands the encryption architecture. While `SecretStr` is used for logging protection, actual encryption is provided by Fernet (AES-128-CBC) at the storage layer. The code path is: serialize → encrypt → write. Only encrypted bytes touch disk."
### Impossible Request
> "The requested feature would require [X] which violates [fundamental constraint]. This is not a limitation of our implementation but a fundamental property of [technology/protocol]."
### Already Handled
> "This scenario is already handled by [code reference]. The reporter may be using an older version or misconfigured environment."
### 1. Get the issue details
Use mcp__github__get_issue to get the full details of issue #${{ github.event.issue.number }}
### 2. Check for duplicates
Search for similar existing issues using mcp__github__search_issues with relevant keywords from the issue title and body.
Criteria for duplicates:
- Same bug or error being reported
- Same feature request (even if worded differently)
- Same question being asked
- Issues describing the same root problem
If you find a duplicate:
- Add a comment using EXACTLY this format (required for auto-close to work):
"Found a possible duplicate of #<issue_number>: <brief explanation of why it's a duplicate>"
- Do NOT apply the "duplicate" label yet (the auto-close script will add it after 12 hours if no objections)
- Suggest the user react with a thumbs-down if they disagree
### 3. Check for Low-Quality / AI Spam
Analyze the issue quality. We are receiving many low-effort, AI-generated spam issues.
Flag the issue as INVALID if it matches these criteria:
- **Vague/Generic**: Title is "Fix bug" or "Error" without specific context.
- **Hallucinated**: Refers to files or features that do not exist in this repo.
- **Template Filler**: Body contains "Insert description here" or unrelated gibberish.
- **Low Effort**: No reproduction steps, no logs, only 1-2 sentences.
If identified as spam/low-quality:
- Add the "invalid" label.
- Add a comment:
"This issue has been automatically flagged as low-quality or potentially AI-generated spam. It lacks specific details (logs, reproduction steps, file references) required for us to help. Please open a new issue following the template exactly if this is a legitimate request."
- Do NOT proceed to other steps.
### 4. Check for invalid issues (General)
If the issue is not spam but still lacks information:
- Add the "invalid" label
- Comment asking for clarification
### 5. Categorize with labels (if NOT a duplicate or spam)
Apply appropriate labels based on the issue content. Use ONLY these labels:
- bug: Something isn't working
- enhancement: New feature or request
- question: Further information is requested
- documentation: Improvements or additions to documentation
- good first issue: Good for newcomers (if issue is well-defined and small scope)
- help wanted: Extra attention is needed (if issue needs community input)
- backlog: Tracked for the future, but not currently planned or prioritized
### 6. Estimate size (if NOT a duplicate, spam, or invalid)
Apply exactly ONE size label to help contributors match their capacity to the task:
// Extract Discord ID — look for the numeric value after the "Discord User ID" heading
const discordMatch = body.match(/### Discord User ID\s*\n\s*(\d{17,20})/);
if (!discordMatch) {
await github.rest.issues.createComment({
...context.repo,
issue_number: issue.number,
body: `Could not find a valid Discord ID in the issue body. Please make sure you entered a numeric ID (17-20 digits), not a username.\n\nExample: \`123456789012345678\``
});
await github.rest.issues.update({
...context.repo,
issue_number: issue.number,
state: 'closed',
state_reason: 'not_planned'
});
return;
}
const discordId = discordMatch[1];
// Extract display name (optional)
const nameMatch = body.match(/### Display Name \(optional\)\s*\n\s*(.+)/);
body: `A PR has been created to link your account. A maintainer will merge it shortly — once merged, you'll receive XP and Discord pings when your bounty PRs are merged.`
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- Initial project structure
- React frontend (honeycomb) with Vite and TypeScript
- Node.js backend (hive) with Express and TypeScript
- Docker Compose configuration for local development
- Configuration system via `config.yaml`
- GitHub Actions CI/CD workflows
- Comprehensive documentation
### Changed
- N/A
### Deprecated
- N/A
### Removed
- N/A
### Fixed
- N/A
### Security
- N/A
## [0.1.0] - 2025-01-13
### Added
- Initial release
---
## The Hive Gets a Brain
**Release Date:** February 18, 2026
**Tag:** v0.5.1
v0.5.1 is our most ambitious release yet. Hive agents can now **build other agents** -- the new Hive Coder meta-agent writes, tests, and fixes agent packages from natural language. The runtime grows multi-graph support so one session can orchestrate multiple agents simultaneously. The TUI gets a complete overhaul with an in-app agent picker, live streaming, and seamless escalation to the Coder. And we're now provider-agnostic: Claude Code subscriptions, OpenAI-compatible endpoints, and any LiteLLM-supported model work out of the box.
## Highlights
### Hive Coder -- The Agent That Builds Agents
A native meta-agent that lives inside the framework at `core/framework/agents/hive_coder/`. Give it a natural-language specification and it produces a complete agent package -- goal definition, node prompts, edge routing, MCP tool wiring, tests, and all boilerplate files.
```bash
# Launch the Coder directly
hive code
# Or escalate from any running agent (TUI)
Ctrl+E  # or /coder in chat
```
The Coder ships with:
- **Reference documentation** -- anti-patterns, construction guide, and design patterns baked into its system prompt
- **Guardian watchdog** -- an event-driven monitor that catches agent failures and triggers automatic remediation
`AgentRuntime` now supports loading, managing, and switching between multiple agent graphs within a single session. Six new lifecycle tools give agents (and the TUI) full control:
The Hive Coder uses multi-graph internally -- when you escalate from a worker agent, the Coder loads as a separate graph while the worker stays alive in the background.
### TUI Revamp
The Terminal UI gets a ground-up rebuild with five major additions:
- **Agent Picker** (Ctrl+A) -- tabbed modal screen for browsing Your Agents, Framework agents, and Examples with metadata badges (node count, tool count, session count, tags)
- **Runtime-optional startup** -- TUI launches without a pre-loaded agent, showing the picker on first open
- **Live streaming pane** -- dedicated RichLog widget shows LLM tokens as they arrive, replacing the old one-token-per-line display
- **PDF attachments** -- `/attach` and `/detach` commands with native OS file dialog (macOS, Linux, Windows)
- **Interactive credential setup** -- Guided `CredentialSetupSession` with health checks and encrypted storage, accessible via `hive setup-credentials` or automatic prompting on credential errors. (@RichardTang-Aden)
- **Pre-start confirmation prompt** -- Interactive prompt before agent execution allowing credential updates or abort. (@RichardTang-Aden)
- **Event bus multi-graph support** -- `graph_id` on events, `filter_graph` on subscriptions, `ESCALATION_REQUESTED` event type, `exclude_own_graph` filter. (@TimothyZhang7)
### TUI Improvements
- **In-app agent picker** (Ctrl+A) -- Tabbed modal for browsing agents with metadata badges (nodes, tools, sessions, tags). (@TimothyZhang7)
- **Runtime-optional TUI startup** -- Launches without a pre-loaded agent, shows agent picker on startup. (@TimothyZhang7)
- **Hive Coder escalation** (Ctrl+E) -- Escalate to Hive Coder and return; also available via `/coder` and `/back` chat commands. (@TimothyZhang7)
- **PDF attachment support** -- `/attach` and `/detach` commands with native OS file dialog. (@TimothyZhang7)
- **Streaming output pane** -- Dedicated RichLog widget for live LLM token streaming. (@TimothyZhang7)
- **System prompt datetime injection** -- All system prompts now include current date/time for time-aware agent behavior. (@TimothyZhang7)
- **Utils module exports** -- Proper `__init__.py` exports for the utils module. (@Siddharth2624)
- **Increased default max_tokens** -- Opus 4.6 defaults to 32768, Sonnet 4.5 to 16384 (up from 8192). (@TimothyZhang7)
---
## Bug Fixes
- Flush WIP accumulator outputs on cancel/failure so edge conditions see correct values on resume
- Stall detection state preserved across resume (no more resets on checkpoint restore)
- Skip client-facing blocking for event-triggered executions (timer/webhook)
- Executor retry override scoped to actual EventLoopNode instances only
- Add `_awaiting_input` flag to EventLoopNode to prevent input injection race conditions
- Fix TUI streaming display (tokens no longer appear one-per-line)
- Fix `_return_from_escalation` crash when ChatRepl widgets not yet mounted
- Fix tools registration problems for Google Docs credentials (@RichardTang-Aden)
- Fix email agent version conflicts (@RichardTang-Aden)
- Fix coder tool timeouts (120s for tests, 300s cap for commands)
## Documentation
- Clarify installation and prevent root pip install misuse (@paarths-collab)
---
## Agent Updates
- **Email Inbox Management** -- Consolidate `gmail_inbox_guardian` and `inbox_management` into a single unified agent with updated prompts and config. (@RichardTang-Aden, @bryanadenhq)
- **Job Hunter** -- Updated node prompts, config, and agent metadata; added PDF resume selection. (@bryanadenhq)
- **Deep Research Agent** -- Revised node implementations with updated prompts and output handling.
```python
NodeSpec(node_type="event_loop", ...)  # or just omit node_type (it's the default now)
```
If your agents set `max_node_visits=1` explicitly, they'll still work. The only change is the _default_ -- new agents without an explicit value now get unlimited visits.
To try the new Hive Coder:
```bash
# Launch Coder directly
hive code
# Or from TUI -- press Ctrl+E to escalate
hive tui
```
---
## What's Next
- **Agent-to-agent communication** -- one agent's output triggers another agent's entry point
- **Cost visibility** -- detailed runtime log of LLM costs per node and per session
- **Persistent webhook subscriptions** -- survive agent restarts without re-registering
- **Remote agent deployment** -- run agents as long-lived services with HTTP APIs
Hive provides advanced runtime control for your AI agents, enabling you to observe, intervene, and dynamically adjust agent behavior as it executes. By giving you real-time visibility and control, Hive helps you build more reliable AI systems—catching and correcting issues during execution rather than reacting after failures occur.
Build autonomous, reliable, self-improving AI agents without hardcoding workflows. Define your goal through conversation with the hive coding agent (queen), and the framework generates a node graph with dynamically created connection code. When things break, the framework captures failure data, evolves the agent through the coding agent, and redeploys. Built-in human-in-the-loop nodes, credential management, and real-time monitoring give you control without sacrificing adaptability.
Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.
- **ripgrep (optional, recommended on Windows):** The `search_files` tool uses ripgrep for faster file search. If not installed, a Python fallback is used. On Windows: `winget install BurntSushi.ripgrep` or `scoop install ripgrep`
> **Note for Windows Users:** It is strongly recommended to use **WSL (Windows Subsystem for Linux)** or **Git Bash** to run this framework. Some core automation scripts may not execute correctly in standard Command Prompt or PowerShell.
### Installation
> **Note**
> Hive uses a `uv` workspace layout and is not installed with `pip install`.
> Running `pip install -e .` from the repository root will create a placeholder package and Hive will not function correctly.
> Please use the quickstart script below to set up the environment.
```bash
# Clone the repository
git clone https://github.com/aden-hive/hive.git
cd hive
# Copy and configure
cp config.yaml.example config.yaml
# Run quickstart setup
./quickstart.sh
```
This sets up:
- **framework** - Core agent runtime and graph executor (in `core/.venv`)
- **aden_tools** - MCP tools for agent capabilities (in `tools/.venv`)
- **credential store** - Encrypted API key storage (`~/.hive/credentials`)
- **LLM provider** - Interactive default model configuration
- All required Python dependencies with `uv`
- Finally, it will open the Hive interface in your browser
> **Tip:** To reopen the dashboard later, run `hive open` from the project directory.
Click "Try a sample agent" and check the templates. You can run a template directly or build your own version on top of an existing template.
### Run Agents
Now you can run an agent by selecting one (either an existing agent or an example agent). Click the Run button at the top left, or talk to the queen agent and it can run the agent for you.
- **Observe** - Real-time visibility into agent execution, decisions, and performance
- **Metrics & Analytics** - Track costs, latency, and token usage with TimescaleDB
- **Cost Control** - Set budgets and policies to manage LLM spending
- **Real-time Events** - WebSocket streaming for live agent monitoring
- **Self-Hostable** - Deploy on your own infrastructure with full control
- **Production-Ready** - Built for scale and reliability
- **Browser-Use** - Control the browser on your computer to achieve hard tasks
- **Parallel Execution** - Execute the generated graph in parallel. This way you can have multiple agents completing the jobs for you
- **[Goal-Driven Generation](docs/key_concepts/goals_outcome.md)** - Define objectives in natural language; the coding agent generates the agent graph and connection code to achieve them
- **[Adaptiveness](docs/key_concepts/evolution.md)** - Framework captures failures, calibrates according to the objectives, and evolves the agent graph
- **[Dynamic Node Connections](docs/key_concepts/graph.md)** - No predefined edges; connection code is generated by any capable LLM based on your goals
- **SDK-Wrapped Nodes** - Every node gets shared memory, local RLM memory, monitoring, tools, and LLM access out of the box
- **[Human-in-the-Loop](docs/key_concepts/graph.md#human-in-the-loop)** - Intervention nodes that pause execution for human input with configurable timeouts and escalation
- **Real-time Observability** - WebSocket streaming for live monitoring of agent execution, decisions, and node-to-node communication
- **Production-Ready** - Self-hostable, built for scale and reliability
Hive is built to be model-agnostic and system-agnostic.
- **LLM flexibility** - Hive Framework is designed to support various types of LLMs, including hosted and local models through LiteLLM-compatible providers.
- **Business system connectivity** - Hive Framework is designed to connect to all kinds of business systems as tools, such as CRM, support, messaging, data, file, and internal APIs via MCP.
## Why Aden
Hive focuses on generating agents that run real business processes rather than generic agents. Instead of requiring you to manually design workflows, define agent interactions, and handle failures reactively, Hive flips the paradigm: **you describe outcomes, and the system builds itself**—delivering an outcome-driven, adaptive experience with an easy-to-use set of tools and integrations.
- [Configuration Guide](docs/configuration.md) - All configuration options
- [Architecture Overview](docs/architecture/README.md) - System design and structure
## Roadmap
Aden Hive Agent Framework aims to help developers build outcome-oriented, self-adaptive agents. See [roadmap.md](docs/roadmap.md) for details.
```mermaid
flowchart TB
%% Main Entity
User([User])
%% =========================================
%% EXTERNAL EVENT SOURCES
%% =========================================
subgraph ExtEventSource [External Event Source]
E_Sch["Schedulers"]
E_WH["Webhook"]
E_SSE["SSE"]
end
%% =========================================
%% SYSTEM NODES
%% =========================================
subgraph WorkerBees [Worker Bees]
WB_C["Conversation"]
WB_SP["System prompt"]
subgraph Graph [Graph]
direction TB
N1["Node"] --> N2["Node"] --> N3["Node"]
N1 -.-> AN["Active Node"]
N2 -.-> AN
N3 -.-> AN
%% Nested Event Loop Node
subgraph EventLoopNode [Event Loop Node]
ELN_L["listener"]
ELN_SP["System Prompt<br/>(Task)"]
ELN_EL["Event loop"]
ELN_C["Conversation"]
end
end
end
subgraph JudgeNode [Judge]
J_C["Criteria"]
J_P["Principles"]
J_EL["Event loop"] <--> J_S["Scheduler"]
end
subgraph QueenBee [Queen Bee]
QB_SP["System prompt"]
QB_EL["Event loop"]
QB_C["Conversation"]
end
subgraph Infra [Infra]
SA["Sub Agent"]
TR["Tool Registry"]
WTM["Write through Conversation Memory<br/>(Logs/RAM/Harddrive)"]
SM["Shared Memory<br/>(State/Harddrive)"]
EB["Event Bus<br/>(RAM)"]
CS["Credential Store<br/>(Harddrive/Cloud)"]
end
subgraph PC [PC]
B["Browser"]
CB["Codebase<br/>v 0.0.x ... v n.n.n"]
end
%% =========================================
%% CONNECTIONS & DATA FLOW
%% =========================================
%% External Event Routing
E_Sch --> ELN_L
E_WH --> ELN_L
E_SSE --> ELN_L
ELN_L -->|"triggers"| ELN_EL
%% User Interactions
User -->|"Talk"| WB_C
User -->|"Talk"| QB_C
User -->|"Read/Write Access"| CS
%% Inter-System Logic
ELN_C <-->|"Mirror"| WB_C
WB_C -->|"Focus"| AN
WorkerBees -->|"Inquire"| JudgeNode
JudgeNode -->|"Approve"| WorkerBees
%% Judge Alignments
J_C <-.->|"aligns"| WB_SP
J_P <-.->|"aligns"| QB_SP
%% Escalate path
J_EL -->|"Report (Escalate)"| QB_EL
%% Pub/Sub Logic
AN -->|"publish"| EB
EB -->|"subscribe"| QB_C
%% Infra and Process Spawning
ELN_EL -->|"Spawn"| SA
SA -->|"Inform"| ELN_EL
SA -->|"Starts"| B
B -->|"Report"| ELN_EL
TR -->|"Assigned"| ELN_EL
CB -->|"Modify Worker Bee"| WB_C
%% =========================================
%% SHARED MEMORY & LOGS ACCESS
%% =========================================
%% Worker Bees Access (link to node inside Graph subgraph)
AN <-->|"Read/Write"| WTM
AN <-->|"Read/Write"| SM
%% Queen Bee Access
QB_C <-->|"Read/Write"| WTM
QB_EL <-->|"Read/Write"| SM
%% Credentials Access
CS -->|"Read Access"| QB_C
```
## Contributing
We welcome contributions from the community! We’re especially looking for help building tools, integrations, and example agents for the framework ([check #2805](https://github.com/aden-hive/hive/issues/2805)). If you’re interested in extending its functionality, this is the perfect place to start. Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
**Important:** Please get assigned to an issue before submitting a PR. Comment on an issue to claim it, and a maintainer will assign you. Issues with reproducible steps and proposals are prioritized. This helps prevent duplicate work.
1. Find or create an issue and get assigned
2. Fork the repository
3. Create your feature branch (`git checkout -b feature/amazing-feature`)
4. Commit your changes (`git commit -m 'Add amazing feature'`)
5. Push to the branch (`git push origin feature/amazing-feature`)
6. Open a Pull Request
## Community & Support
We use [Discord](https://discord.com/invite/MXE49hrKDk) for support and feature requests.
## Join Our Team
**We're hiring!** Join us in engineering, research, and go-to-market roles.
For security concerns, please see [SECURITY.md](SECURITY.md).
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## Frequently Asked Questions (FAQ)
**Q: What LLM providers does Hive support?**
Hive supports 100+ LLM providers through LiteLLM integration, including OpenAI (GPT-4, GPT-4o), Anthropic (Claude models), Google Gemini, DeepSeek, Mistral, Groq, and many more. Simply set the appropriate API key environment variable and specify the model name. We recommend using Claude, GLM and Gemini as they have the best performance.
**Q: Can I use Hive with local AI models like Ollama?**
Yes! Hive supports local models through LiteLLM. Simply use the model name format `ollama/model-name` (e.g., `ollama/llama3`, `ollama/mistral`) and ensure Ollama is running locally.
**Q: What makes Hive different from other agent frameworks?**
Hive generates your entire agent system from natural language goals using a coding agent—you don't hardcode workflows or manually define graphs. When agents fail, the framework automatically captures failure data, [evolves the agent graph](docs/key_concepts/evolution.md), and redeploys. This self-improving loop is unique to Aden.
**Q: Is Hive open-source?**
Yes, Hive is fully open-source under the Apache License 2.0. We actively encourage community contributions and collaboration.
**Q: Can Hive handle complex, production-scale use cases?**
Yes. Hive is explicitly designed for production environments with features like automatic failure recovery, real-time observability, cost controls, and horizontal scaling support. The framework handles both simple automations and complex multi-agent workflows.
**Q: Does Hive support human-in-the-loop workflows?**
Yes, Hive fully supports [human-in-the-loop](docs/key_concepts/graph.md#human-in-the-loop) workflows through intervention nodes that pause execution for human input. These include configurable timeouts and escalation policies, allowing seamless collaboration between human experts and AI agents.
**Q: What programming languages does Hive support?**
The Hive framework is built in Python. A JavaScript/TypeScript SDK is on the roadmap.
**Q: Can Hive agents interact with external tools and APIs?**
Yes. Aden's SDK-wrapped nodes provide built-in tool access, and the framework supports flexible tool ecosystems. Agents can integrate with external APIs, databases, and services through the node architecture.
**Q: How does cost control work in Hive?**
Hive provides granular budget controls including spending limits, throttles, and automatic model degradation policies. You can set budgets at the team, agent, or workflow level, with real-time cost tracking and alerts.
**Q: Where can I find examples and documentation?**
Visit [docs.adenhq.com](https://docs.adenhq.com/) for complete guides, API reference, and getting started tutorials. The repository also includes documentation in the `docs/` folder and a comprehensive [developer guide](docs/developer-guide.md).
**Q: How can I contribute to Aden?**
Contributions are welcome! Fork the repository, create your feature branch, implement your changes, and submit a pull request. See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
---
<p align="center">
Made with care by the <a href="https://adenhq.com">Aden</a> team
</p>
```python
result = await runner.run({"query": "latest AI breakthroughs"})
# The web_search tool from tools is automatically available!
```
## Benefits
1. **Discoverable Tools**: See what tools are available before using them
2. **Validation**: Connection is tested when registering the server
3. **Automatic Configuration**: No manual file editing required
4. **Documentation**: README includes MCP server information
5. **Runtime Ready**: Exported agents work immediately with configured tools
## Common MCP Servers
### tools
Provides:
- `web_search` - Brave Search API integration
- `web_scrape` - Web page content extraction
- `file_read` / `file_write` - File operations
- `pdf_read` - PDF text extraction
### Custom MCP Servers
You can register any MCP server that follows the Model Context Protocol specification.
## Troubleshooting
### "Failed to connect to MCP server"
- Verify the `command` and `args` are correct
- Check that the server is accessible at the specified path/URL
- Ensure any required environment variables are set
- For STDIO: verify the command can be executed from the `cwd`
- For HTTP: verify the server is running and accessible
### Tools not appearing
- Use `list_mcp_tools` to verify tools were discovered
- Check the tool names match exactly (case-sensitive)
- Ensure the MCP server is still registered (`list_mcp_servers`)
### Export doesn't include mcp_servers.json
- Verify you registered at least one MCP server
- Check `get_session_status` to see `mcp_servers_count > 0`
- Re-export the agent after registering servers
## Credential Validation
When adding nodes with tools that require API keys (like `web_search`), the agent builder automatically validates that the required credentials are available.
### How It Works
When you call `add_node` or `update_node` with a `tools` parameter, the agent builder:
1. Checks which tools require credentials (e.g., `web_search` requires `BRAVE_SEARCH_API_KEY`)
2. Validates those credentials are set in the environment or `.env` file
3. Returns an error if any credentials are missing
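The check in steps 1-3 can be sketched roughly as follows. This is an illustrative sketch, not the framework's actual internals: the `TOOL_CREDENTIALS` mapping and `missing_credentials` helper are hypothetical names, and the real builder derives requirements from tool metadata.

```python
import os

# Illustrative mapping from tool name to required env vars.
TOOL_CREDENTIALS = {"web_search": ["BRAVE_SEARCH_API_KEY"]}

def missing_credentials(tools):
    """Return the env vars required by `tools` that are not currently set."""
    missing = []
    for tool in tools:
        for env_var in TOOL_CREDENTIALS.get(tool, []):
            if not os.environ.get(env_var) and env_var not in missing:
                missing.append(env_var)
    return missing
```

If this returns a non-empty list, the builder refuses to add the node and reports the missing keys, as shown in the error below.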
### Missing Credentials Error
If credentials are missing, you'll receive a response like:
```json
{
  "valid": false,
  "errors": ["Missing credentials for tools: ['BRAVE_SEARCH_API_KEY']"],
  "missing_credentials": [
    {
      "credential": "brave_search",
      "env_var": "BRAVE_SEARCH_API_KEY",
      "tools_affected": ["web_search"],
      "help_url": "https://brave.com/search/api/",
      "description": "API key for Brave Search"
    }
  ],
  "action_required": "Add the credentials to your .env file and retry",
  "example": "Add to .env:\nBRAVE_SEARCH_API_KEY=your_key_here",
  "message": "Cannot add node: missing API credentials. Add them to .env and retry this command."
}
```
### Fixing Credential Errors
1. Get the required API key from the URL in `help_url`
2. Add it to your `.env` file, following the `example` field in the error
3. Retry the command
This guide explains how to integrate Model Context Protocol (MCP) servers with the Hive Core Framework, enabling agents to use tools from external MCP servers.
## Overview
The framework provides built-in support for MCP servers, allowing you to:
- **Register MCP servers** via STDIO or HTTP transport
- **Auto-discover tools** from registered servers
- **Use MCP tools** seamlessly in your agents
- **Manage multiple MCP servers** simultaneously
## Quick Start
### 1. Register an MCP Server Programmatically
```python
from framework.runner.runner import AgentRunner

# Load your agent
runner = AgentRunner.load("exports/my-agent")

# Register tools MCP server
runner.register_mcp_server(
    name="tools",
    transport="stdio",
    command="python",
    args=["-m", "aden_tools.mcp_server", "--stdio"],
    cwd="/path/to/tools"
)

# Tools are now available to your agent
result = await runner.run({"input": "data"})
```
### 2. Use Configuration File
Create `mcp_servers.json` in your agent folder:
```json
{
  "servers": [
    {
      "name": "tools",
      "transport": "stdio",
      "command": "python",
      "args": ["-m", "aden_tools.mcp_server", "--stdio"],
      "cwd": "../tools"
    }
  ]
}
```
The framework will automatically load and register these servers when you load the agent:
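As a rough sketch, that automatic loading step can be pictured like this. The helper below is illustrative, not the framework's actual code; it assumes the `register_mcp_server` keyword arguments shown above.

```python
import json
from pathlib import Path

def load_mcp_servers(agent_dir, runner):
    """Register every server listed in the agent's mcp_servers.json."""
    config_path = Path(agent_dir) / "mcp_servers.json"
    if not config_path.exists():
        return 0
    servers = json.loads(config_path.read_text()).get("servers", [])
    for server in servers:
        # Each entry maps directly onto register_mcp_server's keyword args.
        runner.register_mcp_server(**server)
    return len(servers)
```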
> **Note:** The standalone `agent-builder` MCP server (`framework.mcp.agent_builder_server`) has been replaced. Agent building is now done via the `coder-tools` server's `initialize_and_build_agent` tool, with underlying logic in `tools/coder_tools_server.py`.
This guide covers the MCP tools available for building goal-driven agents.
## Setup
### Quick Setup
```bash
# Run the quickstart script (recommended)
./quickstart.sh
```
### Manual Configuration
Add to your MCP client configuration (e.g., Claude Desktop):
```json
{
  "mcpServers": {
    "coder-tools": {
      "command": "uv",
      "args": ["run", "coder_tools_server.py", "--stdio"],
      "cwd": "/path/to/hive/tools"
    }
  }
}
```
## Available MCP Tools
### Session Management
#### `create_session`
Create a new agent building session.
**Parameters:**
- `name` (string, required): Name of the agent
**Example:**
```json
{
  "name": "research-summary-agent"
}
```
#### `get_session_status`
Get the current status of the build session.
**Returns:**
- Session name
- Goal status
- Number of nodes
- Number of edges
- Validation status
---
### Goal Definition
#### `set_goal`
Define the goal for the agent with success criteria and constraints.
**Parameters:**
- `goal_id` (string, required): Unique identifier for the goal
- `name` (string, required): Human-readable name
- `description` (string, required): What the agent should accomplish
- `success_criteria` (string, required): JSON array of success criteria
- `constraints` (string, optional): JSON array of constraints
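An illustrative call might look like this (all values are examples; note that `success_criteria` and `constraints` are JSON arrays passed as strings):

```json
{
  "goal_id": "research-summary",
  "name": "Research Summary Goal",
  "description": "Search recent articles on a topic and produce a concise summary",
  "success_criteria": "[\"Summary cites at least 3 sources\"]",
  "constraints": "[\"Keep the summary under 500 words\"]"
}
```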
A goal-driven agent runtime with Builder-friendly observability.
## Overview
Framework provides a runtime framework that captures **decisions**, not just actions. This enables a "Builder" LLM to analyze and improve agent behavior by understanding:
- What the agent was trying to accomplish
- What options it considered
- What it chose and why
- What happened as a result
## Installation
```bash
uv pip install -e .
```
## Agent Building
Agent scaffolding is handled by the `coder-tools` MCP server (`tools/coder_tools_server.py`), which provides the `initialize_and_build_agent` tool and related utilities; the package generation logic lives directly in that file.
See the [Getting Started Guide](../docs/getting-started.md) for building agents.
## Quick Start
### Calculator Agent
Run an LLM-powered calculator:
```bash
# Run an exported agent
uv run python -m framework run exports/calculator --input '{"expression": "2 + 3 * 4"}'
# Interactive shell session
uv run python -m framework shell exports/calculator
# Show agent info
uv run python -m framework info exports/calculator
```
### Using the Runtime
```python
from framework import Runtime

runtime = Runtime("/path/to/storage")

# Start a run
run_id = runtime.start_run("my_goal", "Description of what we're doing")

# Record a decision (illustrative; check the Runtime API for the exact signature)
decision_id = runtime.record_decision(
    run_id=run_id,
    reasoning="Accuracy is more important for this task"
)
# Record the outcome
runtime.record_outcome(
    decision_id=decision_id,
    success=True,
    result={"processed": 100},
    summary="Processed 100 items with detailed analysis"
)

# End the run
runtime.end_run(success=True, narrative="Successfully processed all data")
```
### Testing Agents
The framework includes a goal-based testing framework for validating agent behavior.
Tests are generated using MCP tools (`generate_constraint_tests`, `generate_success_tests`) which return guidelines. Claude writes tests directly using the Write tool based on these guidelines.
1. **Using tools that don't exist** — Always verify tools via `list_agent_tools()` before designing. Common hallucinations: `csv_read`, `csv_write`, `file_upload`, `database_query`, `bulk_fetch_emails`.
2. **Wrong mcp_servers.json format** — Flat dict (no `"mcpServers"` wrapper). `cwd` must be `"../../tools"`. `command` must be `"uv"` with args `["run", "python", ...]`.
3. **Missing module-level exports in `__init__.py`** — The runner reads `goal`, `nodes`, `edges`, `entry_node`, `entry_points`, `terminal_nodes`, `conversation_mode`, `identity_prompt`, `loop_config` via `getattr()`. ALL module-level variables from agent.py must be re-exported in `__init__.py`.
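For item 2, one plausible reading of the flat-dict rule is a top-level object keyed by server name (the server name and module path below are illustrative):

```json
{
  "tools": {
    "command": "uv",
    "args": ["run", "python", "-m", "aden_tools.mcp_server", "--stdio"],
    "cwd": "../../tools"
  }
}
```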
## Value Errors
4. **Fabricating tools** — Always verify via `list_agent_tools()` before designing and `validate_agent_package()` after building.
## Design Errors
5. **Adding framework gating for LLM behavior** — Don't add output rollback or premature rejection. Fix with better prompts or custom judges.
6. **Calling set_output in same turn as tool calls** — Call set_output in a SEPARATE turn.
## File Template Errors
7. **Wrong import paths** — Use `from framework.graph import ...`, NOT `from core.framework.graph import ...`.
8. **Missing storage path** — Agent class must set `self._storage_path = Path.home() / ".hive" / "agents" / "agent_name"`.
9. **Missing mcp_servers.json** — Without this, the agent has no tools at runtime.
10. **Bare `python` command** — Use `"command": "uv"` with args `["run", "python", ...]`.
## Testing Errors
11. **Using `runner.run()` on forever-alive agents** — `runner.run()` hangs forever because forever-alive agents have no terminal node. Write structural tests instead: validate graph structure, verify node specs, test `AgentRunner.load()` succeeds (no API key needed).
12. **Stale tests after restructuring** — When changing nodes/edges, update tests to match. Tests referencing old node names will fail.
13. **Running integration tests without API keys** — Use `pytest.skip()` when credentials are missing.
14. **Forgetting sys.path setup in conftest.py** — Tests need `exports/` and `core/` on sys.path.
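For item 14, a minimal `conftest.py` sketch might look like this (the relative paths assume the tests directory sits one level below the repository root; adjust as needed):

```python
import sys
from pathlib import Path

# Put exports/ and core/ on sys.path so tests can import agent packages.
ROOT = Path(__file__).resolve().parent.parent
for rel in ("exports", "core"):
    path = str(ROOT / rel)
    if path not in sys.path:
        sys.path.insert(0, path)
```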
## GCU Errors
15. **Manually wiring browser tools on event_loop nodes** — Use `node_type="gcu"` which auto-includes browser tools. Do NOT manually list browser tool names.
16. **Using GCU nodes as regular graph nodes** — GCU nodes are subagents only. They must ONLY appear in `sub_agents=["gcu-node-id"]` and be invoked via `delegate_to_sub_agent()`. Never connect via edges or use as entry/terminal nodes.
## Worker Agent Errors
17. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
18. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.