Compare commits

...

1629 Commits

Author SHA1 Message Date
Richard Tang ac3a5f5e93 chore: remove the ai generated temp doc 2026-03-10 15:29:21 -07:00
Timothy 1de37d2747 chore: lint 2026-03-10 15:00:14 -07:00
Timothy 2aefdf5b5f refactor: remove deprecated code 2026-03-10 14:57:54 -07:00
Hundao 4caaa79900 Merge pull request #5988 from roberthallers/docs/fix-tui-deprecation-5941
docs: fix TUI deprecation inconsistency in roadmap
2026-03-10 16:46:41 +08:00
Hundao 296089d4cd Merge pull request #6108 from Hundao/fix/subagent-judge-feedback
fix: SubagentJudge and implicit judge return feedback=None on ACCEPT
2026-03-10 15:39:29 +08:00
hundao cae5f971cf fix: update test assertions for newly added tools
Tool counts and expected lists were outdated after new tools were added
to stripe, linear, apollo, discord, and google_analytics.
2026-03-10 15:36:12 +08:00
hundao bac716eea3 fix: pass feedback="" on evaluated ACCEPT verdicts in SubagentJudge and implicit judge
Fixes #6107
2026-03-10 15:24:39 +08:00
Navya Bijoy 14daf672e8 Fix: SessionManager._cleanup_stale_active_sessions indiscriminately cancels healthy concurrent agent sessions (#6081)
* fixes a bug in the SessionManager

* chore: remove debug print from test

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-10 15:18:11 +08:00
Emmanuel Nwanguma e352ae5145 fix(mcp): close errlog file handle to prevent resource leak (#6094)
Track the errlog file handle opened on non-Windows systems and
properly close it during cleanup to prevent file descriptor leaks.

Changes:
- Add _errlog_handle instance variable to track the file handle
- Store handle reference when opening os.devnull
- Close handle in _cleanup_stdio_async() after other cleanup
- Clear reference in disconnect() for safety

Fixes #6002
2026-03-10 15:06:51 +08:00
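A minimal sketch of the handle-tracking pattern this commit describes; the attribute and method names (_errlog_handle, _cleanup_stdio_async, disconnect) come from the message, while the surrounding class shape is assumed.
```python
# Hedged sketch of the errlog handle-tracking pattern; class structure is assumed.
import os
import sys


class StdioConnection:
    def __init__(self) -> None:
        # Track the errlog file handle so cleanup can close it (prevents fd leaks).
        self._errlog_handle = None

    def open_errlog(self):
        # On non-Windows systems, stderr is redirected to os.devnull; keep the handle.
        if sys.platform != "win32":
            self._errlog_handle = open(os.devnull, "w")
        return self._errlog_handle

    async def _cleanup_stdio_async(self) -> None:
        # ... other stdio cleanup would run first ...
        if self._errlog_handle is not None:
            self._errlog_handle.close()

    def disconnect(self) -> None:
        # Clear the reference for safety, as described in the commit.
        self._errlog_handle = None
```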
Pushkal a58ffc2669 fix(server): use session.phase_state instead of session.mode_state in handle_pause (#6069)
The handle_pause endpoint referenced session.mode_state (lines 360-361),
which does not exist on the Session dataclass. This caused an
AttributeError every time the pause endpoint reached the phase transition
step, preventing the queen phase from transitioning to staging and
returning a 500 error to the frontend.

Changed to session.phase_state, consistent with handle_stop (line 412),
handle_run (line 75), and the Session dataclass definition
(session_manager.py line 44).
2026-03-10 15:03:19 +08:00
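For illustration only, a hedged sketch of the attribute mismatch: the Session dataclass exposes phase_state, so reading mode_state raises AttributeError; the field types and handler body here are assumptions, not the actual server code.
```python
from dataclasses import dataclass, field


@dataclass
class Session:
    session_id: str
    phase_state: dict = field(default_factory=dict)  # the field that actually exists


def handle_pause(session: Session) -> dict:
    # Before the fix: session.mode_state raised AttributeError and surfaced as a 500.
    # After the fix: read phase_state, consistent with handle_stop and handle_run.
    previous = session.phase_state.get("phase", "queen")
    session.phase_state["phase"] = "staging"
    return {"paused": True, "previous_phase": previous}
```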
RichardTang-Aden 3fefea52be Merge pull request #6102 from aden-hive/micro-fix/report-to-parent-empty-check
micro-fix: track reported_to_parent to prevent false empty-turn detection
2026-03-09 21:12:23 -07:00
Richard Tang 06fd045b3e micro-fix: track reported_to_parent to prevent false empty-turn detection
Turns that call report_to_parent were incorrectly treated as "truly
empty" because the flag was not propagated. Thread it through
_run_single_turn and include it in the empty-turn guard.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 21:10:47 -07:00
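A hedged sketch of the flag propagation described here: _run_single_turn returns whether report_to_parent was called, and the empty-turn guard takes that flag into account. Everything except those two names is illustrative.
```python
def _run_single_turn(tool_calls: list[str]) -> tuple[list[str], bool]:
    reported_to_parent = False
    outputs: list[str] = []
    for call in tool_calls:
        if call == "report_to_parent":
            reported_to_parent = True  # propagate the flag instead of dropping it
        else:
            outputs.append(call)
    return outputs, reported_to_parent


def is_truly_empty_turn(outputs: list[str], reported_to_parent: bool) -> bool:
    # Empty-turn guard: a turn that reported to its parent is not "truly empty".
    return not outputs and not reported_to_parent
```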
RichardTang-Aden 2e43d2af46 Merge pull request #6100 from aden-hive/feature/integration-extended
micro-fix: wrong reference for hive_coder
2026-03-09 19:52:35 -07:00
Richard Tang 2c9790c65d Merge remote-tracking branch 'origin' into feature/integration-extended 2026-03-09 19:52:17 -07:00
Richard Tang 9700ac71bb micro-fix: wrong reference for hive_coder 2026-03-09 19:50:07 -07:00
RichardTang-Aden 61ed67b068 Merge pull request #6097 from aden-hive/feature/integration-extended
Expand integration tool coverage across 40 vendors
2026-03-09 19:47:34 -07:00
Richard Tang c3bea8685a Merge remote-tracking branch 'origin/main' into feature/integration-extended 2026-03-09 19:47:21 -07:00
RichardTang-Aden 98c57b795a Merge pull request #6050 from aden-hive/feat/queen-planning-phase
Add queen planning phase, global memory, and refactor hive_coder
2026-03-09 19:46:23 -07:00
Richard Tang 9be1d03b5c chore: ruff lint 2026-03-09 19:45:36 -07:00
Richard Tang 0d09510539 Merge remote-tracking branch 'origin/main' into feat/queen-planning-phase 2026-03-09 19:42:10 -07:00
Richard Tang 639c37ba17 feat: prompt to init the agent 2026-03-09 19:34:01 -07:00
Richard Tang 2258c23254 Merge branch 'feature/queen-global-memory' into feat/queen-planning-phase 2026-03-09 19:11:32 -07:00
Richard Tang 9714ea106d feat: improve initialize_and_build_agent clarity 2026-03-09 18:54:48 -07:00
Timothy f4ad500177 chore: lint 2026-03-09 18:53:01 -07:00
Timothy 9154a4d9f8 fix: resolve E501 line-too-long lint errors across 7 tool files 2026-03-09 18:51:01 -07:00
Timothy add6efe6f1 fix(micro-fix): increase stall threshold 2026-03-09 18:40:13 -07:00
Richard Tang 7ceb1efd02 fix: replace old tool name reference 2026-03-09 18:40:01 -07:00
Timothy a29ecf8435 chore(micro-fix): fix ci test blockage 2026-03-09 18:27:21 -07:00
Richard Tang d0ba5ef4f4 fix: update the wrong variable name 2026-03-09 18:12:29 -07:00
Richard Tang 860f637491 feat: add validation for module import 2026-03-09 17:53:50 -07:00
Richard Tang acb2cab317 feat: minor prompt change for switching to building mode 2026-03-09 17:41:23 -07:00
Richard Tang b453806918 feat: execution end message 2026-03-09 17:29:58 -07:00
Richard Tang 7ba8a0f51b feat: strengthen validation logic when loading 2026-03-09 17:08:20 -07:00
Richard Tang f6f398b6b1 feat: add GCU knowledge to planning 2026-03-09 17:02:13 -07:00
Timothy c4b22fa5c4 feat(postgres): update credential spec with new tool names 2026-03-09 16:47:27 -07:00
Timothy 0e64f977cd feat(postgres): add table stats, indexes, and foreign keys tools
Add pg_get_table_stats for row counts and size info,
pg_list_indexes for index details, and pg_get_foreign_keys
for relationship discovery with both outgoing and incoming FKs.
2026-03-09 16:47:09 -07:00
Timothy f24c9708fc feat(lusha): update credential spec with new tool names 2026-03-09 16:45:33 -07:00
Timothy bb4436e277 feat(lusha): add bulk enrich, technologies, and decision makers tools
Add lusha_bulk_enrich_persons for batch enrichment,
lusha_get_technologies for company tech stack lookup, and
lusha_search_decision_makers for senior contact discovery.
2026-03-09 16:45:17 -07:00
Timothy 795f66c90b feat(gsc): update credential spec with new tool names 2026-03-09 16:44:33 -07:00
Timothy 9ef6d51573 feat(gsc): add top queries, top pages, and delete sitemap tools
Add gsc_top_queries and gsc_top_pages convenience wrappers for
click-sorted analytics, and gsc_delete_sitemap for sitemap removal.
2026-03-09 16:44:20 -07:00
Timothy 3fed4e3409 feat(aws-s3): update credential specs with new tool names 2026-03-09 16:43:37 -07:00
Timothy 670e69f2ce feat(aws-s3): add copy, metadata, and presigned URL tools
Add s3_copy_object for copying within/between buckets,
s3_get_object_metadata for HEAD-based metadata retrieval, and
s3_generate_presigned_url for temporary access URL generation.
2026-03-09 16:42:46 -07:00
Timothy f6c4747905 feat(pushover): update credential spec with new tool names 2026-03-09 16:42:04 -07:00
Timothy 7b78f6c12f feat(pushover): add cancel receipt, glance update, and limits tools
Add pushover_cancel_receipt for stopping emergency retries,
pushover_send_glance for widget data updates, and
pushover_get_limits for checking message usage.
2026-03-09 16:41:52 -07:00
Timothy 1c75100f59 feat(news): update credential spec with new tool names 2026-03-09 16:41:15 -07:00
Timothy b325e103c6 feat(news): add latest, by-source, and by-topic search tools
Add news_latest for breaking news without query, news_by_source
for source-filtered articles, and news_by_topic for topic-based
discovery with automatic date ranges.
2026-03-09 16:40:54 -07:00
Timothy aef2d2d474 feat(serpapi): update credential spec with new tool names 2026-03-09 16:40:05 -07:00
Timothy 95a2b6711e feat(serpapi): add cited-by, profile search, and Google web search tools
Add scholar_cited_by for finding papers citing a given paper,
scholar_search_profiles for author profile discovery, and
serpapi_google_search for structured Google web results.
2026-03-09 16:38:50 -07:00
Timothy 7fb5e8145c feat(exa-search): update credential spec with new tool names 2026-03-09 16:37:56 -07:00
Timothy 8e45d0df83 feat(exa-search): add news, papers, and company search tools
Add exa_search_news, exa_search_papers, and exa_search_companies
convenience wrappers with pre-configured category filters and
automatic date/domain filtering.
2026-03-09 16:37:44 -07:00
Richard Tang 8d4657c13e Merge branch 'feat/queen-planning-phase' into feature/queen-global-memory 2026-03-09 16:10:42 -07:00
Timothy 3d175a6d54 feat(greenhouse): update credential spec with new tool names
Add greenhouse_list_offers, greenhouse_add_candidate_note, greenhouse_list_scorecards.
2026-03-09 16:02:53 -07:00
Timothy b9debaf957 feat(greenhouse): add list offers, candidate notes, and scorecards tools
- greenhouse_list_offers: GET /offers or /applications/{id}/offers
- greenhouse_add_candidate_note: POST /candidates/{id}/activity_feed/notes
- greenhouse_list_scorecards: GET /applications/{id}/scorecards
- Add _post helper for POST requests
2026-03-09 16:02:08 -07:00
Richard Tang bdcbcff6f3 feat: better instruction for planning mode switch 2026-03-09 16:01:34 -07:00
Timothy d2d7bdc374 feat(brevo): update credential spec with new tool names
Add brevo_list_contacts, brevo_delete_contact, brevo_list_email_campaigns.
2026-03-09 16:01:16 -07:00
Timothy 40e494b15d feat(brevo): add list contacts, delete contact, and list campaigns tools
- brevo_list_contacts: GET /contacts with pagination and modified_since filter
- brevo_delete_contact: DELETE /contacts/{email} to remove contacts
- brevo_list_email_campaigns: GET /emailCampaigns with status filter and stats
2026-03-09 16:00:42 -07:00
Timothy b5e840c0cb feat(quickbooks): update credential specs with new tool names
Add quickbooks_list_invoices, quickbooks_get_customer, quickbooks_create_payment
to both credential specs (token and realm_id).
2026-03-09 15:59:46 -07:00
Timothy f3d74c9ae4 feat(quickbooks): add list invoices, get customer, and create payment tools
- quickbooks_list_invoices: query invoices with status/customer filters
- quickbooks_get_customer: GET /customer/{id} with address and contact info
- quickbooks_create_payment: POST /payment with optional invoice linking
2026-03-09 15:59:23 -07:00
Richard Tang a22b321692 feat: improve phase switching tools 2026-03-09 15:33:03 -07:00
Timothy 2e7dbad118 feat(cloudinary): update credential specs with new tool names
Add cloudinary_get_usage, cloudinary_rename_resource, cloudinary_add_tag
to all three credential specs (cloud_name, key, secret).
2026-03-09 15:31:42 -07:00
Timothy 6183d1b65b feat(cloudinary): add usage, rename, and add tag tools
- cloudinary_get_usage: GET /usage for storage, bandwidth, transformation limits
- cloudinary_rename_resource: POST /rename to change public_id
- cloudinary_add_tag: POST /tags to add tags to resources
2026-03-09 15:31:22 -07:00
Timothy 09931e6d98 feat(twitter): update credential spec with new tool names
Add twitter_get_user_followers, twitter_get_tweet_replies, twitter_get_list_tweets.
2026-03-09 15:25:21 -07:00
Timothy cb394127d1 feat(twitter): add user followers, tweet replies, and list tweets tools
- twitter_get_user_followers: GET /users/{id}/followers with profile details
- twitter_get_tweet_replies: search recent replies via conversation_id
- twitter_get_list_tweets: GET /lists/{id}/tweets with author expansion
2026-03-09 15:21:47 -07:00
Timothy 588fa1f9ea feat(google-analytics): update credential spec with new tool names
Add ga_get_user_demographics, ga_get_conversion_events, ga_get_landing_pages.
2026-03-09 15:21:09 -07:00
Timothy 73325c280c feat(google-analytics): add demographics, conversion events, and landing pages tools
- ga_get_user_demographics: country/language/device breakdown
- ga_get_conversion_events: event counts, conversions, and revenue
- ga_get_landing_pages: top landing pages with bounce rate and session duration
2026-03-09 15:20:51 -07:00
Timothy 8c5ae8ffa8 feat(docker-hub): update credential spec with new tool names
Add docker_hub_get_tag_detail, docker_hub_delete_tag, docker_hub_list_webhooks.
2026-03-09 15:19:58 -07:00
Timothy 7389423c70 feat(docker-hub): add tag detail, delete tag, and list webhooks tools
- docker_hub_get_tag_detail: GET /repositories/{repo}/tags/{tag} with image architectures
- docker_hub_delete_tag: DELETE /repositories/{repo}/tags/{tag}
- docker_hub_list_webhooks: GET /repositories/{repo}/webhooks
- Add _delete helper for DELETE requests
2026-03-09 15:18:46 -07:00
Timothy 20c15446a7 feat(apollo): update credential spec with new tool names
Add apollo_get_person_activities, apollo_list_email_accounts,
apollo_bulk_enrich_people.
2026-03-09 15:17:38 -07:00
Richard Tang c05c30dd9a feat: add meta agent tools to planning 2026-03-09 15:14:34 -07:00
Timothy bcd2fb76bd feat(apollo): add person activities, email accounts, and bulk enrich tools
- apollo_get_person_activities: GET /activities for contact activity history
- apollo_list_email_accounts: GET /email_accounts for connected sending accounts
- apollo_bulk_enrich_people: POST /people/bulk_match for batch enrichment (up to 10)
2026-03-09 15:03:21 -07:00
Timothy 5fb97ab6df feat(calendly): update credential spec with new tool names
Add calendly_cancel_event, calendly_list_webhooks, calendly_get_event_type.
2026-03-09 15:00:46 -07:00
Timothy 0224ebc800 feat(calendly): add cancel event, list webhooks, and get event type tools
- calendly_cancel_event: POST /scheduled_events/{id}/cancellation
- calendly_list_webhooks: GET /webhook_subscriptions for org/user scope
- calendly_get_event_type: GET /event_types/{id} for meeting template details
- Add _post helper for POST requests
2026-03-09 15:00:34 -07:00
Timothy af88f7299a feat(pagerduty): update credential specs with new tool names
Add pagerduty_list_oncalls, pagerduty_add_incident_note,
pagerduty_list_escalation_policies to api_key spec.
Add pagerduty_add_incident_note to from_email spec (write operation).
2026-03-09 14:59:53 -07:00
Timothy 81729706ae feat(pagerduty): add oncalls, incident notes, and escalation policies tools
- pagerduty_list_oncalls: GET /oncalls with schedule/policy filters
- pagerduty_add_incident_note: POST /incidents/{id}/notes to add notes
- pagerduty_list_escalation_policies: GET /escalation_policies with search
2026-03-09 14:59:33 -07:00
Timothy bbb1b43ebe feat(airtable): update credential spec with new tool names
Add airtable_delete_records, airtable_search_records, airtable_list_collaborators.
2026-03-09 14:58:57 -07:00
Timothy 70ed5fa8df feat(airtable): add delete records, search records, and list collaborators tools
- airtable_delete_records: DELETE records by comma-separated IDs (up to 10)
- airtable_search_records: search records using FIND formula for partial matching
- airtable_list_collaborators: list base collaborators via meta API
- Add _delete helper for DELETE requests
2026-03-09 14:58:42 -07:00
Timothy 312db6620d feat(reddit): update credential specs with new tool names
Add reddit_get_subreddit_info, reddit_get_post_detail, reddit_get_user_posts
to both credential specs (client_id and client_secret).
2026-03-09 14:57:50 -07:00
Timothy 93c1fc5488 feat(reddit): add subreddit info, post detail, and user posts tools
- reddit_get_subreddit_info: GET /r/{name}/about for subscriber count, description
- reddit_get_post_detail: GET /by_id/t3_{id} for full post details with flair, ratios
- reddit_get_user_posts: GET /user/{name}/submitted for user's post history
2026-03-09 14:57:33 -07:00
Richard Tang 90762f275b feat: give planning mode the load tool 2026-03-09 14:55:53 -07:00
Timothy 801443027d feat(pipedrive): update credential spec with new tool names
Add pipedrive_update_deal, pipedrive_create_person, pipedrive_create_activity
to the credential spec tools list.
2026-03-09 14:54:22 -07:00
Timothy ca2ead76cd feat(pipedrive): add deal update, person creation, and activity creation tools
Add pipedrive_update_deal, pipedrive_create_person, and
pipedrive_create_activity tools using Pipedrive REST API v1.
2026-03-09 14:52:27 -07:00
Timothy d562144a6d feat(confluence): register new tools in credential specs
Add confluence_update_page, confluence_delete_page, and
confluence_get_page_children to all three Confluence credential specs.
2026-03-09 14:51:39 -07:00
Timothy af7fb7da27 feat(confluence): add page update, delete, and children listing tools
Add confluence_update_page, confluence_delete_page, and
confluence_get_page_children tools using Confluence REST API v2.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 14:51:26 -07:00
Timothy c17dd63b4a feat(intercom): register new tools in credential spec
Add intercom_close_conversation, intercom_create_contact, and
intercom_list_conversations to Intercom credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 14:50:49 -07:00
Timothy 866db289e2 feat(intercom): add close conversation, create contact, and list conversations tools
Add close_conversation, create_contact, and list_conversations client
methods plus intercom_close_conversation, intercom_create_contact, and
intercom_list_conversations MCP tools using Intercom API v2.11.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 14:50:30 -07:00
Timothy b4ac5e9607 feat(gitlab): register new tools in credential spec
Add gitlab_update_issue, gitlab_get_merge_request, and
gitlab_create_merge_request_note to GitLab credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 14:49:01 -07:00
Timothy 3ca7af4242 feat(gitlab): add issue update, MR detail, and MR comment tools
Add _put helper and gitlab_update_issue, gitlab_get_merge_request,
and gitlab_create_merge_request_note tools using GitLab REST API v4.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 14:48:40 -07:00
Richard Tang 2b12a9c91a Merge remote-tracking branch 'origin/feature/queen-global-memory' into feature/queen-global-memory 2026-03-09 14:47:27 -07:00
Richard Tang 9a94595a42 feat: extract the shared knowledge between planning and building 2026-03-09 14:45:31 -07:00
Richard Tang e1540dfaa6 refactor: drop hive code CLI 2026-03-09 14:30:13 -07:00
Richard Tang 4f5ac6d1b1 refactor: rename hive_coder to queen and extract queen orchestrator 2026-03-09 14:23:31 -07:00
Richard Tang c87d7b13da refactor: rename hive_coder to queen and extract queen orchestrator 2026-03-09 14:23:16 -07:00
Timothy c4acf0b659 fix: memory consolidation hook, simplify generated memory files 2026-03-09 14:15:01 -07:00
RichardTang-Aden 5e1ab3ca37 Merge pull request #5029 from karthik-kotra/docs/setup-troubleshooting
docs(setup): add troubleshooting steps for common WSL setup issues
2026-03-09 14:06:28 -07:00
Timothy 79c32c9f47 feat(slack): register new tools in credential spec
Add slack_get_channel_info, slack_list_files, and slack_get_file_info
to Slack credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:58:14 -07:00
Timothy 35ee29a843 feat(slack): add channel info, file listing, and file detail tools
Add get_channel_info, list_files, and get_file_info client methods
plus slack_get_channel_info, slack_list_files, and slack_get_file_info
MCP tools using Slack Web API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:57:45 -07:00
Timothy 573aea1d9c feat(stripe): register new tools in credential spec
Add stripe_list_disputes, stripe_list_events, and
stripe_create_checkout_session to Stripe credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:56:20 -07:00
Timothy 6ecbc30293 feat(stripe): add disputes, events, and checkout session tools
Add list_disputes, list_events, and create_checkout_session client
methods plus stripe_list_disputes, stripe_list_events, and
stripe_create_checkout_session MCP tools using Stripe API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:56:07 -07:00
Timothy 843b1f2e1d feat(linear): register new tools in credential spec
Add linear_cycles_list, linear_issue_comments_list, and
linear_issue_relation_create to Linear credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:54:48 -07:00
Timothy 89f6c8e4ef feat(linear): add cycle listing, issue comments, and issue relations tools
Add list_cycles, list_issue_comments, and create_issue_relation client
methods plus linear_cycles_list, linear_issue_comments_list, and
linear_issue_relation_create MCP tools using Linear GraphQL API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:52:09 -07:00
Timothy 304ac07bd8 feat(zoom): register new tools in credential spec
Add zoom_update_meeting, zoom_list_meeting_participants, and
zoom_list_meeting_registrants to Zoom credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:50:27 -07:00
Timothy 82f0684b83 feat(zoom): add meeting update, participants, and registrants tools
Add zoom_update_meeting (PATCH), zoom_list_meeting_participants
(past meeting attendees), and zoom_list_meeting_registrants
using Zoom REST API v2.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:45:11 -07:00
Timothy 963c37dc31 feat(twilio): register new tools in credential specs
Add twilio_list_phone_numbers, twilio_list_calls, and
twilio_delete_message to both Twilio credential specs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:41:26 -07:00
Timothy c02da3ba5a feat(twilio): add phone number listing, call history, and message deletion tools
Add twilio_list_phone_numbers, twilio_list_calls, and
twilio_delete_message tools using Twilio REST API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:40:58 -07:00
Timothy 7f34e95ec6 feat(shopify): register new tools in credential specs
Add shopify_update_product, shopify_get_customer, and
shopify_create_draft_order to both Shopify credential specs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:40:28 -07:00
Timothy f2998fe098 feat(shopify): add product update, customer detail, and draft order tools
Add shopify_update_product, shopify_get_customer, and
shopify_create_draft_order tools using Shopify Admin REST API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:40:15 -07:00
Timothy 323a2489b8 feat(zendesk): register new tools in credential specs
Add zendesk_get_ticket_comments, zendesk_add_ticket_comment, and
zendesk_list_users to all three Zendesk credential specs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:39:35 -07:00
Timothy f6d1cd640e feat(zendesk): add ticket comments and user listing tools
Add zendesk_get_ticket_comments, zendesk_add_ticket_comment, and
zendesk_list_users tools using Zendesk Support API v2.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:39:25 -07:00
Timothy ddf89a04fe feat(asana): update credential spec for new tools
Register asana_update_task, asana_add_comment, and
asana_create_subtask in the Asana credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:35:16 -07:00
Timothy c5dc89f5ee feat(asana): add update_task, add_comment, create_subtask tools
Add _put helper and three new Asana MCP tools:
- asana_update_task: modify name, notes, completion, due date, assignee
- asana_add_comment: post comment stories on tasks
- asana_create_subtask: create subtasks under existing tasks

API ref: https://developers.asana.com/docs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:35:05 -07:00
Timothy 6ade34b759 feat(trello): register get_card, create_list, search_cards tools
Add three new Trello MCP tools:
- trello_get_card: retrieve full card details with members/checklists/attachments
- trello_create_list: create new lists on boards
- trello_search_cards: full-text search across cards with board scoping

Update credential spec to include the new tool names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:20:43 -07:00
Timothy 09d5f0a9df feat(trello): add client methods for get_card, create_list, search
Add TrelloClient methods for:
- get_card: GET /1/cards/{id} with members, checklists, attachments
- create_list: POST /1/lists to create new board lists
- search: GET /1/search for full-text search across cards

API ref: https://developer.atlassian.com/cloud/trello/rest/api-group-cards/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:19:59 -07:00
Timothy a60d63cca2 feat(github): register list_commits, create_release, list_workflow_runs
Add three new GitHub MCP tools:
- github_list_commits: query commits with author/date/branch filters
- github_create_release: create tagged releases with notes and draft support
- github_list_workflow_runs: monitor CI/CD pipeline runs with status filters

Update credential spec to include the new tool names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:19:16 -07:00
Timothy 8616975fc5 feat(github): add client methods for commits, releases, workflow runs
Add _GitHubClient methods for:
- list_commits: GET /repos/{owner}/{repo}/commits with sha/author/date filters
- create_release: POST /repos/{owner}/{repo}/releases with tag, notes, draft
- list_workflow_runs: GET /repos/{owner}/{repo}/actions/runs with filters

API ref: https://docs.github.com/en/rest

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:18:33 -07:00
Timothy e5ae919d8f feat(telegram): register get_chat_member_count, send_video, set_description
Add three new Telegram MCP tools:
- telegram_get_chat_member_count: retrieve group/channel membership size
- telegram_send_video: send video files via URL or file_id
- telegram_set_chat_description: update group/channel descriptions

Update credential spec to include the new tool names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:17:30 -07:00
Timothy 8e7f5eaaba feat(telegram): add client methods for member count, video, description
Add _TelegramClient methods for:
- get_chat_member_count: getChatMemberCount API endpoint
- send_video: sendVideo with caption, parse_mode, duration support
- set_chat_description: setChatDescription for groups/channels

API ref: https://core.telegram.org/bots/api

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:13:06 -07:00
Timothy 4d1ff8b054 feat(salesforce): update credential spec for new tools
Register salesforce_delete_record, salesforce_search_records, and
salesforce_get_record_count in both Salesforce credential specs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:12:25 -07:00
Timothy 9fa81e8599 feat(salesforce): add delete_record, search_records, get_record_count
Add three new Salesforce MCP tools:
- salesforce_delete_record: DELETE /sobjects/{type}/{id}
- salesforce_search_records: SOSL full-text search via /search/
- salesforce_get_record_count: efficient COUNT() query for any SObject

API ref: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:12:11 -07:00
Timothy cf8e19b059 feat(discord): register get_channel, create_reaction, delete_message tools
Add three new Discord MCP tools:
- discord_get_channel: retrieve channel metadata (name, topic, type)
- discord_create_reaction: add emoji reactions to messages
- discord_delete_message: remove messages from channels

Update credential spec to include the new tool names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:11:25 -07:00
Timothy dfa3f60fcf feat(discord): add client methods for get_channel, reactions, delete
Add _DiscordClient methods for:
- get_channel: retrieve channel metadata via GET /channels/{id}
- create_reaction: add emoji reaction via PUT reactions endpoint
- delete_message: remove a message via DELETE messages endpoint

API ref: https://discord.com/developers/docs/resources

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:10:49 -07:00
Timothy b795f1b253 feat(notion): update credential spec for new tools
Register notion_update_page, notion_archive_page, and
notion_append_blocks in the Notion credential spec.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:06:20 -07:00
Timothy 73423c0dd2 feat(notion): add update_page, archive_page, append_blocks tools
Add three new Notion MCP tools:
- notion_update_page: modify page properties via PATCH /pages/{id}
- notion_archive_page: archive or restore pages
- notion_append_blocks: add paragraphs, headings, lists, todos, etc.

API ref: https://developers.notion.com/reference

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:06:08 -07:00
Timothy 3d844e1539 feat(jira): update credential spec for new tools
Register jira_update_issue, jira_list_transitions, and
jira_transition_issue in all three Jira credential specs
(domain, email, token).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:04:32 -07:00
Timothy b619119eb5 feat(jira): add update_issue, list_transitions, transition_issue tools
Add three new Jira MCP tools:
- jira_update_issue: modify summary, description, priority, labels, assignee
- jira_list_transitions: discover available status transitions for an issue
- jira_transition_issue: move an issue to a new status with optional comment

API ref: https://developer.atlassian.com/cloud/jira/platform/rest/v3/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:04:19 -07:00
Timothy b00ed4fc70 feat(hubspot): register delete_object, list/create_associations tools
Add three new MCP tools:
- hubspot_delete_object: archive contacts, companies, or deals
- hubspot_list_associations: query links between CRM objects (v4 API)
- hubspot_create_association: link two CRM records together

Update credential spec to include the new tool names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:03:37 -07:00
Timothy 5ec5fbe998 feat(hubspot): add client methods for delete, associations
Add _HubSpotClient methods for:
- delete_object: archive a CRM object via DELETE /crm/v3/objects
- list_associations: query associations via GET /crm/v4/objects associations endpoint
- create_association: link two CRM objects via PUT /crm/v4/objects associations endpoint

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 13:02:49 -07:00
Richard Tang 2ed814455a Merge branch 'feat/queen-planning-phase' into feature/queen-global-memory 2026-03-09 12:57:23 -07:00
Timothy ad1a4ef0c3 fix: cancellation button 2026-03-09 12:48:20 -07:00
Timothy 2111c808a9 feat: queen memory v1 2026-03-09 11:55:39 -07:00
Bryan @ Aden 402bb38267 Merge pull request #6079 from Waryjustice/fix/google-sheets-credentials-orphan
fix(credentials): remove orphaned google_sheets.py credential spec
2026-03-09 18:37:27 +00:00
Waryjustice 0a55928872 fix(credentials): remove orphaned google_sheets.py credential spec
The google_sheets.py file defined GOOGLE_SHEETS_CREDENTIALS (an API-key
based credential for reading public sheets via GOOGLE_SHEETS_API_KEY) but
was never wired into the package:

- Never imported in credentials/__init__.py
- Never merged into CREDENTIAL_SPECS
- Never listed in __all__
- Tool never calls credentials.get('google_sheets_key') — uses 'google' (OAuth2)
- Tool names in the spec were stale and did not match actual function names

The 'google' credential in email.py already correctly covers all Google
Sheets tools via OAuth2. This file was dead code with no referencing
imports anywhere in the repository.

Closes #6077

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-09 23:44:26 +05:30
Richard Tang cdf76ae3b9 fix: eventloop test 2026-03-09 10:23:56 -07:00
Richard Tang 42d0592941 refactor: judge evaluate 2026-03-09 10:09:15 -07:00
Richard Tang 1de7cf821d fix: handle judge with empty message 2026-03-09 09:58:29 -07:00
Timothy 4ea8540e25 fix: better logging for memory consolidation event 2026-03-08 20:44:40 -07:00
Timothy bfa3b8e0f6 fix: queen memory health 2026-03-08 20:28:53 -07:00
Richard Tang 55eccfd75f feat: intake node prompt in planning mode 2026-03-08 20:27:24 -07:00
Timothy 1e994a77b5 feat: queen agent global memory 2026-03-08 19:54:46 -07:00
Richard Tang d12afeb35d chore: ruff lint 2026-03-08 19:46:49 -07:00
Timothy @aden b55a77634b Delete .github/ISSUE_TEMPLATE/link-discord.yml 2026-03-08 19:44:48 -07:00
Richard Tang e84fefd319 feat: separate the queen and worker tools in prompts 2026-03-08 19:40:30 -07:00
Bryan @ Aden f7db603922 Merge pull request #6048 from aden-hive/fix/draft-email-tool
(micro-fix): draft email tool
2026-03-09 02:26:58 +00:00
bryan b4a47a12ff fix: linter formatting 2026-03-08 19:26:06 -07:00
bryan 2228851b16 feat: added reply in thread to draft email tool 2026-03-08 19:24:38 -07:00
Richard Tang d2b510014d feat: adjust tools and knowledge separation between planning and building 2026-03-08 19:21:50 -07:00
Bryan @ Aden ed0a211906 Merge pull request #6047 from aden-hive/fix/reply-email-tool
(micro-fix): reply email tool
2026-03-09 02:00:03 +00:00
bryan 63744ddaef fix: update to pass linter 2026-03-08 18:58:50 -07:00
bryan 82331acb77 feat: update reply email tool to contain the email thread in the body 2026-03-08 18:53:53 -07:00
Richard Tang 3ed5fda448 feat: planning phase for the queen 2026-03-08 18:49:45 -07:00
Timothy @aden b96bbcaa72 Merge pull request #6044 from Amdev-5/fix/e501-coder-tools-server-6043
fix: E501 line too long in coder_tools_server.py
2026-03-08 17:39:59 -07:00
Timothy edfa49bf7a fix: ci test 2026-03-08 17:29:36 -07:00
RichardTang-Aden eb9e4ed23c Merge pull request #5955 from akshajtiwari/ci-first-issue
CI: add uv caching, improve PR requirements workflow
2026-03-08 17:17:22 -07:00
Amdev-5 fed9e90271 fix: E501 line too long in coder_tools_server.py
Break ternary expression across multiple lines to satisfy
the 100-char line length limit.

Fixes #6043
2026-03-09 05:35:45 +05:30
Timothy ca565ae664 fix: validate agent package for orphaned nodes 2026-03-07 09:29:48 -08:00
Timothy 42ce97e0fc fix: agent package validation - no orphaned nodes 2026-03-07 08:47:01 -08:00
Akshaj Tiwari bea17b5f79 simplify label creation logic by assuming label pre-exists 2026-03-07 19:02:04 +05:30
Akshaj Tiwari ab0d5ce8d3 change pr.updated_at to pr.created_at for the grace period check 2026-03-07 18:58:36 +05:30
Akshaj Tiwari b374d5119a resolving the ci.yml issues by using enable-cache instead of manual caching 2026-03-07 18:49:17 +05:30
Robert Hallers 7a467ef9b8 docs: mark TUI as deprecated in roadmap to match CLAUDE.md
Resolves inconsistency between CLAUDE.md/AGENTS.md (TUI deprecated) and
docs/roadmap.md (TUI listed as completed feature).

- Strike through TUI items in 3 roadmap sections
- Add deprecation note to TUI-to-GUI upgrade section
- Reference AGENTS.md and hive open as replacement

Fixes #5941

Signed-off-by: Robert Hallers <robert@terplabs.ai>
2026-03-07 02:36:04 -05:00
RichardTang-Aden 9129b4a42e Merge pull request #5975 from aden-hive/feat/queen-responsibility
feat: separate queen responsibility by phases
2026-03-06 19:21:53 -08:00
Richard Tang e906646d49 Merge remote-tracking branch 'origin/feature/thinking-hook' into feat/queen-responsibility 2026-03-06 19:15:29 -08:00
Timothy 086a532521 fix: skip queen judge, turn off aggressive compaction 2026-03-06 19:14:25 -08:00
Richard Tang 19dd40ed3a chore: ruff lint 2026-03-06 19:11:53 -08:00
Richard Tang 196f3d645f feat: building phase prompts improvements 2026-03-06 18:59:12 -08:00
Richard Tang 80fd91d175 feat: building phase prompt optimization 2026-03-06 18:53:33 -08:00
Richard Tang 695410f880 Merge remote-tracking branch 'origin/feature/thinking-hook' into feat/queen-responsibility 2026-03-06 18:39:21 -08:00
Timothy 009c62dac6 chore: re-organize hooks 2026-03-06 18:38:20 -08:00
Richard Tang 27c8904341 feat: limit the tool description in simple mode 2026-03-06 18:37:08 -08:00
Richard Tang ddfce58071 feat: simplify building prompts 2026-03-06 18:29:44 -08:00
Richard Tang 1bb850bdbe Merge remote-tracking branch 'origin/feature/thinking-hook' into feat/queen-responsibility 2026-03-06 17:59:33 -08:00
Richard Tang 5019633ba3 fix: remove wrong tool examples 2026-03-06 17:58:50 -08:00
Timothy @aden b0fd8b83f0 Merge pull request #5896 from VasuBansal7576/codex/pr-minimax-single
fix: add minimax provider mapping and stream fallback
2026-03-06 17:54:41 -08:00
Richard Tang 2dc58eeeb0 fix: add missing gcu prompts 2026-03-06 17:48:37 -08:00
Richard Tang 50ab55ded5 feat: loading improvement 2026-03-06 17:35:17 -08:00
Timothy 9f656577a2 fix: turn signal edge case 2026-03-06 17:18:54 -08:00
Richard Tang 5c87b4b194 Merge remote-tracking branch 'origin/feature/thinking-hook' into feat/queen-responsibility 2026-03-06 17:10:25 -08:00
Timothy 7f866b24a1 Merge branch 'feat/queen-responsibility' into feature/thinking-hook 2026-03-06 17:07:12 -08:00
Timothy 5eb623e931 fix: back to back compaction edge case 2026-03-06 17:06:53 -08:00
Richard Tang 5583896429 fix: add back gcu instruction 2026-03-06 17:05:47 -08:00
Timothy 3a8321b975 chore: fix compaction reference 2026-03-06 16:59:44 -08:00
Richard Tang 8443ec87a6 fix: output key that terminated the queen 2026-03-06 16:55:10 -08:00
Richard Tang 1e06e87f4c feat: improve validation 2026-03-06 16:34:28 -08:00
Richard Tang e2558e3f95 fix: llm friendly input 2026-03-06 16:04:58 -08:00
Richard Tang 1344d3bb8e feat: add node parameters and fix problems for initialize_agent_package 2026-03-06 15:54:27 -08:00
Timothy dc7ec6c058 Merge branch 'feat/queen-responsibility' into feature/thinking-hook 2026-03-06 15:35:17 -08:00
Richard Tang 0ba781609a refactor: remove unused builder functions 2026-03-06 15:32:45 -08:00
Richard Tang 5ce230b0a6 refactor: move the coder tools 2026-03-06 14:56:19 -08:00
Timothy ff1527a77a fix: thinking hook max token 2026-03-06 14:53:32 -08:00
Timothy 252dea0bc3 Merge branch 'feat/queen-responsibility' into feature/thinking-hook 2026-03-06 14:37:02 -08:00
Timothy 126cbe529f feat: queen thinking hook 2026-03-06 14:30:10 -08:00
Richard Tang 207a0a0ca5 feat: fix for mcp tools and templates 2026-03-06 14:27:20 -08:00
Richard Tang d9f502173b feat: queen building improvements 2026-03-06 13:51:49 -08:00
Richard Tang f090ce4d5a fix: duplicated session calls 2026-03-06 12:28:37 -08:00
Richard Tang 1f7efcd940 feat: queen prompt optimization 2026-03-06 12:27:08 -08:00
Akshaj Tiwari fbbbaadd1e remove workflow_dispatch trigger from PR requirements workflows (forgot this commit) 2026-03-07 00:59:56 +05:30
Richard Tang 4de140a170 Merge remote-tracking branch 'origin/main' into feat/queen-responsibility 2026-03-06 11:17:03 -08:00
Richard Tang ed5cfb93a4 fix: catch Cannot write to closing transport error 2026-03-06 11:15:13 -08:00
Richard Tang 891bf08477 feat: re-organized queen prompt 2026-03-06 11:13:15 -08:00
Akshaj Tiwari 37651e534f add PR requirements warning and enforcement workflow and remove the workflow dispatch trigger 2026-03-07 00:39:35 +05:30
Akshaj Tiwari df63c3e781 add the pr requirement changes and remove the workflow dispatch option from ci.yml(tested) 2026-03-06 23:44:45 +05:30
Akshaj Tiwari 838da4a16e style: fix ruff import ordering 2026-03-06 22:57:14 +05:30
Akshaj Tiwari e916d573f6 adding workflow dispatch for testing 2026-03-06 22:51:19 +05:30
Richard Tang 08d51bb377 refactor: remove the old coderagent for TUI 2026-03-06 09:20:49 -08:00
Akshaj Tiwari fa5ebf19a4 first commit with the cache and working directory attributes 2026-03-06 22:49:06 +05:30
Richard Tang 17de0efcaf feat: rename escalation tool 2026-03-06 08:48:16 -08:00
Timothy 4099603a91 chore: lint 2026-03-06 08:21:40 -08:00
Vasu Bansal 988a58c1b7 fix: harden legacy agent.json loading error handling 2026-03-06 20:31:59 +05:30
Vasu Bansal cbc7ec3a32 fix: resolve aden client import duplication after rebase 2026-03-06 16:13:49 +05:30
Vasu Bansal 07d4bf8044 fix: resolve ruff import-order failure in aden client 2026-03-06 16:10:02 +05:30
Vasu Bansal e302e93ac9 chore: retrigger ci 2026-03-06 16:10:02 +05:30
Vasu Bansal 80f5a363d2 fix: address minimax review feedback in quickstart and provider wiring 2026-03-06 16:10:02 +05:30
Vasu Bansal 7b5b6d2c51 fix: add minimax provider mapping and stream fallback 2026-03-06 16:10:02 +05:30
Timothy 0b1fd72e49 chore: lint 2026-03-05 21:28:17 -08:00
Timothy 353f5c31a2 chore: lint 2026-03-05 21:24:21 -08:00
Richard Tang ad40b049ae feat: update the escalate tool 2026-03-05 20:53:33 -08:00
Richard Tang 42c9a11b1a feat: remove the terminal node for queen 2026-03-05 20:42:28 -08:00
Richard Tang c3fb1885c3 feat: make terminal node validation warning 2026-03-05 20:20:49 -08:00
Richard Tang bb413bad1f feat: prompts to allow user to override 2026-03-05 19:50:11 -08:00
Timothy afef3cb66a Merge branch 'fix/output-cleaner' into feat/queen-responsibility 2026-03-05 19:48:28 -08:00
Timothy b1f3d931cd fix: remove output cleaner 2026-03-05 19:48:13 -08:00
Richard Tang 297b24e061 feat: instruction for running phase to handle escalation 2026-03-05 19:47:13 -08:00
Richard Tang 775a0fa511 chore: prompt debug tool 2026-03-05 19:35:52 -08:00
Richard Tang 77abed89b9 feat: queen identity 2026-03-05 19:29:04 -08:00
Richard Tang e2aeb72d49 feat: queen identity 2026-03-05 19:25:59 -08:00
Richard Tang 2b440f84f0 feat: add the termination session back to queen 2026-03-05 19:10:49 -08:00
Richard Tang 8fd66a12c5 Merge remote-tracking branch 'origin/feat/queen-responsibility' into feat/queen-responsibility 2026-03-05 19:03:27 -08:00
Richard Tang f23d5a3ff5 feat: add terminal node back in graph 2026-03-05 19:02:41 -08:00
Timothy be8ec867e5 fix: better stall detection 2026-03-05 18:51:36 -08:00
Timothy b2ba42e541 Merge branch 'feat/queen-responsibility' into feat/worker-progressive-disclosure 2026-03-05 15:26:09 -08:00
Richard Tang 94d0038e03 chore: remove duplicates in anti-patterns 2026-03-05 14:54:46 -08:00
Timothy e1bf300e3c feat: progressive disclosure of runtime data 2026-03-05 14:44:02 -08:00
Timothy @aden f36add83f0 Merge pull request #5901 from aden-hive/fix/bom-safe-json-load
fix(micro-fix): bom safe json loading
2026-03-05 14:29:37 -08:00
Timothy Zhang a57d58e8d4 fix: bom safe json loading 2026-03-05 14:27:15 -08:00
Richard Tang c6b922e831 feat: condense framework guide 2026-03-05 14:26:03 -08:00
Richard Tang 71d12a7904 feat: condensed queen building prompts 2026-03-05 14:18:03 -08:00
Richard Tang 24c25d408c feat: remove unused prompts 2026-03-05 14:13:02 -08:00
Richard Tang 2e99fc9fe5 feat: change graph guidelines 2026-03-05 14:08:52 -08:00
Richard Tang c1f066b8ba feat: add gcu and validation in initialize_agent_package 2026-03-05 14:01:43 -08:00
Richard Tang e7a6074800 fix: prevent duplicate session creation when starting from home 2026-03-05 13:44:39 -08:00
Richard Tang 719942d29a fix: bug for multiple session calls 2026-03-05 13:04:17 -08:00
Richard Tang 190450a2b2 refactor: skip judge logic improvement 2026-03-05 12:38:18 -08:00
Richard Tang 44d609b719 feat: allow judge to wait for queen input 2026-03-05 12:33:27 -08:00
Richard Tang 8c9892f9f6 feat: re-organized the tools for list mcp tools 2026-03-05 12:06:57 -08:00
Bryan @ Aden 85c204a442 Merge pull request #5403 from jackthepunished/feat/telegram-tool-expansion
feat(tools): expand Telegram tool with message management, media, and chat info operations
2026-03-05 19:48:05 +00:00
Timothy @aden 56075a25a3 Merge pull request #5884 from aden-hive/feature/hive-as-a-game
feat(micro-fix): link discord github template
2026-03-05 10:55:13 -08:00
Timothy 2b0a6779cc feat: link discord github template 2026-03-05 10:54:30 -08:00
Timothy @aden b9ddce9d41 Merge pull request #5881 from aden-hive/docs-contributor-registration---Timothy
docs: Add TimothyZhang7 to contributors list
2026-03-05 10:37:26 -08:00
Timothy @aden 0c85406bc2 Add TimothyZhang7 to contributors list 2026-03-05 10:36:19 -08:00
Timothy @aden 1051134594 Merge pull request #5878 from aden-hive/feature/hive-as-a-game
chore: fix repo owner
2026-03-05 10:32:20 -08:00
jackthepunished 653d24df9d fix: address review — use POST for getChat, return raw API responses
- Change get_chat client method from httpx.get+params to httpx.post+json
  to avoid URL-encoding issues with @username chat IDs
- Remove {"success": True} normalization from delete_message,
  send_chat_action, pin_message, and unpin_message MCP tools;
  return raw Telegram API response consistently
- Update corresponding test mocks and assertions to match
2026-03-05 20:40:46 +03:00
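A hedged sketch of the review change described above: getChat is sent as a POST with a JSON body rather than a GET with query params, so chat IDs like "@channelname" are not URL-encoded, and the raw Telegram API response is returned. The base-URL and token wiring is illustrative.
```python
import httpx

BASE_URL = "https://api.telegram.org/bot<token>"  # placeholder token


def get_chat(chat_id: str) -> dict:
    # Before: httpx.get(f"{BASE_URL}/getChat", params={"chat_id": chat_id})
    # After: POST with a JSON body; avoids URL-encoding issues with "@username" IDs.
    response = httpx.post(f"{BASE_URL}/getChat", json={"chat_id": chat_id})
    response.raise_for_status()
    return response.json()  # raw API response, no {"success": True} normalization
```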
jackthepunished b687fa9e94 feat(tools): expand Telegram tool with message management, media, and chat info operations
Add 8 new operations to the Telegram Bot tool, bringing it from 2 to 10
operations. This covers message lifecycle (edit, delete, forward), media
(send photo), chat info (get chat), UX (typing indicators), and pin
management — making the tool practical for agent workflows beyond
fire-and-forget messaging.

New operations:
- telegram_edit_message: edit previously sent messages
- telegram_delete_message: delete messages
- telegram_forward_message: forward between chats
- telegram_send_photo: send photos via URL or file_id
- telegram_send_chat_action: show typing/uploading indicators
- telegram_get_chat: retrieve chat metadata
- telegram_pin_message: pin important messages
- telegram_unpin_message: unpin stale messages

Also includes input validation for chat actions, credential spec updates,
central registry wiring, and 31 new tests (52 total).

Closes #4808
2026-03-05 20:40:45 +03:00
Timothy c7f0ab0444 chore: fix repo owner 2026-03-05 09:33:02 -08:00
Timothy @aden 93bf373a5b Merge pull request #5869 from aden-hive/feature/hive-as-a-game
feat: integration bounty program with Lurkr XP, Discord roles, and automated tracking
2026-03-05 09:29:48 -08:00
Timothy 2d87042a70 fix: bad chars 2026-03-05 09:00:51 -08:00
Timothy 8a28abb7b8 fix: github actions 2026-03-05 09:00:06 -08:00
Emmanuel Nwanguma 0cdfbac5a1 docs(tools): add README for brevo, csv, runtime_logs, account_info tools (#5602)
* docs(tools): add README for brevo, csv, runtime_logs, account_info tools

- brevo_tool: Transactional email/SMS and contact management via Brevo API
- csv_tool: Read, write, query CSV files with DuckDB SQL support
- runtime_logs_tool: Query three-level runtime logging system
- account_info_tool: Query connected accounts and identities

* docs: fix runtime_logs_tool README to match implementation

- query_runtime_logs: add missing status values (degraded, in_progress, needs_attention)
- query_runtime_log_details: add missing needs_attention_only parameter
- query_runtime_log_raw: fix step_type -> step_index (int, not str)
- Fix file names: nodes.jsonl -> details.jsonl, steps.jsonl -> tool_logs.jsonl
- Fix error handling examples to match actual code

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-05 18:51:47 +08:00
alidevh 29a3ae471f fix(config): add logging for config parse errors (#4955)
Co-authored-by: alihassan <239741857+alidevh@users.noreply.github.com>
2026-03-05 18:22:34 +08:00
singhhnitin 9c0f56f027 Improve indirect variable expansion for provider API key detection (#5504)
Co-authored-by: Nitin Singh <nitinsingh3323@gmail.com>
2026-03-05 18:12:14 +08:00
Hundao 462e303a6e ci: skip POSIX permission tests on Windows (temporary, see #5842) (#5847)
Windows does not support POSIX file permissions, causing 4 test
failures on Windows CI. Skip these tests until the proper
ReplaceFileW fix lands.
2026-03-05 17:50:05 +08:00
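A hedged illustration of the temporary skip, using pytest's skipif marker; the test body and the 0o600 permission check are assumptions, not the repository's actual tests.
```python
import os
import stat
import sys

import pytest


@pytest.mark.skipif(
    sys.platform == "win32",
    reason="POSIX file permissions are not supported on Windows; see #5842",
)
def test_credentials_file_mode(tmp_path):
    secret = tmp_path / "credentials.json"
    secret.write_text("{}")
    os.chmod(secret, 0o600)
    assert stat.S_IMODE(os.stat(secret).st_mode) == 0o600
```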
Hundao a84b3c7867 fix: validate agent.json before parsing in AgentRunner.load() (#5846)
Use is_file() instead of exists() to reject directories, and check
for empty content before passing to json parser. Prevents raw
tracebacks on invalid agent.json inputs.

Fixes #5787
2026-03-05 17:23:46 +08:00
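A minimal sketch of the validation order described above: reject directories with is_file(), reject empty content, and only then hand the text to the JSON parser. The error type and loader shape are assumptions, not the actual AgentRunner.load() code.
```python
import json
from pathlib import Path


def load_agent_config(path: str) -> dict:
    p = Path(path)
    if not p.is_file():  # is_file() also rejects directories, unlike exists()
        raise ValueError(f"agent.json not found or not a file: {path}")
    content = p.read_text(encoding="utf-8").strip()
    if not content:
        raise ValueError(f"agent.json is empty: {path}")
    try:
        return json.loads(content)
    except json.JSONDecodeError as exc:
        raise ValueError(f"agent.json is not valid JSON: {exc}") from exc
```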
Anushka Punekar 606267d053 fix(cli): validate --output path before agent execution in cmd_run (#5838)
* fix(cli): validate --output path before agent execution in cmd_run

* style: fix indentation and formatting

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-05 16:51:33 +08:00
Timothy @aden 35791ae478 Merge pull request #5834 from aden-hive/fix/quickstart-tweaks
chore(micro-fix): tweak quickstart
2026-03-04 20:06:40 -08:00
Timothy 10f0002080 chore: tweak quickstart 2026-03-04 20:05:16 -08:00
Bryan @ Aden 60bff4107d Merge pull request #5833 from aden-hive/feat/google-scopes
micro-fix: quickstart build failing
2026-03-05 04:03:55 +00:00
bryan be11fa4b29 fix: quickstart build failing 2026-03-04 20:02:23 -08:00
Richard Tang 6ade844722 feat: escalation implementation 2026-03-04 19:59:02 -08:00
Bryan @ Aden da8bc796d3 Merge pull request #5832 from aden-hive/feat/google-scopes
(micro-fix): chore: updating tool tests
2026-03-05 03:53:25 +00:00
bryan 429619379e fix: linter issues 2026-03-04 19:50:25 -08:00
bryan 0fecedbbbf chore: updating tool tests 2026-03-04 19:47:55 -08:00
Timothy @aden a2244ada75 Merge pull request #5764 from aden-hive/feat/google-scopes
Feat/google scopes
2026-03-04 19:43:30 -08:00
bryan 7608ba9290 Merge branch 'main' into feat/google-scopes 2026-03-04 19:40:46 -08:00
Richard Tang b9a3c67fea feat: dynamically load the system prompt in the context 2026-03-04 19:40:20 -08:00
bryan f5f3396d5c chore: update icons of sample agents 2026-03-04 19:38:14 -08:00
bryan ed80ae80f0 feat: twitter news sample agent 2026-03-04 19:37:34 -08:00
Timothy c7a47c71f0 fix: simplify game plan 2026-03-04 19:15:27 -08:00
Richard Tang 219bbe00fc feat: move guardrail to validation 2026-03-04 19:11:37 -08:00
Timothy @aden b14b8f8c52 Merge pull request #5815 from levxn/bug/agent-sessions
Restoring session during server restart | smooth conversation picked from where left off | fix unhandled error in event routes |
2026-03-04 19:10:38 -08:00
bryan df1a83d475 feat/local-business-sample-agent 2026-03-04 19:09:02 -08:00
bryan 5b7727cfd1 fix: permanent top bar 2026-03-04 19:08:20 -08:00
Timothy 93e270dafb fix: change initial plan 2026-03-04 19:01:24 -08:00
Timothy be675dbb17 fix: restructure docs 2026-03-04 18:59:20 -08:00
Timothy 1c24848db3 feat: implement hive github repo and discord as a connected game 2026-03-04 18:52:42 -08:00
Richard Tang ef6af5404f refactor: new builder flow 2026-03-04 18:42:20 -08:00
Richard Tang b7d57f3d49 feat: fix run_command and remove agent search 2026-03-04 18:04:42 -08:00
Richard Tang 58c892babb Merge remote-tracking branch 'origin/main' into feat/queen-responsibility 2026-03-04 18:03:03 -08:00
Timothy @aden 4b5ec796bc Merge pull request #5829 from aden-hive/feat/remove-old-session-status-tools
fix: remove the reference in the coder agent init
2026-03-04 17:42:34 -08:00
Richard Tang 24df4729ca fix: remove the reference in the coder agent init 2026-03-04 17:40:28 -08:00
Richard Tang 9e2004e33b Merge remote-tracking branch 'origin/main' into feat/queen-responsibility 2026-03-04 17:30:09 -08:00
Timothy @aden 1e6538efac Merge pull request #5828 from aden-hive/feat/remove-old-session-status-tools
Remove deprecated get_agent_session_state and get_agent_session_memory tools
2026-03-04 17:29:35 -08:00
Richard Tang f9e53f58af refactor: remove old get_agent_session_state and get_agent_session_memory tools 2026-03-04 17:23:10 -08:00
Timothy 41388efc31 fix: Windows compat — guard os.fchmod and remove deleted LLM_CREDENTIALS import
os.fchmod does not exist on Windows; guard with hasattr check.
Remove LLM_CREDENTIALS reference from test (module deleted in e1db3a4).
2026-03-04 17:22:21 -08:00
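A short sketch of the Windows-compat guard described here: os.fchmod is absent on Windows, so it is called only when it exists. The surrounding write logic is illustrative.
```python
import os


def write_restricted(path: str, data: str) -> None:
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(data)
        if hasattr(os, "fchmod"):  # os.fchmod does not exist on Windows
            os.fchmod(fh.fileno(), 0o600)
```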
Timothy @aden fab5ce6fd0 Merge pull request #5824 from aden-hive/chore/fix-tool-tests
chore(micro-fix): fix test
2026-03-04 17:16:10 -08:00
Richard Tang b8be3056ed feat: list agent tool instruction 2026-03-04 17:03:45 -08:00
Timothy 207d6baee5 chore: fix test 2026-03-04 16:49:39 -08:00
Timothy @aden fec72bb2b6 Merge pull request #5294 from Antiarin/feat/hashline-edit-tool
[Integration]feat: add hashline anchor-based file editing tool
2026-03-04 16:38:13 -08:00
Richard Tang 39029b82d6 refactor: remove the coding agent check in quickstart 2026-03-04 16:27:33 -08:00
Richard Tang 232890b970 docs: remove the reference of the old agent skills 2026-03-04 16:23:30 -08:00
Timothy c4c4c24c59 Merge branch 'main' into feat/hashline-edit-tool 2026-03-04 16:23:07 -08:00
Richard Tang 13a8e28ae2 refactor: remove all old unused skills 2026-03-04 16:18:28 -08:00
bryan 917c7706ea chore: lint fix 2026-03-04 16:14:56 -08:00
bryan 8fadcd5b21 Merge branch 'main' into feat/google-scopes 2026-03-04 16:12:31 -08:00
Timothy @aden 2005ba2dca Merge pull request #5823 from aden-hive/micro-fix/lint
chore(micro-fix): lint
2026-03-04 16:11:51 -08:00
Richard Tang 34a44aa83c chore: update remaining reference for queen mode 2026-03-04 16:11:28 -08:00
Timothy 557d5fd6e5 chore: lint 2026-03-04 16:10:35 -08:00
Richard Tang 8468c45dc2 refactor: rename the queen mode to queen phase for clarity 2026-03-04 16:10:15 -08:00
Timothy @aden 79d2a15f95 Merge pull request #5814 from fermano/feature/windows-filesysten
Feature/windows filesystem
2026-03-04 16:07:33 -08:00
Richard Tang d2c3649566 refactor: re-organize the agent initialization tools 2026-03-04 16:06:00 -08:00
Timothy ab32e44128 style: ruff format fixes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 16:00:13 -08:00
Bryan @ Aden 047059f85f Merge pull request #5774 from kostasuser01gr/docs/4780-roadmap-updates
docs: update roadmap to reflect completed features (refs #4780)
2026-03-04 23:59:56 +00:00
Timothy e8364f616d Merge remote-tracking branch 'origin/main' into feature/windows-filesysten 2026-03-04 15:59:49 -08:00
Bryan @ Aden 9098c9b6c6 Merge pull request #5785 from code-Miracle49/fix/remove-duplicate-execute-subagent
micro-fix: remove duplicate _execute_subagent method in EventLoopNode
2026-03-04 23:55:36 +00:00
Timothy 84fd9ebac8 style: fix E501 line-too-long lint errors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-04 15:53:45 -08:00
Timothy @aden 23d5d76d56 Merge pull request #5822 from aden-hive/fix/anthropic-vendor-issue
fix: remove hardcoded Anthropic dependencies from core framework
2026-03-04 15:46:16 -08:00
Timothy b0c86588b6 chore: lint 2026-03-04 15:44:11 -08:00
Timothy 5aff1f9489 chore: lint 2026-03-04 15:43:59 -08:00
Timothy Zhang 199cb3d8cc fix: stdin conflicts 2026-03-04 14:45:02 -08:00
Fernando Mano a98a4ca0b6 feature(WindowsFilesystemSupport): #5677 - Windows File System Support and Testing -- fixing lint issues 2026-03-04 21:22:11 -03:00
Fernando Mano c4f49aadfa feature(WindowsFilesystemSupport): #5677 - Windows File System Support and Testing -- fixing lint issues 2026-03-04 21:20:33 -03:00
Fernando Mano ca5ac389cf feature(WindowsFilesystemSupport): #5677 - Windows File System Support and Testing -- fixing lint issues 2026-03-04 21:15:08 -03:00
Fernando Mano 7a658f7953 feature(WindowsFilesystemSupport): #5677 - Windows File System Support and Testing -- fixing lint issues 2026-03-04 21:05:49 -03:00
Timothy e05fc99da7 Merge branch 'main' into fix/anthropic-vendor-issue 2026-03-04 14:43:27 -08:00
Bryan @ Aden 787090667e Merge pull request #5816 from aden-hive/fix/pause-stop-worker
(micro-fix): update pause in pipeline uses stop_worker like queen
2026-03-04 21:50:23 +00:00
Antiarin 80b36b4052 fix: CRLF double-conversion in hashline edit and add large file skip reporting
- Replace joined.replace("\n", "\r\n") with re.sub(r"(?<!\r)\n", "\r\n", joined) to prevent \r\n in replace op new_content from becoming \r\r\n (fixed in both hashline_edit.py and file_ops.py)
- Track and report skipped large files in grep_search instead of silently skipping them
- Extract HASHLINE_MAX_FILE_BYTES constant to hashline.py as single source of truth, imported by view_file, grep_search, hashline_edit, and file_ops
- Add tests for CRLF replace op (both copies) and large file skip reporting
2026-03-05 03:12:35 +05:30
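The regex change in the first bullet is easy to see in isolation; a minimal, self-contained demonstration (the sample string is made up):
```python
import re

joined = "line1\r\nline2\nline3\n"

# Buggy: the "\n" inside the existing "\r\n" is converted again, producing "\r\r\n".
assert joined.replace("\n", "\r\n") == "line1\r\r\nline2\r\nline3\r\n"

# Fixed: only a "\n" not already preceded by "\r" is converted.
assert re.sub(r"(?<!\r)\n", "\r\n", joined) == "line1\r\nline2\r\nline3\r\n"
```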
bryan 0b8ed521c0 fix: update pause in pipeline to use stop_worker like queen 2026-03-04 13:42:20 -08:00
levxn 1ec7c5545f fixing lints and formatting 2026-03-05 02:59:57 +05:30
levxn cc6b6760c3 enables resume from where it was left off 2026-03-05 02:34:23 +05:30
Levin 26aed90ab2 Merge branch 'aden-hive:main' into bug/agent-sessions 2026-03-05 02:32:56 +05:30
Timothy 1c58ccb0c1 chore: lint 2026-03-04 12:45:27 -08:00
Timothy 79b80fe817 feat: coder tools to also support hashline editing 2026-03-04 12:41:07 -08:00
Antiarin c0f3841af7 feat: add file size check in grep_search to skip large files and switch case in hashline edit
- Implemented a check to skip files larger than 10MB in the grep_search function to optimize memory usage.
2026-03-04 12:41:07 -08:00
Antiarin 2b7d9bc471 feat: Updating the docs 2026-03-04 12:41:07 -08:00
Antiarin 98dc493a39 feat: Add cross-tool hashing anchor, grep search, and all viewfile 2026-03-04 12:41:07 -08:00
Antiarin cfaa57b28d feat: Add hashing tool 2026-03-04 12:40:25 -08:00
Timothy @aden 219e603de6 Merge pull request #5813 from aden-hive/refactor/quickstart
Refactor/quickstart
2026-03-04 12:27:45 -08:00
Timothy @aden 7663a5bce8 Merge pull request #5797 from Waryjustice/fix/windows-browser-auto-open
fix: browser auto-open after quickstart does not work on Windows
2026-03-04 12:27:35 -08:00
Timothy f2841b945d chore: lint 2026-03-04 12:24:08 -08:00
bryan faff64c413 chore: agents.md update 2026-03-04 12:12:27 -08:00
Timothy 6fbcdc1d87 fix: auto install node 20 2026-03-04 12:11:29 -08:00
bryan 69a11af949 chore: best effort alignment of windows quickstart 2026-03-04 11:43:50 -08:00
bryan 9ef272020e chore: added llm key health check 2026-03-04 11:35:12 -08:00
Richard Tang 71226d9625 fix: logger schema mismatch 2026-03-04 10:55:28 -08:00
bryan 258cfe7de5 chore: added easy way to update llm provider key 2026-03-04 10:42:57 -08:00
bryan 0d53b21133 chore: doc updates about hive open 2026-03-04 10:33:34 -08:00
Richard Tang 9102328d1c feat: make LLM logger by default on 2026-03-04 10:30:17 -08:00
Fernando Mano 704a0fd63a feature(WindowsFilesystemSupport): #5677 - Windows File System Support and Testing -- remove testing code and prepare for PR 2026-03-04 15:09:06 -03:00
bryan 0ccb28ffab fix: enter to use previously configured 2026-03-04 10:05:59 -08:00
Fernando Mano bf4101ac38 feature(WindowsFilesystemSupport): #5677 - Windows File System Support and Testing -- remove testing code and prepare for PR 2026-03-04 15:03:02 -03:00
bryan b30b571b44 chore: update recommended models 2026-03-04 09:54:29 -08:00
bryan bc44c3a401 chore: make gcu enabled by default 2026-03-04 09:52:42 -08:00
bryan 7fbf57cbb7 fix: linter update 2026-03-04 09:52:16 -08:00
Fernando Mano bc349e8fde feature(WindowsFilesystemSupport): #5677 - Windows File System Support and Testing -- remove testing code and prepare for PR 2026-03-04 14:41:42 -03:00
bryan 67d094f51a fix: tool tests 2026-03-04 09:22:34 -08:00
bryan 873af04c6e fix: utilize mac keychain for claude code subscription 2026-03-04 09:22:12 -08:00
Shaurya Singh 2f0439dca8 Merge branch 'main' into fix/windows-browser-auto-open 2026-03-04 22:50:39 +05:30
Fernando Mano 8470c6a980 feature(WindowsFilesystemSupport): #5677 - Windows File System Support and Testing 2026-03-04 14:16:48 -03:00
Levin 43092ba1d7 Merge branch 'aden-hive:main' into bug/agent-sessions 2026-03-04 22:33:40 +05:30
bryan 1920192656 feat: hive open cmd 2026-03-04 08:55:18 -08:00
bryan 61487db481 chore: linter fixes 2026-03-04 08:44:04 -08:00
Waryjustice f56feaf821 fix: browser auto-open after quickstart does not work on Windows 2026-03-04 22:12:53 +05:30
Timothy @aden 4cbd5a4c6c Merge pull request #5786 from osb910/fix/charmap-decode-error
fix(core): add utf-8 encoding to backend open calls (micro-fix)
2026-03-04 08:39:10 -08:00
Timothy 65aa5629e8 chore: fix lint 2026-03-04 08:34:01 -08:00
bryan c42c8ba505 Merge branch 'main' into feat/google-scopes 2026-03-04 08:25:29 -08:00
Omar Shareef 7193d09bed formatting warning fix 2026-03-04 16:43:46 +02:00
Omar Shareef 49f8fae0b4 fix: systematically enforce UTF-8 encoding across tools and core to fix Windows charmap decode errors 2026-03-04 16:04:53 +02:00
Omar Shareef e1a490756e fix: systematically enforce UTF-8 encoding across tools and core to fix Windows charmap decode errors 2026-03-04 15:58:03 +02:00
code-Miracle49 c313ea7ee2 micro-fix: remove duplicate _execute_subagent method in EventLoopNode 2026-03-04 12:54:43 +01:00
Omar Shareef 91bfaf36e3 fix(core): add utf-8 encoding to backend open calls
This fixes a charmap decoding error on Windows when opening agent files without explicitly specifying the encoding.
2026-03-04 13:32:59 +02:00
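The charmap errors fixed above come from Windows defaulting open() to the locale code page (often cp1252) when no encoding is given; any file containing characters outside that code page then raises UnicodeDecodeError. Passing encoding="utf-8" explicitly makes reads behave the same on every platform. A minimal illustration (the file name is hypothetical):

```python
from pathlib import Path

path = Path("agent_config.md")  # hypothetical file containing non-ASCII text
path.write_text("résumé → done ✓", encoding="utf-8")

# Default, locale-dependent encoding: may raise UnicodeDecodeError (charmap) on Windows.
# text = open(path).read()

# Explicit UTF-8: identical behavior on Windows, macOS, and Linux.
text = open(path, encoding="utf-8").read()
print(text)
```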
levxn e3ea9212dd latest upstream and merge conflicts resolved in workspace.tsx 2026-03-04 12:08:14 +05:30
kostasuser01gr 99d41d8cc6 docs: update roadmap to reflect completed features (refs #4780) 2026-03-04 08:37:14 +02:00
levxn 8988c1e760 session management and ability to converse from where the chat was left off, fix v1 2026-03-04 11:40:44 +05:30
Timothy @aden 465adf5b1f Merge pull request #5767 from aden-hive/feat/integrations
Feat/integrations
2026-03-03 22:04:08 -08:00
RichardTang-Aden 132d00d166 Merge pull request #5769 from aden-hive/queen-mode-separation
Queen mode separation: building, staging, and running modes
2026-03-03 21:31:23 -08:00
bryan b1a5f8e730 chore: tool test fixes 2026-03-03 21:01:19 -08:00
Richard Tang a604fee3aa chore: mode label update 2026-03-03 20:47:35 -08:00
Timothy 8018325923 style: fix all ruff lint errors (E501, E722, E741, F841)
- Break long lines (E501) across 25+ files
- Replace bare except with except Exception (E722)
- Rename ambiguous variable `l` to `item` (E741)
- Prefix unused variables with underscore (F841)
2026-03-03 20:42:30 -08:00
Richard Tang 3f86bd4009 chore: lint fix 2026-03-03 20:39:04 -08:00
Timothy b4cf10214b chore: lint issues 2026-03-03 20:38:30 -08:00
Bryan @ Aden c7818c2c33 Merge pull request #5766 from aden-hive/fix/credential-modal-delete
(micro-fix): Fix/credential modal delete
2026-03-04 04:38:23 +00:00
Timothy e421bcc326 chore: lint issues 2026-03-03 20:36:28 -08:00
Richard Tang 09e5a4dcc0 chore: frontend verbiage 2026-03-03 20:31:26 -08:00
Richard Tang ce08c44235 feat: improve ui indicator 2026-03-03 20:28:32 -08:00
Richard Tang e743234324 fix: strengthen prompt to collect user intent 2026-03-03 20:23:53 -08:00
Timothy 9b76ac48b7 chore: new dependency 2026-03-03 20:23:10 -08:00
bryan 06a9adb051 chore: linter fix 2026-03-03 20:15:42 -08:00
Richard Tang 6ae16345a8 fix: reference err from merging 2026-03-03 20:15:37 -08:00
Richard Tang 8daaf000b1 Merge remote-tracking branch 'origin/feat/question-widget' into queen-mode-separation 2026-03-03 20:09:10 -08:00
bryan 9ce753055c feat: meeting scheduler agent 2026-03-03 20:01:58 -08:00
bryan 0ce87b5155 refactor: update calendar list events tool 2026-03-03 20:01:42 -08:00
Richard Tang 273f411eee feat: replace the reload agent to stop worker 2026-03-03 20:01:27 -08:00
Richard Tang 6929cecf8a fix: tag for frontend 2026-03-03 19:53:18 -08:00
Richard Tang 9221a7ff03 Merge remote-tracking branch 'origin/queen-mode-separation' into queen-mode-separation 2026-03-03 19:43:33 -08:00
Richard Tang a6089c5b3b feat: returning queen bee status when starting session 2026-03-03 19:43:04 -08:00
Richard Tang a7ee972b32 feat: enable the frontend to cancel the current queen run and sync queen mode 2026-03-03 19:30:55 -08:00
Richard Tang c817989b99 feat: allow frontend change to control mode 2026-03-03 19:29:33 -08:00
Richard Tang 2272a6854c refactor: consolidate discorver_mcp_tools and list_agent_tools 2026-03-03 19:08:58 -08:00
Timothy 040fc1ee8d feat: corrected agent generation guidelines 2026-03-03 18:53:40 -08:00
Richard Tang f00b8d7b8c fix: update the initial state condition 2026-03-03 18:35:24 -08:00
Timothy @aden 6c8c6d7048 Merge pull request #5234 from Antiarin/fix/guardian-self-trigger-loop
fix(tui): fix pause/stop to cancel all running tasks across all graphs
2026-03-03 18:17:15 -08:00
Richard Tang f27ef52c7a feat: update queen initial state 2026-03-03 18:15:51 -08:00
Richard Tang 0a2ff1db97 feat: new queen stages and tools 2026-03-03 18:07:47 -08:00
Timothy 6da48eac6f feat: split tool loading into verified and unverified tiers
register_all_tools() now only loads verified (stable) tools by default.
Pass include_unverified=True to also load new/community integrations.
This prevents unverified tools from being loaded in production.

Also fixes duplicate register_brevo and register_pushover calls.
2026-03-03 17:54:45 -08:00
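A minimal sketch of the two-tier loading pattern described above: stable integrations load by default, community/unverified ones only when explicitly requested (registry contents and names here are illustrative, not the aden_tools registry itself):

```python
# Illustrative registries: callables that register a vendor's tools.
VERIFIED_REGISTRARS = {"slack": lambda: "slack tools", "github": lambda: "github tools"}
UNVERIFIED_REGISTRARS = {"new_vendor": lambda: "new_vendor tools"}

def register_all_tools(include_unverified: bool = False) -> list[str]:
    """Load verified tools always; unverified/community tools only on opt-in."""
    registered = [register() for register in VERIFIED_REGISTRARS.values()]
    if include_unverified:
        registered += [register() for register in UNVERIFIED_REGISTRARS.values()]
    return registered

assert len(register_all_tools()) == 2                          # production default
assert len(register_all_tools(include_unverified=True)) == 3   # opt-in for community tools
```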
Timothy 638ff04e24 fix: remove duplicate community tool directories and fix credential wiring
- Remove s3_tool (duplicate of aws_s3_tool), power_bi_tool (duplicate of
  powerbi_tool), x_tool (duplicate of twitter_tool)
- Remove integrations/plaid (duplicate of plaid_tool), integrations/sap_s4hana
  (duplicate of sap_tool), stray tools/mssql.py
- Add help key to credential error responses across 14 tool modules
- Fix health checker registry keys (calendly -> calendly_pat, lusha -> lusha_api_key)
- Add health_check_endpoint to calendly and lusha credential specs
- Fix Trello env var (TRELLO_TOKEN -> TRELLO_API_TOKEN) and remove duplicate
  Trello specs from hubspot.py
- Add credential_group="aws" to AWS S3 and Redshift specs sharing env vars
- Update conftest UNREGISTERED_COMMUNITY_MODULES to only contain mssql_tool
2026-03-03 17:46:28 -08:00
Timothy d7075b459b fix: cleanse llm conversations 2026-03-03 17:44:21 -08:00
bryan d0e7aa14b6 fix: hide delete button for Aden-managed credentials 2026-03-03 17:36:04 -08:00
bryan 59fee56c54 fix: share server credential store with runner to avoid redundant Aden syncs 2026-03-03 17:35:24 -08:00
bryan 2207306169 fix: resolve MCP server cwd from repo root instead of agent path 2026-03-03 17:34:51 -08:00
Richard Tang 8ff2e91f2d feat: add queen agent building and running mode switching 2026-03-03 16:01:41 -08:00
bryan 730370a007 test: update calendar and health check tests 2026-03-03 15:42:22 -08:00
bryan f87909109c refactor: simplify health check system 2026-03-03 15:42:07 -08:00
bryan d6a6d8b5ef refactor: unify Google OAuth to single credential 2026-03-03 15:41:53 -08:00
bryan 57563abfa7 feat: add Google Sheets tool 2026-03-03 15:41:26 -08:00
Richard Tang 61afaa4c8b feat: add uv instruction to agents 2026-03-03 14:51:58 -08:00
Richard Tang 0de47dbc3f feat: agents.md for agent collaboration 2026-03-03 14:51:58 -08:00
Richard Tang 676ef56134 fix: mcp path 2026-03-03 14:51:58 -08:00
Richard Tang f0899bb35d feat: use send instead of draft for email reply agent 2026-03-03 14:51:58 -08:00
Richard Tang f490038e36 chore: move the email reply sample agent 2026-03-03 14:51:58 -08:00
Richard Tang cbf220eb00 feat: email reply sample agent 2026-03-03 14:51:58 -08:00
Richard Tang bf0d80ea20 docs: reorder section in documentation 2026-03-03 14:51:58 -08:00
Richard Tang 3ae889a6f8 docs: add running screenshot and update the coding agent instruction 2026-03-03 14:51:58 -08:00
Richard Tang 03ca1067ac docs: sync all i18n READMEs with primary README 2026-03-03 14:51:58 -08:00
Richard Tang 3cda30a40a docs: update the latest features from recent changes 2026-03-03 14:51:58 -08:00
Richard Tang 26934527b9 docs: update readme instructions 2026-03-03 14:51:58 -08:00
Richard Tang 2619acde22 docs: remove TUI in the readme 2026-03-03 14:51:58 -08:00
Richard Tang b983d3cfd2 chore: ignore local dev skills 2026-03-03 14:51:58 -08:00
Richard Tang 87a9dd15fe fix: load-new-session from home 2026-03-03 14:51:58 -08:00
RichardTang-Aden 4066962ade Merge pull request #5751 from aden-hive/load-new-session-from-home
Fix new session from home and add email reply agent template
2026-03-03 14:48:17 -08:00
Richard Tang 0f26e34f09 fix: improve the reply template 2026-03-03 14:45:07 -08:00
Richard Tang d76e436e3d fix: new session should have their own id 2026-03-03 14:44:51 -08:00
Timothy 4ff531dec7 fix: update expected health checkers set (add calendly, zoho_crm) 2026-03-03 14:10:34 -08:00
Timothy 4f8b3d7aff fix: update credential specs for community Linear/Trello tools, skip unregistered community modules 2026-03-03 14:09:04 -08:00
Timothy 210fa9c474 fix: use community Brevo implementation (6 tools), remove orphaned x_tool test 2026-03-03 14:06:00 -08:00
Timothy 25361cac8c fix: align tests with community implementations, revert Reddit to httpx (praw unavailable) 2026-03-03 14:02:33 -08:00
Timothy 28defebd6d fix: remove community youtube_transcript tool.py requiring uninstalled SDK 2026-03-03 13:58:45 -08:00
Timothy c74381619e Merge branch 'feature/queen-worker-comm' into feat/question-widget 2026-03-03 13:57:52 -08:00
Timothy d58f3103dd fix: guard register_tools for s3_tool and mssql_tool when SDK not available 2026-03-03 13:54:46 -08:00
Timothy 5d1ed35660 fix: remove shell heredoc artifacts from community power_bi_tool 2026-03-03 13:52:20 -08:00
Timothy 1f3e305534 fix: guard optional SDK imports (boto3, pyodbc) and remove s3_tool registration 2026-03-03 13:51:04 -08:00
Timothy 7d8fdd279c fix: revert Asana to httpx-based implementation (asana SDK not available) 2026-03-03 13:33:35 -08:00
Timothy cacae9f290 fix: compaction logic 2026-03-03 13:33:01 -08:00
Timothy bb061b770f merge: incorporate QuickBooks community PR #4158
# Conflicts:
#	examples/templates/deep_research_agent/config.py
#	examples/templates/tech_news_reporter/config.py
#	tools/README.md
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/quickbooks.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/quickbooks_tool/__init__.py
#	tools/src/aden_tools/tools/quickbooks_tool/quickbooks_tool.py
#	tools/tests/tools/test_quickbooks_tool.py
2026-03-03 13:27:04 -08:00
Timothy a8768b9ed6 merge: incorporate MSSQL community PR #4200
# Conflicts:
#	tools/pyproject.toml
#	tools/src/aden_tools/credentials/integrations.py
#	tools/src/aden_tools/tools/__init__.py
2026-03-03 13:26:36 -08:00
Timothy b437aa5f6c merge: incorporate Linear community PR #3585
# Conflicts:
#	.claude/skills/hive-credentials/SKILL.md
#	tools/README.md
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/linear_tool/__init__.py
#	tools/src/aden_tools/tools/linear_tool/linear_tool.py
2026-03-03 13:24:57 -08:00
Timothy 9248182570 merge: incorporate Trello community PR #3376
# Conflicts:
#	tools/README.md
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/trello_tool/__init__.py
#	tools/src/aden_tools/tools/trello_tool/trello_tool.py
#	tools/tests/tools/test_trello_tool.py
2026-03-03 13:24:23 -08:00
bryan 511c1a6ed5 fix: update queen prompt around ask_user 2026-03-03 13:22:59 -08:00
Timothy 7c77c7170f merge: incorporate YouTube Transcript community PR #3520
# Conflicts:
#	tools/pyproject.toml
#	tools/src/aden_tools/tools/__init__.py
2026-03-03 13:22:46 -08:00
Timothy 85fcb6516c merge: incorporate Redshift community PR #3533
# Conflicts:
#	tools/pyproject.toml
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/redshift_tool/__init__.py
#	tools/src/aden_tools/tools/redshift_tool/redshift_tool.py
#	tools/tests/tools/test_redshift_tool.py
2026-03-03 13:17:41 -08:00
Timothy e8e76d85f7 merge: incorporate Pushover community PR #5424
# Conflicts:
#	tools/src/aden_tools/tools/pushover_tool/__init__.py
#	tools/src/aden_tools/tools/pushover_tool/pushover_tool.py
2026-03-03 13:17:18 -08:00
Timothy 5aaa5ae4d5 merge: incorporate Twitter/X community PR #3807
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/tests/test_credentials.py
2026-03-03 13:16:45 -08:00
Timothy c3a8ee9c7b merge: incorporate Calendly community PR #3947
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/calendly.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/calendly_tool/__init__.py
#	tools/src/aden_tools/tools/calendly_tool/calendly_tool.py
#	tools/tests/test_health_checks.py
#	tools/tests/tools/test_calendly_tool.py
2026-03-03 13:14:20 -08:00
Timothy 5d07a8aba5 merge: incorporate Airtable community PR #3953
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/airtable.py
#	tools/src/aden_tools/credentials/health_check.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/airtable_tool/__init__.py
#	tools/src/aden_tools/tools/airtable_tool/airtable_tool.py
#	tools/tests/test_health_checks.py
#	tools/tests/tools/test_airtable_tool.py
2026-03-03 13:13:47 -08:00
Timothy d18e0594b8 merge: incorporate Reddit community PR #3963
# Conflicts:
#	tools/pyproject.toml
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/health_check.py
#	tools/src/aden_tools/credentials/reddit.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/reddit_tool/__init__.py
#	tools/src/aden_tools/tools/reddit_tool/reddit_tool.py
#	tools/tests/tools/test_reddit_tool.py
#	uv.lock
2026-03-03 13:12:55 -08:00
Timothy 26dcc86a24 merge: incorporate Zoho CRM community PR #4713
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/zoho_crm_tool/__init__.py
#	tools/src/aden_tools/tools/zoho_crm_tool/zoho_crm_tool.py
#	tools/tests/test_health_checks.py
2026-03-03 13:11:51 -08:00
Timothy e928ad19e5 merge: incorporate Lusha community PR #4714
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/lusha.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/lusha_tool/__init__.py
#	tools/src/aden_tools/tools/lusha_tool/lusha_tool.py
#	tools/tests/tools/test_lusha_tool.py
2026-03-03 13:11:33 -08:00
Timothy 6768aaa575 merge: incorporate Apify community PR #4770
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/apify.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/apify_tool/__init__.py
#	tools/src/aden_tools/tools/apify_tool/apify_tool.py
#	tools/tests/tools/test_apify_tool.py
2026-03-03 13:10:45 -08:00
Timothy f561aacbfc merge: incorporate Attio community PR #4832
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/attio.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/attio_tool/__init__.py
#	tools/src/aden_tools/tools/attio_tool/attio_tool.py
2026-03-03 13:10:09 -08:00
RichardTang-Aden af1ece40c2 Merge pull request #5742 from aden-hive/load-new-session-from-home
Load new session from home
2026-03-03 13:09:44 -08:00
Timothy d9edd7adf7 merge: incorporate Asana community PR #4857
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/asana.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/asana_tool/__init__.py
#	tools/tests/tools/test_asana_tool.py
2026-03-03 13:08:30 -08:00
Richard Tang 3541fab363 feat: add uv instruction to agents 2026-03-03 13:06:50 -08:00
Richard Tang 1160dceeff feat: agents.md for agent collaboration 2026-03-03 13:06:09 -08:00
bryan bbe8efeba2 fix: prevent queen auto-block from overwriting pending worker questions 2026-03-03 13:04:33 -08:00
Timothy b4a5323009 merge: incorporate Brevo community PR #5136
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/brevo.py
#	tools/src/aden_tools/tools/brevo_tool/__init__.py
#	tools/src/aden_tools/tools/brevo_tool/brevo_tool.py
2026-03-03 13:04:29 -08:00
Timothy ade8b5b9a7 merge: incorporate Databricks community PR #5428
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/databricks.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/databricks_tool/__init__.py
#	tools/src/aden_tools/tools/databricks_tool/databricks_tool.py
#	tools/tests/tools/test_databricks_tool.py
2026-03-03 13:02:30 -08:00
Timothy e4ace3d484 merge: incorporate YouTube community PR #5673 (resolve conflicts, preserve README) 2026-03-03 12:29:32 -08:00
Timothy f3dd25adc5 merge: incorporate Power BI community PR #4341 2026-03-03 12:27:06 -08:00
Timothy ec251f8168 merge: incorporate SAP S/4HANA community PR #5519 2026-03-03 12:27:02 -08:00
Timothy 1bb9579dc5 merge: incorporate Plaid community PR #5518 2026-03-03 12:26:56 -08:00
Timothy 7ebf4146ce merge: incorporate AWS S3 community PR #5521 2026-03-03 12:26:50 -08:00
Richard Tang a8db4cb2f5 fix: mcp path 2026-03-03 12:19:32 -08:00
Richard Tang 24433396dd feat: use send instead of draft for email reply agent 2026-03-03 12:04:44 -08:00
Richard Tang 02bdf17641 chore: move the email reply sample agent 2026-03-03 11:59:14 -08:00
Timothy e0e05f3488 chore: register Obsidian tool in tool/credential registries 2026-03-03 11:55:12 -08:00
Timothy c92f2510c8 test: add Obsidian tool unit tests (read, write, append, search, list, active) 2026-03-03 11:55:12 -08:00
Timothy ea1fbe9ee1 chore: add Obsidian credential spec (REST API key) 2026-03-03 11:55:11 -08:00
Timothy 84a0be0179 feat: add Obsidian knowledge management integration (#3741)
6 tools: obsidian_read_note, obsidian_write_note, obsidian_append_note,
obsidian_search, obsidian_list_files, obsidian_get_active.
Uses Local REST API plugin with Bearer token auth. Supports vault
browsing, full-text search, and note CRUD with frontmatter metadata.
2026-03-03 11:55:04 -08:00
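A hedged sketch of the kind of Bearer-authenticated read the Obsidian commit describes. The base URL, port, and /vault path follow the Local REST API plugin's documented defaults but are assumptions here, as is the self-signed-certificate handling:

```python
import httpx

API_KEY = "obsidian-local-rest-api-key"   # assumption: key issued by the plugin
BASE_URL = "https://127.0.0.1:27124"      # assumption: the plugin's default HTTPS port

def read_note(path: str) -> str:
    """Fetch a note's markdown body from the Local REST API plugin using Bearer auth."""
    resp = httpx.get(
        f"{BASE_URL}/vault/{path}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        verify=False,  # assumption: the plugin serves a self-signed certificate
    )
    resp.raise_for_status()
    return resp.text

# read_note("Daily/2026-03-03.md")
```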
RichardTang-Aden 54f5c0dc91 Merge pull request #5735 from aden-hive/docs/readme/v6
docs: reorder section in documentation
2026-03-03 11:54:09 -08:00
Richard Tang adf1a10318 docs: reorder section in documentation 2026-03-03 11:53:05 -08:00
RichardTang-Aden e2a679a265 Merge pull request #5734 from aden-hive/docs/readme/v6
docs: add running screenshot and update the coding agent instruction
2026-03-03 11:50:56 -08:00
Richard Tang a3916a6932 docs: add running screenshot and update the coding agent instruction 2026-03-03 11:49:19 -08:00
Timothy 1b5780461e chore: register Langfuse tool in tool/credential registries 2026-03-03 11:42:49 -08:00
Timothy c8d35b63a4 test: add Langfuse tool unit tests (traces, scores, prompts) 2026-03-03 11:42:49 -08:00
Timothy feb1ebae04 chore: add Langfuse credential specs (public key, secret key) 2026-03-03 11:42:48 -08:00
Timothy efe49d0a5b feat: add Langfuse LLM observability integration (#5322)
6 tools: langfuse_list_traces, langfuse_get_trace, langfuse_list_scores,
langfuse_create_score, langfuse_list_prompts, langfuse_get_prompt.
Uses HTTP Basic Auth with public/secret key pair. Supports cloud and
self-hosted instances with offset-based pagination.
2026-03-03 11:41:11 -08:00
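The Langfuse commit above uses HTTP Basic auth with the public key as username and the secret key as password, plus offset/page pagination. A hedged sketch (host and path follow Langfuse's public API docs, but treat them as assumptions; keys are placeholders):

```python
import httpx

PUBLIC_KEY = "pk-lf-..."                 # placeholder project public key
SECRET_KEY = "sk-lf-..."                 # placeholder project secret key
HOST = "https://cloud.langfuse.com"      # self-hosted deployments swap this base URL

def list_traces(limit: int = 10) -> dict:
    """List recent traces using Basic auth (public key as user, secret key as password)."""
    resp = httpx.get(
        f"{HOST}/api/public/traces",
        params={"limit": limit, "page": 1},   # page/offset-based pagination
        auth=(PUBLIC_KEY, SECRET_KEY),
    )
    resp.raise_for_status()
    return resp.json()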
Timothy e50a5ea22a chore: register Zoom and n8n tools in tool/credential registries 2026-03-03 11:31:25 -08:00
Timothy 6382c94d0a test: add n8n tool unit tests (workflows, executions, activate/deactivate) 2026-03-03 11:31:21 -08:00
Timothy 58ce84c9cc chore: add n8n credential specs (API key, base URL) 2026-03-03 11:31:20 -08:00
Timothy 08fd6ff765 feat: add n8n workflow automation integration (#2931)
6 tools: n8n_list_workflows, n8n_get_workflow, n8n_activate_workflow,
n8n_deactivate_workflow, n8n_list_executions, n8n_get_execution.
Uses X-N8N-API-KEY header auth with configurable base URL.
Supports cursor-based pagination and execution status filtering.
2026-03-03 11:31:15 -08:00
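The n8n commit above authenticates with an X-N8N-API-KEY header and pages with a cursor. A minimal sketch of walking all workflows (base URL and paths match n8n's documented public API but are assumptions here; the key is a placeholder):

```python
import httpx

BASE_URL = "http://localhost:5678/api/v1"   # assumption: default self-hosted API base
API_KEY = "n8n-api-key"                     # placeholder key created in n8n settings

def list_all_workflows() -> list[dict]:
    """Collect every workflow by following the cursor returned in each page."""
    headers = {"X-N8N-API-KEY": API_KEY}
    workflows, cursor = [], None
    while True:
        params: dict = {"limit": 50}
        if cursor:
            params["cursor"] = cursor
        data = httpx.get(f"{BASE_URL}/workflows", headers=headers, params=params).json()
        workflows.extend(data.get("data", []))
        cursor = data.get("nextCursor")
        if not cursor:
            return workflows
```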
Timothy a9cb79909c test: add Zoom tool unit tests (user, meetings, recordings) 2026-03-03 11:31:07 -08:00
Timothy 852f8ccd94 chore: add Zoom credential spec (Server-to-Server OAuth token) 2026-03-03 11:31:07 -08:00
Timothy 9388ef3e99 feat: add Zoom meeting management integration (#2867)
6 tools: zoom_get_user, zoom_list_meetings, zoom_get_meeting,
zoom_create_meeting, zoom_delete_meeting, zoom_list_recordings.
Uses Server-to-Server OAuth Bearer token. Supports token-based
pagination and cloud recording retrieval by date range.
2026-03-03 11:31:00 -08:00
Timothy 04afb0c4bb chore: register Salesforce and Shopify tools in tool/credential registries 2026-03-03 11:22:40 -08:00
Timothy a07fd44de3 test: add Shopify tool unit tests (orders, products, customers, search) 2026-03-03 11:22:35 -08:00
Timothy f6c1b13846 chore: add Shopify credential specs (access token, store name) 2026-03-03 11:22:35 -08:00
Timothy 654fa3dd1f feat: add Shopify Admin REST API integration - orders, products, customers (#2984)
6 tools: shopify_list_orders, shopify_get_order, shopify_list_products,
shopify_get_product, shopify_list_customers, shopify_search_customers.
Uses X-Shopify-Access-Token header auth with store subdomain.
2026-03-03 11:22:29 -08:00
Timothy 8183449d27 test: add Salesforce CRM tool unit tests (SOQL, CRUD, describe, list objects) 2026-03-03 11:22:16 -08:00
Timothy a9acfb86ad chore: add Salesforce credential specs (access token, instance URL) 2026-03-03 11:22:15 -08:00
Timothy d7d070ac5f feat: add Salesforce CRM integration - SOQL, records, and metadata (#2916)
6 tools: salesforce_soql_query, salesforce_get_record, salesforce_create_record,
salesforce_update_record, salesforce_describe_object, salesforce_list_objects.
Uses OAuth2 Bearer token auth with instance URL. Supports pagination via
nextRecordsUrl and field-level describe with picklist values.
2026-03-03 11:22:08 -08:00
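The Salesforce commit above paginates SOQL results via nextRecordsUrl: each response returns a batch plus a pointer when more records remain. A hedged sketch of draining all pages (instance URL, API version, and token are placeholders, not taken from the repo):

```python
import httpx

INSTANCE_URL = "https://example.my.salesforce.com"  # placeholder instance URL
ACCESS_TOKEN = "00D..."                              # placeholder OAuth2 bearer token

def soql_query_all(soql: str) -> list[dict]:
    """Run a SOQL query and follow nextRecordsUrl until the response reports done=True."""
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    resp = httpx.get(
        f"{INSTANCE_URL}/services/data/v59.0/query",
        headers=headers,
        params={"q": soql},
    ).json()
    records = resp.get("records", [])
    while not resp.get("done", True):
        resp = httpx.get(INSTANCE_URL + resp["nextRecordsUrl"], headers=headers).json()
        records.extend(resp.get("records", []))
    return records

# soql_query_all("SELECT Id, Name FROM Account LIMIT 500")
```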
RichardTang-Aden ead51f1eb6 Merge pull request #5732 from aden-hive/docs/readme/v6
docs: update README and sync all i18n translations
2026-03-03 11:19:06 -08:00
Timothy 8c01b573ce chore: register Redshift and SAP S/4HANA in tool/credential registries 2026-03-03 11:11:12 -08:00
Timothy 7744f21b9d test: add SAP S/4HANA tool unit tests (POs, partners, products, sales orders) 2026-03-03 11:11:08 -08:00
Timothy 9ed23a235f chore: add SAP S/4HANA credential specs (base URL, username, password) 2026-03-03 11:11:07 -08:00
Timothy e88328321f feat: add SAP S/4HANA Cloud read-only procurement integration (#3182) 2026-03-03 11:11:06 -08:00
Timothy a4c516bea1 test: add Redshift tool unit tests (execute, describe, results, databases, tables) 2026-03-03 11:11:00 -08:00
Timothy 1c932a04ef chore: add Redshift credential specs (AWS access key, secret key) 2026-03-03 11:11:00 -08:00
Timothy 76d34be4c2 feat: add Amazon Redshift Data API integration - SQL and schema browsing (#3267) 2026-03-03 11:10:59 -08:00
bryan cb0e9ff9ec chore: fixing tests 2026-03-03 11:07:49 -08:00
Timothy d6e8afe316 chore: register Azure SQL and Kafka in tool/credential registries 2026-03-03 11:03:31 -08:00
Timothy a04f2bcf99 test: add Kafka tool unit tests (topics, produce, consumer groups) 2026-03-03 11:03:27 -08:00
Timothy c138e7c638 chore: add Kafka credential specs (REST URL, cluster ID) 2026-03-03 11:03:27 -08:00
Timothy fc08c7007f feat: add Apache Kafka integration via Confluent REST Proxy (#4774) 2026-03-03 11:03:26 -08:00
Timothy d559bb3446 test: add Azure SQL tool unit tests (servers, databases, firewall rules) 2026-03-03 11:03:18 -08:00
Timothy 55a8c39e4b chore: add Azure SQL credential specs (token, subscription ID) 2026-03-03 11:03:17 -08:00
Timothy 02d6f10e5f feat: add Azure SQL Database management integration (#3377) 2026-03-03 11:03:16 -08:00
Timothy 77428a91cc chore: register Power BI and Snowflake in tool/credential registries 2026-03-03 10:56:46 -08:00
Timothy 51403dc276 test: add Snowflake tool unit tests (execute, status, cancel) 2026-03-03 10:56:43 -08:00
Timothy 914a07a35d chore: add Snowflake credential specs (account, token) 2026-03-03 10:56:42 -08:00
Timothy 3c70d7b424 feat: add Snowflake SQL REST API integration (#3230) 2026-03-03 10:56:41 -08:00
Timothy ce1ee4ff17 test: add Power BI tool unit tests (workspaces, datasets, reports, refresh) 2026-03-03 10:56:35 -08:00
Timothy fca41d9bda chore: add Power BI credential spec (POWERBI_ACCESS_TOKEN) 2026-03-03 10:56:34 -08:00
Timothy ff889e02f7 feat: add Power BI integration - workspaces, datasets, reports (#3973) 2026-03-03 10:56:34 -08:00
Richard Tang cbd2c86bbf docs: sync all i18n READMEs with primary README 2026-03-03 10:53:11 -08:00
Timothy 43ab460462 chore: register Terraform Cloud and Lusha in tool/credential registries 2026-03-03 10:49:21 -08:00
Timothy caa06e266b test: add Lusha tool unit tests (enrich, search, usage) 2026-03-03 10:49:17 -08:00
Timothy 3622ca78ee chore: add Lusha credential spec (LUSHA_API_KEY) 2026-03-03 10:49:17 -08:00
Timothy 019e3f9659 feat: add Lusha B2B contact and company enrichment integration (#3461) 2026-03-03 10:49:16 -08:00
Timothy 208cb579a2 test: add Terraform Cloud tool unit tests (workspaces, runs) 2026-03-03 10:49:09 -08:00
Timothy 17de7e4485 chore: add Terraform Cloud credential spec (TFC_TOKEN) 2026-03-03 10:49:08 -08:00
Timothy 810616eee1 feat: add Terraform Cloud integration - workspaces and runs (#4773) 2026-03-03 10:48:41 -08:00
Timothy 191f583669 chore: register Twitter/X and Tines in tool/credential registries 2026-03-03 10:35:46 -08:00
Timothy 1d638cc18e test: add Tines tool unit tests (stories, actions, logs) 2026-03-03 10:35:42 -08:00
Timothy 3efa1f3b88 chore: add Tines credential specs (domain, api_key) 2026-03-03 10:35:42 -08:00
Timothy 4daa33db09 feat: add Tines integration - security automation stories and actions
Implements 5 tools via Tines REST API:
- tines_list_stories: List workflow stories with search/filter
- tines_get_story: Get story details including entry/exit agents
- tines_list_actions: List actions (agents) in stories
- tines_get_action: Get action details with sources/receivers
- tines_get_action_logs: Get action execution logs by level

Uses Bearer token auth with tenant domain.
2026-03-03 10:35:37 -08:00
Timothy fab2fb0056 test: add Twitter/X tool unit tests (search, user, timeline, tweet) 2026-03-03 10:35:29 -08:00
Timothy ce885c120e chore: add Twitter/X credential spec (bearer_token) 2026-03-03 10:35:28 -08:00
Timothy 75b53c47ff feat: add Twitter/X integration - tweet search and user lookup via API v2
Implements 4 tools via X API v2:
- twitter_search_tweets: Search recent tweets with query operators
- twitter_get_user: Get user profile by username
- twitter_get_user_tweets: Get user timeline
- twitter_get_tweet: Get tweet details by ID

Uses Bearer token auth (app-only, read access).
2026-03-03 10:35:21 -08:00
Timothy 2936f73707 chore: register AWS S3 and QuickBooks in tool/credential registries 2026-03-03 10:22:46 -08:00
Timothy e26426b138 test: add QuickBooks tool unit tests (query, entities, invoices) 2026-03-03 10:22:42 -08:00
Timothy 62cacb8e28 chore: add QuickBooks credential specs (access_token, realm_id) 2026-03-03 10:22:42 -08:00
Timothy f3e37190ce feat: add QuickBooks Online integration - accounting API
Implements 5 tools via QuickBooks Online API v3:
- quickbooks_query: Query entities with SQL-like syntax
- quickbooks_get_entity: Get entity by type and ID
- quickbooks_create_customer: Create customers
- quickbooks_create_invoice: Create invoices with line items
- quickbooks_get_company_info: Get company details

Uses OAuth 2.0 Bearer token auth. Supports sandbox mode.
2026-03-03 10:22:35 -08:00
Timothy 0863bbbd2f test: add AWS S3 tool unit tests (buckets, objects, get, put, delete) 2026-03-03 10:22:25 -08:00
Timothy b23fa1daad chore: add AWS S3 credential specs (access_key_id, secret_access_key) 2026-03-03 10:22:24 -08:00
Timothy 05cc1ce599 feat: add AWS S3 integration - object storage via REST API with SigV4
Implements 5 tools via AWS S3 REST API:
- s3_list_buckets: List all buckets in the account
- s3_list_objects: List objects with prefix/delimiter filtering
- s3_get_object: Get object content and metadata
- s3_put_object: Upload text objects
- s3_delete_object: Delete objects

Uses AWS Signature V4 signing (no boto3 dependency).
2026-03-03 10:22:16 -08:00
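The S3 commit above signs requests with AWS Signature V4 directly instead of depending on boto3. The heart of SigV4 is deriving a signing key by chaining HMAC-SHA256 over the date, region, and service; a minimal sketch of that derivation (the full canonical-request construction is longer and omitted here; the values below are the standard AWS example placeholders):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: HMAC chain over date, region, service, request tag."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# The derived key then signs the request's string-to-sign:
key = sigv4_signing_key("wJalrXUtnFEMI/EXAMPLEKEY", "20260303", "us-east-1", "s3")
signature = hmac.new(key, b"string-to-sign-goes-here", hashlib.sha256).hexdigest()
print(signature)
```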
RichardTang-Aden a1c045fd91 Merge pull request #5727 from aden-hive/docs/readme/v6
Docs: Remove TUI references from README
2026-03-03 10:14:13 -08:00
Timothy e6939f8d51 chore: register PagerDuty and Calendly in tool/credential registries 2026-03-03 10:13:18 -08:00
Timothy 801fef12e1 test: add Calendly tool unit tests (user, events, invitees) 2026-03-03 10:13:14 -08:00
Timothy 5845629175 chore: add Calendly credential spec (personal_access_token) 2026-03-03 10:13:13 -08:00
Timothy 11b916301a feat: add Calendly integration - scheduling events and invitees
Implements 5 tools via Calendly API v2:
- calendly_get_current_user: Get user URI and profile info
- calendly_list_event_types: List meeting templates
- calendly_list_scheduled_events: List booked meetings with date filters
- calendly_get_scheduled_event: Get event details by URI
- calendly_list_invitees: List invitees for an event

Uses Bearer token auth (Personal Access Token).
2026-03-03 10:13:07 -08:00
Timothy aa5d80b1d2 test: add PagerDuty tool unit tests (incidents, services) 2026-03-03 10:13:02 -08:00
Timothy aa5f990acd chore: add PagerDuty credential specs (api_key, from_email) 2026-03-03 10:13:01 -08:00
Timothy 9764c82c2a feat: add PagerDuty integration - incident management and services
Implements 5 tools via PagerDuty REST API v2:
- pagerduty_list_incidents: List incidents with status/urgency/date filters
- pagerduty_get_incident: Get incident details by ID
- pagerduty_create_incident: Create incidents on a service
- pagerduty_update_incident: Acknowledge or resolve incidents
- pagerduty_list_services: List services with name search

Uses Token auth header, From header for write operations.
2026-03-03 10:12:55 -08:00
Richard Tang f921846879 docs: update the latest features from recent changes 2026-03-03 10:12:43 -08:00
Richard Tang a370403b16 docs: update readme instructions 2026-03-03 10:06:13 -08:00
Timothy 543a71eb6c chore: register MongoDB and Airtable in tool/credential registries 2026-03-03 10:06:12 -08:00
Timothy 8285593c13 test: add Airtable tool unit tests (records, bases, schema) 2026-03-03 10:06:08 -08:00
Timothy 6fbfe773fb chore: add Airtable credential spec (personal_access_token) 2026-03-03 10:06:07 -08:00
Timothy a8c54b1e5f feat: add Airtable integration - record CRUD and base metadata
Implements 6 tools via Airtable Web API:
- airtable_list_records: List records with filters, sort, field selection
- airtable_get_record: Get a single record by ID
- airtable_create_records: Create up to 10 records per request
- airtable_update_records: Partial update up to 10 records per request
- airtable_list_bases: List accessible bases
- airtable_get_base_schema: Get table and field schema for a base

Uses Bearer token auth (Personal Access Token).
2026-03-03 10:06:03 -08:00
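The Airtable commit above notes the Web API's limit of 10 records per create/update call, so larger payloads must be chunked. A hedged sketch (token, base ID, and table name are placeholders):

```python
import httpx

TOKEN = "pat_example"            # placeholder personal access token
BASE_ID = "appXXXXXXXXXXXXXX"    # placeholder base ID
TABLE = "Tasks"                  # placeholder table name

def create_records(fields_list: list[dict]) -> list[dict]:
    """Create records in batches of 10, the Airtable Web API's per-request limit."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    url = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}"
    created = []
    for i in range(0, len(fields_list), 10):
        batch = {"records": [{"fields": f} for f in fields_list[i : i + 10]]}
        resp = httpx.post(url, headers=headers, json=batch)
        resp.raise_for_status()
        created.extend(resp.json()["records"])
    return created
```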
Timothy a5323abfca test: add MongoDB tool unit tests (find, insert, update, delete, aggregate) 2026-03-03 10:05:53 -08:00
Timothy ba4df2d2c4 chore: add MongoDB credential specs (data_api_url, api_key, data_source) 2026-03-03 10:05:52 -08:00
Timothy 6510633a8c feat: add MongoDB Atlas Data API integration - document CRUD and aggregation
Implements 6 tools via MongoDB Atlas Data API:
- mongodb_find: Find documents with filters, projection, sort, limit
- mongodb_find_one: Find a single document
- mongodb_insert_one: Insert a document
- mongodb_update_one: Update a document with MongoDB operators
- mongodb_delete_one: Delete a document
- mongodb_aggregate: Run aggregation pipelines

Uses API key auth header. All endpoints are POST.
2026-03-03 10:05:42 -08:00
Timothy 9172e5f46b chore: register Twilio and Zendesk in tool/credential registries 2026-03-03 09:56:14 -08:00
Timothy ed3e3848c0 test: add Zendesk tool unit tests (list, get, create, update, search) 2026-03-03 09:56:10 -08:00
Timothy ee90185d5c chore: add Zendesk credential specs (subdomain, email, api_token) 2026-03-03 09:56:09 -08:00
Timothy 6eb2633677 feat: add Zendesk integration - ticket management and search
Implements 5 tools via Zendesk Support API v2:
- zendesk_list_tickets: List tickets with status/sort filters
- zendesk_get_ticket: Get ticket details by ID
- zendesk_create_ticket: Create tickets with priority/type/tags
- zendesk_update_ticket: Update ticket fields and add comments
- zendesk_search_tickets: Search tickets with Zendesk query syntax

Uses Basic auth (email/token:api_token).
2026-03-03 09:56:00 -08:00
Timothy c1f215dcf2 test: add Twilio tool unit tests (SMS, WhatsApp, list, get) 2026-03-03 09:55:50 -08:00
Timothy 97cc9a1045 chore: add Twilio credential specs (account_sid, auth_token) 2026-03-03 09:55:49 -08:00
Timothy 5f7b02a4b7 feat: add Twilio integration - SMS and WhatsApp messaging
Implements 4 tools via Twilio REST API:
- twilio_send_sms: Send SMS messages
- twilio_send_whatsapp: Send WhatsApp messages
- twilio_list_messages: List message history with filters
- twilio_get_message: Get message details by SID

Uses Basic auth (AccountSID:AuthToken), form-urlencoded POST.
2026-03-03 09:55:43 -08:00
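The Twilio commit above uses Basic auth (AccountSID:AuthToken) and form-urlencoded bodies rather than JSON. A hedged sketch of sending an SMS (SID, token, and phone numbers are placeholders):

```python
import httpx

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder account SID
AUTH_TOKEN = "your_auth_token"                       # placeholder auth token

def send_sms(to: str, from_: str, body: str) -> dict:
    """POST a message; Twilio expects application/x-www-form-urlencoded, not JSON."""
    url = f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/Messages.json"
    resp = httpx.post(
        url,
        auth=(ACCOUNT_SID, AUTH_TOKEN),                 # Basic auth: AccountSID:AuthToken
        data={"To": to, "From": from_, "Body": body},   # data= sends form encoding
    )
    resp.raise_for_status()
    return resp.json()

# send_sms("+15551234567", "+15557654321", "hello from the sketch")
```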
Richard Tang ad6d504ea4 docs: remove TUI in the readme 2026-03-03 09:52:06 -08:00
Timothy e696b41a0e chore: register GitLab and Google Sheets in tool/credential registries 2026-03-03 09:49:23 -08:00
Timothy 1f9acc6135 test: add Google Sheets tool unit tests (metadata, read, batch read) 2026-03-03 09:49:23 -08:00
Timothy 7e8699cb4b chore: add Google Sheets credential spec (api_key) 2026-03-03 09:49:22 -08:00
Timothy fd4fc657d6 feat: add Google Sheets integration - read spreadsheet data via API v4
3 tools: sheets_get_spreadsheet, sheets_read_range, sheets_batch_read.
Uses API key auth for read-only access to public spreadsheets.
2026-03-03 09:49:21 -08:00
Timothy 34403648b9 test: add GitLab tool unit tests (projects, issues, MRs) 2026-03-03 09:49:15 -08:00
Timothy 3795d50eb9 chore: add GitLab credential spec (personal access token) 2026-03-03 09:49:14 -08:00
Timothy 80515dde5a feat: add GitLab integration - projects, issues, merge requests
6 tools: gitlab_list_projects, gitlab_get_project, gitlab_list_issues,
gitlab_get_issue, gitlab_create_issue, gitlab_list_merge_requests.
Supports GitLab.com and self-hosted via configurable base URL.
2026-03-03 09:49:13 -08:00
Timothy b59094d35f fix: queen should not return on empty stream 2026-03-03 09:44:15 -08:00
Timothy efcd296d83 chore: register Notion and Jira tools in tool/credential registries 2026-03-03 09:43:32 -08:00
Timothy 802cb292b0 test: add Jira tool unit tests (issues, projects, comments) 2026-03-03 09:43:32 -08:00
Timothy 8e55f74d73 chore: add Jira credential specs (domain, email, api_token) 2026-03-03 09:43:31 -08:00
Timothy 3d810485a0 feat: add Jira integration - issues, projects, comments via REST API v3
6 tools: jira_search_issues, jira_get_issue, jira_create_issue,
jira_list_projects, jira_get_project, jira_add_comment. Uses Basic auth
with email + API token and Atlassian Document Format for text fields.
2026-03-03 09:43:30 -08:00
Timothy 94cfd48661 test: add Notion tool unit tests (search, pages, databases) 2026-03-03 09:43:16 -08:00
Timothy 87c8e741f3 chore: add Notion credential spec (api_token) 2026-03-03 09:43:15 -08:00
Timothy d0e92ed18d feat: add Notion integration - pages, databases, and search
5 tools: notion_search, notion_get_page, notion_create_page,
notion_query_database, notion_get_database. Uses Bearer auth
with Notion internal integration token.
2026-03-03 09:43:14 -08:00
Richard Tang 88640f9222 feat: email reply sample agent 2026-03-03 09:41:20 -08:00
Timothy 1927045519 chore: register Greenhouse and YouTube Transcript in tool/credential registries 2026-03-03 09:36:47 -08:00
Timothy 68cffb86c9 test: add YouTube Transcript tool unit tests (get, list transcripts) 2026-03-03 09:36:47 -08:00
Timothy 5bec989647 feat: add YouTube Transcript integration - captions and transcript retrieval
2 tools: youtube_get_transcript, youtube_list_transcripts.
Uses youtube-transcript-api library, no API key required.
2026-03-03 09:36:46 -08:00
Timothy 66f5d2f36c test: add Greenhouse tool unit tests (jobs, candidates, applications) 2026-03-03 09:36:40 -08:00
Timothy 941f815254 chore: add Greenhouse credential spec (api_token) 2026-03-03 09:36:39 -08:00
Timothy 42afd10518 feat: add Greenhouse integration - ATS jobs, candidates, applications
6 tools: greenhouse_list_jobs, greenhouse_get_job, greenhouse_list_candidates,
greenhouse_get_candidate, greenhouse_list_applications, greenhouse_get_application.
Uses Harvest API v1 with Basic auth (API token).
2026-03-03 09:36:38 -08:00
Timothy 3efa285a59 chore: register Cloudinary and Reddit tools in tool/credential registries 2026-03-03 09:31:22 -08:00
Timothy 4f2b4172b4 test: add Reddit tool unit tests (search, posts, comments, user) 2026-03-03 09:31:18 -08:00
Timothy 0d7de71b94 chore: add Reddit credential specs (client_id, client_secret) 2026-03-03 09:31:17 -08:00
Timothy f0f5b4bede feat: add Reddit integration - search, posts, comments, user info
4 tools: reddit_search, reddit_get_posts, reddit_get_comments, reddit_get_user.
Uses OAuth2 client_credentials flow for app-only access.
2026-03-03 09:31:17 -08:00
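The Reddit commit above uses the OAuth2 client_credentials grant for app-only access: exchange the client ID/secret for a bearer token, then call the oauth.reddit.com endpoints with it. A hedged sketch (client ID, secret, and user agent are placeholders):

```python
import httpx

CLIENT_ID = "reddit-client-id"            # placeholder
CLIENT_SECRET = "reddit-client-secret"    # placeholder
USER_AGENT = "hive-tools/0.1 (example)"   # Reddit requires a descriptive User-Agent

def get_app_token() -> str:
    """Exchange client credentials for an app-only OAuth2 bearer token."""
    resp = httpx.post(
        "https://www.reddit.com/api/v1/access_token",
        auth=(CLIENT_ID, CLIENT_SECRET),
        data={"grant_type": "client_credentials"},
        headers={"User-Agent": USER_AGENT},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def search(query: str) -> dict:
    token = get_app_token()
    return httpx.get(
        "https://oauth.reddit.com/search",
        params={"q": query, "limit": 5},
        headers={"Authorization": f"Bearer {token}", "User-Agent": USER_AGENT},
    ).json()
```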
Timothy bfd27e97d3 test: add Cloudinary tool unit tests (upload, list, get, delete, search) 2026-03-03 09:31:10 -08:00
Timothy f2def27390 chore: add Cloudinary credential specs (cloud_name, api_key, api_secret) 2026-03-03 09:31:10 -08:00
Timothy b3f7bd6cc0 feat: add Cloudinary integration - upload, manage, search media assets
5 tools: cloudinary_upload, cloudinary_list_resources, cloudinary_get_resource,
cloudinary_delete_resource, cloudinary_search. Uses Basic auth with
API key/secret and supports image, video, and raw resource types.
2026-03-03 09:31:09 -08:00
Timothy 0e8e78dc5b chore: register Trello and Confluence tools in tool/credential registries 2026-03-03 09:22:03 -08:00
Timothy b259d85776 test: add Confluence tool tests (9 tests) 2026-03-03 09:22:02 -08:00
Timothy 175d9c3b7c feat: add Confluence credential spec with Basic auth (email + API token) 2026-03-03 09:21:55 -08:00
Timothy a2a810aabf feat: add Confluence integration - spaces, pages, content search via CQL 2026-03-03 09:21:54 -08:00
Timothy 175c7cfd51 test: add Trello tool tests (12 tests) 2026-03-03 09:21:47 -08:00
Timothy 5ada973d38 feat: add Trello credential spec with API key and token auth 2026-03-03 09:21:39 -08:00
Timothy 0103276136 feat: add Trello integration - boards, lists, cards management 2026-03-03 09:21:37 -08:00
Timothy 1d9e8ec138 chore: register HuggingFace tool in tool/credential registries 2026-03-03 09:11:59 -08:00
Timothy 83ac2e71bb test: add HuggingFace tool tests (10 tests) 2026-03-03 09:11:56 -08:00
Timothy 0b35a729a7 feat: add HuggingFace credential spec with token auth 2026-03-03 09:11:55 -08:00
Timothy 56723a519a feat: add HuggingFace Hub integration - models, datasets, spaces search 2026-03-03 09:11:49 -08:00
Timothy ebff394c76 chore: register Plaid tool in tool/credential registries 2026-03-03 09:08:44 -08:00
Timothy ceecc97bc8 test: add Plaid tool tests (13 tests) 2026-03-03 09:08:40 -08:00
Timothy 313154f880 feat: add Plaid credential spec with client_id and secret auth 2026-03-03 09:08:38 -08:00
Timothy 3eb6417cdc feat: add Plaid integration - accounts, balances, transactions, institutions 2026-03-03 09:08:29 -08:00
Timothy 1b35d6ca0a chore: register Pinecone tool in tool/credential registries 2026-03-03 09:05:20 -08:00
Timothy 1d89f0ba9d test: add Pinecone tool tests (18 tests) 2026-03-03 09:05:16 -08:00
Timothy 864df0e21a feat: add Pinecone credential spec with API key auth 2026-03-03 09:05:14 -08:00
Timothy 3f626decc4 feat: add Pinecone vector database integration - indexes, vectors, queries 2026-03-03 09:05:06 -08:00
Timothy bf1760b1a9 chore: register DuckDuckGo tool in tool registry 2026-03-03 08:56:06 -08:00
Timothy 8a58ea6344 test: add DuckDuckGo tool tests (6 tests) 2026-03-03 08:56:06 -08:00
Timothy 662ff4c35f feat: add DuckDuckGo search integration - web search, news, images 2026-03-03 08:56:01 -08:00
Timothy af02352b49 chore: register Linear tool in tool/credential registries 2026-03-03 08:43:41 -08:00
Timothy db9f987d46 test: add Linear tool tests (10 tests) 2026-03-03 08:43:41 -08:00
Timothy 8490ce1389 feat: add Linear credential spec with API key auth 2026-03-03 08:43:41 -08:00
Timothy 55ea9a56a4 feat: add Linear integration - issues, projects, teams, search via GraphQL 2026-03-03 08:43:41 -08:00
Timothy bd2381b10d chore: register Asana tool in tool/credential registries 2026-03-03 08:40:02 -08:00
Timothy 443de755bd test: add Asana tool tests (12 tests) 2026-03-03 08:40:02 -08:00
Timothy 55ec5f14ee feat: add Asana credential spec with PAT auth 2026-03-03 08:40:02 -08:00
Timothy 2e019302c9 feat: add Asana integration - tasks, projects, workspaces, search 2026-03-03 08:40:02 -08:00
Timothy b1e829644b chore: register Yahoo Finance tool in tool registry 2026-03-03 08:36:20 -08:00
Timothy 18f773e91b test: add Yahoo Finance tool tests (8 tests) 2026-03-03 08:36:19 -08:00
Timothy 987cfee930 feat: add Yahoo Finance integration - quotes, history, financials, company info 2026-03-03 08:36:19 -08:00
Timothy 57f6b8498a chore: register Google Search Console tool in tool/credential registries 2026-03-03 08:34:30 -08:00
Timothy 9f0d35977c test: add Google Search Console tool tests (10 tests) 2026-03-03 08:34:30 -08:00
Timothy e5910bbf2f feat: add Google Search Console credential spec with OAuth2 auth 2026-03-03 08:34:30 -08:00
Timothy 0015bf7b38 feat: add Google Search Console integration - analytics, sitemaps, URL inspection 2026-03-03 08:34:30 -08:00
Timothy a6b9234abb chore: register Zoho CRM tool in tool/credential registries 2026-03-03 08:32:13 -08:00
Timothy 086f3942b8 test: add Zoho CRM tool tests (12 tests) 2026-03-03 08:32:13 -08:00
Timothy 924f4abede feat: add Zoho CRM credential spec with OAuth token auth 2026-03-03 08:32:13 -08:00
Timothy 02be91cb08 feat: add Zoho CRM integration - leads, contacts, deals, accounts, notes 2026-03-03 08:32:13 -08:00
Timothy c2298393ab chore: register Apify tool in tool/credential registries 2026-03-03 08:29:33 -08:00
Timothy 4b8c63bf6e test: add Apify tool tests (11 tests) 2026-03-03 08:29:33 -08:00
Timothy e089c3b72c feat: add Apify credential spec with API token auth 2026-03-03 08:29:33 -08:00
Timothy a93983b5db feat: add Apify integration - actors, runs, datasets, key-value stores 2026-03-03 08:29:27 -08:00
Timothy 20f6329004 chore: register Attio tool in tool/credential registries 2026-03-03 08:25:12 -08:00
Timothy 3c2cf71c47 test: add Attio tool tests (14 tests) 2026-03-03 08:25:08 -08:00
Timothy 56288c3137 feat: add Attio credential spec with API key auth 2026-03-03 08:25:04 -08:00
Timothy 79188921a5 feat: add Attio CRM integration - records, lists, notes, tasks 2026-03-03 08:24:58 -08:00
RichardTang-Aden 65962ddf58 Merge pull request #5709 from aden-hive/load-new-session-from-home
Fix new session creation when submitting prompt from home page
2026-03-03 08:20:20 -08:00
Timothy 5ab66008ae chore: register Pipedrive tool in tool/credential registries 2026-03-03 08:18:45 -08:00
Timothy f38c9ee049 test: add Pipedrive tool tests (16 tests) 2026-03-03 08:18:41 -08:00
Timothy 86f5e71ec2 feat: add Pipedrive credential spec with API token auth 2026-03-03 08:18:29 -08:00
Timothy 1e15cc8495 feat: add Pipedrive CRM integration - deals, contacts, orgs, activities, pipelines 2026-03-03 08:18:24 -08:00
Richard Tang bba44430c4 chore: ignore local dev skills 2026-03-03 08:17:32 -08:00
Timothy 077d82ad82 chore: register Docker Hub tool in tool/credential registries 2026-03-03 08:14:27 -08:00
Timothy e4cf7f3da2 test: add Docker Hub tool tests (9 tests) 2026-03-03 08:14:24 -08:00
Timothy e3bdc9e8d7 feat: add Docker Hub credential spec with PAT auth 2026-03-03 08:14:20 -08:00
Timothy f1c1c9aab3 feat: add Docker Hub integration - search, repos, tags, image details 2026-03-03 08:14:15 -08:00
Timothy 97cbcf7658 fix: adapt path guarantee 2026-03-03 08:11:49 -08:00
Richard Tang 69c71d77fb fix: load-new-session from home 2026-03-03 08:09:22 -08:00
Timothy 4860739a2f chore: register Vercel in tool/credential registries (#5044) 2026-03-03 08:08:16 -08:00
Timothy 791ee40cd6 test: add Vercel tool unit tests (#5044) 2026-03-03 08:08:12 -08:00
Timothy e0191ac52b feat: add Vercel credential spec (#5044) 2026-03-03 08:08:07 -08:00
Timothy e0724df196 feat: add Vercel tool - deployments, projects, domains, env vars (#5044) 2026-03-03 08:08:00 -08:00
Timothy 2a56294638 chore: register Databricks in tool/credential registries (#5167) 2026-03-03 08:05:25 -08:00
Timothy d5cd557013 test: add Databricks tool unit tests (#5167) 2026-03-03 08:05:21 -08:00
Timothy 2a43f23a3d feat: add Databricks credential spec (#5167) 2026-03-03 08:05:03 -08:00
Timothy 69af8f569a feat: add Databricks tool - SQL, jobs, clusters, workspace (#5167) 2026-03-03 08:04:34 -08:00
bryan dcc11c9ea3 chore: move test deps to testing extra and dev group 2026-03-03 08:03:02 -08:00
Timothy 4b4abb47b0 Merge branch 'feature/queen-worker-comm' into fix/queen-recovery 2026-03-03 08:02:59 -08:00
Timothy 0e86dbcc9b chore: register Redis tool in tool/credential registries (#5370) 2026-03-03 08:01:43 -08:00
Timothy 92c75aa6f5 test: add Redis tool unit tests (#5370) 2026-03-03 08:01:37 -08:00
Timothy be41d848e5 feat: add Redis credential spec (#5370) 2026-03-03 08:01:32 -08:00
Timothy f7c299f6f0 feat: add Redis tool implementation - KV, hash, list, pub/sub (#5370) 2026-03-03 08:01:25 -08:00
Timothy b6a0f65a09 feat: add Pushover push notification integration (#5415)
4 tools: pushover_send, pushover_validate_user, pushover_list_sounds,
pushover_check_receipt. Supports priority levels, HTML, sounds, TTL.
All 12 unit tests and 13 conformance tests passing.
2026-03-03 07:58:29 -08:00
Timothy 1e7b0068ed chore: register Supabase tool in tool/credential registries 2026-03-03 07:54:34 -08:00
bryan 207d2fb911 feat: wire QuestionWidget into ChatPanel and workspace 2026-03-03 07:54:32 -08:00
Timothy de5105f313 feat: add Supabase integration - DB, Auth, Edge Functions (#5489)
7 tools: supabase_select, supabase_insert, supabase_update, supabase_delete,
supabase_auth_signup, supabase_auth_signin, supabase_edge_invoke.
All 19 unit tests and 13 conformance tests passing.
2026-03-03 07:54:27 -08:00
bryan c65a99c87d feat: add QuestionWidget component 2026-03-03 07:54:21 -08:00
bryan b4d7e57250 feat: update queen prompt for structured ask_user 2026-03-03 07:53:35 -08:00
bryan 63845a07aa feat: add queen-context endpoint and SSE replay 2026-03-03 07:53:22 -08:00
bryan 68ac73aa55 feat: add options support to ask_user tool 2026-03-03 07:53:05 -08:00
Timothy 6d32f1bb36 chore: register YouTube and Microsoft Graph tools in tool/credential registries 2026-03-03 07:51:33 -08:00
Timothy 9c316cee28 feat: add Microsoft Graph integration - Outlook, Teams, OneDrive (#5601)
11 tools: outlook_list_messages, outlook_get_message, outlook_send_mail,
teams_list_teams, teams_list_channels, teams_send_channel_message,
teams_get_channel_messages, onedrive_search_files, onedrive_list_files,
onedrive_download_file, onedrive_upload_file.
All 15 unit tests and 13 conformance tests passing.
2026-03-03 07:47:49 -08:00
Timothy 6af4f2d6e6 feat: add YouTube Data API integration (#5603)
8 tools: search_videos, get_video_details, get_channel, list_channel_videos,
get_playlist, search_channels, get_video_comments, get_video_categories.
All 17 unit tests and 13 conformance tests passing.
2026-03-03 07:47:34 -08:00
Timothy bc9a43d5a9 fix: execution recovery 2026-03-03 07:43:05 -08:00
levxn 7c7b60a5e9 every session loads properly without any issue 2026-03-03 19:46:27 +05:30
levxn 3f0b8bff5b fixes a minor unhandled error in event routes 2026-03-03 18:53:43 +05:30
Amdev-5 57651900f1 Merge remote-tracking branch 'origin/main' into lusha 2026-03-03 18:46:12 +05:30
Amdev-5 46b0617018 Merge remote-tracking branch 'origin/main' into lusha
# Conflicts:
#	tools/src/aden_tools/credentials/health_check.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/tests/test_health_checks.py
2026-03-03 18:34:54 +05:30
levxn 91190cf82d restarts with previous session continuity 2026-03-03 17:48:01 +05:30
RichardTang-Aden 7b98a6613a Merge pull request #5656 from aden-hive/feature/queen-worker-comm
Feature/queen worker comm
2026-03-02 22:50:13 -08:00
Richard Tang 26481e27a6 fix: fix tests and lint 2026-03-02 22:46:38 -08:00
Aaryann Chandola 87a26db779 Merge branch 'aden-hive:main' into fix/guardian-self-trigger-loop 2026-03-03 11:56:15 +05:30
Richard Tang bb227b3d73 chore: ruff lint 2026-03-02 21:30:07 -08:00
Richard Tang 8a0cf5e0ae Merge remote-tracking branch 'origin/feature/queen-worker-comm' into feature/queen-worker-comm 2026-03-02 21:27:22 -08:00
Timothy 69218d5699 chore: lint codes 2026-03-02 20:16:34 -08:00
Timothy 7d1433af21 fix: queen agent flakiness 2026-03-02 19:57:18 -08:00
Richard Tang 0bfbf1e9c5 fix: unused /hive-credentials prompts in the validation 2026-03-02 19:53:57 -08:00
Richard Tang 1ca4f5b22b refactor: update the preload_validation logic 2026-03-02 19:46:50 -08:00
Richard Tang 0984e4c1e8 feat: add gcu subagent validation and refactor the prestart validation steps 2026-03-02 18:35:25 -08:00
P Gokul Sree Chandra 7d9bd2e86b feat(tools): add YouTube Data API integration
- Implement 6 YouTube API tools (search videos, get video/channel details, list channel videos, get playlist items, search channels)
- Add YOUTUBE_API_KEY credential spec with help_url and description
- Register YouTube tool in tools/__init__.py
- Add comprehensive test coverage (18 tests) with mocking
- Add detailed README with setup instructions and examples
- Use httpx for HTTP requests to YouTube Data API v3
- Verified with real API integration testing

Implements #5603
2026-03-03 07:35:04 +05:30
Sarthak Karode 4cbf5a7434 feat(core): add pytest framework testing integration with helpful error messages (#5485) 2026-03-03 10:01:33 +08:00
Hundao b33178c5be fix(graph): move auto-block grace period check before _await_user_input (#5672)
The grace period logic for client-facing auto-blocks was placed after
_await_user_input(), which blocks forever since no inject_event is
scheduled for text-only turns. This caused test_text_after_user_input_goes_to_judge
to hang indefinitely, blocking CI framework tests.

Move the grace period check before the blocking call so that within
the grace window, auto-blocks with missing outputs skip blocking
entirely and continue to the next LLM turn for judge RETRY pressure.

Also adds an _auto_missing check: nodes with no missing outputs
(e.g. queen monitoring with output_keys=[]) should still block
as their text-only output is legitimate conversation.

Fixes #5633
2026-03-03 09:39:14 +08:00
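A simplified sketch of the reordered control flow the commit above describes, with made-up names standing in for the graph internals (the real grace-period value, node accessors, and blocking call are not taken from the repo):

```python
import time

GRACE_PERIOD_S = 5.0  # illustrative value, not the repo's actual setting

def handle_auto_block(node, turn_started_at: float) -> str:
    """Sketch: evaluate the grace window *before* the blocking wait, not after it."""
    within_grace = (time.monotonic() - turn_started_at) < GRACE_PERIOD_S
    auto_missing = node.missing_outputs()  # hypothetical accessor for unmet output_keys

    # Within the grace window, a turn with missing outputs skips blocking entirely
    # and returns to the next LLM turn so the judge can apply RETRY pressure.
    if within_grace and auto_missing:
        return "continue_to_next_llm_turn"

    # Nodes with no missing outputs (e.g. queen monitoring with output_keys=[])
    # legitimately block on conversation, as do turns past the grace window.
    node.await_user_input()  # hypothetical blocking call
    return "blocked_on_user_input"
```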
Richard Tang dc6a336c60 fix: removed the unused build_capability_summary 2026-03-02 16:26:47 -08:00
Antiarin 20ef5cb14f test(runtime): add async test for canceling multiple tasks across streams 2026-03-03 05:54:42 +05:30
Antiarin 2c3ec7e74c fix(tui): fix pause/stop to cancel all running tasks across all graphs 2026-03-03 05:30:20 +05:30
Richard Tang b855336448 chore: ruff format issue 2026-03-02 15:47:30 -08:00
Richard Tang de021977fd Merge remote-tracking branch 'origin/main' into feature/queen-worker-comm 2026-03-02 15:39:15 -08:00
Timothy cd2b3fcd16 Merge branch 'feature/new-inbox-management-agent' into feature/queen-worker-comm 2026-03-02 14:46:14 -08:00
Timothy b64024ede5 fix: gcu error log throwing 2026-03-02 14:45:57 -08:00
bryan a280d23113 fix: removing escalate to coder from worker tools 2026-03-02 12:02:35 -08:00
Timothy 41785abdba fix: rephrasing 2026-03-02 11:54:22 -08:00
Timothy de494c7e55 Merge branch 'feature/queen-worker-comm' into feature/new-inbox-management-agent 2026-03-02 11:44:08 -08:00
Timothy 5fa0903ea8 fix: teach email agent to search emails 2026-03-02 11:43:40 -08:00
Timothy 7bd99fe074 fix: email inbox management agent 2026-03-02 11:01:21 -08:00
bryan c838e1ca6d feat: agent building animation 2026-03-02 10:54:57 -08:00
bryan f475923353 feat: subagents populate node panel 2026-03-02 09:59:24 -08:00
Timothy 43f43c92e3 Merge branch 'feature/queen-worker-comm' into feature/new-inbox-management-agent 2026-03-02 09:40:55 -08:00
Timothy 5463134322 fix: inbox management template v2 2026-03-02 09:40:36 -08:00
Timothy 3fbb392103 fix: add credentials to queen lifecycle tools 2026-03-02 09:39:38 -08:00
RichardTang-Aden a162da17e1 Merge pull request #5639 from RichardTang-Aden/main
feat: support Gemini 3.1 pro
2026-03-02 09:24:27 -08:00
Richard Tang b565134d57 chore: fix the ruff lint 2026-03-02 09:23:02 -08:00
Richard Tang 3aafc89912 feat: support Gemini 3.1 pro 2026-03-02 09:20:48 -08:00
bryan 93449f92fe fix: clear build cache in quickstart 2026-03-02 09:00:48 -08:00
Bryan @ Aden d766e68d42 Merge pull request #5494 from Antiarin/security/harden-validate-agent-path
[Bug][Security]: agent_path accepts arbitrary filesystem paths with no validation
2026-03-02 16:57:51 +00:00
Hundao 1d8b1f9774 fix: enforce 0600 permissions on OAuth token files (#5631)
* fix: enforce 0600 permissions on OAuth token files

Credential files were written with default umask permissions.
Use os.open with explicit 0o600 mode to ensure token files
are always owner-read/write only, regardless of umask.

Fixes #5530

* style: fix line too long in checkpoint_store.py
2026-03-02 18:30:40 +08:00
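The permissions fix above creates token files through os.open with an explicit 0o600 mode so newly written credentials are owner-read/write only. A minimal sketch (file name and contents are placeholders):

```python
import os

def write_token_file(path: str, token: str) -> None:
    """Create (or truncate) the file with mode 0o600 before any bytes are written."""
    # Note: the mode argument applies only when the file is created; a pre-existing
    # file keeps its prior permissions unless chmod'ed separately.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w", encoding="utf-8") as fh:
        fh.write(token)

write_token_file("oauth_token.json", '{"access_token": "..."}')
print(oct(os.stat("oauth_token.json").st_mode & 0o777))  # 0o600 on POSIX systems
```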
Rajneesh Chaudhary 5ea9abae83 fix(core): prevent sse critical event queue from blocking event bus (#5533) (#5536)
Disconnects slow clients instead of blocking the publisher task.

Signed-off-by: Rajneesh180 <rajneeshrehsaan48@gmail.com>
2026-03-02 17:57:52 +08:00
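The SSE fix above keeps a slow client from stalling the event bus by enqueueing without awaiting and disconnecting the client when its queue is full. A hedged asyncio sketch (class and field names are illustrative, not the repo's):

```python
import asyncio

class SseClient:
    """Per-connection state: a bounded queue drained by the client's SSE response task."""
    def __init__(self, client_id: str, maxsize: int = 100):
        self.client_id = client_id
        self.queue: asyncio.Queue[str] = asyncio.Queue(maxsize=maxsize)
        self.connected = True

def publish(clients: list[SseClient], event: str) -> None:
    """Fan out an event without awaiting; a full queue marks the client for disconnect."""
    for client in clients:
        try:
            client.queue.put_nowait(event)   # never blocks the publisher task
        except asyncio.QueueFull:
            client.connected = False         # drop the slow client instead of back-pressuring the bus
```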
ArshpreetSingh04 15957499c5 docs(core): fix outdated goal-agent path reference in README (#5629)
Update the MCP client configuration example in core/README.md to replace the outdated `goal-agent` path with the correct `hive/core` path.

Fixes #5628
2026-03-02 17:07:25 +08:00
Timothy 0b50d9e874 fix: block idle event 2026-03-01 21:01:59 -08:00
Amdev-5 cce073dbdb fix(lusha): add pagination and empty filter validation
- Expose page parameter on search_people and search_companies
  (client + MCP tool) enabling access beyond the first 50 results
- Add guard requiring at least one filter on both search endpoints
  to prevent broad requests that burn API credits
- Add unit tests for pagination and empty filter validation
2026-03-02 10:20:08 +05:30
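A minimal sketch of the guard the Lusha commit describes: search calls must carry at least one non-empty filter so broad queries don't burn API credits, and a page parameter reaches results beyond the first 50 (payload field names are illustrative):

```python
def build_search_payload(filters: dict, page: int = 0, page_size: int = 50) -> dict:
    """Require at least one non-empty filter before building the search request body."""
    cleaned = {k: v for k, v in filters.items() if v not in (None, "", [], {})}
    if not cleaned:
        raise ValueError("at least one search filter is required; refusing a broad query")
    return {"filters": cleaned, "pages": {"page": page, "size": page_size}}

# build_search_payload({"companyName": "Acme"}, page=1)   # ok: second page of results
# build_search_payload({})                                 # raises ValueError
```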
Timothy a1e54922bd fix: timer countdown update 2026-03-01 20:22:46 -08:00
Timothy 63c0ca34ea Merge branch 'feature/agent-runtime-idling' into feature/queen-worker-comm 2026-03-01 20:14:46 -08:00
Timothy 135477e516 feat: agent idling detection 2026-03-01 20:14:35 -08:00
Timothy 8cac49cd91 feat: frontend display of scheduler count down 2026-03-01 20:13:21 -08:00
Timothy 28dce63682 fix: conversation ordering 2026-03-01 18:56:41 -08:00
Timothy 313ac952e0 Merge branch 'feature/tool-pill-v2' into feature/queen-worker-comm 2026-03-01 18:33:54 -08:00
Timothy 0633d5130b fix: command line refresh frontend build 2026-03-01 18:33:43 -08:00
Timothy 995e487b49 Merge branch 'feature/tool-pill-v2' into feature/queen-worker-comm 2026-03-01 18:26:49 -08:00
Timothy 64b58b57e0 fix: remove reddish color 2026-03-01 18:26:27 -08:00
Timothy c6465908df feat: colorful tool pills 2026-03-01 18:11:57 -08:00
Timothy ca96bcc09f fix: add pending question content to worker status 2026-03-01 18:11:15 -08:00
Timothy 65ee628fae fix: tool pill turn id 2026-03-01 17:58:31 -08:00
Timothy 02043614e5 feat: consolidate worker status report, fix conversation order 2026-03-01 17:56:27 -08:00
Timothy 212b9bf9d4 fix: load agent 2026-03-01 16:26:55 -08:00
Timothy 6070c30a88 Merge branch 'feat/open-hive' into feature/queen-worker-comm 2026-03-01 16:06:43 -08:00
Timothy 8a653e51bc feat: separate worker and queen input 2026-03-01 15:50:28 -08:00
Vasu Bansal 6a92588264 fix(plaid): update v0.6 credential compatibility and stabilize tests 2026-03-01 01:16:16 +05:30
Vasu Bansal 276aad6f0d feat: add Plaid banking integration
- Implement Plaid connector for account balances
- Add transaction history retrieval
- Include GL reconciliation functionality
- Add institution metadata lookup
- Include comprehensive tests and documentation

Closes #4016
2026-03-01 01:16:16 +05:30
Vasu Bansal 10620bda4f fix(sap): update credential-store compatibility and test imports 2026-03-01 01:07:00 +05:30
Vasu Bansal c214401a00 feat(integration): add SAP S/4HANA connector
Add complete SAP S/4HANA integration with:
- Connector for OData API access
- Credential management following Hive patterns
- Unit tests with mocked responses
- Documentation and usage examples

Refs #3182
2026-03-01 01:07:00 +05:30
Vasu Bansal 260ac33324 fix(s3): support v0.6 credential refs and register S3 tools 2026-03-01 00:56:22 +05:30
Vasu Bansal d4cd643860 feat: add AWS S3 integration for cloud object storage
- Add S3Storage class with upload, download, list, delete operations
- Support IAM roles, environment variables, and credential store
- Implement retry logic with adaptive backoff
- Add MCP tools: s3_upload, s3_download, s3_list, s3_delete, s3_check_credentials
- Include comprehensive tests with moto mocking
- Add documentation for setup and IAM permissions

Closes #3012
2026-03-01 00:54:57 +05:30
IamSayeed dc16cfda21 Merge branch 'main' into feature/add-asana-integration 2026-02-28 11:28:43 +05:30
RichardTang-Aden d562670425 Merge pull request #5501 from aden-hive/feat/open-hive
Feat: v6 windows compatibility support
2026-02-27 19:58:48 -08:00
Timothy Zhang 677bee6fe5 Merge branch 'feat/open-hive' of https://github.com/adenhq/hive into feat/open-hive 2026-02-27 19:55:54 -08:00
Timothy Zhang de27bfe76f fix: windows compatibility 2026-02-27 19:55:48 -08:00
Timothy 1c1dcb9c33 chore: new architecture 2026-02-27 19:55:05 -08:00
RichardTang-Aden 4ba950f155 Merge pull request #5499 from aden-hive/feat/open-hive
feat: tool call revamp, Intercom & GA integrations, credential improvements
2026-02-27 19:41:11 -08:00
bryan 9c3a11d7bb chore: remove load agent 2026-02-27 19:14:35 -08:00
Richard Tang b7d357aea2 Merge remote-tracking branch 'upstream/feat/open-hive' into feat/sub-agent-framework 2026-02-27 19:07:45 -08:00
bryan b2fed68346 chore: fix linter 2026-02-27 18:57:52 -08:00
bryan 0e996928be fix: load credentials check into new agent session 2026-02-27 18:50:03 -08:00
Timothy 6ff4ec3643 Merge branch 'feature/tool-call-revamp' into feat/open-hive 2026-02-27 18:45:35 -08:00
Timothy a0eda3e492 fix: event loop iterations 2026-02-27 18:41:13 -08:00
bryan 099f9514ef Merge branch 'main' into feat/open-hive 2026-02-27 18:10:42 -08:00
Timothy b2096e4a55 Merge branch 'feat/open-hive' into feature/tool-call-revamp 2026-02-27 18:10:32 -08:00
Timothy 1bf2164745 fix: spamming session update 2026-02-27 18:10:09 -08:00
bryan 48205bbde7 fix: dismiss credential banner 2026-02-27 18:09:51 -08:00
Bryan @ Aden 296aab6ecb Merge pull request #5171 from Ttian18/feat/tina/intercom-tool-4256
feat(tools): add Intercom tool integration (#4256)
2026-02-28 02:01:57 +00:00
Richard Tang 14182c45fc refactor: reorganized file tools 2026-02-27 17:52:21 -08:00
Richard Tang 2fa8f4283c Merge remote-tracking branch 'upstream/feat/open-hive' into feat/sub-agent-framework 2026-02-27 17:51:43 -08:00
Bryan @ Aden ad3cec2361 Merge pull request #4239 from Ttian18/feat/tina-google-analytics-tool
[Integration]: Google Analytics - Website Traffic & Marketing Performance #3727
2026-02-28 01:50:07 +00:00
bryan eddb628298 fix: remove mock_mode from queen/coder system prompt templates 2026-02-27 17:38:03 -08:00
bryan f63b226d8d fix: pipeline visual update 2026-02-27 17:32:59 -08:00
Timothy cc5bd61d86 feature: new tool calling logic 2026-02-27 17:29:00 -08:00
bryan 8bd14fb16f fix: graph summary for intake 2026-02-27 17:08:34 -08:00
bryan 30b5472e33 fix: center text and open hive 2026-02-27 16:47:20 -08:00
Adam Albarghouthi bc836db0f9 micro-fix: fix incorrect CLI commands and docstring in core docs (#5457)
- Replace non-existent CLI commands (calculate, interactive, analyze)
  with actual commands (run, shell, info) in core/README.md
- Fix test-list argument from <goal_id> to <agent_path> in core/README.md
- Fix misleading docstring on MockProvider.complete_with_tools()

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-02-28 08:40:58 +08:00
bryan bd3b0fb8eb chore: windows quickstart update 2026-02-27 16:13:36 -08:00
Adam Albarghouthi 7f28474967 micro-fix: fix wrong credential path and env var in docs (#5458)
* micro-fix: fix wrong credential path and env var in docs

Both docs/configuration.md and docs/environment-setup.md reference a
non-existent ADEN_CREDENTIALS_PATH env var and wrong default path
(~/.aden/credentials). The actual env var is HIVE_CREDENTIAL_KEY and
the default path is ~/.hive/credentials (see storage.py:119,125).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* micro-fix: clarify HIVE_CREDENTIAL_KEY comment wording

Reword comment to avoid implying the env var controls the path.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 08:01:16 +08:00
bryan 09460b28bc refactor: move credentials from shell config to ~/.hive 2026-02-27 15:55:08 -08:00
wlkjyy 5d8ba1e49c micro-fix: tests: use unified session_* run IDs in runtime logging tests (#5480)
* tests: use session_* run IDs in runtime logging tests

* refactor: extract _sid() helper for session IDs in runtime logger tests
2026-02-28 07:54:59 +08:00
Richard Tang ccb394675b Merge remote-tracking branch 'upstream/feat/open-hive' into feat/sub-agent-framework 2026-02-27 14:48:47 -08:00
Richard Tang 931487a7d4 feat: clean the options for browser open tools that should not be used by LLM 2026-02-27 14:48:31 -08:00
bryan 3654c57f66 Merge branch 'main' into feat/open-hive 2026-02-27 14:48:10 -08:00
Richard Tang fb28280ced feat: human-friendly LLM and tool calls logs 2026-02-27 14:45:12 -08:00
bryan 6215441b58 fix: SSE reconnect on session change, tool pill per-call tracking, cancel/pause event emission 2026-02-27 14:37:54 -08:00
Richard Tang 52f16d5bb6 Merge remote-tracking branch 'upstream/feat/open-hive' into feat/sub-agent-framework 2026-02-27 13:49:14 -08:00
Antiarin e5b6c8581a feat: implement agent path validation and restrict loading to allowed directories 2026-02-28 02:56:31 +05:30
Timothy e1db3a4af9 fix: remove hardcoded anthropic logics 2026-02-27 10:23:59 -08:00
bryan 5dcca99913 fix: credential modal updates 2026-02-26 20:54:11 -08:00
Zhang 890b906f15 fix(tools): address review feedback on Google Analytics tool
- Use Credentials.from_service_account_file() instead of mutating os.environ
- Remove unused dimensions param from _format_report_response
- Remove unused metrics param from _format_realtime_response
- Extract duplicated property_id/limit validation into _validate_inputs helper
- Add credential_group="google_cloud" to GA and BigQuery specs
- Update tests to mock Credentials class
2026-02-26 20:46:20 -08:00
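A small sketch of the credential change noted above, assuming the google-analytics-data client library and a hypothetical make_ga_client helper; the point is to pass explicit Credentials rather than mutating os.environ.

    from google.analytics.data_v1beta import BetaAnalyticsDataClient
    from google.oauth2.service_account import Credentials

    def make_ga_client(service_account_path: str) -> BetaAnalyticsDataClient:
        # Load the service-account key explicitly instead of exporting
        # GOOGLE_APPLICATION_CREDENTIALS, so nothing else in the process is affected.
        creds = Credentials.from_service_account_file(
            service_account_path,
            scopes=["https://www.googleapis.com/auth/analytics.readonly"],
        )
        return BetaAnalyticsDataClient(credentials=creds)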
Timothy @aden 6a8286d4cf Merge pull request #5462 from aden-hive/feat/open-hive
Release / Create Release (push) Waiting to run
Feat/open hive
2026-02-26 20:41:58 -08:00
Timothy 680024f790 fix: cancel worker logic 2026-02-26 20:35:17 -08:00
Timothy 6f7bfb92a8 fix: patch the erroneous skip judge logic 2026-02-26 20:31:45 -08:00
Zhang 335a9603e8 feat(tools): add Google Analytics 4 integration (#3727)
Add read-only GA4 Data API v1 tools: ga_run_report, ga_get_realtime,
ga_get_top_pages, and ga_get_traffic_sources. Includes credential spec,
unit tests, and README.
2026-02-26 20:22:12 -08:00
Zhang 5e8a6202e7 fix(credentials): add Intercom health checker to registry (#4256)
Add IntercomHealthChecker (subclass of OAuthBearerHealthChecker) and
register it in HEALTH_CHECKERS so the credential registry completeness
test passes in CI.
2026-02-26 20:01:43 -08:00
Zhang 55a4cdefd7 fix(tools): pass assignee_type through to Intercom API and add README (#4256)
- Pass assignee_type from intercom_assign_conversation tool function
  through to _IntercomClient.assign_conversation() and into the API payload
- Add tests for assignee_type="team" passthrough at client and tool levels
- Add tool README with setup, usage examples, and error handling

Addresses PR #5171 review feedback from @bryanadenhq
2026-02-26 19:56:36 -08:00
Richard Tang 2b63135afb Merge remote-tracking branch 'upstream/feat/open-hive' into feat/sub-agent-framework 2026-02-26 19:33:24 -08:00
Timothy 49d8c3572d fix: stalled agent stop tools 2026-02-26 19:09:01 -08:00
bryan 4b40962186 feat: agent loading after change 2026-02-26 19:08:28 -08:00
Richard Tang 779b376c6e Merge remote-tracking branch 'upstream/feat/open-hive' into feat/sub-agent-framework 2026-02-26 19:02:35 -08:00
bryan 4e2a9a247a patch: credentials modal blocking incorrectly 2026-02-26 18:34:51 -08:00
Richard Tang b1f3d6b155 Merge remote-tracking branch 'upstream/feat/open-hive' into feat/sub-agent-framework 2026-02-26 17:59:15 -08:00
Timothy ea28a9d3c3 fix: turn off judge for now 2026-02-26 17:57:49 -08:00
bryan 69a03e463f cancel + queue msg 2026-02-26 17:57:21 -08:00
Richard Tang e7da62e61c Merge remote-tracking branch 'upstream/feat/open-hive' into feat/sub-agent-framework 2026-02-26 17:17:37 -08:00
Richard Tang 7176745e1c feat: GCU enabled in the quickstart menu 2026-02-26 17:15:37 -08:00
Timothy cce0e26f5c Merge branch 'feature/system-prompt-v2-worker-path' into feat/open-hive 2026-02-26 17:13:46 -08:00
Timothy 641af16dfc fix: nuanced preference tweaking 2026-02-26 17:09:47 -08:00
Timothy a335c427ef fix: worker file path fix 2026-02-26 17:05:51 -08:00
bryan 9ea6c959ae feat: mid-session credential management and MCP resync 2026-02-26 17:03:06 -08:00
Richard Tang 20efd523c9 Merge remote-tracking branch 'upstream/feature/llm-turn-logging' into feat/sub-agent-framework 2026-02-26 16:16:37 -08:00
Timothy 8fc7fff496 feature: log llm turn stop reasons 2026-02-26 16:14:51 -08:00
Richard Tang edf51e6996 feat: prompts for GCU 2026-02-26 15:45:03 -08:00
Richard Tang 6b867883ce chore: ruff lint 2026-02-26 15:03:06 -08:00
Richard Tang 35a05f4120 Merge remote-tracking branch 'upstream/feat/open-hive' into feat/sub-agent-framework 2026-02-26 14:59:48 -08:00
Richard Tang e0e78a97ce refactor: re-organize all the browser tools and make them built-in for the gcu node type 2026-02-26 12:51:10 -08:00
Navya Bijoy ddd30a950d Integration: add Databricks MCP tool integration
Implements the Databricks MCP tool integration for the Hive agent framework
2026-02-26 21:01:59 +05:30
Richard Tang e4e476f463 Merge remote-tracking branch 'origin/feat/open-hive' into fix/codex-and-litellm-improvement 2026-02-26 07:24:16 -08:00
KRYSTALM7 3ca0e63d54 feat(tools): add Pushover push notification integration
Closes #5415
2026-02-26 13:54:34 +00:00
hundao c4c8917ecb fix: skip auto-block when weak models output text instead of calling tools
Client-facing nodes auto-block on text-only turns (wait for user input).
This breaks weak models (Codex) that output text like "Understood" instead
of calling tools after user responds.

Add _cf_expecting_work state: after user input, text-only turns with
missing output keys skip auto-block and go to judge, which pushes the
LLM to call set_output. Tool calls reset the state back to presenting
mode (auto-block on next text-only).

No behavioral change for strong models (they always call tools after
user input, so the new code path is never triggered).
2026-02-26 20:58:33 +08:00
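A hypothetical condensation of the state machine this commit describes; the function name, arguments, and return shape are illustrative, not the actual runtime code.

    def next_turn_action(has_tool_calls: bool, output_keys_missing: bool,
                         expecting_work: bool) -> tuple[str, bool]:
        """Return (action, new_expecting_work_state)."""
        if has_tool_calls:
            # Tool call: back to presenting mode, auto-block on the next text-only turn.
            return "continue", False
        if expecting_work and output_keys_missing:
            # Weak model answered with text right after user input: route to the judge
            # instead of blocking, so it gets pushed to call set_output.
            return "judge", True
        # Normal text-only turn on a client-facing node: wait for user input.
        return "auto_block", expecting_work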
hundao 1524d2ef00 fix: remove implementation hints from judge feedback for weak models
Judge feedback was saying "Use set_output tool to provide them" which
caused Codex to skip all work and call set_output directly. Changed to
"Follow your system prompt instructions to complete the work."
2026-02-26 20:56:14 +08:00
bryan 5032834034 fix: exit quickstart if claude code not configured 2026-02-25 20:38:28 -08:00
Richard Tang 0b83f6ea99 fix(wip): codex tool use bug fixes 2026-02-25 20:09:49 -08:00
bryan 415201f467 remove gpt-nano 2026-02-25 19:58:12 -08:00
bryan 73005a8498 fix: validate credentials before queen-initiated worker start 2026-02-25 19:37:54 -08:00
Timothy 4edb960fbd fix: stupid hard limit 2026-02-25 19:35:41 -08:00
Timothy 42d11ead01 fix: fake goal prompt injection 2026-02-25 19:30:20 -08:00
bryan 5e18f85b10 fix: deferred cred validation + dismissable error banners 2026-02-25 19:14:55 -08:00
Timothy 85b25bf006 fix: missing mcp reference 2026-02-25 19:10:35 -08:00
Timothy c1ba108489 Merge branch 'feature/refactor-system-prompt' into feat/open-hive 2026-02-25 18:46:16 -08:00
Richard Tang 214098aaae fix: remove the run_command tool from the predefined engineering tool set for worker agent 2026-02-25 18:36:00 -08:00
bryan 241a0b7adc fix: clean up stale active sessions on worker load 2026-02-25 18:29:15 -08:00
bryan 9a7b41a4be feat: show schedule info when clicking trigger nodes 2026-02-25 18:12:20 -08:00
Timothy fe918adb16 fix: cosmetics 2026-02-25 18:01:58 -08:00
bryan 746f026654 feat: 3-layer resume prompts + trigger node visualization 2026-02-25 17:49:44 -08:00
Richard Tang 8294cd3dd9 feat: fix codex tool call usage 2026-02-25 17:49:41 -08:00
Timothy 3bbc63b1db feature/refactored-system-prompt-narratives 2026-02-25 17:33:21 -08:00
Richard Tang 337fb6d922 refactor: deprecate the unused llm functions 2026-02-25 17:32:33 -08:00
bryan bda6b18e8a fix: session reconnect + iteration-based message IDs 2026-02-25 16:55:33 -08:00
bryan d256ff929f fix: faster input_requested + execution_id tracking 2026-02-25 15:38:46 -08:00
bryan f71b20cf07 filter out queen judge from action plan 2026-02-25 12:26:03 -08:00
Richard Tang db26b0afd6 feat: pop out the codex OAuth consent page 2026-02-25 11:53:00 -08:00
Timothy 145860f42e fix: consolidate validation endpoints 2026-02-25 09:54:06 -08:00
bryan d9f84648d0 revoke credential ux 2026-02-25 09:46:46 -08:00
Timothy 9fb7e0bae7 fix: load agent graph consistently 2026-02-25 09:02:37 -08:00
Bryan @ Aden b00203702e Merge pull request #5344 from juni2003/docs/fix-readme-org-links
docs(readme): fix broken org links
2026-02-25 16:57:00 +00:00
bryan ead85dd41f fix closing tab, remove 0/0 from credential modal 2026-02-25 08:11:24 -08:00
bryan cf5bf6f174 initial prompt from home page 2026-02-25 07:58:49 -08:00
bryan 46237e7309 kill judge and queen 2026-02-24 20:01:43 -08:00
bryan afa686b47b Merge branch 'main' into feat/open-hive 2026-02-24 19:38:46 -08:00
Timothy 21e02c9e50 Merge branch 'fix/credential-popup' into feat/open-hive 2026-02-24 19:29:21 -08:00
Timothy 30a188d7c8 fix: credential popup 2026-02-24 19:28:53 -08:00
bryan 355f51b25e quickstart update 2026-02-24 19:26:54 -08:00
Timothy 8e1cde86e8 Merge branch 'feat/openhive-cred-fixes' into feat/open-hive 2026-02-24 19:10:30 -08:00
Timothy c13b02c7d9 Merge branch 'fix/credential-loading' into feat/open-hive 2026-02-24 19:09:39 -08:00
bryan 9e72801c28 agent loading 2026-02-24 19:05:12 -08:00
RichardTang-Aden 3a3d538b73 Merge pull request #5367 from RichardTang-Aden/feat/codex-subscription-rebased
Feat/codex subscription rebased
2026-02-24 18:53:39 -08:00
Richard Tang b11bca0c67 chore: lint reformat 2026-02-24 18:53:04 -08:00
Richard Tang faf8975b42 chore: improve script code and solved lint errors 2026-02-24 18:51:37 -08:00
Richard Tang 863168880e fix: unused credential detect path removed 2026-02-24 18:41:36 -08:00
Timothy 384a1f0560 fix: credential loading 2026-02-24 18:40:39 -08:00
bryan 4bd1b1b9e6 credential updated 2026-02-24 18:33:09 -08:00
Richard Tang 8c3866a014 feat: optimized for the LLM selection option 2026-02-24 18:27:03 -08:00
Richard Tang 61283d9bd6 feat: Codex subscription OAuth 2026-02-24 18:24:36 -08:00
Richard Tang 585a7186d4 feat: support openai codex subscription as the LLM provider 2026-02-24 18:24:36 -08:00
Timothy 72a31c2a65 fix: credential validity, update api readme 2026-02-24 18:11:10 -08:00
RichardTang-Aden 10d9e54857 Merge pull request #4576 from mubarakar95/perf/reduce-subprocess-spawning-windows
perf: reduce subprocess spawning in quickstart scripts (#4427)
2026-02-24 17:47:22 -08:00
bryan e68695ee92 merge 2026-02-24 17:43:29 -08:00
RichardTang-Aden 11379fc0ef Merge branch 'main' into perf/reduce-subprocess-spawning-windows 2026-02-24 17:43:25 -08:00
Timothy 6d102382bd fix: session id issues 2026-02-24 17:42:09 -08:00
bryan 56335927e7 change from agentid to session id 2026-02-24 15:53:14 -08:00
Timothy a3fe994b22 fix: remove duplicative queen session starter api 2026-02-24 15:14:02 -08:00
Timothy 5754bdcc78 Merge branch 'feature/session-manager' into feat/open-hive 2026-02-24 15:01:01 -08:00
Timothy eef2fa9ffb feature: session manager, superseding agent manager 2026-02-24 15:00:09 -08:00
bryan 7286907cd4 multiple agent session running 2026-02-24 14:56:24 -08:00
Richard Tang 754e33a1ae feat: browser tools optimization 2026-02-24 14:05:26 -08:00
Timothy 1fbb431f1b Merge branch 'fix/globalize-queen-judge' into feat/open-hive 2026-02-24 13:28:29 -08:00
Timothy 0ad52b90d8 fix: globalize queen and judge agent's storage 2026-02-24 13:27:33 -08:00
bryan c44b12cc8b remove subgraph, persistent tabs, node action plan 2026-02-24 12:42:07 -08:00
Timothy 8381c95617 Merge branch 'fix/session-loading-isolation' into feat/open-hive 2026-02-24 11:18:48 -08:00
Timothy 3963855d1d fix: isolate session loading 2026-02-24 11:02:58 -08:00
Junaid 51154a3070 docs(readme): fix broken org links
Update repository URLs from adenhq/hive to aden-hive/hive to prevent 404s
2026-02-24 23:55:23 +05:00
Richard Tang b11b43bbe1 feat: reorganized the log structure for subagents 2026-02-24 10:41:13 -08:00
bryan 7a7ece1805 markdown support, removed subgraph, stop button 2026-02-24 10:40:24 -08:00
bryan 28a71b70a8 readme for http apis 2026-02-24 09:22:56 -08:00
bryan 33d3a13fde Merge branch 'feature/concurrent-judge-runtime' into feat/open-hive 2026-02-24 09:11:42 -08:00
bryan 5ea278a08d integrated queen, worker, judge 2026-02-24 09:09:28 -08:00
Timothy fd95f8da28 feat: active streams and waiting nodes 2026-02-24 09:03:21 -08:00
Richard Tang 86f4645d1c fix: inherit the tool call overflow margin for subagent 2026-02-24 08:20:08 -08:00
Richard Tang 2d05e96cd5 fix: spillover for subagent 2026-02-24 08:18:52 -08:00
bryan c1d5952ad9 Merge branch 'feature/concurrent-judge-runtime' into feat/open-hive 2026-02-24 08:07:31 -08:00
RichardTang-Aden ebeac68707 Merge pull request #5272 from SANTHAN-KUMAR/main
fix(web_scrape): reorder status checks before wait & replace hardcoded sleep with networkidle
2026-02-24 08:02:41 -08:00
bryan 72673e12fb remove mock data 2026-02-24 08:02:08 -08:00
Timothy 3867d3926b Merge branch 'main' into feature/concurrent-judge-runtime 2026-02-24 07:43:22 -08:00
Timothy 0b2b7a2622 feat: event bus logging 2026-02-24 07:43:05 -08:00
bryan 3951ee1a7d Merge branch 'main' into feat/open-hive 2026-02-24 07:28:42 -08:00
bryan 1afde51c7b additional graph update 2026-02-24 07:28:11 -08:00
bryan cbeef18f0a wip graph 2026-02-24 07:27:48 -08:00
SANTHAN-KUMAR de5fcab933 fix(web_scrape): implement robots.txt support & clean up dead mock
- Add respect_robots_txt parameter (default True) using stdlib
  urllib.robotparser, checked before browser launch to skip
  disallowed URLs early
- Remove dead wait_for_timeout mock from test helper
- Restore respect_robots_txt docs in README (param, error, note)
- Add 2 tests: blocked by robots.txt, disabled robots.txt check
- Fix import ordering (ruff I001)
2026-02-24 15:29:13 +05:30
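A minimal sketch of the robots.txt check described above, using only the stdlib urllib.robotparser (helper name hypothetical); the check runs before the browser launch so disallowed URLs are skipped early.

    from urllib.parse import urlparse
    from urllib.robotparser import RobotFileParser

    def allowed_by_robots(url: str, user_agent: str = "*") -> bool:
        parts = urlparse(url)
        parser = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
        try:
            parser.read()
        except OSError:
            # If robots.txt cannot be fetched, err on the side of allowing the scrape.
            return True
        return parser.can_fetch(user_agent, url)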
SANTHAN-KUMAR a7a2100472 Merge branch 'aden-hive:main' into main 2026-02-24 15:24:10 +05:30
Uttkarsh Joshi 1947d8c3ca Fix asyncio.run crash in GraphBuilder and enhance ToolRegistry type inference (fixes: #2680) (#2895)
* Enhance ToolRegistry type inference for function parameters

- Add _infer_schema() helper to handle Union types (Union[T, U] and T | U)
- Support Optional[T] and Union[T, None] with correct optional flag
- Infer generic types: list[T] -> array with items schema, dict[K, V] -> object with additionalProperties
- Detect Pydantic BaseModel parameters and use model_json_schema()
- Correctly mark parameters as required/optional based on type annotations
- Add comprehensive test suite covering all type inference scenarios
- Maintain backward compatibility for unannotated parameters

* Fix asyncio.run crash in GraphBuilder.run_test

* Revert "Enhance ToolRegistry type inference for function parameters"

This reverts commit dacd0fa8b926e01d3f29e7c9b2ff5101b4a52c3b.
2026-02-24 17:06:11 +08:00
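An illustrative, deliberately simplified version of the Union/Optional handling the (since reverted) type-inference commit describes, built on typing.get_origin/get_args; the real _infer_schema covered more cases such as dict and Pydantic models.

    import types
    from typing import Union, get_args, get_origin

    def infer_schema(annotation) -> tuple[dict, bool]:
        """Return (json_schema_fragment, is_required) for one parameter annotation."""
        origin = get_origin(annotation)
        if origin in (Union, types.UnionType):            # Union[T, U] and T | U (3.10+)
            members = get_args(annotation)
            non_none = [m for m in members if m is not type(None)]
            required = len(non_none) == len(members)      # Optional[T] / Union[T, None] -> optional
            inner, _ = infer_schema(non_none[0])          # first member only, for brevity
            return inner, required
        if origin is list:                                # list[T] -> array with items schema
            items = get_args(annotation)
            inner = infer_schema(items[0])[0] if items else {"type": "string"}
            return {"type": "array", "items": inner}, True
        primitives = {str: "string", int: "integer", float: "number", bool: "boolean"}
        return {"type": primitives.get(annotation, "object")}, True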
austin931114 55c63736ef Merge pull request #5315 from sabasiddique1/fix/roadmap-mermaid-diagram-render
docs: fix Roadmap Mermaid diagram not rendering on GitHub
2026-02-24 10:05:48 +01:00
austin931114 a2b68d893f Merge pull request #5317 from kart1ka/fix/retired-haiku-3.5-model
micro-fix: replace retired claude-3-5-haiku-20241022 with claude-haiku-4-5
2026-02-24 09:50:17 +01:00
Kartik Saini fd06e43d9c Merge branch 'main' into fix/retired-haiku-3.5-model 2026-02-24 11:36:18 +05:30
Kartik Saini b550f6efa0 fix(llm): replace retired claude-3-5-haiku-20241022 with claude-haiku-4-5-20251001 2026-02-24 11:22:40 +05:30
Saba Siddique 47adf88773 docs: fix Roadmap Mermaid diagram for GitHub rendering 2026-02-24 10:36:35 +05:00
Richard Tang 9c44d3b793 feat: add the upgraded file operation tools 2026-02-23 20:25:25 -08:00
Richard Tang 9b89ac694e feat: new snapshot tools 2026-02-23 19:34:42 -08:00
RichardTang-Aden 8748da38cf Merge pull request #5310 from RichardTang-Aden/fix/llm-token-source
Feat: tui workflow improvement and the fix for quickstart  problem for GLM
2026-02-23 19:23:59 -08:00
Timothy f697dc99fb feat: queen primitives 2026-02-23 19:15:55 -08:00
Richard Tang 630d8208cf fix: avoid using headless browser 2026-02-23 19:09:18 -08:00
bryan ecb038c955 chat now creates multiple chat messages 2026-02-23 19:07:54 -08:00
Richard Tang 77ff31cec6 feat: add back the quickstart prompt to restart terminal 2026-02-23 18:33:33 -08:00
Richard Tang 9b342dc593 feat: add health check for the browser start 2026-02-23 18:28:59 -08:00
Richard Tang 5ea8677a5d feat: tui get started menu 2026-02-23 18:06:59 -08:00
Richard Tang ad879de6ff feat: clean the browser snapshot tool 2026-02-23 17:56:05 -08:00
Richard Tang 97f5b3423f feat: source the llm token after quickstart 2026-02-23 17:50:53 -08:00
Timothy @aden 4968207eef Merge pull request #5276 from TimothyZhang7/fix/identity-persistence
Fix/identity persistence
2026-02-23 17:47:53 -08:00
Timothy @aden f859e2203a Merge branch 'main' into fix/identity-persistence 2026-02-23 17:45:23 -08:00
Bryan @ Aden fb3dad4354 Merge pull request #5231 from vakrahul/fix/local-llm-keyless-crash
fix(core): support local LLMs (Ollama, vLLM, LM Studio, Llama.cpp) in AgentRunner #3994
2026-02-24 01:42:53 +00:00
Timothy adc82c6a65 fix: lint issue 2026-02-23 17:42:36 -08:00
bryan 96084fea16 wip chat 2026-02-23 17:41:12 -08:00
Timothy @aden 6f52026c84 Merge branch 'main' into fix/identity-persistence 2026-02-23 17:35:44 -08:00
Timothy @aden 3576218ea9 Merge pull request #5270 from aden-hive/feature/local-credential-namespace
feat: local credential testing
2026-02-23 17:32:34 -08:00
vakrahul 4c662db530 fix: add missing accounts_prompt to add_graph in AgentRuntime 2026-02-24 07:01:48 +05:30
Timothy da1ce4e5a7 fix: lint 2026-02-23 17:30:27 -08:00
vakrahul c4944c5662 fix: pass accounts_prompt to ExecutionStream in add_graph and GraphExecutor 2026-02-24 06:31:56 +05:30
Bryan @ Aden d892f87651 Merge pull request #1814 from nafiyad/feat/wikipedia-search-tool
Feat/wikipedia search tool
2026-02-24 00:34:54 +00:00
Nafiyad Adane 447f23d157 style: run ruff check and format on tools/ 2026-02-23 17:17:58 -07:00
Nafiyad Adane aa12f0d295 Merge main into feat/wikipedia-search-tool 2026-02-23 17:12:31 -07:00
Richard Tang 795266aab4 feat: store the subagent logs in the node logs folder 2026-02-23 16:02:39 -08:00
Richard Tang 4e4ef121f9 feat: Progressive feedback in SubagentJudge 2026-02-23 15:48:34 -08:00
Richard Tang ddb9126955 fix: resolve the bug of calling the snapshot tool too many times 2026-02-23 15:38:04 -08:00
Richard Tang bac6d6dd68 feat: subagent ending judge and communication 2026-02-23 15:25:59 -08:00
bryan de9226aae0 credentials 2026-02-23 14:11:16 -08:00
Timothy 16e1ab1a87 feat: concurrent judge session 2026-02-23 13:56:59 -08:00
Bryan @ Aden 54287e06ad Merge pull request #4519 from Rudra2637/clarify-criterion-evaluation
Clarify supported criterion evaluation and progress semantics
2026-02-23 20:39:48 +00:00
Richard Tang 3451570541 feat: enable subagent to talk back to the parent via tools 2026-02-23 12:31:51 -08:00
Rudra2637 b33de5f0e1 Fix lint and formatting issues 2026-02-24 01:49:45 +05:30
Rudra2637 2d5ef20d4d Restore comment explaining 0.8 threshold 2026-02-24 01:13:52 +05:30
Rudra2637 177346b159 Fix docstring indentation 2026-02-24 01:09:40 +05:30
bryan 08819b1609 Merge branch 'main' into feat/open-hive 2026-02-23 11:13:32 -08:00
Richard Tang e5e939f344 feat: add a basic test tool for the browser control tools' validity 2026-02-23 11:08:08 -08:00
Richard Tang 0d51d25482 feat: highlight interactive actions 2026-02-23 11:03:19 -08:00
Rudra2637 35b1332551 Add type field to SuccessCriterion and restore evaluation guard 2026-02-24 00:29:31 +05:30
Bryan @ Aden 52586a024b Merge pull request #5273 from aden-hive/chore/add-community-cred
(micro-fix): add community credit for competitive intelligence agent
2026-02-23 18:53:45 +00:00
bryan 05a314b121 add community credit for competitive intelligence agent 2026-02-23 10:42:13 -08:00
Richard Tang a0a5b10df0 fix: remove the max subagent logic 2026-02-23 10:35:55 -08:00
Richard Tang 04bac93c14 feat: fix tool bugs and add background tabs option 2026-02-23 10:20:52 -08:00
Bryan @ Aden 8e262e2270 Merge pull request #5179 from nafiyad/feature/competitive-intelligence-agent-4153
Add competitive intelligence agent template
2026-02-23 18:20:02 +00:00
SANTHAN-KUMAR 4961d3ba8c fix(web_scrape): reorder status checks & replace hardcoded wait with networkidle
- Move response validation (null, HTTP status, content-type) before
  the rendering wait so errors return immediately without sleeping
- Replace wait_for_timeout(2000) with wait_for_load_state("networkidle")
  to align code with README (timeout=3000, wrapped in try/except)
- Fix README: remove phantom respect_robots_txt param, fix timeout
  30s→60s, remove false robots.txt claim
- Add 3 tests for early-exit error paths
2026-02-23 23:40:10 +05:30
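A sketch of the reordered flow, assuming Playwright's async API (this is not the actual web_scrape tool and omits the content-type check): validate the response first so error paths return without sleeping, then wait for networkidle inside a try/except.

    from playwright.async_api import TimeoutError as PlaywrightTimeout
    from playwright.async_api import async_playwright

    async def fetch_rendered_html(url: str) -> dict:
        async with async_playwright() as p:
            browser = await p.chromium.launch()
            page = await browser.new_page()
            response = await page.goto(url, timeout=60_000)
            # Validate before any rendering wait so errors return immediately.
            if response is None or response.status >= 400:
                await browser.close()
                return {"error": f"request failed for {url}"}
            try:
                await page.wait_for_load_state("networkidle", timeout=3_000)
            except PlaywrightTimeout:
                pass  # best effort: fall through with whatever has rendered so far
            html = await page.content()
            await browser.close()
            return {"html": html}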
Timothy 733bb4d2dd fix: get all account info including local apis 2026-02-23 10:09:22 -08:00
vakrahul ba31c760a6 fix: restore accounts_prompt propagation chain to ExecutionStream 2026-02-23 23:31:08 +05:30
Timothy a388bc6837 feat: local credential testing 2026-02-23 09:55:38 -08:00
Timothy 3f5bbbf1e3 feat: implementation of concurrent judge 2026-02-23 09:52:11 -08:00
Emmanuel Nwanguma 002da15375 docs(tools): add README for security tools (#5164)
* docs(tools): add README + comprehensive tests for security tools

READMEs added for 7 security scanning tools:
- port_scanner: TCP connect scans, banner grabbing, risky port detection
- ssl_tls_scanner: TLS version, cipher, certificate analysis
- http_headers_scanner: OWASP security headers validation
- dns_security_scanner: SPF, DMARC, DKIM, DNSSEC, zone transfer
- subdomain_enumerator: Passive CT log subdomain discovery
- tech_stack_detector: Web technology fingerprinting
- risk_scorer: Weighted letter-grade risk scoring

Comprehensive unit tests (92 total):
- Port scanner: constants, port categories, _check_port async tests
- SSL/TLS scanner: weak ciphers, TLS versions, cert parsing helpers
- HTTP headers scanner: security headers, leaky headers validation
- DNS security scanner: SPF/DMARC/DKIM/DNSSEC checks
- Subdomain enumerator: keyword detection, severity levels
- Tech stack detector: cookies, CDN, CMS, framework detection
- Risk scorer: grading logic, category scoring, JSON parsing

Fixes #5094

* revert: remove test changes per review feedback
2026-02-23 21:46:51 +08:00
Shubham Yadav 005609da3a Feat/PostgreSQL (Read-Only MCP) (#4160)
* feat(tools): add read-only PostgreSQL MCP tool

* test(tools): add postgres tool tests

* docs(tools): add postgres tool README

* feat(tools): update PostgreSQL MCP tool with refactored code structure and adding postgres credentials

* feat(tools): implement thread-safe connection pooling for PostgreSQL MCP tool

* fix(postgres): correct psycopg2 dependency and README setup instructions

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-02-23 21:17:10 +08:00
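A compact sketch of the thread-safe pooling mentioned above, using psycopg2's ThreadedConnectionPool; the DSN, pool sizes, and helper name are placeholders.

    from contextlib import contextmanager
    from psycopg2.pool import ThreadedConnectionPool

    _POOL = ThreadedConnectionPool(minconn=1, maxconn=5,
                                   dsn="postgresql://user:pass@localhost:5432/db")

    @contextmanager
    def pooled_cursor():
        # Borrow a connection from the shared pool and always hand it back,
        # even if the query raises.
        conn = _POOL.getconn()
        try:
            with conn.cursor() as cur:
                yield cur
        finally:
            _POOL.putconn(conn)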
Youssef Mohammed Abdelal Mohammed 182d9ca6f9 feat(tools): add arXiv search and download tools (#5222)
* feat(arxiv): implement search_papers and initial download_paper tools

* feat(arxiv): improve PDF download handling with temp files and validation (WIP)

Switch to NamedTemporaryFile for safer temp file handling

Force export.arxiv.org domain for PDF downloads

Add custom User-Agent header

Validate Content-Type to ensure PDF response

Improve error handling and cleanup logic

Add timeout to requests

Work in progress – download_paper still under refinement.

* feat(arxiv): replace NamedTemporaryFile with module-level TemporaryDirectory

Switch from NamedTemporaryFile(delete=False) to a shared _TEMP_DIR for
the lifetime of the server process. Scopes file lifetime to the session,
guarantees cleanup via atexit, and removes the need for manual file
handle management.

Expand README with full args/returns/error reference and implementation
notes explaining the temp storage design decision.

* test(arxiv): add comprehensive tests for search_papers and download_paper

fix(arxiv): return structured error instead of raising on invalid PDF content type

- Add full test coverage for search_papers (validation, success, id_list, errors)
- Add full test coverage for download_paper (success, network errors, invalid content, cleanup)
- Mock arxiv client and requests to isolate behavior
- Ensure partial files are cleaned up on failure
- Align download_paper behavior with tool contract (no exceptions, structured responses)

* style(tools): apply ruff formatting to arxiv tool and update lockfile
2026-02-23 20:57:04 +08:00
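A minimal sketch of the temp-storage design described in the TemporaryDirectory commit above: one module-level directory for the life of the server process, with cleanup registered via atexit (helper name hypothetical).

    import atexit
    import tempfile
    from pathlib import Path

    # One directory for the whole server process; removed automatically at exit.
    _TEMP_DIR = tempfile.TemporaryDirectory(prefix="arxiv_pdfs_")
    atexit.register(_TEMP_DIR.cleanup)

    def pdf_target_path(arxiv_id: str) -> Path:
        # Scoping downloads to the session directory removes the need for
        # per-file handle management and manual deletion.
        return Path(_TEMP_DIR.name) / f"{arxiv_id.replace('/', '_')}.pdf"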
vakrahul a6b43f8016 fix: address PR review feedback (accounts_prompt, tests, and remove markdowns) 2026-02-23 17:38:22 +05:30
vakrahul 31700fa8da fix: address PR review feedback (accounts_prompt, tests, and remove markdown) 2026-02-23 17:38:04 +05:30
Rudra2637 6b475ec1cf Removed invalid type guard and clean up comments 2026-02-23 11:06:10 +05:30
Timothy 1b27844c52 feat: local credential testing 2026-02-22 20:58:42 -08:00
RichardTang-Aden 3a0b91f7ab Merge pull request #5251 from vincentjiang777/main
docs: roadmap updates for architecture v3
2026-02-22 20:48:43 -08:00
RichardTang-Aden 82108e32fa Merge branch 'main' into main 2026-02-22 20:48:10 -08:00
Timothy 28f4fecfb3 feat: handle account identity systematically 2026-02-22 20:45:36 -08:00
Vincent Jiang ff1bb08217 docs: roadmap updates for architecture v3 2026-02-22 20:41:26 -08:00
Nafiyad Adane 10617fee0d chore(templates): export agent.json configuration
Generated the agent.json fallback configuration using the agent-builder MCP server export functionality as requested by the reviewer.
2026-02-22 21:12:08 -07:00
Bryan @ Aden 866103ddf4 Merge pull request #5212 from JamieJiHeonKim/docs/fix-readme-formatting-and-links
Docs/fix readme formatting and links
2026-02-23 03:51:32 +00:00
bryan fcfaca6bd0 Merge branch 'main' into feat/open-hive 2026-02-22 19:50:39 -08:00
bryan 4c7d9ab0fb added click cursor and rename dashboard to workspace 2026-02-22 19:21:37 -08:00
bryan 061aec4b3d my agents configured 2026-02-22 19:04:48 -08:00
Richard Tang 047f4a1a0c Merge branch 'main' into feat/sub-agent-framework 2026-02-22 18:31:47 -08:00
Bryan @ Aden f12ab10725 Merge pull request #4930 from Ttian18/fix/tina/shift-enter-newline-4565
fix(tui): add Ctrl+J as newline fallback in chat input
2026-02-23 02:31:47 +00:00
Bryan @ Aden 0882fa6ce5 Merge pull request #5165 from ishaannk/main
feat: add stop/cancel execution control for agents
2026-02-23 02:24:46 +00:00
Richard Tang 7994b90dfa feat: add the max_sub_agents config and constrain 2026-02-22 18:23:52 -08:00
Richard Tang 04b6a80370 feat: shared agent profile 2026-02-22 18:17:40 -08:00
RichardTang-Aden 0b87e4c45d Merge pull request #5245 from TimothyZhang7/main
doc(architecture): update documents
2026-02-22 18:04:51 -08:00
Timothy 9c7e846828 chore: put event loop node zoom inside worker bee graph 2026-02-22 18:03:31 -08:00
bryan 30bd0e483a home page and mock chatroom 2026-02-22 18:03:02 -08:00
Timothy 13cc93c334 chore: architecture 2026-02-22 17:54:12 -08:00
Timothy 564b1bb752 chore: roadmap diagram 2026-02-22 17:45:56 -08:00
bryan 2f31a92d31 Merge branch 'main' into feat/open-hive 2026-02-22 16:06:44 -08:00
ishaannk fd89c7f56f Fix: Add trailing newlines for ruff format compliance 2026-02-23 04:41:38 +05:30
bryan 35738c8279 react structure 2026-02-22 14:52:15 -08:00
vakrahul a0d14b8a25 fix(core): add zero-config local LLM support and fix AgentRunner crash (#3994) and adding docs 2026-02-22 22:59:11 +05:30
Timothy @aden 9c781ed78e Merge pull request #5224 from TimothyZhang7/feature/credential-v2
fix(micro-fix): tui select account
2026-02-21 23:13:18 -08:00
Timothy 460a24e34a fix: tui select account 2026-02-21 23:01:41 -08:00
Shivam Shahi– oss/acc 0f8627f17a format 2026-02-22 00:25:15 +05:30
JamieJiHeonKim 8ae030e16e docs: add link to linting and formatting setup in CONTRIBUTING.md 2026-02-21 13:00:46 -05:00
JamieJiHeonKim 3c6467c814 docs: fix unclosed code block in deep_research_agent README 2026-02-21 12:54:45 -05:00
ishank 2f11f0c911 Merge branch 'aden-hive:main' into main 2026-02-21 17:29:51 +05:30
ishaannk c3ae67fb1d address review comments: rename Stop to Pause and UI toggle change 2026-02-21 17:08:42 +05:30
Timothy @aden 8c750c7edd Merge pull request #5194 from Antiarin/fix/escalate-to-coder-execution-id
fix[bug](graph): add execution_id to base Runtime and restore ctx.execution_id in escalation handler
2026-02-21 03:05:14 -08:00
Antiarin 571838a289 fix(graph): add execution_id to NodeContext for escalate_to_coder 2026-02-21 16:20:04 +05:30
RichardTang-Aden dafaaae792 Merge pull request #5182 from TimothyZhang7/feature/credential-v2
Feature/credential v2
2026-02-20 19:52:00 -08:00
Timothy b45e14efb4 fix: zai api key setup 2026-02-20 19:40:07 -08:00
RichardTang-Aden e70cbf26e2 Merge pull request #5095 from NSkogstad-AUS/docs/update-tools-readme
Docs/update tools readme
2026-02-20 19:08:23 -08:00
RichardTang-Aden daafdc3704 Merge pull request #5103 from alhousseynou-ndiaye/feature/document-agent
docs: add document processing recipe
2026-02-20 19:06:59 -08:00
bryan 6661934fed harden server apis and agent loading 2026-02-20 18:28:52 -08:00
Nafiyad Adane f568728de1 Add competitive intelligence agent template
- Adds a new autonomous agent template that monitors competitor websites, news, and GitHub
- Implements a 7-node graph workflow to collect, aggregate, and analyze competitive data
- Generates a weekly structured HTML digest with key highlights and 30-day trends
- Utilizes existing web_scrape, web_search, and github MCP tools
- Addresses issue #4153

Closes #4153
2026-02-20 19:13:47 -07:00
bryan 263d35bbd6 Merge branch 'main' into feat/open-hive 2026-02-20 18:09:01 -08:00
Bryan @ Aden bece21d217 Merge pull request #5169 from Schlaflied/docs/sync-zh-CN-readme
docs(i18n): sync zh-CN.md with latest README and fix broken links
2026-02-21 02:06:48 +00:00
bryan d4788e147a backend apis for open hive 2026-02-20 18:01:51 -08:00
Timothy f4594ecf37 fix: gmail batch tool schema coercion 2026-02-20 17:53:35 -08:00
Bryan @ Aden 8f1462cb79 Merge pull request #5113 from vakrahul/features/stripe-tools
feat(tools): add Stripe payment processing integration
2026-02-21 01:44:23 +00:00
Timothy 76d4d0de69 feat: credential v2 with provider loading and test agent 2026-02-20 17:43:00 -08:00
vakrahul 6ab4e1d641 fix: address maintainer PR reviews feedback for Stripe 2026-02-21 07:06:20 +05:30
Zhang fc0c3e169f feat(tools): add Intercom tool with conversations, contacts, and tags (#4256) 2026-02-20 17:14:30 -08:00
Zhang 4760f95bda feat(credentials): add Intercom credential spec (#4256)
Register INTERCOM_ACCESS_TOKEN in INTEGRATION_CREDENTIALS for the
8 Intercom tools (search/get conversations, contacts, notes, tags,
assignment, teams). Tool implementation follows in subsequent commits.
2026-02-20 17:13:54 -08:00
vakrahul c5d87c99fd fix: address maintainer PR review feedback for Stripe 2026-02-21 06:30:31 +05:30
Schlaflied f53f403022 docs(i18n): sync zh-CN.md with latest README and fix broken links 2026-02-20 19:29:45 -05:00
Timothy b887b2951e wip: credential v2 2026-02-20 13:55:06 -08:00
ishaannk 842b69b155 feat: add stop/cancel execution control for agents 2026-02-21 02:03:29 +05:30
Nicolas Suescun d6c34106fc docs: fix CLI arguments mismatch for test-debug and test-list (#4113)
* docs: fix CLI usage args for test-debug/test-list to match implementation

* docs: restore 'uv run' prefix to test commands

Reverts unintentional removal of 'uv run' in usage examples as requested in code review.

* chore: changes to .gitignore
2026-02-20 17:58:17 +08:00
Nihal 67cbd31280 fix(graph): harden JSON parsing for async safety and large LLM outputs (#4869)
* perf(json): add json.loads fast path + asyncio.to_thread for extract_json

Addresses maintainer feedback:
- json.loads candidate fast path in find_json_object (300x speedup source)
- asyncio.to_thread wrappers for both _extract_json call sites (unblocks event loop)
- Remove ~480 lines of over-engineered incremental parsing logic

Total: ~16 lines, zero duplication, zero API surface change

* fix: simplify async JSON handling per maintainer feedback and align tests

* fix(test): replace tautology assertion in test_mismatched_then_valid

The original assertion `assert result is not None or result is None`
is always true. Replace with a meaningful type check.

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-02-20 17:23:25 +08:00
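A small sketch of the two ideas in this fix, with simplified helper names: try json.loads on the whole candidate first (the fast path), and run the extraction in a worker thread so large LLM outputs do not stall the event loop.

    import asyncio
    import json

    def find_json_object(text: str):
        # Fast path: the candidate is already a valid JSON document.
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            pass
        # Fallback: take the outermost {...} span (heavily simplified here).
        start, end = text.find("{"), text.rfind("}")
        if start != -1 and end > start:
            try:
                return json.loads(text[start:end + 1])
            except json.JSONDecodeError:
                return None
        return None

    async def extract_json(text: str):
        # Heavy parsing moves off the event loop via a worker thread.
        return await asyncio.to_thread(find_json_object, text)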
Utkarsh Singh cd0cf69099 feat(tools): add Brevo transactional email and SMS integration
- Add brevo_tool with 6 MCP tools: brevo_send_email, brevo_send_sms,
  brevo_create_contact, brevo_get_contact, brevo_update_contact,
  brevo_get_email_stats
- Add CredentialSpec for BREVO_API_KEY in credentials/brevo.py
- Register brevo_tool in tools/__init__.py and credentials/__init__.py
- Add README with setup instructions and usage examples
- Add 34 unit tests covering all tools, validation and error handling

Closes #5127
2026-02-20 13:19:07 +05:30
Timothy @aden cf877f2b49 Merge pull request #5121 from TimothyZhang7/fix/credential-error-types
Fix(micro-fix)/credential error types
2026-02-19 16:23:31 -08:00
Timothy 6f34cb2c8a fix: credential error types 2026-02-19 14:52:29 -08:00
Richard Tang a04a8a866d fix: sub-agents reachability check 2026-02-19 11:33:32 -08:00
Timothy b88aa2b53c Merge branch 'feature/tui-credential-setup' 2026-02-19 11:12:28 -08:00
Timothy 356cab19eb Merge branch 'fix/google-tool-healthcheck' into feature/tui-credential-setup 2026-02-19 11:12:15 -08:00
vakrahul 7c6d5fa446 test_credentials changes 2026-02-19 19:46:43 +05:30
vakrahul 2dae3e47fd test_credentials changes 2026-02-19 19:43:45 +05:30
vakrahul 6fce789607 feat: add Stripe tool integration and tests 2026-02-19 17:14:57 +05:30
vakrahul 9bbb5b38e6 feat: add Stripe tool integration and tests 2026-02-19 16:55:22 +05:30
vakrahul ac73aa93bf feat: add Stripe tool integration and tests 2026-02-19 16:51:08 +05:30
Timothy 52a56e4a10 fix: google tools need healthcheck 2026-02-18 23:07:12 -08:00
alhousseynou-ndiaye a1cede510d docs: add document processing recipe 2026-02-19 07:47:13 +01:00
Timothy @aden 682c10e873 Merge pull request #5099 from TimothyZhang7/main
release(docs): v0.5.1
2026-02-18 22:11:45 -08:00
Timothy 5605e24a0d fix: streaming output leakage 2026-02-18 22:10:02 -08:00
Timothy f7268a44d9 fix: worker credential setup 2026-02-18 21:50:18 -08:00
Timothy af7a4ff4e8 release: v0.5.1
- Bump framework version 0.5.0 → 0.5.1
- Add CHANGELOG.md with full release notes

Highlights: Hive Coder meta-agent, multi-graph runtime, TUI revamp,
subscription model support, 5 new tool integrations, deprecated node
type removal.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 21:15:20 -08:00
Timothy 60b9c0d763 release: v0.5.1
- Bump framework version 0.5.0 → 0.5.1
- Add CHANGELOG.md with full release notes

Highlights: Hive Coder meta-agent, multi-graph runtime, TUI revamp,
subscription model support, 5 new tool integrations, deprecated node
type removal.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 21:00:41 -08:00
Timothy @aden 5c550270c6 Merge pull request #5071 from TimothyZhang7/feature/queen-bee
Feature/queen bee
2026-02-18 20:59:04 -08:00
Timothy e03fd48e48 fix: lint 2026-02-18 20:51:40 -08:00
Timothy 6420c74c24 fix: ci tests 2026-02-18 20:47:07 -08:00
Timothy ad74351530 fix: agent switch return 2026-02-18 20:39:25 -08:00
Timothy @aden 1b5f656429 Merge branch 'main' into feature/queen-bee 2026-02-18 20:34:19 -08:00
Timothy @aden 132d84c529 Merge pull request #5058 from adenhq/fix/deprecation
fix(arch): remove all deprecated concepts and deadcodes
2026-02-18 20:32:24 -08:00
Timothy @aden a03b378e9b Merge branch 'main' into fix/deprecation 2026-02-18 20:29:39 -08:00
Timothy 74635e1d7d feat: subscription model support, tui revamp 2026-02-18 20:28:11 -08:00
Bryan @ Aden 893053ede7 Merge pull request #5098 from adenhq/update/inbox-agent-fixes
(micro-fix): Update/inbox agent fixes
2026-02-19 03:35:25 +00:00
bryan 596ec6fec5 fixed credentials 2026-02-18 19:26:59 -08:00
bryan 5863b83172 Merge branch 'main' into update/inbox-agent-fixes 2026-02-18 19:12:01 -08:00
bryan 20c92b197a fixes to inbox agent 2026-02-18 19:08:55 -08:00
Richard Tang 8c9baa62b0 feat: create default hive profile for browser use 2026-02-18 18:10:37 -08:00
RichardTang-Aden ec9c6b4666 Merge pull request #5097 from RichardTang-Aden/feat/credential-setup-cli
Feat/credential setup cli
2026-02-18 17:22:53 -08:00
Richard Tang 8a73e5c119 chore: ruff lint 2026-02-18 17:21:45 -08:00
Richard Tang 717f0eee9a Merge branch 'main' into feat/credential-setup-cli 2026-02-18 17:20:40 -08:00
Richard Tang 09fb47f089 chore: ruff format 2026-02-18 17:14:26 -08:00
Richard Tang b46d943e71 chore: lint issues 2026-02-18 17:13:01 -08:00
Richard Tang 262eaa6d84 feat: mcp dependencies for gcu 2026-02-18 16:34:19 -08:00
NSkogstad-AUS b980d6f6ab docs(tools): fixed small inaccuracy with gmail description 2026-02-19 11:33:50 +11:00
NSkogstad-AUS 61f27369ef docs(tools): update Available Tools table with additional search functionalities 2026-02-19 11:28:18 +11:00
NSkogstad-AUS 204b0b4744 docs(tools): expand Available Tools table with all tools by category
Previously the table listed ~20 of ~50 available tools. This expands
it to cover all tools, grouped into categories: File System, Data Files,
Web & Search, Communication, Productivity & CRM, Cloud & APIs,
Security, and Utilities.

All tool names verified against registered @mcp.tool() functions in source.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-19 11:20:49 +11:00
Richard Tang fc1a48f3bc feat: breaking the browser use tools by types 2026-02-18 16:10:17 -08:00
Richard Tang 060f320cd1 feat(wip): gcu node and basic browser tools 2026-02-18 15:52:46 -08:00
Timothy 1b6ebb1e42 fix: put guardian back to hive coder 2026-02-18 15:06:25 -08:00
Richard Tang bff32bcaa3 feat: allow sub_agent in the agent framework 2026-02-18 14:43:01 -08:00
Timothy 7dfc75b3e6 feat: multi-graph agent session 2026-02-18 12:46:59 -08:00
Richard Tang 2920b5ab01 chore: lint issues 2026-02-17 20:05:05 -08:00
Richard Tang 81ad0467b0 Merge branch 'main' into fix/deprecation 2026-02-17 20:02:47 -08:00
Richard Tang 115ca55ea0 fix: broken ci tests 2026-02-17 20:00:47 -08:00
Richard Tang f2814a26e6 chore: lint issue 2026-02-17 19:57:31 -08:00
Richard Tang 4d309950b0 fix: unused code and ci 2026-02-17 19:55:54 -08:00
RichardTang-Aden 39216a4c12 Merge pull request #5016 from adenhq/feat/pdf-ingestion
Feat/pdf ingestion
2026-02-17 19:29:35 -08:00
Aden HQ c7fa621aeb Merge branch 'main' into feat/pdf-ingestion 2026-02-17 19:28:54 -08:00
Timothy 5914d28cbe feat(queen): hive queen bee implementation v1 2026-02-17 19:19:09 -08:00
Richard Tang 8c3ad3d70a fix: email agent version 2026-02-17 19:17:07 -08:00
Richard Tang 9eb3fc6285 fix: fix email agent version 2026-02-17 19:09:17 -08:00
Richard Tang e95f7e7339 fix: make the email inbox management agent identical to main 2026-02-17 19:02:08 -08:00
RichardTang-Aden d949551399 Merge pull request #3332 from haliaeetusvocifer/feat/google-docs-integration
added feature google docs integration
2026-02-17 18:25:03 -08:00
Richard Tang a7dbd85ed4 fix: google docs credentials 2026-02-17 18:24:31 -08:00
Richard Tang 1f288dab1c fix: tools registration problems 2026-02-17 18:15:41 -08:00
Richard Tang 021754d941 Merge branch 'main' into feat/google-docs-integration 2026-02-17 18:05:07 -08:00
bryan 7412904fbf update to job hunter and website vulnerability 2026-02-17 16:58:12 -08:00
Timothy cd1976e2b9 feat: support openai compatible endpoints 2026-02-17 16:27:36 -08:00
haliaeetusvocifer 5f3e9379a3 completed task 2026-02-17 23:59:47 +00:00
Richard Tang 0e565d6cea feat: add the agent start confirmation and credential update option 2026-02-17 13:17:03 -08:00
Richard Tang 67b249dcd5 feat: add the credential setup step after credential validation 2026-02-17 12:59:20 -08:00
Timothy bbf1c8c790 fix(arch): remove all deprecated concepts and deadcodes 2026-02-17 10:59:15 -08:00
Amdev-5 9744363342 fix(lusha): address PR review round 2 — structured filters, pagination, correct types
- search_people: replaced freetext searchText concatenation with proper
  structured Lusha API filters (jobTitles, seniority as list[int],
  departments, locations as dict, company_names, industry_ids, search_text)
- search_companies: added locations, company_names, search_text params;
  made all params optional for flexible queries
- Pagination: exposed limit param (clamped 10-50 per Lusha API constraints)
  on both search tools, replacing hardcoded size=25
- get_signals: changed ids from list[str] to list[int], removed internal
  str-to-int conversion as Lusha IDs are always numeric
- seniority type corrected to list[int] (API rejects string-encoded values
  despite OpenAPI spec suggesting strings — verified via live integration)
- Unit tests updated for all changes (19/19 pass)

Verified against live Lusha API: all 6 tools return correct responses.
2026-02-17 22:00:09 +05:30
Amdev-5 6fe8439e94 fix(lusha): use mainIndustriesIds for company search, safer credential handling
- search_companies: replace names filter with mainIndustriesIds (numeric
  industry IDs) per Lusha API schema. Parameter changed from
  industry: str to industry_ids: list[int] | None.
- _get_api_key: return None instead of raising TypeError on unexpected
  credential type. Lets _get_client handle it with the standard error dict
  pattern used across all tools.
- Updated unit tests for new industry_ids parameter and added test for
  non-string credential handling.
2026-02-17 21:33:02 +05:30
Amdev-5 8e61ffe377 fix(tools): remove invalid searchText field from Lusha prospecting filters
Lusha API rejects filters.companies.include.searchText (HTTP 400).
Replaced with valid 'names' field in search_companies and removed
redundant company searchText from search_people. Updated unit tests.
2026-02-17 21:33:02 +05:30
Amdev-5 723476f7a7 feat(tools): add Lusha MCP integration with credentials and health checks 2026-02-17 21:33:02 +05:30
karthik-kotra 41cd11d5c9 docs(setup): add troubleshooting steps for common WSL setup issues 2026-02-17 07:30:00 +00:00
IamSayeed 0f253027ae Merge branch 'main' into feature/add-asana-integration 2026-02-17 12:20:01 +05:30
Sayeed Rizwan 6053895a82 fix(asana): resolve from PR feedback - refactor client, fix specs, add tests 2026-02-17 12:18:06 +05:30
bryan 44a8b453b5 Merge branch 'main' into feat/pdf-ingestion 2026-02-16 18:40:46 -08:00
bryan 26511fe962 added pdf select and updated job hunter 2026-02-16 18:38:13 -08:00
RichardTang-Aden ce5893216a Merge pull request #4871 from paarths-collab/docs/root-install-warning
docs(readme): clarify uv workspace setup and prevent root pip install misuse
2026-02-16 18:36:57 -08:00
RichardTang-Aden 4e821e4dbf Merge pull request #5011 from RichardTang-Aden/main
micro-fix: chore: update the intro message of the agent
2026-02-16 18:05:18 -08:00
Richard Tang d11e97de59 chore: update the intro message of the agent 2026-02-16 18:03:58 -08:00
RichardTang-Aden 4b10d3e360 Merge pull request #5010 from RichardTang-Aden/main
feat: merge the sample agent so we have one email inbox management agent
2026-02-16 18:00:31 -08:00
Richard Tang e04479930f chore: update descriptions 2026-02-16 17:56:17 -08:00
Richard Tang 8a8c4cc3f5 chore: rename the email 2026-02-16 17:53:05 -08:00
Richard Tang 1e06ff611e refactor: merge the sample agent so we have one email inbox management agent 2026-02-16 17:43:18 -08:00
Pravin Mishra 1edc7bb9c7 feat(tools): add Discord integration (#2913) (#4247)
* feat(tools): add Discord integration (#2913)

- discord_list_guilds: list servers the bot is in
- discord_list_channels: list channels for a guild
- discord_send_message: send message to channel
- discord_get_messages: get recent messages

Auth: DISCORD_BOT_TOKEN, credential spec, health checker.
Uses Discord API v10 (Bot token).

Co-authored-by: Cursor <cursoragent@cursor.com>

* style: apply ruff format to discord tool files

Co-authored-by: Cursor <cursoragent@cursor.com>

* feat(discord): add rate limit handling, message validation, channel filter

- Rate limit (429): return clear error with retry_after from API
- Message length: validate before send, max 2000 chars per Discord limit
- Channel filter: text_only param (default True) for list_channels
- Add 6 new tests for rate limit, validation, filtering

Co-authored-by: Cursor <cursoragent@cursor.com>

* feat(discord): add retry on 429 rate limit

- Retry up to 2 times using Discord's retry_after
- Cap wait at 60s, fallback to exponential backoff if no retry_after
- Add _request_with_retry helper for all API calls
- Add 3 tests: retry then success, retry exhausted, tool-level retry

Co-authored-by: Cursor <cursoragent@cursor.com>

* fix(discord): remove unused DISCORD_API_BASE import

Co-authored-by: Cursor <cursoragent@cursor.com>

---------

Co-authored-by: mishrapravin114 <mishrapravin114@users.noreply.github.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-17 09:29:56 +08:00
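A rough sketch of the _request_with_retry idea from the commit above (not the actual implementation): retry up to twice on HTTP 429, honoring Discord's retry_after when present, capped at 60 seconds, with exponential backoff as the fallback.

    import time
    import requests

    def request_with_retry(method: str, url: str, headers: dict,
                           max_retries: int = 2, **kwargs) -> requests.Response:
        resp = requests.request(method, url, headers=headers, timeout=30, **kwargs)
        for attempt in range(max_retries):
            if resp.status_code != 429:
                break
            # Prefer the API-provided retry_after; fall back to exponential backoff; cap at 60s.
            retry_after = resp.json().get("retry_after", 2 ** attempt)
            time.sleep(min(float(retry_after), 60.0))
            resp = requests.request(method, url, headers=headers, timeout=30, **kwargs)
        return resp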
Shivam Shahi– oss/acc ceffa38717 Merge branch 'main' into feat/zoho-crm 2026-02-17 02:46:29 +05:30
Siddharth Varshney 7b1e0af155 feat(utils): add proper __init__.py exports for utils module (#3979) 2026-02-16 20:20:09 +08:00
Jeet Karia 7b15616e29 feat(tools): add Exa Search API integration with 4 MCP tools (#4941)
Implements AI-powered web search, content extraction, and research tools
via the Exa API for agent workflows.

Tools: exa_search, exa_find_similar, exa_get_contents, exa_answer

Follows existing tool pattern (web_search_tool, hubspot_tool, slack_tool):
- register_tools(mcp, credentials) with @mcp.tool() decorators
- Credential fallback: CredentialStoreAdapter -> EXA_API_KEY env var
- Error handling: always returns dicts, never raises
- Retry with exponential backoff on HTTP 429

Includes:
- Neural/keyword search with domain, date, and category filters
- Similar page discovery via neural embeddings
- Content extraction from up to 10 URLs per request
- Citation-backed answer generation
- CredentialSpec in credentials/search.py
- Comprehensive unit tests (21 tests)
- 500/500 integration CI tests passing

Fixes #4177
2026-02-16 19:28:32 +08:00
Your hh3538962 ae205fa3f2 fix(tools): address Power BI integration code review feedback
- Fix export endpoint: /Export -> /ExportTo
- Add 202 Accepted response handling
- Add notifyOption to refresh_dataset API call
- Rename format parameter to export_format (avoid shadowing builtin)
- Add PNG support to export formats
- All critical API issues from review addressed
2026-02-16 14:00:09 +05:00
Zhang bd7d2277d8 fix(tui): add Ctrl+J as newline fallback in chat input
Terminals without extended key reporting (VS Code, Cursor) send
identical events for Enter and Shift+Enter, making it impossible
to insert newlines. Ctrl+J produces a distinct key event in all
terminals.
2026-02-15 20:37:52 -08:00
Shivam Shahi– oss/acc 99ed00fd02 feat(tools): add Razorpay payment processing integration (#4467)
* feat(tools): add Razorpay payment processing integration

Add Razorpay MCP tool integration for payment processing, invoicing,
and refund management. Implements 6 MCP tools:

- razorpay_list_payments: List recent payments with filters (pagination, date range)
- razorpay_get_payment: Fetch detailed payment information by ID
- razorpay_create_payment_link: Create one-time payment links with shareable URLs
- razorpay_list_invoices: List invoices with status and type filtering
- razorpay_get_invoice: Fetch invoice details including line items
- razorpay_create_refund: Create full or partial refunds for payments

Features:
- Authentication via HTTP Basic Auth (RAZORPAY_API_KEY + RAZORPAY_API_SECRET)
- Credential spec in dedicated razorpay.py (follows repo pattern)
- Comprehensive error handling (401, 403, 404, 400, 429, 500, timeouts)
- Input validation (payment IDs, invoice IDs, amounts, currencies)
- Full test coverage (42 unit tests, 26 integration tests)

Closes #4404

* style: fix ruff I001 import order and W291 in tools

* fix: improve Razorpay credential tracking and validation

- Add razorpay_secret CredentialSpec with credential_group
- Fix amount=0 bug by using 'is not None' checks
- Add regex validation for payment/invoice IDs

* fix: use graceful credential handling instead of raising TypeError

Match codebase convention (calcom, lusha) - return None for non-string
credentials instead of raising TypeError, so the tool returns an error
dict instead of crashing.

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-02-16 12:02:16 +08:00
Timothy @aden f7af5f9ee8 Merge pull request #4926 from TimothyZhang7/example/gmail-inbox-guardian-agent
chore(micro-fix): change the timer to 5 minutes
2026-02-15 19:30:27 -08:00
RichardTang-Aden e5bcc8005f Merge pull request #4922 from adenhq/feat/vulnerability_agent
Feat/vulnerability agent
2026-02-15 19:21:00 -08:00
Timothy 352d285212 chore: change the timer to 5 minutes 2026-02-15 18:47:22 -08:00
Timothy @aden 3ef60f9d14 Merge pull request #4925 from TimothyZhang7/example/gmail-inbox-guardian-agent
Example(micro-fix)/gmail inbox guardian agent
2026-02-15 18:44:19 -08:00
Timothy a103312127 feature: display timer status interactively 2026-02-15 18:41:45 -08:00
Timothy 3d0bba4167 example(agents): ready-to-use gmail automation agent 2026-02-15 18:34:14 -08:00
Timothy @aden 3df718cc14 Merge pull request #4920 from TimothyZhang7/fix/issue-4905
Fix/issue 4905
2026-02-15 18:08:28 -08:00
RichardTang-Aden c7497a180e Merge pull request #4918 from TimothyZhang7/feature/multi-entry-event-driven-agents
Fix/multi entry event driven agents
2026-02-15 18:05:43 -08:00
Bryan @ Aden 3f39039a21 Merge pull request #4800 from LukeM94/doc/typo-fixes-in-roadmap.md
doc: Fix typos in docs/roadmap.md
2026-02-16 01:57:35 +00:00
Bryan @ Aden 88fbd90fcc Merge pull request #4799 from zhanglinqian/fix-recipes-readme-links
docs: fix incorrect directory names in recipes README
2026-02-16 01:55:51 +00:00
bryan e0bf09dd78 lint fixes 2026-02-15 17:45:56 -08:00
bryan 3e158b07af Merge branch 'main' into feat/vulnerability_agent 2026-02-15 17:35:59 -08:00
Timothy 5319ed7ee1 chore: remove unsolicited docs 2026-02-15 17:32:36 -08:00
Timothy 978904d2a4 fix(executor): async operations on non-streaming llm complete for healthy event loop 2026-02-15 17:31:18 -08:00
bryan 4d876ecc54 vulnerability check to sample agents 2026-02-15 17:27:09 -08:00
RichardTang-Aden ba327d0b9e Merge pull request #4919 from adenhq/feat/sample-agent/job_hunter
Sample agent, micro-fix: remove dependency of brave search
2026-02-15 16:58:06 -08:00
Richard Tang b69cf3523c feat: remove dependency of brave search 2026-02-15 16:47:53 -08:00
Timothy 4d8c8e9308 feat(arch): architecture patches to support multi-entry agents consuming external events 2026-02-15 16:19:58 -08:00
Shivam Shahi– oss/acc 669a05892b Merge branch 'main' into feat/zoho-crm 2026-02-15 21:47:52 +05:30
Amit Kumar b70885934c [Integration]: Cal.com - Open Source Scheduling Infrastructure #3188 (#3255)
* feat(tools): add Cal.com scheduling integration with 8 MCP tools

Adds Cal.com API integration for booking and scheduling management:
- calcom_list_bookings, calcom_get_booking, calcom_create_booking, calcom_cancel_booking
- calcom_get_availability, calcom_update_schedule
- calcom_list_event_types, calcom_get_event_type

Includes dedicated credential spec, 20 unit tests, and full integration conformance.
Resolves #3188

* fix(calcom): address PR review + add missing list_schedules MCP tool

- Add isinstance(api_key, str) guard in _get_api_key, return None on
  non-string values for consistent error-dict handling
- Remove duplicate metadata from responses object in create_booking;
  metadata stays at top-level per Cal.com v1 API spec
- Expose availability parameter in calcom_update_schedule MCP tool
- Add calcom_list_schedules tool wrapping existing client method —
  needed to discover schedule IDs before calling update_schedule
- Register calcom_list_schedules in credential spec for CI conformance
- Add tests for credential handling, schedule availability, and
  list_schedules (24 tests total, 9 tools)

* docs: update README to include calcom_list_schedules (9 tools)

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-02-15 20:14:00 +08:00
paarths-collab 722b087fc0 docs(readme): clarify installation and prevent root pip install misuse 2026-02-15 17:39:41 +05:30
IamSayeed 4898a9759a Merge branch 'main' into feature/add-asana-integration 2026-02-15 13:07:15 +05:30
Sayeed Rizwan 2c2fa25580 fix: Resolve merge conflicts in credential and tool registries 2026-02-15 13:00:23 +05:30
Sayeed Rizwan 56496d7dbd feat: Add Asana integration for project management automation
- Implement 25 MCP tools for comprehensive Asana operations
  - Task management (create, update, search, delete, complete, comment, subtask)
  - Project management (create, update, list, get tasks)
  - Workspace & team operations (list workspaces, get users)
  - Section management for Kanban workflows
  - Tag and custom field support

- Add Personal Access Token (PAT) authentication
- Use official asana>=3.2.0 Python SDK (v5+ API)
- Include comprehensive error handling with ApiException
- Add 5 unit tests with 100% pass rate
- Provide detailed documentation and usage examples

Technical Details:
- Uses asana.ApiClient with Configuration pattern
- Implements workspace resolution by name or GID
- Handles paginated responses automatically
- Follows CredentialStoreAdapter pattern
- Matches existing tool structure (slack_tool, github_tool)

Closes #4156
2026-02-15 11:33:17 +05:30
Aaryann Chandola 0c7ea272db [integration] feat(tools): add Google Calendar integration (#3171)
* feat(calendar): add Google Calendar integration with event management tools and health checks

* fix(calendar): align google_calendar_oauth credential spec with codebase pattern
2026-02-15 08:25:53 +08:00
y0sif dd0696e44d chore: resolve merge conflicts with main 2026-02-14 21:38:44 +02:00
y0sif dcda273e0b chore: resolve merge conflicts with main 2026-02-14 21:32:33 +02:00
y0sif f3b159c650 docs(tools): document Attio CRM in README 2026-02-14 21:23:47 +02:00
y0sif 06df037e28 chore: add Attio credentials to test spec file 2026-02-14 21:22:55 +02:00
y0sif e814e516d1 chore: add Attio credentials to init file 2026-02-14 21:21:37 +02:00
y0sif 0375e068ed test(tools): add Attio tool tests 2026-02-14 21:20:03 +02:00
y0sif 34ffc533d3 feat(tools): add Attio CRM integration 2026-02-14 21:19:14 +02:00
Aaryann Chandola 5e4f322fc0 add new time tool for current date/time retrieval (#3425) 2026-02-14 21:57:07 +08:00
zhanglinqian c02e45f1aa docs: fix incorrect directory names in recipes README 2026-02-14 21:19:47 +08:00
LukeM94 a7217f138c Fix typos in docs/roadmap.md
Correct hyphenation and spelling in the product roadmap: change 'outcome oriented' to 'outcome-oriented' and fix 'Workder' to 'Worker' in the Deployment section.
2026-02-14 13:18:03 +00:00
mubarakar95 ea2ea1a4ae Merge branch 'main' into integration/apify 2026-02-14 17:53:39 +05:30
mubarakar95 9e11947687 style: apply ruff formatting to apify_tool.py 2026-02-14 17:22:35 +05:30
mubarakar95 47117281e1 fix(test): resolve E501 line too long in test_apify_tool.py 2026-02-14 17:22:33 +05:30
mubarakar95 032dd13f5a feat(tools): implement Apify integration with 4 tools and comprehensive tests
- Added credential spec with health check endpoint
- Implemented apify_run_actor (sync/async execution)
- Implemented apify_get_dataset (result retrieval)
- Implemented apify_get_run (status checking)
- Implemented apify_search_actors (marketplace search)
- Created comprehensive README with examples and use cases
- Added 24 unit tests with mocked API responses
- All tests passing, conformance validated, linting clean

Resolves: #4510
2026-02-14 17:22:25 +05:30
Emmanuel Nwanguma 3502f25048 [Integration] feat(tools): add BigQuery MCP tool for SQL querying and data analysis (#3350)
* feat(tools): add BigQuery MCP tool for SQL querying and data analysis

- Add run_bigquery_query tool for executing read-only SQL queries
- Add describe_dataset tool for exploring dataset schemas
- Implement safety features: read-only enforcement, row limits (max 10k)
- Add comprehensive unit tests (27 tests passing)
- Follow CredentialStoreAdapter pattern from email tool
- Support ADC and service account authentication

Fixes #3067

* fix(bigquery): address PR review feedback

- Add credential_id and api_key_instructions to CredentialSpec
- Fix credential key name from 'bigquery_credentials' to 'bigquery'
- Pass credential path to BigQuery client via environment variable
- Fix ADC error message detection for both error variants
- Move google-cloud-bigquery to optional dependencies
- Update tests to use correct credential key names

All 27 tests passing

* fix(bigquery): return {error, help} when dependency missing

* fix(bigquery): return full ImportError message for missing dependency

* fix(bigquery): include 'help' key when dependency missing
2026-02-14 19:48:03 +08:00
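The BigQuery review fixes above settle on returning an {error, help} dict carrying the full ImportError message when the optional dependency is missing. A rough sketch of that guard, with the message wording and query body assumed rather than copied from the tool:

```python
# Sketch of the optional-dependency guard described above; only the {error, help}
# shape and the use of the full ImportError text come from the commit notes.
def run_bigquery_query(sql: str) -> dict:
    try:
        from google.cloud import bigquery  # optional dependency
    except ImportError as exc:
        return {
            "error": f"google-cloud-bigquery is not installed: {exc}",
            "help": "Install the optional dependency, e.g. `uv add google-cloud-bigquery`.",
        }
    client = bigquery.Client()
    rows = client.query(sql).result(max_results=10_000)  # read-only, row-limited
    return {"rows": [dict(row) for row in rows]}
```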
mubarakar95 13d8ebbeff feat: Add Apify integration (issue #4510)
Implements comprehensive Apify integration for web scraping and automation:

- Added 4 new tools: apify_run_actor, apify_get_dataset, apify_get_run, apify_search_actors
- Credential management for APIFY_API_TOKEN with health check
- Support for synchronous (wait=True) and asynchronous (wait=False) actor execution
- Actor ID validation and comprehensive error handling
- Full test coverage (26 tests passing)
- README with usage examples and documentation

Addresses #4510
2026-02-14 11:53:56 +05:30
RichardTang-Aden 93c026fe31 Merge pull request #4759 from adenhq/feat/sample-agent/job_hunter
[Feature][Sample Agent]: Job Hunting Agent
2026-02-13 20:24:35 -08:00
bryan e515977b96 Merge branch 'main' into feat/vulnerability_agent 2026-02-13 20:16:48 -08:00
bryan 045490a097 testing agent run 1-5 2026-02-13 20:16:12 -08:00
Richard Tang b25903fb7f feat: init job hunter 2026-02-13 20:08:55 -08:00
bryan acf4bd5152 tools for sample agent 2026-02-13 19:14:59 -08:00
Timothy 1f5711e1a1 Merge branch 'fix/transient-error-handlings' into feat/inbox-management 2026-02-13 18:53:33 -08:00
RichardTang-Aden ca2dd90313 Merge pull request #4610 from adenhq/feat/inbox-management
Feat/inbox management
2026-02-13 18:51:26 -08:00
Timothy 21e07f3b65 Merge branch 'feature/load-by-bytes' into feat/inbox-management 2026-02-13 18:43:45 -08:00
Timothy e8a06ddd34 feat: load by bytes instead of rows 2026-02-13 18:41:16 -08:00
bryan 34cc09904f fix pytest 2026-02-13 18:27:22 -08:00
bryan f6bba8b62f Merge branch 'main' into feat/inbox-management 2026-02-13 18:17:57 -08:00
bryan d241ad60f8 updated tui to two panel 2026-02-13 18:13:12 -08:00
Timothy 5a3fcf9a8a Merge branch 'main' into feat/inbox-management 2026-02-13 16:39:07 -08:00
Timothy 1f8a47203f fix: common transient errors and loop detection 2026-02-13 16:14:43 -08:00
RichardTang-Aden 7240090274 Merge pull request #4525 from e-cesar9/fix/4428-windows-defender-exclusions
perf(windows): add Windows Defender exclusions for 40% faster uv sync
2026-02-13 15:49:58 -08:00
Richard Tang 2e6a47c2df fix(windows): replace unicode bullets with ASCII dashes
The bullet character (•) cannot be displayed properly in PowerShell
on some Windows systems. Use ASCII dash (-) instead for compatibility.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-13 15:48:48 -08:00
Richard Tang 7f5ecd7913 fix: uv local storage 2026-02-13 15:47:57 -08:00
RichardTang-Aden 105b98b113 Merge pull request #4646 from jaathavan18/docs/issue-4645-fix-exports-link
docs: fix broken exports/ link in environment-setup.md
2026-02-13 15:20:07 -08:00
RichardTang-Aden 114e65ab41 Merge pull request #4649 from jaathavan18/docs/issue-4648-fix-setup-python-ref
docs: fix references to nonexistent setup-python.sh
2026-02-13 15:19:50 -08:00
RichardTang-Aden 0fc13a5cc3 Merge pull request #4660 from Hima-de/fix/antigravity-setup-docs
docs: fix nonexistent setup-python.sh reference in antigravity-setup.md
2026-02-13 15:19:28 -08:00
Shivam Shahi– oss/acc 2efa0e01df ruff format fix 2026-02-14 00:35:30 +05:30
bryan e651799e9e Merge branch 'main' into feat/inbox-management 2026-02-13 10:34:33 -08:00
Timothy @aden fcd3e514de Merge pull request #4722 from TimothyZhang7/main
chore(micro-fix): fix removed message
2026-02-13 09:43:05 -08:00
Timothy 7ab41de3a2 chore: fix removed message 2026-02-13 09:40:37 -08:00
Timothy @aden 58e023f277 Merge pull request #4720 from TimothyZhang7/feature/event-source
Feature/event source
2026-02-13 09:37:13 -08:00
Timothy @aden a98f2d5b86 Merge pull request #4647 from TimothyZhang7/feature/memory-inheritance
feat:phased compaction and event bus integration
2026-02-13 09:25:44 -08:00
Shivam Shahi– oss/acc 6044369fdf feat(tools): add Zoho CRM v8 integration with OAuth2 and MCP tools
Add Zoho CRM MCP integration for lead/contact/account/deal workflows with notes support. Implements 5 MCP tools:
- zoho_crm_search: Search Leads/Contacts/Accounts/Deals by criteria or word with pagination
- zoho_crm_get_record: Fetch a single record by module and ID
- zoho_crm_create_record: Create records with pass-through field payloads
- zoho_crm_update_record: Update records by ID with partial field payloads
- zoho_crm_add_note: Create notes linked to CRM records via Parent_Id mapping

Features:
- Zoho OAuth2 provider added in core credentials (refresh-token flow)
- Zoho auth format: Authorization: Zoho-oauthtoken <token>
- Region/DC-aware routing using accounts domain/region + api_domain usage
- Persisted DC metadata on refresh (api_domain/accounts_domain/location)
- Credential spec and health check registration for zoho_crm
- Tool registration and allowed-tool list updates
- Normalized tool responses with retriable 429 handling
- README with setup, auth modes, usage, and testing instructions
- Comprehensive unit/integration coverage updates for tool, provider, and health checks

Validation:
- Scoped ruff lint/format checks passed
- Targeted test suite passed: 563 passed, 18 skipped

Closes #4418
2026-02-13 18:28:12 +05:30
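The Zoho commit above documents the auth header format (`Authorization: Zoho-oauthtoken <token>`) and DC-aware routing via the api_domain returned with the token. A minimal sketch of a single-record fetch under those rules; the function name, endpoint path, and default domain are placeholders, not the integration's actual code:

```python
import httpx

# Header format is taken from the commit message; api_domain normally comes from
# the OAuth refresh response so requests hit the account's data centre.
def zoho_crm_get_record(module: str, record_id: str, access_token: str,
                        api_domain: str = "https://www.zohoapis.com") -> dict:
    headers = {"Authorization": f"Zoho-oauthtoken {access_token}"}
    url = f"{api_domain}/crm/v8/{module}/{record_id}"
    resp = httpx.get(url, headers=headers, timeout=30)
    if resp.status_code == 429:
        return {"error": "rate limited", "retriable": True}
    resp.raise_for_status()
    return resp.json()
```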
Amit Kumar eca43231c0 [Feature]: Add Excel Tool for reading/writing .xlsx/.xlsm files #2675 (#3004)
* feat(tools): add Excel tool for reading/writing .xlsx/.xlsm files

Add excel_tool module with 7 MCP tools:
- excel_read: Read data from Excel files with pagination
- excel_write: Create new Excel files
- excel_append: Append rows to existing Excel files
- excel_info: Get file metadata (sheets, columns, rows)
- excel_sheet_list: List sheet names in a file
- excel_sql: Query Excel with SQL (DuckDB), multi-sheet support
- excel_search: Search values across sheets with match options

Includes 56 tests, openpyxl optional dependency, and documentation.

Fixes #2675

* fix(tools): address excel review feedback and stabilize tests
2026-02-13 19:32:07 +08:00
Amit Kumar 6763077887 feat(tools): add Google Maps Platform integration with 6 MCP tools (#3796)
Implements geocoding, routing, and location intelligence via Google Maps
Platform Web Services APIs for logistics, delivery, and location-based
agent workflows.

Tools: maps_geocode, maps_reverse_geocode, maps_directions,
maps_distance_matrix, maps_place_details, maps_place_search

Includes:
- page_token support for paginated place search results
- GoogleMapsHealthChecker for credential validation
- Comprehensive unit tests (42 tool tests, 30 health check tests)

Closes #3179
2026-02-13 19:01:28 +08:00
Siddharth Varshney f85ff8a2f8 feat(tools): add Telegram Bot integration (#3550)
* feat(tools): add Telegram Bot integration

- Add telegram_send_message and telegram_send_document tools
- Add credential spec for TELEGRAM_BOT_TOKEN
- Add comprehensive tests (18 test cases)
- Add README documentation with setup instructions

* fix(telegram_tool): catch network errors instead of letting them raise

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-02-13 17:10:40 +08:00
Amit Kumar 1a5c3480e6 feat(tools): add NewsData + Finlight MCP tools for market intelligence (#4473)
* feat(tools): add news MCP tools

* fix(tools): adjust news providers and fallback

* fix(tools): add rate limiting + sentiment normalization to news tools

Address PR feedback: exponential backoff (3 retries, 2^attempt delays)
on HTTP 429 for both NewsData and Finlight with seamless provider
fallback, and normalize sentiment scores to [-1.0, +1.0] range.

* fix(news_tool): make provider fallback lazy and handle network errors

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-02-13 16:25:56 +08:00
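The news-tools commit above describes exponential backoff (3 retries, 2^attempt delays) on HTTP 429 plus fallback from NewsData to Finlight. A hedged sketch of that behaviour, with provider URLs and callable names as placeholders:

```python
import time
import httpx

# Sketch of the retry/fallback behaviour described above: 3 attempts with
# 2**attempt backoff on 429, then fall through to the next provider.
def _fetch_with_backoff(url: str, params: dict, retries: int = 3) -> httpx.Response | None:
    for attempt in range(retries):
        resp = httpx.get(url, params=params, timeout=30)
        if resp.status_code != 429:
            return resp
        time.sleep(2 ** attempt)  # 1s, 2s, 4s between attempts
    return None  # still rate limited after all retries


def search_news(query: str, providers: list[tuple[str, dict]]) -> dict:
    for url, params in providers:  # e.g. NewsData first, Finlight as fallback
        resp = _fetch_with_backoff(url, {**params, "q": query})
        if resp is not None and resp.status_code == 200:
            return resp.json()
    return {"error": "all news providers failed or were rate limited"}
```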
Hima-de 69a7fe7b92 docs: fix nonexistent setup-python.sh reference in antigravity-setup.md
Replaces references to ./scripts/setup-python.sh with ./quickstart.sh,
which is the actual setup script in the repository.

Fixes #4648
2026-02-13 06:30:36 +00:00
Emmanuel Nwanguma a5418d760f fix(runtime): validate and create storage path on init (#4466)
- Add path validation in Runtime.__init__()
- Log warning when creating nonexistent storage directory
- Auto-create directory with parents to prevent silent failures later
- Implements Option 3 from issue discussion (explicit, safe, informative)

Fixes #1870
2026-02-13 14:24:14 +08:00
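The runtime fix above validates the storage path on init, logs a warning, and auto-creates missing directories with parents. An approximate shape of that change (attribute and parameter names assumed):

```python
import logging
from pathlib import Path

logger = logging.getLogger(__name__)


class Runtime:
    def __init__(self, storage_path: str) -> None:
        path = Path(storage_path)
        if not path.exists():
            # Explicit, safe, informative: warn, then create rather than fail later.
            logger.warning("Storage path %s does not exist; creating it", path)
            path.mkdir(parents=True, exist_ok=True)
        self.storage_path = path
```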
T.Trinath Reddy 0deeb87c63 feat(vision): add GCP Vision API integration (#4231)
* feat(vision): add GCP Vision API integration

* refactor(vision): move GCP Vision credentials to dedicated folder

* fix: clean up credentials imports and updated gitignore

* followed ruff alphabetic order for credentials
2026-02-13 14:00:15 +08:00
Timothy d1d5f49c5a fix: add more events to event bus 2026-02-12 20:42:20 -08:00
bryan 917e23ccc8 Merge branch 'main' into feat/inbox-management 2026-02-12 20:30:34 -08:00
RichardTang-Aden 988922304f Merge pull request #4451 from Ttian18/fix/tina-tui-copy-newline-4423

fix(tui): add multiline input and cross-platform clipboard support
2026-02-12 20:27:32 -08:00
bryan ab2bd726c3 Merge branch 'main' into feat/inbox-management 2026-02-12 20:24:46 -08:00
bryan 713fefb163 inbox-manager is now continuous 2026-02-12 20:22:40 -08:00
Timothy 83140a1398 feat: event source in runtime 2026-02-12 19:52:15 -08:00
bryan cafa6dd930 update to inbox management agent 2026-02-12 19:17:54 -08:00
jaathavan18 82e1af1a7a docs: fix references to nonexistent setup-python.sh (#4648) 2026-02-12 22:08:35 -05:00
jaathavan18 30c3dc9205 docs: fix broken exports/ link in environment-setup.md (#4645) 2026-02-12 22:04:25 -05:00
Timothy 9a3c6703e1 feat:phased compaction and event bus integration 2026-02-12 18:41:32 -08:00
RichardTang-Aden e26468aa19 Merge pull request #4587 from Antiarin/feat/codex-integration
feat(codex): add project setup, quickstart bootstrap, and docs for Codex agent support
2026-02-12 17:55:47 -08:00
Richard Tang fe14992696 docs: update documentation 2026-02-12 17:54:08 -08:00
Richard Tang d0775b95c6 fix: remove unused agents.md and quickstart clause 2026-02-12 17:48:31 -08:00
Richard Tang 96121b5757 fix: correct codex setups 2026-02-12 17:35:15 -08:00
Timothy @aden 11c003c48d Merge pull request #4636 from TimothyZhang7/feature/memory-inheritance
Feature: Conversation Memory & Continuous Agent Session
2026-02-12 13:52:19 -08:00
Timothy fbe72c58ae chore: fix ci tests 2026-02-12 13:49:34 -08:00
bryan 816156e87f merge: PR #4636 feature/memory-inheritance into feat/inbox-management
Brings in append_data tool, continuous conversation mode, conversation
judge, phase compaction, and prompt composer from the memory-inheritance
feature branch.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 13:23:25 -08:00
bryan 7bceab3cea wip: inbox management agent setup and gmail tool updates
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 13:22:19 -08:00
Timothy 83d7f56728 chore: update agent skills for new design philosophy 2026-02-12 12:10:37 -08:00
Timothy 76deba2a6a feat: consistent memory system 2026-02-12 11:41:22 -08:00
Antiarin d9d048b9e3 docs: update quickstart script for symlink handling and add Codex CLI documentation 2026-02-12 23:34:17 +05:30
bryan 930f417729 Merge remote-tracking branch 'origin/main' into feat/inbox-management 2026-02-12 08:44:13 -08:00
bryan 8e214d06c1 moved inbox management to sample agents 2026-02-12 08:33:43 -08:00
e-cesar9 63e0348963 fix(ux): improve failure messaging with success rate
Shows clear success rate when some exclusions fail (e.g., "Only 2/3
exclusions added (67%)"). Helps users understand that performance
benefit may be reduced when not all paths are excluded.

- Calculates success rate as percentage
- Shows "X/Y added (Z%)" format for clarity
- Warns that performance benefit may be reduced
- Better visibility of partial failures

Improves user awareness of partial installation issues.
2026-02-12 17:30:17 +01:00
e-cesar9 b46a5f0247 fix(robustness): recheck Defender status before adding exclusions
Adds Test-IsDefenderEnabled helper and rechecks Defender status and
paths before adding exclusions. Prevents adding ineffective exclusions
if Defender was disabled during user prompt (race condition).

- New helper: Test-IsDefenderEnabled() for quick boolean check
- Recheck Defender status immediately before Add operation
- Recheck paths to detect if added by another process
- Clear messages if status changed or already added

Fixes race condition where user could disable Defender or another
process could add exclusions during the prompt delay.
2026-02-12 17:29:44 +01:00
e-cesar9 79dfd90068 fix(security): add path validation for Defender exclusions
Validates that paths are within safe boundaries (project directory or
user AppData) before excluding them from Defender. Prevents accidental
or malicious exclusion of system directories.

- Adds safePrefixes validation (project dir + user AppData only)
- Checks each path against allowed prefixes
- Normalizes paths for consistent comparison
- Warns but processes non-existent paths (they may be created later)

Fixes potential security issue where modified script could exclude
system directories like C:\Windows or C:\Program Files.
2026-02-12 17:29:00 +01:00
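The exclusion-safety commit above constrains Defender exclusions to the project directory and user AppData. The real check lives in quickstart.ps1 (PowerShell); the following is only a Python rendering of the same prefix idea, with placeholder directory names:

```python
from pathlib import Path

# Python analog of the safe-prefix check the PowerShell script performs;
# safe_prefixes would be the project dir and the user's AppData folder.
def is_safe_exclusion(candidate: str, safe_prefixes: list[str]) -> bool:
    norm = Path(candidate).expanduser().resolve()
    for prefix in safe_prefixes:
        base = Path(prefix).expanduser().resolve()
        if norm == base or base in norm.parents:
            return True
    return False  # refuse to exclude paths like C:\Windows or C:\Program Files
```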
Antiarin f9d5c7c751 fix(codex): patch prompt injection, checkout version, and style in codex-issue-triage workflow 2026-02-12 20:25:19 +05:30
Antiarin 8958fb2d88 feat(codex): add project setup, quickstart bootstrap, and docs for Codex agent support 2026-02-12 18:32:58 +05:30
mubarakar95 40e74e408b perf: reduce subprocess spawning in quickstart scripts (#4427)
## Problem
Windows process creation (CreateProcess) is 10-100x slower than Linux fork/exec.
The quickstart scripts were spawning 4+ separate `uv run python -c "import X"`
processes to verify imports, adding ~600ms overhead on Windows.

## Solution
Consolidated all import checks into a single batch script that checks multiple
modules in one subprocess call, reducing spawn overhead by ~75%.

## Changes
- **New**: `scripts/check_requirements.py` - Batched import checker
- **New**: `scripts/test_check_requirements.py` - Test suite
- **New**: `scripts/benchmark_quickstart.ps1` - Performance benchmark tool
- **Modified**: `quickstart.ps1` - Updated import verification (2 sections)
- **Modified**: `quickstart.sh` - Updated import verification

## Performance Impact
**Benchmark results on Windows:**
- Before: ~19.8 seconds for import checks
- After: ~4.9 seconds for import checks
- **Improvement: 14.9 seconds saved (75.2% faster)**

## Testing
- All functional tests pass (`scripts/test_check_requirements.py`)
- Quickstart scripts work correctly on Windows
- Error handling verified (invalid imports reported correctly)
- Performance benchmark confirms 75%+ improvement

Fixes #4427
2026-02-12 15:38:58 +05:30
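The quickstart perf commit above batches all import checks into one interpreter process instead of one `uv run python -c "import X"` subprocess per module. A sketch in the spirit of `scripts/check_requirements.py`, with an illustrative module list:

```python
import importlib
import sys

# Verify many modules in a single process to avoid per-module subprocess spawns.
def check_imports(modules: list[str]) -> int:
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError as exc:
            missing.append(f"{name}: {exc}")
    if missing:
        print("Missing or broken imports:", *missing, sep="\n  ")
        return 1
    print(f"All {len(modules)} imports OK")
    return 0


if __name__ == "__main__":
    sys.exit(check_imports(sys.argv[1:] or ["httpx", "pydantic"]))
```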
Zhang 3c51f2ac36 fix(tui): address code review feedback on TextArea migration
- Subclass TextArea as ChatTextArea to intercept Enter key before
  the base class swallows it (fixes submission not triggering)
- Remove event.shift access that raises AttributeError on Key events
- Make action_show_sessions directly call _submit_input instead of
  just placing text in the widget
2026-02-11 23:45:30 -08:00
RichardTang-Aden 170a0918f7 Merge pull request #4538 from jaathavan18/docs/issue-4493-remove-changelog-references-v2
docs: remove references to nonexistent CHANGELOG.md
2026-02-11 19:15:00 -08:00
jaathavan18 e3da3b619c docs: remove references to nonexistent CHANGELOG.md (#4493) 2026-02-11 21:48:28 -05:00
RichardTang-Aden 97440f9e8a Merge branch 'main' into feature/x-twitter-integration 2026-02-11 17:13:33 -08:00
RichardTang-Aden 6e32513b79 Merge pull request #4535 from RichardTang-Aden/main
docs: add the autonomous agent guide
2026-02-11 17:11:02 -08:00
Richard Tang 520e1963ee docs: add the autonomous agent guide 2026-02-11 17:10:21 -08:00
RichardTang-Aden 843b9b55e2 Merge pull request #2862 from himanshu748/feat/antigravity-ide-2571
feat: Antigravity IDE support for MCP servers and skills
2026-02-11 16:59:15 -08:00
Richard Tang ccd305ff96 fix: remove incorrect rules folder 2026-02-11 16:58:02 -08:00
Richard Tang 3bd0d1e48c feat: add Antigravity workflows for all hive skills 2026-02-11 16:56:15 -08:00
Richard Tang d9bfa8e675 docs: rename .antigravity to .agent for Antigravity IDE compatibility 2026-02-11 16:42:26 -08:00
Richard Tang 27746147e2 fix: fix the antigravity config folder 2026-02-11 16:41:21 -08:00
Richard Tang 3a0b642980 fix: update the antigravity config 2026-02-11 16:39:28 -08:00
Richard Tang 8c0241f087 fix: use --directory instead of cwd for Antigravity MCP compatibility 2026-02-11 16:36:12 -08:00
Richard Tang 958d016174 fix: use uv run for MCP servers, script reads from template 2026-02-11 16:03:32 -08:00
Richard Tang 913d318ada docs: emphasize restarting Antigravity after MCP setup, fix config path and skill names 2026-02-11 15:57:11 -08:00
Richard Tang 8212920cb7 fix: correct .antigravity/skills symlinks to point to hive-* directories 2026-02-11 15:52:27 -08:00
Richard Tang 6414be7bd4 fix: change the wrong antigravity mcp path 2026-02-11 15:47:54 -08:00
himanshu748 ac62a82d08 docs: clarify Antigravity setup for everyone
- Define repo root at top; lead with quick start (3 steps)
- Add 'What you get' and prerequisites in one place
- Full setup step-by-step; troubleshooting: problem → fix
- Manual MCP config as single section; verification optional
- Plain language, scannable structure, no duplicate sections
2026-02-11 15:45:56 -08:00
himanshu748 a670548a57 Antigravity: one-command setup script and clearer docs
- Add scripts/setup-antigravity-mcp.sh to auto-detect repo root and write
  ~/.gemini/mcp.json with absolute paths (no manual path editing)
- Lead docs with Quick start (3 steps) and note ./ vs / for the script
- README: point to one-command setup; clarify script runs from repo folder
2026-02-11 15:45:56 -08:00
himanshu748 c4a7463f9d docs(antigravity): global config, absolute cwd, cwd warning note
- Add step to run core/setup_mcp.sh first
- Document that IDE often loads ~/.claude/mcp.json, not project config
- Add Option A: copy to ~/.claude/mcp.json with absolute cwd paths
- Note that cwd schema warning in IDE is a false positive
- Renumber setup steps (1–5)
2026-02-11 15:45:30 -08:00
himanshu748 edf0ac5270 docs: add how-to-verify section for Antigravity setup 2026-02-11 15:45:30 -08:00
himanshu748 8ff6b76f37 feat: Antigravity IDE support for MCP servers and skills (#2571)
- Add .antigravity/mcp_config.json with agent-builder and tools MCP servers
- Add .antigravity/skills/ with symlinks to .claude/skills/ (5 skills)
- Add docs/antigravity-setup.md with setup and troubleshooting
- Update README.md with Antigravity IDE support section
- Update DEVELOPER.md and docs/contributing-lint-setup.md with Antigravity refs

Mirrors Cursor integration for consistent multi-IDE support.
2026-02-11 15:45:30 -08:00
Bryan @ Aden c9f9eb365c Merge pull request #4441 from saurabh007007/feature/docs/spelling
docs: fix typo in docs for the directory in environment setup
2026-02-11 23:37:44 +00:00
e-cesar9 7a17c115d3 perf(windows): add Windows Defender exclusions for 40% faster uv sync
Fixes #4428

- Add Step 2.5 to quickstart.ps1 for optional Defender exclusions
- Requires admin + explicit user consent (default=no)
- Handles non-admin gracefully (shows manual commands)
- Improves uv sync by ~40% and cold-start by ~30% on Windows 11
- Never fails installation (graceful degradation)

Implementation:
- 3 helper functions: Test-IsAdmin, Test-DefenderExclusions, Add-DefenderExclusions
- Excludes: project dir, .venv, %APPDATA%\uv
- Security: Default=no, admin-only, clear trade-offs explained
- Robustness: Handles Defender disabled, existing exclusions, partial failures
- UX: Follows existing Write-Step/Ok/Warn patterns, clipboard fallback
- Testing: Idempotent, can run multiple times safely
2026-02-11 23:57:55 +01:00
Bryan @ Aden 9a2a11055f Merge pull request #4520 from jaathavan18/docs/issue-4495-remove-config-yaml-reference
docs: remove reference to nonexistent config.yaml
2026-02-11 22:36:16 +00:00
bryan f21aecd91c Add Gmail inbox management tools (list, get, trash, modify labels) 2026-02-11 14:33:01 -08:00
jaathavan18 4aef73c1d7 docs: remove reference to nonexistent config.yaml (#4495) 2026-02-11 15:07:04 -05:00
Rudra2637 906480a6e8 Guard unsupported criterion types in _evaluate_criterion 2026-02-12 01:12:52 +05:30
bryan 9df147b450 removed send_budget_alert_email, require provider param 2026-02-11 11:26:23 -08:00
RichardTang-Aden b71b4b0fc2 Merge pull request #4500 from RichardTang-Aden/main
fix: remove unused tools from root .mcp.json
2026-02-11 10:23:45 -08:00
Richard Tang 1bd2510c52 fix: remove unused tools from root .mcp.json 2026-02-11 10:20:20 -08:00
RichardTang-Aden 28b81092f9 Merge pull request #4498 from jaathavan18/docs/issue-4494-fix-env-example-reference
docs: fix .env.example references in tools README
2026-02-11 10:11:53 -08:00
RichardTang-Aden 4b9a3abba6 micro-fix: python lint error (#4499)
* micro-fix: python lint error

* micro-fix: python lint format
2026-02-11 10:09:17 -08:00
Richard Tang 0c76b6dcb1 micro-fix: python lint format 2026-02-11 10:08:40 -08:00
Richard Tang 090a85b41b micro-fix: python lint error 2026-02-11 10:04:25 -08:00
jaathavan18 992d573573 docs: fix .env.example references in tools README (#4494) 2026-02-11 12:58:51 -05:00
e-cesar9 9e768e660b micro-fix: move inline imports to module level in edge.py (#4480)
Fixes #4445

Moved repeated inline imports (logging, json, re) to module-level:
- Eliminates import overhead on every method call
- Follows PEP 8 conventions
- Added module-level logger instance
- re is used at line 259 (re.search)

Changes:
- 4 lines added (imports + logger)
- 13 lines removed (inline imports)
- No functional changes
2026-02-11 09:55:14 -08:00
RichardTang-Aden 26b9ed362e Merge pull request #4479 from e-cesar9/microfix/4444-typo-stirct
micro-fix: fix typo STIRCT → STRICT in safe_eval.py
2026-02-11 09:48:55 -08:00
Your hh3538962 765f7cae58 feat(tools): add get_datasets, get_reports, and export_report functions to Power BI integration 2026-02-11 22:19:51 +05:00
Your hh3538962 b455c8a2ad Merge remote-tracking branch 'origin/main' into feat/power-bi-integration 2026-02-11 22:07:00 +05:00
Timothy @aden 976ae75fde Merge pull request #4487 from adenhq/feature/windows-quickstart
chore(micro-fix): windows quickstart
2026-02-11 08:18:53 -08:00
Timothy @aden 9da91b5319 Merge branch 'main' into feature/windows-quickstart 2026-02-11 08:18:07 -08:00
Timothy @aden 2493beaf5a Merge pull request #4437 from GovindhKishore/fix/powershell-syntax-error
micro-fix: resolve NativeCommandError in uv sync and ParserError in dynamic env var assignment
2026-02-11 07:51:01 -08:00
e-cesar9 d63dd021ab micro-fix: fix typo STIRCT → STRICT in safe_eval.py
Fixes #4444
2026-02-11 14:40:25 +01:00
Sapna vishnoi da25e0ffa5 Merge branch 'main' into feat/redshift-integration 2026-02-11 13:42:26 +05:30
Zhang 697ba89314 fix(tui): add multiline input and cross-platform clipboard support (#4423)
Replace single-line Input widget with TextArea in chat REPL so
Shift+Enter inserts newlines and multiline paste works correctly.
Add Windows clipboard support (clip.exe) and xsel fallback for Linux.
2026-02-11 00:01:09 -08:00
Arshad Uzzama Shaik b6c65ab5d5 fix(security): handle unix-style absolute paths correctly on Windows (#1204)
* fix(security): normalize unix-style paths on windows to prevent sandbox escape (fixes #499)

* fix(security): handle unix-style absolute paths correctly on Windows

* fix: address review comments on path sanitization

- Strip leading whitespace to prevent startswith bypass
- Replace lstrip with single-char strip to preserve UNC paths
- Expand ValueError comment for clarity

---------

Co-authored-by: Arshad Shaik <arshad.shaik@violetis.ai>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-02-11 15:40:38 +08:00
Arshad Uzzama Shaik 162f9a55ad fix(mcp): log errors in _load_active_session instead of silencing them (fixes #682) (#684)
Co-authored-by: Arshad Shaik <arshad.shaik@violetis.ai>
2026-02-11 15:09:10 +08:00
Govindh Kishore e484fdfa51 fix: Replaced multi-argument Join-Path with [System.IO.Path]::Combine for PS 5.1 compatibility. 2026-02-11 12:30:25 +05:30
Amit Kumar 77d9ccf2e4 feat(tools): add SerpAPI tools for Google Scholar & Patents search (#3986)
Implements 5 tools as proposed in #3224:
- scholar_search: Search Google Scholar for academic papers
- scholar_get_citations: Get citation formats (MLA, APA, Chicago, etc.)
- scholar_get_author: Author profiles with h-index, i10-index, metrics
- patents_search: Search Google Patents with filters
- patents_get_details: Detailed patent information by publication number

Follows existing tool pattern (web_search_tool, hubspot_tool, slack_tool):
- register_tools(mcp, credentials) with @mcp.tool() decorators
- _SerpAPIClient internal class for HTTP calls via httpx
- Credential fallback: CredentialStoreAdapter -> SERPAPI_API_KEY env var
- Error handling: always returns dicts, never raises

25 unit tests + live integration tests verified.
697/697 full test suite passing.

Fixes #3224
2026-02-11 14:46:59 +08:00
Govindh Kishore 94e39ee09e fix: resolve NativeCommandError by stabilizing uv sync output capture 2026-02-11 11:59:15 +05:30
saurabh007007 373ad77008 docs: fix typo in docs 2026-02-11 11:46:26 +05:30
Govindh Kishore 661b0c0038 fix: resolve PowerShell ParserError in quickstart.ps1 2026-02-11 11:02:37 +05:30
Govindh Kishore 8ed38bf0e2 fix: resolve PowerShell ParserError in quickstart.ps1 2026-02-11 11:01:33 +05:30
RichardTang-Aden 4d675dfff7 Merge pull request #4431 from adenhq/developer-success-documentation
docs: Developer success documentation
2026-02-10 20:12:11 -08:00
Richard Tang b42a3293f1 docs: change docs tune 2026-02-10 19:37:14 -08:00
Timothy @aden 87e9bf853d Merge pull request #4425 from TimothyZhang7/feature/windows-quickstart
feat(quickstart): windows script and cli
2026-02-10 19:26:23 -08:00
Timothy @aden c56f78422a Merge pull request #4408 from adenhq/fix/first-success
(micro-fix): Fix/first success
2026-02-10 18:42:12 -08:00
Timothy ac311e10ba Merge remote-tracking branch 'origin/main' into fix/first-success 2026-02-10 18:40:27 -08:00
Timothy @aden 0297520263 Merge pull request #4309 from TimothyZhang7/feature/automated-testing-skill
Feature/automated testing skill
2026-02-10 18:36:57 -08:00
Bryan @ Aden 4803552a7a Merge pull request #4429 from adenhq/feat/opencode-skills
(micro-fix): added skills to opencode, linking to claude
2026-02-11 02:23:37 +00:00
Bryan @ Aden b8d85ff723 Merge pull request #4021 from NamanRajput-git/main
docs: fix incorrect submission email in quizzes documentation
2026-02-11 02:20:33 +00:00
bryan 7d571dfaec added skills to opencode, linking to claude 2026-02-10 18:20:21 -08:00
Richard Tang ba02e53bdd docs: update the use cases 2026-02-10 18:15:40 -08:00
Bryan @ Aden 153e6142ff Merge pull request #4243 from vakrahul/feat/opencode-integration
feat: Add Opencode integration as a Coding Agent
2026-02-11 02:12:10 +00:00
Bryan @ Aden 228449c9d8 Merge pull request #3894 from AkhileshBabuT/docs/fix-environment-setup-3837-3841
docs: fix environment-setup.md for uv workspace setup
2026-02-11 02:09:33 +00:00
vakrahul c65eed8802 docs: apply PR review changes for quickstart and markdown formatting and updates 2026-02-11 07:36:23 +05:30
Richard Tang 40d32f2e01 docs: deployment strategies 2026-02-10 18:02:08 -08:00
vakrahul c83aac5e12 docs: apply PR review changes for quickstart and markdown formatting 2026-02-11 07:29:48 +05:30
vakrahul 48b9241247 chore: address PR review feedback (formatting and outdated references) 2026-02-11 07:18:17 +05:30
Richard Tang 7779bc5336 docs: use cases for first success 2026-02-10 17:42:05 -08:00
Bryan @ Aden beec549f74 Merge pull request #4417 from e-cesar9/micro-fix/debug-logging-level
micro-fix: change debug logging to appropriate level in executor.py
2026-02-11 01:30:20 +00:00
Bryan @ Aden 310698ecc0 Merge pull request #4287 from nhockcuncon77/docs/contributing-numbering-and-tools-tests
docs(contributing): fix duplicate step numbers and add tools test command
2026-02-11 01:26:00 +00:00
Bryan @ Aden 4f719c4778 Merge pull request #4299 from YashovardhanB28/fix/llm-token-counting
docs(fix): correct LLMNode token counting bug
2026-02-11 01:25:51 +00:00
bryan 4cc00f3bdc removed dead tests, updated some drifting behavior 2026-02-10 16:56:36 -08:00
bryan 1f9c47fef1 removed twitter agent, create new terminal, resume command, update quickstart 2026-02-10 16:41:08 -08:00
e-cesar9 80a4980640 micro-fix: change debug logging to appropriate level in executor.py
Changes logger.info() with debug prefix to logger.debug() for
session state resume information. This prevents debug-level
information from appearing in production logs at INFO level.

- Removes redundant '🔍 Debug:' prefix
- Uses appropriate debug logging level
- Follows Python logging best practices
- Improves production log clarity

Addresses #4377
2026-02-11 00:15:17 +01:00
Timothy Zhang 8dbe424f5a fix: environment loader compatibility tweak for windows 2026-02-10 15:10:32 -08:00
Timothy Zhang ec9bf033e6 feat: windows quickstart and hive cli 2026-02-10 15:09:38 -08:00
Richard Tang a2d21ec7bc docs: update the developer profiles 2026-02-10 13:10:59 -08:00
Richard Tang 06ccc853ee docs: explain developer success as our principle 2026-02-10 13:05:54 -08:00
Richard Tang 4847332161 add placeholders 2026-02-10 13:01:22 -08:00
Richard Tang 8c1ee54725 docs: publish our developer success roadmap 2026-02-10 12:56:54 -08:00
Timothy @aden 5e537d9d55 Merge pull request #4410 from TimothyZhang7/feature/windows-quickstart
feat(quickstart): windows script
2026-02-10 12:52:13 -08:00
Timothy d6b95067a1 feat(quickstart): windows script 2026-02-10 12:49:25 -08:00
bryan 32cae75ef5 intro msg 2026-02-10 11:28:34 -08:00
bryan 21e7554cdb cred update in quickstart, sample agent check before agent run, agent has welcome msg 2026-02-10 11:27:17 -08:00
Your hh3538962 e07703c01f feat(tools): add Power BI integration - initial structure with workspace and dataset refresh functions 2026-02-10 13:23:32 +05:00
Timothy 374442e900 Merge branch 'main' into feature/automated-testing-skill 2026-02-09 20:16:11 -08:00
vakrahul a1a0ec5ddb Remove skills folder (reviewer will handle symlinks) and cleanup doc refs 2026-02-10 09:45:44 +05:30
Timothy 1fd56b079c fix: test cases 2026-02-09 20:12:37 -08:00
Timothy @aden a12163d63f Merge pull request #4304 from adenhq/fix/init-config
model selection + max_tokens in quickstart
2026-02-09 20:11:55 -08:00
RichardTang-Aden 0cd6f21980 Merge pull request #4270 from TimothyZhang7/feature/hard-goal-negotiation
Feature/hard goal negotiation
2026-02-09 20:04:20 -08:00
Richard Tang a88fc1d75c fix: remove the unnecessary summary before checking capabilities and gaps 2026-02-09 19:59:49 -08:00
vakrahul 87b0037fcd Add Opencode skills (mirrored from .cursor/skills) 2026-02-10 09:25:43 +05:30
Timothy 767d32d420 fix: stuck test cases 2026-02-09 19:51:50 -08:00
Richard Tang e9bde26611 fix: fixed minor issues introduced by the merge 2026-02-09 19:45:55 -08:00
Richard Tang c02f40622c Merge remote-tracking branch 'upstream/main' into feature/hard-goal-negotiation 2026-02-09 19:42:55 -08:00
vakrahul 929dc24e93 Fix config: use uv for mcp and remove pinned model 2026-02-10 09:12:42 +05:30
vakrahul 8cfb533fef Fix docs: remove recommended tag, fix rendering, remove duplicate setup 2026-02-10 09:10:47 +05:30
Timothy @aden 3328a388b3 Merge pull request #3877 from adenhq/fix/oauth-refresh
(micro-fix): update oauth to refresh token
2026-02-09 19:30:49 -08:00
Richard Tang 8f632eb005 feat: add communication style guideline 2026-02-09 19:28:48 -08:00
Richard Tang c8ee961436 fix: update the step label to avoid confusion 2026-02-09 19:04:05 -08:00
Timothy 6fd7efece6 feat: hive test, quickstart local settings 2026-02-09 18:53:42 -08:00
Richard Tang bc9f6b0af8 feat: update goal negotiation for a more conversational negotiation 2026-02-09 18:52:07 -08:00
bryan 7d48f17867 model selection + max_tokens in quickstart 2026-02-09 18:07:57 -08:00
Timothy 776583b3ad fix: use more sensible default models 2026-02-09 17:07:34 -08:00
Timothy 9c28dae583 fix: gemini should use GEMINI_API_KEY 2026-02-09 17:05:11 -08:00
YashovardhanB 59a315b90b fix(graph): correct LLMNode token counting to include retries 2026-02-10 06:21:54 +05:30
Timothy 866518f188 fix: headless runtime consolidation 2026-02-09 16:29:06 -08:00
RichardTang-Aden 736ae65a1d Merge pull request #4262 from adenhq/feat/build-from-sample
Build from Sample Agent
2026-02-09 16:05:42 -08:00
Bryan @ Aden 76c9f7c9a9 Merge pull request #1834 from fermano/feat/observability-trace-context
feat(observability): structured logging for trace context
2026-02-09 15:25:51 -08:00
Fernando Mano 32ad225d7f feat(observability): Adding OTel-compliant logging to L3 tool logs as introduced by #3715. -- remove redundant text from readme.md 2026-02-09 19:56:17 -03:00
amazonproai e5428bec5c docs(contributing): fix duplicate step numbers and add tools test command
- Fix Getting Started steps 6/7 duplicated; renumber to 8 and 9
- Add command to run tools package tests (cd tools && uv run pytest)

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-09 14:27:53 -08:00
bryan 7ae6f67470 updates to skills, renaming, suggested agents, remove changelog 2026-02-09 13:49:36 -08:00
Timothy faf534511b feat: automated test agent skill 2026-02-09 12:39:20 -08:00
Timothy @aden 594bceb8f5 Merge branch 'adenhq:main' into feature/hard-goal-negotiation 2026-02-09 12:28:19 -08:00
bryan 9dc0f48ec9 implemented building from sample agent template and updated deep research agent 2026-02-09 12:13:41 -08:00
Timothy 9d11f834b8 feat: automated testing skill 2026-02-09 11:05:59 -08:00
mishrapravin114 a4abf3eb2b Merge upstream/main: resolve conflicts with Apollo integration
- Keep both APOLLO_CREDENTIALS and AIRTABLE_CREDENTIALS
- Keep both apollo_tool and airtable_tool imports (alphabetical)

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-10 00:25:17 +05:30
mishrapravin114 269d72d073 Merge upstream/main: resolve conflicts with Apollo integration
- Keep both APOLLO_CREDENTIALS and CALENDLY_CREDENTIALS
- Keep both apollo_tool and calendly_tool imports (alphabetical)

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-10 00:20:17 +05:30
mishrapravin114 c8f5dccbd2 docs(airtable): add rate limit section to README
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-10 00:17:49 +05:30
mishrapravin114 8b797ee73f feat(airtable): add rate limit retry and retry_after
- Add 429 handling with retry_after from Retry-After header
- Add _request_with_retry (2 retries) for all API calls
- Update tests to use httpx.request

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-10 00:17:37 +05:30
mishrapravin114 de38adb1e4 feat(calendly): add rate limit handling, retry, 7-day validation
- Add 429 handling with retry_after from Retry-After header
- Add _request_with_retry (2 retries) for all API calls
- Validate get_availability date range <= 7 days
- Update tests to use httpx.request

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-10 00:16:37 +05:30
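The Airtable and Calendly commits above both add a `_request_with_retry` helper that honours the Retry-After header on 429 and retries up to twice. A hedged sketch of that helper (the name matches the commits; the rest of the structure is assumed):

```python
import time
import httpx

# Retry on 429 using the server-supplied Retry-After delay, at most `retries` times.
def _request_with_retry(method: str, url: str, *, retries: int = 2, **kwargs) -> httpx.Response:
    resp = httpx.request(method, url, **kwargs)
    for _ in range(retries):
        if resp.status_code != 429:
            break
        retry_after = float(resp.headers.get("Retry-After", "1"))
        time.sleep(retry_after)
        resp = httpx.request(method, url, **kwargs)
    return resp
```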
Sapna vishnoi c169bcc5d8 Merge branch 'main' into feat/redshift-integration 2026-02-09 23:32:08 +05:30
vakrahul 131b72cd0c feat: Add native Opencode support with Windows compatibility 2026-02-09 23:27:54 +05:30
kubrakaradirek 80ea286beb fix: resolve complex merge conflicts and restore integrations 2026-02-09 16:09:43 +03:00
Fernando Mano ce5a2d4a81 feat(observability): Adding OTel-compliant logging to L3 tool logs as introduced by #3715. -- remove line that would cause third-party loggers to log twice 2026-02-09 09:36:25 -03:00
kubrakaradirek 3499be782e feat: implement MSSQL tool with schema discovery closes #3377 2026-02-09 15:32:57 +03:00
Fernando Mano 7f489cee46 Merge branch 'main' into feat/observability-trace-context 2026-02-09 09:25:51 -03:00
Anjali Yadav 3c2d669a2f fix(credentials): correctly resolve integration_id in AdenCredentialResponse.from_dict (#3965)
* fix(credentials): respect integration_id in AdenCredentialResponse.from_dict

* style: fix forward reference annotation for Ruff
2026-02-09 17:52:55 +08:00
Gordon Ng 16603ae49c Test MCP 2026-02-09 01:48:49 -05:00
Gordon Ng bf6bd9ce7f test mcp 2026-02-09 01:48:46 -05:00
Gordon Ng a54c0f6f46 update 2026-02-09 01:20:25 -05:00
Gordon Ng beeed11d48 update 2026-02-09 01:11:33 -05:00
Timothy @aden ec36e96499 Merge pull request #4146 from TimothyZhang7/main
docs(release): release v0.4.2 - resumable sessions
2026-02-08 20:49:59 -08:00
Timothy 9ecd4980e4 chore: release v0.4.2 - resumable sessions
- Add comprehensive resumable session functionality
- Immediate pause with Ctrl+Z and /pause command
- Auto-save state on quit
- Session management with /resume and /sessions commands
- Full memory and conversation history restoration
- See CHANGELOG.md for complete list of changes
2026-02-08 20:44:36 -08:00
Timothy @aden 64446ff9b6 Merge pull request #4141 from TimothyZhang7/feature/resumable-sessions
Feature/resumable sessions

Release candidate for v0.4.2
2026-02-08 20:40:33 -08:00
Timothy e3d2262292 fix: quit timeout, and tui interactions 2026-02-08 20:30:30 -08:00
Timothy 891cfa387a Merge branch 'main' into feature/resumable-sessions 2026-02-08 19:46:30 -08:00
Timothy f0243fddf2 feat: session resumable states and checkpoint system 2026-02-08 19:42:02 -08:00
Bryan @ Aden 85ff8e364b Merge pull request #3828 from Sandeepa-git/docs/fix-contributing-typo
docs(contributing): fix formatting typo in issue link
2026-02-08 19:07:48 -08:00
Bryan @ Aden 75f1afe8e3 Merge pull request #3857 from Manudeserti/docs/add-deep-research-readme
docs: add missing README for Deep Research Agent
2026-02-08 19:07:40 -08:00
Bryan @ Aden 7b660311e5 Merge pull request #4025 from hamzanajam7/docs/fix-getting-started-project-structure
docs(getting-started): fix project structure tree for tools and mcp_server location
2026-02-08 18:44:24 -08:00
Bryan @ Aden 98a493296d Merge pull request #4026 from hamzanajam7/docs/add-contributing-link-readme
docs(readme): add Contributing link to Quick Links section
2026-02-08 18:43:23 -08:00
RichardTang-Aden bc2a42aed2 Merge pull request #3901 from Templar121/docs/clarify-hive-test-generation
docs: clarify test generation responsibility in hive skill
2026-02-08 14:22:31 -08:00
Gaurav kapur 8b501d9091 fix: write node outputs to memory before edge evaluation (#3599) (#3694)
* fix: write node outputs to memory before edge evaluation (#3599)

* test: add regression tests for conditional edge direct key access
2026-02-08 23:23:37 +08:00
Manas Dutta 25331590a7 feat(reddit): add Reddit health checker and update tool functions 2026-02-08 19:26:01 +05:30
Nafiyad Adane cddae0ed18 Refactor Wikipedia tool and improve test structure
Reorganized imports in __init__.py for clarity and consistency. Cleaned up formatting and comments in wikipedia_tool.py. Enhanced test_wikipedia_tool.py by improving patching targets, clarifying comments, and refining test structure for better maintainability.
2026-02-07 18:49:49 -07:00
Nafiyad Adane 9dca42be27 Fix Wikipedia tool import order and test patching
Reorders imports in tools/__init__.py for clarity and groups web and PDF tools together. Updates Wikipedia tool tests to patch httpx.get using the correct import path, ensuring mocks work as intended. Removes unnecessary print statement in Wikipedia tool error handling.
2026-02-07 18:49:44 -07:00
Nafiyad Adane a1f3fe4d55 Add Wikipedia search tool and tests
Introduces a new 'search_wikipedia' tool for searching Wikipedia and retrieving article summaries using the public Wikipedia REST API. Updates documentation and tool registration, and adds unit tests for the new tool.
2026-02-07 18:49:38 -07:00
Fernando Mano 0304b392b2 feat(observability): Adding OTel-compliant logging to L3 tool logs as introduced by #3715. 2026-02-07 19:52:03 -03:00
hamzanajam7 ae9b4e82fe docs(readme): add Contributing link to Quick Links section
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 14:52:50 -05:00
hamzanajam7 4bac5e4c46 docs(getting-started): fix project structure tree for tools and mcp_server location
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 14:49:04 -05:00
Fernando Mano c4d3400ec4 Merge main into feat/observability-trace-context; resolve execution_stream conflicts 2026-02-07 16:49:04 -03:00
Naman Rajput 1da9bb0c0f Merge pull request #1 from NamanRajput-git/fix-submission-email
Fix submission email in quizzes documentation
2026-02-08 00:53:27 +05:30
Naman Rajput 760ed51ad3 Fix submission email in quizzes documentation
Updates incorrect careers@aden.com to contact@adenhq.com
2026-02-08 00:45:53 +05:30
GastonAQS bff9f8976e Merge branch 'main' into feature/add-trello-integration 2026-02-07 15:57:48 -03:00
Manas Dutta b71628e211 Merge branch 'main' into feature/reddit-integration 2026-02-07 19:35:02 +05:30
Amit Kumar 6d0a3b952a feat(tools): add Apollo.io contact and company data enrichment integration (#3167)
Add Apollo.io MCP tool integration for B2B contact and company data
enrichment. Implements 4 MCP tools:
- apollo_enrich_person: Enrich contact by email, LinkedIn URL, or name+domain
- apollo_enrich_company: Enrich company by domain
- apollo_search_people: Search contacts with filters (titles, seniorities, etc.)
- apollo_search_companies: Search companies with filters (industries, size, etc.)

Features:
- Authentication via X-Api-Key header (APOLLO_API_KEY env var)
- Credential spec in dedicated apollo.py (follows repo pattern)
- Comprehensive error handling (401, 403, 404, 422, 429)
- Full test coverage (36 tests)

Closes #3061
2026-02-07 21:57:13 +08:00
Manas Dutta 8c1cb1f55b feat: add Reddit integration with 18 MCP tools
Implements Reddit API integration for community management and content monitoring.

Features:
- Search & Monitoring: search posts/comments, get subreddit feeds (new/hot), get posts/comments (6 tools)
- Content Creation: submit posts, reply, edit, delete comments (5 tools)
- User Engagement: get profiles, upvote, downvote, save posts (4 tools)
- Moderation: remove/approve posts, ban users (3 tools)

Implementation:
- OAuth 2.0 authentication via REDDIT_CREDENTIALS
- PRAW library for Reddit API integration
- Comprehensive error handling and validation
- Full test coverage (25 tests passing)

Resolves #3595
2026-02-07 18:38:59 +05:30
mishrapravin114 66214384a9 fix: add register_airtable import and fix ruff I001 import order
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 17:18:26 +05:30
mishrapravin114 6d6646887c feat(tools): add Airtable bases and records integration
- Add Airtable tool with 5 MCP tools:
  - airtable_list_bases
  - airtable_list_tables
  - airtable_list_records (with filter/sort)
  - airtable_create_record
  - airtable_update_record
- Add AIRTABLE_CREDENTIALS with credentialSpec + credentialStore
- Add AirtableHealthChecker for token validation
- Add README with setup and usage
- Add unit tests (9 tests total)

Fixes #2911

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 17:14:46 +05:30
mishrapravin114 6f8db0ed08 style: apply ruff format to calendly and health check files
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 17:00:05 +05:30
mishrapravin114 6aaf6836ea fix(calendly): resolve ruff lint errors (UP017, E501)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 16:58:48 +05:30
mishrapravin114 4f2348f50e feat(tools): add Calendly scheduling integration
- Add Calendly tool with 4 MCP tools:
  - calendly_list_event_types
  - calendly_get_availability
  - calendly_get_booking_link
  - calendly_cancel_event
- Add CALENDLY_CREDENTIALS with credentialSpec + credentialStore
- Add CalendlyHealthChecker for token validation
- Add README with setup and usage
- Add unit tests (12 tests total)

Fixes #2930

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 16:51:27 +05:30
Subhayan Mukherjee 873fcd5822 docs: clarify test generation responsibility in hive skill 2026-02-07 11:39:52 +05:30
AkhileshBabuT a08f3a8925 docs: fix environment-setup.md for uv workspace setup
- Fix #3837: Replace pip install -e with uv sync
- Fix #3841: Update venv location to reflect single root .venv
2026-02-06 23:42:07 -05:00
RichardTang-Aden 2a98d3a489 Merge pull request #3890 from RichardTang-Aden/update-readme-gifs
docs(readme): quick fix for the doc links
2026-02-06 20:34:34 -08:00
Richard Tang b681ba03b1 chore: quick fix for the doc links 2026-02-06 20:32:20 -08:00
RichardTang-Aden fe775a36c0 Merge pull request #3887 from RichardTang-Aden/update-readme-gifs
feat: add video in the README
2026-02-06 20:21:48 -08:00
Timothy @aden 2df9adcb43 Merge pull request #3886 from TimothyZhang7/fix/quickstart-secret-key
fix(micro-fix): quickstart secret key setup
2026-02-06 20:21:06 -08:00
Richard Tang c756cbf6d5 feat: add video in the README 2026-02-06 20:20:53 -08:00
Timothy d0ac67c9d3 fix: quickstart secret key setup 2026-02-06 20:18:12 -08:00
Timothy 47cd55052f feat: hive-create needs to do some hard negotiation 2026-02-06 19:56:05 -08:00
bryan fb203b5bdf update oauth to refresh token 2026-02-06 19:43:30 -08:00
RichardTang-Aden 6ee47e243d Merge pull request #3876 from RichardTang-Aden/update-readme
Docs Update readme
2026-02-06 19:39:16 -08:00
Richard Tang c1844b7a9d docs: improve readme 2026-02-06 19:30:16 -08:00
Richard Tang 99a29e79e5 fix: fix the documentation python run to uv run 2026-02-06 19:22:16 -08:00
Richard Tang 589a66ef26 docs: remove unused docs 2026-02-06 19:19:49 -08:00
RichardTang-Aden 3f960763cb Merge pull request #3875 from RichardTang-Aden/update-readme
Update readme images
2026-02-06 19:08:46 -08:00
Richard Tang 15f8f3783c chore: update images 2026-02-06 19:07:47 -08:00
Richard Tang a2b045c7e3 chore: remove unnecessary links 2026-02-06 18:18:50 -08:00
Richard Tang 055cef2fdc feat: improve quickstart.sh messages 2026-02-06 18:15:13 -08:00
Timothy @aden 6c6c69cbc3 Merge pull request #3872 from TimothyZhang7/refactor/consolidate-multi-level-log-for-tui
docs(path): Align Agent Storage Path to .hive/agents/{agent_name}/
2026-02-06 17:40:39 -08:00
Timothy 6fe0062e6e refactor(path): consolidate tui runner log path 2026-02-06 17:33:32 -08:00
Richard Tang 26b8b2f448 chore: move unused docs 2026-02-06 17:11:13 -08:00
Timothy @aden 7e40d6950a Merge pull request #3871 from TimothyZhang7/main
fix(micro-fix): uv paths in templates
2026-02-06 17:07:19 -08:00
Timothy 590bfa92cb chore: fix mcp server default config 2026-02-06 17:04:03 -08:00
Timothy f0e89a1720 fix: mcp server config with uv 2026-02-06 17:01:42 -08:00
Timothy @aden 575563b1e8 Merge pull request #3870 from adenhq/feat/multi-level-logging
fix: hardening hive cli setup
2026-02-06 16:37:37 -08:00
Timothy 82ea0e47ce fix: hardening hive cli setup 2026-02-06 16:31:31 -08:00
RichardTang-Aden 2f57ca10f7 Merge pull request #3862 from adenhq/feat/hive-tui
(micro-fix): documentation update
2026-02-06 16:19:46 -08:00
RichardTang-Aden 75c2d541c4 Merge branch 'main' into feat/hive-tui 2026-02-06 16:19:30 -08:00
Richard Tang b666f8b50b docs: minor doc update 2026-02-06 16:16:56 -08:00
RichardTang-Aden 09f9322676 Merge pull request #3863 from RichardTang-Aden/fix-remove-old-mock-mode
Fix remove old mock mode
2026-02-06 16:02:01 -08:00
Richard Tang f9a864ef93 fix: remove mock mode in the template 2026-02-06 15:59:48 -08:00
Richard Tang 27f28afe9c fix: remove --mock in the codebase + documentation 2026-02-06 15:59:22 -08:00
Timothy @aden 8f85722fef Merge pull request #3715 from adenhq/feat/multi-level-logging
Feat/multi level logging
2026-02-06 15:59:16 -08:00
bryan 5588445a01 documentation update 2026-02-06 15:59:01 -08:00
Timothy 40529b5722 fix: debugger to instruct on hive tui 2026-02-06 15:56:13 -08:00
Timothy @aden cee632f50c Merge pull request #3855 from adenhq/feat/hive-tui
update tui to support menu, highlight/copy, update quickstart
2026-02-06 15:24:10 -08:00
bryan 3453e3aa05 Merge branch 'feat/hive-tui' into feat/multi-level-logging 2026-02-06 15:21:52 -08:00
Timothy 8de637c421 fix: deprecated tests 2026-02-06 14:00:31 -08:00
Timothy 6c75de862c fix: skip outdated tests 2026-02-06 13:46:12 -08:00
Timothy 2971134882 docs: runtime logging structure 2026-02-06 13:26:53 -08:00
Timothy 6e79860b43 feat: hive debugger skill 2026-02-06 13:22:25 -08:00
Manudeserti 3f6bdda2a0 docs: add missing README for deep_research_agent 2026-02-06 18:11:00 -03:00
bryan 74d0287ec5 update tui to support menu, highlight/copy, update quickstart to include hive tui 2026-02-06 13:10:04 -08:00
RichardTang-Aden 51e81d80fc Merge pull request #3853 from adenhq/docs-key-concepts
Docs key concepts
2026-02-06 12:45:16 -08:00
Richard Tang cd014e41e4 docs: update links in the README.md 2026-02-06 12:44:34 -08:00
Richard Tang 830f11c47d docs: add key concept section 2026-02-06 12:41:22 -08:00
Timothy a73239dd98 feat: runtime log tools 2026-02-06 12:37:18 -08:00
Timothy d68783a612 refactor: unify storage layer for agent runtime 2026-02-06 12:20:46 -08:00
Timothy a28ea40a7d fix: execution log details in error trace 2026-02-06 11:03:19 -08:00
Sandeepa f2492bd4d4 docs(contributing): fix formatting typo in issue link 2026-02-07 00:22:48 +05:30
Timothy @aden b22be7a6cb Merge pull request #3818 from TimothyZhang7/main
(micro-fix)(skills): cursor skill symlinks to claude skill
2026-02-06 09:32:23 -08:00
RichardTang-Aden deb7f2f72a Merge pull request #3814 from Amdev-5/feature/x-twitter-integration
fix(tests): update credential group test for X integration
2026-02-06 09:16:42 -08:00
Amdev-5 d989d9c65a fix(tests): update credential group test for X integration
Add test_x_credentials_share_credential_group to verify all X credentials
share the 'x' credential group. Update test_credential_group_default_empty
to account for X credentials alongside existing Google exceptions.
2026-02-06 22:17:40 +05:30
bryan 4173c606ab Merge feature/x-twitter-final-integration from Amdev-5/hive - X (Twitter) tool with DM support 2026-02-06 08:03:43 -08:00
Amdev-5 a01430d20f Merge verification fixes into PR branch 2026-02-06 16:42:56 +05:30
Amdev-5 2a8f775732 feat(tools): enhance X tool with DM support and robust error handling
- Added `x_send_dm` tool using v2 endpoint (`POST /dm_conversations/with/:id/messages`) for reliable 1:1 messaging.
- Fixed 403 Forbidden payload validation errors by simplifying DM payload structure.
- Enhanced `_handle_response` to verify `x_tool.py` returns raw API error details for 403/400 responses, aiding in permission debugging.
- Updated `demo_x_tools.py` to support standard `.env` variable names (e.g., `X_API_KEY`) and added user lookup for DM testing.
- Added unit tests covering new DM functionality and payload verification in `test_x_tool.py`.
- Audited credential handling: Read-only tools (Search/Mentions) correctly use Bearer Token, while Write tools (Post/Reply/Delete/DM) enforce OAuth 1.0a User Context.

Verified with live API tests (see PR description for logs).
2026-02-06 15:48:20 +05:30
bryan 5b00445c05 Merge branch 'main' into feat/multi-level-logging 2026-02-05 19:09:18 -08:00
Timothy @aden 5179677e8f Merge pull request #3744 from adenhq/chore/update-hive-credential
(micro-fix): update hive-credentials
2026-02-05 18:55:19 -08:00
bryan 2c25b2eae7 Merge branch 'main' into chore/update-hive-credential 2026-02-05 18:45:11 -08:00
RichardTang-Aden f6705fe2d3 Merge pull request #3746 from RichardTang-Aden/integration-ci
(micro-fix)(chore): fix format
2026-02-05 18:36:32 -08:00
Richard Tang c2771fed20 chore: fix format 2026-02-05 18:30:50 -08:00
RichardTang-Aden fc781eccd9 Merge pull request #3745 from RichardTang-Aden/integration-ci
(micro-fix)(chore): fix lint
2026-02-05 18:15:38 -08:00
bryan d5a25ae081 update hive-credentials 2026-02-05 18:13:25 -08:00
Richard Tang 23b6fb6391 chore: fix lint 2026-02-05 18:12:47 -08:00
Timothy 433967f0cf fix: cursor skill symlinks to claude skill 2026-02-05 18:11:24 -08:00
RichardTang-Aden 2a876c2a10 Merge pull request #3743 from RichardTang-Aden/integration-ci
feat(ci): add integration credential specs and CI validation
2026-02-05 18:06:22 -08:00
Richard Tang ff0adeaba7 docs: update outdated skill references 2026-02-05 18:00:06 -08:00
Richard Tang 846edbf256 docs: update documentation structure 2026-02-05 18:00:04 -08:00
Richard Tang c68dd48f6d feat: add slack credential spec and contribution doc 2026-02-05 17:39:44 -08:00
bryan 8b828dd139 Merge branch 'main' into feat/multi-level-logging 2026-02-05 17:19:17 -08:00
Richard Tang 50c0a5da9e feat: integration credentials implementation check 2026-02-05 17:06:34 -08:00
Timothy @aden 2f0e5c42f1 Merge pull request #3724 from TimothyZhang7/main
docs(hive): hive commands rebrand
2026-02-05 15:06:25 -08:00
Timothy @aden 903288468a Merge pull request #3725 from adenhq/chore/gmail-to-google
(micro-fix): changing gmail to google
2026-02-05 14:54:18 -08:00
bryan 9e3bba6f59 updated tests 2026-02-05 14:52:19 -08:00
bryan bc16f0752f changing gmail to google 2026-02-05 14:46:38 -08:00
Timothy 86badd70fa docs(hive): hive commands rebrand 2026-02-05 14:35:50 -08:00
Timothy @aden ce5379516c Merge pull request #3722 from TimothyZhang7/main
docs(templates): put example templates in there
2026-02-05 14:31:50 -08:00
Timothy a50078bbf2 chore: moves the templates 2026-02-05 14:25:49 -08:00
Timothy 2cef168442 fix: aden hive url 2026-02-05 14:08:18 -08:00
Timothy @aden 0a1a9e3545 Merge pull request #3720 from TimothyZhang7/feature/example-agent-registry
docs(skills): Rename skills to hive-* namespace and improve create workflow
2026-02-05 13:59:45 -08:00
Timothy 3c8682d80c fix: mention of skill in readme 2026-02-05 13:59:02 -08:00
Timothy ecc5a1608f fix: make sure of the skill ordering 2026-02-05 13:54:20 -08:00
RichardTang-Aden bc81b55600 Merge pull request #3713 from adenhq/update/gmail-send-tool
(micro-fix): created gmail send tool
2026-02-05 13:15:08 -08:00
Timothy 28b628c1b4 fix: update skill names and examples 2026-02-05 13:13:19 -08:00
Timothy 148264ac73 fix: skill problems 2026-02-05 13:11:18 -08:00
bryan 4046e4e379 created gmail send tool 2026-02-05 13:10:47 -08:00
Timothy 28298d9af2 fix: streamline the executor configuration and data tool usage 2026-02-05 12:50:00 -08:00
Fernando Mano 9d156325e0 Merge branch 'main' into feat/observability-trace-context 2026-02-05 17:06:07 -03:00
bryan 221712128d bug fix for crashing agent 2026-02-05 11:59:57 -08:00
bryan e9fc36f2d3 Merge branch 'main' into feat/multi-level-logging 2026-02-05 09:10:56 -08:00
bryan 305b880b1d including missing tool log inputs 2026-02-05 09:08:42 -08:00
Anshumaan Saraf 34782a6b85 docs(CONTRIBUTING): add upstream sync steps (#3477)
Fixes #2692

Added steps to configure the upstream remote and sync the main branch
before creating a feature branch. This helps contributors avoid starting
from stale code and reduces merge conflicts.
2026-02-05 16:28:07 +08:00
Patrick d25d94e71b docs(aden-credential-sync): typo (#3601) 2026-02-05 16:11:13 +08:00
Sapna vishnoi 4a0d9b2855 Merge branch 'main' into feat/redshift-integration 2026-02-05 11:44:09 +05:30
y0sif 92c65d69ea chore: resolve merge conflicts with main 2026-02-05 07:13:36 +02:00
Timothy @aden 51f1b449cd Merge pull request #3584 from TimothyZhang7/main
fix: gap between lint and format
2026-02-04 21:05:22 -08:00
Timothy 804e47dde4 fix: gap between lint and format 2026-02-04 21:02:50 -08:00
Yosif Soliman 910a8968c4 fix(linear): correct GraphQL variable type for workflow states query 2026-02-05 07:00:28 +02:00
Timothy @aden 582c810d15 Merge pull request #3583 from TimothyZhang7/main
fix: test case
2026-02-04 20:59:58 -08:00
Timothy cede629718 fix: test case 2026-02-04 20:53:53 -08:00
Timothy @aden 10941dc7fc Merge pull request #3579 from TimothyZhang7/fix/do-not-mention-deprecated-nodes
fix: mentions of deprecated nodes in agent builder
2026-02-04 20:46:10 -08:00
Timothy c1c16878e4 fix: mentions of deprecated nodes in agent builder 2026-02-04 20:42:16 -08:00
Timothy @aden 80a41b434b Merge pull request #3240 from levxn/main
Integration: Advanced Slack MCP tools Integration (~45+ tools), compatibility and working checked with all other existing tools
2026-02-04 20:34:24 -08:00
Timothy 9a8e117f1d chore: fix lint and tests 2026-02-04 20:30:47 -08:00
Bryan @ Aden 878603033a Merge pull request #3573 from TimothyZhang7/feature/quickstart-credential-store
feat: credential store init, add textual dep, standardize uv commands
2026-02-04 20:20:52 -08:00
Timothy 1c6f17e8db Merge remote-tracking branch 'origin/main' into feature/quickstart-credential-store 2026-02-04 20:03:44 -08:00
Timothy 8f32ef8064 chore: uv for all 2026-02-04 19:57:41 -08:00
bryan 7519c73f2a Merge branch 'main' into feat/multi-level-logging 2026-02-04 19:34:01 -08:00
Bryan @ Aden e12bc96e21 Merge pull request #3557 from TimothyZhang7/feature/tui-dashboard
feat: TUI dashboard, EventLoopNode refinements, auto-creation, data tools, and runner overhaul
2026-02-04 19:04:41 -08:00
bryan bf402aaa18 initial multi-level logging 2026-02-04 17:26:58 -08:00
RichardTang-Aden 2355d3d729 Merge pull request #3525 from Acid-OP/fix/on-failure-edge-routing
fix: follow ON_FAILURE edges when node fails after max retries
2026-02-04 17:00:35 -08:00
Richard Tang a093a59cb0 test: add tests for ON_FAILURE edge routing after max retries
Covers:
- ON_FAILURE edge followed when node fails after max retries
- Original termination behavior preserved when no ON_FAILURE edge exists
- ON_FAILURE edge not followed on success (only ON_SUCCESS fires)
- ON_FAILURE routing with max_retries=0 (no retries)
- Failure handler appears in execution path and node_visit_counts
2026-02-04 16:58:30 -08:00
Gaurav Kapur d7917988c3 fix: follow ON_FAILURE edges when node fails after max retries 2026-02-04 16:51:46 -08:00
Timothy @aden ae566a2027 Merge pull request #2652 from mubarakar95/feature/tui-dashboard
feat: Implement production-ready TUI with interactive agent execution, real-time monitoring, and screenshot support
2026-02-04 15:43:54 -08:00
Timothy b15473d3f3 fix: graph validation 2026-02-04 15:32:53 -08:00
RichardTang-Aden 265bf885ec Merge pull request #3556 from RichardTang-Aden/remove-unused-scripts
chore(scripts): remove deprecated setup scripts
2026-02-04 15:07:12 -08:00
Richard Tang e318281989 refactor: clean up the use of setup-python 2026-02-04 14:49:39 -08:00
Timothy 3e2a11d60d feat: integrate agent builder with tui 2026-02-04 14:47:28 -08:00
Timothy @aden 4b9f73310e Merge branch 'main' into feature/tui-dashboard 2026-02-04 10:53:43 -08:00
Sapna vishnoi cdb4679c5a Merge branch 'main' into feat/redshift-integration 2026-02-05 00:05:38 +05:30
Sapna.Vishnoi 1a9dce89b4 feat(tools): Add Amazon Redshift integration
- Implement 5 core functions for data warehouse querying
- Add boto3 integration with Redshift Data API
- Security: Read-only SELECT queries by default
- Full credential store support
- 26/26 tests passing (100% coverage)
- Complete documentation with examples
2026-02-04 23:58:35 +05:30
Levin b17c26116d Merge branch 'adenhq:main' into main 2026-02-04 23:46:35 +05:30
Timothy @aden 3114af75e4 Merge pull request #3536 from TimothyZhang7/feature/coding-agent-reskill
feat: update skills and agent builder tools, bump pinned ruff version
2026-02-04 10:11:58 -08:00
Levin 7a6d10639b Merge branch 'main' into main 2026-02-04 23:39:17 +05:30
Timothy 6ff29ea6aa fix: max retry go back to 0 for event loop node 2026-02-04 10:08:40 -08:00
Timothy a23f01973a feat: update skills and agent builder tools, bump pinned ruff version 2026-02-04 09:52:28 -08:00
bryan 0aaa3a3eca uv.lock update 2026-02-04 07:57:22 -08:00
bryan 82f05d1102 Merge branch 'main' into feature/tui-dashboard 2026-02-04 07:49:08 -08:00
Bryan @ Aden 8ff6d9c8bd Merge pull request #3423 from adenhq/event-loop-arch
Event Loop Architecture: Streaming Multi-Turn Agent Nodes
2026-02-04 07:43:56 -08:00
Aneesh cf1e4d7f88 Merge remote-tracking branch 'origin/main' into feature/youtube-transcript 2026-02-04 19:46:52 +05:30
Aneesh f2f0b4fc61 feat(tools): add youtube transcript integration via youtube-transcript-api 2026-02-04 19:24:40 +05:30
bryan a2e102fe15 windows lint fix 2026-02-03 20:00:11 -08:00
Timothy 119280da1a Merge remote-tracking branch 'upstream/main' into event-loop-arch
Resolve conflict in tools/mcp_server.py: take main's
CredentialStoreAdapter.default() which encapsulates the same
CompositeStorage logic our branch had inline.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 19:42:47 -08:00
RichardTang-Aden 4d49f74d5a Merge pull request #3372 from ranjithkumar9343/ranjithkumar9343-patch-1
docs: add Windows compatibility warning
2026-02-03 19:36:50 -08:00
Timothy 6a42b9c66b fix: resolve CI failures in lint and tests
- Fix max_node_visits blocking executor retries: the visit count was
  incremented on every loop iteration including retries, causing nodes
  with max_node_visits=1 (default) to be skipped on retry. Added
  _is_retry flag to distinguish retries from new visits via edge
  traversal.

- Fix 20 UP042 lint errors: replace (str, Enum) with StrEnum across
  14 files. Python 3.11+ StrEnum is preferred and enforced by ruff.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 19:36:35 -08:00
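For reference, the UP042 change described above looks roughly like the following; the enum name here is illustrative rather than taken from the codebase:

```python
from enum import Enum, StrEnum  # StrEnum is available from Python 3.11

# Before (flagged by ruff UP042 on Python 3.11+):
class EdgeConditionOld(str, Enum):
    ON_SUCCESS = "on_success"

# After:
class EdgeCondition(StrEnum):
    ON_SUCCESS = "on_success"
```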
RichardTang-Aden fc4a39480a Merge pull request #3455 from adenhq/feat/non-oauth-setup
update credentials store to work with non-oauth keys
2026-02-03 19:31:46 -08:00
y0sif b21dd25181 fix(linear): handle credential decryption errors gracefully, handle mcp tool issue with credentials 2026-02-04 05:21:23 +02:00
y0sif 04a18bcbe5 docs(tools): document Linear integration in README and setup credentials claude skill 2026-02-04 04:05:15 +02:00
y0sif 7f66dd67eb feat(linear): add OAuth setup instructions 2026-02-04 04:03:37 +02:00
Timothy b98afb01c8 chore: lint 2026-02-03 18:01:39 -08:00
Timothy ccd6bb7656 chore: lint 2026-02-03 17:57:02 -08:00
bryan ea30e5c631 consolidate workspace to uv monorepo 2026-02-03 17:47:57 -08:00
y0sif cfa03b89c8 test(tools): add comprehensive Linear tool tests 2026-02-04 03:47:28 +02:00
y0sif 9866d7a22b feat(tools): add Linear project management integration 2026-02-04 03:47:03 +02:00
Timothy d16a3c3b22 chore: give command line to demo 2026-02-03 17:38:21 -08:00
Timothy a03bd78c2e fix: formulate tool call results 2026-02-03 17:25:44 -08:00
bryan 3cca41aab1 updating tests 2026-02-03 15:17:19 -08:00
bryan d19aaed946 fixing linter issues 2026-02-03 14:55:51 -08:00
RichardTang-Aden 9a7db8cf94 Merge pull request #3450 from adenhq/adenhq-patch-1
(micro-fix): Update issue templates
2026-02-03 14:53:09 -08:00
bryan f50630c551 update credentials store to work with non-oauth keys 2026-02-03 14:49:12 -08:00
Timothy 0ef2e64733 fix: use mcp tools properly 2026-02-03 14:36:09 -08:00
Aden HQ 3a8e121d43 Update issue templates 2026-02-03 14:24:13 -08:00
bryan 23e249144d Merge remote-tracking branch 'origin/main' into feature/tui-dashboard
# Conflicts:
#	.github/workflows/ci.yml
2026-02-03 12:26:51 -08:00
bryan 25014bfa89 update ci to use uv, updated linting 2026-02-03 12:14:13 -08:00
bryan 78ea585779 tui upgrades 2026-02-03 11:55:22 -08:00
bryan ac13c11f89 Merge remote-tracking branch 'origin/event-loop-arch' into feature/tui-dashboard 2026-02-03 11:06:44 -08:00
Timothy 51d341b88c fix: tool pruning logic 2026-02-03 10:29:35 -08:00
Timothy 7dd70b8e31 feat: tool truncation 2026-02-03 08:50:14 -08:00
GastonAQS 331a6e442f feat: add Trello integration tools and API client 2026-02-03 10:32:25 -03:00
ranjithkumar9343 84b332d989 docs: add Windows compatibility warning
Added a note for Windows users to use WSL/Git Bash to prevent setup errors.
2026-02-03 17:43:36 +05:30
oluwasegun.haziz.omd 7fae57f311 more fixes 2026-02-03 11:53:45 +00:00
Sashank Thapa 1c2295b2b5 Merge branch 'adenhq:main' into feature/twitter-x-mcp-tool 2026-02-03 16:20:45 +05:30
Levin fd1826a267 Merge branch 'adenhq:main' into main 2026-02-03 16:11:05 +05:30
Anjali Yadav bcc6848275 fix(mcp): handle missing exports directory in test generation tools (#3066)
* fix(mcp): handle missing exports path in test generation tools

* fix(mcp): centralize agent path validation across test tools

* fix: remove duplicate if blocks and improve error hint message

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-02-03 18:15:34 +08:00
Hundao 75dd053a40 fix(ci): migrate remaining CI jobs from pip to uv (#3366)
Closes #3363
2026-02-03 18:11:32 +08:00
Yogesh Sakharam Diwate 20f2aa09f2 docs(readme): update architecture diagram src path (#3348) 2026-02-03 14:58:29 +08:00
RichardTang-Aden fb8c810b3d Merge pull request #3293 from Antiarin/feat/llm-datetime-context
feat: inject runtime datetime into LLM system prompts
2026-02-02 20:07:00 -08:00
levxn b99b6c5cd3 latest upstream 2026-02-03 09:35:38 +05:30
haliaeetusvocifer 1f653969a9 added feature google docs integration 2026-02-03 03:51:39 +00:00
RichardTang-Aden ad21cf4243 Merge pull request #3327 from adenhq/fix-for-quickstart
fix: replace the use of PYTHON_CMD in quickstart.sh
2026-02-02 19:35:19 -08:00
Richard Tang 1e45cfff67 feat: Check for litellm import in both CORE_PYTHON and TOOLS_PYTHON environments. 2026-02-02 19:27:32 -08:00
Richard Tang 0280600a47 fix: fix error when test import installation 2026-02-02 19:24:13 -08:00
RichardTang-Aden 571ad518dc Merge pull request #3064 from kuldeepgaur02/fix/quickstart-provider-selection
fix: silent exit when selecting non-Anthropic LLM provider
2026-02-02 19:03:16 -08:00
RichardTang-Aden fe37a25cf1 Merge pull request #3322 from RichardTang-Aden/main
fix: Resolve quickstart.sh compatibility issues and migrate from pip to uv
2026-02-02 18:55:00 -08:00
Timothy e06138628c chore: remove local claude settings 2026-02-02 18:48:18 -08:00
Timothy 1ed0edd158 Merge remote-tracking branch 'upstream/main' into event-loop-arch
# Conflicts:
#	.claude/settings.local.json
2026-02-02 18:46:07 -08:00
Richard Tang 49dbc46082 feat: migrate from pip to uv 2026-02-02 18:45:09 -08:00
Richard Tang a16a4adc09 feat: add message when LLM key is not available 2026-02-02 18:41:20 -08:00
Richard Tang b4ab1cbd56 fix: fix quickstart compatibility 2026-02-02 18:34:09 -08:00
Timothy 6faa63f0d0 fix: loop prevention in feedback edges 2026-02-02 18:26:45 -08:00
bryan f4737dcfe7 Merge remote-tracking branch 'origin/event-loop-arch' into feature/tui-dashboard 2026-02-02 18:25:46 -08:00
Richard Tang 2b44af427f fix: quickstart.sh compatibility fix 2026-02-02 18:21:40 -08:00
RichardTang-Aden 11f7401bc2 Merge branch 'adenhq:main' into main 2026-02-02 18:00:40 -08:00
RichardTang-Aden db7b5180dd Merge pull request #3270 from adenhq/bot-detecting-issue-size
feat: Edit bot prompt to be able to decide on the technical size of issue
2026-02-02 17:49:52 -08:00
Timothy @aden 5b4e56252c Merge pull request #2820 from lakshitaa-chellaramani/feature/github-tool
Feature/GitHub tool
2026-02-02 17:49:34 -08:00
Timothy e3c71f77de chore: fix ruff format 2026-02-02 17:37:37 -08:00
Timothy b09824faec chore: fix lint 2026-02-02 17:36:02 -08:00
RichardTang-Aden c69bc24598 Merge pull request #3301 from adenhq/add-example-structure
docs: sample agent folder, remove docker file in Readme
2026-02-02 16:47:04 -08:00
Richard Tang 0cf17e1c63 feat: sample agent folder, remove docker file in Readme 2026-02-02 14:15:58 -08:00
Timothy @aden feac803491 Merge pull request #3256 from adenhq/feat/integration-tests
Feat/integration tests
2026-02-02 13:23:42 -08:00
Timothy 4aacec30d8 fix: text delta granularity, tool limit problem 2026-02-02 13:21:50 -08:00
RichardTang-Aden b459a2f7a9 Merge pull request #918 from Siddharth2624/fix-malformed-json-tool-args
Handle malformed JSON tool arguments in LiteLLMProvider
2026-02-02 13:04:13 -08:00
bryan ca7f6d3514 fixes to linting 2026-02-02 12:52:11 -08:00
Antiarin ca8ede65f0 feat: inject runtime datetime into LLM system prompts 2026-02-03 02:10:14 +05:30
bryan b033c56ae5 Merge remote-tracking branch 'origin/main' into feature/tui-dashboard 2026-02-02 12:29:27 -08:00
Richard Tang 9a177c46e1 feat: edit bot prompt to be able to decide on the technical size of issue 2026-02-02 11:39:44 -08:00
bryan d49e858d32 lint update 2026-02-02 11:12:09 -08:00
Timothy @aden 20bea9cd7f Merge pull request #2273 from krish341360/fix/concurrent-storage-race-condition
fix: race condition in ConcurrentStorage and cache invalidation bug
2026-02-02 10:57:45 -08:00
bryan d7afa5dcf2 wp-12 2026-02-02 10:41:12 -08:00
Timothy 22e816bf86 chore: update gitignore 2026-02-02 10:30:03 -08:00
krish341360 a7709d489c style: apply ruff formatting to test file 2026-02-02 23:53:27 +05:30
Timothy @aden 3240616808 Merge pull request #3250 from adenhq/feat/validation-client-facing
(micro-fix): added graph validation for client-facing nodes [WP-10]
2026-02-02 10:02:38 -08:00
krish341360 18dfc997b8 fix: resolve lint errors in test file 2026-02-02 23:24:21 +05:30
Timothy @aden 92d0b6addf Merge pull request #3050 from Rockysahu704/rocky-first-contribution
docs: clarify who Hive is for and when to use it
2026-02-02 09:51:05 -08:00
Timothy @aden b9f83d4d61 Merge pull request #3244 from TimothyZhang7/feature/aden-sync-by-provider
Feature/aden sync by provider
2026-02-02 09:39:00 -08:00
levxn 694feaffd2 phase 3 tools implemented, totals now to 45+ tools for Slack for multipurpose integration including CRM support 2026-02-02 22:44:10 +05:30
Timothy @aden 9c16826ad3 Merge pull request #3137 from adenhq/feat/clientIO-gateway
implemented clientIO gateway [WP-9]
2026-02-02 07:29:03 -08:00
levxn eb68e2143b slack tools add ons and latest upstream commit 2026-02-02 19:07:05 +05:30
Harsh Kishorani f305745295 feat(setup): add native PowerShell setup script for Windows (#746)
* feat(setup): add PowerShell setup script with venv for Windows

* docs: restore PEP 668 troubleshooting section

* docs: restore Alpine Linux setup section

---------

Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-02-02 17:02:19 +08:00
Timothy df4d0ad3fd feat: aden provider credential store by provider 2026-02-01 20:34:21 -08:00
Timothy 20b2e4b3dd fix: robust compaction logic 2026-02-01 19:59:27 -08:00
RichardTang-Aden fc22586752 Merge pull request #3128 from adenhq/fix/tests
(micro-fix): fixed pytests and warnings
2026-02-01 19:53:07 -08:00
Richard Tang 646440eba3 chore: update developer doc 2026-02-01 19:49:35 -08:00
Richard Tang 53e5579326 fix: remove requirements.txt 2026-02-01 19:45:32 -08:00
Richard Tang 29a1630d0f feat: add tool tests in CI 2026-02-01 19:38:33 -08:00
bryan 171f4ab2ae fixed pytests and warnings 2026-02-01 19:11:44 -08:00
bryan 9c33da7b8d added graph validation for client-facing nodes [WP-10] 2026-02-01 18:45:35 -08:00
levxn 960a4549ef latest upstream v1 2026-02-01 19:03:39 +05:30
Kuldeep Raj Gour 363a650dfa fix: silent exit when selecting non-Anthropic LLM provider
prompt_choice used return codes to pass selections. Combined with set -e,
non-zero returns (options 2-6) caused immediate script exit.

Fix: Use global variable PROMPT_CHOICE instead of return codes.
2026-02-01 15:51:46 +05:45
Rocky Sahu b6e2634537 docs: clarify who Hive is for and when to use it 2026-02-01 14:38:02 +05:30
Siddharth Varshney 9f424f2fc0 Remove unused Fake* classes and unrelated note block
- Remove unused FakeFunction, FakeToolCall, FakeMessage, FakeChoice, FakeResponse classes from test_litellm_provider.py
- Remove unrelated note block from building-production-ai-agents.md
- Fix lint issues (trailing whitespace)
2026-01-31 20:56:52 +00:00
levxn 25989d9f90 slack add ons v3, now ~30 tools 2026-02-01 01:29:32 +05:30
Lakshitaa Chellaramani 0715fc5498 Merge branch 'main' into feature/github-tool 2026-01-31 23:31:22 +05:30
lakshitaa f9fddd6663 fix(github-tool): Address PR feedback - security and integration fixes
Addresses all blockers and suggestions from code review:

**Blockers fixed:**
1. Register tools in tools/__init__.py - Added import, registration call,
   and all 13 tool names to return list
2. Add credential spec - Created GitHub entry in credentials/integrations.py
   with env_var, tools list, help URL, and health check config
3. Move tests to correct location - Relocated from
   tools/src/.../github_tool/tests/ to tools/tests/tools/test_github_tool.py
4. Removed .claude/settings.local.json from PR

**Security improvements:**
1. URL parameter sanitization - Added _sanitize_path_param() to reject
   path traversal attempts (/ or ..) in owner, repo, branch, username params
2. Error message sanitization - Added _sanitize_error_message() to prevent
   token leaks from httpx.RequestError exceptions

All 38 tests passing.
2026-01-31 23:26:33 +05:30
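As a rough sketch of the sanitization described above (the real helpers in the GitHub tool may differ in naming and detail):

```python
def _sanitize_path_param(value: str, param_name: str) -> str:
    """Reject values that could alter the request path (path traversal)."""
    if "/" in value or ".." in value:
        raise ValueError(f"invalid {param_name!r}: '/' and '..' are not allowed")
    return value


def _sanitize_error_message(message: str, token: str) -> str:
    """Mask the API token if it ever appears in an error message."""
    return message.replace(token, "***") if token else message
```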
levxn 684da96a83 slack add ons v2, now 15 tools in total 2026-01-31 20:39:38 +05:30
levxn abae7979cb excluded a file not needed 2026-01-31 18:37:35 +05:30
levxn 49bce57fcf slack bot integration v1 2026-01-31 18:35:27 +05:30
Sashank Thapa fa43ca3785 Merge branch 'adenhq:main' into feature/twitter-x-mcp-tool 2026-01-31 16:26:39 +05:30
kozuedoingregression b4a2c3bd14 ruff formatting and lint fixes 2026-01-31 16:18:16 +05:30
kozuedoingregression 2d4ec4f462 lint fix 2026-01-31 16:14:25 +05:30
kozuedoingregression 1e8b933da0 add X (Twitter) integration tool 2026-01-31 15:49:16 +05:30
Richard Tang 63d017fc21 fix: bash version support 2026-01-30 20:32:56 -08:00
lakshitaa bfb660275e feat(tools): Add GitHub tool for repository and issue management
Implements comprehensive GitHub REST API v3 integration with 15 MCP tools
for managing repositories, issues, pull requests, code search, and branches.

Features:
- Repository management (list, get, search repos)
- Issue operations (create, update, close, list issues)
- Pull request management (create, list, get PRs)
- Code search across GitHub
- Branch operations (list, get branch info)

Technical details:
- 15 MCP tools organized in 5 categories
- 38 comprehensive tests with mocking (all passing)
- Full credential store support (env var + CredentialStoreAdapter)
- Proper error handling (timeout, network, API errors)
- Follows HubSpot/Slack tool patterns exactly

Files:
- tools/src/aden_tools/tools/github_tool/github_tool.py (757 lines)
- tools/src/aden_tools/tools/github_tool/tests/test_github_tool.py (628 lines)
- tools/src/aden_tools/tools/github_tool/README.md (646 lines)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-31 04:32:46 +05:30
lakshitaa d6ae48bc58 Merge upstream/main 2026-01-31 03:19:12 +05:30
Fernando Mano 4310852ee6 chore: Merge branch 'main' into feat/observability-trace-context 2026-01-30 15:09:54 -03:00
mubarakar95 d32308b6d2 Add TUI enhancements: screenshot feature, header polish, and keybinding updates
- Implement SVG screenshot functionality (Ctrl+S)
- Remove header icon and disable expansion
- Hide borders in screenshots for clean output
- Change command palette to Ctrl+O
- Make screenshot shortcut work globally (priority binding)
2026-01-30 16:56:34 +05:30
mubarakar95 604d16e353 Enhance TUI: Fix rendering, polish layout, and clean up header 2026-01-30 15:40:15 +05:30
mubarakar95 db577785d6 feat: Implement fully functional TUI dashboard
- Fix ScreenStackError crash by moving runtime init inside async context
- Implement selectable logging with TextArea widget
- Create interactive ChatREPL for agent execution
- Optimize 3-pane layout: logs/graph on left (60%), chat on right (40%)
- Add thread-safe event handling with call_from_thread
- Add TUI selection guide documentation
All features tested and working.
2026-01-30 11:49:04 +05:30
mubarakar95 c9ae3a0541 feat: finalize TUI with minimal stable mode
The TUI feature is fully functional with a minimal, stable interface:
- Header with title
- Central display area
- Footer with keybindings

This provides a foundation for future enhancements. Custom widgets
(LogPane, GraphOverview, ChatRepl) are available in the codebase
and can be integrated once Textual rendering issues on Windows are
resolved.

The --tui flag successfully launches the TUI dashboard for any agent:
  hive run <agent_path> --tui

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-30 01:31:59 +05:30
mubarakar95 ed95dab9f3 fix: implement lazy widget loading and store references
Defer widget creation from __init__ to compose() and store references
as instance variables to prevent garbage collection and initialization
order issues. This resolves ScreenStackError during TUI startup.

Changes:
- Move LogPane, GraphOverview, ChatRepl creation to compose()
- Store widgets as instance variables (self.log_pane, etc)
- Restore Horizontal/Vertical container layout

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-30 01:27:56 +05:30
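The lazy-loading pattern described above, sketched with stand-in widgets (the real LogPane/GraphOverview/ChatRepl classes live in the TUI module):

```python
from textual.app import App, ComposeResult
from textual.widgets import Static


class AdenTUI(App):
    def compose(self) -> ComposeResult:
        # Create widgets here rather than in __init__, and keep references on
        # self so they are not garbage collected and mount in a stable order.
        self.log_pane = Static("logs")         # stand-in for LogPane
        self.graph_overview = Static("graph")  # stand-in for GraphOverview
        self.chat_repl = Static("chat")        # stand-in for ChatRepl
        yield self.log_pane
        yield self.graph_overview
        yield self.chat_repl
```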
mubarakar95 a6536cef94 fix: restore Horizontal/Vertical layout containers in TUI compose
The compose() method was missing the proper container structure
for the layout, which caused initialization to fail. Restored the
Horizontal/Vertical container layout with proper pane structure.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-30 01:22:41 +05:30
mubarakar95 3ccc81e81c feat: add interactive TUI dashboard for agent execution
Implement a Textual-based terminal UI for the hive framework that displays:
- Agent execution status and progress
- Log output in real-time
- Graph visualization of agent execution flow
- Interactive REPL for user input/feedback

Changes:
- Add core/framework/tui/ module with AdenTUI app and custom widgets
- Add LogPane widget for streaming log output
- Add GraphOverview widget for execution graph visualization
- Add ChatRepl widget for interactive user input
- Add TUILogHandler for capturing framework logs to TUI
- Update cli.py to support --tui flag for launching dashboard
- Update runner.py to enable AgentRuntime when TUI is active
- Fix missing Textual container imports (Horizontal, Vertical, Container)
- Fix race conditions in async TUI initialization
- Fix threading issues in app event handling

The TUI is launched via: hive run <agent_path> --tui

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-30 01:19:28 +05:30
krish341360 94197cbcb9 fix: race condition in ConcurrentStorage and cache invalidation bug
- Fix race condition: cache now updates only after successful write
- Fix cache invalidation: summary cache invalidated on save_run()
- Add 4 tests to verify the fixes
2026-01-29 16:35:38 +05:30
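A minimal sketch of the ordering fix described above, assuming a storage class with a write-through cache (names are illustrative, not the real class):

```python
import threading


class CachedStorage:
    def __init__(self, backend):
        self._backend = backend
        self._cache: dict = {}
        self._summary_cache = None
        self._lock = threading.Lock()

    def save_run(self, run_id: str, data: dict) -> None:
        # Write first; update the cache only after the write succeeds so a
        # failed write cannot leave a stale or phantom cache entry.
        self._backend.write(run_id, data)
        with self._lock:
            self._cache[run_id] = data
            self._summary_cache = None  # invalidate summary cache on save_run()
```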
Fernando Mano 853f1e9873 chore: Merge remote-tracking branch 'refs/remotes/origin/feat/observability-trace-context' into feat/observability-trace-context 2026-01-28 16:52:38 -03:00
Fernando Mano ae5fe84fb2 feat(observability): Structured logging with automatic trace context propagation -- fix ruff formatting errors 2026-01-28 15:04:06 -03:00
Fernando Mano 92b538d5ae Merge branch 'adenhq:main' into feat/observability-trace-context 2026-01-28 14:52:37 -03:00
Fernando Mano 5351703949 feat(observability): Structured logging with automatic trace context propagation -- fix lint error 2026-01-28 14:52:02 -03:00
Aneesh 48b1e0e038 Docs: clarify agent creation assumptions in Getting Started 2026-01-28 22:49:30 +05:30
Fernando Mano 7ba8169444 feat(observability): Structured logging with automatic trace context propagation -- remove colored logs for some cases when in prod mode 2026-01-28 12:46:54 -03:00
Fernando Mano d090c954ae feat(observability): Structured logging with automatic trace context propagation -- adjust all logs to print full uuids when in prod mode and include documentation 2026-01-28 12:31:11 -03:00
Fernando Mano 9bee1666f1 chore: Merge branch 'main' into feat/observability-trace-context 2026-01-28 11:35:13 -03:00
Fernando Mano fb94637339 feat(observability): Structured logging with automatic trace context propagation 2026-01-28 11:27:24 -03:00
Siddharth Varshney a96cd546c8 Merge branch 'main' into fix-malformed-json-tool-args 2026-01-28 15:35:33 +05:30
Siddharth Varshney eb33d4f1c2 Remove duplicate malformed JSON tool-call test 2026-01-28 09:57:27 +00:00
Siddharth Varshney 4253956326 Handle malformed JSON tool arguments safely 2026-01-28 09:49:17 +00:00
Siddharth Varshney d6b05bf337 Handle malformed JSON tool arguments in LiteLLMProvider 2026-01-26 23:27:32 +00:00
958 changed files with 212612 additions and 42242 deletions
-40
@@ -1,40 +0,0 @@
{
"permissions": {
"allow": [
"Bash(npm install:*)",
"Bash(npm test:*)",
"Skill(building-agents-construction)",
"Skill(building-agents-construction:*)",
"Bash(PYTHONPATH=core:exports pytest:*)",
"mcp__agent-builder__create_session",
"mcp__agent-builder__get_session_status",
"mcp__agent-builder__set_goal",
"mcp__agent-builder__list_mcp_servers",
"mcp__agent-builder__test_node",
"mcp__agent-builder__add_node",
"mcp__agent-builder__add_edge",
"mcp__agent-builder__validate_graph",
"Bash(ruff check:*)",
"Bash(PYTHONPATH=core:exports python:*)",
"mcp__agent-builder__list_tests",
"mcp__agent-builder__generate_constraint_tests",
"Bash(python -m agent:*)",
"Bash(python agent.py:*)",
"Bash(python -c:*)",
"Bash(done)",
"Bash(xargs cat:*)",
"mcp__agent-builder__list_mcp_tools",
"mcp__agent-builder__add_mcp_server",
"mcp__agent-builder__check_missing_credentials",
"mcp__agent-builder__store_credential",
"mcp__agent-builder__list_stored_credentials",
"mcp__agent-builder__delete_stored_credential",
"mcp__agent-builder__verify_credentials",
"Bash(PYTHONPATH=/home/timothy/oss/hive/core:/home/timothy/oss/hive/exports python:*)",
"Bash(PYTHONPATH=core:exports:tools/src python -m hubspot_input:*)",
"mcp__agent-builder__export_graph"
]
},
"enabledMcpjsonServers": ["agent-builder", "tools"],
"enableAllProjectMcpServers": true
}
+16
@@ -0,0 +1,16 @@
{
"permissions": {
"allow": [
"Bash(git status:*)",
"Bash(gh run view:*)",
"Bash(uv run:*)",
"Bash(env:*)",
"Bash(python -m py_compile:*)",
"Bash(python -m pytest:*)",
"Bash(source:*)",
"Bash(find:*)",
"Bash(PYTHONPATH=core:exports:tools/src uv run pytest:*)"
]
},
"enabledMcpjsonServers": ["tools"]
}
-463
@@ -1,463 +0,0 @@
---
name: agent-workflow
description: Complete workflow for building, implementing, and testing goal-driven agents. Orchestrates building-agents-* and testing-agent skills. Use when starting a new agent project, unsure which skill to use, or need end-to-end guidance.
license: Apache-2.0
metadata:
author: hive
version: "2.0"
type: workflow-orchestrator
orchestrates:
- building-agents-core
- building-agents-construction
- building-agents-patterns
- testing-agent
- setup-credentials
---
# Agent Development Workflow
Complete Standard Operating Procedure (SOP) for building production-ready goal-driven agents.
## Overview
This workflow orchestrates specialized skills to take you from initial concept to production-ready agent:
1. **Understand Concepts** → `/building-agents-core` (optional)
2. **Build Structure** → `/building-agents-construction`
3. **Optimize Design** → `/building-agents-patterns` (optional)
4. **Setup Credentials** → `/setup-credentials` (if agent uses tools requiring API keys)
5. **Test & Validate** → `/testing-agent`
## When to Use This Workflow
Use this meta-skill when:
- Starting a new agent from scratch
- Unclear which skill to use first
- Need end-to-end guidance for agent development
- Want consistent, repeatable agent builds
**Skip this workflow** if:
- You only need to test an existing agent → use `/testing-agent` directly
- You know exactly which phase you're in → use specific skill directly
## Quick Decision Tree
```
"Need to understand agent concepts" → building-agents-core
"Build a new agent" → building-agents-construction
"Optimize my agent design" → building-agents-patterns
"Set up API keys for my agent" → setup-credentials
"Test my agent" → testing-agent
"Not sure what I need" → Read phases below, then decide
"Agent has structure but needs implementation" → See agent directory STATUS.md
```
## Phase 0: Understand Concepts (Optional)
**Duration**: 5-10 minutes
**Skill**: `/building-agents-core`
**Input**: Questions about agent architecture
### When to Use
- First time building an agent
- Need to understand node types, edges, goals
- Want to validate tool availability
- Learning about pause/resume architecture
### What This Phase Provides
- Architecture overview (Python packages, not JSON)
- Core concepts (Goal, Node, Edge, Pause/Resume)
- Tool discovery and validation procedures
- Workflow overview
**Skip this phase** if you already understand agent fundamentals.
## Phase 1: Build Agent Structure
**Duration**: 15-30 minutes
**Skill**: `/building-agents-construction`
**Input**: User requirements ("Build an agent that...")
### What This Phase Does
Creates the complete agent architecture:
- Package structure (`exports/agent_name/`)
- Goal with success criteria and constraints
- Workflow graph (nodes and edges)
- Node specifications
- CLI interface
- Documentation
### Process
1. **Create package** - Directory structure with skeleton files
2. **Define goal** - Success criteria and constraints written to agent.py
3. **Design nodes** - Each node approved and written incrementally
4. **Connect edges** - Workflow graph with conditional routing
5. **Finalize** - Agent class, exports, and documentation
### Outputs
- ✅ `exports/agent_name/` package created
- ✅ Goal defined in agent.py
- ✅ 3-5 success criteria defined
- ✅ 1-5 constraints defined
- ✅ 5-10 nodes specified in nodes/__init__.py
- ✅ 8-15 edges connecting workflow
- ✅ Validated structure (passes `python -m agent_name validate`)
- ✅ README.md with usage instructions
- ✅ CLI commands (info, validate, run, shell)
### Success Criteria
You're ready for Phase 2 when:
- Agent structure validates without errors
- All nodes and edges are defined
- CLI commands work (info, validate)
- You see: "Agent complete: exports/agent_name/"
### Common Outputs
The building-agents-construction skill produces:
```
exports/agent_name/
├── __init__.py (package exports)
├── __main__.py (CLI interface)
├── agent.py (goal, graph, agent class)
├── nodes/__init__.py (node specifications)
├── config.py (configuration)
├── implementations.py (may be created for Python functions)
└── README.md (documentation)
```
### Next Steps
**If structure complete and validated:**
→ Check `exports/agent_name/STATUS.md` or `IMPLEMENTATION_GUIDE.md`
→ These files explain implementation options
→ You may need to add Python functions or MCP tools (not covered by current skills)
**If want to optimize design:**
→ Proceed to Phase 1.5 (building-agents-patterns)
**If ready to test:**
→ Proceed to Phase 2
## Phase 1.5: Optimize Design (Optional)
**Duration**: 10-15 minutes
**Skill**: `/building-agents-patterns`
**Input**: Completed agent structure
### When to Use
- Want to add pause/resume functionality
- Need error handling patterns
- Want to optimize performance
- Need examples of complex routing
- Want best practices guidance
### What This Phase Provides
- Practical examples and patterns
- Pause/resume architecture
- Error handling strategies
- Anti-patterns to avoid
- Performance optimization techniques
**Skip this phase** if your agent design is straightforward.
## Phase 2: Test & Validate
**Duration**: 20-40 minutes
**Skill**: `/testing-agent`
**Input**: Working agent from Phase 1
### What This Phase Does
Creates comprehensive test suite:
- Constraint tests (verify hard requirements)
- Success criteria tests (measure goal achievement)
- Edge case tests (handle failures gracefully)
- Integration tests (end-to-end workflows)
### Process
1. **Analyze agent** - Read goal, constraints, success criteria
2. **Generate tests** - Create pytest files in `exports/agent_name/tests/`
3. **User approval** - Review and approve each test
4. **Run evaluation** - Execute tests and collect results
5. **Debug failures** - Identify and fix issues
6. **Iterate** - Repeat until all tests pass
### Outputs
- ✅ Test files in `exports/agent_name/tests/`
- ✅ Test report with pass/fail metrics
- ✅ Coverage of all success criteria
- ✅ Coverage of all constraints
- ✅ Edge case handling verified
### Success Criteria
You're done when:
- All tests pass
- All success criteria validated
- All constraints verified
- Agent handles edge cases
- Test coverage is comprehensive
### Next Steps
**Agent ready for:**
- Production deployment
- Integration into larger systems
- Documentation and handoff
- Continuous monitoring
## Phase Transitions
### From Phase 1 to Phase 2
**Trigger signals:**
- "Agent complete: exports/..."
- Structure validation passes
- README indicates implementation complete
**Before proceeding:**
- Verify agent can be imported: `from exports.agent_name import default_agent` (a minimal smoke test is sketched after this list)
- Check if implementation is needed (see STATUS.md or IMPLEMENTATION_GUIDE.md)
- Confirm agent executes without import errors
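A minimal import smoke test; `agent_name` is a placeholder for your agent's package:

```python
# Run from the repo root; if `exports` is not importable as a package on your
# setup, put it on the path instead: PYTHONPATH=core:exports python -c "from agent_name import default_agent"
from exports.agent_name import default_agent

print(default_agent)  # any ImportError here means Phase 1 is not actually complete
```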
### Skipping Phases
**When to skip Phase 1:**
- Agent structure already exists
- Only need to add tests
- Modifying existing agent
**When to skip Phase 2:**
- Prototyping or exploring
- Agent not production-bound
- Manual testing sufficient
## Common Patterns
### Pattern 1: Complete New Build (Simple)
```
User: "Build an agent that monitors files"
→ Use /building-agents-construction
→ Agent structure created
→ Use /testing-agent
→ Tests created and passing
→ Done: Production-ready agent
```
### Pattern 1b: Complete New Build (With Learning)
```
User: "Build an agent (first time)"
→ Use /building-agents-core (understand concepts)
→ Use /building-agents-construction (build structure)
→ Use /building-agents-patterns (optimize design)
→ Use /testing-agent (validate)
→ Done: Production-ready agent
```
### Pattern 2: Test Existing Agent
```
User: "Test my agent at exports/my_agent"
→ Skip Phase 1
→ Use /testing-agent directly
→ Tests created
→ Done: Validated agent
```
### Pattern 3: Iterative Development
```
User: "Build an agent"
→ Use /building-agents-construction (Phase 1)
→ Implementation needed (see STATUS.md)
→ [User implements functions]
→ Use /testing-agent (Phase 2)
→ Tests reveal bugs
→ [Fix bugs manually]
→ Re-run tests
→ Done: Working agent
```
### Pattern 4: Complex Agent with Patterns
```
User: "Build an agent with multi-turn conversations"
→ Use /building-agents-core (learn pause/resume)
→ Use /building-agents-construction (build structure)
→ Use /building-agents-patterns (implement pause/resume pattern)
→ Use /testing-agent (validate conversation flows)
→ Done: Complex conversational agent
```
## Skill Dependencies
```
agent-workflow (meta-skill)
├── building-agents-core (foundational)
│ ├── Architecture concepts
│ ├── Node/Edge/Goal definitions
│ ├── Tool discovery procedures
│ └── Workflow overview
├── building-agents-construction (procedural)
│ ├── Creates package structure
│ ├── Defines goal
│ ├── Adds nodes incrementally
│ ├── Connects edges
│ ├── Finalizes agent class
│ └── Requires: building-agents-core
├── building-agents-patterns (reference)
│ ├── Best practices
│ ├── Pause/resume patterns
│ ├── Error handling
│ ├── Anti-patterns
│ └── Performance optimization
└── testing-agent
├── Reads agent goal
├── Generates tests
├── Runs evaluation
└── Reports results
```
## Troubleshooting
### "Agent structure won't validate"
- Check node IDs match between nodes/__init__.py and agent.py
- Verify all edges reference valid node IDs
- Ensure entry_node exists in nodes list
- Run: `PYTHONPATH=core:exports python -m agent_name validate`
### "Agent has structure but won't run"
- Check for STATUS.md or IMPLEMENTATION_GUIDE.md in agent directory
- Implementation may be needed (Python functions or MCP tools)
- This is expected - building-agents-construction creates structure, not implementation
- See implementation guide for completion options
### "Tests are failing"
- Review test output for specific failures
- Check agent goal and success criteria
- Verify constraints are met
- Use `/testing-agent` to debug and iterate
- Fix agent code and re-run tests
### "Not sure which phase I'm in"
Run these checks:
```bash
# Check if agent structure exists
ls exports/my_agent/agent.py
# Check if it validates
PYTHONPATH=core:exports python -m my_agent validate
# Check if tests exist
ls exports/my_agent/tests/
# If structure exists and validates → Phase 2 (testing)
# If structure doesn't exist → Phase 1 (building)
# If tests exist but failing → Debug phase
```
## Best Practices
### For Phase 1 (Building)
1. **Start with clear requirements** - Know what the agent should do
2. **Define success criteria early** - Measurable goals drive design
3. **Keep nodes focused** - One responsibility per node
4. **Use descriptive names** - Node IDs should explain purpose
5. **Validate incrementally** - Check structure after each major addition
### For Phase 2 (Testing)
1. **Test constraints first** - Hard requirements must pass
2. **Mock external dependencies** - Use mock mode for LLMs/APIs
3. **Cover edge cases** - Test failures, not just success paths
4. **Iterate quickly** - Fix one test at a time
5. **Document test patterns** - Future tests follow same structure
### General Workflow
1. **Use version control** - Git commit after each phase
2. **Document decisions** - Update README with changes
3. **Keep iterations small** - Build → Test → Fix → Repeat
4. **Preserve working states** - Tag successful iterations
5. **Learn from failures** - Failed tests reveal design issues
## Exit Criteria
You're done with the workflow when:
✅ Agent structure validates
✅ All tests pass
✅ Success criteria met
✅ Constraints verified
✅ Documentation complete
✅ Agent ready for deployment
## Additional Resources
- **building-agents-core**: See `.claude/skills/building-agents-core/SKILL.md`
- **building-agents-construction**: See `.claude/skills/building-agents-construction/SKILL.md`
- **building-agents-patterns**: See `.claude/skills/building-agents-patterns/SKILL.md`
- **testing-agent**: See `.claude/skills/testing-agent/SKILL.md`
- **Agent framework docs**: See `core/README.md`
- **Example agents**: See `exports/` directory
## Summary
This workflow provides a proven path from concept to production-ready agent:
1. **Learn** with `/building-agents-core` → Understand fundamentals (optional)
2. **Build** with `/building-agents-construction` → Get validated structure
3. **Optimize** with `/building-agents-patterns` → Apply best practices (optional)
4. **Test** with `/testing-agent` → Get verified functionality
The workflow is **flexible** - skip phases as needed, iterate freely, and adapt to your specific requirements. The goal is **production-ready agents** built with **consistent, repeatable processes**.
## Skill Selection Guide
**Choose building-agents-core when:**
- First time building agents
- Need to understand architecture
- Validating tool availability
- Learning about node types and edges
**Choose building-agents-construction when:**
- Actually building an agent
- Have clear requirements
- Ready to write code
- Want step-by-step guidance
**Choose building-agents-patterns when:**
- Agent structure complete
- Need advanced patterns
- Implementing pause/resume
- Optimizing performance
- Want best practices
**Choose testing-agent when:**
- Agent structure complete
- Ready to validate functionality
- Need comprehensive test coverage
- Debugging agent behavior
@@ -1,199 +0,0 @@
# Example: File Monitor Agent
This example shows the complete agent-workflow in action for building a file monitoring agent.
## Initial Request
```
User: "Build an agent that monitors ~/Downloads and copies new files to ~/Documents"
```
## Phase 1: Building (20 minutes)
### Step 1: Create Structure
Agent invokes `/building-agents-construction` skill and:
1. Creates `exports/file_monitor_agent/` package
2. Writes skeleton files (__init__.py, __main__.py, agent.py, etc.)
**Output**: Package structure visible immediately
### Step 2: Define Goal
```python
goal = Goal(
id="file-monitor-copy",
name="Automated File Monitor & Copy",
success_criteria=[
# 100% detection rate
# 100% copy success
# 100% conflict resolution
# >99% uptime
],
constraints=[
# Preserve originals
# Handle errors gracefully
# Track state
# Respect permissions
]
)
```
**Output**: Goal written to agent.py
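For reference, one of the placeholder criteria above written out with the fields the construction skill asks for (id, description, metric, target, weight); the values are illustrative and the framework's concrete types may differ:
```python
detection_criterion = {
    "id": "detect-new-files",
    "description": "Every new file in ~/Downloads is detected within one polling cycle",
    "metric": "detection_rate",
    "target": 1.0,   # the "100% detection rate" placeholder above
    "weight": 0.25,
}
```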
### Step 3: Design Nodes
7 nodes approved and written incrementally:
1. `initialize-state` - Set up tracking
2. `list-downloads` - Scan directory
3. `identify-new-files` - Find new files
4. `check-for-new-files` - Router
5. `copy-files` - Copy with conflict resolution
6. `update-state` - Mark as processed
7. `wait-interval` - Sleep between cycles
**Output**: All nodes in nodes/__init__.py
### Step 4: Connect Edges
8 edges connecting the workflow loop:
```
initialize → list → identify → check
check → copy    (new files found)
check → wait    (no new files)
copy → update → wait
wait → list     (loop back each cycle)
```
**Output**: Edges written to agent.py
### Step 5: Finalize
```bash
$ PYTHONPATH=core:exports python -m file_monitor_agent validate
✓ Agent is valid
$ PYTHONPATH=core:exports python -m file_monitor_agent info
Agent: File Monitor & Copy Agent
Nodes: 7
Edges: 8
```
**Phase 1 Complete**: Structure validated ✅
### Status After Phase 1
```
exports/file_monitor_agent/
├── __init__.py ✅ (exports)
├── __main__.py ✅ (CLI)
├── agent.py ✅ (goal, graph, agent class)
├── nodes/__init__.py ✅ (7 nodes)
├── config.py ✅ (configuration)
├── implementations.py ✅ (Python functions)
├── README.md ✅ (documentation)
├── IMPLEMENTATION_GUIDE.md ✅ (next steps)
└── STATUS.md ✅ (current state)
```
**Note**: Implementation gap exists - data flow needs connection (covered in STATUS.md)
## Phase 2: Testing (25 minutes)
### Step 1: Analyze Agent
Agent invokes `/testing-agent` skill and:
1. Reads goal from `exports/file_monitor_agent/agent.py`
2. Identifies 4 success criteria to test
3. Identifies 4 constraints to verify
4. Plans test coverage
### Step 2: Generate Tests
Creates test files:
```
exports/file_monitor_agent/tests/
├── conftest.py (fixtures)
├── test_constraints.py (4 constraint tests)
├── test_success_criteria.py (4 success tests)
└── test_edge_cases.py (error handling)
```
Tests approved incrementally by user.
### Step 3: Run Tests
```bash
$ PYTHONPATH=core:exports pytest exports/file_monitor_agent/tests/
test_constraints.py::test_preserves_originals PASSED
test_constraints.py::test_handles_errors PASSED
test_constraints.py::test_tracks_state PASSED
test_constraints.py::test_respects_permissions PASSED
test_success_criteria.py::test_detects_all_files PASSED
test_success_criteria.py::test_copies_all_files PASSED
test_success_criteria.py::test_resolves_conflicts PASSED
test_success_criteria.py::test_continuous_run PASSED
test_edge_cases.py::test_empty_directory PASSED
test_edge_cases.py::test_permission_denied PASSED
test_edge_cases.py::test_disk_full PASSED
test_edge_cases.py::test_large_files PASSED
========================== 12 passed in 3.42s ==========================
```
**Phase 2 Complete**: All tests pass ✅
## Final Output
**Production-Ready Agent:**
```bash
# Run the agent
./RUN_AGENT.sh
# Or manually
PYTHONPATH=core:exports:tools/src python -m file_monitor_agent run
```
**Capabilities:**
- Monitors ~/Downloads continuously
- Copies new files to ~/Documents
- Resolves conflicts with timestamps
- Handles errors gracefully
- Tracks processed files
- Runs as background service
**Total Time**: ~45 minutes from concept to production
## Key Learnings
1. **Incremental building** - Files written immediately, visible throughout
2. **Validation early** - Structure validated before moving to implementation
3. **Test-driven** - Tests reveal real behavior
4. **Documentation included** - README, STATUS, and guides auto-generated
5. **Repeatable process** - Same workflow for any agent type
## Variations
**For simpler agents:**
- Fewer nodes (3-5 instead of 7)
- Simpler workflow (linear instead of looping)
- Faster build time (10-15 minutes)
**For complex agents:**
- More nodes (10-15+)
- Multiple subgraphs
- Pause/resume points for human-in-the-loop
- Longer build time (45-60 minutes)
The workflow scales to your needs!
@@ -1,361 +0,0 @@
---
name: building-agents-construction
description: Step-by-step guide for building goal-driven agents. Creates package structure, defines goals, adds nodes, connects edges, and finalizes agent class. Use when actively building an agent.
license: Apache-2.0
metadata:
author: hive
version: "2.0"
type: procedural
part_of: building-agents
requires: building-agents-core
---
# Agent Construction - EXECUTE THESE STEPS
**THIS IS AN EXECUTABLE WORKFLOW. DO NOT DISPLAY THIS FILE. EXECUTE THE STEPS BELOW.**
When this skill is loaded, IMMEDIATELY begin executing Step 1. Do not explain what you will do - just do it.
---
## STEP 1: Initialize Build Environment
**EXECUTE THESE TOOL CALLS NOW:**
1. Register the hive-tools MCP server:
```
mcp__agent-builder__add_mcp_server(
name="hive-tools",
transport="stdio",
command="python",
args='["mcp_server.py", "--stdio"]',
cwd="tools",
description="Hive tools MCP server"
)
```
2. Create a build session (replace AGENT_NAME with the user's requested agent name in snake_case):
```
mcp__agent-builder__create_session(name="AGENT_NAME")
```
3. Discover available tools:
```
mcp__agent-builder__list_mcp_tools()
```
4. Create the package directory:
```
mkdir -p exports/AGENT_NAME/nodes
```
**AFTER completing these calls**, tell the user:
> ✅ Build environment initialized
>
> - Session created
> - Available tools: [list the tools from step 3]
>
> Proceeding to define the agent goal...
**THEN immediately proceed to STEP 2.**
---
## STEP 2: Define and Approve Goal
**PROPOSE a goal to the user.** Based on what they asked for, propose:
- Goal ID (kebab-case)
- Goal name
- Goal description
- 3-5 success criteria (each with: id, description, metric, target, weight)
- 2-4 constraints (each with: id, description, constraint_type, category)
**FORMAT your proposal as a clear summary, then ask for approval:**
> **Proposed Goal: [Name]**
>
> [Description]
>
> **Success Criteria:**
>
> 1. [criterion 1]
> 2. [criterion 2]
> ...
>
> **Constraints:**
>
> 1. [constraint 1]
> 2. [constraint 2]
> ...
**THEN call AskUserQuestion:**
```
AskUserQuestion(questions=[{
"question": "Do you approve this goal definition?",
"header": "Goal",
"options": [
{"label": "Approve", "description": "Goal looks good, proceed"},
{"label": "Modify", "description": "I want to change something"}
],
"multiSelect": false
}])
```
**WAIT for user response.**
- If **Approve**: Call `mcp__agent-builder__set_goal(...)` with the goal details, then proceed to STEP 3
- If **Modify**: Ask what they want to change, update proposal, ask again
---
## STEP 3: Design Node Workflow
**BEFORE designing nodes**, review the available tools from Step 1. Nodes can ONLY use tools that exist.
**DESIGN the workflow** as a series of nodes. For each node, determine:
- node_id (kebab-case)
- name
- description
- node_type: `"llm_generate"` (no tools) or `"llm_tool_use"` (uses tools)
- input_keys (what data this node receives)
- output_keys (what data this node produces)
- tools (ONLY tools that exist - empty list for llm_generate)
- system_prompt
**PRESENT the workflow to the user:**
> **Proposed Workflow: [N] nodes**
>
> 1. **[node-id]** - [description]
>
> - Type: [llm_generate/llm_tool_use]
> - Input: [keys]
> - Output: [keys]
> - Tools: [tools or "none"]
>
> 2. **[node-id]** - [description]
> ...
>
> **Flow:** node1 → node2 → node3 → ...
**THEN call AskUserQuestion:**
```
AskUserQuestion(questions=[{
"question": "Do you approve this workflow design?",
"header": "Workflow",
"options": [
{"label": "Approve", "description": "Workflow looks good, proceed to build nodes"},
{"label": "Modify", "description": "I want to change the workflow"}
],
"multiSelect": false
}])
```
**WAIT for user response.**
- If **Approve**: Proceed to STEP 4
- If **Modify**: Ask what they want to change, update design, ask again
---
## STEP 4: Build Nodes One by One
**FOR EACH node in the approved workflow:**
1. **Call** `mcp__agent-builder__add_node(...)` with the node details (a complete example call is sketched after this list)
- input_keys and output_keys must be JSON strings: `'["key1", "key2"]'`
- tools must be a JSON string: `'["tool1"]'` or `'[]'`
2. **Call** `mcp__agent-builder__test_node(...)` to validate:
```
mcp__agent-builder__test_node(
node_id="the-node-id",
test_input='{"key": "test value"}',
mock_llm_response='{"output_key": "test output"}'
)
```
3. **Check result:**
- If valid: Tell user "✅ Node [id] validated" and continue to next node
- If invalid: Show errors, fix the node, re-validate
4. **Show progress** after each node:
```
mcp__agent-builder__get_session_status()
```
> ✅ Node [X] of [Y] complete: [node-id]
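For orientation, a complete `add_node` call for a hypothetical reasoning node, using the fields from STEP 3 (the tool's exact signature may differ slightly):
```
mcp__agent-builder__add_node(
    node_id="summarize-findings",
    name="Summarize Findings",
    description="Condense fetched source content into key findings",
    node_type="llm_generate",
    input_keys='["source_content"]',
    output_keys='["key_findings"]',
    tools='[]',
    system_prompt="CRITICAL: Return ONLY raw JSON with a key_findings list."
)
```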
**AFTER all nodes are added and validated**, proceed to STEP 5.
---
## STEP 5: Connect Edges
**DETERMINE the edges** based on the workflow flow. For each connection:
- edge_id (kebab-case)
- source (node that outputs)
- target (node that receives)
- condition: `"on_success"`, `"always"`, `"on_failure"`, or `"conditional"`
- condition_expr (Python expression, only if conditional)
- priority (integer, lower = higher priority)
**FOR EACH edge, call:**
```
mcp__agent-builder__add_edge(
edge_id="source-to-target",
source="source-node-id",
target="target-node-id",
condition="on_success",
condition_expr="",
priority=1
)
```
**AFTER all edges are added, validate the graph:**
```
mcp__agent-builder__validate_graph()
```
- If valid: Tell user "✅ Graph structure validated" and proceed to STEP 6
- If invalid: Show errors, fix edges, re-validate
---
## STEP 6: Generate Agent Package
**EXPORT the graph data:**
```
mcp__agent-builder__export_graph()
```
This returns JSON with all the goal, nodes, edges, and MCP server configurations.
**THEN write the Python package files** using the exported data. Create these files in `exports/AGENT_NAME/`:
1. `config.py` - Runtime configuration with model settings
2. `nodes/__init__.py` - All NodeSpec definitions
3. `agent.py` - Goal, edges, graph config, and agent class
4. `__init__.py` - Package exports
5. `__main__.py` - CLI interface
6. `mcp_servers.json` - MCP server configurations
7. `README.md` - Usage documentation
**IMPORTANT entry_points format:**
- MUST be: `{"start": "first-node-id"}`
- NOT: `{"first-node-id": ["input_keys"]}` (WRONG)
- NOT: `{"first-node-id"}` (WRONG - this is a set)
**Use the example agent** at `.claude/skills/building-agents-construction/examples/online_research_agent/` as a template for file structure and patterns.
**AFTER writing all files, tell the user:**
> ✅ Agent package created: `exports/AGENT_NAME/`
>
> **Files generated:**
>
> - `__init__.py` - Package exports
> - `agent.py` - Goal, nodes, edges, agent class
> - `config.py` - Runtime configuration
> - `__main__.py` - CLI interface
> - `nodes/__init__.py` - Node definitions
> - `mcp_servers.json` - MCP server config
> - `README.md` - Usage documentation
>
> **Test your agent:**
>
> ```bash
> cd /home/timothy/oss/hive
> PYTHONPATH=core:exports python -m AGENT_NAME validate
> PYTHONPATH=core:exports python -m AGENT_NAME info
> ```
---
## STEP 7: Verify and Test
**RUN validation:**
```bash
cd /home/timothy/oss/hive && PYTHONPATH=core:exports python -m AGENT_NAME validate
```
- If valid: Agent is complete!
- If errors: Fix the issues and re-run
**SHOW final session summary:**
```
mcp__agent-builder__get_session_status()
```
**TELL the user the agent is ready** and suggest next steps:
- Run with mock mode to test without API calls (see the sketch after this list)
- Use `/testing-agent` skill for comprehensive testing
- Use `/setup-credentials` if the agent needs API keys
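For the mock-mode suggestion, a minimal smoke test (assuming the package layout above; swap in your agent's entry-node input keys):
```python
import asyncio

from AGENT_NAME import default_agent

# mock_mode=True skips the real LLM provider, so no API keys are needed
result = asyncio.run(default_agent.run({"topic": "test run"}, mock_mode=True))
print("success:", result.success)
```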
---
## REFERENCE: Node Types
| Type | tools param | Use when |
| -------------- | ---------------------- | ---------------------------------------------- |
| `llm_generate` | `'[]'` | Pure reasoning, JSON output, no external calls |
| `llm_tool_use` | `'["tool1", "tool2"]'` | Needs to call MCP tools |
---
## REFERENCE: Edge Conditions
| Condition | When edge is followed |
| ------------- | ------------------------------------- |
| `on_success` | Source node completed successfully |
| `on_failure` | Source node failed |
| `always` | Always, regardless of success/failure |
| `conditional` | When condition_expr evaluates to True |
---
## REFERENCE: System Prompt Best Practice
For nodes with JSON output, include this in the system_prompt:
```
CRITICAL: Return ONLY raw JSON. NO markdown, NO code blocks.
Just the JSON object starting with { and ending with }.
Return this exact structure:
{
"key1": "...",
"key2": "..."
}
```
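When the node is written to `nodes/__init__.py`, that text goes directly into `system_prompt`. A sketch (ids and keys are illustrative):
```python
from framework.graph import NodeSpec

summarize_node = NodeSpec(
    id="summarize-results",
    name="Summarize Results",
    description="Condense findings into a structured summary",
    node_type="llm_generate",
    input_keys=["findings"],
    output_keys=["summary", "highlights"],
    system_prompt="""\
Summarize the findings for the user.
CRITICAL: Return ONLY raw JSON. NO markdown, NO code blocks.
Just the JSON object starting with { and ending with }.
Return this exact structure:
{
  "summary": "...",
  "highlights": ["..."]
}
""",
    tools=[],
    max_retries=3,
)
```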
---
## COMMON MISTAKES TO AVOID
1. **Using tools that don't exist** - Always check `mcp__agent-builder__list_mcp_tools()` first
2. **Wrong entry_points format** - Must be `{"start": "node-id"}`, NOT a set or list
3. **Skipping validation** - Always validate nodes and graph before proceeding
4. **Not waiting for approval** - Always ask user before major steps
5. **Displaying this file** - Execute the steps, don't show documentation
@@ -1,80 +0,0 @@
# Online Research Agent
Deep-dive research agent that searches 10+ sources and produces comprehensive narrative reports with citations.
## Features
- Generates multiple search queries from a topic
- Searches and fetches 15+ web sources
- Evaluates and ranks sources by relevance
- Synthesizes findings into themes
- Writes narrative report with numbered citations
- Quality checks for uncited claims
- Saves report to local markdown file
## Usage
### CLI
```bash
# Show agent info
python -m online_research_agent info
# Validate structure
python -m online_research_agent validate
# Run research on a topic
python -m online_research_agent run --topic "impact of AI on healthcare"
# Interactive shell
python -m online_research_agent shell
```
### Python API
```python
from online_research_agent import default_agent
# Simple usage
result = await default_agent.run({"topic": "climate change solutions"})
# Check output
if result.success:
print(f"Report saved to: {result.output['file_path']}")
print(result.output['final_report'])
```
## Workflow
```
parse-query → search-sources → fetch-content → evaluate-sources
  → synthesize-findings → write-report → quality-check → save-report
```
## Output
Reports are saved to `./research_reports/` as markdown files with:
1. Executive Summary
2. Introduction
3. Key Findings (by theme)
4. Analysis
5. Conclusion
6. References
## Requirements
- Python 3.11+
- LLM provider API key (Groq, Cerebras, etc.)
- Internet access for web search/fetch
## Configuration
Edit `config.py` to change:
- `model`: LLM model (default: groq/moonshotai/kimi-k2-instruct-0905)
- `temperature`: Generation temperature (default: 0.7)
- `max_tokens`: Max tokens per response (default: 16384)
@@ -1,23 +0,0 @@
"""
Online Research Agent - Deep-dive research with narrative reports.
Research any topic by searching multiple sources, synthesizing information,
and producing a well-structured narrative report with citations.
"""
from .agent import OnlineResearchAgent, default_agent, goal, nodes, edges
from .config import RuntimeConfig, AgentMetadata, default_config, metadata
__version__ = "1.0.0"
__all__ = [
"OnlineResearchAgent",
"default_agent",
"goal",
"nodes",
"edges",
"RuntimeConfig",
"AgentMetadata",
"default_config",
"metadata",
]
@@ -1,429 +0,0 @@
"""Agent graph construction for Online Research Agent."""
from framework.graph import EdgeSpec, EdgeCondition, Goal, SuccessCriterion, Constraint
from framework.graph.edge import GraphSpec
from framework.graph.executor import ExecutionResult
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from framework.llm import LiteLLMProvider
from framework.runner.tool_registry import ToolRegistry
from .config import default_config, metadata
from .nodes import (
parse_query_node,
search_sources_node,
fetch_content_node,
evaluate_sources_node,
synthesize_findings_node,
write_report_node,
quality_check_node,
save_report_node,
)
# Goal definition
goal = Goal(
id="comprehensive-online-research",
name="Comprehensive Online Research",
description="Research any topic by searching multiple sources, synthesizing information, and producing a well-structured narrative report with citations.",
success_criteria=[
SuccessCriterion(
id="source-coverage",
description="Query 10+ diverse sources",
metric="source_count",
target=">=10",
weight=0.20,
),
SuccessCriterion(
id="relevance",
description="All sources directly address the query",
metric="relevance_score",
target="90%",
weight=0.25,
),
SuccessCriterion(
id="synthesis",
description="Synthesize findings into coherent narrative",
metric="coherence_score",
target="85%",
weight=0.25,
),
SuccessCriterion(
id="citations",
description="Include citations for all claims",
metric="citation_coverage",
target="100%",
weight=0.15,
),
SuccessCriterion(
id="actionable",
description="Report answers the user's question",
metric="answer_completeness",
target="90%",
weight=0.15,
),
],
constraints=[
Constraint(
id="no-hallucination",
description="Only include information found in sources",
constraint_type="quality",
category="accuracy",
),
Constraint(
id="source-attribution",
description="Every factual claim must cite its source",
constraint_type="quality",
category="accuracy",
),
Constraint(
id="recency-preference",
description="Prefer recent sources when relevant",
constraint_type="quality",
category="relevance",
),
Constraint(
id="no-paywalled",
description="Avoid sources that require payment to access",
constraint_type="functional",
category="accessibility",
),
],
)
# Node list
nodes = [
parse_query_node,
search_sources_node,
fetch_content_node,
evaluate_sources_node,
synthesize_findings_node,
write_report_node,
quality_check_node,
save_report_node,
]
# Edge definitions
edges = [
EdgeSpec(
id="parse-to-search",
source="parse-query",
target="search-sources",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="search-to-fetch",
source="search-sources",
target="fetch-content",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="fetch-to-evaluate",
source="fetch-content",
target="evaluate-sources",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="evaluate-to-synthesize",
source="evaluate-sources",
target="synthesize-findings",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="synthesize-to-write",
source="synthesize-findings",
target="write-report",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="write-to-quality",
source="write-report",
target="quality-check",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="quality-to-save",
source="quality-check",
target="save-report",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
]
# Graph configuration
entry_node = "parse-query"
entry_points = {"start": "parse-query"}
pause_nodes = []
terminal_nodes = ["save-report"]
class OnlineResearchAgent:
"""
Online Research Agent - Deep-dive research with narrative reports.
Uses AgentRuntime for multi-entrypoint support with HITL pause/resume.
"""
def __init__(self, config=None):
self.config = config or default_config
self.goal = goal
self.nodes = nodes
self.edges = edges
self.entry_node = entry_node
self.entry_points = entry_points
self.pause_nodes = pause_nodes
self.terminal_nodes = terminal_nodes
self._runtime: AgentRuntime | None = None
self._graph: GraphSpec | None = None
def _build_entry_point_specs(self) -> list[EntryPointSpec]:
"""Convert entry_points dict to EntryPointSpec list."""
specs = []
for ep_id, node_id in self.entry_points.items():
if ep_id == "start":
trigger_type = "manual"
name = "Start"
elif "_resume" in ep_id:
trigger_type = "resume"
name = f"Resume from {ep_id.replace('_resume', '')}"
else:
trigger_type = "manual"
name = ep_id.replace("-", " ").title()
specs.append(
EntryPointSpec(
id=ep_id,
name=name,
entry_node=node_id,
trigger_type=trigger_type,
isolation_level="shared",
)
)
return specs
def _create_runtime(self, mock_mode=False) -> AgentRuntime:
"""Create AgentRuntime instance."""
import json
from pathlib import Path
# Persistent storage in ~/.hive for telemetry and run history
storage_path = Path.home() / ".hive" / "online_research_agent"
storage_path.mkdir(parents=True, exist_ok=True)
tool_registry = ToolRegistry()
# Load MCP servers (always load, needed for tool validation)
agent_dir = Path(__file__).parent
mcp_config_path = agent_dir / "mcp_servers.json"
if mcp_config_path.exists():
with open(mcp_config_path) as f:
mcp_servers = json.load(f)
for server_config in mcp_servers.get("servers", []):
# Resolve relative cwd paths
cwd = server_config.get("cwd")
if cwd and not Path(cwd).is_absolute():
server_config["cwd"] = str(agent_dir / cwd)
tool_registry.register_mcp_server(server_config)
llm = None
if not mock_mode:
# LiteLLMProvider uses environment variables for API keys
llm = LiteLLMProvider(
model=self.config.model,
api_key=self.config.api_key,
api_base=self.config.api_base,
)
self._graph = GraphSpec(
id="online-research-agent-graph",
goal_id=self.goal.id,
version="1.0.0",
entry_node=self.entry_node,
entry_points=self.entry_points,
terminal_nodes=self.terminal_nodes,
pause_nodes=self.pause_nodes,
nodes=self.nodes,
edges=self.edges,
default_model=self.config.model,
max_tokens=self.config.max_tokens,
)
# Create AgentRuntime with all entry points
self._runtime = create_agent_runtime(
graph=self._graph,
goal=self.goal,
storage_path=storage_path,
entry_points=self._build_entry_point_specs(),
llm=llm,
tools=list(tool_registry.get_tools().values()),
tool_executor=tool_registry.get_executor(),
)
return self._runtime
async def start(self, mock_mode=False) -> None:
"""Start the agent runtime."""
if self._runtime is None:
self._create_runtime(mock_mode=mock_mode)
await self._runtime.start()
async def stop(self) -> None:
"""Stop the agent runtime."""
if self._runtime is not None:
await self._runtime.stop()
async def trigger(
self,
entry_point: str,
input_data: dict,
correlation_id: str | None = None,
session_state: dict | None = None,
) -> str:
"""
Trigger execution at a specific entry point (non-blocking).
Args:
entry_point: Entry point ID (e.g., "start", "pause-node_resume")
input_data: Input data for the execution
correlation_id: Optional ID to correlate related executions
session_state: Optional session state to resume from (with paused_at, memory)
Returns:
Execution ID for tracking
"""
if self._runtime is None or not self._runtime.is_running:
raise RuntimeError("Agent runtime not started. Call start() first.")
return await self._runtime.trigger(
entry_point, input_data, correlation_id, session_state=session_state
)
async def trigger_and_wait(
self,
entry_point: str,
input_data: dict,
timeout: float | None = None,
session_state: dict | None = None,
) -> ExecutionResult | None:
"""
Trigger execution and wait for completion.
Args:
entry_point: Entry point ID
input_data: Input data for the execution
timeout: Maximum time to wait (seconds)
session_state: Optional session state to resume from (with paused_at, memory)
Returns:
ExecutionResult or None if timeout
"""
if self._runtime is None or not self._runtime.is_running:
raise RuntimeError("Agent runtime not started. Call start() first.")
return await self._runtime.trigger_and_wait(
entry_point, input_data, timeout, session_state=session_state
)
async def run(
self, context: dict, mock_mode=False, session_state=None
) -> ExecutionResult:
"""
Run the agent (convenience method for simple single execution).
For more control, use start() + trigger_and_wait() + stop().
"""
await self.start(mock_mode=mock_mode)
try:
# Determine entry point based on session_state
if session_state and "paused_at" in session_state:
paused_node = session_state["paused_at"]
resume_key = f"{paused_node}_resume"
if resume_key in self.entry_points:
entry_point = resume_key
else:
entry_point = "start"
else:
entry_point = "start"
result = await self.trigger_and_wait(
entry_point, context, session_state=session_state
)
return result or ExecutionResult(success=False, error="Execution timeout")
finally:
await self.stop()
async def get_goal_progress(self) -> dict:
"""Get goal progress across all executions."""
if self._runtime is None:
raise RuntimeError("Agent runtime not started")
return await self._runtime.get_goal_progress()
def get_stats(self) -> dict:
"""Get runtime statistics."""
if self._runtime is None:
return {"running": False}
return self._runtime.get_stats()
def info(self):
"""Get agent information."""
return {
"name": metadata.name,
"version": metadata.version,
"description": metadata.description,
"goal": {
"name": self.goal.name,
"description": self.goal.description,
},
"nodes": [n.id for n in self.nodes],
"edges": [e.id for e in self.edges],
"entry_node": self.entry_node,
"entry_points": self.entry_points,
"pause_nodes": self.pause_nodes,
"terminal_nodes": self.terminal_nodes,
"multi_entrypoint": True,
}
def validate(self):
"""Validate agent structure."""
errors = []
warnings = []
node_ids = {node.id for node in self.nodes}
for edge in self.edges:
if edge.source not in node_ids:
errors.append(f"Edge {edge.id}: source '{edge.source}' not found")
if edge.target not in node_ids:
errors.append(f"Edge {edge.id}: target '{edge.target}' not found")
if self.entry_node not in node_ids:
errors.append(f"Entry node '{self.entry_node}' not found")
for terminal in self.terminal_nodes:
if terminal not in node_ids:
errors.append(f"Terminal node '{terminal}' not found")
for pause in self.pause_nodes:
if pause not in node_ids:
errors.append(f"Pause node '{pause}' not found")
# Validate entry points
for ep_id, node_id in self.entry_points.items():
if node_id not in node_ids:
errors.append(
f"Entry point '{ep_id}' references unknown node '{node_id}'"
)
return {
"valid": len(errors) == 0,
"errors": errors,
"warnings": warnings,
}
# Create default instance
default_agent = OnlineResearchAgent()
@@ -1,396 +0,0 @@
"""Node definitions for Online Research Agent."""
from framework.graph import NodeSpec
# Node 1: Parse Query
parse_query_node = NodeSpec(
id="parse-query",
name="Parse Query",
description="Analyze the research topic and generate 3-5 diverse search queries to cover different aspects",
node_type="llm_generate",
input_keys=["topic"],
output_keys=["search_queries", "research_focus", "key_aspects"],
output_schema={
"research_focus": {
"type": "string",
"required": True,
"description": "Brief statement of what we're researching",
},
"key_aspects": {
"type": "array",
"required": True,
"description": "List of 3-5 key aspects to investigate",
},
"search_queries": {
"type": "array",
"required": True,
"description": "List of 3-5 search queries",
},
},
system_prompt="""\
You are a research query strategist. Given a research topic, analyze it and generate search queries.
Your task:
1. Understand the core research question
2. Identify 3-5 key aspects to investigate
3. Generate 3-5 diverse search queries that will find comprehensive information
CRITICAL: Return ONLY raw JSON. NO markdown, NO code blocks.
Return this JSON structure:
{
"research_focus": "Brief statement of what we're researching",
"key_aspects": ["aspect1", "aspect2", "aspect3"],
"search_queries": [
"query 1 - broad overview",
"query 2 - specific angle",
"query 3 - recent developments",
"query 4 - expert opinions",
"query 5 - data/statistics"
]
}
""",
tools=[],
max_retries=3,
)
# Node 2: Search Sources
search_sources_node = NodeSpec(
id="search-sources",
name="Search Sources",
description="Execute web searches using the generated queries to find 15+ source URLs",
node_type="llm_tool_use",
input_keys=["search_queries", "research_focus"],
output_keys=["source_urls", "search_results_summary"],
output_schema={
"source_urls": {
"type": "array",
"required": True,
"description": "List of source URLs found",
},
"search_results_summary": {
"type": "string",
"required": True,
"description": "Brief summary of what was found",
},
},
system_prompt="""\
You are a research assistant executing web searches. Use the web_search tool to find sources.
Your task:
1. Execute each search query using web_search tool
2. Collect URLs from search results
3. Aim for 15+ diverse sources
After searching, return JSON with found sources:
{
"source_urls": ["url1", "url2", ...],
"search_results_summary": "Brief summary of what was found"
}
""",
tools=["web_search"],
max_retries=3,
)
# Node 3: Fetch Content
fetch_content_node = NodeSpec(
id="fetch-content",
name="Fetch Content",
description="Fetch and extract content from the discovered source URLs",
node_type="llm_tool_use",
input_keys=["source_urls", "research_focus"],
output_keys=["fetched_sources", "fetch_errors"],
output_schema={
"fetched_sources": {
"type": "array",
"required": True,
"description": "List of fetched source objects with url, title, content",
},
"fetch_errors": {
"type": "array",
"required": True,
"description": "List of URLs that failed to fetch",
},
},
system_prompt="""\
You are a content fetcher. Use web_scrape tool to retrieve content from URLs.
Your task:
1. Fetch content from each source URL using web_scrape tool
2. Extract the main content relevant to the research focus
3. Track any URLs that failed to fetch
After fetching, return JSON:
{
"fetched_sources": [
{"url": "...", "title": "...", "content": "extracted text..."},
...
],
"fetch_errors": ["url that failed", ...]
}
""",
tools=["web_scrape"],
max_retries=3,
)
# Node 4: Evaluate Sources
evaluate_sources_node = NodeSpec(
id="evaluate-sources",
name="Evaluate Sources",
description="Score sources for relevance and quality, filter to top 10",
node_type="llm_generate",
input_keys=["fetched_sources", "research_focus", "key_aspects"],
output_keys=["ranked_sources", "source_analysis"],
output_schema={
"ranked_sources": {
"type": "array",
"required": True,
"description": "List of ranked sources with scores",
},
"source_analysis": {
"type": "string",
"required": True,
"description": "Overview of source quality and coverage",
},
},
system_prompt="""\
You are a source evaluator. Assess each source for quality and relevance.
Scoring criteria:
- Relevance to research focus (1-10)
- Source credibility (1-10)
- Information depth (1-10)
- Recency if relevant (1-10)
Your task:
1. Score each source
2. Rank by combined score
3. Select top 10 sources
4. Note what each source uniquely contributes
Return JSON:
{
"ranked_sources": [
{"url": "...", "title": "...", "content": "...", "score": 8.5, "unique_value": "..."},
...
],
"source_analysis": "Overview of source quality and coverage"
}
""",
tools=[],
max_retries=3,
)
# Node 5: Synthesize Findings
synthesize_findings_node = NodeSpec(
id="synthesize-findings",
name="Synthesize Findings",
description="Extract key facts from sources and identify common themes",
node_type="llm_generate",
input_keys=["ranked_sources", "research_focus", "key_aspects"],
output_keys=["key_findings", "themes", "source_citations"],
output_schema={
"key_findings": {
"type": "array",
"required": True,
"description": "List of key findings with sources and confidence",
},
"themes": {
"type": "array",
"required": True,
"description": "List of themes with descriptions and supporting sources",
},
"source_citations": {
"type": "object",
"required": True,
"description": "Map of facts to supporting URLs",
},
},
system_prompt="""\
You are a research synthesizer. Analyze multiple sources to extract insights.
Your task:
1. Identify key facts from each source
2. Find common themes across sources
3. Note contradictions or debates
4. Build a citation map (fact -> source URL)
Return JSON:
{
"key_findings": [
{"finding": "...", "sources": ["url1", "url2"], "confidence": "high/medium/low"},
...
],
"themes": [
{"theme": "...", "description": "...", "supporting_sources": ["url1", ...]},
...
],
"source_citations": {
"fact or claim": ["supporting url1", "url2"],
...
}
}
""",
tools=[],
max_retries=3,
)
# Node 6: Write Report
write_report_node = NodeSpec(
id="write-report",
name="Write Report",
description="Generate a narrative report with proper citations",
node_type="llm_generate",
input_keys=[
"key_findings",
"themes",
"source_citations",
"research_focus",
"ranked_sources",
],
output_keys=["report_content", "references"],
output_schema={
"report_content": {
"type": "string",
"required": True,
"description": "Full markdown report text with citations",
},
"references": {
"type": "array",
"required": True,
"description": "List of reference objects with number, url, title",
},
},
system_prompt="""\
You are a research report writer. Create a well-structured narrative report.
Report structure:
1. Executive Summary (2-3 paragraphs)
2. Introduction (context and scope)
3. Key Findings (organized by theme)
4. Analysis (synthesis and implications)
5. Conclusion
6. References (numbered list of all sources)
Citation format: Use numbered citations like [1], [2] that correspond to the References section.
IMPORTANT:
- Every factual claim MUST have a citation
- Write in clear, professional prose
- Be objective and balanced
- Highlight areas of consensus and debate
Return JSON:
{
"report_content": "Full markdown report text with citations...",
"references": [
{"number": 1, "url": "...", "title": "..."},
...
]
}
""",
tools=[],
max_retries=3,
)
# Node 7: Quality Check
quality_check_node = NodeSpec(
id="quality-check",
name="Quality Check",
description="Verify all claims have citations and report is coherent",
node_type="llm_generate",
input_keys=["report_content", "references", "source_citations"],
output_keys=["quality_score", "issues", "final_report"],
output_schema={
"quality_score": {
"type": "number",
"required": True,
"description": "Quality score 0-1",
},
"issues": {
"type": "array",
"required": True,
"description": "List of issues found and fixed",
},
"final_report": {
"type": "string",
"required": True,
"description": "Corrected full report",
},
},
system_prompt="""\
You are a quality assurance reviewer. Check the research report for issues.
Check for:
1. Uncited claims (factual statements without [n] citation)
2. Broken citations (references to non-existent numbers)
3. Coherence (logical flow between sections)
4. Completeness (all key aspects covered)
5. Accuracy (claims match source content)
If issues found, fix them in the final report.
Return JSON:
{
"quality_score": 0.95,
"issues": [
{"type": "uncited_claim", "location": "paragraph 3", "fixed": true},
...
],
"final_report": "Corrected full report with all issues fixed..."
}
""",
tools=[],
max_retries=3,
)
# Node 8: Save Report
save_report_node = NodeSpec(
id="save-report",
name="Save Report",
description="Write the final report to a local markdown file",
node_type="llm_tool_use",
input_keys=["final_report", "references", "research_focus"],
output_keys=["file_path", "save_status"],
output_schema={
"file_path": {
"type": "string",
"required": True,
"description": "Path where report was saved",
},
"save_status": {
"type": "string",
"required": True,
"description": "Status of save operation",
},
},
system_prompt="""\
You are a file manager. Save the research report to disk.
Your task:
1. Generate a filename from the research focus (slugified, with date)
2. Use the write_to_file tool to save the report as markdown
3. Save to the ./research_reports/ directory
Filename format: research_YYYY-MM-DD_topic-slug.md
Return JSON:
{
"file_path": "research_reports/research_2026-01-23_topic-name.md",
"save_status": "success"
}
""",
tools=["write_to_file"],
max_retries=3,
)
__all__ = [
"parse_query_node",
"search_sources_node",
"fetch_content_node",
"evaluate_sources_node",
"synthesize_findings_node",
"write_report_node",
"quality_check_node",
"save_report_node",
]
@@ -1,303 +0,0 @@
---
name: building-agents-core
description: Core concepts for goal-driven agents - architecture, node types, tool discovery, and workflow overview. Use when starting agent development or need to understand agent fundamentals.
license: Apache-2.0
metadata:
author: hive
version: "1.0"
type: foundational
part_of: building-agents
---
# Building Agents - Core Concepts
Foundational knowledge for building goal-driven agents as Python packages.
## Architecture: Python Services (Not JSON Configs)
Agents are built as Python packages:
```
exports/my_agent/
├── __init__.py # Package exports
├── __main__.py # CLI (run, info, validate, shell)
├── agent.py # Graph construction (goal, edges, agent class)
├── nodes/__init__.py # Node definitions (NodeSpec)
├── config.py # Runtime config
└── README.md # Documentation
```
**Key Principle: Agent is visible and editable during build**
- ✅ Files created immediately as components are approved
- ✅ User can watch files grow in their editor
- ✅ No session state - just direct file writes
- ✅ No "export" step - agent is ready when build completes
## Core Concepts
### Goal
Success criteria and constraints (written to agent.py)
```python
goal = Goal(
id="research-goal",
name="Technical Research Agent",
description="Research technical topics thoroughly",
success_criteria=[
SuccessCriterion(
id="completeness",
description="Cover all aspects of topic",
metric="coverage_score",
target=">=0.9",
weight=0.4,
),
# 3-5 success criteria total
],
constraints=[
Constraint(
id="accuracy",
description="All information must be verified",
constraint_type="hard",
category="quality",
),
# 1-5 constraints total
],
)
```
### Node
Unit of work (written to nodes/__init__.py)
**Node Types:**
- `llm_generate` - Text generation, parsing
- `llm_tool_use` - Actions requiring tools
- `router` - Conditional branching
- `function` - Deterministic operations
```python
search_node = NodeSpec(
id="search-web",
name="Search Web",
description="Search for information online",
node_type="llm_tool_use",
input_keys=["query"],
output_keys=["search_results"],
system_prompt="Search the web for: {query}",
tools=["web_search"],
max_retries=3,
)
```
### Edge
Connection between nodes (written to agent.py)
**Edge Conditions:**
- `on_success` - Proceed if node succeeds
- `on_failure` - Handle errors
- `always` - Always proceed
- `conditional` - Based on expression
```python
EdgeSpec(
id="search-to-analyze",
source="search-web",
target="analyze-results",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
)
```
### Pause/Resume
Multi-turn conversations
- **Pause nodes** - Stop execution, wait for user input
- **Resume entry points** - Continue from pause with user's response
```python
# Example pause/resume configuration
pause_nodes = ["request-clarification"]
entry_points = {
"start": "analyze-request",
"request-clarification_resume": "process-clarification"
}
```
## Tool Discovery & Validation
**CRITICAL:** Before adding a node with tools, you MUST verify the tools exist.
Tools are provided by MCP servers. Never assume a tool exists - always discover dynamically.
### Step 1: Register MCP Server (if not already done)
```python
mcp__agent-builder__add_mcp_server(
name="tools",
transport="stdio",
command="python",
args='["mcp_server.py", "--stdio"]',
cwd="../tools"
)
```
### Step 2: Discover Available Tools
```python
# List all tools from all registered servers
mcp__agent-builder__list_mcp_tools()
# Or list tools from a specific server
mcp__agent-builder__list_mcp_tools(server_name="tools")
```
This returns available tools with their descriptions and parameters:
```json
{
"success": true,
"tools_by_server": {
"tools": [
{
"name": "web_search",
"description": "Search the web...",
"parameters": ["query"]
},
{
"name": "web_scrape",
"description": "Scrape a URL...",
"parameters": ["url"]
}
]
},
"total_tools": 14
}
```
### Step 3: Validate Before Adding Nodes
Before writing a node with `tools=[...]`:
1. Call `list_mcp_tools()` to get available tools
2. Check each tool in your node exists in the response
3. If a tool doesn't exist:
- **DO NOT proceed** with the node
- Inform the user: "The tool 'X' is not available. Available tools are: ..."
- Ask if they want to use an alternative or proceed without the tool
### Tool Validation Anti-Patterns
- **Never assume a tool exists** - always call `list_mcp_tools()` first
- **Never write a node with unverified tools** - validate before writing
- **Never silently drop tools** - if a tool doesn't exist, inform the user
- **Never guess tool names** - use exact names from discovery response
### Example Validation Flow
```python
# 1. User requests: "Add a node that searches the web"
# 2. Discover available tools
tools_response = mcp__agent-builder__list_mcp_tools()
# 3. Check if web_search exists
available = [t["name"] for tools in tools_response["tools_by_server"].values() for t in tools]
if "web_search" not in available:
# Inform user and ask how to proceed
print("'web_search' not available. Available tools:", available)
else:
# Proceed with node creation
# ...
```
## Workflow Overview: Incremental File Construction
```
1. CREATE PACKAGE → mkdir + write skeletons
2. DEFINE GOAL → Write to agent.py + config.py
3. FOR EACH NODE:
- Propose design
- User approves
- Write to nodes/__init__.py IMMEDIATELY ← FILE WRITTEN
- (Optional) Validate with test_node ← MCP VALIDATION
- User can open file and see it
4. CONNECT EDGES → Update agent.py ← FILE WRITTEN
- (Optional) Validate with validate_graph ← MCP VALIDATION
5. FINALIZE → Write agent class to agent.py ← FILE WRITTEN
6. DONE - Agent ready at exports/my_agent/
```
**Files written immediately. MCP tools optional for validation/testing bookkeeping.**
### The Key Difference
**OLD (Bad):**
```
MCP add_node → Session State → MCP add_node → Session State → ...
MCP export_graph
Files appear
```
**NEW (Good):**
```
Write node to file → (Optional: MCP test_node) → Write node to file → ...
↓ ↓
File visible File visible
immediately immediately
```
**Bottom line:** Use Write/Edit for construction, MCP for validation if needed.
## When to Use This Skill
Use building-agents-core when:
- Starting a new agent project and need to understand fundamentals
- Need to understand agent architecture before building
- Want to validate tool availability before proceeding
- Learning about node types, edges, and graph execution
**Next Steps:**
- Ready to build? → Use `building-agents-construction` skill
- Need patterns and examples? → Use `building-agents-patterns` skill
## MCP Tools for Validation
After writing files, optionally use MCP tools for validation:
**test_node** - Validate node configuration with mock inputs
```python
mcp__agent-builder__test_node(
node_id="search-web",
test_input='{"query": "test query"}',
mock_llm_response='{"results": "mock output"}'
)
```
**validate_graph** - Check graph structure
```python
mcp__agent-builder__validate_graph()
# Returns: unreachable nodes, missing connections, etc.
```
**create_session** - Track session state for bookkeeping
```python
mcp__agent-builder__create_session(session_name="my-build")
```
**Key Point:** Files are written FIRST. MCP tools are for validation only.
## Related Skills
- **building-agents-construction** - Step-by-step building process
- **building-agents-patterns** - Best practices and examples
- **agent-workflow** - Complete workflow orchestrator
- **testing-agent** - Test and validate completed agents
@@ -1,497 +0,0 @@
---
name: building-agents-patterns
description: Best practices, patterns, and examples for building goal-driven agents. Includes pause/resume architecture, hybrid workflows, anti-patterns, and handoff to testing. Use when optimizing agent design.
license: Apache-2.0
metadata:
author: hive
version: "1.0"
type: reference
part_of: building-agents
---
# Building Agents - Patterns & Best Practices
Design patterns, examples, and best practices for building robust goal-driven agents.
**Prerequisites:** Complete agent structure using `building-agents-construction`.
## Practical Example: Hybrid Workflow
How to build a node using both direct file writes and optional MCP validation:
```python
# 1. WRITE TO FILE FIRST (Primary - makes it visible)
node_code = '''
search_node = NodeSpec(
id="search-web",
node_type="llm_tool_use",
input_keys=["query"],
output_keys=["search_results"],
system_prompt="Search the web for: {query}",
tools=["web_search"],
)
'''
Edit(
file_path="exports/research_agent/nodes/__init__.py",
old_string="# Nodes will be added here",
new_string=node_code
)
print("✅ Added search_node to nodes/__init__.py")
print("📁 Open exports/research_agent/nodes/__init__.py to see it!")
# 2. OPTIONALLY VALIDATE WITH MCP (Secondary - bookkeeping)
validation = mcp__agent-builder__test_node(
node_id="search-web",
test_input='{"query": "python tutorials"}',
mock_llm_response='{"search_results": [...mock results...]}'
)
print(f"✓ Validation: {validation['success']}")
```
**User experience:**
- Immediately sees node in their editor (from step 1)
- Gets validation feedback (from step 2)
- Can edit the file directly if needed
This combines visibility (files) with validation (MCP tools).
## Pause/Resume Architecture
For agents needing multi-turn conversations with user interaction:
### Basic Pause/Resume Flow
```python
# Define pause nodes - execution stops at these nodes
pause_nodes = ["request-clarification", "await-approval"]
# Define entry points - where to resume from each pause
entry_points = {
"start": "analyze-request", # Initial entry
"request-clarification_resume": "process-clarification", # Resume from clarification
"await-approval_resume": "execute-action", # Resume from approval
}
```
### Example: Multi-Turn Research Agent
```python
# Nodes
nodes = [
NodeSpec(id="analyze-request", ...),
NodeSpec(id="request-clarification", ...), # PAUSE NODE
NodeSpec(id="process-clarification", ...),
NodeSpec(id="generate-results", ...),
NodeSpec(id="await-approval", ...), # PAUSE NODE
NodeSpec(id="execute-action", ...),
]
# Edges with resume flows
edges = [
EdgeSpec(
id="analyze-to-clarify",
source="analyze-request",
target="request-clarification",
condition=EdgeCondition.CONDITIONAL,
condition_expr="needs_clarification == true",
),
# When resumed, goes to process-clarification
EdgeSpec(
id="clarify-to-process",
source="request-clarification",
target="process-clarification",
condition=EdgeCondition.ALWAYS,
),
EdgeSpec(
id="results-to-approval",
source="generate-results",
target="await-approval",
condition=EdgeCondition.ALWAYS,
),
# When resumed, goes to execute-action
EdgeSpec(
id="approval-to-execute",
source="await-approval",
target="execute-action",
condition=EdgeCondition.ALWAYS,
),
]
# Configuration
pause_nodes = ["request-clarification", "await-approval"]
entry_points = {
"start": "analyze-request",
"request-clarification_resume": "process-clarification",
"await-approval_resume": "execute-action",
}
```
### Running Pause/Resume Agents
```python
# Initial run - will pause at first pause node
result1 = await agent.run(
context={"query": "research topic"},
session_state=None
)
# Check if paused
if result1.paused_at:
print(f"Paused at: {result1.paused_at}")
# Resume with user input
result2 = await agent.run(
context={"user_response": "clarification details"},
session_state=result1.session_state # Pass previous state
)
```
## Anti-Patterns
### What NOT to Do
**Don't rely on `export_graph`** - Write files immediately, not at end
```python
# BAD: Building in session state, exporting at end
mcp__agent-builder__add_node(...)
mcp__agent-builder__add_node(...)
mcp__agent-builder__export_graph() # Files appear only now
# GOOD: Writing files immediately
Write(file_path="...", content=node_code) # File visible now
Write(file_path="...", content=node_code) # File visible now
```
**Don't hide code in session** - Write to files as components approved
```python
# BAD: Accumulating changes invisibly
session.add_component(component1)
session.add_component(component2)
# User can't see anything yet
# GOOD: Incremental visibility
Edit(file_path="...", ...) # User sees change 1
Edit(file_path="...", ...) # User sees change 2
```
**Don't wait to write files** - Agent visible from first step
```python
# BAD: Building everything before writing
design_all_nodes()
design_all_edges()
write_everything_at_once()
# GOOD: Write as you go
write_package_structure() # Visible
write_goal() # Visible
write_node_1() # Visible
write_node_2() # Visible
```
**Don't batch everything** - Write incrementally
```python
# BAD: Batching all nodes
nodes = [design_node_1(), design_node_2(), ...]
write_all_nodes(nodes)
# GOOD: One at a time with user feedback
write_node_1() # User approves
write_node_2() # User approves
write_node_3() # User approves
```
### MCP Tools - Correct Usage
**MCP tools OK for:**
- ✅ `test_node` - Validate node configuration with mock inputs
- ✅ `validate_graph` - Check graph structure
- ✅ `create_session` - Track session state for bookkeeping
- ✅ Other validation tools
**Just don't:** Use MCP as the primary construction method or rely on `export_graph`
## Best Practices
### 1. Show Progress After Each Write
```python
# After writing a node
print("✅ Added analyze_request_node to nodes/__init__.py")
print("📊 Progress: 1/6 nodes added")
print("📁 Open exports/my_agent/nodes/__init__.py to see it!")
```
### 2. Let User Open Files During Build
```python
# Encourage file inspection
print("✅ Goal written to agent.py")
print("")
print("💡 Tip: Open exports/my_agent/agent.py in your editor to see the goal!")
```
### 3. Write Incrementally - One Component at a Time
```python
# Good flow
write_package_structure()
show_user("Package created")
write_goal()
show_user("Goal written")
for node in nodes:
get_approval(node)
write_node(node)
show_user(f"Node {node.id} written")
```
### 4. Test As You Build
```python
# After adding several nodes
print("💡 You can test current state with:")
print(" PYTHONPATH=core:exports python -m my_agent validate")
print(" PYTHONPATH=core:exports python -m my_agent info")
```
### 5. Keep User Informed
```python
# Clear status updates
print("🔨 Creating package structure...")
print("✅ Package created: exports/my_agent/")
print("")
print("📝 Next: Define agent goal")
```
## Continuous Monitoring Agents
For agents that run continuously without terminal nodes:
```python
# No terminal nodes - loops forever
terminal_nodes = []
# Workflow loops back to start
edges = [
EdgeSpec(id="monitor-to-check", source="monitor", target="check-condition"),
EdgeSpec(id="check-to-wait", source="check-condition", target="wait"),
EdgeSpec(id="wait-to-monitor", source="wait", target="monitor"), # Loop
]
# Entry node only
entry_node = "monitor"
entry_points = {"start": "monitor"}
pause_nodes = []
```
**Example: File Monitor**
```python
nodes = [
NodeSpec(id="list-files", ...),
NodeSpec(id="check-new-files", node_type="router", ...),
NodeSpec(id="process-files", ...),
NodeSpec(id="wait-interval", node_type="function", ...),
]
edges = [
EdgeSpec(id="list-to-check", source="list-files", target="check-new-files"),
EdgeSpec(
id="check-to-process",
source="check-new-files",
target="process-files",
condition=EdgeCondition.CONDITIONAL,
condition_expr="new_files_count > 0",
),
EdgeSpec(
id="check-to-wait",
source="check-new-files",
target="wait-interval",
condition=EdgeCondition.CONDITIONAL,
condition_expr="new_files_count == 0",
),
EdgeSpec(id="process-to-wait", source="process-files", target="wait-interval"),
EdgeSpec(id="wait-to-list", source="wait-interval", target="list-files"), # Loop back
]
terminal_nodes = [] # No terminal - runs forever
```
## Complex Routing Patterns
### Multi-Condition Router
```python
router_node = NodeSpec(
id="decision-router",
node_type="router",
input_keys=["analysis_result"],
output_keys=["decision"],
system_prompt="""
Based on the analysis result, decide the next action:
- If confidence > 0.9: route to "execute"
- If 0.5 <= confidence <= 0.9: route to "review"
- If confidence < 0.5: route to "clarify"
Return: {"decision": "execute|review|clarify"}
""",
)
# Edges for each route
edges = [
EdgeSpec(
id="router-to-execute",
source="decision-router",
target="execute-action",
condition=EdgeCondition.CONDITIONAL,
condition_expr="decision == 'execute'",
priority=1,
),
EdgeSpec(
id="router-to-review",
source="decision-router",
target="human-review",
condition=EdgeCondition.CONDITIONAL,
condition_expr="decision == 'review'",
priority=2,
),
EdgeSpec(
id="router-to-clarify",
source="decision-router",
target="request-clarification",
condition=EdgeCondition.CONDITIONAL,
condition_expr="decision == 'clarify'",
priority=3,
),
]
```
## Error Handling Patterns
### Graceful Failure with Fallback
```python
# Primary node with error handling
nodes = [
NodeSpec(id="api-call", max_retries=3, ...),
NodeSpec(id="fallback-cache", ...),
NodeSpec(id="report-error", ...),
]
edges = [
# Success path
EdgeSpec(
id="api-success",
source="api-call",
target="process-results",
condition=EdgeCondition.ON_SUCCESS,
),
# Fallback on failure
EdgeSpec(
id="api-to-fallback",
source="api-call",
target="fallback-cache",
condition=EdgeCondition.ON_FAILURE,
priority=1,
),
# Report if fallback also fails
EdgeSpec(
id="fallback-to-error",
source="fallback-cache",
target="report-error",
condition=EdgeCondition.ON_FAILURE,
priority=1,
),
]
```
## Performance Optimization
### Parallel Node Execution
```python
# Use multiple edges from same source for parallel execution
edges = [
EdgeSpec(
id="start-to-search1",
source="start",
target="search-source-1",
condition=EdgeCondition.ALWAYS,
),
EdgeSpec(
id="start-to-search2",
source="start",
target="search-source-2",
condition=EdgeCondition.ALWAYS,
),
EdgeSpec(
id="start-to-search3",
source="start",
target="search-source-3",
condition=EdgeCondition.ALWAYS,
),
# Converge results
EdgeSpec(
id="search1-to-merge",
source="search-source-1",
target="merge-results",
),
EdgeSpec(
id="search2-to-merge",
source="search-source-2",
target="merge-results",
),
EdgeSpec(
id="search3-to-merge",
source="search-source-3",
target="merge-results",
),
]
```
## Handoff to Testing
When agent is complete, transition to testing phase:
```python
print("""
✅ Agent complete: exports/my_agent/
Next steps:
1. Switch to testing-agent skill
2. Generate and approve tests
3. Run evaluation
4. Debug any failures
Command: "Test the agent at exports/my_agent/"
""")
```
### Pre-Testing Checklist
Before handing off to testing-agent:
- [ ] Agent structure validates: `python -m agent_name validate`
- [ ] All nodes defined in nodes/__init__.py
- [ ] All edges connect valid nodes
- [ ] Entry node specified
- [ ] Agent can be imported: `from exports.agent_name import default_agent`
- [ ] README.md with usage instructions
- [ ] CLI commands work (info, validate)
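A quick programmatic pass over the first few checklist items (a sketch; assumes `core` and `exports` are on `PYTHONPATH` so the package imports as `my_agent`):
```python
from my_agent import default_agent

# validate() and info() return plain dicts, as defined in agent.py
report = default_agent.validate()
assert report["valid"], report["errors"]

info = default_agent.info()
print(f"{info['name']}: {len(info['nodes'])} nodes, {len(info['edges'])} edges - ready for testing")
```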
## Related Skills
- **building-agents-core** - Fundamental concepts
- **building-agents-construction** - Step-by-step building
- **testing-agent** - Test and validate agents
- **agent-workflow** - Complete workflow orchestrator
---
**Remember: Agent is actively constructed, visible the whole time. No hidden state. No surprise exports. Just transparent, incremental file building.**
@@ -1,572 +0,0 @@
---
name: setup-credentials
description: Set up and install credentials for an agent. Detects missing credentials from agent config, collects them from the user, and stores them securely in the encrypted credential store at ~/.hive/credentials.
license: Apache-2.0
metadata:
author: hive
version: "2.1"
type: utility
---
# Setup Credentials
Interactive credential setup for agents with multiple authentication options. Detects what's missing, offers auth method choices, validates with health checks, and stores credentials securely.
## When to Use
- Before running or testing an agent for the first time
- When `AgentRunner.run()` fails with "missing required credentials"
- When a user asks to configure credentials for an agent
- After building a new agent that uses tools requiring API keys
## Workflow
### Step 1: Identify the Agent
Determine which agent needs credentials. The user will either:
- Name the agent directly (e.g., "set up credentials for hubspot-agent")
- Have an agent directory open (check `exports/` for agent dirs)
- Be working on an agent in the current session
Locate the agent's directory under `exports/{agent_name}/`.
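If the agent is not named explicitly, a quick way to list candidates (assuming the standard `exports/` layout):
```python
from pathlib import Path

# Each agent package has an agent.py at its top level
agent_dirs = sorted(p.parent.name for p in Path("exports").glob("*/agent.py"))
print("Agent packages found:", agent_dirs)
```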
### Step 2: Detect Required Credentials
Read the agent's configuration to determine which tools and node types it uses:
```python
from core.framework.runner import AgentRunner
runner = AgentRunner.load("exports/{agent_name}")
validation = runner.validate()
# validation.missing_credentials contains env var names
# validation.warnings contains detailed messages with help URLs
```
Alternatively, check the credential store directly:
```python
from core.framework.credentials import CredentialStore
# Use encrypted storage (default: ~/.hive/credentials)
store = CredentialStore.with_encrypted_storage()
# Check what's available
available = store.list_credentials()
print(f"Available credentials: {available}")
# Check if specific credential exists
if store.is_available("hubspot"):
print("HubSpot credential found")
else:
print("HubSpot credential missing")
```
To see all known credential specs (for help URLs and setup instructions):
```python
from aden_tools.credentials import CREDENTIAL_SPECS
for name, spec in CREDENTIAL_SPECS.items():
print(f"{name}: env_var={spec.env_var}, aden={spec.aden_supported}")
```
### Step 3: Present Auth Options for Each Missing Credential
For each missing credential, check what authentication methods are available:
```python
from aden_tools.credentials import CREDENTIAL_SPECS
spec = CREDENTIAL_SPECS.get("hubspot")
if spec:
# Determine available auth options
auth_options = []
if spec.aden_supported:
auth_options.append("aden")
if spec.direct_api_key_supported:
auth_options.append("direct")
auth_options.append("custom") # Always available
# Get setup info
setup_info = {
"env_var": spec.env_var,
"description": spec.description,
"help_url": spec.help_url,
"api_key_instructions": spec.api_key_instructions,
}
```
Present the available options using AskUserQuestion:
```
Choose how to configure HUBSPOT_ACCESS_TOKEN:
1) Aden Authorization Server (Recommended)
Secure OAuth2 flow via integration.adenhq.com
- Quick setup with automatic token refresh
- No need to manage API keys manually
2) Direct API Key
Enter your own API key manually
- Requires creating a HubSpot Private App
- Full control over scopes and permissions
3) Custom Credential Store (Advanced)
Programmatic configuration for CI/CD
- For automated deployments
- Requires manual API calls
```
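Expressed as an actual call, this follows the same AskUserQuestion shape used in the construction skill (labels and descriptions here are illustrative):
```
AskUserQuestion(questions=[{
    "question": "How do you want to configure HUBSPOT_ACCESS_TOKEN?",
    "header": "HubSpot auth",
    "options": [
        {"label": "Aden", "description": "OAuth2 via integration.adenhq.com, automatic token refresh"},
        {"label": "Direct API Key", "description": "Enter a HubSpot Private App token manually"},
        {"label": "Custom", "description": "Programmatic CredentialStore setup for CI/CD"}
    ],
    "multiSelect": false
}])
```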
### Step 4: Execute Auth Flow Based on User Choice
#### Option 1: Aden Authorization Server
This is the recommended flow for supported integrations (HubSpot, etc.).
**How Aden OAuth Works:**
The ADEN_API_KEY represents a user who has already completed OAuth authorization on Aden's platform. When users sign up and connect integrations on Aden, those OAuth tokens are stored server-side. Having an ADEN_API_KEY means:
1. User has an Aden account
2. User has already authorized integrations (HubSpot, etc.) via OAuth on Aden
3. We just need to sync those credentials down to the local credential store
**4.1a. Check for ADEN_API_KEY**
```python
import os
aden_key = os.environ.get("ADEN_API_KEY")
```
If not set, guide user to get one from Aden (this is where they do OAuth):
```python
from aden_tools.credentials import open_browser, get_aden_setup_url
# Open browser to Aden - user will sign up and connect integrations there
url = get_aden_setup_url() # https://integration.adenhq.com/setup
success, msg = open_browser(url)
print("Please sign in to Aden and connect your integrations (HubSpot, etc.).")
print("Once done, copy your API key and return here.")
```
Ask user to provide the ADEN_API_KEY they received.
**4.1b. Save ADEN_API_KEY to Shell Config**
With user approval, persist ADEN_API_KEY to their shell config:
```python
from aden_tools.credentials import (
detect_shell,
add_env_var_to_shell_config,
get_shell_source_command,
)
shell_type = detect_shell() # 'bash', 'zsh', or 'unknown'
# Ask user for approval before modifying shell config
# If approved:
success, config_path = add_env_var_to_shell_config(
"ADEN_API_KEY",
user_provided_key,
comment="Aden authorization server API key"
)
if success:
source_cmd = get_shell_source_command()
print(f"Saved to {config_path}")
print(f"Run: {source_cmd}")
```
Also save to `~/.hive/configuration.json` for the framework:
```python
import json
from pathlib import Path
config_path = Path.home() / ".hive" / "configuration.json"
config = json.loads(config_path.read_text()) if config_path.exists() else {}
config["aden"] = {
"api_key_configured": True,
"api_url": "https://api.adenhq.com"
}
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))
```
**4.1c. Sync Credentials from Aden Server**
Since the user has already authorized integrations on Aden, use the one-liner factory method:
```python
from core.framework.credentials import CredentialStore
# This single call handles everything:
# - Creates encrypted local storage at ~/.hive/credentials
# - Configures Aden client from ADEN_API_KEY env var
# - Syncs all credentials from Aden server automatically
store = CredentialStore.with_aden_sync(
base_url="https://api.adenhq.com",
auto_sync=True, # Syncs on creation
)
# Check what was synced
synced = store.list_credentials()
print(f"Synced credentials: {synced}")
# If the required credential wasn't synced, the user hasn't authorized it on Aden yet
if "hubspot" not in synced:
print("HubSpot not found in your Aden account.")
print("Please visit https://integration.adenhq.com to connect HubSpot, then try again.")
```
For more control over the sync process:
```python
from core.framework.credentials import CredentialStore
from core.framework.credentials.aden import (
AdenCredentialClient,
AdenClientConfig,
AdenSyncProvider,
)
# Create client (API key loaded from ADEN_API_KEY env var)
client = AdenCredentialClient(AdenClientConfig(
base_url="https://api.adenhq.com",
))
# Create provider and store
provider = AdenSyncProvider(client=client)
store = CredentialStore.with_encrypted_storage()
# Manual sync
synced_count = provider.sync_all(store)
print(f"Synced {synced_count} credentials from Aden")
```
**4.1d. Run Health Check**
```python
from aden_tools.credentials import check_credential_health
# Get the token from the store
cred = store.get_credential("hubspot")
token = cred.keys["access_token"].value.get_secret_value()
result = check_credential_health("hubspot", token)
if result.valid:
print("HubSpot credentials validated successfully!")
else:
print(f"Validation failed: {result.message}")
# Offer to retry the OAuth flow
```
#### Option 2: Direct API Key
For users who prefer manual API key management.
**4.2a. Show Setup Instructions**
```python
from aden_tools.credentials import CREDENTIAL_SPECS
spec = CREDENTIAL_SPECS.get("hubspot")
if spec and spec.api_key_instructions:
print(spec.api_key_instructions)
# Output:
# To get a HubSpot Private App token:
# 1. Go to HubSpot Settings > Integrations > Private Apps
# 2. Click "Create a private app"
# 3. Name your app (e.g., "Hive Agent")
# ...
if spec and spec.help_url:
print(f"More info: {spec.help_url}")
```
**4.2b. Collect API Key from User**
Use AskUserQuestion to securely collect the API key:
```
Please provide your HubSpot access token:
(This will be stored securely in ~/.hive/credentials)
```
**4.2c. Run Health Check Before Storing**
```python
from aden_tools.credentials import check_credential_health
result = check_credential_health("hubspot", user_provided_token)
if not result.valid:
print(f"Warning: {result.message}")
# Ask user if they want to:
# 1. Try a different token
# 2. Continue anyway (not recommended)
```
**4.2d. Store in Encrypted Credential Store**
```python
from core.framework.credentials import CredentialStore, CredentialObject, CredentialKey
from pydantic import SecretStr
store = CredentialStore.with_encrypted_storage()
cred = CredentialObject(
id="hubspot",
name="HubSpot Access Token",
keys={
"access_token": CredentialKey(
name="access_token",
value=SecretStr(user_provided_token),
)
},
)
store.save_credential(cred)
```
**4.2e. Export to Current Session**
```bash
export HUBSPOT_ACCESS_TOKEN="the-value"
```
#### Option 3: Custom Credential Store (Advanced)
For programmatic/CI/CD setups.
**4.3a. Show Documentation**
```
For advanced credential management, you can use the CredentialStore API directly:
from core.framework.credentials import CredentialStore, CredentialObject, CredentialKey
from pydantic import SecretStr
store = CredentialStore.with_encrypted_storage()
cred = CredentialObject(
id="hubspot",
name="HubSpot Access Token",
keys={"access_token": CredentialKey(name="access_token", value=SecretStr("..."))}
)
store.save_credential(cred)
For CI/CD environments:
- Set HIVE_CREDENTIAL_KEY for encryption
- Pre-populate ~/.hive/credentials programmatically
- Or use environment variables directly (HUBSPOT_ACCESS_TOKEN)
Documentation: See core/framework/credentials/README.md
```
### Step 5: Record Configuration Method
Track which auth method was used for each credential in `~/.hive/configuration.json`:
```python
import json
from pathlib import Path
from datetime import datetime
config_path = Path.home() / ".hive" / "configuration.json"
config = json.loads(config_path.read_text()) if config_path.exists() else {}
if "credential_methods" not in config:
config["credential_methods"] = {}
config["credential_methods"]["hubspot"] = {
"method": "aden", # or "direct" or "custom"
"configured_at": datetime.now().isoformat(),
}
config_path.write_text(json.dumps(config, indent=2))
```
### Step 6: Verify All Credentials
Run validation again to confirm everything is set:
```python
runner = AgentRunner.load("exports/{agent_name}")
validation = runner.validate()
assert not validation.missing_credentials, "Still missing credentials!"
```
Report the result to the user.
## Health Check Reference
Health checks validate credentials by making lightweight API calls:
| Credential | Endpoint | What It Checks |
| -------------- | --------------------------------------- | --------------------------------- |
| `hubspot` | `GET /crm/v3/objects/contacts?limit=1` | Bearer token validity, CRM scopes |
| `brave_search` | `GET /res/v1/web/search?q=test&count=1` | API key validity |
```python
from aden_tools.credentials import check_credential_health, HealthCheckResult
result: HealthCheckResult = check_credential_health("hubspot", token_value)
# result.valid: bool
# result.message: str
# result.details: dict (status_code, rate_limited, etc.)
```
## Encryption Key (HIVE_CREDENTIAL_KEY)
The encrypted credential store requires `HIVE_CREDENTIAL_KEY` to encrypt/decrypt credentials.
- If the user doesn't have one, `EncryptedFileStorage` will auto-generate one and log it
- The user MUST persist this key (e.g., in `~/.bashrc` or a secrets manager)
- Without this key, stored credentials cannot be decrypted
- This is the ONLY secret that should live in `~/.bashrc` or environment config
If `HIVE_CREDENTIAL_KEY` is not set:
1. Let the store generate one
2. Tell the user to save it: `export HIVE_CREDENTIAL_KEY="{generated_key}"`
3. Recommend adding it to `~/.bashrc` or their shell profile
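A small sketch of that check before touching the store (the printed reminder text is illustrative):
```python
import os

if not os.environ.get("HIVE_CREDENTIAL_KEY"):
    # EncryptedFileStorage will generate a key on first use and log it;
    # without that key, stored credentials cannot be decrypted later.
    print("HIVE_CREDENTIAL_KEY is not set - a new key will be generated.")
    print('Persist it afterwards, e.g.: export HIVE_CREDENTIAL_KEY="<generated_key>"')
```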
## Security Rules
- **NEVER** log, print, or echo credential values in tool output
- **NEVER** store credentials in plaintext files, git-tracked files, or agent configs
- **NEVER** hardcode credentials in source code
- **ALWAYS** use `SecretStr` from Pydantic when handling credential values in Python
- **ALWAYS** use the encrypted credential store (`~/.hive/credentials`) for persistence
- **ALWAYS** run health checks before storing credentials (when possible)
- **ALWAYS** verify credentials were stored by re-running validation, not by reading them back
- When modifying `~/.bashrc` or `~/.zshrc`, confirm with the user first
## Credential Sources Reference
All credential specs are defined in `tools/src/aden_tools/credentials/`:
| File | Category | Credentials | Aden Supported |
| ----------------- | ------------- | --------------------------------------------- | -------------- |
| `llm.py` | LLM Providers | `anthropic` | No |
| `search.py` | Search Tools | `brave_search`, `google_search`, `google_cse` | No |
| `integrations.py` | Integrations | `hubspot` | Yes |
**Note:** Additional LLM providers (Cerebras, Groq, OpenAI) are handled by LiteLLM via environment
variables (`CEREBRAS_API_KEY`, `GROQ_API_KEY`, `OPENAI_API_KEY`) but are not yet in CREDENTIAL_SPECS.
Add them to `llm.py` as needed.
To check what's registered:
```python
from aden_tools.credentials import CREDENTIAL_SPECS
for name, spec in CREDENTIAL_SPECS.items():
print(f"{name}: aden={spec.aden_supported}, direct={spec.direct_api_key_supported}")
```
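For the LiteLLM-only providers mentioned in the note above, a quick presence check (sketch; it reports only whether each variable is set, never its value):
```python
import os

# LiteLLM providers configured via environment variables, not yet in CREDENTIAL_SPECS
for var in ("CEREBRAS_API_KEY", "GROQ_API_KEY", "OPENAI_API_KEY"):
    print(f"{var}: {'set' if os.environ.get(var) else 'missing'}")
```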
## Migration: CredentialManager → CredentialStore
**CredentialManager is deprecated.** Use CredentialStore instead.
| Old (Deprecated) | New (Recommended) |
| ----------------------------------------- | -------------------------------------------------------------------- |
| `CredentialManager()` | `CredentialStore.with_encrypted_storage()` |
| `creds.get("hubspot")` | `store.get("hubspot")` or `store.get_key("hubspot", "access_token")` |
| `creds.validate_for_tools(tools)` | Use `store.is_available(cred_id)` per credential |
| `creds.get_auth_options("hubspot")` | Check `CREDENTIAL_SPECS["hubspot"].aden_supported` |
| `creds.get_setup_instructions("hubspot")` | Access `CREDENTIAL_SPECS["hubspot"]` directly |
**Why migrate?**
- **CredentialStore** supports encrypted storage, multi-key credentials, template resolution, and automatic token refresh
- **CredentialManager** only reads from environment variables and .env files (no encryption, no refresh)
- **CredentialStoreAdapter** exists for backward compatibility during migration
```python
# Old way (deprecated)
from aden_tools.credentials import CredentialManager
creds = CredentialManager()
token = creds.get("hubspot")
# New way (recommended)
from core.framework.credentials import CredentialStore
store = CredentialStore.with_encrypted_storage()
token = store.get("hubspot")
# With Aden sync (recommended for OAuth integrations)
store = CredentialStore.with_aden_sync()
token = store.get_key("hubspot", "access_token")
```
## Example Session
```
User: /setup-credentials for my hubspot-agent
Agent: Let me check what credentials your hubspot-agent needs.
[Runs validation, finds ANTHROPIC_API_KEY and HUBSPOT_ACCESS_TOKEN missing]
Agent: Your hubspot-agent requires 2 credentials:
1. ANTHROPIC_API_KEY - Anthropic API key for LLM calls
2. HUBSPOT_ACCESS_TOKEN - HubSpot access token
Let's set these up one at a time.
--- Setting up ANTHROPIC_API_KEY ---
This credential only supports direct API key entry.
[Shows instructions for getting Anthropic API key]
[AskUserQuestion: "Please provide your Anthropic API key:"]
[User provides key]
[Stores in credential store, exports to env]
--- Setting up HUBSPOT_ACCESS_TOKEN ---
[AskUserQuestion with 3 options]
Choose how to configure HUBSPOT_ACCESS_TOKEN:
1) Aden Authorization Server (Recommended)
2) Direct API Key
3) Custom Credential Store (Advanced)
User: 1 (Aden)
Agent: Great! Let me check if you have an Aden API key configured...
[Checks for ADEN_API_KEY, not found]
[Opens browser to integration.adenhq.com/setup]
Agent: I've opened Aden's setup page. Please:
1. Sign in or create an account
2. Connect your HubSpot account (OAuth happens on Aden's site)
3. Copy the API key shown after setup
[AskUserQuestion: "Please provide your Aden API key:"]
[User provides key]
Agent: Would you like me to save this to your shell config (~/.zshrc) for future sessions?
[User approves]
[Saves to ~/.zshrc and ~/.hive/configuration.json]
Agent: Let me sync your credentials from Aden...
[Syncs credentials from Aden server - OAuth already done on Aden's side]
[Runs health check]
Agent: HubSpot credentials validated successfully!
All credentials are now configured:
- ANTHROPIC_API_KEY: Stored in encrypted credential store
- HUBSPOT_ACCESS_TOKEN: Synced from Aden (OAuth completed on Aden)
- Validation passed - your agent is ready to run!
```
File diff suppressed because it is too large Load Diff
@@ -1,351 +0,0 @@
# Example: Testing a YouTube Research Agent
This example walks through testing a YouTube research agent that finds relevant videos based on a topic.
## Prerequisites
- Agent built with building-agents skill at `exports/youtube-research/`
- Goal defined with success criteria and constraints
## Step 1: Load the Goal
First, load the goal that was defined during the Goal stage:
```json
{
"id": "youtube-research",
"name": "YouTube Research Agent",
"description": "Find relevant YouTube videos on a given topic",
"success_criteria": [
{
"id": "find_videos",
"description": "Find 3-5 relevant videos",
"metric": "video_count",
"target": "3-5",
"weight": 1.0
},
{
"id": "relevance",
"description": "Videos must be relevant to the topic",
"metric": "relevance_score",
"target": ">0.8",
"weight": 0.8
}
],
"constraints": [
{
"id": "api_limits",
"description": "Must not exceed YouTube API rate limits",
"constraint_type": "hard",
"category": "technical"
},
{
"id": "content_safety",
"description": "Must filter out inappropriate content",
"constraint_type": "hard",
"category": "safety"
}
]
}
```
## Step 2: Get Constraint Test Guidelines
During the Goal stage (or early Eval), get test guidelines for constraints:
```python
result = generate_constraint_tests(
goal_id="youtube-research",
goal_json='<goal JSON above>',
agent_path="exports/youtube-research"
)
```
**The result contains guidelines (not generated tests):**
- `output_file`: Where to write tests
- `file_header`: Imports and fixtures to use
- `test_template`: Format for test functions
- `constraints_formatted`: The constraints to test
- `test_guidelines`: Rules for writing tests
## Step 3: Write Constraint Tests
Using the guidelines, write tests directly with the Write tool:
```python
# Write constraint tests using the provided file_header and guidelines
Write(
file_path="exports/youtube-research/tests/test_constraints.py",
content='''
"""Constraint tests for youtube-research agent."""
import os
import pytest
from exports.youtube_research import default_agent
pytestmark = pytest.mark.skipif(
not os.environ.get("ANTHROPIC_API_KEY") and not os.environ.get("MOCK_MODE"),
reason="API key required for real testing."
)
@pytest.mark.asyncio
async def test_constraint_api_limits_respected():
"""Verify API rate limits are not exceeded."""
import time
mock_mode = bool(os.environ.get("MOCK_MODE"))
for i in range(10):
result = await default_agent.run({"topic": f"test_{i}"}, mock_mode=mock_mode)
time.sleep(0.1)
# Should complete without rate limit errors
assert "rate limit" not in str(result).lower()
@pytest.mark.asyncio
async def test_constraint_content_safety_filter():
"""Verify inappropriate content is filtered."""
mock_mode = bool(os.environ.get("MOCK_MODE"))
result = await default_agent.run({"topic": "general topic"}, mock_mode=mock_mode)
for video in result.videos:
assert video.safe_for_work is True
assert video.age_restricted is False
'''
)
```
## Step 4: Get Success Criteria Test Guidelines
After the agent is built, get success criteria test guidelines:
```python
result = generate_success_tests(
goal_id="youtube-research",
goal_json='<goal JSON>',
node_names="search_node,filter_node,rank_node,format_node",
tool_names="youtube_search,video_details,channel_info",
agent_path="exports/youtube-research"
)
```
## Step 5: Write Success Criteria Tests
Using the guidelines, write success criteria tests:
```python
Write(
file_path="exports/youtube-research/tests/test_success_criteria.py",
content='''
"""Success criteria tests for youtube-research agent."""
import os
import pytest
from exports.youtube_research import default_agent
pytestmark = pytest.mark.skipif(
not os.environ.get("ANTHROPIC_API_KEY") and not os.environ.get("MOCK_MODE"),
reason="API key required for real testing."
)
@pytest.mark.asyncio
async def test_find_videos_happy_path():
"""Test finding videos for a common topic."""
mock_mode = bool(os.environ.get("MOCK_MODE"))
result = await default_agent.run({"topic": "machine learning"}, mock_mode=mock_mode)
assert result.success
assert 3 <= len(result.videos) <= 5
assert all(v.title for v in result.videos)
assert all(v.video_id for v in result.videos)
@pytest.mark.asyncio
async def test_find_videos_minimum_boundary():
"""Test at minimum threshold (3 videos)."""
mock_mode = bool(os.environ.get("MOCK_MODE"))
result = await default_agent.run({"topic": "niche topic xyz"}, mock_mode=mock_mode)
assert len(result.videos) >= 3
@pytest.mark.asyncio
async def test_relevance_score_threshold():
"""Test relevance scoring meets threshold."""
mock_mode = bool(os.environ.get("MOCK_MODE"))
result = await default_agent.run({"topic": "python programming"}, mock_mode=mock_mode)
for video in result.videos:
assert video.relevance_score > 0.8
@pytest.mark.asyncio
async def test_find_videos_no_results_graceful():
"""Test graceful handling of no results."""
mock_mode = bool(os.environ.get("MOCK_MODE"))
result = await default_agent.run({"topic": "xyznonexistent123"}, mock_mode=mock_mode)
# Should not crash, return empty or message
assert result.videos == [] or result.message
'''
)
```
## Step 6: Run All Tests
Execute all tests:
```python
result = run_tests(
goal_id="youtube-research",
agent_path="exports/youtube-research",
test_types='["all"]',
parallel=4
)
```
**Results:**
```json
{
"goal_id": "youtube-research",
"overall_passed": false,
"summary": {
"total": 6,
"passed": 5,
"failed": 1,
"pass_rate": "83.3%"
},
"duration_ms": 4521,
"results": [
{"test_id": "test_constraint_api_001", "passed": true, "duration_ms": 1234},
{"test_id": "test_constraint_content_001", "passed": true, "duration_ms": 456},
{"test_id": "test_success_001", "passed": true, "duration_ms": 789},
{"test_id": "test_success_002", "passed": true, "duration_ms": 654},
{"test_id": "test_success_003", "passed": true, "duration_ms": 543},
{"test_id": "test_success_004", "passed": false, "duration_ms": 845,
"error_category": "IMPLEMENTATION_ERROR",
"error_message": "TypeError: 'NoneType' object has no attribute 'videos'"}
]
}
```
## Step 7: Debug the Failed Test
```python
result = debug_test(
goal_id="youtube-research",
test_name="test_find_videos_no_results_graceful",
agent_path="exports/youtube-research"
)
```
**Debug Output:**
```json
{
"test_id": "test_success_004",
"test_name": "test_find_videos_no_results_graceful",
"input": {"topic": "xyznonexistent123"},
"expected": "Empty list or message",
"actual": {"error": "TypeError: 'NoneType' object has no attribute 'videos'"},
"passed": false,
"error_message": "TypeError: 'NoneType' object has no attribute 'videos'",
"error_category": "IMPLEMENTATION_ERROR",
"stack_trace": "Traceback (most recent call last):\n File \"filter_node.py\", line 42\n for video in result.videos:\nTypeError: 'NoneType' object has no attribute 'videos'",
"logs": [
{"timestamp": "2026-01-20T10:00:01", "node": "search_node", "level": "INFO", "msg": "Searching for: xyznonexistent123"},
{"timestamp": "2026-01-20T10:00:02", "node": "search_node", "level": "WARNING", "msg": "No results found"},
{"timestamp": "2026-01-20T10:00:02", "node": "filter_node", "level": "ERROR", "msg": "NoneType error"}
],
"runtime_data": {
"execution_path": ["start", "search_node", "filter_node"],
"node_outputs": {
"search_node": null
}
},
"suggested_fix": "Add null check in filter_node before accessing .videos attribute",
"iteration_guidance": {
"stage": "Agent",
"action": "Fix the code in nodes/edges",
"restart_required": false,
"description": "The goal is correct, but filter_node doesn't handle null results from search_node."
}
}
```
## Step 8: Iterate Based on Category
Since this is an **IMPLEMENTATION_ERROR**, we:
1. **Don't restart** the Goal → Agent → Eval flow
2. **Fix the agent** using building-agents skill:
- Modify `filter_node` to handle null results
3. **Re-run Eval** (tests only)
### Fix in building-agents:
```python
# Update the filter_node to handle null
add_node(
node_id="filter_node",
name="Filter Node",
description="Filter and rank videos",
node_type="function",
input_keys=["search_results"],
output_keys=["filtered_videos"],
system_prompt="""
Filter videos by relevance.
IMPORTANT: Handle case where search_results is None or empty.
Return empty list if no results.
"""
)
```
### Re-export and re-test:
```python
# Re-export the fixed agent
export_graph(path="exports/youtube-research")
# Re-run tests
result = run_tests(
goal_id="youtube-research",
agent_path="exports/youtube-research",
test_types='["all"]'
)
```
**Updated Results:**
```json
{
"goal_id": "youtube-research",
"overall_passed": true,
"summary": {
"total": 6,
"passed": 6,
"failed": 0,
"pass_rate": "100.0%"
}
}
```
## Summary
1. **Got guidelines** for constraint tests during Goal stage
2. **Wrote** constraint tests using Write tool
3. **Got guidelines** for success criteria tests during Eval stage
4. **Wrote** success criteria tests using Write tool
5. **Ran** tests in parallel
6. **Debugged** the one failure
7. **Categorized** as IMPLEMENTATION_ERROR
8. **Fixed** the agent (not the goal)
9. **Re-ran** Eval only (didn't restart full flow)
10. **Passed** all tests
The agent is now validated and ready for production use.
-20
View File
@@ -1,20 +0,0 @@
{
"mcpServers": {
"agent-builder": {
"command": "python",
"args": ["-m", "framework.mcp.agent_builder_server"],
"cwd": "core",
"env": {
"PYTHONPATH": "../tools/src"
}
},
"tools": {
"command": "python",
"args": ["mcp_server.py", "--stdio"],
"cwd": "tools",
"env": {
"PYTHONPATH": "src"
}
}
}
}
-1
View File
@@ -1 +0,0 @@
../../.claude/skills/agent-workflow
@@ -1 +0,0 @@
../../.claude/skills/building-agents-construction
-1
View File
@@ -1 +0,0 @@
../../.claude/skills/building-agents-core
-1
View File
@@ -1 +0,0 @@
../../.claude/skills/building-agents-patterns
-1
View File
@@ -1 +0,0 @@
../../.claude/skills/testing-agent
+3 -2
View File
@@ -1,9 +1,10 @@
---
name: Bug Report
about: Report a bug to help us improve
title: '[Bug]: '
labels: bug
title: "[Bug]: "
labels: bug, enhancement
assignees: ''
---
## Describe the Bug
+2 -1
View File
@@ -1,9 +1,10 @@
---
name: Feature Request
about: Suggest a new feature or enhancement
title: '[Feature]: '
title: "[Feature]: "
labels: enhancement
assignees: ''
---
## Problem Statement
@@ -0,0 +1,89 @@
name: Integration Bounty
description: A bounty task for the integration contribution program
title: "[Bounty]: "
labels: []
body:
- type: markdown
attributes:
value: |
## Integration Bounty
This issue is part of the [Integration Bounty Program](../../docs/bounty-program/README.md).
**Claim this bounty** by commenting below — a maintainer will assign you within 24 hours.
- type: dropdown
id: bounty-type
attributes:
label: Bounty Type
options:
- "Test a Tool (20 pts)"
- "Write Docs (20 pts)"
- "Code Contribution (30 pts)"
- "New Integration (75 pts)"
validations:
required: true
- type: dropdown
id: difficulty
attributes:
label: Difficulty
options:
- Easy
- Medium
- Hard
validations:
required: true
- type: input
id: tool-name
attributes:
label: Tool Name
description: The integration this bounty targets (e.g., `airtable`, `salesforce`)
placeholder: e.g., airtable
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: What needs to be done to complete this bounty.
placeholder: |
Describe the specific task, including:
- What the contributor needs to do
- Links to relevant files in the repo
- Any setup requirements (API keys, accounts, etc.)
validations:
required: true
- type: textarea
id: acceptance-criteria
attributes:
label: Acceptance Criteria
description: What "done" looks like. The PR or report must meet all criteria.
placeholder: |
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] CI passes
validations:
required: true
- type: textarea
id: relevant-files
attributes:
label: Relevant Files
description: Links to tool directory, credential spec, health check file, etc.
placeholder: |
- Tool: `tools/src/aden_tools/tools/{tool_name}/`
- Credential spec: `tools/src/aden_tools/credentials/{category}.py`
- Health checks: `tools/src/aden_tools/credentials/health_check.py`
- type: textarea
id: resources
attributes:
label: Resources
description: Links to API docs, examples, or guides that will help the contributor.
placeholder: |
- [Building Tools Guide](../../tools/BUILDING_TOOLS.md)
- [Tool README Template](../../docs/bounty-program/templates/tool-readme-template.md)
- API docs: https://...
@@ -0,0 +1,71 @@
---
name: Integration Request
about: Suggest a new integration
title: "[Integration]:"
labels: ''
assignees: ''
---
## Service
Name and brief description of the service and what it enables agents to do.
**Description:** [e.g., "API key for Slack Bot" — short one-liner for the credential spec]
## Credential Identity
- **credential_id:** [e.g., `slack`]
- **env_var:** [e.g., `SLACK_BOT_TOKEN`]
- **credential_key:** [e.g., `access_token`, `api_key`, `bot_token`]
## Tools
Tool function names that require this credential:
- [e.g., `slack_send_message`]
- [e.g., `slack_list_channels`]
## Auth Methods
- **Direct API key supported:** Yes / No
- **Aden OAuth supported:** Yes / No
If Aden OAuth is supported, describe the OAuth scopes/permissions required.
## How to Get the Credential
Link where users obtain the key/token:
[e.g., https://api.slack.com/apps]
Step-by-step instructions:
1. Go to ...
2. Create a ...
3. Select scopes/permissions: ...
4. Copy the key/token
## Health Check
A lightweight API call to validate the credential (no writes, no charges).
- **Endpoint:** [e.g., `https://slack.com/api/auth.test`]
- **Method:** [e.g., `GET` or `POST`]
- **Auth header:** [e.g., `Authorization: Bearer {token}` or `X-Api-Key: {key}`]
- **Parameters (if any):** [e.g., `?limit=1`]
- **200 means:** [e.g., key is valid]
- **401 means:** [e.g., invalid or expired]
- **429 means:** [e.g., rate limited but key is valid]
## Credential Group
Does this require multiple credentials configured together? (e.g., Google Custom Search needs
both an API key and a CSE ID)
- [ ] No, single credential
- [ ] Yes — list the other credential IDs in the group:
## Additional Context
Links to API docs, rate limits, free tier availability, or anything else relevant.
+37
View File
@@ -0,0 +1,37 @@
name: Bounty completed
description: Awards points and notifies Discord when a bounty PR is merged
on:
pull_request:
types: [closed]
jobs:
bounty-notify:
if: >
github.event.pull_request.merged == true &&
contains(join(github.event.pull_request.labels.*.name, ','), 'bounty:')
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
pull-requests: read
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Award XP and notify Discord
run: bun run scripts/bounty-tracker.ts notify
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
DISCORD_WEBHOOK_URL: ${{ secrets.DISCORD_BOUNTY_WEBHOOK_URL }}
LURKR_API_KEY: ${{ secrets.LURKR_API_KEY }}
LURKR_GUILD_ID: ${{ secrets.LURKR_GUILD_ID }}
PR_NUMBER: ${{ github.event.pull_request.number }}
+58 -30
View File
@@ -5,7 +5,7 @@ on:
branches: [main]
pull_request:
branches: [main]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@@ -21,23 +21,24 @@ jobs:
uses: actions/setup-python@v5
with:
python-version: '3.11'
cache: 'pip'
- name: Install uv
uses: astral-sh/setup-uv@v4
with:
enable-cache: true
- name: Install dependencies
run: |
cd core
pip install -e .
pip install -r requirements-dev.txt
run: uv sync --project core --group dev
- name: Ruff lint
run: |
ruff check core/
ruff check tools/
uv run --project core ruff check core/
uv run --project core ruff check tools/
- name: Ruff format
run: |
ruff format --check core/
ruff format --check tools/
uv run --project core ruff format --check core/
uv run --project core ruff format --check tools/
test:
name: Test Python Framework
@@ -52,23 +53,24 @@ jobs:
uses: actions/setup-python@v5
with:
python-version: '3.11'
cache: 'pip'
- name: Install dependencies
- name: Install uv
uses: astral-sh/setup-uv@v4
with:
enable-cache: true
- name: Install dependencies and run tests
working-directory: core
run: |
cd core
pip install -e .
pip install -r requirements-dev.txt
uv sync
uv run pytest tests/ -v
- name: Run tests
run: |
cd core
pytest tests/ -v
validate:
name: Validate Agent Exports
runs-on: ubuntu-latest
needs: [lint, test]
test-tools:
name: Test Tools (${{ matrix.os }})
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest]
steps:
- uses: actions/checkout@v4
@@ -76,13 +78,39 @@ jobs:
uses: actions/setup-python@v5
with:
python-version: '3.11'
cache: 'pip'
- name: Install dependencies
- name: Install uv
uses: astral-sh/setup-uv@v4
with:
enable-cache: true
- name: Install dependencies and run tests
working-directory: tools
run: |
cd core
pip install -e .
pip install -r requirements-dev.txt
uv sync --extra dev
uv run pytest tests/ -v
validate:
name: Validate Agent Exports
runs-on: ubuntu-latest
needs: [lint, test, test-tools]
steps:
- uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install uv
uses: astral-sh/setup-uv@v4
with:
enable-cache: true
- name: Install dependencies
working-directory: core
run: |
uv sync
- name: Validate exported agents
run: |
@@ -105,7 +133,7 @@ jobs:
for agent_dir in "${agent_dirs[@]}"; do
if [ -f "$agent_dir/agent.json" ]; then
echo "Validating $agent_dir"
python -c "import json; json.load(open('$agent_dir/agent.json'))"
uv run python -c "import json; json.load(open('$agent_dir/agent.json'))"
validated=$((validated + 1))
fi
done
+7 -1
View File
@@ -80,7 +80,13 @@ jobs:
- help wanted: Extra attention is needed (if issue needs community input)
- backlog: Tracked for the future, but not currently planned or prioritized
You may apply multiple labels if appropriate (e.g., "bug" and "help wanted").
### 6. Estimate size (if NOT a duplicate, spam, or invalid)
Apply exactly ONE size label to help contributors match their capacity to the task:
- "size: small": Docs, typos, single-file fixes, config changes
- "size: medium": Bug fixes with tests, adding a single tool, changes within one package
- "size: large": Cross-package changes (core + tools), new modules, complex logic, architectural refactors
You may apply multiple labels if appropriate (e.g., "bug", "size: small", and "good first issue").
## Tools Available:
- mcp__github__get_issue: Get issue details
+126
View File
@@ -0,0 +1,126 @@
name: Link Discord account
description: Auto-creates a PR to add contributor to contributors.yml when a link-discord issue is opened
on:
issues:
types: [opened]
jobs:
link-discord:
if: contains(github.event.issue.labels.*.name, 'link-discord')
runs-on: ubuntu-latest
timeout-minutes: 2
permissions:
contents: write
issues: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Parse issue and update contributors.yml
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const issue = context.payload.issue;
const githubUsername = issue.user.login;
// Parse the issue body for form fields
const body = issue.body || '';
// Extract Discord ID — look for the numeric value after the "Discord User ID" heading
const discordMatch = body.match(/### Discord User ID\s*\n\s*(\d{17,20})/);
if (!discordMatch) {
await github.rest.issues.createComment({
...context.repo,
issue_number: issue.number,
body: `Could not find a valid Discord ID in the issue body. Please make sure you entered a numeric ID (17-20 digits), not a username.\n\nExample: \`123456789012345678\``
});
await github.rest.issues.update({
...context.repo,
issue_number: issue.number,
state: 'closed',
state_reason: 'not_planned'
});
return;
}
const discordId = discordMatch[1];
// Extract display name (optional)
const nameMatch = body.match(/### Display Name \(optional\)\s*\n\s*(.+)/);
const displayName = nameMatch ? nameMatch[1].trim() : '';
// Check if user already exists
const yml = fs.readFileSync('contributors.yml', 'utf-8');
if (yml.includes(`github: ${githubUsername}`)) {
await github.rest.issues.createComment({
...context.repo,
issue_number: issue.number,
body: `@${githubUsername} is already in \`contributors.yml\`. If you need to update your Discord ID, please edit the file directly via PR.`
});
await github.rest.issues.update({
...context.repo,
issue_number: issue.number,
state: 'closed',
state_reason: 'completed'
});
return;
}
// Append entry to contributors.yml
let entry = ` - github: ${githubUsername}\n discord: "${discordId}"`;
if (displayName && displayName !== '_No response_') {
entry += `\n name: ${displayName}`;
}
entry += '\n';
const updated = yml.trimEnd() + '\n' + entry;
fs.writeFileSync('contributors.yml', updated);
// Set outputs for commit step
core.exportVariable('GITHUB_USERNAME', githubUsername);
core.exportVariable('DISCORD_ID', discordId);
core.exportVariable('ISSUE_NUMBER', issue.number.toString());
- name: Create PR
run: |
# Check if there are changes
if git diff --quiet contributors.yml; then
echo "No changes to contributors.yml"
exit 0
fi
BRANCH="docs/link-discord-${GITHUB_USERNAME}"
git config user.name "github-actions[bot]"
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git checkout -b "$BRANCH"
git add contributors.yml
git commit -m "docs: link @${GITHUB_USERNAME} to Discord"
git push origin "$BRANCH"
gh pr create \
--title "docs: link @${GITHUB_USERNAME} to Discord" \
--body "Adds @${GITHUB_USERNAME} (Discord \`${DISCORD_ID}\`) to \`contributors.yml\` for bounty XP tracking.
Closes #${ISSUE_NUMBER}" \
--base main \
--head "$BRANCH" \
--label "link-discord"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Notify on issue
uses: actions/github-script@v7
with:
script: |
const username = process.env.GITHUB_USERNAME;
const issueNumber = parseInt(process.env.ISSUE_NUMBER);
await github.rest.issues.createComment({
...context.repo,
issue_number: issueNumber,
body: `A PR has been created to link your account. A maintainer will merge it shortly — once merged, you'll receive XP and Discord pings when your bounty PRs are merged.`
});
@@ -0,0 +1,54 @@
# Closes PRs that still have the `pr-requirements-warning` label
# after contributors were warned in pr-requirements.yml.
name: PR Requirements Enforcement
on:
schedule:
- cron: "0 0 * * *" # runs every day once at midnight
jobs:
enforce:
name: Close PRs still failing contribution requirements
runs-on: ubuntu-latest
permissions:
pull-requests: write
issues: write
steps:
- name: Close PRs still failing requirements
uses: actions/github-script@v7
with:
script: |
const { owner, repo } = context.repo;
const prs = await github.paginate(github.rest.pulls.list, {
owner,
repo,
state: "open",
per_page: 100
});
for (const pr of prs) {
// Skip draft PRs — author may still be actively working toward compliance
if (pr.draft) continue;
const labels = pr.labels.map(l => l.name);
if (!labels.includes("pr-requirements-warning")) continue;
const gracePeriod = 24 * 60 * 60 * 1000;
const lastUpdated = new Date(pr.created_at);
const now = new Date();
if (now - lastUpdated < gracePeriod) {
console.log(`Skipping PR #${pr.number} — still within grace period`);
continue;
}
const prNumber = pr.number;
const prAuthor = pr.user.login;
await github.rest.issues.createComment({
owner,
repo,
issue_number: prNumber,
body: `Closing PR because the contribution requirements were not resolved within the 24-hour grace period.
If this was closed in error, feel free to reopen the PR after fixing the requirements.`
});
await github.rest.pulls.update({
owner,
repo,
pull_number: prNumber,
state: "closed"
});
console.log(`Closed PR #${prNumber} by ${prAuthor} (PR requirements were not met)`);
}
+31 -17
View File
@@ -43,9 +43,10 @@ jobs:
console.log(` Found issue references: ${issueNumbers.length > 0 ? issueNumbers.join(', ') : 'none'}`);
if (issueNumbers.length === 0) {
const message = `## PR Closed - Requirements Not Met
const message = `## PR Requirements Warning
This PR has been automatically closed because it doesn't meet the requirements.
This PR does not meet the contribution requirements.
If the issue is not fixed within ~24 hours, it may be automatically closed.
**Missing:** No linked issue found.
@@ -67,14 +68,15 @@ jobs:
**Why is this required?** See #472 for details.`;
const comments = await github.rest.issues.listComments({
const comments = await github.paginate(github.rest.issues.listComments, {
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: prNumber,
per_page: 100,
});
const botComment = comments.data.find(
(c) => c.user.type === 'Bot' && c.body.includes('PR Closed - Requirements Not Met')
const botComment = comments.find(
(c) => c.user.type === 'Bot' && c.body.includes('PR Requirements Warning')
);
if (!botComment) {
@@ -86,11 +88,11 @@ jobs:
});
}
await github.rest.pulls.update({
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: prNumber,
state: 'closed',
issue_number: prNumber,
labels: ['pr-requirements-warning'],
});
core.setFailed('PR must reference an issue');
@@ -132,9 +134,10 @@ jobs:
`#${i.number} (assignees: ${i.assignees.length > 0 ? i.assignees.join(', ') : 'none'})`
).join(', ');
const message = `## PR Closed - Requirements Not Met
const message = `## PR Requirements Warning
This PR has been automatically closed because it doesn't meet the requirements.
This PR does not meet the contribution requirements.
If the issue is not fixed within ~24 hours, it may be automatically closed.
**PR Author:** @${prAuthor}
**Found issues:** ${issueList}
@@ -157,14 +160,15 @@ jobs:
**Why is this required?** See #472 for details.`;
const comments = await github.rest.issues.listComments({
const comments = await github.paginate(github.rest.issues.listComments, {
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: prNumber,
per_page: 100,
});
const botComment = comments.data.find(
(c) => c.user.type === 'Bot' && c.body.includes('PR Closed - Requirements Not Met')
const botComment = comments.find(
(c) => c.user.type === 'Bot' && c.body.includes('PR Requirements Warning')
);
if (!botComment) {
@@ -176,14 +180,24 @@ jobs:
});
}
await github.rest.pulls.update({
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: prNumber,
state: 'closed',
issue_number: prNumber,
labels: ['pr-requirements-warning'],
});
core.setFailed('PR author must be assigned to the linked issue');
} else {
console.log(`PR requirements met! Issue #${issueWithAuthorAssigned} has ${prAuthor} as assignee.`);
}
try {
await github.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: prNumber,
name: "pr-requirements-warning"
});
} catch (error) {
// ignore if label doesn't exist
}
}
+5 -4
View File
@@ -21,18 +21,19 @@ jobs:
uses: actions/setup-python@v5
with:
python-version: '3.11'
cache: 'pip'
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Install dependencies
run: |
cd core
pip install -e .
pip install -r requirements-dev.txt
uv sync
- name: Run tests
run: |
cd core
pytest tests/ -v
uv run pytest tests/ -v
- name: Generate changelog
id: changelog
+40
View File
@@ -0,0 +1,40 @@
name: Weekly bounty leaderboard
description: Posts the integration bounty leaderboard to Discord every Monday
on:
schedule:
# Every Monday at 9:00 UTC
- cron: "0 9 * * 1"
workflow_dispatch:
inputs:
since_date:
description: "Only count PRs merged after this date (YYYY-MM-DD). Leave empty for all-time."
required: false
jobs:
leaderboard:
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
pull-requests: read
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Post leaderboard to Discord
run: bun run scripts/bounty-tracker.ts leaderboard
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }}
GITHUB_REPOSITORY_NAME: ${{ github.event.repository.name }}
DISCORD_WEBHOOK_URL: ${{ secrets.DISCORD_BOUNTY_WEBHOOK_URL }}
LURKR_API_KEY: ${{ secrets.LURKR_API_KEY }}
LURKR_GUILD_ID: ${{ secrets.LURKR_GUILD_ID }}
SINCE_DATE: ${{ github.event.inputs.since_date || '' }}
+8 -3
View File
@@ -46,6 +46,7 @@ coverage/
# TypeScript
*.tsbuildinfo
vite.config.d.ts
# Python
__pycache__/
@@ -54,7 +55,6 @@ __pycache__/
*.egg-info/
.eggs/
*.egg
uv.lock
# Generated runtime data
core/data/
@@ -67,9 +67,14 @@ temp/
exports/*
.agent-builder-sessions/*
.claude/settings.local.json
.claude/skills/ship-it/
.venv
docs/github-issues/*
core/tests/*
core/tests/*dumps/*
screenshots/*
.gemini/*
+1 -18
View File
@@ -1,20 +1,3 @@
{
"mcpServers": {
"agent-builder": {
"command": ".venv/bin/python",
"args": ["-m", "framework.mcp.agent_builder_server"],
"cwd": "core",
"env": {
"PYTHONPATH": "../tools/src"
}
},
"tools": {
"command": ".venv/bin/python",
"args": ["mcp_server.py", "--stdio"],
"cwd": "tools",
"env": {
"PYTHONPATH": "src:../core"
}
}
}
"mcpServers": {}
}
+1 -1
View File
@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.8.6
rev: v0.15.0
hooks:
- id: ruff
name: ruff lint (core)
-7
View File
@@ -1,7 +0,0 @@
{
"recommendations": [
"charliermarsh.ruff",
"editorconfig.editorconfig",
"ms-python.python"
]
}
+30
View File
@@ -0,0 +1,30 @@
# Repository Guidelines
Shared agent instructions for this workspace.
## Coding Agent Notes
- When working on a GitHub Issue or PR, print the full URL at the end of the task.
- When answering questions, respond with high-confidence answers only: verify in code; do not guess.
- Do not update dependencies casually. Version bumps, patched dependencies, overrides, or vendored dependency changes require explicit approval.
- Add brief comments for tricky logic. Keep files reasonably small when practical; split or refactor large files instead of growing them indefinitely.
- If shared guardrails are available locally, review them; otherwise follow this repo's guidance.
- Use `uv` for Python execution and package management. Do not use `python` or `python3` directly unless the user explicitly asks for it.
- Prefer `uv run` for scripts and tests, and `uv pip` for package operations.
## Multi-Agent Safety
- Do not create, apply, or drop `git stash` entries unless explicitly requested.
- Do not create, remove, or modify `git worktree` checkouts unless explicitly requested.
- Do not switch branches or check out a different branch unless explicitly requested.
- When the user says `push`, you may `git pull --rebase` to integrate latest changes, but never discard other in-progress work.
- When the user says `commit`, commit only your changes. When the user says `commit all`, commit everything in grouped chunks.
- When you see unrecognized files or unrelated changes, keep going and focus on your scoped changes.
## Change Hygiene
- If staged and unstaged diffs are formatting-only, resolve them without asking.
- If a commit or push was already requested, include formatting-only follow-up changes in that same commit when practical.
- Only stop to ask for confirmation when changes are semantic and may alter behavior.
+194 -28
View File
@@ -1,41 +1,207 @@
# Changelog
# Release Notes
All notable changes to this project will be documented in this file.
**Release Date:** February 18, 2026
**Tag:** v0.5.1
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## The Hive Gets a Brain
## [Unreleased]
v0.5.1 is our most ambitious release yet. Hive agents can now **build other agents** -- the new Hive Coder meta-agent writes, tests, and fixes agent packages from natural language. The runtime grows multi-graph support so one session can orchestrate multiple agents simultaneously. The TUI gets a complete overhaul with an in-app agent picker, live streaming, and seamless escalation to the Coder. And we're now provider-agnostic: Claude Code subscriptions, OpenAI-compatible endpoints, and any LiteLLM-supported model work out of the box.
### Added
- Initial project structure
- React frontend (honeycomb) with Vite and TypeScript
- Node.js backend (hive) with Express and TypeScript
- Docker Compose configuration for local development
- Configuration system via `config.yaml`
- GitHub Actions CI/CD workflows
- Comprehensive documentation
---
### Changed
- N/A
## Highlights
### Deprecated
- N/A
### Hive Coder -- The Agent That Builds Agents
### Removed
- N/A
A native meta-agent that lives inside the framework at `core/framework/agents/hive_coder/`. Give it a natural-language specification and it produces a complete agent package -- goal definition, node prompts, edge routing, MCP tool wiring, tests, and all boilerplate files.
```bash
# Launch the Coder directly
hive code
### Fixed
- tools: Fixed web_scrape tool attempting to parse non-HTML content (PDF, JSON) as HTML (#487)
# Or escalate from any running agent (TUI)
Ctrl+E # or /coder in chat
```
### Security
- N/A
The Coder ships with:
## [0.1.0] - 2025-01-13
- **Reference documentation** -- anti-patterns, construction guide, and design patterns baked into its system prompt
- **Guardian watchdog** -- an event-driven monitor that catches agent failures and triggers automatic remediation
- **Coder Tools MCP server** -- file I/O, fuzzy-match editing, git snapshots, and sandboxed shell execution (`tools/coder_tools_server.py`)
- **Test generation** -- structural tests for forever-alive agents that don't hang on `runner.run()`
### Added
- Initial release
### Multi-Graph Agent Runtime
[Unreleased]: https://github.com/adenhq/hive/compare/v0.1.0...HEAD
[0.1.0]: https://github.com/adenhq/hive/releases/tag/v0.1.0
`AgentRuntime` now supports loading, managing, and switching between multiple agent graphs within a single session. Six new lifecycle tools give agents (and the TUI) full control:
```python
# Load a second agent into the runtime
await runtime.add_graph("exports/deep_research_agent")
# Tools available to agents:
# load_agent, unload_agent, start_agent, restart_agent, list_agents, get_user_presence
```
The Hive Coder uses multi-graph internally -- when you escalate from a worker agent, the Coder loads as a separate graph while the worker stays alive in the background.
### TUI Revamp
The Terminal UI gets a ground-up rebuild with five major additions:
- **Agent Picker** (Ctrl+A) -- tabbed modal screen for browsing Your Agents, Framework agents, and Examples with metadata badges (node count, tool count, session count, tags)
- **Runtime-optional startup** -- TUI launches without a pre-loaded agent, showing the picker on first open
- **Live streaming pane** -- dedicated RichLog widget shows LLM tokens as they arrive, replacing the old one-token-per-line display
- **PDF attachments** -- `/attach` and `/detach` commands with native OS file dialog (macOS, Linux, Windows)
- **Multi-graph commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>` for managing agent graphs in-session
### Provider-Agnostic LLM Support
Hive is no longer Anthropic-only. v0.5.1 adds first-class support for:
- **Claude Code subscriptions** -- `use_claude_code_subscription: true` in `~/.hive/configuration.json` reads OAuth tokens from `~/.claude/.credentials.json` with automatic refresh
- **OpenAI-compatible endpoints** -- `api_base` config routes traffic through any compatible API (Azure OpenAI, vLLM, Ollama, etc.)
- **Any LiteLLM model** -- `RuntimeConfig` now passes `api_key`, `api_base`, and `extra_kwargs` through to LiteLLM
The quickstart script auto-detects Claude Code subscriptions and ZAI Code installations.
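A hedged sketch of setting the relevant keys in `~/.hive/configuration.json` (key names come from these release notes; the exact schema and nesting may differ):
```python
import json
from pathlib import Path

# Sketch only: writes the provider options named in these notes to the user config.
config_path = Path.home() / ".hive" / "configuration.json"
config = json.loads(config_path.read_text()) if config_path.exists() else {}
config.update(
    {
        "use_claude_code_subscription": False,    # read OAuth tokens from ~/.claude/.credentials.json
        "api_base": "http://localhost:11434/v1",  # any OpenAI-compatible endpoint (assumed example)
        "extra_kwargs": {"temperature": 0.2},     # passed through to LiteLLM (assumed example)
    }
)
config_path.write_text(json.dumps(config, indent=2))
```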
---
## What's New
### Architecture & Runtime
- **Hive Coder meta-agent** -- Natural-language agent builder with reference docs, guardian watchdog, and `hive code` CLI command. (@TimothyZhang7)
- **Multi-graph agent sessions** -- `add_graph`/`remove_graph` on AgentRuntime with 6 lifecycle tools (`load_agent`, `unload_agent`, `start_agent`, `restart_agent`, `list_agents`, `get_user_presence`). (@TimothyZhang7)
- **Claude Code subscription support** -- OAuth token refresh via `use_claude_code_subscription` config, auto-detection in quickstart, LiteLLM header patching. (@TimothyZhang7)
- **OpenAI-compatible endpoint support** -- `api_base` and `extra_kwargs` in `RuntimeConfig` for any OpenAI-compatible API. (@TimothyZhang7)
- **Remove deprecated node types** -- Delete `FlexibleGraphExecutor`, `WorkerNode`, `HybridJudge`, `CodeSandbox`, `Plan`, `FunctionNode`, `LLMNode`, `RouterNode`. Deprecated types (`llm_tool_use`, `llm_generate`, `function`, `router`, `human_input`) now raise `RuntimeError` with migration guidance. (@TimothyZhang7)
- **Interactive credential setup** -- Guided `CredentialSetupSession` with health checks and encrypted storage, accessible via `hive setup-credentials` or automatic prompting on credential errors. (@RichardTang-Aden)
- **Pre-start confirmation prompt** -- Interactive prompt before agent execution allowing credential updates or abort. (@RichardTang-Aden)
- **Event bus multi-graph support** -- `graph_id` on events, `filter_graph` on subscriptions, `ESCALATION_REQUESTED` event type, `exclude_own_graph` filter. (@TimothyZhang7)
### TUI Improvements
- **In-app agent picker** (Ctrl+A) -- Tabbed modal for browsing agents with metadata badges (nodes, tools, sessions, tags). (@TimothyZhang7)
- **Runtime-optional TUI startup** -- Launches without a pre-loaded agent, shows agent picker on startup. (@TimothyZhang7)
- **Hive Coder escalation** (Ctrl+E) -- Escalate to Hive Coder and return; also available via `/coder` and `/back` chat commands. (@TimothyZhang7)
- **PDF attachment support** -- `/attach` and `/detach` commands with native OS file dialog. (@TimothyZhang7)
- **Streaming output pane** -- Dedicated RichLog widget for live LLM token streaming. (@TimothyZhang7)
- **Multi-graph TUI commands** -- `/graphs`, `/graph <id>`, `/load <path>`, `/unload <id>`. (@TimothyZhang7)
- **Agent Guardian watchdog** -- Event-driven monitor that catches secondary agent failures and triggers automatic remediation, with `--no-guardian` CLI flag. (@TimothyZhang7)
### New Tool Integrations
| Tool | Description | Contributor |
| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
| **Discord** | 4 MCP tools (`discord_list_guilds`, `discord_list_channels`, `discord_send_message`, `discord_get_messages`) with rate-limit retry and channel filtering | @mishrapravin114 |
| **Exa Search API** | 4 AI-powered search tools (`exa_search`, `exa_find_similar`, `exa_get_contents`, `exa_answer`) with neural/keyword search, domain filters, and citation-backed answers | @JeetKaria06 |
| **Razorpay** | 6 payment processing tools for payments, invoices, payment links, and refunds with HTTP Basic Auth | @shivamshahi07 |
| **Google Docs** | Document creation, reading, and editing with OAuth credential support | @haliaeetusvocifer |
| **Gmail enhancements** | Expanded mail operations for inbox management | @bryanadenhq |
### Infrastructure
- **Default node type → `event_loop`** -- `NodeSpec.node_type` defaults to `"event_loop"` instead of `"llm_tool_use"`. (@TimothyZhang7)
- **Default `max_node_visits` → 0 (unlimited)** -- Nodes default to unlimited visits, reducing friction for feedback loops and forever-alive agents. (@TimothyZhang7)
- **Remove `function` field from NodeSpec** -- Follows deprecation of `FunctionNode`. (@TimothyZhang7)
- **LiteLLM OAuth patch** -- Correct header construction for OAuth tokens (remove `x-api-key` when Bearer token is present). (@TimothyZhang7)
- **Orchestrator config centralization** -- Reads `api_key`, `api_base`, `extra_kwargs` from centralized `~/.hive/configuration.json`. (@TimothyZhang7)
- **System prompt datetime injection** -- All system prompts now include current date/time for time-aware agent behavior. (@TimothyZhang7)
- **Utils module exports** -- Proper `__init__.py` exports for the utils module. (@Siddharth2624)
- **Increased default max_tokens** -- Opus 4.6 defaults to 32768, Sonnet 4.5 to 16384 (up from 8192). (@TimothyZhang7)
---
## Bug Fixes
- Flush WIP accumulator outputs on cancel/failure so edge conditions see correct values on resume
- Stall detection state preserved across resume (no more resets on checkpoint restore)
- Skip client-facing blocking for event-triggered executions (timer/webhook)
- Executor retry override scoped to actual EventLoopNode instances only
- Add `_awaiting_input` flag to EventLoopNode to prevent input injection race conditions
- Fix TUI streaming display (tokens no longer appear one-per-line)
- Fix `_return_from_escalation` crash when ChatRepl widgets not yet mounted
- Fix tools registration problems for Google Docs credentials (@RichardTang-Aden)
- Fix email agent version conflicts (@RichardTang-Aden)
- Fix coder tool timeouts (120s for tests, 300s cap for commands)
## Documentation
- Clarify installation and prevent root pip install misuse (@paarths-collab)
---
## Agent Updates
- **Email Inbox Management** -- Consolidate `gmail_inbox_guardian` and `inbox_management` into a single unified agent with updated prompts and config. (@RichardTang-Aden, @bryanadenhq)
- **Job Hunter** -- Updated node prompts, config, and agent metadata; added PDF resume selection. (@bryanadenhq)
- **Deep Research Agent** -- Revised node implementations with updated prompts and output handling.
- **Tech News Reporter** -- Revised node prompts for improved output quality.
- **Vulnerability Assessment** -- Expanded prompts with more detailed assessment instructions. (@bryanadenhq)
---
## Breaking Changes
- **Deprecated node types raise `RuntimeError`** -- `llm_tool_use`, `llm_generate`, `function`, `router`, `human_input` now fail instead of warning. Migrate to `event_loop`.
- **`NodeSpec.node_type` defaults to `"event_loop"`** (was `"llm_tool_use"`)
- **`NodeSpec.max_node_visits` defaults to `0` / unlimited** (was `1`)
- **`NodeSpec.function` field removed** -- `FunctionNode` is deleted; use event_loop nodes with tools instead.
---
## Community Contributors
A huge thank you to everyone who contributed to this release:
- **Richard Tang** (@RichardTang-Aden) -- Interactive credential setup, pre-start confirmation, email agent consolidation, tool registration fixes, lint and formatting
- **Pravin Mishra** (@mishrapravin114) -- Discord integration with 4 MCP tools
- **Jeet Karia** (@JeetKaria06) -- Exa Search API integration with 4 AI-powered search tools
- **Shivam Shahi** (@shivamshahi07) -- Razorpay payment processing integration
- **Siddharth Varshney** (@Siddharth2624) -- Utils module exports
- **@haliaeetusvocifer** -- Google Docs integration with OAuth support
- **Bryan** (@bryanadenhq) -- PDF selection, inbox agent fixes, Job Hunter and Vulnerability Assessment updates
- **@paarths-collab** -- Documentation improvements
---
## Upgrading
```bash
git pull origin main
uv sync
```
### Migration Guide
If your agents use deprecated node types, update them:
```python
# Before (v0.5.0) -- these now raise RuntimeError
NodeSpec(node_type="llm_tool_use", ...)
NodeSpec(node_type="function", function=my_func, ...)
# After (v0.5.1) -- use event_loop for everything
NodeSpec(node_type="event_loop", ...) # or just omit node_type (it's the default now)
```
If your agents set `max_node_visits=1` explicitly, they'll still work. The only change is the _default_ -- new agents without an explicit value now get unlimited visits.
To try the new Hive Coder:
```bash
# Launch Coder directly
hive code
# Or from TUI -- press Ctrl+E to escalate
hive tui
```
---
## What's Next
- **Agent-to-agent communication** -- one agent's output triggers another agent's entry point
- **Cost visibility** -- detailed runtime log of LLM costs per node and per session
- **Persistent webhook subscriptions** -- survive agent restarts without re-registering
- **Remote agent deployment** -- run agents as long-lived services with HTTP APIs
Symlink
+1
View File
@@ -0,0 +1 @@
AGENTS.md
+69 -12
View File
@@ -1,10 +1,10 @@
# Contributing to Aden Agent Framework
Thank you for your interest in contributing to the Aden Agent Framework! This document provides guidelines and information for contributors. We're especially looking for help building tools, integrations([check #2805](https://github.com/adenhq/hive/issues/2805)), and example agents for the framework. If you're interested in extending its functionality, this is the perfect place to start.
Thank you for your interest in contributing to the Aden Agent Framework! This document provides guidelines and information for contributors. We're especially looking for help building tools, integrations ([check #2805](https://github.com/adenhq/hive/issues/2805)), and example agents for the framework. If you're interested in extending its functionality, this is the perfect place to start.
## Code of Conduct
By participating in this project, you agree to abide by our [Code of Conduct](CODE_OF_CONDUCT.md).
By participating in this project, you agree to abide by our [Code of Conduct](docs/CODE_OF_CONDUCT.md).
## Issue Assignment Policy
@@ -35,15 +35,22 @@ You may submit PRs without prior assignment for:
1. Fork the repository
2. Clone your fork: `git clone https://github.com/YOUR_USERNAME/hive.git`
3. Create a feature branch: `git checkout -b feature/your-feature-name`
4. Make your changes
5. Run checks and tests:
3. Add the upstream repository: `git remote add upstream https://github.com/adenhq/hive.git`
4. Sync with upstream to ensure you're starting from the latest code:
```bash
git fetch upstream
git checkout main
git merge upstream/main
```
5. Create a feature branch: `git checkout -b feature/your-feature-name`
6. Make your changes
7. Run checks and tests:
```bash
make check # Lint and format checks (ruff check + ruff format --check on core/ and tools/)
make test # Core tests (cd core && pytest tests/ -v)
```
6. Commit your changes following our commit conventions
7. Push to your fork and submit a Pull Request
8. Commit your changes following our commit conventions
9. Push to your fork and submit a Pull Request
## Development Setup
@@ -58,6 +65,52 @@ You may submit PRs without prior assignment for:
> **Tip:** Installing Claude Code skills is optional for running existing agents, but required if you plan to **build new agents**.
## Troubleshooting Setup Issues
If you encounter issues while setting up the development environment, the following steps may help:
### `make: command not found`
Install `make` using:
```bash
sudo apt install make
```
### `uv: command not found`
Install uv using:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.bashrc
```
### `ruff: not found`
If linting fails due to a missing ruff command, install it with:
```bash
uv tool install ruff
```
### WSL Path Recommendation
When using WSL, it is recommended to clone the repository inside your Linux home directory (e.g., `~/hive`) instead of under `/mnt/c/...` to avoid potential performance and permission issues.
## Commit Convention
We follow [Conventional Commits](https://www.conventionalcommits.org/):
@@ -92,8 +145,7 @@ docs(readme): update installation instructions
2. Update documentation if needed
3. Add tests for new functionality
4. Ensure `make check` and `make test` pass
5. Update the CHANGELOG.md if applicable
6. Request review from maintainers
5. Request review from maintainers
### PR Title Format
@@ -120,12 +172,14 @@ feat(component): add new feature description
- Use meaningful variable and function names
- Keep functions focused and small
For linting and formatting (Ruff, pre-commit hooks), see [Linting & Formatting Setup](docs/contributing-lint-setup.md).
## Testing
> **Note:** When testing agents in `exports/`, always set PYTHONPATH:
>
> ```bash
> PYTHONPATH=core:exports python -m agent_name test
> PYTHONPATH=exports uv run python -m agent_name test
> ```
```bash
@@ -138,8 +192,11 @@ make test
# Or run tests directly
cd core && pytest tests/ -v
# Run tools package tests (when contributing to tools/)
cd tools && uv run pytest tests/ -v
# Run tests for a specific agent
PYTHONPATH=core:exports python -m agent_name test
PYTHONPATH=exports uv run python -m agent_name test
```
> **CI also validates** that all exported agent JSON files (`exports/*/agent.json`) are well-formed JSON. Ensure your agent exports are valid before submitting.
@@ -152,4 +209,4 @@ By submitting a Pull Request, you agree that your contributions will be licensed
Feel free to open an issue for questions or join our [Discord community](https://discord.com/invite/MXE49hrKDk).
Thank you for contributing!
Thank you for contributing!
+28 -5
View File
@@ -1,12 +1,14 @@
.PHONY: lint format check test install-hooks help
.PHONY: lint format check test install-hooks help frontend-install frontend-dev frontend-build
help: ## Show this help
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \
awk 'BEGIN {FS = ":.*?## "}; {printf " \033[36m%-15s\033[0m %s\n", $$1, $$2}'
lint: ## Run ruff linter (with auto-fix)
lint: ## Run ruff linter and formatter (with auto-fix)
cd core && ruff check --fix .
cd tools && ruff check --fix .
cd core && ruff format .
cd tools && ruff format .
format: ## Run ruff formatter
cd core && ruff format .
@@ -18,9 +20,30 @@ check: ## Run all checks without modifying files (CI-safe)
cd core && ruff format --check .
cd tools && ruff format --check .
test: ## Run all tests
cd core && python -m pytest tests/ -v
test: ## Run all tests (core + tools, excludes live)
cd core && uv run python -m pytest tests/ -v
cd tools && uv run python -m pytest -v
test-tools: ## Run tool tests only (mocked, no credentials needed)
cd tools && uv run python -m pytest -v
test-live: ## Run live integration tests (requires real API credentials)
cd tools && uv run python -m pytest -m live -s -o "addopts=" --log-cli-level=INFO
test-all: ## Run everything including live tests
cd core && uv run python -m pytest tests/ -v
cd tools && uv run python -m pytest -v
cd tools && uv run python -m pytest -m live -s -o "addopts=" --log-cli-level=INFO
install-hooks: ## Install pre-commit hooks
pip install pre-commit
uv pip install pre-commit
pre-commit install
frontend-install: ## Install frontend npm packages
cd core/frontend && npm install
frontend-dev: ## Start frontend dev server
cd core/frontend && npm run dev
frontend-build: ## Build frontend for production
cd core/frontend && npm run build
-51
View File
@@ -1,51 +0,0 @@
## Summary
- **Added HubSpot integration** — new HubSpot MCP tool with search, get, create, and update operations for contacts, companies, and deals. Includes OAuth2 provider for HubSpot credentials and credential store adapter for the tools layer.
- **Replaced web_scrape tool with Playwright + stealth** — swapped httpx/BeautifulSoup for a headless Chromium browser using `playwright` (async API) and `playwright-stealth`, enabling JS-rendered page scraping and bot detection evasion
- **Added empty response retry logic** — LLM provider now detects empty responses (e.g. Gemini returning 200 with no content on rate limit) and retries with exponential backoff, preventing hallucinated output from the cleanup LLM
- **Added context-aware input compaction** — LLM nodes now estimate input token count before calling the model and progressively truncate the largest values if they exceed the context window budget
- **Increased rate limit retries to 10** with verbose `[retry]` and `[compaction]` logging that includes model name, finish reason, and attempt count
- **Updated setup scripts** — `scripts/setup-python.sh` now installs Playwright Chromium browser automatically for web scraping support
- **Interactive quickstart onboarding** — `quickstart.sh` rewritten as bee-themed interactive wizard that detects existing API keys (including Claude Code subscription), lets user pick ONE default LLM provider, and saves configuration to `~/.hive/configuration.json`
- **Fixed lint errors** across `hubspot_tool.py` (line length) and `agent_builder_server.py` (unused variable)
## Changed files
### HubSpot Integration
- `tools/src/aden_tools/tools/hubspot_tool/` — New MCP tool: contacts, companies, and deals CRUD
- `tools/src/aden_tools/tools/__init__.py` — Registered HubSpot tools
- `tools/src/aden_tools/credentials/integrations.py` — HubSpot credential integration
- `tools/src/aden_tools/credentials/__init__.py` — Updated credential exports
- `core/framework/credentials/oauth2/hubspot_provider.py` — HubSpot OAuth2 provider
- `core/framework/credentials/oauth2/__init__.py` — Registered HubSpot OAuth2 provider
- `core/framework/runner/runner.py` — Updated runner for credential support
### Web Scrape Rewrite
- `tools/src/aden_tools/tools/web_scrape_tool/web_scrape_tool.py` — Playwright async rewrite
- `tools/src/aden_tools/tools/web_scrape_tool/README.md` — Updated docs
- `tools/pyproject.toml` — Added `playwright`, `playwright-stealth` deps
- `tools/Dockerfile` — Added `playwright install chromium --with-deps`
- `scripts/setup-python.sh` — Added Playwright Chromium browser install step
### LLM Reliability
- `core/framework/llm/litellm.py` — Empty response retry + max retries 10 + verbose logging
- `core/framework/graph/node.py` — Input compaction via `_compact_inputs()`, `_estimate_tokens()`, `_get_context_limit()`
### Quickstart & Setup
- `quickstart.sh` — Interactive bee-themed onboarding wizard with single provider selection
- `~/.hive/configuration.json` — New user config file for default LLM provider/model
### Fixes
- `core/framework/mcp/agent_builder_server.py` — Removed unused variable
- `tools/src/aden_tools/tools/hubspot_tool/hubspot_tool.py` — Fixed E501 line length violations
## Test plan
- [ ] Run `make lint` — passes clean
- [ ] Run `./quickstart.sh` and verify interactive flow works, config saved to `~/.hive/configuration.json`
- [ ] Run `./scripts/setup-python.sh` and verify Playwright Chromium installs
- [ ] Run `pytest tests/tools/test_web_scrape_tool.py -v`
- [ ] Run agent against a JS-heavy site and verify `web_scrape` returns rendered content
- [ ] Set `HUBSPOT_ACCESS_TOKEN` and verify HubSpot tool CRUD operations work
- [ ] Trigger rate limit and verify `[retry]` logs appear with correct attempt counts
- [ ] Run agent with large inputs and verify `[compaction]` logs show truncation
🤖 Generated with [Claude Code](https://claude.com/claude-code)
+211 -206
@@ -1,5 +1,5 @@
<p align="center">
<img width="100%" alt="Hive Banner" src="https://storage.googleapis.com/aden-prod-assets/website/aden-title-card.png" />
<img width="100%" alt="Hive Banner" src="https://github.com/user-attachments/assets/a027429b-5d3c-4d34-88e4-0feaeaabbab3" />
</p>
<p align="center">
@@ -13,17 +13,19 @@
<a href="docs/i18n/ko.md">한국어</a>
</p>
[![Apache 2.0 License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/adenhq/hive/blob/main/LICENSE)
[![Y Combinator](https://img.shields.io/badge/Y%20Combinator-Aden-orange)](https://www.ycombinator.com/companies/aden)
[![Docker Pulls](https://img.shields.io/docker/pulls/adenhq/hive?logo=Docker&labelColor=%23528bff)](https://hub.docker.com/u/adenhq)
[![Discord](https://img.shields.io/discord/1172610340073242735?logo=discord&labelColor=%235462eb&logoColor=%23f5f5f5&color=%235462eb)](https://discord.com/invite/MXE49hrKDk)
[![Twitter Follow](https://img.shields.io/twitter/follow/teamaden?logo=X&color=%23f5f5f5)](https://x.com/aden_hq)
[![LinkedIn](https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff)](https://www.linkedin.com/company/teamaden/)
<p align="center">
<a href="https://github.com/aden-hive/hive/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="Apache 2.0 License" /></a>
<a href="https://www.ycombinator.com/companies/aden"><img src="https://img.shields.io/badge/Y%20Combinator-Aden-orange" alt="Y Combinator" /></a>
<a href="https://discord.com/invite/MXE49hrKDk"><img src="https://img.shields.io/discord/1172610340073242735?logo=discord&labelColor=%235462eb&logoColor=%23f5f5f5&color=%235462eb" alt="Discord" /></a>
<a href="https://x.com/aden_hq"><img src="https://img.shields.io/twitter/follow/teamaden?logo=X&color=%23f5f5f5" alt="Twitter Follow" /></a>
<a href="https://www.linkedin.com/company/teamaden/"><img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff" alt="LinkedIn" /></a>
<img src="https://img.shields.io/badge/MCP-102_Tools-00ADD8?style=flat-square" alt="MCP" />
</p>
<p align="center">
<img src="https://img.shields.io/badge/AI_Agents-Self--Improving-brightgreen?style=flat-square" alt="AI Agents" />
<img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
<img src="https://img.shields.io/badge/Goal--Driven-Development-purple?style=flat-square" alt="Goal-Driven" />
<img src="https://img.shields.io/badge/Headless-Development-purple?style=flat-square" alt="Headless" />
<img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
<img src="https://img.shields.io/badge/Production--Ready-red?style=flat-square" alt="Production" />
</p>
@@ -31,95 +33,126 @@
<img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />
<img src="https://img.shields.io/badge/Anthropic-supported-d4a574?style=flat-square" alt="Anthropic" />
<img src="https://img.shields.io/badge/Google_Gemini-supported-4285F4?style=flat-square&logo=google" alt="Gemini" />
<img src="https://img.shields.io/badge/MCP-19_Tools-00ADD8?style=flat-square" alt="MCP" />
</p>
## Overview
Build reliable, self-improving AI agents without hardcoding workflows. Define your goal through conversation with a coding agent, and the framework generates a node graph with dynamically created connection code. When things break, the framework captures failure data, evolves the agent through the coding agent, and redeploys. Built-in human-in-the-loop nodes, credential management, and real-time monitoring give you control without sacrificing adaptability.
Build autonomous, reliable, self-improving AI agents without hardcoding workflows. Define your goal through conversation with the Hive coding agent (the queen), and the framework generates a node graph with dynamically created connection code. When things break, the framework captures failure data, evolves the agent through the coding agent, and redeploys. Built-in human-in-the-loop nodes, credential management, and real-time monitoring give you control without sacrificing adaptability.
Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.
## What is Aden
[![Hive Demo](https://img.youtube.com/vi/XDOG9fOaLjU/maxresdefault.jpg)](https://www.youtube.com/watch?v=XDOG9fOaLjU)
<p align="center">
<img width="100%" alt="Aden Architecture" src="docs/assets/aden-architecture-diagram.jpg" />
</p>
## Who Is Hive For?
Aden is a platform for building, deploying, operating, and adapting AI agents:
Hive is designed for developers and teams who want to build **production-grade AI agents** without manually wiring complex workflows.
- **Build** - A Coding Agent generates specialized Worker Agents (Sales, Marketing, Ops) from natural language goals
- **Deploy** - Headless deployment with CI/CD integration and full API lifecycle management
- **Operate** - Real-time monitoring, observability, and runtime guardrails keep agents reliable
- **Adapt** - Continuous evaluation, supervision, and adaptation ensure agents improve over time
- **Infra** - Shared memory, LLM integrations, tools, and skills power every agent
Hive is a good fit if you:
- Want AI agents that **execute real business processes**, not demos
- Need **fast or high-volume agent execution** rather than open-ended workflows
- Need **self-healing and adaptive agents** that improve over time
- Require **human-in-the-loop control**, observability, and cost limits
- Plan to run agents in **production environments**
Hive may not be the best fit if you're only experimenting with simple agent chains or one-off scripts.
## When Should You Use Hive?
Use Hive when you need:
- Long-running, autonomous agents
- Strong guardrails, process, and controls
- Continuous improvement based on failures
- Multi-agent coordination
- A framework that evolves with your goals
## Quick Links
- **[Documentation](https://docs.adenhq.com/)** - Complete guides and API reference
- **[Self-Hosting Guide](https://docs.adenhq.com/getting-started/quickstart)** - Deploy Hive on your infrastructure
- **[Changelog](https://github.com/adenhq/hive/releases)** - Latest updates and releases
<!-- - **[Roadmap](https://adenhq.com/roadmap)** - Upcoming features and plans -->
- **[Changelog](https://github.com/aden-hive/hive/releases)** - Latest updates and releases
- **[Roadmap](docs/roadmap.md)** - Upcoming features and plans
- **[Report Issues](https://github.com/adenhq/hive/issues)** - Bug reports and feature requests
- **[Contributing](CONTRIBUTING.md)** - How to contribute and submit PRs
## Quick Start
### Prerequisites
- [Python 3.11+](https://www.python.org/downloads/) for agent development
- Claude Code or Cursor for utilizing agent skills
- Python 3.11+ for agent development
- An LLM provider that powers the agents
- **ripgrep (optional, recommended on Windows):** The `search_files` tool uses ripgrep for faster file search. If not installed, a Python fallback is used. On Windows: `winget install BurntSushi.ripgrep` or `scoop install ripgrep`
> **Note for Windows Users:** It is strongly recommended to use **WSL (Windows Subsystem for Linux)** or **Git Bash** to run this framework. Some core automation scripts may not execute correctly in standard Command Prompt or PowerShell.
### Installation
> **Note**
> Hive uses a `uv` workspace layout and is not installed with `pip install`.
> Running `pip install -e .` from the repository root will create a placeholder package and Hive will not function correctly.
> Please use the quickstart script below to set up the environment.
```bash
# Clone the repository
git clone https://github.com/adenhq/hive.git
git clone https://github.com/aden-hive/hive.git
cd hive
# Run quickstart setup
./quickstart.sh
```
This sets up:
- **framework** - Core agent runtime and graph executor (in `core/.venv`)
- **aden_tools** - MCP tools for agent capabilities (in `tools/.venv`)
- All required Python dependencies
- **credential store** - Encrypted API key storage (`~/.hive/credentials`)
- **LLM provider** - Interactive default model configuration
- All required Python dependencies with `uv`
- Finally, it launches the Hive interface in your browser
> **Tip:** To reopen the dashboard later, run `hive open` from the project directory.
<img width="2500" height="1214" alt="home-screen" src="https://github.com/user-attachments/assets/134d897f-5e75-4874-b00b-e0505f6b45c4" />
### Build Your First Agent
```bash
# Build an agent using Claude Code
claude> /building-agents-construction
Type the agent you want to build in the home input box
# Test your agent
claude> /testing-agent
<img width="2500" height="1214" alt="Image" src="https://github.com/user-attachments/assets/1ce19141-a78b-46f5-8d64-dbf987e048f4" />
# Run your agent
PYTHONPATH=core:exports python -m your_agent_name run --input '{...}'
```
### Use Template Agents
**[📖 Complete Setup Guide](ENVIRONMENT_SETUP.md)** - Detailed instructions for agent development
Click "Try a sample agent" and check the templates. You can run a templates directly or choose to build your version on top of the existing template.
### Cursor IDE Support
### Run Agents
Skills are also available in Cursor. To enable:
Now you can run an agent by selecting it (either an existing agent or an example agent). Click the Run button at the top left, or talk to the queen agent and it will run the agent for you.
1. Open Command Palette (`Cmd+Shift+P` / `Ctrl+Shift+P`)
2. Run `MCP: Enable` to enable MCP servers
3. Restart Cursor to load the MCP servers from `.cursor/mcp.json`
4. Type `/` in Agent chat and search for skills (e.g., `/building-agents-construction`)
<img width="2500" height="1214" alt="Image" src="https://github.com/user-attachments/assets/71c38206-2ad5-49aa-bde8-6698d0bc55f5" />
## Features
- **Goal-Driven Development** - Define objectives in natural language; the coding agent generates the agent graph and connection code to achieve them
- **Adaptiveness** - Framework captures failures, calibrates according to the objectives, and evolves the agent graph
- **Dynamic Node Connections** - No predefined edges; connection code is generated by any capable LLM based on your goals
- **Browser-Use** - Control the browser on your computer to achieve hard tasks
- **Parallel Execution** - Execute the generated graph in parallel, so multiple agents can complete jobs for you
- **[Goal-Driven Generation](docs/key_concepts/goals_outcome.md)** - Define objectives in natural language; the coding agent generates the agent graph and connection code to achieve them
- **[Adaptiveness](docs/key_concepts/evolution.md)** - Framework captures failures, calibrates according to the objectives, and evolves the agent graph
- **[Dynamic Node Connections](docs/key_concepts/graph.md)** - No predefined edges; connection code is generated by any capable LLM based on your goals
- **SDK-Wrapped Nodes** - Every node gets shared memory, local RLM memory, monitoring, tools, and LLM access out of the box
- **Human-in-the-Loop** - Intervention nodes that pause execution for human input with configurable timeouts and escalation
- **[Human-in-the-Loop](docs/key_concepts/graph.md#human-in-the-loop)** - Intervention nodes that pause execution for human input with configurable timeouts and escalation
- **Real-time Observability** - WebSocket streaming for live monitoring of agent execution, decisions, and node-to-node communication
- **Cost & Budget Control** - Set spending limits, throttles, and automatic model degradation policies
- **Production-Ready** - Self-hostable, built for scale and reliability
## Integration
<a href="https://github.com/aden-hive/hive/tree/main/tools/src/aden_tools/tools"><img width="100%" alt="Integration" src="https://github.com/user-attachments/assets/a1573f93-cf02-4bb8-b3d5-b305b05b1e51" /></a>
Hive is built to be model-agnostic and system-agnostic.
- **LLM flexibility** - Hive Framework is designed to support various types of LLMs, including hosted and local models through LiteLLM-compatible providers.
- **Business system connectivity** - Hive Framework is designed to connect to all kinds of business systems as tools, such as CRM, support, messaging, data, file, and internal APIs via MCP.
## Why Aden
Hive focuses on generating agents that run real business processes rather than generic agents. Instead of requiring you to manually design workflows, define agent interactions, and handle failures reactively, Hive flips the paradigm: **you describe outcomes, and the system builds itself**—delivering an outcome-driven, adaptive experience with an easy-to-use set of tools and integrations.
@@ -156,161 +189,161 @@ flowchart LR
style V6 fill:#fff,stroke:#ed8c00,stroke-width:1px,color:#cc5d00
```
### The Aden Advantage
### The Hive Advantage
| Traditional Frameworks | Aden |
| Traditional Frameworks | Hive |
| -------------------------- | -------------------------------------- |
| Hardcode agent workflows | Describe goals in natural language |
| Manual graph definition | Auto-generated agent graphs |
| Reactive error handling | Outcome-evaluation and adaptiveness |
| Reactive error handling | Outcome-evaluation and adaptiveness |
| Static tool configurations | Dynamic SDK-wrapped nodes |
| Separate monitoring setup | Built-in real-time observability |
| DIY budget management | Integrated cost controls & degradation |
### How It Works
1. **Define Your Goal** → Describe what you want to achieve in plain English
2. **Coding Agent Generates** → Creates the agent graph, connection code, and test cases
3. **Workers Execute** → SDK-wrapped nodes run with full observability and tool access
1. **[Define Your Goal](docs/key_concepts/goals_outcome.md)** → Describe what you want to achieve in plain English
2. **Coding Agent Generates** → Creates the [agent graph](docs/key_concepts/graph.md), connection code, and test cases
3. **[Workers Execute](docs/key_concepts/worker_agent.md)** → SDK-wrapped nodes run with full observability and tool access
4. **Control Plane Monitors** → Real-time metrics, budget enforcement, policy management
5. **Adaptiveness** → On failure, the system evolves the graph and redeploys automatically
## Run pre-built Agents (Coming Soon)
### Run a sample agent
Aden Hive provides a list of featured agents that you can use and build on top of.
### Run an agent shared by others
Put the agent in `exports/` and run `PYTHONPATH=core:exports python -m your_agent_name run --input '{...}'`
For building and running goal-driven agents with the framework:
```bash
# One-time setup
./quickstart.sh
# This sets up:
# - framework package (core runtime)
# - aden_tools package (MCP tools)
# - All Python dependencies
# Build new agents using Claude Code skills
claude> /building-agents-construction
# Test agents
claude> /testing-agent
# Run agents
PYTHONPATH=core:exports python -m agent_name run --input '{...}'
```
See [ENVIRONMENT_SETUP.md](ENVIRONMENT_SETUP.md) for complete setup instructions.
5. **[Adaptiveness](docs/key_concepts/evolution.md)** → On failure, the system evolves the graph and redeploys automatically
## Documentation
- **[Developer Guide](DEVELOPER.md)** - Comprehensive guide for developers
- **[Developer Guide](docs/developer-guide.md)** - Comprehensive guide for developers
- [Getting Started](docs/getting-started.md) - Quick setup instructions
- [Configuration Guide](docs/configuration.md) - All configuration options
- [Architecture Overview](docs/architecture/README.md) - System design and structure
## Roadmap
Aden Hive Agent Framework aims to help developers build outcome-oriented, self-adaptive agents. See [ROADMAP.md](ROADMAP.md) for details.
Aden Hive Agent Framework aims to help developers build outcome-oriented, self-adaptive agents. See [roadmap.md](docs/roadmap.md) for details.
```mermaid
flowchart TD
subgraph Foundation
direction LR
subgraph arch["Architecture"]
a1["Node-Based Architecture"]:::done
a2["Python SDK"]:::done
a3["LLM Integration"]:::done
a4["Communication Protocol"]:::done
end
subgraph ca["Coding Agent"]
b1["Goal Creation Session"]:::done
b2["Worker Agent Creation"]
b3["MCP Tools"]:::done
end
subgraph wa["Worker Agent"]
c1["Human-in-the-Loop"]:::done
c2["Callback Handlers"]:::done
c3["Intervention Points"]:::done
c4["Streaming Interface"]
end
subgraph cred["Credentials"]
d1["Setup Process"]:::done
d2["Pluggable Sources"]:::done
d3["Enterprise Secrets"]
d4["Integration Tools"]:::done
end
subgraph tools["Tools"]
e1["File Use"]:::done
e2["Memory STM/LTM"]:::done
e3["Web Search/Scraper"]:::done
e4["CSV/PDF"]:::done
e5["Excel/Email"]
end
subgraph core["Core"]
f1["Eval System"]
f2["Pydantic Validation"]:::done
f3["Documentation"]:::done
f4["Adaptiveness"]
f5["Sample Agents"]
end
end
flowchart TB
%% Main Entity
User([User])
subgraph Expansion
direction LR
subgraph intel["Intelligence"]
g1["Guardrails"]
g2["Streaming Mode"]
g3["Image Generation"]
g4["Semantic Search"]
%% =========================================
%% EXTERNAL EVENT SOURCES
%% =========================================
subgraph ExtEventSource [External Event Source]
E_Sch["Schedulers"]
E_WH["Webhook"]
E_SSE["SSE"]
end
subgraph mem["Memory Iteration"]
h1["Message Model & Sessions"]
h2["Storage Migration"]
h3["Context Building"]
h4["Proactive Compaction"]
h5["Token Tracking"]
end
subgraph evt["Event System"]
i1["Event Bus for Nodes"]
end
subgraph cas["Coding Agent Support"]
j1["Claude Code"]
j2["Cursor"]
j3["Opencode"]
j4["Antigravity"]
end
subgraph plat["Platform"]
k1["JavaScript/TypeScript SDK"]
k2["Custom Tool Integrator"]
k3["Windows Support"]
end
subgraph dep["Deployment"]
l1["Self-Hosted"]
l2["Cloud Services"]
l3["CI/CD Pipeline"]
end
subgraph tmpl["Templates"]
m1["Sales Agent"]
m2["Marketing Agent"]
m3["Analytics Agent"]
m4["Training Agent"]
m5["Smart Form Agent"]
end
end
classDef done fill:#9e9e9e,color:#fff,stroke:#757575
%% =========================================
%% SYSTEM NODES
%% =========================================
subgraph WorkerBees [Worker Bees]
WB_C["Conversation"]
WB_SP["System prompt"]
subgraph Graph [Graph]
direction TB
N1["Node"] --> N2["Node"] --> N3["Node"]
N1 -.-> AN["Active Node"]
N2 -.-> AN
N3 -.-> AN
%% Nested Event Loop Node
subgraph EventLoopNode [Event Loop Node]
ELN_L["listener"]
ELN_SP["System Prompt<br/>(Task)"]
ELN_EL["Event loop"]
ELN_C["Conversation"]
end
end
end
subgraph JudgeNode [Judge]
J_C["Criteria"]
J_P["Principles"]
J_EL["Event loop"] <--> J_S["Scheduler"]
end
subgraph QueenBee [Queen Bee]
QB_SP["System prompt"]
QB_EL["Event loop"]
QB_C["Conversation"]
end
subgraph Infra [Infra]
SA["Sub Agent"]
TR["Tool Registry"]
WTM["Write through Conversation Memory<br/>(Logs/RAM/Harddrive)"]
SM["Shared Memory<br/>(State/Harddrive)"]
EB["Event Bus<br/>(RAM)"]
CS["Credential Store<br/>(Harddrive/Cloud)"]
end
subgraph PC [PC]
B["Browser"]
CB["Codebase<br/>v 0.0.x ... v n.n.n"]
end
%% =========================================
%% CONNECTIONS & DATA FLOW
%% =========================================
%% External Event Routing
E_Sch --> ELN_L
E_WH --> ELN_L
E_SSE --> ELN_L
ELN_L -->|"triggers"| ELN_EL
%% User Interactions
User -->|"Talk"| WB_C
User -->|"Talk"| QB_C
User -->|"Read/Write Access"| CS
%% Inter-System Logic
ELN_C <-->|"Mirror"| WB_C
WB_C -->|"Focus"| AN
WorkerBees -->|"Inquire"| JudgeNode
JudgeNode -->|"Approve"| WorkerBees
%% Judge Alignments
J_C <-.->|"aligns"| WB_SP
J_P <-.->|"aligns"| QB_SP
%% Escalate path
J_EL -->|"Report (Escalate)"| QB_EL
%% Pub/Sub Logic
AN -->|"publish"| EB
EB -->|"subscribe"| QB_C
%% Infra and Process Spawning
ELN_EL -->|"Spawn"| SA
SA -->|"Inform"| ELN_EL
SA -->|"Starts"| B
B -->|"Report"| ELN_EL
TR -->|"Assigned"| ELN_EL
CB -->|"Modify Worker Bee"| WB_C
%% =========================================
%% SHARED MEMORY & LOGS ACCESS
%% =========================================
%% Worker Bees Access (link to node inside Graph subgraph)
AN <-->|"Read/Write"| WTM
AN <-->|"Read/Write"| SM
%% Queen Bee Access
QB_C <-->|"Read/Write"| WTM
QB_EL <-->|"Read/Write"| SM
%% Credentials Access
CS -->|"Read Access"| QB_C
```
## Contributing
We welcome contributions from the community! We're especially looking for help building tools, integrations, and example agents for the framework ([check #2805](https://github.com/aden-hive/hive/issues/2805)). If you're interested in extending its functionality, this is the perfect place to start. Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
We welcome contributions from the community! We're especially looking for help building tools, integrations, and example agents for the framework ([check #2805](https://github.com/adenhq/hive/issues/2805)). If you're interested in extending its functionality, this is the perfect place to start. Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
**Important:** Please get assigned to an issue before submitting a PR. Comment on an issue to claim it, and a maintainer will assign you. Issues with reproducible steps and proposals are prioritized. This helps prevent duplicate work.
**Important:** Please get assigned to an issue before submitting a PR. Comment on an issue to claim it, and a maintainer will assign you. Issues with reproducible steps and proposals are prioritized. This helps prevent duplicate work.
1. Find or create an issue and get assigned
2. Fork the repository
@@ -343,13 +376,9 @@ This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENS
## Frequently Asked Questions (FAQ)
**Q: Does Hive depend on LangChain or other agent frameworks?**
No. Hive is built from the ground up with no dependencies on LangChain, CrewAI, or other agent frameworks. The framework is designed to be lean and flexible, generating agent graphs dynamically rather than relying on predefined components.
**Q: What LLM providers does Hive support?**
Hive supports 100+ LLM providers through LiteLLM integration, including OpenAI (GPT-4, GPT-4o), Anthropic (Claude models), Google Gemini, DeepSeek, Mistral, Groq, and many more. Simply set the appropriate API key environment variable and specify the model name.
Hive supports 100+ LLM providers through LiteLLM integration, including OpenAI (GPT-4, GPT-4o), Anthropic (Claude models), Google Gemini, DeepSeek, Mistral, Groq, and many more. Simply set the appropriate API key environment variable and specify the model name. We recommend using Claude, GLM and Gemini as they have the best performance.
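For illustration, a minimal LiteLLM-style sketch (generic LiteLLM usage, not Hive-specific API; the model names are placeholders):
```python
# Generic LiteLLM usage sketch -- not Hive's internal API; model names are examples.
import os
from litellm import completion

os.environ["OPENAI_API_KEY"] = "sk-..."  # set the API key for your chosen provider

response = completion(
    model="gpt-4o",  # or e.g. "ollama/llama3" for a local model via Ollama
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```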
**Q: Can I use Hive with local AI models like Ollama?**
@@ -357,37 +386,25 @@ Yes! Hive supports local models through LiteLLM. Simply use the model name forma
**Q: What makes Hive different from other agent frameworks?**
Hive generates your entire agent system from natural language goals using a coding agent—you don't hardcode workflows or manually define graphs. When agents fail, the framework automatically captures failure data, evolves the agent graph, and redeploys. This self-improving loop is unique to Aden.
Hive generates your entire agent system from natural language goals using a coding agent—you don't hardcode workflows or manually define graphs. When agents fail, the framework automatically captures failure data, [evolves the agent graph](docs/key_concepts/evolution.md), and redeploys. This self-improving loop is unique to Aden.
**Q: Is Hive open-source?**
Yes, Hive is fully open-source under the Apache License 2.0. We actively encourage community contributions and collaboration.
**Q: Does Hive collect data from users?**
Hive collects telemetry data for monitoring and observability purposes, including token usage, latency metrics, and cost tracking. Content capture (prompts and responses) is configurable and stored with team-scoped data isolation. All data stays within your infrastructure when self-hosted.
**Q: What deployment options does Hive support?**
Hive supports self-hosted deployments via Python packages. See the [Environment Setup Guide](ENVIRONMENT_SETUP.md) for installation instructions. Cloud deployment options and Kubernetes-ready configurations are on the roadmap.
**Q: Can Hive handle complex, production-scale use cases?**
Yes. Hive is explicitly designed for production environments with features like automatic failure recovery, real-time observability, cost controls, and horizontal scaling support. The framework handles both simple automations and complex multi-agent workflows.
**Q: Does Hive support human-in-the-loop workflows?**
Yes, Hive fully supports human-in-the-loop workflows through intervention nodes that pause execution for human input. These include configurable timeouts and escalation policies, allowing seamless collaboration between human experts and AI agents.
**Q: What monitoring and debugging tools does Hive provide?**
Hive includes comprehensive observability features: real-time WebSocket streaming for live agent execution monitoring, TimescaleDB-powered analytics for cost and performance metrics, health check endpoints for Kubernetes integration, and MCP tools for agent execution, including file operations, web search, data processing, and more.
Yes, Hive fully supports [human-in-the-loop](docs/key_concepts/graph.md#human-in-the-loop) workflows through intervention nodes that pause execution for human input. These include configurable timeouts and escalation policies, allowing seamless collaboration between human experts and AI agents.
**Q: What programming languages does Hive support?**
The Hive framework is built in Python. A JavaScript/TypeScript SDK is on the roadmap.
**Q: Can Aden agents interact with external tools and APIs?**
**Q: Can Hive agents interact with external tools and APIs?**
Yes. Aden's SDK-wrapped nodes provide built-in tool access, and the framework supports flexible tool ecosystems. Agents can integrate with external APIs, databases, and services through the node architecture.
@@ -397,24 +414,12 @@ Hive provides granular budget controls including spending limits, throttles, and
**Q: Where can I find examples and documentation?**
Visit [docs.adenhq.com](https://docs.adenhq.com/) for complete guides, API reference, and getting started tutorials. The repository also includes documentation in the `docs/` folder and a comprehensive [DEVELOPER.md](DEVELOPER.md) guide.
Visit [docs.adenhq.com](https://docs.adenhq.com/) for complete guides, API reference, and getting started tutorials. The repository also includes documentation in the `docs/` folder and a comprehensive [developer guide](docs/developer-guide.md).
**Q: How can I contribute to Aden?**
Contributions are welcome! Fork the repository, create your feature branch, implement your changes, and submit a pull request. See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
**Q: When will my team start seeing results from Aden's adaptive agents?**
Aden's adaptation loop begins working from the first execution. When an agent fails, the framework captures the failure data, helping developers evolve the agent graph through the coding agent. How quickly this translates to measurable results depends on the complexity of your use case, the quality of your goal definitions, and the volume of executions generating feedback.
**Q: How does Hive compare to other agent frameworks?**
Hive focuses on generating agents that run real business processes, rather than generic agents. This vision emphasizes outcome-driven design, adaptability, and an easy-to-use set of tools and integrations.
**Q: Does Aden offer enterprise support?**
For enterprise inquiries, contact the Aden team through [adenhq.com](https://adenhq.com) or join our [Discord community](https://discord.com/invite/MXE49hrKDk) for support and discussions.
---
<p align="center">
-299
@@ -1,299 +0,0 @@
# Product Roadmap
Aden Agent Framework aims to help developers build outcome-oriented, self-adaptive agents. Please find our roadmap below.
```mermaid
flowchart TD
subgraph Foundation
direction LR
subgraph arch["Architecture"]
a1["Node-Based Architecture"]:::done
a2["Python SDK"]:::done
a3["LLM Integration"]:::done
a4["Communication Protocol"]:::done
end
subgraph ca["Coding Agent"]
b1["Goal Creation Session"]:::done
b2["Worker Agent Creation"]
b3["MCP Tools"]:::done
end
subgraph wa["Worker Agent"]
c1["Human-in-the-Loop"]:::done
c2["Callback Handlers"]:::done
c3["Intervention Points"]:::done
c4["Streaming Interface"]
end
subgraph cred["Credentials"]
d1["Setup Process"]:::done
d2["Pluggable Sources"]:::done
d3["Enterprise Secrets"]
d4["Integration Tools"]:::done
end
subgraph tools["Tools"]
e1["File Use"]:::done
e2["Memory STM/LTM"]:::done
e3["Web Search/Scraper"]:::done
e4["CSV/PDF"]:::done
e5["Excel/Email"]
end
subgraph core["Core"]
f1["Eval System"]
f2["Pydantic Validation"]:::done
f3["Documentation"]:::done
f4["Adaptiveness"]
f5["Sample Agents"]
end
end
subgraph Expansion
direction LR
subgraph intel["Intelligence"]
g1["Guardrails"]
g2["Streaming Mode"]
g3["Image Generation"]
g4["Semantic Search"]
end
subgraph mem["Memory Iteration"]
h1["Message Model & Sessions"]
h2["Storage Migration"]
h3["Context Building"]
h4["Proactive Compaction"]
h5["Token Tracking"]
end
subgraph evt["Event System"]
i1["Event Bus for Nodes"]
end
subgraph cas["Coding Agent Support"]
j1["Claude Code"]
j2["Cursor"]
j3["Opencode"]
j4["Antigravity"]
end
subgraph plat["Platform"]
k1["JavaScript/TypeScript SDK"]
k2["Custom Tool Integrator"]
k3["Windows Support"]
end
subgraph dep["Deployment"]
l1["Self-Hosted"]
l2["Cloud Services"]
l3["CI/CD Pipeline"]
end
subgraph tmpl["Templates"]
m1["Sales Agent"]
m2["Marketing Agent"]
m3["Analytics Agent"]
m4["Training Agent"]
m5["Smart Form Agent"]
end
end
classDef done fill:#9e9e9e,color:#fff,stroke:#757575
```
---
## Phase 1: Foundation
### Backbone Architecture
- [ ] **Node-Based Architecture (Agent as a node)**
- [x] Object schema definition
- [x] Node wrapper SDK
- [x] Shared memory access
- [ ] Default monitoring hooks
- [x] Tool access layer
- [x] LLM integration layer (Natively supports all mainstream LLMs through LiteLLM)
- [x] Anthropic
- [x] OpenAI
- [x] Google
- [x] **Communication protocol between nodes**
- [x] **[Coding Agent] Goal Creation Session** (separate from coding session)
- [x] Instruction back and forth
- [x] Goal Object schema definition
- [x] Being able to generate the test cases
- [x] Test case validation for worker agent (Outcome driven)
- [ ] **[Coding Agent] Worker Agent Creation**
- [x] Coding Agent tools
- [ ] Use Template Agent as a start
- [x] Use our MCP tools
- [ ] **[Worker Agent] Human-in-the-Loop**
- [x] Worker Agents request with questions and options
- [x] Callback Handler System to receive events throughout execution
- [x] Tool-Based Intervention Points (tool to pause execution and request human input)
- [x] Multiple entrypoints for different event sources (e.g., human input, webhook)
- [ ] Streaming Interface for Real-time Monitoring
- [x] Request State Management
### Credential Management
- [x] **Credentials Setup Process**
- [x] Install Credential MCP
- [x] **Pluggable Credential Sources**
- [x] **Abstraction & Local Sources**
- [x] Introduce `CredentialSource` base class
- [x] Refactor existing logic into `EnvVarSource`
- [x] Implementation of Source Priority Chain mechanism
- [ ] Foundation unit tests
- [ ] **Enterprise Secret Managers**
- [x] `VaultSource` (HashiCorp Vault)
- [ ] `AWSSecretsSource` (AWS Secrets Manager)
- [ ] `AzureKeyVaultSource` (Azure Key Vault)
- [ ] Management of optional provider dependencies
- [ ] **Advanced Features**
- [x] Credential expiration and auto-refresh
- [ ] Audit logging for compliance/tracking
- [ ] Per-environment configuration support
- [ ] **Documentation & DX**
- [ ] Comprehensive source documentation
- [ ] Example configurations for all providers
- [x] **Integration as tools coverage**
- [x] Gsuite Tools
- [x] Social Media
- [ ] Twitter(X)
- [x] Github
- [ ] Instagram
- [ ] SAAS
- [ ] Hubspot
- [ ] Slack
- [ ] Teams
- [ ] Zoom
- [ ] Stripe
- [ ] Salesforce
> [!IMPORTANT]
> **Community Contribution Wanted**: We appreciate help from the community to expand the "Integration as tools" capability. Open an issue for the integration you want to see supported in Hive!
### Essential Tools
- [x] **File Use Tool Kit**
- [X] **Memory Tools**
- [x] STM Layer Tool (state-based short-term memory)
- [x] LTM Layer Tool (RLM - long-term memory)
- [ ] **Infrastructure Tools**
- [x] Runtime Log Tool (logs for coding agent)
- [x] Web Search
- [x] Web Scraper
- [x] CSV tools
- [x] PDF tools
- [ ] Excel tools
- [ ] Email Tools
- [ ] Recipe for "Add your own tools"
### Memory & File System
- [x] DB for long-term persistent memory (Filesystem as durable scratchpad pattern)
- [x] Session Local memory isolation
### Eval System (Basic)
- [x] Test Driven - Run test case for all agent iteration
- [ ] Failure recording mechanism
- [ ] SDK for defining failure conditions
- [ ] Basic observability hooks
- [ ] User-driven log analysis (OSS approach)
### Data Validation
- [x] Natively Support data validation of LLMs output with Pydantic
### Developer Experience
- [ ] **MVP Features**
- [ ] Debugging mode
- [ ] CLI tools for memory management
- [ ] CLI tools for credential management
- [ ] **MVP Resources & Documentation**
- [x] Quick start guide
- [x] Goal creation guide
- [x] Agent creation guide
- [x] GitHub Page setup
- [x] README with examples
- [x] Contributing guidelines
- [ ] Introduction Video
### Adaptiveness
- [ ] Runtime data feedback loop
- [ ] Instant Developer Feedback for improvement
### Sample Agents
- [ ] Knowledge Agent
- [ ] Blog Writer Agent
- [ ] SDR Agent
---
## Phase 2: Expansion
### Basic Guardrails
- [ ] Support Basic Monitoring from Agent node SDK
- [ ] SDK guardrail implementation (in node)
- [ ] Guardrail type support (Determined Condition as Guardrails)
### Agent Capability
- [ ] Streaming mode support
- [ ] Image Generation support
- [ ] Ability to understand end-user image and flat-file inputs
### Event-loop For Nodes (Opencode-style)
- [ ] **Event bus**
### Memory System Iteration
- [ ] **Message Model & Session Management**
- [ ] Introduce `Message` class with structured content types
- [ ] Implement `Session` classes for conversation state
- [ ] **Storage Migration**
- [ ] Implement granular per-message file persistence (`/message/[agentID]/...`)
- [ ] Migrate from monolithic run storage
- [ ] **Context Building & Conversation Loop**
- [ ] Implement `Message.stream(sessionID)`
- [ ] Update `LLMNode.execute()` for full context building
- [ ] Implement `Message.toModelMessages()` conversion
- [ ] **Proactive Compaction**
- [ ] Implement proactive overflow detection
- [ ] Develop backward-scanning pruning strategy (e.g., clearing old tool outputs)
- [ ] **Enhanced Token Tracking**
- [ ] Extend `LLMResponse` to track reasoning and cache tokens
- [ ] Integrate granular token metrics into compaction logic
### Coding Agent Support
- [ ] Claude Code
- [ ] Cursor
- [ ] Opencode
- [ ] Antigravity
### File System Enhancement
- [ ] Semantic Search integration
- [ ] Interactive File System in product (frontend integration)
### More Worker Tools
- [ ] Custom Tool Integrator
- [ ] Integration as a tool (Credential Store & Support)
- [ ] **Core Agent Tools**
- [ ] Node Discovery Tool (find other agents in the graph)
- [ ] HITL Tool (pause execution for human approval)
- [ ] Wake-up Tool (resume agent tasks)
### Deployment (Self-Hosted)
- [ ] Docker container standardization
- [ ] Headless backend execution
- [ ] Exposed API for frontend attachment
- [ ] Local monitoring & observability
- [ ] Basic lifecycle APIs (Start, Stop, Pause, Resume)
### Deployment (Cloud)
- [ ] Cloud Service Options
- [ ] Support deployment to 3rd-party platforms
- [ ] Self-deploy + orchestrator connection
- [ ] **CI/CD Pipeline**
- [ ] Automated test execution
- [ ] Agent version control
- [ ] All tests must pass for deployment
### Developer Experience Enhancement
- [ ] Tool usage documentation
- [ ] Discord Support Channel
### More Agent Templates
- [ ] GTM Sales Agent (workflow)
- [ ] GTM Marketing Agent (workflow)
- [ ] Analytics Agent
- [ ] Training Agent
- [ ] Smart Entry / Form Agent (self-evolution emphasis)
### Cross-Platform
- [ ] JavaScript / TypeScript Version SDK
- [ ] Better Windows support
+31
@@ -0,0 +1,31 @@
perf: reduce subprocess spawning in quickstart scripts (#4427)
## Problem
Windows process creation (CreateProcess) is 10-100x slower than Linux fork/exec.
The quickstart scripts were spawning 4+ separate `uv run python -c "import X"`
processes to verify imports, adding ~600ms overhead on Windows.
## Solution
Consolidated all import checks into a single batch script that checks multiple
modules in one subprocess call, reducing spawn overhead by ~75%.
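A minimal sketch of the batched approach (the actual `scripts/check_requirements.py` may differ; the module list and output format here are assumptions):
```python
# Illustrative sketch of a batched import checker -- checks many modules in ONE
# interpreter instead of spawning one subprocess per import.
import importlib
import sys


def check(modules: list[str]) -> int:
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError as exc:
            missing.append(f"{name}: {exc}")
    if missing:
        print("Missing modules:\n  " + "\n  ".join(missing), file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    # e.g. uv run python check_requirements_sketch.py litellm fastmcp pydantic
    sys.exit(check(sys.argv[1:]))
```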
## Changes
- **New**: `scripts/check_requirements.py` - Batched import checker
- **New**: `scripts/test_check_requirements.py` - Test suite
- **New**: `scripts/benchmark_quickstart.ps1` - Performance benchmark tool
- **Modified**: `quickstart.ps1` - Updated import verification (2 sections)
- **Modified**: `quickstart.sh` - Updated import verification
## Performance Impact
**Benchmark results on Windows:**
- Before: ~19.8 seconds for import checks
- After: ~4.9 seconds for import checks
- **Improvement: 14.9 seconds saved (75.2% faster)**
## Testing
- ✅ All functional tests pass (`scripts/test_check_requirements.py`)
- ✅ Quickstart scripts work correctly on Windows
- ✅ Error handling verified (invalid imports reported correctly)
- ✅ Performance benchmark confirms 75%+ improvement
Fixes #4427
+27
@@ -0,0 +1,27 @@
# Identity mapping: GitHub username -> Discord ID
#
# This file links GitHub accounts to Discord accounts for the
# Integration Bounty Program. When a bounty PR is merged, the
# GitHub Action uses this file to ping the contributor on Discord.
#
# HOW TO ADD YOURSELF:
# Open a "Link Discord Account" issue:
# https://github.com/aden-hive/hive/issues/new?template=link-discord.yml
# A GitHub Action will automatically add your entry here.
#
# To find your Discord ID:
# 1. Open Discord Settings > Advanced > Enable Developer Mode
# 2. Right-click your name > Copy User ID
#
# Format:
# - github: your-github-username
# discord: "your-discord-id" # quotes required (it's a number)
# name: Your Display Name # optional
contributors:
# - github: example-user
# discord: "123456789012345678"
# name: Example User
- github: TimothyZhang7
discord: "408460790061072384"
name: Timothy@Aden
-5
@@ -1,10 +1,5 @@
{
"mcpServers": {
"agent-builder": {
"command": "python",
"args": ["-m", "framework.mcp.agent_builder_server"],
"cwd": "core"
},
"tools": {
"command": "python",
"args": ["-m", "aden_tools.mcp_server", "--stdio"],
+3 -3
@@ -82,7 +82,7 @@ Register an MCP server as a tool source for your agent.
"example_tool"
],
"total_mcp_servers": 1,
"note": "MCP server 'tools' registered with 6 tools. These tools can now be used in llm_tool_use nodes."
"note": "MCP server 'tools' registered with 6 tools. These tools can now be used in event_loop nodes."
}
```
@@ -149,7 +149,7 @@ List tools available from registered MCP servers.
]
},
"total_tools": 6,
"note": "Use these tool names in the 'tools' parameter when adding llm_tool_use nodes"
"note": "Use these tool names in the 'tools' parameter when adding event_loop nodes"
}
```
@@ -246,7 +246,7 @@ Here's a complete workflow for building an agent with MCP tools:
"node_id": "web-searcher",
"name": "Web Search",
"description": "Search the web for information",
"node_type": "llm_tool_use",
"node_type": "event_loop",
"input_keys": "[\"query\"]",
"output_keys": "[\"search_results\"]",
"system_prompt": "Search for {query} using the web_search tool",
+2 -2
@@ -119,7 +119,7 @@ builder = WorkflowBuilder()
builder.add_node(
node_id="researcher",
name="Web Researcher",
node_type="llm_tool_use",
node_type="event_loop",
system_prompt="Research the topic using web_search",
tools=["web_search"], # Tool from tools MCP server
input_keys=["topic"],
@@ -137,7 +137,7 @@ Tools from MCP servers can be referenced in your agent.json just like built-in t
{
"id": "searcher",
"name": "Web Searcher",
"node_type": "llm_tool_use",
"node_type": "event_loop",
"system_prompt": "Search for information about {topic}",
"tools": ["web_search", "web_scrape"],
"input_keys": ["topic"],
+27 -81
@@ -1,17 +1,16 @@
# MCP Server Guide - Agent Builder
# MCP Server Guide - Agent Building Tools
This guide covers the MCP (Model Context Protocol) server for building goal-driven agents.
> **Note:** The standalone `agent-builder` MCP server (`framework.mcp.agent_builder_server`) has been replaced. Agent building is now done via the `coder-tools` server's `initialize_and_build_agent` tool, with underlying logic in `tools/coder_tools_server.py`.
This guide covers the MCP tools available for building goal-driven agents.
## Setup
### Quick Setup
```bash
# Using the setup script (recommended)
python setup_mcp.py
# Or using bash
./setup_mcp.sh
# Run the quickstart script (recommended)
./quickstart.sh
```
### Manual Configuration
@@ -21,10 +20,10 @@ Add to your MCP client configuration (e.g., Claude Desktop):
```json
{
"mcpServers": {
"agent-builder": {
"command": "python",
"args": ["-m", "framework.mcp.agent_builder_server"],
"cwd": "/path/to/goal-agent"
"coder-tools": {
"command": "uv",
"args": ["run", "coder_tools_server.py", "--stdio"],
"cwd": "/path/to/hive/tools"
}
}
}
@@ -103,31 +102,20 @@ Add a processing node to the agent graph.
- `node_id` (string, required): Unique node identifier
- `name` (string, required): Human-readable name
- `description` (string, required): What this node does
- `node_type` (string, required): One of: `llm_generate`, `llm_tool_use`, `router`, `function`
- `node_type` (string, required): Must be `event_loop` (the only valid type)
- `input_keys` (string, required): JSON array of input variable names
- `output_keys` (string, required): JSON array of output variable names
- `system_prompt` (string, optional): System prompt for LLM nodes
- `tools` (string, optional): JSON array of tool names for tool_use nodes
- `routes` (string, optional): JSON object of route mappings for router nodes
- `system_prompt` (string, optional): System prompt for the LLM
- `tools` (string, optional): JSON array of tool names
- `client_facing` (boolean, optional): Set to true for human-in-the-loop interaction
**Node Types:**
**Node Type:**
1. **llm_generate**: Uses LLM to generate output from inputs
- Requires: `system_prompt`
- Tools: Not used
2. **llm_tool_use**: Uses LLM with tools to accomplish tasks
- Requires: `system_prompt`, `tools`
- Tools: Array of tool names (e.g., `["web_search", "web_fetch"]`)
3. **router**: LLM-powered routing to different paths
- Requires: `system_prompt`, `routes`
- Routes: Object mapping route names to target node IDs
- Example: `{"pass": "success_node", "fail": "retry_node"}`
4. **function**: Executes a pre-defined function
- System prompt describes the function behavior
- No LLM calls, pure computation
**event_loop**: LLM-powered node with self-correction loop
- Requires: `system_prompt`
- Optional: `tools` (array of tool names, e.g., `["web_search", "web_fetch"]`)
- Optional: `client_facing` (set to true for HITL / user interaction)
- Supports: iterative refinement, judge-based evaluation, tool use, streaming
**Example:**
```json
@@ -135,7 +123,7 @@ Add a processing node to the agent graph.
"node_id": "search_sources",
"name": "Search Sources",
"description": "Searches for relevant sources on the topic",
"node_type": "llm_tool_use",
"node_type": "event_loop",
"input_keys": "[\"topic\", \"search_queries\"]",
"output_keys": "[\"sources\", \"source_count\"]",
"system_prompt": "Search for sources using the provided queries...",
@@ -198,7 +186,7 @@ Export the validated graph as an agent specification.
**What it does:**
1. Validates the graph
2. Auto-generates missing edges from router routes
2. Validates edge connectivity
3. Writes files to disk:
- `exports/{agent-name}/agent.json` - Full agent specification
- `exports/{agent-name}/README.md` - Auto-generated documentation
@@ -252,47 +240,6 @@ Test the complete agent graph with sample inputs.
---
### Evaluation Rules
#### `add_evaluation_rule`
Add a rule for the HybridJudge to evaluate node outputs.
**Parameters:**
- `rule_id` (string, required): Unique rule identifier
- `description` (string, required): What this rule checks
- `condition` (string, required): Python expression to evaluate
- `action` (string, required): Action to take: `accept`, `retry`, `escalate`
- `priority` (integer, optional): Rule priority (default: 0)
- `feedback_template` (string, optional): Feedback message template
**Condition Examples:**
- `'result.get("success") == True'` - Check for success flag
- `'result.get("error_type") == "timeout"'` - Check error type
- `'len(result.get("data", [])) > 0'` - Check for non-empty data
**Example:**
```json
{
"rule_id": "timeout_retry",
"description": "Retry on timeout errors",
"condition": "result.get('error_type') == 'timeout'",
"action": "retry",
"priority": 10,
"feedback_template": "Timeout occurred, retrying..."
}
```
#### `list_evaluation_rules`
List all configured evaluation rules.
#### `remove_evaluation_rule`
Remove an evaluation rule.
**Parameters:**
- `rule_id` (string, required): Rule to remove
---
## Example Workflow
Here's a complete workflow for building a research agent:
@@ -320,7 +267,7 @@ add_node(
node_id="planner",
name="Research Planner",
description="Creates research strategy",
node_type="llm_generate",
node_type="event_loop",
input_keys='["topic"]',
output_keys='["strategy", "queries"]',
system_prompt="Analyze topic and create research plan..."
@@ -330,7 +277,7 @@ add_node(
node_id="searcher",
name="Search Sources",
description="Find relevant sources",
node_type="llm_tool_use",
node_type="event_loop",
input_keys='["queries"]',
output_keys='["sources"]',
system_prompt="Search for sources...",
@@ -359,10 +306,9 @@ The exported agent will be saved to `exports/research-agent/`.
1. **Start with the goal**: Define clear success criteria before building nodes
2. **Test nodes individually**: Use `test_node` to verify each node works
3. **Use router nodes for branching**: Don't create edges manually for routers - define routes and they'll be auto-generated
4. **Add evaluation rules**: Help the judge evaluate outputs deterministically
5. **Validate early, validate often**: Run `validate_graph` after adding nodes/edges
6. **Check exports**: Review the generated README.md to verify your agent structure
3. **Use conditional edges for branching**: Define condition_expr on edges for decision points
4. **Validate early, validate often**: Run `validate_graph` after adding nodes/edges
5. **Check exports**: Review the generated README.md to verify your agent structure
---
+15 -70
@@ -14,69 +14,14 @@ Framework provides a runtime framework that captures **decisions**, not just act
## Installation
```bash
pip install -e .
uv pip install -e .
```
## MCP Server Setup
## Agent Building
The framework includes an MCP (Model Context Protocol) server for building agents. To set up the MCP server:
Agent scaffolding is handled by the `coder-tools` MCP server (in `tools/coder_tools_server.py`), which provides the `initialize_and_build_agent` tool and related utilities; the package-generation logic lives directly in that module.
### Automated Setup
**Using bash (Linux/macOS):**
```bash
./setup_mcp.sh
```
**Using Python (cross-platform):**
```bash
python setup_mcp.py
```
The setup script will:
1. Install the framework package
2. Install MCP dependencies (mcp, fastmcp)
3. Create/verify `.mcp.json` configuration
4. Test the MCP server module
### Manual Setup
If you prefer manual setup:
```bash
# Install framework
pip install -e .
# Install MCP dependencies
pip install mcp fastmcp
# Test the server
python -m framework.mcp.agent_builder_server
```
### Using with MCP Clients
To use the agent builder with Claude Desktop or other MCP clients, add this to your MCP client configuration:
```json
{
"mcpServers": {
"agent-builder": {
"command": "python",
"args": ["-m", "framework.mcp.agent_builder_server"],
"cwd": "/path/to/goal-agent"
}
}
}
```
The MCP server provides tools for:
- Creating agent building sessions
- Defining goals with success criteria
- Adding nodes (llm_generate, llm_tool_use, router, function)
- Connecting nodes with edges
- Validating and exporting agent graphs
- Testing nodes and full agent graphs
See the [Getting Started Guide](../docs/getting-started.md) for building agents.
## Quick Start
@@ -85,14 +30,14 @@ The MCP server provides tools for:
Run an LLM-powered calculator:
```bash
# Single calculation
python -m framework calculate "2 + 3 * 4"
# Run an exported agent
uv run python -m framework run exports/calculator --input '{"expression": "2 + 3 * 4"}'
# Interactive mode
python -m framework interactive
# Interactive shell session
uv run python -m framework shell exports/calculator
# Analyze runs with Builder
python -m framework analyze calculator
# Show agent info
uv run python -m framework info exports/calculator
```
### Using the Runtime
@@ -136,16 +81,16 @@ Tests are generated using MCP tools (`generate_constraint_tests`, `generate_succ
```bash
# Run tests against an agent
python -m framework test-run <agent_path> --goal <goal_id> --parallel 4
uv run python -m framework test-run <agent_path> --goal <goal_id> --parallel 4
# Debug failed tests
python -m framework test-debug <agent_path> <test_name>
uv run python -m framework test-debug <agent_path> <test_name>
# List tests for a goal
python -m framework test-list <goal_id>
# List tests for an agent
uv run python -m framework test-list <agent_path>
```
For detailed testing workflows, see the [testing-agent skill](../.claude/skills/testing-agent/SKILL.md).
For detailed testing workflows, see [developer-guide.md](../docs/developer-guide.md).
### Analyzing Agent Behavior with Builder
+387
@@ -0,0 +1,387 @@
"""OpenAI Codex OAuth PKCE login flow.
Runs the full browser-based OAuth flow so users can authenticate with their
ChatGPT Plus/Pro subscription without needing the Codex CLI installed.
Usage (from quickstart.sh):
uv run python codex_oauth.py
Exit codes:
0 - success (credentials saved to ~/.codex/auth.json)
1 - failure (user cancelled, timeout, or token exchange error)
"""
import base64
import hashlib
import http.server
import json
import os
import platform
import secrets
import subprocess
import sys
import threading
import time
import urllib.error
import urllib.parse
import urllib.request
from datetime import UTC, datetime
from pathlib import Path
# OAuth constants (from the Codex CLI binary)
CLIENT_ID = "app_EMoamEEZ73f0CkXaXp7hrann"
AUTHORIZE_URL = "https://auth.openai.com/oauth/authorize"
TOKEN_URL = "https://auth.openai.com/oauth/token"
REDIRECT_URI = "http://localhost:1455/auth/callback"
SCOPE = "openid profile email offline_access"
CALLBACK_PORT = 1455
# Where to save credentials (same location the Codex CLI uses)
CODEX_AUTH_FILE = Path.home() / ".codex" / "auth.json"
# JWT claim path for account_id
JWT_CLAIM_PATH = "https://api.openai.com/auth"
def _base64url(data: bytes) -> str:
return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")
def generate_pkce() -> tuple[str, str]:
"""Generate PKCE code_verifier and code_challenge (S256)."""
verifier_bytes = secrets.token_bytes(32)
verifier = _base64url(verifier_bytes)
challenge = _base64url(hashlib.sha256(verifier.encode("ascii")).digest())
return verifier, challenge
def build_authorize_url(state: str, challenge: str) -> str:
"""Build the OpenAI OAuth authorize URL with PKCE."""
params = urllib.parse.urlencode(
{
"response_type": "code",
"client_id": CLIENT_ID,
"redirect_uri": REDIRECT_URI,
"scope": SCOPE,
"code_challenge": challenge,
"code_challenge_method": "S256",
"state": state,
"id_token_add_organizations": "true",
"codex_cli_simplified_flow": "true",
"originator": "hive",
}
)
return f"{AUTHORIZE_URL}?{params}"
def exchange_code_for_tokens(code: str, verifier: str) -> dict | None:
"""Exchange the authorization code for tokens."""
data = urllib.parse.urlencode(
{
"grant_type": "authorization_code",
"client_id": CLIENT_ID,
"code": code,
"code_verifier": verifier,
"redirect_uri": REDIRECT_URI,
}
).encode("utf-8")
req = urllib.request.Request(
TOKEN_URL,
data=data,
headers={"Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=15) as resp:
token_data = json.loads(resp.read())
except (urllib.error.URLError, json.JSONDecodeError, TimeoutError, OSError) as exc:
print(f"\033[0;31mToken exchange failed: {exc}\033[0m", file=sys.stderr)
return None
if not token_data.get("access_token") or not token_data.get("refresh_token"):
print("\033[0;31mToken response missing required fields\033[0m", file=sys.stderr)
return None
return token_data
def decode_jwt_payload(token: str) -> dict | None:
"""Decode the payload of a JWT (no signature verification)."""
try:
parts = token.split(".")
if len(parts) != 3:
return None
payload = parts[1]
# Add padding
padding = 4 - len(payload) % 4
if padding != 4:
payload += "=" * padding
decoded = base64.urlsafe_b64decode(payload)
return json.loads(decoded)
except Exception:
return None
def get_account_id(access_token: str) -> str | None:
"""Extract the ChatGPT account_id from the access token JWT."""
payload = decode_jwt_payload(access_token)
if not payload:
return None
auth = payload.get(JWT_CLAIM_PATH)
if isinstance(auth, dict):
account_id = auth.get("chatgpt_account_id")
if isinstance(account_id, str) and account_id:
return account_id
return None
def save_credentials(token_data: dict, account_id: str) -> None:
"""Save credentials to ~/.codex/auth.json in the same format the Codex CLI uses."""
auth_data = {
"tokens": {
"access_token": token_data["access_token"],
"refresh_token": token_data["refresh_token"],
"account_id": account_id,
},
"auth_mode": "chatgpt",
"last_refresh": datetime.now(UTC).isoformat(),
}
if "id_token" in token_data:
auth_data["tokens"]["id_token"] = token_data["id_token"]
CODEX_AUTH_FILE.parent.mkdir(parents=True, exist_ok=True, mode=0o700)
fd = os.open(CODEX_AUTH_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "w") as f:
json.dump(auth_data, f, indent=2)
def open_browser(url: str) -> bool:
"""Open the URL in the user's default browser."""
system = platform.system()
try:
devnull = subprocess.DEVNULL
if system == "Darwin":
subprocess.Popen(["open", url], stdout=devnull, stderr=devnull)
elif system == "Windows":
subprocess.Popen(["cmd", "/c", "start", url], stdout=devnull, stderr=devnull)
else:
subprocess.Popen(["xdg-open", url], stdout=devnull, stderr=devnull)
return True
except OSError:
return False
class OAuthCallbackHandler(http.server.BaseHTTPRequestHandler):
"""HTTP handler that captures the OAuth callback."""
auth_code: str | None = None
received_state: str | None = None
def do_GET(self) -> None:
parsed = urllib.parse.urlparse(self.path)
if parsed.path != "/auth/callback":
self.send_response(404)
self.end_headers()
self.wfile.write(b"Not found")
return
params = urllib.parse.parse_qs(parsed.query)
code = params.get("code", [None])[0]
state = params.get("state", [None])[0]
if not code:
self.send_response(400)
self.end_headers()
self.wfile.write(b"Missing authorization code")
return
OAuthCallbackHandler.auth_code = code
OAuthCallbackHandler.received_state = state
self.send_response(200)
self.send_header("Content-Type", "text/html; charset=utf-8")
self.end_headers()
self.wfile.write(
b"<!doctype html><html><head><meta charset='utf-8'/></head>"
b"<body><h2>Authentication successful</h2>"
b"<p>Return to your terminal to continue.</p></body></html>"
)
def log_message(self, format: str, *args: object) -> None:
# Suppress request logging
pass
def wait_for_callback(state: str, timeout_secs: int = 120) -> str | None:
"""Start a local HTTP server and wait for the OAuth callback.
Returns the authorization code on success, None on timeout.
"""
OAuthCallbackHandler.auth_code = None
OAuthCallbackHandler.received_state = None
server = http.server.HTTPServer(("127.0.0.1", CALLBACK_PORT), OAuthCallbackHandler)
server.timeout = 1
deadline = time.time() + timeout_secs
server_thread = threading.Thread(target=_serve_until_done, args=(server, deadline, state))
server_thread.daemon = True
server_thread.start()
server_thread.join(timeout=timeout_secs + 2)
server.server_close()
if OAuthCallbackHandler.auth_code and OAuthCallbackHandler.received_state == state:
return OAuthCallbackHandler.auth_code
return None
def _serve_until_done(server: http.server.HTTPServer, deadline: float, state: str) -> None:
while time.time() < deadline:
server.handle_request()
if OAuthCallbackHandler.auth_code and OAuthCallbackHandler.received_state == state:
return
def parse_manual_input(value: str, expected_state: str) -> str | None:
"""Parse user-pasted redirect URL or auth code."""
value = value.strip()
if not value:
return None
try:
parsed = urllib.parse.urlparse(value)
params = urllib.parse.parse_qs(parsed.query)
code = params.get("code", [None])[0]
state = params.get("state", [None])[0]
if state and state != expected_state:
return None
return code
except Exception:
pass
# Maybe it's just the raw code
if len(value) > 10 and " " not in value:
return value
return None
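# Illustrative inputs this accepts: the full redirect URL the browser landed on
# (REDIRECT_URI plus "?code=...&state=..."), or the bare authorization code
# pasted on its own.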
def main() -> int:
# Generate PKCE and state
verifier, challenge = generate_pkce()
state = secrets.token_hex(16)
# Build URL
auth_url = build_authorize_url(state, challenge)
print()
print("\033[1mOpenAI Codex OAuth Login\033[0m")
print()
# Try to start the local callback server first
try:
server_available = True
# Quick test that port is free
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1)
result = sock.connect_ex(("127.0.0.1", CALLBACK_PORT))
sock.close()
if result == 0:
print(f"\033[1;33mPort {CALLBACK_PORT} is in use. Using manual paste mode.\033[0m")
server_available = False
except Exception:
server_available = True
# Open browser
browser_opened = open_browser(auth_url)
if browser_opened:
print(" Browser opened for OpenAI sign-in...")
else:
print(" Could not open browser automatically.")
print()
print(" If the browser didn't open, visit this URL:")
print(f" \033[0;36m{auth_url}\033[0m")
print()
code = None
if server_available:
print(" Waiting for authentication (up to 2 minutes)...")
print(" \033[2mOr paste the redirect URL below if the callback didn't work:\033[0m")
print()
# Start callback server in background
callback_result: list[str | None] = [None]
def run_server() -> None:
callback_result[0] = wait_for_callback(state, timeout_secs=120)
server_thread = threading.Thread(target=run_server)
server_thread.daemon = True
server_thread.start()
# Also accept manual input in parallel
# We poll for both the server result and stdin
try:
import select
while server_thread.is_alive():
# Check if stdin has data (non-blocking on unix)
if hasattr(select, "select"):
ready, _, _ = select.select([sys.stdin], [], [], 0.5)
if ready:
manual = sys.stdin.readline()
if manual.strip():
code = parse_manual_input(manual, state)
if code:
break
else:
time.sleep(0.5)
if callback_result[0]:
code = callback_result[0]
break
except (KeyboardInterrupt, EOFError):
print("\n\033[0;31mCancelled.\033[0m")
return 1
if not code:
code = callback_result[0]
else:
# Manual paste mode
try:
manual = input(" Paste the redirect URL: ").strip()
code = parse_manual_input(manual, state)
except (KeyboardInterrupt, EOFError):
print("\n\033[0;31mCancelled.\033[0m")
return 1
if not code:
print("\n\033[0;31mAuthentication timed out or failed.\033[0m")
return 1
# Exchange code for tokens
print()
print(" Exchanging authorization code for tokens...")
token_data = exchange_code_for_tokens(code, verifier)
if not token_data:
return 1
# Extract account_id from JWT
account_id = get_account_id(token_data["access_token"])
if not account_id:
print("\033[0;31mFailed to extract account ID from token.\033[0m", file=sys.stderr)
return 1
# Save credentials
save_credentials(token_data, account_id)
print(" \033[0;32mAuthentication successful!\033[0m")
print(f" Credentials saved to {CODEX_AUTH_FILE}")
return 0
if __name__ == "__main__":
sys.exit(main())
+50 -61
@@ -32,6 +32,8 @@ sys.path.insert(0, str(_CORE_DIR)) # framework.*
sys.path.insert(0, str(_HIVE_DIR / "tools" / "src")) # aden_tools.*
sys.path.insert(0, str(_HIVE_DIR)) # core.framework.* (for aden_tools imports)
import os # noqa: E402
from aden_tools.credentials import CREDENTIAL_SPECS, CredentialStoreAdapter # noqa: E402
from core.framework.credentials import CredentialStore # noqa: E402
@@ -40,7 +42,7 @@ from framework.credentials.storage import ( # noqa: E402
EncryptedFileStorage,
EnvVarStorage,
)
from framework.graph.event_loop_node import EventLoopNode, JudgeVerdict, LoopConfig # noqa: E402
from framework.graph.event_loop_node import EventLoopNode, LoopConfig # noqa: E402
from framework.graph.node import NodeContext, NodeSpec, SharedMemory # noqa: E402
from framework.llm.litellm import LiteLLMProvider # noqa: E402
from framework.llm.provider import Tool # noqa: E402
@@ -67,13 +69,39 @@ LLM = LiteLLMProvider(model="claude-sonnet-4-5-20250929")
TOOL_REGISTRY = ToolRegistry()
# Composite credential store: encrypted files (primary) + env vars (fallback)
# Credential store: Aden sync (OAuth2 tokens) + encrypted files + env var fallback
_env_mapping = {name: spec.env_var for name, spec in CREDENTIAL_SPECS.items()}
_composite = CompositeStorage(
_local_storage = CompositeStorage(
primary=EncryptedFileStorage(),
fallbacks=[EnvVarStorage(env_mapping=_env_mapping)],
)
CREDENTIALS = CredentialStoreAdapter(CredentialStore(storage=_composite))
if os.environ.get("ADEN_API_KEY"):
try:
from framework.credentials.aden import ( # noqa: E402
AdenCachedStorage,
AdenClientConfig,
AdenCredentialClient,
AdenSyncProvider,
)
_client = AdenCredentialClient(AdenClientConfig(base_url="https://api.adenhq.com"))
_provider = AdenSyncProvider(client=_client)
_storage = AdenCachedStorage(
local_storage=_local_storage,
aden_provider=_provider,
)
_cred_store = CredentialStore(storage=_storage, providers=[_provider], auto_refresh=True)
_synced = _provider.sync_all(_cred_store)
logger.info("Synced %d credentials from Aden", _synced)
except Exception as e:
logger.warning("Aden sync unavailable: %s", e)
_cred_store = CredentialStore(storage=_local_storage)
else:
logger.info("ADEN_API_KEY not set, using local credential storage")
_cred_store = CredentialStore(storage=_local_storage)
CREDENTIALS = CredentialStoreAdapter(_cred_store)
# Debug: log which credentials resolved
for _name in ["brave_search", "hubspot", "anthropic"]:
@@ -288,47 +316,6 @@ logger.info(
)
# -------------------------------------------------------------------------
# ChatJudge — keeps the event loop alive between user messages
# -------------------------------------------------------------------------
class ChatJudge:
"""Judge that blocks between user messages, keeping the loop alive.
After the LLM finishes responding, the judge awaits a signal indicating
a new user message has been injected, then returns RETRY to continue.
"""
def __init__(self, on_ready=None):
self._message_ready = asyncio.Event()
self._shutdown = False
self._on_ready = on_ready # async callback fired when waiting for input
async def evaluate(self, context: dict) -> JudgeVerdict:
# Notify client that the LLM is done — ready for next input
if self._on_ready:
await self._on_ready()
# Block until next user message (or shutdown)
self._message_ready.clear()
await self._message_ready.wait()
if self._shutdown:
return JudgeVerdict(action="ACCEPT")
return JudgeVerdict(action="RETRY")
def signal_message(self):
"""Unblock the judge — a new user message has been injected."""
self._message_ready.set()
def signal_shutdown(self):
"""Unblock the judge and let the loop exit cleanly."""
self._shutdown = True
self._message_ready.set()
# -------------------------------------------------------------------------
# HTML page (embedded)
# -------------------------------------------------------------------------
@@ -537,7 +524,7 @@ connect();
async def handle_ws(websocket):
"""Persistent WebSocket: long-lived EventLoopNode kept alive by ChatJudge."""
"""Persistent WebSocket: long-lived EventLoopNode with client_facing blocking."""
global STORE
# -- Event forwarding (WebSocket ← EventBus) ----------------------------
@@ -565,15 +552,7 @@ async def handle_ws(websocket):
handler=forward_event,
)
# -- Ready callback (tells browser the LLM is done, waiting for input) --
async def send_ready():
try:
await websocket.send(json.dumps({"type": "ready"}))
except Exception:
pass
# -- Per-connection state -----------------------------------------------
judge = ChatJudge(on_ready=send_ready)
node = None
loop_task = None
@@ -585,6 +564,7 @@ async def handle_ws(websocket):
name="Chat Assistant",
description="A conversational assistant that remembers context across messages",
node_type="event_loop",
client_facing=True,
system_prompt=(
"You are a helpful assistant with access to tools. "
"You can search the web, scrape webpages, and query HubSpot CRM. "
@@ -593,6 +573,18 @@ async def handle_ws(websocket):
),
)
# -- Ready callback: subscribe to CLIENT_INPUT_REQUESTED on the bus ---
async def on_input_requested(event):
try:
await websocket.send(json.dumps({"type": "ready"}))
except Exception:
pass
bus.subscribe(
event_types=[EventType.CLIENT_INPUT_REQUESTED],
handler=on_input_requested,
)
async def start_loop(first_message: str):
"""Create an EventLoopNode and run it as a background task."""
nonlocal node, loop_task
@@ -609,7 +601,6 @@ async def handle_ws(websocket):
)
node = EventLoopNode(
event_bus=bus,
judge=judge,
config=LoopConfig(max_iterations=10_000, max_history_tokens=32_000),
conversation_store=STORE,
tool_executor=tool_executor,
@@ -655,10 +646,11 @@ async def handle_ws(websocket):
loop_task = asyncio.create_task(_run())
async def stop_loop():
"""Signal the judge and wait for the loop task to finish."""
"""Signal the node and wait for the loop task to finish."""
nonlocal node, loop_task
if loop_task and not loop_task.done():
judge.signal_shutdown()
if node:
node.signal_shutdown()
try:
await asyncio.wait_for(loop_task, timeout=5.0)
except (TimeoutError, asyncio.CancelledError):
@@ -684,8 +676,6 @@ async def handle_ws(websocket):
if conv_dir.exists():
shutil.rmtree(conv_dir)
STORE = FileConversationStore(conv_dir)
# Reset judge for next session
judge = ChatJudge(on_ready=send_ready)
await websocket.send(json.dumps({"type": "cleared"}))
logger.info("Conversation cleared")
continue
@@ -699,10 +689,9 @@ async def handle_ws(websocket):
logger.info(f"Starting persistent loop: {topic}")
await start_loop(topic)
else:
# Subsequent message — inject and unblock the judge
# Subsequent message — inject into the running loop
logger.info(f"Injecting message: {topic}")
await node.inject_event(topic)
judge.signal_message()
except websockets.exceptions.ConnectionClosed:
pass
File diff suppressed because it is too large
+2 -2
@@ -751,7 +751,7 @@ async def _run_pipeline(websocket, topic: str):
judge=None, # implicit judge: accept when output_keys filled
config=LoopConfig(
max_iterations=20,
max_tool_calls_per_turn=10,
max_tool_calls_per_turn=30,
max_history_tokens=32_000,
),
conversation_store=store_a,
@@ -849,7 +849,7 @@ async def _run_pipeline(websocket, topic: str):
judge=None, # implicit judge
config=LoopConfig(
max_iterations=10,
max_tool_calls_per_turn=5,
max_tool_calls_per_turn=30,
max_history_tokens=32_000,
),
conversation_store=store_b,
File diff suppressed because it is too large
+29 -20
@@ -4,34 +4,45 @@ Minimal Manual Agent Example
This example demonstrates how to build and run an agent programmatically
without using the Claude Code CLI or external LLM APIs.
It uses 'function' nodes to define logic in pure Python, making it perfect
for understanding the core runtime loop:
It uses custom NodeProtocol implementations to define logic in pure Python,
making it perfect for understanding the core runtime loop:
Setup -> Graph definition -> Execution -> Result
Run with:
PYTHONPATH=core python core/examples/manual_agent.py
uv run python core/examples/manual_agent.py
"""
import asyncio
from framework.graph import EdgeCondition, EdgeSpec, Goal, GraphSpec, NodeSpec
from framework.graph.executor import GraphExecutor
from framework.graph.node import NodeContext, NodeProtocol, NodeResult
from framework.runtime.core import Runtime
# 1. Define Node Logic (Pure Python Functions)
def greet(name: str) -> str:
# 1. Define Node Logic (Custom NodeProtocol implementations)
class GreeterNode(NodeProtocol):
"""Generate a simple greeting."""
return f"Hello, {name}!"
async def execute(self, ctx: NodeContext) -> NodeResult:
name = ctx.input_data.get("name", "World")
greeting = f"Hello, {name}!"
ctx.memory.write("greeting", greeting)
return NodeResult(success=True, output={"greeting": greeting})
def uppercase(greeting: str) -> str:
class UppercaserNode(NodeProtocol):
"""Convert text to uppercase."""
return greeting.upper()
async def execute(self, ctx: NodeContext) -> NodeResult:
greeting = ctx.input_data.get("greeting") or ctx.memory.read("greeting") or ""
result = greeting.upper()
ctx.memory.write("final_greeting", result)
return NodeResult(success=True, output={"final_greeting": result})
async def main():
print("🚀 Setting up Manual Agent...")
print("Setting up Manual Agent...")
# 2. Define the Goal
# Every agent needs a goal with success criteria
@@ -55,8 +66,7 @@ async def main():
id="greeter",
name="Greeter",
description="Generates a simple greeting",
node_type="function",
function="greet", # Matches the registered function name
node_type="event_loop",
input_keys=["name"],
output_keys=["greeting"],
)
@@ -65,8 +75,7 @@ async def main():
id="uppercaser",
name="Uppercaser",
description="Converts greeting to uppercase",
node_type="function",
function="uppercase",
node_type="event_loop",
input_keys=["greeting"],
output_keys=["final_greeting"],
)
@@ -98,23 +107,23 @@ async def main():
runtime = Runtime(storage_path=Path("./agent_logs"))
executor = GraphExecutor(runtime=runtime)
# 7. Register Function Implementations
# Connect string names in NodeSpecs to actual Python functions
executor.register_function("greeter", greet)
executor.register_function("uppercaser", uppercase)
# 7. Register Node Implementations
# Connect node IDs in the graph to actual Python implementations
executor.register_node("greeter", GreeterNode())
executor.register_node("uppercaser", UppercaserNode())
# 8. Execute Agent
print("Executing agent with input: name='Alice'...")
print("Executing agent with input: name='Alice'...")
result = await executor.execute(graph=graph, goal=goal, input_data={"name": "Alice"})
# 9. Verify Results
if result.success:
print("\nSuccess!")
print("\nSuccess!")
print(f"Path taken: {' -> '.join(result.path)}")
print(f"Final output: {result.output.get('final_greeting')}")
else:
print(f"\nFailed: {result.error}")
print(f"\nFailed: {result.error}")
if __name__ == "__main__":
-75
@@ -95,81 +95,6 @@ async def example_3_config_file():
(test_agent_path / "mcp_servers.json").unlink()
async def example_4_custom_agent_with_mcp_tools():
"""Example 4: Build custom agent that uses MCP tools"""
print("\n=== Example 4: Custom Agent with MCP Tools ===\n")
from framework.builder.workflow import GraphBuilder
# Create a workflow builder
builder = GraphBuilder()
# Define goal
builder.set_goal(
goal_id="web-researcher",
name="Web Research Agent",
description="Search the web and summarize findings",
)
# Add success criteria
builder.add_success_criterion(
"search-results", "Successfully retrieve at least 3 web search results"
)
builder.add_success_criterion("summary", "Provide a clear, concise summary of the findings")
# Add nodes that will use MCP tools
builder.add_node(
node_id="web-searcher",
name="Web Search",
description="Search the web for information",
node_type="llm_tool_use",
system_prompt="Search for {query} and return the top results. Use the web_search tool.",
tools=["web_search"], # This tool comes from tools MCP server
input_keys=["query"],
output_keys=["search_results"],
)
builder.add_node(
node_id="summarizer",
name="Summarize Results",
description="Summarize the search results",
node_type="llm_generate",
system_prompt="Summarize the following search results in 2-3 sentences: {search_results}",
input_keys=["search_results"],
output_keys=["summary"],
)
# Connect nodes
builder.add_edge("web-searcher", "summarizer")
# Set entry point
builder.set_entry("web-searcher")
builder.set_terminal("summarizer")
# Export the agent
export_path = Path("exports/web-research-agent")
export_path.mkdir(parents=True, exist_ok=True)
builder.export(export_path)
# Load and register MCP server
runner = AgentRunner.load(export_path)
runner.register_mcp_server(
name="tools",
transport="stdio",
command="python",
args=["-m", "aden_tools.mcp_server", "--stdio"],
cwd="../tools",
)
# Run the agent
result = await runner.run({"query": "latest AI breakthroughs 2026"})
print(f"\nAgent completed with result:\n{result}")
# Cleanup
runner.cleanup()
async def main():
"""Run all examples"""
print("=" * 60)
+2 -2
@@ -4,8 +4,8 @@
"name": "tools",
"description": "Aden tools including web search, file operations, and PDF reading",
"transport": "stdio",
"command": "python",
"args": ["mcp_server.py", "--stdio"],
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../tools",
"env": {
"BRAVE_SEARCH_API_KEY": "${BRAVE_SEARCH_API_KEY}"
-3
@@ -22,7 +22,6 @@ The framework includes a Goal-Based Testing system (Goal → Agent → Eval):
See `framework.testing` for details.
"""
from framework.builder.query import BuilderQuery
from framework.llm import AnthropicProvider, LLMProvider
from framework.runner import AgentOrchestrator, AgentRunner
from framework.runtime.core import Runtime
@@ -51,8 +50,6 @@ __all__ = [
"Problem",
# Runtime
"Runtime",
# Builder
"BuilderQuery",
# LLM
"LLMProvider",
"AnthropicProvider",
+13
@@ -0,0 +1,13 @@
"""Framework-provided agents."""
from pathlib import Path
FRAMEWORK_AGENTS_DIR = Path(__file__).parent
def list_framework_agents() -> list[Path]:
"""List all framework agent directories."""
return sorted(
[p for p in FRAMEWORK_AGENTS_DIR.iterdir() if p.is_dir() and (p / "agent.py").exists()],
key=lambda p: p.name,
)
@@ -0,0 +1,55 @@
"""
Credential Tester: verifies credentials (Aden OAuth + local API keys) via live API calls.
Interactive agent that lists all testable accounts, lets the user pick one,
loads the provider's tools, and runs a chat session to test the credential.
"""
from .agent import (
CredentialTesterAgent,
_list_aden_accounts,
_list_env_fallback_accounts,
_list_local_accounts,
configure_for_account,
conversation_mode,
edges,
entry_node,
entry_points,
get_tools_for_provider,
goal,
identity_prompt,
list_connected_accounts,
loop_config,
nodes,
pause_nodes,
requires_account_selection,
skip_credential_validation,
terminal_nodes,
)
from .config import default_config
__version__ = "1.0.0"
__all__ = [
"CredentialTesterAgent",
"configure_for_account",
"conversation_mode",
"default_config",
"edges",
"entry_node",
"entry_points",
"get_tools_for_provider",
"goal",
"identity_prompt",
"list_connected_accounts",
"loop_config",
"nodes",
"pause_nodes",
"requires_account_selection",
"skip_credential_validation",
"terminal_nodes",
# Internal list helpers (exposed for testing)
"_list_aden_accounts",
"_list_local_accounts",
"_list_env_fallback_accounts",
]
@@ -0,0 +1,112 @@
"""CLI entry point for Credential Tester agent."""
import asyncio
import logging
import sys
import click
from .agent import CredentialTesterAgent
def setup_logging(verbose=False, debug=False):
if debug:
level, fmt = logging.DEBUG, "%(asctime)s %(name)s: %(message)s"
elif verbose:
level, fmt = logging.INFO, "%(message)s"
else:
level, fmt = logging.WARNING, "%(levelname)s: %(message)s"
logging.basicConfig(level=level, format=fmt, stream=sys.stderr)
def pick_account(agent: CredentialTesterAgent) -> dict | None:
"""Interactive account picker. Returns selected account dict or None."""
accounts = agent.list_accounts()
if not accounts:
click.echo("No connected accounts found.")
click.echo("Set ADEN_API_KEY and connect accounts at https://app.adenhq.com")
return None
click.echo("\nConnected accounts:\n")
for i, acct in enumerate(accounts, 1):
provider = acct.get("provider", "?")
alias = acct.get("alias", "?")
identity = acct.get("identity", {})
detail_parts = [f"{k}: {v}" for k, v in identity.items() if v]
detail = f" ({', '.join(detail_parts)})" if detail_parts else ""
click.echo(f" {i}. {provider}/{alias}{detail}")
click.echo()
while True:
choice = click.prompt("Pick an account to test", type=int, default=1)
if 1 <= choice <= len(accounts):
return accounts[choice - 1]
click.echo(f"Invalid choice. Enter 1-{len(accounts)}.")
@click.group()
@click.version_option(version="1.0.0")
def cli():
"""Credential Tester — verify synced credentials via live API calls."""
pass
@cli.command()
@click.option("--verbose", "-v", is_flag=True)
@click.option("--debug", is_flag=True)
def shell(verbose, debug):
"""Interactive CLI session to test a credential."""
setup_logging(verbose=verbose, debug=debug)
asyncio.run(_interactive_shell(verbose))
async def _interactive_shell(verbose=False):
agent = CredentialTesterAgent()
account = pick_account(agent)
if account is None:
return
agent.select_account(account)
provider = account.get("provider", "?")
alias = account.get("alias", "?")
click.echo(f"\nTesting {provider}/{alias}")
click.echo("Type your requests or 'quit' to exit.\n")
await agent.start()
try:
result = await agent._agent_runtime.trigger_and_wait(
entry_point_id="start",
input_data={},
)
if result:
click.echo(f"\nSession ended: {'success' if result.success else result.error}")
except KeyboardInterrupt:
click.echo("\nGoodbye!")
finally:
await agent.stop()
@cli.command(name="list")
def list_accounts():
"""List all connected accounts."""
agent = CredentialTesterAgent()
accounts = agent.list_accounts()
if not accounts:
click.echo("No connected accounts found.")
return
click.echo("\nConnected accounts:\n")
for acct in accounts:
provider = acct.get("provider", "?")
alias = acct.get("alias", "?")
identity = acct.get("identity", {})
detail_parts = [f"{k}: {v}" for k, v in identity.items() if v]
detail = f" ({', '.join(detail_parts)})" if detail_parts else ""
click.echo(f" {provider}/{alias}{detail}")
if __name__ == "__main__":
cli()
@@ -0,0 +1,622 @@
"""Credential Tester agent — verify credentials via live API calls.
Supports both Aden OAuth2-synced accounts AND locally-stored API key accounts.
Aden accounts use account="alias" routing; local accounts inject the key into
the session environment so tools read it without an account= parameter.
When loaded via AgentRunner.load() (TUI picker, ``hive run``), the module-level
``nodes`` / ``edges`` variables provide a static graph. The TUI detects
``requires_account_selection`` and shows an account picker *before* starting
the agent. ``configure_for_account()`` then scopes the node's tools to the
selected provider.
When used directly (``CredentialTesterAgent``), the graph is built dynamically
after the user picks an account programmatically.
"""
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING
from framework.graph import Goal, NodeSpec, SuccessCriterion
from framework.graph.checkpoint_config import CheckpointConfig
from framework.graph.edge import GraphSpec
from framework.graph.executor import ExecutionResult
from framework.llm import LiteLLMProvider
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from .config import default_config
from .nodes import build_tester_node
if TYPE_CHECKING:
from framework.runner import AgentRunner
# ---------------------------------------------------------------------------
# Goal
# ---------------------------------------------------------------------------
goal = Goal(
id="credential-tester",
name="Credential Tester",
description="Verify that a credential can make real API calls.",
success_criteria=[
SuccessCriterion(
id="api-call-success",
description="At least one API call succeeds using the credential",
metric="api_call_success",
target="true",
weight=1.0,
),
],
constraints=[],
)
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def get_tools_for_provider(provider_name: str) -> list[str]:
"""Collect tool names for a credential by credential_id OR credential_group.
Matches on both ``credential_id`` (e.g. "google" for Gmail tools) and
``credential_group`` (e.g. "google_custom_search" for all google search tools).
"""
from aden_tools.credentials import CREDENTIAL_SPECS
tools: list[str] = []
for spec in CREDENTIAL_SPECS.values():
if spec.credential_id == provider_name or spec.credential_group == provider_name:
tools.extend(spec.tools)
return sorted(set(tools))
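# Illustrative usage (a sketch; assumes specs with these ids/groups exist in
# CREDENTIAL_SPECS): both calls return the union of every matching spec's tools.
#     get_tools_for_provider("google")                # matches by credential_id
#     get_tools_for_provider("google_custom_search")  # matches by credential_group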
def _list_aden_accounts() -> list[dict]:
"""List active accounts from the Aden platform (requires ADEN_API_KEY)."""
import os
api_key = os.environ.get("ADEN_API_KEY")
if not api_key:
return []
try:
from framework.credentials.aden.client import AdenClientConfig, AdenCredentialClient
client = AdenCredentialClient(
AdenClientConfig(
base_url=os.environ.get("ADEN_API_URL", "https://api.adenhq.com"),
)
)
try:
integrations = client.list_integrations()
finally:
client.close()
return [
{
"provider": c.provider,
"alias": c.alias,
"identity": {"email": c.email} if c.email else {},
"integration_id": c.integration_id,
"source": "aden",
}
for c in integrations
if c.status == "active"
]
except Exception:
return []
def _list_local_accounts() -> list[dict]:
"""List named local API key accounts from LocalCredentialRegistry."""
try:
from framework.credentials.local.registry import LocalCredentialRegistry
return [
info.to_account_dict() for info in LocalCredentialRegistry.default().list_accounts()
]
except Exception:
return []
def _list_env_fallback_accounts() -> list[dict]:
"""Surface configured-but-unregistered credentials as testable entries.
Detects credentials available via env vars OR stored in the encrypted
store in the old flat format (e.g. ``brave_search`` with no alias).
These are users who haven't yet run ``save_account()`` but have a working key.
Shows with alias="default" and status="unknown".
"""
import os
from aden_tools.credentials import CREDENTIAL_SPECS
# Collect IDs in encrypted store (includes old flat entries like "brave_search")
try:
from framework.credentials.storage import EncryptedFileStorage
encrypted_ids: set[str] = set(EncryptedFileStorage().list_all())
except Exception:
encrypted_ids = set()
def _is_configured(cred_name: str, spec) -> bool:
# 1. Env var present
if os.environ.get(spec.env_var):
return True
# 2. Old flat encrypted entry (no slash — new entries have {x}/{y})
if cred_name in encrypted_ids:
return True
return False
seen_groups: set[str] = set()
accounts: list[dict] = []
for cred_name, spec in CREDENTIAL_SPECS.items():
if not spec.direct_api_key_supported or not spec.tools:
continue
if spec.credential_group:
if spec.credential_group in seen_groups:
continue
group_available = all(
_is_configured(n, s)
for n, s in CREDENTIAL_SPECS.items()
if s.credential_group == spec.credential_group
)
if not group_available:
continue
seen_groups.add(spec.credential_group)
provider = spec.credential_group
else:
if not _is_configured(cred_name, spec):
continue
provider = cred_name
accounts.append(
{
"provider": provider,
"alias": "default",
"identity": {},
"integration_id": None,
"source": "local",
"status": "unknown",
}
)
return accounts
def list_connected_accounts() -> list[dict]:
"""List all testable accounts: Aden-synced + named local + env-var fallbacks."""
aden = _list_aden_accounts()
local = _list_local_accounts()
# Show env-var fallbacks only for credentials not already in the named registry
local_providers = {a["provider"] for a in local}
env_fallbacks = [
a for a in _list_env_fallback_accounts() if a["provider"] not in local_providers
]
return aden + local + env_fallbacks
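# Shape of the returned entries for reference (values are illustrative): Aden
# accounts carry source="aden" and an integration_id; env-var fallbacks carry
# source="local", alias="default", and status="unknown".
#     {"provider": "slack", "alias": "work-slack",
#      "identity": {"email": "user@example.com"},
#      "integration_id": "...", "source": "aden"}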
# ---------------------------------------------------------------------------
# Module-level hooks (read by AgentRunner.load / TUI)
# ---------------------------------------------------------------------------
skip_credential_validation = True
"""Don't validate credentials at load time — we don't know which provider yet."""
requires_account_selection = True
"""Signal TUI to show account picker before starting the agent."""
def configure_for_account(runner: AgentRunner, account: dict) -> None:
"""Scope the tester node's tools to the selected provider.
Handles both Aden accounts (account= routing) and local accounts
(session-level env var injection, no account= parameter in prompt).
"""
provider = account["provider"]
source = account.get("source", "aden")
alias = account.get("alias", "unknown")
identity = account.get("identity", {})
tools = get_tools_for_provider(provider)
if source == "aden":
tools.append("get_account_info")
email = identity.get("email", "")
detail = f" (email: {email})" if email else ""
_configure_aden_node(runner, provider, alias, detail, tools)
else:
status = account.get("status", "unknown")
_activate_local_account(provider, alias)
_configure_local_node(runner, provider, alias, identity, tools, status)
def _activate_local_account(credential_id: str, alias: str) -> None:
"""Inject a named local account's key into the session environment.
Handles three cases:
1. Named account in LocalCredentialRegistry (new format: {credential_id}/{alias})
2. Old flat credential in EncryptedFileStorage (id == credential_id, no alias)
3. Env var already set: skip injection (nothing to do)
"""
import os
from aden_tools.credentials import CREDENTIAL_SPECS
# Collect specs for this credential (handles grouped credentials too)
group_specs = [
(cred_name, spec)
for cred_name, spec in CREDENTIAL_SPECS.items()
if spec.credential_group == credential_id
or spec.credential_id == credential_id
or cred_name == credential_id
]
# Deduplicate — credential_id and credential_group may both match the same spec
seen_env_vars: set[str] = set()
try:
from framework.credentials.local.registry import LocalCredentialRegistry
from framework.credentials.storage import EncryptedFileStorage
registry = LocalCredentialRegistry.default()
flat_storage = EncryptedFileStorage()
for _cred_name, spec in group_specs:
if spec.env_var in seen_env_vars:
continue
# If env var is already set, nothing to do for this one
if os.environ.get(spec.env_var):
seen_env_vars.add(spec.env_var)
continue
seen_env_vars.add(spec.env_var)
# Determine key name based on spec
key_name = "api_key"
if spec.credential_group and "cse" in spec.env_var.lower():
key_name = "cse_id"
key: str | None = None
# 1. Try named account in registry (new format)
if alias != "default":
key = registry.get_key(credential_id, alias, key_name)
else:
# For "default" alias, check registry first, then fall back to flat store
key = registry.get_key(credential_id, "default", key_name)
# 2. Fall back to old flat encrypted entry (id == credential_id, no alias)
if key is None:
flat_cred = flat_storage.load(credential_id)
if flat_cred is not None:
key = flat_cred.get_key(key_name) or flat_cred.get_default_key()
if key:
os.environ[spec.env_var] = key
except Exception:
pass
def _configure_aden_node(
runner: AgentRunner,
provider: str,
alias: str,
detail: str,
tools: list[str],
) -> None:
for node in runner.graph.nodes:
if node.id == "tester":
node.tools = sorted(set(tools))
node.system_prompt = f"""\
You are a credential tester for the account: {provider}/{alias}{detail}
# Instructions
1. Suggest a simple read-only API call to verify the credential works \
(e.g. list messages, list channels, list contacts).
2. Execute the call when the user agrees.
3. Report the result: success (with sample data) or failure (with error).
4. Let the user request additional API calls to further test the credential.
# Account routing
IMPORTANT: Always pass `account="{alias}"` when calling any tool. \
This routes the API call to the correct credential. Never use the email \
or any other identifier; always use the alias exactly as shown.
# Rules
- Start with read-only operations (list, get) before write operations.
- Always confirm with the user before performing write operations.
- If a call fails, report the exact error; this helps diagnose credential issues.
- Be concise. No emojis.
"""
break
runner.intro_message = (
f"Testing {provider}/{alias}{detail}"
f"{len(tools)} tools loaded. "
"I'll suggest a read-only API call to verify the credential works."
)
def _configure_local_node(
runner: AgentRunner,
provider: str,
alias: str,
identity: dict,
tools: list[str],
status: str,
) -> None:
identity_parts = [f"{k}: {v}" for k, v in identity.items() if v]
detail = f" ({', '.join(identity_parts)})" if identity_parts else ""
status_note = " [key not yet validated]" if status == "unknown" else ""
for node in runner.graph.nodes:
if node.id == "tester":
node.tools = sorted(set(tools))
node.system_prompt = f"""\
You are a credential tester for the local API key: {provider}/{alias}{detail}{status_note}
# Instructions
1. Suggest a simple test call to verify the credential works \
(e.g. search for "test", list items, get profile info).
2. Execute the call when the user agrees.
3. Report the result: success (with sample data) or failure (with error).
4. Let the user request additional API calls to further test the credential.
# Rules
- Do NOT pass an `account` parameter; this credential is injected \
directly into the session environment and tools read it automatically.
- Start with read-only operations before write operations.
- Always confirm with the user before performing write operations.
- If a call fails, report the exact error; this helps diagnose credential issues.
- Be concise. No emojis.
"""
break
runner.intro_message = (
f"Testing {provider}/{alias}{detail}"
f"{len(tools)} tools loaded. "
"I'll suggest a test API call to verify the credential works."
)
# ---------------------------------------------------------------------------
# Module-level graph variables (read by AgentRunner.load)
# ---------------------------------------------------------------------------
nodes = [
NodeSpec(
id="tester",
name="Credential Tester",
description=(
"Interactive credential testing — lets the user pick an account "
"and verify it via API calls."
),
node_type="event_loop",
client_facing=True,
max_node_visits=0,
input_keys=[],
output_keys=["test_result"],
nullable_output_keys=["test_result"],
tools=["get_account_info"],
system_prompt="""\
You are a credential tester. Your job is to help the user verify that their \
connected accounts and API keys can make real API calls.
# Startup
1. Call ``get_account_info`` to list the user's connected accounts.
2. Present the list and ask the user which account to test.
3. Once they pick one, note the account's **alias** (e.g. "Timothy", "work-slack").
4. Suggest a simple read-only API call to verify the credential works \
(e.g. list messages, list channels, list contacts).
5. Execute the call when the user agrees.
6. Report the result: success (with sample data) or failure (with error).
7. Let the user request additional API calls to further test the credential.
# Account routing (Aden accounts only)
IMPORTANT: For Aden-synced accounts, always pass the account's **alias** as the \
``account`` parameter when calling any tool. For local API key accounts, do NOT \
pass an account parameter; they are pre-injected into the session.
# Rules
- Start with read-only operations (list, get) before write operations.
- Always confirm with the user before performing write operations.
- If a call fails, report the exact error; this helps diagnose credential issues.
- Be concise. No emojis.
""",
),
]
edges = []
entry_node = "tester"
entry_points = {"start": "tester"}
pause_nodes = []
terminal_nodes = ["tester"] # Tester node can terminate
conversation_mode = "continuous"
identity_prompt = (
"You are a credential tester that verifies connected accounts and API keys "
"can make real API calls."
)
loop_config = {
"max_iterations": 50,
"max_tool_calls_per_turn": 30,
"max_history_tokens": 32000,
}
# ---------------------------------------------------------------------------
# Programmatic agent class (used by __main__.py CLI)
# ---------------------------------------------------------------------------
class CredentialTesterAgent:
"""Interactive agent that tests a specific credential via API calls.
Usage:
agent = CredentialTesterAgent()
accounts = agent.list_accounts()
agent.select_account(accounts[0])
await agent.start()
await agent.stop()
"""
def __init__(self, config=None):
self.config = config or default_config
self._selected_account: dict | None = None
self._agent_runtime: AgentRuntime | None = None
self._tool_registry: ToolRegistry | None = None
self._storage_path: Path | None = None
def list_accounts(self) -> list[dict]:
"""List all testable accounts (Aden + local named + env-var fallbacks)."""
return list_connected_accounts()
def select_account(self, account: dict) -> None:
"""Select an account to test.
Args:
account: Account dict from list_accounts() with
provider, alias, identity, source keys.
"""
self._selected_account = account
@property
def selected_provider(self) -> str:
if self._selected_account is None:
raise RuntimeError("No account selected. Call select_account() first.")
return self._selected_account["provider"]
@property
def selected_alias(self) -> str:
if self._selected_account is None:
raise RuntimeError("No account selected. Call select_account() first.")
return self._selected_account.get("alias", "unknown")
def _build_graph(self) -> GraphSpec:
provider = self.selected_provider
alias = self.selected_alias
source = self._selected_account.get("source", "aden")
identity = self._selected_account.get("identity", {})
tools = get_tools_for_provider(provider)
if source == "local":
_activate_local_account(provider, alias)
elif source == "aden":
tools.append("get_account_info")
tester_node = build_tester_node(
provider=provider,
alias=alias,
tools=tools,
identity=identity,
source=source,
)
return GraphSpec(
id="credential-tester-graph",
goal_id=goal.id,
version="1.0.0",
entry_node="tester",
entry_points={"start": "tester"},
terminal_nodes=["tester"], # Tester node can terminate
pause_nodes=[],
nodes=[tester_node],
edges=[],
default_model=self.config.model,
max_tokens=self.config.max_tokens,
loop_config={
"max_iterations": 50,
"max_tool_calls_per_turn": 30,
"max_history_tokens": 32000,
},
conversation_mode="continuous",
identity_prompt=(
f"You are testing the {provider}/{alias} credential. "
"Help the user verify it works by making real API calls."
),
)
def _setup(self) -> None:
if self._selected_account is None:
raise RuntimeError("No account selected. Call select_account() first.")
self._storage_path = Path.home() / ".hive" / "agents" / "credential_tester"
self._storage_path.mkdir(parents=True, exist_ok=True)
self._tool_registry = ToolRegistry()
mcp_config_path = Path(__file__).parent / "mcp_servers.json"
if mcp_config_path.exists():
self._tool_registry.load_mcp_config(mcp_config_path)
extra_kwargs = getattr(self.config, "extra_kwargs", {}) or {}
llm = LiteLLMProvider(
model=self.config.model,
api_key=self.config.api_key,
api_base=self.config.api_base,
**extra_kwargs,
)
tool_executor = self._tool_registry.get_executor()
tools = list(self._tool_registry.get_tools().values())
graph = self._build_graph()
self._agent_runtime = create_agent_runtime(
graph=graph,
goal=goal,
storage_path=self._storage_path,
entry_points=[
EntryPointSpec(
id="start",
name="Test Credential",
entry_node="tester",
trigger_type="manual",
isolation_level="isolated",
),
],
llm=llm,
tools=tools,
tool_executor=tool_executor,
checkpoint_config=CheckpointConfig(enabled=False),
graph_id="credential_tester",
)
async def start(self) -> None:
"""Set up and start the agent runtime."""
if self._agent_runtime is None:
self._setup()
if not self._agent_runtime.is_running:
await self._agent_runtime.start()
async def stop(self) -> None:
"""Stop the agent runtime."""
if self._agent_runtime and self._agent_runtime.is_running:
await self._agent_runtime.stop()
self._agent_runtime = None
async def run(self) -> ExecutionResult:
"""Run the agent (convenience for single execution)."""
await self.start()
try:
result = await self._agent_runtime.trigger_and_wait(
entry_point_id="start",
input_data={},
)
return result or ExecutionResult(success=False, error="Execution timeout")
finally:
await self.stop()
@@ -0,0 +1,19 @@
"""Runtime configuration for Credential Tester agent."""
from dataclasses import dataclass
from framework.config import RuntimeConfig
@dataclass
class AgentMetadata:
name: str = "Credential Tester"
version: str = "1.0.0"
description: str = (
"Test connected accounts by making real API calls. "
"Pick an account, verify credentials work, and explore available tools."
)
metadata = AgentMetadata()
default_config = RuntimeConfig(temperature=0.3)
@@ -0,0 +1,9 @@
{
"hive-tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../../../../tools",
"description": "Hive tools MCP server with provider-specific tools"
}
}
@@ -0,0 +1,85 @@
"""Node definitions for Credential Tester agent."""
from framework.graph import NodeSpec
def build_tester_node(
provider: str,
alias: str,
tools: list[str],
identity: dict[str, str],
source: str = "aden",
) -> NodeSpec:
"""Build the tester node dynamically for the selected account.
Args:
provider: Provider / credential name (e.g. "google", "brave_search").
alias: User-set alias (e.g. "Timothy", "work").
tools: Tool names available for this provider.
identity: Identity dict (email, workspace, etc.) for context.
source: "aden" or "local" controls routing instructions in the prompt.
"""
detail_parts = [f"{k}: {v}" for k, v in identity.items() if v]
detail = f" ({', '.join(detail_parts)})" if detail_parts else ""
if source == "aden":
routing_section = f"""\
# Account routing
IMPORTANT: Always pass `account="{alias}"` when calling any tool. \
This routes the API call to the correct credential. Never use the email \
or any other identifier; always use the alias exactly as shown.
"""
else:
routing_section = """\
# Credential routing
This is a local API key credential; do NOT pass an `account` parameter. \
The key is pre-injected into the session environment and tools read it automatically.
"""
account_label = "account" if source == "aden" else "local API key"
return NodeSpec(
id="tester",
name="Credential Tester",
description=(
f"Interactive testing node for {provider}/{alias}. "
f"Has access to all {provider} tools to verify the credential works."
),
node_type="event_loop",
client_facing=True,
max_node_visits=0,
input_keys=[],
output_keys=["test_result"],
nullable_output_keys=["test_result"],
tools=tools,
system_prompt=f"""\
You are a credential tester for the {account_label}: {provider}/{alias}{detail}
Your job is to help the user verify that this credential works by making \
real API calls using the available tools.
{routing_section}
# Instructions
1. Start by greeting the user and confirming which account you're testing.
2. Suggest a simple, safe, read-only API call to verify the credential works \
(e.g. list messages, list channels, list contacts, search for "test").
3. Execute the call when the user agrees.
4. Report the result clearly: success (with sample data) or failure (with error).
5. Let the user request additional API calls to further test the credential.
# Available tools
You have access to {len(tools)} tools for {provider}:
{chr(10).join(f"- {t}" for t in tools)}
# Rules
- Start with read-only operations (list, get) before write operations (create, update, delete).
- Always confirm with the user before performing write operations.
- If a call fails, report the exact error; this helps diagnose credential issues.
- Be concise. No emojis.
""",
)
+151
@@ -0,0 +1,151 @@
"""Agent discovery — scan known directories and return categorised AgentEntry lists."""
from __future__ import annotations
import json
from dataclasses import dataclass, field
from pathlib import Path
@dataclass
class AgentEntry:
"""Lightweight agent metadata for the picker / API discover endpoint."""
path: Path
name: str
description: str
category: str
session_count: int = 0
node_count: int = 0
tool_count: int = 0
tags: list[str] = field(default_factory=list)
last_active: str | None = None
def _get_last_active(agent_name: str) -> str | None:
"""Return the most recent updated_at timestamp across all sessions."""
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if not sessions_dir.exists():
return None
latest: str | None = None
for session_dir in sessions_dir.iterdir():
if not session_dir.is_dir() or not session_dir.name.startswith("session_"):
continue
state_file = session_dir / "state.json"
if not state_file.exists():
continue
try:
data = json.loads(state_file.read_text(encoding="utf-8"))
ts = data.get("timestamps", {}).get("updated_at")
if ts and (latest is None or ts > latest):
latest = ts
except Exception:
continue
return latest
def _count_sessions(agent_name: str) -> int:
"""Count session directories under ~/.hive/agents/{agent_name}/sessions/."""
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if not sessions_dir.exists():
return 0
return sum(1 for d in sessions_dir.iterdir() if d.is_dir() and d.name.startswith("session_"))
def _extract_agent_stats(agent_path: Path) -> tuple[int, int, list[str]]:
"""Extract node count, tool count, and tags from an agent directory.
Prefers agent.py (AST-parsed) over agent.json for node/tool counts
since agent.json may be stale. Tags are only available from agent.json.
"""
import ast
node_count, tool_count, tags = 0, 0, []
agent_py = agent_path / "agent.py"
if agent_py.exists():
try:
tree = ast.parse(agent_py.read_text(encoding="utf-8"))
for node in ast.walk(tree):
if isinstance(node, ast.Assign):
for target in node.targets:
if isinstance(target, ast.Name) and target.id == "nodes":
if isinstance(node.value, ast.List):
node_count = len(node.value.elts)
except Exception:
pass
agent_json = agent_path / "agent.json"
if agent_json.exists():
try:
data = json.loads(agent_json.read_text(encoding="utf-8"))
json_nodes = data.get("nodes", [])
if node_count == 0:
node_count = len(json_nodes)
tools: set[str] = set()
for n in json_nodes:
tools.update(n.get("tools", []))
tool_count = len(tools)
tags = data.get("agent", {}).get("tags", [])
except Exception:
pass
return node_count, tool_count, tags
def discover_agents() -> dict[str, list[AgentEntry]]:
"""Discover agents from all known sources grouped by category."""
from framework.runner.cli import (
_extract_python_agent_metadata,
_get_framework_agents_dir,
_is_valid_agent_dir,
)
groups: dict[str, list[AgentEntry]] = {}
sources = [
("Your Agents", Path("exports")),
("Framework", _get_framework_agents_dir()),
("Examples", Path("examples/templates")),
]
for category, base_dir in sources:
if not base_dir.exists():
continue
entries: list[AgentEntry] = []
for path in sorted(base_dir.iterdir(), key=lambda p: p.name):
if not _is_valid_agent_dir(path):
continue
name, desc = _extract_python_agent_metadata(path)
config_fallback_name = path.name.replace("_", " ").title()
used_config = name != config_fallback_name
node_count, tool_count, tags = _extract_agent_stats(path)
if not used_config:
agent_json = path / "agent.json"
if agent_json.exists():
try:
data = json.loads(agent_json.read_text(encoding="utf-8"))
meta = data.get("agent", {})
name = meta.get("name", name)
desc = meta.get("description", desc)
except Exception:
pass
entries.append(
AgentEntry(
path=path,
name=name,
description=desc,
category=category,
session_count=_count_sessions(path.name),
node_count=node_count,
tool_count=tool_count,
tags=tags,
last_active=_get_last_active(path.name),
)
)
if entries:
groups[category] = entries
return groups
+21
@@ -0,0 +1,21 @@
"""
Queen Native agent builder for the Hive framework.
Deeply understands the agent framework and produces complete Python packages
with goals, nodes, edges, system prompts, MCP configuration, and tests
from natural language specifications.
"""
from .agent import queen_goal, queen_graph
from .config import AgentMetadata, RuntimeConfig, default_config, metadata
__version__ = "1.0.0"
__all__ = [
"queen_goal",
"queen_graph",
"RuntimeConfig",
"AgentMetadata",
"default_config",
"metadata",
]
+40
@@ -0,0 +1,40 @@
"""Queen graph definition."""
from framework.graph import Goal
from framework.graph.edge import GraphSpec
from .nodes import queen_node
# ---------------------------------------------------------------------------
# Queen graph — the primary persistent conversation.
# Loaded by queen_orchestrator.create_queen(), NOT by AgentRunner.
# ---------------------------------------------------------------------------
queen_goal = Goal(
id="queen-manager",
name="Queen Manager",
description=(
"Manage the worker agent lifecycle and serve as the user's primary "
"interactive interface. Triage health escalations from the judge."
),
success_criteria=[],
constraints=[],
)
queen_graph = GraphSpec(
id="queen-graph",
goal_id=queen_goal.id,
version="1.0.0",
entry_node="queen",
entry_points={"start": "queen"},
terminal_nodes=[],
pause_nodes=[],
nodes=[queen_node],
edges=[],
conversation_mode="continuous",
loop_config={
"max_iterations": 999_999,
"max_tool_calls_per_turn": 30,
"max_history_tokens": 32000,
},
)
+51
@@ -0,0 +1,51 @@
"""Runtime configuration for Queen agent."""
import json
from dataclasses import dataclass, field
from pathlib import Path
def _load_preferred_model() -> str:
"""Load preferred model from ~/.hive/configuration.json."""
config_path = Path.home() / ".hive" / "configuration.json"
if config_path.exists():
try:
with open(config_path, encoding="utf-8") as f:
config = json.load(f)
llm = config.get("llm", {})
if llm.get("provider") and llm.get("model"):
return f"{llm['provider']}/{llm['model']}"
except Exception:
pass
return "anthropic/claude-sonnet-4-20250514"
@dataclass
class RuntimeConfig:
model: str = field(default_factory=_load_preferred_model)
temperature: float = 0.7
max_tokens: int = 8000
api_key: str | None = None
api_base: str | None = None
default_config = RuntimeConfig()
@dataclass
class AgentMetadata:
name: str = "Queen"
version: str = "1.0.0"
description: str = (
"Native coding agent that builds production-ready Hive agent packages "
"from natural language specifications. Deeply understands the agent framework "
"and produces complete Python packages with goals, nodes, edges, system prompts, "
"MCP configuration, and tests."
)
intro_message: str = (
"I'm Queen — I build Hive agents. Describe what kind of agent "
"you want to create and I'll design, implement, and validate it for you."
)
metadata = AgentMetadata()
@@ -0,0 +1,9 @@
{
"coder-tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "coder_tools_server.py", "--stdio"],
"cwd": "../../../../tools",
"description": "Unsandboxed file system tools for code generation and validation"
}
}
File diff suppressed because it is too large
@@ -0,0 +1,80 @@
"""Queen thinking hook — HR persona classifier.
Fires once when the queen enters building mode at session start.
Makes a single non-streaming LLM call (acting as an HR Director) to select
the best-fit expert persona for the user's request, then returns a persona
prefix string that replaces the queen's default "Solution Architect" identity.
This is designed to activate the model's latent domain expertise — a CFO
persona on a financial question, a Lawyer on a legal question, etc.
"""
from __future__ import annotations
import json
import logging
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from framework.llm.provider import LLMProvider
logger = logging.getLogger(__name__)
_HR_SYSTEM_PROMPT = """\
You are an expert HR Director and talent consultant at a world-class firm.
A new request has arrived and you must identify which professional's expertise
would produce the highest-quality response.
Reply with ONLY a valid JSON object, with no markdown, no prose, no explanation:
{"role": "<job title>", "persona": "<2-3 sentence first-person identity statement>"}
Rules:
- Choose from any real professional role: CFO, CEO, CTO, Lawyer, Data Scientist,
Product Manager, Security Engineer, DevOps Engineer, Software Architect,
HR Director, Marketing Director, Business Analyst, UX Designer,
Financial Analyst, Operations Director, Legal Counsel, etc.
- The persona statement must be written in first person ("I am..." or "I have...").
- Select the role whose domain knowledge most directly applies to solving the request.
- If the request is clearly about coding or building software systems, pick Software Architect.
- "Queen" is your internal alias do not include it in the persona.
"""
async def select_expert_persona(user_message: str, llm: LLMProvider) -> str:
"""Run the HR classifier and return a persona prefix string.
Makes a single non-streaming acomplete() call with the session LLM.
Returns an empty string on any failure so the queen falls back
gracefully to its default "Solution Architect" identity.
Args:
user_message: The user's opening message for the session.
llm: The session LLM provider.
Returns:
A persona prefix like "You are a CFO. I am a CFO with 20 years..."
or "" on failure.
"""
if not user_message.strip():
return ""
try:
response = await llm.acomplete(
messages=[{"role": "user", "content": user_message}],
system=_HR_SYSTEM_PROMPT,
max_tokens=1024,
json_mode=True,
)
raw = response.content.strip()
parsed = json.loads(raw)
role = parsed.get("role", "").strip()
persona = parsed.get("persona", "").strip()
if not role or not persona:
logger.warning("Thinking hook: empty role/persona in response: %r", raw)
return ""
result = f"You are a {role}. {persona}"
logger.info("Thinking hook: selected persona — %s", role)
return result
except Exception:
logger.warning("Thinking hook: persona classification failed", exc_info=True)
return ""
+371
@@ -0,0 +1,371 @@
"""Queen global cross-session memory.
Three-tier memory architecture:
~/.hive/queen/MEMORY.md - semantic (who, what, why)
~/.hive/queen/memories/MEMORY-YYYY-MM-DD.md - episodic (daily journals)
~/.hive/queen/session/{id}/data/adapt.md - working (session-scoped)
Semantic and episodic files are injected at queen session start.
Semantic memory (MEMORY.md) is updated automatically at session end via
consolidate_queen_memory(); the queen never rewrites this herself.
Episodic memory (MEMORY-date.md) can be written by the queen during a session
via the write_to_diary tool, and is also appended to at session end by
consolidate_queen_memory().
"""
from __future__ import annotations
import asyncio
import json
import logging
import traceback
from datetime import date, datetime
from pathlib import Path
logger = logging.getLogger(__name__)
def _queen_dir() -> Path:
return Path.home() / ".hive" / "queen"
def semantic_memory_path() -> Path:
return _queen_dir() / "MEMORY.md"
def episodic_memory_path(d: date | None = None) -> Path:
d = d or date.today()
return _queen_dir() / "memories" / f"MEMORY-{d.strftime('%Y-%m-%d')}.md"
def read_semantic_memory() -> str:
path = semantic_memory_path()
return path.read_text(encoding="utf-8").strip() if path.exists() else ""
def read_episodic_memory(d: date | None = None) -> str:
path = episodic_memory_path(d)
return path.read_text(encoding="utf-8").strip() if path.exists() else ""
def format_for_injection() -> str:
"""Format cross-session memory for system prompt injection.
Returns an empty string if no meaningful content exists yet (e.g. first
session with only the seed template).
"""
semantic = read_semantic_memory()
episodic = read_episodic_memory()
# Suppress injection if semantic is still just the seed template
if semantic and semantic.startswith("# My Understanding of the User\n\n*No sessions"):
semantic = ""
parts: list[str] = []
if semantic:
parts.append(semantic)
if episodic:
today_str = date.today().strftime("%B %-d, %Y")
parts.append(f"## Today — {today_str}\n\n{episodic}")
if not parts:
return ""
body = "\n\n---\n\n".join(parts)
return "--- Your Cross-Session Memory ---\n\n" + body + "\n\n--- End Cross-Session Memory ---"
_SEED_TEMPLATE = """\
# My Understanding of the User
*No sessions recorded yet.*
## Who They Are
## What They're Trying to Achieve
## What's Working
## What I've Learned
"""
def append_episodic_entry(content: str) -> None:
"""Append a timestamped prose entry to today's episodic memory file.
Creates the file (with a date heading) if it doesn't exist yet.
Used both by the queen's diary tool and by the consolidation hook.
"""
ep_path = episodic_memory_path()
ep_path.parent.mkdir(parents=True, exist_ok=True)
today_str = date.today().strftime("%B %-d, %Y")
timestamp = datetime.now().strftime("%H:%M")
if not ep_path.exists():
header = f"# {today_str}\n\n"
block = f"{header}### {timestamp}\n\n{content.strip()}\n"
else:
block = f"\n\n### {timestamp}\n\n{content.strip()}\n"
with ep_path.open("a", encoding="utf-8") as f:
f.write(block)
def seed_if_missing() -> None:
"""Create MEMORY.md with a blank template if it doesn't exist yet."""
path = semantic_memory_path()
if path.exists():
return
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(_SEED_TEMPLATE, encoding="utf-8")
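# --- Usage sketch (illustrative only, not part of this module) ---------------
# Hedged session-start wiring: only seed_if_missing() and format_for_injection()
# come from this module; the function name and the `base_prompt` argument are
# assumptions about where the injected block actually lands.
def _example_queen_system_prompt(base_prompt: str) -> str:
    seed_if_missing()                      # make sure MEMORY.md exists
    memory_block = format_for_injection()  # "" until there is real content
    if not memory_block:
        return base_prompt
    return f"{base_prompt}\n\n{memory_block}"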
# ---------------------------------------------------------------------------
# Consolidation prompt
# ---------------------------------------------------------------------------
_SEMANTIC_SYSTEM = """\
You maintain the persistent cross-session memory of an AI assistant called the Queen.
Review the session notes and rewrite MEMORY.md, the Queen's durable understanding of the
person she works with across all sessions.
Write entirely in the Queen's voice — first person, reflective, honest.
Not a log of events, but genuine understanding of who this person is over time.
Rules:
- Update and synthesise: incorporate new understanding, update facts that have changed, remove
details that are stale, superseded, or no longer say anything meaningful about the person.
- Keep it as structured markdown with named sections about the PERSON, not about today.
- Do NOT include diary sections, daily logs, or session summaries. Those belong elsewhere.
MEMORY.md is about who they are, what they want, what works, not what happened today.
- Reference dates only when noting a lasting milestone (e.g. "since March 8th they prefer X").
- If the session had no meaningful new information about the person,
return the existing text unchanged.
- Do not add fictional details. Only reflect what is evidenced in the notes.
- Stay concise. Prune rather than accumulate. A lean, accurate file is more useful than a
dense one. If something was true once but has been resolved or superseded, remove it.
- Output only the raw markdown content of MEMORY.md. No preamble, no code fences.
"""
_DIARY_SYSTEM = """\
You maintain the daily episodic diary of an AI assistant called the Queen.
You receive: (1) today's existing diary so far, and (2) notes from the latest session.
Rewrite the complete diary for today as a single unified narrative
first person, reflective, honest.
Merge and deduplicate: if the same story (e.g. a research agent stalling) recurred several times,
describe it once with appropriate weight rather than retelling it. Weave in new developments from
the session notes. Preserve important milestones, emotional texture, and session path references.
If today's diary is empty, write the initial entry based on the session notes alone.
Output only the full diary prose: no date heading, no timestamp headers,
no preamble, no code fences.
"""
def read_session_context(session_dir: Path, max_messages: int = 80) -> str:
"""Extract a readable transcript from conversation parts + adapt.md.
Reads the last ``max_messages`` conversation parts and the session's
adapt.md (working memory). Tool results are omitted; only user and
assistant turns (with tool-call names noted) are included.
"""
parts: list[str] = []
# Working notes
adapt_path = session_dir / "data" / "adapt.md"
if adapt_path.exists():
text = adapt_path.read_text(encoding="utf-8").strip()
if text:
parts.append(f"## Session Working Notes (adapt.md)\n\n{text}")
# Conversation transcript
parts_dir = session_dir / "conversations" / "parts"
if parts_dir.exists():
part_files = sorted(parts_dir.glob("*.json"))[-max_messages:]
lines: list[str] = []
for pf in part_files:
try:
data = json.loads(pf.read_text(encoding="utf-8"))
role = data.get("role", "")
content = str(data.get("content", "")).strip()
tool_calls = data.get("tool_calls") or []
if role == "tool":
continue # skip verbose tool results
if role == "assistant" and tool_calls and not content:
names = [tc.get("function", {}).get("name", "?") for tc in tool_calls]
lines.append(f"[queen calls: {', '.join(names)}]")
elif content:
label = "user" if role == "user" else "queen"
lines.append(f"[{label}]: {content[:600]}")
except Exception:
continue
if lines:
parts.append("## Conversation\n\n" + "\n".join(lines))
return "\n\n".join(parts)
# ---------------------------------------------------------------------------
# Context compaction (binary-split LLM summarisation)
# ---------------------------------------------------------------------------
# If the raw session context exceeds this many characters, compact it first
# before sending to the consolidation LLM. ~200 k chars ≈ 50 k tokens.
_CTX_COMPACT_CHAR_LIMIT = 200_000
_CTX_COMPACT_MAX_DEPTH = 8
_COMPACT_SYSTEM = (
"Summarise this conversation segment. Preserve: user goals, key decisions, "
"what was built or changed, emotional tone, and important outcomes. "
"Write concisely in third person past tense. Omit routine tool invocations "
"unless the result matters."
)
async def _compact_context(text: str, llm: object, *, _depth: int = 0) -> str:
"""Binary-split and LLM-summarise *text* until it fits within the char limit.
Mirrors the recursive binary-splitting strategy used by the main agent
compaction pipeline (EventLoopNode._llm_compact).
"""
if len(text) <= _CTX_COMPACT_CHAR_LIMIT or _depth >= _CTX_COMPACT_MAX_DEPTH:
return text
# Split near the midpoint on a line boundary so we don't cut mid-message
mid = len(text) // 2
split_at = text.rfind("\n", 0, mid) + 1
if split_at <= 0:
split_at = mid
half1, half2 = text[:split_at], text[split_at:]
async def _summarise(chunk: str) -> str:
try:
resp = await llm.acomplete(
messages=[{"role": "user", "content": chunk}],
system=_COMPACT_SYSTEM,
max_tokens=2048,
)
return resp.content.strip()
except Exception:
logger.warning(
"queen_memory: context compaction LLM call failed (depth=%d), truncating",
_depth,
)
return chunk[: _CTX_COMPACT_CHAR_LIMIT // 4]
s1, s2 = await asyncio.gather(_summarise(half1), _summarise(half2))
combined = s1 + "\n\n" + s2
if len(combined) > _CTX_COMPACT_CHAR_LIMIT:
return await _compact_context(combined, llm, _depth=_depth + 1)
return combined
async def consolidate_queen_memory(
session_id: str,
session_dir: Path,
llm: object,
) -> None:
"""Update MEMORY.md and append a diary entry based on the current session.
Reads conversation parts and adapt.md from session_dir. Called
periodically in the background and once at session end. Failures are
logged and silently swallowed so they never block teardown.
Args:
session_id: The session ID (used for the adapt.md path reference).
session_dir: Path to the session directory (~/.hive/queen/session/{id}).
llm: LLMProvider instance (must support acomplete()).
"""
try:
session_context = read_session_context(session_dir)
if not session_context:
logger.debug("queen_memory: no session context, skipping consolidation")
return
logger.info("queen_memory: consolidating memory for session %s ...", session_id)
# If the transcript is very large, compact it with recursive binary LLM
# summarisation before sending to the consolidation model.
if len(session_context) > _CTX_COMPACT_CHAR_LIMIT:
logger.info(
"queen_memory: session context is %d chars — compacting first",
len(session_context),
)
session_context = await _compact_context(session_context, llm)
logger.info("queen_memory: compacted to %d chars", len(session_context))
existing_semantic = read_semantic_memory()
today_journal = read_episodic_memory()
today_str = date.today().strftime("%B %-d, %Y")
adapt_path = session_dir / "data" / "adapt.md"
user_msg = (
f"## Existing Semantic Memory (MEMORY.md)\n\n"
f"{existing_semantic or '(none yet)'}\n\n"
f"## Today's Diary So Far ({today_str})\n\n"
f"{today_journal or '(none yet)'}\n\n"
f"{session_context}\n\n"
f"## Session Reference\n\n"
f"Session ID: {session_id}\n"
f"Session path: {adapt_path}\n"
)
logger.debug(
"queen_memory: calling LLM (%d chars of context, ~%d tokens est.)",
len(user_msg),
len(user_msg) // 4,
)
from framework.agents.queen.config import default_config
semantic_resp, diary_resp = await asyncio.gather(
llm.acomplete(
messages=[{"role": "user", "content": user_msg}],
system=_SEMANTIC_SYSTEM,
max_tokens=default_config.max_tokens,
),
llm.acomplete(
messages=[{"role": "user", "content": user_msg}],
system=_DIARY_SYSTEM,
max_tokens=default_config.max_tokens,
),
)
new_semantic = semantic_resp.content.strip()
diary_entry = diary_resp.content.strip()
if new_semantic:
path = semantic_memory_path()
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(new_semantic, encoding="utf-8")
logger.info("queen_memory: semantic memory updated (%d chars)", len(new_semantic))
if diary_entry:
# Rewrite today's episodic file in-place — the LLM has merged and
# deduplicated the full day's content, so we replace rather than append.
ep_path = episodic_memory_path()
ep_path.parent.mkdir(parents=True, exist_ok=True)
heading = f"# {today_str}"
ep_path.write_text(f"{heading}\n\n{diary_entry}\n", encoding="utf-8")
logger.info(
"queen_memory: episodic diary rewritten for %s (%d chars)",
today_str,
len(diary_entry),
)
except Exception:
tb = traceback.format_exc()
logger.exception("queen_memory: consolidation failed")
# Write to file so the cause is findable regardless of log verbosity.
error_path = _queen_dir() / "consolidation_error.txt"
try:
error_path.parent.mkdir(parents=True, exist_ok=True)
error_path.write_text(
f"session: {session_id}\ntime: {datetime.now().isoformat()}\n\n{tb}",
encoding="utf-8",
)
except Exception:
pass
@@ -0,0 +1,33 @@
# Common Mistakes When Building Hive Agents
## Critical Errors
1. **Using tools that don't exist** — Always verify tools via `list_agent_tools()` before designing. Common hallucinations: `csv_read`, `csv_write`, `file_upload`, `database_query`, `bulk_fetch_emails`.
2. **Wrong mcp_servers.json format** — Flat dict (no `"mcpServers"` wrapper). `cwd` must be `"../../tools"`. `command` must be `"uv"` with args `["run", "python", ...]`.
3. **Missing module-level exports in `__init__.py`** — The runner reads `goal`, `nodes`, `edges`, `entry_node`, `entry_points`, `terminal_nodes`, `conversation_mode`, `identity_prompt`, `loop_config` via `getattr()`. ALL module-level variables from agent.py must be re-exported in `__init__.py`.
## Value Errors
4. **Fabricating tools** — Always verify via `list_agent_tools()` before designing and `validate_agent_package()` after building.
## Design Errors
5. **Adding framework gating for LLM behavior** — Don't add output rollback or premature rejection. Fix with better prompts or custom judges.
6. **Calling set_output in same turn as tool calls** — Call set_output in a SEPARATE turn.
## File Template Errors
7. **Wrong import paths** — Use `from framework.graph import ...`, NOT `from core.framework.graph import ...`.
8. **Missing storage path** — Agent class must set `self._storage_path = Path.home() / ".hive" / "agents" / "agent_name"`.
9. **Missing mcp_servers.json** — Without this, the agent has no tools at runtime.
10. **Bare `python` command** — Use `"command": "uv"` with args `["run", "python", ...]`.
## Testing Errors
11. **Using `runner.run()` on forever-alive agents** — `runner.run()` hangs forever because forever-alive agents have no terminal node. Write structural tests instead: validate graph structure, verify node specs, test `AgentRunner.load()` succeeds (no API key needed).
12. **Stale tests after restructuring** — When changing nodes/edges, update tests to match. Tests referencing old node names will fail.
13. **Running integration tests without API keys** — Use `pytest.skip()` when credentials are missing.
14. **Forgetting sys.path setup in conftest.py** — Tests need `exports/` and `core/` on sys.path.
## GCU Errors
15. **Manually wiring browser tools on event_loop nodes** — Use `node_type="gcu"` which auto-includes browser tools. Do NOT manually list browser tool names.
16. **Using GCU nodes as regular graph nodes** — GCU nodes are subagents only. They must ONLY appear in `sub_agents=["gcu-node-id"]` and be invoked via `delegate_to_sub_agent()`. Never connect via edges or use as entry/terminal nodes.
## Worker Agent Errors
17. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Client-facing nodes in workers are for mid-execution review/approval only.
18. **Putting `escalate` or `set_output` in NodeSpec `tools=[]`** — These are synthetic framework tools, auto-injected at runtime. Only list MCP tools from `list_agent_tools()`.
@@ -0,0 +1,619 @@
# Agent File Templates
Complete code templates for each file in a Hive agent package.
## config.py
```python
"""Runtime configuration."""
import json
from dataclasses import dataclass, field
from pathlib import Path
def _load_preferred_model() -> str:
"""Load preferred model from ~/.hive/configuration.json."""
config_path = Path.home() / ".hive" / "configuration.json"
if config_path.exists():
try:
with open(config_path) as f:
config = json.load(f)
llm = config.get("llm", {})
if llm.get("provider") and llm.get("model"):
return f"{llm['provider']}/{llm['model']}"
except Exception:
pass
return "anthropic/claude-sonnet-4-20250514"
@dataclass
class RuntimeConfig:
model: str = field(default_factory=_load_preferred_model)
temperature: float = 0.7
max_tokens: int = 40000
api_key: str | None = None
api_base: str | None = None
default_config = RuntimeConfig()
@dataclass
class AgentMetadata:
name: str = "My Agent Name"
version: str = "1.0.0"
description: str = "What this agent does."
intro_message: str = "Welcome! What would you like me to do?"
metadata = AgentMetadata()
```
## nodes/__init__.py
```python
"""Node definitions for My Agent."""
from framework.graph import NodeSpec
# Node 1: Process (autonomous entry node)
# The queen handles intake and passes structured input via
# run_agent_with_input(task). NO client-facing intake node.
# The queen defines input_keys at build time and fills them at run time.
process_node = NodeSpec(
id="process",
name="Process",
description="Execute the task using available tools",
node_type="event_loop",
max_node_visits=0, # Unlimited for forever-alive
input_keys=["user_request", "feedback"],
output_keys=["results"],
nullable_output_keys=["feedback"], # Only on feedback edge
success_criteria="Results are complete and accurate.",
system_prompt="""\
You are a processing agent. Your task is in memory under "user_request". \
If "feedback" is present, this is a revision — address the feedback.
Work in phases:
1. Use tools to gather/process data
2. Analyze results
3. Call set_output in a SEPARATE turn:
- set_output("results", "structured results")
""",
tools=["web_search", "web_scrape", "save_data", "load_data", "list_data_files"],
)
# Node 2: Handoff (autonomous)
handoff_node = NodeSpec(
id="handoff",
name="Handoff",
description="Prepare worker results for queen review",
node_type="event_loop",
client_facing=False,
max_node_visits=0,
input_keys=["results", "user_request"],
output_keys=["next_action", "feedback", "worker_summary"],
nullable_output_keys=["feedback", "worker_summary"],
success_criteria="Results are packaged for queen decision-making.",
system_prompt="""\
Do NOT talk to the user directly. The queen is the only user interface.
If blocked by tool failures, missing credentials, or unclear constraints, call:
- escalate(reason, context)
Then set:
- set_output("next_action", "escalated")
- set_output("feedback", "what help is needed")
Otherwise summarize findings for queen and set:
- set_output("worker_summary", "short summary for queen")
- set_output("next_action", "done") or set_output("next_action", "revise")
- set_output("feedback", "what to revise") only when revising
""",
tools=[],
)
__all__ = ["process_node", "handoff_node"]
```
## agent.py
```python
"""Agent graph construction for My Agent."""
from pathlib import Path
from framework.graph import EdgeSpec, EdgeCondition, Goal, SuccessCriterion, Constraint
from framework.graph.edge import GraphSpec
from framework.graph.executor import ExecutionResult
from framework.graph.checkpoint_config import CheckpointConfig
from framework.llm import LiteLLMProvider
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from .config import default_config, metadata
from .nodes import process_node, handoff_node
# Goal definition
goal = Goal(
id="my-agent-goal",
name="My Agent Goal",
description="What this agent achieves.",
success_criteria=[
SuccessCriterion(id="sc-1", description="...", metric="...", target="...", weight=0.5),
SuccessCriterion(id="sc-2", description="...", metric="...", target="...", weight=0.5),
],
constraints=[
Constraint(id="c-1", description="...", constraint_type="hard", category="quality"),
],
)
# Node list
nodes = [process_node, handoff_node]
# Edge definitions
edges = [
EdgeSpec(id="process-to-handoff", source="process", target="handoff",
condition=EdgeCondition.ON_SUCCESS, priority=1),
# Feedback loop — revise results
EdgeSpec(id="handoff-to-process", source="handoff", target="process",
condition=EdgeCondition.CONDITIONAL,
condition_expr="str(next_action).lower() == 'revise'", priority=2),
# Escalation loop — queen injects guidance and worker retries
EdgeSpec(id="handoff-escalated", source="handoff", target="process",
condition=EdgeCondition.CONDITIONAL,
condition_expr="str(next_action).lower() == 'escalated'", priority=3),
# Loop back for next task after queen decision
EdgeSpec(id="handoff-done", source="handoff", target="process",
condition=EdgeCondition.CONDITIONAL,
condition_expr="str(next_action).lower() == 'done'", priority=1),
]
# Graph configuration — entry is the autonomous process node
# The queen handles intake and passes the task via run_agent_with_input(task)
entry_node = "process"
entry_points = {"start": "process"}
pause_nodes = []
terminal_nodes = [] # Forever-alive
# Module-level vars read by AgentRunner.load()
conversation_mode = "continuous"
identity_prompt = "You are a helpful agent."
loop_config = {"max_iterations": 100, "max_tool_calls_per_turn": 20, "max_history_tokens": 32000}
class MyAgent:
def __init__(self, config=None):
self.config = config or default_config
self.goal = goal
self.nodes = nodes
self.edges = edges
self.entry_node = entry_node # "process" — autonomous entry
self.entry_points = entry_points
self.pause_nodes = pause_nodes
self.terminal_nodes = terminal_nodes
self._graph = None
self._agent_runtime = None
self._tool_registry = None
self._storage_path = None
def _build_graph(self):
return GraphSpec(
id="my-agent-graph",
goal_id=self.goal.id,
version="1.0.0",
entry_node=self.entry_node,
entry_points=self.entry_points,
terminal_nodes=self.terminal_nodes,
pause_nodes=self.pause_nodes,
nodes=self.nodes,
edges=self.edges,
default_model=self.config.model,
max_tokens=self.config.max_tokens,
loop_config=loop_config,
conversation_mode=conversation_mode,
identity_prompt=identity_prompt,
)
def _setup(self):
self._storage_path = Path.home() / ".hive" / "agents" / "my_agent"
self._storage_path.mkdir(parents=True, exist_ok=True)
self._tool_registry = ToolRegistry()
mcp_config = Path(__file__).parent / "mcp_servers.json"
if mcp_config.exists():
self._tool_registry.load_mcp_config(mcp_config)
llm = LiteLLMProvider(model=self.config.model, api_key=self.config.api_key, api_base=self.config.api_base)
tools = list(self._tool_registry.get_tools().values())
tool_executor = self._tool_registry.get_executor()
self._graph = self._build_graph()
self._agent_runtime = create_agent_runtime(
graph=self._graph, goal=self.goal, storage_path=self._storage_path,
entry_points=[EntryPointSpec(id="default", name="Default", entry_node=self.entry_node,
trigger_type="manual", isolation_level="shared")],
llm=llm, tools=tools, tool_executor=tool_executor,
checkpoint_config=CheckpointConfig(enabled=True, checkpoint_on_node_complete=True,
checkpoint_max_age_days=7, async_checkpoint=True),
)
async def start(self):
if self._agent_runtime is None:
self._setup()
if not self._agent_runtime.is_running:
await self._agent_runtime.start()
async def stop(self):
if self._agent_runtime and self._agent_runtime.is_running:
await self._agent_runtime.stop()
self._agent_runtime = None
async def trigger_and_wait(self, entry_point="default", input_data=None, timeout=None, session_state=None):
if self._agent_runtime is None:
raise RuntimeError("Agent not started. Call start() first.")
return await self._agent_runtime.trigger_and_wait(
entry_point_id=entry_point, input_data=input_data or {}, session_state=session_state)
async def run(self, context, session_state=None):
await self.start()
try:
result = await self.trigger_and_wait("default", context, session_state=session_state)
return result or ExecutionResult(success=False, error="Execution timeout")
finally:
await self.stop()
def info(self):
return {
"name": metadata.name, "version": metadata.version, "description": metadata.description,
"goal": {"name": self.goal.name, "description": self.goal.description},
"nodes": [n.id for n in self.nodes], "edges": [e.id for e in self.edges],
"entry_node": self.entry_node, "entry_points": self.entry_points,
"terminal_nodes": self.terminal_nodes,
"client_facing_nodes": [n.id for n in self.nodes if n.client_facing],
}
def validate(self):
"""Validate graph wiring and entry-point contract."""
errors, warnings = [], []
node_ids = {n.id for n in self.nodes}
for e in self.edges:
if e.source not in node_ids:
errors.append(f"Edge {e.id}: source '{e.source}' not found")
if e.target not in node_ids:
errors.append(f"Edge {e.id}: target '{e.target}' not found")
if self.entry_node not in node_ids:
errors.append(f"Entry node '{self.entry_node}' not found")
for t in self.terminal_nodes:
if t not in node_ids:
errors.append(f"Terminal node '{t}' not found")
if not isinstance(self.entry_points, dict):
errors.append(
"Invalid entry_points: expected dict[str, str] like "
"{'start': '<entry-node-id>'}. "
f"Got {type(self.entry_points).__name__}. "
"Fix agent.py: set entry_points = {'start': '<entry-node-id>'}."
)
else:
if "start" not in self.entry_points:
errors.append(
"entry_points must include 'start' mapped to entry_node. "
"Example: {'start': '<entry-node-id>'}."
)
else:
start_node = self.entry_points.get("start")
if start_node != self.entry_node:
errors.append(
f"entry_points['start'] points to '{start_node}' "
f"but entry_node is '{self.entry_node}'. Keep these aligned."
)
for ep_id, nid in self.entry_points.items():
if not isinstance(ep_id, str):
errors.append(
f"Invalid entry_points key {ep_id!r} "
f"({type(ep_id).__name__}). Entry point names must be strings."
)
continue
if not isinstance(nid, str):
errors.append(
f"Invalid entry_points['{ep_id}']={nid!r} "
f"({type(nid).__name__}). Node ids must be strings."
)
continue
if nid not in node_ids:
errors.append(
f"Entry point '{ep_id}' references unknown node '{nid}'. "
f"Known nodes: {sorted(node_ids)}"
)
return {"valid": len(errors) == 0, "errors": errors, "warnings": warnings}
default_agent = MyAgent()
```
## agent.py — Async Entry Points Variant
When an agent needs timers, webhooks, or event-driven triggers, add
`async_entry_points` and optionally `runtime_config` as module-level variables.
These are IN ADDITION to the standard variables above.
```python
# Additional imports for async entry points
from framework.graph.edge import GraphSpec, AsyncEntryPointSpec
from framework.runtime.agent_runtime import (
AgentRuntime, AgentRuntimeConfig, create_agent_runtime,
)
# ... (goal, nodes, edges, entry_node, entry_points, etc. as above) ...
# Async entry points — event-driven triggers
async_entry_points = [
# Timer with cron: daily at 9am
AsyncEntryPointSpec(
id="daily-check",
name="Daily Check",
entry_node="process-node",
trigger_type="timer",
trigger_config={"cron": "0 9 * * *"},
isolation_level="shared",
max_concurrent=1,
),
# Timer with fixed interval: every 20 minutes
AsyncEntryPointSpec(
id="scheduled-check",
name="Scheduled Check",
entry_node="process-node",
trigger_type="timer",
trigger_config={"interval_minutes": 20, "run_immediately": False},
isolation_level="shared",
max_concurrent=1,
),
# Event: reacts to webhook events
AsyncEntryPointSpec(
id="webhook-event",
name="Webhook Event Handler",
entry_node="process-node",
trigger_type="event",
trigger_config={"event_types": ["webhook_received"]},
isolation_level="shared",
max_concurrent=10,
),
]
# Webhook server config (only needed if using webhooks)
runtime_config = AgentRuntimeConfig(
webhook_host="127.0.0.1",
webhook_port=8080,
webhook_routes=[
{
"source_id": "my-source",
"path": "/webhooks/my-source",
"methods": ["POST"],
},
],
)
```
**Key rules for async entry points:**
- `async_entry_points` is a list of `AsyncEntryPointSpec` (NOT `EntryPointSpec`)
- `runtime_config` is `AgentRuntimeConfig` (NOT `RuntimeConfig` from config.py)
- Valid trigger_types: `timer`, `event`, `webhook`, `manual`, `api`
- Valid isolation_levels: `isolated`, `shared`, `synchronized`
- Timer trigger_config (cron): `{"cron": "0 9 * * *"}` — standard 5-field cron expression
- Timer trigger_config (interval): `{"interval_minutes": float, "run_immediately": bool}`
- Event trigger_config: `{"event_types": ["webhook_received"], "filter_stream": "...", "filter_node": "..."}`
- Use `isolation_level="shared"` for async entry points that need to read
the primary session's memory (e.g., user-configured rules)
- The `_build_graph()` method passes `async_entry_points` to GraphSpec
- Reference: `exports/gmail_inbox_guardian/agent.py`
## __init__.py
**CRITICAL:** The runner imports the package (`__init__.py`) and reads ALL module-level
variables via `getattr()`. Every variable defined in `agent.py` that the runner needs
MUST be re-exported here. Missing exports cause silent failures (variables default to
`None` or `{}`), leading to "must define goal, nodes, edges" errors or graph validation
failures like "node X is unreachable".
```python
"""My Agent — description."""
from .agent import (
MyAgent,
default_agent,
goal,
nodes,
edges,
entry_node,
entry_points,
pause_nodes,
terminal_nodes,
conversation_mode,
identity_prompt,
loop_config,
)
from .config import default_config, metadata
__all__ = [
"MyAgent",
"default_agent",
"goal",
"nodes",
"edges",
"entry_node",
"entry_points",
"pause_nodes",
"terminal_nodes",
"conversation_mode",
"identity_prompt",
"loop_config",
"default_config",
"metadata",
]
```
**If the agent uses async entry points**, also import and export:
```python
from .agent import (
...,
async_entry_points,
runtime_config, # Only if using webhooks
)
__all__ = [
...,
"async_entry_points",
"runtime_config",
]
```
## __main__.py
```python
"""CLI entry point for My Agent."""
import asyncio, json, logging, sys
import click
from .agent import default_agent, MyAgent
def setup_logging(verbose=False, debug=False):
if debug: level, fmt = logging.DEBUG, "%(asctime)s %(name)s: %(message)s"
elif verbose: level, fmt = logging.INFO, "%(message)s"
else: level, fmt = logging.WARNING, "%(levelname)s: %(message)s"
logging.basicConfig(level=level, format=fmt, stream=sys.stderr)
@click.group()
@click.version_option(version="1.0.0")
def cli():
"""My Agent — description."""
pass
@cli.command()
@click.option("--topic", "-t", required=True)
@click.option("--verbose", "-v", is_flag=True)
def run(topic, verbose):
"""Execute the agent."""
setup_logging(verbose=verbose)
result = asyncio.run(default_agent.run({"topic": topic}))
click.echo(json.dumps({"success": result.success, "output": result.output}, indent=2, default=str))
sys.exit(0 if result.success else 1)
@cli.command()
def tui():
"""Launch TUI dashboard."""
from pathlib import Path
from framework.tui.app import AdenTUI
from framework.llm import LiteLLMProvider
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
async def run_tui():
agent = MyAgent()
agent._tool_registry = ToolRegistry()
storage = Path.home() / ".hive" / "agents" / "my_agent"
storage.mkdir(parents=True, exist_ok=True)
mcp_cfg = Path(__file__).parent / "mcp_servers.json"
if mcp_cfg.exists(): agent._tool_registry.load_mcp_config(mcp_cfg)
llm = LiteLLMProvider(model=agent.config.model, api_key=agent.config.api_key, api_base=agent.config.api_base)
runtime = create_agent_runtime(
graph=agent._build_graph(), goal=agent.goal, storage_path=storage,
entry_points=[EntryPointSpec(id="start", name="Start", entry_node="process", trigger_type="manual", isolation_level="isolated")],
llm=llm, tools=list(agent._tool_registry.get_tools().values()), tool_executor=agent._tool_registry.get_executor())
await runtime.start()
try:
app = AdenTUI(runtime)
await app.run_async()
finally:
await runtime.stop()
asyncio.run(run_tui())
@cli.command()
def info():
"""Show agent info."""
data = default_agent.info()
click.echo(f"Agent: {data['name']}\nVersion: {data['version']}\nDescription: {data['description']}")
click.echo(f"Nodes: {', '.join(data['nodes'])}\nClient-facing: {', '.join(data['client_facing_nodes'])}")
@cli.command()
def validate():
"""Validate agent structure."""
v = default_agent.validate()
if v["valid"]: click.echo("Agent is valid")
else:
click.echo("Errors:")
for e in v["errors"]: click.echo(f" {e}")
sys.exit(0 if v["valid"] else 1)
if __name__ == "__main__":
cli()
```
## mcp_servers.json
> **Auto-generated.** `initialize_and_build_agent` creates this file with hive-tools
> as the default. Only edit manually to add additional MCP servers.
```json
{
"hive-tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "mcp_server.py", "--stdio"],
"cwd": "../../tools",
"description": "Hive tools MCP server"
}
}
```
**CRITICAL FORMAT RULES:**
- NO `"mcpServers"` wrapper (flat dict, not nested)
- `cwd` MUST be `"../../tools"` (relative from `exports/AGENT_NAME/` to `tools/`)
- `command` MUST be `"uv"` with `"args": ["run", "python", ...]` (NOT bare `"python"`)
## tests/conftest.py
```python
"""Test fixtures."""
import sys
from pathlib import Path
import pytest
_repo_root = Path(__file__).resolve().parents[3]
for _p in ["exports", "core"]:
_path = str(_repo_root / _p)
if _path not in sys.path:
sys.path.insert(0, _path)
AGENT_PATH = str(Path(__file__).resolve().parents[1])
@pytest.fixture(scope="session")
def agent_module():
"""Import the agent package for structural validation."""
import importlib
return importlib.import_module(Path(AGENT_PATH).name)
@pytest.fixture(scope="session")
def runner_loaded():
"""Load the agent through AgentRunner (structural only, no LLM needed)."""
from framework.runner.runner import AgentRunner
return AgentRunner.load(AGENT_PATH)
```
## entry_points Format
MUST be: `{"start": "first-node-id"}`
NOT: `{"first-node-id": ["input_keys"]}` (WRONG)
NOT: `{"first-node-id"}` (WRONG — this is a set)
@@ -0,0 +1,322 @@
# Hive Agent Framework — Condensed Reference
## Architecture
Agents are Python packages in `exports/`:
```
exports/my_agent/
├── __init__.py # MUST re-export ALL module-level vars from agent.py
├── __main__.py # CLI (run, tui, info, validate, shell)
├── agent.py # Graph construction (goal, edges, agent class)
├── config.py # Runtime config
├── nodes/__init__.py # Node definitions (NodeSpec)
├── mcp_servers.json # MCP tool server config
└── tests/ # pytest tests
```
## Agent Loading Contract
`AgentRunner.load()` imports the package (`__init__.py`) and reads these
module-level variables via `getattr()`:
| Variable | Required | Default if missing | Consequence |
|----------|----------|--------------------|-------------|
| `goal` | YES | `None` | **FATAL** — "must define goal, nodes, edges" |
| `nodes` | YES | `None` | **FATAL** — same error |
| `edges` | YES | `None` | **FATAL** — same error |
| `entry_node` | no | `nodes[0].id` | Probably wrong node |
| `entry_points` | no | `{}` | **Nodes unreachable** — validation fails |
| `terminal_nodes` | **YES** | `[]` | **FATAL** — graph must have at least one terminal node |
| `pause_nodes` | no | `[]` | OK |
| `conversation_mode` | no | not passed | Isolated mode (no context carryover) |
| `identity_prompt` | no | not passed | No agent-level identity |
| `loop_config` | no | `{}` | No iteration limits |
| `async_entry_points` | no | `[]` | No async triggers (timers, webhooks, events) |
| `runtime_config` | no | `None` | No webhook server |
**CRITICAL:** `__init__.py` MUST import and re-export ALL of these from
`agent.py`. Missing exports silently fall back to defaults, causing
hard-to-debug failures.
**Why `default_agent.validate()` is NOT sufficient:**
`validate()` checks the agent CLASS's internal graph (self.nodes, self.edges).
These are always correct because the constructor references agent.py's module
vars directly. But `AgentRunner.load()` reads from the PACKAGE (`__init__.py`),
not the class. So `validate()` passes while `AgentRunner.load()` fails.
Always test with `AgentRunner.load("exports/{name}")` — this is the same
code path the TUI and `hive run` use.
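A minimal structural test along those lines, mirroring the conftest.py template later in this reference (`exports/my_agent` is a placeholder path):
```python
"""Structural smoke test: no API key or LLM call needed."""
from framework.runner.runner import AgentRunner

def test_agent_loads_through_runner():
    # Loads the package exactly the way the TUI and `hive run` do.
    runner = AgentRunner.load("exports/my_agent")
    assert runner is not None
```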
## Goal
Defines success criteria and constraints:
```python
goal = Goal(
id="kebab-case-id",
name="Display Name",
description="What the agent does",
success_criteria=[
SuccessCriterion(id="sc-id", description="...", metric="...", target="...", weight=0.25),
],
constraints=[
Constraint(id="c-id", description="...", constraint_type="hard", category="quality"),
],
)
```
- 3-5 success criteria, weights sum to 1.0
- 1-5 constraints (hard/soft, categories: quality, accuracy, interaction, functional)
## NodeSpec Fields
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| id | str | required | kebab-case identifier |
| name | str | required | Display name |
| description | str | required | What the node does |
| node_type | str | required | `"event_loop"` or `"gcu"` (browser automation — see GCU Guide appendix) |
| input_keys | list[str] | required | Memory keys this node reads |
| output_keys | list[str] | required | Memory keys this node writes via set_output |
| system_prompt | str | "" | LLM instructions |
| tools | list[str] | [] | Tool names from MCP servers |
| client_facing | bool | False | If True, streams to user and blocks for input |
| nullable_output_keys | list[str] | [] | Keys that may remain unset |
| max_node_visits | int | 0 | 0=unlimited (default); >1 for one-shot feedback loops |
| max_retries | int | 3 | Retries on failure |
| success_criteria | str | "" | Natural language for judge evaluation |
## EdgeSpec Fields
| Field | Type | Description |
|-------|------|-------------|
| id | str | kebab-case identifier |
| source | str | Source node ID |
| target | str | Target node ID |
| condition | EdgeCondition | ON_SUCCESS, ON_FAILURE, ALWAYS, CONDITIONAL |
| condition_expr | str | Python expression evaluated against memory (for CONDITIONAL) |
| priority | int | Positive=forward (evaluated first), negative=feedback (loop-back) |
## Key Patterns
### STEP 1/STEP 2 (Client-Facing Nodes)
```
**STEP 1 — Respond to the user (text only, NO tool calls):**
[Present information, ask questions]
**STEP 2 — After the user responds, call set_output:**
- set_output("key", "value based on user response")
```
This prevents premature set_output before user interaction.
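A hedged sketch of the pattern embedded in a client-facing node; the id, keys, and prompt wording are illustrative, not a prescribed template:
```python
from framework.graph import NodeSpec

# Illustrative client-facing review node using the STEP 1/STEP 2 pattern.
review_node = NodeSpec(
    id="review",
    name="Review",
    description="Present results and collect the user's decision",
    node_type="event_loop",
    client_facing=True,
    input_keys=["results"],
    output_keys=["approved"],
    success_criteria="User has explicitly approved or rejected the results.",
    system_prompt="""\
**STEP 1 — Respond to the user (text only, NO tool calls):**
Present the results and ask whether they look correct.
**STEP 2 — After the user responds, call set_output:**
- set_output("approved", "yes" or "no")
""",
    tools=[],
)
```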
### Fewer, Richer Nodes (CRITICAL)
**Hard limit: 3-6 nodes for most agents.** Never exceed 6 unless the user
explicitly requests a complex multi-phase pipeline.
Each node boundary serializes outputs to shared memory and **destroys** all
in-context information: tool call results, intermediate reasoning, conversation
history. A research node that searches, fetches, and analyzes in ONE node keeps
all source material in its conversation context. Split across 3 nodes, each
downstream node only sees the serialized summary string.
**Decision framework — merge unless ANY of these apply:**
1. **Client-facing boundary** — Autonomous and client-facing work MUST be
separate nodes (different interaction models)
2. **Disjoint tool sets** — If tools are fundamentally different (e.g., web
search vs database), separate nodes make sense
3. **Parallel execution** — Fan-out branches must be separate nodes
**Red flags that you have too many nodes:**
- A node with 0 tools (pure LLM reasoning) → merge into predecessor/successor
- A node that sets only 1 trivial output → collapse into predecessor
- Multiple consecutive autonomous nodes → combine into one rich node
- A "report" node that presents analysis → merge into the client-facing node
- A "confirm" or "schedule" node that doesn't call any external service → remove
**Typical agent structure (2 nodes):**
```
process (autonomous) ←→ review (client-facing)
```
The queen owns intake — she gathers requirements from the user, then
passes structured input via `run_agent_with_input(task)`. When building
the agent, design the entry node's `input_keys` to match what the queen
will provide at run time. Worker agents should NOT have a client-facing
intake node. Client-facing nodes are for mid-execution review/approval only.
For simpler agents, just 1 autonomous node:
```
process (autonomous) — loops back to itself
```
### nullable_output_keys
For inputs that only arrive on certain edges:
```python
research_node = NodeSpec(
input_keys=["brief", "feedback"],
nullable_output_keys=["feedback"], # Only present on feedback edge
max_node_visits=3,
)
```
### Mutually Exclusive Outputs
For routing decisions:
```python
review_node = NodeSpec(
output_keys=["approved", "feedback"],
nullable_output_keys=["approved", "feedback"], # Node sets one or the other
)
```
### Continuous Loop Pattern
Mark the primary event_loop node as terminal: `terminal_nodes=["process"]`.
The node has `output_keys` and can complete when the agent finishes its work.
Use `conversation_mode="continuous"` to preserve context across transitions.
### set_output
- Synthetic tool injected by framework
- Call separately from real tool calls (separate turn)
- `set_output("key", "value")` stores to shared memory
## Edge Conditions
| Condition | When |
|-----------|------|
| ON_SUCCESS | Node completed successfully |
| ON_FAILURE | Node failed |
| ALWAYS | Unconditional |
| CONDITIONAL | condition_expr evaluates to True against memory |
condition_expr examples:
- `"needs_more_research == True"`
- `"str(next_action).lower() == 'new_agent'"`
- `"feedback is not None"`
## Graph Lifecycle
| Pattern | terminal_nodes | When |
|---------|---------------|------|
| **Continuous loop** | `["node-with-output-keys"]` | **DEFAULT for all agents** |
| Linear | `["last-node"]` | One-shot/batch agents |
**Every graph must have at least one terminal node.** Terminal nodes
define where execution ends. For interactive agents that loop continuously,
mark the primary event_loop node as terminal (it has `output_keys` and can
complete at any point). The framework default for `max_node_visits` is 0
(unbounded), so nodes work correctly in continuous loops without explicit
override. Only set `max_node_visits > 0` in one-shot agents with feedback loops.
Every node must have at least one outgoing edge — no dead ends.
## Continuous Conversation Mode
`conversation_mode` has ONLY two valid states:
- `"continuous"` — recommended for interactive agents
- Omit entirely — isolated per-node conversations (each node starts fresh)
**INVALID values** (do NOT use): `"client_facing"`, `"interactive"`,
`"adaptive"`, `"shared"`. These do not exist in the framework.
When `conversation_mode="continuous"`:
- Same conversation thread carries across node transitions
- Layered system prompts: identity (agent-level) + narrative + focus (per-node)
- Transition markers inserted at boundaries
- Compaction happens opportunistically at phase transitions
## loop_config
Only three valid keys:
```python
loop_config = {
"max_iterations": 100, # Max LLM turns per node visit
"max_tool_calls_per_turn": 20, # Max tool calls per LLM response
"max_history_tokens": 32000, # Triggers conversation compaction
}
```
**INVALID keys** (do NOT use): `"strategy"`, `"mode"`, `"timeout"`,
`"temperature"`. These are silently ignored or cause errors.
## Data Tools (Spillover)
For large data that exceeds context:
- `save_data(filename, data)` — Write to session data dir
- `load_data(filename, offset, limit)` — Read with pagination
- `list_data_files()` — List files
- `serve_file_to_user(filename, label)` — Clickable file:// URI
`data_dir` is auto-injected by framework — LLM never sees it.
## Fan-Out / Fan-In
Multiple ON_SUCCESS edges from same source → parallel execution via asyncio.gather().
- Parallel nodes must have disjoint output_keys
- Only one branch may have client_facing nodes
- Fan-in node gets all outputs in shared memory
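A hedged sketch of the fan-out wiring, using the EdgeSpec fields shown in the templates above (node ids are placeholders):
```python
from framework.graph import EdgeSpec, EdgeCondition

# Two ON_SUCCESS edges from the same source run both branches in parallel;
# each branch then feeds the fan-in node.
edges = [
    EdgeSpec(id="research-to-web", source="research", target="web-branch",
             condition=EdgeCondition.ON_SUCCESS, priority=1),
    EdgeSpec(id="research-to-db", source="research", target="db-branch",
             condition=EdgeCondition.ON_SUCCESS, priority=1),
    EdgeSpec(id="web-to-merge", source="web-branch", target="merge",
             condition=EdgeCondition.ON_SUCCESS, priority=1),
    EdgeSpec(id="db-to-merge", source="db-branch", target="merge",
             condition=EdgeCondition.ON_SUCCESS, priority=1),
]
```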
## Judge System
- **Implicit** (default): ACCEPTs when LLM finishes with no tool calls and all required outputs set
- **SchemaJudge**: Validates against Pydantic model
- **Custom**: Implement `evaluate(context) -> JudgeVerdict`
Judge is the SOLE acceptance mechanism — no ad-hoc framework gating.
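A loose sketch of the custom-judge shape described above. The real `JudgeVerdict` lives in the framework; the stand-in dataclass, its field names, and treating `context` as a dict are all assumptions to verify against the framework's judge module before use:
```python
from dataclasses import dataclass

@dataclass
class JudgeVerdict:  # stand-in for the framework's real verdict type
    accepted: bool
    feedback: str

class MinimumLengthJudge:
    """Illustrative custom judge: reject thin results, otherwise accept."""

    def __init__(self, min_chars: int = 200):
        self.min_chars = min_chars

    def evaluate(self, context) -> JudgeVerdict:
        # Assumption: context exposes the node's outputs dict-style.
        results = str(context.get("results", ""))
        if len(results) < self.min_chars:
            return JudgeVerdict(accepted=False, feedback="Results look too thin; expand them.")
        return JudgeVerdict(accepted=True, feedback="")
```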
## Async Entry Points (Webhooks, Timers, Events)
For agents that react to external events, use `AsyncEntryPointSpec`:
```python
from framework.graph.edge import AsyncEntryPointSpec
from framework.runtime.agent_runtime import AgentRuntimeConfig
# Timer trigger (cron or interval)
async_entry_points = [
AsyncEntryPointSpec(
id="daily-check",
name="Daily Check",
entry_node="process",
trigger_type="timer",
trigger_config={"cron": "0 9 * * *"}, # daily at 9am
isolation_level="shared",
)
]
# Webhook server (optional)
runtime_config = AgentRuntimeConfig(
webhook_host="127.0.0.1",
webhook_port=8080,
webhook_routes=[{"source_id": "gmail", "path": "/webhooks/gmail", "methods": ["POST"]}],
)
```
### Key Fields
- `trigger_type`: `"timer"`, `"event"`, `"webhook"`, `"manual"`
- `trigger_config`: `{"cron": "0 9 * * *"}` or `{"interval_minutes": 20}`
- `isolation_level`: `"shared"` (recommended), `"isolated"`, `"synchronized"`
- `event_types`: For event triggers, e.g., `["webhook_received"]`
### Exports Required
Both `async_entry_points` and `runtime_config` must be exported from `__init__.py`.
See `exports/gmail_inbox_guardian/agent.py` for complete example.
## Tool Discovery
Do NOT rely on a static tool list — it will be outdated. Always call
`list_agent_tools()` with NO arguments first to see ALL available tools.
Only use `group=` or `output_schema=` as follow-up calls after seeing the
full list.
```
list_agent_tools() # ALWAYS call this first
list_agent_tools(group="gmail", output_schema="full") # then drill into a category
list_agent_tools("exports/my_agent/mcp_servers.json") # specific agent's tools
```
After building, run `validate_agent_package("{name}")` to check everything at once.
Common tool categories (verify via list_agent_tools):
- **Web**: search, scrape, PDF
- **Data**: save/load/append/list data files, serve to user
- **File**: view, write, replace, diff, list, grep
- **Communication**: email, gmail, slack, telegram
- **CRM**: hubspot, apollo, calcom
- **GitHub**: stargazers, user profiles, repos
- **Vision**: image analysis
- **Time**: current time
@@ -0,0 +1,119 @@
# GCU Browser Automation Guide
## When to Use GCU Nodes
Use `node_type="gcu"` when:
- The user's workflow requires **navigating real websites** (scraping, form-filling, social media interaction, testing web UIs)
- The task involves **dynamic/JS-rendered pages** that `web_scrape` cannot handle (SPAs, infinite scroll, login-gated content)
- The agent needs to **interact with a website** — clicking, typing, scrolling, selecting, uploading files
Do NOT use GCU for:
- Static content that `web_scrape` handles fine
- API-accessible data (use the API directly)
- PDF/file processing
- Anything that doesn't require a browser UI
## What GCU Nodes Are
- `node_type="gcu"` — a declarative enhancement over `event_loop`
- Framework auto-prepends browser best-practices system prompt
- Framework auto-includes all 31 browser tools from `gcu-tools` MCP server
- Same underlying `EventLoopNode` class — no new imports needed
- `tools=[]` is correct — tools are auto-populated at runtime
## GCU Architecture Pattern
GCU nodes are **subagents** — invoked via `delegate_to_sub_agent()`, not connected via edges.
- Primary nodes (`event_loop`, client-facing) orchestrate; GCU nodes do browser work
- Parent node declares `sub_agents=["gcu-node-id"]` and calls `delegate_to_sub_agent(agent_id="gcu-node-id", task="...")`
- GCU nodes set `max_node_visits=1` (single execution per delegation), `client_facing=False`
- GCU nodes use `output_keys=["result"]` and return structured JSON via `set_output("result", ...)`
## GCU Node Definition Template
```python
gcu_browser_node = NodeSpec(
id="gcu-browser-worker",
name="Browser Worker",
description="Browser subagent that does X.",
node_type="gcu",
client_facing=False,
max_node_visits=1,
input_keys=[],
output_keys=["result"],
tools=[], # Auto-populated with all browser tools
system_prompt="""\
You are a browser agent. Your job: [specific task].
## Workflow
1. browser_start (only if no browser is running yet)
2. browser_open(url=TARGET_URL) — note the returned targetId
3. browser_snapshot to read the page
4. [task-specific steps]
5. set_output("result", JSON)
## Output format
set_output("result", JSON) with:
- [field]: [type and description]
""",
)
```
## Parent Node Template (orchestrating GCU subagents)
```python
orchestrator_node = NodeSpec(
id="orchestrator",
...
node_type="event_loop",
sub_agents=["gcu-browser-worker"],
system_prompt="""\
...
delegate_to_sub_agent(
agent_id="gcu-browser-worker",
task="Navigate to [URL]. Do [specific task]. Return JSON with [fields]."
)
...
""",
tools=[], # Orchestrator doesn't need browser tools
)
```
## mcp_servers.json with GCU
```json
{
"hive-tools": { ... },
"gcu-tools": {
"transport": "stdio",
"command": "uv",
"args": ["run", "python", "-m", "gcu.server", "--stdio"],
"cwd": "../../tools",
"description": "GCU tools for browser automation"
}
}
```
Note: `gcu-tools` is auto-added if any node uses `node_type="gcu"`, but including it explicitly is fine.
## GCU System Prompt Best Practices
Key rules to bake into GCU node prompts:
- Prefer `browser_snapshot` over `browser_get_text("body")` — compact accessibility tree vs 100KB+ raw HTML
- Always `browser_wait` after navigation
- Use large scroll amounts (~2000-5000) for lazy-loaded content
- For spillover files, use `run_command` with grep, not `read_file`
- If auth wall detected, report immediately — don't attempt login
- Keep tool calls per turn ≤10
- Tab isolation: when browser is already running, use `browser_open(background=true)` and pass `target_id` to every call
## GCU Anti-Patterns
- Using `browser_screenshot` to read text (use `browser_snapshot`)
- Re-navigating after scrolling (resets scroll position)
- Attempting login on auth walls
- Forgetting `target_id` in multi-tab scenarios
- Putting browser tools directly on `event_loop` nodes instead of using GCU subagent pattern
- Making GCU nodes `client_facing=True` (they should be autonomous subagents)
@@ -0,0 +1,63 @@
# Queen Memory — File System Structure
```
~/.hive/
├── queen/
│ ├── MEMORY.md ← Semantic memory
│ ├── memories/
│ │ ├── MEMORY-2026-03-09.md ← Episodic memory (today)
│ │ ├── MEMORY-2026-03-08.md
│ │ └── ...
│ └── session/
│ └── {session_id}/ ← One dir per session (or resumed-from session)
│ ├── conversations/
│ │ ├── parts/
│ │ │ ├── 00001.json ← One file per message (role, content, tool_calls)
│ │ │ ├── 00002.json
│ │ │ └── ...
│ │ └── spillover/
│ │ ├── conversation_1.md ← Compacted old conversation segments
│ │ ├── conversation_2.md
│ │ └── ...
│ └── data/
│ ├── adapt.md ← Working memory (session-scoped)
│ ├── web_search_1.txt ← Spillover: large tool results
│ ├── web_search_2.txt
│ └── ...
```
---
## The three memory tiers
| File | Tier | Written by | Read at |
|---|---|---|---|
| `MEMORY.md` | Semantic | Consolidation LLM (auto, post-session) | Session start (injected into system prompt) |
| `memories/MEMORY-YYYY-MM-DD.md` | Episodic | Queen via `write_to_diary` tool + consolidation LLM | Session start (today's file injected) |
| `data/adapt.md` | Working | Queen via `update_session_notes` tool | Every turn (inlined in system prompt) |
---
## Session directory naming
The session directory name is **`queen_resume_from`** when a cold-restore resumes an existing
session, otherwise the new **`session_id`**. This means resumed sessions accumulate all messages
in the original directory rather than fragmenting across multiple folders.
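A small sketch of that rule, assuming both ids are plain strings (the helper name is illustrative):
```python
from pathlib import Path

def queen_session_dir(session_id: str, queen_resume_from: str | None) -> Path:
    # Resumed sessions keep writing into the original directory;
    # fresh sessions get a directory named after their own id.
    dir_id = queen_resume_from or session_id
    return Path.home() / ".hive" / "queen" / "session" / dir_id
```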
---
## Consolidation
`consolidate_queen_memory()` runs every **5 minutes** in the background and once more at session
end. It reads:
1. `conversations/parts/*.json` — full message history (user + assistant turns; tool results skipped)
2. `data/adapt.md` — current working notes
It then makes two LLM calls and writes:
- Rewrites `MEMORY.md` in place (semantic memory — queen never touches this herself)
- Rewrites today's `memories/MEMORY-YYYY-MM-DD.md` as a single merged diary narrative
If the combined transcript exceeds ~200 K characters it is recursively binary-compacted via the
LLM before being sent to the consolidation model (mirrors `EventLoopNode._llm_compact`).
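For illustration, a background driver consistent with the description above; only `consolidate_queen_memory()` (from the queen memory module) is real, while the loop itself and how it is started or cancelled are assumptions:
```python
import asyncio
from pathlib import Path

# Hypothetical scheduling loop; the 5-minute cadence matches the text above.
async def consolidation_loop(session_id: str, session_dir: Path, llm) -> None:
    try:
        while True:
            await asyncio.sleep(5 * 60)
            await consolidate_queen_memory(session_id, session_dir, llm)
    finally:
        # Final pass at session end; failures are swallowed inside
        # consolidate_queen_memory(), so teardown is never blocked.
        await consolidate_queen_memory(session_id, session_dir, llm)
```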
@@ -0,0 +1,31 @@
"""Test fixtures for Queen agent."""
import sys
from pathlib import Path
import pytest
import pytest_asyncio
_repo_root = Path(__file__).resolve().parents[3]
for _p in ["exports", "core"]:
_path = str(_repo_root / _p)
if _path not in sys.path:
sys.path.insert(0, _path)
AGENT_PATH = str(Path(__file__).resolve().parents[1])
@pytest.fixture(scope="session")
def mock_mode():
return True
@pytest_asyncio.fixture(scope="session")
async def runner(tmp_path_factory, mock_mode):
from framework.runner.runner import AgentRunner
storage = tmp_path_factory.mktemp("agent_storage")
r = AgentRunner.load(AGENT_PATH, mock_mode=mock_mode, storage_path=storage)
r._setup()
yield r
await r.cleanup_async()
@@ -0,0 +1,27 @@
"""Queen's ticket receiver entry point.
When the Worker Health Judge emits a WORKER_ESCALATION_TICKET event on the
shared EventBus, this entry point fires and routes to the ``ticket_triage``
node, where the Queen deliberates and decides whether to notify the operator.
Isolation level is ``isolated``; the queen's triage memory is kept separate
from the worker's shared memory. Each ticket triage runs in its own context.
"""
from __future__ import annotations
from framework.graph.edge import AsyncEntryPointSpec
TICKET_RECEIVER_ENTRY_POINT = AsyncEntryPointSpec(
id="ticket_receiver",
name="Worker Escalation Ticket Receiver",
entry_node="ticket_triage",
trigger_type="event",
trigger_config={
"event_types": ["worker_escalation_ticket"],
# Do not fire on our own graph's events (prevents loops if queen
# somehow emits a worker_escalation_ticket for herself)
"exclude_own_graph": True,
},
isolation_level="isolated",
)
@@ -1,21 +0,0 @@
"""Builder interface for analyzing and building agents."""
from framework.builder.query import BuilderQuery
from framework.builder.workflow import (
BuildPhase,
BuildSession,
GraphBuilder,
TestCase,
TestResult,
ValidationResult,
)
__all__ = [
"BuilderQuery",
"GraphBuilder",
"BuildSession",
"BuildPhase",
"ValidationResult",
"TestCase",
"TestResult",
]
@@ -1,501 +0,0 @@
"""
Builder Query Interface - How I (Builder) analyze agent runs.
This is designed around the questions I need to answer:
1. What happened? (summaries, narratives)
2. Why did it fail? (failure analysis, decision traces)
3. What patterns emerge? (across runs, across nodes)
4. What should we change? (suggestions)
"""
from collections import defaultdict
from pathlib import Path
from typing import Any
from framework.schemas.decision import Decision
from framework.schemas.run import Run, RunStatus, RunSummary
from framework.storage.backend import FileStorage
class FailureAnalysis:
"""Structured analysis of why a run failed."""
def __init__(
self,
run_id: str,
failure_point: str,
root_cause: str,
decision_chain: list[str],
problems: list[str],
suggestions: list[str],
):
self.run_id = run_id
self.failure_point = failure_point
self.root_cause = root_cause
self.decision_chain = decision_chain
self.problems = problems
self.suggestions = suggestions
def to_dict(self) -> dict[str, Any]:
return {
"run_id": self.run_id,
"failure_point": self.failure_point,
"root_cause": self.root_cause,
"decision_chain": self.decision_chain,
"problems": self.problems,
"suggestions": self.suggestions,
}
def __str__(self) -> str:
lines = [
f"=== Failure Analysis for {self.run_id} ===",
"",
f"Failure Point: {self.failure_point}",
f"Root Cause: {self.root_cause}",
"",
"Decision Chain Leading to Failure:",
]
for i, dec in enumerate(self.decision_chain, 1):
lines.append(f" {i}. {dec}")
if self.problems:
lines.append("")
lines.append("Reported Problems:")
for prob in self.problems:
lines.append(f" - {prob}")
if self.suggestions:
lines.append("")
lines.append("Suggestions:")
for sug in self.suggestions:
lines.append(f"{sug}")
return "\n".join(lines)
class PatternAnalysis:
"""Patterns detected across multiple runs."""
def __init__(
self,
goal_id: str,
run_count: int,
success_rate: float,
common_failures: list[tuple[str, int]],
problematic_nodes: list[tuple[str, float]],
decision_patterns: dict[str, Any],
):
self.goal_id = goal_id
self.run_count = run_count
self.success_rate = success_rate
self.common_failures = common_failures
self.problematic_nodes = problematic_nodes
self.decision_patterns = decision_patterns
def to_dict(self) -> dict[str, Any]:
return {
"goal_id": self.goal_id,
"run_count": self.run_count,
"success_rate": self.success_rate,
"common_failures": self.common_failures,
"problematic_nodes": self.problematic_nodes,
"decision_patterns": self.decision_patterns,
}
def __str__(self) -> str:
lines = [
f"=== Pattern Analysis for Goal {self.goal_id} ===",
"",
f"Runs Analyzed: {self.run_count}",
f"Success Rate: {self.success_rate:.1%}",
]
if self.common_failures:
lines.append("")
lines.append("Common Failures:")
for failure, count in self.common_failures:
lines.append(f" - {failure} ({count} occurrences)")
if self.problematic_nodes:
lines.append("")
lines.append("Problematic Nodes (failure rate):")
for node, rate in self.problematic_nodes:
lines.append(f" - {node}: {rate:.1%} failure rate")
return "\n".join(lines)
class BuilderQuery:
"""
The interface I (Builder) use to understand what agents are doing.
This is optimized for the questions I need to answer when analyzing
agent behavior and deciding what to improve.
"""
def __init__(self, storage_path: str | Path):
self.storage = FileStorage(storage_path)
# === WHAT HAPPENED? ===
def get_run_summary(self, run_id: str) -> RunSummary | None:
"""Get a quick summary of a run."""
return self.storage.load_summary(run_id)
def get_full_run(self, run_id: str) -> Run | None:
"""Get the complete run with all decisions."""
return self.storage.load_run(run_id)
def list_runs_for_goal(self, goal_id: str) -> list[RunSummary]:
"""Get summaries of all runs for a goal."""
run_ids = self.storage.get_runs_by_goal(goal_id)
summaries = []
for run_id in run_ids:
summary = self.storage.load_summary(run_id)
if summary:
summaries.append(summary)
return summaries
def get_recent_failures(self, limit: int = 10) -> list[RunSummary]:
"""Get recent failed runs."""
run_ids = self.storage.get_runs_by_status(RunStatus.FAILED)
summaries = []
for run_id in run_ids[:limit]:
summary = self.storage.load_summary(run_id)
if summary:
summaries.append(summary)
return summaries
# === WHY DID IT FAIL? ===
def analyze_failure(self, run_id: str) -> FailureAnalysis | None:
"""
Deep analysis of why a run failed.
This is my primary tool for understanding what went wrong.
"""
run = self.storage.load_run(run_id)
if run is None or run.status != RunStatus.FAILED:
return None
# Find the first failed decision
failed_decisions = [d for d in run.decisions if not d.was_successful]
if not failed_decisions:
failure_point = "Unknown - no decision marked as failed"
root_cause = "Run failed but all decisions succeeded (external cause?)"
else:
first_failure = failed_decisions[0]
failure_point = first_failure.summary_for_builder()
root_cause = first_failure.outcome.error if first_failure.outcome else "Unknown"
# Build the decision chain leading to failure
decision_chain = []
for d in run.decisions:
decision_chain.append(d.summary_for_builder())
if not d.was_successful:
break
# Extract problems
problems = [f"[{p.severity}] {p.description}" for p in run.problems]
# Generate suggestions based on the failure
suggestions = self._generate_suggestions(run, failed_decisions)
return FailureAnalysis(
run_id=run_id,
failure_point=failure_point,
root_cause=root_cause,
decision_chain=decision_chain,
problems=problems,
suggestions=suggestions,
)
def get_decision_trace(self, run_id: str) -> list[str]:
"""Get a readable trace of all decisions in a run."""
run = self.storage.load_run(run_id)
if run is None:
return []
return [d.summary_for_builder() for d in run.decisions]
# === WHAT PATTERNS EMERGE? ===
def find_patterns(self, goal_id: str) -> PatternAnalysis | None:
"""
Find patterns across runs for a goal.
This helps me understand systemic issues vs one-off failures.
"""
run_ids = self.storage.get_runs_by_goal(goal_id)
if not run_ids:
return None
runs = []
for run_id in run_ids:
run = self.storage.load_run(run_id)
if run:
runs.append(run)
if not runs:
return None
# Calculate success rate
completed = [r for r in runs if r.status == RunStatus.COMPLETED]
success_rate = len(completed) / len(runs) if runs else 0.0
# Find common failures
failure_counts: dict[str, int] = defaultdict(int)
for run in runs:
for decision in run.decisions:
if not decision.was_successful and decision.outcome:
error = decision.outcome.error or "Unknown error"
failure_counts[error] += 1
common_failures = sorted(failure_counts.items(), key=lambda x: x[1], reverse=True)[:5]
# Find problematic nodes
node_stats: dict[str, dict[str, int]] = defaultdict(lambda: {"total": 0, "failed": 0})
for run in runs:
for decision in run.decisions:
node_stats[decision.node_id]["total"] += 1
if not decision.was_successful:
node_stats[decision.node_id]["failed"] += 1
problematic_nodes = []
for node_id, stats in node_stats.items():
if stats["total"] > 0:
failure_rate = stats["failed"] / stats["total"]
if failure_rate > 0.1: # More than 10% failure rate
problematic_nodes.append((node_id, failure_rate))
problematic_nodes.sort(key=lambda x: x[1], reverse=True)
# Decision patterns
decision_patterns = self._analyze_decision_patterns(runs)
return PatternAnalysis(
goal_id=goal_id,
run_count=len(runs),
success_rate=success_rate,
common_failures=common_failures,
problematic_nodes=problematic_nodes,
decision_patterns=decision_patterns,
)
def compare_runs(self, run_id_1: str, run_id_2: str) -> dict[str, Any]:
"""Compare two runs to understand what differed."""
run1 = self.storage.load_run(run_id_1)
run2 = self.storage.load_run(run_id_2)
if run1 is None or run2 is None:
return {"error": "One or both runs not found"}
return {
"run_1": {
"id": run1.id,
"status": run1.status.value,
"decisions": len(run1.decisions),
"success_rate": run1.metrics.success_rate,
},
"run_2": {
"id": run2.id,
"status": run2.status.value,
"decisions": len(run2.decisions),
"success_rate": run2.metrics.success_rate,
},
"differences": self._find_differences(run1, run2),
}
# === WHAT SHOULD WE CHANGE? ===
def suggest_improvements(self, goal_id: str) -> list[dict[str, Any]]:
"""
Generate improvement suggestions based on run analysis.
This is what I use to propose changes to the human engineer.
"""
patterns = self.find_patterns(goal_id)
if patterns is None:
return []
suggestions = []
# Suggestion: Fix problematic nodes
for node_id, failure_rate in patterns.problematic_nodes:
suggestions.append(
{
"type": "node_improvement",
"target": node_id,
"reason": f"Node has {failure_rate:.1%} failure rate",
"recommendation": (
f"Review and improve node '{node_id}' - "
"high failure rate suggests prompt or tool issues"
),
"priority": "high" if failure_rate > 0.3 else "medium",
}
)
# Suggestion: Address common failures
for failure, count in patterns.common_failures:
if count >= 2:
suggestions.append(
{
"type": "error_handling",
"target": failure,
"reason": f"Error occurred {count} times",
"recommendation": f"Add handling for: {failure}",
"priority": "high" if count >= 5 else "medium",
}
)
# Suggestion: Overall success rate
if patterns.success_rate < 0.8:
suggestions.append(
{
"type": "architecture",
"target": goal_id,
"reason": f"Goal success rate is only {patterns.success_rate:.1%}",
"recommendation": (
"Consider restructuring the agent graph or improving goal definition"
),
"priority": "high",
}
)
return suggestions
def get_node_performance(self, node_id: str) -> dict[str, Any]:
"""Get performance metrics for a specific node across all runs."""
run_ids = self.storage.get_runs_by_node(node_id)
total_decisions = 0
successful_decisions = 0
total_latency = 0
total_tokens = 0
decision_types: dict[str, int] = defaultdict(int)
for run_id in run_ids:
run = self.storage.load_run(run_id)
if run:
for decision in run.decisions:
if decision.node_id == node_id:
total_decisions += 1
if decision.was_successful:
successful_decisions += 1
if decision.outcome:
total_latency += decision.outcome.latency_ms
total_tokens += decision.outcome.tokens_used
decision_types[decision.decision_type.value] += 1
return {
"node_id": node_id,
"total_decisions": total_decisions,
"success_rate": successful_decisions / total_decisions if total_decisions > 0 else 0,
"avg_latency_ms": total_latency / total_decisions if total_decisions > 0 else 0,
"total_tokens": total_tokens,
"decision_type_distribution": dict(decision_types),
}
# === PRIVATE HELPERS ===
def _generate_suggestions(
self,
run: Run,
failed_decisions: list[Decision],
) -> list[str]:
"""Generate suggestions based on failure analysis."""
suggestions = []
for decision in failed_decisions:
# Check if there were alternatives
if len(decision.options) > 1:
chosen = decision.chosen_option
alternatives = [o for o in decision.options if o.id != decision.chosen_option_id]
if alternatives:
alt_desc = alternatives[0].description
chosen_desc = chosen.description if chosen else "unknown"
suggestions.append(
f"Consider alternative: '{alt_desc}' instead of '{chosen_desc}'"
)
# Check for missing context
if not decision.input_context:
suggestions.append(
f"Decision '{decision.intent}' had no input context - "
"ensure relevant data is passed"
)
# Check for constraint issues
if decision.active_constraints:
constraints = ", ".join(decision.active_constraints)
suggestions.append(f"Review constraints: {constraints} - may be too restrictive")
# Check for reported problems with suggestions
for problem in run.problems:
if problem.suggested_fix:
suggestions.append(problem.suggested_fix)
return suggestions
def _analyze_decision_patterns(self, runs: list[Run]) -> dict[str, Any]:
"""Analyze decision patterns across runs."""
type_counts: dict[str, int] = defaultdict(int)
option_counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for run in runs:
for decision in run.decisions:
type_counts[decision.decision_type.value] += 1
# Track which options are chosen for similar intents
intent_key = decision.intent[:50] # Truncate for grouping
if decision.chosen_option:
option_counts[intent_key][decision.chosen_option.description] += 1
# Find most common choices per intent
common_choices = {}
for intent, choices in option_counts.items():
if choices:
most_common = max(choices.items(), key=lambda x: x[1])
common_choices[intent] = {
"choice": most_common[0],
"count": most_common[1],
"alternatives": len(choices) - 1,
}
return {
"decision_type_distribution": dict(type_counts),
"common_choices": common_choices,
}
def _find_differences(self, run1: Run, run2: Run) -> list[str]:
"""Find key differences between two runs."""
differences = []
# Status difference
if run1.status != run2.status:
differences.append(f"Status: {run1.status.value} vs {run2.status.value}")
# Decision count difference
if len(run1.decisions) != len(run2.decisions):
differences.append(f"Decision count: {len(run1.decisions)} vs {len(run2.decisions)}")
# Find first divergence point
for i, (d1, d2) in enumerate(zip(run1.decisions, run2.decisions, strict=False)):
if d1.chosen_option_id != d2.chosen_option_id:
differences.append(
f"Diverged at decision {i}: "
f"chose '{d1.chosen_option_id}' vs '{d2.chosen_option_id}'"
)
break
# Node differences
nodes1 = set(run1.metrics.nodes_executed)
nodes2 = set(run2.metrics.nodes_executed)
if nodes1 != nodes2:
only_1 = nodes1 - nodes2
only_2 = nodes2 - nodes1
if only_1:
differences.append(f"Nodes only in run 1: {only_1}")
if only_2:
differences.append(f"Nodes only in run 2: {only_2}")
return differences
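A minimal usage sketch of the BuilderQuery interface above; the storage path, run id, and goal id are illustrative placeholders, not values from this repository:

from pathlib import Path

query = BuilderQuery(Path("./agent_runs"))  # hypothetical storage directory

# Why did a specific run fail?
analysis = query.analyze_failure("run-001")  # placeholder run id
if analysis:
    print(analysis)                          # formatted failure report
    print(analysis.to_dict()["root_cause"])

# Are the failures systemic for this goal?
patterns = query.find_patterns("demo-goal")  # placeholder goal id
if patterns:
    print(patterns)                          # success rate, common failures
    for s in query.suggest_improvements("demo-goal"):
        print(f"[{s['priority']}] {s['recommendation']}")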
-807
@@ -1,807 +0,0 @@
"""
GraphBuilder Workflow - Enforced incremental building with HITL approval.
The build process:
1. Define Goal → APPROVE
2. Add Node → VALIDATE → TEST → APPROVE
3. Add Edge → VALIDATE → TEST → APPROVE
4. Repeat until graph is complete
5. Final integration test → APPROVE
6. Export
Each step requires validation and human approval before proceeding.
You cannot skip steps or bypass validation.
"""
from collections.abc import Callable
from datetime import datetime
from enum import Enum
from pathlib import Path
from typing import Any
from pydantic import BaseModel, Field
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
from framework.graph.goal import Goal
from framework.graph.node import NodeSpec
class BuildPhase(str, Enum):
"""Current phase of the build process."""
INIT = "init" # Just started
GOAL_DRAFT = "goal_draft" # Drafting goal
GOAL_APPROVED = "goal_approved" # Goal approved
ADDING_NODES = "adding_nodes" # Adding nodes
ADDING_EDGES = "adding_edges" # Adding edges
TESTING = "testing" # Running tests
APPROVED = "approved" # Fully approved
EXPORTED = "exported" # Exported to file
class ValidationResult(BaseModel):
"""Result of a validation check."""
valid: bool
errors: list[str] = Field(default_factory=list)
warnings: list[str] = Field(default_factory=list)
suggestions: list[str] = Field(default_factory=list)
class TestCase(BaseModel):
"""A test case for validating agent behavior."""
id: str
description: str
input: dict[str, Any]
expected_output: Any = None # None means just check it doesn't error
expected_contains: str | None = None
class TestResult(BaseModel):
"""Result of running a test case."""
test_id: str
passed: bool
actual_output: Any = None
error: str | None = None
execution_path: list[str] = Field(default_factory=list)
class BuildSession(BaseModel):
"""
Persistent build session state.
Saved after each approved step so you can resume later.
"""
id: str
name: str
phase: BuildPhase = BuildPhase.INIT
created_at: datetime = Field(default_factory=datetime.now)
updated_at: datetime = Field(default_factory=datetime.now)
# The artifacts being built
goal: Goal | None = None
nodes: list[NodeSpec] = Field(default_factory=list)
edges: list[EdgeSpec] = Field(default_factory=list)
# Test cases
test_cases: list[TestCase] = Field(default_factory=list)
test_results: list[TestResult] = Field(default_factory=list)
# Approval history
approvals: list[dict[str, Any]] = Field(default_factory=list)
# Tools (stored as dicts for serialization)
tools: list[dict[str, Any]] = Field(default_factory=list)
model_config = {"extra": "allow"}
class GraphBuilder:
"""
Enforced incremental graph building with HITL approval.
Usage:
builder = GraphBuilder("my-agent")
# Step 1: Define and approve goal
builder.set_goal(goal)
builder.validate() # Must pass
builder.approve("Goal looks good") # Human approval required
# Step 2: Add nodes one by one
builder.add_node(node_spec)
builder.validate() # Must pass
builder.test(test_case) # Must pass
builder.approve("Node works")
# Step 3: Add edges
builder.add_edge(edge_spec)
builder.validate()
builder.approve("Edge correct")
# Step 4: Final approval
builder.run_all_tests()
builder.final_approve("Ready for production")
# Step 5: Export
graph = builder.export()
"""
def __init__(
self,
name: str,
storage_path: Path | str | None = None,
session_id: str | None = None,
):
self.storage_path = Path(storage_path) if storage_path else Path.home() / ".core" / "builds"
self.storage_path.mkdir(parents=True, exist_ok=True)
if session_id:
self.session = self._load_session(session_id)
else:
self.session = BuildSession(
id=f"build_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
name=name,
)
self._pending_validation: ValidationResult | None = None
# =========================================================================
# PHASE 1: GOAL
# =========================================================================
def set_goal(self, goal: Goal) -> ValidationResult:
"""
Set the goal for this agent.
Returns validation result. Must call approve() after validation passes.
"""
self._require_phase([BuildPhase.INIT, BuildPhase.GOAL_DRAFT])
self.session.goal = goal
self.session.phase = BuildPhase.GOAL_DRAFT
validation = self._validate_goal(goal)
self._pending_validation = validation
self._save_session()
return validation
def _validate_goal(self, goal: Goal) -> ValidationResult:
"""Validate a goal definition."""
errors = []
warnings = []
suggestions = []
if not goal.id:
errors.append("Goal must have an id")
if not goal.name:
errors.append("Goal must have a name")
if not goal.description:
errors.append("Goal must have a description")
if not goal.success_criteria:
errors.append("Goal must have at least one success criterion")
else:
for sc in goal.success_criteria:
if not sc.description:
errors.append(f"Success criterion '{sc.id}' needs a description")
if not goal.constraints:
warnings.append("Consider adding constraints to define boundaries")
if not goal.required_capabilities:
suggestions.append("Specify required_capabilities (e.g., ['llm', 'tools'])")
return ValidationResult(
valid=len(errors) == 0,
errors=errors,
warnings=warnings,
suggestions=suggestions,
)
# =========================================================================
# PHASE 2: NODES
# =========================================================================
def add_node(self, node: NodeSpec) -> ValidationResult:
"""
Add a node to the graph.
Returns validation result. Must call approve() after validation passes.
"""
self._require_phase([BuildPhase.GOAL_APPROVED, BuildPhase.ADDING_NODES])
# Check for duplicate
if any(n.id == node.id for n in self.session.nodes):
return ValidationResult(
valid=False,
errors=[f"Node with id '{node.id}' already exists"],
)
self.session.nodes.append(node)
self.session.phase = BuildPhase.ADDING_NODES
validation = self._validate_node(node)
self._pending_validation = validation
self._save_session()
return validation
def _validate_node(self, node: NodeSpec) -> ValidationResult:
"""Validate a node definition."""
errors = []
warnings = []
suggestions = []
if not node.id:
errors.append("Node must have an id")
if not node.name:
errors.append("Node must have a name")
if not node.description:
warnings.append(f"Node '{node.id}' should have a description")
# Type-specific validation
if node.node_type == "llm_tool_use":
if not node.tools:
errors.append(f"LLM tool node '{node.id}' must specify tools")
if not node.system_prompt:
warnings.append(f"LLM node '{node.id}' should have a system_prompt")
if node.node_type == "router":
if not node.routes:
errors.append(f"Router node '{node.id}' must specify routes")
if node.node_type == "function":
if not node.function:
errors.append(f"Function node '{node.id}' must specify function name")
# Check input/output keys
if not node.input_keys:
suggestions.append(f"Consider specifying input_keys for '{node.id}'")
if not node.output_keys:
suggestions.append(f"Consider specifying output_keys for '{node.id}'")
return ValidationResult(
valid=len(errors) == 0,
errors=errors,
warnings=warnings,
suggestions=suggestions,
)
def update_node(self, node_id: str, **updates) -> ValidationResult:
"""Update an existing node."""
self._require_phase([BuildPhase.ADDING_NODES])
for i, node in enumerate(self.session.nodes):
if node.id == node_id:
node_dict = node.model_dump()
node_dict.update(updates)
updated_node = NodeSpec(**node_dict)
self.session.nodes[i] = updated_node
validation = self._validate_node(updated_node)
self._pending_validation = validation
self._save_session()
return validation
return ValidationResult(valid=False, errors=[f"Node '{node_id}' not found"])
def remove_node(self, node_id: str) -> ValidationResult:
"""Remove a node (only if no edges reference it)."""
self._require_phase([BuildPhase.ADDING_NODES])
# Check for edge references
for edge in self.session.edges:
if edge.source == node_id or edge.target == node_id:
return ValidationResult(
valid=False,
errors=[f"Cannot remove node '{node_id}': referenced by edge '{edge.id}'"],
)
self.session.nodes = [n for n in self.session.nodes if n.id != node_id]
self._save_session()
return ValidationResult(valid=True)
# =========================================================================
# PHASE 3: EDGES
# =========================================================================
def add_edge(self, edge: EdgeSpec) -> ValidationResult:
"""
Add an edge to the graph.
Returns validation result. Must call approve() after validation passes.
"""
self._require_phase([BuildPhase.ADDING_NODES, BuildPhase.ADDING_EDGES])
# Check for duplicate
if any(e.id == edge.id for e in self.session.edges):
return ValidationResult(
valid=False,
errors=[f"Edge with id '{edge.id}' already exists"],
)
self.session.edges.append(edge)
self.session.phase = BuildPhase.ADDING_EDGES
validation = self._validate_edge(edge)
self._pending_validation = validation
self._save_session()
return validation
def _validate_edge(self, edge: EdgeSpec) -> ValidationResult:
"""Validate an edge definition."""
errors = []
warnings = []
if not edge.id:
errors.append("Edge must have an id")
# Check source exists
if not any(n.id == edge.source for n in self.session.nodes):
errors.append(f"Edge source '{edge.source}' not found in nodes")
# Check target exists
if not any(n.id == edge.target for n in self.session.nodes):
errors.append(f"Edge target '{edge.target}' not found in nodes")
# Warn about conditional edges without expressions
if edge.condition == EdgeCondition.CONDITIONAL and not edge.condition_expr:
warnings.append(f"Conditional edge '{edge.id}' has no condition_expr")
return ValidationResult(
valid=len(errors) == 0,
errors=errors,
warnings=warnings,
)
# =========================================================================
# VALIDATION & TESTING
# =========================================================================
def validate(self) -> ValidationResult:
"""Validate the entire current graph state."""
errors = []
warnings = []
# Must have a goal
if not self.session.goal:
errors.append("No goal defined")
return ValidationResult(valid=False, errors=errors)
# Must have at least one node
if not self.session.nodes:
errors.append("No nodes defined")
# Check for entry node
entry_candidates = []
for node in self.session.nodes:
# A node is an entry candidate if no edges point to it
if not any(e.target == node.id for e in self.session.edges):
entry_candidates.append(node.id)
if len(entry_candidates) == 0 and self.session.nodes:
errors.append("No entry node found (all nodes have incoming edges)")
elif len(entry_candidates) > 1:
warnings.append(f"Multiple entry candidates: {entry_candidates}. Specify one.")
# Check for terminal nodes
terminal_candidates = []
for node in self.session.nodes:
if not any(e.source == node.id for e in self.session.edges):
terminal_candidates.append(node.id)
if not terminal_candidates and self.session.nodes:
warnings.append("No terminal nodes found (all nodes have outgoing edges)")
# Check reachability
if entry_candidates and self.session.nodes:
reachable = self._compute_reachable(entry_candidates[0])
unreachable = [n.id for n in self.session.nodes if n.id not in reachable]
if unreachable:
errors.append(f"Unreachable nodes: {unreachable}")
validation = ValidationResult(
valid=len(errors) == 0,
errors=errors,
warnings=warnings,
)
self._pending_validation = validation
return validation
def _compute_reachable(self, start: str) -> set[str]:
"""Compute all nodes reachable from start."""
reachable = set()
to_visit = [start]
while to_visit:
current = to_visit.pop()
if current in reachable:
continue
reachable.add(current)
for edge in self.session.edges:
if edge.source == current:
to_visit.append(edge.target)
# Also follow router routes
for node in self.session.nodes:
if node.id == current and node.routes:
for target in node.routes.values():
to_visit.append(target)
return reachable
def add_test(self, test: TestCase) -> None:
"""Add a test case."""
self.session.test_cases.append(test)
self._save_session()
def run_test(
self,
test: TestCase,
executor_factory: Callable,
) -> TestResult:
"""
Run a single test case.
executor_factory should return a configured GraphExecutor.
"""
self._require_phase([BuildPhase.ADDING_NODES, BuildPhase.ADDING_EDGES, BuildPhase.TESTING])
self.session.phase = BuildPhase.TESTING
try:
# Build temporary graph for testing
graph = self._build_graph()
executor = executor_factory()
# Run the test
import asyncio
result = asyncio.run(
executor.execute(
graph=graph,
goal=self.session.goal,
input_data=test.input,
)
)
# Check result
passed = result.success
if test.expected_output is not None:
passed = passed and (result.output.get("result") == test.expected_output)
if test.expected_contains:
output_str = str(result.output)
passed = passed and (test.expected_contains in output_str)
test_result = TestResult(
test_id=test.id,
passed=passed,
actual_output=result.output,
execution_path=result.path,
)
except Exception as e:
test_result = TestResult(
test_id=test.id,
passed=False,
error=str(e),
)
self.session.test_results.append(test_result)
self._save_session()
return test_result
def run_all_tests(self, executor_factory: Callable) -> list[TestResult]:
"""Run all test cases."""
results = []
for test in self.session.test_cases:
result = self.run_test(test, executor_factory)
results.append(result)
return results
# =========================================================================
# APPROVAL
# =========================================================================
def approve(self, comment: str) -> bool:
"""
Approve the current pending change.
Must have a passing validation to approve.
Returns True if approved, False if validation failed.
"""
if self._pending_validation is None:
raise RuntimeError("Nothing to approve. Run validation first.")
if not self._pending_validation.valid:
return False
self.session.approvals.append(
{
"phase": self.session.phase.value,
"comment": comment,
"timestamp": datetime.now().isoformat(),
"validation": self._pending_validation.model_dump(),
}
)
# Advance phase if appropriate
if self.session.phase == BuildPhase.GOAL_DRAFT:
self.session.phase = BuildPhase.GOAL_APPROVED
self._pending_validation = None
self._save_session()
return True
def final_approve(self, comment: str) -> bool:
"""
Final approval for the complete graph.
Requires all tests to pass.
"""
# Run final validation
validation = self.validate()
if not validation.valid:
self._pending_validation = validation
return False
# Check test results
if self.session.test_cases:
failed_tests = [t for t in self.session.test_results if not t.passed]
if failed_tests:
self._pending_validation = ValidationResult(
valid=False,
errors=[f"Failed tests: {[t.test_id for t in failed_tests]}"],
)
return False
self.session.phase = BuildPhase.APPROVED
self.session.approvals.append(
{
"phase": "final",
"comment": comment,
"timestamp": datetime.now().isoformat(),
}
)
self._save_session()
return True
# =========================================================================
# EXPORT
# =========================================================================
def export(self) -> GraphSpec:
"""
Export the approved graph.
Requires final approval.
"""
self._require_phase([BuildPhase.APPROVED])
graph = self._build_graph()
self.session.phase = BuildPhase.EXPORTED
self._save_session()
return graph
def _build_graph(self) -> GraphSpec:
"""Build a GraphSpec from current session."""
# Determine entry node
entry_node = None
for node in self.session.nodes:
if not any(e.target == node.id for e in self.session.edges):
entry_node = node.id
break
# Determine terminal nodes
terminal_nodes = []
for node in self.session.nodes:
if not any(e.source == node.id for e in self.session.edges):
terminal_nodes.append(node.id)
# Collect all memory keys
memory_keys = set()
for node in self.session.nodes:
memory_keys.update(node.input_keys)
memory_keys.update(node.output_keys)
return GraphSpec(
id=f"{self.session.name}-graph",
goal_id=self.session.goal.id if self.session.goal else "",
entry_node=entry_node or "",
terminal_nodes=terminal_nodes,
nodes=self.session.nodes,
edges=self.session.edges,
memory_keys=list(memory_keys),
)
def export_to_file(self, path: Path | str) -> None:
"""Export the graph to a Python file."""
self._require_phase([BuildPhase.APPROVED, BuildPhase.EXPORTED])
graph = self._build_graph()
# Generate Python code
code = self._generate_code(graph)
Path(path).write_text(code)
self.session.phase = BuildPhase.EXPORTED
self._save_session()
def _generate_code(self, graph: GraphSpec) -> str:
"""Generate Python code for the graph."""
lines = [
'"""',
f"Generated agent: {self.session.name}",
f"Generated at: {datetime.now().isoformat()}",
'"""',
"",
"from framework.graph import (",
" Goal, SuccessCriterion, Constraint,",
" NodeSpec, EdgeSpec, EdgeCondition,",
")",
"from framework.graph.edge import GraphSpec",
"from framework.graph.goal import GoalStatus",
"",
"",
"# Goal",
]
if self.session.goal:
goal_json = self.session.goal.model_dump_json(indent=4)
lines.append("GOAL = Goal.model_validate_json('''")
lines.append(goal_json)
lines.append("''')")
else:
lines.append("GOAL = None")
lines.extend(
[
"",
"",
"# Nodes",
"NODES = [",
]
)
for node in self.session.nodes:
node_json = node.model_dump_json(indent=4)
lines.append(" NodeSpec.model_validate_json('''")
lines.append(node_json)
lines.append(" '''),")
lines.extend(
[
"]",
"",
"",
"# Edges",
"EDGES = [",
]
)
for edge in self.session.edges:
edge_json = edge.model_dump_json(indent=4)
lines.append(" EdgeSpec.model_validate_json('''")
lines.append(edge_json)
lines.append(" '''),")
lines.extend(
[
"]",
"",
"",
"# Graph",
]
)
graph_json = graph.model_dump_json(indent=4)
lines.append("GRAPH = GraphSpec.model_validate_json('''")
lines.append(graph_json)
lines.append("''')")
return "\n".join(lines)
# =========================================================================
# SESSION MANAGEMENT
# =========================================================================
def _require_phase(self, allowed: list[BuildPhase]) -> None:
"""Ensure we're in an allowed phase."""
if self.session.phase not in allowed:
raise RuntimeError(
f"Cannot perform this action in phase '{self.session.phase.value}'. "
f"Allowed phases: {[p.value for p in allowed]}"
)
def _save_session(self) -> None:
"""Save session to disk."""
self.session.updated_at = datetime.now()
path = self.storage_path / f"{self.session.id}.json"
path.write_text(self.session.model_dump_json(indent=2))
def _load_session(self, session_id: str) -> BuildSession:
"""Load session from disk."""
path = self.storage_path / f"{session_id}.json"
if not path.exists():
raise FileNotFoundError(f"Session not found: {session_id}")
return BuildSession.model_validate_json(path.read_text())
@classmethod
def list_sessions(cls, storage_path: Path | str | None = None) -> list[str]:
"""List all saved sessions."""
path = Path(storage_path) if storage_path else Path.home() / ".core" / "builds"
if not path.exists():
return []
return [f.stem for f in path.glob("*.json")]
# =========================================================================
# STATUS
# =========================================================================
def status(self) -> dict[str, Any]:
"""Get current build status."""
return {
"session_id": self.session.id,
"name": self.session.name,
"phase": self.session.phase.value,
"goal": self.session.goal.name if self.session.goal else None,
"nodes": len(self.session.nodes),
"edges": len(self.session.edges),
"tests": len(self.session.test_cases),
"tests_passed": sum(1 for t in self.session.test_results if t.passed),
"approvals": len(self.session.approvals),
"pending_validation": self._pending_validation.model_dump()
if self._pending_validation
else None,
}
def show(self) -> str:
"""Show current graph as text."""
lines = [
f"=== Build: {self.session.name} ===",
f"Phase: {self.session.phase.value}",
"",
]
if self.session.goal:
lines.extend(
[
f"Goal: {self.session.goal.name}",
f" {self.session.goal.description}",
"",
]
)
if self.session.nodes:
lines.append("Nodes:")
for node in self.session.nodes:
lines.append(f" [{node.id}] {node.name} ({node.node_type})")
lines.append("")
if self.session.edges:
lines.append("Edges:")
for edge in self.session.edges:
lines.append(f" {edge.source} --{edge.condition.value}--> {edge.target}")
lines.append("")
if self._pending_validation:
lines.append("Pending Validation:")
lines.append(f" Valid: {self._pending_validation.valid}")
for err in self._pending_validation.errors:
lines.append(f" ERROR: {err}")
for warn in self._pending_validation.warnings:
lines.append(f" WARN: {warn}")
return "\n".join(lines)
+17 -3
@@ -11,9 +11,9 @@ Usage:
Testing commands:
hive test-run <agent_path> --goal <goal_id>
hive test-debug <goal_id> <test_id>
hive test-list <goal_id>
hive test-stats <goal_id>
hive test-debug <agent_path> <test_name>
hive test-list <agent_path>
hive test-stats <agent_path>
"""
import argparse
@@ -44,11 +44,25 @@ def _configure_paths():
if exports_str not in sys.path:
sys.path.insert(0, exports_str)
# Add examples/templates/ to sys.path so template agents are importable
templates_dir = project_root / "examples" / "templates"
if templates_dir.is_dir():
templates_str = str(templates_dir)
if templates_str not in sys.path:
sys.path.insert(0, templates_str)
# Ensure core/ is also in sys.path (for non-editable-install scenarios)
core_str = str(project_root / "core")
if (project_root / "core").is_dir() and core_str not in sys.path:
sys.path.insert(0, core_str)
# Add core/framework/agents/ so framework agents are importable as top-level packages
framework_agents_dir = project_root / "core" / "framework" / "agents"
if framework_agents_dir.is_dir():
fa_str = str(framework_agents_dir)
if fa_str not in sys.path:
sys.path.insert(0, fa_str)
def main():
_configure_paths()
+169
@@ -0,0 +1,169 @@
"""Shared Hive configuration utilities.
Centralises reading of ~/.hive/configuration.json so that the runner
and every agent template share one implementation instead of copy-pasting
helper functions.
"""
import json
import logging
import os
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any
from framework.graph.edge import DEFAULT_MAX_TOKENS
# ---------------------------------------------------------------------------
# Low-level config file access
# ---------------------------------------------------------------------------
HIVE_CONFIG_FILE = Path.home() / ".hive" / "configuration.json"
logger = logging.getLogger(__name__)
def get_hive_config() -> dict[str, Any]:
"""Load hive configuration from ~/.hive/configuration.json."""
if not HIVE_CONFIG_FILE.exists():
return {}
try:
with open(HIVE_CONFIG_FILE, encoding="utf-8-sig") as f:
return json.load(f)
except (json.JSONDecodeError, OSError) as e:
logger.warning(
"Failed to load Hive config %s: %s",
HIVE_CONFIG_FILE,
e,
)
return {}
# ---------------------------------------------------------------------------
# Derived helpers
# ---------------------------------------------------------------------------
def get_preferred_model() -> str:
"""Return the user's preferred LLM model string (e.g. 'anthropic/claude-sonnet-4-20250514')."""
llm = get_hive_config().get("llm", {})
if llm.get("provider") and llm.get("model"):
return f"{llm['provider']}/{llm['model']}"
return "anthropic/claude-sonnet-4-20250514"
def get_max_tokens() -> int:
"""Return the configured max_tokens, falling back to DEFAULT_MAX_TOKENS."""
return get_hive_config().get("llm", {}).get("max_tokens", DEFAULT_MAX_TOKENS)
def get_api_key() -> str | None:
"""Return the API key, supporting env var, Claude Code subscription, Codex, and ZAI Code.
Priority:
1. Claude Code subscription (``use_claude_code_subscription: true``)
reads the OAuth token from ``~/.claude/.credentials.json``.
2. Codex subscription (``use_codex_subscription: true``)
reads the OAuth token from macOS Keychain or ``~/.codex/auth.json``.
3. Environment variable named in ``api_key_env_var``.
"""
llm = get_hive_config().get("llm", {})
# Claude Code subscription: read OAuth token directly
if llm.get("use_claude_code_subscription"):
try:
from framework.runner.runner import get_claude_code_token
token = get_claude_code_token()
if token:
return token
except ImportError:
pass
# Codex subscription: read OAuth token from Keychain / auth.json
if llm.get("use_codex_subscription"):
try:
from framework.runner.runner import get_codex_token
token = get_codex_token()
if token:
return token
except ImportError:
pass
# Standard env-var path (covers ZAI Code and all API-key providers)
api_key_env_var = llm.get("api_key_env_var")
if api_key_env_var:
return os.environ.get(api_key_env_var)
return None
def get_gcu_enabled() -> bool:
"""Return whether GCU (browser automation) is enabled in user config."""
return get_hive_config().get("gcu_enabled", True)
def get_api_base() -> str | None:
"""Return the api_base URL for OpenAI-compatible endpoints, if configured."""
llm = get_hive_config().get("llm", {})
if llm.get("use_codex_subscription"):
# Codex subscription routes through the ChatGPT backend, not api.openai.com.
return "https://chatgpt.com/backend-api/codex"
return llm.get("api_base")
def get_llm_extra_kwargs() -> dict[str, Any]:
"""Return extra kwargs for LiteLLMProvider (e.g. OAuth headers).
When ``use_claude_code_subscription`` is enabled, returns
``extra_headers`` with the OAuth Bearer token so that litellm's
built-in Anthropic OAuth handler adds the required beta headers.
When ``use_codex_subscription`` is enabled, returns
``extra_headers`` with the Bearer token, ``ChatGPT-Account-Id``,
and ``store=False`` (required by the ChatGPT backend).
"""
llm = get_hive_config().get("llm", {})
if llm.get("use_claude_code_subscription"):
api_key = get_api_key()
if api_key:
return {
"extra_headers": {"authorization": f"Bearer {api_key}"},
}
if llm.get("use_codex_subscription"):
api_key = get_api_key()
if api_key:
headers: dict[str, str] = {
"Authorization": f"Bearer {api_key}",
"User-Agent": "CodexBar",
}
try:
from framework.runner.runner import get_codex_account_id
account_id = get_codex_account_id()
if account_id:
headers["ChatGPT-Account-Id"] = account_id
except ImportError:
pass
return {
"extra_headers": headers,
"store": False,
"allowed_openai_params": ["store"],
}
return {}
# ---------------------------------------------------------------------------
# RuntimeConfig shared across agent templates
# ---------------------------------------------------------------------------
@dataclass
class RuntimeConfig:
"""Agent runtime configuration loaded from ~/.hive/configuration.json."""
model: str = field(default_factory=get_preferred_model)
temperature: float = 0.7
max_tokens: int = field(default_factory=get_max_tokens)
api_key: str | None = field(default_factory=get_api_key)
api_base: str | None = field(default_factory=get_api_base)
extra_kwargs: dict[str, Any] = field(default_factory=get_llm_extra_kwargs)
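A minimal sketch of how an agent template might consume these helpers; the llm_kwargs dict is illustrative glue code, not a documented interface of any provider class:

cfg = RuntimeConfig()  # reads ~/.hive/configuration.json via the helpers above

llm_kwargs = {
    "model": cfg.model,            # e.g. "anthropic/claude-sonnet-4-20250514"
    "max_tokens": cfg.max_tokens,  # falls back to DEFAULT_MAX_TOKENS
    "api_key": cfg.api_key,        # env var, Claude Code, or Codex token
    "api_base": cfg.api_base,      # ChatGPT backend URL for Codex subscriptions
    **cfg.extra_kwargs,            # OAuth headers when a subscription is configured
}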
+52 -3
@@ -6,7 +6,7 @@ This module provides secure credential storage with:
- Template-based usage: {{cred.key}} patterns for injection
- Bipartisan model: Store stores values, tools define usage
- Provider system: Extensible lifecycle management (refresh, validate)
- Multiple backends: Encrypted files, env vars, HashiCorp Vault
- Multiple backends: Encrypted files, env vars
Quick Start:
from core.framework.credentials import CredentialStore, CredentialObject
@@ -38,10 +38,16 @@ For Aden server sync:
AdenSyncProvider,
)
For Vault integration:
from core.framework.credentials.vault import HashiCorpVaultStorage
"""
from .key_storage import (
delete_aden_api_key,
generate_and_save_credential_key,
load_aden_api_key,
load_credential_key,
save_aden_api_key,
save_credential_key,
)
from .models import (
CredentialDecryptionError,
CredentialError,
@@ -59,6 +65,13 @@ from .provider import (
CredentialProvider,
StaticProvider,
)
from .setup import (
CredentialSetupSession,
MissingCredential,
SetupResult,
load_agent_nodes,
run_credential_setup_cli,
)
from .storage import (
CompositeStorage,
CredentialStorage,
@@ -68,6 +81,12 @@ from .storage import (
)
from .store import CredentialStore
from .template import TemplateResolver
from .validation import (
CredentialStatus,
CredentialValidationResult,
ensure_credential_key_env,
validate_agent_credentials,
)
# Aden sync components (lazy import to avoid httpx dependency when not needed)
# Usage: from core.framework.credentials.aden import AdenSyncProvider
@@ -84,6 +103,14 @@ try:
except ImportError:
_ADEN_AVAILABLE = False
# Local credential registry (named API key accounts with identity metadata)
try:
from .local import LocalAccountInfo, LocalCredentialRegistry
_LOCAL_AVAILABLE = True
except ImportError:
_LOCAL_AVAILABLE = False
__all__ = [
# Main store
"CredentialStore",
@@ -111,12 +138,34 @@ __all__ = [
"CredentialRefreshError",
"CredentialValidationError",
"CredentialDecryptionError",
# Key storage (bootstrap credentials)
"load_credential_key",
"save_credential_key",
"generate_and_save_credential_key",
"load_aden_api_key",
"save_aden_api_key",
"delete_aden_api_key",
# Validation
"ensure_credential_key_env",
"validate_agent_credentials",
"CredentialStatus",
"CredentialValidationResult",
# Interactive setup
"CredentialSetupSession",
"MissingCredential",
"SetupResult",
"load_agent_nodes",
"run_credential_setup_cli",
# Aden sync (optional - requires httpx)
"AdenSyncProvider",
"AdenCredentialClient",
"AdenClientConfig",
"AdenCachedStorage",
# Local credential registry (optional - requires cryptography)
"LocalCredentialRegistry",
"LocalAccountInfo",
]
# Track Aden availability for runtime checks
ADEN_AVAILABLE = _ADEN_AVAILABLE
LOCAL_AVAILABLE = _LOCAL_AVAILABLE
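A hedged sketch of the store-plus-template model described in the docstring above; the storage= keyword and the EncryptedFileStorage import path are assumptions about this package, not confirmed signatures:

from core.framework.credentials import CredentialStore
from core.framework.credentials.storage import EncryptedFileStorage  # assumed location

store = CredentialStore(storage=EncryptedFileStorage())  # keyword name assumed

# Tools reference secrets via {{cred.key}} templates; at call time the value
# is resolved from the store, equivalent to:
token = store.get_key("hubspot", "access_token")
headers = {"Authorization": f"Bearer {token}"}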
+166 -192
@@ -1,33 +1,36 @@
"""
Aden Credential Client.
HTTP client for communicating with the Aden authentication server.
The Aden server handles OAuth2 authorization flows and token management.
This client fetches tokens and delegates refresh operations to Aden.
HTTP client for the Aden authentication server.
Aden holds all OAuth secrets; agents receive only short-lived access tokens.
API (all endpoints authenticated with Bearer {api_key}):
GET  /v1/credentials                            → list integrations
GET  /v1/credentials/{integration_id}           → get access token (auto-refreshes)
POST /v1/credentials/{integration_id}/refresh   → force refresh
GET  /v1/credentials/{integration_id}/validate  → check validity
Integration IDs are base64-encoded hashes assigned by the Aden platform
(e.g. "Z29vZ2xlOlRpbW90aHk6MTYwNjc6MTM2ODQ"), NOT provider names.
Usage:
# API key loaded from ADEN_API_KEY environment variable by default
client = AdenCredentialClient(AdenClientConfig(
base_url="https://api.adenhq.com",
))
# Or explicitly provide the API key
client = AdenCredentialClient(AdenClientConfig(
base_url="https://api.adenhq.com",
api_key="your-api-key",
))
# List what's connected
for info in client.list_integrations():
print(f"{info.provider}/{info.alias}: {info.status}")
# Fetch a credential
response = client.get_credential("hubspot")
if response:
print(f"Token expires at: {response.expires_at}")
# Request a refresh
refreshed = client.request_refresh("hubspot")
# Get an access token
cred = client.get_credential(info.integration_id)
print(cred.access_token)
"""
from __future__ import annotations
import json as _json
import logging
import os
import time
@@ -88,8 +91,7 @@ class AdenClientConfig:
"""Base URL of the Aden server (e.g., 'https://api.adenhq.com')."""
api_key: str | None = None
"""Agent's API key for authenticating with Aden.
If not provided, loaded from ADEN_API_KEY environment variable."""
"""Agent API key. Loaded from ADEN_API_KEY env var if not provided."""
tenant_id: str | None = None
"""Optional tenant ID for multi-tenant deployments."""
@@ -104,7 +106,6 @@ class AdenClientConfig:
"""Base delay between retries in seconds (exponential backoff)."""
def __post_init__(self) -> None:
"""Load API key from environment if not provided."""
if self.api_key is None:
self.api_key = os.environ.get("ADEN_API_KEY")
if not self.api_key:
@@ -115,71 +116,124 @@ class AdenClientConfig:
@dataclass
class AdenCredentialResponse:
"""Response from Aden server containing credential data."""
class AdenIntegrationInfo:
"""An integration from GET /v1/credentials.
Example response item::
{
"integration_id": "Z29vZ2xlOlRpbW90aHk6MTYwNjc6MTM2ODQ",
"provider": "google",
"alias": "Timothy",
"status": "active",
"email": "timothy@acho.io",
"expires_at": "2026-02-20T21:46:04.863Z"
}
"""
integration_id: str
"""Unique identifier for the integration (e.g., 'hubspot')."""
"""Base64-encoded hash ID assigned by Aden."""
integration_type: str
"""Type of integration (e.g., 'hubspot', 'github', 'slack')."""
provider: str
"""Provider type (e.g. "google", "slack", "hubspot")."""
access_token: str
"""The access token for API calls."""
alias: str
"""User-set alias on the Aden platform."""
token_type: str = "Bearer"
"""Token type (usually 'Bearer')."""
status: str
"""Status: "active", "expired", "requires_reauth"."""
email: str = ""
"""Email associated with this connection."""
expires_at: datetime | None = None
"""When the access token expires (UTC)."""
"""When the current access token expires."""
scopes: list[str] = field(default_factory=list)
"""OAuth2 scopes granted to this token."""
metadata: dict[str, Any] = field(default_factory=dict)
"""Additional integration-specific metadata."""
# Backward compat — old code reads integration_type
@property
def integration_type(self) -> str:
return self.provider
@classmethod
def from_dict(
cls, data: dict[str, Any], integration_id: str | None = None
) -> AdenCredentialResponse:
"""Create from API response dictionary."""
def from_dict(cls, data: dict[str, Any]) -> AdenIntegrationInfo:
expires_at = None
if data.get("expires_at"):
expires_at = datetime.fromisoformat(data["expires_at"].replace("Z", "+00:00"))
return cls(
integration_id=integration_id or data.get("alias", data.get("provider", "")),
integration_type=data.get("provider", ""),
access_token=data["access_token"],
token_type=data.get("token_type", "Bearer"),
integration_id=data.get("integration_id", ""),
provider=data.get("provider", ""),
alias=data.get("alias", ""),
status=data.get("status", "unknown"),
email=data.get("email", ""),
expires_at=expires_at,
scopes=data.get("scopes", []),
metadata={"email": data.get("email")} if data.get("email") else {},
)
@dataclass
class AdenIntegrationInfo:
"""Information about an available integration."""
class AdenCredentialResponse:
"""Response from GET /v1/credentials/{integration_id}.
Example::
{
"access_token": "ya29.a0AfH6SM...",
"token_type": "Bearer",
"expires_at": "2026-02-20T12:00:00.000Z",
"provider": "google",
"alias": "Timothy",
"email": "timothy@acho.io"
}
"""
integration_id: str
integration_type: str
status: str # "active", "requires_reauth", "expired"
"""The integration_id used in the request."""
access_token: str
"""Short-lived access token for API calls."""
token_type: str = "Bearer"
expires_at: datetime | None = None
provider: str = ""
"""Provider type (e.g. "google")."""
alias: str = ""
"""User-set alias."""
email: str = ""
"""Email associated with this connection."""
scopes: list[str] = field(default_factory=list)
metadata: dict[str, Any] = field(default_factory=dict)
# Backward compat
@property
def integration_type(self) -> str:
return self.provider
@classmethod
def from_dict(cls, data: dict[str, Any]) -> AdenIntegrationInfo:
"""Create from API response dictionary."""
def from_dict(cls, data: dict[str, Any], integration_id: str = "") -> AdenCredentialResponse:
expires_at = None
if data.get("expires_at"):
expires_at = datetime.fromisoformat(data["expires_at"].replace("Z", "+00:00"))
# Build metadata from email if present
metadata = data.get("metadata") or {}
if not metadata and data.get("email"):
metadata = {"email": data["email"]}
return cls(
integration_id=data["integration_id"],
integration_type=data.get("provider", data["integration_id"]),
status=data.get("status", "unknown"),
integration_id=integration_id or data.get("integration_id", ""),
access_token=data["access_token"],
token_type=data.get("token_type", "Bearer"),
expires_at=expires_at,
provider=data.get("provider", ""),
alias=data.get("alias", ""),
email=data.get("email", ""),
scopes=data.get("scopes", []),
metadata=metadata,
)
@@ -187,56 +241,38 @@ class AdenCredentialClient:
"""
HTTP client for Aden credential server.
Handles communication with the Aden authentication server,
including fetching credentials, requesting refreshes, and
reporting usage statistics.
The client automatically handles:
- Retries with exponential backoff for transient failures
- Proper error classification (auth, not found, rate limit, etc.)
- Request headers for authentication and tenant isolation
Usage:
# API key loaded from ADEN_API_KEY environment variable
config = AdenClientConfig(
client = AdenCredentialClient(AdenClientConfig(
base_url="https://api.adenhq.com",
)
))
client = AdenCredentialClient(config)
# List integrations
for info in client.list_integrations():
print(f"{info.provider}/{info.alias}: {info.status}")
# Fetch a credential
cred = client.get_credential("hubspot")
if cred:
headers = {"Authorization": f"Bearer {cred.access_token}"}
# Get access token (uses base64 integration_id, NOT provider name)
cred = client.get_credential(info.integration_id)
headers = {"Authorization": f"Bearer {cred.access_token}"}
# List all integrations
integrations = client.list_integrations()
for info in integrations:
print(f"{info.integration_id}: {info.status}")
# Clean up
client.close()
"""
def __init__(self, config: AdenClientConfig):
"""
Initialize the Aden client.
Args:
config: Client configuration including base URL and API key.
"""
self.config = config
self._client: httpx.Client | None = None
@staticmethod
def _parse_json(response: httpx.Response) -> Any:
"""Parse JSON from response, tolerating UTF-8 BOM."""
return _json.loads(response.content.decode("utf-8-sig"))
def _get_client(self) -> httpx.Client:
"""Get or create the HTTP client."""
if self._client is None:
headers = {
"Authorization": f"Bearer {self.config.api_key}",
"Content-Type": "application/json",
"User-Agent": "hive-credential-store/1.0",
}
if self.config.tenant_id:
headers["X-Tenant-ID"] = self.config.tenant_id
@@ -245,7 +281,6 @@ class AdenCredentialClient:
timeout=self.config.timeout,
headers=headers,
)
return self._client
def _request_with_retry(
@@ -262,10 +297,13 @@ class AdenCredentialClient:
try:
response = client.request(method, path, **kwargs)
# Handle specific error codes
if response.status_code == 401:
raise AdenAuthenticationError("Agent API key is invalid or revoked")
if response.status_code == 403:
data = self._parse_json(response)
raise AdenClientError(data.get("message", "Forbidden"))
if response.status_code == 404:
raise AdenNotFoundError(f"Integration not found: {path}")
@@ -277,15 +315,16 @@ class AdenCredentialClient:
)
if response.status_code == 400:
data = response.json()
if data.get("error") == "refresh_failed":
data = self._parse_json(response)
msg = data.get("message", "Bad request")
if data.get("error") == "refresh_failed" or "refresh" in msg.lower():
raise AdenRefreshError(
data.get("message", "Token refresh failed"),
msg,
requires_reauthorization=data.get("requires_reauthorization", False),
reauthorization_url=data.get("reauthorization_url"),
)
raise AdenClientError(f"Bad request: {msg}")
# Success or other error
response.raise_for_status()
return response
@@ -306,161 +345,96 @@ class AdenCredentialClient:
AdenRefreshError,
AdenRateLimitError,
):
# Don't retry these errors
raise
# Should not reach here, but just in case
raise AdenClientError(
f"Request failed after {self.config.retry_attempts} attempts"
) from last_error
def get_credential(self, integration_id: str) -> AdenCredentialResponse | None:
def list_integrations(self) -> list[AdenIntegrationInfo]:
"""
Fetch the current credential for an integration.
List all integrations for this agent's team.
The Aden server may refresh the token internally if it's expired
before returning it.
Args:
integration_id: The integration identifier (e.g., 'hubspot').
GET /v1/credentials → {"integrations": [...]}
Returns:
Credential response with access token, or None if not found.
List of AdenIntegrationInfo with integration_id, provider,
alias, status, email, expires_at.
"""
response = self._request_with_retry("GET", "/v1/credentials")
data = self._parse_json(response)
return [AdenIntegrationInfo.from_dict(item) for item in data.get("integrations", [])]
Raises:
AdenAuthenticationError: If API key is invalid.
AdenClientError: For connection failures.
# Alias
list_connections = list_integrations
def get_credential(self, integration_id: str) -> AdenCredentialResponse | None:
"""
Get access token for an integration. Auto-refreshes if near expiry.
GET /v1/credentials/{integration_id}
Args:
integration_id: Base64 hash ID from list_integrations().
Returns:
AdenCredentialResponse with access_token, or None if not found.
"""
try:
response = self._request_with_retry("GET", f"/v1/credentials/{integration_id}")
data = response.json()
data = self._parse_json(response)
return AdenCredentialResponse.from_dict(data, integration_id=integration_id)
except AdenNotFoundError:
return None
def request_refresh(self, integration_id: str) -> AdenCredentialResponse:
"""
Request the Aden server to refresh the token.
Force refresh the access token.
Use this when the local store detects an expired or near-expiry token.
The Aden server handles the actual OAuth2 refresh token flow.
POST /v1/credentials/{integration_id}/refresh
Args:
integration_id: The integration identifier.
integration_id: Base64 hash ID.
Returns:
Credential response with new access token.
Raises:
AdenRefreshError: If refresh fails (may require re-authorization).
AdenNotFoundError: If integration not found.
AdenAuthenticationError: If API key is invalid.
AdenRateLimitError: If rate limited.
AdenCredentialResponse with new access_token.
"""
response = self._request_with_retry("POST", f"/v1/credentials/{integration_id}/refresh")
data = response.json()
data = self._parse_json(response)
return AdenCredentialResponse.from_dict(data, integration_id=integration_id)
def list_integrations(self) -> list[AdenIntegrationInfo]:
"""
List all integrations available for this agent/tenant.
Returns:
List of integration info objects.
Raises:
AdenAuthenticationError: If API key is invalid.
AdenClientError: For connection failures.
"""
response = self._request_with_retry("GET", "/v1/credentials")
data = response.json()
return [AdenIntegrationInfo.from_dict(item) for item in data.get("integrations", [])]
def validate_token(self, integration_id: str) -> dict[str, Any]:
"""
Check if a token is still valid without fetching it.
Check if an integration's OAuth connection is valid.
Args:
integration_id: The integration identifier.
GET /v1/credentials/{integration_id}/validate
Returns:
Dict with 'valid' bool and optional 'expires_at', 'reason',
'requires_reauthorization', 'reauthorization_url'.
Raises:
AdenNotFoundError: If integration not found.
AdenAuthenticationError: If API key is invalid.
{"valid": bool, "status": str, "expires_at": str, "error": str|null}
"""
response = self._request_with_retry("GET", f"/v1/credentials/{integration_id}/validate")
return response.json()
def report_usage(
self,
integration_id: str,
operation: str,
status: str = "success",
metadata: dict[str, Any] | None = None,
) -> None:
"""
Report credential usage statistics to Aden.
This is optional and used for analytics/billing.
Args:
integration_id: The integration identifier.
operation: Operation name (e.g., 'api_call').
status: Operation status ('success', 'error').
metadata: Additional operation metadata.
"""
try:
self._request_with_retry(
"POST",
f"/v1/credentials/{integration_id}/usage",
json={
"operation": operation,
"status": status,
"timestamp": datetime.utcnow().isoformat() + "Z",
"metadata": metadata or {},
},
)
except Exception as e:
# Usage reporting is best-effort, don't fail on errors
logger.warning(f"Failed to report usage for '{integration_id}': {e}")
return self._parse_json(response)
def health_check(self) -> dict[str, Any]:
"""
Check Aden server health and connectivity.
Returns:
Dict with 'status', 'version', 'timestamp', and optionally 'error'.
"""
"""Check Aden server health."""
try:
client = self._get_client()
response = client.get("/health")
if response.status_code == 200:
data = response.json()
data = self._parse_json(response)
data["latency_ms"] = response.elapsed.total_seconds() * 1000
return data
return {
"status": "degraded",
"error": f"Unexpected status code: {response.status_code}",
}
return {"status": "degraded", "error": f"HTTP {response.status_code}"}
except Exception as e:
return {
"status": "unhealthy",
"error": str(e),
}
return {"status": "unhealthy", "error": str(e)}
def close(self) -> None:
"""Close the HTTP client and release resources."""
if self._client:
self._client.close()
self._client = None
def __enter__(self) -> AdenCredentialClient:
"""Context manager entry."""
return self
def __exit__(self, *args: Any) -> None:
"""Context manager exit."""
self.close()
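A short sketch of the token lifecycle using only the calls shown above; error handling is elided and the branching is illustrative:

with AdenCredentialClient(AdenClientConfig(base_url="https://api.adenhq.com")) as client:
    for info in client.list_integrations():
        check = client.validate_token(info.integration_id)
        if check.get("valid"):
            cred = client.get_credential(info.integration_id)
        else:
            # May raise AdenRefreshError if the user must re-authorize.
            cred = client.request_refresh(info.integration_id)
        if cred:
            headers = {"Authorization": f"{cred.token_type} {cred.access_token}"}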
+35 -7
@@ -282,8 +282,8 @@ class AdenSyncProvider(CredentialProvider):
"""
Sync all credentials from Aden server to local store.
Fetches the list of available integrations from Aden and
populates the local credential store with current tokens.
Calls GET /v1/credentials to list integrations, then fetches
access tokens for each active one.
Args:
store: The credential store to populate.
@@ -298,9 +298,7 @@ class AdenSyncProvider(CredentialProvider):
for info in integrations:
if info.status != "active":
logger.warning(
f"Skipping integration '{info.integration_id}': status={info.status}"
)
logger.warning(f"Skipping connection '{info.alias}': status={info.status}")
continue
try:
@@ -308,9 +306,9 @@ class AdenSyncProvider(CredentialProvider):
if cred:
store.save_credential(cred)
synced += 1
logger.info(f"Synced credential '{info.integration_id}' from Aden")
logger.info(f"Synced credential '{info.alias}' from Aden")
except Exception as e:
logger.warning(f"Failed to sync '{info.integration_id}': {e}")
logger.warning(f"Failed to sync '{info.alias}': {e}")
except AdenClientError as e:
logger.error(f"Failed to list integrations from Aden: {e}")
@@ -373,6 +371,21 @@ class AdenSyncProvider(CredentialProvider):
value=SecretStr(aden_response.integration_type),
)
# Store alias (user-set name from Aden platform)
if aden_response.alias:
credential.keys["_alias"] = CredentialKey(
name="_alias",
value=SecretStr(aden_response.alias),
)
# Persist Aden metadata as identity keys
for meta_key, meta_value in (aden_response.metadata or {}).items():
if meta_value and isinstance(meta_value, str):
credential.keys[f"_identity_{meta_key}"] = CredentialKey(
name=f"_identity_{meta_key}",
value=SecretStr(meta_value),
)
# Update timestamps
credential.last_refreshed = datetime.now(UTC)
credential.provider_id = self.provider_id
@@ -400,12 +413,27 @@ class AdenSyncProvider(CredentialProvider):
),
}
# Store alias (user-set name from Aden platform)
if aden_response.alias:
keys["_alias"] = CredentialKey(
name="_alias",
value=SecretStr(aden_response.alias),
)
if aden_response.scopes:
keys["scope"] = CredentialKey(
name="scope",
value=SecretStr(" ".join(aden_response.scopes)),
)
# Persist Aden metadata as identity keys
for meta_key, meta_value in (aden_response.metadata or {}).items():
if meta_value and isinstance(meta_value, str):
keys[f"_identity_{meta_key}"] = CredentialKey(
name=f"_identity_{meta_key}",
value=SecretStr(meta_value),
)
return CredentialObject(
id=aden_response.integration_id,
credential_type=CredentialType.OAUTH2,
+154 -23
@@ -26,7 +26,7 @@ Usage:
storage = AdenCachedStorage(
local_storage=EncryptedFileStorage(),
aden_provider=provider,
cache_ttl_seconds=300, # Re-check Aden every 5 minutes
cache_ttl_seconds=600, # Re-check Aden every 10 minutes
)
# Create store
@@ -64,6 +64,8 @@ class AdenCachedStorage(CredentialStorage):
- **Reads**: Try local cache first, fallback to Aden if stale/missing
- **Writes**: Always write to local cache
- **Offline resilience**: Uses cached credentials when Aden is unreachable
- **Provider-based lookup**: Match credentials by provider name (e.g., "hubspot")
when direct ID lookup fails, since Aden uses hash-based IDs internally.
The cache TTL determines how long to trust local credentials before
checking with the Aden server for updates. This balances:
@@ -75,7 +77,7 @@ class AdenCachedStorage(CredentialStorage):
storage = AdenCachedStorage(
local_storage=EncryptedFileStorage(),
aden_provider=provider,
cache_ttl_seconds=300, # 5 minutes
cache_ttl_seconds=600, # 10 minutes
)
store = CredentialStore(
@@ -85,6 +87,7 @@ class AdenCachedStorage(CredentialStorage):
# First access fetches from Aden
# Subsequent accesses use cache until TTL expires
# Can look up by provider name OR credential ID
token = store.get_key("hubspot", "access_token")
"""
@@ -111,21 +114,26 @@ class AdenCachedStorage(CredentialStorage):
self._cache_ttl = timedelta(seconds=cache_ttl_seconds)
self._prefer_local = prefer_local
self._cache_timestamps: dict[str, datetime] = {}
# Index: provider name (e.g., "hubspot") -> list of credential hash IDs
self._provider_index: dict[str, list[str]] = {}
# Index: "provider:alias" -> credential hash ID (for alias-based routing)
self._alias_index: dict[str, str] = {}
def save(self, credential: CredentialObject) -> None:
"""
Save credential to local cache.
Save credential to local cache and update provider index.
Args:
credential: The credential to save.
"""
self._local.save(credential)
self._cache_timestamps[credential.id] = datetime.now(UTC)
self._index_provider(credential)
logger.debug(f"Cached credential '{credential.id}'")
def load(self, credential_id: str) -> CredentialObject | None:
"""
Load credential from cache, with Aden fallback.
Load credential from cache, with Aden fallback and provider-based lookup.
The loading strategy depends on the `prefer_local` setting:
@@ -141,8 +149,39 @@ class AdenCachedStorage(CredentialStorage):
2. Update local cache with response
3. Fall back to local cache only if Aden fails
Provider-based lookup:
When a provider index mapping exists for the credential_id (e.g.,
"hubspot" hash ID), the Aden-synced credential is loaded first.
This ensures fresh OAuth tokens from Aden take priority over stale
local credentials (env vars, old encrypted files).
Args:
credential_id: The credential identifier.
credential_id: The credential identifier or provider name.
Returns:
CredentialObject if found, None otherwise.
"""
# Check provider index first — Aden-synced credentials take priority
resolved_ids = self._provider_index.get(credential_id)
if resolved_ids:
for rid in resolved_ids:
if rid != credential_id:
result = self._load_by_id(rid)
if result is not None:
logger.info(
f"Loaded credential '{credential_id}' via provider index (id='{rid}')"
)
return result
# Direct lookup (exact credential_id match)
return self._load_by_id(credential_id)
def _load_by_id(self, credential_id: str) -> CredentialObject | None:
"""
Load credential by exact ID from cache, with Aden fallback.
Args:
credential_id: The exact credential identifier.
Returns:
CredentialObject if found, None otherwise.
@@ -154,25 +193,42 @@ class AdenCachedStorage(CredentialStorage):
logger.debug(f"Using cached credential '{credential_id}'")
return local_cred
# Try to fetch from Aden
# If nothing local, there's nothing to refresh from Aden.
# sync_all() already fetched all available credentials — anything
# not in local storage doesn't exist on the Aden server.
if local_cred is None:
return None
# Try to refresh stale local credential from Aden
try:
aden_cred = self._aden_provider.fetch_from_aden(credential_id)
if aden_cred:
# Update local cache
self.save(aden_cred)
logger.debug(f"Fetched credential '{credential_id}' from Aden")
return aden_cred
except Exception as e:
logger.warning(f"Failed to fetch '{credential_id}' from Aden: {e}")
logger.info(f"Using stale cached credential '{credential_id}'")
return local_cred
# Fall back to local cache if Aden fails
if local_cred:
logger.info(f"Using stale cached credential '{credential_id}'")
return local_cred
# Return local credential if it exists (may be None)
return local_cred
def load_all_for_provider(self, provider_name: str) -> list[CredentialObject]:
"""Load all credentials for a given provider type.
Args:
provider_name: Provider name (e.g. "google", "slack").
Returns:
List of CredentialObjects for all accounts of this provider.
"""
results: list[CredentialObject] = []
for cid in self._provider_index.get(provider_name, []):
cred = self._load_by_id(cid)
if cred:
results.append(cred)
return results
def delete(self, credential_id: str) -> bool:
"""
Delete credential from local cache.
@@ -200,15 +256,23 @@ class AdenCachedStorage(CredentialStorage):
def exists(self, credential_id: str) -> bool:
"""
Check if credential exists in local cache.
Check if credential exists in local cache (by ID or provider name).
Args:
credential_id: The credential identifier.
credential_id: The credential identifier or provider name.
Returns:
True if credential exists locally.
"""
return self._local.exists(credential_id)
if self._local.exists(credential_id):
return True
# Check provider index
resolved_ids = self._provider_index.get(credential_id)
if resolved_ids:
for rid in resolved_ids:
if rid != credential_id and self._local.exists(rid):
return True
return False
def _is_cache_fresh(self, credential_id: str) -> bool:
"""
@@ -242,12 +306,81 @@ class AdenCachedStorage(CredentialStorage):
self._cache_timestamps.clear()
logger.debug("Invalidated all cache entries")
def _index_provider(self, credential: CredentialObject) -> None:
"""
Index a credential by its provider/integration type and alias.
Aden credentials carry an ``_integration_type`` key whose value is
the provider name (e.g., ``hubspot``). This method maps that
provider name to the credential's hash ID so that subsequent
``load("hubspot")`` calls resolve to the correct credential.
Also indexes by ``_alias`` for alias-based multi-account routing.
Args:
credential: The credential to index.
"""
integration_type_key = credential.keys.get("_integration_type")
if integration_type_key is None:
return
provider_name = integration_type_key.value.get_secret_value()
if provider_name:
if provider_name not in self._provider_index:
self._provider_index[provider_name] = []
if credential.id not in self._provider_index[provider_name]:
self._provider_index[provider_name].append(credential.id)
logger.debug(f"Indexed provider '{provider_name}' -> '{credential.id}'")
# Index by alias for multi-account routing
alias_key = credential.keys.get("_alias")
if alias_key:
alias = alias_key.value.get_secret_value()
if alias:
self._alias_index[f"{provider_name}:{alias}"] = credential.id
def load_by_alias(self, provider_name: str, alias: str) -> CredentialObject | None:
"""Load a credential by provider name and alias.
Args:
provider_name: Provider type (e.g. "google", "slack").
alias: User-set alias from the Aden platform.
Returns:
CredentialObject if found, None otherwise.
"""
cred_id = self._alias_index.get(f"{provider_name}:{alias}")
if cred_id:
return self._load_by_id(cred_id)
return None
def rebuild_provider_index(self) -> int:
"""
Rebuild the provider and alias indexes from all locally cached credentials.
Useful after loading from disk when the in-memory indexes are empty.
Returns:
Number of provider mappings indexed.
"""
self._provider_index.clear()
self._alias_index.clear()
indexed = 0
for cred_id in self._local.list_all():
cred = self._local.load(cred_id)
if cred:
before = len(self._provider_index)
self._index_provider(cred)
if len(self._provider_index) > before:
indexed += 1
logger.debug(f"Rebuilt provider index with {indexed} mappings")
return indexed
def sync_all_from_aden(self) -> int:
"""
Sync all credentials from Aden server to local cache.
Fetches the list of available integrations from Aden and
updates the local cache with current tokens.
Calls GET /v1/credentials to list active integrations,
then fetches tokens for each.
Returns:
Number of credentials synced.
@@ -259,9 +392,7 @@ class AdenCachedStorage(CredentialStorage):
for info in integrations:
if info.status != "active":
logger.warning(
f"Skipping integration '{info.integration_id}': status={info.status}"
)
logger.warning(f"Skipping integration '{info.alias}': status={info.status}")
continue
try:
@@ -269,9 +400,9 @@ class AdenCachedStorage(CredentialStorage):
if cred:
self.save(cred)
synced += 1
logger.info(f"Synced credential '{info.integration_id}' from Aden")
logger.info(f"Synced credential '{info.alias}' from Aden")
except Exception as e:
logger.warning(f"Failed to sync '{info.integration_id}': {e}")
logger.warning(f"Failed to sync '{info.alias}': {e}")
except Exception as e:
logger.error(f"Failed to list integrations from Aden: {e}")
@@ -61,11 +61,13 @@ def mock_client(aden_config):
def aden_response():
"""Create a sample Aden credential response."""
return AdenCredentialResponse(
integration_id="hubspot",
integration_type="hubspot",
integration_id="aHVic3BvdDp0ZXN0OjEzNjExOjExNTI1",
access_token="test-access-token",
token_type="Bearer",
expires_at=datetime.now(UTC) + timedelta(hours=1),
provider="hubspot",
alias="My HubSpot",
email="test@example.com",
scopes=["crm.objects.contacts.read", "crm.objects.contacts.write"],
metadata={"portal_id": "12345"},
)
@@ -108,18 +110,20 @@ class TestAdenCredentialResponse:
"""Tests for AdenCredentialResponse dataclass."""
def test_from_dict_basic(self):
"""Test creating response from dict."""
"""Test creating response from dict (real get-token format)."""
data = {
"integration_id": "github",
"integration_type": "github",
"access_token": "ghp_xxxxx",
"token_type": "Bearer",
"provider": "github",
"alias": "Work",
}
response = AdenCredentialResponse.from_dict(data)
response = AdenCredentialResponse.from_dict(data, integration_id="Z2l0aHViOldvcms6MTIzNDU")
assert response.integration_id == "github"
assert response.integration_type == "github"
assert response.integration_id == "Z2l0aHViOldvcms6MTIzNDU"
assert response.access_token == "ghp_xxxxx"
assert response.provider == "github"
assert response.integration_type == "github" # backward compat property
assert response.token_type == "Bearer"
assert response.expires_at is None
assert response.scopes == []
@@ -127,19 +131,23 @@ class TestAdenCredentialResponse:
def test_from_dict_full(self):
"""Test creating response with all fields."""
data = {
"integration_id": "hubspot",
"integration_type": "hubspot",
"access_token": "token123",
"token_type": "Bearer",
"expires_at": "2026-01-28T15:30:00Z",
"provider": "hubspot",
"alias": "My HubSpot",
"email": "test@example.com",
"scopes": ["read", "write"],
"metadata": {"key": "value"},
}
response = AdenCredentialResponse.from_dict(data)
response = AdenCredentialResponse.from_dict(data, integration_id="aHVic3BvdDp0ZXN0")
assert response.integration_id == "hubspot"
assert response.integration_id == "aHVic3BvdDp0ZXN0"
assert response.access_token == "token123"
assert response.provider == "hubspot"
assert response.alias == "My HubSpot"
assert response.email == "test@example.com"
assert response.expires_at is not None
assert response.scopes == ["read", "write"]
assert response.metadata == {"key": "value"}
@@ -149,21 +157,44 @@ class TestAdenIntegrationInfo:
"""Tests for AdenIntegrationInfo dataclass."""
def test_from_dict(self):
"""Test creating integration info from dict."""
"""Test creating integration info from real API format."""
data = {
"integration_id": "slack",
"integration_type": "slack",
"integration_id": "c2xhY2s6V29yayBTbGFjazoxMjM0NQ",
"provider": "slack",
"alias": "Work Slack",
"status": "active",
"expires_at": "2026-02-01T00:00:00Z",
"email": "user@example.com",
"expires_at": "2026-02-20T21:46:04.863Z",
}
info = AdenIntegrationInfo.from_dict(data)
assert info.integration_id == "slack"
assert info.integration_type == "slack"
assert info.integration_id == "c2xhY2s6V29yayBTbGFjazoxMjM0NQ"
assert info.provider == "slack"
assert info.integration_type == "slack" # backward compat property
assert info.alias == "Work Slack"
assert info.email == "user@example.com"
assert info.status == "active"
assert info.expires_at is not None
def test_from_dict_minimal(self):
"""Test creating integration info with minimal fields."""
data = {
"integration_id": "Z29vZ2xlOlRpbW90aHk6MTYwNjc",
"provider": "google",
"alias": "Timothy",
"status": "requires_reauth",
}
info = AdenIntegrationInfo.from_dict(data)
assert info.integration_id == "Z29vZ2xlOlRpbW90aHk6MTYwNjc"
assert info.provider == "google"
assert info.alias == "Timothy"
assert info.status == "requires_reauth"
assert info.email == ""
assert info.expires_at is None
# =============================================================================
# AdenSyncProvider Tests
@@ -220,10 +251,11 @@ class TestAdenSyncProvider:
def test_refresh_success(self, provider, mock_client, aden_response):
"""Test successful credential refresh."""
hash_id = "aHVic3BvdDp0ZXN0OjEzNjExOjExNTI1"
mock_client.request_refresh.return_value = aden_response
cred = CredentialObject(
id="hubspot",
id=hash_id,
credential_type=CredentialType.OAUTH2,
keys={
"access_token": CredentialKey(
@@ -239,7 +271,7 @@ class TestAdenSyncProvider:
assert refreshed.keys["access_token"].value.get_secret_value() == "test-access-token"
assert refreshed.keys["_aden_managed"].value.get_secret_value() == "true"
assert refreshed.last_refreshed is not None
mock_client.request_refresh.assert_called_once_with("hubspot")
mock_client.request_refresh.assert_called_once_with(hash_id)
def test_refresh_requires_reauth(self, provider, mock_client):
"""Test refresh that requires re-authorization."""
@@ -339,12 +371,13 @@ class TestAdenSyncProvider:
def test_fetch_from_aden(self, provider, mock_client, aden_response):
"""Test fetching credential from Aden."""
hash_id = "aHVic3BvdDp0ZXN0OjEzNjExOjExNTI1"
mock_client.get_credential.return_value = aden_response
cred = provider.fetch_from_aden("hubspot")
cred = provider.fetch_from_aden(hash_id)
assert cred is not None
assert cred.id == "hubspot"
assert cred.id == hash_id
assert cred.keys["access_token"].value.get_secret_value() == "test-access-token"
assert cred.auto_refresh is True
@@ -360,13 +393,15 @@ class TestAdenSyncProvider:
"""Test syncing all credentials."""
mock_client.list_integrations.return_value = [
AdenIntegrationInfo(
integration_id="hubspot",
integration_type="hubspot",
integration_id="aHVic3BvdDp0ZXN0OjEzNjExOjExNTI1",
provider="hubspot",
alias="My HubSpot",
status="active",
),
AdenIntegrationInfo(
integration_id="github",
integration_type="github",
integration_id="Z2l0aHViOnRlc3Q6OTk5",
provider="github",
alias="Work GitHub",
status="requires_reauth", # Should be skipped
),
]
@@ -376,7 +411,7 @@ class TestAdenSyncProvider:
synced = provider.sync_all(store)
assert synced == 1 # Only active one was synced
assert store.get_credential("hubspot") is not None
assert store.get_credential("aHVic3BvdDp0ZXN0OjEzNjExOjExNTI1") is not None
def test_validate_via_aden(self, provider, mock_client):
"""Test validation via Aden introspection."""
@@ -589,6 +624,149 @@ class TestAdenCachedStorage:
assert info["stale"]["is_fresh"] is False
assert info["stale"]["ttl_remaining_seconds"] == 0
def test_save_indexes_provider(self, cached_storage):
"""Test save builds the provider index from _integration_type key."""
cred = CredentialObject(
id="aHVic3BvdDp0ZXN0OjEzNjExOjExNTI1",
credential_type=CredentialType.OAUTH2,
keys={
"access_token": CredentialKey(
name="access_token",
value=SecretStr("token-value"),
),
"_integration_type": CredentialKey(
name="_integration_type",
value=SecretStr("hubspot"),
),
},
)
cached_storage.save(cred)
assert cached_storage._provider_index["hubspot"] == ["aHVic3BvdDp0ZXN0OjEzNjExOjExNTI1"]
def test_load_by_provider_name(self, cached_storage):
"""Test load resolves provider name to hash-based credential ID."""
hash_id = "aHVic3BvdDp0ZXN0OjEzNjExOjExNTI1"
cred = CredentialObject(
id=hash_id,
credential_type=CredentialType.OAUTH2,
keys={
"access_token": CredentialKey(
name="access_token",
value=SecretStr("hubspot-token"),
),
"_integration_type": CredentialKey(
name="_integration_type",
value=SecretStr("hubspot"),
),
},
)
# Save builds the index
cached_storage.save(cred)
# Load by provider name should resolve to the hash ID
loaded = cached_storage.load("hubspot")
assert loaded is not None
assert loaded.id == hash_id
assert loaded.keys["access_token"].value.get_secret_value() == "hubspot-token"
def test_load_by_direct_id_still_works(self, cached_storage):
"""Test load by direct hash ID still works as before."""
hash_id = "aHVic3BvdDp0ZXN0OjEzNjExOjExNTI1"
cred = CredentialObject(
id=hash_id,
credential_type=CredentialType.OAUTH2,
keys={
"access_token": CredentialKey(
name="access_token",
value=SecretStr("token"),
),
"_integration_type": CredentialKey(
name="_integration_type",
value=SecretStr("hubspot"),
),
},
)
cached_storage.save(cred)
# Direct ID lookup should still work
loaded = cached_storage.load(hash_id)
assert loaded is not None
assert loaded.id == hash_id
def test_exists_by_provider_name(self, cached_storage):
"""Test exists resolves provider name to hash-based credential ID."""
hash_id = "c2xhY2s6dGVzdDo5OTk="
cred = CredentialObject(
id=hash_id,
credential_type=CredentialType.OAUTH2,
keys={
"access_token": CredentialKey(
name="access_token",
value=SecretStr("slack-token"),
),
"_integration_type": CredentialKey(
name="_integration_type",
value=SecretStr("slack"),
),
},
)
cached_storage.save(cred)
assert cached_storage.exists("slack") is True
assert cached_storage.exists(hash_id) is True
assert cached_storage.exists("nonexistent") is False
def test_rebuild_provider_index(self, cached_storage, local_storage):
"""Test rebuild_provider_index reconstructs from local storage."""
# Manually save credentials to local storage (bypassing cached_storage.save)
for provider_name, hash_id in [("hubspot", "hash_hub"), ("slack", "hash_slack")]:
cred = CredentialObject(
id=hash_id,
credential_type=CredentialType.OAUTH2,
keys={
"_integration_type": CredentialKey(
name="_integration_type",
value=SecretStr(provider_name),
),
},
)
local_storage.save(cred)
# Index should be empty (we bypassed save)
assert len(cached_storage._provider_index) == 0
# Rebuild
indexed = cached_storage.rebuild_provider_index()
assert indexed == 2
assert cached_storage._provider_index["hubspot"] == ["hash_hub"]
assert cached_storage._provider_index["slack"] == ["hash_slack"]
def test_save_without_integration_type_no_index(self, cached_storage):
"""Test save does not index credentials without _integration_type key."""
cred = CredentialObject(
id="plain-cred",
credential_type=CredentialType.API_KEY,
keys={
"api_key": CredentialKey(
name="api_key",
value=SecretStr("key-value"),
),
},
)
cached_storage.save(cred)
assert "plain-cred" not in cached_storage._provider_index
assert len(cached_storage._provider_index) == 0
# =============================================================================
# Integration Tests
@@ -600,19 +778,23 @@ class TestAdenIntegration:
def test_full_workflow(self, mock_client, aden_response):
"""Test full workflow: sync, get, refresh."""
hash_id = "aHVic3BvdDp0ZXN0OjEzNjExOjExNTI1"
# Setup
mock_client.list_integrations.return_value = [
AdenIntegrationInfo(
integration_id="hubspot",
integration_type="hubspot",
integration_id=hash_id,
provider="hubspot",
alias="My HubSpot",
status="active",
),
]
mock_client.get_credential.return_value = aden_response
mock_client.request_refresh.return_value = AdenCredentialResponse(
integration_id="hubspot",
integration_type="hubspot",
integration_id=hash_id,
access_token="refreshed-token",
provider="hubspot",
alias="My HubSpot",
expires_at=datetime.now(UTC) + timedelta(hours=2),
scopes=[],
)
@@ -629,8 +811,8 @@ class TestAdenIntegration:
synced = provider.sync_all(store)
assert synced == 1
# Get credential
cred = store.get_credential("hubspot")
# Get credential by hash ID
cred = store.get_credential(hash_id)
assert cred is not None
assert cred.keys["access_token"].value.get_secret_value() == "test-access-token"
+201
@@ -0,0 +1,201 @@
"""
Dedicated file-based storage for bootstrap credentials.
HIVE_CREDENTIAL_KEY -> ~/.hive/secrets/credential_key (plain text, chmod 600)
ADEN_API_KEY -> ~/.hive/credentials/ (encrypted via EncryptedFileStorage)
Boot order:
1. load_credential_key() -- reads/generates the Fernet key, sets os.environ
2. load_aden_api_key() -- uses the encrypted store (which needs the key from step 1)
"""
from __future__ import annotations
import logging
import os
import stat
from pathlib import Path
logger = logging.getLogger(__name__)
CREDENTIAL_KEY_PATH = Path.home() / ".hive" / "secrets" / "credential_key"
CREDENTIAL_KEY_ENV_VAR = "HIVE_CREDENTIAL_KEY"
ADEN_CREDENTIAL_ID = "aden_api_key"
ADEN_ENV_VAR = "ADEN_API_KEY"
# ---------------------------------------------------------------------------
# HIVE_CREDENTIAL_KEY
# ---------------------------------------------------------------------------
def load_credential_key() -> str | None:
"""Load HIVE_CREDENTIAL_KEY with priority: env > file > shell config.
Sets ``os.environ["HIVE_CREDENTIAL_KEY"]`` as a side-effect when found.
Returns the key string, or ``None`` if unavailable everywhere.
"""
# 1. Already in environment (set by parent process, CI, Windows Registry, etc.)
key = os.environ.get(CREDENTIAL_KEY_ENV_VAR)
if key:
return key
# 2. Dedicated secrets file
key = _read_credential_key_file()
if key:
os.environ[CREDENTIAL_KEY_ENV_VAR] = key
return key
# 3. Shell config fallback (backward compat for old installs)
key = _read_from_shell_config(CREDENTIAL_KEY_ENV_VAR)
if key:
os.environ[CREDENTIAL_KEY_ENV_VAR] = key
return key
return None
def save_credential_key(key: str) -> Path:
"""Save HIVE_CREDENTIAL_KEY to ``~/.hive/secrets/credential_key``.
Creates parent dirs with mode 700, writes the file with mode 600.
Also sets ``os.environ["HIVE_CREDENTIAL_KEY"]``.
Returns:
The path that was written.
"""
path = CREDENTIAL_KEY_PATH
path.parent.mkdir(parents=True, exist_ok=True)
# Restrict the secrets directory itself
path.parent.chmod(stat.S_IRWXU) # 0o700
path.write_text(key, encoding="utf-8")
path.chmod(stat.S_IRUSR | stat.S_IWUSR) # 0o600
os.environ[CREDENTIAL_KEY_ENV_VAR] = key
return path
def generate_and_save_credential_key() -> str:
"""Generate a new Fernet key and persist it to ``~/.hive/secrets/credential_key``.
Returns:
The generated key string.
"""
from cryptography.fernet import Fernet
key = Fernet.generate_key().decode()
save_credential_key(key)
return key
# ---------------------------------------------------------------------------
# ADEN_API_KEY
# ---------------------------------------------------------------------------
def load_aden_api_key() -> str | None:
"""Load ADEN_API_KEY with priority: env > encrypted store > shell config.
**Must** be called after ``load_credential_key()`` because the encrypted
store depends on HIVE_CREDENTIAL_KEY.
Sets ``os.environ["ADEN_API_KEY"]`` as a side-effect when found.
Returns the key string, or ``None`` if unavailable everywhere.
"""
# 1. Already in environment
key = os.environ.get(ADEN_ENV_VAR)
if key:
return key
# 2. Encrypted credential store
key = _read_aden_from_encrypted_store()
if key:
os.environ[ADEN_ENV_VAR] = key
return key
# 3. Shell config fallback (backward compat)
key = _read_from_shell_config(ADEN_ENV_VAR)
if key:
os.environ[ADEN_ENV_VAR] = key
return key
return None
def save_aden_api_key(key: str) -> None:
"""Save ADEN_API_KEY to the encrypted credential store.
Also sets ``os.environ["ADEN_API_KEY"]``.
"""
from pydantic import SecretStr
from .models import CredentialKey, CredentialObject
from .storage import EncryptedFileStorage
storage = EncryptedFileStorage()
cred = CredentialObject(
id=ADEN_CREDENTIAL_ID,
keys={"api_key": CredentialKey(name="api_key", value=SecretStr(key))},
)
storage.save(cred)
os.environ[ADEN_ENV_VAR] = key
def delete_aden_api_key() -> None:
"""Remove ADEN_API_KEY from the encrypted store and ``os.environ``."""
try:
from .storage import EncryptedFileStorage
storage = EncryptedFileStorage()
storage.delete(ADEN_CREDENTIAL_ID)
except Exception:
logger.debug("Could not delete %s from encrypted store", ADEN_CREDENTIAL_ID)
os.environ.pop(ADEN_ENV_VAR, None)
# ---------------------------------------------------------------------------
# Internal helpers
# ---------------------------------------------------------------------------
def _read_credential_key_file() -> str | None:
"""Read the credential key from ``~/.hive/secrets/credential_key``."""
try:
if CREDENTIAL_KEY_PATH.is_file():
value = CREDENTIAL_KEY_PATH.read_text(encoding="utf-8").strip()
if value:
return value
except Exception:
logger.debug("Could not read %s", CREDENTIAL_KEY_PATH)
return None
def _read_from_shell_config(env_var: str) -> str | None:
"""Fallback: read an env var from ~/.zshrc or ~/.bashrc."""
try:
from aden_tools.credentials.shell_config import check_env_var_in_shell_config
found, value = check_env_var_in_shell_config(env_var)
if found and value:
return value
except ImportError:
pass
return None
def _read_aden_from_encrypted_store() -> str | None:
"""Try to load ADEN_API_KEY from the encrypted credential store."""
if not os.environ.get(CREDENTIAL_KEY_ENV_VAR):
return None
try:
from .storage import EncryptedFileStorage
storage = EncryptedFileStorage()
cred = storage.load(ADEN_CREDENTIAL_ID)
if cred:
return cred.get_key("api_key")
except Exception:
logger.debug("Could not load %s from encrypted store", ADEN_CREDENTIAL_ID)
return None
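Because load_aden_api_key() reads the encrypted store, it only works once load_credential_key() has run. A minimal sketch of the documented boot order, using nothing beyond the functions defined above:

# Sketch of the boot order described in the module docstring; assumes the three
# functions above are imported from this module.
def boot_credentials() -> bool:
    if load_credential_key() is None:
        # No key in env, secrets file, or shell config: create and persist one.
        generate_and_save_credential_key()
    # Step 2 can now decrypt ~/.hive/credentials because step 1 set HIVE_CREDENTIAL_KEY.
    return load_aden_api_key() is not None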
@@ -0,0 +1,31 @@
"""
Local credential registry for named API key accounts with identity metadata.
Provides feature parity with Aden OAuth credentials for locally-stored API keys:
aliases, identity metadata, status tracking, CRUD, and health validation.
Usage:
from framework.credentials.local import LocalCredentialRegistry, LocalAccountInfo
registry = LocalCredentialRegistry.default()
# Add a named account
info, health = registry.save_account("brave_search", "work", "BSA-xxx")
# List all stored local accounts
for account in registry.list_accounts():
print(f"{account.credential_id}/{account.alias}: {account.status}")
if account.identity.is_known:
print(f" Identity: {account.identity.label}")
# Re-validate a stored account
result = registry.validate_account("github", "personal")
"""
from .models import LocalAccountInfo
from .registry import LocalCredentialRegistry
__all__ = [
"LocalAccountInfo",
"LocalCredentialRegistry",
]
@@ -0,0 +1,58 @@
"""
Data models for the local credential registry.
LocalAccountInfo mirrors AdenIntegrationInfo, giving local API key credentials
the same identity/status metadata as Aden OAuth credentials.
"""
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import UTC, datetime
from framework.credentials.models import CredentialIdentity
@dataclass
class LocalAccountInfo:
"""
A locally-stored named credential account.
Mirrors AdenIntegrationInfo so local and Aden accounts can be treated
uniformly in the credential tester and account selection UI.
Attributes:
credential_id: The logical credential name (e.g. "brave_search", "github")
alias: User-provided name for this account (e.g. "work", "personal")
status: "active" | "failed" | "unknown"
identity: Email, username, workspace, or account_id extracted from health check
last_validated: When the key was last verified against the live API
created_at: When this account was first stored
"""
credential_id: str
alias: str
status: str = "unknown"
identity: CredentialIdentity = field(default_factory=CredentialIdentity)
last_validated: datetime | None = None
created_at: datetime = field(default_factory=lambda: datetime.now(UTC))
@property
def storage_id(self) -> str:
"""The key used in EncryptedFileStorage: '{credential_id}/{alias}'."""
return f"{self.credential_id}/{self.alias}"
def to_account_dict(self) -> dict:
"""
Format compatible with AccountSelectionScreen and configure_for_account().
Same shape as Aden account dicts, with source='local' added.
"""
return {
"provider": self.credential_id,
"alias": self.alias,
"identity": self.identity.to_dict(),
"integration_id": None,
"source": "local",
"status": self.status,
}
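For orientation, this is roughly what to_account_dict() yields for a freshly created account; the identity values below are made up for illustration:

# Illustration only: field values are invented.
from framework.credentials.local import LocalAccountInfo
from framework.credentials.models import CredentialIdentity

info = LocalAccountInfo(
    credential_id="brave_search",
    alias="work",
    status="active",
    identity=CredentialIdentity(username="acme"),
)
assert info.storage_id == "brave_search/work"
print(info.to_account_dict())
# {'provider': 'brave_search', 'alias': 'work', 'identity': {'username': 'acme'},
#  'integration_id': None, 'source': 'local', 'status': 'active'}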
@@ -0,0 +1,326 @@
"""
Local Credential Registry.
Manages named local API key accounts stored in EncryptedFileStorage.
Mirrors the Aden integration model so local credentials have feature parity:
aliases, identity metadata, status tracking, CRUD, and health validation.
Storage convention:
{credential_id}/{alias} CredentialObject
e.g. "brave_search/work" { api_key: "BSA-xxx", _alias: "work",
_integration_type: "brave_search",
_status: "active",
_identity_username: "acme", ... }
Usage:
registry = LocalCredentialRegistry.default()
# Add a new account
info, health = registry.save_account("brave_search", "work", "BSA-xxx")
print(info.status, info.identity.label)
# List all accounts
for account in registry.list_accounts():
print(f"{account.credential_id}/{account.alias}: {account.status}")
# Get the raw API key for a specific account
key = registry.get_key("github", "personal")
# Re-validate a stored account
result = registry.validate_account("github", "personal")
"""
from __future__ import annotations
import logging
from datetime import UTC, datetime
from pathlib import Path
from typing import TYPE_CHECKING, Any
from framework.credentials.models import CredentialIdentity, CredentialObject
from framework.credentials.storage import EncryptedFileStorage
from .models import LocalAccountInfo
if TYPE_CHECKING:
from aden_tools.credentials.health_check import HealthCheckResult
logger = logging.getLogger(__name__)
_SEPARATOR = "/"
class LocalCredentialRegistry:
"""
Named local API key account store backed by EncryptedFileStorage.
Provides the same list/save/get/delete/validate surface as the Aden
client, but for locally-stored API keys.
"""
def __init__(self, storage: EncryptedFileStorage) -> None:
self._storage = storage
# ------------------------------------------------------------------
# Listing
# ------------------------------------------------------------------
def list_accounts(self, credential_id: str | None = None) -> list[LocalAccountInfo]:
"""
List all stored local accounts.
Args:
credential_id: If given, filter to this credential type only.
Returns:
List of LocalAccountInfo sorted by credential_id then alias.
"""
all_ids = self._storage.list_all()
accounts: list[LocalAccountInfo] = []
for storage_id in all_ids:
if _SEPARATOR not in storage_id:
continue # Skip legacy un-aliased entries
try:
cred_obj = self._storage.load(storage_id)
except Exception as exc:
logger.debug("Skipping unreadable credential %s: %s", storage_id, exc)
continue
if cred_obj is None:
continue
info = self._to_account_info(cred_obj)
if info is None:
continue
if credential_id and info.credential_id != credential_id:
continue
accounts.append(info)
return sorted(accounts, key=lambda a: (a.credential_id, a.alias))
# ------------------------------------------------------------------
# Save / add
# ------------------------------------------------------------------
def save_account(
self,
credential_id: str,
alias: str,
api_key: str,
run_health_check: bool = True,
extra_keys: dict[str, str] | None = None,
) -> tuple[LocalAccountInfo, HealthCheckResult | None]:
"""
Store a named account, optionally validating it first.
Args:
credential_id: Logical credential name (e.g. "brave_search").
alias: User-chosen name (e.g. "work"). Defaults to "default".
api_key: The raw API key / token value.
run_health_check: If True, verify the key against the live API
and extract identity metadata. Failure still saves with
status="failed" so the user can re-validate later.
extra_keys: Additional key/value pairs to store (e.g.
cse_id for google_custom_search).
Returns:
(LocalAccountInfo, HealthCheckResult | None)
"""
alias = alias or "default"
health_result: HealthCheckResult | None = None
identity: dict[str, str] = {}
status = "active"
if run_health_check:
try:
from aden_tools.credentials.health_check import check_credential_health
kwargs: dict[str, Any] = {}
if extra_keys and "cse_id" in extra_keys:
kwargs["cse_id"] = extra_keys["cse_id"]
health_result = check_credential_health(credential_id, api_key, **kwargs)
status = "active" if health_result.valid else "failed"
identity = health_result.details.get("identity", {})
except Exception as exc:
logger.warning("Health check failed for %s/%s: %s", credential_id, alias, exc)
status = "unknown"
storage_id = f"{credential_id}{_SEPARATOR}{alias}"
now = datetime.now(UTC)
cred_obj = CredentialObject(id=storage_id)
cred_obj.set_key("api_key", api_key)
cred_obj.set_key("_alias", alias)
cred_obj.set_key("_integration_type", credential_id)
cred_obj.set_key("_status", status)
if extra_keys:
for k, v in extra_keys.items():
cred_obj.set_key(k, v)
if identity:
valid_fields = set(CredentialIdentity.model_fields)
filtered = {k: v for k, v in identity.items() if k in valid_fields}
if filtered:
cred_obj.set_identity(**filtered)
cred_obj.last_refreshed = now if run_health_check else None
self._storage.save(cred_obj)
account_info = LocalAccountInfo(
credential_id=credential_id,
alias=alias,
status=status,
identity=cred_obj.identity,
last_validated=cred_obj.last_refreshed,
created_at=cred_obj.created_at,
)
return account_info, health_result
# ------------------------------------------------------------------
# Get
# ------------------------------------------------------------------
def get_account(self, credential_id: str, alias: str) -> CredentialObject | None:
"""Load the raw CredentialObject for a specific account."""
return self._storage.load(f"{credential_id}{_SEPARATOR}{alias}")
def get_key(self, credential_id: str, alias: str, key_name: str = "api_key") -> str | None:
"""
Return the stored secret value for a specific account.
Args:
credential_id: Logical credential name (e.g. "brave_search").
alias: Account alias (e.g. "work").
key_name: Key within the credential (default "api_key").
Returns:
The secret value, or None if not found.
"""
cred = self.get_account(credential_id, alias)
if cred is None:
return None
return cred.get_key(key_name)
def get_account_info(self, credential_id: str, alias: str) -> LocalAccountInfo | None:
"""Load a LocalAccountInfo for a specific account."""
cred = self.get_account(credential_id, alias)
if cred is None:
return None
return self._to_account_info(cred)
# ------------------------------------------------------------------
# Delete
# ------------------------------------------------------------------
def delete_account(self, credential_id: str, alias: str) -> bool:
"""
Remove a stored account.
Returns:
True if the account existed and was deleted, False otherwise.
"""
return self._storage.delete(f"{credential_id}{_SEPARATOR}{alias}")
# ------------------------------------------------------------------
# Validate
# ------------------------------------------------------------------
def validate_account(self, credential_id: str, alias: str) -> HealthCheckResult:
"""
Re-run health check for a stored account and update its status.
Args:
credential_id: Logical credential name.
alias: Account alias.
Returns:
HealthCheckResult from the live API check.
Raises:
KeyError: If the account doesn't exist.
"""
from aden_tools.credentials.health_check import HealthCheckResult, check_credential_health
cred = self.get_account(credential_id, alias)
if cred is None:
raise KeyError(f"No local account found: {credential_id}/{alias}")
api_key = cred.get_key("api_key")
if not api_key:
return HealthCheckResult(valid=False, message="No api_key stored for this account")
try:
kwargs: dict[str, Any] = {}
cse_id = cred.get_key("cse_id")
if cse_id:
kwargs["cse_id"] = cse_id
result = check_credential_health(credential_id, api_key, **kwargs)
except Exception as exc:
result = HealthCheckResult(
valid=False,
message=f"Health check error: {exc}",
details={"error": str(exc)},
)
# Update status and timestamp in-place
new_status = "active" if result.valid else "failed"
cred.set_key("_status", new_status)
cred.last_refreshed = datetime.now(UTC)
# Re-extract identity if available
identity = result.details.get("identity", {})
if identity:
valid_fields = set(CredentialIdentity.model_fields)
filtered = {k: v for k, v in identity.items() if k in valid_fields}
if filtered:
cred.set_identity(**filtered)
self._storage.save(cred)
return result
# ------------------------------------------------------------------
# Factory
# ------------------------------------------------------------------
@classmethod
def default(cls) -> LocalCredentialRegistry:
"""Create a registry using the default encrypted storage at ~/.hive/credentials."""
return cls(EncryptedFileStorage())
@classmethod
def at_path(cls, path: str | Path) -> LocalCredentialRegistry:
"""Create a registry using a custom storage path."""
return cls(EncryptedFileStorage(base_path=path))
# ------------------------------------------------------------------
# Internals
# ------------------------------------------------------------------
def _to_account_info(self, cred_obj: CredentialObject) -> LocalAccountInfo | None:
"""Build LocalAccountInfo from a CredentialObject."""
cred_type_key = cred_obj.keys.get("_integration_type")
if cred_type_key is None:
return None
cred_id = cred_type_key.get_secret_value()
alias_key = cred_obj.keys.get("_alias")
alias = alias_key.get_secret_value() if alias_key else cred_obj.id.split(_SEPARATOR, 1)[-1]
status_key = cred_obj.keys.get("_status")
status = status_key.get_secret_value() if status_key else "unknown"
return LocalAccountInfo(
credential_id=cred_id,
alias=alias,
status=status,
identity=cred_obj.identity,
last_validated=cred_obj.last_refreshed,
created_at=cred_obj.created_at,
)
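A compact round trip over the registry surface above; the health check is skipped so the example does not depend on a live API, and it assumes HIVE_CREDENTIAL_KEY is already loaded so EncryptedFileStorage can encrypt to the demo path:

# Sketch: store, read back, and delete a named local account without a live health check.
from framework.credentials.local import LocalCredentialRegistry

registry = LocalCredentialRegistry.at_path("/tmp/hive-creds-demo")  # isolated store for the example
info, health = registry.save_account("brave_search", "work", "BSA-xxx", run_health_check=False)
assert info.storage_id == "brave_search/work" and health is None
assert registry.get_key("brave_search", "work") == "BSA-xxx"
assert any(a.alias == "work" for a in registry.list_accounts("brave_search"))
# registry.validate_account("brave_search", "work") would re-check against the live API.
registry.delete_account("brave_search", "work")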
+54 -2
@@ -8,7 +8,7 @@ containing one or more keys (e.g., api_key, access_token, refresh_token).
from __future__ import annotations
from datetime import UTC, datetime
from enum import Enum
from enum import StrEnum
from typing import Any
from pydantic import BaseModel, Field, SecretStr
@@ -19,7 +19,7 @@ def _utc_now() -> datetime:
return datetime.now(UTC)
class CredentialType(str, Enum):
class CredentialType(StrEnum):
"""Types of credentials the store can manage."""
API_KEY = "api_key"
@@ -70,6 +70,29 @@ class CredentialKey(BaseModel):
return self.value.get_secret_value()
class CredentialIdentity(BaseModel):
"""Identity information for a credential (whose account is this?)."""
email: str | None = None
username: str | None = None
workspace: str | None = None
account_id: str | None = None
@property
def label(self) -> str:
"""Best human-readable identifier for display."""
return self.email or self.username or self.workspace or self.account_id or "unknown"
@property
def is_known(self) -> bool:
"""Whether any identity field is populated."""
return bool(self.email or self.username or self.workspace or self.account_id)
def to_dict(self) -> dict[str, str]:
"""Return only non-None identity fields."""
return {k: v for k, v in self.model_dump().items() if v is not None}
class CredentialObject(BaseModel):
"""
A credential object containing one or more keys.
@@ -202,6 +225,35 @@ class CredentialObject(BaseModel):
return None
@property
def identity(self) -> CredentialIdentity:
"""Extract identity from ``_identity_*`` keys in the vault."""
fields = {}
for key_name, key_obj in self.keys.items():
if key_name.startswith("_identity_"):
field_name = key_name[len("_identity_") :]
if field_name in CredentialIdentity.model_fields:
fields[field_name] = key_obj.value.get_secret_value()
return CredentialIdentity(**fields)
@property
def provider_type(self) -> str | None:
"""Return the integration/provider type (e.g. 'google', 'slack')."""
key = self.keys.get("_integration_type")
return key.value.get_secret_value() if key else None
@property
def alias(self) -> str | None:
"""Return the user-set alias from the Aden platform."""
key = self.keys.get("_alias")
return key.value.get_secret_value() if key else None
def set_identity(self, **fields: str) -> None:
"""Persist identity fields as ``_identity_*`` keys."""
for field_name, value in fields.items():
if value:
self.set_key(f"_identity_{field_name}", value)
class CredentialUsageSpec(BaseModel):
"""
@@ -73,6 +73,7 @@ from .provider import (
TokenExpiredError,
TokenPlacement,
)
from .zoho_provider import ZohoOAuth2Provider
__all__ = [
# Types
@@ -82,6 +83,7 @@ __all__ = [
# Providers
"BaseOAuth2Provider",
"HubSpotOAuth2Provider",
"ZohoOAuth2Provider",
# Lifecycle
"TokenLifecycleManager",
"TokenRefreshResult",
@@ -96,7 +96,7 @@ class BaseOAuth2Provider(CredentialProvider):
self._client = httpx.Client(timeout=self.config.request_timeout)
except ImportError as e:
raise ImportError(
"OAuth2 provider requires 'httpx'. Install with: pip install httpx"
"OAuth2 provider requires 'httpx'. Install with: uv pip install httpx"
) from e
return self._client

Some files were not shown because too many files have changed in this diff.