Compare commits

..

336 Commits

Author SHA1 Message Date
bryan faff64c413 chore: agents.md update 2026-03-04 12:12:27 -08:00
Timothy 6fbcdc1d87 fix: auto install node 20 2026-03-04 12:11:29 -08:00
bryan 69a11af949 chore: best effort alignment of windows quickstart 2026-03-04 11:43:50 -08:00
bryan 9ef272020e chore: added llm key health check 2026-03-04 11:35:12 -08:00
bryan 258cfe7de5 chore: added easy way to update llm provider key 2026-03-04 10:42:57 -08:00
bryan 0d53b21133 chore: doc updates about hive open 2026-03-04 10:33:34 -08:00
bryan 0ccb28ffab fix: enter to use previously configured 2026-03-04 10:05:59 -08:00
bryan b30b571b44 chore: update recommended models 2026-03-04 09:54:29 -08:00
bryan bc44c3a401 chore: make gcu enabled by default 2026-03-04 09:52:42 -08:00
bryan 7fbf57cbb7 fix: linter update 2026-03-04 09:52:16 -08:00
bryan 67d094f51a fix: tool tests 2026-03-04 09:22:34 -08:00
bryan 873af04c6e fix: utilize mac keychain for claude code subscription 2026-03-04 09:22:12 -08:00
bryan 1920192656 feat: hive open cmd 2026-03-04 08:55:18 -08:00
Timothy @aden 4cbd5a4c6c Merge pull request #5786 from osb910/fix/charmap-decode-error
fix(core): add utf-8 encoding to backend open calls (micro-fix)
2026-03-04 08:39:10 -08:00
Timothy 65aa5629e8 chore: fix lint 2026-03-04 08:34:01 -08:00
Omar Shareef 7193d09bed formatting warning fix 2026-03-04 16:43:46 +02:00
Omar Shareef 49f8fae0b4 fix: systematically enforce UTF-8 encoding across tools and core to fix Windows charmap decode errors 2026-03-04 16:04:53 +02:00
Omar Shareef e1a490756e fix: systematically enforce UTF-8 encoding across tools and core to fix Windows charmap decode errors 2026-03-04 15:58:03 +02:00
Omar Shareef 91bfaf36e3 fix(core): add utf-8 encoding to backend open calls
This fixes a charmap decoding error on Windows when opening agent files without explicitly specifying the encoding.
2026-03-04 13:32:59 +02:00
Timothy @aden 465adf5b1f Merge pull request #5767 from aden-hive/feat/integrations
Feat/integrations
2026-03-03 22:04:08 -08:00
Timothy 8018325923 style: fix all ruff lint errors (E501, E722, E741, F841)
- Break long lines (E501) across 25+ files
- Replace bare except with except Exception (E722)
- Rename ambiguous variable `l` to `item` (E741)
- Prefix unused variables with underscore (F841)
2026-03-03 20:42:30 -08:00
Timothy b4cf10214b chore: lint issues 2026-03-03 20:38:30 -08:00
Timothy e421bcc326 chore: lint issues 2026-03-03 20:36:28 -08:00
Timothy 9b76ac48b7 chore: new dependency 2026-03-03 20:23:10 -08:00
Timothy 6da48eac6f feat: split tool loading into verified and unverified tiers
register_all_tools() now only loads verified (stable) tools by default.
Pass include_unverified=True to also load new/community integrations.
This prevents unverified tools from being loaded in production.

Also fixes duplicate register_brevo and register_pushover calls.
2026-03-03 17:54:45 -08:00
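A minimal sketch of the opt-in behavior described in the commit above. The import path and the server argument are assumptions for illustration; only the register_all_tools() name and the include_unverified flag come from the commit message, so this is not the repository's actual signature.

```python
# Hypothetical sketch -- module path and server argument are assumptions;
# only register_all_tools() and include_unverified come from the commit above.
from mcp.server.fastmcp import FastMCP           # assumed server type
from aden_tools.tools import register_all_tools  # assumed import path

server = FastMCP("aden-tools")

register_all_tools(server)                           # verified (stable) tools only
register_all_tools(server, include_unverified=True)  # also load community integrations
```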
Timothy 638ff04e24 fix: remove duplicate community tool directories and fix credential wiring
- Remove s3_tool (duplicate of aws_s3_tool), power_bi_tool (duplicate of
  powerbi_tool), x_tool (duplicate of twitter_tool)
- Remove integrations/plaid (duplicate of plaid_tool), integrations/sap_s4hana
  (duplicate of sap_tool), stray tools/mssql.py
- Add help key to credential error responses across 14 tool modules
- Fix health checker registry keys (calendly -> calendly_pat, lusha -> lusha_api_key)
- Add health_check_endpoint to calendly and lusha credential specs
- Fix Trello env var (TRELLO_TOKEN -> TRELLO_API_TOKEN) and remove duplicate
  Trello specs from hubspot.py
- Add credential_group="aws" to AWS S3 and Redshift specs sharing env vars
- Update conftest UNREGISTERED_COMMUNITY_MODULES to only contain mssql_tool
2026-03-03 17:46:28 -08:00
Timothy 4ff531dec7 fix: update expected health checkers set (add calendly, zoho_crm) 2026-03-03 14:10:34 -08:00
Timothy 4f8b3d7aff fix: update credential specs for community Linear/Trello tools, skip unregistered community modules 2026-03-03 14:09:04 -08:00
Timothy 210fa9c474 fix: use community Brevo implementation (6 tools), remove orphaned x_tool test 2026-03-03 14:06:00 -08:00
Timothy 25361cac8c fix: align tests with community implementations, revert Reddit to httpx (praw unavailable) 2026-03-03 14:02:33 -08:00
Timothy 28defebd6d fix: remove community youtube_transcript tool.py requiring uninstalled SDK 2026-03-03 13:58:45 -08:00
Timothy d58f3103dd fix: guard register_tools for s3_tool and mssql_tool when SDK not available 2026-03-03 13:54:46 -08:00
Timothy 5d1ed35660 fix: remove shell heredoc artifacts from community power_bi_tool 2026-03-03 13:52:20 -08:00
Timothy 1f3e305534 fix: guard optional SDK imports (boto3, pyodbc) and remove s3_tool registration 2026-03-03 13:51:04 -08:00
Timothy 7d8fdd279c fix: revert Asana to httpx-based implementation (asana SDK not available) 2026-03-03 13:33:35 -08:00
Timothy bb061b770f merge: incorporate QuickBooks community PR #4158
# Conflicts:
#	examples/templates/deep_research_agent/config.py
#	examples/templates/tech_news_reporter/config.py
#	tools/README.md
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/quickbooks.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/quickbooks_tool/__init__.py
#	tools/src/aden_tools/tools/quickbooks_tool/quickbooks_tool.py
#	tools/tests/tools/test_quickbooks_tool.py
2026-03-03 13:27:04 -08:00
Timothy a8768b9ed6 merge: incorporate MSSQL community PR #4200
# Conflicts:
#	tools/pyproject.toml
#	tools/src/aden_tools/credentials/integrations.py
#	tools/src/aden_tools/tools/__init__.py
2026-03-03 13:26:36 -08:00
Timothy b437aa5f6c merge: incorporate Linear community PR #3585
# Conflicts:
#	.claude/skills/hive-credentials/SKILL.md
#	tools/README.md
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/linear_tool/__init__.py
#	tools/src/aden_tools/tools/linear_tool/linear_tool.py
2026-03-03 13:24:57 -08:00
Timothy 9248182570 merge: incorporate Trello community PR #3376
# Conflicts:
#	tools/README.md
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/trello_tool/__init__.py
#	tools/src/aden_tools/tools/trello_tool/trello_tool.py
#	tools/tests/tools/test_trello_tool.py
2026-03-03 13:24:23 -08:00
Timothy 7c77c7170f merge: incorporate YouTube Transcript community PR #3520
# Conflicts:
#	tools/pyproject.toml
#	tools/src/aden_tools/tools/__init__.py
2026-03-03 13:22:46 -08:00
Timothy 85fcb6516c merge: incorporate Redshift community PR #3533
# Conflicts:
#	tools/pyproject.toml
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/redshift_tool/__init__.py
#	tools/src/aden_tools/tools/redshift_tool/redshift_tool.py
#	tools/tests/tools/test_redshift_tool.py
2026-03-03 13:17:41 -08:00
Timothy e8e76d85f7 merge: incorporate Pushover community PR #5424
# Conflicts:
#	tools/src/aden_tools/tools/pushover_tool/__init__.py
#	tools/src/aden_tools/tools/pushover_tool/pushover_tool.py
2026-03-03 13:17:18 -08:00
Timothy 5aaa5ae4d5 merge: incorporate Twitter/X community PR #3807
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/tests/test_credentials.py
2026-03-03 13:16:45 -08:00
Timothy c3a8ee9c7b merge: incorporate Calendly community PR #3947
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/calendly.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/calendly_tool/__init__.py
#	tools/src/aden_tools/tools/calendly_tool/calendly_tool.py
#	tools/tests/test_health_checks.py
#	tools/tests/tools/test_calendly_tool.py
2026-03-03 13:14:20 -08:00
Timothy 5d07a8aba5 merge: incorporate Airtable community PR #3953
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/airtable.py
#	tools/src/aden_tools/credentials/health_check.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/airtable_tool/__init__.py
#	tools/src/aden_tools/tools/airtable_tool/airtable_tool.py
#	tools/tests/test_health_checks.py
#	tools/tests/tools/test_airtable_tool.py
2026-03-03 13:13:47 -08:00
Timothy d18e0594b8 merge: incorporate Reddit community PR #3963
# Conflicts:
#	tools/pyproject.toml
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/health_check.py
#	tools/src/aden_tools/credentials/reddit.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/reddit_tool/__init__.py
#	tools/src/aden_tools/tools/reddit_tool/reddit_tool.py
#	tools/tests/tools/test_reddit_tool.py
#	uv.lock
2026-03-03 13:12:55 -08:00
Timothy 26dcc86a24 merge: incorporate Zoho CRM community PR #4713
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/zoho_crm_tool/__init__.py
#	tools/src/aden_tools/tools/zoho_crm_tool/zoho_crm_tool.py
#	tools/tests/test_health_checks.py
2026-03-03 13:11:51 -08:00
Timothy e928ad19e5 merge: incorporate Lusha community PR #4714
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/lusha.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/lusha_tool/__init__.py
#	tools/src/aden_tools/tools/lusha_tool/lusha_tool.py
#	tools/tests/tools/test_lusha_tool.py
2026-03-03 13:11:33 -08:00
Timothy 6768aaa575 merge: incorporate Apify community PR #4770
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/apify.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/apify_tool/__init__.py
#	tools/src/aden_tools/tools/apify_tool/apify_tool.py
#	tools/tests/tools/test_apify_tool.py
2026-03-03 13:10:45 -08:00
Timothy f561aacbfc merge: incorporate Attio community PR #4832
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/attio.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/attio_tool/__init__.py
#	tools/src/aden_tools/tools/attio_tool/attio_tool.py
2026-03-03 13:10:09 -08:00
Timothy d9edd7adf7 merge: incorporate Asana community PR #4857
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/asana.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/asana_tool/__init__.py
#	tools/tests/tools/test_asana_tool.py
2026-03-03 13:08:30 -08:00
Timothy b4a5323009 merge: incorporate Brevo community PR #5136
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/brevo.py
#	tools/src/aden_tools/tools/brevo_tool/__init__.py
#	tools/src/aden_tools/tools/brevo_tool/brevo_tool.py
2026-03-03 13:04:29 -08:00
Timothy ade8b5b9a7 merge: incorporate Databricks community PR #5428
# Conflicts:
#	tools/src/aden_tools/credentials/__init__.py
#	tools/src/aden_tools/credentials/databricks.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/src/aden_tools/tools/databricks_tool/__init__.py
#	tools/src/aden_tools/tools/databricks_tool/databricks_tool.py
#	tools/tests/tools/test_databricks_tool.py
2026-03-03 13:02:30 -08:00
Timothy e4ace3d484 merge: incorporate YouTube community PR #5673 (resolve conflicts, preserve README) 2026-03-03 12:29:32 -08:00
Timothy f3dd25adc5 merge: incorporate Power BI community PR #4341 2026-03-03 12:27:06 -08:00
Timothy ec251f8168 merge: incorporate SAP S/4HANA community PR #5519 2026-03-03 12:27:02 -08:00
Timothy 1bb9579dc5 merge: incorporate Plaid community PR #5518 2026-03-03 12:26:56 -08:00
Timothy 7ebf4146ce merge: incorporate AWS S3 community PR #5521 2026-03-03 12:26:50 -08:00
Timothy e0e05f3488 chore: register Obsidian tool in tool/credential registries 2026-03-03 11:55:12 -08:00
Timothy c92f2510c8 test: add Obsidian tool unit tests (read, write, append, search, list, active) 2026-03-03 11:55:12 -08:00
Timothy ea1fbe9ee1 chore: add Obsidian credential spec (REST API key) 2026-03-03 11:55:11 -08:00
Timothy 84a0be0179 feat: add Obsidian knowledge management integration (#3741)
6 tools: obsidian_read_note, obsidian_write_note, obsidian_append_note,
obsidian_search, obsidian_list_files, obsidian_get_active.
Uses Local REST API plugin with Bearer token auth. Supports vault
browsing, full-text search, and note CRUD with frontmatter metadata.
2026-03-03 11:55:04 -08:00
Timothy 1b5780461e chore: register Langfuse tool in tool/credential registries 2026-03-03 11:42:49 -08:00
Timothy c8d35b63a4 test: add Langfuse tool unit tests (traces, scores, prompts) 2026-03-03 11:42:49 -08:00
Timothy feb1ebae04 chore: add Langfuse credential specs (public key, secret key) 2026-03-03 11:42:48 -08:00
Timothy efe49d0a5b feat: add Langfuse LLM observability integration (#5322)
6 tools: langfuse_list_traces, langfuse_get_trace, langfuse_list_scores,
langfuse_create_score, langfuse_list_prompts, langfuse_get_prompt.
Uses HTTP Basic Auth with public/secret key pair. Supports cloud and
self-hosted instances with offset-based pagination.
2026-03-03 11:41:11 -08:00
Timothy e50a5ea22a chore: register Zoom and n8n tools in tool/credential registries 2026-03-03 11:31:25 -08:00
Timothy 6382c94d0a test: add n8n tool unit tests (workflows, executions, activate/deactivate) 2026-03-03 11:31:21 -08:00
Timothy 58ce84c9cc chore: add n8n credential specs (API key, base URL) 2026-03-03 11:31:20 -08:00
Timothy 08fd6ff765 feat: add n8n workflow automation integration (#2931)
6 tools: n8n_list_workflows, n8n_get_workflow, n8n_activate_workflow,
n8n_deactivate_workflow, n8n_list_executions, n8n_get_execution.
Uses X-N8N-API-KEY header auth with configurable base URL.
Supports cursor-based pagination and execution status filtering.
2026-03-03 11:31:15 -08:00
Timothy a9cb79909c test: add Zoom tool unit tests (user, meetings, recordings) 2026-03-03 11:31:07 -08:00
Timothy 852f8ccd94 chore: add Zoom credential spec (Server-to-Server OAuth token) 2026-03-03 11:31:07 -08:00
Timothy 9388ef3e99 feat: add Zoom meeting management integration (#2867)
6 tools: zoom_get_user, zoom_list_meetings, zoom_get_meeting,
zoom_create_meeting, zoom_delete_meeting, zoom_list_recordings.
Uses Server-to-Server OAuth Bearer token. Supports token-based
pagination and cloud recording retrieval by date range.
2026-03-03 11:31:00 -08:00
Timothy 04afb0c4bb chore: register Salesforce and Shopify tools in tool/credential registries 2026-03-03 11:22:40 -08:00
Timothy a07fd44de3 test: add Shopify tool unit tests (orders, products, customers, search) 2026-03-03 11:22:35 -08:00
Timothy f6c1b13846 chore: add Shopify credential specs (access token, store name) 2026-03-03 11:22:35 -08:00
Timothy 654fa3dd1f feat: add Shopify Admin REST API integration - orders, products, customers (#2984)
6 tools: shopify_list_orders, shopify_get_order, shopify_list_products,
shopify_get_product, shopify_list_customers, shopify_search_customers.
Uses X-Shopify-Access-Token header auth with store subdomain.
2026-03-03 11:22:29 -08:00
Timothy 8183449d27 test: add Salesforce CRM tool unit tests (SOQL, CRUD, describe, list objects) 2026-03-03 11:22:16 -08:00
Timothy a9acfb86ad chore: add Salesforce credential specs (access token, instance URL) 2026-03-03 11:22:15 -08:00
Timothy d7d070ac5f feat: add Salesforce CRM integration - SOQL, records, and metadata (#2916)
6 tools: salesforce_soql_query, salesforce_get_record, salesforce_create_record,
salesforce_update_record, salesforce_describe_object, salesforce_list_objects.
Uses OAuth2 Bearer token auth with instance URL. Supports pagination via
nextRecordsUrl and field-level describe with picklist values.
2026-03-03 11:22:08 -08:00
Timothy 8c01b573ce chore: register Redshift and SAP S/4HANA in tool/credential registries 2026-03-03 11:11:12 -08:00
Timothy 7744f21b9d test: add SAP S/4HANA tool unit tests (POs, partners, products, sales orders) 2026-03-03 11:11:08 -08:00
Timothy 9ed23a235f chore: add SAP S/4HANA credential specs (base URL, username, password) 2026-03-03 11:11:07 -08:00
Timothy e88328321f feat: add SAP S/4HANA Cloud read-only procurement integration (#3182) 2026-03-03 11:11:06 -08:00
Timothy a4c516bea1 test: add Redshift tool unit tests (execute, describe, results, databases, tables) 2026-03-03 11:11:00 -08:00
Timothy 1c932a04ef chore: add Redshift credential specs (AWS access key, secret key) 2026-03-03 11:11:00 -08:00
Timothy 76d34be4c2 feat: add Amazon Redshift Data API integration - SQL and schema browsing (#3267) 2026-03-03 11:10:59 -08:00
Timothy d6e8afe316 chore: register Azure SQL and Kafka in tool/credential registries 2026-03-03 11:03:31 -08:00
Timothy a04f2bcf99 test: add Kafka tool unit tests (topics, produce, consumer groups) 2026-03-03 11:03:27 -08:00
Timothy c138e7c638 chore: add Kafka credential specs (REST URL, cluster ID) 2026-03-03 11:03:27 -08:00
Timothy fc08c7007f feat: add Apache Kafka integration via Confluent REST Proxy (#4774) 2026-03-03 11:03:26 -08:00
Timothy d559bb3446 test: add Azure SQL tool unit tests (servers, databases, firewall rules) 2026-03-03 11:03:18 -08:00
Timothy 55a8c39e4b chore: add Azure SQL credential specs (token, subscription ID) 2026-03-03 11:03:17 -08:00
Timothy 02d6f10e5f feat: add Azure SQL Database management integration (#3377) 2026-03-03 11:03:16 -08:00
Timothy 77428a91cc chore: register Power BI and Snowflake in tool/credential registries 2026-03-03 10:56:46 -08:00
Timothy 51403dc276 test: add Snowflake tool unit tests (execute, status, cancel) 2026-03-03 10:56:43 -08:00
Timothy 914a07a35d chore: add Snowflake credential specs (account, token) 2026-03-03 10:56:42 -08:00
Timothy 3c70d7b424 feat: add Snowflake SQL REST API integration (#3230) 2026-03-03 10:56:41 -08:00
Timothy ce1ee4ff17 test: add Power BI tool unit tests (workspaces, datasets, reports, refresh) 2026-03-03 10:56:35 -08:00
Timothy fca41d9bda chore: add Power BI credential spec (POWERBI_ACCESS_TOKEN) 2026-03-03 10:56:34 -08:00
Timothy ff889e02f7 feat: add Power BI integration - workspaces, datasets, reports (#3973) 2026-03-03 10:56:34 -08:00
Timothy 43ab460462 chore: register Terraform Cloud and Lusha in tool/credential registries 2026-03-03 10:49:21 -08:00
Timothy caa06e266b test: add Lusha tool unit tests (enrich, search, usage) 2026-03-03 10:49:17 -08:00
Timothy 3622ca78ee chore: add Lusha credential spec (LUSHA_API_KEY) 2026-03-03 10:49:17 -08:00
Timothy 019e3f9659 feat: add Lusha B2B contact and company enrichment integration (#3461) 2026-03-03 10:49:16 -08:00
Timothy 208cb579a2 test: add Terraform Cloud tool unit tests (workspaces, runs) 2026-03-03 10:49:09 -08:00
Timothy 17de7e4485 chore: add Terraform Cloud credential spec (TFC_TOKEN) 2026-03-03 10:49:08 -08:00
Timothy 810616eee1 feat: add Terraform Cloud integration - workspaces and runs (#4773) 2026-03-03 10:48:41 -08:00
Timothy 191f583669 chore: register Twitter/X and Tines in tool/credential registries 2026-03-03 10:35:46 -08:00
Timothy 1d638cc18e test: add Tines tool unit tests (stories, actions, logs) 2026-03-03 10:35:42 -08:00
Timothy 3efa1f3b88 chore: add Tines credential specs (domain, api_key) 2026-03-03 10:35:42 -08:00
Timothy 4daa33db09 feat: add Tines integration - security automation stories and actions
Implements 5 tools via Tines REST API:
- tines_list_stories: List workflow stories with search/filter
- tines_get_story: Get story details including entry/exit agents
- tines_list_actions: List actions (agents) in stories
- tines_get_action: Get action details with sources/receivers
- tines_get_action_logs: Get action execution logs by level

Uses Bearer token auth with tenant domain.
2026-03-03 10:35:37 -08:00
Timothy fab2fb0056 test: add Twitter/X tool unit tests (search, user, timeline, tweet) 2026-03-03 10:35:29 -08:00
Timothy ce885c120e chore: add Twitter/X credential spec (bearer_token) 2026-03-03 10:35:28 -08:00
Timothy 75b53c47ff feat: add Twitter/X integration - tweet search and user lookup via API v2
Implements 4 tools via X API v2:
- twitter_search_tweets: Search recent tweets with query operators
- twitter_get_user: Get user profile by username
- twitter_get_user_tweets: Get user timeline
- twitter_get_tweet: Get tweet details by ID

Uses Bearer token auth (app-only, read access).
2026-03-03 10:35:21 -08:00
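A minimal sketch of the app-only Bearer pattern against X API v2 named above (recent tweet search). Illustrative only, not the repository's twitter_tool code.

```python
# Illustrative sketch of app-only Bearer auth against X API v2; not the repo's twitter_tool.
import httpx

def twitter_search_tweets(bearer_token: str, query: str, max_results: int = 10) -> dict:
    """Search recent tweets (last ~7 days) via the v2 recent search endpoint."""
    resp = httpx.get(
        "https://api.twitter.com/2/tweets/search/recent",
        headers={"Authorization": f"Bearer {bearer_token}"},
        params={"query": query, "max_results": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```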
Timothy 2936f73707 chore: register AWS S3 and QuickBooks in tool/credential registries 2026-03-03 10:22:46 -08:00
Timothy e26426b138 test: add QuickBooks tool unit tests (query, entities, invoices) 2026-03-03 10:22:42 -08:00
Timothy 62cacb8e28 chore: add QuickBooks credential specs (access_token, realm_id) 2026-03-03 10:22:42 -08:00
Timothy f3e37190ce feat: add QuickBooks Online integration - accounting API
Implements 5 tools via QuickBooks Online API v3:
- quickbooks_query: Query entities with SQL-like syntax
- quickbooks_get_entity: Get entity by type and ID
- quickbooks_create_customer: Create customers
- quickbooks_create_invoice: Create invoices with line items
- quickbooks_get_company_info: Get company details

Uses OAuth 2.0 Bearer token auth. Supports sandbox mode.
2026-03-03 10:22:35 -08:00
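A minimal sketch of the QuickBooks Online v3 query call described above (OAuth 2.0 Bearer token, realm ID, optional sandbox host). Illustrative only, not the repository's quickbooks_tool code.

```python
# Illustrative sketch of the QBO v3 query endpoint; not the repo's quickbooks_tool.
import httpx

def quickbooks_query(access_token: str, realm_id: str, query: str,
                     sandbox: bool = False) -> dict:
    """Run a SQL-like query, e.g. 'SELECT * FROM Customer MAXRESULTS 10'."""
    host = "sandbox-quickbooks.api.intuit.com" if sandbox else "quickbooks.api.intuit.com"
    resp = httpx.get(
        f"https://{host}/v3/company/{realm_id}/query",
        headers={"Authorization": f"Bearer {access_token}", "Accept": "application/json"},
        params={"query": query},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["QueryResponse"]
```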
Timothy 0863bbbd2f test: add AWS S3 tool unit tests (buckets, objects, get, put, delete) 2026-03-03 10:22:25 -08:00
Timothy b23fa1daad chore: add AWS S3 credential specs (access_key_id, secret_access_key) 2026-03-03 10:22:24 -08:00
Timothy 05cc1ce599 feat: add AWS S3 integration - object storage via REST API with SigV4
Implements 5 tools via AWS S3 REST API:
- s3_list_buckets: List all buckets in the account
- s3_list_objects: List objects with prefix/delimiter filtering
- s3_get_object: Get object content and metadata
- s3_put_object: Upload text objects
- s3_delete_object: Delete objects

Uses AWS Signature V4 signing (no boto3 dependency).
2026-03-03 10:22:16 -08:00
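A compact, generic sketch of AWS Signature V4 signing without boto3, shown here for a ListBuckets call in us-east-1. This illustrates the signing scheme the commit mentions; it is not the repository's aws_s3_tool code.

```python
# Generic SigV4 signing sketch (ListBuckets, us-east-1); not the repo's aws_s3_tool.
import datetime
import hashlib
import hmac

import httpx

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def s3_list_buckets(access_key: str, secret_key: str, region: str = "us-east-1") -> str:
    host = "s3.amazonaws.com"
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    date_stamp = now.strftime("%Y%m%d")
    payload_hash = hashlib.sha256(b"").hexdigest()  # empty body for GET

    # 1. Canonical request: method, URI, query, headers, signed headers, payload hash
    canonical_headers = (
        f"host:{host}\nx-amz-content-sha256:{payload_hash}\nx-amz-date:{amz_date}\n"
    )
    signed_headers = "host;x-amz-content-sha256;x-amz-date"
    canonical_request = "\n".join(
        ["GET", "/", "", canonical_headers, signed_headers, payload_hash]
    )

    # 2. String to sign, scoped to date/region/service
    scope = f"{date_stamp}/{region}/s3/aws4_request"
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    # 3. Derive the signing key and compute the signature
    key = _hmac(f"AWS4{secret_key}".encode(), date_stamp)
    for part in (region, "s3", "aws4_request"):
        key = _hmac(key, part)
    signature = hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

    auth = (
        f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
        f"SignedHeaders={signed_headers}, Signature={signature}"
    )
    resp = httpx.get(
        f"https://{host}/",
        headers={
            "x-amz-date": amz_date,
            "x-amz-content-sha256": payload_hash,
            "Authorization": auth,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text  # XML <ListAllMyBucketsResult>
```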
Timothy e6939f8d51 chore: register PagerDuty and Calendly in tool/credential registries 2026-03-03 10:13:18 -08:00
Timothy 801fef12e1 test: add Calendly tool unit tests (user, events, invitees) 2026-03-03 10:13:14 -08:00
Timothy 5845629175 chore: add Calendly credential spec (personal_access_token) 2026-03-03 10:13:13 -08:00
Timothy 11b916301a feat: add Calendly integration - scheduling events and invitees
Implements 5 tools via Calendly API v2:
- calendly_get_current_user: Get user URI and profile info
- calendly_list_event_types: List meeting templates
- calendly_list_scheduled_events: List booked meetings with date filters
- calendly_get_scheduled_event: Get event details by URI
- calendly_list_invitees: List invitees for an event

Uses Bearer token auth (Personal Access Token).
2026-03-03 10:13:07 -08:00
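A minimal sketch of the Calendly v2 pattern described above: the Personal Access Token is sent as a Bearer header, and the user URI returned by /users/me is required by the listing endpoints. Illustrative only, not the repository's calendly_tool code.

```python
# Illustrative sketch of the PAT/Bearer flow above; not the repo's calendly_tool.
import httpx

BASE = "https://api.calendly.com"

def calendly_get_current_user(pat: str) -> dict:
    """Return the caller's profile; its 'uri' is needed by the list endpoints."""
    resp = httpx.get(f"{BASE}/users/me",
                     headers={"Authorization": f"Bearer {pat}"}, timeout=30)
    resp.raise_for_status()
    return resp.json()["resource"]

def calendly_list_scheduled_events(pat: str, user_uri: str, count: int = 20) -> dict:
    resp = httpx.get(f"{BASE}/scheduled_events",
                     headers={"Authorization": f"Bearer {pat}"},
                     params={"user": user_uri, "count": count}, timeout=30)
    resp.raise_for_status()
    return resp.json()
```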
Timothy aa5d80b1d2 test: add PagerDuty tool unit tests (incidents, services) 2026-03-03 10:13:02 -08:00
Timothy aa5f990acd chore: add PagerDuty credential specs (api_key, from_email) 2026-03-03 10:13:01 -08:00
Timothy 9764c82c2a feat: add PagerDuty integration - incident management and services
Implements 5 tools via PagerDuty REST API v2:
- pagerduty_list_incidents: List incidents with status/urgency/date filters
- pagerduty_get_incident: Get incident details by ID
- pagerduty_create_incident: Create incidents on a service
- pagerduty_update_incident: Acknowledge or resolve incidents
- pagerduty_list_services: List services with name search

Uses Token auth header, From header for write operations.
2026-03-03 10:12:55 -08:00
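A minimal sketch of the PagerDuty v2 conventions named above: Token auth header for reads, plus a From header on write operations. Illustrative only, not the repository's pagerduty_tool code.

```python
# Illustrative sketch of the Token/From header pattern; not the repo's pagerduty_tool.
import httpx

BASE = "https://api.pagerduty.com"

def pagerduty_list_incidents(api_key: str, statuses: list[str] | None = None) -> dict:
    headers = {"Authorization": f"Token token={api_key}"}
    params = {"statuses[]": statuses} if statuses else {}
    resp = httpx.get(f"{BASE}/incidents", headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

def pagerduty_update_incident(api_key: str, from_email: str,
                              incident_id: str, status: str) -> dict:
    """Acknowledge or resolve an incident; write calls require a From header."""
    resp = httpx.put(
        f"{BASE}/incidents/{incident_id}",
        headers={"Authorization": f"Token token={api_key}", "From": from_email},
        json={"incident": {"type": "incident_reference", "status": status}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```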
Timothy 543a71eb6c chore: register MongoDB and Airtable in tool/credential registries 2026-03-03 10:06:12 -08:00
Timothy 8285593c13 test: add Airtable tool unit tests (records, bases, schema) 2026-03-03 10:06:08 -08:00
Timothy 6fbfe773fb chore: add Airtable credential spec (personal_access_token) 2026-03-03 10:06:07 -08:00
Timothy a8c54b1e5f feat: add Airtable integration - record CRUD and base metadata
Implements 6 tools via Airtable Web API:
- airtable_list_records: List records with filters, sort, field selection
- airtable_get_record: Get a single record by ID
- airtable_create_records: Create up to 10 records per request
- airtable_update_records: Partial update up to 10 records per request
- airtable_list_bases: List accessible bases
- airtable_get_base_schema: Get table and field schema for a base

Uses Bearer token auth (Personal Access Token).
2026-03-03 10:06:03 -08:00
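A minimal sketch of the Airtable Web API pattern described above (Personal Access Token as Bearer auth, optional filterByFormula). Illustrative only, not the repository's airtable_tool code.

```python
# Illustrative sketch of the PAT/Bearer pattern above; not the repo's airtable_tool.
import httpx

def airtable_list_records(pat: str, base_id: str, table: str,
                          formula: str | None = None, max_records: int = 100) -> list[dict]:
    params: dict = {"maxRecords": max_records}
    if formula:
        params["filterByFormula"] = formula
    resp = httpx.get(
        f"https://api.airtable.com/v0/{base_id}/{table}",
        headers={"Authorization": f"Bearer {pat}"},
        params=params,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["records"]
```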
Timothy a5323abfca test: add MongoDB tool unit tests (find, insert, update, delete, aggregate) 2026-03-03 10:05:53 -08:00
Timothy ba4df2d2c4 chore: add MongoDB credential specs (data_api_url, api_key, data_source) 2026-03-03 10:05:52 -08:00
Timothy 6510633a8c feat: add MongoDB Atlas Data API integration - document CRUD and aggregation
Implements 6 tools via MongoDB Atlas Data API:
- mongodb_find: Find documents with filters, projection, sort, limit
- mongodb_find_one: Find a single document
- mongodb_insert_one: Insert a document
- mongodb_update_one: Update a document with MongoDB operators
- mongodb_delete_one: Delete a document
- mongodb_aggregate: Run aggregation pipelines

Uses API key auth header. All endpoints are POST.
2026-03-03 10:05:42 -08:00
Timothy 9172e5f46b chore: register Twilio and Zendesk in tool/credential registries 2026-03-03 09:56:14 -08:00
Timothy ed3e3848c0 test: add Zendesk tool unit tests (list, get, create, update, search) 2026-03-03 09:56:10 -08:00
Timothy ee90185d5c chore: add Zendesk credential specs (subdomain, email, api_token) 2026-03-03 09:56:09 -08:00
Timothy 6eb2633677 feat: add Zendesk integration - ticket management and search
Implements 5 tools via Zendesk Support API v2:
- zendesk_list_tickets: List tickets with status/sort filters
- zendesk_get_ticket: Get ticket details by ID
- zendesk_create_ticket: Create tickets with priority/type/tags
- zendesk_update_ticket: Update ticket fields and add comments
- zendesk_search_tickets: Search tickets with Zendesk query syntax

Uses Basic auth (email/token:api_token).
2026-03-03 09:56:00 -08:00
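A minimal sketch of the Zendesk Basic-auth convention named above, where the username is the account email suffixed with "/token" and the password is the API token. Illustrative only, not the repository's zendesk_tool code.

```python
# Illustrative sketch of the Basic-auth convention above; not the repo's zendesk_tool.
import httpx

def zendesk_list_tickets(subdomain: str, email: str, api_token: str) -> dict:
    """List tickets via the Support API v2 using email/token Basic auth."""
    resp = httpx.get(
        f"https://{subdomain}.zendesk.com/api/v2/tickets.json",
        auth=(f"{email}/token", api_token),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```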
Timothy c1f215dcf2 test: add Twilio tool unit tests (SMS, WhatsApp, list, get) 2026-03-03 09:55:50 -08:00
Timothy 97cc9a1045 chore: add Twilio credential specs (account_sid, auth_token) 2026-03-03 09:55:49 -08:00
Timothy 5f7b02a4b7 feat: add Twilio integration - SMS and WhatsApp messaging
Implements 4 tools via Twilio REST API:
- twilio_send_sms: Send SMS messages
- twilio_send_whatsapp: Send WhatsApp messages
- twilio_list_messages: List message history with filters
- twilio_get_message: Get message details by SID

Uses Basic auth (AccountSID:AuthToken), form-urlencoded POST.
2026-03-03 09:55:43 -08:00
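A minimal sketch of the Twilio pattern named above: Basic auth with AccountSID/AuthToken and a form-urlencoded POST to the Messages resource. Illustrative only, not the repository's twilio_tool code.

```python
# Illustrative sketch of the Basic-auth form POST above; not the repo's twilio_tool.
import httpx

def twilio_send_sms(account_sid: str, auth_token: str,
                    to: str, from_: str, body: str) -> dict:
    resp = httpx.post(
        f"https://api.twilio.com/2010-04-01/Accounts/{account_sid}/Messages.json",
        auth=(account_sid, auth_token),
        data={"To": to, "From": from_, "Body": body},  # form-urlencoded
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```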
Timothy e696b41a0e chore: register GitLab and Google Sheets in tool/credential registries 2026-03-03 09:49:23 -08:00
Timothy 1f9acc6135 test: add Google Sheets tool unit tests (metadata, read, batch read) 2026-03-03 09:49:23 -08:00
Timothy 7e8699cb4b chore: add Google Sheets credential spec (api_key) 2026-03-03 09:49:22 -08:00
Timothy fd4fc657d6 feat: add Google Sheets integration - read spreadsheet data via API v4
3 tools: sheets_get_spreadsheet, sheets_read_range, sheets_batch_read.
Uses API key auth for read-only access to public spreadsheets.
2026-03-03 09:49:21 -08:00
Timothy 34403648b9 test: add GitLab tool unit tests (projects, issues, MRs) 2026-03-03 09:49:15 -08:00
Timothy 3795d50eb9 chore: add GitLab credential spec (personal access token) 2026-03-03 09:49:14 -08:00
Timothy 80515dde5a feat: add GitLab integration - projects, issues, merge requests
6 tools: gitlab_list_projects, gitlab_get_project, gitlab_list_issues,
gitlab_get_issue, gitlab_create_issue, gitlab_list_merge_requests.
Supports GitLab.com and self-hosted via configurable base URL.
2026-03-03 09:49:13 -08:00
Timothy efcd296d83 chore: register Notion and Jira tools in tool/credential registries 2026-03-03 09:43:32 -08:00
Timothy 802cb292b0 test: add Jira tool unit tests (issues, projects, comments) 2026-03-03 09:43:32 -08:00
Timothy 8e55f74d73 chore: add Jira credential specs (domain, email, api_token) 2026-03-03 09:43:31 -08:00
Timothy 3d810485a0 feat: add Jira integration - issues, projects, comments via REST API v3
6 tools: jira_search_issues, jira_get_issue, jira_create_issue,
jira_list_projects, jira_get_project, jira_add_comment. Uses Basic auth
with email + API token and Atlassian Document Format for text fields.
2026-03-03 09:43:30 -08:00
Timothy 94cfd48661 test: add Notion tool unit tests (search, pages, databases) 2026-03-03 09:43:16 -08:00
Timothy 87c8e741f3 chore: add Notion credential spec (api_token) 2026-03-03 09:43:15 -08:00
Timothy d0e92ed18d feat: add Notion integration - pages, databases, and search
5 tools: notion_search, notion_get_page, notion_create_page,
notion_query_database, notion_get_database. Uses Bearer auth
with Notion internal integration token.
2026-03-03 09:43:14 -08:00
Timothy 1927045519 chore: register Greenhouse and YouTube Transcript in tool/credential registries 2026-03-03 09:36:47 -08:00
Timothy 68cffb86c9 test: add YouTube Transcript tool unit tests (get, list transcripts) 2026-03-03 09:36:47 -08:00
Timothy 5bec989647 feat: add YouTube Transcript integration - captions and transcript retrieval
2 tools: youtube_get_transcript, youtube_list_transcripts.
Uses youtube-transcript-api library, no API key required.
2026-03-03 09:36:46 -08:00
Timothy 66f5d2f36c test: add Greenhouse tool unit tests (jobs, candidates, applications) 2026-03-03 09:36:40 -08:00
Timothy 941f815254 chore: add Greenhouse credential spec (api_token) 2026-03-03 09:36:39 -08:00
Timothy 42afd10518 feat: add Greenhouse integration - ATS jobs, candidates, applications
6 tools: greenhouse_list_jobs, greenhouse_get_job, greenhouse_list_candidates,
greenhouse_get_candidate, greenhouse_list_applications, greenhouse_get_application.
Uses Harvest API v1 with Basic auth (API token).
2026-03-03 09:36:38 -08:00
Timothy 3efa285a59 chore: register Cloudinary and Reddit tools in tool/credential registries 2026-03-03 09:31:22 -08:00
Timothy 4f2b4172b4 test: add Reddit tool unit tests (search, posts, comments, user) 2026-03-03 09:31:18 -08:00
Timothy 0d7de71b94 chore: add Reddit credential specs (client_id, client_secret) 2026-03-03 09:31:17 -08:00
Timothy f0f5b4bede feat: add Reddit integration - search, posts, comments, user info
4 tools: reddit_search, reddit_get_posts, reddit_get_comments, reddit_get_user.
Uses OAuth2 client_credentials flow for app-only access.
2026-03-03 09:31:17 -08:00
Timothy bfd27e97d3 test: add Cloudinary tool unit tests (upload, list, get, delete, search) 2026-03-03 09:31:10 -08:00
Timothy f2def27390 chore: add Cloudinary credential specs (cloud_name, api_key, api_secret) 2026-03-03 09:31:10 -08:00
Timothy b3f7bd6cc0 feat: add Cloudinary integration - upload, manage, search media assets
5 tools: cloudinary_upload, cloudinary_list_resources, cloudinary_get_resource,
cloudinary_delete_resource, cloudinary_search. Uses Basic auth with
API key/secret and supports image, video, and raw resource types.
2026-03-03 09:31:09 -08:00
Timothy 0e8e78dc5b chore: register Trello and Confluence tools in tool/credential registries 2026-03-03 09:22:03 -08:00
Timothy b259d85776 test: add Confluence tool tests (9 tests) 2026-03-03 09:22:02 -08:00
Timothy 175d9c3b7c feat: add Confluence credential spec with Basic auth (email + API token) 2026-03-03 09:21:55 -08:00
Timothy a2a810aabf feat: add Confluence integration - spaces, pages, content search via CQL 2026-03-03 09:21:54 -08:00
Timothy 175c7cfd51 test: add Trello tool tests (12 tests) 2026-03-03 09:21:47 -08:00
Timothy 5ada973d38 feat: add Trello credential spec with API key and token auth 2026-03-03 09:21:39 -08:00
Timothy 0103276136 feat: add Trello integration - boards, lists, cards management 2026-03-03 09:21:37 -08:00
Timothy 1d9e8ec138 chore: register HuggingFace tool in tool/credential registries 2026-03-03 09:11:59 -08:00
Timothy 83ac2e71bb test: add HuggingFace tool tests (10 tests) 2026-03-03 09:11:56 -08:00
Timothy 0b35a729a7 feat: add HuggingFace credential spec with token auth 2026-03-03 09:11:55 -08:00
Timothy 56723a519a feat: add HuggingFace Hub integration - models, datasets, spaces search 2026-03-03 09:11:49 -08:00
Timothy ebff394c76 chore: register Plaid tool in tool/credential registries 2026-03-03 09:08:44 -08:00
Timothy ceecc97bc8 test: add Plaid tool tests (13 tests) 2026-03-03 09:08:40 -08:00
Timothy 313154f880 feat: add Plaid credential spec with client_id and secret auth 2026-03-03 09:08:38 -08:00
Timothy 3eb6417cdc feat: add Plaid integration - accounts, balances, transactions, institutions 2026-03-03 09:08:29 -08:00
Timothy 1b35d6ca0a chore: register Pinecone tool in tool/credential registries 2026-03-03 09:05:20 -08:00
Timothy 1d89f0ba9d test: add Pinecone tool tests (18 tests) 2026-03-03 09:05:16 -08:00
Timothy 864df0e21a feat: add Pinecone credential spec with API key auth 2026-03-03 09:05:14 -08:00
Timothy 3f626decc4 feat: add Pinecone vector database integration - indexes, vectors, queries 2026-03-03 09:05:06 -08:00
Timothy bf1760b1a9 chore: register DuckDuckGo tool in tool registry 2026-03-03 08:56:06 -08:00
Timothy 8a58ea6344 test: add DuckDuckGo tool tests (6 tests) 2026-03-03 08:56:06 -08:00
Timothy 662ff4c35f feat: add DuckDuckGo search integration - web search, news, images 2026-03-03 08:56:01 -08:00
Timothy af02352b49 chore: register Linear tool in tool/credential registries 2026-03-03 08:43:41 -08:00
Timothy db9f987d46 test: add Linear tool tests (10 tests) 2026-03-03 08:43:41 -08:00
Timothy 8490ce1389 feat: add Linear credential spec with API key auth 2026-03-03 08:43:41 -08:00
Timothy 55ea9a56a4 feat: add Linear integration - issues, projects, teams, search via GraphQL 2026-03-03 08:43:41 -08:00
Timothy bd2381b10d chore: register Asana tool in tool/credential registries 2026-03-03 08:40:02 -08:00
Timothy 443de755bd test: add Asana tool tests (12 tests) 2026-03-03 08:40:02 -08:00
Timothy 55ec5f14ee feat: add Asana credential spec with PAT auth 2026-03-03 08:40:02 -08:00
Timothy 2e019302c9 feat: add Asana integration - tasks, projects, workspaces, search 2026-03-03 08:40:02 -08:00
Timothy b1e829644b chore: register Yahoo Finance tool in tool registry 2026-03-03 08:36:20 -08:00
Timothy 18f773e91b test: add Yahoo Finance tool tests (8 tests) 2026-03-03 08:36:19 -08:00
Timothy 987cfee930 feat: add Yahoo Finance integration - quotes, history, financials, company info 2026-03-03 08:36:19 -08:00
Timothy 57f6b8498a chore: register Google Search Console tool in tool/credential registries 2026-03-03 08:34:30 -08:00
Timothy 9f0d35977c test: add Google Search Console tool tests (10 tests) 2026-03-03 08:34:30 -08:00
Timothy e5910bbf2f feat: add Google Search Console credential spec with OAuth2 auth 2026-03-03 08:34:30 -08:00
Timothy 0015bf7b38 feat: add Google Search Console integration - analytics, sitemaps, URL inspection 2026-03-03 08:34:30 -08:00
Timothy a6b9234abb chore: register Zoho CRM tool in tool/credential registries 2026-03-03 08:32:13 -08:00
Timothy 086f3942b8 test: add Zoho CRM tool tests (12 tests) 2026-03-03 08:32:13 -08:00
Timothy 924f4abede feat: add Zoho CRM credential spec with OAuth token auth 2026-03-03 08:32:13 -08:00
Timothy 02be91cb08 feat: add Zoho CRM integration - leads, contacts, deals, accounts, notes 2026-03-03 08:32:13 -08:00
Timothy c2298393ab chore: register Apify tool in tool/credential registries 2026-03-03 08:29:33 -08:00
Timothy 4b8c63bf6e test: add Apify tool tests (11 tests) 2026-03-03 08:29:33 -08:00
Timothy e089c3b72c feat: add Apify credential spec with API token auth 2026-03-03 08:29:33 -08:00
Timothy a93983b5db feat: add Apify integration - actors, runs, datasets, key-value stores 2026-03-03 08:29:27 -08:00
Timothy 20f6329004 chore: register Attio tool in tool/credential registries 2026-03-03 08:25:12 -08:00
Timothy 3c2cf71c47 test: add Attio tool tests (14 tests) 2026-03-03 08:25:08 -08:00
Timothy 56288c3137 feat: add Attio credential spec with API key auth 2026-03-03 08:25:04 -08:00
Timothy 79188921a5 feat: add Attio CRM integration - records, lists, notes, tasks 2026-03-03 08:24:58 -08:00
Timothy 5ab66008ae chore: register Pipedrive tool in tool/credential registries 2026-03-03 08:18:45 -08:00
Timothy f38c9ee049 test: add Pipedrive tool tests (16 tests) 2026-03-03 08:18:41 -08:00
Timothy 86f5e71ec2 feat: add Pipedrive credential spec with API token auth 2026-03-03 08:18:29 -08:00
Timothy 1e15cc8495 feat: add Pipedrive CRM integration - deals, contacts, orgs, activities, pipelines 2026-03-03 08:18:24 -08:00
Timothy 077d82ad82 chore: register Docker Hub tool in tool/credential registries 2026-03-03 08:14:27 -08:00
Timothy e4cf7f3da2 test: add Docker Hub tool tests (9 tests) 2026-03-03 08:14:24 -08:00
Timothy e3bdc9e8d7 feat: add Docker Hub credential spec with PAT auth 2026-03-03 08:14:20 -08:00
Timothy f1c1c9aab3 feat: add Docker Hub integration - search, repos, tags, image details 2026-03-03 08:14:15 -08:00
Timothy 4860739a2f chore: register Vercel in tool/credential registries (#5044) 2026-03-03 08:08:16 -08:00
Timothy 791ee40cd6 test: add Vercel tool unit tests (#5044) 2026-03-03 08:08:12 -08:00
Timothy e0191ac52b feat: add Vercel credential spec (#5044) 2026-03-03 08:08:07 -08:00
Timothy e0724df196 feat: add Vercel tool - deployments, projects, domains, env vars (#5044) 2026-03-03 08:08:00 -08:00
Timothy 2a56294638 chore: register Databricks in tool/credential registries (#5167) 2026-03-03 08:05:25 -08:00
Timothy d5cd557013 test: add Databricks tool unit tests (#5167) 2026-03-03 08:05:21 -08:00
Timothy 2a43f23a3d feat: add Databricks credential spec (#5167) 2026-03-03 08:05:03 -08:00
Timothy 69af8f569a feat: add Databricks tool - SQL, jobs, clusters, workspace (#5167) 2026-03-03 08:04:34 -08:00
Timothy 0e86dbcc9b chore: register Redis tool in tool/credential registries (#5370) 2026-03-03 08:01:43 -08:00
Timothy 92c75aa6f5 test: add Redis tool unit tests (#5370) 2026-03-03 08:01:37 -08:00
Timothy be41d848e5 feat: add Redis credential spec (#5370) 2026-03-03 08:01:32 -08:00
Timothy f7c299f6f0 feat: add Redis tool implementation - KV, hash, list, pub/sub (#5370) 2026-03-03 08:01:25 -08:00
Timothy b6a0f65a09 feat: add Pushover push notification integration (#5415)
4 tools: pushover_send, pushover_validate_user, pushover_list_sounds,
pushover_check_receipt. Supports priority levels, HTML, sounds, TTL.
All 12 unit tests and 13 conformance tests passing.
2026-03-03 07:58:29 -08:00
Timothy 1e7b0068ed chore: register Supabase tool in tool/credential registries 2026-03-03 07:54:34 -08:00
Timothy de5105f313 feat: add Supabase integration - DB, Auth, Edge Functions (#5489)
7 tools: supabase_select, supabase_insert, supabase_update, supabase_delete,
supabase_auth_signup, supabase_auth_signin, supabase_edge_invoke.
All 19 unit tests and 13 conformance tests passing.
2026-03-03 07:54:27 -08:00
Timothy 6d32f1bb36 chore: register YouTube and Microsoft Graph tools in tool/credential registries 2026-03-03 07:51:33 -08:00
Timothy 9c316cee28 feat: add Microsoft Graph integration - Outlook, Teams, OneDrive (#5601)
11 tools: outlook_list_messages, outlook_get_message, outlook_send_mail,
teams_list_teams, teams_list_channels, teams_send_channel_message,
teams_get_channel_messages, onedrive_search_files, onedrive_list_files,
onedrive_download_file, onedrive_upload_file.
All 15 unit tests and 13 conformance tests passing.
2026-03-03 07:47:49 -08:00
Timothy 6af4f2d6e6 feat: add YouTube Data API integration (#5603)
8 tools: search_videos, get_video_details, get_channel, list_channel_videos,
get_playlist, search_channels, get_video_comments, get_video_categories.
All 17 unit tests and 13 conformance tests passing.
2026-03-03 07:47:34 -08:00
Amdev-5 57651900f1 Merge remote-tracking branch 'origin/main' into lusha 2026-03-03 18:46:12 +05:30
Amdev-5 46b0617018 Merge remote-tracking branch 'origin/main' into lusha
# Conflicts:
#	tools/src/aden_tools/credentials/health_check.py
#	tools/src/aden_tools/tools/__init__.py
#	tools/tests/test_health_checks.py
2026-03-03 18:34:54 +05:30
P Gokul Sree Chandra 7d9bd2e86b feat(tools): add YouTube Data API integration
- Implement 6 YouTube API tools (search videos, get video/channel details, list channel videos, get playlist items, search channels)
- Add YOUTUBE_API_KEY credential spec with help_url and description
- Register YouTube tool in tools/__init__.py
- Add comprehensive test coverage (18 tests) with mocking
- Add detailed README with setup instructions and examples
- Use httpx for HTTP requests to YouTube Data API v3
- Verified with real API integration testing

Implements #5603
2026-03-03 07:35:04 +05:30
Amdev-5 cce073dbdb fix(lusha): add pagination and empty filter validation
- Expose page parameter on search_people and search_companies
  (client + MCP tool) enabling access beyond the first 50 results
- Add guard requiring at least one filter on both search endpoints
  to prevent broad requests that burn API credits
- Add unit tests for pagination and empty filter validation
2026-03-02 10:20:08 +05:30
Vasu Bansal 6a92588264 fix(plaid): update v0.6 credential compatibility and stabilize tests 2026-03-01 01:16:16 +05:30
Vasu Bansal 276aad6f0d feat: add Plaid banking integration
- Implement Plaid connector for account balances
- Add transaction history retrieval
- Include GL reconciliation functionality
- Add institution metadata lookup
- Include comprehensive tests and documentation

Closes #4016
2026-03-01 01:16:16 +05:30
Vasu Bansal 10620bda4f fix(sap): update credential-store compatibility and test imports 2026-03-01 01:07:00 +05:30
Vasu Bansal c214401a00 feat(integration): add SAP S/4HANA connector
Add complete SAP S/4HANA integration with:
- Connector for OData API access
- Credential management following Hive patterns
- Unit tests with mocked responses
- Documentation and usage examples

Refs #3182
2026-03-01 01:07:00 +05:30
Vasu Bansal 260ac33324 fix(s3): support v0.6 credential refs and register S3 tools 2026-03-01 00:56:22 +05:30
Vasu Bansal d4cd643860 feat: add AWS S3 integration for cloud object storage
- Add S3Storage class with upload, download, list, delete operations
- Support IAM roles, environment variables, and credential store
- Implement retry logic with adaptive backoff
- Add MCP tools: s3_upload, s3_download, s3_list, s3_delete, s3_check_credentials
- Include comprehensive tests with moto mocking
- Add documentation for setup and IAM permissions

Closes #3012
2026-03-01 00:54:57 +05:30
IamSayeed dc16cfda21 Merge branch 'main' into feature/add-asana-integration 2026-02-28 11:28:43 +05:30
Navya Bijoy ddd30a950d Integration: add Databricks MCP tool integration
Implements the Databricks MCP tool integration for the Hive agent framework
2026-02-26 21:01:59 +05:30
KRYSTALM7 3ca0e63d54 feat(tools): add Pushover push notification integration
Closes #5415
2026-02-26 13:54:34 +00:00
Shivam Shahi– oss/acc 0f8627f17a format 2026-02-22 00:25:15 +05:30
Utkarsh Singh cd0cf69099 feat(tools): add Brevo transactional email and SMS integration
- Add brevo_tool with 6 MCP tools: brevo_send_email, brevo_send_sms,
  brevo_create_contact, brevo_get_contact, brevo_update_contact,
  brevo_get_email_stats
- Add CredentialSpec for BREVO_API_KEY in credentials/brevo.py
- Register brevo_tool in tools/__init__.py and credentials/__init__.py
- Add README with setup instructions and usage examples
- Add 34 unit tests covering all tools, validation and error handling

Closes #5127
2026-02-20 13:19:07 +05:30
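A minimal sketch of the Brevo transactional email call described in the commit above (api-key header, JSON payload to the v3 smtp/email endpoint). Illustrative only, not the repository's brevo_tool code.

```python
# Illustrative sketch of the api-key header pattern; not the repository's brevo_tool.
import httpx

def brevo_send_email(api_key: str, sender: dict, to: list[dict],
                     subject: str, html: str) -> dict:
    resp = httpx.post(
        "https://api.brevo.com/v3/smtp/email",
        headers={"api-key": api_key, "accept": "application/json"},
        json={"sender": sender, "to": to, "subject": subject, "htmlContent": html},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example payload shapes (illustrative values):
# sender = {"name": "Agent", "email": "agent@example.com"}
# to = [{"email": "user@example.com", "name": "User"}]
```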
Amdev-5 9744363342 fix(lusha): address PR review round 2 — structured filters, pagination, correct types
- search_people: replaced freetext searchText concatenation with proper
  structured Lusha API filters (jobTitles, seniority as list[int],
  departments, locations as dict, company_names, industry_ids, search_text)
- search_companies: added locations, company_names, search_text params;
  made all params optional for flexible queries
- Pagination: exposed limit param (clamped 10-50 per Lusha API constraints)
  on both search tools, replacing hardcoded size=25
- get_signals: changed ids from list[str] to list[int], removed internal
  str-to-int conversion as Lusha IDs are always numeric
- seniority type corrected to list[int] (API rejects string-encoded values
  despite OpenAPI spec suggesting strings — verified via live integration)
- Unit tests updated for all changes (19/19 pass)

Verified against live Lusha API: all 6 tools return correct responses.
2026-02-17 22:00:09 +05:30
Amdev-5 6fe8439e94 fix(lusha): use mainIndustriesIds for company search, safer credential handling
- search_companies: replace names filter with mainIndustriesIds (numeric
  industry IDs) per Lusha API schema. Parameter changed from
  industry: str to industry_ids: list[int] | None.
- _get_api_key: return None instead of raising TypeError on unexpected
  credential type. Lets _get_client handle it with the standard error dict
  pattern used across all tools.
- Updated unit tests for new industry_ids parameter and added test for
  non-string credential handling.
2026-02-17 21:33:02 +05:30
Amdev-5 8e61ffe377 fix(tools): remove invalid searchText field from Lusha prospecting filters
Lusha API rejects filters.companies.include.searchText (HTTP 400).
Replaced with valid 'names' field in search_companies and removed
redundant company searchText from search_people. Updated unit tests.
2026-02-17 21:33:02 +05:30
Amdev-5 723476f7a7 feat(tools): add Lusha MCP integration with credentials and health checks 2026-02-17 21:33:02 +05:30
IamSayeed 0f253027ae Merge branch 'main' into feature/add-asana-integration 2026-02-17 12:20:01 +05:30
Sayeed Rizwan 6053895a82 fix(asana): resolve from PR feedback - refactor client, fix specs, add tests 2026-02-17 12:18:06 +05:30
Shivam Shahi– oss/acc ceffa38717 Merge branch 'main' into feat/zoho-crm 2026-02-17 02:46:29 +05:30
Your hh3538962 ae205fa3f2 fix(tools): address Power BI integration code review feedback
- Fix export endpoint: /Export -> /ExportTo
- Add 202 Accepted response handling
- Add notifyOption to refresh_dataset API call
- Rename format parameter to export_format (avoid shadowing builtin)
- Add PNG support to export formats
- All critical API issues from review addressed
2026-02-16 14:00:09 +05:00
Shivam Shahi– oss/acc 669a05892b Merge branch 'main' into feat/zoho-crm 2026-02-15 21:47:52 +05:30
IamSayeed 4898a9759a Merge branch 'main' into feature/add-asana-integration 2026-02-15 13:07:15 +05:30
Sayeed Rizwan 2c2fa25580 fix: Resolve merge conflicts in credential and tool registries 2026-02-15 13:00:23 +05:30
Sayeed Rizwan 56496d7dbd feat: Add Asana integration for project management automation
- Implement 25 MCP tools for comprehensive Asana operations
  - Task management (create, update, search, delete, complete, comment, subtask)
  - Project management (create, update, list, get tasks)
  - Workspace & team operations (list workspaces, get users)
  - Section management for Kanban workflows
  - Tag and custom field support

- Add Personal Access Token (PAT) authentication
- Use official asana>=3.2.0 Python SDK (v5+ API)
- Include comprehensive error handling with ApiException
- Add 5 unit tests with 100% pass rate
- Provide detailed documentation and usage examples

Technical Details:
- Uses asana.ApiClient with Configuration pattern
- Implements workspace resolution by name or GID
- Handles paginated responses automatically
- Follows CredentialStoreAdapter pattern
- Matches existing tool structure (slack_tool, github_tool)

Closes #4156
2026-02-15 11:33:17 +05:30
y0sif dd0696e44d chore: resolve merge conflicts with main 2026-02-14 21:38:44 +02:00
y0sif dcda273e0b chore: resolve merge conflicts with main 2026-02-14 21:32:33 +02:00
y0sif f3b159c650 docs(tools): document Attio CRM in README 2026-02-14 21:23:47 +02:00
y0sif 06df037e28 chore: add Attio credentials to test spec file 2026-02-14 21:22:55 +02:00
y0sif e814e516d1 chore: add Attio credentials to init file 2026-02-14 21:21:37 +02:00
y0sif 0375e068ed test(tools): add Attio tool tests 2026-02-14 21:20:03 +02:00
y0sif 34ffc533d3 feat(tools): add Attio CRM integration 2026-02-14 21:19:14 +02:00
mubarakar95 ea2ea1a4ae Merge branch 'main' into integration/apify 2026-02-14 17:53:39 +05:30
mubarakar95 9e11947687 style: apply ruff formatting to apify_tool.py 2026-02-14 17:22:35 +05:30
mubarakar95 47117281e1 fix(test): resolve E501 line too long in test_apify_tool.py 2026-02-14 17:22:33 +05:30
mubarakar95 032dd13f5a feat(tools): implement Apify integration with 4 tools and comprehensive tests
- Added credential spec with health check endpoint
- Implemented apify_run_actor (sync/async execution)
- Implemented apify_get_dataset (result retrieval)
- Implemented apify_get_run (status checking)
- Implemented apify_search_actors (marketplace search)
- Created comprehensive README with examples and use cases
- Added 24 unit tests with mocked API responses
- All tests passing, conformance validated, linting clean

Resolves: #4510
2026-02-14 17:22:25 +05:30
mubarakar95 13d8ebbeff feat: Add Apify integration (issue #4510)
Implements comprehensive Apify integration for web scraping and automation:

- Added 4 new tools: apify_run_actor, apify_get_dataset, apify_get_run, apify_search_actors
- Credential management for APIFY_API_TOKEN with health check
- Support for synchronous (wait=True) and asynchronous (wait=False) actor execution
- Actor ID validation and comprehensive error handling
- Full test coverage (26 tests passing)
- README with usage examples and documentation

Addresses #4510
2026-02-14 11:53:56 +05:30
Shivam Shahi– oss/acc 2efa0e01df ruff format fix 2026-02-14 00:35:30 +05:30
Shivam Shahi– oss/acc 6044369fdf feat(tools): add Zoho CRM v8 integration with OAuth2 and MCP tools
Add Zoho CRM MCP integration for lead/contact/account/deal workflows with notes support. Implements 5 MCP tools:
- zoho_crm_search: Search Leads/Contacts/Accounts/Deals by criteria or word with pagination
- zoho_crm_get_record: Fetch a single record by module and ID
- zoho_crm_create_record: Create records with pass-through field payloads
- zoho_crm_update_record: Update records by ID with partial field payloads
- zoho_crm_add_note: Create notes linked to CRM records via Parent_Id mapping

Features:
- Zoho OAuth2 provider added in core credentials (refresh-token flow)
- Zoho auth format: Authorization: Zoho-oauthtoken <token>
- Region/DC-aware routing using accounts domain/region + api_domain usage
- Persisted DC metadata on refresh (api_domain/accounts_domain/location)
- Credential spec and health check registration for zoho_crm
- Tool registration and allowed-tool list updates
- Normalized tool responses with retriable 429 handling
- README with setup, auth modes, usage, and testing instructions
- Comprehensive unit/integration coverage updates for tool, provider, and health checks

Validation:
- Scoped ruff lint/format checks passed
- Targeted test suite passed: 563 passed, 18 skipped

Closes #4418
2026-02-13 18:28:12 +05:30
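A minimal sketch of the Zoho auth format and DC-aware routing described in the commit above: the access token is sent as "Zoho-oauthtoken" and requests go to the api_domain returned during token refresh. Illustrative only, not the repository's zoho_crm tool code.

```python
# Illustrative sketch of the Zoho-oauthtoken header and DC-aware routing;
# not the repository's zoho_crm tool implementation.
import httpx

def zoho_crm_get_record(access_token: str, api_domain: str,
                        module: str, record_id: str) -> dict:
    """api_domain (e.g. https://www.zohoapis.com or a regional DC) comes from token refresh."""
    resp = httpx.get(
        f"{api_domain}/crm/v8/{module}/{record_id}",
        headers={"Authorization": f"Zoho-oauthtoken {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```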
RichardTang-Aden 97440f9e8a Merge branch 'main' into feature/x-twitter-integration 2026-02-11 17:13:33 -08:00
Your hh3538962 765f7cae58 feat(tools): add get_datasets, get_reports, and export_report functions to Power BI integration 2026-02-11 22:19:51 +05:00
Your hh3538962 b455c8a2ad Merge remote-tracking branch 'origin/main' into feat/power-bi-integration 2026-02-11 22:07:00 +05:00
Sapna vishnoi da25e0ffa5 Merge branch 'main' into feat/redshift-integration 2026-02-11 13:42:26 +05:30
Your hh3538962 e07703c01f feat(tools): add Power BI integration - initial structure with workspace and dataset refresh functions 2026-02-10 13:23:32 +05:00
mishrapravin114 a4abf3eb2b Merge upstream/main: resolve conflicts with Apollo integration
- Keep both APOLLO_CREDENTIALS and AIRTABLE_CREDENTIALS
- Keep both apollo_tool and airtable_tool imports (alphabetical)

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-10 00:25:17 +05:30
mishrapravin114 269d72d073 Merge upstream/main: resolve conflicts with Apollo integration
- Keep both APOLLO_CREDENTIALS and CALENDLY_CREDENTIALS
- Keep both apollo_tool and calendly_tool imports (alphabetical)

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-10 00:20:17 +05:30
mishrapravin114 c8f5dccbd2 docs(airtable): add rate limit section to README
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-10 00:17:49 +05:30
mishrapravin114 8b797ee73f feat(airtable): add rate limit retry and retry_after
- Add 429 handling with retry_after from Retry-After header
- Add _request_with_retry (2 retries) for all API calls
- Update tests to use httpx.request

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-10 00:17:37 +05:30
mishrapravin114 de38adb1e4 feat(calendly): add rate limit handling, retry, 7-day validation
- Add 429 handling with retry_after from Retry-After header
- Add _request_with_retry (2 retries) for all API calls
- Validate get_availability date range <= 7 days
- Update tests to use httpx.request

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-10 00:16:37 +05:30
Sapna vishnoi c169bcc5d8 Merge branch 'main' into feat/redshift-integration 2026-02-09 23:32:08 +05:30
kubrakaradirek 80ea286beb fix: resolve complex merge conflicts and restore integrations 2026-02-09 16:09:43 +03:00
kubrakaradirek 3499be782e feat: implement MSSQL tool with schema discovery closes #3377 2026-02-09 15:32:57 +03:00
Gordon Ng 16603ae49c Test MCP 2026-02-09 01:48:49 -05:00
Gordon Ng bf6bd9ce7f test mcp 2026-02-09 01:48:46 -05:00
Gordon Ng a54c0f6f46 update 2026-02-09 01:20:25 -05:00
Gordon Ng beeed11d48 update 2026-02-09 01:11:33 -05:00
Manas Dutta 25331590a7 feat(reddit): add Reddit health checker and update tool functions 2026-02-08 19:26:01 +05:30
GastonAQS bff9f8976e Merge branch 'main' into feature/add-trello-integration 2026-02-07 15:57:48 -03:00
Manas Dutta b71628e211 Merge branch 'main' into feature/reddit-integration 2026-02-07 19:35:02 +05:30
Manas Dutta 8c1cb1f55b feat: add Reddit integration with 18 MCP tools
Implements Reddit API integration for community management and content monitoring.

Features:
- Search & Monitoring: search posts/comments, get subreddit feeds (new/hot), get posts/comments (6 tools)
- Content Creation: submit posts, reply, edit, delete comments (5 tools)
- User Engagement: get profiles, upvote, downvote, save posts (4 tools)
- Moderation: remove/approve posts, ban users (3 tools)

Implementation:
- OAuth 2.0 authentication via REDDIT_CREDENTIALS
- PRAW library for Reddit API integration
- Comprehensive error handling and validation
- Full test coverage (25 tests passing)

Resolves #3595
2026-02-07 18:38:59 +05:30
mishrapravin114 66214384a9 fix: add register_airtable import and fix ruff I001 import order
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 17:18:26 +05:30
mishrapravin114 6d6646887c feat(tools): add Airtable bases and records integration
- Add Airtable tool with 5 MCP tools:
  - airtable_list_bases
  - airtable_list_tables
  - airtable_list_records (with filter/sort)
  - airtable_create_record
  - airtable_update_record
- Add AIRTABLE_CREDENTIALS with credentialSpec + credentialStore
- Add AirtableHealthChecker for token validation
- Add README with setup and usage
- Add unit tests (9 tests total)

Fixes #2911

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 17:14:46 +05:30
mishrapravin114 6f8db0ed08 style: apply ruff format to calendly and health check files
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 17:00:05 +05:30
mishrapravin114 6aaf6836ea fix(calendly): resolve ruff lint errors (UP017, E501)
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 16:58:48 +05:30
mishrapravin114 4f2348f50e feat(tools): add Calendly scheduling integration
- Add Calendly tool with 4 MCP tools:
  - calendly_list_event_types
  - calendly_get_availability
  - calendly_get_booking_link
  - calendly_cancel_event
- Add CALENDLY_CREDENTIALS with credentialSpec + credentialStore
- Add CalendlyHealthChecker for token validation
- Add README with setup and usage
- Add unit tests (12 tests total)

Fixes #2930

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-07 16:51:27 +05:30
RichardTang-Aden deb7f2f72a Merge pull request #3814 from Amdev-5/feature/x-twitter-integration
fix(tests): update credential group test for X integration
2026-02-06 09:16:42 -08:00
Amdev-5 d989d9c65a fix(tests): update credential group test for X integration
Add test_x_credentials_share_credential_group to verify all X credentials
share the 'x' credential group. Update test_credential_group_default_empty
to account for X credentials alongside existing Google exceptions.
2026-02-06 22:17:40 +05:30
bryan 4173c606ab Merge feature/x-twitter-final-integration from Amdev-5/hive - X (Twitter) tool with DM support 2026-02-06 08:03:43 -08:00
Amdev-5 a01430d20f Merge verification fixes into PR branch 2026-02-06 16:42:56 +05:30
Amdev-5 2a8f775732 feat(tools): enhance X tool with DM support and robust error handling
- Added `x_send_dm` tool using v2 endpoint (`POST /dm_conversations/with/:id/messages`) for reliable 1:1 messaging.
- Fixed 403 Forbidden payload validation errors by simplifying DM payload structure.
- Enhanced `_handle_response` to verify `x_tool.py` returns raw API error details for 403/400 responses, aiding in permission debugging.
- Updated `demo_x_tools.py` to support standard `.env` variable names (e.g., `X_API_KEY`) and added user lookup for DM testing.
- Added unit tests covering new DM functionality and payload verification in `test_x_tool.py`.
- Audited credential handling: Read-only tools (Search/Mentions) correctly use Bearer Token, while Write tools (Post/Reply/Delete/DM) enforce OAuth 1.0a User Context.

Verified with live API tests (see PR description for logs).
2026-02-06 15:48:20 +05:30
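A hedged sketch of the v2 DM call this commit describes, signed with OAuth 1.0a user context; the env var names and error handling below are illustrative, not the tool's actual implementation:

```python
# Sketch only — posts a 1:1 DM via the v2 endpoint mentioned above.
import os
import requests
from requests_oauthlib import OAuth1

auth = OAuth1(
    os.environ["X_API_KEY"],
    os.environ["X_API_SECRET"],
    os.environ["X_ACCESS_TOKEN"],
    os.environ["X_ACCESS_TOKEN_SECRET"],
)

participant_id = "2244994945"  # placeholder numeric user id
resp = requests.post(
    f"https://api.twitter.com/2/dm_conversations/with/{participant_id}/messages",
    auth=auth,
    json={"text": "Hello from the X tool!"},
    timeout=30,
)
if resp.status_code in (400, 403):
    # Surface the raw error body, mirroring the _handle_response behavior above
    print(resp.status_code, resp.text)
else:
    resp.raise_for_status()
    print(resp.json())
```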
Sapna vishnoi 4a0d9b2855 Merge branch 'main' into feat/redshift-integration 2026-02-05 11:44:09 +05:30
y0sif 92c65d69ea chore: resolve merge conflicts with main 2026-02-05 07:13:36 +02:00
Yosif Soliman 910a8968c4 fix(linear): correct GraphQL variable type for workflow states query 2026-02-05 07:00:28 +02:00
Sapna vishnoi cdb4679c5a Merge branch 'main' into feat/redshift-integration 2026-02-05 00:05:38 +05:30
Sapna.Vishnoi 1a9dce89b4 feat(tools): Add Amazon Redshift integration
- Implement 5 core functions for data warehouse querying
- Add boto3 integration with Redshift Data API
- Security: Read-only SELECT queries by default
- Full credential store support
- 26/26 tests passing (100% coverage)
- Complete documentation with examples
2026-02-04 23:58:35 +05:30
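A minimal sketch of the boto3 Redshift Data API flow this commit refers to — the cluster, database, and query below are placeholders, not the tool's configuration:

```python
# Sketch: submit a read-only SELECT, poll for completion, fetch rows.
import os
import time
import boto3

client = boto3.client("redshift-data")

stmt = client.execute_statement(
    ClusterIdentifier=os.environ["REDSHIFT_CLUSTER_ID"],  # assumed env var names
    Database=os.environ["REDSHIFT_DATABASE"],
    DbUser=os.environ["REDSHIFT_DB_USER"],
    Sql="SELECT event_name, COUNT(*) FROM events GROUP BY 1 LIMIT 10",
)

while True:
    desc = client.describe_statement(Id=stmt["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(0.5)

if desc["Status"] == "FINISHED":
    result = client.get_statement_result(Id=stmt["Id"])
    for row in result["Records"]:
        print([col.get("stringValue", col.get("longValue")) for col in row])
```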
Aneesh cf1e4d7f88 Merge remote-tracking branch 'origin/main' into feature/youtube-transcript 2026-02-04 19:46:52 +05:30
Aneesh f2f0b4fc61 feat(tools): add youtube transcript integration via youtube-transcript-api 2026-02-04 19:24:40 +05:30
y0sif b21dd25181 fix(linear): handle credential decryption errors gracefully and fix MCP tool credential handling 2026-02-04 05:21:23 +02:00
y0sif 04a18bcbe5 docs(tools): document Linear integration in README and in the setup-credentials Claude skill 2026-02-04 04:05:15 +02:00
y0sif 7f66dd67eb feat(linear): add OAuth setup instructions 2026-02-04 04:03:37 +02:00
y0sif cfa03b89c8 test(tools): add comprehensive Linear tool tests 2026-02-04 03:47:28 +02:00
y0sif 9866d7a22b feat(tools): add Linear project management integration 2026-02-04 03:47:03 +02:00
GastonAQS 331a6e442f feat: add Trello integration tools and API client 2026-02-03 10:32:25 -03:00
Sashank Thapa 1c2295b2b5 Merge branch 'adenhq:main' into feature/twitter-x-mcp-tool 2026-02-03 16:20:45 +05:30
Sashank Thapa fa43ca3785 Merge branch 'adenhq:main' into feature/twitter-x-mcp-tool 2026-01-31 16:26:39 +05:30
kozuedoingregression b4a2c3bd14 ruff formatting and lint fixes 2026-01-31 16:18:16 +05:30
kozuedoingregression 2d4ec4f462 lint fix 2026-01-31 16:14:25 +05:30
kozuedoingregression 1e8b933da0 add X (Twitter) integration tool 2026-01-31 15:49:16 +05:30
Aneesh 48b1e0e038 Docs: clarify agent creation assumptions in Getting Started 2026-01-28 22:49:30 +05:30
317 changed files with 43938 additions and 656 deletions
+1
View File
@@ -79,3 +79,4 @@ core/tests/*dumps/*
screenshots/*
.gemini/*
+4
View File
@@ -2,6 +2,10 @@
Shared agent instructions for this workspace.
## Deprecations
- **TUI is deprecated.** The terminal UI (`hive tui`) is no longer maintained. Use the browser-based interface (`hive open`) instead.
## Coding Agent Notes
-
+2
View File
@@ -112,6 +112,8 @@ This sets up:
- Finally, it will open the Hive dashboard in your browser
> **Tip:** To reopen the dashboard later, run `hive open` from the project directory.
<img width="2500" height="1214" alt="home-screen" src="https://github.com/user-attachments/assets/134d897f-5e75-4874-b00b-e0505f6b45c4" />
### Build Your First Agent
+31
View File
@@ -0,0 +1,31 @@
perf: reduce subprocess spawning in quickstart scripts (#4427)
## Problem
Windows process creation (CreateProcess) is 10-100x slower than Linux fork/exec.
The quickstart scripts were spawning 4+ separate `uv run python -c "import X"`
processes to verify imports, adding ~600ms overhead on Windows.
## Solution
Consolidated all import checks into a single batch script that checks multiple
modules in one subprocess call, reducing spawn overhead by ~75%.
## Changes
- **New**: `scripts/check_requirements.py` - Batched import checker
- **New**: `scripts/test_check_requirements.py` - Test suite
- **New**: `scripts/benchmark_quickstart.ps1` - Performance benchmark tool
- **Modified**: `quickstart.ps1` - Updated import verification (2 sections)
- **Modified**: `quickstart.sh` - Updated import verification
## Performance Impact
**Benchmark results on Windows:**
- Before: ~19.8 seconds for import checks
- After: ~4.9 seconds for import checks
- **Improvement: 14.9 seconds saved (75.2% faster)**
## Testing
- ✅ All functional tests pass (`scripts/test_check_requirements.py`)
- ✅ Quickstart scripts work correctly on Windows
- ✅ Error handling verified (invalid imports reported correctly)
- ✅ Performance benchmark confirms 75%+ improvement
Fixes #4427
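A hedged sketch of what a batched import checker along these lines could look like — the actual `scripts/check_requirements.py` may differ:

```python
# Sketch: verify many modules in ONE interpreter process instead of spawning
# a separate `python -c "import X"` subprocess per module.
import importlib
import sys

def check_imports(modules: list[str]) -> int:
    """Try importing every module; report all failures at once."""
    failed = []
    for name in modules:
        try:
            importlib.import_module(name)
        except Exception as exc:  # any import error counts as a failure
            failed.append(f"{name}: {exc}")
    if failed:
        print("Missing or broken imports:\n  " + "\n  ".join(failed))
        return 1
    print(f"OK: {len(modules)} modules importable")
    return 0

if __name__ == "__main__":
    sys.exit(check_imports(sys.argv[1:] or ["framework", "aden_tools"]))
```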
+2 -2
View File
@@ -10,7 +10,7 @@ def _load_preferred_model() -> str:
config_path = Path.home() / ".hive" / "configuration.json"
if config_path.exists():
try:
with open(config_path) as f:
with open(config_path, encoding="utf-8") as f:
config = json.load(f)
llm = config.get("llm", {})
if llm.get("provider") and llm.get("model"):
@@ -24,7 +24,7 @@ def _load_preferred_model() -> str:
class RuntimeConfig:
model: str = field(default_factory=_load_preferred_model)
temperature: float = 0.7
max_tokens: int = 40000
max_tokens: int = 8000
api_key: str | None = None
api_base: str | None = None
@@ -7,11 +7,11 @@ from framework.graph import NodeSpec
# Load reference docs at import time so they're always in the system prompt.
# No voluntary read_file() calls needed — the LLM gets everything upfront.
_ref_dir = Path(__file__).parent.parent / "reference"
_framework_guide = (_ref_dir / "framework_guide.md").read_text()
_file_templates = (_ref_dir / "file_templates.md").read_text()
_anti_patterns = (_ref_dir / "anti_patterns.md").read_text()
_framework_guide = (_ref_dir / "framework_guide.md").read_text(encoding="utf-8")
_file_templates = (_ref_dir / "file_templates.md").read_text(encoding="utf-8")
_anti_patterns = (_ref_dir / "anti_patterns.md").read_text(encoding="utf-8")
_gcu_guide_path = _ref_dir / "gcu_guide.md"
_gcu_guide = _gcu_guide_path.read_text() if _gcu_guide_path.exists() else ""
_gcu_guide = _gcu_guide_path.read_text(encoding="utf-8") if _gcu_guide_path.exists() else ""
def _is_gcu_enabled() -> bool:
+2 -2
View File
@@ -660,7 +660,7 @@ class GraphBuilder:
# Generate Python code
code = self._generate_code(graph)
Path(path).write_text(code)
Path(path).write_text(code, encoding="utf-8")
self.session.phase = BuildPhase.EXPORTED
self._save_session()
@@ -754,7 +754,7 @@ class GraphBuilder:
"""Save session to disk."""
self.session.updated_at = datetime.now()
path = self.storage_path / f"{self.session.id}.json"
path.write_text(self.session.model_dump_json(indent=2))
path.write_text(self.session.model_dump_json(indent=2), encoding="utf-8")
def _load_session(self, session_id: str) -> BuildSession:
"""Load session from disk."""
+1 -1
View File
@@ -92,7 +92,7 @@ def get_api_key() -> str | None:
def get_gcu_enabled() -> bool:
"""Return whether GCU (browser automation) is enabled in user config."""
return get_hive_config().get("gcu_enabled", False)
return get_hive_config().get("gcu_enabled", True)
def get_api_base() -> str | None:
+1 -1
View File
@@ -69,7 +69,7 @@ def save_credential_key(key: str) -> Path:
# Restrict the secrets directory itself
path.parent.chmod(stat.S_IRWXU) # 0o700
path.write_text(key)
path.write_text(key, encoding="utf-8")
path.chmod(stat.S_IRUSR | stat.S_IWUSR) # 0o600
os.environ[CREDENTIAL_KEY_ENV_VAR] = key
@@ -73,6 +73,7 @@ from .provider import (
TokenExpiredError,
TokenPlacement,
)
from .zoho_provider import ZohoOAuth2Provider
__all__ = [
# Types
@@ -82,6 +83,7 @@ __all__ = [
# Providers
"BaseOAuth2Provider",
"HubSpotOAuth2Provider",
"ZohoOAuth2Provider",
# Lifecycle
"TokenLifecycleManager",
"TokenRefreshResult",
@@ -0,0 +1,198 @@
"""
Zoho CRM-specific OAuth2 provider.
Pre-configured for Zoho's OAuth2 endpoints and CRM scopes.
Extends BaseOAuth2Provider for Zoho-specific behavior.
Usage:
provider = ZohoOAuth2Provider(
client_id="your-client-id",
client_secret="your-client-secret",
accounts_domain="https://accounts.zoho.com", # or .in, .eu, etc.
)
# Use with credential store
store = CredentialStore(
storage=EncryptedFileStorage(),
providers=[provider],
)
See: https://www.zoho.com/crm/developer/docs/api/v2/access-refresh.html
"""
from __future__ import annotations
import logging
import os
from typing import Any
from ..models import CredentialObject, CredentialRefreshError, CredentialType
from .base_provider import BaseOAuth2Provider
from .provider import OAuth2Config, OAuth2Token, TokenPlacement
logger = logging.getLogger(__name__)
# Default CRM scopes for Phase 1 (Leads, Contacts, Accounts, Deals, Notes)
ZOHO_DEFAULT_SCOPES = [
"ZohoCRM.modules.leads.ALL",
"ZohoCRM.modules.contacts.ALL",
"ZohoCRM.modules.accounts.ALL",
"ZohoCRM.modules.deals.ALL",
"ZohoCRM.modules.notes.CREATE",
]
class ZohoOAuth2Provider(BaseOAuth2Provider):
"""
Zoho CRM OAuth2 provider with pre-configured endpoints.
Handles Zoho-specific OAuth2 behavior:
- Pre-configured token and authorization URLs (region-aware)
- Default CRM scopes for Leads, Contacts, Accounts, Deals, Notes
- Token validation via Zoho CRM API
- Authorization header format: "Authorization: Zoho-oauthtoken {token}"
Example:
provider = ZohoOAuth2Provider(
client_id="your-zoho-client-id",
client_secret="your-zoho-client-secret",
accounts_domain="https://accounts.zoho.com", # US
# or "https://accounts.zoho.in" for India
# or "https://accounts.zoho.eu" for EU
)
"""
def __init__(
self,
client_id: str,
client_secret: str,
accounts_domain: str = "https://accounts.zoho.com",
api_domain: str | None = None,
scopes: list[str] | None = None,
):
"""
Initialize Zoho OAuth2 provider.
Args:
client_id: Zoho OAuth2 client ID
client_secret: Zoho OAuth2 client secret
accounts_domain: Zoho accounts domain (region-specific)
- US: https://accounts.zoho.com
- India: https://accounts.zoho.in
- EU: https://accounts.zoho.eu
- etc.
api_domain: Zoho API domain for CRM calls (used in validate).
Defaults to ZOHO_API_DOMAIN env or https://www.zohoapis.com
scopes: Override default scopes if needed
"""
base = accounts_domain.rstrip("/")
token_url = f"{base}/oauth/v2/token"
auth_url = f"{base}/oauth/v2/auth"
config = OAuth2Config(
token_url=token_url,
authorization_url=auth_url,
client_id=client_id,
client_secret=client_secret,
default_scopes=scopes or ZOHO_DEFAULT_SCOPES,
token_placement=TokenPlacement.HEADER_CUSTOM,
custom_header_name="Authorization",
)
super().__init__(config, provider_id="zoho_crm_oauth2")
self._accounts_domain = base
self._api_domain = (
api_domain or os.getenv("ZOHO_API_DOMAIN", "https://www.zohoapis.com")
).rstrip("/")
@property
def supported_types(self) -> list[CredentialType]:
return [CredentialType.OAUTH2]
def format_for_request(self, token: OAuth2Token) -> dict[str, Any]:
"""
Format token for Zoho CRM API requests.
Zoho uses Authorization header: "Zoho-oauthtoken {access_token}"
(not Bearer).
"""
return {
"headers": {
"Authorization": f"Zoho-oauthtoken {token.access_token}",
"Content-Type": "application/json",
"Accept": "application/json",
}
}
def validate(self, credential: CredentialObject) -> bool:
"""
Validate Zoho credential by making a lightweight API call.
Uses GET /crm/v2/users?type=CurrentUser (doesn't require module access).
Treats 429 as valid-but-rate-limited.
"""
access_token = credential.get_key("access_token")
if not access_token:
return False
try:
client = self._get_client()
response = client.get(
f"{self._api_domain}/crm/v2/users?type=CurrentUser",
headers={
"Authorization": f"Zoho-oauthtoken {access_token}",
"Accept": "application/json",
},
timeout=self.config.request_timeout,
)
return response.status_code in (200, 429)
except Exception as e:
logger.debug("Zoho credential validation failed: %s", e)
return False
def _parse_token_response(self, response_data: dict[str, Any]) -> OAuth2Token:
"""
Parse Zoho token response.
Zoho returns:
{
"access_token": "...",
"refresh_token": "...",
"expires_in": 3600,
"api_domain": "https://www.zohoapis.com",
"token_type": "Bearer"
}
"""
token = OAuth2Token.from_token_response(response_data)
if "api_domain" in response_data:
token.raw_response["api_domain"] = response_data["api_domain"]
return token
def refresh(self, credential: CredentialObject) -> CredentialObject:
"""Refresh Zoho OAuth2 credential and persist DC metadata."""
refresh_tok = credential.get_key("refresh_token")
if not refresh_tok:
raise CredentialRefreshError(f"Credential '{credential.id}' has no refresh_token")
try:
new_token = self.refresh_access_token(refresh_tok)
except Exception as e:
raise CredentialRefreshError(f"Failed to refresh '{credential.id}': {e}") from e
credential.set_key("access_token", new_token.access_token, expires_at=new_token.expires_at)
if new_token.refresh_token and new_token.refresh_token != refresh_tok:
credential.set_key("refresh_token", new_token.refresh_token)
api_domain = new_token.raw_response.get("api_domain")
if isinstance(api_domain, str) and api_domain:
credential.set_key("api_domain", api_domain.rstrip("/"))
accounts_server = new_token.raw_response.get("accounts-server")
if isinstance(accounts_server, str) and accounts_server:
credential.set_key("accounts_domain", accounts_server.rstrip("/"))
location = new_token.raw_response.get("location")
if isinstance(location, str) and location:
credential.set_key("location", location.strip().lower())
return credential
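For context, a minimal usage sketch (not part of the diff) showing how the provider's `format_for_request()` output could drive a CRM request, assuming `httpx` is available and the classes above are importable; the token values are placeholders:

```python
# Assumes ZohoOAuth2Provider / OAuth2Token are imported from the credentials
# package shown above; in practice the credential store supplies the token.
import httpx

provider = ZohoOAuth2Provider(
    client_id="your-zoho-client-id",
    client_secret="your-zoho-client-secret",
    accounts_domain="https://accounts.zoho.com",
)
token = OAuth2Token.from_token_response(
    {"access_token": "placeholder-token", "expires_in": 3600}
)
request_kwargs = provider.format_for_request(token)  # {"headers": {...}}

resp = httpx.get(
    "https://www.zohoapis.com/crm/v2/Leads",
    headers=request_kwargs["headers"],
    timeout=30,
)
print(resp.status_code)
```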
+1 -1
View File
@@ -568,7 +568,7 @@ def _load_nodes_from_python_agent(agent_path: Path) -> list:
def _load_nodes_from_json_agent(agent_json: Path) -> list:
"""Load nodes from a JSON-based agent."""
try:
with open(agent_json) as f:
with open(agent_json, encoding="utf-8") as f:
data = json.load(f)
from framework.graph import NodeSpec
+3 -3
View File
@@ -227,7 +227,7 @@ class EncryptedFileStorage(CredentialStorage):
index_path = self.base_path / "metadata" / "index.json"
if not index_path.exists():
return []
with open(index_path) as f:
with open(index_path, encoding="utf-8") as f:
index = json.load(f)
return list(index.get("credentials", {}).keys())
@@ -268,7 +268,7 @@ class EncryptedFileStorage(CredentialStorage):
index_path = self.base_path / "metadata" / "index.json"
if index_path.exists():
with open(index_path) as f:
with open(index_path, encoding="utf-8") as f:
index = json.load(f)
else:
index = {"credentials": {}, "version": "1.0"}
@@ -283,7 +283,7 @@ class EncryptedFileStorage(CredentialStorage):
index["last_modified"] = datetime.now(UTC).isoformat()
with open(index_path, "w") as f:
with open(index_path, "w", encoding="utf-8") as f:
json.dump(index, f, indent=2)
+1 -1
View File
@@ -170,7 +170,7 @@ def _dump_failed_request(
"temperature": kwargs.get("temperature"),
}
with open(filepath, "w") as f:
with open(filepath, "w", encoding="utf-8") as f:
json.dump(dump_data, f, indent=2, default=str)
return str(filepath)
+7 -5
View File
@@ -162,7 +162,7 @@ def _load_session(session_id: str) -> BuildSession:
if not session_file.exists():
raise ValueError(f"Session '{session_id}' not found")
with open(session_file) as f:
with open(session_file, encoding="utf-8") as f:
data = json.load(f)
return BuildSession.from_dict(data)
@@ -174,7 +174,7 @@ def _load_active_session() -> BuildSession | None:
return None
try:
with open(ACTIVE_SESSION_FILE) as f:
with open(ACTIVE_SESSION_FILE, encoding="utf-8") as f:
session_id = f.read().strip()
if session_id:
@@ -228,7 +228,7 @@ def list_sessions() -> str:
if SESSIONS_DIR.exists():
for session_file in SESSIONS_DIR.glob("*.json"):
try:
with open(session_file) as f:
with open(session_file, encoding="utf-8") as f:
data = json.load(f)
sessions.append(
{
@@ -248,7 +248,7 @@ def list_sessions() -> str:
active_id = None
if ACTIVE_SESSION_FILE.exists():
try:
with open(ACTIVE_SESSION_FILE) as f:
with open(ACTIVE_SESSION_FILE, encoding="utf-8") as f:
active_id = f.read().strip()
except Exception:
pass
@@ -310,7 +310,7 @@ def delete_session(session_id: Annotated[str, "ID of the session to delete"]) ->
_session = None
if ACTIVE_SESSION_FILE.exists():
with open(ACTIVE_SESSION_FILE) as f:
with open(ACTIVE_SESSION_FILE, encoding="utf-8") as f:
active_id = f.read().strip()
if active_id == session_id:
ACTIVE_SESSION_FILE.unlink()
@@ -2894,6 +2894,7 @@ def run_tests(
try:
result = subprocess.run(
cmd,
encoding="utf-8",
capture_output=True,
text=True,
timeout=600, # 10 minute timeout
@@ -3085,6 +3086,7 @@ def debug_test(
try:
result = subprocess.run(
cmd,
encoding="utf-8",
capture_output=True,
text=True,
timeout=120, # 2 minute timeout for single test
+58 -5
View File
@@ -401,6 +401,43 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
)
serve_parser.set_defaults(func=cmd_serve)
# open command (serve + auto-open browser)
open_parser = subparsers.add_parser(
"open",
help="Start HTTP server and open dashboard in browser",
description="Shortcut for 'hive serve --open'. "
"Starts the HTTP server and opens the dashboard.",
)
open_parser.add_argument(
"--host",
type=str,
default="127.0.0.1",
help="Host to bind (default: 127.0.0.1)",
)
open_parser.add_argument(
"--port",
"-p",
type=int,
default=8787,
help="Port to listen on (default: 8787)",
)
open_parser.add_argument(
"--agent",
"-a",
type=str,
action="append",
default=[],
help="Agent path to preload (repeatable)",
)
open_parser.add_argument(
"--model",
"-m",
type=str,
default=None,
help="LLM model for preloaded agents",
)
open_parser.set_defaults(func=cmd_open)
def _load_resume_state(
agent_path: str, session_id: str, checkpoint_id: str | None = None
@@ -517,7 +554,7 @@ def cmd_run(args: argparse.Namespace) -> int:
return 1
elif args.input_file:
try:
with open(args.input_file) as f:
with open(args.input_file, encoding="utf-8") as f:
context = json.load(f)
except (FileNotFoundError, json.JSONDecodeError) as e:
print(f"Error reading input file: {e}", file=sys.stderr)
@@ -659,7 +696,7 @@ def cmd_run(args: argparse.Namespace) -> int:
# Output results
if args.output:
with open(args.output, "w") as f:
with open(args.output, "w", encoding="utf-8") as f:
json.dump(output, f, indent=2, default=str)
if not args.quiet:
print(f"Results written to {args.output}")
@@ -1517,7 +1554,7 @@ def _extract_python_agent_metadata(agent_path: Path) -> tuple[str, str]:
return fallback_name, fallback_desc
try:
with open(config_path) as f:
with open(config_path, encoding="utf-8") as f:
tree = ast.parse(f.read())
# Find AgentMetadata class definition
@@ -1932,10 +1969,18 @@ def _open_browser(url: str) -> None:
try:
if sys.platform == "darwin":
subprocess.Popen(["open", url], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
subprocess.Popen(
["open", url],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
encoding="utf-8",
)
elif sys.platform == "linux":
subprocess.Popen(
["xdg-open", url], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
["xdg-open", url],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
encoding="utf-8",
)
except Exception:
pass # Best-effort — don't crash if browser can't open
@@ -1980,12 +2025,14 @@ def _build_frontend() -> bool:
# Ensure deps are installed
subprocess.run(
["npm", "install", "--no-fund", "--no-audit"],
encoding="utf-8",
cwd=frontend_dir,
check=True,
capture_output=True,
)
subprocess.run(
["npm", "run", "build"],
encoding="utf-8",
cwd=frontend_dir,
check=True,
capture_output=True,
@@ -2074,3 +2121,9 @@ def cmd_serve(args: argparse.Namespace) -> int:
print("\nServer stopped.")
return 0
def cmd_open(args: argparse.Namespace) -> int:
"""Start the HTTP API server and open the dashboard in the browser."""
args.open = True
return cmd_serve(args)
+111 -21
View File
@@ -39,6 +39,7 @@ logger = logging.getLogger(__name__)
CLAUDE_CREDENTIALS_FILE = Path.home() / ".claude" / ".credentials.json"
CLAUDE_OAUTH_TOKEN_URL = "https://console.anthropic.com/v1/oauth/token"
CLAUDE_OAUTH_CLIENT_ID = "9d1c250a-e61b-44d9-88ed-5944d1962f5e"
CLAUDE_KEYCHAIN_SERVICE = "Claude Code-credentials"
# Buffer in seconds before token expiry to trigger a proactive refresh
_TOKEN_REFRESH_BUFFER_SECS = 300 # 5 minutes
@@ -51,6 +52,96 @@ CODEX_KEYCHAIN_SERVICE = "Codex Auth"
_CODEX_TOKEN_LIFETIME_SECS = 3600 # 1 hour (no explicit expiry field)
def _read_claude_keychain() -> dict | None:
"""Read Claude Code credentials from macOS Keychain.
Returns the parsed JSON dict, or None if not on macOS or entry missing.
"""
import getpass
import platform
import subprocess
if platform.system() != "Darwin":
return None
try:
account = getpass.getuser()
result = subprocess.run(
[
"security",
"find-generic-password",
"-s",
CLAUDE_KEYCHAIN_SERVICE,
"-a",
account,
"-w",
],
capture_output=True,
encoding="utf-8",
timeout=5,
)
if result.returncode != 0:
return None
raw = result.stdout.strip()
if not raw:
return None
return json.loads(raw)
except (subprocess.TimeoutExpired, json.JSONDecodeError, OSError) as exc:
logger.debug("Claude keychain read failed: %s", exc)
return None
def _save_claude_keychain(creds: dict) -> bool:
"""Write Claude Code credentials to macOS Keychain. Returns True on success."""
import getpass
import platform
import subprocess
if platform.system() != "Darwin":
return False
try:
account = getpass.getuser()
data = json.dumps(creds)
result = subprocess.run(
[
"security",
"add-generic-password",
"-U",
"-s",
CLAUDE_KEYCHAIN_SERVICE,
"-a",
account,
"-w",
data,
],
capture_output=True,
timeout=5,
)
return result.returncode == 0
except (subprocess.TimeoutExpired, OSError) as exc:
logger.debug("Claude keychain write failed: %s", exc)
return False
def _read_claude_credentials() -> dict | None:
"""Read Claude Code credentials from Keychain (macOS) or file (Linux/Windows)."""
# Try macOS Keychain first
creds = _read_claude_keychain()
if creds:
return creds
# Fall back to file
if not CLAUDE_CREDENTIALS_FILE.exists():
return None
try:
with open(CLAUDE_CREDENTIALS_FILE, encoding="utf-8") as f:
return json.load(f)
except (json.JSONDecodeError, OSError):
return None
def _refresh_claude_code_token(refresh_token: str) -> dict | None:
"""Refresh the Claude Code OAuth token using the refresh token.
@@ -89,16 +180,14 @@ def _refresh_claude_code_token(refresh_token: str) -> dict | None:
def _save_refreshed_credentials(token_data: dict) -> None:
"""Write refreshed token data back to ~/.claude/.credentials.json."""
"""Write refreshed token data back to Keychain (macOS) or credentials file."""
import time
if not CLAUDE_CREDENTIALS_FILE.exists():
creds = _read_claude_credentials()
if not creds:
return
try:
with open(CLAUDE_CREDENTIALS_FILE) as f:
creds = json.load(f)
oauth = creds.get("claudeAiOauth", {})
oauth["accessToken"] = token_data["access_token"]
if "refresh_token" in token_data:
@@ -107,9 +196,15 @@ def _save_refreshed_credentials(token_data: dict) -> None:
oauth["expiresAt"] = int((time.time() + token_data["expires_in"]) * 1000)
creds["claudeAiOauth"] = oauth
with open(CLAUDE_CREDENTIALS_FILE, "w") as f:
json.dump(creds, f, indent=2)
logger.debug("Claude Code credentials refreshed successfully")
# Try Keychain first (macOS), fall back to file
if _save_claude_keychain(creds):
logger.debug("Claude Code credentials refreshed in Keychain")
return
if CLAUDE_CREDENTIALS_FILE.exists():
with open(CLAUDE_CREDENTIALS_FILE, "w", encoding="utf-8") as f:
json.dump(creds, f, indent=2)
logger.debug("Claude Code credentials refreshed in file")
except (json.JSONDecodeError, OSError, KeyError) as exc:
logger.debug("Failed to save refreshed credentials: %s", exc)
@@ -117,8 +212,8 @@ def _save_refreshed_credentials(token_data: dict) -> None:
def get_claude_code_token() -> str | None:
"""Get the OAuth token from Claude Code subscription with auto-refresh.
Reads from ~/.claude/.credentials.json which is created by the
Claude Code CLI when users authenticate with their subscription.
Reads from macOS Keychain (on Darwin) or ~/.claude/.credentials.json
(on Linux/Windows), as created by the Claude Code CLI.
If the token is expired or close to expiry, attempts an automatic
refresh using the stored refresh token.
@@ -128,13 +223,8 @@ def get_claude_code_token() -> str | None:
"""
import time
if not CLAUDE_CREDENTIALS_FILE.exists():
return None
try:
with open(CLAUDE_CREDENTIALS_FILE) as f:
creds = json.load(f)
except (json.JSONDecodeError, OSError):
creds = _read_claude_credentials()
if not creds:
return None
oauth = creds.get("claudeAiOauth", {})
@@ -212,7 +302,7 @@ def _read_codex_keychain() -> dict | None:
"-w",
],
capture_output=True,
text=True,
encoding="utf-8",
timeout=5,
)
if result.returncode != 0:
@@ -231,7 +321,7 @@ def _read_codex_auth_file() -> dict | None:
if not CODEX_AUTH_FILE.exists():
return None
try:
with open(CODEX_AUTH_FILE) as f:
with open(CODEX_AUTH_FILE, encoding="utf-8") as f:
return json.load(f)
except (json.JSONDecodeError, OSError):
return None
@@ -324,7 +414,7 @@ def _save_refreshed_codex_credentials(auth_data: dict, token_data: dict) -> None
CODEX_AUTH_FILE.parent.mkdir(parents=True, exist_ok=True, mode=0o700)
fd = os.open(CODEX_AUTH_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "w") as f:
with os.fdopen(fd, "w", encoding="utf-8") as f:
json.dump(auth_data, f, indent=2)
logger.debug("Codex credentials refreshed successfully")
except (OSError, KeyError) as exc:
@@ -869,7 +959,7 @@ class AgentRunner:
if not agent_json_path.exists():
raise FileNotFoundError(f"No agent.py or agent.json found in {agent_path}")
with open(agent_json_path) as f:
with open(agent_json_path, encoding="utf-8") as f:
graph, goal = load_agent_export(f.read())
return cls(
+1 -1
View File
@@ -340,7 +340,7 @@ class ToolRegistry:
self._mcp_config_path = Path(config_path)
try:
with open(config_path) as f:
with open(config_path, encoding="utf-8") as f:
config = json.load(f)
except Exception as e:
logger.warning(f"Failed to load MCP config from {config_path}: {e}")
+2 -2
View File
@@ -270,10 +270,10 @@ def _edit_test_code(code: str) -> str:
try:
# Open editor
subprocess.run([editor, temp_path], check=True)
subprocess.run([editor, temp_path], check=True, encoding="utf-8")
# Read edited code
with open(temp_path) as f:
with open(temp_path, encoding="utf-8") as f:
return f.read()
except subprocess.CalledProcessError:
print("Editor failed, keeping original code")
+2
View File
@@ -190,6 +190,7 @@ def cmd_test_run(args: argparse.Namespace) -> int:
try:
result = subprocess.run(
cmd,
encoding="utf-8",
env=env,
timeout=600, # 10 minute timeout
)
@@ -248,6 +249,7 @@ def cmd_test_debug(args: argparse.Namespace) -> int:
try:
result = subprocess.run(
cmd,
encoding="utf-8",
env=env,
timeout=120, # 2 minute timeout for single test
)
+1 -1
View File
@@ -256,7 +256,7 @@ class AdenTUI(App):
"""Override to use native `open` for file:// URLs on macOS."""
if url.startswith("file://") and platform.system() == "Darwin":
path = url.removeprefix("file://")
subprocess.Popen(["open", path])
subprocess.Popen(["open", path], encoding="utf-8")
else:
super().open_url(url, new_tab=new_tab)
+6 -6
View File
@@ -488,7 +488,7 @@ class ChatRepl(Vertical):
if not state_file.exists():
continue
with open(state_file) as f:
with open(state_file, encoding="utf-8") as f:
state = json.load(f)
status = state.get("status", "").lower()
@@ -547,7 +547,7 @@ class ChatRepl(Vertical):
# Read session state
try:
with open(state_file) as f:
with open(state_file, encoding="utf-8") as f:
state = json.load(f)
# Track this session for /resume <number> lookup
@@ -599,7 +599,7 @@ class ChatRepl(Vertical):
try:
import json
with open(state_file) as f:
with open(state_file, encoding="utf-8") as f:
state = json.load(f)
# Basic info
@@ -640,7 +640,7 @@ class ChatRepl(Vertical):
# Load and show checkpoints
for i, cp_file in enumerate(checkpoint_files[-5:], 1): # Last 5
try:
with open(cp_file) as f:
with open(cp_file, encoding="utf-8") as f:
cp_data = json.load(f)
cp_id = cp_data.get("checkpoint_id", cp_file.stem)
@@ -687,7 +687,7 @@ class ChatRepl(Vertical):
import json
with open(state_file) as f:
with open(state_file, encoding="utf-8") as f:
state = json.load(f)
# Resume from session state (not checkpoint)
@@ -1102,7 +1102,7 @@ class ChatRepl(Vertical):
continue
try:
with open(state_file) as f:
with open(state_file, encoding="utf-8") as f:
state = json.load(f)
status = state.get("status", "").lower()
@@ -38,6 +38,7 @@ def _linux_file_dialog() -> subprocess.CompletedProcess | None:
"--title=Select a PDF file",
"--file-filter=PDF files (*.pdf)|*.pdf",
],
encoding="utf-8",
capture_output=True,
text=True,
timeout=300,
@@ -54,6 +55,7 @@ def _linux_file_dialog() -> subprocess.CompletedProcess | None:
".",
"PDF files (*.pdf)",
],
encoding="utf-8",
capture_output=True,
text=True,
timeout=300,
@@ -79,6 +81,7 @@ def _pick_pdf_subprocess() -> Path | None:
'POSIX path of (choose file of type {"com.adobe.pdf"} '
'with prompt "Select a PDF file")',
],
encoding="utf-8",
capture_output=True,
text=True,
timeout=300,
@@ -93,6 +96,7 @@ def _pick_pdf_subprocess() -> Path | None:
)
result = subprocess.run(
["powershell", "-NoProfile", "-Command", ps_script],
encoding="utf-8",
capture_output=True,
text=True,
timeout=300,
@@ -199,10 +199,11 @@ def _copy_to_clipboard(text: str) -> None:
"""Copy text to system clipboard using platform-native tools."""
try:
if sys.platform == "darwin":
subprocess.run(["pbcopy"], input=text.encode(), check=True, timeout=5)
subprocess.run(["pbcopy"], encoding="utf-8", input=text.encode(), check=True, timeout=5)
elif sys.platform == "win32":
subprocess.run(
["clip.exe"],
encoding="utf-8",
input=text.encode("utf-16le"),
check=True,
timeout=5,
@@ -211,6 +212,7 @@ def _copy_to_clipboard(text: str) -> None:
try:
subprocess.run(
["xclip", "-selection", "clipboard"],
encoding="utf-8",
input=text.encode(),
check=True,
timeout=5,
@@ -218,6 +220,7 @@ def _copy_to_clipboard(text: str) -> None:
except (subprocess.SubprocessError, FileNotFoundError):
subprocess.run(
["xsel", "--clipboard", "--input"],
encoding="utf-8",
input=text.encode(),
check=True,
timeout=5,
+10 -3
View File
@@ -53,7 +53,13 @@ def log_error(message: str):
def run_command(cmd: list, error_msg: str) -> bool:
"""Run a command and return success status."""
try:
subprocess.run(cmd, check=True, capture_output=True, text=True)
subprocess.run(
cmd,
check=True,
capture_output=True,
text=True,
encoding="utf-8",
)
return True
except subprocess.CalledProcessError as e:
log_error(error_msg)
@@ -97,7 +103,7 @@ def main():
if mcp_config_path.exists():
log_success("MCP configuration found at .mcp.json")
logger.info("Configuration:")
with open(mcp_config_path) as f:
with open(mcp_config_path, encoding="utf-8") as f:
config = json.load(f)
logger.info(json.dumps(config, indent=2))
else:
@@ -114,7 +120,7 @@ def main():
}
}
with open(mcp_config_path, "w") as f:
with open(mcp_config_path, "w", encoding="utf-8") as f:
json.dump(config, f, indent=2)
log_success("Created .mcp.json")
@@ -129,6 +135,7 @@ def main():
check=True,
capture_output=True,
text=True,
encoding="utf-8",
)
log_success("MCP server module verified")
except subprocess.CalledProcessError as e:
+5
View File
@@ -68,6 +68,7 @@ class TestFrameworkModule:
[sys.executable, "-m", "framework", "--help"],
capture_output=True,
text=True,
encoding="utf-8",
cwd=str(project_root / "core"),
)
assert result.returncode == 0
@@ -79,6 +80,7 @@ class TestFrameworkModule:
[sys.executable, "-m", "framework", "list", "--help"],
capture_output=True,
text=True,
encoding="utf-8",
cwd=str(project_root / "core"),
)
assert result.returncode == 0
@@ -104,6 +106,7 @@ class TestHiveEntryPoint:
["hive", "--help"],
capture_output=True,
text=True,
encoding="utf-8",
)
assert result.returncode == 0
assert "run" in result.stdout.lower()
@@ -115,6 +118,7 @@ class TestHiveEntryPoint:
["hive", "list", "--help"],
capture_output=True,
text=True,
encoding="utf-8",
)
assert result.returncode == 0
@@ -124,5 +128,6 @@ class TestHiveEntryPoint:
["hive", "run", "nonexistent_agent_xyz"],
capture_output=True,
text=True,
encoding="utf-8",
)
assert result.returncode != 0
+2 -2
View File
@@ -232,7 +232,7 @@ async def test_shared_session_reuses_directory_and_memory(tmp_path):
# Verify primary session's state.json exists and has the primary entry_point
primary_state_path = tmp_path / "sessions" / primary_exec_id / "state.json"
assert primary_state_path.exists()
primary_state = json.loads(primary_state_path.read_text())
primary_state = json.loads(primary_state_path.read_text(encoding="utf-8"))
assert primary_state["entry_point"] == "primary"
# Async stream — simulates a webhook entry point sharing the session
@@ -275,7 +275,7 @@ async def test_shared_session_reuses_directory_and_memory(tmp_path):
# State.json should NOT have been overwritten by the async execution
# (it should still show the primary entry point)
final_state = json.loads(primary_state_path.read_text())
final_state = json.loads(primary_state_path.read_text(encoding="utf-8"))
assert final_state["entry_point"] == "primary"
# Verify only ONE session directory exists (not two)
+2 -2
View File
@@ -184,7 +184,7 @@ class TestPathTraversalWithActualFiles:
# Create a secret file outside storage
secret_file = tmpdir_path / "secret.txt"
secret_file.write_text("SENSITIVE_DATA")
secret_file.write_text("SENSITIVE_DATA", encoding="utf-8")
storage = FileStorage(storage_dir)
@@ -193,7 +193,7 @@ class TestPathTraversalWithActualFiles:
storage.get_runs_by_goal("../secret")
# Verify the secret file was not accessed (still contains original data)
assert secret_file.read_text() == "SENSITIVE_DATA"
assert secret_file.read_text(encoding="utf-8") == "SENSITIVE_DATA"
def test_cannot_write_outside_storage(self):
"""Verify that we can't write files outside storage directory."""
+5 -2
View File
@@ -353,7 +353,9 @@ class TestRuntimeLogger:
# Verify the file exists and has one line
jsonl_path = tmp_path / "logs" / "sessions" / run_id / "logs" / "tool_logs.jsonl"
assert jsonl_path.exists()
lines = [line for line in jsonl_path.read_text().strip().split("\n") if line]
lines = [
line for line in jsonl_path.read_text(encoding="utf-8").strip().split("\n") if line
]
assert len(lines) == 1
data = json.loads(lines[0])
@@ -376,7 +378,8 @@ class TestRuntimeLogger:
jsonl_path = tmp_path / "logs" / "sessions" / run_id / "logs" / "details.jsonl"
assert jsonl_path.exists()
lines = [line for line in jsonl_path.read_text().strip().split("\n") if line]
content = jsonl_path.read_text(encoding="utf-8").strip()
lines = [line for line in content.split("\n") if line]
assert len(lines) == 1
data = json.loads(lines[0])
+1 -1
View File
@@ -98,7 +98,7 @@ class TestFileStorageRunOperations:
assert run_file.exists()
# Verify it's valid JSON
with open(run_file) as f:
with open(run_file, encoding="utf-8") as f:
data = json.load(f)
assert data["id"] == "my_run"
+14 -3
View File
@@ -71,6 +71,7 @@ def main():
capture_output=True,
text=True,
check=True,
encoding="utf-8",
)
framework_path = result.stdout.strip()
success(f"installed at {framework_path}")
@@ -84,7 +85,12 @@ def main():
missing_deps = []
for dep in ["mcp", "fastmcp"]:
try:
subprocess.run([sys.executable, "-c", f"import {dep}"], capture_output=True, check=True)
subprocess.run(
[sys.executable, "-c", f"import {dep}"],
capture_output=True,
check=True,
encoding="utf-8",
)
except subprocess.CalledProcessError:
missing_deps.append(dep)
@@ -103,6 +109,7 @@ def main():
capture_output=True,
text=True,
check=True,
encoding="utf-8",
)
success("loads successfully")
except subprocess.CalledProcessError as e:
@@ -115,7 +122,7 @@ def main():
mcp_config = script_dir / ".mcp.json"
if mcp_config.exists():
try:
with open(mcp_config) as f:
with open(mcp_config, encoding="utf-8") as f:
config = json.load(f)
if "mcpServers" in config and "agent-builder" in config["mcpServers"]:
@@ -149,7 +156,10 @@ def main():
for module in modules_to_check:
try:
subprocess.run(
[sys.executable, "-c", f"import {module}"], capture_output=True, check=True
[sys.executable, "-c", f"import {module}"],
capture_output=True,
check=True,
encoding="utf-8",
)
except subprocess.CalledProcessError:
failed_modules.append(module)
@@ -174,6 +184,7 @@ def main():
text=True,
check=True,
timeout=5,
encoding="utf-8",
)
if "OK" in result.stdout:
success("server can start")
+19 -2
View File
@@ -27,8 +27,22 @@ uv run python -c "import framework; import aden_tools; print('✓ Setup complete
## Building Your First Agent
Agents are not included by default in a fresh clone.
Agents are created with Claude Code or manually in the exports/ directory.
Until an agent exists, agent validation and run commands will fail.
### Option 1: Using Claude Code Skills (Recommended)
This is the recommended way to create your first agent.
**Requirements**
- Anthropic (Claude) API access
- Claude Code CLI installed
- Unix-based shell (macOS, Linux, or Windows via WSL)
```bash
# Setup already done via quickstart.sh above
@@ -120,7 +134,10 @@ hive/
## Running an Agent
```bash
# Browse and run agents interactively (Recommended)
# Launch the web dashboard in your browser
hive open
# Browse and run agents in terminal
hive tui
# Run a specific agent
@@ -164,7 +181,7 @@ PYTHONPATH=exports uv run python -m my_agent test --type success
## Next Steps
1. **TUI Dashboard**: Run `hive tui` to explore agents interactively
1. **Dashboard**: Run `hive open` to launch the web dashboard, or `hive tui` for the terminal UI
2. **Detailed Setup**: See [environment-setup.md](./environment-setup.md)
3. **Developer Guide**: See [developer-guide.md](./developer-guide.md)
4. **Build Agents**: Use `/hive` skill in Claude Code
+246 -56
View File
@@ -408,6 +408,58 @@ Write-Ok "uv detected: $uvVersion"
Write-Host ""
# Check for Node.js (needed for frontend dashboard)
function Install-NodeViaFnm {
<#
.SYNOPSIS
Install Node.js 20 via fnm (Fast Node Manager) - mirrors nvm approach in quickstart.sh
#>
$fnmCmd = Get-Command fnm -ErrorAction SilentlyContinue
if (-not $fnmCmd) {
$fnmDir = Join-Path $env:LOCALAPPDATA "fnm"
$fnmExe = Join-Path $fnmDir "fnm.exe"
if (-not (Test-Path $fnmExe)) {
try {
Write-Host " Downloading fnm (Fast Node Manager)..." -ForegroundColor DarkGray
$zipUrl = "https://github.com/Schniz/fnm/releases/latest/download/fnm-windows.zip"
$zipPath = Join-Path $env:TEMP "fnm-install.zip"
Invoke-WebRequest -Uri $zipUrl -OutFile $zipPath -UseBasicParsing -ErrorAction Stop
if (-not (Test-Path $fnmDir)) { New-Item -ItemType Directory -Path $fnmDir -Force | Out-Null }
Expand-Archive -Path $zipPath -DestinationPath $fnmDir -Force
Remove-Item $zipPath -Force -ErrorAction SilentlyContinue
} catch {
Write-Fail "fnm download failed"
Write-Host " Install Node.js 20+ manually from https://nodejs.org" -ForegroundColor DarkGray
return $false
}
}
if (Test-Path (Join-Path $fnmDir "fnm.exe")) {
$env:PATH = "$fnmDir;$env:PATH"
} else {
Write-Fail "fnm binary not found after download"
Write-Host " Install Node.js 20+ manually from https://nodejs.org" -ForegroundColor DarkGray
return $false
}
}
try {
$null = & fnm install 20 2>&1
if ($LASTEXITCODE -ne 0) { throw "fnm install 20 exited with code $LASTEXITCODE" }
& fnm env --use-on-cd --shell powershell | Out-String | Invoke-Expression
$null = & fnm use 20 2>&1
$testNode = Get-Command node -ErrorAction SilentlyContinue
if ($testNode) {
$ver = & node --version 2>$null
Write-Ok "Node.js $ver installed via fnm"
return $true
}
throw "node not found after fnm install"
} catch {
Write-Fail "Node.js installation failed"
Write-Host " Install manually from https://nodejs.org" -ForegroundColor DarkGray
return $false
}
}
$NodeAvailable = $false
$nodeCmd = Get-Command node -ErrorAction SilentlyContinue
if ($nodeCmd) {
@@ -419,12 +471,13 @@ if ($nodeCmd) {
$NodeAvailable = $true
} else {
Write-Warn "Node.js $nodeVersion found (20+ required for frontend dashboard)"
Write-Host " Install from https://nodejs.org" -ForegroundColor DarkGray
Write-Host " Installing Node.js 20 via fnm..." -ForegroundColor Yellow
$NodeAvailable = Install-NodeViaFnm
}
}
} else {
Write-Warn "Node.js not found (optional, needed for web dashboard)"
Write-Host " Install from https://nodejs.org" -ForegroundColor DarkGray
Write-Warn "Node.js not found. Installing via fnm..."
$NodeAvailable = Install-NodeViaFnm
}
Write-Host ""
@@ -736,8 +789,8 @@ $ProviderMap = [ordered]@{
}
$DefaultModels = @{
anthropic = "claude-opus-4-6"
openai = "gpt-5.2"
anthropic = "claude-haiku-4-5-20251001"
openai = "gpt-5-mini"
gemini = "gemini-3-flash-preview"
groq = "moonshotai/kimi-k2-instruct-0905"
cerebras = "zai-glm-4.7"
@@ -749,14 +802,14 @@ $DefaultModels = @{
# Model choices: array of hashtables per provider
$ModelChoices = @{
anthropic = @(
@{ Id = "claude-opus-4-6"; Label = "Opus 4.6 - Most capable (recommended)"; MaxTokens = 32768 },
@{ Id = "claude-sonnet-4-5-20250929"; Label = "Sonnet 4.5 - Best balance"; MaxTokens = 16384 },
@{ Id = "claude-sonnet-4-20250514"; Label = "Sonnet 4 - Fast + capable"; MaxTokens = 8192 },
@{ Id = "claude-haiku-4-5-20251001"; Label = "Haiku 4.5 - Fast + cheap"; MaxTokens = 8192 }
@{ Id = "claude-haiku-4-5-20251001"; Label = "Haiku 4.5 - Fast + cheap (recommended)"; MaxTokens = 8192 },
@{ Id = "claude-sonnet-4-20250514"; Label = "Sonnet 4 - Fast + capable"; MaxTokens = 8192 },
@{ Id = "claude-sonnet-4-5-20250929"; Label = "Sonnet 4.5 - Best balance"; MaxTokens = 16384 },
@{ Id = "claude-opus-4-6"; Label = "Opus 4.6 - Most capable"; MaxTokens = 32768 }
)
openai = @(
@{ Id = "gpt-5.2"; Label = "GPT-5.2 - Most capable (recommended)"; MaxTokens = 16384 },
@{ Id = "gpt-5-mini"; Label = "GPT-5 Mini - Fast + cheap"; MaxTokens = 16384 }
@{ Id = "gpt-5-mini"; Label = "GPT-5 Mini - Fast + cheap (recommended)"; MaxTokens = 16384 },
@{ Id = "gpt-5.2"; Label = "GPT-5.2 - Most capable"; MaxTokens = 16384 }
)
gemini = @(
@{ Id = "gemini-3-flash-preview"; Label = "Gemini 3 Flash - Fast (recommended)"; MaxTokens = 8192 },
@@ -783,6 +836,17 @@ function Get-ModelSelection {
return @{ Model = $choices[0].Id; MaxTokens = $choices[0].MaxTokens }
}
# Find default index from previous model (if same provider)
$defaultIdx = "1"
if ($PrevModel -and $PrevProvider -eq $ProviderId) {
for ($j = 0; $j -lt $choices.Count; $j++) {
if ($choices[$j].Id -eq $PrevModel) {
$defaultIdx = [string]($j + 1)
break
}
}
}
Write-Host ""
Write-Color -Text "Select a model:" -Color White
Write-Host ""
@@ -794,8 +858,8 @@ function Get-ModelSelection {
Write-Host ""
while ($true) {
$raw = Read-Host "Enter choice [1]"
if ([string]::IsNullOrWhiteSpace($raw)) { $raw = "1" }
$raw = Read-Host "Enter choice [$defaultIdx]"
if ([string]::IsNullOrWhiteSpace($raw)) { $raw = $defaultIdx }
if ($raw -match '^\d+$') {
$num = [int]$raw
if ($num -ge 1 -and $num -le $choices.Count) {
@@ -851,6 +915,60 @@ $ProviderMenuUrls = @(
"https://cloud.cerebras.ai/"
)
# ── Read previous configuration (if any) ──────────────────────
$PrevProvider = ""
$PrevModel = ""
$PrevEnvVar = ""
$PrevSubMode = ""
if (Test-Path $HiveConfigFile) {
try {
$prevConfig = Get-Content -Path $HiveConfigFile -Raw | ConvertFrom-Json
$prevLlm = $prevConfig.llm
if ($prevLlm) {
$PrevProvider = if ($prevLlm.provider) { $prevLlm.provider } else { "" }
$PrevModel = if ($prevLlm.model) { $prevLlm.model } else { "" }
$PrevEnvVar = if ($prevLlm.api_key_env_var) { $prevLlm.api_key_env_var } else { "" }
if ($prevLlm.use_claude_code_subscription) { $PrevSubMode = "claude_code" }
elseif ($prevLlm.use_codex_subscription) { $PrevSubMode = "codex" }
elseif ($prevLlm.api_base -and $prevLlm.api_base -like "*api.z.ai*") { $PrevSubMode = "zai_code" }
}
} catch { }
}
# Compute default menu number (only if credential is still valid)
$DefaultChoice = ""
if ($PrevSubMode -or $PrevProvider) {
$prevCredValid = $false
switch ($PrevSubMode) {
"claude_code" { if ($ClaudeCredDetected) { $prevCredValid = $true } }
"zai_code" { if ($ZaiCredDetected) { $prevCredValid = $true } }
"codex" { if ($CodexCredDetected) { $prevCredValid = $true } }
default {
if ($PrevEnvVar) {
$envVal = [System.Environment]::GetEnvironmentVariable($PrevEnvVar, "Process")
if (-not $envVal) { $envVal = [System.Environment]::GetEnvironmentVariable($PrevEnvVar, "User") }
if ($envVal) { $prevCredValid = $true }
}
}
}
if ($prevCredValid) {
switch ($PrevSubMode) {
"claude_code" { $DefaultChoice = "1" }
"zai_code" { $DefaultChoice = "2" }
"codex" { $DefaultChoice = "3" }
}
if (-not $DefaultChoice) {
switch ($PrevProvider) {
"anthropic" { $DefaultChoice = "4" }
"openai" { $DefaultChoice = "5" }
"gemini" { $DefaultChoice = "6" }
"groq" { $DefaultChoice = "7" }
"cerebras" { $DefaultChoice = "8" }
}
}
}
}
# ── Show unified provider selection menu ─────────────────────
Write-Color -Text "Select your default LLM provider:" -Color White
Write-Host ""
@@ -896,8 +1014,18 @@ Write-Color -Text "9" -Color Cyan -NoNewline
Write-Host ") Skip for now"
Write-Host ""
if ($DefaultChoice) {
Write-Color -Text " Previously configured: $PrevProvider/$PrevModel. Press Enter to keep." -Color DarkGray
Write-Host ""
}
while ($true) {
$raw = Read-Host "Enter choice (1-9)"
if ($DefaultChoice) {
$raw = Read-Host "Enter choice (1-9) [$DefaultChoice]"
if ([string]::IsNullOrWhiteSpace($raw)) { $raw = $DefaultChoice }
} else {
$raw = Read-Host "Enter choice (1-9)"
}
if ($raw -match '^\d+$') {
$num = [int]$raw
if ($num -ge 1 -and $num -le 9) { break }
@@ -974,28 +1102,68 @@ switch ($num) {
$providerName = $ProviderMenuNames[$provIdx] -replace ' - .*', '' # strip description
$signupUrl = $ProviderMenuUrls[$provIdx]
# Check if key is already set
$existingKey = [System.Environment]::GetEnvironmentVariable($SelectedEnvVar, "User")
if (-not $existingKey) { $existingKey = [System.Environment]::GetEnvironmentVariable($SelectedEnvVar, "Process") }
if (-not $existingKey) {
Write-Host ""
Write-Host "Get your API key from: " -NoNewline
Write-Color -Text $signupUrl -Color Cyan
Write-Host ""
$apiKey = Read-Host "Paste your $providerName API key (or press Enter to skip)"
# Prompt for key (allow replacement if already set) with verification + retry
while ($true) {
$existingKey = [System.Environment]::GetEnvironmentVariable($SelectedEnvVar, "User")
if (-not $existingKey) { $existingKey = [System.Environment]::GetEnvironmentVariable($SelectedEnvVar, "Process") }
if ($existingKey) {
$masked = $existingKey.Substring(0, [Math]::Min(4, $existingKey.Length)) + "..." + $existingKey.Substring([Math]::Max(0, $existingKey.Length - 4))
Write-Host ""
Write-Color -Text " $([char]0x2B22) Current key: $masked" -Color Green
$apiKey = Read-Host " Press Enter to keep, or paste a new key to replace"
} else {
Write-Host ""
Write-Host "Get your API key from: " -NoNewline
Write-Color -Text $signupUrl -Color Cyan
Write-Host ""
$apiKey = Read-Host "Paste your $providerName API key (or press Enter to skip)"
}
if ($apiKey) {
[System.Environment]::SetEnvironmentVariable($SelectedEnvVar, $apiKey, "User")
Set-Item -Path "Env:\$SelectedEnvVar" -Value $apiKey
Write-Host ""
Write-Ok "API key saved as User environment variable: $SelectedEnvVar"
Write-Color -Text " (Persisted for all future sessions)" -Color DarkGray
} else {
# Health check the new key
Write-Host " Verifying API key... " -NoNewline
try {
$hcResult = & uv run python (Join-Path $ScriptDir "scripts/check_llm_key.py") $SelectedProviderId $apiKey 2>$null
$hcJson = $hcResult | ConvertFrom-Json
if ($hcJson.valid -eq $true) {
Write-Color -Text "ok" -Color Green
break
} elseif ($hcJson.valid -eq $false) {
Write-Color -Text "failed" -Color Red
Write-Warn $hcJson.message
# Undo the save so user can retry cleanly
[System.Environment]::SetEnvironmentVariable($SelectedEnvVar, $null, "User")
Remove-Item -Path "Env:\$SelectedEnvVar" -ErrorAction SilentlyContinue
Write-Host ""
Read-Host " Press Enter to try again"
# loop back to key prompt
} else {
Write-Color -Text "--" -Color Yellow
Write-Color -Text " Could not verify key (network issue). The key has been saved." -Color DarkGray
break
}
} catch {
Write-Color -Text "--" -Color Yellow
Write-Color -Text " Could not verify key (network issue). The key has been saved." -Color DarkGray
break
}
} elseif (-not $existingKey) {
# No existing key and user skipped
Write-Host ""
Write-Warn "Skipped. Set the environment variable manually when ready:"
Write-Host " [System.Environment]::SetEnvironmentVariable('$SelectedEnvVar', 'your-key', 'User')"
$SelectedEnvVar = ""
$SelectedProviderId = ""
break
} else {
# User pressed Enter with existing key — keep it
break
}
}
}
@@ -1011,26 +1179,67 @@ switch ($num) {
}
}
# For ZAI subscription: prompt for API key if not already set
# For ZAI subscription: prompt for API key (allow replacement if already set) with verification + retry
if ($SubscriptionMode -eq "zai_code") {
$existingZai = [System.Environment]::GetEnvironmentVariable("ZAI_API_KEY", "User")
if (-not $existingZai) { $existingZai = $env:ZAI_API_KEY }
if (-not $existingZai) {
Write-Host ""
$apiKey = Read-Host "Paste your ZAI API key (or press Enter to skip)"
while ($true) {
$existingZai = [System.Environment]::GetEnvironmentVariable("ZAI_API_KEY", "User")
if (-not $existingZai) { $existingZai = $env:ZAI_API_KEY }
if ($existingZai) {
$masked = $existingZai.Substring(0, [Math]::Min(4, $existingZai.Length)) + "..." + $existingZai.Substring([Math]::Max(0, $existingZai.Length - 4))
Write-Host ""
Write-Color -Text " $([char]0x2B22) Current ZAI key: $masked" -Color Green
$apiKey = Read-Host " Press Enter to keep, or paste a new key to replace"
} else {
Write-Host ""
$apiKey = Read-Host "Paste your ZAI API key (or press Enter to skip)"
}
if ($apiKey) {
[System.Environment]::SetEnvironmentVariable("ZAI_API_KEY", $apiKey, "User")
$env:ZAI_API_KEY = $apiKey
Write-Host ""
Write-Ok "ZAI API key saved as User environment variable"
} else {
# Health check the new key
Write-Host " Verifying ZAI API key... " -NoNewline
try {
$hcResult = & uv run python (Join-Path $ScriptDir "scripts/check_llm_key.py") "zai" $apiKey "https://api.z.ai/api/coding/paas/v4" 2>$null
$hcJson = $hcResult | ConvertFrom-Json
if ($hcJson.valid -eq $true) {
Write-Color -Text "ok" -Color Green
break
} elseif ($hcJson.valid -eq $false) {
Write-Color -Text "failed" -Color Red
Write-Warn $hcJson.message
# Undo the save so user can retry cleanly
[System.Environment]::SetEnvironmentVariable("ZAI_API_KEY", $null, "User")
Remove-Item -Path "Env:\ZAI_API_KEY" -ErrorAction SilentlyContinue
Write-Host ""
Read-Host " Press Enter to try again"
# loop back to key prompt
} else {
Write-Color -Text "--" -Color Yellow
Write-Color -Text " Could not verify key (network issue). The key has been saved." -Color DarkGray
break
}
} catch {
Write-Color -Text "--" -Color Yellow
Write-Color -Text " Could not verify key (network issue). The key has been saved." -Color DarkGray
break
}
} elseif (-not $existingZai) {
# No existing key and user skipped
Write-Host ""
Write-Warn "Skipped. Add your ZAI API key later:"
Write-Color -Text " [System.Environment]::SetEnvironmentVariable('ZAI_API_KEY', 'your-key', 'User')" -Color Cyan
$SelectedEnvVar = ""
$SelectedProviderId = ""
$SubscriptionMode = ""
break
} else {
# User pressed Enter with existing key — keep it
break
}
}
}
@@ -1081,37 +1290,18 @@ if ($SelectedProviderId) {
Write-Host ""
# ============================================================
# Step 5b: Browser Automation (GCU)
# Step 5b: Browser Automation (GCU) — always enabled
# ============================================================
Write-Host ""
Write-Color -Text "Enable browser automation?" -Color White
Write-Color -Text "This lets your agents control a real browser - navigate websites, fill forms," -Color DarkGray
Write-Color -Text "scrape dynamic pages, and interact with web UIs." -Color DarkGray
Write-Host ""
Write-Host " " -NoNewline; Write-Color -Text "1)" -Color Cyan -NoNewline; Write-Host " Yes"
Write-Host " " -NoNewline; Write-Color -Text "2)" -Color Cyan -NoNewline; Write-Host " No"
Write-Host ""
do {
$gcuChoice = Read-Host "Enter choice (1-2)"
} while ($gcuChoice -ne "1" -and $gcuChoice -ne "2")
$GcuEnabled = $false
if ($gcuChoice -eq "1") {
$GcuEnabled = $true
Write-Ok "Browser automation enabled"
} else {
Write-Color -Text " Browser automation skipped" -Color DarkGray
}
Write-Ok "Browser automation enabled"
# Patch gcu_enabled into configuration.json
if (Test-Path $HiveConfigFile) {
$existingConfig = Get-Content -Path $HiveConfigFile -Raw | ConvertFrom-Json
$existingConfig | Add-Member -NotePropertyName "gcu_enabled" -NotePropertyValue $GcuEnabled -Force
$existingConfig | Add-Member -NotePropertyName "gcu_enabled" -NotePropertyValue $true -Force
$existingConfig | ConvertTo-Json -Depth 4 | Set-Content -Path $HiveConfigFile -Encoding UTF8
} elseif ($GcuEnabled) {
# No config file yet (user skipped LLM provider) - create minimal one
} else {
if (-not (Test-Path $HiveConfigDir)) {
New-Item -ItemType Directory -Path $HiveConfigDir -Force | Out-Null
}
@@ -1425,7 +1615,7 @@ if ($FrontendBuilt) {
Write-Color -Text " Starting server on http://localhost:8787" -Color DarkGray
Write-Color -Text " Press Ctrl+C to stop" -Color DarkGray
Write-Host ""
& (Join-Path $ScriptDir "hive.ps1") serve --open
& (Join-Path $ScriptDir "hive.ps1") open
} else {
Write-Color -Text "═══════════════════════════════════════════════════════" -Color Yellow
Write-Host ""
+242 -98
View File
@@ -407,7 +407,7 @@ if [ "$USE_ASSOC_ARRAYS" = true ]; then
)
declare -A DEFAULT_MODELS=(
["anthropic"]="claude-haiku-4-5"
["anthropic"]="claude-haiku-4-5-20251001"
["openai"]="gpt-5-mini"
["gemini"]="gemini-3-flash-preview"
["groq"]="moonshotai/kimi-k2-instruct-0905"
@@ -420,12 +420,12 @@ if [ "$USE_ASSOC_ARRAYS" = true ]; then
# Model choices per provider: composite-key associative arrays
# Keys: "provider:index" -> value
declare -A MODEL_CHOICES_ID=(
["anthropic:0"]="claude-opus-4-6"
["anthropic:1"]="claude-sonnet-4-5-20250929"
["anthropic:2"]="claude-sonnet-4-20250514"
["anthropic:3"]="claude-haiku-4-5-20251001"
["openai:0"]="gpt-5.2"
["openai:1"]="gpt-5-mini"
["anthropic:0"]="claude-haiku-4-5-20251001"
["anthropic:1"]="claude-sonnet-4-20250514"
["anthropic:2"]="claude-sonnet-4-5-20250929"
["anthropic:3"]="claude-opus-4-6"
["openai:0"]="gpt-5-mini"
["openai:1"]="gpt-5.2"
["gemini:0"]="gemini-3-flash-preview"
["gemini:1"]="gemini-3.1-pro-preview"
["groq:0"]="moonshotai/kimi-k2-instruct-0905"
@@ -435,12 +435,12 @@ if [ "$USE_ASSOC_ARRAYS" = true ]; then
)
declare -A MODEL_CHOICES_LABEL=(
["anthropic:0"]="Opus 4.6 - Most capable (recommended)"
["anthropic:1"]="Sonnet 4.5 - Best balance"
["anthropic:2"]="Sonnet 4 - Fast + capable"
["anthropic:3"]="Haiku 4.5 - Fast + cheap"
["openai:0"]="GPT-5.2 - Most capable (recommended)"
["openai:1"]="GPT-5 Mini - Fast + cheap"
["anthropic:0"]="Haiku 4.5 - Fast + cheap (recommended)"
["anthropic:1"]="Sonnet 4 - Fast + capable"
["anthropic:2"]="Sonnet 4.5 - Best balance"
["anthropic:3"]="Opus 4.6 - Most capable"
["openai:0"]="GPT-5 Mini - Fast + cheap (recommended)"
["openai:1"]="GPT-5.2 - Most capable"
["gemini:0"]="Gemini 3 Flash - Fast (recommended)"
["gemini:1"]="Gemini 3.1 Pro - Best quality"
["groq:0"]="Kimi K2 - Best quality (recommended)"
@@ -450,10 +450,10 @@ if [ "$USE_ASSOC_ARRAYS" = true ]; then
)
declare -A MODEL_CHOICES_MAXTOKENS=(
["anthropic:0"]=32768
["anthropic:1"]=16384
["anthropic:2"]=8192
["anthropic:3"]=8192
["anthropic:0"]=8192
["anthropic:1"]=8192
["anthropic:2"]=16384
["anthropic:3"]=32768
["openai:0"]=16384
["openai:1"]=16384
["gemini:0"]=8192
@@ -508,7 +508,7 @@ else
# Default models by provider id (parallel arrays)
MODEL_PROVIDER_IDS=(anthropic openai gemini groq cerebras mistral together_ai deepseek)
MODEL_DEFAULTS=("claude-opus-4-6" "gpt-5.2" "gemini-3-flash-preview" "moonshotai/kimi-k2-instruct-0905" "zai-glm-4.7" "mistral-large-latest" "meta-llama/Llama-3.3-70B-Instruct-Turbo" "deepseek-chat")
MODEL_DEFAULTS=("claude-haiku-4-5-20251001" "gpt-5-mini" "gemini-3-flash-preview" "moonshotai/kimi-k2-instruct-0905" "zai-glm-4.7" "mistral-large-latest" "meta-llama/Llama-3.3-70B-Instruct-Turbo" "deepseek-chat")
# Helper: get provider display name for an env var
get_provider_name() {
@@ -552,9 +552,9 @@ else
# Model choices per provider - flat parallel arrays with provider offsets
# Provider order: anthropic(4), openai(2), gemini(2), groq(2), cerebras(2)
MC_PROVIDERS=(anthropic anthropic anthropic anthropic openai openai gemini gemini groq groq cerebras cerebras)
MC_IDS=("claude-opus-4-6" "claude-sonnet-4-5-20250929" "claude-sonnet-4-20250514" "claude-haiku-4-5-20251001" "gpt-5.2" "gpt-5-mini" "gemini-3-flash-preview" "gemini-3.1-pro-preview" "moonshotai/kimi-k2-instruct-0905" "openai/gpt-oss-120b" "zai-glm-4.7" "qwen3-235b-a22b-instruct-2507")
MC_LABELS=("Opus 4.6 - Most capable (recommended)" "Sonnet 4.5 - Best balance" "Sonnet 4 - Fast + capable" "Haiku 4.5 - Fast + cheap" "GPT-5.2 - Most capable (recommended)" "GPT-5 Mini - Fast + cheap" "Gemini 3 Flash - Fast (recommended)" "Gemini 3.1 Pro - Best quality" "Kimi K2 - Best quality (recommended)" "GPT-OSS 120B - Fast reasoning" "ZAI-GLM 4.7 - Best quality (recommended)" "Qwen3 235B - Frontier reasoning")
MC_MAXTOKENS=(32768 16384 8192 8192 16384 16384 8192 8192 8192 8192 8192 8192)
MC_IDS=("claude-haiku-4-5-20251001" "claude-sonnet-4-20250514" "claude-sonnet-4-5-20250929" "claude-opus-4-6" "gpt-5-mini" "gpt-5.2" "gemini-3-flash-preview" "gemini-3.1-pro-preview" "moonshotai/kimi-k2-instruct-0905" "openai/gpt-oss-120b" "zai-glm-4.7" "qwen3-235b-a22b-instruct-2507")
MC_LABELS=("Haiku 4.5 - Fast + cheap (recommended)" "Sonnet 4 - Fast + capable" "Sonnet 4.5 - Best balance" "Opus 4.6 - Most capable" "GPT-5 Mini - Fast + cheap (recommended)" "GPT-5.2 - Most capable" "Gemini 3 Flash - Fast (recommended)" "Gemini 3.1 Pro - Best quality" "Kimi K2 - Best quality (recommended)" "GPT-OSS 120B - Fast reasoning" "ZAI-GLM 4.7 - Best quality (recommended)" "Qwen3 235B - Frontier reasoning")
MC_MAXTOKENS=(8192 8192 16384 32768 16384 16384 8192 8192 8192 8192 8192 8192)
# Helper: get number of model choices for a provider
get_model_choice_count() {
@@ -687,6 +687,19 @@ prompt_model_selection() {
echo -e "${BOLD}Select a model:${NC}"
echo ""
# Find default index from previous model (if same provider)
local default_idx=""
if [ -n "$PREV_MODEL" ] && [ "$provider_id" = "$PREV_PROVIDER" ]; then
local j=0
while [ $j -lt "$count" ]; do
if [ "$(get_model_choice_id "$provider_id" "$j")" = "$PREV_MODEL" ]; then
default_idx=$((j + 1))
break
fi
j=$((j + 1))
done
fi
local i=0
while [ $i -lt "$count" ]; do
local label
@@ -701,7 +714,12 @@ prompt_model_selection() {
local choice
while true; do
read -r -p "Enter choice (1-$count): " choice || true
if [ -n "$default_idx" ]; then
read -r -p "Enter choice (1-$count) [$default_idx]: " choice || true
choice="${choice:-$default_idx}"
else
read -r -p "Enter choice (1-$count): " choice || true
fi
if [[ "$choice" =~ ^[0-9]+$ ]] && [ "$choice" -ge 1 ] && [ "$choice" -le "$count" ]; then
local idx=$((choice - 1))
SELECTED_MODEL="$(get_model_choice_id "$provider_id" "$idx")"
@@ -781,7 +799,9 @@ SUBSCRIPTION_MODE="" # "claude_code" | "codex" | "zai_code" | ""
# ── Credential detection (silent — just set flags) ───────────
CLAUDE_CRED_DETECTED=false
if [ -f "$HOME/.claude/.credentials.json" ]; then
if command -v security &>/dev/null && security find-generic-password -s "Claude Code-credentials" &>/dev/null 2>&1; then
CLAUDE_CRED_DETECTED=true
elif [ -f "$HOME/.claude/.credentials.json" ]; then
CLAUDE_CRED_DETECTED=true
fi
@@ -814,6 +834,65 @@ else
done
fi
# ── Read previous configuration (if any) ──────────────────────
PREV_PROVIDER=""
PREV_MODEL=""
PREV_ENV_VAR=""
PREV_SUB_MODE=""
if [ -f "$HIVE_CONFIG_FILE" ]; then
eval "$($PYTHON_CMD -c "
import json, sys
try:
with open('$HIVE_CONFIG_FILE') as f:
c = json.load(f)
llm = c.get('llm', {})
print(f'PREV_PROVIDER={llm.get(\"provider\", \"\")}')
print(f'PREV_MODEL={llm.get(\"model\", \"\")}')
print(f'PREV_ENV_VAR={llm.get(\"api_key_env_var\", \"\")}')
sub = ''
if llm.get('use_claude_code_subscription'): sub = 'claude_code'
elif llm.get('use_codex_subscription'): sub = 'codex'
elif 'api.z.ai' in llm.get('api_base', ''): sub = 'zai_code'
print(f'PREV_SUB_MODE={sub}')
except Exception:
pass
" 2>/dev/null)" || true
fi
# Compute default menu number from previous config (only if credential is still valid)
DEFAULT_CHOICE=""
if [ -n "$PREV_SUB_MODE" ] || [ -n "$PREV_PROVIDER" ]; then
PREV_CRED_VALID=false
case "$PREV_SUB_MODE" in
claude_code) [ "$CLAUDE_CRED_DETECTED" = true ] && PREV_CRED_VALID=true ;;
zai_code) [ "$ZAI_CRED_DETECTED" = true ] && PREV_CRED_VALID=true ;;
codex) [ "$CODEX_CRED_DETECTED" = true ] && PREV_CRED_VALID=true ;;
*)
# API key provider — check if the env var is set
if [ -n "$PREV_ENV_VAR" ] && [ -n "${!PREV_ENV_VAR}" ]; then
PREV_CRED_VALID=true
fi
;;
esac
if [ "$PREV_CRED_VALID" = true ]; then
case "$PREV_SUB_MODE" in
claude_code) DEFAULT_CHOICE=1 ;;
zai_code) DEFAULT_CHOICE=2 ;;
codex) DEFAULT_CHOICE=3 ;;
esac
if [ -z "$DEFAULT_CHOICE" ]; then
case "$PREV_PROVIDER" in
anthropic) DEFAULT_CHOICE=4 ;;
openai) DEFAULT_CHOICE=5 ;;
gemini) DEFAULT_CHOICE=6 ;;
groq) DEFAULT_CHOICE=7 ;;
cerebras) DEFAULT_CHOICE=8 ;;
esac
fi
fi
fi
# ── Show unified provider selection menu ─────────────────────
echo -e "${BOLD}Select your default LLM provider:${NC}"
echo ""
@@ -858,8 +937,18 @@ done
echo -e " ${CYAN}9)${NC} Skip for now"
echo ""
if [ -n "$DEFAULT_CHOICE" ]; then
echo -e " ${DIM}Previously configured: ${PREV_PROVIDER}/${PREV_MODEL}. Press Enter to keep.${NC}"
echo ""
fi
while true; do
read -r -p "Enter choice (1-9): " choice || true
if [ -n "$DEFAULT_CHOICE" ]; then
read -r -p "Enter choice (1-9) [$DEFAULT_CHOICE]: " choice || true
choice="${choice:-$DEFAULT_CHOICE}"
else
read -r -p "Enter choice (1-9): " choice || true
fi
if [[ "$choice" =~ ^[0-9]+$ ]] && [ "$choice" -ge 1 ] && [ "$choice" -le 9 ]; then
break
fi
@@ -968,48 +1057,132 @@ case $choice in
;;
esac
# For API-key providers: prompt for key if not already set
if [ -z "$SUBSCRIPTION_MODE" ] && [ -n "$SELECTED_ENV_VAR" ] && [ -z "${!SELECTED_ENV_VAR}" ]; then
echo ""
echo -e "Get your API key from: ${CYAN}$SIGNUP_URL${NC}"
echo ""
read -r -p "Paste your $PROVIDER_NAME API key (or press Enter to skip): " API_KEY
# For API-key providers: prompt for key (allow replacement if already set)
if [ -z "$SUBSCRIPTION_MODE" ] && [ -n "$SELECTED_ENV_VAR" ]; then
while true; do
CURRENT_KEY="${!SELECTED_ENV_VAR}"
if [ -n "$CURRENT_KEY" ]; then
# Key exists — offer to keep or replace
MASKED_KEY="${CURRENT_KEY:0:4}...${CURRENT_KEY: -4}"
echo ""
echo -e " ${GREEN}${NC} Current key: ${DIM}$MASKED_KEY${NC}"
read -r -p " Press Enter to keep, or paste a new key to replace: " API_KEY
else
# No key — prompt for one
echo ""
echo -e "Get your API key from: ${CYAN}$SIGNUP_URL${NC}"
echo ""
read -r -p "Paste your $PROVIDER_NAME API key (or press Enter to skip): " API_KEY
fi
if [ -n "$API_KEY" ]; then
echo "" >> "$SHELL_RC_FILE"
echo "# Hive Agent Framework - $PROVIDER_NAME API key" >> "$SHELL_RC_FILE"
echo "export $SELECTED_ENV_VAR=\"$API_KEY\"" >> "$SHELL_RC_FILE"
export "$SELECTED_ENV_VAR=$API_KEY"
echo ""
echo -e "${GREEN}${NC} API key saved to $SHELL_RC_FILE"
else
echo ""
echo -e "${YELLOW}Skipped.${NC} Add your API key to $SHELL_RC_FILE when ready."
SELECTED_ENV_VAR=""
SELECTED_PROVIDER_ID=""
fi
if [ -n "$API_KEY" ]; then
# Remove old export line(s) for this env var from shell rc, then append new
sed -i.bak "/^export ${SELECTED_ENV_VAR}=/d" "$SHELL_RC_FILE" && rm -f "${SHELL_RC_FILE}.bak"
echo "" >> "$SHELL_RC_FILE"
echo "# Hive Agent Framework - $PROVIDER_NAME API key" >> "$SHELL_RC_FILE"
echo "export $SELECTED_ENV_VAR=\"$API_KEY\"" >> "$SHELL_RC_FILE"
export "$SELECTED_ENV_VAR=$API_KEY"
echo ""
echo -e "${GREEN}${NC} API key saved to $SHELL_RC_FILE"
# Health check the new key
echo -n " Verifying API key... "
HC_RESULT=$(uv run python "$SCRIPT_DIR/scripts/check_llm_key.py" "$SELECTED_PROVIDER_ID" "$API_KEY" 2>/dev/null) || true
HC_VALID=$(echo "$HC_RESULT" | $PYTHON_CMD -c "import json,sys; print(json.loads(sys.stdin.read()).get('valid',''))" 2>/dev/null) || true
HC_MSG=$(echo "$HC_RESULT" | $PYTHON_CMD -c "import json,sys; print(json.loads(sys.stdin.read()).get('message',''))" 2>/dev/null) || true
if [ "$HC_VALID" = "True" ]; then
echo -e "${GREEN}ok${NC}"
break
elif [ "$HC_VALID" = "False" ]; then
echo -e "${RED}failed${NC}"
echo -e " ${YELLOW}$HC_MSG${NC}"
# Undo the save so the user can retry cleanly
sed -i.bak "/^export ${SELECTED_ENV_VAR}=/d" "$SHELL_RC_FILE" && rm -f "${SHELL_RC_FILE}.bak"
# Remove the comment line we just added
sed -i.bak "/^# Hive Agent Framework - $PROVIDER_NAME API key$/d" "$SHELL_RC_FILE" && rm -f "${SHELL_RC_FILE}.bak"
unset "$SELECTED_ENV_VAR"
echo ""
read -r -p " Press Enter to try again: " _
# Loop back to key prompt
else
echo -e "${YELLOW}--${NC}"
echo -e " ${DIM}Could not verify key (network issue). The key has been saved.${NC}"
break
fi
elif [ -z "$CURRENT_KEY" ]; then
# No existing key and user skipped — abort provider
echo ""
echo -e "${YELLOW}Skipped.${NC} Add your API key to $SHELL_RC_FILE when ready."
SELECTED_ENV_VAR=""
SELECTED_PROVIDER_ID=""
break
else
# User pressed Enter with existing key — keep it, proceed normally
break
fi
done
fi
# For ZAI subscription: always prompt for API key
# For ZAI subscription: prompt for API key (allow replacement if already set)
if [ "$SUBSCRIPTION_MODE" = "zai_code" ]; then
echo ""
read -r -p "Paste your ZAI API key (or press Enter to skip): " API_KEY
while true; do
if [ "$ZAI_CRED_DETECTED" = true ] && [ -n "$ZAI_API_KEY" ]; then
# Key exists — offer to keep or replace
MASKED_KEY="${ZAI_API_KEY:0:4}...${ZAI_API_KEY: -4}"
echo ""
echo -e " ${GREEN}${NC} Current ZAI key: ${DIM}$MASKED_KEY${NC}"
read -r -p " Press Enter to keep, or paste a new key to replace: " API_KEY
else
# No key — prompt for one
echo ""
read -r -p "Paste your ZAI API key (or press Enter to skip): " API_KEY
fi
if [ -n "$API_KEY" ]; then
echo "" >> "$SHELL_RC_FILE"
echo "# Hive Agent Framework - ZAI Code subscription API key" >> "$SHELL_RC_FILE"
echo "export ZAI_API_KEY=\"$API_KEY\"" >> "$SHELL_RC_FILE"
export ZAI_API_KEY="$API_KEY"
echo ""
echo -e "${GREEN}${NC} ZAI API key saved to $SHELL_RC_FILE"
else
echo ""
echo -e "${YELLOW}Skipped.${NC} Add your ZAI API key to $SHELL_RC_FILE when ready:"
echo -e " ${CYAN}echo 'export ZAI_API_KEY=\"your-key\"' >> $SHELL_RC_FILE${NC}"
SELECTED_ENV_VAR=""
SELECTED_PROVIDER_ID=""
SUBSCRIPTION_MODE=""
fi
if [ -n "$API_KEY" ]; then
sed -i.bak "/^export ZAI_API_KEY=/d" "$SHELL_RC_FILE" && rm -f "${SHELL_RC_FILE}.bak"
echo "" >> "$SHELL_RC_FILE"
echo "# Hive Agent Framework - ZAI Code subscription API key" >> "$SHELL_RC_FILE"
echo "export ZAI_API_KEY=\"$API_KEY\"" >> "$SHELL_RC_FILE"
export ZAI_API_KEY="$API_KEY"
echo ""
echo -e "${GREEN}${NC} ZAI API key saved to $SHELL_RC_FILE"
# Health check the new key
echo -n " Verifying ZAI API key... "
HC_RESULT=$(uv run python "$SCRIPT_DIR/scripts/check_llm_key.py" "zai" "$API_KEY" "https://api.z.ai/api/coding/paas/v4" 2>/dev/null) || true
HC_VALID=$(echo "$HC_RESULT" | $PYTHON_CMD -c "import json,sys; print(json.loads(sys.stdin.read()).get('valid',''))" 2>/dev/null) || true
HC_MSG=$(echo "$HC_RESULT" | $PYTHON_CMD -c "import json,sys; print(json.loads(sys.stdin.read()).get('message',''))" 2>/dev/null) || true
if [ "$HC_VALID" = "True" ]; then
echo -e "${GREEN}ok${NC}"
break
elif [ "$HC_VALID" = "False" ]; then
echo -e "${RED}failed${NC}"
echo -e " ${YELLOW}$HC_MSG${NC}"
# Undo the save so the user can retry cleanly
sed -i.bak "/^export ZAI_API_KEY=/d" "$SHELL_RC_FILE" && rm -f "${SHELL_RC_FILE}.bak"
sed -i.bak "/^# Hive Agent Framework - ZAI Code subscription API key$/d" "$SHELL_RC_FILE" && rm -f "${SHELL_RC_FILE}.bak"
unset ZAI_API_KEY
ZAI_CRED_DETECTED=false
echo ""
read -r -p " Press Enter to try again: " _
# Loop back to key prompt
else
echo -e "${YELLOW}--${NC}"
echo -e " ${DIM}Could not verify key (network issue). The key has been saved.${NC}"
break
fi
elif [ "$ZAI_CRED_DETECTED" = false ] || [ -z "$ZAI_API_KEY" ]; then
# No existing key and user skipped — abort provider
echo ""
echo -e "${YELLOW}Skipped.${NC} Add your ZAI API key to $SHELL_RC_FILE when ready:"
echo -e " ${CYAN}echo 'export ZAI_API_KEY=\"your-key\"' >> $SHELL_RC_FILE${NC}"
SELECTED_ENV_VAR=""
SELECTED_PROVIDER_ID=""
SUBSCRIPTION_MODE=""
break
else
# User pressed Enter with existing key — keep it, proceed normally
break
fi
done
fi
# Prompt for model if not already selected (manual provider path)
@@ -1037,52 +1210,22 @@ fi
echo ""
# ============================================================
# Step 4b: Browser Automation (GCU)
# Step 4b: Browser Automation (GCU) — always enabled
# ============================================================
echo -e "${BOLD}Enable browser automation?${NC}"
echo -e "${DIM}This lets your agents control a real browser — navigate websites, fill forms,${NC}"
echo -e "${DIM}scrape dynamic pages, and interact with web UIs.${NC}"
echo ""
echo -e " ${CYAN}${BOLD}1)${NC} ${BOLD}Yes${NC}"
echo -e " ${CYAN}2)${NC} No"
echo ""
while true; do
read -r -p "Enter choice (1-2, default 1): " gcu_choice || true
gcu_choice="${gcu_choice:-1}"
if [ "$gcu_choice" = "1" ] || [ "$gcu_choice" = "2" ]; then
break
fi
echo -e "${RED}Invalid choice. Please enter 1 or 2${NC}"
done
if [ "$gcu_choice" = "1" ]; then
GCU_ENABLED=true
echo -e "${GREEN}${NC} Browser automation enabled"
else
GCU_ENABLED=false
echo -e "${DIM}⬡ Browser automation skipped${NC}"
fi
echo -e "${GREEN}${NC} Browser automation enabled"
# Patch gcu_enabled into configuration.json
if [ "$GCU_ENABLED" = "true" ]; then
GCU_PY_VAL="True"
else
GCU_PY_VAL="False"
fi
if [ -f "$HIVE_CONFIG_FILE" ]; then
uv run python -c "
import json
with open('$HIVE_CONFIG_FILE') as f:
config = json.load(f)
config['gcu_enabled'] = $GCU_PY_VAL
config['gcu_enabled'] = True
with open('$HIVE_CONFIG_FILE', 'w') as f:
json.dump(config, f, indent=2)
"
elif [ "$GCU_ENABLED" = "true" ]; then
# No config file yet (user skipped LLM provider) — create minimal one
else
mkdir -p "$HIVE_CONFIG_DIR"
uv run python -c "
import json
@@ -1352,9 +1495,10 @@ if [ "$FRONTEND_BUILT" = true ]; then
echo -e " ${DIM}Starting server on http://localhost:8787${NC}"
echo -e " ${DIM}Press Ctrl+C to stop${NC}"
echo ""
# exec replaces the quickstart process with hive serve
# --open tells it to auto-open the browser once the server is ready
exec "$SCRIPT_DIR/hive" serve --open
echo -e " ${DIM}Tip: You can restart the dashboard anytime with:${NC} ${CYAN}hive open${NC}"
echo ""
# exec replaces the quickstart process with hive open
exec "$SCRIPT_DIR/hive" open
else
# No frontend — show manual instructions
echo -e "${YELLOW}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
+125
View File
@@ -0,0 +1,125 @@
"""Validate an LLM API key without consuming tokens.
Usage:
python scripts/check_llm_key.py <provider_id> <api_key> [api_base]
Exit codes:
0 = valid key
1 = invalid key
2 = inconclusive (timeout, network error)
Output: single JSON line {"valid": bool, "message": str}
"""
import json
import sys
import httpx
TIMEOUT = 10.0
def check_anthropic(api_key: str, **_: str) -> dict:
"""Send empty messages to trigger 400 without consuming tokens."""
with httpx.Client(timeout=TIMEOUT) as client:
r = client.post(
"https://api.anthropic.com/v1/messages",
headers={
"x-api-key": api_key,
"anthropic-version": "2023-06-01",
"Content-Type": "application/json",
},
json={"model": "claude-sonnet-4-20250514", "max_tokens": 1, "messages": []},
)
if r.status_code in (200, 400, 429):
return {"valid": True, "message": "API key valid"}
if r.status_code == 401:
return {"valid": False, "message": "Invalid API key"}
if r.status_code == 403:
return {"valid": False, "message": "API key lacks permissions"}
return {"valid": False, "message": f"Unexpected status {r.status_code}"}
def check_openai_compatible(api_key: str, endpoint: str, name: str) -> dict:
"""GET /models on any OpenAI-compatible API."""
with httpx.Client(timeout=TIMEOUT) as client:
r = client.get(
endpoint,
headers={"Authorization": f"Bearer {api_key}"},
)
if r.status_code in (200, 429):
return {"valid": True, "message": f"{name} API key valid"}
if r.status_code == 401:
return {"valid": False, "message": f"Invalid {name} API key"}
if r.status_code == 403:
return {"valid": False, "message": f"{name} API key lacks permissions"}
return {"valid": False, "message": f"{name} API returned status {r.status_code}"}
def check_gemini(api_key: str, **_: str) -> dict:
"""List models with query param auth."""
with httpx.Client(timeout=TIMEOUT) as client:
r = client.get(
"https://generativelanguage.googleapis.com/v1beta/models",
params={"key": api_key},
)
if r.status_code in (200, 429):
return {"valid": True, "message": "Gemini API key valid"}
if r.status_code in (400, 401, 403):
return {"valid": False, "message": "Invalid Gemini API key"}
return {"valid": False, "message": f"Gemini API returned status {r.status_code}"}
PROVIDERS = {
"anthropic": lambda key, **kw: check_anthropic(key),
"openai": lambda key, **kw: check_openai_compatible(
key, "https://api.openai.com/v1/models", "OpenAI"
),
"gemini": lambda key, **kw: check_gemini(key),
"groq": lambda key, **kw: check_openai_compatible(
key, "https://api.groq.com/openai/v1/models", "Groq"
),
"cerebras": lambda key, **kw: check_openai_compatible(
key, "https://api.cerebras.ai/v1/models", "Cerebras"
),
}
def main() -> None:
if len(sys.argv) < 3:
print(json.dumps({"valid": False, "message": "Usage: check_llm_key.py <provider> <key> [api_base]"}))
sys.exit(2)
provider_id = sys.argv[1]
api_key = sys.argv[2]
api_base = sys.argv[3] if len(sys.argv) > 3 else ""
try:
if api_base:
# Custom API base (ZAI or other OpenAI-compatible)
endpoint = api_base.rstrip("/") + "/models"
result = check_openai_compatible(api_key, endpoint, "ZAI")
elif provider_id in PROVIDERS:
result = PROVIDERS[provider_id](api_key)
else:
result = {"valid": True, "message": f"No health check for {provider_id}"}
print(json.dumps(result))
sys.exit(0)
print(json.dumps(result))
sys.exit(0 if result["valid"] else 1)
except httpx.TimeoutException:
print(json.dumps({"valid": None, "message": "Request timed out"}))
sys.exit(2)
except httpx.RequestError as e:
msg = str(e)
# Redact key from error messages
if api_key in msg:
msg = msg.replace(api_key, "***")
print(json.dumps({"valid": None, "message": f"Connection failed: {msg}"}))
sys.exit(2)
if __name__ == "__main__":
main()
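A minimal caller sketch, not part of this changeset, showing how the script's documented contract (JSON on stdout, exit codes 0/1/2) can be consumed; the relative path to scripts/check_llm_key.py is an assumption about where the caller runs from.

# Illustrative caller for check_llm_key.py; relies only on the documented contract.
import json
import subprocess
import sys

def verify_key(provider: str, key: str, api_base: str = ""):
    cmd = [sys.executable, "scripts/check_llm_key.py", provider, key]
    if api_base:
        cmd.append(api_base)
    proc = subprocess.run(cmd, capture_output=True, text=True, encoding="utf-8")
    try:
        message = json.loads(proc.stdout).get("message", "")
    except ValueError:
        message = proc.stdout.strip()
    if proc.returncode == 0:
        return True, message      # valid key
    if proc.returncode == 1:
        return False, message     # invalid key
    return None, message          # inconclusive (timeout or network error)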
+2
View File
@@ -20,6 +20,7 @@ def test_check_requirements():
[sys.executable, "scripts/check_requirements.py", "json", "sys", "os"],
capture_output=True,
text=True,
encoding="utf-8",
)
print(f"Exit code: {result.returncode}")
print(f"Output:\n{result.stdout}")
@@ -39,6 +40,7 @@ def test_check_requirements():
[sys.executable, "scripts/check_requirements.py", "json", "nonexistent_module"],
capture_output=True,
text=True,
encoding="utf-8",
)
print(f"Exit code: {result.returncode}")
print(f"Output:\n{result.stdout}")
+66
View File
@@ -0,0 +1,66 @@
# MSSQL Connection Configuration Template
#
# Copy this file to .env and fill in your actual values
# DO NOT commit the .env file to version control!
# ============================================================================
# SQL Server Connection - Choose ONE format below:
# ============================================================================
# OPTION 1: Local named instance
MSSQL_SERVER=localhost\SQLEXPRESS
# OPTION 2: Local default instance
# MSSQL_SERVER=localhost
# OPTION 3: Remote server with default port (1433)
# MSSQL_SERVER=192.168.1.100
# OPTION 4: Remote server with custom port (comma-separated)
# MSSQL_SERVER=192.168.1.100,1433
# OPTION 5: Remote named instance
# MSSQL_SERVER=PRODUCTION-SERVER\INSTANCE01
# OPTION 6: Domain server name
# MSSQL_SERVER=sql-prod.company.com
# OPTION 7: Domain server with port
# MSSQL_SERVER=sql-prod.company.com,1433
# ============================================================================
# Database Configuration
# ============================================================================
MSSQL_DATABASE=AdenTestDB
# ============================================================================
# Authentication - Choose ONE method:
# ============================================================================
# METHOD 1: SQL Server Authentication (username/password)
# Use this for: remote servers, Linux servers, specific SQL logins
MSSQL_USERNAME=sa
MSSQL_PASSWORD=your_password_here
# METHOD 2: Windows Authentication (leave both empty)
# Use this for: local Windows servers, domain-joined environments
# MSSQL_USERNAME=
# MSSQL_PASSWORD=
# ============================================================================
# Important Notes:
# ============================================================================
# - Port format: Use comma (,) not colon - Example: server,1433
# - Named instances: Use backslash (\) - Example: SERVER\INSTANCE
# - Default port: 1433 (can be omitted if using default)
# - ODBC Driver: Requires "ODBC Driver 17 for SQL Server" or newer
# - Security: Never commit this file with real credentials!
# - Escaping: In some shells, escape backslashes (\\) when setting env vars
# ============================================================================
# Example Production Configurations:
# -----------------------------------
# Azure SQL: MSSQL_SERVER=yourserver.database.windows.net
# AWS RDS: MSSQL_SERVER=yourinstance.region.rds.amazonaws.com,1433
# Docker: MSSQL_SERVER=localhost,1401
# Kubernetes: MSSQL_SERVER=mssql-service.namespace.svc.cluster.local,1433
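As a rough sketch of how these MSSQL_* variables are typically consumed (the helper scripts later in this diff do essentially the same), the snippet below assembles them into a pyodbc connection string; the driver name and fallback defaults are assumptions, not requirements.

# Sketch: build a pyodbc connection string from the MSSQL_* variables above.
# Falls back to Windows Authentication when username/password are left empty.
import os
import pyodbc
from dotenv import load_dotenv

load_dotenv()

server = os.getenv("MSSQL_SERVER", "localhost")       # e.g. "SERVER\\INSTANCE" or "host,1433"
database = os.getenv("MSSQL_DATABASE", "AdenTestDB")
username = os.getenv("MSSQL_USERNAME")
password = os.getenv("MSSQL_PASSWORD")

if username and password:
    auth = f"UID={username};PWD={password};"          # SQL Server Authentication
else:
    auth = "Trusted_Connection=yes;"                  # Windows Authentication

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    f"SERVER={server};DATABASE={database};" + auth
)
connection = pyodbc.connect(conn_str)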
+11 -1
View File
@@ -90,7 +90,13 @@ def _resolve_path(path: str) -> str:
def _snapshot_git(*args: str) -> str:
"""Run a git command with the snapshot GIT_DIR and PROJECT_ROOT worktree."""
cmd = ["git", "--git-dir", SNAPSHOT_DIR, "--work-tree", PROJECT_ROOT, *args]
result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
result = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=30,
encoding="utf-8",
)
return result.stdout.strip()
@@ -104,6 +110,7 @@ def _ensure_snapshot_repo():
["git", "init", "--bare", SNAPSHOT_DIR],
capture_output=True,
timeout=10,
encoding="utf-8",
)
_snapshot_git("config", "core.autocrlf", "false")
@@ -152,6 +159,7 @@ def run_command(command: str, cwd: str = "", timeout: int = 120) -> str:
capture_output=True,
text=True,
timeout=timeout,
encoding="utf-8",
env={
**os.environ,
"PYTHONPATH": (
@@ -228,6 +236,7 @@ def undo_changes(path: str = "") -> str:
capture_output=True,
text=True,
timeout=10,
encoding="utf-8",
)
return f"Restored: {path}"
else:
@@ -1021,6 +1030,7 @@ def run_agent_tests(
text=True,
timeout=120,
env=env,
encoding="utf-8",
)
except subprocess.TimeoutExpired:
return json.dumps(
+120
View File
@@ -0,0 +1,120 @@
"""
Database Initialization Script Runner for AdenTestDB
This script executes the SQL initialization file to create the AdenTestDB database.
Make sure your SQL Server is running before executing this script.
"""
import os
import pyodbc
from dotenv import load_dotenv
# Load environment variables from .env
load_dotenv()
# Database connection settings (from environment variables)
SERVER = os.getenv("MSSQL_SERVER", r"MONSTER\MSSQLSERVERR")
USERNAME = os.getenv("MSSQL_USERNAME")
PASSWORD = os.getenv("MSSQL_PASSWORD")
# SQL file path
SQL_FILE = os.path.join(os.path.dirname(__file__), "init_aden_testdb.sql")
def execute_sql_file():
"""Execute the SQL initialization file."""
connection = None
try:
# Read SQL file
if not os.path.exists(SQL_FILE):
print(f"[ERROR] SQL file not found: {SQL_FILE}")
return False
with open(SQL_FILE, encoding="utf-8") as f:
sql_script = f.read()
print("=" * 70)
print("AdenTestDB Database Initialization")
print("=" * 70)
print(f"Server: {SERVER}")
print(f"SQL Script: {SQL_FILE}")
print()
# Connect to master database (to create new database)
connection_string = (
f"DRIVER={{ODBC Driver 17 for SQL Server}};"
f"SERVER={SERVER};"
f"DATABASE=master;"
f"UID={USERNAME};"
f"PWD={PASSWORD};"
)
print("Connecting to SQL Server...")
connection = pyodbc.connect(connection_string)
connection.autocommit = True # Required for CREATE DATABASE
cursor = connection.cursor()
print("[OK] Connected successfully!")
print()
print("Executing SQL script...")
print("-" * 70)
# Split by GO statements and execute each batch
batches = sql_script.split("\nGO\n")
for i, batch in enumerate(batches, 1):
batch = batch.strip()
if batch and not batch.startswith("--"):
try:
cursor.execute(batch)
# Print any messages from the server
while cursor.nextset():
pass
except pyodbc.Error as e:
# Some statements might not return results, that's OK
if "No results" not in str(e):
print(f"Warning in batch {i}: {str(e)}")
print("-" * 70)
print()
print("=" * 70)
print("[SUCCESS] Database initialization completed successfully!")
print("=" * 70)
print()
print("Next steps:")
print("1. Run: python test_mssql_connection.py")
print("2. Verify the relational schema and sample data")
print()
return True
except pyodbc.Error as e:
print()
print("=" * 70)
print("[ERROR] Database initialization failed!")
print("=" * 70)
print(f"Error detail: {str(e)}")
print()
print("Possible solutions:")
print("1. Ensure SQL Server is running")
print("2. Check server name, username, and password")
print("3. Ensure you have permission to create databases")
print("4. Verify ODBC Driver 17 for SQL Server is installed")
print()
return False
except Exception as e:
print(f"\n[ERROR] Unexpected error: {str(e)}")
return False
finally:
if connection:
connection.close()
print("Connection closed.")
if __name__ == "__main__":
success = execute_sql_file()
exit(0 if success else 1)
+134
View File
@@ -0,0 +1,134 @@
"""
Grant Permissions to AdenTestDB
This script grants the necessary permissions to the 'sa' user to access AdenTestDB.
"""
import pyodbc
SERVER = r"MONSTER\MSSQLSERVERR"
USERNAME = "sa"
PASSWORD = "622622aA."
def grant_permissions():
"""Grant permissions to the database."""
connection = None
try:
# Connect to AdenTestDB
connection_string = (
f"DRIVER={{ODBC Driver 17 for SQL Server}};"
f"SERVER={SERVER};"
f"DATABASE=AdenTestDB;"
f"UID={USERNAME};"
f"PWD={PASSWORD};"
f"TrustServerCertificate=yes;"
)
print("=" * 70)
print("Granting Permissions to AdenTestDB")
print("=" * 70)
print(f"Server: {SERVER}")
print()
print("Connecting to database...")
connection = pyodbc.connect(connection_string)
cursor = connection.cursor()
print("[OK] Connected successfully!")
print()
# Grant permissions
print("Granting permissions...")
try:
cursor.execute("GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO sa")
print("[OK] Granted schema permissions to sa")
except pyodbc.Error as e:
print(f"Note: {str(e)}")
connection.commit()
print()
print("=" * 70)
print("[SUCCESS] Permissions granted!")
print("=" * 70)
print()
print("You can now run: python test_mssql_connection.py")
return True
except pyodbc.Error:
# If we can't connect, try connecting to master and creating user
try:
connection_string = (
f"DRIVER={{ODBC Driver 17 for SQL Server}};"
f"SERVER={SERVER};"
f"DATABASE=master;"
f"UID={USERNAME};"
f"PWD={PASSWORD};"
f"TrustServerCertificate=yes;"
)
print("Attempting to grant permissions via master database...")
connection = pyodbc.connect(connection_string)
cursor = connection.cursor()
# Create login if not exists
try:
cursor.execute(f"""
IF NOT EXISTS (SELECT * FROM sys.server_principals WHERE name = 'sa')
BEGIN
CREATE LOGIN sa WITH PASSWORD = '{PASSWORD}'
END
""")
except Exception:
pass
# Switch to AdenTestDB and grant permissions
cursor.execute("USE AdenTestDB")
# Create user if not exists
try:
cursor.execute("""
IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE name = 'sa')
BEGIN
CREATE USER sa FOR LOGIN sa
END
""")
print("[OK] Created database user")
except Exception:
pass
# Grant permissions
cursor.execute("ALTER ROLE db_datareader ADD MEMBER sa")
cursor.execute("ALTER ROLE db_datawriter ADD MEMBER sa")
connection.commit()
print("[OK] Permissions granted successfully!")
return True
except Exception as inner_e:
print("\n[ERROR] Could not grant permissions!")
print(f"Error: {str(inner_e)}")
print()
print("The database was created successfully, but there's a permission issue.")
print("Please run this SQL command in SQL Server Management Studio:")
print()
print("USE AdenTestDB;")
print("GO")
print("ALTER ROLE db_datareader ADD MEMBER sa;")
print("ALTER ROLE db_datawriter ADD MEMBER sa;")
print("GO")
return False
finally:
if connection:
connection.close()
print("\nConnection closed.")
if __name__ == "__main__":
grant_permissions()
+183
View File
@@ -0,0 +1,183 @@
-- ============================================================================
-- AdenTestDB Database Initialization Script
-- ============================================================================
-- Purpose: Create a professional testing database for Aden Hive MSSQL tool
-- Author: Database Architect
-- Date: 2026-02-08
-- ============================================================================
USE master;
GO
-- Drop database if exists (for clean recreation)
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'AdenTestDB')
BEGIN
ALTER DATABASE AdenTestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE AdenTestDB;
PRINT 'Existing AdenTestDB dropped successfully.';
END
GO
-- Create new database
CREATE DATABASE AdenTestDB;
GO
PRINT 'AdenTestDB created successfully.';
GO
USE AdenTestDB;
GO
-- ============================================================================
-- TABLE: Departments
-- ============================================================================
-- Purpose: Store department information with budget tracking
-- ============================================================================
CREATE TABLE Departments (
department_id INT IDENTITY(1,1) NOT NULL,
name NVARCHAR(100) NOT NULL,
budget DECIMAL(15,2) NOT NULL,
created_date DATETIME NOT NULL DEFAULT GETDATE(),
CONSTRAINT PK_Departments PRIMARY KEY (department_id),
CONSTRAINT UK_Departments_Name UNIQUE (name),
CONSTRAINT CK_Departments_Budget CHECK (budget >= 0)
);
GO
-- Create index for performance optimization
CREATE INDEX IX_Departments_Name ON Departments(name);
GO
PRINT 'Departments table created successfully.';
GO
-- ============================================================================
-- TABLE: Employees
-- ============================================================================
-- Purpose: Store employee information with department association
-- ============================================================================
CREATE TABLE Employees (
employee_id INT IDENTITY(1000,1) NOT NULL,
first_name NVARCHAR(50) NOT NULL,
last_name NVARCHAR(50) NOT NULL,
email NVARCHAR(100) NOT NULL,
salary DECIMAL(12,2) NOT NULL,
hire_date DATETIME NOT NULL,
department_id INT NOT NULL,
CONSTRAINT PK_Employees PRIMARY KEY (employee_id),
CONSTRAINT UK_Employees_Email UNIQUE (email),
CONSTRAINT CK_Employees_Salary CHECK (salary >= 0),
CONSTRAINT FK_Employees_Departments
FOREIGN KEY (department_id) REFERENCES Departments(department_id)
ON DELETE CASCADE
ON UPDATE CASCADE
);
GO
-- Create indexes for performance optimization
CREATE INDEX IX_Employees_DepartmentId ON Employees(department_id);
CREATE INDEX IX_Employees_LastName ON Employees(last_name);
CREATE INDEX IX_Employees_Email ON Employees(email);
GO
PRINT 'Employees table created successfully.';
GO
-- ============================================================================
-- SAMPLE DATA: Departments
-- ============================================================================
INSERT INTO Departments (name, budget, created_date) VALUES
('Engineering', 2500000.00, '2023-01-15'),
('Human Resources', 800000.00, '2023-01-15'),
('Sales', 1500000.00, '2023-01-20'),
('Marketing', 1200000.00, '2023-02-01'),
('Finance', 1000000.00, '2023-02-10');
GO
PRINT 'Sample departments inserted successfully.';
GO
-- ============================================================================
-- SAMPLE DATA: Employees
-- ============================================================================
INSERT INTO Employees (first_name, last_name, email, salary, hire_date, department_id) VALUES
-- Engineering Department (ID: 1)
('John', 'Smith', 'john.smith@adenhive.com', 120000.00, '2023-03-01', 1),
('Sarah', 'Johnson', 'sarah.johnson@adenhive.com', 115000.00, '2023-03-15', 1),
('Michael', 'Chen', 'michael.chen@adenhive.com', 125000.00, '2023-04-01', 1),
('Emily', 'Rodriguez', 'emily.rodriguez@adenhive.com', 110000.00, '2023-05-10', 1),
('David', 'Kim', 'david.kim@adenhive.com', 105000.00, '2024-01-15', 1),
-- Human Resources Department (ID: 2)
('Lisa', 'Anderson', 'lisa.anderson@adenhive.com', 85000.00, '2023-02-20', 2),
('James', 'Wilson', 'james.wilson@adenhive.com', 80000.00, '2023-06-01', 2),
-- Sales Department (ID: 3)
('Jennifer', 'Taylor', 'jennifer.taylor@adenhive.com', 95000.00, '2023-04-15', 3),
('Robert', 'Martinez', 'robert.martinez@adenhive.com', 90000.00, '2023-05-01', 3),
('Amanda', 'Garcia', 'amanda.garcia@adenhive.com', 92000.00, '2023-07-20', 3),
-- Marketing Department (ID: 4)
('Christopher', 'Lee', 'christopher.lee@adenhive.com', 88000.00, '2023-03-10', 4),
('Michelle', 'White', 'michelle.white@adenhive.com', 86000.00, '2023-08-01', 4),
('Kevin', 'Brown', 'kevin.brown@adenhive.com', 84000.00, '2024-02-01', 4),
-- Finance Department (ID: 5)
('Jessica', 'Davis', 'jessica.davis@adenhive.com', 98000.00, '2023-02-15', 5),
('Daniel', 'Miller', 'daniel.miller@adenhive.com', 95000.00, '2023-09-01', 5);
GO
PRINT 'Sample employees inserted successfully.';
GO
-- ============================================================================
-- VERIFICATION QUERIES
-- ============================================================================
PRINT '';
PRINT '============================================================';
PRINT 'Database Setup Summary';
PRINT '============================================================';
-- Count departments
DECLARE @DeptCount INT;
SELECT @DeptCount = COUNT(*) FROM Departments;
PRINT 'Total Departments: ' + CAST(@DeptCount AS NVARCHAR(10));
-- Count employees
DECLARE @EmpCount INT;
SELECT @EmpCount = COUNT(*) FROM Employees;
PRINT 'Total Employees: ' + CAST(@EmpCount AS NVARCHAR(10));
-- Show department summary
PRINT '';
PRINT 'Department Summary:';
PRINT '------------------------------------------------------------';
SELECT
d.name AS Department,
COUNT(e.employee_id) AS Employees,
d.budget AS Budget,
FORMAT(d.budget / NULLIF(COUNT(e.employee_id), 0), 'C', 'en-US') AS BudgetPerEmployee
FROM Departments d
LEFT JOIN Employees e ON d.department_id = e.department_id
GROUP BY d.name, d.budget
ORDER BY d.name;
GO
PRINT '';
PRINT '============================================================';
PRINT 'AdenTestDB initialization completed successfully!';
PRINT '============================================================';
PRINT '';
PRINT 'Next Steps:';
PRINT '1. Run: python test_mssql_connection.py';
PRINT '2. Verify JOIN queries work correctly';
PRINT '3. Test relational integrity';
PRINT '============================================================';
GO
+208
View File
@@ -0,0 +1,208 @@
"""
Payroll Analysis Tool
Analyzes total payroll costs by department and identifies highest-paid employee
"""
import io
import os
import sys
import pyodbc
from dotenv import load_dotenv
# Force UTF-8 encoding for console output
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")
# Load environment variables from .env file
load_dotenv()
# Database connection settings (from environment variables)
SERVER = os.getenv("MSSQL_SERVER", r"MONSTER\MSSQLSERVERR")
DATABASE = os.getenv("MSSQL_DATABASE", "AdenTestDB")
USERNAME = os.getenv("MSSQL_USERNAME")
PASSWORD = os.getenv("MSSQL_PASSWORD")
def main():
"""Main analysis function."""
connection = None
try:
print("=" * 80)
print(" COMPANY PAYROLL ANALYSIS")
print("=" * 80)
print(f"Server: {SERVER}")
print(f"Database: {DATABASE}")
print()
# Connect to database
if USERNAME and PASSWORD:
# SQL Server Authentication
connection_string = (
f"DRIVER={{ODBC Driver 17 for SQL Server}};"
f"SERVER={SERVER};"
f"DATABASE={DATABASE};"
f"UID={USERNAME};"
f"PWD={PASSWORD};"
)
else:
# Windows Authentication
connection_string = (
f"DRIVER={{ODBC Driver 17 for SQL Server}};"
f"SERVER={SERVER};"
f"DATABASE={DATABASE};"
f"Trusted_Connection=yes;"
)
print("Connecting to database...")
connection = pyodbc.connect(connection_string)
cursor = connection.cursor()
print("✓ Connection successful!")
print()
# Analysis 1: Total Payroll by Department
print("=" * 80)
print(" TOTAL SALARY COSTS BY DEPARTMENT")
print("=" * 80)
payroll_query = """
SELECT
d.name AS department_name,
COUNT(e.employee_id) AS employee_count,
SUM(e.salary) AS total_salary_cost,
AVG(e.salary) AS avg_salary
FROM Departments d
LEFT JOIN Employees e ON d.department_id = e.department_id
GROUP BY d.name
ORDER BY total_salary_cost DESC
"""
cursor.execute(payroll_query)
print(
f"\n{'Department':<25} {'Employees':<12} {'Total Salary Cost':<20} {'Avg Salary':<15}"
)
print("-" * 80)
total_company_payroll = 0
total_employees = 0
for row in cursor:
dept_name = row[0]
emp_count = row[1]
total_salary = row[2] if row[2] else 0
avg_salary = row[3] if row[3] else 0
total_company_payroll += total_salary
total_employees += emp_count
total_salary_str = f"${total_salary:,.2f}"
avg_salary_str = f"${avg_salary:,.2f}" if avg_salary > 0 else "N/A"
print(f"{dept_name:<25} {emp_count:<12} {total_salary_str:<20} {avg_salary_str:<15}")
print("-" * 80)
print(f"{'TOTAL COMPANY':<25} {total_employees:<12} ${total_company_payroll:,.2f}")
print("-" * 80)
print()
# Analysis 2: Highest Paid Employee
print("=" * 80)
print(" HIGHEST PAID EMPLOYEE")
print("=" * 80)
highest_paid_query = """
SELECT TOP 1
e.employee_id,
e.first_name + ' ' + e.last_name AS full_name,
e.email,
e.salary,
d.name AS department_name
FROM Employees e
INNER JOIN Departments d ON e.department_id = d.department_id
ORDER BY e.salary DESC
"""
cursor.execute(highest_paid_query)
top_employee = cursor.fetchone()
if top_employee:
print(f"\n{'Field':<20} {'Value':<50}")
print("-" * 80)
print(f"{'Employee ID':<20} {top_employee[0]}")
print(f"{'Name':<20} {top_employee[1]}")
print(f"{'Email':<20} {top_employee[2]}")
print(f"{'Department':<20} {top_employee[4]}")
print(f"{'Salary':<20} ${top_employee[3]:,.2f}")
print("-" * 80)
else:
print("\nNo employees found in the database.")
print()
# Additional Analysis: Top 5 Highest Paid Employees
print("=" * 80)
print(" TOP 5 HIGHEST PAID EMPLOYEES")
print("=" * 80)
top_5_query = """
SELECT TOP 5
e.first_name + ' ' + e.last_name AS full_name,
d.name AS department_name,
e.salary
FROM Employees e
INNER JOIN Departments d ON e.department_id = d.department_id
ORDER BY e.salary DESC
"""
cursor.execute(top_5_query)
print(f"\n{'Rank':<6} {'Name':<30} {'Department':<25} {'Salary':<15}")
print("-" * 80)
rank = 1
for row in cursor:
full_name = row[0]
dept_name = row[1]
salary = row[2]
print(f"{rank:<6} {full_name:<30} {dept_name:<25} ${salary:,.2f}")
rank += 1
print("-" * 80)
print()
# Summary
print("=" * 80)
print(" ANALYSIS SUMMARY")
print("=" * 80)
print(f"✓ Total Employees: {total_employees}")
print(f"✓ Total Company Payroll: ${total_company_payroll:,.2f}")
print(
f"✓ Average Employee Salary: ${total_company_payroll / total_employees:,.2f}"
if total_employees > 0
else "N/A"
)
print("=" * 80)
print("\nPayroll analysis completed successfully!")
except pyodbc.Error as e:
print("\n[ERROR] Database operation failed!")
print(f"Error detail: {str(e)}")
print()
print("Possible solutions:")
print("1. Ensure SQL Server is running")
print("2. Verify database access permissions")
print("3. Check connection string configuration")
except Exception as e:
print(f"\n[ERROR] Unexpected error: {str(e)}")
finally:
if connection:
connection.close()
print("\nConnection closed.")
if __name__ == "__main__":
main()
+7
View File
@@ -31,6 +31,7 @@ dependencies = [
"litellm>=1.81.0",
"dnspython>=2.4.0",
"resend>=2.0.0",
"asana>=3.2.0",
"google-analytics-data>=0.18.0",
"framework",
"stripe>=14.3.0",
@@ -60,6 +61,10 @@ sql = [
bigquery = [
"google-cloud-bigquery>=3.0.0",
]
databricks = [
"databricks-sdk>=0.30.0",
"databricks-mcp>=0.1.0",
]
all = [
"RestrictedPython>=7.0",
"pytesseract>=0.3.10",
@@ -67,6 +72,8 @@ all = [
"duckdb>=1.0.0",
"openpyxl>=3.1.0",
"google-cloud-bigquery>=3.0.0",
"databricks-sdk>=0.30.0",
"databricks-mcp>=0.1.0",
]
[tool.uv.sources]
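Optional extras like the new databricks group are usually guarded at import time so the core package still loads when they are absent. A hedged sketch of that pattern follows; the error message and helper name are illustrative only.

# Sketch: degrade gracefully when the optional databricks extra is not installed.
try:
    from databricks.sdk import WorkspaceClient  # provided by databricks-sdk
except ImportError:  # extra not installed (e.g. synced without the databricks group)
    WorkspaceClient = None

def get_workspace_client():
    if WorkspaceClient is None:
        raise RuntimeError(
            "Databricks support requires the optional 'databricks' dependency group."
        )
    return WorkspaceClient()  # reads DATABRICKS_HOST / DATABRICKS_TOKEN from the environment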
+117
View File
@@ -0,0 +1,117 @@
"""
Query Average Salary by Department
"""
import io
import os
import sys
import pyodbc
from dotenv import load_dotenv
# Force UTF-8 encoding for console output
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")
# Load environment variables from .env file
load_dotenv()
# Database connection settings (from environment variables)
SERVER = os.getenv("MSSQL_SERVER", r"MONSTER\MSSQLSERVERR")
DATABASE = os.getenv("MSSQL_DATABASE", "AdenTestDB")
USERNAME = os.getenv("MSSQL_USERNAME")
PASSWORD = os.getenv("MSSQL_PASSWORD")
def main():
"""Query and display average salary by department."""
connection = None
try:
# Connect to database
if USERNAME and PASSWORD:
# SQL Server Authentication
connection_string = (
f"DRIVER={{ODBC Driver 17 for SQL Server}};"
f"SERVER={SERVER};"
f"DATABASE={DATABASE};"
f"UID={USERNAME};"
f"PWD={PASSWORD};"
)
else:
# Windows Authentication
connection_string = (
f"DRIVER={{ODBC Driver 17 for SQL Server}};"
f"SERVER={SERVER};"
f"DATABASE={DATABASE};"
f"Trusted_Connection=yes;"
)
connection = pyodbc.connect(connection_string)
cursor = connection.cursor()
# Query to get average salary by department, sorted by average salary descending
query = """
SELECT
d.name AS department,
AVG(e.salary) AS avg_salary,
COUNT(e.employee_id) AS emp_count
FROM Departments d
LEFT JOIN Employees e ON d.department_id = e.department_id
WHERE e.salary IS NOT NULL
GROUP BY d.name
ORDER BY avg_salary DESC
"""
cursor.execute(query)
results = cursor.fetchall()
if not results:
print("No salary data found.")
return
# Get the highest average salary for highlighting
highest_avg = results[0][1] if results else 0
print("=" * 80)
print(" AVERAGE SALARY BY DEPARTMENT (Sorted Highest to Lowest)")
print("=" * 80)
print()
print(f"{'Rank':<6} {'Department':<25} {'Avg Salary':<20} {'Employees':<12}")
print("-" * 80)
for idx, row in enumerate(results, 1):
department = row[0]
avg_salary = row[1]
emp_count = row[2]
avg_salary_str = f"${avg_salary:,.2f}"
# Highlight the department with the highest average
if avg_salary == highest_avg:
# Use special formatting for the highest
prefix = f"{'>>> ' + str(idx):<6}"
print(f"{prefix} {department:<25} {avg_salary_str:<20} {emp_count:<12} ⭐ HIGHEST")
else:
print(f"{idx:<6} {department:<25} {avg_salary_str:<20} {emp_count:<12}")
print("-" * 80)
print()
print("📊 Summary:")
print(f" • Total departments with employees: {len(results)}")
print(f" • Highest average salary: ${highest_avg:,.2f} ({results[0][0]})")
print(f" • Lowest average salary: ${results[-1][1]:,.2f} ({results[-1][0]})")
print("=" * 80)
except pyodbc.Error as e:
print(f"\n[ERROR] Database operation failed: {str(e)}")
except Exception as e:
print(f"\n[ERROR] Unexpected error: {str(e)}")
finally:
if connection:
connection.close()
if __name__ == "__main__":
main()
@@ -55,20 +55,35 @@ To add a new credential:
3. If new category, import and merge it in this __init__.py
"""
from .airtable import AIRTABLE_CREDENTIALS
from .apify import APIFY_CREDENTIALS
from .apollo import APOLLO_CREDENTIALS
from .asana import ASANA_CREDENTIALS
from .attio import ATTIO_CREDENTIALS
from .aws_s3 import AWS_S3_CREDENTIALS
from .azure_sql import AZURE_SQL_CREDENTIALS
from .base import CredentialError, CredentialSpec
from .bigquery import BIGQUERY_CREDENTIALS
from .brevo import BREVO_CREDENTIALS
from .browser import get_aden_auth_url, get_aden_setup_url, open_browser
from .calcom import CALCOM_CREDENTIALS
from .calendly import CALENDLY_CREDENTIALS
from .cloudinary import CLOUDINARY_CREDENTIALS
from .confluence import CONFLUENCE_CREDENTIALS
from .databricks import DATABRICKS_CREDENTIALS
from .discord import DISCORD_CREDENTIALS
from .docker_hub import DOCKER_HUB_CREDENTIALS
from .email import EMAIL_CREDENTIALS
from .gcp_vision import GCP_VISION_CREDENTIALS
from .github import GITHUB_CREDENTIALS
from .gitlab import GITLAB_CREDENTIALS
from .google_analytics import GOOGLE_ANALYTICS_CREDENTIALS
from .google_calendar import GOOGLE_CALENDAR_CREDENTIALS
from .google_docs import GOOGLE_DOCS_CREDENTIALS
from .google_maps import GOOGLE_MAPS_CREDENTIALS
from .google_search_console import GOOGLE_SEARCH_CONSOLE_CREDENTIALS
from .google_sheets import GOOGLE_SHEETS_CREDENTIALS
from .greenhouse import GREENHOUSE_CREDENTIALS
from .health_check import (
BaseHttpHealthChecker,
HealthCheckResult,
@@ -76,11 +91,34 @@ from .health_check import (
validate_integration_wiring,
)
from .hubspot import HUBSPOT_CREDENTIALS
from .huggingface import HUGGINGFACE_CREDENTIALS
from .intercom import INTERCOM_CREDENTIALS
from .jira import JIRA_CREDENTIALS
from .kafka import KAFKA_CREDENTIALS
from .langfuse import LANGFUSE_CREDENTIALS
from .linear import LINEAR_CREDENTIALS
from .llm import LLM_CREDENTIALS
from .lusha import LUSHA_CREDENTIALS
from .microsoft_graph import MICROSOFT_GRAPH_CREDENTIALS
from .mongodb import MONGODB_CREDENTIALS
from .n8n import N8N_CREDENTIALS
from .news import NEWS_CREDENTIALS
from .notion import NOTION_CREDENTIALS
from .obsidian import OBSIDIAN_CREDENTIALS
from .pagerduty import PAGERDUTY_CREDENTIALS
from .pinecone import PINECONE_CREDENTIALS
from .pipedrive import PIPEDRIVE_CREDENTIALS
from .plaid import PLAID_CREDENTIALS
from .postgres import POSTGRES_CREDENTIALS
from .powerbi import POWERBI_CREDENTIALS
from .pushover import PUSHOVER_CREDENTIALS
from .quickbooks import QUICKBOOKS_CREDENTIALS
from .razorpay import RAZORPAY_CREDENTIALS
from .reddit import REDDIT_CREDENTIALS
from .redis import REDIS_CREDENTIALS
from .redshift import REDSHIFT_CREDENTIALS
from .salesforce import SALESFORCE_CREDENTIALS
from .sap import SAP_CREDENTIALS
from .search import SEARCH_CREDENTIALS
from .serpapi import SERPAPI_CREDENTIALS
from .shell_config import (
@@ -89,26 +127,49 @@ from .shell_config import (
get_shell_config_path,
get_shell_source_command,
)
from .shopify import SHOPIFY_CREDENTIALS
from .slack import SLACK_CREDENTIALS
from .snowflake import SNOWFLAKE_CREDENTIALS
from .store_adapter import CredentialStoreAdapter
from .stripe import STRIPE_CREDENTIALS
from .supabase import SUPABASE_CREDENTIALS
from .telegram import TELEGRAM_CREDENTIALS
from .terraform import TERRAFORM_CREDENTIALS
from .tines import TINES_CREDENTIALS
from .trello import TRELLO_CREDENTIALS
from .twilio import TWILIO_CREDENTIALS
from .twitter import TWITTER_CREDENTIALS
from .vercel import VERCEL_CREDENTIALS
from .youtube import YOUTUBE_CREDENTIALS
from .zendesk import ZENDESK_CREDENTIALS
from .zoho_crm import ZOHO_CRM_CREDENTIALS
from .zoom import ZOOM_CREDENTIALS
# Merged registry of all credentials
CREDENTIAL_SPECS = {
**AIRTABLE_CREDENTIALS,
**LLM_CREDENTIALS,
**NEWS_CREDENTIALS,
**SEARCH_CREDENTIALS,
**EMAIL_CREDENTIALS,
**GCP_VISION_CREDENTIALS,
**APIFY_CREDENTIALS,
**AWS_S3_CREDENTIALS,
**ASANA_CREDENTIALS,
**APOLLO_CREDENTIALS,
**ATTIO_CREDENTIALS,
**DISCORD_CREDENTIALS,
**GITHUB_CREDENTIALS,
**GOOGLE_ANALYTICS_CREDENTIALS,
**GOOGLE_DOCS_CREDENTIALS,
**GOOGLE_MAPS_CREDENTIALS,
**GOOGLE_SEARCH_CONSOLE_CREDENTIALS,
**HUGGINGFACE_CREDENTIALS,
**HUBSPOT_CREDENTIALS,
**INTERCOM_CREDENTIALS,
**LINEAR_CREDENTIALS,
**MONGODB_CREDENTIALS,
**PAGERDUTY_CREDENTIALS,
**GOOGLE_CALENDAR_CREDENTIALS,
**SLACK_CREDENTIALS,
**SERPAPI_CREDENTIALS,
@@ -116,9 +177,50 @@ CREDENTIAL_SPECS = {
**TELEGRAM_CREDENTIALS,
**BIGQUERY_CREDENTIALS,
**CALCOM_CREDENTIALS,
**CALENDLY_CREDENTIALS,
**DATABRICKS_CREDENTIALS,
**DOCKER_HUB_CREDENTIALS,
**PIPEDRIVE_CREDENTIALS,
**STRIPE_CREDENTIALS,
**BREVO_CREDENTIALS,
**POSTGRES_CREDENTIALS,
**QUICKBOOKS_CREDENTIALS,
**MICROSOFT_GRAPH_CREDENTIALS,
**PUSHOVER_CREDENTIALS,
**REDIS_CREDENTIALS,
**SUPABASE_CREDENTIALS,
**VERCEL_CREDENTIALS,
**YOUTUBE_CREDENTIALS,
**PINECONE_CREDENTIALS,
**PLAID_CREDENTIALS,
**TRELLO_CREDENTIALS,
**CONFLUENCE_CREDENTIALS,
**CLOUDINARY_CREDENTIALS,
**GITLAB_CREDENTIALS,
**GOOGLE_SHEETS_CREDENTIALS,
**GREENHOUSE_CREDENTIALS,
**JIRA_CREDENTIALS,
**NOTION_CREDENTIALS,
**REDDIT_CREDENTIALS,
**TINES_CREDENTIALS,
**TWITTER_CREDENTIALS,
**TWILIO_CREDENTIALS,
**ZENDESK_CREDENTIALS,
**ZOHO_CRM_CREDENTIALS,
**TERRAFORM_CREDENTIALS,
**LUSHA_CREDENTIALS,
**POWERBI_CREDENTIALS,
**SNOWFLAKE_CREDENTIALS,
**AZURE_SQL_CREDENTIALS,
**KAFKA_CREDENTIALS,
**REDSHIFT_CREDENTIALS,
**SAP_CREDENTIALS,
**SALESFORCE_CREDENTIALS,
**SHOPIFY_CREDENTIALS,
**ZOOM_CREDENTIALS,
**N8N_CREDENTIALS,
**LANGFUSE_CREDENTIALS,
**OBSIDIAN_CREDENTIALS,
}
__all__ = [
@@ -145,6 +247,7 @@ __all__ = [
# Merged registry
"CREDENTIAL_SPECS",
# Category registries (for direct access if needed)
"AIRTABLE_CREDENTIALS",
"LLM_CREDENTIALS",
"NEWS_CREDENTIALS",
"SEARCH_CREDENTIALS",
@@ -154,18 +257,68 @@ __all__ = [
"GOOGLE_ANALYTICS_CREDENTIALS",
"GOOGLE_DOCS_CREDENTIALS",
"GOOGLE_MAPS_CREDENTIALS",
"GOOGLE_SEARCH_CONSOLE_CREDENTIALS",
"HUGGINGFACE_CREDENTIALS",
"HUBSPOT_CREDENTIALS",
"INTERCOM_CREDENTIALS",
"LINEAR_CREDENTIALS",
"MONGODB_CREDENTIALS",
"PAGERDUTY_CREDENTIALS",
"GOOGLE_CALENDAR_CREDENTIALS",
"SLACK_CREDENTIALS",
"APIFY_CREDENTIALS",
"AWS_S3_CREDENTIALS",
"ASANA_CREDENTIALS",
"APOLLO_CREDENTIALS",
"ATTIO_CREDENTIALS",
"SERPAPI_CREDENTIALS",
"RAZORPAY_CREDENTIALS",
"TELEGRAM_CREDENTIALS",
"BIGQUERY_CREDENTIALS",
"CALCOM_CREDENTIALS",
"CALENDLY_CREDENTIALS",
"DATABRICKS_CREDENTIALS",
"DISCORD_CREDENTIALS",
"DOCKER_HUB_CREDENTIALS",
"PIPEDRIVE_CREDENTIALS",
"STRIPE_CREDENTIALS",
"BREVO_CREDENTIALS",
"POSTGRES_CREDENTIALS",
"QUICKBOOKS_CREDENTIALS",
"MICROSOFT_GRAPH_CREDENTIALS",
"PUSHOVER_CREDENTIALS",
"REDIS_CREDENTIALS",
"SUPABASE_CREDENTIALS",
"VERCEL_CREDENTIALS",
"YOUTUBE_CREDENTIALS",
"PINECONE_CREDENTIALS",
"PLAID_CREDENTIALS",
"TRELLO_CREDENTIALS",
"CONFLUENCE_CREDENTIALS",
"CLOUDINARY_CREDENTIALS",
"GITLAB_CREDENTIALS",
"GOOGLE_SHEETS_CREDENTIALS",
"GREENHOUSE_CREDENTIALS",
"JIRA_CREDENTIALS",
"NOTION_CREDENTIALS",
"REDDIT_CREDENTIALS",
"TINES_CREDENTIALS",
"TWITTER_CREDENTIALS",
"TWILIO_CREDENTIALS",
"ZENDESK_CREDENTIALS",
"ZOHO_CRM_CREDENTIALS",
"TERRAFORM_CREDENTIALS",
"LUSHA_CREDENTIALS",
"POWERBI_CREDENTIALS",
"SNOWFLAKE_CREDENTIALS",
"AZURE_SQL_CREDENTIALS",
"KAFKA_CREDENTIALS",
"REDSHIFT_CREDENTIALS",
"SAP_CREDENTIALS",
"SALESFORCE_CREDENTIALS",
"SHOPIFY_CREDENTIALS",
"ZOOM_CREDENTIALS",
"N8N_CREDENTIALS",
"LANGFUSE_CREDENTIALS",
"OBSIDIAN_CREDENTIALS",
]
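Following the "To add a new credential" steps quoted at the top of this module, a new integration would be wired in roughly as below; the service name, env var, URLs, and tool names are invented for illustration and do not correspond to a real spec in this changeset.

# Hypothetical example of the documented steps: define a spec, then import and merge it here.
# credentials/example_service.py
from .base import CredentialSpec

EXAMPLE_SERVICE_CREDENTIALS = {
    "example_service": CredentialSpec(
        env_var="EXAMPLE_SERVICE_API_KEY",
        tools=["example_list_items", "example_get_item"],
        required=True,
        startup_required=False,
        help_url="https://example.com/api-keys",
        description="Example Service API key (illustrative only)",
        direct_api_key_supported=True,
        api_key_instructions="export EXAMPLE_SERVICE_API_KEY=your-key",
        health_check_endpoint="https://api.example.com/v1/me",
        credential_id="example_service",
        credential_key="api_key",
    ),
}

# credentials/__init__.py (merge step, shown as comments):
# from .example_service import EXAMPLE_SERVICE_CREDENTIALS
# CREDENTIAL_SPECS = {**CREDENTIAL_SPECS, **EXAMPLE_SERVICE_CREDENTIALS}
# __all__.append("EXAMPLE_SERVICE_CREDENTIALS")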
@@ -0,0 +1,37 @@
"""
Airtable credentials.
Contains credentials for the Airtable Web API.
Requires AIRTABLE_PAT (Personal Access Token).
"""
from .base import CredentialSpec
AIRTABLE_CREDENTIALS = {
"airtable_pat": CredentialSpec(
env_var="AIRTABLE_PAT",
tools=[
"airtable_list_records",
"airtable_get_record",
"airtable_create_records",
"airtable_update_records",
"airtable_list_bases",
"airtable_get_base_schema",
],
required=True,
startup_required=False,
help_url="https://airtable.com/create/tokens",
description="Airtable Personal Access Token",
direct_api_key_supported=True,
api_key_instructions="""To set up Airtable API access:
1. Go to https://airtable.com/create/tokens
2. Create a new Personal Access Token
3. Grant scopes: data.records:read, data.records:write, schema.bases:read
4. Select the bases to grant access to
5. Set environment variable:
export AIRTABLE_PAT=your-personal-access-token""",
health_check_endpoint="",
credential_id="airtable_pat",
credential_key="api_key",
),
}
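Since this spec leaves health_check_endpoint empty, a manual check of an AIRTABLE_PAT can hit the same "list bases" endpoint the airtable_list_bases tool targets; a small sketch, assuming httpx is available and the standard Airtable Web API URL.

# Sketch: manually verify an Airtable Personal Access Token against the list-bases endpoint.
import os
import httpx

pat = os.environ["AIRTABLE_PAT"]
resp = httpx.get(
    "https://api.airtable.com/v0/meta/bases",
    headers={"Authorization": f"Bearer {pat}"},
    timeout=10.0,
)
print("PAT looks valid" if resp.status_code == 200 else f"Check failed: HTTP {resp.status_code}")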
+34
View File
@@ -0,0 +1,34 @@
"""
Apify credentials.
Contains credentials for Apify web scraping and automation platform.
"""
from .base import CredentialSpec
APIFY_CREDENTIALS = {
"apify": CredentialSpec(
env_var="APIFY_API_TOKEN",
tools=[
"apify_run_actor",
"apify_get_run",
"apify_get_dataset_items",
"apify_list_actors",
"apify_list_runs",
"apify_get_kv_store_record",
],
required=True,
startup_required=False,
help_url="https://docs.apify.com/api/v2",
description="Apify API token for running web scraping actors and retrieving datasets",
direct_api_key_supported=True,
api_key_instructions="""To get an Apify API token:
1. Go to https://console.apify.com/account/integrations
2. Copy your personal API token
3. Set the environment variable:
export APIFY_API_TOKEN=your-api-token""",
health_check_endpoint="https://api.apify.com/v2/users/me",
credential_id="apify",
credential_key="api_key",
),
}
+35
View File
@@ -0,0 +1,35 @@
"""
Asana credentials.
Contains credentials for Asana task and project management.
"""
from .base import CredentialSpec
ASANA_CREDENTIALS = {
"asana": CredentialSpec(
env_var="ASANA_ACCESS_TOKEN",
tools=[
"asana_list_workspaces",
"asana_list_projects",
"asana_list_tasks",
"asana_get_task",
"asana_create_task",
"asana_search_tasks",
],
required=True,
startup_required=False,
help_url="https://developers.asana.com/docs/personal-access-token",
description="Asana personal access token for task and project management",
direct_api_key_supported=True,
api_key_instructions="""To get an Asana personal access token:
1. Go to https://app.asana.com/0/my-apps
2. Click 'Create new token'
3. Give it a name and copy the token
4. Set the environment variable:
export ASANA_ACCESS_TOKEN=your-pat""",
health_check_endpoint="https://app.asana.com/api/1.0/users/me",
credential_id="asana",
credential_key="api_key",
),
}
+55
View File
@@ -0,0 +1,55 @@
"""
Attio tool credentials.
Contains credentials for Attio CRM integration.
"""
from .base import CredentialSpec
ATTIO_CREDENTIALS = {
"attio": CredentialSpec(
env_var="ATTIO_API_KEY",
tools=[
"attio_record_list",
"attio_record_get",
"attio_record_create",
"attio_record_update",
"attio_record_assert",
"attio_list_lists",
"attio_list_entries_get",
"attio_list_entry_create",
"attio_list_entry_delete",
"attio_task_create",
"attio_task_list",
"attio_task_get",
"attio_task_delete",
"attio_members_list",
"attio_member_get",
],
required=True,
startup_required=False,
help_url="https://attio.com/help/apps/other-apps/generating-an-api-key",
description="Attio API key for CRM integration",
# Auth method support
aden_supported=False,
direct_api_key_supported=True,
api_key_instructions="""To get an Attio API key:
1. Go to Attio Settings > Developers > Access tokens
2. Click "Generate new token"
3. Name your token (e.g., "Hive Agent")
4. Select required scopes:
- record_permission:read-write
- object_configuration:read
- list_entry:read-write
- list_configuration:read
- task:read-write
- user_management:read
5. Copy the generated token""",
# Health check configuration
health_check_endpoint="https://api.attio.com/v2/workspace_members",
health_check_method="GET",
# Credential store mapping
credential_id="attio",
credential_key="api_key",
),
}
@@ -0,0 +1,57 @@
"""
AWS S3 credentials.
Contains credentials for AWS S3 REST API with SigV4 signing.
Requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
"""
from .base import CredentialSpec
AWS_S3_CREDENTIALS = {
"aws_access_key": CredentialSpec(
env_var="AWS_ACCESS_KEY_ID",
tools=[
"s3_list_buckets",
"s3_list_objects",
"s3_get_object",
"s3_put_object",
"s3_delete_object",
],
required=True,
startup_required=False,
help_url="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html",
description="AWS Access Key ID for S3 API access",
direct_api_key_supported=True,
api_key_instructions="""To set up AWS S3 API access:
1. Go to AWS IAM > Users > Security credentials
2. Create a new access key
3. Set environment variables:
export AWS_ACCESS_KEY_ID=your-access-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-access-key
export AWS_REGION=us-east-1""",
health_check_endpoint="",
credential_id="aws_access_key",
credential_key="api_key",
credential_group="aws",
),
"aws_secret_key": CredentialSpec(
env_var="AWS_SECRET_ACCESS_KEY",
tools=[
"s3_list_buckets",
"s3_list_objects",
"s3_get_object",
"s3_put_object",
"s3_delete_object",
],
required=True,
startup_required=False,
help_url="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html",
description="AWS Secret Access Key for S3 API access",
direct_api_key_supported=True,
api_key_instructions="""See AWS_ACCESS_KEY_ID instructions above.""",
health_check_endpoint="",
credential_id="aws_secret_key",
credential_key="api_key",
credential_group="aws",
),
}
@@ -0,0 +1,55 @@
"""
Azure SQL Database management credentials.
Contains credentials for the Azure SQL REST API (management plane).
Requires AZURE_SQL_ACCESS_TOKEN and AZURE_SUBSCRIPTION_ID.
"""
from .base import CredentialSpec
AZURE_SQL_CREDENTIALS = {
"azure_sql_token": CredentialSpec(
env_var="AZURE_SQL_ACCESS_TOKEN",
tools=[
"azure_sql_list_servers",
"azure_sql_get_server",
"azure_sql_list_databases",
"azure_sql_get_database",
"azure_sql_list_firewall_rules",
],
required=True,
startup_required=False,
help_url="https://learn.microsoft.com/en-us/rest/api/sql/",
description="Azure Bearer token for SQL management API (scope: management.azure.com)",
direct_api_key_supported=True,
api_key_instructions="""To set up Azure SQL management API access:
1. Register an app in Azure AD (Entra ID)
2. Assign SQL DB Contributor or Reader role
3. Obtain a token via client credentials flow (scope: https://management.azure.com/.default)
4. Set environment variables:
export AZURE_SQL_ACCESS_TOKEN=your-bearer-token
export AZURE_SUBSCRIPTION_ID=your-subscription-id""",
health_check_endpoint="",
credential_id="azure_sql_token",
credential_key="api_key",
),
"azure_subscription_id": CredentialSpec(
env_var="AZURE_SUBSCRIPTION_ID",
tools=[
"azure_sql_list_servers",
"azure_sql_get_server",
"azure_sql_list_databases",
"azure_sql_get_database",
"azure_sql_list_firewall_rules",
],
required=True,
startup_required=False,
help_url="https://learn.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id",
description="Azure subscription ID for resource management",
direct_api_key_supported=True,
api_key_instructions="""See AZURE_SQL_ACCESS_TOKEN instructions above.""",
health_check_endpoint="",
credential_id="azure_subscription_id",
credential_key="api_key",
),
}
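For reference, a hedged sketch of the client-credentials flow described in the instructions above; the AZURE_TENANT_ID / AZURE_CLIENT_ID / AZURE_CLIENT_SECRET variable names and the api-version value are assumptions, not part of this change:

import os
import httpx

# Exchange app registration credentials for a management-plane token (placeholder env vars)
token = httpx.post(
    f"https://login.microsoftonline.com/{os.environ['AZURE_TENANT_ID']}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": os.environ["AZURE_CLIENT_ID"],
        "client_secret": os.environ["AZURE_CLIENT_SECRET"],
        "scope": "https://management.azure.com/.default",
    },
).json()["access_token"]

# List SQL servers in the subscription (api-version is an assumption; check the REST docs)
servers = httpx.get(
    f"https://management.azure.com/subscriptions/{os.environ['AZURE_SUBSCRIPTION_ID']}"
    "/providers/Microsoft.Sql/servers?api-version=2021-11-01",
    headers={"Authorization": f"Bearer {token}"},
).json()
print([s["name"] for s in servers.get("value", [])])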
@@ -1,8 +1,6 @@
"""
Brevo tool credentials.
Contains credentials for Brevo (formerly Sendinblue) transactional email,
SMS, and contact management integration.
Contains credentials for Brevo email and SMS integration.
"""
from .base import CredentialSpec
@@ -16,26 +14,22 @@ BREVO_CREDENTIALS = {
"brevo_create_contact",
"brevo_get_contact",
"brevo_update_contact",
"brevo_get_email_stats",
],
required=True,
startup_required=False,
help_url="https://app.brevo.com/settings/keys/api",
description="Brevo API key for transactional email, SMS, and contact management",
# Auth method support
aden_supported=False,
direct_api_key_supported=True,
api_key_instructions="""To get a Brevo API key:
1. Go to https://app.brevo.com and create an account (or sign in)
2. Navigate to Settings > API Keys (or visit https://app.brevo.com/settings/keys/api)
3. Click "Generate a new API key"
4. Give it a name (e.g., "Hive Agent")
5. Copy the API key (starts with xkeysib-)
6. Store it securely - you won't be able to see it again!
7. Note: For sending emails, you'll need a verified sender domain or email""",
# Health check configuration
1. Sign up or log in at https://www.brevo.com
2. Go to Settings > API Keys
3. Click 'Generate a new API key'
4. Give it a name (e.g., 'Hive Agent')
5. Copy the API key and set it as BREVO_API_KEY""",
health_check_endpoint="https://api.brevo.com/v3/account",
health_check_method="GET",
# Credential store mapping
credential_id="brevo",
credential_key="api_key",
),
@@ -40,6 +40,7 @@ def open_browser(url: str) -> tuple[bool, str]:
["open", url],
check=True,
capture_output=True,
encoding="utf-8",
)
return True, "Opened in browser"
@@ -50,6 +51,7 @@ def open_browser(url: str) -> tuple[bool, str]:
["xdg-open", url],
check=True,
capture_output=True,
encoding="utf-8",
)
return True, "Opened in browser"
except FileNotFoundError:
@@ -0,0 +1,34 @@
"""
Calendly credentials.
Contains credentials for the Calendly API v2.
Requires CALENDLY_PAT (Personal Access Token).
"""
from .base import CredentialSpec
CALENDLY_CREDENTIALS = {
"calendly_pat": CredentialSpec(
env_var="CALENDLY_PAT",
tools=[
"calendly_get_current_user",
"calendly_list_event_types",
"calendly_list_scheduled_events",
"calendly_get_scheduled_event",
"calendly_list_invitees",
],
required=True,
startup_required=False,
help_url="https://developer.calendly.com/how-to-authenticate-with-personal-access-tokens",
description="Calendly Personal Access Token",
direct_api_key_supported=True,
api_key_instructions="""To set up Calendly API access:
1. Go to https://calendly.com/integrations/api_webhooks
2. Generate a Personal Access Token
3. Set environment variable:
export CALENDLY_PAT=your-personal-access-token""",
health_check_endpoint="https://api.calendly.com/users/me",
credential_id="calendly_pat",
credential_key="api_key",
),
}
@@ -0,0 +1,74 @@
"""
Cloudinary credentials.
Contains credentials for Cloudinary image/video management.
Requires CLOUDINARY_CLOUD_NAME, CLOUDINARY_API_KEY, and CLOUDINARY_API_SECRET.
"""
from .base import CredentialSpec
CLOUDINARY_CREDENTIALS = {
"cloudinary_cloud_name": CredentialSpec(
env_var="CLOUDINARY_CLOUD_NAME",
tools=[
"cloudinary_upload",
"cloudinary_list_resources",
"cloudinary_get_resource",
"cloudinary_delete_resource",
"cloudinary_search",
],
required=True,
startup_required=False,
help_url="https://console.cloudinary.com/",
description="Cloudinary cloud name from your dashboard",
direct_api_key_supported=True,
api_key_instructions="""To set up Cloudinary access:
1. Go to https://console.cloudinary.com/
2. Copy your Cloud Name, API Key, and API Secret from the dashboard
3. Set environment variables:
export CLOUDINARY_CLOUD_NAME=your-cloud-name
export CLOUDINARY_API_KEY=your-api-key
export CLOUDINARY_API_SECRET=your-api-secret""",
health_check_endpoint="",
credential_id="cloudinary_cloud_name",
credential_key="api_key",
),
"cloudinary_key": CredentialSpec(
env_var="CLOUDINARY_API_KEY",
tools=[
"cloudinary_upload",
"cloudinary_list_resources",
"cloudinary_get_resource",
"cloudinary_delete_resource",
"cloudinary_search",
],
required=True,
startup_required=False,
help_url="https://console.cloudinary.com/",
description="Cloudinary API key for authentication",
direct_api_key_supported=True,
api_key_instructions="""See CLOUDINARY_CLOUD_NAME instructions above.""",
health_check_endpoint="",
credential_id="cloudinary_key",
credential_key="api_key",
),
"cloudinary_secret": CredentialSpec(
env_var="CLOUDINARY_API_SECRET",
tools=[
"cloudinary_upload",
"cloudinary_list_resources",
"cloudinary_get_resource",
"cloudinary_delete_resource",
"cloudinary_search",
],
required=True,
startup_required=False,
help_url="https://console.cloudinary.com/",
description="Cloudinary API secret for authentication",
direct_api_key_supported=True,
api_key_instructions="""See CLOUDINARY_CLOUD_NAME instructions above.""",
health_check_endpoint="",
credential_id="cloudinary_secret",
credential_key="api_key",
),
}
@@ -0,0 +1,74 @@
"""
Confluence credentials.
Contains credentials for Confluence wiki & knowledge management.
Requires CONFLUENCE_DOMAIN, CONFLUENCE_EMAIL, and CONFLUENCE_API_TOKEN.
"""
from .base import CredentialSpec
CONFLUENCE_CREDENTIALS = {
"confluence_domain": CredentialSpec(
env_var="CONFLUENCE_DOMAIN",
tools=[
"confluence_list_spaces",
"confluence_list_pages",
"confluence_get_page",
"confluence_create_page",
"confluence_search",
],
required=True,
startup_required=False,
help_url="https://id.atlassian.com/manage/api-tokens",
description="Confluence domain (e.g. your-org.atlassian.net)",
direct_api_key_supported=True,
api_key_instructions="""To set up Confluence access:
1. Go to https://id.atlassian.com/manage/api-tokens
2. Click 'Create API token'
3. Set environment variables:
export CONFLUENCE_DOMAIN=your-org.atlassian.net
export CONFLUENCE_EMAIL=your-email@example.com
export CONFLUENCE_API_TOKEN=your-api-token""",
health_check_endpoint="",
credential_id="confluence_domain",
credential_key="api_key",
),
"confluence_email": CredentialSpec(
env_var="CONFLUENCE_EMAIL",
tools=[
"confluence_list_spaces",
"confluence_list_pages",
"confluence_get_page",
"confluence_create_page",
"confluence_search",
],
required=True,
startup_required=False,
help_url="https://id.atlassian.com/manage/api-tokens",
description="Atlassian account email for Confluence authentication",
direct_api_key_supported=True,
api_key_instructions="""See CONFLUENCE_DOMAIN instructions above.""",
health_check_endpoint="",
credential_id="confluence_email",
credential_key="api_key",
),
"confluence_token": CredentialSpec(
env_var="CONFLUENCE_API_TOKEN",
tools=[
"confluence_list_spaces",
"confluence_list_pages",
"confluence_get_page",
"confluence_create_page",
"confluence_search",
],
required=True,
startup_required=False,
help_url="https://id.atlassian.com/manage/api-tokens",
description="Atlassian API token for Confluence authentication",
direct_api_key_supported=True,
api_key_instructions="""See CONFLUENCE_DOMAIN instructions above.""",
health_check_endpoint="",
credential_id="confluence_token",
credential_key="api_key",
),
}
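The three Confluence values above are used together as HTTP basic auth (email + API token) against the domain; a minimal sketch, assuming the Cloud REST path /wiki/rest/api/space is current:

import os
import httpx

resp = httpx.get(
    f"https://{os.environ['CONFLUENCE_DOMAIN']}/wiki/rest/api/space",
    auth=(os.environ["CONFLUENCE_EMAIL"], os.environ["CONFLUENCE_API_TOKEN"]),
    params={"limit": 5},
)
resp.raise_for_status()
print([s["key"] for s in resp.json().get("results", [])])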
@@ -0,0 +1,39 @@
"""
Databricks credentials.
Contains credentials for Databricks workspace, SQL, and job management.
"""
from .base import CredentialSpec
DATABRICKS_CREDENTIALS = {
"databricks": CredentialSpec(
env_var="DATABRICKS_TOKEN",
tools=[
"databricks_sql_query",
"databricks_list_jobs",
"databricks_run_job",
"databricks_get_run",
"databricks_list_clusters",
"databricks_start_cluster",
"databricks_terminate_cluster",
"databricks_list_workspace",
],
required=True,
startup_required=False,
help_url="https://docs.databricks.com/dev-tools/auth/pat.html",
description="Databricks personal access token (also requires DATABRICKS_HOST env var)",
direct_api_key_supported=True,
api_key_instructions="""To get a Databricks personal access token:
1. Go to your Databricks workspace URL
2. Click your username in the top-right > Settings
3. Go to Developer > Access tokens
4. Click Generate new token
5. Set both environment variables:
export DATABRICKS_TOKEN=dapi...
export DATABRICKS_HOST=https://your-workspace.cloud.databricks.com""",
health_check_endpoint="",
credential_id="databricks",
credential_key="api_key",
),
}
@@ -0,0 +1,37 @@
"""
Docker Hub credentials.
Contains credentials for Docker Hub repository and image management.
"""
from .base import CredentialSpec
DOCKER_HUB_CREDENTIALS = {
"docker_hub": CredentialSpec(
env_var="DOCKER_HUB_TOKEN",
tools=[
"docker_hub_search",
"docker_hub_list_repos",
"docker_hub_list_tags",
"docker_hub_get_repo",
],
required=True,
startup_required=False,
help_url="https://hub.docker.com/settings/security",
description=(
"Docker Hub personal access token (also set DOCKER_HUB_USERNAME for listing own repos)"
),
direct_api_key_supported=True,
api_key_instructions="""To get a Docker Hub personal access token:
1. Go to https://hub.docker.com/settings/security
2. Click 'New Access Token'
3. Give it a description and select permissions (Read is sufficient for browsing)
4. Copy the token
5. Set environment variables:
export DOCKER_HUB_TOKEN=your-pat
export DOCKER_HUB_USERNAME=your-username""",
health_check_endpoint="https://hub.docker.com/v2/user/login",
credential_id="docker_hub",
credential_key="api_key",
),
}
@@ -0,0 +1,37 @@
"""
GitLab credentials.
Contains credentials for GitLab projects, issues, and merge requests.
Requires GITLAB_TOKEN. GITLAB_URL is optional (defaults to gitlab.com).
"""
from .base import CredentialSpec
GITLAB_CREDENTIALS = {
"gitlab_token": CredentialSpec(
env_var="GITLAB_TOKEN",
tools=[
"gitlab_list_projects",
"gitlab_get_project",
"gitlab_list_issues",
"gitlab_get_issue",
"gitlab_create_issue",
"gitlab_list_merge_requests",
],
required=True,
startup_required=False,
help_url="https://gitlab.com/-/user_settings/personal_access_tokens",
description="GitLab personal access token",
direct_api_key_supported=True,
api_key_instructions="""To set up GitLab API access:
1. Go to https://gitlab.com/-/user_settings/personal_access_tokens
(or your self-hosted instance equivalent)
2. Create a new token with 'api' scope
3. Set environment variables:
export GITLAB_TOKEN=your-personal-access-token
export GITLAB_URL=https://gitlab.com (optional, defaults to gitlab.com)""",
health_check_endpoint="https://gitlab.com/api/v4/user",
credential_id="gitlab_token",
credential_key="api_key",
),
}
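GitLab authenticates with a PRIVATE-TOKEN header rather than a Bearer header (matching the GitLabHealthChecker later in this diff); a minimal check:

import os
import httpx

resp = httpx.get(
    os.getenv("GITLAB_URL", "https://gitlab.com") + "/api/v4/user",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
)
resp.raise_for_status()
print(resp.json()["username"])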
@@ -0,0 +1,35 @@
"""
Google Search Console credentials.
Contains credentials for Search Console analytics, sitemaps, and URL inspection.
"""
from .base import CredentialSpec
GOOGLE_SEARCH_CONSOLE_CREDENTIALS = {
"google_search_console": CredentialSpec(
env_var="GOOGLE_SEARCH_CONSOLE_TOKEN",
tools=[
"gsc_search_analytics",
"gsc_list_sites",
"gsc_list_sitemaps",
"gsc_inspect_url",
"gsc_submit_sitemap",
],
required=True,
startup_required=False,
help_url="https://developers.google.com/webmaster-tools/v1/prereqs",
description="Google OAuth2 access token with Search Console scope",
direct_api_key_supported=False,
api_key_instructions="""To get a Google Search Console access token:
1. Go to https://console.cloud.google.com/apis/credentials
2. Create an OAuth2 client (type: Desktop app or Web app)
3. Enable the Search Console API in your project
4. Generate an access token with scope: https://www.googleapis.com/auth/webmasters.readonly
5. Set the environment variable:
export GOOGLE_SEARCH_CONSOLE_TOKEN=your-access-token""",
health_check_endpoint="https://www.googleapis.com/webmasters/v3/sites",
credential_id="google_search_console",
credential_key="api_key",
),
}
@@ -0,0 +1,34 @@
"""
Google Sheets credentials.
Contains credentials for Google Sheets spreadsheet access.
Requires GOOGLE_SHEETS_API_KEY for read-only access to public sheets.
"""
from .base import CredentialSpec
GOOGLE_SHEETS_CREDENTIALS = {
"google_sheets_key": CredentialSpec(
env_var="GOOGLE_SHEETS_API_KEY",
tools=[
"sheets_get_spreadsheet",
"sheets_read_range",
"sheets_batch_read",
],
required=True,
startup_required=False,
help_url="https://console.cloud.google.com/apis/credentials",
description="Google API key for reading public Google Sheets",
direct_api_key_supported=True,
api_key_instructions="""To set up Google Sheets API access:
1. Go to https://console.cloud.google.com/apis/credentials
2. Click 'Create Credentials' > 'API Key'
3. Enable the Google Sheets API in APIs & Services > Library
4. Target spreadsheets must be shared as 'Anyone with the link'
5. Set environment variable:
export GOOGLE_SHEETS_API_KEY=your-api-key""",
health_check_endpoint="",
credential_id="google_sheets_key",
credential_key="api_key",
),
}
@@ -0,0 +1,37 @@
"""
Greenhouse credentials.
Contains credentials for Greenhouse ATS & recruiting.
Requires GREENHOUSE_API_TOKEN.
"""
from .base import CredentialSpec
GREENHOUSE_CREDENTIALS = {
"greenhouse_token": CredentialSpec(
env_var="GREENHOUSE_API_TOKEN",
tools=[
"greenhouse_list_jobs",
"greenhouse_get_job",
"greenhouse_list_candidates",
"greenhouse_get_candidate",
"greenhouse_list_applications",
"greenhouse_get_application",
],
required=True,
startup_required=False,
help_url="https://support.greenhouse.io/hc/en-us/articles/202842799-Harvest-API",
description="Greenhouse Harvest API token for ATS access",
direct_api_key_supported=True,
api_key_instructions="""To set up Greenhouse Harvest API access:
1. Go to Greenhouse > Configure > Dev Center > API Credential Management
2. Click 'Create New API Key'
3. Select 'Harvest' as the API type
4. Set permissions (at minimum: Jobs, Candidates, Applications read access)
5. Set environment variable:
export GREENHOUSE_API_TOKEN=your-api-token""",
health_check_endpoint="https://harvest.greenhouse.io/v1/jobs?per_page=1",
credential_id="greenhouse_token",
credential_key="api_key",
),
}
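The Harvest API expects the token as the basic-auth username with an empty password, which is easy to get wrong; a minimal sketch mirroring the health-check endpoint above:

import os
import httpx

resp = httpx.get(
    "https://harvest.greenhouse.io/v1/jobs",
    params={"per_page": 1},
    auth=(os.environ["GREENHOUSE_API_TOKEN"], ""),  # token as username, blank password
)
resp.raise_for_status()
print(resp.status_code)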
@@ -8,6 +8,7 @@ to verify the credential works.
from __future__ import annotations
import os
from dataclasses import dataclass, field
from typing import Any, Protocol
@@ -104,6 +105,72 @@ class HubSpotHealthChecker:
)
class ZohoCRMHealthChecker:
"""Health checker for Zoho CRM credentials."""
TIMEOUT = 10.0
def check(self, access_token: str) -> HealthCheckResult:
"""
Validate Zoho token by making lightweight API call.
Uses /users?type=CurrentUser so module permissions are not required.
"""
api_domain = os.getenv("ZOHO_API_DOMAIN", "https://www.zohoapis.com").rstrip("/")
endpoint = f"{api_domain}/crm/v2/users?type=CurrentUser"
try:
with httpx.Client(timeout=self.TIMEOUT) as client:
response = client.get(
endpoint,
headers={
"Authorization": f"Zoho-oauthtoken {access_token}",
"Accept": "application/json",
},
)
if response.status_code == 200:
return HealthCheckResult(
valid=True,
message="Zoho CRM credentials valid",
)
elif response.status_code == 401:
return HealthCheckResult(
valid=False,
message="Zoho CRM token is invalid or expired",
details={"status_code": 401},
)
elif response.status_code == 403:
return HealthCheckResult(
valid=False,
message="Zoho CRM token lacks required scopes",
details={"status_code": 403},
)
elif response.status_code == 429:
return HealthCheckResult(
valid=True,
message="Zoho CRM credentials valid (rate limited)",
details={"status_code": 429, "rate_limited": True},
)
else:
return HealthCheckResult(
valid=False,
message=f"Zoho CRM API returned status {response.status_code}",
details={"status_code": response.status_code},
)
except httpx.TimeoutException:
return HealthCheckResult(
valid=False,
message="Zoho CRM API request timed out",
details={"error": "timeout"},
)
except httpx.RequestError as e:
return HealthCheckResult(
valid=False,
message=f"Failed to connect to Zoho CRM: {e}",
details={"error": str(e)},
)
class BraveSearchHealthChecker:
"""Health checker for Brave Search API."""
@@ -563,6 +630,66 @@ class SlackHealthChecker:
)
class CalendlyHealthChecker:
"""Health checker for Calendly API tokens."""
ENDPOINT = "https://api.calendly.com/users/me"
TIMEOUT = 10.0
def check(self, api_token: str) -> HealthCheckResult:
"""
Validate Calendly token by calling /users/me.
Makes a GET request to verify the token works.
"""
try:
with httpx.Client(timeout=self.TIMEOUT) as client:
response = client.get(
self.ENDPOINT,
headers={
"Authorization": f"Bearer {api_token}",
"Content-Type": "application/json",
},
)
if response.status_code == 200:
return HealthCheckResult(
valid=True,
message="Calendly token valid",
details={},
)
elif response.status_code == 401:
return HealthCheckResult(
valid=False,
message="Calendly token is invalid or expired",
details={"status_code": 401},
)
elif response.status_code == 403:
return HealthCheckResult(
valid=False,
message="Calendly token access forbidden",
details={"status_code": 403},
)
else:
return HealthCheckResult(
valid=False,
message=f"Calendly API returned status {response.status_code}",
details={"status_code": response.status_code},
)
except httpx.TimeoutException:
return HealthCheckResult(
valid=False,
message="Calendly API request timed out",
details={"error": "timeout"},
)
except httpx.RequestError as e:
return HealthCheckResult(
valid=False,
message=f"Failed to connect to Calendly API: {e}",
details={"error": str(e)},
)
class AnthropicHealthChecker:
"""Health checker for Anthropic API credentials."""
@@ -898,6 +1025,71 @@ class GoogleMapsHealthChecker:
)
class LushaHealthChecker:
"""Health checker for Lusha API credentials."""
ENDPOINT = "https://api.lusha.com/account/usage"
TIMEOUT = 10.0
def check(self, api_key: str) -> HealthCheckResult:
"""
Validate Lusha API key by checking account usage endpoint.
This is a lightweight authenticated request that confirms API access.
"""
try:
with httpx.Client(timeout=self.TIMEOUT) as client:
response = client.get(
self.ENDPOINT,
headers={
"api_key": api_key,
"Accept": "application/json",
},
)
if response.status_code == 200:
return HealthCheckResult(
valid=True,
message="Lusha API key valid",
)
elif response.status_code == 401:
return HealthCheckResult(
valid=False,
message="Lusha API key is invalid",
details={"status_code": 401},
)
elif response.status_code == 403:
return HealthCheckResult(
valid=False,
message="Lusha API key lacks required permissions",
details={"status_code": 403},
)
elif response.status_code == 429:
return HealthCheckResult(
valid=True,
message="Lusha API key valid (rate/credit limited)",
details={"status_code": 429, "rate_limited": True},
)
else:
return HealthCheckResult(
valid=False,
message=f"Lusha API returned status {response.status_code}",
details={"status_code": response.status_code},
)
except httpx.TimeoutException:
return HealthCheckResult(
valid=False,
message="Lusha API request timed out",
details={"error": "timeout"},
)
except httpx.RequestError as e:
return HealthCheckResult(
valid=False,
message=f"Failed to connect to Lusha API: {e}",
details={"error": str(e)},
)
class GoogleGmailHealthChecker(OAuthBearerHealthChecker):
"""Health checker for Google Gmail OAuth tokens."""
@@ -1068,30 +1260,164 @@ class IntercomHealthChecker(OAuthBearerHealthChecker):
)
# --- Simple Bearer-auth checkers ---
class ApifyHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://api.apify.com/v2/users/me"
SERVICE_NAME = "Apify"
class AsanaHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://app.asana.com/api/1.0/users/me"
SERVICE_NAME = "Asana"
class AttioHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://api.attio.com/v2/workspace_members"
SERVICE_NAME = "Attio"
class DockerHubHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://hub.docker.com/v2/user/login"
SERVICE_NAME = "Docker Hub"
class GoogleSearchConsoleHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://www.googleapis.com/webmasters/v3/sites"
SERVICE_NAME = "Google Search Console"
class HuggingFaceHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://huggingface.co/api/whoami-v2"
SERVICE_NAME = "Hugging Face"
class LinearHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://api.linear.app/graphql"
SERVICE_NAME = "Linear"
class MicrosoftGraphHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://graph.microsoft.com/v1.0/me"
SERVICE_NAME = "Microsoft Graph"
class PineconeHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://api.pinecone.io/indexes"
SERVICE_NAME = "Pinecone"
class VercelHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://api.vercel.com/v2/user"
SERVICE_NAME = "Vercel"
# --- Custom-header auth checkers ---
class GitLabHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://gitlab.com/api/v4/user"
SERVICE_NAME = "GitLab"
AUTH_TYPE = BaseHttpHealthChecker.AUTH_HEADER
AUTH_HEADER_NAME = "PRIVATE-TOKEN"
AUTH_HEADER_TEMPLATE = "{token}"
class NotionHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://api.notion.com/v1/users/me"
SERVICE_NAME = "Notion"
def _build_headers(self, credential_value: str) -> dict[str, str]:
headers = super()._build_headers(credential_value)
headers["Notion-Version"] = "2022-06-28"
return headers
# --- Basic-auth checkers ---
class GreenhouseHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://harvest.greenhouse.io/v1/jobs?per_page=1"
SERVICE_NAME = "Greenhouse"
AUTH_TYPE = BaseHttpHealthChecker.AUTH_BASIC
# --- Query-param auth checkers ---
class PipedriveHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://api.pipedrive.com/v1/users/me"
SERVICE_NAME = "Pipedrive"
AUTH_TYPE = BaseHttpHealthChecker.AUTH_QUERY
AUTH_QUERY_PARAM_NAME = "api_token"
class TrelloKeyHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://api.trello.com/1/members/me"
SERVICE_NAME = "Trello"
AUTH_TYPE = BaseHttpHealthChecker.AUTH_QUERY
AUTH_QUERY_PARAM_NAME = "key"
class TrelloTokenHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://api.trello.com/1/members/me"
SERVICE_NAME = "Trello"
AUTH_TYPE = BaseHttpHealthChecker.AUTH_QUERY
AUTH_QUERY_PARAM_NAME = "token"
class YouTubeHealthChecker(BaseHttpHealthChecker):
ENDPOINT = "https://www.googleapis.com/youtube/v3/videoCategories?part=snippet&regionCode=US"
SERVICE_NAME = "YouTube"
AUTH_TYPE = BaseHttpHealthChecker.AUTH_QUERY
AUTH_QUERY_PARAM_NAME = "key"
# Registry of health checkers
HEALTH_CHECKERS: dict[str, CredentialHealthChecker] = {
"discord": DiscordHealthChecker(),
"hubspot": HubSpotHealthChecker(),
"brave_search": BraveSearchHealthChecker(),
"google_calendar_oauth": GoogleCalendarHealthChecker(),
"google": GoogleGmailHealthChecker(),
"slack": SlackHealthChecker(),
"google_search": GoogleSearchHealthChecker(),
"google_maps": GoogleMapsHealthChecker(),
"anthropic": AnthropicHealthChecker(),
"github": GitHubHealthChecker(),
"intercom": IntercomHealthChecker(),
"resend": ResendHealthChecker(),
"stripe": StripeHealthChecker(),
"exa_search": ExaSearchHealthChecker(),
"google_docs": GoogleDocsHealthChecker(),
"calcom": CalcomHealthChecker(),
"serpapi": SerpApiHealthChecker(),
"apify": ApifyHealthChecker(),
"apollo": ApolloHealthChecker(),
"telegram": TelegramHealthChecker(),
"newsdata": NewsdataHealthChecker(),
"finlight": FinlightHealthChecker(),
"asana": AsanaHealthChecker(),
"attio": AttioHealthChecker(),
"brave_search": BraveSearchHealthChecker(),
"brevo": BrevoHealthChecker(),
"calcom": CalcomHealthChecker(),
"calendly_pat": CalendlyHealthChecker(),
"discord": DiscordHealthChecker(),
"docker_hub": DockerHubHealthChecker(),
"exa_search": ExaSearchHealthChecker(),
"finlight": FinlightHealthChecker(),
"github": GitHubHealthChecker(),
"gitlab_token": GitLabHealthChecker(),
"google": GoogleGmailHealthChecker(),
"google_calendar_oauth": GoogleCalendarHealthChecker(),
"google_docs": GoogleDocsHealthChecker(),
"google_maps": GoogleMapsHealthChecker(),
"google_search": GoogleSearchHealthChecker(),
"google_search_console": GoogleSearchConsoleHealthChecker(),
"greenhouse_token": GreenhouseHealthChecker(),
"hubspot": HubSpotHealthChecker(),
"huggingface": HuggingFaceHealthChecker(),
"intercom": IntercomHealthChecker(),
"linear": LinearHealthChecker(),
"lusha_api_key": LushaHealthChecker(),
"microsoft_graph": MicrosoftGraphHealthChecker(),
"newsdata": NewsdataHealthChecker(),
"notion_token": NotionHealthChecker(),
"pinecone": PineconeHealthChecker(),
"pipedrive": PipedriveHealthChecker(),
"resend": ResendHealthChecker(),
"serpapi": SerpApiHealthChecker(),
"slack": SlackHealthChecker(),
"stripe": StripeHealthChecker(),
"telegram": TelegramHealthChecker(),
"trello_key": TrelloKeyHealthChecker(),
"trello_token": TrelloTokenHealthChecker(),
"vercel": VercelHealthChecker(),
"youtube": YouTubeHealthChecker(),
"zoho_crm": ZohoCRMHealthChecker(),
}
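Usage is uniform across the registry: look the checker up by its credential key and pass the raw credential value. A small sketch within this module, assuming the relevant environment variable is set:

import os

# Validate a GitLab token via the registry entry added above
checker = HEALTH_CHECKERS["gitlab_token"]
result = checker.check(os.environ["GITLAB_TOKEN"])
print(result.valid, result.message)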
@@ -0,0 +1,36 @@
"""
HuggingFace credentials.
Contains credentials for HuggingFace Hub API access.
"""
from .base import CredentialSpec
HUGGINGFACE_CREDENTIALS = {
"huggingface": CredentialSpec(
env_var="HUGGINGFACE_TOKEN",
tools=[
"huggingface_search_models",
"huggingface_get_model",
"huggingface_search_datasets",
"huggingface_get_dataset",
"huggingface_search_spaces",
"huggingface_whoami",
],
required=True,
startup_required=False,
help_url="https://huggingface.co/settings/tokens",
description="HuggingFace API token for Hub access (models, datasets, spaces)",
direct_api_key_supported=True,
api_key_instructions="""To get a HuggingFace token:
1. Go to https://huggingface.co/settings/tokens
2. Click 'New token'
3. Choose 'Read' access (or 'Write' for repo management)
4. Copy the token
5. Set the environment variable:
export HUGGINGFACE_TOKEN=hf_your-token""",
health_check_endpoint="https://huggingface.co/api/whoami-v2",
credential_id="huggingface",
credential_key="api_key",
),
}
@@ -0,0 +1,149 @@
"""
Integration credentials.
Contains credentials for third-party service integrations (HubSpot, Linear, etc.).
"""
from .base import CredentialSpec
INTEGRATION_CREDENTIALS = {
"github": CredentialSpec(
env_var="GITHUB_TOKEN",
tools=[
"github_list_repos",
"github_get_repo",
"github_search_repos",
"github_list_issues",
"github_get_issue",
"github_create_issue",
"github_update_issue",
"github_list_pull_requests",
"github_get_pull_request",
"github_create_pull_request",
"github_search_code",
"github_list_branches",
"github_get_branch",
],
required=True,
startup_required=False,
help_url="https://github.com/settings/tokens",
description="GitHub Personal Access Token (classic)",
# Auth method support
aden_supported=False,
direct_api_key_supported=True,
api_key_instructions="""To get a GitHub Personal Access Token:
1. Go to GitHub Settings > Developer settings > Personal access tokens
2. Click "Generate new token" > "Generate new token (classic)"
3. Give your token a descriptive name (e.g., "Hive Agent")
4. Select the following scopes:
- repo (Full control of private repositories)
- read:org (Read org and team membership - optional)
- user (Read user profile data - optional)
5. Click "Generate token" and copy the token (starts with ghp_)
6. Store it securely - you won't be able to see it again!""",
# Health check configuration
health_check_endpoint="https://api.github.com/user",
health_check_method="GET",
# Credential store mapping
credential_id="github",
credential_key="access_token",
),
"hubspot": CredentialSpec(
env_var="HUBSPOT_ACCESS_TOKEN",
tools=[
"hubspot_search_contacts",
"hubspot_get_contact",
"hubspot_create_contact",
"hubspot_update_contact",
"hubspot_search_companies",
"hubspot_get_company",
"hubspot_create_company",
"hubspot_update_company",
"hubspot_search_deals",
"hubspot_get_deal",
"hubspot_create_deal",
"hubspot_update_deal",
],
required=True,
startup_required=False,
help_url="https://developers.hubspot.com/docs/api/private-apps",
description="HubSpot access token (Private App or OAuth2)",
# Auth method support
aden_supported=True,
aden_provider_name="hubspot",
direct_api_key_supported=True,
api_key_instructions="""To get a HubSpot Private App token:
1. Go to HubSpot Settings > Integrations > Private Apps
2. Click "Create a private app"
3. Name your app (e.g., "Hive Agent")
4. Go to the "Scopes" tab and enable:
- crm.objects.contacts.read
- crm.objects.contacts.write
- crm.objects.companies.read
- crm.objects.companies.write
- crm.objects.deals.read
- crm.objects.deals.write
5. Click "Create app" and copy the access token""",
# Health check configuration
health_check_endpoint="https://api.hubapi.com/crm/v3/objects/contacts?limit=1",
health_check_method="GET",
# Credential store mapping
credential_id="hubspot",
credential_key="access_token",
),
"linear": CredentialSpec(
env_var="LINEAR_API_KEY",
tools=[
"linear_issue_create",
"linear_issue_get",
"linear_issue_update",
"linear_issue_delete",
"linear_issue_search",
"linear_issue_add_comment",
"linear_project_create",
"linear_project_get",
"linear_project_update",
"linear_project_list",
"linear_teams_list",
"linear_team_get",
"linear_workflow_states_get",
"linear_label_create",
"linear_labels_list",
"linear_users_list",
"linear_user_get",
"linear_viewer",
],
required=True,
startup_required=False,
help_url="https://linear.app/settings/api",
description="Linear API key or OAuth2 token for project management integration",
# Auth method support
aden_supported=True,
aden_provider_name="linear",
direct_api_key_supported=True,
api_key_instructions="""To get a Linear API key:
1. Go to Linear Settings > API (https://linear.app/settings/api)
2. Click "Create key" under "Personal API keys"
3. Give your key a descriptive label (e.g., "Hive Agent")
4. Copy the generated key (starts with 'lin_api_')
5. Store it securely - you won't be able to see it again!
Note: Personal API keys have the same permissions as your user account.
To create an OAuth application (for automatic token refresh via Aden):
1. Go to Linear Settings > API (https://linear.app/settings/api)
2. Click "New OAuth application"
3. Fill in the required information:
- Application name (e.g., "Hive Agent")
- Developer name
- Other required fields
4. Click "Create"
5. Copy your client ID and client secret""",
# Health check configuration
health_check_endpoint="https://api.linear.app/graphql",
health_check_method="POST",
# Credential store mapping
credential_id="linear",
credential_key="api_key",
),
}
@@ -0,0 +1,77 @@
"""
Jira credentials.
Contains credentials for Jira Cloud issue tracking.
Requires JIRA_DOMAIN, JIRA_EMAIL, and JIRA_API_TOKEN.
"""
from .base import CredentialSpec
JIRA_CREDENTIALS = {
"jira_domain": CredentialSpec(
env_var="JIRA_DOMAIN",
tools=[
"jira_search_issues",
"jira_get_issue",
"jira_create_issue",
"jira_list_projects",
"jira_get_project",
"jira_add_comment",
],
required=True,
startup_required=False,
help_url="https://id.atlassian.com/manage/api-tokens",
description="Jira Cloud domain (e.g. your-org.atlassian.net)",
direct_api_key_supported=True,
api_key_instructions="""To set up Jira API access:
1. Go to https://id.atlassian.com/manage/api-tokens
2. Click 'Create API token'
3. Set environment variables:
export JIRA_DOMAIN=your-org.atlassian.net
export JIRA_EMAIL=your-email@example.com
export JIRA_API_TOKEN=your-api-token""",
health_check_endpoint="",
credential_id="jira_domain",
credential_key="api_key",
),
"jira_email": CredentialSpec(
env_var="JIRA_EMAIL",
tools=[
"jira_search_issues",
"jira_get_issue",
"jira_create_issue",
"jira_list_projects",
"jira_get_project",
"jira_add_comment",
],
required=True,
startup_required=False,
help_url="https://id.atlassian.com/manage/api-tokens",
description="Atlassian account email for Jira authentication",
direct_api_key_supported=True,
api_key_instructions="""See JIRA_DOMAIN instructions above.""",
health_check_endpoint="",
credential_id="jira_email",
credential_key="api_key",
),
"jira_token": CredentialSpec(
env_var="JIRA_API_TOKEN",
tools=[
"jira_search_issues",
"jira_get_issue",
"jira_create_issue",
"jira_list_projects",
"jira_get_project",
"jira_add_comment",
],
required=True,
startup_required=False,
help_url="https://id.atlassian.com/manage/api-tokens",
description="Atlassian API token for Jira authentication",
direct_api_key_supported=True,
api_key_instructions="""See JIRA_DOMAIN instructions above.""",
health_check_endpoint="",
credential_id="jira_token",
credential_key="api_key",
),
}
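Like Confluence, Jira Cloud combines the email and API token as HTTP basic auth against the domain; a minimal sketch using the /rest/api/3/myself endpoint as a lightweight authenticated call:

import os
import httpx

resp = httpx.get(
    f"https://{os.environ['JIRA_DOMAIN']}/rest/api/3/myself",
    auth=(os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"]),
)
resp.raise_for_status()
print(resp.json()["displayName"])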
@@ -0,0 +1,59 @@
"""
Apache Kafka (Confluent REST Proxy) credentials.
Contains credentials for the Kafka REST Proxy API.
Requires KAFKA_REST_URL and KAFKA_CLUSTER_ID. Optional KAFKA_API_KEY + KAFKA_API_SECRET.
"""
from .base import CredentialSpec
KAFKA_CREDENTIALS = {
"kafka_rest_url": CredentialSpec(
env_var="KAFKA_REST_URL",
tools=[
"kafka_list_topics",
"kafka_get_topic",
"kafka_create_topic",
"kafka_produce_message",
"kafka_list_consumer_groups",
"kafka_get_consumer_group_lag",
],
required=True,
startup_required=False,
help_url="https://docs.confluent.io/platform/current/kafka-rest/index.html",
description="Kafka REST Proxy URL (e.g. 'https://pkc-xxxxx.region.confluent.cloud:443')",
direct_api_key_supported=True,
api_key_instructions="""To set up Kafka REST Proxy access:
1. Get your REST Proxy URL (Confluent Cloud: cluster settings; self-hosted: default port 8082)
2. Get your cluster ID from cluster settings
3. Create an API key pair (Confluent Cloud) or configure SASL auth
4. Set environment variables:
export KAFKA_REST_URL=https://your-rest-proxy-url
export KAFKA_CLUSTER_ID=your-cluster-id
export KAFKA_API_KEY=your-api-key (optional)
export KAFKA_API_SECRET=your-api-secret (optional)""",
health_check_endpoint="",
credential_id="kafka_rest_url",
credential_key="api_key",
),
"kafka_cluster_id": CredentialSpec(
env_var="KAFKA_CLUSTER_ID",
tools=[
"kafka_list_topics",
"kafka_get_topic",
"kafka_create_topic",
"kafka_produce_message",
"kafka_list_consumer_groups",
"kafka_get_consumer_group_lag",
],
required=True,
startup_required=False,
help_url="https://docs.confluent.io/platform/current/kafka-rest/index.html",
description="Kafka cluster ID",
direct_api_key_supported=True,
api_key_instructions="""See KAFKA_REST_URL instructions above.""",
health_check_endpoint="",
credential_id="kafka_cluster_id",
credential_key="api_key",
),
}
@@ -0,0 +1,59 @@
"""
Langfuse LLM observability credentials.
Contains credentials for the Langfuse REST API.
Requires LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY.
Optional LANGFUSE_HOST for self-hosted instances.
"""
from .base import CredentialSpec
LANGFUSE_CREDENTIALS = {
"langfuse_public_key": CredentialSpec(
env_var="LANGFUSE_PUBLIC_KEY",
tools=[
"langfuse_list_traces",
"langfuse_get_trace",
"langfuse_list_scores",
"langfuse_create_score",
"langfuse_list_prompts",
"langfuse_get_prompt",
],
required=True,
startup_required=False,
help_url="https://langfuse.com/docs/api-and-data-platform/features/public-api",
description="Langfuse public key (starts with pk-lf-)",
direct_api_key_supported=True,
api_key_instructions="""To set up Langfuse API access:
1. Create a Langfuse account at https://cloud.langfuse.com
2. Go to Project > Settings > API Keys
3. Create a new key pair
4. Set environment variables:
export LANGFUSE_PUBLIC_KEY=pk-lf-your-public-key
export LANGFUSE_SECRET_KEY=sk-lf-your-secret-key
export LANGFUSE_HOST=https://cloud.langfuse.com (optional, for self-hosted)""",
health_check_endpoint="",
credential_id="langfuse_public_key",
credential_key="api_key",
),
"langfuse_secret_key": CredentialSpec(
env_var="LANGFUSE_SECRET_KEY",
tools=[
"langfuse_list_traces",
"langfuse_get_trace",
"langfuse_list_scores",
"langfuse_create_score",
"langfuse_list_prompts",
"langfuse_get_prompt",
],
required=True,
startup_required=False,
help_url="https://langfuse.com/docs/api-and-data-platform/features/public-api",
description="Langfuse secret key (starts with sk-lf-)",
direct_api_key_supported=True,
api_key_instructions="""See LANGFUSE_PUBLIC_KEY instructions above.""",
health_check_endpoint="",
credential_id="langfuse_secret_key",
credential_key="api_key",
),
}
@@ -0,0 +1,48 @@
"""
Linear credentials.
Contains credentials for Linear issue tracking and project management.
"""
from .base import CredentialSpec
LINEAR_CREDENTIALS = {
"linear": CredentialSpec(
env_var="LINEAR_API_KEY",
tools=[
"linear_issue_create",
"linear_issue_get",
"linear_issue_update",
"linear_issue_delete",
"linear_issue_search",
"linear_issue_add_comment",
"linear_project_create",
"linear_project_get",
"linear_project_update",
"linear_project_list",
"linear_teams_list",
"linear_team_get",
"linear_workflow_states_get",
"linear_label_create",
"linear_labels_list",
"linear_users_list",
"linear_user_get",
"linear_viewer",
],
required=True,
startup_required=False,
help_url="https://linear.app/developers",
description="Linear API key for issue tracking and project management",
direct_api_key_supported=True,
api_key_instructions="""To get a Linear API key:
1. Go to Linear Settings > Account > Security & Access
2. Under 'Personal API Keys', click 'Create key'
3. Choose permissions (Read + Write recommended)
4. Copy the key
5. Set the environment variable:
export LINEAR_API_KEY=lin_api_your-key""",
health_check_endpoint="https://api.linear.app/graphql",
credential_id="linear",
credential_key="api_key",
),
}
@@ -0,0 +1,34 @@
"""
Lusha credentials.
Contains credentials for the Lusha B2B data API.
Requires LUSHA_API_KEY.
"""
from .base import CredentialSpec
LUSHA_CREDENTIALS = {
"lusha_api_key": CredentialSpec(
env_var="LUSHA_API_KEY",
tools=[
"lusha_enrich_person",
"lusha_enrich_company",
"lusha_search_contacts",
"lusha_search_companies",
"lusha_get_usage",
],
required=True,
startup_required=False,
help_url="https://docs.lusha.com/",
description="Lusha API key for B2B contact and company enrichment",
direct_api_key_supported=True,
api_key_instructions="""To set up Lusha API access:
1. Go to dashboard.lusha.com > Enrich > API
2. Copy your API key
3. Set environment variable:
export LUSHA_API_KEY=your-api-key""",
health_check_endpoint="https://api.lusha.com/account/usage",
credential_id="lusha_api_key",
credential_key="api_key",
),
}
@@ -0,0 +1,45 @@
"""
Microsoft Graph API credentials.
Contains credentials for Microsoft 365 services (Outlook, Teams, OneDrive).
"""
from .base import CredentialSpec
MICROSOFT_GRAPH_CREDENTIALS = {
"microsoft_graph": CredentialSpec(
env_var="MICROSOFT_GRAPH_ACCESS_TOKEN",
tools=[
"outlook_list_messages",
"outlook_get_message",
"outlook_send_mail",
"teams_list_teams",
"teams_list_channels",
"teams_send_channel_message",
"teams_get_channel_messages",
"onedrive_search_files",
"onedrive_list_files",
"onedrive_download_file",
"onedrive_upload_file",
],
required=True,
startup_required=False,
help_url="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade",
description="Microsoft Graph OAuth 2.0 access token for Outlook, Teams, and OneDrive",
direct_api_key_supported=True,
api_key_instructions="""To get a Microsoft Graph access token:
1. Go to https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade
2. Register a new application (or select existing)
3. Under API Permissions, add Microsoft Graph permissions:
- Mail.Read, Mail.Send (for Outlook)
- ChannelMessage.Read.All, ChannelMessage.Send (for Teams)
- Files.ReadWrite (for OneDrive)
4. Configure Authentication with redirect URI
5. Get client ID and client secret from Certificates & Secrets
6. Use OAuth 2.0 authorization code flow to obtain access token
7. For quick testing, use https://developer.microsoft.com/en-us/graph/graph-explorer""",
health_check_endpoint="https://graph.microsoft.com/v1.0/me",
credential_id="microsoft_graph",
credential_key="access_token",
),
}
@@ -0,0 +1,78 @@
"""
MongoDB credentials.
Contains credentials for MongoDB Atlas Data API.
Requires MONGODB_DATA_API_URL, MONGODB_API_KEY, and MONGODB_DATA_SOURCE.
"""
from .base import CredentialSpec
MONGODB_CREDENTIALS = {
"mongodb_url": CredentialSpec(
env_var="MONGODB_DATA_API_URL",
tools=[
"mongodb_find",
"mongodb_find_one",
"mongodb_insert_one",
"mongodb_update_one",
"mongodb_delete_one",
"mongodb_aggregate",
],
required=True,
startup_required=False,
help_url="https://www.mongodb.com/docs/atlas/app-services/data-api/",
description="MongoDB Atlas Data API URL (e.g. https://data.mongodb-api.com/app/APP_ID/endpoint/data/v1)",
direct_api_key_supported=True,
api_key_instructions="""To set up MongoDB Atlas Data API access:
1. Go to MongoDB Atlas > App Services > Data API
2. Enable the Data API and copy the URL Endpoint
3. Create an API key
4. Set environment variables:
export MONGODB_DATA_API_URL=your-data-api-url
export MONGODB_API_KEY=your-api-key
export MONGODB_DATA_SOURCE=Cluster0""",
health_check_endpoint="",
credential_id="mongodb_url",
credential_key="api_key",
),
"mongodb_api_key": CredentialSpec(
env_var="MONGODB_API_KEY",
tools=[
"mongodb_find",
"mongodb_find_one",
"mongodb_insert_one",
"mongodb_update_one",
"mongodb_delete_one",
"mongodb_aggregate",
],
required=True,
startup_required=False,
help_url="https://www.mongodb.com/docs/atlas/app-services/data-api/",
description="MongoDB Atlas Data API key",
direct_api_key_supported=True,
api_key_instructions="""See MONGODB_DATA_API_URL instructions above.""",
health_check_endpoint="",
credential_id="mongodb_api_key",
credential_key="api_key",
),
"mongodb_data_source": CredentialSpec(
env_var="MONGODB_DATA_SOURCE",
tools=[
"mongodb_find",
"mongodb_find_one",
"mongodb_insert_one",
"mongodb_update_one",
"mongodb_delete_one",
"mongodb_aggregate",
],
required=True,
startup_required=False,
help_url="https://www.mongodb.com/docs/atlas/app-services/data-api/",
description="MongoDB cluster name (e.g. 'Cluster0')",
direct_api_key_supported=True,
api_key_instructions="""See MONGODB_DATA_API_URL instructions above.""",
health_check_endpoint="",
credential_id="mongodb_data_source",
credential_key="api_key",
),
}
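The Data API is called by POSTing JSON actions to the endpoint URL with the key in an api-key header; a hedged sketch in which the database and collection names are placeholders:

import os
import httpx

resp = httpx.post(
    os.environ["MONGODB_DATA_API_URL"].rstrip("/") + "/action/findOne",
    headers={"api-key": os.environ["MONGODB_API_KEY"]},
    json={
        "dataSource": os.environ["MONGODB_DATA_SOURCE"],
        "database": "mydb",       # placeholder
        "collection": "mycoll",   # placeholder
        "filter": {},
    },
)
resp.raise_for_status()
print(resp.json().get("document"))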
@@ -0,0 +1,56 @@
"""
n8n workflow automation credentials.
Contains credentials for the n8n REST API v1.
Requires N8N_API_KEY and N8N_BASE_URL.
"""
from .base import CredentialSpec
N8N_CREDENTIALS = {
"n8n": CredentialSpec(
env_var="N8N_API_KEY",
tools=[
"n8n_list_workflows",
"n8n_get_workflow",
"n8n_activate_workflow",
"n8n_deactivate_workflow",
"n8n_list_executions",
"n8n_get_execution",
],
required=True,
startup_required=False,
help_url="https://docs.n8n.io/api/authentication/",
description="n8n API key for workflow management",
direct_api_key_supported=True,
api_key_instructions="""To set up n8n API access:
1. In n8n, go to Settings > API
2. Generate an API key
3. Set environment variables:
export N8N_API_KEY=your-api-key
export N8N_BASE_URL=https://your-n8n-instance.com""",
health_check_endpoint="",
credential_id="n8n",
credential_key="api_key",
),
"n8n_base_url": CredentialSpec(
env_var="N8N_BASE_URL",
tools=[
"n8n_list_workflows",
"n8n_get_workflow",
"n8n_activate_workflow",
"n8n_deactivate_workflow",
"n8n_list_executions",
"n8n_get_execution",
],
required=True,
startup_required=False,
help_url="https://docs.n8n.io/api/",
description="n8n instance base URL (e.g. 'https://your-n8n.example.com')",
direct_api_key_supported=True,
api_key_instructions="""See N8N_API_KEY instructions above.""",
health_check_endpoint="",
credential_id="n8n_base_url",
credential_key="api_key",
),
}
@@ -0,0 +1,37 @@
"""
Notion credentials.
Contains credentials for Notion pages, databases, and search.
Requires NOTION_API_TOKEN.
"""
from .base import CredentialSpec
NOTION_CREDENTIALS = {
"notion_token": CredentialSpec(
env_var="NOTION_API_TOKEN",
tools=[
"notion_search",
"notion_get_page",
"notion_create_page",
"notion_query_database",
"notion_get_database",
],
required=True,
startup_required=False,
help_url="https://www.notion.so/my-integrations",
description="Notion internal integration token",
direct_api_key_supported=True,
api_key_instructions="""To set up Notion API access:
1. Go to https://www.notion.so/my-integrations
2. Click 'New integration'
3. Give it a name, select the workspace, and set capabilities
4. Copy the integration token
5. Share target pages/databases with the integration
6. Set environment variable:
export NOTION_API_TOKEN=your-integration-token""",
health_check_endpoint="https://api.notion.com/v1/users/me",
credential_id="notion_token",
credential_key="api_key",
),
}
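Notion requires a Notion-Version header in addition to the Bearer token (the NotionHealthChecker earlier in this diff injects the same header); a minimal check:

import os
import httpx

resp = httpx.get(
    "https://api.notion.com/v1/users/me",
    headers={
        "Authorization": f"Bearer {os.environ['NOTION_API_TOKEN']}",
        "Notion-Version": "2022-06-28",
    },
)
resp.raise_for_status()
print(resp.json().get("name"))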
@@ -0,0 +1,37 @@
"""
Obsidian Local REST API credentials.
Contains credentials for the Obsidian Local REST API plugin.
Requires OBSIDIAN_REST_API_KEY. Optional OBSIDIAN_REST_BASE_URL.
"""
from .base import CredentialSpec
OBSIDIAN_CREDENTIALS = {
"obsidian": CredentialSpec(
env_var="OBSIDIAN_REST_API_KEY",
tools=[
"obsidian_read_note",
"obsidian_write_note",
"obsidian_append_note",
"obsidian_search",
"obsidian_list_files",
"obsidian_get_active",
],
required=True,
startup_required=False,
help_url="https://github.com/coddingtonbear/obsidian-local-rest-api",
description="Obsidian Local REST API key (64-char hex, from plugin settings)",
direct_api_key_supported=True,
api_key_instructions="""To set up Obsidian Local REST API access:
1. Install the 'Local REST API' community plugin in Obsidian
2. Enable the plugin and go to its settings
3. Copy the API Key (64-character hex string)
4. Set environment variables:
export OBSIDIAN_REST_API_KEY=your-api-key
export OBSIDIAN_REST_BASE_URL=https://127.0.0.1:27124 (optional)""",
health_check_endpoint="",
credential_id="obsidian",
credential_key="api_key",
),
}
@@ -0,0 +1,51 @@
"""
PagerDuty credentials.
Contains credentials for PagerDuty REST API v2.
Requires PAGERDUTY_API_KEY and optionally PAGERDUTY_FROM_EMAIL.
"""
from .base import CredentialSpec
PAGERDUTY_CREDENTIALS = {
"pagerduty_api_key": CredentialSpec(
env_var="PAGERDUTY_API_KEY",
tools=[
"pagerduty_list_incidents",
"pagerduty_get_incident",
"pagerduty_create_incident",
"pagerduty_update_incident",
"pagerduty_list_services",
],
required=True,
startup_required=False,
help_url="https://support.pagerduty.com/docs/api-access-keys",
description="PagerDuty REST API key (account-level or user-level)",
direct_api_key_supported=True,
api_key_instructions="""To set up PagerDuty API access:
1. Go to PagerDuty > Integrations > API Access Keys
2. Create a new REST API key
3. Set environment variables:
export PAGERDUTY_API_KEY=your-api-key
export PAGERDUTY_FROM_EMAIL=your-pagerduty-email@example.com""",
health_check_endpoint="",
credential_id="pagerduty_api_key",
credential_key="api_key",
),
"pagerduty_from_email": CredentialSpec(
env_var="PAGERDUTY_FROM_EMAIL",
tools=[
"pagerduty_create_incident",
"pagerduty_update_incident",
],
required=False,
startup_required=False,
help_url="https://support.pagerduty.com/docs/api-access-keys",
description="PagerDuty user email (required for write operations)",
direct_api_key_supported=True,
api_key_instructions="""See PAGERDUTY_API_KEY instructions above.""",
health_check_endpoint="",
credential_id="pagerduty_from_email",
credential_key="api_key",
),
}
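PagerDuty uses a token-style Authorization header, and write calls additionally need a From header carrying the user email; a minimal read-only sketch:

import os
import httpx

resp = httpx.get(
    "https://api.pagerduty.com/incidents",
    headers={
        "Authorization": f"Token token={os.environ['PAGERDUTY_API_KEY']}",
        "Accept": "application/vnd.pagerduty+json;version=2",
    },
    params={"limit": 1},
)
resp.raise_for_status()
print(len(resp.json().get("incidents", [])))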
@@ -0,0 +1,38 @@
"""
Pinecone credentials.
Contains credentials for Pinecone vector database operations.
"""
from .base import CredentialSpec
PINECONE_CREDENTIALS = {
"pinecone": CredentialSpec(
env_var="PINECONE_API_KEY",
tools=[
"pinecone_list_indexes",
"pinecone_create_index",
"pinecone_describe_index",
"pinecone_delete_index",
"pinecone_upsert_vectors",
"pinecone_query_vectors",
"pinecone_fetch_vectors",
"pinecone_delete_vectors",
"pinecone_index_stats",
],
required=True,
startup_required=False,
help_url="https://app.pinecone.io/",
description="API key for Pinecone vector database operations",
direct_api_key_supported=True,
api_key_instructions="""To get a Pinecone API key:
1. Go to https://app.pinecone.io/ and sign up or log in
2. Navigate to 'API Keys' in the left sidebar
3. Click 'Create API Key' or copy the default key
4. Set the environment variable:
export PINECONE_API_KEY=your-api-key""",
health_check_endpoint="https://api.pinecone.io/indexes",
credential_id="pinecone",
credential_key="api_key",
),
}
@@ -0,0 +1,42 @@
"""
Pipedrive CRM credentials.
Contains credentials for Pipedrive deal, contact, and pipeline management.
"""
from .base import CredentialSpec
PIPEDRIVE_CREDENTIALS = {
"pipedrive": CredentialSpec(
env_var="PIPEDRIVE_API_TOKEN",
tools=[
"pipedrive_list_deals",
"pipedrive_get_deal",
"pipedrive_create_deal",
"pipedrive_list_persons",
"pipedrive_search_persons",
"pipedrive_list_organizations",
"pipedrive_list_activities",
"pipedrive_list_pipelines",
"pipedrive_list_stages",
"pipedrive_add_note",
],
required=True,
startup_required=False,
help_url="https://pipedrive.readme.io/docs/core-api-concepts-about-pipedrive-api",
description=(
"Pipedrive API token for CRM management (also set PIPEDRIVE_DOMAIN for custom domains)"
),
direct_api_key_supported=True,
api_key_instructions="""To get a Pipedrive API token:
1. Log in to your Pipedrive account
2. Go to Settings > Personal preferences > API
3. Copy your personal API token
4. Set environment variables:
export PIPEDRIVE_API_TOKEN=your-api-token
export PIPEDRIVE_DOMAIN=your-company.pipedrive.com""",
health_check_endpoint="https://api.pipedrive.com/v1/users/me",
credential_id="pipedrive",
credential_key="api_key",
),
}
@@ -0,0 +1,61 @@
"""
Plaid credentials.
Contains credentials for Plaid banking & financial data operations.
Plaid requires both PLAID_CLIENT_ID and PLAID_SECRET.
"""
from .base import CredentialSpec
PLAID_CREDENTIALS = {
"plaid_client_id": CredentialSpec(
env_var="PLAID_CLIENT_ID",
tools=[
"plaid_get_accounts",
"plaid_get_balance",
"plaid_sync_transactions",
"plaid_get_transactions",
"plaid_get_institution",
"plaid_search_institutions",
],
required=True,
startup_required=False,
help_url="https://dashboard.plaid.com/developers/keys",
description=(
"Plaid client ID for banking data access"
" (also set PLAID_SECRET and optionally PLAID_ENV)"
),
direct_api_key_supported=True,
api_key_instructions="""To get Plaid credentials:
1. Sign up at https://dashboard.plaid.com/
2. Go to Developers > Keys
3. Copy your client_id and secret
4. Set environment variables:
export PLAID_CLIENT_ID=your-client-id
export PLAID_SECRET=your-secret
export PLAID_ENV=sandbox (or development, production)""",
health_check_endpoint="https://sandbox.plaid.com/institutions/search",
credential_id="plaid_client_id",
credential_key="api_key",
),
"plaid_secret": CredentialSpec(
env_var="PLAID_SECRET",
tools=[
"plaid_get_accounts",
"plaid_get_balance",
"plaid_sync_transactions",
"plaid_get_transactions",
"plaid_get_institution",
"plaid_search_institutions",
],
required=True,
startup_required=False,
help_url="https://dashboard.plaid.com/developers/keys",
description="Plaid API secret for banking data access",
direct_api_key_supported=True,
api_key_instructions="""See PLAID_CLIENT_ID instructions above.""",
health_check_endpoint="https://sandbox.plaid.com/institutions/search",
credential_id="plaid_secret",
credential_key="api_key",
),
}
@@ -0,0 +1,35 @@
"""
Power BI credentials.
Contains credentials for the Microsoft Power BI REST API.
Requires POWERBI_ACCESS_TOKEN (OAuth2 Bearer token).
"""
from .base import CredentialSpec
POWERBI_CREDENTIALS = {
"powerbi_token": CredentialSpec(
env_var="POWERBI_ACCESS_TOKEN",
tools=[
"powerbi_list_workspaces",
"powerbi_list_datasets",
"powerbi_list_reports",
"powerbi_refresh_dataset",
"powerbi_get_refresh_history",
],
required=True,
startup_required=False,
help_url="https://learn.microsoft.com/en-us/rest/api/power-bi/",
description="Power BI OAuth2 access token for API access",
direct_api_key_supported=True,
api_key_instructions="""To set up Power BI API access:
1. Register an app in Azure AD (Entra ID)
2. Grant Power BI API permissions (Workspace.Read.All, Dataset.ReadWrite.All, Report.Read.All)
3. Obtain an access token via client credentials or authorization code flow
4. Set environment variable:
export POWERBI_ACCESS_TOKEN=your-oauth-access-token""",
health_check_endpoint="",
credential_id="powerbi_token",
credential_key="api_key",
),
}
@@ -0,0 +1,35 @@
"""
Pushover credentials.
Contains credentials for Pushover push notification service.
"""
from .base import CredentialSpec
PUSHOVER_CREDENTIALS = {
"pushover": CredentialSpec(
env_var="PUSHOVER_API_TOKEN",
tools=[
"pushover_send",
"pushover_validate_user",
"pushover_list_sounds",
"pushover_check_receipt",
],
required=True,
startup_required=False,
help_url="https://pushover.net/apps/build",
description="Pushover application API token",
direct_api_key_supported=True,
api_key_instructions="""To get a Pushover API token:
1. Go to https://pushover.net/ and create an account
2. Go to https://pushover.net/apps/build
3. Create a new application/API token
4. Copy the API Token/Key
5. Your User Key is on the main dashboard at https://pushover.net/
6. Set environment variable:
export PUSHOVER_API_TOKEN=your-app-token""",
health_check_endpoint="",
credential_id="pushover",
credential_key="api_key",
),
}
@@ -0,0 +1,55 @@
"""
QuickBooks Online credentials.
Contains credentials for QuickBooks Online Accounting API.
Requires QUICKBOOKS_ACCESS_TOKEN and QUICKBOOKS_REALM_ID.
"""
from .base import CredentialSpec
QUICKBOOKS_CREDENTIALS = {
"quickbooks_token": CredentialSpec(
env_var="QUICKBOOKS_ACCESS_TOKEN",
tools=[
"quickbooks_query",
"quickbooks_get_entity",
"quickbooks_create_customer",
"quickbooks_create_invoice",
"quickbooks_get_company_info",
],
required=True,
startup_required=False,
help_url="https://developer.intuit.com/app/developer/qbo/docs/develop/authentication-and-authorization",
description="QuickBooks OAuth 2.0 access token",
direct_api_key_supported=False,
api_key_instructions="""To set up QuickBooks API access:
1. Create an app at https://developer.intuit.com
2. Complete OAuth 2.0 authorization flow
3. Set environment variables:
export QUICKBOOKS_ACCESS_TOKEN=your-oauth-access-token
export QUICKBOOKS_REALM_ID=your-company-id
export QUICKBOOKS_SANDBOX=true # optional, for sandbox""",
health_check_endpoint="",
credential_id="quickbooks_token",
credential_key="api_key",
),
"quickbooks_realm_id": CredentialSpec(
env_var="QUICKBOOKS_REALM_ID",
tools=[
"quickbooks_query",
"quickbooks_get_entity",
"quickbooks_create_customer",
"quickbooks_create_invoice",
"quickbooks_get_company_info",
],
required=True,
startup_required=False,
help_url="https://developer.intuit.com/app/developer/qbo/docs/develop/authentication-and-authorization",
description="QuickBooks company (realm) ID",
direct_api_key_supported=True,
api_key_instructions="""See QUICKBOOKS_ACCESS_TOKEN instructions above.""",
health_check_endpoint="",
credential_id="quickbooks_realm_id",
credential_key="api_key",
),
}
@@ -0,0 +1,55 @@
"""
Reddit credentials.
Contains credentials for Reddit community content monitoring and search.
Requires REDDIT_CLIENT_ID and REDDIT_CLIENT_SECRET.
"""
from .base import CredentialSpec
REDDIT_CREDENTIALS = {
"reddit_client_id": CredentialSpec(
env_var="REDDIT_CLIENT_ID",
tools=[
"reddit_search",
"reddit_get_posts",
"reddit_get_comments",
"reddit_get_user",
],
required=True,
startup_required=False,
help_url="https://www.reddit.com/prefs/apps",
description="Reddit app client ID for OAuth2 authentication",
direct_api_key_supported=True,
api_key_instructions="""To set up Reddit API access:
1. Go to https://www.reddit.com/prefs/apps
2. Click 'create another app...' at the bottom
3. Select 'script' as the app type
4. Fill in the name and redirect URI (http://localhost)
5. Copy the client ID (under the app name) and secret
6. Set environment variables:
export REDDIT_CLIENT_ID=your-client-id
export REDDIT_CLIENT_SECRET=your-client-secret""",
health_check_endpoint="",
credential_id="reddit_client_id",
credential_key="api_key",
),
"reddit_secret": CredentialSpec(
env_var="REDDIT_CLIENT_SECRET",
tools=[
"reddit_search",
"reddit_get_posts",
"reddit_get_comments",
"reddit_get_user",
],
required=True,
startup_required=False,
help_url="https://www.reddit.com/prefs/apps",
description="Reddit app client secret for OAuth2 authentication",
direct_api_key_supported=True,
api_key_instructions="""See REDDIT_CLIENT_ID instructions above.""",
health_check_endpoint="",
credential_id="reddit_secret",
credential_key="api_key",
),
}
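For app-only access, the client ID and secret are exchanged for a bearer token via the client-credentials grant, then requests go to the oauth.reddit.com host; a hedged sketch where the User-Agent string is a placeholder (Reddit requires a descriptive one):

import os
import httpx

ua = {"User-Agent": "hive-agent-example/0.1"}  # placeholder; identify your app here
token = httpx.post(
    "https://www.reddit.com/api/v1/access_token",
    data={"grant_type": "client_credentials"},
    auth=(os.environ["REDDIT_CLIENT_ID"], os.environ["REDDIT_CLIENT_SECRET"]),
    headers=ua,
).json()["access_token"]

resp = httpx.get(
    "https://oauth.reddit.com/r/python/hot",
    headers={**ua, "Authorization": f"Bearer {token}"},
    params={"limit": 1},
)
print(resp.status_code)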
@@ -0,0 +1,40 @@
"""
Redis credentials.
Contains credentials for Redis in-memory data store.
"""
from .base import CredentialSpec
REDIS_CREDENTIALS = {
"redis": CredentialSpec(
env_var="REDIS_URL",
tools=[
"redis_get",
"redis_set",
"redis_delete",
"redis_keys",
"redis_hset",
"redis_hgetall",
"redis_lpush",
"redis_lrange",
"redis_publish",
"redis_info",
"redis_ttl",
],
required=True,
startup_required=False,
help_url="https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/",
description="Redis connection URL (e.g. redis://localhost:6379 or redis://:password@host:6379/0)",
direct_api_key_supported=True,
api_key_instructions="""To set up Redis:
1. Install Redis locally: brew install redis (macOS) or apt install redis-server (Linux)
2. Or use a hosted service: Redis Cloud (https://redis.com/cloud/), Upstash, etc.
3. Set the connection URL:
export REDIS_URL=redis://localhost:6379
export REDIS_URL=redis://:your-password@host:port/db-number""",
health_check_endpoint="",
credential_id="redis",
credential_key="url",
),
}
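The URL form above is what redis-py consumes directly; a minimal connectivity check, assuming the redis package is installed:

import os
import redis

r = redis.Redis.from_url(os.environ["REDIS_URL"], decode_responses=True)
r.set("hive:healthcheck", "ok", ex=60)  # expires after 60 seconds
print(r.get("hive:healthcheck"), r.ping())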
@@ -0,0 +1,56 @@
"""
Amazon Redshift Data API credentials.
Contains credentials for the Redshift Data API with SigV4 signing.
Reuses AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
"""
from .base import CredentialSpec
REDSHIFT_CREDENTIALS = {
"redshift_access_key": CredentialSpec(
env_var="AWS_ACCESS_KEY_ID",
tools=[
"redshift_execute_sql",
"redshift_describe_statement",
"redshift_get_results",
"redshift_list_databases",
"redshift_list_tables",
],
required=True,
startup_required=False,
help_url="https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html",
description="AWS Access Key ID for Redshift Data API access",
direct_api_key_supported=True,
api_key_instructions="""To set up Redshift Data API access:
1. Ensure your IAM user has redshift-data:* permissions
2. Set environment variables:
export AWS_ACCESS_KEY_ID=your-access-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-access-key
export AWS_REGION=us-east-1""",
health_check_endpoint="",
credential_id="redshift_access_key",
credential_key="api_key",
credential_group="aws",
),
"redshift_secret_key": CredentialSpec(
env_var="AWS_SECRET_ACCESS_KEY",
tools=[
"redshift_execute_sql",
"redshift_describe_statement",
"redshift_get_results",
"redshift_list_databases",
"redshift_list_tables",
],
required=True,
startup_required=False,
help_url="https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html",
description="AWS Secret Access Key for Redshift Data API access",
direct_api_key_supported=True,
api_key_instructions="""See AWS_ACCESS_KEY_ID instructions above.""",
health_check_endpoint="",
credential_id="redshift_secret_key",
credential_key="api_key",
credential_group="aws",
),
}
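A minimal usage sketch against the Redshift Data API, assuming boto3 (which reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment); the workgroup and database names are placeholders:
import os
import boto3

client = boto3.client("redshift-data",
                      region_name=os.getenv("AWS_REGION", "us-east-1"))
stmt = client.execute_statement(
    WorkgroupName="my-serverless-workgroup",  # placeholder; provisioned clusters use ClusterIdentifier
    Database="dev",
    Sql="SELECT current_date",
)
desc = client.describe_statement(Id=stmt["Id"])
print(desc["Status"])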
@@ -0,0 +1,57 @@
"""
Salesforce CRM credentials.
Contains credentials for the Salesforce REST API.
Requires SALESFORCE_ACCESS_TOKEN and SALESFORCE_INSTANCE_URL.
"""
from .base import CredentialSpec
SALESFORCE_CREDENTIALS = {
"salesforce": CredentialSpec(
env_var="SALESFORCE_ACCESS_TOKEN",
tools=[
"salesforce_soql_query",
"salesforce_get_record",
"salesforce_create_record",
"salesforce_update_record",
"salesforce_describe_object",
"salesforce_list_objects",
],
required=True,
startup_required=False,
help_url="https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest",
description="Salesforce OAuth2 Bearer access token",
direct_api_key_supported=True,
api_key_instructions="""To set up Salesforce REST API access:
1. Create a Connected App in Salesforce Setup
2. Enable OAuth settings and select required scopes (api, full)
3. Use Client Credentials or Username-Password flow to obtain a token
4. Set environment variables:
export SALESFORCE_ACCESS_TOKEN=your-bearer-token
export SALESFORCE_INSTANCE_URL=https://your-org.my.salesforce.com""",
health_check_endpoint="",
credential_id="salesforce",
credential_key="api_key",
),
"salesforce_instance_url": CredentialSpec(
env_var="SALESFORCE_INSTANCE_URL",
tools=[
"salesforce_soql_query",
"salesforce_get_record",
"salesforce_create_record",
"salesforce_update_record",
"salesforce_describe_object",
"salesforce_list_objects",
],
required=True,
startup_required=False,
help_url="https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest",
description="Salesforce instance URL (e.g. 'https://your-org.my.salesforce.com')",
direct_api_key_supported=True,
api_key_instructions="""See SALESFORCE_ACCESS_TOKEN instructions above.""",
health_check_endpoint="",
credential_id="salesforce_instance_url",
credential_key="api_key",
),
}
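A minimal usage sketch of a SOQL query with these two variables, assuming the `requests` package; the REST API version is an assumption:
import os, requests

base = os.environ["SALESFORCE_INSTANCE_URL"].rstrip("/")
resp = requests.get(
    f"{base}/services/data/v59.0/query",  # API version assumed
    params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
    headers={"Authorization": f"Bearer {os.environ['SALESFORCE_ACCESS_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()
for rec in resp.json()["records"]:
    print(rec["Id"], rec["Name"])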
@@ -0,0 +1,74 @@
"""
SAP S/4HANA Cloud credentials.
Contains credentials for the SAP S/4HANA Cloud OData APIs.
Requires SAP_BASE_URL, SAP_USERNAME, and SAP_PASSWORD.
"""
from .base import CredentialSpec
SAP_CREDENTIALS = {
"sap_base_url": CredentialSpec(
env_var="SAP_BASE_URL",
tools=[
"sap_list_purchase_orders",
"sap_get_purchase_order",
"sap_list_business_partners",
"sap_list_products",
"sap_list_sales_orders",
],
required=True,
startup_required=False,
help_url="https://api.sap.com/package/SAPS4HANACloud/odata",
description="SAP S/4HANA Cloud base URL (e.g. 'https://tenant-api.s4hana.ondemand.com')",
direct_api_key_supported=True,
api_key_instructions="""To set up SAP S/4HANA Cloud API access:
1. Create a Communication User in S/4HANA Cloud
2. Set up Communication Arrangements for the APIs you need
3. Set environment variables:
export SAP_BASE_URL=https://your-tenant-api.s4hana.ondemand.com
export SAP_USERNAME=your-communication-user
export SAP_PASSWORD=your-password""",
health_check_endpoint="",
credential_id="sap_base_url",
credential_key="api_key",
),
"sap_username": CredentialSpec(
env_var="SAP_USERNAME",
tools=[
"sap_list_purchase_orders",
"sap_get_purchase_order",
"sap_list_business_partners",
"sap_list_products",
"sap_list_sales_orders",
],
required=True,
startup_required=False,
help_url="https://api.sap.com/package/SAPS4HANACloud/odata",
description="SAP S/4HANA Communication User username",
direct_api_key_supported=True,
api_key_instructions="""See SAP_BASE_URL instructions above.""",
health_check_endpoint="",
credential_id="sap_username",
credential_key="api_key",
),
"sap_password": CredentialSpec(
env_var="SAP_PASSWORD",
tools=[
"sap_list_purchase_orders",
"sap_get_purchase_order",
"sap_list_business_partners",
"sap_list_products",
"sap_list_sales_orders",
],
required=True,
startup_required=False,
help_url="https://api.sap.com/package/SAPS4HANACloud/odata",
description="SAP S/4HANA Communication User password",
direct_api_key_supported=True,
api_key_instructions="""See SAP_BASE_URL instructions above.""",
health_check_endpoint="",
credential_id="sap_password",
credential_key="api_key",
),
}
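A minimal usage sketch of a basic-auth OData call with these variables, assuming the `requests` package; the service path is illustrative and depends on the Communication Arrangements configured in the tenant:
import os, requests

resp = requests.get(
    f"{os.environ['SAP_BASE_URL'].rstrip('/')}"
    "/sap/opu/odata/sap/API_BUSINESS_PARTNER/A_BusinessPartner",  # example service
    params={"$top": "5", "$format": "json"},
    auth=(os.environ["SAP_USERNAME"], os.environ["SAP_PASSWORD"]),
    timeout=30,
)
print(resp.status_code)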
@@ -84,7 +84,7 @@ def check_env_var_in_shell_config(
if not config_path.exists():
return False, None
content = config_path.read_text()
content = config_path.read_text(encoding="utf-8")
# Look for export ENV_VAR=value or export ENV_VAR="value"
pattern = rf"^export\s+{re.escape(env_var)}=(.+)$"
@@ -130,7 +130,7 @@ def add_env_var_to_shell_config(
try:
if config_path.exists():
content = config_path.read_text()
content = config_path.read_text(encoding="utf-8")
# Check if already exists
pattern = rf"^export\s+{re.escape(env_var)}=.*$"
@@ -142,11 +142,11 @@ def add_env_var_to_shell_config(
content,
flags=re.MULTILINE,
)
config_path.write_text(new_content)
config_path.write_text(new_content, encoding="utf-8")
return True, str(config_path)
# Append to file
with open(config_path, "a") as f:
with open(config_path, "a", encoding="utf-8") as f:
f.write(f"\n# {comment}\n")
f.write(f"{export_line}\n")
@@ -178,7 +178,7 @@ def remove_env_var_from_shell_config(
return True, "Config file does not exist"
try:
content = config_path.read_text()
content = config_path.read_text(encoding="utf-8")
lines = content.split("\n")
new_lines = []
@@ -206,7 +206,7 @@ def remove_env_var_from_shell_config(
new_lines.append(line)
config_path.write_text("\n".join(new_lines))
config_path.write_text("\n".join(new_lines), encoding="utf-8")
return True, str(config_path)
except PermissionError:
@@ -0,0 +1,57 @@
"""
Shopify Admin REST API credentials.
Contains credentials for the Shopify Admin API.
Requires SHOPIFY_ACCESS_TOKEN and SHOPIFY_STORE_NAME.
"""
from .base import CredentialSpec
SHOPIFY_CREDENTIALS = {
"shopify": CredentialSpec(
env_var="SHOPIFY_ACCESS_TOKEN",
tools=[
"shopify_list_orders",
"shopify_get_order",
"shopify_list_products",
"shopify_get_product",
"shopify_list_customers",
"shopify_search_customers",
],
required=True,
startup_required=False,
help_url="https://shopify.dev/docs/api/admin-rest",
description="Shopify Admin API access token (starts with shpat_)",
direct_api_key_supported=True,
api_key_instructions="""To set up Shopify Admin API access:
1. In Shopify Admin, go to Settings > Apps and sales channels > Develop apps
2. Create a custom app with scopes: read_orders, read_products, read_customers
3. Install the app and reveal the Admin API access token
4. Set environment variables:
export SHOPIFY_ACCESS_TOKEN=shpat_your-token
export SHOPIFY_STORE_NAME=your-store-name""",
health_check_endpoint="",
credential_id="shopify",
credential_key="api_key",
),
"shopify_store_name": CredentialSpec(
env_var="SHOPIFY_STORE_NAME",
tools=[
"shopify_list_orders",
"shopify_get_order",
"shopify_list_products",
"shopify_get_product",
"shopify_list_customers",
"shopify_search_customers",
],
required=True,
startup_required=False,
help_url="https://shopify.dev/docs/api/admin-rest",
description="Shopify store subdomain (e.g. 'my-store' from my-store.myshopify.com)",
direct_api_key_supported=True,
api_key_instructions="""See SHOPIFY_ACCESS_TOKEN instructions above.""",
health_check_endpoint="",
credential_id="shopify_store_name",
credential_key="api_key",
),
}
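A minimal usage sketch of an Admin REST call with these two variables, assuming the `requests` package; the API version in the URL is an assumption:
import os, requests

store = os.environ["SHOPIFY_STORE_NAME"]
resp = requests.get(
    f"https://{store}.myshopify.com/admin/api/2024-01/orders.json",  # version assumed
    params={"status": "any", "limit": 5},
    headers={"X-Shopify-Access-Token": os.environ["SHOPIFY_ACCESS_TOKEN"]},
    timeout=30,
)
resp.raise_for_status()
print([order["id"] for order in resp.json()["orders"]])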
@@ -0,0 +1,52 @@
"""
Snowflake credentials.
Contains credentials for the Snowflake SQL REST API.
Requires SNOWFLAKE_ACCOUNT and SNOWFLAKE_TOKEN.
"""
from .base import CredentialSpec
SNOWFLAKE_CREDENTIALS = {
"snowflake_account": CredentialSpec(
env_var="SNOWFLAKE_ACCOUNT",
tools=[
"snowflake_execute_sql",
"snowflake_get_statement_status",
"snowflake_cancel_statement",
],
required=True,
startup_required=False,
help_url="https://docs.snowflake.com/en/developer-guide/sql-api/index",
description="Snowflake account identifier (e.g. 'xy12345.us-east-1')",
direct_api_key_supported=True,
api_key_instructions="""To set up Snowflake SQL API access:
1. Get your Snowflake account identifier from your account URL
2. Generate a JWT or OAuth token for authentication
3. Set environment variables:
export SNOWFLAKE_ACCOUNT=your-account-id
export SNOWFLAKE_TOKEN=your-jwt-or-oauth-token
export SNOWFLAKE_WAREHOUSE=your-warehouse (optional)
export SNOWFLAKE_DATABASE=your-database (optional)""",
health_check_endpoint="",
credential_id="snowflake_account",
credential_key="api_key",
),
"snowflake_token": CredentialSpec(
env_var="SNOWFLAKE_TOKEN",
tools=[
"snowflake_execute_sql",
"snowflake_get_statement_status",
"snowflake_cancel_statement",
],
required=True,
startup_required=False,
help_url="https://docs.snowflake.com/en/developer-guide/sql-api/authenticating",
description="Snowflake JWT or OAuth token for API authentication",
direct_api_key_supported=True,
api_key_instructions="""See SNOWFLAKE_ACCOUNT instructions above.""",
health_check_endpoint="",
credential_id="snowflake_token",
credential_key="api_key",
),
}
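A minimal usage sketch against the Snowflake SQL REST API, assuming the `requests` package; the token-type header value assumes a key-pair JWT (OAuth tokens use a different value):
import os, requests

payload = {"statement": "SELECT CURRENT_VERSION()"}
if os.getenv("SNOWFLAKE_WAREHOUSE"):
    payload["warehouse"] = os.environ["SNOWFLAKE_WAREHOUSE"]
if os.getenv("SNOWFLAKE_DATABASE"):
    payload["database"] = os.environ["SNOWFLAKE_DATABASE"]
resp = requests.post(
    f"https://{os.environ['SNOWFLAKE_ACCOUNT']}.snowflakecomputing.com/api/v2/statements",
    json=payload,
    headers={
        "Authorization": f"Bearer {os.environ['SNOWFLAKE_TOKEN']}",
        "X-Snowflake-Authorization-Token-Type": "KEYPAIR_JWT",  # assumed token type
    },
    timeout=30,
)
print(resp.status_code, resp.json().get("statementHandle"))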
@@ -0,0 +1,39 @@
"""
Supabase credentials.
Contains credentials for Supabase database, auth, and edge functions.
"""
from .base import CredentialSpec
SUPABASE_CREDENTIALS = {
"supabase": CredentialSpec(
env_var="SUPABASE_ANON_KEY",
tools=[
"supabase_select",
"supabase_insert",
"supabase_update",
"supabase_delete",
"supabase_auth_signup",
"supabase_auth_signin",
"supabase_edge_invoke",
],
required=True,
startup_required=False,
help_url="https://supabase.com/dashboard",
description="Supabase anon/public API key (also requires SUPABASE_URL env var)",
direct_api_key_supported=True,
api_key_instructions="""To get Supabase credentials:
1. Go to https://supabase.com/dashboard
2. Create a new project or select an existing one
3. Go to Project Settings > API
4. Copy the 'anon' / 'public' key (starts with eyJ...)
5. Copy the Project URL (https://<ref>.supabase.co)
6. Set both environment variables:
export SUPABASE_ANON_KEY=your-anon-key
export SUPABASE_URL=https://your-project.supabase.co""",
health_check_endpoint="",
credential_id="supabase",
credential_key="anon_key",
),
}
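A minimal usage sketch of a PostgREST-style select with these variables, assuming the `requests` package; the table name is a placeholder:
import os, requests

url = os.environ["SUPABASE_URL"].rstrip("/")
key = os.environ["SUPABASE_ANON_KEY"]
resp = requests.get(
    f"{url}/rest/v1/todos",  # 'todos' is a placeholder table
    params={"select": "*", "limit": 5},
    headers={"apikey": key, "Authorization": f"Bearer {key}"},
    timeout=30,
)
print(resp.json())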
@@ -0,0 +1,35 @@
"""
Terraform Cloud / HCP Terraform credentials.
Contains credentials for the Terraform Cloud REST API v2.
Requires TFC_TOKEN.
"""
from .base import CredentialSpec
TERRAFORM_CREDENTIALS = {
"tfc_token": CredentialSpec(
env_var="TFC_TOKEN",
tools=[
"terraform_list_workspaces",
"terraform_get_workspace",
"terraform_list_runs",
"terraform_get_run",
"terraform_create_run",
],
required=True,
startup_required=False,
help_url="https://developer.hashicorp.com/terraform/cloud-docs/users-teams-organizations/api-tokens",
description="Terraform Cloud API token (User or Team token)",
direct_api_key_supported=True,
api_key_instructions="""To set up Terraform Cloud API access:
1. Go to app.terraform.io > User Settings > Tokens
2. Create a new API token
3. Set environment variable:
export TFC_TOKEN=your-api-token
(Optional for Terraform Enterprise: export TFC_URL=https://your-host.example.com)""",
health_check_endpoint="",
credential_id="tfc_token",
credential_key="api_key",
),
}
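A minimal usage sketch of listing workspaces with TFC_TOKEN, assuming the `requests` package; the organization name is a placeholder:
import os, requests

base = os.getenv("TFC_URL", "https://app.terraform.io").rstrip("/")
resp = requests.get(
    f"{base}/api/v2/organizations/my-org/workspaces",  # 'my-org' is a placeholder
    headers={"Authorization": f"Bearer {os.environ['TFC_TOKEN']}",
             "Content-Type": "application/vnd.api+json"},
    timeout=30,
)
for ws in resp.json()["data"]:
    print(ws["attributes"]["name"])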
@@ -0,0 +1,54 @@
"""
Tines credentials.
Contains credentials for the Tines security automation API.
Requires TINES_DOMAIN and TINES_API_KEY.
"""
from .base import CredentialSpec
TINES_CREDENTIALS = {
"tines_domain": CredentialSpec(
env_var="TINES_DOMAIN",
tools=[
"tines_list_stories",
"tines_get_story",
"tines_list_actions",
"tines_get_action",
"tines_get_action_logs",
],
required=True,
startup_required=False,
help_url="https://www.tines.com/api/authentication/",
description="Tines tenant domain (e.g. 'your-tenant.tines.com')",
direct_api_key_supported=True,
api_key_instructions="""To set up Tines API access:
1. Go to your Tines tenant > Settings > API Keys
2. Create a new API key
3. Set environment variables:
export TINES_DOMAIN=your-tenant.tines.com
export TINES_API_KEY=your-api-key""",
health_check_endpoint="",
credential_id="tines_domain",
credential_key="api_key",
),
"tines_api_key": CredentialSpec(
env_var="TINES_API_KEY",
tools=[
"tines_list_stories",
"tines_get_story",
"tines_list_actions",
"tines_get_action",
"tines_get_action_logs",
],
required=True,
startup_required=False,
help_url="https://www.tines.com/api/authentication/",
description="Tines API key for authentication",
direct_api_key_supported=True,
api_key_instructions="""See TINES_DOMAIN instructions above.""",
health_check_endpoint="",
credential_id="tines_api_key",
credential_key="api_key",
),
}
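A minimal usage sketch of listing stories with these two variables, assuming the `requests` package; the Bearer header form is an assumption about Tines API authentication:
import os, requests

resp = requests.get(
    f"https://{os.environ['TINES_DOMAIN']}/api/v1/stories",
    headers={"Authorization": f"Bearer {os.environ['TINES_API_KEY']}"},  # header form assumed
    timeout=30,
)
print(resp.status_code)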
@@ -0,0 +1,64 @@
"""
Trello credentials.
Contains credentials for Trello board, list, and card management.
Trello requires both TRELLO_API_KEY and TRELLO_API_TOKEN.
"""
from .base import CredentialSpec
TRELLO_CREDENTIALS = {
"trello_key": CredentialSpec(
env_var="TRELLO_API_KEY",
tools=[
"trello_list_boards",
"trello_get_member",
"trello_list_lists",
"trello_list_cards",
"trello_create_card",
"trello_move_card",
"trello_update_card",
"trello_add_comment",
"trello_add_attachment",
],
required=True,
startup_required=False,
help_url="https://trello.com/power-ups/admin",
description="Trello API key (also set TRELLO_TOKEN for authentication)",
direct_api_key_supported=True,
api_key_instructions="""To get Trello credentials:
1. Go to https://trello.com/power-ups/admin
2. Select your Power-Up or create one
3. Copy the API Key
4. Generate a token via the authorize URL
5. Set environment variables:
export TRELLO_API_KEY=your-api-key
export TRELLO_API_TOKEN=your-token""",
health_check_endpoint="https://api.trello.com/1/members/me",
credential_id="trello_key",
credential_key="api_key",
),
"trello_token": CredentialSpec(
env_var="TRELLO_API_TOKEN",
tools=[
"trello_list_boards",
"trello_get_member",
"trello_list_lists",
"trello_list_cards",
"trello_create_card",
"trello_move_card",
"trello_update_card",
"trello_add_comment",
"trello_add_attachment",
],
required=True,
startup_required=False,
help_url="https://trello.com/power-ups/admin",
description="Trello API token for authentication",
direct_api_key_supported=True,
api_key_instructions="""See TRELLO_API_KEY instructions above.""",
health_check_endpoint="https://api.trello.com/1/members/me",
credential_id="trello_token",
credential_key="api_key",
),
}
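A minimal usage sketch of the key/token query-parameter auth these two variables provide, assuming the `requests` package:
import os, requests

resp = requests.get(
    "https://api.trello.com/1/members/me/boards",
    params={"key": os.environ["TRELLO_API_KEY"],
            "token": os.environ["TRELLO_API_TOKEN"],
            "fields": "name,url"},
    timeout=30,
)
for board in resp.json():
    print(board["name"], board["url"])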
@@ -0,0 +1,52 @@
"""
Twilio credentials.
Contains credentials for Twilio SMS & WhatsApp messaging.
Requires TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN.
"""
from .base import CredentialSpec
TWILIO_CREDENTIALS = {
"twilio_sid": CredentialSpec(
env_var="TWILIO_ACCOUNT_SID",
tools=[
"twilio_send_sms",
"twilio_send_whatsapp",
"twilio_list_messages",
"twilio_get_message",
],
required=True,
startup_required=False,
help_url="https://console.twilio.com/",
description="Twilio Account SID (starts with AC)",
direct_api_key_supported=True,
api_key_instructions="""To set up Twilio API access:
1. Go to https://console.twilio.com/
2. Copy your Account SID and Auth Token from the dashboard
3. Set environment variables:
export TWILIO_ACCOUNT_SID=your-account-sid
export TWILIO_AUTH_TOKEN=your-auth-token""",
health_check_endpoint="",
credential_id="twilio_sid",
credential_key="api_key",
),
"twilio_token": CredentialSpec(
env_var="TWILIO_AUTH_TOKEN",
tools=[
"twilio_send_sms",
"twilio_send_whatsapp",
"twilio_list_messages",
"twilio_get_message",
],
required=True,
startup_required=False,
help_url="https://console.twilio.com/",
description="Twilio Auth Token for API authentication",
direct_api_key_supported=True,
api_key_instructions="""See TWILIO_ACCOUNT_SID instructions above.""",
health_check_endpoint="",
credential_id="twilio_token",
credential_key="api_key",
),
}
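A minimal usage sketch of sending an SMS with these two variables, assuming the `requests` package; the phone numbers are placeholders:
import os, requests

sid = os.environ["TWILIO_ACCOUNT_SID"]
resp = requests.post(
    f"https://api.twilio.com/2010-04-01/Accounts/{sid}/Messages.json",
    auth=(sid, os.environ["TWILIO_AUTH_TOKEN"]),
    data={"From": "+15550001111",   # placeholder Twilio number
          "To": "+15552223333",     # placeholder recipient
          "Body": "Hello"},
    timeout=30,
)
print(resp.status_code, resp.json().get("sid"))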
@@ -0,0 +1,34 @@
"""
Twitter/X credentials.
Contains credentials for X API v2.
Requires X_BEARER_TOKEN for read-only access.
"""
from .base import CredentialSpec
TWITTER_CREDENTIALS = {
"x_bearer_token": CredentialSpec(
env_var="X_BEARER_TOKEN",
tools=[
"twitter_search_tweets",
"twitter_get_user",
"twitter_get_user_tweets",
"twitter_get_tweet",
],
required=True,
startup_required=False,
help_url="https://developer.x.com/en/portal/dashboard",
description="X/Twitter API v2 Bearer Token (app-only, read access)",
direct_api_key_supported=True,
api_key_instructions="""To set up X/Twitter API access:
1. Go to https://developer.x.com/en/portal/dashboard
2. Create a Project and App
3. Copy the Bearer Token from the Keys and Tokens tab
4. Set environment variable:
export X_BEARER_TOKEN=your-bearer-token""",
health_check_endpoint="",
credential_id="x_bearer_token",
credential_key="api_key",
),
}
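A minimal usage sketch of app-only (Bearer) read access with X_BEARER_TOKEN, assuming the `requests` package:
import os, requests

resp = requests.get(
    "https://api.x.com/2/tweets/search/recent",
    params={"query": "python lang:en", "max_results": 10},
    headers={"Authorization": f"Bearer {os.environ['X_BEARER_TOKEN']}"},
    timeout=30,
)
print(resp.json().get("meta"))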
@@ -0,0 +1,37 @@
"""
Vercel credentials.
Contains credentials for Vercel deployment and hosting management.
"""
from .base import CredentialSpec
VERCEL_CREDENTIALS = {
"vercel": CredentialSpec(
env_var="VERCEL_TOKEN",
tools=[
"vercel_list_deployments",
"vercel_get_deployment",
"vercel_list_projects",
"vercel_get_project",
"vercel_list_project_domains",
"vercel_list_env_vars",
"vercel_create_env_var",
],
required=True,
startup_required=False,
help_url="https://vercel.com/account/tokens",
description="Vercel access token for deployment and project management",
direct_api_key_supported=True,
api_key_instructions="""To get a Vercel access token:
1. Go to https://vercel.com/account/tokens
2. Click 'Create' to generate a new token
3. Give it a name and set the scope (Full Account recommended)
4. Copy the token
5. Set the environment variable:
export VERCEL_TOKEN=your-token""",
health_check_endpoint="https://api.vercel.com/v2/user",
credential_id="vercel",
credential_key="api_key",
),
}
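A minimal usage sketch of listing deployments with VERCEL_TOKEN, assuming the `requests` package; the response fields printed here may vary by API version:
import os, requests

resp = requests.get(
    "https://api.vercel.com/v6/deployments",
    params={"limit": 5},
    headers={"Authorization": f"Bearer {os.environ['VERCEL_TOKEN']}"},
    timeout=30,
)
for dep in resp.json()["deployments"]:
    print(dep["uid"], dep.get("state"))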
@@ -0,0 +1,106 @@
"""
X (Twitter) tool credentials.
Contains credentials for X API v2 integration.
Bearer token for read-only operations, OAuth 1.0a keys for write operations.
"""
from .base import CredentialSpec
_X_TOOLS = [
"x_post_tweet",
"x_reply_tweet",
"x_delete_tweet",
"x_search_tweets",
"x_get_mentions",
"x_send_dm",
]
X_CREDENTIALS = {
"x_bearer_token": CredentialSpec(
env_var="X_BEARER_TOKEN",
tools=_X_TOOLS,
required=True,
startup_required=False,
help_url="https://developer.x.com/en/portal/dashboard",
description="X (Twitter) API v2 Bearer Token for read-only operations",
direct_api_key_supported=True,
api_key_instructions="""To get an X API Bearer Token:
1. Go to https://developer.x.com/en/portal/dashboard
2. Create a Project & App (or select existing)
3. Go to Keys & Tokens tab
4. Copy the Bearer Token
5. Set it as X_BEARER_TOKEN environment variable""",
health_check_endpoint="https://api.x.com/2/users/me",
health_check_method="GET",
credential_id="x_bearer_token",
credential_key="api_key",
credential_group="x",
),
"x_api_key": CredentialSpec(
env_var="X_API_KEY",
tools=_X_TOOLS,
required=False,
startup_required=False,
help_url="https://developer.x.com/en/portal/dashboard",
description="X (Twitter) API Consumer Key for OAuth 1.0a write operations",
direct_api_key_supported=True,
api_key_instructions="""To get your X API Consumer Key:
1. Go to https://developer.x.com/en/portal/dashboard
2. Select your app > Keys and Tokens
3. Under Consumer Keys, copy the API Key""",
credential_id="x_api_key",
credential_key="api_key",
credential_group="x",
),
"x_api_secret": CredentialSpec(
env_var="X_API_SECRET",
tools=_X_TOOLS,
required=False,
startup_required=False,
help_url="https://developer.x.com/en/portal/dashboard",
description="X (Twitter) API Consumer Secret for OAuth 1.0a write operations",
direct_api_key_supported=True,
api_key_instructions="""To get your X API Consumer Secret:
1. Go to https://developer.x.com/en/portal/dashboard
2. Select your app > Keys and Tokens
3. Under Consumer Keys, copy the API Secret""",
credential_id="x_api_secret",
credential_key="api_key",
credential_group="x",
),
"x_access_token": CredentialSpec(
env_var="X_ACCESS_TOKEN",
tools=_X_TOOLS,
required=False,
startup_required=False,
help_url="https://developer.x.com/en/portal/dashboard",
description="X (Twitter) User Access Token for OAuth 1.0a write operations",
direct_api_key_supported=True,
api_key_instructions="""To get your X Access Token:
1. Go to https://developer.x.com/en/portal/dashboard
2. Select your app > Keys and Tokens
3. Under Authentication Tokens, generate Access Token and Secret
4. Copy the Access Token""",
credential_id="x_access_token",
credential_key="api_key",
credential_group="x",
),
"x_access_token_secret": CredentialSpec(
env_var="X_ACCESS_TOKEN_SECRET",
tools=_X_TOOLS,
required=False,
startup_required=False,
help_url="https://developer.x.com/en/portal/dashboard",
description="X (Twitter) User Access Token Secret for OAuth 1.0a write operations",
direct_api_key_supported=True,
api_key_instructions="""To get your X Access Token Secret:
1. Go to https://developer.x.com/en/portal/dashboard
2. Select your app > Keys and Tokens
3. Under Authentication Tokens, generate Access Token and Secret
4. Copy the Access Token Secret""",
credential_id="x_access_token_secret",
credential_key="api_key",
credential_group="x",
),
}
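A minimal usage sketch of an OAuth 1.0a write call using the four user-context variables, assuming the `requests` and `requests-oauthlib` packages:
import os, requests
from requests_oauthlib import OAuth1

auth = OAuth1(
    os.environ["X_API_KEY"],
    os.environ["X_API_SECRET"],
    os.environ["X_ACCESS_TOKEN"],
    os.environ["X_ACCESS_TOKEN_SECRET"],
)
resp = requests.post(
    "https://api.x.com/2/tweets",
    json={"text": "Hello from the X tool"},  # placeholder tweet text
    auth=auth,
    timeout=30,
)
print(resp.status_code, resp.json())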
@@ -0,0 +1,40 @@
"""
YouTube Data API credentials.
Contains credentials for YouTube Data API v3 integration.
"""
from .base import CredentialSpec
YOUTUBE_CREDENTIALS = {
"youtube": CredentialSpec(
env_var="YOUTUBE_API_KEY",
tools=[
"youtube_search_videos",
"youtube_get_video_details",
"youtube_get_channel",
"youtube_list_channel_videos",
"youtube_get_playlist",
"youtube_search_channels",
"youtube_get_video_comments",
"youtube_get_video_categories",
],
required=True,
startup_required=False,
help_url="https://console.cloud.google.com/apis/credentials",
description="Google API key with YouTube Data API v3 enabled",
direct_api_key_supported=True,
api_key_instructions="""To get a YouTube Data API key:
1. Go to https://console.cloud.google.com/
2. Create a new project or select an existing one
3. Go to APIs & Services > Library
4. Search for "YouTube Data API v3" and enable it
5. Go to APIs & Services > Credentials
6. Click "Create Credentials" > "API key"
7. Copy the API key
8. (Optional) Restrict the key to YouTube Data API v3 only""",
health_check_endpoint="https://www.googleapis.com/youtube/v3/videoCategories?part=snippet&regionCode=US",
credential_id="youtube",
credential_key="api_key",
),
}
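A minimal usage sketch of a Data API v3 search with YOUTUBE_API_KEY, assuming the `requests` package:
import os, requests

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/search",
    params={"part": "snippet", "q": "large language models",
            "maxResults": 3, "key": os.environ["YOUTUBE_API_KEY"]},
    timeout=30,
)
for item in resp.json()["items"]:
    print(item["snippet"]["title"])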
@@ -0,0 +1,74 @@
"""
Zendesk credentials.
Contains credentials for Zendesk Support ticket management.
Requires ZENDESK_SUBDOMAIN, ZENDESK_EMAIL, and ZENDESK_API_TOKEN.
"""
from .base import CredentialSpec
ZENDESK_CREDENTIALS = {
"zendesk_subdomain": CredentialSpec(
env_var="ZENDESK_SUBDOMAIN",
tools=[
"zendesk_list_tickets",
"zendesk_get_ticket",
"zendesk_create_ticket",
"zendesk_update_ticket",
"zendesk_search_tickets",
],
required=True,
startup_required=False,
help_url="https://developer.zendesk.com/api-reference/introduction/security-and-auth/",
description="Zendesk subdomain (e.g. 'acme' from acme.zendesk.com)",
direct_api_key_supported=True,
api_key_instructions="""To set up Zendesk API access:
1. Go to Zendesk Admin > Apps and integrations > APIs > Zendesk API
2. Enable Token Access and create an API token
3. Set environment variables:
export ZENDESK_SUBDOMAIN=your-subdomain
export ZENDESK_EMAIL=your-email@example.com
export ZENDESK_API_TOKEN=your-api-token""",
health_check_endpoint="",
credential_id="zendesk_subdomain",
credential_key="api_key",
),
"zendesk_email": CredentialSpec(
env_var="ZENDESK_EMAIL",
tools=[
"zendesk_list_tickets",
"zendesk_get_ticket",
"zendesk_create_ticket",
"zendesk_update_ticket",
"zendesk_search_tickets",
],
required=True,
startup_required=False,
help_url="https://developer.zendesk.com/api-reference/introduction/security-and-auth/",
description="Zendesk agent email for API authentication",
direct_api_key_supported=True,
api_key_instructions="""See ZENDESK_SUBDOMAIN instructions above.""",
health_check_endpoint="",
credential_id="zendesk_email",
credential_key="api_key",
),
"zendesk_token": CredentialSpec(
env_var="ZENDESK_API_TOKEN",
tools=[
"zendesk_list_tickets",
"zendesk_get_ticket",
"zendesk_create_ticket",
"zendesk_update_ticket",
"zendesk_search_tickets",
],
required=True,
startup_required=False,
help_url="https://developer.zendesk.com/api-reference/introduction/security-and-auth/",
description="Zendesk API token for authentication",
direct_api_key_supported=True,
api_key_instructions="""See ZENDESK_SUBDOMAIN instructions above.""",
health_check_endpoint="",
credential_id="zendesk_token",
credential_key="api_key",
),
}
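A minimal usage sketch of the email/token basic-auth scheme these three variables support, assuming the `requests` package:
import os, requests

sub = os.environ["ZENDESK_SUBDOMAIN"]
resp = requests.get(
    f"https://{sub}.zendesk.com/api/v2/tickets.json",
    auth=(f"{os.environ['ZENDESK_EMAIL']}/token", os.environ["ZENDESK_API_TOKEN"]),
    params={"per_page": 5},
    timeout=30,
)
for ticket in resp.json()["tickets"]:
    print(ticket["id"], ticket["subject"])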
