Compare commits

...

62 Commits

Author SHA1 Message Date
Timothy 6b506a1c08 chore: lint 2026-03-18 13:05:00 -07:00
Timothy 0c9f4fa97e fix(llm): restore Claude Code subscription (OAuth) support after Anthropic API change
Anthropic tightened OAuth validation on 2026-03-17, requiring a
specific User-Agent header and a billing integrity system block for
subscription-authenticated requests. Without these, all OAuth calls
return HTTP 400 with a generic "Error" message.

Changes:
- Add billing integrity system block (SHA-256 hash derived from first
  user message content) prepended to system messages on OAuth requests
- Set User-Agent to claude-code/<version> for OAuth sessions
- Fix OAuth header patch to detect tokens in x-api-key (not just
  Authorization) and add required beta/browser-access headers
- Set litellm.drop_params=True to prevent unsupported params like
  stream_options from leaking to Anthropic (causes 400)
- Skip stream_options entirely for Anthropic models
- Honour LITELLM_LOG env var for debug logging instead of hardcoding
  LiteLLM logger to WARNING
2026-03-18 13:02:24 -07:00
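A minimal sketch of the param-hygiene part of this fix, assuming a litellm-style call site (the helper below is illustrative, not the exact hive code; litellm.drop_params is the real litellm setting the message names):

import litellm

# Let litellm drop provider-unsupported params (e.g. stream_options on
# Anthropic) instead of forwarding them and triggering an HTTP 400.
litellm.drop_params = True

def build_stream_kwargs(model: str, want_usage: bool) -> dict:
    # Illustrative: skip stream_options entirely for Anthropic models,
    # which reject this OpenAI-style parameter outright.
    kwargs: dict = {"model": model, "stream": True}
    is_anthropic = model.lower().startswith(("anthropic/", "claude-"))
    if want_usage and not is_anthropic:
        kwargs["stream_options"] = {"include_usage": True}
    return kwargs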
Bryan @ Aden 1c9b09fb78 Merge pull request #6602 from sundaram2021/cleanup/remove-commit-message-txt
micro-fix: remove unnecessary commit message file
2026-03-18 17:40:50 +00:00
Timothy @aden 9fb14f23d2 Merge pull request #6526 from sundaram2021/feature/openrouter-api-key-support
feat openrouter api key support
2026-03-18 10:15:40 -07:00
Sundaram Kumar Jha 4795dc4f68 chore: clean useless commit message file 2026-03-18 16:45:10 +05:30
Sundaram Kumar Jha acf0f804c5 style(llm): apply ruff formatting 2026-03-18 10:54:06 +05:30
Sundaram Kumar Jha 4e2951854b fix(openrouter): harden quickstart setup and model validation 2026-03-18 10:39:58 +05:30
Sundaram Kumar Jha 80dfb429d7 refactor(review): remove out-of-scope PR changes 2026-03-18 10:39:48 +05:30
Timothy @aden 9c0ba77e22 Replace demo image with GitHub asset link
Updated README to include new asset link and removed demo image.
2026-03-17 20:59:14 -07:00
Timothy @aden 46b4651073 Merge pull request #6589 from aden-hive/fix/data-disclosure-gaps
Fix data disclosure gaps, add worker run digests, clean up deprecated tools
2026-03-17 20:46:12 -07:00
Timothy 86dd5246c6 Merge remote-tracking branch 'origin/fix/resume-with-scheduler' into fix/data-disclosure-gaps 2026-03-17 20:44:28 -07:00
Timothy a1227c88ee Merge remote-tracking branch 'origin/fix/resume-with-scheduler' into fix/data-disclosure-gaps 2026-03-17 20:42:25 -07:00
Timothy 535d7ab568 fix: worker digest sub event 2026-03-17 20:41:56 -07:00
Richard Tang af10494b31 chore: ruff lint 2026-03-17 20:41:08 -07:00
Richard Tang 39c1042827 fix: fall back to queen-only session when worker load fails on cold restore 2026-03-17 20:38:41 -07:00
Richard Tang 16e7dc11f4 fix: don't overwrite meta in queen creation 2026-03-17 20:27:39 -07:00
Richard Tang 7a27babefd feat: track and resume the session by phase 2026-03-17 20:22:54 -07:00
Timothy d53ae9d51d fix: deprecated tests 2026-03-17 20:20:21 -07:00
Timothy 910cf7727d Merge remote-tracking branch 'origin/fix/resume-with-scheduler' into fix/data-disclosure-gaps 2026-03-17 20:14:25 -07:00
Timothy 1698605f15 chore: lint 2026-03-17 19:59:23 -07:00
Timothy eda124a123 chore: lint 2026-03-17 19:58:08 -07:00
Timothy 15e9ce8d2f Merge remote-tracking branch 'origin/feature/session-digest' into fix/data-disclosure-gaps 2026-03-17 19:45:07 -07:00
Timothy c01dd603d7 fix: digest invocation 2026-03-17 19:44:22 -07:00
Timothy 9d5157d69f feat: queen subscribe to worker digest 2026-03-17 19:23:43 -07:00
Timothy d78795bdf5 Merge remote-tracking branch 'origin/feature/session-digest' into fix/data-disclosure-gaps 2026-03-17 19:15:22 -07:00
Timothy ff2b7f473e fix: subagent execution 2026-03-17 19:15:07 -07:00
Timothy 73c9a91811 feat: add worker memory consolidation hooks 2026-03-17 19:14:07 -07:00
Timothy 27b765d902 Merge branch 'feature/session-digest' into fix/data-disclosure-gaps 2026-03-17 18:32:20 -07:00
Timothy fddba419be fix: minor issues 2026-03-17 18:30:57 -07:00
Timothy f42d6308e8 Merge branch 'main' into fix/data-disclosure-gaps 2026-03-17 17:50:36 -07:00
Timothy c167002754 fix: data disclosure gaps 2026-03-17 17:50:08 -07:00
Timothy @aden ea26ee7d0c Merge pull request #6568 from aden-hive/feature/node-focus-prompt
Inject execution-scope preamble into worker node system prompts
2026-03-17 17:38:49 -07:00
Richard Tang 5280e908b2 feat: change the agent last active time 2026-03-17 17:35:01 -07:00
RichardTang-Aden 1c5dd8c664 Merge pull request #5178 from Schlaflied/feat/sdr-agent-template
feat(templates): add SDR Agent sample template
2026-03-17 16:05:45 -07:00
Richard Tang 3aca153be5 fix: add missing flowchart and terminal nodes 2026-03-17 16:03:29 -07:00
Timothy 65c8e1653c chore: lint 2026-03-17 15:31:36 -07:00
Timothy 58e4fa918c feat: make worker node aware of boundaries 2026-03-17 15:28:41 -07:00
Timothy 3af13d3f90 feat: session digest for run scoped diary 2026-03-17 14:25:32 -07:00
Timothy @aden d2eb86e534 Merge pull request #6540 from sundaram2021/fix/make-windows-compatibility
fix make test compatibility on windows
2026-03-17 11:41:32 -07:00
Timothy 03842353e4 Merge branch 'main' into feature/openrouter-api-key-support 2026-03-17 11:21:53 -07:00
Schlaflied 48747e20af fix: remove personal oauth credential entries from .gitignore
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:53:16 -04:00
Schlaflied 58af593af6 revert: remove unrelated changes from previous commit
Restore .claude/settings.json and revert .gitignore change
that were accidentally included in the sdr-agent refactor commit.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:52:44 -04:00
Schlaflied 450575a927 refactor(sdr-agent): reuse agent.start() in tui command and fix mock mode
- Replace duplicated setup code in tui command with agent.start(mock_mode=mock)
- Fix mock mode to use MockLLMProvider instead of llm=None
- Add demo_contacts.json sample data for template testing
- Untrack .claude/settings.json and add to .gitignore

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:52:10 -04:00
Schlaflied eac2bb19b2 fix(sdr-agent): fix agent runtime lifecycle and mcp config
- Replace self._executor with self._agent_runtime (AgentRuntime | None)
- Import AgentRuntime for proper type annotation
- Add missing await self._agent_runtime.start() in start() — runtime
  was created but never started, causing silent failures at runtime
- Add self._agent_runtime = None reset in stop() for clean restart
- Remove redundant self._graph is None guard in trigger_and_wait()
- Update mcp_servers.json with hive-tools server config
- Add credential file patterns to .gitignore

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 13:50:29 -04:00
Schlaflied 756a815bf0 feat(templates): add SDR Agent sample template 2026-03-17 13:50:05 -04:00
mma2027 23a7b080eb test: add comprehensive test suite for safe_eval (#4015)
* test: add comprehensive test suite for safe_eval sandboxed evaluator

Adds 113 tests across 14 test classes covering the full surface area of
the safe_eval expression evaluator used by edge conditions:

- Literals, data structures, arithmetic, unary/binary/boolean operators
- Short-circuit semantics for `and`/`or` (including guard patterns)
- Ternary expressions, variable lookup, subscript/attribute access
- Whitelisted function and method calls
- Security boundaries (private attrs, disallowed AST nodes, blocked builtins)
- Real-world EdgeSpec.condition_expr patterns from graph executor usage

* style: fix import sort order

---------

Co-authored-by: mma2027 <mma2027@users.noreply.github.com>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-18 01:01:31 +08:00
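A hedged sketch of what two of these tests might look like; the safe_eval entry point, its (expression, variables) signature, and the import path are assumptions, not taken from this commit:

import pytest  # assuming pytest, in line with the rest of the test suites

from framework.graph.safe_eval import safe_eval  # assumed entry point and path

def test_private_attribute_access_is_blocked():
    # Dunder traversal is the classic sandbox escape route
    # (object.__class__ -> __subclasses__), so it must be rejected.
    with pytest.raises(ValueError):
        safe_eval("x.__class__", {"x": 1})

def test_ternary_expression():
    assert safe_eval('"high" if score > 7 else "low"', {"score": 9}) == "high"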
mma2027 bf39bcdec9 fixed race condition deadlock, missing short-circuit eval, unhandled format exceptions (#4012) 2026-03-18 00:36:54 +08:00
Richard Tang 0276632491 Merge branch 'feat/graph-improvements' 2026-03-17 07:34:10 -07:00
RichardTang-Aden ae2993d0d1 Merge pull request #6528 from Antiarin/feat/trigger-nodes-in-draft-graph
Restore trigger nodes in the new flowchart
2026-03-16 20:54:36 -07:00
RichardTang-Aden d14d71f760 Merge pull request #6549 from aden-hive/staging
release 0.7.2
2026-03-16 20:44:47 -07:00
Antiarin 738641d35f fix: correct trigger target, label, and SSE event data
- Add name and entry_node to all trigger SSE events (TRIGGER_AVAILABLE,
  TRIGGER_ACTIVATED, TRIGGER_DEACTIVATED) so frontend gets correct data
  immediately instead of guessing
- Use ep.entry_node from backend in polling instead of guessing first
  non-trigger node
- Compute cronToLabel from trigger config during polling so pill labels
  show human-readable schedule
- Fix AsyncMock for event_bus.publish in tests
2026-03-17 09:07:10 +05:30
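A minimal sketch of the enriched SSE payload this commit describes; only name and entry_node are confirmed by the message, the surrounding field names are illustrative:

# Illustrative TRIGGER_ACTIVATED payload: with name and entry_node included,
# the frontend can label the pill and draw its dashed edge without guessing.
event = {
    "type": "trigger_activated",
    "data": {
        "trigger_id": "trig_123",                 # illustrative id
        "name": "Daily report",                   # now carried on every trigger event
        "entry_node": "fetch_metrics",            # now carried: the real target node
        "trigger_config": {"cron": "0 9 * * *"},  # lets cronToLabel build the pill label
    },
}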
Antiarin 22f5534f08 fix: ensure Queen calls remove_trigger when user asks to remove scheduler
Added explicit prompt guidance requiring the Queen to call the
remove_trigger tool instead of just saying "it's removed."
2026-03-17 09:07:10 +05:30
Antiarin b79e7eca73 feat: live update trigger pill and detail panel on save
- Handle trigger_updated SSE event to update graph node label and
  config in real time when cron or task is saved
- Use cronToLabel for human-readable schedule display in detail panel
- Add "Saved" button feedback for Save Cron and Save Task (2s toast)
- Update trigger pill label to reflect new schedule on cron save
2026-03-17 09:07:10 +05:30
Antiarin 28250dc45e feat: support cron editing via trigger update API
- Extend PATCH /triggers/{id} to accept trigger_config with cron
  validation via croniter and active timer restart
- Add TRIGGER_UPDATED SSE event so frontend updates in real time
- Update frontend API client to use updateTrigger with config support
- Add tests for task update, cron restart, and invalid cron rejection
2026-03-17 09:07:10 +05:30
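A small sketch of the validation step, using croniter as the commit names; the handler shape around it is an assumption:

from croniter import croniter

def validate_cron_or_400(expr: str) -> None:
    # Reject invalid cron expressions before touching the active timer;
    # in PATCH /triggers/{id} this would map to an HTTP 400 response.
    if not croniter.is_valid(expr):
        raise ValueError(f"Invalid cron expression: {expr!r}")

validate_cron_or_400("0 9 * * 1-5")   # ok: weekdays at 09:00
# validate_cron_or_400("61 * * * *")  # would raise: minute out of range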
Antiarin fe5df6a87a feat: restore trigger node rendering in DraftGraph
Trigger nodes (scheduler, webhook, etc.) stopped appearing after the
v0.7.0 refactor because DraftGraph had no trigger awareness.

- Extract shared utilities (cssVar, truncateLabel, trigger colors/icons,
  useTriggerColors, cronToLabel) into lib/graphUtils.ts
- Render trigger pills above the draft flowchart with pill shape, icons,
  countdown timers, active/inactive status, and click handling
- Draw dashed edges from trigger pills to the correct draft node using
  flowchartMap lookup
- Name all trigger layout constants, fix countdown text color bug
- Include trigger pill extent in SVG viewBox width

Closes #6344
2026-03-17 09:07:10 +05:30
Sundaram Kumar Jha ff7b5c7e27 fix: prepend ~/.local/bin to PATH so uv is found in Git Bash on Windows 2026-03-17 01:28:25 +05:30
Sundaram Kumar Jha 22bb07f00e chore: resolve merge conflict 2026-03-16 19:59:57 +05:30
Sundaram Kumar Jha 660f883197 style(core): apply ruff formatting to satisfy CI lint 2026-03-16 19:57:21 +05:30
Sundaram Kumar Jha 988de80b66 Merge branch 'main' into feature/openrouter-api-key-support 2026-03-16 19:51:04 +05:30
Sundaram Kumar Jha dc6aa226ee feat(openrouter): validate model readiness and harden tool-call handling
- add OpenRouter chat completion validation to key checks for quickstart flows

- improve OpenRouter compat parsing to convert plain textual tool calls into real tool events

- prevent tool-call text from leaking into assistant responses

- add regression tests for OpenRouter key checks and LiteLLM tool compat parsing
2026-03-16 19:39:11 +05:30
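A hedged sketch of converting plain textual tool calls into structured calls, assuming the <|tool_call_start|>/<|tool_call_end|> wrapper format with a JSON payload; names here are illustrative:

import json
import re

TOOL_CALL_RE = re.compile(r"<\|tool_call_start\|>\s*(.*?)\s*<\|tool_call_end\|>", re.DOTALL)

def split_text_and_tool_calls(content: str) -> tuple[str, list[dict]]:
    # Parse each marker block as JSON, then strip all marker blocks from
    # the text so tool-call payloads never leak into the assistant reply.
    calls: list[dict] = []
    for m in TOOL_CALL_RE.finditer(content):
        try:
            calls.append(json.loads(m.group(1)))
        except json.JSONDecodeError:
            continue  # skip fragments that are not valid JSON
    return TOOL_CALL_RE.sub("", content).strip(), calls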
Sundaram Kumar Jha a7b6b080ab chore(lockfiles): refresh generated lockfiles
- update frontend package-lock metadata after frontend validation
- refresh uv.lock editable package version for the current workspace state
2026-03-14 20:50:51 +05:30
Sundaram Kumar Jha 9202cbd4d4 fix(openrouter): stabilize quickstart and tool execution
- add cross-platform OpenRouter quickstart setup, config fallbacks, and key validation
- harden LiteLLM/OpenRouter tool execution, duplicate question handling, and worker loading UX
- add backend and frontend regression coverage for OpenRouter flows
2026-03-14 20:48:58 +05:30
64 changed files with 6254 additions and 1389 deletions
-1
View File
@@ -68,7 +68,6 @@ temp/
exports/*
.claude/settings.local.json
.claude/skills/ship-it/
.venv
+9 -2
View File
@@ -1,4 +1,11 @@
.PHONY: lint format check test install-hooks help frontend-install frontend-dev frontend-build
.PHONY: lint format check test test-tools test-live test-all install-hooks help frontend-install frontend-dev frontend-build
# ── Ensure uv is findable in Git Bash on Windows ──────────────────────────────
# uv installs to ~/.local/bin on Windows/Linux/macOS. Git Bash may not include
# this in PATH by default, so we prepend it here.
export PATH := $(HOME)/.local/bin:$(PATH)
# ── Targets ───────────────────────────────────────────────────────────────────
help: ## Show this help
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \
@@ -46,4 +53,4 @@ frontend-dev: ## Start frontend dev server
cd core/frontend && npm run dev
frontend-build: ## Build frontend for production
cd core/frontend && npm run build
cd core/frontend && npm run build
+2 -1
View File
@@ -41,7 +41,8 @@ Generate a swarm of worker agents with a coding agent(queen) that control them.
Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.
[![Hive Demo](https://img.youtube.com/vi/XDOG9fOaLjU/maxresdefault.jpg)](https://www.youtube.com/watch?v=XDOG9fOaLjU)
https://github.com/user-attachments/assets/aad3a035-e7b3-4cac-b13d-4a83c7002c30
## Who Is Hive For?
-31
View File
@@ -1,31 +0,0 @@
perf: reduce subprocess spawning in quickstart scripts (#4427)
## Problem
Windows process creation (CreateProcess) is 10-100x slower than Linux fork/exec.
The quickstart scripts were spawning 4+ separate `uv run python -c "import X"`
processes to verify imports, adding ~600ms overhead on Windows.
## Solution
Consolidated all import checks into a single batch script that checks multiple
modules in one subprocess call, reducing spawn overhead by ~75%.
## Changes
- **New**: `scripts/check_requirements.py` - Batched import checker
- **New**: `scripts/test_check_requirements.py` - Test suite
- **New**: `scripts/benchmark_quickstart.ps1` - Performance benchmark tool
- **Modified**: `quickstart.ps1` - Updated import verification (2 sections)
- **Modified**: `quickstart.sh` - Updated import verification
## Performance Impact
**Benchmark results on Windows:**
- Before: ~19.8 seconds for import checks
- After: ~4.9 seconds for import checks
- **Improvement: 14.9 seconds saved (75.2% faster)**
## Testing
- ✅ All functional tests pass (`scripts/test_check_requirements.py`)
- ✅ Quickstart scripts work correctly on Windows
- ✅ Error handling verified (invalid imports reported correctly)
- ✅ Performance benchmark confirms 75%+ improvement
Fixes #4427
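The core of the consolidation described above is one interpreter performing every import check instead of one interpreter per check; a minimal sketch of such a batched checker (module list and output format illustrative):

import importlib
import sys

def check_imports(modules: list[str]) -> int:
    # Import every module in a single process and report all failures
    # at once, so quickstart pays one CreateProcess cost, not N.
    failed = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError as exc:
            failed.append(f"{name}: {exc}")
    for line in failed:
        print(f"MISSING {line}", file=sys.stderr)
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(check_imports(sys.argv[1:]))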
+50 -19
View File
@@ -23,25 +23,56 @@ class AgentEntry:
last_active: str | None = None
def _get_last_active(agent_name: str) -> str | None:
"""Return the most recent updated_at timestamp across all sessions."""
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if not sessions_dir.exists():
return None
def _get_last_active(agent_path: Path) -> str | None:
"""Return the most recent updated_at timestamp across all sessions.
Checks both worker sessions (``~/.hive/agents/{name}/sessions/``) and
queen sessions (``~/.hive/queen/session/``) whose ``meta.json`` references
the same *agent_path*.
"""
from datetime import datetime
agent_name = agent_path.name
latest: str | None = None
for session_dir in sessions_dir.iterdir():
if not session_dir.is_dir() or not session_dir.name.startswith("session_"):
continue
state_file = session_dir / "state.json"
if not state_file.exists():
continue
try:
data = json.loads(state_file.read_text(encoding="utf-8"))
ts = data.get("timestamps", {}).get("updated_at")
if ts and (latest is None or ts > latest):
latest = ts
except Exception:
continue
# 1. Worker sessions
sessions_dir = Path.home() / ".hive" / "agents" / agent_name / "sessions"
if sessions_dir.exists():
for session_dir in sessions_dir.iterdir():
if not session_dir.is_dir() or not session_dir.name.startswith("session_"):
continue
state_file = session_dir / "state.json"
if not state_file.exists():
continue
try:
data = json.loads(state_file.read_text(encoding="utf-8"))
ts = data.get("timestamps", {}).get("updated_at")
if ts and (latest is None or ts > latest):
latest = ts
except Exception:
continue
# 2. Queen sessions
queen_sessions_dir = Path.home() / ".hive" / "queen" / "session"
if queen_sessions_dir.exists():
resolved = agent_path.resolve()
for d in queen_sessions_dir.iterdir():
if not d.is_dir():
continue
meta_file = d / "meta.json"
if not meta_file.exists():
continue
try:
meta = json.loads(meta_file.read_text(encoding="utf-8"))
stored = meta.get("agent_path")
if not stored or Path(stored).resolve() != resolved:
continue
ts = datetime.fromtimestamp(d.stat().st_mtime).isoformat()
if latest is None or ts > latest:
latest = ts
except Exception:
continue
return latest
@@ -169,7 +200,7 @@ def discover_agents() -> dict[str, list[AgentEntry]]:
node_count=node_count,
tool_count=tool_count,
tags=tags,
last_active=_get_last_active(path.name),
last_active=_get_last_active(path),
)
)
if entries:
@@ -1144,6 +1144,8 @@ Batch your response — do not call run_agent_with_input() once per trigger.
config since last run), skip it and inform the user.
- Never disable a trigger without telling the user. Use remove_trigger() only \
when explicitly asked or when the trigger is clearly obsolete.
- When the user asks to remove or disable a trigger, you MUST call remove_trigger(trigger_id). \
Never just say "it's removed" without actually calling the tool.
"""
# -- Backward-compatible composed versions (used by queen_node.system_prompt default) --
+286
View File
@@ -0,0 +1,286 @@
"""Worker per-run digest (run diary).
Storage layout:
~/.hive/agents/{agent_name}/runs/{run_id}/digest.md
Each completed or failed worker run gets one digest file. The queen reads
these via get_worker_status(focus='diary') before digging into live runtime
logs; the diary is a cheap, persistent record that survives across sessions.
"""
from __future__ import annotations
import logging
import traceback
from collections import Counter
from datetime import datetime
from pathlib import Path
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from framework.runtime.event_bus import AgentEvent, EventBus
logger = logging.getLogger(__name__)
_DIGEST_SYSTEM = """\
You maintain run digests for a worker agent.
A run digest is a concise, factual record of a single task execution.
Write 3-6 sentences covering:
- What the worker was asked to do (the task/goal)
- What approach it took and what tools it used
- What the outcome was (success, partial, or failure and why if relevant)
- Any notable issues, retries, or escalations to the queen
Write in third person past tense. Be direct and specific.
Omit routine tool invocations unless the result matters.
Output only the digest prose: no headings, no code fences.
"""
def _worker_runs_dir(agent_name: str) -> Path:
return Path.home() / ".hive" / "agents" / agent_name / "runs"
def digest_path(agent_name: str, run_id: str) -> Path:
return _worker_runs_dir(agent_name) / run_id / "digest.md"
def _collect_run_events(bus: EventBus, run_id: str, limit: int = 2000) -> list[AgentEvent]:
"""Collect all events belonging to *run_id* from the bus history.
Strategy: find the EXECUTION_STARTED event that carries ``run_id``,
extract its ``execution_id``, then query the bus by that execution_id.
This works because TOOL_CALL_*, EDGE_TRAVERSED, NODE_STALLED etc. carry
execution_id but not run_id.
Falls back to a full-scan run_id filter when EXECUTION_STARTED is not
found (e.g. bus was rotated).
"""
from framework.runtime.event_bus import EventType
# Pass 1: find execution_id via EXECUTION_STARTED with matching run_id
started = bus.get_history(event_type=EventType.EXECUTION_STARTED, limit=limit)
exec_id: str | None = None
for e in started:
if getattr(e, "run_id", None) == run_id and e.execution_id:
exec_id = e.execution_id
break
if exec_id:
return bus.get_history(execution_id=exec_id, limit=limit)
# Fallback: scan all events and match by run_id attribute
return [e for e in bus.get_history(limit=limit) if getattr(e, "run_id", None) == run_id]
def _build_run_context(
events: list[AgentEvent],
outcome_event: AgentEvent | None,
) -> str:
"""Assemble a plain-text run context string for the digest LLM call."""
from framework.runtime.event_bus import EventType
# Reverse so events are in chronological order
events_chron = list(reversed(events))
lines: list[str] = []
# Task input from EXECUTION_STARTED
started = [e for e in events_chron if e.type == EventType.EXECUTION_STARTED]
if started:
inp = started[0].data.get("input", {})
if inp:
lines.append(f"Task input: {str(inp)[:400]}")
# Duration (elapsed so far if no outcome yet)
ref_ts = outcome_event.timestamp if outcome_event else datetime.utcnow()
if started:
elapsed = (ref_ts - started[0].timestamp).total_seconds()
m, s = divmod(int(elapsed), 60)
lines.append(f"Duration so far: {m}m {s}s" if m else f"Duration so far: {s}s")
# Outcome
if outcome_event is None:
lines.append("Status: still running (mid-run snapshot)")
elif outcome_event.type == EventType.EXECUTION_COMPLETED:
out = outcome_event.data.get("output", {})
out_str = f"Outcome: completed. Output: {str(out)[:300]}"
lines.append(out_str if out else "Outcome: completed.")
else:
err = outcome_event.data.get("error", "")
lines.append(f"Outcome: failed. Error: {str(err)[:300]}" if err else "Outcome: failed.")
# Node path (edge traversals)
edges = [e for e in events_chron if e.type == EventType.EDGE_TRAVERSED]
if edges:
parts = [
f"{e.data.get('source_node', '?')}->{e.data.get('target_node', '?')}"
for e in edges[-20:]
]
lines.append(f"Node path: {', '.join(parts)}")
# Tools used
tool_events = [e for e in events_chron if e.type == EventType.TOOL_CALL_COMPLETED]
if tool_events:
names = [e.data.get("tool_name", "?") for e in tool_events]
counts = Counter(names)
summary = ", ".join(f"{name}×{n}" if n > 1 else name for name, n in counts.most_common())
lines.append(f"Tools used: {summary}")
# Note any tool errors
errors = [e for e in tool_events if e.data.get("is_error")]
if errors:
err_names = Counter(e.data.get("tool_name", "?") for e in errors)
lines.append(f"Tool errors: {dict(err_names)}")
# Issues
issue_map = {
EventType.NODE_STALLED: "stall",
EventType.NODE_TOOL_DOOM_LOOP: "doom loop",
EventType.CONSTRAINT_VIOLATION: "constraint violation",
EventType.NODE_RETRY: "retry",
}
issue_parts: list[str] = []
for evt_type, label in issue_map.items():
n = sum(1 for e in events_chron if e.type == evt_type)
if n:
issue_parts.append(f"{n} {label}(s)")
if issue_parts:
lines.append(f"Issues: {', '.join(issue_parts)}")
# Escalations to queen
escalations = [e for e in events_chron if e.type == EventType.ESCALATION_REQUESTED]
if escalations:
lines.append(f"Escalations to queen: {len(escalations)}")
# Final LLM output snippet (last LLM_TEXT_DELTA snapshot)
text_events = [e for e in reversed(events_chron) if e.type == EventType.LLM_TEXT_DELTA]
if text_events:
snapshot = text_events[0].data.get("snapshot", "") or ""
if snapshot:
lines.append(f"Final LLM output: {snapshot[-400:].strip()}")
return "\n".join(lines)
async def consolidate_worker_run(
agent_name: str,
run_id: str,
outcome_event: AgentEvent | None,
bus: EventBus,
llm: Any,
) -> None:
"""Write (or overwrite) the digest for a worker run.
Called fire-and-forget either:
- After EXECUTION_COMPLETED / EXECUTION_FAILED (outcome_event set, final write)
- Periodically during a run on a cooldown timer (outcome_event=None, mid-run snapshot)
The digest file is always overwritten so each call produces the freshest view.
The final completion/failure call supersedes any mid-run snapshot.
Args:
agent_name: Worker agent directory name (determines storage path).
run_id: The run ID.
outcome_event: EXECUTION_COMPLETED or EXECUTION_FAILED event, or None for
a mid-run snapshot.
bus: The session EventBus (shared queen + worker).
llm: LLMProvider with an acomplete() method.
"""
try:
events = _collect_run_events(bus, run_id)
run_context = _build_run_context(events, outcome_event)
if not run_context:
logger.debug("worker_memory: no events for run %s, skipping digest", run_id)
return
is_final = outcome_event is not None
logger.info(
"worker_memory: generating %s digest for run %s ...",
"final" if is_final else "mid-run",
run_id,
)
from framework.agents.queen.config import default_config
resp = await llm.acomplete(
messages=[{"role": "user", "content": run_context}],
system=_DIGEST_SYSTEM,
max_tokens=min(default_config.max_tokens, 512),
)
digest_text = (resp.content or "").strip()
if not digest_text:
logger.warning("worker_memory: LLM returned empty digest for run %s", run_id)
return
path = digest_path(agent_name, run_id)
path.parent.mkdir(parents=True, exist_ok=True)
from framework.runtime.event_bus import EventType
ts = (outcome_event.timestamp if outcome_event else datetime.utcnow()).strftime(
"%Y-%m-%d %H:%M"
)
if outcome_event is None:
status = "running"
elif outcome_event.type == EventType.EXECUTION_COMPLETED:
status = "completed"
else:
status = "failed"
path.write_text(
f"# {run_id}\n\n**{ts}** | {status}\n\n{digest_text}\n",
encoding="utf-8",
)
logger.info(
"worker_memory: %s digest written for run %s (%d chars)",
status,
run_id,
len(digest_text),
)
except Exception:
tb = traceback.format_exc()
logger.exception("worker_memory: digest failed for run %s", run_id)
# Persist the error so it's findable without log access
error_path = _worker_runs_dir(agent_name) / run_id / "digest_error.txt"
try:
error_path.parent.mkdir(parents=True, exist_ok=True)
error_path.write_text(
f"run_id: {run_id}\ntime: {datetime.now().isoformat()}\n\n{tb}",
encoding="utf-8",
)
except Exception:
pass
def read_recent_digests(agent_name: str, max_runs: int = 5) -> list[tuple[str, str]]:
"""Return recent run digests as [(run_id, content), ...], newest first.
Args:
agent_name: Worker agent directory name.
max_runs: Maximum number of digests to return.
Returns:
List of (run_id, digest_content) tuples, ordered newest first.
"""
runs_dir = _worker_runs_dir(agent_name)
if not runs_dir.exists():
return []
digest_files = sorted(
runs_dir.glob("*/digest.md"),
key=lambda p: p.stat().st_mtime,
reverse=True,
)[:max_runs]
result: list[tuple[str, str]] = []
for f in digest_files:
try:
content = f.read_text(encoding="utf-8").strip()
if content:
result.append((f.parent.name, content))
except OSError:
continue
return result
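A short usage sketch for the digest reader above; the import path and agent name are assumptions:

from framework.runtime.worker_memory import read_recent_digests  # assumed path

# How a queen-side caller might surface the last few run diaries.
for run_id, digest in read_recent_digests("research-worker", max_runs=3):
    print(f"=== {run_id} ===")
    print(digest)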
+13 -2
View File
@@ -51,7 +51,13 @@ def get_preferred_model() -> str:
"""Return the user's preferred LLM model string (e.g. 'anthropic/claude-sonnet-4-20250514')."""
llm = get_hive_config().get("llm", {})
if llm.get("provider") and llm.get("model"):
return f"{llm['provider']}/{llm['model']}"
provider = str(llm["provider"])
model = str(llm["model"]).strip()
# OpenRouter quickstart stores raw model IDs; tolerate pasted "openrouter/<id>" too.
if provider.lower() == "openrouter" and model.lower().startswith("openrouter/"):
model = model[len("openrouter/") :]
if model:
return f"{provider}/{model}"
return "anthropic/claude-sonnet-4-20250514"
@@ -61,6 +67,7 @@ def get_max_tokens() -> int:
DEFAULT_MAX_CONTEXT_TOKENS = 32_000
OPENROUTER_API_BASE = "https://openrouter.ai/api/v1"
def get_max_context_tokens() -> int:
@@ -142,7 +149,11 @@ def get_api_base() -> str | None:
if llm.get("use_kimi_code_subscription"):
# Kimi Code uses an Anthropic-compatible endpoint (no /v1 suffix).
return "https://api.kimi.com/coding"
return llm.get("api_base")
if llm.get("api_base"):
return llm["api_base"]
if str(llm.get("provider", "")).lower() == "openrouter":
return OPENROUTER_API_BASE
return None
def get_llm_extra_kwargs() -> dict[str, Any]:
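An illustrative before/after for the normalization above (the model ID is an example):

# OpenRouter quickstart stores raw model IDs, but a user may paste the
# prefixed form. Previously this produced "openrouter/openrouter/<id>".
provider, model = "openrouter", "openrouter/qwen/qwen3-coder"
if provider.lower() == "openrouter" and model.lower().startswith("openrouter/"):
    model = model[len("openrouter/") :]
print(f"{provider}/{model}")  # -> openrouter/qwen/qwen3-coder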
+10
View File
@@ -51,6 +51,16 @@ def ensure_credential_key_env() -> None:
if found and value:
os.environ[var_name] = value
logger.debug("Loaded %s from shell config", var_name)
# Also load the currently configured LLM env var even if it's not in CREDENTIAL_SPECS.
# This keeps quickstart-written keys available to fresh processes on Unix shells.
from framework.config import get_hive_config
llm_env_var = str(get_hive_config().get("llm", {}).get("api_key_env_var", "")).strip()
if llm_env_var and not os.environ.get(llm_env_var):
found, value = check_env_var_in_shell_config(llm_env_var)
if found and value:
os.environ[llm_env_var] = value
logger.debug("Loaded configured LLM env var %s from shell config", llm_env_var)
except ImportError:
pass
+6 -2
View File
@@ -612,6 +612,11 @@ class NodeConversation:
continue # never prune errors
if msg.content.startswith("[Pruned tool result"):
continue # already pruned
# Tiny results (set_output acks, confirmations) — pruning
# saves negligible space but makes the LLM think the call
# failed, causing costly retries.
if len(msg.content) < 100:
continue
# Phase-aware: protect current phase messages
if self._current_phase and msg.phase_id == self._current_phase:
@@ -901,8 +906,7 @@ class NodeConversation:
full_path = str((spill_path / conv_filename).resolve())
ref_parts.append(
f"[Previous conversation saved to '{full_path}'. "
f"Use load_data('{conv_filename}'), read_file('{full_path}'), "
f"or run_command('cat \"{full_path}\"') to review if needed.]"
f"Use load_data('{conv_filename}') to review if needed.]"
)
elif not collapsed_msgs:
ref_parts.append("[Previous freeform messages compacted.]")
+123 -64
View File
@@ -243,7 +243,7 @@ class LoopConfig:
# Maximum seconds a delegate_to_sub_agent call may run before being
# killed. Subagents run a full event-loop so they naturally take
# longer than a single tool call — default is 10 minutes. 0 = no timeout.
subagent_timeout_seconds: float = 300.0
subagent_timeout_seconds: float = 600.0
# --- Lifecycle hooks ---
# Hooks are async callables keyed by event name. Supported events:
@@ -293,13 +293,26 @@ class OutputAccumulator:
Values are stored in memory and optionally written through to a
ConversationStore's cursor data for crash recovery.
When *spillover_dir* and *max_value_chars* are set, large values are
automatically saved to files and replaced with lightweight file
references. This guarantees auto-spill fires on **every** ``set()``
call regardless of code path (resume, checkpoint restore, etc.).
"""
values: dict[str, Any] = field(default_factory=dict)
store: ConversationStore | None = None
spillover_dir: str | None = None
max_value_chars: int = 0 # 0 = disabled
async def set(self, key: str, value: Any) -> None:
"""Set a key-value pair, persisting immediately if store is available."""
"""Set a key-value pair, auto-spilling large values to files.
When the serialised value exceeds *max_value_chars*, the data is
saved to ``<spillover_dir>/output_<key>.<ext>`` and *value* is
replaced with a compact file-reference string.
"""
value = self._auto_spill(key, value)
self.values[key] = value
if self.store:
cursor = await self.store.read_cursor() or {}
@@ -308,6 +321,39 @@ class OutputAccumulator:
cursor["outputs"] = outputs
await self.store.write_cursor(cursor)
def _auto_spill(self, key: str, value: Any) -> Any:
"""Save large values to a file and return a reference string."""
if self.max_value_chars <= 0 or not self.spillover_dir:
return value
val_str = json.dumps(value, ensure_ascii=False) if not isinstance(value, str) else value
if len(val_str) <= self.max_value_chars:
return value
spill_path = Path(self.spillover_dir)
spill_path.mkdir(parents=True, exist_ok=True)
ext = ".json" if isinstance(value, (dict, list)) else ".txt"
filename = f"output_{key}{ext}"
write_content = (
json.dumps(value, indent=2, ensure_ascii=False)
if isinstance(value, (dict, list))
else str(value)
)
(spill_path / filename).write_text(write_content, encoding="utf-8")
file_size = (spill_path / filename).stat().st_size
logger.info(
"set_output value auto-spilled: key=%s, %d chars → %s (%d bytes)",
key,
len(val_str),
filename,
file_size,
)
return (
f"[Saved to '{filename}' ({file_size:,} bytes). "
f"Use load_data(filename='{filename}') "
f"to access full data.]"
)
def get(self, key: str) -> Any | None:
"""Get a value by key, or None if not present."""
return self.values.get(key)
@@ -467,7 +513,11 @@ class EventLoopNode(NodeProtocol):
conversation._output_keys = (
ctx.cumulative_output_keys or ctx.node_spec.output_keys or None
)
accumulator = OutputAccumulator(store=self._conversation_store)
accumulator = OutputAccumulator(
store=self._conversation_store,
spillover_dir=self._config.spillover_dir,
max_value_chars=self._config.max_output_value_chars,
)
start_iteration = 0
_restored_recent_responses: list[str] = []
_restored_tool_fingerprints: list[list[tuple[str, str]]] = []
@@ -504,9 +554,21 @@ class EventLoopNode(NodeProtocol):
_restored_tool_fingerprints = []
# Fresh conversation: either isolated mode or first node in continuous mode.
from framework.graph.prompt_composer import _with_datetime
from framework.graph.prompt_composer import (
EXECUTION_SCOPE_PREAMBLE,
_with_datetime,
)
system_prompt = _with_datetime(ctx.node_spec.system_prompt or "")
# Prepend execution-scope preamble for worker nodes so the
# LLM knows it is one step in a pipeline and should not try
# to perform work that belongs to other nodes.
if (
not ctx.is_subagent_mode
and ctx.node_spec.node_type in ("event_loop", "gcu")
and ctx.node_spec.output_keys
):
system_prompt = f"{EXECUTION_SCOPE_PREAMBLE}\n\n{system_prompt}"
# Prepend GCU browser best-practices prompt for gcu nodes
if ctx.node_spec.node_type == "gcu":
from framework.graph.gcu import GCU_BROWSER_SYSTEM_PROMPT
@@ -573,7 +635,11 @@ class EventLoopNode(NodeProtocol):
# Stamp phase for first node in continuous mode
if _is_continuous:
conversation.set_current_phase(ctx.node_id)
accumulator = OutputAccumulator(store=self._conversation_store)
accumulator = OutputAccumulator(
store=self._conversation_store,
spillover_dir=self._config.spillover_dir,
max_value_chars=self._config.max_output_value_chars,
)
start_iteration = 0
# Add initial user message from input data
@@ -756,6 +822,7 @@ class EventLoopNode(NodeProtocol):
)
_stream_retry_count = 0
_turn_cancelled = False
_llm_turn_failed_waiting_input = False
while True:
try:
(
@@ -875,6 +942,16 @@ class EventLoopNode(NodeProtocol):
# can retry or adjust the request.
if ctx.node_spec.client_facing:
error_msg = f"LLM call failed: {e}"
_guardrail_phrase = (
"no endpoints available matching your guardrail restrictions "
"and data policy"
)
if _guardrail_phrase in str(e).lower():
error_msg += (
" OpenRouter blocked this model under current privacy settings. "
"Update https://openrouter.ai/settings/privacy or choose another "
"OpenRouter model."
)
logger.error(
"[%s] iter=%d: %s — waiting for user input",
node_id,
@@ -896,6 +973,7 @@ class EventLoopNode(NodeProtocol):
f"[Error: {error_msg}. Please try again.]"
)
await self._await_user_input(ctx, prompt="")
_llm_turn_failed_waiting_input = True
break # exit retry loop, continue outer iteration
# Non-client-facing: crash as before
@@ -946,6 +1024,11 @@ class EventLoopNode(NodeProtocol):
await self._await_user_input(ctx, prompt="")
continue # back to top of for-iteration loop
# Client-facing non-transient LLM failures wait for user input and then
# continue the outer loop without touching per-turn token vars.
if _llm_turn_failed_waiting_input:
continue
# 6e'. Feed actual API token count back for accurate estimation
turn_input = turn_tokens.get("input", 0)
if turn_input > 0:
@@ -2197,58 +2280,24 @@ class EventLoopNode(NodeProtocol):
pass
key = tc.tool_input.get("key", "")
# Auto-spill: save large values to data files and
# replace with a lightweight file reference so shared
# memory / adapt.md / transition markers stay small.
spill_dir = self._config.spillover_dir
max_val = self._config.max_output_value_chars
if max_val > 0 and spill_dir:
val_str = (
json.dumps(value, ensure_ascii=False)
if not isinstance(value, str)
else value
)
if len(val_str) > max_val:
spill_path = Path(spill_dir)
spill_path.mkdir(parents=True, exist_ok=True)
ext = ".json" if isinstance(value, (dict, list)) else ".txt"
filename = f"output_{key}{ext}"
write_content = (
json.dumps(value, indent=2, ensure_ascii=False)
if isinstance(value, (dict, list))
else str(value)
)
(spill_path / filename).write_text(write_content, encoding="utf-8")
file_size = (spill_path / filename).stat().st_size
logger.info(
"set_output value auto-spilled: key=%s, "
"%d chars → %s (%d bytes)",
key,
len(val_str),
filename,
file_size,
)
# Replace value with reference
value = (
f"[Saved to '{filename}' ({file_size:,} bytes). "
f"Use load_data(filename='{filename}') "
f"to access full data.]"
)
# Update tool result to inform the LLM
result = ToolResult(
tool_use_id=tc.tool_use_id,
content=(
f"Output '{key}' was large "
f"({len(val_str):,} chars) — data saved "
f"to '{filename}' ({file_size:,} bytes). "
f"The next phase will see the file "
f"reference and can load full data."
),
is_error=False,
)
# Auto-spill happens inside accumulator.set()
# — it fires on every code path (fresh, resume,
# restore) and prevents overwrite regression.
await accumulator.set(key, value)
self._record_learning(key, value)
stored = accumulator.get(key)
# If the accumulator spilled, update the tool
# result so the LLM knows data was saved to a file.
if isinstance(stored, str) and stored.startswith("[Saved to '"):
result = ToolResult(
tool_use_id=tc.tool_use_id,
content=(
f"Output '{key}' auto-saved to file "
f"(value was too large for inline). "
f"{stored}"
),
is_error=False,
)
self._record_learning(key, stored)
outputs_set_this_turn.append(key)
await self._publish_output_key_set(stream_id, node_id, key, execution_id)
logged_tool_calls.append(
@@ -2266,7 +2315,6 @@ class EventLoopNode(NodeProtocol):
elif tc.tool_name == "ask_user":
# --- Framework-level ask_user handling ---
user_input_requested = True
ask_user_prompt = tc.tool_input.get("question", "")
raw_options = tc.tool_input.get("options", None)
# Defensive: ensure options is a list of strings.
@@ -2303,6 +2351,8 @@ class EventLoopNode(NodeProtocol):
user_input_requested = False
continue
user_input_requested = True
# Free-form ask_user (no options): stream the question
# text as a chat message so the user can see it. When
# options are present the QuestionWidget shows the
@@ -2328,7 +2378,6 @@ class EventLoopNode(NodeProtocol):
elif tc.tool_name == "ask_user_multiple":
# --- Framework-level ask_user_multiple ---
user_input_requested = True
raw_questions = tc.tool_input.get("questions", [])
if not isinstance(raw_questions, list) or len(raw_questions) < 2:
result = ToolResult(
@@ -2366,6 +2415,8 @@ class EventLoopNode(NodeProtocol):
}
)
user_input_requested = True
# Store as multi-question prompt/options for
# the event emission path
ask_user_prompt = ""
@@ -2627,6 +2678,11 @@ class EventLoopNode(NodeProtocol):
content=raw.content,
is_error=raw.is_error,
)
# Route through _truncate_tool_result so large
# subagent results are saved to spillover files
# and survive pruning (instead of being "cleared
# from context" with no recovery path).
result = self._truncate_tool_result(result, "delegate_to_sub_agent")
results_by_id[tc.tool_use_id] = result
logged_tool_calls.append(
{
@@ -2671,7 +2727,11 @@ class EventLoopNode(NodeProtocol):
content=result.content,
is_error=result.is_error,
)
if tc.tool_name in ("ask_user", "ask_user_multiple"):
if (
tc.tool_name in ("ask_user", "ask_user_multiple")
and user_input_requested
and not result.is_error
):
# Defer tool_call_completed until after user responds
self._deferred_tool_complete = {
"stream_id": stream_id,
@@ -4287,17 +4347,14 @@ class EventLoopNode(NodeProtocol):
)
parts.append(
"CONVERSATION HISTORY (freeform messages saved during compaction — "
"use load_data('<filename>'), read_file('<full_path>'), "
"or run_command('cat \"<full_path>\"') to review earlier dialogue):\n"
+ conv_list
"use load_data('<filename>') to review earlier dialogue):\n" + conv_list
)
if data_files:
file_list = "\n".join(
f" - {f} (full path: {data_dir / f})" for f in data_files[:30]
)
parts.append(
"DATA FILES (use load_data('<filename>'), read_file('<full_path>'), "
"or run_command('cat \"<full_path>\"') to read):\n" + file_list
"DATA FILES (use load_data('<filename>') to read):\n" + file_list
)
if not all_files:
parts.append(
@@ -4363,6 +4420,8 @@ class EventLoopNode(NodeProtocol):
return None
accumulator = await OutputAccumulator.restore(self._conversation_store)
accumulator.spillover_dir = self._config.spillover_dir
accumulator.max_value_chars = self._config.max_output_value_chars
cursor = await self._conversation_store.read_cursor()
start_iteration = cursor.get("iteration", 0) + 1 if cursor else 0
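A hedged usage sketch of the auto-spill behaviour now built into OutputAccumulator (import path, directory, and sizes are illustrative):

import asyncio

from framework.graph.event_loop import OutputAccumulator  # assumed import path

async def demo() -> None:
    acc = OutputAccumulator(spillover_dir="/tmp/spill", max_value_chars=1_000)
    await acc.set("report", "x" * 5_000)  # exceeds the limit, so it spills
    # The stored value is a compact file reference, not the raw 5,000 chars:
    print(acc.get("report"))
    # -> [Saved to 'output_report.txt' (5,000 bytes). Use load_data(...) ...]

asyncio.run(demo())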
+7 -1
View File
@@ -1420,6 +1420,7 @@ class GraphExecutor:
next_spec = graph.get_node(current_node_id)
if next_spec and next_spec.node_type == "event_loop":
from framework.graph.prompt_composer import (
EXECUTION_SCOPE_PREAMBLE,
build_accounts_prompt,
build_narrative,
build_transition_marker,
@@ -1459,9 +1460,14 @@ class GraphExecutor:
)
# Compose new system prompt (Layer 1 + 2 + 3 + accounts)
# Prepend scope preamble to focus so the LLM stays
# within this node's responsibility.
_focus = next_spec.system_prompt
if next_spec.output_keys and _focus:
_focus = f"{EXECUTION_SCOPE_PREAMBLE}\n\n{_focus}"
new_system = compose_system_prompt(
identity_prompt=getattr(graph, "identity_prompt", None),
focus_prompt=next_spec.system_prompt,
focus_prompt=_focus,
narrative=narrative,
accounts_prompt=_node_accounts,
)
+43 -3
View File
@@ -26,6 +26,16 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# Injected into every worker node's system prompt so the LLM understands
# it is one step in a multi-node pipeline and should not overreach.
EXECUTION_SCOPE_PREAMBLE = (
"EXECUTION SCOPE: You are one node in a multi-step workflow graph. "
"Focus ONLY on the task described in your instructions below. "
"Call set_output() for each of your declared output keys, then stop. "
"Do NOT attempt work that belongs to other nodes — the framework "
"routes data between nodes automatically."
)
def _with_datetime(prompt: str) -> str:
"""Append current datetime with local timezone to a system prompt."""
@@ -267,7 +277,9 @@ def build_transition_marker(
sections.append(f"\nCompleted: {previous_node.name}")
sections.append(f" {previous_node.description}")
# Outputs in memory
# Outputs in memory — use file references for large values so the
# next node loads full data from disk instead of seeing truncated
# inline previews that look deceptively complete.
all_memory = memory.read_all()
if all_memory:
memory_lines: list[str] = []
@@ -275,7 +287,29 @@ def build_transition_marker(
if value is None:
continue
val_str = str(value)
if len(val_str) > 300:
if len(val_str) > 300 and data_dir:
# Auto-spill large transition values to data files
import json as _json
data_path = Path(data_dir)
data_path.mkdir(parents=True, exist_ok=True)
ext = ".json" if isinstance(value, (dict, list)) else ".txt"
filename = f"output_{key}{ext}"
try:
write_content = (
_json.dumps(value, indent=2, ensure_ascii=False)
if isinstance(value, (dict, list))
else str(value)
)
(data_path / filename).write_text(write_content, encoding="utf-8")
file_size = (data_path / filename).stat().st_size
val_str = (
f"[Saved to '{filename}' ({file_size:,} bytes). "
f"Use load_data(filename='{filename}') to access.]"
)
except Exception:
val_str = val_str[:300] + "..."
elif len(val_str) > 300:
val_str = val_str[:300] + "..."
memory_lines.append(f" {key}: {val_str}")
if memory_lines:
@@ -292,7 +326,7 @@ def build_transition_marker(
]
if file_lines:
sections.append(
"\nData files (use read_file to access):\n" + "\n".join(file_lines)
"\nData files (use load_data to access):\n" + "\n".join(file_lines)
)
# Agent working memory
@@ -306,6 +340,12 @@ def build_transition_marker(
# Next phase
sections.append(f"\nNow entering: {next_node.name}")
sections.append(f" {next_node.description}")
if next_node.output_keys:
sections.append(
f"\nYour ONLY job in this phase: complete the task above and call "
f"set_output() for {next_node.output_keys}. Do NOT do work that "
f"belongs to later phases."
)
# Reflection prompt (engineered metacognition)
sections.append(
+15 -3
View File
@@ -115,11 +115,23 @@ class SafeEvalVisitor(ast.NodeVisitor):
return True
def visit_BoolOp(self, node: ast.BoolOp) -> Any:
values = [self.visit(v) for v in node.values]
# Short-circuit evaluation to match Python semantics.
# Previously all operands were eagerly evaluated, which broke
# guard patterns like: ``x is not None and x.get("key")``
if isinstance(node.op, ast.And):
return all(values)
result = True
for v in node.values:
result = self.visit(v)
if not result:
return result
return result
elif isinstance(node.op, ast.Or):
return any(values)
result = False
for v in node.values:
result = self.visit(v)
if result:
return result
return result
raise ValueError(f"Boolean operator {type(node.op).__name__} is not allowed")
def visit_IfExp(self, node: ast.IfExp) -> Any:
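As in Python, the fixed operators return the deciding operand itself rather than a coerced bool; a quick illustration, again assuming a safe_eval(expression, variables) entry point:

# `and` returns the first falsy operand, `or` the first truthy one, so
# guard patterns evaluate safely and defaults read naturally:
safe_eval('x is not None and x.get("key")', {"x": None})  # -> False
safe_eval('name or "anonymous"', {"name": ""})            # -> 'anonymous'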
+647 -12
View File
@@ -7,9 +7,13 @@ Groq, and local models.
See: https://docs.litellm.ai/docs/providers
"""
import ast
import asyncio
import hashlib
import json
import logging
import os
import re
import time
from collections.abc import AsyncIterator
from datetime import datetime
@@ -44,7 +48,10 @@ def _patch_litellm_anthropic_oauth() -> None:
"""
try:
from litellm.llms.anthropic.common_utils import AnthropicModelInfo
from litellm.types.llms.anthropic import ANTHROPIC_OAUTH_TOKEN_PREFIX
from litellm.types.llms.anthropic import (
ANTHROPIC_OAUTH_BETA_HEADER,
ANTHROPIC_OAUTH_TOKEN_PREFIX,
)
except ImportError:
logger.warning(
"Could not apply litellm Anthropic OAuth patch — litellm internals may have "
@@ -69,9 +76,27 @@ def _patch_litellm_anthropic_oauth() -> None:
api_key=api_key,
api_base=api_base,
)
# Check both authorization header and x-api-key for OAuth tokens.
# litellm's optionally_handle_anthropic_oauth only checks headers["authorization"],
# but hive passes OAuth tokens via api_key — so litellm puts them into x-api-key.
# Anthropic rejects OAuth tokens in x-api-key; they must go in Authorization: Bearer.
auth = result.get("authorization", "")
if auth.startswith(f"Bearer {ANTHROPIC_OAUTH_TOKEN_PREFIX}"):
x_api_key = result.get("x-api-key", "")
oauth_prefix = f"Bearer {ANTHROPIC_OAUTH_TOKEN_PREFIX}"
auth_is_oauth = auth.startswith(oauth_prefix)
key_is_oauth = x_api_key.startswith(ANTHROPIC_OAUTH_TOKEN_PREFIX)
if auth_is_oauth or key_is_oauth:
token = x_api_key if key_is_oauth else auth.removeprefix("Bearer ").strip()
result.pop("x-api-key", None)
result["authorization"] = f"Bearer {token}"
# Merge the OAuth beta header with any existing beta headers.
existing_beta = result.get("anthropic-beta", "")
beta_parts = (
[b.strip() for b in existing_beta.split(",") if b.strip()] if existing_beta else []
)
if ANTHROPIC_OAUTH_BETA_HEADER not in beta_parts:
beta_parts.append(ANTHROPIC_OAUTH_BETA_HEADER)
result["anthropic-beta"] = ",".join(beta_parts)
return result
AnthropicModelInfo.validate_environment = _patched_validate_environment
@@ -130,11 +155,15 @@ def _patch_litellm_metadata_nonetype() -> None:
if litellm is not None:
_patch_litellm_anthropic_oauth()
_patch_litellm_metadata_nonetype()
# Let litellm silently drop params unsupported by the target provider
# (e.g. stream_options for Anthropic) instead of forwarding them verbatim.
litellm.drop_params = True
RATE_LIMIT_MAX_RETRIES = 10
RATE_LIMIT_BACKOFF_BASE = 2 # seconds
RATE_LIMIT_MAX_DELAY = 120 # seconds - cap to prevent absurd waits
MINIMAX_API_BASE = "https://api.minimax.io/v1"
OPENROUTER_API_BASE = "https://openrouter.ai/api/v1"
# Providers that accept cache_control on message content blocks.
# Anthropic: native ephemeral caching. MiniMax & Z-AI/GLM: pass-through to their APIs.
@@ -159,10 +188,69 @@ def _model_supports_cache_control(model: str) -> bool:
# enforces a coding-agent whitelist that blocks unknown User-Agents.
KIMI_API_BASE = "https://api.kimi.com/coding"
# Claude Code OAuth subscription: the Anthropic API requires a specific
# User-Agent and a billing integrity header for OAuth-authenticated requests.
CLAUDE_CODE_VERSION = "2.1.76"
CLAUDE_CODE_USER_AGENT = f"claude-code/{CLAUDE_CODE_VERSION}"
_CLAUDE_CODE_BILLING_SALT = "59cf53e54c78"
def _sample_js_code_unit(text: str, idx: int) -> str:
"""Return the character at UTF-16 code unit index *idx*, matching JS semantics."""
encoded = text.encode("utf-16-le")
unit_offset = idx * 2
if unit_offset + 2 > len(encoded):
return "0"
code_unit = int.from_bytes(encoded[unit_offset : unit_offset + 2], "little")
return chr(code_unit)
def _claude_code_billing_header(messages: list[dict[str, Any]]) -> str:
"""Build the billing integrity system block required by Anthropic's OAuth path."""
# Find the first user message text
first_text = ""
for msg in messages:
if msg.get("role") != "user":
continue
content = msg.get("content")
if isinstance(content, str):
first_text = content
break
if isinstance(content, list):
for block in content:
if isinstance(block, dict) and block.get("type") == "text" and block.get("text"):
first_text = block["text"]
break
if first_text:
break
sampled = "".join(_sample_js_code_unit(first_text, i) for i in (4, 7, 20))
version_hash = hashlib.sha256(
f"{_CLAUDE_CODE_BILLING_SALT}{sampled}{CLAUDE_CODE_VERSION}".encode()
).hexdigest()
entrypoint = os.environ.get("CLAUDE_CODE_ENTRYPOINT", "").strip() or "cli"
return (
f"x-anthropic-billing-header: cc_version={CLAUDE_CODE_VERSION}.{version_hash[:3]}; "
f"cc_entrypoint={entrypoint}; cch=00000;"
)
# Empty-stream retries use a short fixed delay, not the rate-limit backoff.
# Conversation-structure issues are deterministic — long waits don't help.
EMPTY_STREAM_MAX_RETRIES = 3
EMPTY_STREAM_RETRY_DELAY = 1.0 # seconds
OPENROUTER_TOOL_COMPAT_ERROR_SNIPPETS = (
"no endpoints found that support tool use",
"no endpoints available that support tool use",
"provider routing",
)
OPENROUTER_TOOL_CALL_RE = re.compile(
r"<\|tool_call_start\|>\s*(.*?)\s*<\|tool_call_end\|>",
re.DOTALL,
)
OPENROUTER_TOOL_COMPAT_CACHE_TTL_SECONDS = 3600
# OpenRouter routing can change over time, so tool-compat caching must expire.
OPENROUTER_TOOL_COMPAT_MODEL_CACHE: dict[str, float] = {}
# Directory for dumping failed requests
FAILED_REQUESTS_DIR = Path.home() / ".hive" / "failed_requests"
@@ -205,6 +293,24 @@ def _prune_failed_request_dumps(max_files: int = MAX_FAILED_REQUEST_DUMPS) -> No
pass # Best-effort — never block the caller
def _remember_openrouter_tool_compat_model(model: str) -> None:
"""Cache OpenRouter tool-compat fallback for a bounded time window."""
OPENROUTER_TOOL_COMPAT_MODEL_CACHE[model] = (
time.monotonic() + OPENROUTER_TOOL_COMPAT_CACHE_TTL_SECONDS
)
def _is_openrouter_tool_compat_cached(model: str) -> bool:
"""Return True when the cached OpenRouter compat entry is still fresh."""
expires_at = OPENROUTER_TOOL_COMPAT_MODEL_CACHE.get(model)
if expires_at is None:
return False
if expires_at <= time.monotonic():
OPENROUTER_TOOL_COMPAT_MODEL_CACHE.pop(model, None)
return False
return True
def _dump_failed_request(
model: str,
kwargs: dict[str, Any],
@@ -408,6 +514,12 @@ class LiteLLMProvider(LLMProvider):
self.api_key = api_key
self.api_base = api_base or self._default_api_base_for_model(_original_model)
self.extra_kwargs = kwargs
# Detect Claude Code OAuth subscription by checking the api_key prefix.
self._claude_code_oauth = bool(api_key and api_key.startswith("sk-ant-oat"))
if self._claude_code_oauth:
# Anthropic requires a specific User-Agent for OAuth requests.
eh = self.extra_kwargs.setdefault("extra_headers", {})
eh.setdefault("user-agent", CLAUDE_CODE_USER_AGENT)
# The Codex ChatGPT backend (chatgpt.com/backend-api/codex) rejects
# several standard OpenAI params: max_output_tokens, stream_options.
self._codex_backend = bool(
@@ -431,6 +543,8 @@ class LiteLLMProvider(LLMProvider):
model_lower = model.lower()
if model_lower.startswith("minimax/") or model_lower.startswith("minimax-"):
return MINIMAX_API_BASE
if model_lower.startswith("openrouter/"):
return OPENROUTER_API_BASE
if model_lower.startswith("kimi/"):
return KIMI_API_BASE
if model_lower.startswith("hive/"):
@@ -773,6 +887,9 @@ class LiteLLMProvider(LLMProvider):
return await self._collect_stream_to_response(stream_iter)
full_messages: list[dict[str, Any]] = []
if self._claude_code_oauth:
billing = _claude_code_billing_header(messages)
full_messages.append({"role": "system", "content": billing})
if system:
sys_msg: dict[str, Any] = {"role": "system", "content": system}
if _model_supports_cache_control(self.model):
@@ -834,11 +951,504 @@ class LiteLLMProvider(LLMProvider):
},
}
def _is_anthropic_model(self) -> bool:
"""Return True when the configured model targets Anthropic."""
model = (self.model or "").lower()
return model.startswith("anthropic/") or model.startswith("claude-")
def _is_minimax_model(self) -> bool:
"""Return True when the configured model targets MiniMax."""
model = (self.model or "").lower()
return model.startswith("minimax/") or model.startswith("minimax-")
def _is_openrouter_model(self) -> bool:
"""Return True when the configured model targets OpenRouter."""
model = (self.model or "").lower()
if model.startswith("openrouter/"):
return True
api_base = (self.api_base or "").lower()
return "openrouter.ai/api/v1" in api_base
def _should_use_openrouter_tool_compat(
self,
error: BaseException,
tools: list[Tool] | None,
) -> bool:
"""Return True when OpenRouter rejects native tool use for the model."""
if not tools or not self._is_openrouter_model():
return False
error_text = str(error).lower()
return "openrouter" in error_text and any(
snippet in error_text for snippet in OPENROUTER_TOOL_COMPAT_ERROR_SNIPPETS
)
@staticmethod
def _extract_json_object(text: str) -> dict[str, Any] | None:
"""Extract the first JSON object from a model response."""
candidates = [text.strip()]
stripped = text.strip()
if stripped.startswith("```"):
fence_lines = stripped.splitlines()
if len(fence_lines) >= 3:
candidates.append("\n".join(fence_lines[1:-1]).strip())
decoder = json.JSONDecoder()
for candidate in candidates:
if not candidate:
continue
try:
parsed = json.loads(candidate)
except json.JSONDecodeError:
parsed = None
if isinstance(parsed, dict):
return parsed
for start_idx, char in enumerate(candidate):
if char != "{":
continue
try:
parsed, _ = decoder.raw_decode(candidate[start_idx:])
except json.JSONDecodeError:
continue
if isinstance(parsed, dict):
return parsed
return None
def _parse_openrouter_tool_compat_response(
self,
content: str,
tools: list[Tool],
) -> tuple[str, list[dict[str, Any]]]:
"""Parse JSON tool-compat output into assistant text and tool calls."""
payload = self._extract_json_object(content)
if payload is None:
text_tool_content, text_tool_calls = self._parse_openrouter_text_tool_calls(
content,
tools,
)
if text_tool_calls:
logger.info(
"[openrouter-tool-compat] Parsed textual tool-call markers for %s",
self.model,
)
return text_tool_content, text_tool_calls
logger.info(
"[openrouter-tool-compat] %s returned non-JSON fallback content; "
"treating it as plain text.",
self.model,
)
return content.strip(), []
assistant_text = payload.get("assistant_response")
if not isinstance(assistant_text, str):
assistant_text = payload.get("content")
if not isinstance(assistant_text, str):
assistant_text = payload.get("response")
if not isinstance(assistant_text, str):
assistant_text = ""
tool_calls_raw = payload.get("tool_calls")
if not tool_calls_raw and {"name", "arguments"} <= payload.keys():
tool_calls_raw = [payload]
elif isinstance(payload.get("tool_call"), dict):
tool_calls_raw = [payload["tool_call"]]
if not isinstance(tool_calls_raw, list):
tool_calls_raw = []
allowed_tool_names = {tool.name for tool in tools}
tool_calls: list[dict[str, Any]] = []
compat_prefix = f"openrouter_compat_{time.time_ns()}"
for idx, raw_call in enumerate(tool_calls_raw):
if not isinstance(raw_call, dict):
continue
function_block = raw_call.get("function")
function_name = (
raw_call.get("name")
or raw_call.get("tool_name")
or (function_block.get("name") if isinstance(function_block, dict) else None)
)
if not isinstance(function_name, str) or function_name not in allowed_tool_names:
if function_name:
logger.warning(
"[openrouter-tool-compat] Ignoring unknown tool '%s' for model %s",
function_name,
self.model,
)
continue
arguments = raw_call.get("arguments")
if arguments is None:
arguments = raw_call.get("tool_input")
if arguments is None:
arguments = raw_call.get("input")
if arguments is None and isinstance(function_block, dict):
arguments = function_block.get("arguments")
if arguments is None:
arguments = {}
if isinstance(arguments, str):
try:
arguments = json.loads(arguments)
except json.JSONDecodeError:
arguments = {"_raw": arguments}
elif not isinstance(arguments, dict):
arguments = {"value": arguments}
tool_calls.append(
{
"id": f"{compat_prefix}_{idx}",
"name": function_name,
"input": arguments,
}
)
return assistant_text.strip(), tool_calls
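# Sketch of a parsed compat payload (hypothetical tool "search", assumed registered):
#   {"assistant_response": "Done.",
#    "tool_calls": [{"name": "search", "arguments": "{\"q\": \"news\"}"}]}
# -> ("Done.", [{"id": "openrouter_compat_<ns>_0", "name": "search", "input": {"q": "news"}}])
# String-valued arguments are json.loads-decoded; unknown tool names are dropped with a warning.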
@staticmethod
def _close_truncated_json_fragment(fragment: str) -> str:
"""Close a truncated JSON fragment by balancing quotes/brackets."""
stack: list[str] = []
in_string = False
escaped = False
normalized = fragment.rstrip()
while normalized and normalized[-1] in ",:{[":
normalized = normalized[:-1].rstrip()
for char in normalized:
if in_string:
if escaped:
escaped = False
elif char == "\\":
escaped = True
elif char == '"':
in_string = False
continue
if char == '"':
in_string = True
elif char in "{[":
stack.append(char)
elif char == "}" and stack and stack[-1] == "{":
stack.pop()
elif char == "]" and stack and stack[-1] == "[":
stack.pop()
if in_string:
if escaped:
normalized = normalized[:-1]
normalized += '"'
for opener in reversed(stack):
normalized += "}" if opener == "{" else "]"
return normalized
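# Balancing sketch (hypothetical fragments):
#   '{"query": "weather in Par'  -> '{"query": "weather in Par"}'  (close string, then brace)
#   '{"a": [1, 2'                -> '{"a": [1, 2]}'                (close bracket, then brace)
#   '{"a": 1,'                   -> '{"a": 1}'                     (dangling comma trimmed first)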
def _repair_truncated_tool_arguments(self, raw_arguments: str) -> dict[str, Any] | None:
"""Try to recover a truncated JSON object from tool-call arguments."""
stripped = raw_arguments.strip()
if not stripped or stripped[0] != "{":
return None
max_trim = min(len(stripped), 256)
for trim in range(max_trim + 1):
candidate = stripped[: len(stripped) - trim].rstrip()
if not candidate:
break
candidate = self._close_truncated_json_fragment(candidate)
try:
parsed = json.loads(candidate)
except json.JSONDecodeError:
continue
if isinstance(parsed, dict):
return parsed
return None
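# Repair sketch: progressively trim up to 256 trailing characters, close, and re-parse.
#   '{"path": "/tmp/x", "mode": "w'  -> {"path": "/tmp/x", "mode": "w"}
#   '{"count": 12'                   -> {"count": 12}
#   'not JSON at all'                -> None (the caller below then raises ValueError)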
def _parse_tool_call_arguments(self, raw_arguments: str, tool_name: str) -> dict[str, Any]:
"""Parse streamed tool arguments, repairing truncation when possible."""
try:
parsed = json.loads(raw_arguments) if raw_arguments else {}
except json.JSONDecodeError:
parsed = None
if isinstance(parsed, dict):
return parsed
repaired = self._repair_truncated_tool_arguments(raw_arguments)
if repaired is not None:
logger.warning(
"[tool-args] Recovered truncated arguments for %s on %s",
tool_name,
self.model,
)
return repaired
raise ValueError(
f"Failed to parse tool call arguments for '{tool_name}' (likely truncated JSON)."
)
def _parse_openrouter_text_tool_calls(
self,
content: str,
tools: list[Tool],
) -> tuple[str, list[dict[str, Any]]]:
"""Parse textual OpenRouter tool calls into synthetic tool calls.
Supports both:
- Marker-wrapped payloads: <|tool_call_start|>...<|tool_call_end|>
- Plain one-line tool calls: ask_user("...", ["..."])
"""
tools_by_name = {tool.name: tool for tool in tools}
compat_prefix = f"openrouter_compat_{time.time_ns()}"
tool_calls: list[dict[str, Any]] = []
segment_index = 0
for match in OPENROUTER_TOOL_CALL_RE.finditer(content):
parsed_calls = self._parse_openrouter_text_tool_call_block(
block=match.group(1),
tools_by_name=tools_by_name,
compat_prefix=f"{compat_prefix}_{segment_index}",
)
if parsed_calls:
segment_index += 1
tool_calls.extend(parsed_calls)
stripped_content = OPENROUTER_TOOL_CALL_RE.sub("", content)
retained_lines: list[str] = []
for line in stripped_content.splitlines():
stripped_line = line.strip()
if not stripped_line:
retained_lines.append(line)
continue
candidate = stripped_line
if candidate.startswith("`") and candidate.endswith("`") and len(candidate) > 1:
candidate = candidate[1:-1].strip()
parsed_calls = self._parse_openrouter_text_tool_call_block(
block=candidate,
tools_by_name=tools_by_name,
compat_prefix=f"{compat_prefix}_{segment_index}",
)
if parsed_calls:
segment_index += 1
tool_calls.extend(parsed_calls)
continue
retained_lines.append(line)
stripped_text = "\n".join(retained_lines).strip()
return stripped_text, tool_calls
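# Textual fallback sketch: given a registered tool ask_user, content such as
#   <|tool_call_start|>[ask_user("Which city?", ["Paris", "Tokyo"])]<|tool_call_end|>
# (or the same call on a bare or backtick-wrapped line) is lifted into a synthetic
# tool call and removed from the retained assistant text.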
def _parse_openrouter_text_tool_call_block(
self,
block: str,
tools_by_name: dict[str, Tool],
compat_prefix: str,
) -> list[dict[str, Any]]:
"""Parse a single textual tool-call block like [tool(arg='x')]."""
try:
parsed = ast.parse(block.strip(), mode="eval").body
except SyntaxError:
return []
call_nodes = parsed.elts if isinstance(parsed, ast.List) else [parsed]
tool_calls: list[dict[str, Any]] = []
for call_index, call_node in enumerate(call_nodes):
if not isinstance(call_node, ast.Call) or not isinstance(call_node.func, ast.Name):
continue
tool_name = call_node.func.id
tool = tools_by_name.get(tool_name)
if tool is None:
continue
try:
tool_input = self._parse_openrouter_text_tool_call_arguments(
call_node=call_node,
tool=tool,
)
except (ValueError, SyntaxError):
continue
tool_calls.append(
{
"id": f"{compat_prefix}_{call_index}",
"name": tool_name,
"input": tool_input,
}
)
return tool_calls
@staticmethod
def _parse_openrouter_text_tool_call_arguments(
call_node: ast.Call,
tool: Tool,
) -> dict[str, Any]:
"""Parse positional/keyword args from a textual tool call."""
properties = tool.parameters.get("properties", {})
positional_keys = list(properties.keys())
tool_input: dict[str, Any] = {}
if len(call_node.args) > len(positional_keys):
raise ValueError("Too many positional args for textual tool call")
for idx, arg_node in enumerate(call_node.args):
tool_input[positional_keys[idx]] = ast.literal_eval(arg_node)
for kwarg in call_node.keywords:
if kwarg.arg is None:
raise ValueError("Star args are not supported in textual tool calls")
tool_input[kwarg.arg] = ast.literal_eval(kwarg.value)
return tool_input
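# Argument-mapping sketch: for a tool whose JSON-schema "properties" declare, in order,
# {"question": ..., "options": ...}, the call ask_user("Which city?", ["Paris", "Tokyo"])
# yields {"question": "Which city?", "options": ["Paris", "Tokyo"]}. Keyword arguments
# are assigned by name, and all values must be Python literals (ast.literal_eval).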
def _build_openrouter_tool_compat_messages(
self,
messages: list[dict[str, Any]],
system: str,
tools: list[Tool],
) -> list[dict[str, Any]]:
"""Build a JSON-only prompt for models without native tool support."""
tool_specs = [
{
"name": tool.name,
"description": tool.description,
"parameters": tool.parameters,
}
for tool in tools
]
compat_instruction = (
"Tool compatibility mode is active because this OpenRouter model does not support "
"native function calling on the routed provider.\n"
"Return exactly one JSON object and nothing else.\n"
'Schema: {"assistant_response": string, '
'"tool_calls": [{"name": string, "arguments": object}]}\n'
"Rules:\n"
"- If a tool is required, put one or more entries in tool_calls "
"and do not invent tool results.\n"
"- If no tool is required, set tool_calls to [] and put the full "
"answer in assistant_response.\n"
"- Only use tool names from the allowed tool list.\n"
"- arguments must always be valid JSON objects.\n"
f"Allowed tools:\n{json.dumps(tool_specs, ensure_ascii=True)}"
)
compat_system = compat_instruction if not system else f"{system}\n\n{compat_instruction}"
full_messages: list[dict[str, Any]] = [{"role": "system", "content": compat_system}]
full_messages.extend(messages)
return [
message
for message in full_messages
if not (
message.get("role") == "assistant"
and not message.get("content")
and not message.get("tool_calls")
)
]
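# A compliant reply under this prompt is a single JSON object, e.g. (hypothetical tool):
#   {"assistant_response": "Fetching the forecast.",
#    "tool_calls": [{"name": "get_weather", "arguments": {"city": "Paris"}}]}
# or, when no tool is needed:
#   {"assistant_response": "Paris is the capital of France.", "tool_calls": []}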
async def _acomplete_via_openrouter_tool_compat(
self,
messages: list[dict[str, Any]],
system: str,
tools: list[Tool],
max_tokens: int,
) -> LLMResponse:
"""Emulate tool calling via JSON when OpenRouter rejects native tools."""
full_messages = self._build_openrouter_tool_compat_messages(messages, system, tools)
kwargs: dict[str, Any] = {
"model": self.model,
"messages": full_messages,
"max_tokens": max_tokens,
**self.extra_kwargs,
}
if self.api_key:
kwargs["api_key"] = self.api_key
if self.api_base:
kwargs["api_base"] = self.api_base
response = await self._acompletion_with_rate_limit_retry(**kwargs)
raw_content = response.choices[0].message.content or ""
assistant_text, tool_calls = self._parse_openrouter_tool_compat_response(
raw_content,
tools,
)
usage = response.usage
input_tokens = usage.prompt_tokens if usage else 0
output_tokens = usage.completion_tokens if usage else 0
stop_reason = "tool_calls" if tool_calls else (response.choices[0].finish_reason or "stop")
return LLMResponse(
content=assistant_text,
model=response.model or self.model,
input_tokens=input_tokens,
output_tokens=output_tokens,
stop_reason=stop_reason,
raw_response={
"compat_mode": "openrouter_tool_emulation",
"tool_calls": tool_calls,
"response": response,
},
)
async def _stream_via_openrouter_tool_compat(
self,
messages: list[dict[str, Any]],
system: str,
tools: list[Tool],
max_tokens: int,
) -> AsyncIterator[StreamEvent]:
"""Fallback stream for OpenRouter models without native tool support."""
from framework.llm.stream_events import (
FinishEvent,
StreamErrorEvent,
TextDeltaEvent,
TextEndEvent,
ToolCallEvent,
)
logger.info(
"[openrouter-tool-compat] Using compatibility mode for %s",
self.model,
)
try:
response = await self._acomplete_via_openrouter_tool_compat(
messages=messages,
system=system,
tools=tools,
max_tokens=max_tokens,
)
except Exception as e:
yield StreamErrorEvent(error=str(e), recoverable=False)
return
raw_response = response.raw_response if isinstance(response.raw_response, dict) else {}
tool_calls = raw_response.get("tool_calls", [])
if response.content:
yield TextDeltaEvent(content=response.content, snapshot=response.content)
yield TextEndEvent(full_text=response.content)
for tool_call in tool_calls:
yield ToolCallEvent(
tool_use_id=tool_call["id"],
tool_name=tool_call["name"],
tool_input=tool_call["input"],
)
yield FinishEvent(
stop_reason=response.stop_reason,
input_tokens=response.input_tokens,
output_tokens=response.output_tokens,
model=response.model,
)
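# Emitted event order for a compat response carrying text plus one tool call:
#   TextDeltaEvent -> TextEndEvent -> ToolCallEvent -> FinishEvent(stop_reason="tool_calls")
# A text-only response skips ToolCallEvent and finishes with the provider's stop reason.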
async def _stream_via_nonstream_completion(
self,
messages: list[dict[str, Any]],
@@ -882,12 +1492,11 @@ class LiteLLMProvider(LLMProvider):
tool_calls = msg.tool_calls or []
for tc in tool_calls:
parsed_args: Any
args = tc.function.arguments if tc.function else ""
try:
parsed_args = json.loads(args) if args else {}
except json.JSONDecodeError:
parsed_args = {"_raw": args}
parsed_args = self._parse_tool_call_arguments(
args,
tc.function.name if tc.function else "",
)
yield ToolCallEvent(
tool_use_id=getattr(tc, "id", ""),
tool_name=tc.function.name if tc.function else "",
@@ -946,7 +1555,20 @@ class LiteLLMProvider(LLMProvider):
yield event
return
if tools and self._is_openrouter_model() and _is_openrouter_tool_compat_cached(self.model):
async for event in self._stream_via_openrouter_tool_compat(
messages=messages,
system=system,
tools=tools,
max_tokens=max_tokens,
):
yield event
return
full_messages: list[dict[str, Any]] = []
if self._claude_code_oauth:
billing = _claude_code_billing_header(messages)
full_messages.append({"role": "system", "content": billing})
if system:
sys_msg: dict[str, Any] = {"role": "system", "content": system}
if _model_supports_cache_control(self.model):
@@ -984,9 +1606,12 @@ class LiteLLMProvider(LLMProvider):
"messages": full_messages,
"max_tokens": max_tokens,
"stream": True,
"stream_options": {"include_usage": True},
**self.extra_kwargs,
}
# stream_options is OpenAI-specific; Anthropic rejects it with 400.
# Only include it for providers that support it.
if not self._is_anthropic_model():
kwargs["stream_options"] = {"include_usage": True}
if self.api_key:
kwargs["api_key"] = self.api_key
if self.api_base:
@@ -1092,10 +1717,10 @@ class LiteLLMProvider(LLMProvider):
if choice.finish_reason:
stream_finish_reason = choice.finish_reason
for _idx, tc_data in sorted(tool_calls_acc.items()):
try:
parsed_args = json.loads(tc_data["arguments"])
except (json.JSONDecodeError, KeyError):
parsed_args = {"_raw": tc_data.get("arguments", "")}
parsed_args = self._parse_tool_call_arguments(
tc_data.get("arguments", ""),
tc_data.get("name", ""),
)
tail_events.append(
ToolCallEvent(
tool_use_id=tc_data["id"],
@@ -1276,6 +1901,16 @@ class LiteLLMProvider(LLMProvider):
return
except Exception as e:
if self._should_use_openrouter_tool_compat(e, tools):
_remember_openrouter_tool_compat_model(self.model)
async for event in self._stream_via_openrouter_tool_compat(
messages=messages,
system=system,
tools=tools or [],
max_tokens=max_tokens,
):
yield event
return
if _is_stream_transient_error(e) and attempt < RATE_LIMIT_MAX_RETRIES:
wait = _compute_retry_delay(attempt, exception=e)
logger.warning(
+6 -1
@@ -208,7 +208,12 @@ def configure_logging(
# Suppress noisy LiteLLM INFO logs (model/provider line + Provider List URL
# printed on every single completion call). Warnings and errors still show.
logging.getLogger("LiteLLM").setLevel(logging.WARNING)
# Honour LITELLM_LOG env var so users can opt-in to debug output.
_litellm_level = os.getenv("LITELLM_LOG", "").upper()
if _litellm_level and hasattr(logging, _litellm_level):
logging.getLogger("LiteLLM").setLevel(getattr(logging, _litellm_level))
else:
logging.getLogger("LiteLLM").setLevel(logging.WARNING)
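# Usage sketch (assuming a standard logging level name): LITELLM_LOG=DEBUG enables
# LiteLLM debug output; unset or unrecognised values keep the logger at WARNING.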
# When in JSON mode, configure known third-party loggers to use JSON formatter
# This ensures libraries like LiteLLM, httpcore also output clean JSON
+2
@@ -1381,6 +1381,8 @@ class AgentRunner:
return "MISTRAL_API_KEY"
elif model_lower.startswith("groq/"):
return "GROQ_API_KEY"
elif model_lower.startswith("openrouter/"):
return "OPENROUTER_API_KEY"
elif self._is_local_model(model_lower):
return None # Local models don't need an API key
elif model_lower.startswith("azure/"):
+1
@@ -159,6 +159,7 @@ class EventType(StrEnum):
TRIGGER_DEACTIVATED = "trigger_deactivated"
TRIGGER_FIRED = "trigger_fired"
TRIGGER_REMOVED = "trigger_removed"
TRIGGER_UPDATED = "trigger_updated"
@dataclass
@@ -69,6 +69,7 @@ async def create_queen(
QueenPhaseState,
register_queen_lifecycle_tools,
)
from framework.tools.queen_memory_tools import register_queen_memory_tools
hive_home = Path.home() / ".hive"
@@ -122,6 +123,9 @@ async def create_queen(
phase_state=phase_state,
)
# ---- Episodic memory tools (always registered) ---------------------
register_queen_memory_tools(queen_registry)
# ---- Monitoring tools (only when worker is loaded) ----------------
if session.worker_runtime:
from framework.tools.worker_monitoring_tools import register_worker_monitoring_tools
+1
@@ -46,6 +46,7 @@ DEFAULT_EVENT_TYPES = [
EventType.TRIGGER_DEACTIVATED,
EventType.TRIGGER_FIRED,
EventType.TRIGGER_REMOVED,
EventType.TRIGGER_UPDATED,
EventType.DRAFT_GRAPH_UPDATED,
]
+115 -7
@@ -24,6 +24,8 @@ Worker session browsing (persisted execution runs on disk):
"""
import asyncio
import contextlib
import json
import logging
import shutil
@@ -408,7 +410,7 @@ async def handle_session_entry_points(request: web.Request) -> web.Response:
async def handle_update_trigger_task(request: web.Request) -> web.Response:
"""PATCH /api/sessions/{session_id}/triggers/{trigger_id} — update trigger task."""
"""PATCH /api/sessions/{session_id}/triggers/{trigger_id} — update trigger fields."""
session, err = resolve_session(request)
if err:
return err
@@ -427,30 +429,136 @@ async def handle_update_trigger_task(request: web.Request) -> web.Response:
except Exception:
return web.json_response({"error": "Invalid JSON body"}, status=400)
task = body.get("task")
if task is None:
return web.json_response({"error": "Missing 'task' field"}, status=400)
if not isinstance(task, str):
return web.json_response({"error": "'task' must be a string"}, status=400)
updates: dict[str, object] = {}
tdef.task = task
if "task" in body:
task = body.get("task")
if not isinstance(task, str):
return web.json_response({"error": "'task' must be a string"}, status=400)
tdef.task = task
updates["task"] = tdef.task
trigger_config_update = body.get("trigger_config")
if trigger_config_update is not None:
if not isinstance(trigger_config_update, dict):
return web.json_response(
{"error": "'trigger_config' must be an object"},
status=400,
)
merged_trigger_config = dict(tdef.trigger_config)
merged_trigger_config.update(trigger_config_update)
if tdef.trigger_type == "timer":
cron_expr = merged_trigger_config.get("cron")
interval = merged_trigger_config.get("interval_minutes")
if cron_expr is not None and not isinstance(cron_expr, str):
return web.json_response(
{"error": "'trigger_config.cron' must be a string"},
status=400,
)
if cron_expr:
try:
from croniter import croniter
if not croniter.is_valid(cron_expr):
return web.json_response(
{"error": f"Invalid cron expression: {cron_expr}"},
status=400,
)
except ImportError:
return web.json_response(
{
"error": (
"croniter package not installed — cannot validate cron expression."
)
},
status=500,
)
merged_trigger_config.pop("interval_minutes", None)
elif interval is None:
return web.json_response(
{
"error": (
"Timer trigger needs 'cron' or 'interval_minutes' in trigger_config."
)
},
status=400,
)
elif not isinstance(interval, (int, float)) or interval <= 0:
return web.json_response(
{"error": "'trigger_config.interval_minutes' must be > 0"},
status=400,
)
tdef.trigger_config = merged_trigger_config
updates["trigger_config"] = tdef.trigger_config
if not updates:
return web.json_response(
{"error": "Provide at least one of 'task' or 'trigger_config'"},
status=400,
)
# Persist to session state and agent definition
from framework.tools.queen_lifecycle_tools import (
_persist_active_triggers,
_save_trigger_to_agent,
_start_trigger_timer,
_start_trigger_webhook,
)
if "trigger_config" in updates and trigger_id in getattr(session, "active_trigger_ids", set()):
task = session.active_timer_tasks.pop(trigger_id, None)
if task and not task.done():
task.cancel()
with contextlib.suppress(asyncio.CancelledError):
await task
getattr(session, "trigger_next_fire", {}).pop(trigger_id, None)
webhook_subs = getattr(session, "active_webhook_subs", {})
if sub_id := webhook_subs.pop(trigger_id, None):
with contextlib.suppress(Exception):
session.event_bus.unsubscribe(sub_id)
if tdef.trigger_type == "timer":
await _start_trigger_timer(session, trigger_id, tdef)
elif tdef.trigger_type == "webhook":
await _start_trigger_webhook(session, trigger_id, tdef)
if trigger_id in getattr(session, "active_trigger_ids", set()):
session_id = request.match_info["session_id"]
await _persist_active_triggers(session, session_id)
_save_trigger_to_agent(session, trigger_id, tdef)
# Emit SSE event so the frontend updates the graph and detail panel
bus = getattr(session, "event_bus", None)
if bus:
from framework.runtime.event_bus import AgentEvent, EventType
await bus.publish(
AgentEvent(
type=EventType.TRIGGER_UPDATED,
stream_id="queen",
data={
"trigger_id": trigger_id,
"task": tdef.task,
"trigger_config": tdef.trigger_config,
"trigger_type": tdef.trigger_type,
"name": tdef.description or trigger_id,
"entry_node": getattr(
getattr(getattr(session, "runner", None), "graph", None),
"entry_node",
None,
),
},
)
)
return web.json_response(
{
"trigger_id": trigger_id,
"task": tdef.task,
"trigger_config": tdef.trigger_config,
}
)
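# Illustrative request against this endpoint (trigger id "daily", as in the tests below):
#   PATCH /api/sessions/<session_id>/triggers/daily
#   {"trigger_config": {"cron": "0 6 * * *"}}
# merges into the stored config, drops a stale interval_minutes, restarts the timer
# when the trigger is active, persists, and emits TRIGGER_UPDATED over the event bus.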
+302 -53
@@ -47,6 +47,8 @@ class Session:
worker_handoff_sub: str | None = None
# Memory consolidation subscription (fires on CONTEXT_COMPACTED)
memory_consolidation_sub: str | None = None
# Worker run digest subscription (fires on EXECUTION_COMPLETED / EXECUTION_FAILED)
worker_digest_sub: str | None = None
# Trigger definitions loaded from agent's triggers.json (available but inactive)
available_triggers: dict[str, TriggerDefinition] = field(default_factory=dict)
# Active trigger tracking (IDs currently firing + their asyncio tasks)
@@ -177,6 +179,31 @@ class SessionManager:
agent_path = Path(agent_path)
resolved_worker_id = agent_id or agent_path.name
# When cold-restoring, check meta.json for the phase — if the agent
# was still being built we must NOT try to load the worker (the code
# is incomplete and will fail to import).
if queen_resume_from:
_resume_phase = None
_meta_path = (
Path.home() / ".hive" / "queen" / "session" / queen_resume_from / "meta.json"
)
if _meta_path.exists():
try:
_meta = json.loads(_meta_path.read_text(encoding="utf-8"))
_resume_phase = _meta.get("phase")
except (json.JSONDecodeError, OSError):
pass
if _resume_phase in ("building", "planning"):
# Fall back to queen-only session — cold resume handler in
# _start_queen will set phase_state.agent_path and switch to
# the correct phase.
return await self.create_session(
session_id=session_id,
model=model,
initial_prompt=initial_prompt,
queen_resume_from=queen_resume_from,
)
# Reuse the original session ID when cold-restoring so the frontend
# sees one continuous session instead of a new one each time.
session = await self._create_session_core(
@@ -193,6 +220,9 @@ class SessionManager:
model=model,
)
# Restore active triggers from persisted state (cold restore)
await self._restore_active_triggers(session, session.id)
# Start queen with worker profile + lifecycle + monitoring tools
worker_identity = (
build_worker_profile(session.worker_runtime, agent_path=agent_path)
@@ -204,7 +234,23 @@ class SessionManager:
)
except Exception:
# If anything fails, tear down the session
if queen_resume_from:
# Cold restore: worker load failed (e.g. incomplete code from a
# building session). Fall back to queen-only so the user can
# continue the conversation and fix / rebuild the agent.
logger.warning(
"Cold restore: worker load failed for '%s', falling back to queen-only",
agent_path,
exc_info=True,
)
await self.stop_session(session.id)
return await self.create_session(
session_id=session_id,
model=model,
initial_prompt=initial_prompt,
queen_resume_from=queen_resume_from,
)
# If anything fails (non-cold-restore), tear down the session
await self.stop_session(session.id)
raise
return session
@@ -297,6 +343,9 @@ class SessionManager:
session.worker_runtime = runtime
session.worker_info = info
# Subscribe to execution completion for per-run digest generation
self._subscribe_worker_digest(session)
async with self._lock:
self._loading.discard(session.id)
@@ -399,6 +448,51 @@ class SessionManager:
return False
return True
async def _restore_active_triggers(self, session: "Session", session_id: str) -> None:
"""Restore previously active triggers from persisted session state.
Called after worker loading to restart any timer/webhook triggers
that were active before a server restart.
"""
if not session.available_triggers or not session.worker_runtime:
return
try:
store = session.worker_runtime._session_store
state = await store.read_state(session_id)
if state and state.active_triggers:
from framework.tools.queen_lifecycle_tools import (
_start_trigger_timer,
_start_trigger_webhook,
)
saved_tasks = getattr(state, "trigger_tasks", {}) or {}
for tid in state.active_triggers:
tdef = session.available_triggers.get(tid)
if tdef:
# Restore user-configured task override
saved_task = saved_tasks.get(tid, "")
if saved_task:
tdef.task = saved_task
tdef.active = True
session.active_trigger_ids.add(tid)
if tdef.trigger_type == "timer":
await _start_trigger_timer(session, tid, tdef)
logger.info("Restored trigger timer '%s'", tid)
elif tdef.trigger_type == "webhook":
await _start_trigger_webhook(session, tid, tdef)
logger.info("Restored webhook trigger '%s'", tid)
else:
logger.warning(
"Saved trigger '%s' not found in worker entry points, skipping",
tid,
)
# Restore worker_configured flag
if state and getattr(state, "worker_configured", False):
session.worker_configured = True
except Exception as e:
logger.warning("Failed to restore active triggers: %s", e)
async def load_worker(
self,
session_id: str,
@@ -447,44 +541,7 @@ class SessionManager:
except OSError:
pass
# Restore previously active triggers from persisted session state
if session.available_triggers and session.worker_runtime:
try:
store = session.worker_runtime._session_store
state = await store.read_state(session_id)
if state and state.active_triggers:
from framework.tools.queen_lifecycle_tools import (
_start_trigger_timer,
_start_trigger_webhook,
)
saved_tasks = getattr(state, "trigger_tasks", {}) or {}
for tid in state.active_triggers:
tdef = session.available_triggers.get(tid)
if tdef:
# Restore user-configured task override
saved_task = saved_tasks.get(tid, "")
if saved_task:
tdef.task = saved_task
tdef.active = True
session.active_trigger_ids.add(tid)
if tdef.trigger_type == "timer":
await _start_trigger_timer(session, tid, tdef)
logger.info("Restored trigger timer '%s'", tid)
elif tdef.trigger_type == "webhook":
await _start_trigger_webhook(session, tid, tdef)
logger.info("Restored webhook trigger '%s'", tid)
else:
logger.warning(
"Saved trigger '%s' not found in worker entry points, skipping",
tid,
)
# Restore worker_configured flag
if state and getattr(state, "worker_configured", False):
session.worker_configured = True
except Exception as e:
logger.warning("Failed to restore active triggers: %s", e)
await self._restore_active_triggers(session, session_id)
# Emit SSE event so the frontend can update UI
await self._emit_worker_loaded(session)
@@ -526,6 +583,13 @@ class SessionManager:
await self._emit_trigger_events(session, "removed", session.available_triggers)
session.available_triggers.clear()
if session.worker_digest_sub is not None:
try:
session.event_bus.unsubscribe(session.worker_digest_sub)
except Exception:
pass
session.worker_digest_sub = None
worker_id = session.worker_id
session.worker_id = None
session.worker_path = None
@@ -563,6 +627,13 @@ class SessionManager:
pass
session.worker_handoff_sub = None
if session.worker_digest_sub is not None:
try:
session.event_bus.unsubscribe(session.worker_digest_sub)
except Exception:
pass
session.worker_digest_sub = None
# Stop queen and memory consolidation subscription
if session.memory_consolidation_sub is not None:
try:
@@ -647,6 +718,134 @@ class SessionManager:
else:
logger.warning("Worker handoff received but queen node not ready")
def _subscribe_worker_digest(self, session: Session) -> None:
"""Subscribe to worker events to write per-run digests.
Three triggers:
- NODE_LOOP_ITERATION: write a mid-run snapshot, throttled to at most
once every _DIGEST_COOLDOWN seconds per execution.
- TOOL_CALL_COMPLETED for delegate_to_sub_agent: same throttled snapshot.
Orchestrator nodes often run all subagent calls in a single LLM turn,
so NODE_LOOP_ITERATION only fires once at the end. Subagent
completions provide intermediate checkpoints.
- EXECUTION_COMPLETED / EXECUTION_FAILED: always write the final digest,
bypassing the cooldown.
"""
import time as _time
from framework.runtime.event_bus import EventType as _ET
_DIGEST_COOLDOWN = 300.0 # seconds between mid-run snapshots
if session.worker_digest_sub is not None:
try:
session.event_bus.unsubscribe(session.worker_digest_sub)
except Exception:
pass
session.worker_digest_sub = None
agent_name = session.worker_path.name if session.worker_path else None
if not agent_name:
return
_agent_name = agent_name
_llm = session.llm
_bus = session.event_bus
# per-execution_id monotonic timestamp of last mid-run digest
_last_digest: dict[str, float] = {}
def _resolve_run_id(exec_id: str) -> str | None:
"""Look up the run_id for a given execution_id via EXECUTION_STARTED history."""
for e in _bus.get_history(event_type=_ET.EXECUTION_STARTED, limit=200):
if e.execution_id == exec_id and getattr(e, "run_id", None):
return e.run_id
return None
async def _inject_digest_to_queen(run_id: str) -> None:
"""Read the written digest and push it into the queen's conversation."""
from framework.agents.worker_memory import digest_path
try:
content = digest_path(_agent_name, run_id).read_text(encoding="utf-8").strip()
except OSError:
return
if not content:
return
executor = session.queen_executor
if executor is None:
return
node = executor.node_registry.get("queen")
if node is None or not hasattr(node, "inject_event"):
return
await node.inject_event(f"[WORKER_DIGEST]\n{content}")
async def _consolidate_and_notify(run_id: str, outcome_event: Any) -> None:
"""Write the digest then push it to the queen."""
from framework.agents.worker_memory import consolidate_worker_run
await consolidate_worker_run(_agent_name, run_id, outcome_event, _bus, _llm)
await _inject_digest_to_queen(run_id)
async def _on_worker_event(event: Any) -> None:
if event.stream_id == "queen":
return
exec_id = event.execution_id
if event.type == _ET.EXECUTION_STARTED:
# New run on this execution_id — reset cooldown so the first
# iteration always produces a mid-run snapshot.
if exec_id:
_last_digest.pop(exec_id, None)
elif event.type in (
_ET.EXECUTION_COMPLETED,
_ET.EXECUTION_FAILED,
_ET.EXECUTION_PAUSED,
):
# Final digest — always fire, ignore cooldown.
# EXECUTION_PAUSED covers cancellation (queen re-triggering the
# worker cancels the previous execution, emitting paused).
run_id = getattr(event, "run_id", None) or _resolve_run_id(exec_id)
if run_id:
asyncio.create_task(
_consolidate_and_notify(run_id, event),
name=f"worker-digest-final-{run_id}",
)
elif event.type in (_ET.NODE_LOOP_ITERATION, _ET.TOOL_CALL_COMPLETED):
# Mid-run snapshot — respect 300 s cooldown per execution.
# TOOL_CALL_COMPLETED is only interesting for subagent calls;
# regular tool completions are too frequent and too cheap.
if event.type == _ET.TOOL_CALL_COMPLETED:
tool_name = (event.data or {}).get("tool_name", "")
if tool_name != "delegate_to_sub_agent":
return
if not exec_id:
return
now = _time.monotonic()
if now - _last_digest.get(exec_id, 0.0) < _DIGEST_COOLDOWN:
return
run_id = _resolve_run_id(exec_id)
if run_id:
_last_digest[exec_id] = now
asyncio.create_task(
_consolidate_and_notify(run_id, None),
name=f"worker-digest-{run_id}",
)
session.worker_digest_sub = session.event_bus.subscribe(
event_types=[
_ET.EXECUTION_STARTED,
_ET.NODE_LOOP_ITERATION,
_ET.TOOL_CALL_COMPLETED,
_ET.EXECUTION_COMPLETED,
_ET.EXECUTION_FAILED,
_ET.EXECUTION_PAUSED,
],
handler=_on_worker_event,
)
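# Digest cadence for a single execution (sketch of the handler above):
#   EXECUTION_STARTED    -> cooldown reset, so the first iteration snapshots immediately
#   NODE_LOOP_ITERATION  -> mid-run snapshot, then suppressed for 300 s per execution
#   TOOL_CALL_COMPLETED  -> snapshot only for delegate_to_sub_agent, same cooldown
#   EXECUTION_COMPLETED / _FAILED / _PAUSED -> final digest, cooldown bypassed,
#                                              then injected into the queen's conversation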
def _subscribe_worker_handoffs(self, session: Session, executor: Any) -> None:
"""Subscribe queen to worker/subagent escalation handoff events."""
from framework.runtime.event_bus import EventType as _ET
@@ -700,16 +899,21 @@ class SessionManager:
else None
)
)
_meta_path.write_text(
json.dumps(
{
"agent_name": _agent_name,
"agent_path": str(session.worker_path) if session.worker_path else None,
"created_at": time.time(),
}
),
encoding="utf-8",
)
# Merge into existing meta.json to preserve fields written by
# _update_meta_json (e.g. phase, agent_path set during building).
_existing_meta: dict = {}
if _meta_path.exists():
try:
_existing_meta = json.loads(_meta_path.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError):
pass
_new_meta: dict = {"created_at": time.time()}
if _agent_name is not None:
_new_meta["agent_name"] = _agent_name
if session.worker_path is not None:
_new_meta["agent_path"] = str(session.worker_path)
_existing_meta.update(_new_meta)
_meta_path.write_text(json.dumps(_existing_meta), encoding="utf-8")
except OSError:
pass
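# Resulting meta.json after the merge (illustrative values; a "phase" field written
# elsewhere by _update_meta_json survives):
#   {"phase": "staging", "agent_name": "my_agent",
#    "agent_path": "exports/my_agent", "created_at": 1760000000.0}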
@@ -762,11 +966,27 @@ class SessionManager:
try:
_meta = json.loads(meta_path.read_text(encoding="utf-8"))
_agent_path = _meta.get("agent_path")
_phase = _meta.get("phase")
if _agent_path and Path(_agent_path).exists():
await self.load_worker(session.id, _agent_path)
if session.phase_state:
await session.phase_state.switch_to_staging(source="auto")
logger.info("Cold restore: auto-loaded worker from %s", _agent_path)
if _phase in ("staging", "running", None):
# Agent fully built — load worker and resume
await self.load_worker(session.id, _agent_path)
if session.phase_state:
await session.phase_state.switch_to_staging(source="auto")
# Emit flowchart overlay so frontend can display it
await self._emit_flowchart_on_restore(session, _agent_path)
logger.info("Cold restore: auto-loaded worker from %s", _agent_path)
elif _phase == "building":
# Agent folder exists but incomplete — resume building
if session.phase_state:
session.phase_state.agent_path = _agent_path
await session.phase_state.switch_to_building(source="auto")
logger.info("Cold restore: resumed BUILDING phase for %s", _agent_path)
elif _phase == "planning":
if session.phase_state:
session.phase_state.agent_path = _agent_path
logger.info("Cold restore: PLANNING phase for %s", _agent_path)
except Exception:
logger.warning("Cold restore: failed to auto-load worker", exc_info=True)
@@ -841,6 +1061,29 @@ class SessionManager:
)
)
async def _emit_flowchart_on_restore(self, session: Session, agent_path: str | Path) -> None:
"""Emit FLOWCHART_MAP_UPDATED from persisted flowchart file on cold restore."""
from framework.runtime.event_bus import AgentEvent, EventType
from framework.tools.flowchart_utils import load_flowchart_file
original_draft, flowchart_map = load_flowchart_file(agent_path)
if original_draft is None:
return
# Cache in phase_state so the REST endpoint also returns it
if session.phase_state:
session.phase_state.original_draft_graph = original_draft
session.phase_state.flowchart_map = flowchart_map
await session.event_bus.publish(
AgentEvent(
type=EventType.FLOWCHART_MAP_UPDATED,
stream_id="queen",
data={
"map": flowchart_map,
"original_draft": original_draft,
},
)
)
async def _notify_queen_worker_unloaded(self, session: Session) -> None:
"""Notify the queen that the worker has been unloaded."""
executor = session.queen_executor
@@ -868,6 +1111,10 @@ class SessionManager:
event_type = (
EventType.TRIGGER_AVAILABLE if kind == "available" else EventType.TRIGGER_REMOVED
)
# Resolve graph entry node for trigger target
runner = getattr(session, "runner", None)
graph_entry = runner.graph.entry_node if runner else None
for t in triggers.values():
await session.event_bus.publish(
AgentEvent(
@@ -877,6 +1124,8 @@ class SessionManager:
"trigger_id": t.id,
"trigger_type": t.trigger_type,
"trigger_config": t.trigger_config,
"name": t.description or t.id,
**({"entry_node": graph_entry} if graph_entry else {}),
},
)
)
+67
@@ -5,6 +5,7 @@ Uses aiohttp TestClient with mocked sessions to test all endpoints
without requiring actual LLM calls or agent loading.
"""
import asyncio
import json
from dataclasses import dataclass, field
from pathlib import Path
@@ -13,6 +14,7 @@ from unittest.mock import AsyncMock, MagicMock
import pytest
from aiohttp.test_utils import TestClient, TestServer
from framework.runtime.triggers import TriggerDefinition
from framework.server.app import create_app
from framework.server.session_manager import Session
@@ -172,6 +174,7 @@ def _make_session(
runner.intro_message = "Test intro"
mock_event_bus = MagicMock()
mock_event_bus.publish = AsyncMock()
mock_llm = MagicMock()
queen_executor = _make_queen_executor() if with_queen else None
@@ -484,6 +487,70 @@ class TestSessionCRUD:
data = await resp.json()
assert "primary" in data["graphs"]
@pytest.mark.asyncio
async def test_update_trigger_task(self, tmp_path):
session = _make_session(tmp_dir=tmp_path)
session.available_triggers["daily"] = TriggerDefinition(
id="daily",
trigger_type="timer",
trigger_config={"cron": "0 5 * * *"},
task="Old task",
)
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.patch(
"/api/sessions/test_agent/triggers/daily",
json={"task": "New task"},
)
assert resp.status == 200
data = await resp.json()
assert data["task"] == "New task"
assert data["trigger_config"]["cron"] == "0 5 * * *"
assert session.available_triggers["daily"].task == "New task"
@pytest.mark.asyncio
async def test_update_trigger_cron_restarts_active_timer(self, tmp_path):
session = _make_session(tmp_dir=tmp_path)
session.available_triggers["daily"] = TriggerDefinition(
id="daily",
trigger_type="timer",
trigger_config={"cron": "0 5 * * *"},
task="Run task",
active=True,
)
session.active_trigger_ids.add("daily")
session.active_timer_tasks["daily"] = asyncio.create_task(asyncio.sleep(60))
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.patch(
"/api/sessions/test_agent/triggers/daily",
json={"trigger_config": {"cron": "0 6 * * *"}},
)
assert resp.status == 200
data = await resp.json()
assert data["trigger_config"]["cron"] == "0 6 * * *"
assert "daily" in session.active_timer_tasks
assert session.active_timer_tasks["daily"] is not None
assert session.available_triggers["daily"].trigger_config["cron"] == "0 6 * * *"
session.active_timer_tasks["daily"].cancel()
@pytest.mark.asyncio
async def test_update_trigger_cron_rejects_invalid_expression(self, tmp_path):
session = _make_session(tmp_dir=tmp_path)
session.available_triggers["daily"] = TriggerDefinition(
id="daily",
trigger_type="timer",
trigger_config={"cron": "0 5 * * *"},
task="Run task",
)
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.patch(
"/api/sessions/test_agent/triggers/daily",
json={"trigger_config": {"cron": "not a cron"}},
)
assert resp.status == 400
class TestExecution:
@pytest.mark.asyncio
+122 -13
@@ -727,6 +727,25 @@ def _dissolve_planning_nodes(
return converted, flowchart_map
def _update_meta_json(session_manager, manager_session_id, updates: dict) -> None:
"""Merge updates into the queen session's meta.json."""
if session_manager is None or not manager_session_id:
return
srv_session = session_manager.get_session(manager_session_id)
if not srv_session:
return
storage_sid = getattr(srv_session, "queen_resume_from", None) or srv_session.id
meta_path = Path.home() / ".hive" / "queen" / "session" / storage_sid / "meta.json"
try:
existing = {}
if meta_path.exists():
existing = json.loads(meta_path.read_text(encoding="utf-8"))
existing.update(updates)
meta_path.write_text(json.dumps(existing), encoding="utf-8")
except OSError:
pass
def register_queen_lifecycle_tools(
registry: ToolRegistry,
session: Any = None,
@@ -975,6 +994,7 @@ def register_queen_lifecycle_tools(
# Switch to building phase
if phase_state is not None:
await phase_state.switch_to_building()
_update_meta_json(session_manager, manager_session_id, {"phase": "building"})
result = json.loads(stop_result)
result["phase"] = "building"
@@ -1559,12 +1579,22 @@ def register_queen_lifecycle_tools(
# Find edges where this leaf node is the source
out_edges = [e for e in validated_edges if e["source"] == leaf_id]
in_edges = [e for e in validated_edges if e["target"] == leaf_id]
if not out_edges:
continue # already a proper leaf
# Identify the parent (predecessor that connects IN)
parent_ids = [e["source"] for e in in_edges]
if not out_edges:
# Already a proper leaf — still ensure sub_agents is set
for pid in parent_ids:
parent = node_by_id_v.get(pid)
if parent is None:
continue
existing = parent.get("sub_agents") or []
if leaf_id not in existing:
existing.append(leaf_id)
parent["sub_agents"] = existing
continue
# Strip all outgoing edges from the leaf node that
# don't go back to a parent (report edges are OK)
illegal_targets: list[str] = []
@@ -1978,6 +2008,17 @@ def register_queen_lifecycle_tools(
"type": "string",
"description": "What success looks like for this node",
},
"sub_agents": {
"type": "array",
"items": {"type": "string"},
"description": (
"IDs of GCU/browser sub-agent nodes managed by this node. "
"At build time, sub-agent nodes are dissolved into this list. "
"Set this on the PARENT node — e.g. the orchestrator that "
"delegates to GCU leaves. Visual delegation edges are "
"synthesized automatically."
),
},
"decision_clause": {
"type": "string",
"description": (
@@ -2095,8 +2136,22 @@ def register_queen_lifecycle_tools(
phase_state.draft_graph = converted
phase_state.flowchart_map = fmap
# Note: flowchart file is persisted later, in initialize_and_build_agent
# (after the agent folder is scaffolded) or in load_built_agent.
# Create agent folder early so flowchart and agent_path are available
# throughout the entire BUILDING phase.
_agent_name = phase_state.draft_graph.get("agent_name", "").strip()
if _agent_name:
_agent_folder = Path("exports") / _agent_name
_agent_folder.mkdir(parents=True, exist_ok=True)
_save_flowchart_file(_agent_folder, original_copy, fmap)
phase_state.agent_path = str(_agent_folder)
_update_meta_json(
session_manager,
manager_session_id,
{
"agent_path": str(_agent_folder),
"agent_name": _agent_name.replace("_", " ").title(),
},
)
dissolved_count = len(original_nodes) - len(converted.get("nodes", []))
decision_count = sum(1 for n in original_nodes if n.get("flowchart_type") == "decision")
@@ -2228,6 +2283,7 @@ def register_queen_lifecycle_tools(
if fallback_path:
phase_state.agent_path = str(fallback_path)
await phase_state.switch_to_building(source="tool")
_update_meta_json(session_manager, manager_session_id, {"phase": "building"})
if phase_state.inject_notification:
await phase_state.inject_notification(
"[PHASE CHANGE] Switched to BUILDING phase. "
@@ -2270,8 +2326,13 @@ def register_queen_lifecycle_tools(
if parsed.get("success", True):
if phase_state is not None:
# Set agent_path so the frontend can query credentials
phase_state.agent_path = str(Path("exports") / agent_name)
phase_state.agent_path = phase_state.agent_path or str(
Path("exports") / agent_name
)
await phase_state.switch_to_building(source="tool")
_update_meta_json(
session_manager, manager_session_id, {"phase": "building"}
)
# Reset draft state after successful scaffolding
phase_state.build_confirmed = False
# Persist flowchart now that the agent folder exists
@@ -2319,6 +2380,7 @@ def register_queen_lifecycle_tools(
# Switch to staging phase
if phase_state is not None:
await phase_state.switch_to_staging()
_update_meta_json(session_manager, manager_session_id, {"phase": "staging"})
result = json.loads(stop_result)
result["phase"] = "staging"
@@ -2347,6 +2409,30 @@ def register_queen_lifecycle_tools(
"""Get the session's event bus for querying history."""
return getattr(session, "event_bus", None)
def _get_worker_name() -> str | None:
"""Return the worker agent directory name, used for diary lookups."""
p = getattr(session, "worker_path", None)
return p.name if p else None
def _format_diary(max_runs: int) -> str:
"""Read recent run digests from disk — no EventBus required."""
agent_name = _get_worker_name()
if not agent_name:
return "No worker loaded — diary unavailable."
from framework.agents.worker_memory import read_recent_digests
entries = read_recent_digests(agent_name, max_runs)
if not entries:
return (
f"No run digests for '{agent_name}' yet. "
"Digests are written at the end of each completed run."
)
lines = [f"Worker '{agent_name}'{len(entries)} recent run digest(s):", ""]
for _run_id, content in entries:
lines.append(content)
lines.append("")
return "\n".join(lines).rstrip()
# Tiered cooldowns: summary is free, detail has short cooldown, full keeps 30s
_COOLDOWN_FULL = 30.0
_COOLDOWN_DETAIL = 10.0
@@ -2949,16 +3035,17 @@ def register_queen_lifecycle_tools(
import time as _time
# --- Tiered cooldown ---
# diary is free (file reads only), summary is free, detail has 10s, full has 30s
now = _time.monotonic()
if focus == "full":
cooldown = _COOLDOWN_FULL
tier = "full"
elif focus is not None:
elif focus == "diary" or focus is None:
cooldown = 0.0
tier = focus or "summary"
else:
cooldown = _COOLDOWN_DETAIL
tier = "detail"
else:
cooldown = 0.0
tier = "summary"
elapsed_since = now - _status_last_called.get(tier, 0.0)
if elapsed_since < cooldown:
@@ -2974,6 +3061,10 @@ def register_queen_lifecycle_tools(
)
_status_last_called[tier] = now
# --- Diary: pure file reads, no runtime required ---
if focus == "diary":
return _format_diary(last_n)
# --- Runtime check ---
runtime = _get_runtime()
if runtime is None:
@@ -3023,7 +3114,7 @@ def register_queen_lifecycle_tools(
else:
return (
f"Unknown focus '{focus}'. "
"Valid options: activity, memory, tools, issues, progress, full."
"Valid options: diary, activity, memory, tools, issues, progress, full."
)
except Exception as exc:
logger.exception("get_worker_status error")
@@ -3034,6 +3125,8 @@ def register_queen_lifecycle_tools(
description=(
"Check on the worker. Returns a brief prose summary by default. "
"Use 'focus' to drill into specifics:\n"
"- diary: persistent run digests from past executions — read this first "
"before digging into live runtime logs\n"
"- activity: current node, transitions, latest LLM output\n"
"- memory: worker's accumulated knowledge and state\n"
"- tools: running and recent tool calls\n"
@@ -3046,8 +3139,11 @@ def register_queen_lifecycle_tools(
"properties": {
"focus": {
"type": "string",
"enum": ["activity", "memory", "tools", "issues", "progress", "full"],
"description": ("Aspect to inspect. Omit for a brief summary."),
"enum": ["diary", "activity", "memory", "tools", "issues", "progress", "full"],
"description": (
"Aspect to inspect. Omit for a brief summary. "
"Use 'diary' to read persistent run history before checking live logs."
),
},
"last_n": {
"type": "integer",
@@ -3446,6 +3542,7 @@ def register_queen_lifecycle_tools(
if phase_state is not None:
phase_state.agent_path = str(resolved_path)
await phase_state.switch_to_staging()
_update_meta_json(session_manager, manager_session_id, {"phase": "staging"})
worker_name = info.name if info else updated_session.worker_id
return json.dumps(
@@ -3565,6 +3662,7 @@ def register_queen_lifecycle_tools(
# Switch to running phase
if phase_state is not None:
await phase_state.switch_to_running()
_update_meta_json(session_manager, manager_session_id, {"phase": "running"})
return json.dumps(
{
@@ -3702,6 +3800,8 @@ def register_queen_lifecycle_tools(
_save_trigger_to_agent(session, trigger_id, tdef)
bus = getattr(session, "event_bus", None)
if bus:
_runner = getattr(session, "runner", None)
_graph_entry = _runner.graph.entry_node if _runner else None
await bus.publish(
AgentEvent(
type=EventType.TRIGGER_ACTIVATED,
@@ -3710,6 +3810,8 @@ def register_queen_lifecycle_tools(
"trigger_id": trigger_id,
"trigger_type": t_type,
"trigger_config": t_config,
"name": tdef.description or trigger_id,
**({"entry_node": _graph_entry} if _graph_entry else {}),
},
)
)
@@ -3762,6 +3864,8 @@ def register_queen_lifecycle_tools(
# Emit event
bus = getattr(session, "event_bus", None)
if bus:
_runner = getattr(session, "runner", None)
_graph_entry = _runner.graph.entry_node if _runner else None
await bus.publish(
AgentEvent(
type=EventType.TRIGGER_ACTIVATED,
@@ -3770,6 +3874,8 @@ def register_queen_lifecycle_tools(
"trigger_id": trigger_id,
"trigger_type": t_type,
"trigger_config": t_config,
"name": tdef.description or trigger_id,
**({"entry_node": _graph_entry} if _graph_entry else {}),
},
)
)
@@ -3868,7 +3974,10 @@ def register_queen_lifecycle_tools(
AgentEvent(
type=EventType.TRIGGER_DEACTIVATED,
stream_id="queen",
data={"trigger_id": trigger_id},
data={
"trigger_id": trigger_id,
"name": tdef.description or trigger_id if tdef else trigger_id,
},
)
)
+8
@@ -60,6 +60,7 @@
"integrity": "sha512-CGOfOJqWjg2qW/Mb6zNsDm+u5vFQ8DxXfbM09z69p5Z6+mE1ikP2jUXw+j42Pf1XTYED2Rni5f95npYeuwMDQA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@babel/code-frame": "^7.29.0",
"@babel/generator": "^7.29.0",
@@ -1556,6 +1557,7 @@
"integrity": "sha512-4K3bqJpXpqfg2XKGK9bpDTc6xO/xoUP/RBWS7AtRMug6zZFaRekiLzjVtAoZMquxoAbzBvy5nxQ7veS5eYzf8A==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"undici-types": "~7.18.0"
}
@@ -1571,6 +1573,7 @@
"resolved": "https://registry.npmjs.org/@types/react/-/react-18.3.28.tgz",
"integrity": "sha512-z9VXpC7MWrhfWipitjNdgCauoMLRdIILQsAEV+ZesIzBq/oUlxk0m3ApZuMFCXdnS4U7KrI+l3WRUEGQ8K1QKw==",
"license": "MIT",
"peer": true,
"dependencies": {
"@types/prop-types": "*",
"csstype": "^3.2.2"
@@ -1783,6 +1786,7 @@
}
],
"license": "MIT",
"peer": true,
"dependencies": {
"baseline-browser-mapping": "^2.9.0",
"caniuse-lite": "^1.0.30001759",
@@ -3560,6 +3564,7 @@
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"dev": true,
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},
@@ -3611,6 +3616,7 @@
"resolved": "https://registry.npmjs.org/react/-/react-18.3.1.tgz",
"integrity": "sha512-wS+hAgJShR0KhEvPJArfuPVN1+Hz1t0Y6n5jLrGQbkb4urgPE/0Rve+1kMB1v/oWgHgm4WIcV+i7F2pTVj+2iQ==",
"license": "MIT",
"peer": true,
"dependencies": {
"loose-envify": "^1.1.0"
},
@@ -3623,6 +3629,7 @@
"resolved": "https://registry.npmjs.org/react-dom/-/react-dom-18.3.1.tgz",
"integrity": "sha512-5m4nQKp+rZRb09LNH59GM4BxTh9251/ylbKIbpe7TpGxfJ+9kv6BLkLBXIjjspbgbnIBNqlI23tRnTWT0snUIw==",
"license": "MIT",
"peer": true,
"dependencies": {
"loose-envify": "^1.1.0",
"scheduler": "^0.23.2"
@@ -4183,6 +4190,7 @@
"integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"esbuild": "^0.25.0",
"fdir": "^6.4.4",
+7 -3
@@ -64,10 +64,14 @@ export const sessionsApi = {
`/sessions/${sessionId}/entry-points`,
),
updateTriggerTask: (sessionId: string, triggerId: string, task: string) =>
api.patch<{ trigger_id: string; task: string }>(
updateTrigger: (
sessionId: string,
triggerId: string,
patch: { task?: string; trigger_config?: Record<string, unknown> },
) =>
api.patch<{ trigger_id: string; task: string; trigger_config: Record<string, unknown> }>(
`/sessions/${sessionId}/triggers/${triggerId}`,
{ task },
patch,
),
graphs: (sessionId: string) =>
+2 -1
@@ -337,7 +337,8 @@ export type EventTypeName =
| "trigger_activated"
| "trigger_deactivated"
| "trigger_fired"
| "trigger_removed";
| "trigger_removed"
| "trigger_updated";
export interface AgentEvent {
type: EventTypeName;
+161 -14
@@ -3,11 +3,23 @@ import { Loader2 } from "lucide-react";
import type { DraftGraph as DraftGraphData, DraftNode } from "@/api/types";
import { RunButton } from "./RunButton";
import type { GraphNode, RunState } from "./graph-types";
import {
cssVar,
truncateLabel,
TRIGGER_ICONS,
ACTIVE_TRIGGER_COLORS,
useTriggerColors,
} from "@/lib/graphUtils";
// Read a CSS custom property value (space-separated HSL components)
function cssVar(name: string): string {
return getComputedStyle(document.documentElement).getPropertyValue(name).trim();
}
// ── Trigger layout constants ──
const TRIGGER_H = 38; // pill height
const TRIGGER_PILL_GAP_X = 16; // horizontal gap between multiple trigger pills
const TRIGGER_ICON_X = 16; // icon center offset from pill left edge
const TRIGGER_LABEL_X = 30; // label start offset from pill left edge
const TRIGGER_LABEL_INSET = 38; // icon + padding subtracted from pill width for label space
const TRIGGER_TEXT_Y = 11; // y-offset below pill for first text line (countdown or status)
const TRIGGER_TEXT_STEP = 11; // additional y-offset for second text line when countdown present
const TRIGGER_CLEARANCE = 30; // vertical space below pill for countdown + status text
interface DraftChromeColors {
edge: string;
@@ -107,13 +119,6 @@ function formatNodeId(id: string): string {
return id.split("-").map(w => w.charAt(0).toUpperCase() + w.slice(1)).join(" ");
}
function truncateLabel(label: string, availablePx: number, fontSize: number): string {
const avgCharW = fontSize * 0.58;
const maxChars = Math.floor(availablePx / avgCharW);
if (label.length <= maxChars) return label;
return label.slice(0, Math.max(maxChars - 1, 1)) + "\u2026";
}
/** Return the bounding-rect corner radius for a given flowchart shape. */
/**
* Render an ISO 5807 flowchart shape as an SVG element.
@@ -240,6 +245,13 @@ export default function DraftGraph({ draft, originalDraft, onNodeClick, flowchar
const runBtnRef = useRef<HTMLButtonElement>(null);
const [containerW, setContainerW] = useState(484);
const chrome = useDraftChromeColors();
const triggerColors = useTriggerColors();
// Extract trigger nodes from runtimeNodes
const triggerNodes = useMemo(
() => (runtimeNodes ?? []).filter(n => n.nodeType === "trigger"),
[runtimeNodes],
);
// ── Entrance animation — fires when originalDraft becomes a new non-null value ──
// This covers: agent loaded, build finished, queen modifies flowchart.
@@ -709,12 +721,17 @@ export default function DraftGraph({ draft, originalDraft, onNodeClick, flowchar
return { nodeYOffset: offsets, totalExtraY: totalExtra, groupBoxMaxX: maxGroupX };
}, [nodes, maxLayer, flowchartMap, idxMap, layers, nodeXPositions, nodeW]);
// When triggers are present, push the entire draft graph down to make room
const triggerOffsetY = triggerNodes.length > 0
? TRIGGER_H + TRIGGER_TEXT_Y + TRIGGER_TEXT_STEP + TRIGGER_CLEARANCE
: 0;
const nodePos = (i: number) => ({
x: nodeXPositions[i],
y: TOP_Y + layers[i] * (NODE_H + GAP_Y) + nodeYOffset[i],
y: TOP_Y + triggerOffsetY + layers[i] * (NODE_H + GAP_Y) + nodeYOffset[i],
});
const svgHeight = TOP_Y + (maxLayer + 1) * NODE_H + maxLayer * GAP_Y + totalExtraY + 16;
const svgHeight = TOP_Y + triggerOffsetY + (maxLayer + 1) * NODE_H + maxLayer * GAP_Y + totalExtraY + 16;
// Compute group areas for runtime node boundaries on the draft
const groupAreas = useMemo(() => {
@@ -847,6 +864,131 @@ export default function DraftGraph({ draft, originalDraft, onNodeClick, flowchar
pending: "",
};
// ── Trigger node rendering ──
const triggerW = Math.min(nodeW, 180);
// Shared trigger pill X position (used by both node and edge renderers)
const triggerPillX = (idx: number) => {
const totalW = triggerNodes.length * triggerW + (triggerNodes.length - 1) * TRIGGER_PILL_GAP_X;
return (containerW - totalW) / 2 + idx * (triggerW + TRIGGER_PILL_GAP_X);
};
const renderTriggerNode = (node: GraphNode, triggerIdx: number) => {
const icon = TRIGGER_ICONS[node.triggerType || ""] || "\u26A1";
const isActive = node.status === "running" || node.status === "complete";
const colors = isActive ? ACTIVE_TRIGGER_COLORS : triggerColors;
const nextFireIn = node.triggerConfig?.next_fire_in as number | undefined;
const tx = triggerPillX(triggerIdx);
const ty = TOP_Y;
const fontSize = triggerW < 140 ? 10.5 : 11.5;
const displayLabel = truncateLabel(node.label, triggerW - TRIGGER_LABEL_INSET, fontSize);
// Countdown
let countdownLabel: string | null = null;
if (isActive && nextFireIn != null && nextFireIn > 0) {
const h = Math.floor(nextFireIn / 3600);
const m = Math.floor((nextFireIn % 3600) / 60);
const s = Math.floor(nextFireIn % 60);
countdownLabel = h > 0
? `next in ${h}h ${String(m).padStart(2, "0")}m`
: `next in ${m}m ${String(s).padStart(2, "0")}s`;
}
const statusLabel = isActive ? "active" : "inactive";
const statusColor = isActive ? "hsl(140,40%,50%)" : "hsl(210,20%,40%)";
return (
<g
key={node.id}
onClick={() => onRuntimeNodeClick?.(node.id)}
style={{ cursor: onRuntimeNodeClick ? "pointer" : "default" }}
>
<title>{node.label}</title>
{/* Pill-shaped background */}
<rect
x={tx} y={ty}
width={triggerW} height={TRIGGER_H}
rx={TRIGGER_H / 2}
fill={colors.bg}
stroke={colors.border}
strokeWidth={isActive ? 1.5 : 1}
strokeDasharray={isActive ? undefined : "4 2"}
/>
{/* Icon */}
<text
x={tx + TRIGGER_ICON_X} y={ty + TRIGGER_H / 2}
fill={colors.icon} fontSize={13}
textAnchor="middle" dominantBaseline="middle"
>
{icon}
</text>
{/* Label */}
<text
x={tx + TRIGGER_LABEL_X} y={ty + TRIGGER_H / 2}
fill={colors.text}
fontSize={fontSize}
fontWeight={500}
dominantBaseline="middle"
letterSpacing="0.01em"
>
{displayLabel}
</text>
{/* Countdown */}
{countdownLabel && (
<text
x={tx + triggerW / 2} y={ty + TRIGGER_H + TRIGGER_TEXT_Y}
fill={colors.text} fontSize={9}
textAnchor="middle" fontStyle="italic" opacity={0.7}
>
{countdownLabel}
</text>
)}
{/* Status */}
<text
x={tx + triggerW / 2} y={ty + TRIGGER_H + (countdownLabel ? TRIGGER_TEXT_Y + TRIGGER_TEXT_STEP : TRIGGER_TEXT_Y)}
fill={statusColor} fontSize={8.5}
textAnchor="middle" opacity={0.8}
>
{statusLabel}
</text>
</g>
);
};
const renderTriggerEdge = (triggerIdx: number) => {
if (nodes.length === 0) return null;
const triggerNode = triggerNodes[triggerIdx];
const runtimeTargetId = triggerNode?.next?.[0];
const targetDraftId = runtimeTargetId
? flowchartMap?.[runtimeTargetId]?.[0] ?? runtimeTargetId
: draft?.entry_node;
const targetIdx = targetDraftId ? idxMap[targetDraftId] ?? 0 : 0;
const targetPos = nodePos(targetIdx);
const targetX = targetPos.x + nodeW / 2;
const targetY = targetPos.y;
const tx = triggerPillX(triggerIdx) + triggerW / 2;
const ty = TOP_Y + TRIGGER_H + TRIGGER_TEXT_Y + TRIGGER_TEXT_STEP + 4;
const midY = (ty + targetY) / 2;
const d = Math.abs(tx - targetX) < 2
? `M ${tx} ${ty} L ${targetX} ${targetY}`
: `M ${tx} ${ty} L ${tx} ${midY} L ${targetX} ${midY} L ${targetX} ${targetY}`;
return (
<g key={`trigger-edge-${triggerIdx}`}>
<path d={d} fill="none" stroke={chrome.edge} strokeWidth={1.2} strokeDasharray="4 3" />
<polygon
points={`${targetX - 3},${targetY - 5} ${targetX + 3},${targetY - 5} ${targetX},${targetY - 1}`}
fill={chrome.edgeArrow}
/>
</g>
);
};
const renderNode = (node: DraftNode, i: number) => {
const pos = nodePos(i);
const isHovered = hoveredNode === node.id;
@@ -994,7 +1136,7 @@ export default function DraftGraph({ draft, originalDraft, onNodeClick, flowchar
>
<svg
width="100%"
viewBox={`0 0 ${Math.max((maxContentRight ?? 0), groupBoxMaxX) + (backEdgeOverflow ?? 0)} ${totalH}`}
viewBox={`0 0 ${Math.max((maxContentRight ?? 0), groupBoxMaxX, triggerNodes.length > 0 ? triggerPillX(triggerNodes.length - 1) + triggerW : 0) + (backEdgeOverflow ?? 0)} ${totalH}`}
preserveAspectRatio="xMidYMin meet"
className="select-none"
style={{
@@ -1078,6 +1220,11 @@ export default function DraftGraph({ draft, originalDraft, onNodeClick, flowchar
);
})}
{/* Trigger edges (dashed lines from trigger pills to first draft node) */}
{triggerNodes.map((_, i) => renderTriggerEdge(i))}
{/* Trigger pill nodes */}
{triggerNodes.map((tn, i) => renderTriggerNode(tn, i))}
{forwardEdges.map((e, i) => renderEdge(e, i))}
{backEdges.map((e, i) => renderBackEdge(e, i))}
{nodes.map((n, i) => renderNode(n, i))}
+88
@@ -0,0 +1,88 @@
import { useEffect, useState } from "react";
// ── Shared graph utilities ──
// Common helpers used by both AgentGraph and DraftGraph.
// AgentGraph still has its own copies for now (separate cleanup PR).
/** Read a CSS custom property value (space-separated HSL components). */
export function cssVar(name: string): string {
return getComputedStyle(document.documentElement).getPropertyValue(name).trim();
}
/** Truncate label to fit within `availablePx` at the given fontSize. */
export function truncateLabel(label: string, availablePx: number, fontSize: number): string {
const avgCharW = fontSize * 0.58;
const maxChars = Math.floor(availablePx / avgCharW);
if (label.length <= maxChars) return label;
return label.slice(0, Math.max(maxChars - 1, 1)) + "\u2026";
}
// ── Trigger styling ──
export type TriggerColorSet = { bg: string; border: string; text: string; icon: string };
export function buildTriggerColors(): TriggerColorSet {
const bg = cssVar("--trigger-bg") || "210 25% 14%";
const border = cssVar("--trigger-border") || "210 30% 30%";
const text = cssVar("--trigger-text") || "210 30% 65%";
const icon = cssVar("--trigger-icon") || "210 40% 55%";
return {
bg: `hsl(${bg})`,
border: `hsl(${border})`,
text: `hsl(${text})`,
icon: `hsl(${icon})`,
};
}
export const ACTIVE_TRIGGER_COLORS: TriggerColorSet = {
bg: "hsl(210,30%,18%)",
border: "hsl(210,50%,50%)",
text: "hsl(210,40%,75%)",
icon: "hsl(210,60%,65%)",
};
export const TRIGGER_ICONS: Record<string, string> = {
webhook: "\u26A1", // lightning bolt
timer: "\u23F1", // stopwatch
api: "\u2192", // right arrow
event: "\u223F", // sine wave
};
/** Format a cron expression into a human-readable schedule label. */
export function cronToLabel(cron: string): string {
const parts = cron.trim().split(/\s+/);
if (parts.length !== 5) return cron;
const [min, hour, dom, mon, dow] = parts;
// */N * * * * -> "Every Nm"
if (min.startsWith("*/") && hour === "*" && dom === "*" && mon === "*" && dow === "*") {
return `Every ${min.slice(2)}m`;
}
// 0 */N * * * -> "Every Nh"
if (min === "0" && hour.startsWith("*/") && dom === "*" && mon === "*" && dow === "*") {
return `Every ${hour.slice(2)}h`;
}
// M H * * * -> "Daily at H(:MM)AM/PM" (any fixed minute and hour)
if (dom === "*" && mon === "*" && dow === "*" && !min.includes("*") && !hour.includes("*")) {
const h = parseInt(hour, 10);
const m = parseInt(min, 10);
const suffix = h >= 12 ? "PM" : "AM";
const h12 = h % 12 || 12;
return m === 0 ? `Daily at ${h12}${suffix}` : `Daily at ${h12}:${String(m).padStart(2, "0")}${suffix}`;
}
return cron;
}
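// Examples (derived from the branches above):
//   cronToLabel("*/15 * * * *") -> "Every 15m"
//   cronToLabel("0 */2 * * *")  -> "Every 2h"
//   cronToLabel("30 9 * * *")   -> "Daily at 9:30AM"
//   cronToLabel("0 0 * * 1")    -> "0 0 * * 1" (unhandled shapes pass through)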
/** Theme-reactive hook for inactive trigger colors. */
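// Rebuilds the palette via a MutationObserver on <html> class/style changes,
// so a theme toggle re-colors trigger pills without a reload.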
export function useTriggerColors(): TriggerColorSet {
const [colors, setColors] = useState<TriggerColorSet>(buildTriggerColors);
useEffect(() => {
const rebuild = () => setColors(buildTriggerColors());
const obs = new MutationObserver(rebuild);
obs.observe(document.documentElement, { attributes: true, attributeFilter: ["class", "style"] });
return () => obs.disconnect();
}, []);
return colors;
}
+8 -1
@@ -27,7 +27,14 @@ export default function MyAgents() {
agentsApi
.discover()
.then((result) => {
setAgents(result["Your Agents"] || []);
const entries = result["Your Agents"] || [];
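// Most-recently-active first; agents without last_active sink to the bottom.
// Plain string compare is assumed safe here because last_active is an
// ISO-8601 timestamp, which sorts lexicographically in chronological order.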
entries.sort((a, b) => {
if (!a.last_active && !b.last_active) return 0;
if (!a.last_active) return 1;
if (!b.last_active) return -1;
return b.last_active.localeCompare(a.last_active);
});
setAgents(entries);
})
.catch((err) => {
setError(err.message || "Failed to load agents");
+237 -24
@@ -17,6 +17,7 @@ import { useMultiSSE } from "@/hooks/use-sse";
import type { LiveSession, AgentEvent, DiscoverEntry, NodeSpec, DraftGraph as DraftGraphData } from "@/api/types";
import { sseEventToChatMessage, formatAgentDisplayName } from "@/lib/chat-helpers";
import { topologyToGraphNodes } from "@/lib/graph-converter";
import { cronToLabel } from "@/lib/graphUtils";
import { ApiError } from "@/api/client";
const makeId = () => Math.random().toString(36).slice(2, 9);
@@ -251,6 +252,10 @@ function truncate(s: string, max: number): string {
type SessionRestoreResult = {
messages: ChatMessage[];
restoredPhase: "planning" | "building" | "staging" | "running" | null;
/** Last flowchart map from events — used to restore flowchart overlay on cold resume. */
flowchartMap: Record<string, string[]> | null;
/** Last original draft from events — used to restore flowchart overlay on cold resume. */
originalDraft: DraftGraphData | null;
};
/**
@@ -267,6 +272,8 @@ async function restoreSessionMessages(
if (events.length > 0) {
const messages: ChatMessage[] = [];
let runningPhase: ChatMessage["phase"] = undefined;
let flowchartMap: Record<string, string[]> | null = null;
let originalDraft: DraftGraphData | null = null;
for (const evt of events) {
// Track phase transitions so each message gets the phase it was created in
const p = evt.type === "queen_phase_changed" ? evt.data?.phase as string
@@ -275,6 +282,12 @@ async function restoreSessionMessages(
if (p && ["planning", "building", "staging", "running"].includes(p)) {
runningPhase = p as ChatMessage["phase"];
}
// Track last flowchart state for cold restore
if (evt.type === "flowchart_map_updated" && evt.data) {
const mapData = evt.data as { map?: Record<string, string[]>; original_draft?: DraftGraphData };
flowchartMap = mapData.map ?? null;
originalDraft = mapData.original_draft ?? null;
}
const msg = sseEventToChatMessage(evt, thread, agentDisplayName);
if (!msg) continue;
if (evt.stream_id === "queen") {
@@ -283,12 +296,12 @@ async function restoreSessionMessages(
}
messages.push(msg);
}
return { messages, restoredPhase: runningPhase ?? null };
return { messages, restoredPhase: runningPhase ?? null, flowchartMap, originalDraft };
}
} catch {
// Event log not available — session will start fresh.
}
return { messages: [], restoredPhase: null };
return { messages: [], restoredPhase: null, flowchartMap: null, originalDraft: null };
}
// --- Per-agent backend state (consolidated) ---
@@ -557,7 +570,11 @@ export default function Workspace() {
const [dismissedBanner, setDismissedBanner] = useState<string | null>(null);
const [selectedNode, setSelectedNode] = useState<GraphNode | null>(null);
const [triggerTaskDraft, setTriggerTaskDraft] = useState("");
const [triggerCronDraft, setTriggerCronDraft] = useState("");
const [triggerTaskSaving, setTriggerTaskSaving] = useState(false);
const [triggerScheduleSaving, setTriggerScheduleSaving] = useState(false);
const [triggerCronSaved, setTriggerCronSaved] = useState(false);
const [triggerTaskSaved, setTriggerTaskSaved] = useState(false);
const [newTabOpen, setNewTabOpen] = useState(false);
const newTabBtnRef = useRef<HTMLButtonElement>(null);
const [graphPanelPct, setGraphPanelPct] = useState(30);
@@ -794,6 +811,8 @@ export default function Workspace() {
}
let restoredPhase: "planning" | "building" | "staging" | "running" | null = null;
let restoredFlowchartMap: Record<string, string[]> | null = null;
let restoredOriginalDraft: DraftGraphData | null = null;
if (!liveSession) {
// Fetch conversation history from disk BEFORE creating the new session.
// SKIP if messages were already pre-populated by handleHistoryOpen.
@@ -805,9 +824,22 @@ export default function Workspace() {
const restored = await restoreSessionMessages(restoreFrom, agentType, "Queen Bee");
preRestoredMsgs.push(...restored.messages);
restoredPhase = restored.restoredPhase;
restoredFlowchartMap = restored.flowchartMap;
restoredOriginalDraft = restored.originalDraft;
} catch {
// Not available — will start fresh
}
} else if (restoreFrom && alreadyHasMessages) {
// Messages already cached in localStorage — still fetch events for
// non-message state (phase, flowchart) that isn't cached.
try {
const restored = await restoreSessionMessages(restoreFrom, agentType, "Queen Bee");
restoredPhase = restored.restoredPhase;
restoredFlowchartMap = restored.flowchartMap;
restoredOriginalDraft = restored.originalDraft;
} catch {
// Not critical — UI will still show cached messages
}
}
// Suppress the queen's intro cycle whenever we are about to restore a
@@ -830,7 +862,7 @@ export default function Workspace() {
}));
}
restoredMessageCount = preRestoredMsgs.length;
} else if (restoreFrom && activeId) {
} else if (restoreFrom && activeId && !alreadyHasMessages) {
// We had a stored session but no messages on disk — wipe stale localStorage cache
setSessionsByAgent(prev => ({
...prev,
@@ -884,6 +916,9 @@ export default function Workspace() {
queenReady: true,
queenPhase: qPhase,
queenBuilding: qPhase === "building",
// Restore flowchart overlay from persisted events
...(restoredFlowchartMap ? { flowchartMap: restoredFlowchartMap } : {}),
...(restoredOriginalDraft ? { originalDraft: restoredOriginalDraft, draftGraph: null } : {}),
});
} catch (err: unknown) {
const msg = err instanceof Error ? err.message : String(err);
@@ -958,6 +993,8 @@ export default function Workspace() {
// Track the last queen phase seen in the event log for cold restore
let restoredPhase: "planning" | "building" | "staging" | "running" | null = null;
let restoredFlowchartMap: Record<string, string[]> | null = null;
let restoredOriginalDraft: DraftGraphData | null = null;
if (!liveSession) {
// Reconnect failed — clear stale cached messages from localStorage restore.
@@ -985,6 +1022,19 @@ export default function Workspace() {
const restored = await restoreSessionMessages(coldRestoreId, agentType, displayNameTemp);
preQueenMsgs = restored.messages;
restoredPhase = restored.restoredPhase;
restoredFlowchartMap = restored.flowchartMap;
restoredOriginalDraft = restored.originalDraft;
} else if (coldRestoreId && alreadyHasMessages) {
// Messages already cached — still fetch events for non-message state (phase, flowchart)
try {
const displayNameTemp = formatAgentDisplayName(agentPath);
const restored = await restoreSessionMessages(coldRestoreId, agentType, displayNameTemp);
restoredPhase = restored.restoredPhase;
restoredFlowchartMap = restored.flowchartMap;
restoredOriginalDraft = restored.originalDraft;
} catch {
// Not critical — UI will still show cached messages
}
}
// Suppress intro whenever we are about to restore a previous conversation.
@@ -1065,6 +1115,9 @@ export default function Workspace() {
displayName,
queenPhase: initialPhase,
queenBuilding: initialPhase === "building",
// Restore flowchart overlay from persisted events
...(restoredFlowchartMap ? { flowchartMap: restoredFlowchartMap } : {}),
...(restoredOriginalDraft ? { originalDraft: restoredOriginalDraft, draftGraph: null } : {}),
});
// Update the session label + backendSessionId. Also set historySourceId
@@ -1102,6 +1155,11 @@ export default function Workspace() {
if (historyId && !coldRestoreId) {
const restored = await restoreSessionMessages(historyId, agentType, displayName);
restoredMsgs.push(...restored.messages);
// Use flowchart from event log if not already set
if (restored.flowchartMap && !restoredFlowchartMap) {
restoredFlowchartMap = restored.flowchartMap;
restoredOriginalDraft = restored.originalDraft;
}
// Check worker status (needed for isWorkerRunning flag)
try {
@@ -1144,6 +1202,9 @@ export default function Workspace() {
loading: false,
queenReady: !!(isResumedSession || hasRestoredContent),
...(isWorkerRunning ? { workerRunState: "running" } : {}),
// Restore flowchart overlay from persisted events
...(restoredFlowchartMap ? { flowchartMap: restoredFlowchartMap } : {}),
...(restoredOriginalDraft ? { originalDraft: restoredOriginalDraft, draftGraph: null } : {}),
});
} catch (err: unknown) {
const msg = err instanceof Error ? err.message : String(err);
@@ -1260,12 +1321,28 @@ export default function Workspace() {
const fireMap = new Map<string, number>();
const taskMap = new Map<string, string>();
const labelMap = new Map<string, string>();
const targetMap = new Map<string, string>();
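// Index trigger endpoints by their synthetic node id (`__trigger_<id>`) so
// the graph update below can patch countdowns, tasks, labels, and edge
// targets in a single pass.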
for (const ep of triggerEps) {
const nodeId = `__trigger_${ep.id}`;
if (ep.next_fire_in != null) {
fireMap.set(`__trigger_${ep.id}`, ep.next_fire_in);
fireMap.set(nodeId, ep.next_fire_in);
}
if (ep.task != null) {
taskMap.set(`__trigger_${ep.id}`, ep.task);
taskMap.set(nodeId, ep.task);
}
const cron = ep.trigger_config?.cron as string | undefined;
const interval = ep.trigger_config?.interval_minutes as number | undefined;
const epLabel = cron
? cronToLabel(cron)
: interval
? `Every ${interval >= 60 ? `${interval / 60}h` : `${interval}m`}`
: ep.name || undefined;
if (epLabel) {
labelMap.set(nodeId, epLabel);
}
if (ep.entry_node) {
targetMap.set(nodeId, ep.entry_node);
}
}
@@ -1274,14 +1351,18 @@ export default function Workspace() {
if (!ss?.length) return prev;
const existingIds = new Set(ss[0].graphNodes.map(n => n.id));
// Update existing trigger nodes
// Update existing trigger nodes (countdown, task, label, target)
let updated = ss[0].graphNodes.map((n) => {
if (n.nodeType !== "trigger") return n;
const nfi = fireMap.get(n.id);
const task = taskMap.get(n.id);
if (nfi == null && task == null) return n;
const label = labelMap.get(n.id);
const target = targetMap.get(n.id);
if (nfi == null && task == null && !label && !target) return n;
return {
...n,
...(label && label !== n.label ? { label } : {}),
...(target ? { next: [target] } : {}),
triggerConfig: {
...n.triggerConfig,
...(nfi != null ? { next_fire_in: nfi } : {}),
@@ -1291,14 +1372,15 @@ export default function Workspace() {
});
// Discover new triggers not yet in the graph
const entryNode = ss[0].graphNodes.find(n => n.nodeType !== "trigger")?.id;
const fallbackEntry = ss[0].graphNodes.find(n => n.nodeType !== "trigger")?.id;
const newNodes: GraphNode[] = [];
for (const ep of triggerEps) {
const nodeId = `__trigger_${ep.id}`;
if (existingIds.has(nodeId)) continue;
const target = ep.entry_node || fallbackEntry;
newNodes.push({
id: nodeId,
label: ep.name || ep.id,
label: labelMap.get(nodeId) || ep.name || ep.id,
status: "pending",
nodeType: "trigger",
triggerType: ep.trigger_type,
@@ -1307,7 +1389,7 @@ export default function Workspace() {
...(ep.next_fire_in != null ? { next_fire_in: ep.next_fire_in } : {}),
...(ep.task ? { task: ep.task } : {}),
},
...(entryNode ? { next: [entryNode] } : {}),
...(target ? { next: [target] } : {}),
});
}
if (newNodes.length > 0) {
@@ -2237,10 +2319,18 @@ export default function Workspace() {
// Synthesize new trigger node at the front of the graph
const triggerType = (event.data?.trigger_type as string) || "timer";
const triggerConfig = (event.data?.trigger_config as Record<string, unknown>) || {};
const entryNode = s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
const entryNode = (event.data?.entry_node as string) || s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
const triggerName = (event.data?.name as string) || triggerId;
const _cron = triggerConfig.cron as string | undefined;
const _interval = triggerConfig.interval_minutes as number | undefined;
const computedLabel = _cron
? cronToLabel(_cron)
: _interval
? `Every ${_interval >= 60 ? `${_interval / 60}h` : `${_interval}m`}`
: triggerName;
const newNode: GraphNode = {
id: nodeId,
label: triggerId,
label: computedLabel,
status: "running",
nodeType: "trigger",
triggerType,
@@ -2305,10 +2395,18 @@ export default function Workspace() {
if (s.graphNodes.some(n => n.id === nodeId)) return s;
const triggerType = (event.data?.trigger_type as string) || "timer";
const triggerConfig = (event.data?.trigger_config as Record<string, unknown>) || {};
const entryNode = s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
const entryNode = (event.data?.entry_node as string) || s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
const triggerName = (event.data?.name as string) || triggerId;
const _cron2 = triggerConfig.cron as string | undefined;
const _interval2 = triggerConfig.interval_minutes as number | undefined;
const computedLabel2 = _cron2
? cronToLabel(_cron2)
: _interval2
? `Every ${_interval2 >= 60 ? `${_interval2 / 60}h` : `${_interval2}m`}`
: triggerName;
const newNode: GraphNode = {
id: nodeId,
label: triggerId,
label: computedLabel2,
status: "pending",
nodeType: "trigger",
triggerType,
@@ -2323,6 +2421,43 @@ export default function Workspace() {
break;
}
case "trigger_updated": {
const triggerId = event.data?.trigger_id as string;
if (triggerId) {
const nodeId = `__trigger_${triggerId}`;
const triggerConfig = (event.data?.trigger_config as Record<string, unknown>) || {};
const cron = triggerConfig.cron as string | undefined;
const interval = triggerConfig.interval_minutes as number | undefined;
const newLabel = cron
? cronToLabel(cron)
: interval
? `Every ${interval >= 60 ? `${interval / 60}h` : `${interval}m`}`
: undefined;
setSessionsByAgent(prev => {
const sessions = prev[agentType] || [];
const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
return {
...prev,
[agentType]: sessions.map(s => {
if (s.id !== activeId) return s;
return {
...s,
graphNodes: s.graphNodes.map(n => {
if (n.id !== nodeId) return n;
return {
...n,
...(newLabel ? { label: newLabel } : {}),
triggerConfig: { ...n.triggerConfig, ...triggerConfig },
};
}),
};
}),
};
});
}
break;
}
case "trigger_removed": {
const triggerId = event.data?.trigger_id as string;
if (triggerId) {
@@ -2376,14 +2511,43 @@ export default function Workspace() {
const liveSelectedNode = selectedNode && currentGraph.nodes.find(n => n.id === selectedNode.id);
const resolvedSelectedNode = liveSelectedNode || selectedNode;
// Sync trigger task draft when selected trigger node changes
// Sync trigger drafts when selected trigger node changes
useEffect(() => {
if (resolvedSelectedNode?.nodeType === "trigger") {
const tc = resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined;
setTriggerTaskDraft((tc?.task as string) || "");
setTriggerCronDraft((tc?.cron as string) || "");
}
}, [resolvedSelectedNode?.id]);
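// Optimistically patch a trigger node in local graph state after a successful
// API update: merge trigger_config, overwrite task when given, and relabel.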
const patchTriggerNode = useCallback((agentType: string, triggerNodeId: string, patch: { task?: string; trigger_config?: Record<string, unknown>; label?: string }) => {
setSessionsByAgent(prev => {
const sessions = prev[agentType] || [];
const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
return {
...prev,
[agentType]: sessions.map(s => {
if (s.id !== activeId) return s;
return {
...s,
graphNodes: s.graphNodes.map(n => {
if (n.id !== triggerNodeId) return n;
return {
...n,
...(patch.label !== undefined ? { label: patch.label } : {}),
triggerConfig: {
...n.triggerConfig,
...(patch.trigger_config || {}),
...(patch.task !== undefined ? { task: patch.task } : {}),
},
};
}),
};
}),
};
});
}, []);
// Build a flat list of all agent-type tabs for the tab bar
const agentTabs = Object.entries(sessionsByAgent)
.filter(([, sessions]) => sessions.length > 0)
@@ -3052,18 +3216,64 @@ export default function Workspace() {
const interval = tc?.interval_minutes as number | undefined;
const eventTypes = tc?.event_types as string[] | undefined;
const scheduleLabel = cron
? `cron: ${cron}`
? cronToLabel(cron)
: interval
? `Every ${interval >= 60 ? `${interval / 60}h` : `${interval}m`}`
: eventTypes?.length
? eventTypes.join(", ")
: null;
return scheduleLabel ? (
const canEditCron = resolvedSelectedNode.triggerType === "timer";
const cronChanged = canEditCron && triggerCronDraft.trim() !== (cron || "");
return scheduleLabel || canEditCron ? (
<div>
<p className="text-[10px] font-medium text-muted-foreground uppercase tracking-wider mb-1.5">Schedule</p>
<p className="text-xs text-foreground/80 font-mono bg-muted/30 rounded-lg px-3 py-2 border border-border/20">
{scheduleLabel}
</p>
{scheduleLabel && (
<p className="text-xs text-foreground/80 font-mono bg-muted/30 rounded-lg px-3 py-2 border border-border/20">
{scheduleLabel}
</p>
)}
{canEditCron && (
<>
<input
value={triggerCronDraft}
onChange={(e) => setTriggerCronDraft(e.target.value)}
placeholder="0 5 * * *"
className="mt-1.5 w-full text-xs text-foreground/80 bg-muted/30 rounded-lg px-3 py-2 border border-border/20 font-mono focus:outline-none focus:border-primary/40"
/>
<p className="text-[10px] text-muted-foreground/60 mt-1">
Edit the cron expression for this timer trigger.
</p>
{(cronChanged || triggerCronSaved) && (
<button
disabled={triggerScheduleSaving || !cronChanged}
onClick={async () => {
const sessionId = activeAgentState?.sessionId;
const triggerId = resolvedSelectedNode.id.replace("__trigger_", "");
const nextCron = triggerCronDraft.trim();
if (!sessionId || !nextCron) return;
const nextTriggerConfig: Record<string, unknown> = { cron: nextCron };
setTriggerScheduleSaving(true);
try {
await sessionsApi.updateTrigger(sessionId, triggerId, {
trigger_config: nextTriggerConfig,
});
patchTriggerNode(activeWorker, resolvedSelectedNode.id, {
trigger_config: nextTriggerConfig,
label: cronToLabel(nextCron),
});
setTriggerCronSaved(true);
setTimeout(() => setTriggerCronSaved(false), 2000);
} finally {
setTriggerScheduleSaving(false);
}
}}
className="mt-1.5 w-full text-[11px] px-3 py-1.5 rounded-lg border border-primary/30 text-primary hover:bg-primary/10 transition-colors disabled:opacity-50"
>
{triggerScheduleSaving ? "Saving..." : triggerCronSaved ? "Saved" : "Save Cron"}
</button>
)}
</>
)}
</div>
) : null;
})()}
@@ -3090,24 +3300,27 @@ export default function Workspace() {
{(() => {
const currentTask = (resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined)?.task as string || "";
const hasChanged = triggerTaskDraft !== currentTask;
if (!hasChanged) return null;
if (!hasChanged && !triggerTaskSaved) return null;
return (
<button
disabled={triggerTaskSaving}
disabled={triggerTaskSaving || !hasChanged}
onClick={async () => {
const sessionId = activeAgentState?.sessionId;
const triggerId = resolvedSelectedNode.id.replace("__trigger_", "");
if (!sessionId) return;
setTriggerTaskSaving(true);
try {
await sessionsApi.updateTriggerTask(sessionId, triggerId, triggerTaskDraft);
await sessionsApi.updateTrigger(sessionId, triggerId, { task: triggerTaskDraft });
patchTriggerNode(activeWorker, resolvedSelectedNode.id, { task: triggerTaskDraft });
setTriggerTaskSaved(true);
setTimeout(() => setTriggerTaskSaved(false), 2000);
} finally {
setTriggerTaskSaving(false);
}
}}
className="mt-1.5 w-full text-[11px] px-3 py-1.5 rounded-lg border border-primary/30 text-primary hover:bg-primary/10 transition-colors disabled:opacity-50"
>
{triggerTaskSaving ? "Saving..." : "Save Task"}
{triggerTaskSaving ? "Saving..." : triggerTaskSaved ? "Saved" : "Save Task"}
</button>
);
})()}
+209
@@ -0,0 +1,209 @@
import importlib.util
from pathlib import Path
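# The checker script lives under scripts/ rather than the package tree, so it
# is loaded by file path instead of a normal import.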
def _load_check_llm_key_module():
module_path = Path(__file__).resolve().parents[2] / "scripts" / "check_llm_key.py"
spec = importlib.util.spec_from_file_location("check_llm_key_script", module_path)
module = importlib.util.module_from_spec(spec)
assert spec.loader is not None
spec.loader.exec_module(module)
return module
def _run_openrouter_check(monkeypatch, status_code: int):
module = _load_check_llm_key_module()
calls = {}
class FakeResponse:
def __init__(self, code):
self.status_code = code
class FakeClient:
def __init__(self, timeout):
calls["timeout"] = timeout
def __enter__(self):
return self
def __exit__(self, exc_type, exc, tb):
return False
def get(self, endpoint, headers):
calls["endpoint"] = endpoint
calls["headers"] = headers
return FakeResponse(status_code)
monkeypatch.setattr(module.httpx, "Client", FakeClient)
result = module.check_openrouter("test-key")
return result, calls
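# Variant of the helper above that also stubs the response JSON body, so the
# model-lookup endpoint (and its error payloads) can be exercised.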
def _run_openrouter_model_check(
monkeypatch,
status_code: int,
payload: dict | None = None,
model: str = "openai/gpt-4o-mini",
):
module = _load_check_llm_key_module()
calls = {}
class FakeResponse:
def __init__(self, code):
self.status_code = code
self._payload = payload
self.text = ""
def json(self):
if self._payload is None:
raise ValueError("no json")
return self._payload
class FakeClient:
def __init__(self, timeout):
calls["timeout"] = timeout
def __enter__(self):
return self
def __exit__(self, exc_type, exc, tb):
return False
def get(self, endpoint, headers):
calls["endpoint"] = endpoint
calls["headers"] = headers
return FakeResponse(status_code)
monkeypatch.setattr(module.httpx, "Client", FakeClient)
result = module.check_openrouter_model("test-key", model)
return result, calls
def test_check_openrouter_200(monkeypatch):
result, calls = _run_openrouter_check(monkeypatch, 200)
assert result == {"valid": True, "message": "OpenRouter API key valid"}
assert calls["endpoint"] == "https://openrouter.ai/api/v1/models"
assert calls["headers"] == {"Authorization": "Bearer test-key"}
def test_check_openrouter_401(monkeypatch):
result, _ = _run_openrouter_check(monkeypatch, 401)
assert result == {"valid": False, "message": "Invalid OpenRouter API key"}
def test_check_openrouter_403(monkeypatch):
result, _ = _run_openrouter_check(monkeypatch, 403)
assert result == {"valid": False, "message": "OpenRouter API key lacks permissions"}
def test_check_openrouter_429(monkeypatch):
result, _ = _run_openrouter_check(monkeypatch, 429)
assert result == {"valid": True, "message": "OpenRouter API key valid"}
def test_check_openrouter_model_200(monkeypatch):
result, calls = _run_openrouter_model_check(
monkeypatch,
200,
{
"data": [
{
"id": "openai/gpt-4o-mini",
"canonical_slug": "openai/gpt-4o-mini",
}
]
},
)
assert result == {
"valid": True,
"message": "OpenRouter model is available: openai/gpt-4o-mini",
"model": "openai/gpt-4o-mini",
}
assert calls["endpoint"] == "https://openrouter.ai/api/v1/models/user"
assert calls["headers"] == {"Authorization": "Bearer test-key"}
def test_check_openrouter_model_200_matches_canonical_slug(monkeypatch):
result, _ = _run_openrouter_model_check(
monkeypatch,
200,
{
"data": [
{
"id": "mistralai/mistral-small-4",
"canonical_slug": "mistralai/mistral-small-2603",
}
]
},
model="mistralai/mistral-small-2603",
)
assert result == {
"valid": True,
"message": "OpenRouter model is available: mistralai/mistral-small-2603",
"model": "mistralai/mistral-small-2603",
}
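# \u200b is a zero-width space and \u2011 a non-breaking hyphen: typical
# artifacts of pasting model ids from rendered docs, which the checker is
# expected to strip along with the "openrouter/" prefix.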
def test_check_openrouter_model_200_sanitizes_pasted_unicode(monkeypatch):
result, _ = _run_openrouter_model_check(
monkeypatch,
200,
{
"data": [
{
"id": "z-ai/glm-5-turbo",
"canonical_slug": "z-ai/glm-5-turbo",
}
]
},
model="openrouter/z-ai\u200b/glm\u20115\u2011turbo",
)
assert result == {
"valid": True,
"message": "OpenRouter model is available: z-ai/glm-5-turbo",
"model": "z-ai/glm-5-turbo",
}
def test_check_openrouter_model_200_not_found_with_suggestions(monkeypatch):
result, _ = _run_openrouter_model_check(
monkeypatch,
200,
{
"data": [
{"id": "z-ai/glm-5-turbo"},
{"id": "z-ai/glm-4.6v"},
]
},
model="z-ai/glm-5-turb",
)
assert result == {
"valid": False,
"message": (
"OpenRouter model is not available for this key/settings: z-ai/glm-5-turb. "
"Closest matches: z-ai/glm-5-turbo"
),
}
def test_check_openrouter_model_404_with_error_message(monkeypatch):
result, _ = _run_openrouter_model_check(
monkeypatch,
404,
{"error": {"message": "No endpoints available for this model"}},
)
assert result == {
"valid": False,
"message": (
"OpenRouter model is not available for this key/settings: openai/gpt-4o-mini. "
"No endpoints available for this model"
),
}
def test_check_openrouter_model_429(monkeypatch):
result, _ = _run_openrouter_model_check(monkeypatch, 429)
assert result == {
"valid": True,
"message": "OpenRouter model check rate-limited; assuming model is reachable",
}
+45 -1
@@ -2,7 +2,7 @@
import logging
from framework.config import get_hive_config
from framework.config import get_api_base, get_hive_config, get_preferred_model
class TestGetHiveConfig:
@@ -21,3 +21,47 @@ class TestGetHiveConfig:
assert result == {}
assert "Failed to load Hive config" in caplog.text
assert str(config_file) in caplog.text
class TestOpenRouterConfig:
"""OpenRouter config composition and fallback behavior."""
def test_get_preferred_model_for_openrouter(self, tmp_path, monkeypatch):
config_file = tmp_path / "configuration.json"
config_file.write_text(
'{"llm":{"provider":"openrouter","model":"x-ai/grok-4.20-beta"}}',
encoding="utf-8",
)
monkeypatch.setattr("framework.config.HIVE_CONFIG_FILE", config_file)
assert get_preferred_model() == "openrouter/x-ai/grok-4.20-beta"
def test_get_preferred_model_normalizes_openrouter_prefixed_model(self, tmp_path, monkeypatch):
config_file = tmp_path / "configuration.json"
config_file.write_text(
'{"llm":{"provider":"openrouter","model":"openrouter/x-ai/grok-4.20-beta"}}',
encoding="utf-8",
)
monkeypatch.setattr("framework.config.HIVE_CONFIG_FILE", config_file)
assert get_preferred_model() == "openrouter/x-ai/grok-4.20-beta"
def test_get_api_base_falls_back_to_openrouter_default(self, tmp_path, monkeypatch):
config_file = tmp_path / "configuration.json"
config_file.write_text(
'{"llm":{"provider":"openrouter","model":"x-ai/grok-4.20-beta"}}',
encoding="utf-8",
)
monkeypatch.setattr("framework.config.HIVE_CONFIG_FILE", config_file)
assert get_api_base() == "https://openrouter.ai/api/v1"
def test_get_api_base_keeps_explicit_openrouter_api_base(self, tmp_path, monkeypatch):
config_file = tmp_path / "configuration.json"
config_file.write_text(
'{"llm":{"provider":"openrouter","model":"x-ai/grok-4.20-beta","api_base":"https://proxy.example/v1"}}',
encoding="utf-8",
)
monkeypatch.setattr("framework.config.HIVE_CONFIG_FILE", config_file)
assert get_api_base() == "https://proxy.example/v1"
+70
@@ -0,0 +1,70 @@
import os
import sys
from types import ModuleType, SimpleNamespace
from framework.credentials import key_storage
from framework.credentials.validation import ensure_credential_key_env
def _install_fake_aden_modules(monkeypatch, check_fn, credential_specs):
shell_config_module = ModuleType("aden_tools.credentials.shell_config")
shell_config_module.check_env_var_in_shell_config = check_fn
credentials_module = ModuleType("aden_tools.credentials")
credentials_module.CREDENTIAL_SPECS = credential_specs
monkeypatch.setitem(sys.modules, "aden_tools.credentials.shell_config", shell_config_module)
monkeypatch.setitem(sys.modules, "aden_tools.credentials", credentials_module)
def test_bootstrap_loads_configured_llm_env_var_from_shell_config(monkeypatch):
monkeypatch.setattr(key_storage, "load_credential_key", lambda: None)
monkeypatch.setattr(key_storage, "load_aden_api_key", lambda: None)
monkeypatch.setattr(
"framework.config.get_hive_config",
lambda: {"llm": {"api_key_env_var": "OPENROUTER_API_KEY"}},
)
monkeypatch.delenv("OPENROUTER_API_KEY", raising=False)
monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
calls = []
def check_env(var_name):
calls.append(var_name)
if var_name == "OPENROUTER_API_KEY":
return True, "or-key-123"
return False, None
_install_fake_aden_modules(
monkeypatch,
check_env,
{"anthropic": SimpleNamespace(env_var="ANTHROPIC_API_KEY")},
)
ensure_credential_key_env()
assert os.environ.get("OPENROUTER_API_KEY") == "or-key-123"
assert "OPENROUTER_API_KEY" in calls
def test_bootstrap_does_not_override_existing_configured_llm_env_var(monkeypatch):
monkeypatch.setattr(key_storage, "load_credential_key", lambda: None)
monkeypatch.setattr(key_storage, "load_aden_api_key", lambda: None)
monkeypatch.setattr(
"framework.config.get_hive_config",
lambda: {"llm": {"api_key_env_var": "OPENROUTER_API_KEY"}},
)
monkeypatch.setenv("OPENROUTER_API_KEY", "already-set")
calls = []
def check_env(var_name):
calls.append(var_name)
return True, "new-value-should-not-apply"
_install_fake_aden_modules(monkeypatch, check_env, {})
ensure_credential_key_env()
assert os.environ.get("OPENROUTER_API_KEY") == "already-set"
assert "OPENROUTER_API_KEY" not in calls
+28
@@ -1530,6 +1530,34 @@ class TestTransientErrorRetry:
await node.execute(ctx)
assert llm._call_index == 1 # only tried once
@pytest.mark.asyncio
async def test_client_facing_non_transient_error_does_not_crash(
self, runtime, node_spec, memory
):
"""Client-facing non-transient errors should wait for input, not crash on token vars."""
node_spec.output_keys = []
node_spec.client_facing = True
llm = ErrorThenSuccessLLM(
error=ValueError("bad request: blocked by policy"),
fail_count=100, # always fails
success_scenario=text_scenario("unreachable"),
)
ctx = build_ctx(runtime, node_spec, memory, llm)
node = EventLoopNode(
config=LoopConfig(
max_iterations=1,
max_stream_retries=0,
stream_retry_backoff_base=0.01,
),
)
node._await_user_input = AsyncMock(return_value=None)
result = await node.execute(ctx)
assert result.success is False
assert "Max iterations" in (result.error or "")
node._await_user_input.assert_awaited_once()
@pytest.mark.asyncio
async def test_transient_error_exhausts_retries(self, runtime, node_spec, memory):
"""Transient errors that exhaust retries should raise."""
+356 -1
@@ -19,7 +19,11 @@ from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from framework.llm.anthropic import AnthropicProvider
from framework.llm.litellm import LiteLLMProvider, _compute_retry_delay
from framework.llm.litellm import (
OPENROUTER_TOOL_COMPAT_MODEL_CACHE,
LiteLLMProvider,
_compute_retry_delay,
)
from framework.llm.provider import LLMProvider, LLMResponse, Tool
@@ -72,6 +76,20 @@ class TestLiteLLMProviderInit:
)
assert provider.api_base == "https://proxy.example/v1"
def test_init_openrouter_defaults_api_base(self):
"""OpenRouter should default to the official OpenAI-compatible endpoint."""
provider = LiteLLMProvider(model="openrouter/x-ai/grok-4.20-beta", api_key="my-key")
assert provider.api_base == "https://openrouter.ai/api/v1"
def test_init_openrouter_keeps_custom_api_base(self):
"""Explicit api_base should win over OpenRouter defaults."""
provider = LiteLLMProvider(
model="openrouter/x-ai/grok-4.20-beta",
api_key="my-key",
api_base="https://proxy.example/v1",
)
assert provider.api_base == "https://proxy.example/v1"
def test_init_ollama_no_key_needed(self):
"""Test that Ollama models don't require API key."""
with patch.dict(os.environ, {}, clear=True):
@@ -192,6 +210,34 @@ class TestToolConversion:
assert result["function"]["parameters"]["properties"]["query"]["type"] == "string"
assert result["function"]["parameters"]["required"] == ["query"]
def test_parse_tool_call_arguments_repairs_truncated_json(self):
"""Truncated JSON fragments should be repaired into valid tool inputs."""
provider = LiteLLMProvider(model="gpt-4o-mini", api_key="test-key")
parsed = provider._parse_tool_call_arguments(
(
'{"question":"What story structure should the agent use?",'
'"options":["3-act structure","Beginning-Middle-End","Random paragraph"'
),
"ask_user",
)
assert parsed == {
"question": "What story structure should the agent use?",
"options": [
"3-act structure",
"Beginning-Middle-End",
"Random paragraph",
],
}
def test_parse_tool_call_arguments_raises_when_unrepairable(self):
"""Completely invalid JSON should fail fast instead of producing _raw loops."""
provider = LiteLLMProvider(model="gpt-4o-mini", api_key="test-key")
with pytest.raises(ValueError, match="Failed to parse tool call arguments"):
provider._parse_tool_call_arguments('{"question": foo', "ask_user")
class TestAnthropicProviderBackwardCompatibility:
"""Test AnthropicProvider backward compatibility with LiteLLM backend."""
@@ -682,6 +728,315 @@ class TestMiniMaxStreamFallback:
assert not LiteLLMProvider(model="gpt-4o-mini", api_key="x")._is_minimax_model()
class TestOpenRouterToolCompatFallback:
"""OpenRouter models should fall back when native tool use is unavailable."""
def teardown_method(self):
OPENROUTER_TOOL_COMPAT_MODEL_CACHE.clear()
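# Clearing the module-level cache between tests keeps compat-mode detection
# from one test from leaking into the next.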
@pytest.mark.asyncio
@patch("litellm.acompletion")
async def test_stream_falls_back_to_json_tool_emulation(self, mock_acompletion):
"""OpenRouter tool-use 404s should emit synthetic ToolCallEvents instead of errors."""
from framework.llm.stream_events import FinishEvent, ToolCallEvent
provider = LiteLLMProvider(
model="openrouter/liquid/lfm-2.5-1.2b-thinking:free",
api_key="test-key",
)
tools = [
Tool(
name="web_search",
description="Search the web",
parameters={
"properties": {
"query": {"type": "string"},
"num_results": {"type": "integer"},
},
"required": ["query"],
},
)
]
compat_response = MagicMock()
compat_response.choices = [MagicMock()]
compat_response.choices[0].message.content = (
'{"assistant_response":"","tool_calls":['
'{"name":"web_search","arguments":'
'{"query":"Python 3.13 release notes","num_results":3}}'
"]}"
)
compat_response.choices[0].finish_reason = "stop"
compat_response.model = provider.model
compat_response.usage.prompt_tokens = 18
compat_response.usage.completion_tokens = 9
async def side_effect(*args, **kwargs):
if kwargs.get("stream"):
raise RuntimeError(
'OpenrouterException - {"error":{"message":"No endpoints found '
"that support tool use. To learn more about provider routing, "
'visit: https://openrouter.ai/docs/guides/routing/provider-selection",'
'"code":404}}'
)
return compat_response
mock_acompletion.side_effect = side_effect
events = []
async for event in provider.stream(
messages=[{"role": "user", "content": "Search for the Python 3.13 release notes."}],
system="Use tools when needed.",
tools=tools,
max_tokens=256,
):
events.append(event)
tool_calls = [event for event in events if isinstance(event, ToolCallEvent)]
assert len(tool_calls) == 1
assert tool_calls[0].tool_name == "web_search"
assert tool_calls[0].tool_input == {
"query": "Python 3.13 release notes",
"num_results": 3,
}
assert tool_calls[0].tool_use_id.startswith("openrouter_compat_")
finish_events = [event for event in events if isinstance(event, FinishEvent)]
assert len(finish_events) == 1
assert finish_events[0].stop_reason == "tool_calls"
assert finish_events[0].input_tokens == 18
assert finish_events[0].output_tokens == 9
assert mock_acompletion.call_count == 2
first_call = mock_acompletion.call_args_list[0].kwargs
assert first_call["stream"] is True
assert "tools" in first_call
second_call = mock_acompletion.call_args_list[1].kwargs
assert "tools" not in second_call
assert "Tool compatibility mode is active" in second_call["messages"][0]["content"]
assert provider.model in OPENROUTER_TOOL_COMPAT_MODEL_CACHE
@pytest.mark.asyncio
@patch("litellm.acompletion")
async def test_stream_tool_compat_parses_textual_tool_calls_and_uses_cache(
self,
mock_acompletion,
):
"""Textual tool-call markers should become ToolCallEvents and skip repeat probing."""
from framework.llm.stream_events import ToolCallEvent
provider = LiteLLMProvider(
model="openrouter/liquid/lfm-2.5-1.2b-thinking:free",
api_key="test-key",
)
tools = [
Tool(
name="ask_user_multiple",
description="Ask the user a multiple-choice question",
parameters={
"properties": {
"options": {"type": "array"},
"question": {"type": "string"},
"prompt": {"type": "string"},
},
"required": ["options", "question", "prompt"],
},
)
]
compat_response = MagicMock()
compat_response.choices = [MagicMock()]
compat_response.choices[0].message.content = (
"<|tool_call_start|>"
"[ask_user_multiple(options=['Quartet Collaborator', 'Project Advisor'], "
"question='Who are you?', prompt='Who are you?')]"
"<|tool_call_end|>"
)
compat_response.choices[0].finish_reason = "stop"
compat_response.model = provider.model
compat_response.usage.prompt_tokens = 10
compat_response.usage.completion_tokens = 5
call_state = {"count": 0}
async def side_effect(*args, **kwargs):
call_state["count"] += 1
if kwargs.get("stream"):
raise RuntimeError(
'OpenrouterException - {"error":{"message":"No endpoints found '
'that support tool use.","code":404}}'
)
return compat_response
mock_acompletion.side_effect = side_effect
first_events = []
async for event in provider.stream(
messages=[{"role": "user", "content": "Who are you?"}],
system="Use tools when needed.",
tools=tools,
max_tokens=128,
):
first_events.append(event)
tool_calls = [event for event in first_events if isinstance(event, ToolCallEvent)]
assert len(tool_calls) == 1
assert tool_calls[0].tool_name == "ask_user_multiple"
assert tool_calls[0].tool_input == {
"options": ["Quartet Collaborator", "Project Advisor"],
"question": "Who are you?",
"prompt": "Who are you?",
}
second_events = []
async for event in provider.stream(
messages=[{"role": "user", "content": "Who are you?"}],
system="Use tools when needed.",
tools=tools,
max_tokens=128,
):
second_events.append(event)
second_tool_calls = [event for event in second_events if isinstance(event, ToolCallEvent)]
assert len(second_tool_calls) == 1
assert mock_acompletion.call_count == 3
assert mock_acompletion.call_args_list[0].kwargs["stream"] is True
assert "stream" not in mock_acompletion.call_args_list[1].kwargs
assert "stream" not in mock_acompletion.call_args_list[2].kwargs
@pytest.mark.asyncio
@patch("litellm.acompletion")
async def test_stream_tool_compat_parses_plain_text_tool_call_lines(
self,
mock_acompletion,
):
"""Plain textual tool-call lines should execute as tools, not user-visible text."""
from framework.llm.stream_events import FinishEvent, TextDeltaEvent, ToolCallEvent
provider = LiteLLMProvider(
model="openrouter/liquid/lfm-2.5-1.2b-thinking:free",
api_key="test-key",
)
tools = [
Tool(
name="ask_user",
description="Ask the user a single multiple-choice question",
parameters={
"properties": {
"question": {"type": "string"},
"options": {"type": "array"},
},
"required": ["question", "options"],
},
)
]
compat_response = MagicMock()
compat_response.choices = [MagicMock()]
compat_response.choices[0].message.content = (
"Queen has been loaded. It's ready to assist with your planning needs.\n\n"
"ask_user('What would you like to do?', ['Define a new agent', "
"'Diagnose an existing agent', 'Explore tools'])"
)
compat_response.choices[0].finish_reason = "stop"
compat_response.model = provider.model
compat_response.usage.prompt_tokens = 11
compat_response.usage.completion_tokens = 7
async def side_effect(*args, **kwargs):
if kwargs.get("stream"):
raise RuntimeError(
'OpenrouterException - {"error":{"message":"No endpoints found '
'that support tool use.","code":404}}'
)
return compat_response
mock_acompletion.side_effect = side_effect
events = []
async for event in provider.stream(
messages=[{"role": "user", "content": "hello"}],
system="Use tools when needed.",
tools=tools,
max_tokens=128,
):
events.append(event)
tool_calls = [event for event in events if isinstance(event, ToolCallEvent)]
assert len(tool_calls) == 1
assert tool_calls[0].tool_name == "ask_user"
assert tool_calls[0].tool_input == {
"question": "What would you like to do?",
"options": ["Define a new agent", "Diagnose an existing agent", "Explore tools"],
}
text_events = [event for event in events if isinstance(event, TextDeltaEvent)]
assert len(text_events) == 1
assert "ask_user(" not in text_events[0].snapshot
assert text_events[0].snapshot == (
"Queen has been loaded. It's ready to assist with your planning needs."
)
finish_events = [event for event in events if isinstance(event, FinishEvent)]
assert len(finish_events) == 1
assert finish_events[0].stop_reason == "tool_calls"
@pytest.mark.asyncio
@patch("litellm.acompletion")
async def test_stream_tool_compat_treats_non_json_as_plain_text(self, mock_acompletion):
"""If fallback output is not valid JSON, preserve it as assistant text."""
from framework.llm.stream_events import FinishEvent, TextDeltaEvent, ToolCallEvent
provider = LiteLLMProvider(
model="openrouter/liquid/lfm-2.5-1.2b-thinking:free",
api_key="test-key",
)
tools = [
Tool(
name="web_search",
description="Search the web",
parameters={"properties": {"query": {"type": "string"}}, "required": ["query"]},
)
]
compat_response = MagicMock()
compat_response.choices = [MagicMock()]
compat_response.choices[0].message.content = "I can answer directly without tools."
compat_response.choices[0].finish_reason = "stop"
compat_response.model = provider.model
compat_response.usage.prompt_tokens = 12
compat_response.usage.completion_tokens = 6
async def side_effect(*args, **kwargs):
if kwargs.get("stream"):
raise RuntimeError(
'OpenrouterException - {"error":{"message":"No endpoints found '
'that support tool use.","code":404}}'
)
return compat_response
mock_acompletion.side_effect = side_effect
events = []
async for event in provider.stream(
messages=[{"role": "user", "content": "Say hello."}],
system="Be concise.",
tools=tools,
max_tokens=128,
):
events.append(event)
text_events = [event for event in events if isinstance(event, TextDeltaEvent)]
assert len(text_events) == 1
assert text_events[0].snapshot == "I can answer directly without tools."
assert not any(isinstance(event, ToolCallEvent) for event in events)
finish_events = [event for event in events if isinstance(event, FinishEvent)]
assert len(finish_events) == 1
assert finish_events[0].stop_reason == "stop"
# ---------------------------------------------------------------------------
# AgentRunner._is_local_model — parameterized tests
# ---------------------------------------------------------------------------
@@ -21,3 +21,8 @@ def test_minimax_provider_prefix_maps_to_minimax_api_key():
def test_minimax_model_name_prefix_maps_to_minimax_api_key():
runner = _runner_for_unit_test()
assert runner._get_api_key_env_var("minimax-chat") == "MINIMAX_API_KEY"
def test_openrouter_provider_prefix_maps_to_openrouter_api_key():
runner = _runner_for_unit_test()
assert runner._get_api_key_env_var("openrouter/x-ai/grok-4.20-beta") == "OPENROUTER_API_KEY"
+520
@@ -0,0 +1,520 @@
"""Tests for safe_eval — the sandboxed expression evaluator used by edge conditions.
Covers: literals, data structures, arithmetic, comparisons, boolean logic
(including short-circuit semantics), variable lookup, subscript/attribute
access, whitelisted function calls, method calls, ternary expressions,
chained comparisons, and security boundaries (private attrs, disallowed
AST nodes, disallowed function calls).
"""
import pytest
from framework.graph.safe_eval import safe_eval
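# A representative edge-condition call (context keys mirror the patterns
# exercised below):
#   safe_eval("output.get('success') == True and memory.get('score', 0) >= 0.8",
#             {"output": {...}, "memory": {...}})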
# ---------------------------------------------------------------------------
# Literals and constants
# ---------------------------------------------------------------------------
class TestLiterals:
def test_integer(self):
assert safe_eval("42") == 42
def test_negative_integer(self):
assert safe_eval("-1") == -1
def test_float(self):
assert safe_eval("3.14") == pytest.approx(3.14)
def test_string(self):
assert safe_eval("'hello'") == "hello"
def test_double_quoted_string(self):
assert safe_eval('"world"') == "world"
def test_boolean_true(self):
assert safe_eval("True") is True
def test_boolean_false(self):
assert safe_eval("False") is False
def test_none(self):
assert safe_eval("None") is None
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
class TestDataStructures:
def test_list(self):
assert safe_eval("[1, 2, 3]") == [1, 2, 3]
def test_empty_list(self):
assert safe_eval("[]") == []
def test_nested_list(self):
assert safe_eval("[[1, 2], [3, 4]]") == [[1, 2], [3, 4]]
def test_tuple(self):
assert safe_eval("(1, 2, 3)") == (1, 2, 3)
def test_dict(self):
assert safe_eval("{'a': 1, 'b': 2}") == {"a": 1, "b": 2}
def test_empty_dict(self):
assert safe_eval("{}") == {}
# ---------------------------------------------------------------------------
# Arithmetic and binary operators
# ---------------------------------------------------------------------------
class TestArithmetic:
def test_addition(self):
assert safe_eval("2 + 3") == 5
def test_subtraction(self):
assert safe_eval("10 - 4") == 6
def test_multiplication(self):
assert safe_eval("3 * 7") == 21
def test_division(self):
assert safe_eval("10 / 4") == 2.5
def test_floor_division(self):
assert safe_eval("10 // 3") == 3
def test_modulo(self):
assert safe_eval("10 % 3") == 1
def test_power(self):
assert safe_eval("2 ** 10") == 1024
def test_complex_expression(self):
assert safe_eval("(2 + 3) * 4 - 1") == 19
# ---------------------------------------------------------------------------
# Unary operators
# ---------------------------------------------------------------------------
class TestUnaryOps:
def test_negation(self):
assert safe_eval("-5") == -5
def test_positive(self):
assert safe_eval("+5") == 5
def test_not_true(self):
assert safe_eval("not True") is False
def test_not_false(self):
assert safe_eval("not False") is True
def test_bitwise_invert(self):
assert safe_eval("~0") == -1
# ---------------------------------------------------------------------------
# Comparisons
# ---------------------------------------------------------------------------
class TestComparisons:
def test_equal(self):
assert safe_eval("1 == 1") is True
def test_not_equal(self):
assert safe_eval("1 != 2") is True
def test_less_than(self):
assert safe_eval("1 < 2") is True
def test_greater_than(self):
assert safe_eval("2 > 1") is True
def test_less_equal(self):
assert safe_eval("2 <= 2") is True
def test_greater_equal(self):
assert safe_eval("3 >= 2") is True
def test_is_none(self):
assert safe_eval("x is None", {"x": None}) is True
def test_is_not_none(self):
assert safe_eval("x is not None", {"x": 42}) is True
def test_in_list(self):
assert safe_eval("'a' in x", {"x": ["a", "b", "c"]}) is True
def test_not_in_list(self):
assert safe_eval("'z' not in x", {"x": ["a", "b"]}) is True
def test_chained_comparison(self):
"""Chained comparisons like 1 < x < 10 should work."""
assert safe_eval("1 < x < 10", {"x": 5}) is True
def test_chained_comparison_false(self):
assert safe_eval("1 < x < 3", {"x": 5}) is False
def test_chained_three_way(self):
assert safe_eval("0 <= x <= 100", {"x": 50}) is True
# ---------------------------------------------------------------------------
# Boolean operators (with short-circuit semantics)
# ---------------------------------------------------------------------------
class TestBooleanOps:
def test_and_true(self):
assert safe_eval("True and True") is True
def test_and_false(self):
assert safe_eval("True and False") is False
def test_or_true(self):
assert safe_eval("False or True") is True
def test_or_false(self):
assert safe_eval("False or False") is False
def test_and_returns_last_truthy(self):
"""Python `and` returns the last value if all truthy."""
assert safe_eval("1 and 2 and 3") == 3
def test_and_returns_first_falsy(self):
"""Python `and` returns the first falsy value."""
assert safe_eval("1 and 0 and 3") == 0
def test_or_returns_first_truthy(self):
"""Python `or` returns the first truthy value."""
assert safe_eval("0 or '' or 42") == 42
def test_or_returns_last_falsy(self):
"""Python `or` returns the last value if all falsy."""
assert safe_eval("0 or '' or None") is None
def test_and_short_circuits(self):
"""and should NOT evaluate the right side if left is falsy.
This is the bug we fixed: previously this would crash with a
TypeError because all operands were eagerly evaluated.
"""
# x is None, so `x.get("key")` would crash if evaluated
assert safe_eval("x is not None and x.get('key')", {"x": None}) is False
def test_or_short_circuits(self):
"""or should NOT evaluate the right side if left is truthy."""
# x is truthy, so the crash-prone right side should never run
assert safe_eval("x or y.get('missing')", {"x": "found", "y": {}}) == "found"
def test_and_guard_pattern_truthy(self):
"""Guard pattern: check not None, then access — when value exists."""
ctx = {"x": {"key": "value"}}
assert safe_eval("x is not None and x.get('key')", ctx) == "value"
def test_multi_and(self):
assert safe_eval("True and True and True") is True
def test_multi_or(self):
assert safe_eval("False or False or True") is True
def test_mixed_and_or(self):
assert safe_eval("True or False and False") is True
# ---------------------------------------------------------------------------
# Ternary (if/else) expressions
# ---------------------------------------------------------------------------
class TestTernary:
def test_ternary_true_branch(self):
assert safe_eval("'yes' if True else 'no'") == "yes"
def test_ternary_false_branch(self):
assert safe_eval("'yes' if False else 'no'") == "no"
def test_ternary_with_context(self):
assert safe_eval("x * 2 if x > 0 else -x", {"x": 5}) == 10
def test_ternary_false_with_context(self):
assert safe_eval("x * 2 if x > 0 else -x", {"x": -3}) == 3
# ---------------------------------------------------------------------------
# Variable lookup
# ---------------------------------------------------------------------------
class TestVariables:
def test_simple_variable(self):
assert safe_eval("x", {"x": 42}) == 42
def test_string_variable(self):
assert safe_eval("name", {"name": "Alice"}) == "Alice"
def test_dict_variable(self):
ctx = {"output": {"status": "ok"}}
assert safe_eval("output", ctx) == {"status": "ok"}
def test_undefined_variable_raises(self):
with pytest.raises(NameError, match="not defined"):
safe_eval("undefined_var")
def test_multiple_variables(self):
assert safe_eval("x + y", {"x": 10, "y": 20}) == 30
# ---------------------------------------------------------------------------
# Subscript access (indexing)
# ---------------------------------------------------------------------------
class TestSubscript:
def test_dict_subscript(self):
assert safe_eval("d['key']", {"d": {"key": "value"}}) == "value"
def test_list_subscript(self):
assert safe_eval("items[0]", {"items": [10, 20, 30]}) == 10
def test_nested_subscript(self):
ctx = {"data": {"users": [{"name": "Alice"}]}}
assert safe_eval("data['users'][0]['name']", ctx) == "Alice"
def test_missing_key_raises(self):
with pytest.raises(KeyError):
safe_eval("d['missing']", {"d": {}})
# ---------------------------------------------------------------------------
# Attribute access
# ---------------------------------------------------------------------------
class TestAttributeAccess:
def test_private_attr_blocked(self):
"""Attributes starting with _ must be blocked for security."""
with pytest.raises(ValueError, match="private attribute"):
safe_eval("x.__class__", {"x": 42})
def test_dunder_blocked(self):
with pytest.raises(ValueError, match="private attribute"):
safe_eval("x.__dict__", {"x": {}})
def test_single_underscore_blocked(self):
with pytest.raises(ValueError, match="private attribute"):
safe_eval("x._internal", {"x": {}})
# ---------------------------------------------------------------------------
# Whitelisted function calls
# ---------------------------------------------------------------------------
class TestFunctionCalls:
def test_len(self):
assert safe_eval("len(x)", {"x": [1, 2, 3]}) == 3
def test_int_conversion(self):
assert safe_eval("int('42')") == 42
def test_float_conversion(self):
assert safe_eval("float('3.14')") == pytest.approx(3.14)
def test_str_conversion(self):
assert safe_eval("str(42)") == "42"
def test_bool_conversion(self):
assert safe_eval("bool(1)") is True
def test_abs(self):
assert safe_eval("abs(-5)") == 5
def test_min(self):
assert safe_eval("min(3, 1, 2)") == 1
def test_max(self):
assert safe_eval("max(3, 1, 2)") == 3
def test_sum(self):
assert safe_eval("sum(x)", {"x": [1, 2, 3]}) == 6
def test_round(self):
assert safe_eval("round(3.7)") == 4
def test_all(self):
assert safe_eval("all([True, True, True])") is True
def test_any(self):
assert safe_eval("any([False, False, True])") is True
def test_list_constructor(self):
assert safe_eval("list(x)", {"x": (1, 2, 3)}) == [1, 2, 3]
def test_dict_constructor(self):
assert safe_eval("dict(a=1, b=2)") == {"a": 1, "b": 2}
def test_tuple_constructor(self):
assert safe_eval("tuple(x)", {"x": [1, 2]}) == (1, 2)
def test_set_constructor(self):
assert safe_eval("set(x)", {"x": [1, 2, 2, 3]}) == {1, 2, 3}
# ---------------------------------------------------------------------------
# Whitelisted method calls
# ---------------------------------------------------------------------------
class TestMethodCalls:
def test_dict_get(self):
assert safe_eval("d.get('key', 'default')", {"d": {"key": "val"}}) == "val"
def test_dict_get_missing(self):
assert safe_eval("d.get('missing', 'default')", {"d": {}}) == "default"
def test_dict_keys(self):
result = safe_eval("list(d.keys())", {"d": {"a": 1, "b": 2}})
assert sorted(result) == ["a", "b"]
def test_dict_values(self):
result = safe_eval("list(d.values())", {"d": {"a": 1, "b": 2}})
assert sorted(result) == [1, 2]
def test_string_lower(self):
assert safe_eval("s.lower()", {"s": "HELLO"}) == "hello"
def test_string_upper(self):
assert safe_eval("s.upper()", {"s": "hello"}) == "HELLO"
def test_string_strip(self):
assert safe_eval("s.strip()", {"s": " hi "}) == "hi"
def test_string_split(self):
assert safe_eval("s.split(',')", {"s": "a,b,c"}) == ["a", "b", "c"]
# ---------------------------------------------------------------------------
# Security: disallowed operations
# ---------------------------------------------------------------------------
class TestSecurity:
def test_import_blocked(self):
"""__import__ is not in context, so NameError is raised."""
with pytest.raises(NameError, match="not defined"):
safe_eval("__import__('os')")
def test_lambda_blocked(self):
with pytest.raises(ValueError, match="not allowed"):
safe_eval("(lambda: 1)()")
def test_comprehension_blocked(self):
with pytest.raises(ValueError, match="not allowed"):
safe_eval("[x for x in range(10)]")
def test_assignment_blocked(self):
"""Assignment expressions should not parse in eval mode."""
with pytest.raises(SyntaxError):
safe_eval("x = 5")
def test_disallowed_function_blocked(self):
"""eval is not in safe functions, so NameError is raised."""
with pytest.raises(NameError, match="not defined"):
safe_eval("eval('1+1')")
def test_exec_blocked(self):
"""exec is not in safe functions, so NameError is raised."""
with pytest.raises(NameError, match="not defined"):
safe_eval("exec('x=1')")
def test_type_call_blocked(self):
"""type is not in safe functions, so NameError is raised."""
with pytest.raises(NameError, match="not defined"):
safe_eval("type(42)")
def test_getattr_builtin_blocked(self):
"""getattr is not in safe functions, so NameError is raised."""
with pytest.raises(NameError, match="not defined"):
safe_eval("getattr(x, '__class__')", {"x": 42})
def test_empty_expression_raises(self):
with pytest.raises(SyntaxError):
safe_eval("")
# ---------------------------------------------------------------------------
# Real-world edge condition patterns (from graph executor usage)
# ---------------------------------------------------------------------------
class TestEdgeConditionPatterns:
"""Patterns commonly used in EdgeSpec.condition_expr."""
def test_output_key_exists_and_not_none(self):
ctx = {"output": {"approved_contacts": ["alice@example.com"]}}
assert safe_eval("output.get('approved_contacts') is not None", ctx) is True
def test_output_key_missing(self):
ctx = {"output": {}}
assert safe_eval("output.get('approved_contacts') is not None", ctx) is False
def test_output_key_check_with_fallback(self):
ctx = {"output": {"redo_extraction": True}}
assert safe_eval("output.get('redo_extraction') is not None", ctx) is True
def test_guard_then_length_check(self):
"""Guard pattern: check key exists, then check length."""
ctx = {"output": {"results": [1, 2, 3]}}
assert (
safe_eval(
"output.get('results') is not None and len(output['results']) > 0",
ctx,
)
is True
)
def test_guard_short_circuits_on_none(self):
"""Guard pattern: short-circuit prevents crash on None."""
ctx = {"output": {}}
assert (
safe_eval(
"output.get('results') is not None and len(output['results']) > 0",
ctx,
)
is False
)
def test_success_flag_check(self):
ctx = {"output": {"success": True}, "memory": {"attempts": 2}}
assert safe_eval("output.get('success') == True", ctx) is True
def test_memory_threshold(self):
ctx = {"memory": {"score": 0.85}}
assert safe_eval("memory.get('score', 0) >= 0.8", ctx) is True
def test_string_contains_check(self):
ctx = {"output": {"status": "completed_with_warnings"}}
assert safe_eval("'completed' in output.get('status', '')", ctx) is True
def test_fallback_chain(self):
"""or-chain for fallback values."""
ctx = {"output": {}}
result = safe_eval(
"output.get('primary') or output.get('secondary') or 'default'",
ctx,
)
assert result == "default"
def test_no_context_needed(self):
"""Some edges use constant expressions."""
assert safe_eval("True") is True
assert safe_eval("1 == 1") is True
@@ -0,0 +1,136 @@
# SDR Agent
An AI-powered sales development outreach automation template for [Hive](https://github.com/aden-hive/hive).
Score contacts by priority, filter suspicious profiles, generate personalized messages, and create Gmail drafts — all with human review before anything is sent.
## Overview
The SDR Agent automates the full outreach pipeline:
```
Intake → Score Contacts → Filter Contacts → Personalize → Send Outreach → Report
```
1. **Intake** — Accept a contact list and outreach goal; confirm strategy with user
2. **Score Contacts** — Rank contacts 0–100 using priority factors (alumni, degree, domain, etc.)
3. **Filter Contacts** — Detect and skip suspicious/fake profiles (risk score ≥ 7)
4. **Personalize** — Generate an 80–120 word personalized message per contact
5. **Send Outreach** — Create Gmail drafts for human review (never sends automatically)
6. **Report** — Summarize campaign: contacts scored, filtered, drafted
## Quickstart
```bash
cd examples/templates/sdr_agent
# Run interactively via TUI
python -m sdr_agent tui
# Run via CLI with a contacts JSON string
python -m sdr_agent run \
--contacts '[{"name":"Jane Doe","company":"Acme","title":"Engineer","connection_degree":"2nd","is_alumni":true}]' \
--goal "coffee chat" \
--background "Learning Technologist at UWO" \
--max-contacts 20
# Validate agent structure
python -m sdr_agent validate
```
## Contact Schema
Each contact in your list supports the following fields:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | ✅ | Contact's full name |
| `email` | string | ❌ | Email address (draft placeholder if missing) |
| `company` | string | ✅ | Current company |
| `title` | string | ✅ | Job title |
| `linkedin_url` | string | ❌ | LinkedIn profile URL |
| `connection_degree` | string | ❌ | `"1st"`, `"2nd"`, or `"3rd"` |
| `is_alumni` | boolean | ❌ | Shares school with user |
| `school_name` | string | ❌ | School name for alumni messaging |
| `connections_count` | integer | ❌ | Number of LinkedIn connections |
| `mutual_connections` | integer | ❌ | Count of mutual connections |
| `has_photo` | boolean | ❌ | Has a profile photo |
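For reference, a fully populated contact record might look like this (illustrative values; fields are exactly those in the table above):

```json
{
  "name": "Jane Doe",
  "email": "jane.doe@acme.com",
  "company": "Acme",
  "title": "Engineer",
  "linkedin_url": "https://linkedin.com/in/jane-doe",
  "connection_degree": "2nd",
  "is_alumni": true,
  "school_name": "University of Western Ontario",
  "connections_count": 640,
  "mutual_connections": 4,
  "has_photo": true
}
```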
## Scoring Model
The `score-contacts` node ranks each contact 0–100:
| Factor | Points |
|--------|--------|
| Alumni | +30 |
| 1st degree | +25 |
| 2nd degree | +20 |
| 3rd degree | +10 |
| Domain verified | +10 |
| Mutual connections (+1 each, max 10) | up to +10 |
| Active job posting | +10 |
| Has profile photo | +5 |
| 500+ connections | +5 |
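The factors are additive and the total is capped at 100. The scoring itself is performed by the LLM in the `score-contacts` node, but as a rough sketch of the arithmetic it is asked to apply (field names follow the Contact Schema above; `company_domain_verified` and `active_job_posting` are optional fields that appear in the sample contact data):

```python
def priority_score(contact: dict) -> int:
    """Illustrative additive scoring per the table above, capped at 100."""
    degree_points = {"1st": 25, "2nd": 20, "3rd": 10}
    score = 30 if contact.get("is_alumni") else 0
    score += degree_points.get(contact.get("connection_degree"), 0)
    score += 10 if contact.get("company_domain_verified") else 0
    score += min(contact.get("mutual_connections", 0), 10)  # +1 each, max +10
    score += 10 if contact.get("active_job_posting") else 0
    score += 5 if contact.get("has_photo") else 0
    score += 5 if contact.get("connections_count", 0) > 500 else 0
    return min(score, 100)
```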
## Scam Detection
The `filter-contacts` node calculates a risk score and excludes contacts with risk ≥ 7:
| Red Flag | Risk |
|----------|------|
| Fewer than 50 connections | +3 |
| No profile photo | +2 |
| Fewer than 2 work positions | +2 |
| Generic title + few connections | +2 |
| Unverifiable company | +2 |
| AI-generated-looking profile | +2 |
| 5000+ connections, 0 mutual | +1 |
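Risk flags are likewise additive. Some flags (unverifiable company, AI-generated-looking text) require LLM judgment, but the mechanical ones reduce to a check like this sketch (the `positions` field is hypothetical here, standing in for the "positions in work history" the node prompt refers to):

```python
def risk_score(contact: dict) -> int:
    """Illustrative risk scoring for the mechanically checkable red flags."""
    conns = contact.get("connections_count", 0)
    risk = 3 if conns < 50 else 0
    risk += 2 if not contact.get("has_photo") else 0
    risk += 2 if len(contact.get("positions", [])) < 2 else 0  # hypothetical field
    generic_titles = {"entrepreneur", "ceo", "consultant"}
    risk += 2 if contact.get("title", "").lower() in generic_titles and conns < 100 else 0
    risk += 1 if conns >= 5000 and contact.get("mutual_connections", 0) == 0 else 0
    return risk  # contacts scoring >= 7 are excluded from outreach
```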
## Pipeline Output Files
Each run writes to `~/.hive/agents/sdr_agent/data/`:
| File | Contents |
|------|----------|
| `contacts.jsonl` | Raw contact list |
| `scored_contacts.jsonl` | Contacts with `priority_score` |
| `safe_contacts.jsonl` | Contacts passing scam filter |
| `personalized_contacts.jsonl` | Contacts with `outreach_message` |
| `drafts.jsonl` | Draft creation records |
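All files are JSONL: one JSON object per line. An illustrative `scored_contacts.jsonl` record, with a score consistent with the model above (alumni +30, 2nd degree +20, photo +5):

```json
{"name": "Jane Doe", "company": "Acme", "title": "Engineer", "connection_degree": "2nd", "is_alumni": true, "has_photo": true, "priority_score": 55}
```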
## Safety Constraints
- **Never sends emails** — only `gmail_create_draft` is called; human must review and send
- **Batch limit** — processes at most `max_contacts` per run (default: 20)
- **Skip suspicious** — contacts with `risk_score ≥ 7` are always excluded
## Tools Required
- `gmail_create_draft` — create Gmail draft for each contact
- `load_data` — read JSONL data files
- `append_data` — write to JSONL data files
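These are invoked as tool calls from the node prompts. For example, the `send-outreach` prompt instructs the model to call (illustrative values; the signature follows the prompt's own usage):

```python
gmail_create_draft(
    to="jane.doe@acme.com",  # or the LinkedIn URL as a placeholder if no email
    subject="Coffee Chat Request",
    body=outreach_message,  # the personalized message for this contact
)
```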
## Architecture
```
┌──────────────────────────────────────────────────────────────┐
│ SDR Agent │
│ │
│ ┌────────┐ ┌───────────────┐ ┌────────────────┐ │
│ │ Intake │──▶│ Score Contacts│──▶│ Filter Contacts│ │
│ └────────┘ └───────────────┘ └────────────────┘ │
│ ▲ │ │
│ │ ▼ │
│ ┌────────┐ ┌───────────────┐ ┌─────────────┐ │
│ │ Report │◀──│ Send Outreach │◀──│ Personalize │ │
│ └────────┘ └───────────────┘ └─────────────┘ │
│ │
│ ● client_facing nodes: intake, report │
│ ● automated nodes: score-contacts, filter-contacts, │
│ personalize, send-outreach │
└──────────────────────────────────────────────────────────────┘
```
## Inspiration
This template is inspired by real-world SDR automation patterns, including contact ranking, scam detection, and two-step personalization (hook extraction → message generation) — demonstrating how job-search and sales outreach workflows can be modeled as AI agent pipelines in Hive.
@@ -0,0 +1,45 @@
"""
SDR Agent — Automated sales development outreach pipeline.
Score contacts by priority, filter suspicious profiles, generate personalized
outreach messages, and create Gmail drafts for human review before sending.
"""
from .agent import (
SDRAgent,
default_agent,
goal,
nodes,
edges,
loop_config,
async_entry_points,
entry_node,
entry_points,
pause_nodes,
terminal_nodes,
conversation_mode,
identity_prompt,
)
from .config import RuntimeConfig, AgentMetadata, default_config, metadata
__version__ = "1.0.0"
__all__ = [
"SDRAgent",
"default_agent",
"goal",
"nodes",
"edges",
"loop_config",
"async_entry_points",
"entry_node",
"entry_points",
"pause_nodes",
"terminal_nodes",
"conversation_mode",
"identity_prompt",
"RuntimeConfig",
"AgentMetadata",
"default_config",
"metadata",
]
@@ -0,0 +1,234 @@
"""
CLI entry point for SDR Agent.
Automates sales development outreach: score contacts, filter suspicious
profiles, generate personalized messages, and create Gmail drafts.
"""
import asyncio
import json
import logging
import sys
import click
from .agent import default_agent, SDRAgent
def setup_logging(verbose=False, debug=False):
"""Configure logging for execution visibility."""
if debug:
level, fmt = logging.DEBUG, "%(asctime)s %(name)s: %(message)s"
elif verbose:
level, fmt = logging.INFO, "%(message)s"
else:
level, fmt = logging.WARNING, "%(levelname)s: %(message)s"
logging.basicConfig(level=level, format=fmt, stream=sys.stderr)
logging.getLogger("framework").setLevel(level)
@click.group()
@click.version_option(version="1.0.0")
def cli():
"""SDR Agent - Automated outreach with contact scoring and personalization."""
pass
@cli.command()
@click.option(
"--contacts",
"-c",
type=str,
required=True,
help="JSON string or file path of contacts list",
)
@click.option(
"--goal",
"-g",
type=str,
default="coffee chat",
help="Outreach goal (e.g. 'coffee chat', 'sales pitch')",
)
@click.option(
"--background",
"-b",
type=str,
default="",
help="Your background/role for personalization",
)
@click.option(
"--max-contacts",
"-m",
type=int,
default=20,
help="Max contacts to process per batch (default: 20)",
)
@click.option(
"--mock", is_flag=True, help="Run in mock mode without LLM or Gmail calls"
)
@click.option("--quiet", "-q", is_flag=True, help="Only output result JSON")
@click.option("--verbose", "-v", is_flag=True, help="Show execution details")
@click.option("--debug", is_flag=True, help="Show debug logging")
def run(contacts, goal, background, max_contacts, mock, quiet, verbose, debug):
"""Execute an SDR outreach campaign for the given contacts."""
if not quiet:
setup_logging(verbose=verbose, debug=debug)
context = {
"contacts": contacts,
"outreach_goal": goal,
"user_background": background,
"max_contacts": str(max_contacts),
}
result = asyncio.run(default_agent.run(context, mock_mode=mock))
output_data = {
"success": result.success,
"steps_executed": result.steps_executed,
"output": result.output,
}
if result.error:
output_data["error"] = result.error
click.echo(json.dumps(output_data, indent=2, default=str))
sys.exit(0 if result.success else 1)
@cli.command()
@click.option("--mock", is_flag=True, help="Run in mock mode")
@click.option("--verbose", "-v", is_flag=True, help="Show execution details")
@click.option("--debug", is_flag=True, help="Show debug logging")
def tui(mock, verbose, debug):
"""Launch the TUI dashboard for interactive SDR outreach."""
setup_logging(verbose=verbose, debug=debug)
try:
from framework.tui.app import AdenTUI
except ImportError:
click.echo(
"TUI requires the 'textual' package. Install with: pip install textual"
)
sys.exit(1)
async def run_with_tui():
agent = SDRAgent()
await agent.start(mock_mode=mock)
try:
app = AdenTUI(agent._agent_runtime)
await app.run_async()
finally:
await agent.stop()
asyncio.run(run_with_tui())
@cli.command()
@click.option("--json", "output_json", is_flag=True)
def info(output_json):
"""Show agent information."""
info_data = default_agent.info()
if output_json:
click.echo(json.dumps(info_data, indent=2))
else:
click.echo(f"Agent: {info_data['name']}")
click.echo(f"Version: {info_data['version']}")
click.echo(f"Description: {info_data['description']}")
click.echo(f"\nNodes: {', '.join(info_data['nodes'])}")
click.echo(f"Client-facing: {', '.join(info_data['client_facing_nodes'])}")
click.echo(f"Entry: {info_data['entry_node']}")
click.echo(f"Terminal: {', '.join(info_data['terminal_nodes'])}")
@cli.command()
def validate():
"""Validate agent structure."""
validation = default_agent.validate()
if validation["valid"]:
click.echo("Agent is valid")
if validation["warnings"]:
for warning in validation["warnings"]:
click.echo(f" WARNING: {warning}")
else:
click.echo("Agent has errors:")
for error in validation["errors"]:
click.echo(f" ERROR: {error}")
sys.exit(0 if validation["valid"] else 1)
@cli.command()
@click.option("--verbose", "-v", is_flag=True)
def shell(verbose):
"""Interactive SDR outreach session (CLI, no TUI)."""
asyncio.run(_interactive_shell(verbose))
async def _interactive_shell(verbose=False):
"""Async interactive shell."""
setup_logging(verbose=verbose)
click.echo("=== SDR Agent ===")
click.echo("Automated contact scoring, filtering, and outreach personalization\n")
agent = SDRAgent()
await agent.start()
try:
while True:
try:
goal = await asyncio.get_running_loop().run_in_executor(
None, input, "Outreach goal (e.g. 'coffee chat')> "
)
if goal.lower() in ["quit", "exit", "q"]:
click.echo("Goodbye!")
break
contacts = await asyncio.get_running_loop().run_in_executor(
None, input, "Contacts (JSON)> "
)
background = await asyncio.get_running_loop().run_in_executor(
None, input, "Your background/role> "
)
if not contacts.strip():
continue
click.echo("\nRunning SDR campaign...\n")
result = await agent.trigger_and_wait(
"start",
{
"contacts": contacts,
"outreach_goal": goal,
"user_background": background,
"max_contacts": "20",
},
)
if result is None:
click.echo("\n[Execution timed out]\n")
continue
if result.success:
output = result.output
if "summary_report" in output:
click.echo("\n--- Campaign Report ---\n")
click.echo(output["summary_report"])
click.echo("\n")
else:
click.echo(f"\nCampaign failed: {result.error}\n")
except KeyboardInterrupt:
click.echo("\nGoodbye!")
break
except Exception as e:
click.echo(f"Error: {e}", err=True)
import traceback
traceback.print_exc()
finally:
await agent.stop()
if __name__ == "__main__":
cli()
@@ -0,0 +1,378 @@
{
"agent": {
"id": "sdr_agent",
"name": "SDR Agent",
"version": "1.0.0",
"description": "Automate sales development outreach using AI-powered contact scoring, scam detection, and personalized message generation. Score contacts by priority, filter suspicious profiles, generate personalized outreach messages, and create Gmail drafts for review — all without sending emails automatically."
},
"graph": {
"id": "sdr-agent-graph",
"goal_id": "sdr-agent",
"version": "1.0.0",
"entry_node": "intake",
"entry_points": {
"start": "intake"
},
"pause_nodes": [],
"terminal_nodes": ["complete"],
"conversation_mode": "continuous",
"identity_prompt": "You are an SDR (Sales Development Representative) assistant. You help users automate their outreach by scoring contacts, filtering suspicious profiles, generating personalized messages, and creating Gmail drafts — all with human review before anything is sent.",
"nodes": [
{
"id": "intake",
"name": "Intake",
"description": "Receive the contact list and outreach goal from the user. Confirm the strategy and batch size before proceeding.",
"node_type": "event_loop",
"input_keys": [
"contacts",
"outreach_goal",
"max_contacts",
"user_background"
],
"output_keys": [
"contacts",
"outreach_goal",
"max_contacts",
"user_background"
],
"nullable_output_keys": [],
"input_schema": {},
"output_schema": {},
"system_prompt": "You are an SDR (Sales Development Representative) assistant helping automate outreach.\n\n**STEP 1 — Respond to the user (text only, NO tool calls):**\n\nRead the user's input from context. Confirm your understanding of:\n- The contact list they provided (or ask them to provide one)\n- Their outreach goal (e.g. \"coffee chat\", \"sales pitch\", \"networking\")\n- Their background/role (used to personalize messages)\n- The batch size (max_contacts). Default to 20 if not specified.\n\nPresent a summary like:\n\"Here's what I'll do:\n1. Score and rank your contacts by priority (alumni status, connection degree, etc.)\n2. Filter out suspicious or low-quality profiles (risk score ≥ 7)\n3. Generate a personalized outreach message for each contact\n4. Create Gmail draft emails for your review — I never send automatically\n\nReady to proceed with [N] contacts for [goal]?\"\n\n**STEP 2 — After the user confirms, call set_output:**\n\n- set_output(\"contacts\", <the contact list as a JSON string>)\n- set_output(\"outreach_goal\", <the confirmed goal, e.g. \"coffee chat\">)\n- set_output(\"max_contacts\", <the confirmed batch size as a string, e.g. \"20\">)\n- set_output(\"user_background\", <user's background/role, e.g. \"Learning Technologist at UWO\">)",
"tools": [],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 0,
"output_model": null,
"max_validation_retries": 2,
"client_facing": true,
"success_criteria": null
},
{
"id": "score-contacts",
"name": "Score Contacts",
"description": "Score and rank each contact from 0 to 100 based on priority factors: alumni status, connection degree, domain verification, mutual connections, and active job postings.",
"node_type": "event_loop",
"input_keys": [
"contacts",
"outreach_goal"
],
"output_keys": [
"scored_contacts"
],
"nullable_output_keys": [],
"input_schema": {},
"output_schema": {},
"system_prompt": "You are a contact prioritization engine. Score each contact from 0 to 100.\n\n**SCORING RULES (additive):**\n- Alumni of the user's school: +30 points\n- 1st degree connection: +25 points\n- 2nd degree connection: +20 points\n- 3rd degree connection: +10 points\n- Domain verified (company email matches LinkedIn company): +10 points\n- Has mutual connections (1 point each, max 10): up to +10 points\n- Active job posting at their company: +10 points\n- Has a profile photo: +5 points\n- Over 500 connections: +5 points\n\nCap the final score at 100.\n\n**STEP 1 — Load the contacts:**\nCall load_data(filename=\"contacts.jsonl\") to read the contact list.\nIf \"contacts\" in context is a JSON string (not a filename), write it first:\n- For each contact in the list, call append_data(filename=\"contacts.jsonl\", data=<JSON contact object>)\nThen read it back.\n\n**STEP 2 — Score each contact:**\nFor each contact, calculate the priority score using the rules above.\nAdd a \"priority_score\" field to each contact object.\n\n**STEP 3 — Write scored contacts and set output:**\n- Call append_data(filename=\"scored_contacts.jsonl\", data=<JSON contact with priority_score>) for each contact.\n- Sort contacts by priority_score (highest first) in your final output.\n- Call set_output(\"scored_contacts\", \"scored_contacts.jsonl\")",
"tools": [
"load_data",
"append_data"
],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 0,
"output_model": null,
"max_validation_retries": 2,
"client_facing": false,
"success_criteria": null
},
{
"id": "filter-contacts",
"name": "Filter Contacts",
"description": "Analyze each contact for authenticity and filter out suspicious profiles. Any contact with a risk score of 7 or higher is skipped.",
"node_type": "event_loop",
"input_keys": [
"scored_contacts"
],
"output_keys": [
"safe_contacts",
"filtered_count"
],
"nullable_output_keys": [],
"input_schema": {},
"output_schema": {},
"system_prompt": "You are a profile authenticity analyzer. Your job is to detect suspicious or fake LinkedIn profiles.\n\n**RISK SCORING RULES (additive):**\n- Fewer than 50 connections: +3 points\n- No profile photo: +2 points\n- Fewer than 2 positions in work history: +2 points\n- Generic title (e.g. \"entrepreneur\", \"CEO\", \"consultant\") AND fewer than 100 connections: +2 points\n- Company name appears generic or unverifiable: +2 points\n- Profile text seems auto-generated or overly promotional: +2 points\n- Connection count over 5000 with no mutual connections: +1 point\n\n**DECISION RULE:**\n- risk_score < 4: SAFE — include in outreach\n- risk_score 46: CAUTION — include but flag\n- risk_score ≥ 7: SKIP — exclude from outreach\n\n**STEP 1 — Load scored contacts:**\nCall load_data(filename=<the \"scored_contacts\" value from context>).\nProcess contacts chunk by chunk if has_more=true.\n\n**STEP 2 — Analyze each contact:**\nFor each contact, calculate a risk_score using the rules above.\nDetermine: is_safe (risk_score < 7), recommendation (safe/caution/skip), flags (list of triggered rules).\n\n**STEP 3 — Write safe contacts and set output:**\n- For each contact where risk_score < 7: call append_data(filename=\"safe_contacts.jsonl\", data=<contact JSON with risk_score and flags added>)\n- Track how many contacts were filtered (risk_score ≥ 7)\n- Call set_output(\"safe_contacts\", \"safe_contacts.jsonl\")\n- Call set_output(\"filtered_count\", <number of skipped contacts as string>)",
"tools": [
"load_data",
"append_data"
],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 0,
"output_model": null,
"max_validation_retries": 2,
"client_facing": false,
"success_criteria": null
},
{
"id": "personalize",
"name": "Personalize",
"description": "Generate a personalized outreach message for each contact based on their profile, shared background, and the user's outreach goal.",
"node_type": "event_loop",
"input_keys": [
"safe_contacts",
"outreach_goal",
"user_background"
],
"output_keys": [
"personalized_contacts"
],
"nullable_output_keys": [],
"input_schema": {},
"output_schema": {},
"system_prompt": "You are a professional outreach message writer. Generate personalized messages for each contact.\n\n**TWO-STEP PERSONALIZATION:**\n\nFor each contact, follow this two-step approach:\n\nSTEP A — Extract hooks (analyze the profile):\nLook for 2-3 specific talking points from the contact's profile:\n- Shared alumni connection\n- Specific role, company, or career transition worth mentioning\n- Any mutual interests aligned with the user's background\n\nSTEP B — Generate the message:\nWrite a warm, professional outreach message using the hooks.\n\n**MESSAGE REQUIREMENTS:**\n- 80-120 words (LinkedIn message length)\n- Start with a specific observation (\"I noticed you...\" or \"Fellow [school] alum here...\")\n- Mention the shared connection or interest naturally\n- State the outreach goal clearly but softly (e.g. \"Open to a brief 15-min chat?\")\n- Professional but warm tone — NOT templated or AI-sounding\n- Do NOT mention job postings directly unless the goal is job-related\n- Do NOT use generic openers like \"I hope this finds you well\"\n- End with a low-pressure ask\n\n**STEP 1 — Load safe contacts:**\nCall load_data(filename=<the \"safe_contacts\" value from context>).\n\n**STEP 2 — Generate message for each contact:**\nFor each contact: generate the personalized message using the two-step approach above.\nAdd \"outreach_message\" field to each contact object.\n\n**STEP 3 — Write output and set:**\n- Call append_data(filename=\"personalized_contacts.jsonl\", data=<contact JSON with outreach_message>) for each.\n- Call set_output(\"personalized_contacts\", \"personalized_contacts.jsonl\")",
"tools": [
"load_data",
"append_data"
],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 0,
"output_model": null,
"max_validation_retries": 2,
"client_facing": false,
"success_criteria": null
},
{
"id": "send-outreach",
"name": "Send Outreach",
"description": "Create Gmail draft emails for each contact using their personalized message. Drafts are created for human review — emails are never sent automatically.",
"node_type": "event_loop",
"input_keys": [
"personalized_contacts",
"outreach_goal"
],
"output_keys": [
"drafts_created"
],
"nullable_output_keys": [],
"input_schema": {},
"output_schema": {},
"system_prompt": "You are an outreach execution assistant. Create Gmail draft emails for each contact.\n\n**CRITICAL RULE: NEVER send emails automatically. Only create drafts.**\n\n**STEP 1 — Load personalized contacts:**\nCall load_data(filename=<the \"personalized_contacts\" value from context>).\nProcess chunk by chunk if has_more=true.\n\n**STEP 2 — Create Gmail draft for each contact:**\nFor each contact with an \"outreach_message\":\n- subject: \"Coffee Chat Request\" (or appropriate subject based on outreach_goal)\n- to: contact's email address (use LinkedIn profile URL if email not available — note this in body)\n- body: the \"outreach_message\" from the contact object\n\nCall gmail_create_draft(\n to=<contact email or linkedin_url as placeholder>,\n subject=<appropriate subject line>,\n body=<outreach_message>\n)\n\nRecord each draft: call append_data(\n filename=\"drafts.jsonl\",\n data=<JSON: {contact_name, contact_email, subject, status: \"draft_created\"}>\n)\n\n**STEP 3 — Set output:**\n- Call set_output(\"drafts_created\", \"drafts.jsonl\")\n\n**IMPORTANT:** If a contact has no email address, create the draft with their LinkedIn URL as a placeholder and add a note in the body: \"Note: Please find the recipient's email before sending.\"",
"tools": [
"gmail_create_draft",
"load_data",
"append_data"
],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 0,
"output_model": null,
"max_validation_retries": 2,
"client_facing": false,
"success_criteria": null
},
{
"id": "report",
"name": "Report",
"description": "Generate a summary report of the outreach campaign: contacts scored, filtered, messaged, and drafts created. Present to user for review.",
"node_type": "event_loop",
"input_keys": [
"drafts_created",
"filtered_count",
"outreach_goal"
],
"output_keys": [
"summary_report"
],
"nullable_output_keys": [],
"input_schema": {},
"output_schema": {},
"system_prompt": "You are an SDR assistant. Generate a clear campaign summary report and present it to the user.\n\n**STEP 1 — Load draft records:**\nCall load_data(filename=<the \"drafts_created\" value from context>) to read the draft records.\nIf has_more=true, load additional chunks until all records are loaded.\n\n**STEP 2 — Present the report (text only, NO tool calls):**\n\nPresent a clean summary:\n\n📊 **SDR Campaign Summary — [outreach_goal]**\n\n**Overview:**\n- Total contacts processed: [N]\n- Contacts filtered (suspicious profiles): [filtered_count]\n- Safe contacts messaged: [N - filtered_count]\n- Gmail drafts created: [N]\n\n**Drafts Created:**\nList each draft: Contact Name | Company | Subject\n\n**Next Steps:**\n\"Your Gmail drafts are ready for review. Please:\n1. Open Gmail and review each draft\n2. Personalize further if needed\n3. Send when ready\n\nCampaign complete!\"\n\n**STEP 3 — After the user responds, call set_output:**\n- set_output(\"summary_report\", <the formatted report text>)",
"tools": [
"load_data"
],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 0,
"output_model": null,
"max_validation_retries": 2,
"client_facing": true,
"success_criteria": null
},
{
"id": "complete",
"name": "Complete",
"description": "Terminal node - campaign complete.",
"node_type": "event_loop",
"input_keys": [
"summary_report"
],
"output_keys": [
"final_report"
],
"nullable_output_keys": [],
"input_schema": {},
"output_schema": {},
"system_prompt": "Campaign is complete. Set the final output.\n\nCall set_output(\"final_report\", <summary_report value from context>)",
"tools": [],
"model": null,
"function": null,
"routes": {},
"max_retries": 3,
"retry_on": [],
"max_node_visits": 1,
"output_model": null,
"max_validation_retries": 2,
"client_facing": false,
"success_criteria": null
}
],
"edges": [
{
"id": "intake-to-score",
"source": "intake",
"target": "score-contacts",
"condition": "on_success",
"condition_expr": null,
"priority": 1,
"input_mapping": {}
},
{
"id": "score-to-filter",
"source": "score-contacts",
"target": "filter-contacts",
"condition": "on_success",
"condition_expr": null,
"priority": 1,
"input_mapping": {}
},
{
"id": "filter-to-personalize",
"source": "filter-contacts",
"target": "personalize",
"condition": "on_success",
"condition_expr": null,
"priority": 1,
"input_mapping": {}
},
{
"id": "personalize-to-send",
"source": "personalize",
"target": "send-outreach",
"condition": "on_success",
"condition_expr": null,
"priority": 1,
"input_mapping": {}
},
{
"id": "send-to-report",
"source": "send-outreach",
"target": "report",
"condition": "on_success",
"condition_expr": null,
"priority": 1,
"input_mapping": {}
},
{
"id": "report-to-complete",
"source": "report",
"target": "complete",
"condition": "on_success",
"condition_expr": null,
"priority": 1,
"input_mapping": {}
}
],
"max_steps": 100,
"max_retries_per_node": 3,
"description": "Automated SDR outreach pipeline: score contacts by priority, filter suspicious profiles, generate personalized messages, and create Gmail drafts for human review."
},
"goal": {
"id": "sdr-agent",
"name": "SDR Agent",
"description": "Automate sales development outreach: score contacts by priority, filter suspicious profiles, generate personalized messages, and create Gmail drafts for human review.",
"status": "draft",
"success_criteria": [
{
"id": "contact-scoring-accuracy",
"description": "Contacts are correctly scored and ranked by priority factors (alumni status, connection degree, domain verification)",
"metric": "scoring_accuracy",
"target": ">=90%",
"weight": 0.30,
"met": false
},
{
"id": "scam-filter-effectiveness",
"description": "Suspicious profiles (risk_score >= 7) are correctly identified and excluded from outreach",
"metric": "filter_precision",
"target": ">=95%",
"weight": 0.25,
"met": false
},
{
"id": "message-personalization",
"description": "Generated messages reference specific profile details (alumni connection, role, company) and match the outreach goal",
"metric": "personalization_score",
"target": ">=80%",
"weight": 0.30,
"met": false
},
{
"id": "draft-creation",
"description": "Gmail drafts are created for all safe contacts without errors",
"metric": "draft_success_rate",
"target": "100%",
"weight": 0.15,
"met": false
}
],
"constraints": [
{
"id": "draft-not-send",
"description": "Agent creates Gmail drafts but NEVER sends emails automatically",
"constraint_type": "hard",
"category": "safety",
"check": ""
},
{
"id": "respect-batch-limit",
"description": "Must not process more contacts than the configured max_contacts parameter",
"constraint_type": "hard",
"category": "operational",
"check": ""
},
{
"id": "skip-suspicious",
"description": "Contacts with risk_score >= 7 must be excluded from outreach",
"constraint_type": "hard",
"category": "safety",
"check": ""
}
],
"context": {},
"required_capabilities": [],
"input_schema": {},
"output_schema": {},
"version": "1.0.0",
"parent_version": null,
"evolution_reason": null
},
"required_tools": [
"gmail_create_draft",
"load_data",
"append_data"
],
"metadata": {
"node_count": 7,
"edge_count": 6
}
}
@@ -0,0 +1,375 @@
"""Agent graph construction for SDR Agent."""
from pathlib import Path
from framework.graph import EdgeSpec, EdgeCondition, Goal, SuccessCriterion, Constraint
from framework.graph.checkpoint_config import CheckpointConfig
from framework.graph.edge import AsyncEntryPointSpec, GraphSpec
from framework.graph.executor import ExecutionResult
from framework.llm import LiteLLMProvider
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from .config import default_config, metadata
from .nodes import (
intake_node,
score_contacts_node,
filter_contacts_node,
personalize_node,
send_outreach_node,
report_node,
)
# Goal definition
goal = Goal(
id="sdr-agent",
name="SDR Agent",
description=(
"Automate sales development outreach: score contacts by priority, "
"filter suspicious profiles, generate personalized messages, "
"and create Gmail drafts for human review."
),
success_criteria=[
SuccessCriterion(
id="contact-scoring-accuracy",
description=(
"Contacts are correctly scored and ranked by priority factors "
"(alumni status, connection degree, domain verification)"
),
metric="scoring_accuracy",
target=">=90%",
weight=0.30,
),
SuccessCriterion(
id="scam-filter-effectiveness",
description=(
"Suspicious profiles (risk_score >= 7) are correctly identified "
"and excluded from outreach"
),
metric="filter_precision",
target=">=95%",
weight=0.25,
),
SuccessCriterion(
id="message-personalization",
description=(
"Generated messages reference specific profile details "
"(alumni connection, role, company) and match the outreach goal"
),
metric="personalization_score",
target=">=80%",
weight=0.30,
),
SuccessCriterion(
id="draft-creation",
description="Gmail drafts are created for all safe contacts without errors",
metric="draft_success_rate",
target="100%",
weight=0.15,
),
],
constraints=[
Constraint(
id="draft-not-send",
description="Agent creates Gmail drafts but NEVER sends emails automatically",
constraint_type="hard",
category="safety",
),
Constraint(
id="respect-batch-limit",
description="Must not process more contacts than the configured max_contacts parameter",
constraint_type="hard",
category="operational",
),
Constraint(
id="skip-suspicious",
description="Contacts with risk_score >= 7 must be excluded from outreach",
constraint_type="hard",
category="safety",
),
],
)
# Node list
nodes = [
intake_node,
score_contacts_node,
filter_contacts_node,
personalize_node,
send_outreach_node,
report_node,
]
# Edge definitions
edges = [
EdgeSpec(
id="intake-to-score",
source="intake",
target="score-contacts",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="score-to-filter",
source="score-contacts",
target="filter-contacts",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="filter-to-personalize",
source="filter-contacts",
target="personalize",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="personalize-to-send",
source="personalize",
target="send-outreach",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="send-to-report",
source="send-outreach",
target="report",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
EdgeSpec(
id="report-to-intake",
source="report",
target="intake",
condition=EdgeCondition.ON_SUCCESS,
priority=1,
),
]
# Graph configuration
entry_node = "intake"
entry_points = {"start": "intake"}
async_entry_points: list[AsyncEntryPointSpec] = [] # SDR Agent is manually triggered
pause_nodes = []
terminal_nodes = []
loop_config = {
"max_iterations": 100,
"max_tool_calls_per_turn": 30,
"max_tool_result_chars": 8000,
"max_history_tokens": 32000,
}
conversation_mode = "continuous"
identity_prompt = (
"You are an SDR (Sales Development Representative) assistant. "
"You help users automate their outreach by scoring contacts, filtering "
"suspicious profiles, generating personalized messages, and creating "
"Gmail drafts — all with human review before anything is sent."
)
class SDRAgent:
"""
SDR Agent — 6-node pipeline for automated outreach.
Flow: intake -> score-contacts -> filter-contacts -> personalize
-> send-outreach -> report -> intake (loop)
Pipeline:
1. intake: Receive contact list and outreach goal
2. score-contacts: Rank contacts 0-100 by priority factors
3. filter-contacts: Remove suspicious profiles (risk >= 7)
4. personalize: Generate personalized messages for each contact
5. send-outreach: Create Gmail drafts (never sends automatically)
6. report: Summarize campaign results and present to user
"""
def __init__(self, config=None):
self.config = config or default_config
self.goal = goal
self.nodes = nodes
self.edges = edges
self.entry_node = entry_node
self.entry_points = entry_points
self.pause_nodes = pause_nodes
self.terminal_nodes = terminal_nodes
self._agent_runtime: AgentRuntime | None = None
self._graph: GraphSpec | None = None
self._tool_registry: ToolRegistry | None = None
def _build_graph(self) -> GraphSpec:
"""Build the GraphSpec."""
return GraphSpec(
id="sdr-agent-graph",
goal_id=self.goal.id,
version="1.0.0",
entry_node=self.entry_node,
entry_points=self.entry_points,
terminal_nodes=self.terminal_nodes,
pause_nodes=self.pause_nodes,
nodes=self.nodes,
edges=self.edges,
default_model=self.config.model,
max_tokens=self.config.max_tokens,
loop_config=loop_config,
conversation_mode=conversation_mode,
identity_prompt=identity_prompt,
)
def _setup(self, mock_mode=False) -> None:
"""Set up the agent runtime with sessions, checkpoints, and logging."""
self._storage_path = Path.home() / ".hive" / "agents" / "sdr_agent"
self._storage_path.mkdir(parents=True, exist_ok=True)
self._tool_registry = ToolRegistry()
mcp_config_path = Path(__file__).parent / "mcp_servers.json"
if mcp_config_path.exists():
self._tool_registry.load_mcp_config(mcp_config_path)
tools_path = Path(__file__).parent / "tools.py"
if tools_path.exists():
self._tool_registry.discover_from_module(tools_path)
if mock_mode:
from framework.llm.mock import MockLLMProvider
llm = MockLLMProvider()
else:
llm = LiteLLMProvider(
model=self.config.model,
api_key=self.config.api_key,
api_base=self.config.api_base,
)
tool_executor = self._tool_registry.get_executor()
tools = list(self._tool_registry.get_tools().values())
self._graph = self._build_graph()
checkpoint_config = CheckpointConfig(
enabled=True,
checkpoint_on_node_start=False,
checkpoint_on_node_complete=True,
checkpoint_max_age_days=7,
async_checkpoint=True,
)
entry_point_specs = [
EntryPointSpec(
id="default",
name="Default",
entry_node=self.entry_node,
trigger_type="manual",
isolation_level="shared",
),
]
self._agent_runtime = create_agent_runtime(
graph=self._graph,
goal=self.goal,
storage_path=self._storage_path,
entry_points=entry_point_specs,
llm=llm,
tools=tools,
tool_executor=tool_executor,
checkpoint_config=checkpoint_config,
)
async def start(self, mock_mode=False) -> None:
"""Set up and start the agent runtime."""
if self._agent_runtime is None:
self._setup(mock_mode=mock_mode)
if not self._agent_runtime.is_running:
await self._agent_runtime.start()
async def stop(self) -> None:
"""Stop the agent runtime and clean up."""
if self._agent_runtime and self._agent_runtime.is_running:
await self._agent_runtime.stop()
self._agent_runtime = None
async def trigger_and_wait(
self,
entry_point: str,
input_data: dict,
timeout: float | None = None,
session_state: dict | None = None,
) -> ExecutionResult | None:
"""Execute the graph and wait for completion."""
if self._agent_runtime is None:
raise RuntimeError("Agent not started. Call start() first.")
return await self._agent_runtime.trigger_and_wait(
entry_point_id=entry_point,
input_data=input_data,
timeout=timeout,
session_state=session_state,
)
async def run(
self, context: dict, mock_mode=False, session_state=None
) -> ExecutionResult:
"""Run the agent (convenience method for single execution)."""
await self.start(mock_mode=mock_mode)
try:
result = await self.trigger_and_wait(
"default", context, session_state=session_state
)
return result or ExecutionResult(success=False, error="Execution timeout")
finally:
await self.stop()
def info(self):
"""Get agent information."""
return {
"name": metadata.name,
"version": metadata.version,
"description": metadata.description,
"goal": {
"name": self.goal.name,
"description": self.goal.description,
},
"nodes": [n.id for n in self.nodes],
"edges": [e.id for e in self.edges],
"entry_node": self.entry_node,
"entry_points": self.entry_points,
"pause_nodes": self.pause_nodes,
"terminal_nodes": self.terminal_nodes,
"client_facing_nodes": [n.id for n in self.nodes if n.client_facing],
}
def validate(self):
"""Validate agent structure."""
errors = []
warnings = []
node_ids = {node.id for node in self.nodes}
for edge in self.edges:
if edge.source not in node_ids:
errors.append(f"Edge {edge.id}: source '{edge.source}' not found")
if edge.target not in node_ids:
errors.append(f"Edge {edge.id}: target '{edge.target}' not found")
if self.entry_node not in node_ids:
errors.append(f"Entry node '{self.entry_node}' not found")
for terminal in self.terminal_nodes:
if terminal not in node_ids:
errors.append(f"Terminal node '{terminal}' not found")
for ep_id, node_id in self.entry_points.items():
if node_id not in node_ids:
errors.append(
f"Entry point '{ep_id}' references unknown node '{node_id}'"
)
return {
"valid": len(errors) == 0,
"errors": errors,
"warnings": warnings,
}
# Create default instance
default_agent = SDRAgent()
@@ -0,0 +1,30 @@
"""Runtime configuration for SDR Agent."""
from dataclasses import dataclass
from framework.config import RuntimeConfig
default_config = RuntimeConfig()
@dataclass
class AgentMetadata:
name: str = "SDR Agent"
version: str = "1.0.0"
description: str = (
"Automate sales development outreach using AI-powered contact scoring, "
"scam detection, and personalized message generation. "
"Score contacts by priority, filter suspicious profiles, generate "
"personalized outreach messages, and create Gmail drafts for review."
)
intro_message: str = (
"Hi! I'm your SDR (Sales Development Representative) assistant. "
"Provide a list of contacts and your outreach goal, and I'll "
"score them by priority, filter out suspicious profiles, generate "
"personalized messages for each contact, and create Gmail drafts "
"for your review. I never send emails automatically — you stay in control. "
"To get started, share your contact list and tell me about your outreach goal!"
)
metadata = AgentMetadata()
@@ -0,0 +1,97 @@
[
{
"name": "Sarah Chen",
"email": "sarah.chen@techcorp.io",
"company": "TechCorp",
"title": "Learning & Development Manager",
"linkedin_url": "https://linkedin.com/in/sarah-chen-ld",
"connection_degree": "2nd",
"is_alumni": true,
"school_name": "University of Western Ontario",
"connections_count": 843,
"mutual_connections": 7,
"has_photo": true,
"company_domain_verified": true
},
{
"name": "James Okafor",
"email": "james.okafor@edventure.co",
"company": "EdVenture",
"title": "Instructional Designer",
"linkedin_url": "https://linkedin.com/in/james-okafor-id",
"connection_degree": "1st",
"is_alumni": false,
"connections_count": 621,
"mutual_connections": 12,
"has_photo": true,
"company_domain_verified": true
},
{
"name": "Emily Zhao",
"email": "emily.zhao@univedu.ca",
"company": "UniEdu",
"title": "Director of Digital Learning",
"linkedin_url": "https://linkedin.com/in/emily-zhao-dl",
"connection_degree": "2nd",
"is_alumni": true,
"school_name": "University of Western Ontario",
"connections_count": 1204,
"mutual_connections": 3,
"has_photo": true,
"company_domain_verified": true,
"active_job_posting": true
},
{
"name": "Marcus Williams",
"email": "marcus@growthsales.io",
"company": "GrowthSales",
"title": "CEO",
"linkedin_url": "https://linkedin.com/in/marcus-williams-ceo",
"connection_degree": "3rd",
"is_alumni": false,
"connections_count": 6300,
"mutual_connections": 0,
"has_photo": true,
"company_domain_verified": false
},
{
"name": "Priya Patel",
"email": "",
"company": "FutureLearn Inc.",
"title": "EdTech Product Manager",
"linkedin_url": "https://linkedin.com/in/priya-patel-edtech",
"connection_degree": "2nd",
"is_alumni": false,
"connections_count": 512,
"mutual_connections": 5,
"has_photo": true,
"company_domain_verified": true
},
{
"name": "Alex Johnson",
"email": "alex@bizopp.biz",
"company": "Biz Opportunity Global",
"title": "Entrepreneur",
"linkedin_url": "https://linkedin.com/in/alex-johnson-biz",
"connection_degree": "3rd",
"is_alumni": false,
"connections_count": 38,
"mutual_connections": 0,
"has_photo": false,
"company_domain_verified": false
},
{
"name": "Natalie Brown",
"email": "natalie.brown@learningpro.com",
"company": "LearningPro",
"title": "HR Learning Specialist",
"linkedin_url": "https://linkedin.com/in/natalie-brown-hr",
"connection_degree": "1st",
"is_alumni": true,
"school_name": "University of Western Ontario",
"connections_count": 389,
"mutual_connections": 9,
"has_photo": true,
"company_domain_verified": true
}
]
@@ -0,0 +1,270 @@
{
"original_draft": {
"agent_name": "sdr_agent",
"goal": "Automate sales development outreach: score contacts by priority, filter suspicious profiles, generate personalized messages, and create Gmail drafts for human review.",
"description": "",
"success_criteria": [
"Contacts are correctly scored and ranked by priority factors (alumni status, connection degree, domain verification)",
"Suspicious profiles (risk_score >= 7) are correctly identified and excluded from outreach",
"Generated messages reference specific profile details (alumni connection, role, company) and match the outreach goal",
"Gmail drafts are created for all safe contacts without errors"
],
"constraints": [
"Agent creates Gmail drafts but NEVER sends emails automatically",
"Must not process more contacts than the configured max_contacts parameter",
"Contacts with risk_score >= 7 must be excluded from outreach"
],
"nodes": [
{
"id": "intake",
"name": "Intake",
"description": "Receive the contact list and outreach goal from the user. Confirm the strategy and batch size before proceeding.",
"node_type": "event_loop",
"tools": [
"load_contacts_from_file"
],
"input_keys": [
"contacts",
"outreach_goal",
"max_contacts",
"user_background"
],
"output_keys": [
"contacts",
"outreach_goal",
"max_contacts",
"user_background"
],
"success_criteria": "The user has confirmed the contact list, outreach goal, batch size, and their background. All four keys have been written via set_output.",
"sub_agents": [],
"flowchart_type": "start",
"flowchart_shape": "stadium",
"flowchart_color": "#8aad3f"
},
{
"id": "score-contacts",
"name": "Score Contacts",
"description": "Score and rank each contact from 0 to 100 based on priority factors: alumni status, connection degree, domain verification, mutual connections, and active job postings.",
"node_type": "event_loop",
"tools": [
"load_data",
"append_data"
],
"input_keys": [
"contacts",
"outreach_goal"
],
"output_keys": [
"scored_contacts"
],
"success_criteria": "Every contact has a priority_score field (0-100) and scored_contacts.jsonl has been written and referenced via set_output.",
"sub_agents": [],
"flowchart_type": "database",
"flowchart_shape": "cylinder",
"flowchart_color": "#508878"
},
{
"id": "filter-contacts",
"name": "Filter Contacts",
"description": "Analyze each contact for authenticity and filter out suspicious profiles. Any contact with a risk score of 7 or higher is skipped.",
"node_type": "event_loop",
"tools": [
"load_data",
"append_data"
],
"input_keys": [
"scored_contacts"
],
"output_keys": [
"safe_contacts",
"filtered_count"
],
"success_criteria": "Each contact has a risk_score and recommendation field. Contacts with risk_score >= 7 are excluded. safe_contacts.jsonl and filtered_count are set via set_output.",
"sub_agents": [],
"flowchart_type": "database",
"flowchart_shape": "cylinder",
"flowchart_color": "#508878"
},
{
"id": "personalize",
"name": "Personalize",
"description": "Generate a personalized outreach message for each contact based on their profile, shared background, and the user's outreach goal.",
"node_type": "event_loop",
"tools": [
"load_data",
"append_data"
],
"input_keys": [
"safe_contacts",
"outreach_goal",
"user_background"
],
"output_keys": [
"personalized_contacts"
],
"success_criteria": "Every safe contact has an outreach_message field of 80-120 words that references a specific hook from their profile. personalized_contacts.jsonl is set via set_output.",
"sub_agents": [],
"flowchart_type": "database",
"flowchart_shape": "cylinder",
"flowchart_color": "#508878"
},
{
"id": "send-outreach",
"name": "Send Outreach",
"description": "Create Gmail draft emails for each contact using their personalized message. Drafts are created for human review \u2014 emails are never sent automatically.",
"node_type": "event_loop",
"tools": [
"gmail_create_draft",
"load_data",
"append_data"
],
"input_keys": [
"personalized_contacts",
"outreach_goal"
],
"output_keys": [
"drafts_created"
],
"success_criteria": "A Gmail draft has been created for every safe contact. drafts.jsonl records each draft and drafts_created is set via set_output.",
"sub_agents": [],
"flowchart_type": "database",
"flowchart_shape": "cylinder",
"flowchart_color": "#508878"
},
{
"id": "report",
"name": "Report",
"description": "Generate a summary report of the outreach campaign: contacts scored, filtered, messaged, and drafts created. Present to user for review.",
"node_type": "event_loop",
"tools": [
"load_data"
],
"input_keys": [
"drafts_created",
"filtered_count",
"outreach_goal"
],
"output_keys": [
"summary_report"
],
"success_criteria": "A campaign summary has been presented to the user listing totals for contacts scored, filtered, messaged, and drafts created. summary_report is set via set_output.",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "intake",
"target": "score-contacts",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "score-contacts",
"target": "filter-contacts",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-2",
"source": "filter-contacts",
"target": "personalize",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-3",
"source": "personalize",
"target": "send-outreach",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-4",
"source": "send-outreach",
"target": "report",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-5",
"source": "report",
"target": "intake",
"condition": "on_success",
"description": "",
"label": ""
}
],
"entry_node": "intake",
"terminal_nodes": [
"report"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"intake": [
"intake"
],
"score-contacts": [
"score-contacts"
],
"filter-contacts": [
"filter-contacts"
],
"personalize": [
"personalize"
],
"send-outreach": [
"send-outreach"
],
"report": [
"report"
]
}
}
@@ -0,0 +1,14 @@
{
"hive-tools": {
"transport": "stdio",
"command": "uv",
"args": [
"run",
"python",
"mcp_server.py",
"--stdio"
],
"cwd": "../../../tools",
"description": "Hive tools MCP server"
}
}
@@ -0,0 +1,339 @@
"""Node definitions for SDR Agent."""
from framework.graph import NodeSpec
# Node 1: Intake (client-facing)
# Receives contact list and outreach goal, confirms with user before proceeding.
intake_node = NodeSpec(
id="intake",
name="Intake",
description=(
"Receive the contact list and outreach goal from the user. "
"Confirm the strategy and batch size before proceeding."
),
node_type="event_loop",
client_facing=True,
max_node_visits=0,
input_keys=["contacts", "outreach_goal", "max_contacts", "user_background"],
output_keys=["contacts", "outreach_goal", "max_contacts", "user_background"],
success_criteria=(
"The user has confirmed the contact list, outreach goal, batch size, and "
"their background. All four keys have been written via set_output."
),
system_prompt="""\
You are an SDR (Sales Development Representative) assistant helping automate outreach.
**STEP 1 — Understand the input (text only, NO tool calls):**
Read the user's input from context. Determine what they provided:
- If "contacts" is a **file path** (ends in .json or .jsonl), note that you'll load it in step 2.
- If "contacts" is a **JSON string**, you'll use it directly.
- Identify the outreach goal, background, and batch size (default 20).
**STEP 2 — Load contacts if needed:**
If the user provided a file path for contacts, call:
- load_contacts_from_file(file_path=<the path>)
This writes the contacts to contacts.jsonl in the session directory.
**STEP 3 — Confirm with the user (text only, NO tool calls):**
Present a summary like:
"Here's what I'll do:
1. Score and rank your contacts by priority (alumni status, connection degree, etc.)
2. Filter out suspicious or low-quality profiles (risk score ≥ 7)
3. Generate a personalized outreach message for each contact
4. Create Gmail draft emails for your review — I never send automatically
Ready to proceed with [N] contacts for [goal]?"
**STEP 4 — After the user confirms, call set_output:**
- set_output("contacts", <the contact list as a JSON string, or "contacts.jsonl" if loaded from file>)
- set_output("outreach_goal", <the confirmed goal, e.g. "coffee chat">)
- set_output("max_contacts", <the confirmed batch size as a string, e.g. "20">)
- set_output("user_background", <user's background/role, e.g. "Learning Technologist at UWO">)
""",
tools=["load_contacts_from_file"],
)
# Node 2: Score Contacts
# Ranks contacts 0-100 based on alumni status, connection degree, domain, etc.
score_contacts_node = NodeSpec(
id="score-contacts",
name="Score Contacts",
description=(
"Score and rank each contact from 0 to 100 based on priority factors: "
"alumni status, connection degree, domain verification, mutual connections, "
"and active job postings."
),
node_type="event_loop",
client_facing=False,
max_node_visits=0,
input_keys=["contacts", "outreach_goal"],
output_keys=["scored_contacts"],
success_criteria=(
"Every contact has a priority_score field (0-100) and scored_contacts.jsonl "
"has been written and referenced via set_output."
),
system_prompt="""\
You are a contact prioritization engine. Score each contact from 0 to 100.
**SCORING RULES (additive):**
- Alumni of the user's school: +30 points
- 1st degree connection: +25 points
- 2nd degree connection: +20 points
- 3rd degree connection: +10 points
- Domain verified (company email matches LinkedIn company): +10 points
- Has mutual connections (1 point each, max 10): up to +10 points
- Active job posting at their company: +10 points
- Has a profile photo: +5 points
- Over 500 connections: +5 points
Cap the final score at 100.
**STEP 1 — Load the contacts:**
Call load_data(filename="contacts.jsonl") to read the contact list.
If "contacts" in context is a JSON string (not a filename), write it first:
- For each contact in the list, call append_data(filename="contacts.jsonl", data=<JSON contact object>)
Then read it back.
**STEP 2 — Score each contact:**
For each contact, calculate the priority score using the rules above.
Add a "priority_score" field to each contact object.
**STEP 3 — Write scored contacts and set output:**
- Call append_data(filename="scored_contacts.jsonl", data=<JSON contact with priority_score>) for each contact.
- Sort contacts by priority_score (highest first) in your final output.
- Call set_output("scored_contacts", "scored_contacts.jsonl")
""",
tools=["load_data", "append_data"],
)
# Node 3: Filter Contacts (Scam Detection)
# Filters out suspicious or fake profiles using a risk scoring system.
filter_contacts_node = NodeSpec(
id="filter-contacts",
name="Filter Contacts",
description=(
"Analyze each contact for authenticity and filter out suspicious profiles. "
"Any contact with a risk score of 7 or higher is skipped."
),
node_type="event_loop",
client_facing=False,
max_node_visits=0,
input_keys=["scored_contacts"],
output_keys=["safe_contacts", "filtered_count"],
success_criteria=(
"Each contact has a risk_score and recommendation field. Contacts with "
"risk_score >= 7 are excluded. safe_contacts.jsonl and filtered_count are "
"set via set_output."
),
system_prompt="""\
You are a profile authenticity analyzer. Your job is to detect suspicious or fake LinkedIn profiles.
**RISK SCORING RULES (additive):**
- Fewer than 50 connections: +3 points
- No profile photo: +2 points
- Fewer than 2 positions in work history: +2 points
- Generic title (e.g. "entrepreneur", "CEO", "consultant") AND fewer than 100 connections: +2 points
- Company name appears generic or unverifiable: +2 points
- Profile text seems auto-generated or overly promotional: +2 points
- Connection count over 5000 with no mutual connections: +1 point
**DECISION RULE:**
- risk_score < 4: SAFE include in outreach
- risk_score 46: CAUTION include but flag
- risk_score 7: SKIP exclude from outreach
**STEP 1 Load scored contacts:**
Call load_data(filename=<the "scored_contacts" value from context>).
Process contacts chunk by chunk if has_more=true.
**STEP 2 - Analyze each contact:**
For each contact, calculate a risk_score using the rules above.
Determine: is_safe (risk_score < 7), recommendation (safe/caution/skip), flags (list of triggered rules).
**STEP 3 - Write safe contacts and set output:**
- For each contact where risk_score < 7: call append_data(filename="safe_contacts.jsonl", data=<contact JSON with risk_score and flags added>)
- Track how many contacts were filtered (risk_score >= 7)
- Call set_output("safe_contacts", "safe_contacts.jsonl")
- Call set_output("filtered_count", <number of skipped contacts as string>)
""",
tools=["load_data", "append_data"],
)
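# Reference-only sketch of the risk rubric and decision rule above. The
# judgment-based signals (generic title/company, promotional text) are
# modeled here as precomputed boolean flags; all field names are
# hypothetical and the LLM performs the real analysis at run time.
def _risk_assessment_reference(contact: dict) -> dict:
    score = 0
    flags = []
    if int(contact.get("connections", 0)) < 50:
        score += 3
        flags.append("low_connections")
    if not contact.get("has_photo"):
        score += 2
        flags.append("no_photo")
    if len(contact.get("positions", [])) < 2:
        score += 2
        flags.append("thin_history")
    if contact.get("generic_title") and int(contact.get("connections", 0)) < 100:
        score += 2
        flags.append("generic_title")
    if contact.get("generic_company"):
        score += 2
        flags.append("generic_company")
    if contact.get("promotional_text"):
        score += 2
        flags.append("promotional_text")
    if int(contact.get("connections", 0)) > 5000 and not contact.get("mutual_connections"):
        score += 1
        flags.append("inflated_network")
    recommendation = "safe" if score < 4 else ("caution" if score < 7 else "skip")
    return {
        "risk_score": score,
        "is_safe": score < 7,
        "recommendation": recommendation,
        "flags": flags,
    }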
# Node 4: Personalize Messages
# Generates personalized outreach messages for each safe contact.
personalize_node = NodeSpec(
id="personalize",
name="Personalize",
description=(
"Generate a personalized outreach message for each contact based on "
"their profile, shared background, and the user's outreach goal."
),
node_type="event_loop",
client_facing=False,
max_node_visits=0,
input_keys=["safe_contacts", "outreach_goal", "user_background"],
output_keys=["personalized_contacts"],
success_criteria=(
"Every safe contact has an outreach_message field of 80-120 words that "
"references a specific hook from their profile. personalized_contacts.jsonl "
"is set via set_output."
),
system_prompt="""\
You are a professional outreach message writer. Generate personalized messages for each contact.
**TWO-STEP PERSONALIZATION:**
For each contact, follow this two-step approach:
STEP A - Extract hooks (analyze the profile):
Look for 2-3 specific talking points from the contact's profile:
- Shared alumni connection
- Specific role, company, or career transition worth mentioning
- Any mutual interests aligned with the user's background
STEP B - Generate the message:
Write a warm, professional outreach message using the hooks.
**MESSAGE REQUIREMENTS:**
- 80-120 words (LinkedIn message length)
- Start with a specific observation ("I noticed you..." or "Fellow [school] alum here...")
- Mention the shared connection or interest naturally
- State the outreach goal clearly but softly (e.g. "Open to a brief 15-min chat?")
- Professional but warm tone; NOT templated or AI-sounding
- Do NOT mention job postings directly unless the goal is job-related
- Do NOT use generic openers like "I hope this finds you well"
- End with a low-pressure ask
**STEP 1 - Load safe contacts:**
Call load_data(filename=<the "safe_contacts" value from context>).
**STEP 2 - Generate message for each contact:**
For each contact: generate the personalized message using the two-step approach above.
Add "outreach_message" field to each contact object.
**STEP 3 - Write output and set:**
- Call append_data(filename="personalized_contacts.jsonl", data=<contact JSON with outreach_message>) for each.
- Call set_output("personalized_contacts", "personalized_contacts.jsonl")
""",
tools=["load_data", "append_data"],
)
# Node 5: Send Outreach (Create Gmail Drafts)
# Creates Gmail draft emails for each personalized contact. Never sends automatically.
send_outreach_node = NodeSpec(
id="send-outreach",
name="Send Outreach",
description=(
"Create Gmail draft emails for each contact using their personalized message. "
"Drafts are created for human review — emails are never sent automatically."
),
node_type="event_loop",
client_facing=False,
max_node_visits=0,
input_keys=["personalized_contacts", "outreach_goal"],
output_keys=["drafts_created"],
success_criteria=(
"A Gmail draft has been created for every safe contact. "
"drafts.jsonl records each draft and drafts_created is set via set_output."
),
system_prompt="""\
You are an outreach execution assistant. Create Gmail draft emails for each contact.
**CRITICAL RULE: NEVER send emails automatically. Only create drafts.**
**STEP 1 - Load personalized contacts:**
Call load_data(filename=<the "personalized_contacts" value from context>).
Process chunk by chunk if has_more=true.
**STEP 2 - Create Gmail draft for each contact:**
For each contact with an "outreach_message":
- subject: "Coffee Chat Request" (or appropriate subject based on outreach_goal)
- to: contact's email address (use LinkedIn profile URL if email not available — note this in body)
- body: the "outreach_message" from the contact object
Call gmail_create_draft(
to=<contact email or linkedin_url as placeholder>,
subject=<appropriate subject line>,
body=<outreach_message>
)
Record each draft: call append_data(
filename="drafts.jsonl",
data=<JSON: {contact_name, contact_email, subject, status: "draft_created"}>
)
**STEP 3 - Set output:**
- Call set_output("drafts_created", "drafts.jsonl")
**IMPORTANT:** If a contact has no email address, create the draft with their LinkedIn URL as a placeholder
and add a note in the body: "Note: Please find the recipient's email before sending."
""",
tools=["gmail_create_draft", "load_data", "append_data"],
)
# Node 6: Report (client-facing)
# Summarizes results and presents to user for review.
report_node = NodeSpec(
id="report",
name="Report",
description=(
"Generate a summary report of the outreach campaign: contacts scored, "
"filtered, messaged, and drafts created. Present to user for review."
),
node_type="event_loop",
client_facing=True,
max_node_visits=0,
input_keys=["drafts_created", "filtered_count", "outreach_goal"],
output_keys=["summary_report"],
success_criteria=(
"A campaign summary has been presented to the user listing totals for "
"contacts scored, filtered, messaged, and drafts created. "
"summary_report is set via set_output."
),
system_prompt="""\
You are an SDR assistant. Generate a clear campaign summary report and present it to the user.
**STEP 1 - Load draft records:**
Call load_data(filename=<the "drafts_created" value from context>) to read the draft records.
If has_more=true, load additional chunks until all records are loaded.
**STEP 2 - Present the report (text only, NO tool calls):**
Present a clean summary:
📊 **SDR Campaign Summary: [outreach_goal]**
**Overview:**
- Total contacts processed: [N]
- Contacts filtered (suspicious profiles): [filtered_count]
- Safe contacts messaged: [N - filtered_count]
- Gmail drafts created: [N]
**Drafts Created:**
List each draft: Contact Name | Company | Subject
**Next Steps:**
"Your Gmail drafts are ready for review. Please:
1. Open Gmail and review each draft
2. Personalize further if needed
3. Send when ready
Would you like to run another outreach batch or adjust the strategy?"
**STEP 3 - After the user responds, call set_output:**
- set_output("summary_report", <the formatted report text>)
""",
tools=["load_data"],
)
__all__ = [
"intake_node",
"score_contacts_node",
"filter_contacts_node",
"personalize_node",
"send_outreach_node",
"report_node",
]
+132
View File
@@ -0,0 +1,132 @@
"""
Custom tool functions for SDR Agent.
Follows the ToolRegistry.discover_from_module() contract:
- TOOLS: dict[str, Tool] - tool definitions
- tool_executor(tool_use) - unified dispatcher
These tools provide SDR-specific utilities for loading contact data
from a JSON file and writing it to the session's data directory for
downstream nodes to process via the standard load_data/append_data tools.
"""
from __future__ import annotations
import json
from framework.llm.provider import Tool, ToolResult, ToolUse
from framework.runner.tool_registry import _execution_context
# ---------------------------------------------------------------------------
# Tool definitions (auto-discovered by ToolRegistry.discover_from_module)
# ---------------------------------------------------------------------------
TOOLS = {
"load_contacts_from_file": Tool(
name="load_contacts_from_file",
description=(
"Load a contacts JSON file from an absolute or relative path "
"and write its contents to contacts.jsonl in the session data directory. "
"Returns the number of contacts loaded and the output filename."
),
parameters={
"type": "object",
"properties": {
"file_path": {
"type": "string",
"description": (
"Absolute or relative path to a JSON file containing "
"a list of contact objects."
),
},
},
"required": ["file_path"],
},
),
}
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _get_data_dir() -> str:
"""Get the session-scoped data_dir from ToolRegistry execution context."""
ctx = _execution_context.get()
if not ctx or "data_dir" not in ctx:
raise RuntimeError(
"data_dir not set in execution context. "
"Is the tool running inside a GraphExecutor?"
)
return ctx["data_dir"]
# ---------------------------------------------------------------------------
# Core implementation
# ---------------------------------------------------------------------------
def _load_contacts_from_file(file_path: str) -> dict:
"""Read a contacts JSON file and write it as contacts.jsonl to data_dir.
Args:
file_path: Path to the contacts JSON file.
Returns:
dict with ``filename`` (always ``"contacts.jsonl"``) and ``count``.
"""
from pathlib import Path
data_dir = _get_data_dir()
Path(data_dir).mkdir(parents=True, exist_ok=True)
output_path = Path(data_dir) / "contacts.jsonl"
try:
with open(file_path, encoding="utf-8") as f:
contacts = json.load(f)
except FileNotFoundError:
return {"error": f"File not found: {file_path}"}
except json.JSONDecodeError as e:
return {"error": f"Invalid JSON: {e}"}
if not isinstance(contacts, list):
contacts = [contacts]
count = 0
with open(output_path, "w", encoding="utf-8") as f:
for contact in contacts:
f.write(json.dumps(contact, ensure_ascii=False) + "\n")
count += 1
return {"filename": "contacts.jsonl", "count": count}
# ---------------------------------------------------------------------------
# Unified tool executor (auto-discovered by ToolRegistry.discover_from_module)
# ---------------------------------------------------------------------------
def tool_executor(tool_use: ToolUse) -> ToolResult:
"""Dispatch tool calls to their implementations."""
if tool_use.name == "load_contacts_from_file":
try:
file_path = tool_use.input.get("file_path", "")
result = _load_contacts_from_file(file_path=file_path)
return ToolResult(
tool_use_id=tool_use.id,
content=json.dumps(result),
is_error="error" in result,
)
except Exception as e:
return ToolResult(
tool_use_id=tool_use.id,
content=json.dumps({"error": str(e)}),
is_error=True,
)
return ToolResult(
tool_use_id=tool_use.id,
content=json.dumps({"error": f"Unknown tool: {tool_use.name}"}),
is_error=True,
)
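# Example dispatch (illustrative only; assumes ToolUse exposes the
# id/name/input fields used above, and that a GraphExecutor has populated
# _execution_context with a data_dir):
#
#     result = tool_executor(
#         ToolUse(id="tu-1", name="load_contacts_from_file",
#                 input={"file_path": "contacts.json"})
#     )
#     # result.content -> '{"filename": "contacts.jsonl", "count": 42}'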
+137 -21
View File
@@ -778,6 +778,7 @@ $ProviderMap = [ordered]@{
GOOGLE_API_KEY = @{ Name = "Google AI"; Id = "google" }
GROQ_API_KEY = @{ Name = "Groq"; Id = "groq" }
CEREBRAS_API_KEY = @{ Name = "Cerebras"; Id = "cerebras" }
OPENROUTER_API_KEY = @{ Name = "OpenRouter"; Id = "openrouter" }
MISTRAL_API_KEY = @{ Name = "Mistral"; Id = "mistral" }
TOGETHER_API_KEY = @{ Name = "Together AI"; Id = "together" }
DEEPSEEK_API_KEY = @{ Name = "DeepSeek"; Id = "deepseek" }
@@ -820,9 +821,81 @@ $ModelChoices = @{
)
}
function Normalize-OpenRouterModelId {
param([string]$ModelId)
$normalized = if ($ModelId) { $ModelId.Trim() } else { "" }
if ($normalized -match '(?i)^openrouter/(.+)$') {
$normalized = $matches[1]
}
return $normalized
}
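# Illustrative: "  OpenRouter/x-ai/grok-4.20-beta  " -> "x-ai/grok-4.20-beta"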
function Get-ModelSelection {
param([string]$ProviderId)
if ($ProviderId -eq "openrouter") {
$defaultModel = ""
if ($PrevModel -and $PrevProvider -eq $ProviderId) {
$defaultModel = Normalize-OpenRouterModelId $PrevModel
}
Write-Host ""
Write-Color -Text "Enter your OpenRouter model id:" -Color White
Write-Color -Text " Paste from openrouter.ai (example: x-ai/grok-4.20-beta)" -Color DarkGray
Write-Color -Text " If calls fail with guardrail/privacy errors: openrouter.ai/settings/privacy" -Color DarkGray
Write-Host ""
while ($true) {
if ($defaultModel) {
$rawModel = Read-Host "Model id [$defaultModel]"
if ([string]::IsNullOrWhiteSpace($rawModel)) { $rawModel = $defaultModel }
} else {
$rawModel = Read-Host "Model id"
}
$normalizedModel = Normalize-OpenRouterModelId $rawModel
if (-not [string]::IsNullOrWhiteSpace($normalizedModel)) {
$openrouterKey = $null
if ($SelectedEnvVar) {
$openrouterKey = [System.Environment]::GetEnvironmentVariable($SelectedEnvVar, "Process")
if (-not $openrouterKey) {
$openrouterKey = [System.Environment]::GetEnvironmentVariable($SelectedEnvVar, "User")
}
}
if ($openrouterKey) {
Write-Host " Verifying model id... " -NoNewline
try {
$modelApiBase = if ($SelectedApiBase) { $SelectedApiBase } else { "https://openrouter.ai/api/v1" }
$hcResult = & uv run python (Join-Path $ScriptDir "scripts/check_llm_key.py") "openrouter" $openrouterKey $modelApiBase $normalizedModel 2>$null
$hcJson = $hcResult | ConvertFrom-Json
if ($hcJson.valid -eq $true) {
if ($hcJson.model) {
$normalizedModel = [string]$hcJson.model
}
Write-Color -Text "ok" -Color Green
} elseif ($hcJson.valid -eq $false) {
Write-Color -Text "failed" -Color Red
Write-Warn $hcJson.message
Write-Host ""
continue
} else {
Write-Color -Text "--" -Color Yellow
Write-Color -Text " Could not verify model id (network issue). Continuing with your selection." -Color DarkGray
}
} catch {
Write-Color -Text "--" -Color Yellow
Write-Color -Text " Could not verify model id (network issue). Continuing with your selection." -Color DarkGray
}
} else {
Write-Color -Text " Skipping model verification (OpenRouter key not available in current shell)." -Color DarkGray
}
Write-Host ""
Write-Ok "Model: $normalizedModel"
return @{ Model = $normalizedModel; MaxTokens = 8192; MaxContextTokens = 120000 }
}
Write-Color -Text "Model id cannot be empty." -Color Red
}
}
$choices = $ModelChoices[$ProviderId]
if (-not $choices -or $choices.Count -eq 0) {
return @{ Model = $DefaultModels[$ProviderId]; MaxTokens = 8192; MaxContextTokens = 120000 }
@@ -883,6 +956,7 @@ $SelectedEnvVar = ""
$SelectedModel = ""
$SelectedMaxTokens = 8192
$SelectedMaxContextTokens = 120000
$SelectedApiBase = ""
$SubscriptionMode = ""
# ── Credential detection (silent — just set flags) ───────────
@@ -912,15 +986,16 @@ if (-not $hiveKey) { $hiveKey = $env:HIVE_API_KEY }
if ($hiveKey) { $HiveCredDetected = $true }
# Detect API key providers
$ProviderMenuEnvVars = @("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "GEMINI_API_KEY", "GROQ_API_KEY", "CEREBRAS_API_KEY")
$ProviderMenuNames = @("Anthropic (Claude) - Recommended", "OpenAI (GPT)", "Google Gemini - Free tier available", "Groq - Fast, free tier", "Cerebras - Fast, free tier")
$ProviderMenuIds = @("anthropic", "openai", "gemini", "groq", "cerebras")
$ProviderMenuEnvVars = @("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "GEMINI_API_KEY", "GROQ_API_KEY", "CEREBRAS_API_KEY", "OPENROUTER_API_KEY")
$ProviderMenuNames = @("Anthropic (Claude) - Recommended", "OpenAI (GPT)", "Google Gemini - Free tier available", "Groq - Fast, free tier", "Cerebras - Fast, free tier", "OpenRouter - Bring any OpenRouter model")
$ProviderMenuIds = @("anthropic", "openai", "gemini", "groq", "cerebras", "openrouter")
$ProviderMenuUrls = @(
"https://console.anthropic.com/settings/keys",
"https://platform.openai.com/api-keys",
"https://aistudio.google.com/apikey",
"https://console.groq.com/keys",
"https://cloud.cerebras.ai/"
"https://cloud.cerebras.ai/",
"https://openrouter.ai/keys"
)
# ── Read previous configuration (if any) ──────────────────────
@@ -979,6 +1054,7 @@ if ($PrevSubMode -or $PrevProvider) {
"gemini" { $DefaultChoice = "8" }
"groq" { $DefaultChoice = "9" }
"cerebras" { $DefaultChoice = "10" }
"openrouter" { $DefaultChoice = "11" }
"kimi" { $DefaultChoice = "4" }
}
}
@@ -1028,7 +1104,7 @@ if ($HiveCredDetected) { Write-Color -Text " (credential detected)" -Color Gree
Write-Host ""
Write-Color -Text " API key providers:" -Color Cyan
# 6-10) API key providers
# 6-11) API key providers
for ($idx = 0; $idx -lt $ProviderMenuEnvVars.Count; $idx++) {
$num = $idx + 6
$envVal = [System.Environment]::GetEnvironmentVariable($ProviderMenuEnvVars[$idx], "Process")
@@ -1039,8 +1115,9 @@ for ($idx = 0; $idx -lt $ProviderMenuEnvVars.Count; $idx++) {
if ($envVal) { Write-Color -Text " (credential detected)" -Color Green } else { Write-Host "" }
}
$SkipChoice = 6 + $ProviderMenuEnvVars.Count
Write-Host " " -NoNewline
Write-Color -Text "11" -Color Cyan -NoNewline
Write-Color -Text "$SkipChoice" -Color Cyan -NoNewline
Write-Host ") Skip for now"
Write-Host ""
@@ -1051,16 +1128,16 @@ if ($DefaultChoice) {
while ($true) {
if ($DefaultChoice) {
$raw = Read-Host "Enter choice (1-11) [$DefaultChoice]"
$raw = Read-Host "Enter choice (1-$SkipChoice) [$DefaultChoice]"
if ([string]::IsNullOrWhiteSpace($raw)) { $raw = $DefaultChoice }
} else {
$raw = Read-Host "Enter choice (1-11)"
$raw = Read-Host "Enter choice (1-$SkipChoice)"
}
if ($raw -match '^\d+$') {
$num = [int]$raw
if ($num -ge 1 -and $num -le 11) { break }
if ($num -ge 1 -and $num -le $SkipChoice) { break }
}
Write-Color -Text "Invalid choice. Please enter 1-11" -Color Red
Write-Color -Text "Invalid choice. Please enter 1-$SkipChoice" -Color Red
}
switch ($num) {
@@ -1163,13 +1240,18 @@ switch ($num) {
}
Write-Color -Text " Model: $SelectedModel | API: $HiveLlmEndpoint" -Color DarkGray
}
{ $_ -ge 6 -and $_ -le 10 } {
{ $_ -ge 6 -and $_ -le 11 } {
# API key providers
$provIdx = $num - 6
$SelectedEnvVar = $ProviderMenuEnvVars[$provIdx]
$SelectedProviderId = $ProviderMenuIds[$provIdx]
$providerName = $ProviderMenuNames[$provIdx] -replace ' - .*', '' # strip description
$signupUrl = $ProviderMenuUrls[$provIdx]
if ($SelectedProviderId -eq "openrouter") {
$SelectedApiBase = "https://openrouter.ai/api/v1"
} else {
$SelectedApiBase = ""
}
# Prompt for key (allow replacement if already set) with verification + retry
while ($true) {
@@ -1198,7 +1280,11 @@ switch ($num) {
# Health check the new key
Write-Host " Verifying API key... " -NoNewline
try {
$hcResult = & $UvCmd run python (Join-Path $ScriptDir "scripts/check_llm_key.py") $SelectedProviderId $apiKey 2>$null
if ($SelectedApiBase) {
$hcResult = & uv run python (Join-Path $ScriptDir "scripts/check_llm_key.py") $SelectedProviderId $apiKey $SelectedApiBase 2>$null
} else {
$hcResult = & uv run python (Join-Path $ScriptDir "scripts/check_llm_key.py") $SelectedProviderId $apiKey 2>$null
}
$hcJson = $hcResult | ConvertFrom-Json
if ($hcJson.valid -eq $true) {
Write-Color -Text "ok" -Color Green
@@ -1236,7 +1322,7 @@ switch ($num) {
}
}
}
11 {
{ $_ -eq $SkipChoice } {
Write-Host ""
Write-Warn "Skipped. An LLM API key is required to test and use worker agents."
Write-Host " Add your API key later by running:"
@@ -1484,6 +1570,9 @@ if ($SelectedProviderId) {
} elseif ($SubscriptionMode -eq "hive_llm") {
$config.llm["api_base"] = $HiveLlmEndpoint
$config.llm["api_key_env_var"] = $SelectedEnvVar
} elseif ($SelectedProviderId -eq "openrouter") {
$config.llm["api_base"] = "https://openrouter.ai/api/v1"
$config.llm["api_key_env_var"] = $SelectedEnvVar
} else {
$config.llm["api_key_env_var"] = $SelectedEnvVar
}
@@ -1783,6 +1872,9 @@ if ($SelectedProviderId) {
Write-Color -Text " API: api.z.ai (OpenAI-compatible)" -Color DarkGray
} elseif ($SubscriptionMode -eq "codex") {
Write-Ok "OpenAI Codex Subscription -> $SelectedModel"
} elseif ($SelectedProviderId -eq "openrouter") {
Write-Ok "OpenRouter API Key -> $SelectedModel"
Write-Color -Text " API: openrouter.ai/api/v1 (OpenAI-compatible)" -Color DarkGray
} else {
Write-Color -Text " $SelectedProviderId" -Color Cyan -NoNewline
Write-Host " -> " -NoNewline
@@ -1813,14 +1905,39 @@ if ($CodexAvailable) {
Write-Host ""
}
# Auto-launch dashboard or show manual instructions
# Setup-only mode: show manual instructions
if ($FrontendBuilt) {
Write-Color -Text "Launching dashboard..." -Color White
Write-Color -Text "═══════════════════════════════════════════════════════" -Color Yellow
Write-Host ""
Write-Color -Text " Starting server on http://localhost:8787" -Color DarkGray
Write-Color -Text " Press Ctrl+C to stop" -Color DarkGray
Write-Color -Text " IMPORTANT: Restart your terminal now!" -Color Yellow
Write-Host ""
Write-Color -Text "═══════════════════════════════════════════════════════" -Color Yellow
Write-Host ""
Write-Host 'Environment variables (uv, API keys) are now configured, but you need to'
Write-Host 'restart your terminal for them to take effect in new sessions.'
Write-Host ""
Write-Color -Text "Run an Agent:" -Color White
Write-Host ""
Write-Host " Quickstart only sets things up. Launch the dashboard when you're ready:"
Write-Color -Text " hive open" -Color Cyan
Write-Host ""
if ($SelectedProviderId -or $credKey) {
Write-Color -Text "Note:" -Color White
Write-Host "- uv has been added to your User PATH"
if ($SelectedProviderId -and $SelectedEnvVar) {
Write-Host "- $SelectedEnvVar is set for LLM access"
}
if ($credKey) {
Write-Host "- HIVE_CREDENTIAL_KEY is set for credential encryption"
}
Write-Host "- All variables will persist across reboots"
Write-Host ""
}
Write-Color -Text 'Run .\quickstart.ps1 again to reconfigure.' -Color DarkGray
Write-Host ""
& (Join-Path $ScriptDir "hive.ps1") open
} else {
Write-Color -Text "═══════════════════════════════════════════════════════" -Color Yellow
Write-Host ""
@@ -1834,9 +1951,8 @@ if ($FrontendBuilt) {
Write-Color -Text "Run an Agent:" -Color White
Write-Host ""
Write-Host " Launch the interactive dashboard to browse and run agents:"
Write-Host " You can start an example agent or an agent built by yourself:"
Write-Color -Text " .\hive.ps1 tui" -Color Cyan
Write-Host " Frontend build was skipped or failed. Once the dashboard is available, launch it with:"
Write-Color -Text " hive open" -Color Cyan
Write-Host ""
if ($SelectedProviderId -or $credKey) {
+272 -110
View File
@@ -46,7 +46,6 @@ prompt_yes_no() {
else
prompt="$prompt [y/N] "
fi
read -r -p "$prompt" response
response="${response:-$default}"
[[ "$response" =~ ^[Yy] ]]
@@ -374,6 +373,7 @@ if [ "$USE_ASSOC_ARRAYS" = true ]; then
["GOOGLE_API_KEY"]="Google AI"
["GROQ_API_KEY"]="Groq"
["CEREBRAS_API_KEY"]="Cerebras"
["OPENROUTER_API_KEY"]="OpenRouter"
["MISTRAL_API_KEY"]="Mistral"
["TOGETHER_API_KEY"]="Together AI"
["DEEPSEEK_API_KEY"]="DeepSeek"
@@ -387,6 +387,7 @@ if [ "$USE_ASSOC_ARRAYS" = true ]; then
["GOOGLE_API_KEY"]="google"
["GROQ_API_KEY"]="groq"
["CEREBRAS_API_KEY"]="cerebras"
["OPENROUTER_API_KEY"]="openrouter"
["MISTRAL_API_KEY"]="mistral"
["TOGETHER_API_KEY"]="together"
["DEEPSEEK_API_KEY"]="deepseek"
@@ -510,9 +511,9 @@ if [ "$USE_ASSOC_ARRAYS" = true ]; then
}
else
# Bash 3.2 - use parallel indexed arrays
PROVIDER_ENV_VARS=(ANTHROPIC_API_KEY OPENAI_API_KEY MINIMAX_API_KEY GEMINI_API_KEY GOOGLE_API_KEY GROQ_API_KEY CEREBRAS_API_KEY MISTRAL_API_KEY TOGETHER_API_KEY DEEPSEEK_API_KEY)
PROVIDER_DISPLAY_NAMES=("Anthropic (Claude)" "OpenAI (GPT)" "MiniMax" "Google Gemini" "Google AI" "Groq" "Cerebras" "Mistral" "Together AI" "DeepSeek")
PROVIDER_ID_LIST=(anthropic openai minimax gemini google groq cerebras mistral together deepseek)
PROVIDER_ENV_VARS=(ANTHROPIC_API_KEY OPENAI_API_KEY MINIMAX_API_KEY GEMINI_API_KEY GOOGLE_API_KEY GROQ_API_KEY CEREBRAS_API_KEY OPENROUTER_API_KEY MISTRAL_API_KEY TOGETHER_API_KEY DEEPSEEK_API_KEY)
PROVIDER_DISPLAY_NAMES=("Anthropic (Claude)" "OpenAI (GPT)" "MiniMax" "Google Gemini" "Google AI" "Groq" "Cerebras" "OpenRouter" "Mistral" "Together AI" "DeepSeek")
PROVIDER_ID_LIST=(anthropic openai minimax gemini google groq cerebras openrouter mistral together deepseek)
# Default models by provider id (parallel arrays)
MODEL_PROVIDER_IDS=(anthropic openai minimax gemini groq cerebras mistral together_ai deepseek)
@@ -690,10 +691,91 @@ detect_shell_rc() {
SHELL_RC_FILE=$(detect_shell_rc)
SHELL_NAME=$(basename "$SHELL")
# Normalize user-pasted OpenRouter model IDs:
# - trim whitespace
# - strip leading "openrouter/" if present
normalize_openrouter_model_id() {
local raw="$1"
# Trim leading/trailing whitespace
raw="${raw#"${raw%%[![:space:]]*}"}"
raw="${raw%"${raw##*[![:space:]]}"}"
if [[ "$raw" =~ ^[Oo][Pp][Ee][Nn][Rr][Oo][Uu][Tt][Ee][Rr]/(.+)$ ]]; then
raw="${BASH_REMATCH[1]}"
fi
printf '%s' "$raw"
}
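# Illustrative: "  openrouter/x-ai/grok-4.20-beta  " -> "x-ai/grok-4.20-beta"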
# Prompt the user to choose a model for their selected provider.
# Sets SELECTED_MODEL, SELECTED_MAX_TOKENS, and SELECTED_MAX_CONTEXT_TOKENS.
prompt_model_selection() {
local provider_id="$1"
if [ "$provider_id" = "openrouter" ]; then
local default_model=""
if [ -n "$PREV_MODEL" ] && [ "$provider_id" = "$PREV_PROVIDER" ]; then
default_model="$(normalize_openrouter_model_id "$PREV_MODEL")"
fi
echo ""
echo -e "${BOLD}Enter your OpenRouter model id:${NC}"
echo -e " ${DIM}Paste from openrouter.ai (example: x-ai/grok-4.20-beta)${NC}"
echo -e " ${DIM}If calls fail with guardrail/privacy errors: openrouter.ai/settings/privacy${NC}"
echo ""
local input_model=""
while true; do
if [ -n "$default_model" ]; then
read -r -p "Model id [$default_model]: " input_model || true
input_model="${input_model:-$default_model}"
else
read -r -p "Model id: " input_model || true
fi
local normalized_model
normalized_model="$(normalize_openrouter_model_id "$input_model")"
if [ -n "$normalized_model" ]; then
local openrouter_key=""
if [ -n "${SELECTED_ENV_VAR:-}" ]; then
openrouter_key="${!SELECTED_ENV_VAR:-}"
fi
if [ -n "$openrouter_key" ]; then
local model_hc_result=""
local model_hc_valid=""
local model_hc_msg=""
local model_hc_canonical=""
local model_hc_base="${SELECTED_API_BASE:-https://openrouter.ai/api/v1}"
echo -n " Verifying model id... "
model_hc_result="$(uv run python "$SCRIPT_DIR/scripts/check_llm_key.py" "openrouter" "$openrouter_key" "$model_hc_base" "$normalized_model" 2>/dev/null)" || true
model_hc_valid="$(echo "$model_hc_result" | $PYTHON_CMD -c "import json,sys; print(json.loads(sys.stdin.read()).get('valid',''))" 2>/dev/null)" || true
model_hc_msg="$(echo "$model_hc_result" | $PYTHON_CMD -c "import json,sys; print(json.loads(sys.stdin.read()).get('message',''))" 2>/dev/null)" || true
model_hc_canonical="$(echo "$model_hc_result" | $PYTHON_CMD -c "import json,sys; print(json.loads(sys.stdin.read()).get('model',''))" 2>/dev/null)" || true
if [ "$model_hc_valid" = "True" ]; then
if [ -n "$model_hc_canonical" ]; then
normalized_model="$model_hc_canonical"
fi
echo -e "${GREEN}ok${NC}"
elif [ "$model_hc_valid" = "False" ]; then
echo -e "${RED}failed${NC}"
echo -e " ${YELLOW}$model_hc_msg${NC}"
echo ""
continue
else
echo -e "${YELLOW}--${NC}"
echo -e " ${DIM}Could not verify model id (network issue). Continuing with your selection.${NC}"
fi
else
echo -e " ${DIM}Skipping model verification (OpenRouter key not available in current shell).${NC}"
fi
SELECTED_MODEL="$normalized_model"
SELECTED_MAX_TOKENS=8192
SELECTED_MAX_CONTEXT_TOKENS=120000
echo ""
echo -e "${GREEN}${NC} Model: ${DIM}$SELECTED_MODEL${NC}"
return
fi
echo -e "${RED}Model id cannot be empty.${NC}"
done
fi
local count
count="$(get_model_choice_count "$provider_id")"
@@ -787,34 +869,73 @@ save_configuration() {
max_context_tokens=120000
fi
mkdir -p "$HIVE_CONFIG_DIR"
$PYTHON_CMD -c "
uv run python - \
"$provider_id" \
"$env_var" \
"$model" \
"$max_tokens" \
"$max_context_tokens" \
"$use_claude_code_sub" \
"$api_base" \
"$use_codex_sub" \
"$(date -u +"%Y-%m-%dT%H:%M:%S+00:00")" 2>/dev/null <<'PY'
import json
config = {
'llm': {
'provider': '$provider_id',
'model': '$model',
'max_tokens': $max_tokens,
'max_context_tokens': $max_context_tokens,
'api_key_env_var': '$env_var'
},
'created_at': '$(date -u +"%Y-%m-%dT%H:%M:%S+00:00")'
import sys
from pathlib import Path
(
provider_id,
env_var,
model,
max_tokens,
max_context_tokens,
use_claude_code_sub,
api_base,
use_codex_sub,
created_at,
) = sys.argv[1:10]
cfg_path = Path.home() / ".hive" / "configuration.json"
cfg_path.parent.mkdir(parents=True, exist_ok=True)
try:
with open(cfg_path, encoding="utf-8-sig") as f:
config = json.load(f)
except (OSError, json.JSONDecodeError):
config = {}
config["llm"] = {
"provider": provider_id,
"model": model,
"max_tokens": int(max_tokens),
"max_context_tokens": int(max_context_tokens),
"api_key_env_var": env_var,
}
if '$use_claude_code_sub' == 'true':
config['llm']['use_claude_code_subscription'] = True
# No api_key_env_var needed for Claude Code subscription
config['llm'].pop('api_key_env_var', None)
if '$use_codex_sub' == 'true':
config['llm']['use_codex_subscription'] = True
# No api_key_env_var needed for Codex subscription
config['llm'].pop('api_key_env_var', None)
if '$api_base':
config['llm']['api_base'] = '$api_base'
with open('$HIVE_CONFIG_FILE', 'w') as f:
config["created_at"] = created_at
if use_claude_code_sub == "true":
config["llm"]["use_claude_code_subscription"] = True
config["llm"].pop("api_key_env_var", None)
else:
config["llm"].pop("use_claude_code_subscription", None)
if use_codex_sub == "true":
config["llm"]["use_codex_subscription"] = True
config["llm"].pop("api_key_env_var", None)
else:
config["llm"].pop("use_codex_subscription", None)
if api_base:
config["llm"]["api_base"] = api_base
else:
config["llm"].pop("api_base", None)
tmp_path = cfg_path.with_name(cfg_path.name + ".tmp")
with open(tmp_path, "w", encoding="utf-8") as f:
json.dump(config, f, indent=2)
tmp_path.replace(cfg_path)
print(json.dumps(config, indent=2))
" 2>/dev/null
PY
}
# Source shell rc file to pick up existing env vars (temporarily disable set -e)
@@ -895,26 +1016,36 @@ PREV_MODEL=""
PREV_ENV_VAR=""
PREV_SUB_MODE=""
if [ -f "$HIVE_CONFIG_FILE" ]; then
eval "$($PYTHON_CMD -c "
import json, sys
eval "$(uv run python - 2>/dev/null <<'PY'
import json
from pathlib import Path
cfg_path = Path.home() / ".hive" / "configuration.json"
try:
with open('$HIVE_CONFIG_FILE') as f:
with open(cfg_path, encoding="utf-8-sig") as f:
c = json.load(f)
llm = c.get('llm', {})
print(f'PREV_PROVIDER={llm.get(\"provider\", \"\")}')
print(f'PREV_MODEL={llm.get(\"model\", \"\")}')
print(f'PREV_ENV_VAR={llm.get(\"api_key_env_var\", \"\")}')
sub = ''
if llm.get('use_claude_code_subscription'): sub = 'claude_code'
elif llm.get('use_codex_subscription'): sub = 'codex'
elif llm.get('use_kimi_code_subscription'): sub = 'kimi_code'
elif llm.get('provider', '') == 'minimax' or 'api.minimax.io' in llm.get('api_base', ''): sub = 'minimax_code'
elif llm.get('provider', '') == 'hive' or 'adenhq.com' in llm.get('api_base', ''): sub = 'hive_llm'
elif 'api.z.ai' in llm.get('api_base', ''): sub = 'zai_code'
print(f'PREV_SUB_MODE={sub}')
llm = c.get("llm", {})
print(f"PREV_PROVIDER={llm.get(\"provider\", \"\")}")
print(f"PREV_MODEL={llm.get(\"model\", \"\")}")
print(f"PREV_ENV_VAR={llm.get(\"api_key_env_var\", \"\")}")
sub = ""
if llm.get("use_claude_code_subscription"):
sub = "claude_code"
elif llm.get("use_codex_subscription"):
sub = "codex"
elif llm.get("use_kimi_code_subscription"):
sub = "kimi_code"
elif llm.get("provider", "") == "minimax" or "api.minimax.io" in llm.get("api_base", ""):
sub = "minimax_code"
elif llm.get("provider", "") == "hive" or "adenhq.com" in llm.get("api_base", ""):
sub = "hive_llm"
elif "api.z.ai" in llm.get("api_base", ""):
sub = "zai_code"
print(f"PREV_SUB_MODE={sub}")
except Exception:
pass
" 2>/dev/null)" || true
PY
)" || true
fi
# Compute default menu number from previous config (only if credential is still valid)
@@ -951,6 +1082,7 @@ if [ -n "$PREV_SUB_MODE" ] || [ -n "$PREV_PROVIDER" ]; then
gemini) DEFAULT_CHOICE=9 ;;
groq) DEFAULT_CHOICE=10 ;;
cerebras) DEFAULT_CHOICE=11 ;;
openrouter) DEFAULT_CHOICE=12 ;;
minimax) DEFAULT_CHOICE=4 ;;
kimi) DEFAULT_CHOICE=5 ;;
hive) DEFAULT_CHOICE=6 ;;
@@ -1009,10 +1141,10 @@ fi
echo ""
echo -e " ${CYAN}${BOLD}API key providers:${NC}"
# 7-11) API key providers — show (credential detected) if key already set
PROVIDER_MENU_ENVS=(ANTHROPIC_API_KEY OPENAI_API_KEY GEMINI_API_KEY GROQ_API_KEY CEREBRAS_API_KEY)
PROVIDER_MENU_NAMES=("Anthropic (Claude) - Recommended" "OpenAI (GPT)" "Google Gemini - Free tier available" "Groq - Fast, free tier" "Cerebras - Fast, free tier")
for idx in 0 1 2 3 4; do
# 7-12) API key providers — show (credential detected) if key already set
PROVIDER_MENU_ENVS=(ANTHROPIC_API_KEY OPENAI_API_KEY GEMINI_API_KEY GROQ_API_KEY CEREBRAS_API_KEY OPENROUTER_API_KEY)
PROVIDER_MENU_NAMES=("Anthropic (Claude) - Recommended" "OpenAI (GPT)" "Google Gemini - Free tier available" "Groq - Fast, free tier" "Cerebras - Fast, free tier" "OpenRouter - Bring any OpenRouter model")
for idx in "${!PROVIDER_MENU_ENVS[@]}"; do
num=$((idx + 7))
env_var="${PROVIDER_MENU_ENVS[$idx]}"
if [ -n "${!env_var}" ]; then
@@ -1022,7 +1154,8 @@ for idx in 0 1 2 3 4; do
fi
done
echo -e " ${CYAN}12)${NC} Skip for now"
SKIP_CHOICE=$((7 + ${#PROVIDER_MENU_ENVS[@]}))
echo -e " ${CYAN}$SKIP_CHOICE)${NC} Skip for now"
echo ""
if [ -n "$DEFAULT_CHOICE" ]; then
@@ -1032,15 +1165,15 @@ fi
while true; do
if [ -n "$DEFAULT_CHOICE" ]; then
read -r -p "Enter choice (1-12) [$DEFAULT_CHOICE]: " choice || true
read -r -p "Enter choice (1-$SKIP_CHOICE) [$DEFAULT_CHOICE]: " choice || true
choice="${choice:-$DEFAULT_CHOICE}"
else
read -r -p "Enter choice (1-12): " choice || true
read -r -p "Enter choice (1-$SKIP_CHOICE): " choice || true
fi
if [[ "$choice" =~ ^[0-9]+$ ]] && [ "$choice" -ge 1 ] && [ "$choice" -le 12 ]; then
if [[ "$choice" =~ ^[0-9]+$ ]] && [ "$choice" -ge 1 ] && [ "$choice" -le "$SKIP_CHOICE" ]; then
break
fi
echo -e "${RED}Invalid choice. Please enter 1-12${NC}"
echo -e "${RED}Invalid choice. Please enter 1-$SKIP_CHOICE${NC}"
done
case $choice in
@@ -1058,7 +1191,7 @@ case $choice in
SELECTED_PROVIDER_ID="anthropic"
SELECTED_MODEL="claude-opus-4-6"
SELECTED_MAX_TOKENS=32768
SELECTED_MAX_CONTEXT_TOKENS=180000 # Claude — 200k context window
SELECTED_MAX_CONTEXT_TOKENS=960000 # Claude — 1M context window
echo ""
echo -e "${GREEN}${NC} Using Claude Code subscription"
fi
@@ -1070,7 +1203,7 @@ case $choice in
SELECTED_ENV_VAR="ZAI_API_KEY"
SELECTED_MODEL="glm-5"
SELECTED_MAX_TOKENS=32768
SELECTED_MAX_CONTEXT_TOKENS=120000 # GLM-5 — 128k context window
SELECTED_MAX_CONTEXT_TOKENS=180000 # GLM-5 — 200k context window
PROVIDER_NAME="ZAI"
echo ""
echo -e "${GREEN}${NC} Using ZAI Code subscription"
@@ -1128,7 +1261,7 @@ case $choice in
SELECTED_ENV_VAR="KIMI_API_KEY"
SELECTED_MODEL="kimi-k2.5"
SELECTED_MAX_TOKENS=32768
SELECTED_MAX_CONTEXT_TOKENS=120000 # Kimi K2.5 — 128k context window
SELECTED_MAX_CONTEXT_TOKENS=240000 # Kimi K2.5 — 256k context window
SELECTED_API_BASE="https://api.kimi.com/coding"
PROVIDER_NAME="Kimi"
SIGNUP_URL="https://www.kimi.com/code"
@@ -1142,7 +1275,7 @@ case $choice in
SELECTED_PROVIDER_ID="hive"
SELECTED_ENV_VAR="HIVE_API_KEY"
SELECTED_MAX_TOKENS=32768
SELECTED_MAX_CONTEXT_TOKENS=120000
SELECTED_MAX_CONTEXT_TOKENS=180000
SELECTED_API_BASE="$HIVE_LLM_ENDPOINT"
PROVIDER_NAME="Hive"
SIGNUP_URL="https://discord.com/invite/hQdU7QDkgR"
@@ -1194,6 +1327,13 @@ case $choice in
SIGNUP_URL="https://cloud.cerebras.ai/"
;;
12)
SELECTED_ENV_VAR="OPENROUTER_API_KEY"
SELECTED_PROVIDER_ID="openrouter"
SELECTED_API_BASE="https://openrouter.ai/api/v1"
PROVIDER_NAME="OpenRouter"
SIGNUP_URL="https://openrouter.ai/keys"
;;
"$SKIP_CHOICE")
echo ""
echo -e "${YELLOW}Skipped.${NC} An LLM API key is required to test and use worker agents."
echo -e "Add your API key later by running:"
@@ -1234,7 +1374,7 @@ if { [ -z "$SUBSCRIPTION_MODE" ] || [ "$SUBSCRIPTION_MODE" = "minimax_code" ] ||
echo -e "${GREEN}${NC} API key saved to $SHELL_RC_FILE"
# Health check the new key
echo -n " Verifying API key... "
if { [ "$SUBSCRIPTION_MODE" = "minimax_code" ] || [ "$SUBSCRIPTION_MODE" = "kimi_code" ] || [ "$SUBSCRIPTION_MODE" = "hive_llm" ]; } && [ -n "${SELECTED_API_BASE:-}" ]; then
if [ -n "${SELECTED_API_BASE:-}" ]; then
HC_RESULT=$(uv run python "$SCRIPT_DIR/scripts/check_llm_key.py" "$SELECTED_PROVIDER_ID" "$API_KEY" "$SELECTED_API_BASE" 2>/dev/null) || true
else
HC_RESULT=$(uv run python "$SCRIPT_DIR/scripts/check_llm_key.py" "$SELECTED_PROVIDER_ID" "$API_KEY" 2>/dev/null) || true
@@ -1346,20 +1486,28 @@ fi
if [ -n "$SELECTED_PROVIDER_ID" ]; then
echo ""
echo -n " Saving configuration... "
SAVE_OK=true
if [ "$SUBSCRIPTION_MODE" = "claude_code" ]; then
save_configuration "$SELECTED_PROVIDER_ID" "" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "true" "" > /dev/null
save_configuration "$SELECTED_PROVIDER_ID" "" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "true" "" > /dev/null || SAVE_OK=false
elif [ "$SUBSCRIPTION_MODE" = "codex" ]; then
save_configuration "$SELECTED_PROVIDER_ID" "" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "" "true" > /dev/null
save_configuration "$SELECTED_PROVIDER_ID" "" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "" "true" > /dev/null || SAVE_OK=false
elif [ "$SUBSCRIPTION_MODE" = "zai_code" ]; then
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "https://api.z.ai/api/coding/paas/v4" > /dev/null
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "https://api.z.ai/api/coding/paas/v4" > /dev/null || SAVE_OK=false
elif [ "$SUBSCRIPTION_MODE" = "minimax_code" ]; then
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "$SELECTED_API_BASE" > /dev/null
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "$SELECTED_API_BASE" > /dev/null || SAVE_OK=false
elif [ "$SUBSCRIPTION_MODE" = "kimi_code" ]; then
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "$SELECTED_API_BASE" > /dev/null
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "$SELECTED_API_BASE" > /dev/null || SAVE_OK=false
elif [ "$SUBSCRIPTION_MODE" = "hive_llm" ]; then
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "$SELECTED_API_BASE" > /dev/null
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "$SELECTED_API_BASE" > /dev/null || SAVE_OK=false
elif [ "$SELECTED_PROVIDER_ID" = "openrouter" ]; then
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "$SELECTED_API_BASE" > /dev/null || SAVE_OK=false
else
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" > /dev/null
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" > /dev/null || SAVE_OK=false
fi
if [ "$SAVE_OK" = false ]; then
echo -e "${RED}failed${NC}"
echo -e "${YELLOW} Could not write ~/.hive/configuration.json. Please rerun quickstart.${NC}"
exit 1
fi
echo -e "${GREEN}${NC}"
echo -e " ${DIM}~/.hive/configuration.json${NC}"
@@ -1375,22 +1523,44 @@ echo -e "${GREEN}⬢${NC} Browser automation enabled"
# Patch gcu_enabled into configuration.json
if [ -f "$HIVE_CONFIG_FILE" ]; then
uv run python -c "
if ! uv run python - <<'PY'
import json
with open('$HIVE_CONFIG_FILE') as f:
from pathlib import Path
cfg_path = Path.home() / ".hive" / "configuration.json"
with open(cfg_path, encoding="utf-8-sig") as f:
config = json.load(f)
config['gcu_enabled'] = True
with open('$HIVE_CONFIG_FILE', 'w') as f:
config["gcu_enabled"] = True
tmp_path = cfg_path.with_name(cfg_path.name + ".tmp")
with open(tmp_path, "w", encoding="utf-8") as f:
json.dump(config, f, indent=2)
"
tmp_path.replace(cfg_path)
PY
then
echo -e "${RED}failed${NC}"
echo -e "${YELLOW} Could not update ~/.hive/configuration.json with browser automation settings.${NC}"
exit 1
fi
else
mkdir -p "$HIVE_CONFIG_DIR"
uv run python -c "
if ! uv run python - "$(date -u +"%Y-%m-%dT%H:%M:%S+00:00")" <<'PY'
import json
config = {'gcu_enabled': True, 'created_at': '$(date -u +"%Y-%m-%dT%H:%M:%S+00:00")'}
with open('$HIVE_CONFIG_FILE', 'w') as f:
import sys
from pathlib import Path
cfg_path = Path.home() / ".hive" / "configuration.json"
cfg_path.parent.mkdir(parents=True, exist_ok=True)
config = {
"gcu_enabled": True,
"created_at": sys.argv[1],
}
with open(cfg_path, "w", encoding="utf-8") as f:
json.dump(config, f, indent=2)
"
PY
then
echo -e "${RED}failed${NC}"
echo -e "${YELLOW} Could not create ~/.hive/configuration.json for browser automation settings.${NC}"
exit 1
fi
fi
echo ""
@@ -1591,6 +1761,9 @@ if [ -n "$SELECTED_PROVIDER_ID" ]; then
elif [ "$SUBSCRIPTION_MODE" = "minimax_code" ]; then
echo -e " ${GREEN}${NC} MiniMax Coding Key → ${DIM}$SELECTED_MODEL${NC}"
echo -e " ${DIM}API: api.minimax.io/v1 (OpenAI-compatible)${NC}"
elif [ "$SELECTED_PROVIDER_ID" = "openrouter" ]; then
echo -e " ${GREEN}${NC} OpenRouter API Key → ${DIM}$SELECTED_MODEL${NC}"
echo -e " ${DIM}API: openrouter.ai/api/v1 (OpenAI-compatible)${NC}"
else
echo -e " ${CYAN}$SELECTED_PROVIDER_ID${NC}${DIM}$SELECTED_MODEL${NC}"
fi
@@ -1635,40 +1808,29 @@ if [ "$CODEX_AVAILABLE" = true ]; then
echo ""
fi
# Auto-launch dashboard if frontend was built
if [ "$FRONTEND_BUILT" = true ]; then
echo -e "${BOLD}Launching dashboard...${NC}"
echo ""
echo -e " ${DIM}Starting server on http://localhost:8787${NC}"
echo -e " ${DIM}Press Ctrl+C to stop${NC}"
echo ""
echo -e " ${DIM}Tip: You can restart the dashboard anytime with:${NC} ${CYAN}hive open${NC}"
echo ""
# exec replaces the quickstart process with hive open
exec "$SCRIPT_DIR/hive" open
else
# No frontend — show manual instructions
echo -e "${YELLOW}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BOLD}⚠️ IMPORTANT: Load your new configuration${NC}"
echo -e "${YELLOW}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
echo -e " Your API keys have been saved to ${CYAN}$SHELL_RC_FILE${NC}"
echo -e " To use them, either:"
echo ""
echo -e " ${GREEN}Option 1:${NC} Source your shell config now:"
echo -e " ${CYAN}source $SHELL_RC_FILE${NC}"
echo ""
echo -e " ${GREEN}Option 2:${NC} Open a new terminal window"
echo ""
echo -e "${YELLOW}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
echo -e "${YELLOW}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BOLD}IMPORTANT: Load your new configuration${NC}"
echo -e "${YELLOW}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
echo -e " Your API keys have been saved to ${CYAN}$SHELL_RC_FILE${NC}"
echo -e " To use them, either:"
echo ""
echo -e " ${GREEN}Option 1:${NC} Source your shell config now:"
echo -e " ${CYAN}source $SHELL_RC_FILE${NC}"
echo ""
echo -e " ${GREEN}Option 2:${NC} Open a new terminal window"
echo ""
echo -e "${YELLOW}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
echo -e "${BOLD}Run an Agent:${NC}"
echo ""
echo -e " Launch the interactive dashboard to browse and run agents:"
echo -e " You can start an example agent or an agent built by yourself:"
echo -e " ${CYAN}hive open${NC}"
echo ""
echo -e "${DIM}Run ./quickstart.sh again to reconfigure.${NC}"
echo ""
echo -e "${BOLD}Run an Agent:${NC}"
echo ""
if [ "$FRONTEND_BUILT" = true ]; then
echo -e " Quickstart only sets things up. Launch the dashboard when you're ready:"
else
echo -e " Frontend build was skipped or failed. Once the dashboard is available, launch it with:"
fi
echo -e " ${CYAN}hive open${NC}"
echo ""
echo -e "${DIM}Run ./quickstart.sh again to reconfigure.${NC}"
echo ""
+198 -3
View File
@@ -1,7 +1,7 @@
"""Validate an LLM API key without consuming tokens.
Usage:
python scripts/check_llm_key.py <provider_id> <api_key> [api_base]
python scripts/check_llm_key.py <provider_id> <api_key> [api_base] [model]
Exit codes:
0 = valid key
@@ -12,13 +12,125 @@ Output: single JSON line {"valid": bool, "message": str}
"""
import json
import re
import sys
import unicodedata
from difflib import get_close_matches
import httpx
from framework.config import HIVE_LLM_ENDPOINT
TIMEOUT = 10.0
OPENROUTER_SEPARATOR_TRANSLATION = str.maketrans(
{
"\u2010": "-",
"\u2011": "-",
"\u2012": "-",
"\u2013": "-",
"\u2014": "-",
"\u2015": "-",
"\u2212": "-",
"\u2044": "/",
"\u2215": "/",
"\u29F8": "/",
"\uFF0F": "/",
}
)
def _extract_error_message(response: httpx.Response) -> str:
"""Best-effort extraction of a provider error message."""
try:
payload = response.json()
except Exception:
text = (response.text or "").strip()
return text[:240] if text else ""
if isinstance(payload, dict):
error_value = payload.get("error")
if isinstance(error_value, dict):
message = error_value.get("message")
if isinstance(message, str) and message.strip():
return message.strip()
if isinstance(error_value, str) and error_value.strip():
return error_value.strip()
message = payload.get("message")
if isinstance(message, str) and message.strip():
return message.strip()
return ""
def _sanitize_openrouter_model_id(value: str) -> str:
"""Sanitize pasted OpenRouter model IDs into a comparable slug."""
normalized = unicodedata.normalize("NFKC", value or "")
normalized = "".join(
ch
for ch in normalized
if unicodedata.category(ch) not in {"Cc", "Cf"}
)
normalized = normalized.translate(OPENROUTER_SEPARATOR_TRANSLATION)
normalized = re.sub(r"\s+", "", normalized)
if normalized.casefold().startswith("openrouter/"):
normalized = normalized.split("/", 1)[1]
return normalized
def _normalize_openrouter_model_id(value: str) -> str:
"""Normalize OpenRouter model IDs for exact/alias matching."""
return _sanitize_openrouter_model_id(value).casefold()
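# Illustrative examples (inputs are hypothetical):
#   _sanitize_openrouter_model_id("  OpenRouter/X\u2011AI/Grok ") -> "X-AI/Grok"
#   _normalize_openrouter_model_id("OpenRouter/X\u2011AI/Grok")   -> "x-ai/grok"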
def _extract_openrouter_model_lookup(payload: object) -> dict[str, str]:
"""Map normalized model IDs/aliases to a preferred canonical display slug."""
if not isinstance(payload, dict):
return {}
data = payload.get("data")
if not isinstance(data, list):
return {}
lookup: dict[str, str] = {}
for item in data:
if not isinstance(item, dict):
continue
model_id = item.get("id")
canonical_slug = item.get("canonical_slug")
candidates = [
_sanitize_openrouter_model_id(value)
for value in (model_id, canonical_slug)
if isinstance(value, str) and _sanitize_openrouter_model_id(value)
]
if not candidates:
continue
preferred_slug = candidates[-1]
for candidate in candidates:
lookup[_normalize_openrouter_model_id(candidate)] = preferred_slug
return lookup
def _format_openrouter_model_unavailable_message(
model: str, available_model_lookup: dict[str, str]
) -> str:
"""Return a helpful not-found message with close-match suggestions."""
suggestions = [
available_model_lookup[key]
for key in get_close_matches(
_normalize_openrouter_model_id(model),
list(available_model_lookup),
n=1,
cutoff=0.6,
)
]
base = f"OpenRouter model is not available for this key/settings: {model}"
if suggestions:
return f"{base}. Closest matches: {', '.join(suggestions)}"
return base
def check_anthropic(api_key: str, **_: str) -> dict:
@@ -58,6 +170,79 @@ def check_openai_compatible(api_key: str, endpoint: str, name: str) -> dict:
return {"valid": False, "message": f"{name} API returned status {r.status_code}"}
def check_openrouter(
api_key: str, api_base: str = "https://openrouter.ai/api/v1", **_: str
) -> dict:
"""Validate OpenRouter key against GET /models."""
endpoint = f"{api_base.rstrip('/')}/models"
with httpx.Client(timeout=TIMEOUT) as client:
r = client.get(endpoint, headers={"Authorization": f"Bearer {api_key}"})
if r.status_code in (200, 429):
return {"valid": True, "message": "OpenRouter API key valid"}
if r.status_code == 401:
return {"valid": False, "message": "Invalid OpenRouter API key"}
if r.status_code == 403:
return {"valid": False, "message": "OpenRouter API key lacks permissions"}
return {"valid": False, "message": f"OpenRouter API returned status {r.status_code}"}
def check_openrouter_model(
api_key: str,
model: str,
api_base: str = "https://openrouter.ai/api/v1",
**_: str,
) -> dict:
"""Validate that an OpenRouter model ID is available to this key/settings."""
requested_model = _sanitize_openrouter_model_id(model)
endpoint = f"{api_base.rstrip('/')}/models/user"
with httpx.Client(timeout=TIMEOUT) as client:
r = client.get(
endpoint,
headers={"Authorization": f"Bearer {api_key}"},
)
if r.status_code == 200:
available_model_lookup = _extract_openrouter_model_lookup(r.json())
matched_model = available_model_lookup.get(
_normalize_openrouter_model_id(requested_model)
)
if matched_model:
return {
"valid": True,
"message": f"OpenRouter model is available: {matched_model}",
"model": matched_model,
}
return {
"valid": False,
"message": _format_openrouter_model_unavailable_message(
requested_model, available_model_lookup
),
}
if r.status_code == 429:
return {
"valid": True,
"message": "OpenRouter model check rate-limited; assuming model is reachable",
}
if r.status_code == 401:
return {"valid": False, "message": "Invalid OpenRouter API key"}
if r.status_code == 403:
return {"valid": False, "message": "OpenRouter API key lacks permissions"}
detail = _extract_error_message(r)
if r.status_code in (400, 404, 422):
base = (
"OpenRouter model is not available for this key/settings: "
f"{requested_model}"
)
return {"valid": False, "message": f"{base}. {detail}" if detail else base}
suffix = f": {detail}" if detail else ""
return {
"valid": False,
"message": f"OpenRouter model check returned status {r.status_code}{suffix}",
}
def check_minimax(
api_key: str, api_base: str = "https://api.minimax.io/v1", **_: str
) -> dict:
@@ -131,6 +316,7 @@ PROVIDERS = {
"cerebras": lambda key, **kw: check_openai_compatible(
key, "https://api.cerebras.ai/v1/models", "Cerebras"
),
"openrouter": lambda key, **kw: check_openrouter(key, **kw),
"minimax": lambda key, **kw: check_minimax(key),
# Kimi For Coding uses an Anthropic-compatible endpoint; check via /v1/messages
# with empty messages (same as check_anthropic, triggers 400 not 401).
@@ -150,7 +336,7 @@ def main() -> None:
json.dumps(
{
"valid": False,
"message": "Usage: check_llm_key.py <provider> <key> [api_base]",
"message": "Usage: check_llm_key.py <provider> <key> [api_base] [model]",
}
)
)
@@ -159,10 +345,19 @@ def main() -> None:
provider_id = sys.argv[1]
api_key = sys.argv[2]
api_base = sys.argv[3] if len(sys.argv) > 3 else ""
model = sys.argv[4] if len(sys.argv) > 4 else ""
try:
if api_base and provider_id == "minimax":
if provider_id == "openrouter" and model:
result = check_openrouter_model(
api_key,
model=model,
api_base=(api_base or "https://openrouter.ai/api/v1"),
)
elif api_base and provider_id == "minimax":
result = check_minimax(api_key, api_base)
elif api_base and provider_id == "openrouter":
result = check_openrouter(api_key, api_base)
elif api_base and provider_id == "kimi":
# Kimi uses an Anthropic-compatible endpoint; check via /v1/messages
result = check_anthropic_compatible(
+1 -1
View File
@@ -12,7 +12,7 @@ import zlib
# Files beyond this size are skipped/rejected in hashline mode because
# hashline anchors are not practical on files this large (minified
# bundles, logs, data dumps). Shared by view_file, grep_search, and
# bundles, logs, data dumps). Shared by read_file, grep_search, and
# hashline_edit.
HASHLINE_MAX_FILE_BYTES = 10 * 1024 * 1024 # 10 MB
+1 -5
View File
@@ -70,8 +70,6 @@ from .file_system_toolkits.list_dir import register_tools as register_list_dir
from .file_system_toolkits.replace_file_content import (
register_tools as register_replace_file_content,
)
from .file_system_toolkits.view_file import register_tools as register_view_file
from .file_system_toolkits.write_to_file import register_tools as register_write_to_file
from .github_tool import register_tools as register_github
from .gitlab_tool import register_tools as register_gitlab
from .gmail_tool import register_tools as register_gmail
@@ -186,14 +184,12 @@ def _register_verified(
register_account_info(mcp, credentials=credentials)
# --- File system toolkits ---
register_view_file(mcp)
register_write_to_file(mcp)
register_list_dir(mcp)
register_replace_file_content(mcp)
register_apply_diff(mcp)
register_apply_patch(mcp)
register_grep_search(mcp)
# hashline_edit: anchor-based editing, pairs with view_file/grep_search hashline mode
# hashline_edit: anchor-based editing, pairs with read_file/grep_search hashline mode
register_hashline_edit(mcp)
register_execute_command(mcp)
register_data_tools(mcp)
@@ -75,7 +75,7 @@ def register_tools(mcp: FastMCP) -> None:
try:
if hashline:
# Use splitlines() for anchor consistency with
# view_file/hashline_edit (handles Unicode line
# read_file/hashline_edit (handles Unicode line
# separators like \u2028, \x85).
# Skip files > 10MB to avoid excessive memory use.
file_size = os.path.getsize(file_path)
@@ -6,11 +6,11 @@ Edit files using anchor-based line references for precise, hash-validated edits.
The `hashline_edit` tool enables file editing using short content-hash anchors (`N:hhhh`) instead of requiring exact text reproduction. Each line's anchor includes a 4-character hash of its content. If the file has changed since the model last read it, the hash won't match and the edit is cleanly rejected.
Use this tool together with `view_file(hashline=True)` and `grep_search(hashline=True)`, which return anchors for each line.
Use this tool together with `read_file(hashline=True)` and `grep_search(hashline=True)`, which return anchors for each line.
## Use Cases
- Making targeted edits after reading a file with `view_file(hashline=True)`
- Making targeted edits after reading a file with `read_file(hashline=True)`
- Replacing single lines, line ranges, or inserting new lines by anchor
- Batch editing multiple locations in a single atomic call
- Falling back to string replacement when anchors are not available
@@ -21,7 +21,7 @@ Use this tool together with `view_file(hashline=True)` and `grep_search(hashline
import json
# First, read the file with hashline mode to get anchors
content = view_file(path="app.py", hashline=True, workspace_id="ws-1", agent_id="a-1", session_id="s-1")
content = read_file(path="app.py", hashline=True)
# Returns lines like: 1:a3b1|def main(): 2:f1c2| print("hello") ...
# Then edit using the anchors
@@ -29,25 +29,10 @@ hashline_edit(
path="app.py",
edits=json.dumps([
{"op": "set_line", "anchor": "2:f1c2", "content": ' print("goodbye")'}
]),
workspace_id="ws-1",
agent_id="a-1",
session_id="s-1"
])
)
```
## Arguments
| Argument | Type | Required | Default | Description |
|----------|------|----------|---------|-------------|
| `path` | str | Yes | - | The path to the file (relative to session root) |
| `edits` | str | Yes | - | JSON string containing a list of edit operations (see Operations below) |
| `workspace_id` | str | Yes | - | The ID of the workspace |
| `agent_id` | str | Yes | - | The ID of the agent |
| `session_id` | str | Yes | - | The ID of the current session |
| `auto_cleanup` | bool | No | `True` | Strip hashline prefixes and echoed context from content. Set to `False` to write content exactly as provided. |
| `encoding` | str | No | `"utf-8"` | File encoding. Must match the file's actual encoding. |
## Operations
The `edits` parameter is a JSON array of operation objects. Each object must have an `"op"` field:
@@ -61,62 +46,6 @@ The `edits` parameter is a JSON array of operation objects. Each object must hav
| `replace` | `old_content`, `new_content`, `allow_multiple` (optional) | Fallback string replacement; errors if 0 or 2+ matches (unless `allow_multiple: true`) |
| `append` | `content` | Append new lines to end of file (works for empty files too) |
## Returns
**Success:**
```python
{
"success": True,
"path": "app.py",
"edits_applied": 2,
"content": "1:b2c4|def main():\n2:c4a1| print(\"goodbye\")\n..."
}
```
**Success (noop, content unchanged after applying edits):**
```python
{
"success": True,
"path": "app.py",
"edits_applied": 0,
"note": "Content unchanged after applying edits",
"content": "1:b2c4|def main():\n..."
}
```
**Success (with auto-cleanup applied):**
```python
{
"success": True,
"path": "app.py",
"edits_applied": 1,
"content": "...",
"cleanup_applied": ["prefix_strip"]
}
```
The `cleanup_applied` field is only present when cleanup actually modified content. Possible values: `prefix_strip`, `boundary_echo_strip`, `insert_echo_strip`.
**Success (replace with allow_multiple):**
```python
{
"success": True,
"path": "app.py",
"edits_applied": 1,
"content": "...",
"replacements": {"edit_1": 3}
}
```
The `replacements` field is only present when `allow_multiple: true` was used, showing the count per replace op.
**Error:**
```python
{
"error": "Edit #1 (set_line): Hash mismatch at line 2: expected 'f1c2', got 'a3b1'. Re-read the file to get current anchors."
}
```
## Error Handling
- Returns an error if the file doesn't exist
@@ -127,90 +56,11 @@ The `replacements` field is only present when `allow_multiple: true` was used, s
- Returns an error for unknown op types or invalid JSON
- All edits are validated before any writes occur (atomic): on any error the file is unchanged
## Examples
### Replacing a single line
```python
edits = json.dumps([
{"op": "set_line", "anchor": "5:a3b1", "content": " return result"}
])
result = hashline_edit(path="app.py", edits=edits, workspace_id="ws-1", agent_id="a-1", session_id="s-1")
# Returns: {"success": True, "path": "app.py", "edits_applied": 1, "content": "..."}
```
### Replacing a range of lines
```python
edits = json.dumps([{
"op": "replace_lines",
"start_anchor": "10:b1c2",
"end_anchor": "15:c2d3",
"content": " # simplified\n return x + y"
}])
result = hashline_edit(path="math.py", edits=edits, workspace_id="ws-1", agent_id="a-1", session_id="s-1")
```
### Inserting new lines after
```python
edits = json.dumps([
{"op": "insert_after", "anchor": "3:d4e5", "content": "import os\nimport sys"}
])
result = hashline_edit(path="app.py", edits=edits, workspace_id="ws-1", agent_id="a-1", session_id="s-1")
```
### Inserting new lines before
```python
edits = json.dumps([
{"op": "insert_before", "anchor": "1:a1b2", "content": "#!/usr/bin/env python3"}
])
result = hashline_edit(path="app.py", edits=edits, workspace_id="ws-1", agent_id="a-1", session_id="s-1")
```
### Batch editing
```python
edits = json.dumps([
{"op": "set_line", "anchor": "1:a1b2", "content": "#!/usr/bin/env python3"},
{"op": "insert_after", "anchor": "2:b2c3", "content": "import logging"},
{"op": "set_line", "anchor": "10:c3d4", "content": " logging.info('done')"},
])
result = hashline_edit(path="app.py", edits=edits, workspace_id="ws-1", agent_id="a-1", session_id="s-1")
```
### Replace all occurrences
```python
edits = json.dumps([
{"op": "replace", "old_content": "old_name", "new_content": "new_name", "allow_multiple": True}
])
result = hashline_edit(path="app.py", edits=edits, workspace_id="ws-1", agent_id="a-1", session_id="s-1")
# Returns: {..., "replacements": {"edit_1": 5}}
```
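### Appending new lines
The `append` op takes no anchor. A sketch consistent with the Operations table above (illustrative values):
```python
edits = json.dumps([
    {"op": "append", "content": "# appended footer"}
])
result = hashline_edit(path="app.py", edits=edits, workspace_id="ws-1", agent_id="a-1", session_id="s-1")
# Per the Operations table, this also works when app.py is empty.
```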
## Notes
- Anchors are generated by `view_file(hashline=True)` and `grep_search(hashline=True)`
- Anchors are generated by `read_file(hashline=True)` and `grep_search(hashline=True)`
- The hash is a CRC32-based 4-char hex digest of the line content (with trailing spaces and tabs stripped; leading whitespace is included so indentation changes invalidate anchors). Collision probability is ~0.0015% per changed line (a reconstruction sketch follows this list).
- All anchor-based ops are validated before any writes occur; if any op fails validation, the file is left unchanged
- String `replace` ops are applied after all anchor-based splices, so they match against post-splice content
- Original line endings (LF or CRLF) are preserved
- The response includes the updated file content in hashline format, so subsequent edits can use the new anchors without re-reading
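A minimal sketch of how such an anchor could be derived, assuming the digest is the low 16 bits of CRC32 (the actual derivation may differ):
```python
import zlib

def line_anchor(line_no: int, line: str) -> str:
    # Assumption: digest = low 16 bits of CRC32 over the line with trailing
    # spaces/tabs stripped. Leading whitespace is kept, so re-indenting a
    # line changes its anchor; 16 bits gives the ~1/65536 (~0.0015%)
    # collision rate quoted above.
    normalized = line.rstrip(" \t")
    return f"{line_no}:{zlib.crc32(normalized.encode('utf-8')) & 0xFFFF:04x}"
```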
## Auto-Cleanup Details
When `auto_cleanup=True` (the default), the tool strips hashline prefixes and echoed context that LLMs frequently include in edit content. Prefix stripping uses a **2+ non-empty line threshold** to avoid false positives. The prefix regex matches the `N:hhhh|` pattern (4-char hex hash).
**Why the threshold matters:** Single-line content matching the `N:hhhh|` pattern is ambiguous. It could be literal content (CSV data, config values, log format strings) that happens to match the pattern. With 2+ lines all matching, the probability of a false positive drops dramatically.
**Single-line example (NOT stripped):**
```python
# set_line with content "5:a3b1|hello" writes literally "5:a3b1|hello"
{"op": "set_line", "anchor": "2:f1c2", "content": "5:a3b1|hello"}
```
**Multi-line example (stripped):**
```python
# replace_lines where all lines match N:hhhh| pattern gets stripped
{"op": "replace_lines", "start_anchor": "2:f1c2", "end_anchor": "3:b2d3",
"content": "2:a3b1|BBB\n3:c4d2|CCC"}
# Writes "BBB\nCCC" (prefixes removed)
```
**Escape hatch:** Set `auto_cleanup=False` to write content exactly as provided, bypassing all cleanup heuristics.
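Putting these rules together, a hypothetical reconstruction of the prefix-strip heuristic (the regex and function name are assumptions, not the actual implementation):
```python
import re

# Assumed pattern: line number, colon, 4-char hex hash, pipe.
_HASHLINE_PREFIX = re.compile(r"^\d+:[0-9a-f]{4}\|")

def strip_hashline_prefixes(content: str) -> str:
    # Strip only when 2+ non-empty lines exist and every non-empty line
    # matches the N:hhhh| pattern; otherwise the content is treated as
    # literal and left untouched.
    lines = content.split("\n")
    non_empty = [line for line in lines if line.strip()]
    if len(non_empty) < 2 or not all(_HASHLINE_PREFIX.match(line) for line in non_empty):
        return content
    return "\n".join(_HASHLINE_PREFIX.sub("", line) for line in lines)
```
When a write is altered this way, the `cleanup_applied` field in the response reports which heuristics fired.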
@@ -39,7 +39,7 @@ def register_tools(mcp: FastMCP) -> None:
Edit a file using anchor-based line references (N:hash) for precise edits.
When to use
After reading a file with view_file(hashline=True), use the anchors to make
After reading a file with read_file(hashline=True), use the anchors to make
targeted edits without reproducing exact file content.
Rules & Constraints
@@ -1,106 +0,0 @@
# View File Tool
Reads the content of a file within the secure session sandbox.
## Description
The `view_file` tool reads the complete content of a file within a sandboxed session environment and returns metadata about the file along with its content.
## Use Cases
- Reading configuration files
- Viewing source code
- Inspecting log files
- Retrieving data files for processing
## Usage
```python
view_file(
path="config/settings.json",
workspace_id="workspace-123",
agent_id="agent-456",
session_id="session-789"
)
```
## Arguments
| Argument | Type | Required | Default | Description |
|----------|------|----------|---------|-------------|
| `path` | str | Yes | - | The path to the file (relative to session root) |
| `workspace_id` | str | Yes | - | The ID of the workspace |
| `agent_id` | str | Yes | - | The ID of the agent |
| `session_id` | str | Yes | - | The ID of the current session |
| `encoding` | str | No | `"utf-8"` | The encoding to use for reading the file |
| `max_size` | int | No | `10485760` | Maximum size of file content to return in bytes (10 MB) |
| `hashline` | bool | No | `False` | If True, return content with `N:hhhh\|content` anchors for use with `hashline_edit` |
| `offset` | int | No | `1` | 1-indexed start line (only used when `hashline=True`) |
| `limit` | int | No | `0` | Max lines to return, 0 = all (only used when `hashline=True`) |
## Returns
Returns a dictionary with the following structure:
**Success (default mode):**
```python
{
"success": True,
"path": "config/settings.json",
"content": "{\"debug\": true}",
"size_bytes": 16,
"lines": 1
}
```
**Success (hashline mode):**
```python
{
"success": True,
"path": "app.py",
"content": "1:a3f2|def main():\n2:f1c4| print(\"hello\")",
"hashline": True,
"offset": 1,
"limit": 0,
"total_lines": 2,
"shown_lines": 2,
"size_bytes": 35
}
```
**Error:**
```python
{
"error": "File not found at config/settings.json"
}
```
## Error Handling
- Returns an error dict if the file doesn't exist
- Returns an error dict if the file cannot be read (permission issues, encoding errors, etc.)
- Handles binary files gracefully by returning appropriate error messages
## Examples
### Reading a text file
```python
result = view_file(
path="README.md",
workspace_id="ws-1",
agent_id="agent-1",
session_id="session-1"
)
# Returns: {"success": True, "path": "README.md", "content": "# My Project\n...", "size_bytes": 1024, "lines": 42}
```
### Handling missing files
```python
result = view_file(
path="nonexistent.txt",
workspace_id="ws-1",
agent_id="agent-1",
session_id="session-1"
)
# Returns: {"error": "File not found at nonexistent.txt"}
```
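### Paging a large file in hashline mode
A sketch combining the `hashline`, `offset`, and `limit` arguments documented above (illustrative values):
```python
result = view_file(
    path="logs/big.log",
    hashline=True,
    offset=101,  # 1-indexed start line
    limit=50,    # at most 50 lines
    workspace_id="ws-1",
    agent_id="agent-1",
    session_id="session-1"
)
# Returns anchors like "101:hhhh|...", with total_lines and shown_lines
# reporting paging progress.
```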
@@ -1,3 +0,0 @@
from .view_file import register_tools
__all__ = ["register_tools"]
@@ -1,134 +0,0 @@
import os
from mcp.server.fastmcp import FastMCP
from aden_tools.hashline import HASHLINE_MAX_FILE_BYTES, format_hashlines
from ..security import get_secure_path
def register_tools(mcp: FastMCP) -> None:
"""Register file view tools with the MCP server."""
if getattr(mcp, "_file_tools_registered", False):
return
mcp._file_tools_registered = True
@mcp.tool()
def view_file(
path: str,
workspace_id: str,
agent_id: str,
session_id: str,
encoding: str = "utf-8",
max_size: int = HASHLINE_MAX_FILE_BYTES,
hashline: bool = False,
offset: int = 1,
limit: int = 0,
) -> dict:
"""
Purpose
Read the content of a file within the session sandbox.
When to use
Inspect file contents before making changes
Retrieve stored data or configuration
Review logs or artifacts
Rules & Constraints
File must exist at the specified path
Returns full content with size and line count
Always read before patching or modifying
Args:
path: The path to the file (relative to session root)
workspace_id: The ID of the workspace
agent_id: The ID of the agent
session_id: The ID of the current session
encoding: The encoding to use for reading the file (default: "utf-8")
max_size: The maximum size of file content to return in bytes (default: 10MB)
hashline: If True, return content with N:hhhh|content anchors
for use with hashline_edit (default: False)
offset: 1-indexed start line, only used when hashline=True (default: 1)
limit: Max lines to return, 0 = all, only used when hashline=True (default: 0)
Returns:
Dict with file content and metadata, or error dict
"""
try:
if max_size < 0:
return {"error": f"max_size must be non-negative, got {max_size}"}
secure_path = get_secure_path(path, workspace_id, agent_id, session_id)
if not os.path.exists(secure_path):
return {"error": f"File not found at {path}"}
if not os.path.isfile(secure_path):
return {"error": f"Path is not a file: {path}"}
with open(secure_path, encoding=encoding) as f:
content_raw = f.read()
if not hashline and (offset != 1 or limit != 0):
return {
"error": "offset and limit are only supported when hashline=True. "
"Set hashline=True to use paging."
}
if hashline:
if offset < 1:
return {"error": f"offset must be >= 1, got {offset}"}
if limit < 0:
return {"error": f"limit must be >= 0, got {limit}"}
all_lines = content_raw.splitlines()
total_lines = len(all_lines)
raw_size = len(content_raw.encode(encoding))
if offset > max(total_lines, 1):
return {"error": f"offset {offset} is beyond end of file ({total_lines} lines)"}
# Check size after considering offset/limit. When paging
# (offset or limit set), only check the formatted output size.
# When reading the full file, check the raw size.
is_paging = offset > 1 or limit > 0
if not is_paging and raw_size > max_size:
return {
"error": f"File too large for hashline mode ({raw_size} bytes, "
f"max {max_size}). Use offset and limit to read a section at a time."
}
formatted = format_hashlines(all_lines, offset=offset, limit=limit)
shown_lines = len(formatted.splitlines()) if formatted else 0
if is_paging and len(formatted.encode(encoding)) > max_size:
return {
"error": f"Requested section too large ({shown_lines} lines). "
f"Reduce limit to read a smaller section."
}
return {
"success": True,
"path": path,
"content": formatted,
"hashline": True,
"offset": offset,
"limit": limit,
"total_lines": total_lines,
"shown_lines": shown_lines,
"size_bytes": raw_size,
}
content = content_raw
if len(content.encode(encoding)) > max_size:
content = content[:max_size]
content += "\n\n[... Content truncated due to size limit ...]"
return {
"success": True,
"path": path,
"content": content,
"size_bytes": len(content.encode(encoding)),
"lines": len(content.splitlines()),
}
except Exception as e:
return {"error": f"Failed to read file: {str(e)}"}
@@ -1,92 +0,0 @@
# Write to File Tool
Writes content to a file within the secure session sandbox. Supports both overwriting and appending modes.
## Description
The `write_to_file` tool allows you to create new files or modify existing files within a sandboxed session environment. It automatically creates parent directories if they don't exist and provides flexible write modes.
## Use Cases
- Creating new configuration files
- Writing generated code or data
- Appending logs or output to existing files
- Saving processed results to disk
## Usage
```python
write_to_file(
path="config/settings.json",
content='{"debug": true}',
workspace_id="workspace-123",
agent_id="agent-456",
session_id="session-789",
append=False
)
```
## Arguments
| Argument | Type | Required | Default | Description |
|----------|------|----------|---------|-------------|
| `path` | str | Yes | - | The path to the file (relative to session root) |
| `content` | str | Yes | - | The content to write to the file |
| `workspace_id` | str | Yes | - | The ID of the workspace |
| `agent_id` | str | Yes | - | The ID of the agent |
| `session_id` | str | Yes | - | The ID of the current session |
| `append` | bool | No | `False` | Whether to append to the file instead of overwriting |
## Returns
Returns a dictionary with the following structure:
**Success:**
```python
{
"success": True,
"path": "config/settings.json",
"mode": "written", # or "appended"
"bytes_written": 18
}
```
**Error:**
```python
{
"error": "Failed to write to file: [error message]"
}
```
## Error Handling
- Returns an error dict if the file cannot be written (permission issues, invalid path, etc.)
- Automatically creates parent directories if they don't exist
- Handles encoding errors gracefully
## Examples
### Creating a new file
```python
result = write_to_file(
path="data/output.txt",
content="Hello, world!",
workspace_id="ws-1",
agent_id="agent-1",
session_id="session-1"
)
# Returns: {"success": True, "path": "data/output.txt", "mode": "written", "bytes_written": 13}
```
### Appending to a file
```python
result = write_to_file(
path="logs/activity.log",
content="\n[INFO] Task completed",
workspace_id="ws-1",
agent_id="agent-1",
session_id="session-1",
append=True
)
# Returns: {"success": True, "path": "logs/activity.log", "mode": "appended", "bytes_written": 24}
```
@@ -1,3 +0,0 @@
from .write_to_file import register_tools
__all__ = ["register_tools"]
@@ -1,61 +0,0 @@
import os
from mcp.server.fastmcp import FastMCP
from ..security import get_secure_path
def register_tools(mcp: FastMCP) -> None:
"""Register file write tools with the MCP server."""
@mcp.tool()
def write_to_file(
path: str,
content: str,
workspace_id: str,
agent_id: str,
session_id: str,
append: bool = False,
) -> dict:
"""
Purpose
Create a new file or append content to an existing file.
When to use
Append new events to append-only logs
Create new artifacts or summaries
Initialize new canonical memory files
Rules & Constraints
Must not overwrite canonical memory unless explicitly allowed
Should include structured data (JSON, Markdown with headers)
Every write must be intentional and minimal
Anti-pattern
Do NOT dump raw conversation transcripts without structure or reason.
Args:
path: The path to the file (relative to session root)
content: The content to write to the file
workspace_id: The ID of the workspace
agent_id: The ID of the agent
session_id: The ID of the current session
append: Whether to append to the file instead of overwriting (default: False)
Returns:
Dict with success status and path, or error dict
"""
try:
secure_path = get_secure_path(path, workspace_id, agent_id, session_id)
os.makedirs(os.path.dirname(secure_path), exist_ok=True)
mode = "a" if append else "w"
with open(secure_path, mode, encoding="utf-8") as f:
f.write(content)
return {
"success": True,
"path": path,
"mode": "appended" if append else "written",
"bytes_written": len(content.encode("utf-8")),
}
except Exception as e:
return {"error": f"Failed to write to file: {str(e)}"}
@@ -32,290 +32,42 @@ def mock_secure_path(tmp_path):
return os.path.join(tmp_path, path)
with patch(
"aden_tools.tools.file_system_toolkits.view_file.view_file.get_secure_path",
"aden_tools.tools.file_system_toolkits.list_dir.list_dir.get_secure_path",
side_effect=_get_secure_path,
):
with patch(
"aden_tools.tools.file_system_toolkits.write_to_file.write_to_file.get_secure_path",
"aden_tools.tools.file_system_toolkits.replace_file_content.replace_file_content.get_secure_path",
side_effect=_get_secure_path,
):
with patch(
"aden_tools.tools.file_system_toolkits.list_dir.list_dir.get_secure_path",
"aden_tools.tools.file_system_toolkits.apply_diff.apply_diff.get_secure_path",
side_effect=_get_secure_path,
):
with patch(
"aden_tools.tools.file_system_toolkits.replace_file_content.replace_file_content.get_secure_path",
"aden_tools.tools.file_system_toolkits.apply_patch.apply_patch.get_secure_path",
side_effect=_get_secure_path,
):
with patch(
"aden_tools.tools.file_system_toolkits.apply_diff.apply_diff.get_secure_path",
"aden_tools.tools.file_system_toolkits.grep_search.grep_search.get_secure_path",
side_effect=_get_secure_path,
):
with patch(
"aden_tools.tools.file_system_toolkits.apply_patch.apply_patch.get_secure_path",
side_effect=_get_secure_path,
"aden_tools.tools.file_system_toolkits.grep_search.grep_search.WORKSPACES_DIR",
str(tmp_path),
):
with patch(
"aden_tools.tools.file_system_toolkits.grep_search.grep_search.get_secure_path",
"aden_tools.tools.file_system_toolkits.execute_command_tool.execute_command_tool.get_secure_path",
side_effect=_get_secure_path,
):
with patch(
"aden_tools.tools.file_system_toolkits.grep_search.grep_search.WORKSPACES_DIR",
"aden_tools.tools.file_system_toolkits.execute_command_tool.execute_command_tool.WORKSPACES_DIR",
str(tmp_path),
):
with patch(
"aden_tools.tools.file_system_toolkits.execute_command_tool.execute_command_tool.get_secure_path",
"aden_tools.tools.file_system_toolkits.hashline_edit.hashline_edit.get_secure_path",
side_effect=_get_secure_path,
):
with patch(
"aden_tools.tools.file_system_toolkits.execute_command_tool.execute_command_tool.WORKSPACES_DIR",
str(tmp_path),
):
with patch(
"aden_tools.tools.file_system_toolkits.hashline_edit.hashline_edit.get_secure_path",
side_effect=_get_secure_path,
):
yield
class TestViewFileTool:
"""Tests for view_file tool."""
@pytest.fixture
def view_file_fn(self, mcp):
from aden_tools.tools.file_system_toolkits.view_file import register_tools
register_tools(mcp)
return mcp._tool_manager._tools["view_file"].fn
def test_view_existing_file(self, view_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""Viewing an existing file returns content and metadata."""
test_file = tmp_path / "test.txt"
test_file.write_text("Hello, World!", encoding="utf-8")
result = view_file_fn(path="test.txt", **mock_workspace)
assert result["success"] is True
assert result["content"] == "Hello, World!"
assert result["size_bytes"] == len(b"Hello, World!")
assert result["lines"] == 1
def test_view_nonexistent_file(self, view_file_fn, mock_workspace, mock_secure_path):
"""Viewing a non-existent file returns an error."""
result = view_file_fn(path="nonexistent.txt", **mock_workspace)
assert "error" in result
assert "not found" in result["error"].lower()
def test_view_multiline_file(self, view_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""Viewing a multiline file returns correct line count."""
test_file = tmp_path / "multiline.txt"
content = "Line 1\nLine 2\nLine 3\nLine 4\n"
test_file.write_text(content, encoding="utf-8")
result = view_file_fn(path="multiline.txt", **mock_workspace)
assert result["success"] is True
assert result["content"] == content
assert result["lines"] == 4
def test_view_empty_file(self, view_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""Viewing an empty file returns empty content."""
test_file = tmp_path / "empty.txt"
test_file.write_text("", encoding="utf-8")
result = view_file_fn(path="empty.txt", **mock_workspace)
assert result["success"] is True
assert result["content"] == ""
assert result["size_bytes"] == 0
assert result["lines"] == 0
def test_view_file_with_unicode(self, view_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""Viewing a file with unicode characters works correctly."""
test_file = tmp_path / "unicode.txt"
content = "Hello 世界! 🌍 émoji"
test_file.write_text(content, encoding="utf-8")
result = view_file_fn(path="unicode.txt", **mock_workspace)
assert result["success"] is True
assert result["content"] == content
assert result["size_bytes"] == len(content.encode("utf-8"))
def test_view_nested_file(self, view_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""Viewing a file in a nested directory works correctly."""
nested = tmp_path / "nested" / "dir"
nested.mkdir(parents=True)
test_file = nested / "file.txt"
test_file.write_text("nested content", encoding="utf-8")
result = view_file_fn(path="nested/dir/file.txt", **mock_workspace)
assert result["success"] is True
assert result["content"] == "nested content"
def test_view_file_with_max_size_truncation(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Viewing a file with max_size truncates content when exceeding limit."""
test_file = tmp_path / "large.txt"
content = "x" * 1000
test_file.write_text(content, encoding="utf-8")
result = view_file_fn(path="large.txt", max_size=100, **mock_workspace)
assert result["success"] is True
assert len(result["content"]) <= 100 + len(
"\n\n[... Content truncated due to size limit ...]"
)
assert "[... Content truncated due to size limit ...]" in result["content"]
def test_view_file_with_negative_max_size(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Viewing a file with negative max_size returns error."""
test_file = tmp_path / "test.txt"
test_file.write_text("content", encoding="utf-8")
result = view_file_fn(path="test.txt", max_size=-1, **mock_workspace)
assert "error" in result
assert "max_size must be non-negative" in result["error"]
def test_view_file_with_custom_encoding(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Viewing a file with custom encoding works correctly."""
test_file = tmp_path / "encoded.txt"
content = "Hello 世界"
test_file.write_text(content, encoding="utf-8")
result = view_file_fn(path="encoded.txt", encoding="utf-8", **mock_workspace)
assert result["success"] is True
assert result["content"] == content
def test_view_file_with_invalid_encoding(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Viewing a file with invalid encoding returns error."""
test_file = tmp_path / "test.txt"
test_file.write_text("content", encoding="utf-8")
result = view_file_fn(path="test.txt", encoding="invalid-encoding", **mock_workspace)
assert "error" in result
assert "Failed to read file" in result["error"]
def test_offset_without_hashline_returns_error(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Passing offset without hashline=True returns error."""
test_file = tmp_path / "test.txt"
test_file.write_text("aaa\nbbb\nccc\n")
result = view_file_fn(path="test.txt", offset=5, **mock_workspace)
assert "error" in result
assert "hashline=True" in result["error"]
def test_limit_without_hashline_returns_error(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Passing limit without hashline=True returns error."""
test_file = tmp_path / "test.txt"
test_file.write_text("aaa\nbbb\nccc\n")
result = view_file_fn(path="test.txt", limit=10, **mock_workspace)
assert "error" in result
assert "hashline=True" in result["error"]
def test_offset_and_limit_without_hashline_returns_error(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Passing both offset and limit without hashline=True returns error."""
test_file = tmp_path / "test.txt"
test_file.write_text("aaa\nbbb\nccc\n")
result = view_file_fn(path="test.txt", offset=2, limit=5, **mock_workspace)
assert "error" in result
assert "hashline=True" in result["error"]
class TestWriteToFileTool:
"""Tests for write_to_file tool."""
@pytest.fixture
def write_to_file_fn(self, mcp):
from aden_tools.tools.file_system_toolkits.write_to_file import register_tools
register_tools(mcp)
return mcp._tool_manager._tools["write_to_file"].fn
def test_write_new_file(self, write_to_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""Writing to a new file creates it successfully."""
result = write_to_file_fn(path="new_file.txt", content="Test content", **mock_workspace)
assert result["success"] is True
assert result["mode"] == "written"
assert result["bytes_written"] > 0
# Verify file was created
created_file = tmp_path / "new_file.txt"
assert created_file.exists()
assert created_file.read_text(encoding="utf-8") == "Test content"
def test_write_append_mode(self, write_to_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""Writing with append=True appends to existing file."""
test_file = tmp_path / "append_test.txt"
test_file.write_text("Line 1\n", encoding="utf-8")
result = write_to_file_fn(
path="append_test.txt", content="Line 2\n", append=True, **mock_workspace
)
assert result["success"] is True
assert result["mode"] == "appended"
assert test_file.read_text(encoding="utf-8") == "Line 1\nLine 2\n"
def test_write_overwrite_existing(
self, write_to_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Writing to existing file overwrites it by default."""
test_file = tmp_path / "overwrite.txt"
test_file.write_text("Original content", encoding="utf-8")
result = write_to_file_fn(path="overwrite.txt", content="New content", **mock_workspace)
assert result["success"] is True
assert result["mode"] == "written"
assert test_file.read_text(encoding="utf-8") == "New content"
def test_write_creates_parent_directories(
self, write_to_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Writing creates parent directories if they don't exist."""
result = write_to_file_fn(path="nested/dir/file.txt", content="Test", **mock_workspace)
assert result["success"] is True
created_file = tmp_path / "nested" / "dir" / "file.txt"
assert created_file.exists()
assert created_file.read_text(encoding="utf-8") == "Test"
def test_write_empty_content(
self, write_to_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Writing empty content creates empty file."""
result = write_to_file_fn(path="empty.txt", content="", **mock_workspace)
assert result["success"] is True
assert result["bytes_written"] == 0
created_file = tmp_path / "empty.txt"
assert created_file.exists()
assert created_file.read_text(encoding="utf-8") == ""
yield
class TestListDirTool:
@@ -805,167 +557,6 @@ class TestApplyPatchTool:
assert test_file.read_text(encoding="utf-8") == modified
class TestViewFileHashlineMode:
"""Tests for view_file hashline mode."""
@pytest.fixture
def view_file_fn(self, mcp):
from aden_tools.tools.file_system_toolkits.view_file import register_tools
register_tools(mcp)
return mcp._tool_manager._tools["view_file"].fn
def test_hashline_format(self, view_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""hashline=True returns N:hhhh|content format."""
test_file = tmp_path / "test.txt"
test_file.write_text("hello\nworld\n")
result = view_file_fn(path="test.txt", hashline=True, **mock_workspace)
assert result["success"] is True
assert result["hashline"] is True
lines = result["content"].split("\n")
assert lines[0].startswith("1:")
assert "|hello" in lines[0]
assert lines[1].startswith("2:")
assert "|world" in lines[1]
def test_hashline_offset(self, view_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""hashline with offset skips initial lines."""
test_file = tmp_path / "test.txt"
test_file.write_text("aaa\nbbb\nccc\n")
result = view_file_fn(path="test.txt", hashline=True, offset=2, **mock_workspace)
assert result["success"] is True
assert result["offset"] == 2
lines = result["content"].split("\n")
assert lines[0].startswith("2:")
assert "|bbb" in lines[0]
def test_hashline_limit(self, view_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""hashline with limit restricts number of lines."""
test_file = tmp_path / "test.txt"
test_file.write_text("aaa\nbbb\nccc\nddd\n")
result = view_file_fn(path="test.txt", hashline=True, limit=2, **mock_workspace)
assert result["success"] is True
assert result["limit"] == 2
assert result["shown_lines"] == 2
assert result["total_lines"] == 4
def test_hashline_total_and_shown_lines(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""total_lines and shown_lines are reported correctly."""
test_file = tmp_path / "test.txt"
test_file.write_text("a\nb\nc\nd\ne\n")
result = view_file_fn(path="test.txt", hashline=True, offset=2, limit=2, **mock_workspace)
assert result["total_lines"] == 5
assert result["shown_lines"] == 2
def test_default_mode_unchanged(self, view_file_fn, mock_workspace, mock_secure_path, tmp_path):
"""Default mode (hashline=False) returns the same format as before."""
test_file = tmp_path / "test.txt"
test_file.write_text("hello\n")
result = view_file_fn(path="test.txt", **mock_workspace)
assert result["success"] is True
assert "hashline" not in result
assert result["content"] == "hello\n"
assert result["lines"] == 1
def test_hashline_offset_zero_returns_error(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""hashline with offset=0 returns error (must be >= 1)."""
test_file = tmp_path / "test.txt"
test_file.write_text("aaa\nbbb\n")
result = view_file_fn(path="test.txt", hashline=True, offset=0, **mock_workspace)
assert "error" in result
assert "offset" in result["error"].lower()
def test_hashline_negative_offset_returns_error(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""hashline with negative offset returns error."""
test_file = tmp_path / "test.txt"
test_file.write_text("aaa\nbbb\n")
result = view_file_fn(path="test.txt", hashline=True, offset=-1, **mock_workspace)
assert "error" in result
assert "offset" in result["error"].lower()
def test_hashline_negative_limit_returns_error(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""hashline with negative limit returns error."""
test_file = tmp_path / "test.txt"
test_file.write_text("aaa\nbbb\n")
result = view_file_fn(path="test.txt", hashline=True, limit=-1, **mock_workspace)
assert "error" in result
assert "limit" in result["error"].lower()
def test_hashline_truncated_file_returns_error(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Large file with hashline=True and no offset/limit returns error directing to paginate."""
test_file = tmp_path / "large.txt"
# Create a file larger than the max_size we'll pass
content = "line\n" * 100 # 500 bytes
test_file.write_text(content)
result = view_file_fn(path="large.txt", hashline=True, max_size=50, **mock_workspace)
assert "error" in result
assert "too large" in result["error"].lower()
assert "offset" in result["error"].lower()
assert "limit" in result["error"].lower()
def test_hashline_offset_beyond_end_returns_error(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""hashline with offset beyond total lines returns error."""
test_file = tmp_path / "test.txt"
test_file.write_text("aaa\nbbb\n")
result = view_file_fn(path="test.txt", hashline=True, offset=50, **mock_workspace)
assert "error" in result
assert "beyond" in result["error"].lower()
assert "2 lines" in result["error"]
def test_hashline_large_file_with_offset_limit_works(
self, view_file_fn, mock_workspace, mock_secure_path, tmp_path
):
"""Large file using offset/limit bypasses full-file size check."""
test_file = tmp_path / "large.txt"
lines = [f"line {i}" for i in range(1, 101)]
test_file.write_text("\n".join(lines) + "\n")
# File is large (> max_size=200), but offset/limit lets us page through it
result = view_file_fn(
path="large.txt", hashline=True, offset=10, limit=5, max_size=200, **mock_workspace
)
assert result["success"] is True
assert result["shown_lines"] == 5
assert result["total_lines"] == 100
# First shown line should be line 10
first_line = result["content"].split("\n")[0]
assert first_line.startswith("10:")
assert "|line 10" in first_line
class TestGrepSearchHashlineMode:
"""Tests for grep_search hashline mode."""
@@ -1047,13 +638,6 @@ class TestGrepSearchHashlineMode:
class TestHashlineCrossToolConsistency:
"""Cross-tool consistency tests for hashline workflows."""
@pytest.fixture
def view_file_fn(self, mcp):
from aden_tools.tools.file_system_toolkits.view_file import register_tools
register_tools(mcp)
return mcp._tool_manager._tools["view_file"].fn
@pytest.fixture
def grep_search_fn(self, mcp):
from aden_tools.tools.file_system_toolkits.grep_search import register_tools
@@ -1070,7 +654,6 @@ class TestHashlineCrossToolConsistency:
def test_unicode_line_separator_anchor_roundtrip(
self,
view_file_fn,
grep_search_fn,
hashline_edit_fn,
mock_workspace,
@@ -1081,11 +664,6 @@ class TestHashlineCrossToolConsistency:
test_file = tmp_path / "test.txt"
test_file.write_text("A\u2028B\nC\n", encoding="utf-8")
# Hashline view sees U+2028 as a line boundary via splitlines()
view_res = view_file_fn(path="test.txt", hashline=True, **mock_workspace)
assert view_res["success"] is True
assert view_res["total_lines"] == 3
# grep_search line iteration treats U+2028 as in-line content
grep_res = grep_search_fn(path="test.txt", pattern="B", hashline=True, **mock_workspace)
assert grep_res["success"] is True