Compare commits


39 Commits

Author SHA1 Message Date
Timothy 65c8e1653c chore: lint 2026-03-17 15:31:36 -07:00
Timothy 58e4fa918c feat: make worker node aware of boundaries 2026-03-17 15:28:41 -07:00
Timothy @aden d2eb86e534 Merge pull request #6540 from sundaram2021/fix/make-windows-compatibility
fix make test compatibility on windows
2026-03-17 11:41:32 -07:00
mma2027 23a7b080eb test: add comprehensive test suite for safe_eval (#4015)
* test: add comprehensive test suite for safe_eval sandboxed evaluator

Adds 113 tests across 14 test classes covering the full surface area of
the safe_eval expression evaluator used by edge conditions:

- Literals, data structures, arithmetic, unary/binary/boolean operators
- Short-circuit semantics for `and`/`or` (including guard patterns)
- Ternary expressions, variable lookup, subscript/attribute access
- Whitelisted function and method calls
- Security boundaries (private attrs, disallowed AST nodes, blocked builtins)
- Real-world EdgeSpec.condition_expr patterns from graph executor usage

* style: fix import sort order

---------

Co-authored-by: mma2027 <mma2027@users.noreply.github.com>
Co-authored-by: hundao <alchemy_wimp@hotmail.com>
2026-03-18 01:01:31 +08:00
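The sandboxed evaluator this test suite targets can be illustrated with a minimal sketch. This is hypothetical and far smaller than the real `safe_eval` (which whitelists functions, methods, and attribute access as well), but it shows the core technique the tests exercise: walking the AST, allowing only whitelisted node types, and preserving Python's short-circuit semantics for `and`/`or`:

```python
import ast
import operator

# Whitelisted operators; anything outside these tables is rejected.
_BIN_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}
_CMP_OPS = {
    ast.Eq: operator.eq,
    ast.NotEq: operator.ne,
    ast.Lt: operator.lt,
    ast.Gt: operator.gt,
}


def safe_eval(expr: str, variables: dict):
    """Evaluate a restricted expression; any other AST node is rejected."""

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return variables[node.id]
        if isinstance(node, ast.BinOp) and type(node.op) in _BIN_OPS:
            return _BIN_OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.Compare) and len(node.ops) == 1:
            op = type(node.ops[0])
            if op in _CMP_OPS:
                return _CMP_OPS[op](_eval(node.left), _eval(node.comparators[0]))
        if isinstance(node, ast.BoolOp):
            # Short-circuit semantics: stop at the first falsy (and) or
            # truthy (or) value, returning it unchanged -- this is what
            # makes guard patterns like `d and d == 1` safe.
            if isinstance(node.op, ast.And):
                result = True
                for value in node.values:
                    result = _eval(value)
                    if not result:
                        return result
                return result
            result = False
            for value in node.values:
                result = _eval(value)
                if result:
                    return result
            return result
        raise ValueError(f"disallowed expression node: {type(node).__name__}")

    return _eval(ast.parse(expr, mode="eval"))
```

Because function calls, attribute access, and subscripts are simply absent from the whitelist here, expressions like `__import__('os')` fail with `ValueError` rather than executing — the same security-boundary behavior the test classes above verify.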
mma2027 bf39bcdec9 fixed race condition deadlock, missing short-circuit eval, unhandled format exceptions (#4012) 2026-03-18 00:36:54 +08:00
Richard Tang 0276632491 Merge branch 'feat/graph-improvements' 2026-03-17 07:34:10 -07:00
RichardTang-Aden ae2993d0d1 Merge pull request #6528 from Antiarin/feat/trigger-nodes-in-draft-graph
Restore trigger nodes in the new flowchart
2026-03-16 20:54:36 -07:00
RichardTang-Aden d14d71f760 Merge pull request #6549 from aden-hive/staging
release 0.7.2
2026-03-16 20:44:47 -07:00
Richard Tang ef6efc2f55 chore: lint and dead code 2026-03-16 20:44:03 -07:00
Antiarin 738641d35f fix: correct trigger target, label, and SSE event data
- Add name and entry_node to all trigger SSE events (TRIGGER_AVAILABLE,
  TRIGGER_ACTIVATED, TRIGGER_DEACTIVATED) so frontend gets correct data
  immediately instead of guessing
- Use ep.entry_node from backend in polling instead of guessing first
  non-trigger node
- Compute cronToLabel from trigger config during polling so pill labels
  show human-readable schedule
- Fix AsyncMock for event_bus.publish in tests
2026-03-17 09:07:10 +05:30
Antiarin 22f5534f08 fix: ensure Queen calls remove_trigger when user asks to remove scheduler
Added explicit prompt guidance requiring the Queen to call the
remove_trigger tool instead of just saying "it's removed."
2026-03-17 09:07:10 +05:30
Antiarin b79e7eca73 feat: live update trigger pill and detail panel on save
- Handle trigger_updated SSE event to update graph node label and
  config in real time when cron or task is saved
- Use cronToLabel for human-readable schedule display in detail panel
- Add "Saved" button feedback for Save Cron and Save Task (2s toast)
- Update trigger pill label to reflect new schedule on cron save
2026-03-17 09:07:10 +05:30
Antiarin 28250dc45e feat: support cron editing via trigger update API
- Extend PATCH /triggers/{id} to accept trigger_config with cron
  validation via croniter and active timer restart
- Add TRIGGER_UPDATED SSE event so frontend updates in real time
- Update frontend API client to use updateTrigger with config support
- Add tests for task update, cron restart, and invalid cron rejection
2026-03-17 09:07:10 +05:30
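The commit above validates cron expressions with croniter before accepting a PATCH. The shape of that check can be sketched without the third-party dependency — the structural validator below is a hypothetical stand-in (the real endpoint uses `croniter.is_valid`), covering the common 5-field syntax of digits, ranges, steps, lists, and `*`:

```python
import re

# Hypothetical stand-in for croniter-based validation: accepts tokens like
# "*", "5", "1-5", "*/10", and comma-separated lists of those, per field.
_TOKEN = re.compile(r"^(\*|\d+)(-\d+)?(/\d+)?$")


def is_valid_cron(expr: str) -> bool:
    """Rough structural check of a 5-field cron expression.

    Not a full cron grammar (no month/day names, no range sanity checks);
    just enough to reject obviously malformed PATCH payloads.
    """
    fields = expr.split()
    if len(fields) != 5:
        return False
    return all(
        all(_TOKEN.match(part) for part in field.split(","))
        for field in fields
    )
```

In the real flow described above, a failed check would produce a 4xx response and the active timer would only be restarted after validation succeeds.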
Antiarin fe5df6a87a feat: restore trigger node rendering in DraftGraph
Trigger nodes (scheduler, webhook, etc.) stopped appearing after the
v0.7.0 refactor because DraftGraph had no trigger awareness.

- Extract shared utilities (cssVar, truncateLabel, trigger colors/icons,
  useTriggerColors, cronToLabel) into lib/graphUtils.ts
- Render trigger pills above the draft flowchart with pill shape, icons,
  countdown timers, active/inactive status, and click handling
- Draw dashed edges from trigger pills to the correct draft node using
  flowchartMap lookup
- Name all trigger layout constants, fix countdown text color bug
- Include trigger pill extent in SVG viewBox width

Closes #6344
2026-03-17 09:07:10 +05:30
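Several commits in this branch rely on a `cronToLabel` helper (extracted into `lib/graphUtils.ts`) to render human-readable schedule labels on trigger pills. A rough Python transliteration of what such a helper might do — hypothetical, covering only a couple of common shapes and falling back to the raw expression — looks like:

```python
def cron_to_label(cron: str) -> str:
    """Turn a few common cron shapes into a short pill label.

    Hypothetical sketch of the frontend cronToLabel helper; anything it
    does not recognize is returned unchanged.
    """
    # Pad so malformed/short expressions still unpack into five fields.
    minute, hour, dom, month, dow = (cron.split() + ["*"] * 5)[:5]
    if minute.startswith("*/") and (hour, dom, month, dow) == ("*", "*", "*", "*"):
        return f"Every {minute[2:]} min"
    if minute.isdigit() and hour.isdigit() and (dom, month, dow) == ("*", "*", "*"):
        return f"Daily at {int(hour):02d}:{int(minute):02d}"
    return cron
```

Computing the label once during polling (rather than in every render) matches the approach described in the "Compute cronToLabel from trigger config during polling" commit above.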
Richard Tang 07e4b593dd fix: write config when change model with existing key 2026-03-16 20:23:20 -07:00
Timothy 497591bf3b Merge remote-tracking branch 'origin/feat/hive-llm-support' into staging 2026-03-16 19:49:21 -07:00
Timothy a2a3e334d6 Merge branch 'feature/node-node-comm-by-file' into staging 2026-03-16 19:48:45 -07:00
Timothy 1ccbfaf800 Merge branch 'feature/agent-skills' into staging 2026-03-16 19:48:36 -07:00
bryan c2dea88398 refactor: active node always displaying 2026-03-16 19:30:44 -07:00
bryan dc95c88da0 chore: linter update 2026-03-16 19:22:51 -07:00
bryan b51e688d1a feat: transition when loading 2026-03-16 19:17:16 -07:00
bryan b77a3031fe refactor: update flowchart.json for templates 2026-03-16 17:27:28 -07:00
bryan c10eea04ec refactor: update graph node colors 2026-03-16 17:26:57 -07:00
Richard Tang 491a3f24da chore: Suppress noisy LiteLLM INFO logs 2026-03-16 16:45:23 -07:00
Richard Tang d59f8e99cb chore: prompt users to go to discord for hive key 2026-03-16 16:09:47 -07:00
Richard Tang 0a91b49417 feat: add validation and config for baseURL 2026-03-16 16:07:13 -07:00
Richard Tang b47175d1df feat: add hive llm spec in the quickstart 2026-03-16 14:10:30 -07:00
Sundaram Kumar Jha ff7b5c7e27 fix: prepend ~/.local/bin to PATH so uv is found in Git Bash on Windows 2026-03-17 01:28:25 +05:30
bryan 69f0ff7ac9 chore: linter update 2026-03-16 12:22:29 -07:00
bryan c3f13c50eb docs: remove stale iso 5807 references 2026-03-16 12:22:01 -07:00
bryan 5477408d40 chore: code quality updates 2026-03-16 12:18:46 -07:00
bryan 9fad385ddf fix: return staging phase for disk-loaded agents to prevent false planning loader 2026-03-16 12:14:20 -07:00
bryan cf44ee1d9b refactor: remove AgentGraph, extract shared types, add resizable graph panel 2026-03-16 12:13:56 -07:00
bryan 4ab33a39d6 chore: add generated flowchart.json for template agents 2026-03-16 12:13:29 -07:00
bryan ae19121802 test: add tests for flowchart_utils classification and remap 2026-03-16 12:13:16 -07:00
bryan b518525418 docs: update flowchart schema for 9 types with new color palette 2026-03-16 12:13:06 -07:00
bryan ac3fe38b33 refactor: remove dead shape cases and update imports 2026-03-16 12:12:50 -07:00
bryan 3c6a30fcae refactor: trim queen prompt to 9 flowchart types with dark theme colors 2026-03-16 12:12:35 -07:00
bryan 2ced873fb5 refactor: extract flowchart utils into dedicated module with fallback generation 2026-03-16 12:12:17 -07:00
50 changed files with 4552 additions and 6611 deletions
+9 -2
@@ -1,4 +1,11 @@
-.PHONY: lint format check test install-hooks help frontend-install frontend-dev frontend-build
+.PHONY: lint format check test test-tools test-live test-all install-hooks help frontend-install frontend-dev frontend-build
+
+# ── Ensure uv is findable in Git Bash on Windows ──────────────────────────────
+# uv installs to ~/.local/bin on Windows/Linux/macOS. Git Bash may not include
+# this in PATH by default, so we prepend it here.
+export PATH := $(HOME)/.local/bin:$(PATH)
+
 # ── Targets ───────────────────────────────────────────────────────────────────
 help: ## Show this help
 	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \
@@ -46,4 +53,4 @@ frontend-dev: ## Start frontend dev server
 	cd core/frontend && npm run dev
 
 frontend-build: ## Build frontend for production
-	cd core/frontend && npm run build
+	cd core/frontend && npm run build
-740
@@ -1,740 +0,0 @@
#!/usr/bin/env python3
"""
EventLoopNode WebSocket Demo
Real LLM, real FileConversationStore, real EventBus.
Streams EventLoopNode execution to a browser via WebSocket.
Usage:
cd /home/timothy/oss/hive/core
python demos/event_loop_wss_demo.py
Then open http://localhost:8765 in your browser.
"""
import asyncio
import json
import logging
import sys
import tempfile
from http import HTTPStatus
from pathlib import Path
import httpx
import websockets
from bs4 import BeautifulSoup
from websockets.http11 import Request, Response
# Add core, tools, and hive root to path
_CORE_DIR = Path(__file__).resolve().parent.parent
_HIVE_DIR = _CORE_DIR.parent
sys.path.insert(0, str(_CORE_DIR)) # framework.*
sys.path.insert(0, str(_HIVE_DIR / "tools" / "src")) # aden_tools.*
sys.path.insert(0, str(_HIVE_DIR)) # core.framework.* (for aden_tools imports)
import os # noqa: E402
from aden_tools.credentials import CREDENTIAL_SPECS, CredentialStoreAdapter # noqa: E402
from core.framework.credentials import CredentialStore # noqa: E402
from framework.credentials.storage import ( # noqa: E402
CompositeStorage,
EncryptedFileStorage,
EnvVarStorage,
)
from framework.graph.event_loop_node import EventLoopNode, LoopConfig # noqa: E402
from framework.graph.node import NodeContext, NodeSpec, SharedMemory # noqa: E402
from framework.llm.litellm import LiteLLMProvider # noqa: E402
from framework.llm.provider import Tool # noqa: E402
from framework.runner.tool_registry import ToolRegistry # noqa: E402
from framework.runtime.core import Runtime # noqa: E402
from framework.runtime.event_bus import EventBus, EventType # noqa: E402
from framework.storage.conversation_store import FileConversationStore # noqa: E402
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
logger = logging.getLogger("demo")
# -------------------------------------------------------------------------
# Persistent state (shared across WebSocket connections)
# -------------------------------------------------------------------------
STORE_DIR = Path(tempfile.mkdtemp(prefix="hive_demo_"))
STORE = FileConversationStore(STORE_DIR / "conversation")
RUNTIME = Runtime(STORE_DIR / "runtime")
LLM = LiteLLMProvider(model="claude-sonnet-4-5-20250929")
# -------------------------------------------------------------------------
# Tool Registry — real tools via ToolRegistry (same pattern as GraphExecutor)
# -------------------------------------------------------------------------
TOOL_REGISTRY = ToolRegistry()
# Credential store: Aden sync (OAuth2 tokens) + encrypted files + env var fallback
_env_mapping = {name: spec.env_var for name, spec in CREDENTIAL_SPECS.items()}
_local_storage = CompositeStorage(
primary=EncryptedFileStorage(),
fallbacks=[EnvVarStorage(env_mapping=_env_mapping)],
)
if os.environ.get("ADEN_API_KEY"):
try:
from framework.credentials.aden import ( # noqa: E402
AdenCachedStorage,
AdenClientConfig,
AdenCredentialClient,
AdenSyncProvider,
)
_client = AdenCredentialClient(AdenClientConfig(base_url="https://api.adenhq.com"))
_provider = AdenSyncProvider(client=_client)
_storage = AdenCachedStorage(
local_storage=_local_storage,
aden_provider=_provider,
)
_cred_store = CredentialStore(storage=_storage, providers=[_provider], auto_refresh=True)
_synced = _provider.sync_all(_cred_store)
logger.info("Synced %d credentials from Aden", _synced)
except Exception as e:
logger.warning("Aden sync unavailable: %s", e)
_cred_store = CredentialStore(storage=_local_storage)
else:
logger.info("ADEN_API_KEY not set, using local credential storage")
_cred_store = CredentialStore(storage=_local_storage)
CREDENTIALS = CredentialStoreAdapter(_cred_store)
# Debug: log which credentials resolved
for _name in ["brave_search", "hubspot", "anthropic"]:
_val = CREDENTIALS.get(_name)
if _val:
logger.debug("credential %s: OK (len=%d)", _name, len(_val))
else:
logger.debug("credential %s: not found", _name)
# --- web_search (Brave Search API) ---
TOOL_REGISTRY.register(
name="web_search",
tool=Tool(
name="web_search",
description=(
"Search the web for current information. "
"Returns titles, URLs, and snippets from search results."
),
parameters={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query (1-500 characters)",
},
"num_results": {
"type": "integer",
"description": "Number of results to return (1-20, default 10)",
},
},
"required": ["query"],
},
),
executor=lambda inputs: _exec_web_search(inputs),
)
def _exec_web_search(inputs: dict) -> dict:
api_key = CREDENTIALS.get("brave_search")
if not api_key:
return {"error": "brave_search credential not configured"}
query = inputs.get("query", "")
num_results = min(inputs.get("num_results", 10), 20)
resp = httpx.get(
"https://api.search.brave.com/res/v1/web/search",
params={"q": query, "count": num_results},
headers={"X-Subscription-Token": api_key, "Accept": "application/json"},
timeout=30.0,
)
if resp.status_code != 200:
return {"error": f"Brave API HTTP {resp.status_code}"}
data = resp.json()
results = [
{
"title": item.get("title", ""),
"url": item.get("url", ""),
"snippet": item.get("description", ""),
}
for item in data.get("web", {}).get("results", [])[:num_results]
]
return {"query": query, "results": results, "total": len(results)}
# --- web_scrape (httpx + BeautifulSoup, no playwright for sync compat) ---
TOOL_REGISTRY.register(
name="web_scrape",
tool=Tool(
name="web_scrape",
description=(
"Scrape and extract text content from a webpage URL. "
"Returns the page title and main text content."
),
parameters={
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "URL of the webpage to scrape",
},
"max_length": {
"type": "integer",
"description": "Maximum text length (default 50000)",
},
},
"required": ["url"],
},
),
executor=lambda inputs: _exec_web_scrape(inputs),
)
_SCRAPE_HEADERS = {
"User-Agent": (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/131.0.0.0 Safari/537.36"
),
"Accept": "text/html,application/xhtml+xml",
}
def _exec_web_scrape(inputs: dict) -> dict:
url = inputs.get("url", "")
max_length = max(1000, min(inputs.get("max_length", 50000), 500000))
if not url.startswith(("http://", "https://")):
url = "https://" + url
try:
resp = httpx.get(url, timeout=30.0, follow_redirects=True, headers=_SCRAPE_HEADERS)
if resp.status_code != 200:
return {"error": f"HTTP {resp.status_code}"}
soup = BeautifulSoup(resp.text, "html.parser")
for tag in soup(["script", "style", "nav", "footer", "header", "aside", "noscript"]):
tag.decompose()
title = soup.title.get_text(strip=True) if soup.title else ""
main = (
soup.find("article")
or soup.find("main")
or soup.find(attrs={"role": "main"})
or soup.find("body")
)
text = main.get_text(separator=" ", strip=True) if main else ""
text = " ".join(text.split())
if len(text) > max_length:
text = text[:max_length] + "..."
return {"url": url, "title": title, "content": text, "length": len(text)}
except httpx.TimeoutException:
return {"error": "Request timed out"}
except Exception as e:
return {"error": f"Scrape failed: {e}"}
# --- HubSpot CRM tools (optional, requires HUBSPOT_ACCESS_TOKEN) ---
_HUBSPOT_API = "https://api.hubapi.com"
def _hubspot_headers() -> dict | None:
token = CREDENTIALS.get("hubspot")
if token:
logger.debug("HubSpot token: %s...%s (len=%d)", token[:8], token[-4:], len(token))
else:
logger.debug("HubSpot token: not found")
if not token:
return None
return {
"Authorization": f"Bearer {token}",
"Content-Type": "application/json",
"Accept": "application/json",
}
def _exec_hubspot_search(inputs: dict) -> dict:
headers = _hubspot_headers()
if not headers:
return {"error": "HUBSPOT_ACCESS_TOKEN not set"}
object_type = inputs.get("object_type", "contacts")
query = inputs.get("query", "")
limit = min(inputs.get("limit", 10), 100)
body: dict = {"limit": limit}
if query:
body["query"] = query
try:
resp = httpx.post(
f"{_HUBSPOT_API}/crm/v3/objects/{object_type}/search",
headers=headers,
json=body,
timeout=30.0,
)
if resp.status_code != 200:
return {"error": f"HubSpot API HTTP {resp.status_code}: {resp.text[:200]}"}
return resp.json()
except httpx.TimeoutException:
return {"error": "Request timed out"}
except Exception as e:
return {"error": f"HubSpot error: {e}"}
TOOL_REGISTRY.register(
name="hubspot_search",
tool=Tool(
name="hubspot_search",
description=(
"Search HubSpot CRM objects (contacts, companies, or deals). "
"Returns matching records with their properties."
),
parameters={
"type": "object",
"properties": {
"object_type": {
"type": "string",
"description": "CRM object type: 'contacts', 'companies', or 'deals'",
},
"query": {
"type": "string",
"description": "Search query (name, email, domain, etc.)",
},
"limit": {
"type": "integer",
"description": "Max results (1-100, default 10)",
},
},
"required": ["object_type"],
},
),
executor=lambda inputs: _exec_hubspot_search(inputs),
)
logger.info(
"ToolRegistry loaded: %s",
", ".join(TOOL_REGISTRY.get_registered_names()),
)
# -------------------------------------------------------------------------
# HTML page (embedded)
# -------------------------------------------------------------------------
HTML_PAGE = ( # noqa: E501
"""<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>EventLoopNode Live Demo</title>
<style>
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
font-family: 'SF Mono', 'Fira Code', monospace;
background: #0d1117; color: #c9d1d9;
height: 100vh; display: flex; flex-direction: column;
}
header {
background: #161b22; padding: 12px 20px;
border-bottom: 1px solid #30363d;
display: flex; align-items: center; gap: 16px;
}
header h1 { font-size: 16px; color: #58a6ff; font-weight: 600; }
.status {
font-size: 12px; padding: 3px 10px; border-radius: 12px;
background: #21262d; color: #8b949e;
}
.status.running { background: #1a4b2e; color: #3fb950; }
.status.done { background: #1a3a5c; color: #58a6ff; }
.status.error { background: #4b1a1a; color: #f85149; }
.chat { flex: 1; overflow-y: auto; padding: 16px; }
.msg {
margin: 8px 0; padding: 10px 14px; border-radius: 8px;
line-height: 1.6; white-space: pre-wrap; word-wrap: break-word;
}
.msg.user { background: #1a3a5c; color: #58a6ff; }
.msg.assistant { background: #161b22; color: #c9d1d9; }
.msg.event {
background: transparent; color: #8b949e; font-size: 11px;
padding: 4px 14px; border-left: 3px solid #30363d;
}
.msg.event.loop { border-left-color: #58a6ff; }
.msg.event.tool { border-left-color: #d29922; }
.msg.event.stall { border-left-color: #f85149; }
.input-bar {
padding: 12px 16px; background: #161b22;
border-top: 1px solid #30363d; display: flex; gap: 8px;
}
.input-bar input {
flex: 1; background: #0d1117; border: 1px solid #30363d;
color: #c9d1d9; padding: 8px 12px; border-radius: 6px;
font-family: inherit; font-size: 14px; outline: none;
}
.input-bar input:focus { border-color: #58a6ff; }
.input-bar button {
background: #238636; color: #fff; border: none;
padding: 8px 20px; border-radius: 6px; cursor: pointer;
font-family: inherit; font-weight: 600;
}
.input-bar button:hover { background: #2ea043; }
.input-bar button:disabled {
background: #21262d; color: #484f58; cursor: not-allowed;
}
.input-bar button.clear { background: #da3633; }
.input-bar button.clear:hover { background: #f85149; }
</style>
</head>
<body>
<header>
<h1>EventLoopNode Live</h1>
<span id="status" class="status">Idle</span>
<span id="iter" class="status" style="display:none">Step 0</span>
</header>
<div id="chat" class="chat"></div>
<div class="input-bar">
<input id="input" type="text"
placeholder="Ask anything..." autofocus />
<button id="go" onclick="run()">Send</button>
<button class="clear"
onclick="clearConversation()">Clear</button>
</div>
<script>
let ws = null;
let currentAssistantEl = null;
let iterCount = 0;
const chat = document.getElementById('chat');
const status = document.getElementById('status');
const iterEl = document.getElementById('iter');
const goBtn = document.getElementById('go');
const inputEl = document.getElementById('input');
inputEl.addEventListener('keydown', e => {
if (e.key === 'Enter') run();
});
function setStatus(text, cls) {
status.textContent = text;
status.className = 'status ' + cls;
}
function addMsg(text, cls) {
const el = document.createElement('div');
el.className = 'msg ' + cls;
el.textContent = text;
chat.appendChild(el);
chat.scrollTop = chat.scrollHeight;
return el;
}
function connect() {
ws = new WebSocket('ws://' + location.host + '/ws');
ws.onopen = () => {
setStatus('Ready', 'done');
goBtn.disabled = false;
};
ws.onmessage = handleEvent;
ws.onerror = () => { setStatus('Error', 'error'); };
ws.onclose = () => {
setStatus('Reconnecting...', '');
goBtn.disabled = true;
setTimeout(connect, 2000);
};
}
function handleEvent(msg) {
const evt = JSON.parse(msg.data);
if (evt.type === 'llm_text_delta') {
if (currentAssistantEl) {
currentAssistantEl.textContent += evt.content;
chat.scrollTop = chat.scrollHeight;
}
}
else if (evt.type === 'ready') {
setStatus('Ready', 'done');
if (currentAssistantEl && !currentAssistantEl.textContent)
currentAssistantEl.remove();
goBtn.disabled = false;
}
else if (evt.type === 'node_loop_iteration') {
iterCount = evt.iteration || (iterCount + 1);
iterEl.textContent = 'Step ' + iterCount;
iterEl.style.display = '';
}
else if (evt.type === 'tool_call_started') {
var info = evt.tool_name + '('
+ JSON.stringify(evt.tool_input).slice(0, 120) + ')';
addMsg('TOOL ' + info, 'event tool');
}
else if (evt.type === 'tool_call_completed') {
var preview = (evt.result || '').slice(0, 200);
var cls = evt.is_error ? 'stall' : 'tool';
addMsg('RESULT ' + evt.tool_name + ': ' + preview,
'event ' + cls);
currentAssistantEl = addMsg('', 'assistant');
}
else if (evt.type === 'result') {
setStatus('Session ended', evt.success ? 'done' : 'error');
if (evt.error) addMsg('ERROR ' + evt.error, 'event stall');
if (currentAssistantEl && !currentAssistantEl.textContent)
currentAssistantEl.remove();
goBtn.disabled = false;
}
else if (evt.type === 'node_stalled') {
addMsg('STALLED ' + evt.reason, 'event stall');
}
else if (evt.type === 'cleared') {
chat.innerHTML = '';
iterCount = 0;
iterEl.textContent = 'Step 0';
iterEl.style.display = 'none';
setStatus('Ready', 'done');
goBtn.disabled = false;
}
}
function run() {
const text = inputEl.value.trim();
if (!text || !ws || ws.readyState !== 1) return;
addMsg(text, 'user');
currentAssistantEl = addMsg('', 'assistant');
inputEl.value = '';
setStatus('Running', 'running');
goBtn.disabled = true;
ws.send(JSON.stringify({ topic: text }));
}
function clearConversation() {
if (ws && ws.readyState === 1) {
ws.send(JSON.stringify({ command: 'clear' }));
}
}
connect();
</script>
</body>
</html>"""
)
# -------------------------------------------------------------------------
# WebSocket handler
# -------------------------------------------------------------------------
async def handle_ws(websocket):
"""Persistent WebSocket: long-lived EventLoopNode with client_facing blocking."""
global STORE
# -- Event forwarding (WebSocket ← EventBus) ----------------------------
bus = EventBus()
async def forward_event(event):
try:
payload = {"type": event.type.value, **event.data}
if event.node_id:
payload["node_id"] = event.node_id
await websocket.send(json.dumps(payload))
except Exception:
pass
bus.subscribe(
event_types=[
EventType.NODE_LOOP_STARTED,
EventType.NODE_LOOP_ITERATION,
EventType.NODE_LOOP_COMPLETED,
EventType.LLM_TEXT_DELTA,
EventType.TOOL_CALL_STARTED,
EventType.TOOL_CALL_COMPLETED,
EventType.NODE_STALLED,
],
handler=forward_event,
)
# -- Per-connection state -----------------------------------------------
node = None
loop_task = None
tools = list(TOOL_REGISTRY.get_tools().values())
tool_executor = TOOL_REGISTRY.get_executor()
node_spec = NodeSpec(
id="assistant",
name="Chat Assistant",
description="A conversational assistant that remembers context across messages",
node_type="event_loop",
client_facing=True,
system_prompt=(
"You are a helpful assistant with access to tools. "
"You can search the web, scrape webpages, and query HubSpot CRM. "
"Use tools when the user asks for current information or external data. "
"You have full conversation history, so you can reference previous messages."
),
)
# -- Ready callback: subscribe to CLIENT_INPUT_REQUESTED on the bus ---
async def on_input_requested(event):
try:
await websocket.send(json.dumps({"type": "ready"}))
except Exception:
pass
bus.subscribe(
event_types=[EventType.CLIENT_INPUT_REQUESTED],
handler=on_input_requested,
)
async def start_loop(first_message: str):
"""Create an EventLoopNode and run it as a background task."""
nonlocal node, loop_task
memory = SharedMemory()
ctx = NodeContext(
runtime=RUNTIME,
node_id="assistant",
node_spec=node_spec,
memory=memory,
input_data={},
llm=LLM,
available_tools=tools,
)
node = EventLoopNode(
event_bus=bus,
config=LoopConfig(max_iterations=10_000, max_context_tokens=32_000),
conversation_store=STORE,
tool_executor=tool_executor,
)
await node.inject_event(first_message)
async def _run():
try:
result = await node.execute(ctx)
try:
await websocket.send(
json.dumps(
{
"type": "result",
"success": result.success,
"output": result.output,
"error": result.error,
"tokens": result.tokens_used,
}
)
)
except Exception:
pass
logger.info(f"Loop ended: success={result.success}, tokens={result.tokens_used}")
except websockets.exceptions.ConnectionClosed:
logger.info("Loop stopped: WebSocket closed")
except Exception as e:
logger.exception("Loop error")
try:
await websocket.send(
json.dumps(
{
"type": "result",
"success": False,
"error": str(e),
"output": {},
}
)
)
except Exception:
pass
loop_task = asyncio.create_task(_run())
async def stop_loop():
"""Signal the node and wait for the loop task to finish."""
nonlocal node, loop_task
if loop_task and not loop_task.done():
if node:
node.signal_shutdown()
try:
await asyncio.wait_for(loop_task, timeout=5.0)
except (TimeoutError, asyncio.CancelledError):
loop_task.cancel()
node = None
loop_task = None
# -- Message loop (runs for the lifetime of this WebSocket) -------------
try:
async for raw in websocket:
try:
msg = json.loads(raw)
except Exception:
continue
# Clear command
if msg.get("command") == "clear":
import shutil
await stop_loop()
await STORE.close()
conv_dir = STORE_DIR / "conversation"
if conv_dir.exists():
shutil.rmtree(conv_dir)
STORE = FileConversationStore(conv_dir)
await websocket.send(json.dumps({"type": "cleared"}))
logger.info("Conversation cleared")
continue
topic = msg.get("topic", "")
if not topic:
continue
if node is None:
# First message — spin up the loop
logger.info(f"Starting persistent loop: {topic}")
await start_loop(topic)
else:
# Subsequent message — inject into the running loop
logger.info(f"Injecting message: {topic}")
await node.inject_event(topic)
except websockets.exceptions.ConnectionClosed:
pass
finally:
await stop_loop()
logger.info("WebSocket closed, loop stopped")
# -------------------------------------------------------------------------
# HTTP handler for serving the HTML page
# -------------------------------------------------------------------------
async def process_request(connection, request: Request):
"""Serve HTML on GET /, upgrade to WebSocket on /ws."""
if request.path == "/ws":
return None # let websockets handle the upgrade
# Serve the HTML page for any other path
return Response(
HTTPStatus.OK,
"OK",
websockets.Headers({"Content-Type": "text/html; charset=utf-8"}),
HTML_PAGE.encode(),
)
# -------------------------------------------------------------------------
# Main
# -------------------------------------------------------------------------
async def main():
port = 8765
async with websockets.serve(
handle_ws,
"0.0.0.0",
port,
process_request=process_request,
):
logger.info(f"Demo running at http://localhost:{port}")
logger.info("Open in your browser and enter a topic to research.")
await asyncio.Future() # run forever
if __name__ == "__main__":
asyncio.run(main())
File diff suppressed because it is too large.
-930
@@ -1,930 +0,0 @@
#!/usr/bin/env python3
"""
Two-Node ContextHandoff Demo
Demonstrates ContextHandoff between two EventLoopNode instances:
Node A (Researcher) ContextHandoff Node B (Analyst)
Real LLM, real FileConversationStore, real EventBus.
Streams both nodes to a browser via WebSocket.
Usage:
cd /home/timothy/oss/hive/core
python demos/handoff_demo.py
Then open http://localhost:8766 in your browser.
"""
import asyncio
import json
import logging
import sys
import tempfile
from http import HTTPStatus
from pathlib import Path
import httpx
import websockets
from bs4 import BeautifulSoup
from websockets.http11 import Request, Response
# Add core, tools, and hive root to path
_CORE_DIR = Path(__file__).resolve().parent.parent
_HIVE_DIR = _CORE_DIR.parent
sys.path.insert(0, str(_CORE_DIR)) # framework.*
sys.path.insert(0, str(_HIVE_DIR / "tools" / "src")) # aden_tools.*
sys.path.insert(0, str(_HIVE_DIR)) # core.framework.* (for aden_tools imports)
from aden_tools.credentials import CREDENTIAL_SPECS, CredentialStoreAdapter # noqa: E402
from core.framework.credentials import CredentialStore # noqa: E402
from framework.credentials.storage import ( # noqa: E402
CompositeStorage,
EncryptedFileStorage,
EnvVarStorage,
)
from framework.graph.context_handoff import ContextHandoff # noqa: E402
from framework.graph.conversation import NodeConversation # noqa: E402
from framework.graph.event_loop_node import EventLoopNode, LoopConfig # noqa: E402
from framework.graph.node import NodeContext, NodeSpec, SharedMemory # noqa: E402
from framework.llm.litellm import LiteLLMProvider # noqa: E402
from framework.llm.provider import Tool # noqa: E402
from framework.runner.tool_registry import ToolRegistry # noqa: E402
from framework.runtime.core import Runtime # noqa: E402
from framework.runtime.event_bus import EventBus, EventType # noqa: E402
from framework.storage.conversation_store import FileConversationStore # noqa: E402
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
logger = logging.getLogger("handoff_demo")
# -------------------------------------------------------------------------
# Persistent state
# -------------------------------------------------------------------------
STORE_DIR = Path(tempfile.mkdtemp(prefix="hive_handoff_"))
RUNTIME = Runtime(STORE_DIR / "runtime")
LLM = LiteLLMProvider(model="claude-sonnet-4-5-20250929")
# -------------------------------------------------------------------------
# Credentials
# -------------------------------------------------------------------------
# Composite credential store: encrypted files (primary) + env vars (fallback)
_env_mapping = {name: spec.env_var for name, spec in CREDENTIAL_SPECS.items()}
_composite = CompositeStorage(
primary=EncryptedFileStorage(),
fallbacks=[EnvVarStorage(env_mapping=_env_mapping)],
)
CREDENTIALS = CredentialStoreAdapter(CredentialStore(storage=_composite))
for _name in ["brave_search", "hubspot"]:
_val = CREDENTIALS.get(_name)
if _val:
logger.debug("credential %s: OK (len=%d)", _name, len(_val))
else:
logger.debug("credential %s: not found", _name)
# -------------------------------------------------------------------------
# Tool Registry — web_search + web_scrape for Node A (Researcher)
# -------------------------------------------------------------------------
TOOL_REGISTRY = ToolRegistry()
def _exec_web_search(inputs: dict) -> dict:
api_key = CREDENTIALS.get("brave_search")
if not api_key:
return {"error": "brave_search credential not configured"}
query = inputs.get("query", "")
num_results = min(inputs.get("num_results", 10), 20)
resp = httpx.get(
"https://api.search.brave.com/res/v1/web/search",
params={"q": query, "count": num_results},
headers={
"X-Subscription-Token": api_key,
"Accept": "application/json",
},
timeout=30.0,
)
if resp.status_code != 200:
return {"error": f"Brave API HTTP {resp.status_code}"}
data = resp.json()
results = [
{
"title": item.get("title", ""),
"url": item.get("url", ""),
"snippet": item.get("description", ""),
}
for item in data.get("web", {}).get("results", [])[:num_results]
]
return {"query": query, "results": results, "total": len(results)}
TOOL_REGISTRY.register(
name="web_search",
tool=Tool(
name="web_search",
description=(
"Search the web for current information. "
"Returns titles, URLs, and snippets from search results."
),
parameters={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query (1-500 characters)",
},
"num_results": {
"type": "integer",
"description": "Number of results (1-20, default 10)",
},
},
"required": ["query"],
},
),
    executor=_exec_web_search,
)
_SCRAPE_HEADERS = {
"User-Agent": (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/131.0.0.0 Safari/537.36"
),
"Accept": "text/html,application/xhtml+xml",
}
def _exec_web_scrape(inputs: dict) -> dict:
url = inputs.get("url", "")
max_length = max(1000, min(inputs.get("max_length", 50000), 500000))
if not url.startswith(("http://", "https://")):
url = "https://" + url
try:
resp = httpx.get(
url,
timeout=30.0,
follow_redirects=True,
headers=_SCRAPE_HEADERS,
)
if resp.status_code != 200:
return {"error": f"HTTP {resp.status_code}"}
soup = BeautifulSoup(resp.text, "html.parser")
for tag in soup(["script", "style", "nav", "footer", "header", "aside", "noscript"]):
tag.decompose()
title = soup.title.get_text(strip=True) if soup.title else ""
main = (
soup.find("article")
or soup.find("main")
or soup.find(attrs={"role": "main"})
or soup.find("body")
)
text = main.get_text(separator=" ", strip=True) if main else ""
text = " ".join(text.split())
if len(text) > max_length:
text = text[:max_length] + "..."
return {
"url": url,
"title": title,
"content": text,
"length": len(text),
}
except httpx.TimeoutException:
return {"error": "Request timed out"}
except Exception as e:
return {"error": f"Scrape failed: {e}"}
TOOL_REGISTRY.register(
name="web_scrape",
tool=Tool(
name="web_scrape",
description=(
"Scrape and extract text content from a webpage URL. "
"Returns the page title and main text content."
),
parameters={
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "URL of the webpage to scrape",
},
"max_length": {
"type": "integer",
"description": "Maximum text length (default 50000)",
},
},
"required": ["url"],
},
),
    executor=_exec_web_scrape,
)
logger.info(
"ToolRegistry loaded: %s",
", ".join(TOOL_REGISTRY.get_registered_names()),
)
# -------------------------------------------------------------------------
# Node Specs
# -------------------------------------------------------------------------
RESEARCHER_SPEC = NodeSpec(
id="researcher",
name="Researcher",
description="Researches a topic using web search and scraping tools",
node_type="event_loop",
input_keys=["topic"],
output_keys=["research_summary"],
system_prompt=(
"You are a thorough research assistant. Your job is to research "
"the given topic using the web_search and web_scrape tools.\n\n"
"1. Search for relevant information on the topic\n"
"2. Scrape 1-2 of the most promising URLs for details\n"
"3. Synthesize your findings into a comprehensive summary\n"
"4. Use set_output with key='research_summary' to save your "
"findings\n\n"
"Be thorough but efficient. Aim for 2-4 search/scrape calls, "
"then summarize and set_output."
),
)
ANALYST_SPEC = NodeSpec(
id="analyst",
name="Analyst",
description="Analyzes research findings and provides insights",
node_type="event_loop",
input_keys=["context"],
output_keys=["analysis"],
system_prompt=(
"You are a strategic analyst. You receive research findings from "
"a previous researcher and must:\n\n"
"1. Identify key themes and patterns\n"
"2. Assess the reliability and significance of the findings\n"
"3. Provide actionable insights and recommendations\n"
"4. Use set_output with key='analysis' to save your analysis\n\n"
"Be concise but insightful. Focus on what matters most."
),
)
# -------------------------------------------------------------------------
# HTML page
# -------------------------------------------------------------------------
HTML_PAGE = ( # noqa: E501
"""<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>ContextHandoff Demo</title>
<style>
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font-family: 'SF Mono', 'Fira Code', monospace;
background: #0d1117;
color: #c9d1d9;
height: 100vh;
display: flex;
flex-direction: column;
}
header {
background: #161b22;
padding: 12px 20px;
border-bottom: 1px solid #30363d;
display: flex;
align-items: center;
gap: 16px;
}
header h1 {
font-size: 16px;
color: #58a6ff;
font-weight: 600;
}
.badge {
font-size: 12px;
padding: 3px 10px;
border-radius: 12px;
background: #21262d;
color: #8b949e;
}
.badge.researcher {
background: #1a3a5c;
color: #58a6ff;
}
.badge.analyst {
background: #1a4b2e;
color: #3fb950;
}
.badge.handoff {
background: #3d1f00;
color: #d29922;
}
.badge.done {
background: #21262d;
color: #8b949e;
}
.badge.error {
background: #4b1a1a;
color: #f85149;
}
.chat {
flex: 1;
overflow-y: auto;
padding: 16px;
}
.msg {
margin: 8px 0;
padding: 10px 14px;
border-radius: 8px;
line-height: 1.6;
white-space: pre-wrap;
word-wrap: break-word;
}
.msg.user {
background: #1a3a5c;
color: #58a6ff;
}
.msg.assistant {
background: #161b22;
color: #c9d1d9;
}
.msg.assistant.analyst-msg {
border-left: 3px solid #3fb950;
}
.msg.event {
background: transparent;
color: #8b949e;
font-size: 11px;
padding: 4px 14px;
border-left: 3px solid #30363d;
}
.msg.event.loop {
border-left-color: #58a6ff;
}
.msg.event.tool {
border-left-color: #d29922;
}
.msg.event.stall {
border-left-color: #f85149;
}
.handoff-banner {
margin: 16px 0;
padding: 16px;
background: #1c1200;
border: 1px solid #d29922;
border-radius: 8px;
text-align: center;
}
.handoff-banner h3 {
color: #d29922;
font-size: 14px;
margin-bottom: 8px;
}
.handoff-banner p, .result-banner p {
color: #8b949e;
font-size: 12px;
line-height: 1.5;
max-height: 200px;
overflow-y: auto;
white-space: pre-wrap;
text-align: left;
}
.result-banner {
margin: 16px 0;
padding: 16px;
background: #0a2614;
border: 1px solid #3fb950;
border-radius: 8px;
}
.result-banner h3 {
color: #3fb950;
font-size: 14px;
margin-bottom: 8px;
text-align: center;
}
.result-banner .label {
color: #58a6ff;
font-size: 11px;
font-weight: 600;
margin-top: 10px;
margin-bottom: 2px;
}
.result-banner .tokens {
color: #484f58;
font-size: 11px;
text-align: center;
margin-top: 10px;
}
.input-bar {
padding: 12px 16px;
background: #161b22;
border-top: 1px solid #30363d;
display: flex;
gap: 8px;
}
.input-bar input {
flex: 1;
background: #0d1117;
border: 1px solid #30363d;
color: #c9d1d9;
padding: 8px 12px;
border-radius: 6px;
font-family: inherit;
font-size: 14px;
outline: none;
}
.input-bar input:focus {
border-color: #58a6ff;
}
.input-bar button {
background: #238636;
color: #fff;
border: none;
padding: 8px 20px;
border-radius: 6px;
cursor: pointer;
font-family: inherit;
font-weight: 600;
}
.input-bar button:hover {
background: #2ea043;
}
.input-bar button:disabled {
background: #21262d;
color: #484f58;
cursor: not-allowed;
}
</style>
</head>
<body>
<header>
<h1>ContextHandoff Demo</h1>
<span id="phase" class="badge">Idle</span>
<span id="iter" class="badge" style="display:none">Step 0</span>
</header>
<div id="chat" class="chat"></div>
<div class="input-bar">
<input id="input" type="text"
placeholder="Enter a research topic..." autofocus />
<button id="go" onclick="run()">Research</button>
</div>
<script>
let ws = null;
let currentAssistantEl = null;
let iterCount = 0;
let currentPhase = 'idle';
const chat = document.getElementById('chat');
const phase = document.getElementById('phase');
const iterEl = document.getElementById('iter');
const goBtn = document.getElementById('go');
const inputEl = document.getElementById('input');
inputEl.addEventListener('keydown', e => {
if (e.key === 'Enter') run();
});
function setPhase(text, cls) {
phase.textContent = text;
phase.className = 'badge ' + cls;
currentPhase = cls;
}
function addMsg(text, cls) {
const el = document.createElement('div');
el.className = 'msg ' + cls;
el.textContent = text;
chat.appendChild(el);
chat.scrollTop = chat.scrollHeight;
return el;
}
function addHandoffBanner(summary) {
const banner = document.createElement('div');
banner.className = 'handoff-banner';
const h3 = document.createElement('h3');
h3.textContent = 'Context Handoff: Researcher -> Analyst';
const p = document.createElement('p');
p.textContent = summary || 'Passing research context...';
banner.appendChild(h3);
banner.appendChild(p);
chat.appendChild(banner);
chat.scrollTop = chat.scrollHeight;
}
function addResultBanner(researcher, analyst, tokens) {
const banner = document.createElement('div');
banner.className = 'result-banner';
const h3 = document.createElement('h3');
h3.textContent = 'Pipeline Complete';
banner.appendChild(h3);
if (researcher && researcher.research_summary) {
const lbl = document.createElement('div');
lbl.className = 'label';
lbl.textContent = 'RESEARCH SUMMARY';
banner.appendChild(lbl);
const p = document.createElement('p');
p.textContent = researcher.research_summary;
banner.appendChild(p);
}
if (analyst && analyst.analysis) {
const lbl = document.createElement('div');
lbl.className = 'label';
lbl.textContent = 'ANALYSIS';
lbl.style.color = '#3fb950';
banner.appendChild(lbl);
const p = document.createElement('p');
p.textContent = analyst.analysis;
banner.appendChild(p);
}
if (tokens) {
const t = document.createElement('div');
t.className = 'tokens';
t.textContent = 'Total tokens: ' + tokens.toLocaleString();
banner.appendChild(t);
}
chat.appendChild(banner);
chat.scrollTop = chat.scrollHeight;
}
function connect() {
ws = new WebSocket('ws://' + location.host + '/ws');
ws.onopen = () => {
setPhase('Ready', 'done');
goBtn.disabled = false;
};
ws.onmessage = handleEvent;
ws.onerror = () => { setPhase('Error', 'error'); };
ws.onclose = () => {
setPhase('Reconnecting...', '');
goBtn.disabled = true;
setTimeout(connect, 2000);
};
}
function handleEvent(msg) {
const evt = JSON.parse(msg.data);
if (evt.type === 'phase') {
if (evt.phase === 'researcher') {
setPhase('Researcher', 'researcher');
} else if (evt.phase === 'handoff') {
setPhase('Handoff', 'handoff');
} else if (evt.phase === 'analyst') {
setPhase('Analyst', 'analyst');
}
iterCount = 0;
iterEl.style.display = 'none';
}
else if (evt.type === 'llm_text_delta') {
if (currentAssistantEl) {
currentAssistantEl.textContent += evt.content;
chat.scrollTop = chat.scrollHeight;
}
}
else if (evt.type === 'node_loop_iteration') {
iterCount = evt.iteration || (iterCount + 1);
iterEl.textContent = 'Step ' + iterCount;
iterEl.style.display = '';
}
else if (evt.type === 'tool_call_started') {
var info = evt.tool_name + '('
+ JSON.stringify(evt.tool_input).slice(0, 120) + ')';
addMsg('TOOL ' + info, 'event tool');
}
else if (evt.type === 'tool_call_completed') {
var preview = (evt.result || '').slice(0, 200);
var cls = evt.is_error ? 'stall' : 'tool';
addMsg(
'RESULT ' + evt.tool_name + ': ' + preview,
'event ' + cls
);
var assistCls = currentPhase === 'analyst'
? 'assistant analyst-msg' : 'assistant';
currentAssistantEl = addMsg('', assistCls);
}
else if (evt.type === 'handoff_context') {
addHandoffBanner(evt.summary);
var assistCls = 'assistant analyst-msg';
currentAssistantEl = addMsg('', assistCls);
}
else if (evt.type === 'node_result') {
if (evt.node_id === 'researcher') {
if (currentAssistantEl
&& !currentAssistantEl.textContent) {
currentAssistantEl.remove();
}
}
}
else if (evt.type === 'done') {
setPhase('Done', 'done');
iterEl.style.display = 'none';
if (currentAssistantEl
&& !currentAssistantEl.textContent) {
currentAssistantEl.remove();
}
currentAssistantEl = null;
addResultBanner(
evt.researcher, evt.analyst, evt.total_tokens
);
goBtn.disabled = false;
inputEl.placeholder = 'Enter another topic...';
}
else if (evt.type === 'error') {
setPhase('Error', 'error');
addMsg('ERROR ' + evt.message, 'event stall');
goBtn.disabled = false;
}
else if (evt.type === 'node_stalled') {
addMsg('STALLED ' + evt.reason, 'event stall');
}
}
function run() {
const text = inputEl.value.trim();
if (!text || !ws || ws.readyState !== 1) return;
chat.innerHTML = '';
addMsg(text, 'user');
currentAssistantEl = addMsg('', 'assistant');
inputEl.value = '';
goBtn.disabled = true;
ws.send(JSON.stringify({ topic: text }));
}
connect();
</script>
</body>
</html>"""
)
# -------------------------------------------------------------------------
# WebSocket handler — sequential Node A → Handoff → Node B
# -------------------------------------------------------------------------
async def handle_ws(websocket):
"""Run the two-node handoff pipeline per user message."""
try:
async for raw in websocket:
try:
msg = json.loads(raw)
except Exception:
continue
topic = msg.get("topic", "")
if not topic:
continue
logger.info(f"Starting handoff pipeline for: {topic}")
try:
await _run_pipeline(websocket, topic)
except websockets.exceptions.ConnectionClosed:
logger.info("WebSocket closed during pipeline")
return
except Exception as e:
logger.exception("Pipeline error")
try:
await websocket.send(json.dumps({"type": "error", "message": str(e)}))
except Exception:
pass
except websockets.exceptions.ConnectionClosed:
pass
async def _run_pipeline(websocket, topic: str):
"""Execute: Node A (research) → ContextHandoff → Node B (analysis)."""
import shutil
# Fresh stores for each run
run_dir = Path(tempfile.mkdtemp(prefix="hive_run_", dir=STORE_DIR))
store_a = FileConversationStore(run_dir / "node_a")
store_b = FileConversationStore(run_dir / "node_b")
# Shared event bus
bus = EventBus()
async def forward_event(event):
try:
payload = {"type": event.type.value, **event.data}
if event.node_id:
payload["node_id"] = event.node_id
await websocket.send(json.dumps(payload))
except Exception:
pass
bus.subscribe(
event_types=[
EventType.NODE_LOOP_STARTED,
EventType.NODE_LOOP_ITERATION,
EventType.NODE_LOOP_COMPLETED,
EventType.LLM_TEXT_DELTA,
EventType.TOOL_CALL_STARTED,
EventType.TOOL_CALL_COMPLETED,
EventType.NODE_STALLED,
],
handler=forward_event,
)
tools = list(TOOL_REGISTRY.get_tools().values())
tool_executor = TOOL_REGISTRY.get_executor()
# ---- Phase 1: Researcher ------------------------------------------------
await websocket.send(json.dumps({"type": "phase", "phase": "researcher"}))
node_a = EventLoopNode(
event_bus=bus,
judge=None, # implicit judge: accept when output_keys filled
config=LoopConfig(
max_iterations=20,
max_tool_calls_per_turn=30,
max_context_tokens=32_000,
),
conversation_store=store_a,
tool_executor=tool_executor,
)
ctx_a = NodeContext(
runtime=RUNTIME,
node_id="researcher",
node_spec=RESEARCHER_SPEC,
memory=SharedMemory(),
input_data={"topic": topic},
llm=LLM,
available_tools=tools,
)
result_a = await node_a.execute(ctx_a)
logger.info(
"Researcher done: success=%s, tokens=%s",
result_a.success,
result_a.tokens_used,
)
await websocket.send(
json.dumps(
{
"type": "node_result",
"node_id": "researcher",
"success": result_a.success,
"output": result_a.output,
}
)
)
if not result_a.success:
await websocket.send(
json.dumps(
{
"type": "error",
"message": f"Researcher failed: {result_a.error}",
}
)
)
return
# ---- Phase 2: Context Handoff -------------------------------------------
await websocket.send(json.dumps({"type": "phase", "phase": "handoff"}))
# Restore the researcher's conversation from store
conversation_a = await NodeConversation.restore(store_a)
if conversation_a is None:
await websocket.send(
json.dumps(
{
"type": "error",
"message": "Failed to restore researcher conversation",
}
)
)
return
handoff_engine = ContextHandoff(llm=LLM)
handoff_context = handoff_engine.summarize_conversation(
conversation=conversation_a,
node_id="researcher",
output_keys=["research_summary"],
)
formatted_handoff = ContextHandoff.format_as_input(handoff_context)
logger.info(
"Handoff: %d turns, ~%d tokens, keys=%s",
handoff_context.turn_count,
handoff_context.total_tokens_used,
list(handoff_context.key_outputs.keys()),
)
# Send handoff context to browser
await websocket.send(
json.dumps(
{
"type": "handoff_context",
"summary": handoff_context.summary[:500],
"turn_count": handoff_context.turn_count,
"tokens": handoff_context.total_tokens_used,
"key_outputs": handoff_context.key_outputs,
}
)
)
# ---- Phase 3: Analyst ---------------------------------------------------
await websocket.send(json.dumps({"type": "phase", "phase": "analyst"}))
node_b = EventLoopNode(
event_bus=bus,
judge=None, # implicit judge
config=LoopConfig(
max_iterations=10,
max_tool_calls_per_turn=30,
max_context_tokens=32_000,
),
conversation_store=store_b,
)
ctx_b = NodeContext(
runtime=RUNTIME,
node_id="analyst",
node_spec=ANALYST_SPEC,
memory=SharedMemory(),
input_data={"context": formatted_handoff},
llm=LLM,
available_tools=[],
)
result_b = await node_b.execute(ctx_b)
logger.info(
"Analyst done: success=%s, tokens=%s",
result_b.success,
result_b.tokens_used,
)
# ---- Done ---------------------------------------------------------------
await websocket.send(
json.dumps(
{
"type": "done",
"researcher": result_a.output,
"analyst": result_b.output,
"total_tokens": ((result_a.tokens_used or 0) + (result_b.tokens_used or 0)),
}
)
)
# Clean up temp stores
try:
shutil.rmtree(run_dir)
except Exception:
pass
# -------------------------------------------------------------------------
# HTTP handler
# -------------------------------------------------------------------------
async def process_request(connection, request: Request):
"""Serve HTML on GET /, upgrade to WebSocket on /ws."""
if request.path == "/ws":
return None
return Response(
HTTPStatus.OK,
"OK",
websockets.Headers({"Content-Type": "text/html; charset=utf-8"}),
HTML_PAGE.encode(),
)
# -------------------------------------------------------------------------
# Main
# -------------------------------------------------------------------------
async def main():
port = 8766
async with websockets.serve(
handle_ws,
"0.0.0.0",
port,
process_request=process_request,
):
logger.info(f"Handoff demo at http://localhost:{port}")
logger.info("Enter a research topic to start the pipeline.")
await asyncio.Future()
if __name__ == "__main__":
asyncio.run(main())
File diff suppressed because it is too large
+17 -31
@@ -287,44 +287,28 @@ visible to the user immediately. The draft captures business logic \
Include in each node: id, name, description, planned tools, \
input/output keys, and success criteria as high-level hints.
Each node is auto-classified into an ISO 5807 flowchart symbol type \
with a unique color. You can override auto-detection by setting \
`flowchart_type` explicitly on a node. Common types:
Each node is auto-classified into a flowchart symbol type with a unique \
color. You can override auto-detection by setting `flowchart_type` \
explicitly on a node. Available types:
**Core symbols:**
- **start** (green, stadium): Entry point / trigger
- **terminal** (red, stadium): End of flow
- **process** (blue, rectangle): Standard processing step
- **decision** (amber, diamond): Conditional branching
- **io** (purple, parallelogram): External data input/output
- **document** (blue-grey, wavy rect): Report or document generation
- **subprocess** (teal, subroutine): Delegated sub-agent / predefined process
- **preparation** (brown, hexagon): Setup / initialization step
- **manual_operation** (pink, trapezoid): Human-in-the-loop / manual review
- **delay** (orange, D-shape): Wait / throttle / cooldown
- **display** (cyan): Present results to user
**Data storage:**
- **database** (light green, cylinder): Database or data store
- **stored_data** (lime): Generic persistent data
- **internal_storage** (amber): In-memory / cache
**Flow operations:**
- **merge** (indigo, inv. triangle): Combine multiple inputs
- **extract** (indigo, triangle): Split or filter data
- **connector** (grey, circle): On-page link
- **offpage_connector** (dark grey, pentagon): Cross-page link
**Domain-specific:**
- **browser** (dark indigo, hexagon): GCU browser automation / sub-agent \
- **start** (sage green, stadium): Entry point / trigger
- **terminal** (dusty red, stadium): End of flow
- **process** (blue-gray, rectangle): Standard processing step
- **decision** (warm amber, diamond): Conditional branching
- **io** (dusty purple, parallelogram): External data input/output
- **document** (steel blue, wavy rect): Report or document generation
- **database** (muted teal, cylinder): Database or data store
- **subprocess** (dark cyan, subroutine): Delegated sub-agent / predefined process
- **browser** (deep blue, hexagon): GCU browser automation / sub-agent \
delegation. At build time, browser nodes are dissolved into the parent \
node's sub_agents list. Use for any GCU or sub-agent leaf node.
Auto-detection works well for most cases: first node → start, nodes with \
no outgoing edges → terminal, nodes with multiple conditional outgoing \
edges → decision, GCU nodes → browser, nodes mentioning "database" → \
database, nodes mentioning "report/document" → document, etc. Set \
flowchart_type explicitly only when auto-detection would be wrong.
database, nodes mentioning "report/document" → document, I/O tools like \
send_email → io. Everything else defaults to process. Set flowchart_type \
explicitly only when auto-detection would be wrong.
## Decision Nodes — Planning-Only Conditional Branching
@@ -1160,6 +1144,8 @@ Batch your response — do not call run_agent_with_input() once per trigger.
config since last run), skip it and inform the user.
- Never disable a trigger without telling the user. Use remove_trigger() only \
when explicitly asked or when the trigger is clearly obsolete.
- When the user asks to remove or disable a trigger, you MUST call remove_trigger(trigger_id). \
Never just say "it's removed" without actually calling the tool.
"""
# -- Backward-compatible composed versions (used by queen_node.system_prompt default) --
+4
@@ -19,6 +19,10 @@ from framework.graph.edge import DEFAULT_MAX_TOKENS
# ---------------------------------------------------------------------------
HIVE_CONFIG_FILE = Path.home() / ".hive" / "configuration.json"
# Hive LLM router endpoint (Anthropic-compatible).
# litellm's Anthropic handler appends /v1/messages, so this is just the base host.
HIVE_LLM_ENDPOINT = "https://api.adenhq.com"
logger = logging.getLogger(__name__)
+13 -1
@@ -504,9 +504,21 @@ class EventLoopNode(NodeProtocol):
_restored_tool_fingerprints = []
# Fresh conversation: either isolated mode or first node in continuous mode.
from framework.graph.prompt_composer import _with_datetime
from framework.graph.prompt_composer import (
EXECUTION_SCOPE_PREAMBLE,
_with_datetime,
)
system_prompt = _with_datetime(ctx.node_spec.system_prompt or "")
# Prepend execution-scope preamble for worker nodes so the
# LLM knows it is one step in a pipeline and should not try
# to perform work that belongs to other nodes.
if (
not ctx.is_subagent_mode
and ctx.node_spec.node_type in ("event_loop", "gcu")
and ctx.node_spec.output_keys
):
system_prompt = f"{EXECUTION_SCOPE_PREAMBLE}\n\n{system_prompt}"
# Prepend GCU browser best-practices prompt for gcu nodes
if ctx.node_spec.node_type == "gcu":
from framework.graph.gcu import GCU_BROWSER_SYSTEM_PROMPT
+7 -1
@@ -1420,6 +1420,7 @@ class GraphExecutor:
next_spec = graph.get_node(current_node_id)
if next_spec and next_spec.node_type == "event_loop":
from framework.graph.prompt_composer import (
EXECUTION_SCOPE_PREAMBLE,
build_accounts_prompt,
build_narrative,
build_transition_marker,
@@ -1459,9 +1460,14 @@ class GraphExecutor:
)
# Compose new system prompt (Layer 1 + 2 + 3 + accounts)
# Prepend scope preamble to focus so the LLM stays
# within this node's responsibility.
_focus = next_spec.system_prompt
if next_spec.output_keys and _focus:
_focus = f"{EXECUTION_SCOPE_PREAMBLE}\n\n{_focus}"
new_system = compose_system_prompt(
identity_prompt=getattr(graph, "identity_prompt", None),
focus_prompt=next_spec.system_prompt,
focus_prompt=_focus,
narrative=narrative,
accounts_prompt=_node_accounts,
)
+16
@@ -26,6 +26,16 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# Injected into every worker node's system prompt so the LLM understands
# it is one step in a multi-node pipeline and should not overreach.
EXECUTION_SCOPE_PREAMBLE = (
"EXECUTION SCOPE: You are one node in a multi-step workflow graph. "
"Focus ONLY on the task described in your instructions below. "
"Call set_output() for each of your declared output keys, then stop. "
"Do NOT attempt work that belongs to other nodes — the framework "
"routes data between nodes automatically."
)
def _with_datetime(prompt: str) -> str:
"""Append current datetime with local timezone to a system prompt."""
@@ -306,6 +316,12 @@ def build_transition_marker(
# Next phase
sections.append(f"\nNow entering: {next_node.name}")
sections.append(f" {next_node.description}")
if next_node.output_keys:
sections.append(
f"\nYour ONLY job in this phase: complete the task above and call "
f"set_output() for {next_node.output_keys}. Do NOT do work that "
f"belongs to later phases."
)
# Reflection prompt (engineered metacognition)
sections.append(
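The conditional prepending shown in these hunks can be sketched in miniature. This is a hedged illustration: `compose_focus_prompt` is an invented helper name, and the preamble text is abbreviated; the real logic lives in framework.graph.prompt_composer and the executor.

```python
# Minimal sketch of the scope-preamble behavior described above.
# Names here are illustrative, not the project's actual API.
EXECUTION_SCOPE_PREAMBLE = (
    "EXECUTION SCOPE: You are one node in a multi-step workflow graph. "
    "Focus ONLY on the task described in your instructions below."
)

def compose_focus_prompt(focus_prompt: str, output_keys: list[str]) -> str:
    """Prepend the scope preamble only for nodes that declare output keys."""
    if focus_prompt and output_keys:
        return f"{EXECUTION_SCOPE_PREAMBLE}\n\n{focus_prompt}"
    return focus_prompt
```

Nodes without declared outputs (or without a focus prompt) are left untouched, matching the `if next_spec.output_keys and _focus` guard in the executor hunk.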
+15 -3
@@ -115,11 +115,23 @@ class SafeEvalVisitor(ast.NodeVisitor):
return True
def visit_BoolOp(self, node: ast.BoolOp) -> Any:
values = [self.visit(v) for v in node.values]
# Short-circuit evaluation to match Python semantics.
# Previously all operands were eagerly evaluated, which broke
# guard patterns like: ``x is not None and x.get("key")``
if isinstance(node.op, ast.And):
return all(values)
result = True
for v in node.values:
result = self.visit(v)
if not result:
return result
return result
elif isinstance(node.op, ast.Or):
return any(values)
result = False
for v in node.values:
result = self.visit(v)
if result:
return result
return result
raise ValueError(f"Boolean operator {type(node.op).__name__} is not allowed")
def visit_IfExp(self, node: ast.IfExp) -> Any:
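The short-circuit fix above can be isolated in a tiny subset evaluator. This is a sketch, not the project's SafeEvalVisitor: it supports only names, constants, `is not`, attribute calls, and `and`/`or`, just enough to show why eager evaluation broke guard patterns.

```python
import ast

class MiniEval(ast.NodeVisitor):
    """Tiny evaluator sketch; only the BoolOp logic mirrors the fix above."""

    def __init__(self, names: dict):
        self.names = names

    def visit_Name(self, node):
        return self.names[node.id]

    def visit_Constant(self, node):
        return node.value

    def visit_Compare(self, node):
        left = self.visit(node.left)
        right = self.visit(node.comparators[0])
        if isinstance(node.ops[0], ast.IsNot):
            return left is not right
        raise ValueError("comparison operator not allowed in this sketch")

    def visit_Call(self, node):
        # Only attribute calls like x.get("key"), as in the guard pattern.
        obj = self.visit(node.func.value)
        args = [self.visit(a) for a in node.args]
        return getattr(obj, node.func.attr)(*args)

    def visit_BoolOp(self, node):
        # Short-circuit: stop visiting operands once the result is known, so
        # `x is not None and x.get("key")` never touches .get when x is None.
        if isinstance(node.op, ast.And):
            result = True
            for v in node.values:
                result = self.visit(v)
                if not result:
                    return result
            return result
        result = False
        for v in node.values:
            result = self.visit(v)
            if result:
                return result
        return result

def mini_eval(expr: str, names: dict):
    return MiniEval(names).visit(ast.parse(expr, mode="eval").body)

# Eager evaluation of all operands would raise AttributeError on None.get(...)
print(mini_eval('x is not None and x.get("key")', {"x": None}))  # False
```

Note the fix also preserves Python's operand-value semantics: `and`/`or` return the last operand evaluated, not a coerced bool as `all()`/`any()` did.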
+7
@@ -23,6 +23,7 @@ except ImportError:
litellm = None # type: ignore[assignment]
RateLimitError = Exception # type: ignore[assignment, misc]
from framework.config import HIVE_LLM_ENDPOINT as HIVE_API_BASE
from framework.llm.provider import LLMProvider, LLMResponse, Tool
from framework.llm.stream_events import StreamEvent
@@ -399,6 +400,10 @@ class LiteLLMProvider(LLMProvider):
# Strip a trailing /v1 in case the user's saved config has the old value.
if api_base and api_base.rstrip("/").endswith("/v1"):
api_base = api_base.rstrip("/")[:-3]
elif model.lower().startswith("hive/"):
model = "anthropic/" + model[len("hive/") :]
if api_base and api_base.rstrip("/").endswith("/v1"):
api_base = api_base.rstrip("/")[:-3]
self.model = model
self.api_key = api_key
self.api_base = api_base or self._default_api_base_for_model(_original_model)
@@ -428,6 +433,8 @@ class LiteLLMProvider(LLMProvider):
return MINIMAX_API_BASE
if model_lower.startswith("kimi/"):
return KIMI_API_BASE
if model_lower.startswith("hive/"):
return HIVE_API_BASE
return None
def _completion_with_rate_limit_retry(
+4
@@ -206,6 +206,10 @@ def configure_logging(
root_logger.addHandler(handler)
root_logger.setLevel(level.upper())
# Suppress noisy LiteLLM INFO logs (model/provider line + Provider List URL
# printed on every single completion call). Warnings and errors still show.
logging.getLogger("LiteLLM").setLevel(logging.WARNING)
# When in JSON mode, configure known third-party loggers to use JSON formatter
# This ensures libraries like LiteLLM, httpcore also output clean JSON
if format == "json":
+10
@@ -28,6 +28,7 @@ from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import AgentRuntime, AgentRuntimeConfig, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from framework.runtime.runtime_log_store import RuntimeLogStore
from framework.tools.flowchart_utils import generate_fallback_flowchart
if TYPE_CHECKING:
from framework.runner.protocol import AgentMessage, CapabilityResponse
@@ -959,6 +960,8 @@ class AgentRunner:
graph = GraphSpec(**graph_kwargs)
# Generate flowchart.json if missing (for template/legacy agents)
generate_fallback_flowchart(graph, goal, agent_path)
# Read skill configuration from agent module
agent_default_skills = getattr(agent_module, "default_skills", None)
agent_skills = getattr(agent_module, "skills", None)
@@ -1011,6 +1014,9 @@ class AgentRunner:
except json.JSONDecodeError as exc:
raise ValueError(f"Invalid JSON in agent export file: {agent_json_path}") from exc
# Generate flowchart.json if missing (for legacy JSON-based agents)
generate_fallback_flowchart(graph, goal, agent_path)
runner = cls(
agent_path=agent_path,
graph=graph,
@@ -1389,6 +1395,8 @@ class AgentRunner:
return "MINIMAX_API_KEY"
elif model_lower.startswith("kimi/"):
return "KIMI_API_KEY"
elif model_lower.startswith("hive/"):
return "HIVE_API_KEY"
else:
# Default: assume OpenAI-compatible
return "OPENAI_API_KEY"
@@ -1411,6 +1419,8 @@ class AgentRunner:
cred_id = "minimax"
elif model_lower.startswith("kimi/"):
cred_id = "kimi"
elif model_lower.startswith("hive/"):
cred_id = "hive"
# Add more mappings as providers are added to LLM_CREDENTIALS
if cred_id is None:
+17 -5
@@ -455,11 +455,23 @@ class ToolRegistry:
for server_config in server_list:
server_config = self._resolve_mcp_server_config(server_config, base_dir)
try:
self.register_mcp_server(server_config)
except Exception as e:
name = server_config.get("name", "unknown")
logger.warning(f"Failed to register MCP server '{name}': {e}")
for _attempt in range(2):
try:
self.register_mcp_server(server_config)
break
except Exception as e:
name = server_config.get("name", "unknown")
if _attempt == 0:
logger.warning(
"MCP server '%s' failed to register, retrying in 2s: %s",
name,
e,
)
import time
time.sleep(2)
else:
logger.warning("MCP server '%s' failed after retry: %s", name, e)
# Snapshot credential files and ADEN_API_KEY so we can detect mid-session changes
self._mcp_cred_snapshot = self._snapshot_credentials()
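The retry-once pattern in this hunk generalizes to a small helper. A sketch under stated assumptions: `register_with_retry` and its parameters are invented names for illustration, not the ToolRegistry API.

```python
# Generic form of the "warn, wait, retry once, then surface the error"
# pattern above. Blocking time.sleep matches the synchronous registration
# path shown in the diff.
import logging
import time

logger = logging.getLogger(__name__)

def register_with_retry(register, name: str, attempts: int = 2, delay: float = 2.0) -> bool:
    for attempt in range(attempts):
        try:
            register()
            return True
        except Exception as e:
            if attempt < attempts - 1:
                logger.warning("'%s' failed to register, retrying in %ss: %s", name, delay, e)
                time.sleep(delay)
            else:
                logger.warning("'%s' failed after retry: %s", name, e)
    return False
```

As in the diff, a persistent failure is logged but not raised, so one bad MCP server cannot block the rest of the registry from loading.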
+1
@@ -159,6 +159,7 @@ class EventType(StrEnum):
TRIGGER_DEACTIVATED = "trigger_deactivated"
TRIGGER_FIRED = "trigger_fired"
TRIGGER_REMOVED = "trigger_removed"
TRIGGER_UPDATED = "trigger_updated"
@dataclass
+50 -1
@@ -6,7 +6,7 @@ import logging
from aiohttp import web
from aiohttp.client_exceptions import ClientConnectionResetError as _AiohttpConnReset
from framework.runtime.event_bus import EventType
from framework.runtime.event_bus import AgentEvent, EventType
from framework.server.app import resolve_session
logger = logging.getLogger(__name__)
@@ -46,6 +46,7 @@ DEFAULT_EVENT_TYPES = [
EventType.TRIGGER_DEACTIVATED,
EventType.TRIGGER_FIRED,
EventType.TRIGGER_REMOVED,
EventType.TRIGGER_UPDATED,
EventType.DRAFT_GRAPH_UPDATED,
]
@@ -165,6 +166,54 @@ async def handle_events(request: web.Request) -> web.StreamResponse:
if replayed:
logger.info("SSE replayed %d buffered events for session='%s'", replayed, session.id)
# Inject a live-status snapshot so the frontend knows which nodes are
# currently running. This covers the case where the user navigated away
# and back — the localStorage snapshot is stale, and the ring-buffer
# replay may not include the original node_loop_started events.
worker_runtime = getattr(session, "worker_runtime", None)
if worker_runtime and getattr(worker_runtime, "is_running", False):
try:
for stream_info in worker_runtime.get_active_streams():
graph_id = stream_info.get("graph_id")
stream_id = stream_info.get("stream_id", "default")
for exec_id in stream_info.get("active_execution_ids", []):
# Synthesize execution_started so frontend sets workerRunState
synth_exec = AgentEvent(
type=EventType.EXECUTION_STARTED,
stream_id=stream_id,
execution_id=exec_id,
graph_id=graph_id,
data={"synthetic": True},
).to_dict()
try:
queue.put_nowait(synth_exec)
except asyncio.QueueFull:
pass
# Find the currently executing node via the executor
for _gid, reg in worker_runtime._graphs.items():
if _gid != graph_id:
continue
for _ep_id, stream in reg.streams.items():
for exec_id, executor in stream._active_executors.items():
current = getattr(executor, "current_node_id", None)
if current:
synth_node = AgentEvent(
type=EventType.NODE_LOOP_STARTED,
stream_id=stream_id,
node_id=current,
execution_id=exec_id,
graph_id=graph_id,
data={"synthetic": True},
).to_dict()
try:
queue.put_nowait(synth_node)
except asyncio.QueueFull:
pass
logger.info("SSE injected live-status snapshot for session='%s'", session.id)
except Exception:
logger.debug("Failed to inject live-status snapshot", exc_info=True)
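The snapshot injection above deliberately drops synthetic events when the per-client SSE queue is full rather than blocking the handler. A minimal standalone sketch of that best-effort enqueue pattern (the `offer` helper and event dicts are illustrative, not part of the codebase):

```python
import asyncio

def offer(queue: asyncio.Queue, event: dict) -> bool:
    """Best-effort enqueue: a full queue drops the event instead of blocking."""
    try:
        queue.put_nowait(event)
        return True
    except asyncio.QueueFull:
        return False

# A size-1 queue accepts the first synthetic event and drops the second.
q: asyncio.Queue = asyncio.Queue(maxsize=1)
first = offer(q, {"type": "execution_started", "data": {"synthetic": True}})
second = offer(q, {"type": "node_loop_started", "data": {"synthetic": True}})
```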
event_count = 0
close_reason = "unknown"
try:
+118 -8
@@ -24,6 +24,8 @@ Worker session browsing (persisted execution runs on disk):
"""
import asyncio
import contextlib
import json
import logging
import shutil
@@ -64,7 +66,9 @@ def _session_to_live_dict(session) -> dict:
"loaded_at": session.loaded_at,
"uptime_seconds": round(time.time() - session.loaded_at, 1),
"intro_message": getattr(session.runner, "intro_message", "") or "",
"queen_phase": phase_state.phase if phase_state else "planning",
"queen_phase": phase_state.phase
if phase_state
else ("staging" if session.worker_runtime else "planning"),
}
@@ -406,7 +410,7 @@ async def handle_session_entry_points(request: web.Request) -> web.Response:
async def handle_update_trigger_task(request: web.Request) -> web.Response:
"""PATCH /api/sessions/{session_id}/triggers/{trigger_id} — update trigger task."""
"""PATCH /api/sessions/{session_id}/triggers/{trigger_id} — update trigger fields."""
session, err = resolve_session(request)
if err:
return err
@@ -425,30 +429,136 @@ async def handle_update_trigger_task(request: web.Request) -> web.Response:
except Exception:
return web.json_response({"error": "Invalid JSON body"}, status=400)
task = body.get("task")
if task is None:
return web.json_response({"error": "Missing 'task' field"}, status=400)
if not isinstance(task, str):
return web.json_response({"error": "'task' must be a string"}, status=400)
updates: dict[str, object] = {}
tdef.task = task
if "task" in body:
task = body.get("task")
if not isinstance(task, str):
return web.json_response({"error": "'task' must be a string"}, status=400)
tdef.task = task
updates["task"] = tdef.task
trigger_config_update = body.get("trigger_config")
if trigger_config_update is not None:
if not isinstance(trigger_config_update, dict):
return web.json_response(
{"error": "'trigger_config' must be an object"},
status=400,
)
merged_trigger_config = dict(tdef.trigger_config)
merged_trigger_config.update(trigger_config_update)
if tdef.trigger_type == "timer":
cron_expr = merged_trigger_config.get("cron")
interval = merged_trigger_config.get("interval_minutes")
if cron_expr is not None and not isinstance(cron_expr, str):
return web.json_response(
{"error": "'trigger_config.cron' must be a string"},
status=400,
)
if cron_expr:
try:
from croniter import croniter
if not croniter.is_valid(cron_expr):
return web.json_response(
{"error": f"Invalid cron expression: {cron_expr}"},
status=400,
)
except ImportError:
return web.json_response(
{
"error": (
"croniter package not installed — cannot validate cron expression."
)
},
status=500,
)
merged_trigger_config.pop("interval_minutes", None)
elif interval is None:
return web.json_response(
{
"error": (
"Timer trigger needs 'cron' or 'interval_minutes' in trigger_config."
)
},
status=400,
)
elif not isinstance(interval, (int, float)) or interval <= 0:
return web.json_response(
{"error": "'trigger_config.interval_minutes' must be > 0"},
status=400,
)
tdef.trigger_config = merged_trigger_config
updates["trigger_config"] = tdef.trigger_config
if not updates:
return web.json_response(
{"error": "Provide at least one of 'task' or 'trigger_config'"},
status=400,
)
# Persist to session state and agent definition
from framework.tools.queen_lifecycle_tools import (
_persist_active_triggers,
_save_trigger_to_agent,
_start_trigger_timer,
_start_trigger_webhook,
)
if "trigger_config" in updates and trigger_id in getattr(session, "active_trigger_ids", set()):
task = session.active_timer_tasks.pop(trigger_id, None)
if task and not task.done():
task.cancel()
with contextlib.suppress(asyncio.CancelledError):
await task
getattr(session, "trigger_next_fire", {}).pop(trigger_id, None)
webhook_subs = getattr(session, "active_webhook_subs", {})
if sub_id := webhook_subs.pop(trigger_id, None):
with contextlib.suppress(Exception):
session.event_bus.unsubscribe(sub_id)
if tdef.trigger_type == "timer":
await _start_trigger_timer(session, trigger_id, tdef)
elif tdef.trigger_type == "webhook":
await _start_trigger_webhook(session, trigger_id, tdef)
if trigger_id in getattr(session, "active_trigger_ids", set()):
session_id = request.match_info["session_id"]
await _persist_active_triggers(session, session_id)
_save_trigger_to_agent(session, trigger_id, tdef)
# Emit SSE event so the frontend updates the graph and detail panel
bus = getattr(session, "event_bus", None)
if bus:
from framework.runtime.event_bus import AgentEvent, EventType
await bus.publish(
AgentEvent(
type=EventType.TRIGGER_UPDATED,
stream_id="queen",
data={
"trigger_id": trigger_id,
"task": tdef.task,
"trigger_config": tdef.trigger_config,
"trigger_type": tdef.trigger_type,
"name": tdef.description or trigger_id,
"entry_node": getattr(
getattr(getattr(session, "runner", None), "graph", None),
"entry_node",
None,
),
},
)
)
return web.json_response(
{
"trigger_id": trigger_id,
"task": tdef.task,
"trigger_config": tdef.trigger_config,
}
)
+6
@@ -868,6 +868,10 @@ class SessionManager:
event_type = (
EventType.TRIGGER_AVAILABLE if kind == "available" else EventType.TRIGGER_REMOVED
)
# Resolve graph entry node for trigger target
runner = getattr(session, "runner", None)
graph_entry = runner.graph.entry_node if runner else None
for t in triggers.values():
await session.event_bus.publish(
AgentEvent(
@@ -877,6 +881,8 @@ class SessionManager:
"trigger_id": t.id,
"trigger_type": t.trigger_type,
"trigger_config": t.trigger_config,
"name": t.description or t.id,
**({"entry_node": graph_entry} if graph_entry else {}),
},
)
)
+67
@@ -5,6 +5,7 @@ Uses aiohttp TestClient with mocked sessions to test all endpoints
without requiring actual LLM calls or agent loading.
"""
import asyncio
import json
from dataclasses import dataclass, field
from pathlib import Path
@@ -13,6 +14,7 @@ from unittest.mock import AsyncMock, MagicMock
import pytest
from aiohttp.test_utils import TestClient, TestServer
from framework.runtime.triggers import TriggerDefinition
from framework.server.app import create_app
from framework.server.session_manager import Session
@@ -172,6 +174,7 @@ def _make_session(
runner.intro_message = "Test intro"
mock_event_bus = MagicMock()
mock_event_bus.publish = AsyncMock()
mock_llm = MagicMock()
queen_executor = _make_queen_executor() if with_queen else None
@@ -484,6 +487,70 @@ class TestSessionCRUD:
data = await resp.json()
assert "primary" in data["graphs"]
@pytest.mark.asyncio
async def test_update_trigger_task(self, tmp_path):
session = _make_session(tmp_dir=tmp_path)
session.available_triggers["daily"] = TriggerDefinition(
id="daily",
trigger_type="timer",
trigger_config={"cron": "0 5 * * *"},
task="Old task",
)
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.patch(
"/api/sessions/test_agent/triggers/daily",
json={"task": "New task"},
)
assert resp.status == 200
data = await resp.json()
assert data["task"] == "New task"
assert data["trigger_config"]["cron"] == "0 5 * * *"
assert session.available_triggers["daily"].task == "New task"
@pytest.mark.asyncio
async def test_update_trigger_cron_restarts_active_timer(self, tmp_path):
session = _make_session(tmp_dir=tmp_path)
session.available_triggers["daily"] = TriggerDefinition(
id="daily",
trigger_type="timer",
trigger_config={"cron": "0 5 * * *"},
task="Run task",
active=True,
)
session.active_trigger_ids.add("daily")
session.active_timer_tasks["daily"] = asyncio.create_task(asyncio.sleep(60))
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.patch(
"/api/sessions/test_agent/triggers/daily",
json={"trigger_config": {"cron": "0 6 * * *"}},
)
assert resp.status == 200
data = await resp.json()
assert data["trigger_config"]["cron"] == "0 6 * * *"
assert "daily" in session.active_timer_tasks
assert session.active_timer_tasks["daily"] is not None
assert session.available_triggers["daily"].trigger_config["cron"] == "0 6 * * *"
session.active_timer_tasks["daily"].cancel()
@pytest.mark.asyncio
async def test_update_trigger_cron_rejects_invalid_expression(self, tmp_path):
session = _make_session(tmp_dir=tmp_path)
session.available_triggers["daily"] = TriggerDefinition(
id="daily",
trigger_type="timer",
trigger_config={"cron": "0 5 * * *"},
task="Run task",
)
app = _make_app_with_session(session)
async with TestClient(TestServer(app)) as client:
resp = await client.patch(
"/api/sessions/test_agent/triggers/daily",
json={"trigger_config": {"cron": "not a cron"}},
)
assert resp.status == 400
class TestExecution:
@pytest.mark.asyncio
+374
@@ -0,0 +1,374 @@
"""Flowchart utilities for generating and persisting flowchart.json files.
Extracted from queen_lifecycle_tools so that non-Queen code paths
(e.g., AgentRunner.load) can generate flowcharts for legacy agents
that lack a flowchart.json.
"""
from __future__ import annotations
import json
import logging
from pathlib import Path
from typing import Any
logger = logging.getLogger(__name__)
FLOWCHART_FILENAME = "flowchart.json"
# ── Flowchart type catalogue (9 types) ───────────────────────────────────────
FLOWCHART_TYPES = {
"start": {"shape": "stadium", "color": "#8aad3f"}, # spring pollen
"terminal": {"shape": "stadium", "color": "#b5453a"}, # propolis red
"process": {"shape": "rectangle", "color": "#b5a575"}, # warm wheat
"decision": {"shape": "diamond", "color": "#d89d26"}, # royal honey
"io": {"shape": "parallelogram", "color": "#d06818"}, # burnt orange
"document": {"shape": "document", "color": "#c4b830"}, # goldenrod
"database": {"shape": "cylinder", "color": "#508878"}, # sage teal
"subprocess": {"shape": "subroutine", "color": "#887a48"}, # propolis gold
"browser": {"shape": "hexagon", "color": "#cc8850"}, # honey copper
}
# Backward-compat remap: old type names → canonical type
FLOWCHART_REMAP: dict[str, str] = {
"delay": "process",
"manual_operation": "process",
"preparation": "process",
"merge": "process",
"alternate_process": "process",
"connector": "process",
"offpage_connector": "process",
"extract": "process",
"sort": "process",
"collate": "process",
"summing_junction": "process",
"or": "process",
"comment": "process",
"display": "io",
"manual_input": "io",
"multi_document": "document",
"stored_data": "database",
"internal_storage": "database",
}
# ── File persistence ─────────────────────────────────────────────────────────
def save_flowchart_file(
agent_path: Path | str | None,
original_draft: dict,
flowchart_map: dict[str, list[str]] | None,
) -> None:
"""Persist the flowchart to the agent's folder."""
if agent_path is None:
return
p = Path(agent_path)
if not p.is_dir():
return
try:
target = p / FLOWCHART_FILENAME
target.write_text(
json.dumps(
{"original_draft": original_draft, "flowchart_map": flowchart_map},
indent=2,
),
encoding="utf-8",
)
logger.debug("Flowchart saved to %s", target)
except Exception:
logger.warning("Failed to save flowchart to %s", p, exc_info=True)
def load_flowchart_file(
agent_path: Path | str | None,
) -> tuple[dict | None, dict[str, list[str]] | None]:
"""Load flowchart from the agent's folder. Returns (original_draft, flowchart_map)."""
if agent_path is None:
return None, None
target = Path(agent_path) / FLOWCHART_FILENAME
if not target.is_file():
return None, None
try:
data = json.loads(target.read_text(encoding="utf-8"))
return data.get("original_draft"), data.get("flowchart_map")
except Exception:
logger.warning("Failed to load flowchart from %s", target, exc_info=True)
return None, None
# ── Node classification ──────────────────────────────────────────────────────
def classify_flowchart_node(
node: dict,
index: int,
total: int,
edges: list[dict],
terminal_ids: set[str],
) -> str:
"""Auto-detect the ISO 5807 flowchart type for a draft node.
Priority: explicit override > structural detection > heuristic > default.
"""
# Explicit override from the queen
explicit = node.get("flowchart_type", "").strip()
if explicit and explicit in FLOWCHART_TYPES:
return explicit
if explicit and explicit in FLOWCHART_REMAP:
return FLOWCHART_REMAP[explicit]
node_id = node["id"]
node_type = node.get("node_type", "event_loop")
node_tools = set(node.get("tools") or [])
desc = (node.get("description") or "").lower()
# GCU / browser automation nodes → hexagon
if node_type == "gcu":
return "browser"
    # Entry node (first node or no incoming edges) → start terminator
    incoming = {e["target"] for e in edges}
    if index == 0 or node_id not in incoming:
        return "start"
# Terminal node → end terminator
if node_id in terminal_ids:
return "terminal"
# Decision node: has outgoing edges with branching conditions → diamond
outgoing = [e for e in edges if e["source"] == node_id]
if len(outgoing) >= 2:
conditions = {e.get("condition", "on_success") for e in outgoing}
if len(conditions) > 1 or conditions - {"on_success"}:
return "decision"
# Sub-agent / subprocess nodes → subroutine (double-bordered rect)
if node.get("sub_agents"):
return "subprocess"
# Database / data store nodes → cylinder
db_tool_hints = {
"query_database",
"sql_query",
"read_table",
"write_table",
"save_data",
"load_data",
}
db_desc_hints = {"database", "data store", "storage", "persist", "cache"}
if node_tools & db_tool_hints or any(h in desc for h in db_desc_hints):
return "database"
# Document generation nodes → document shape
doc_tool_hints = {
"generate_report",
"create_document",
"write_report",
"render_template",
"export_pdf",
}
doc_desc_hints = {"report", "document", "summary", "write up", "writeup"}
if node_tools & doc_tool_hints or any(h in desc for h in doc_desc_hints):
return "document"
# I/O nodes: external data ingestion or delivery → parallelogram
io_tool_hints = {
"serve_file_to_user",
"send_email",
"post_message",
"upload_file",
"download_file",
"fetch_url",
"post_to_slack",
"send_notification",
"display_results",
}
io_desc_hints = {"deliver", "send", "output", "notify", "publish"}
if node_tools & io_tool_hints or any(h in desc for h in io_desc_hints):
return "io"
# Default: process (rectangle)
return "process"
# ── Draft synthesis from runtime graph ───────────────────────────────────────
def synthesize_draft_from_runtime(
runtime_nodes: list,
runtime_edges: list,
agent_name: str = "",
goal_name: str = "",
) -> tuple[dict, dict[str, list[str]]]:
"""Generate a flowchart draft from a loaded runtime graph.
Used for agents that were never planned through the draft workflow
(e.g., hand-coded or loaded from "my agents"). Produces a valid
DraftGraph structure with auto-classified flowchart types.
"""
nodes: list[dict] = []
edges: list[dict] = []
node_ids = {n.id for n in runtime_nodes}
# Build edge dicts first (needed for classification)
for i, re in enumerate(runtime_edges):
edges.append(
{
"id": f"edge-{i}",
"source": re.source,
"target": re.target,
"condition": str(re.condition.value)
if hasattr(re.condition, "value")
else str(re.condition),
"description": getattr(re, "description", "") or "",
"label": "",
}
)
# Terminal detection — exclude sub-agent nodes (they are leaf helpers, not endpoints)
sub_agent_ids: set[str] = set()
for rn in runtime_nodes:
for sa_id in getattr(rn, "sub_agents", None) or []:
sub_agent_ids.add(sa_id)
sources = {e["source"] for e in edges}
terminal_ids = node_ids - sources - sub_agent_ids
if not terminal_ids and runtime_nodes:
terminal_ids = {runtime_nodes[-1].id}
# Build node dicts with classification
total = len(runtime_nodes)
for i, rn in enumerate(runtime_nodes):
node: dict = {
"id": rn.id,
"name": rn.name,
"description": rn.description or "",
"node_type": getattr(rn, "node_type", "event_loop") or "event_loop",
"tools": list(rn.tools) if rn.tools else [],
"input_keys": list(rn.input_keys) if rn.input_keys else [],
"output_keys": list(rn.output_keys) if rn.output_keys else [],
"success_criteria": getattr(rn, "success_criteria", "") or "",
"sub_agents": list(rn.sub_agents) if getattr(rn, "sub_agents", None) else [],
}
fc_type = classify_flowchart_node(node, i, total, edges, terminal_ids)
fc_meta = FLOWCHART_TYPES[fc_type]
node["flowchart_type"] = fc_type
node["flowchart_shape"] = fc_meta["shape"]
node["flowchart_color"] = fc_meta["color"]
nodes.append(node)
# Add visual edges from parent nodes to their sub_agents.
# Sub-agents are connected via the sub_agents field, not via EdgeSpec,
# so they'd appear as disconnected islands without this.
# Two edges per sub-agent: delegate (parent→sub) and report (sub→parent).
edge_counter = len(edges)
for node in nodes:
for sa_id in node.get("sub_agents") or []:
if sa_id in node_ids:
edges.append(
{
"id": f"edge-subagent-{edge_counter}",
"source": node["id"],
"target": sa_id,
"condition": "always",
"description": "sub-agent delegation",
"label": "delegate",
}
)
edge_counter += 1
edges.append(
{
"id": f"edge-subagent-{edge_counter}",
"source": sa_id,
"target": node["id"],
"condition": "always",
"description": "sub-agent report back",
"label": "report",
}
)
edge_counter += 1
# Group sub-agent nodes under their parent in the flowchart map
# (mirrors what _dissolve_planning_nodes does for planned drafts)
sub_agent_ids_final: set[str] = set()
for node in nodes:
for sa_id in node.get("sub_agents") or []:
if sa_id in node_ids:
sub_agent_ids_final.add(sa_id)
fmap: dict[str, list[str]] = {}
for node in nodes:
nid = node["id"]
if nid in sub_agent_ids_final:
continue # skip — will be included via parent
absorbed = [nid]
for sa_id in node.get("sub_agents") or []:
if sa_id in node_ids:
absorbed.append(sa_id)
fmap[nid] = absorbed
draft = {
"agent_name": agent_name,
"goal": goal_name,
"description": "",
"success_criteria": [],
"constraints": [],
"nodes": nodes,
"edges": edges,
"entry_node": nodes[0]["id"] if nodes else "",
"terminal_nodes": sorted(terminal_ids),
"flowchart_legend": {
fc_type: {"shape": meta["shape"], "color": meta["color"]}
for fc_type, meta in FLOWCHART_TYPES.items()
},
}
return draft, fmap
# ── Fallback generation entry point ──────────────────────────────────────────
def generate_fallback_flowchart(
graph: Any,
goal: Any,
agent_path: Path,
) -> None:
"""Generate flowchart.json from a runtime GraphSpec if none exists.
    This is a no-op if flowchart.json already exists. On failure, logs a
    warning but never raises: agent loading must not be blocked by
    flowchart generation.
"""
try:
existing_draft, _ = load_flowchart_file(agent_path)
if existing_draft is not None:
return # already have one
draft, fmap = synthesize_draft_from_runtime(
runtime_nodes=list(graph.nodes),
runtime_edges=list(graph.edges),
agent_name=agent_path.name,
goal_name=goal.name if goal else "",
)
# Enrich with Goal metadata
if goal:
draft["goal"] = goal.description or goal.name or ""
draft["success_criteria"] = [sc.description for sc in (goal.success_criteria or [])]
draft["constraints"] = [c.description for c in (goal.constraints or [])]
# Use entry_node/terminal_nodes from GraphSpec if available
if graph.entry_node:
draft["entry_node"] = graph.entry_node
if graph.terminal_nodes:
draft["terminal_nodes"] = list(graph.terminal_nodes)
save_flowchart_file(agent_path, draft, fmap)
logger.info("Generated fallback flowchart.json for %s", agent_path.name)
except Exception:
logger.warning(
"Failed to generate fallback flowchart for %s",
agent_path,
exc_info=True,
)
+40 -391
@@ -46,6 +46,13 @@ from framework.credentials.models import CredentialError
from framework.runner.preload_validation import credential_errors_to_json, validate_credentials
from framework.runtime.event_bus import AgentEvent, EventType
from framework.server.app import validate_agent_path
from framework.tools.flowchart_utils import (
FLOWCHART_TYPES,
classify_flowchart_node,
load_flowchart_file,
save_flowchart_file,
synthesize_draft_from_runtime,
)
if TYPE_CHECKING:
from framework.runner.tool_registry import ToolRegistry
@@ -294,66 +301,7 @@ def build_worker_profile(runtime: AgentRuntime, agent_path: Path | str | None =
return "\n".join(lines)
_FLOWCHART_TYPES = {
# ── Core symbols (ISO 5807 §4) ──────────────────────────
# Terminator — rounded rectangle (stadium shape)
"start": {"shape": "stadium", "color": "#4CAF50"}, # green
"terminal": {"shape": "stadium", "color": "#F44336"}, # red
# Process — rectangle
"process": {"shape": "rectangle", "color": "#2196F3"}, # blue
# Decision — diamond
"decision": {"shape": "diamond", "color": "#FF9800"}, # amber
# Data (Input/Output) — parallelogram
"io": {"shape": "parallelogram", "color": "#9C27B0"}, # purple
# Document — rectangle with wavy bottom
"document": {"shape": "document", "color": "#607D8B"}, # blue-grey
# Multi-document — stacked documents
"multi_document": {"shape": "multi_document", "color": "#78909C"}, # blue-grey light
# Predefined process / subroutine — rectangle with double vertical bars
"subprocess": {"shape": "subroutine", "color": "#009688"}, # teal
# Preparation — hexagon
"preparation": {"shape": "hexagon", "color": "#795548"}, # brown
# Manual input — trapezoid with slanted top
"manual_input": {"shape": "manual_input", "color": "#E91E63"}, # pink
# Manual operation — inverted trapezoid
"manual_operation": {"shape": "trapezoid", "color": "#AD1457"}, # dark pink
# Delay — half-rounded rectangle (D-shape)
"delay": {"shape": "delay", "color": "#FF5722"}, # deep orange
# Display — rounded rectangle with pointed left
"display": {"shape": "display", "color": "#00BCD4"}, # cyan
# ── Data storage symbols ────────────────────────────────
# Database / direct access storage — cylinder
"database": {"shape": "cylinder", "color": "#8BC34A"}, # light green
# Stored data — generic data store
"stored_data": {"shape": "stored_data", "color": "#CDDC39"}, # lime
# Internal storage — rectangle with cross-hatch
"internal_storage": {"shape": "internal_storage", "color": "#FFC107"}, # amber light
# ── Connectors ──────────────────────────────────────────
# On-page connector — small circle
"connector": {"shape": "circle", "color": "#9E9E9E"}, # grey
# Off-page connector — pentagon / home-plate
"offpage_connector": {"shape": "pentagon", "color": "#757575"}, # dark grey
# ── Flow operations ─────────────────────────────────────
# Merge — inverted triangle
"merge": {"shape": "triangle_inv", "color": "#3F51B5"}, # indigo
# Extract — upward triangle
"extract": {"shape": "triangle", "color": "#5C6BC0"}, # indigo light
# Sort — hourglass / double triangle
"sort": {"shape": "hourglass", "color": "#7986CB"}, # indigo lighter
# Collate — merged hourglass
"collate": {"shape": "hourglass_inv", "color": "#9FA8DA"}, # indigo lightest
# Summing junction — circle with cross
"summing_junction": {"shape": "circle_cross", "color": "#F06292"}, # pink light
# Or — circle with horizontal bar
"or": {"shape": "circle_bar", "color": "#CE93D8"}, # purple light
# ── Domain-specific (Hive agent context) ────────────────
# Browser automation (GCU) — mapped to preparation/hexagon
"browser": {"shape": "hexagon", "color": "#1A237E"}, # dark indigo
# Comment / annotation — flag shape
"comment": {"shape": "flag", "color": "#BDBDBD"}, # light grey
# Alternate process — rounded rectangle
"alternate_process": {"shape": "rounded_rect", "color": "#42A5F5"}, # light blue
}
# FLOWCHART_TYPES is imported from framework.tools.flowchart_utils
def _read_agent_triggers_json(agent_path: Path) -> list[dict]:
@@ -645,7 +593,7 @@ def _dissolve_planning_nodes(
if not predecessors:
# Decision at start: convert to regular process node
d_node["flowchart_type"] = "process"
fc_meta = _FLOWCHART_TYPES["process"]
fc_meta = FLOWCHART_TYPES["process"]
d_node["flowchart_shape"] = fc_meta["shape"]
d_node["flowchart_color"] = fc_meta["color"]
if not d_node.get("success_criteria"):
@@ -1157,309 +1105,20 @@ def register_queen_lifecycle_tools(
registry.register("replan_agent", _replan_tool, lambda inputs: replan_agent())
tools_registered += 1
# --- Flowchart file persistence -------------------------------------------
# The flowchart is saved as flowchart.json in the agent's folder so it
# survives restarts and is available when loading any agent.
FLOWCHART_FILENAME = "flowchart.json"
def _save_flowchart_file(
agent_path: Path | str | None,
original_draft: dict,
flowchart_map: dict[str, list[str]] | None,
) -> None:
"""Persist the flowchart to the agent's folder."""
if agent_path is None:
return
p = Path(agent_path)
if not p.is_dir():
return
try:
target = p / FLOWCHART_FILENAME
target.write_text(
json.dumps(
{"original_draft": original_draft, "flowchart_map": flowchart_map},
indent=2,
),
encoding="utf-8",
)
logger.debug("Flowchart saved to %s", target)
except Exception:
logger.warning("Failed to save flowchart to %s", p, exc_info=True)
def _load_flowchart_file(
agent_path: Path | str | None,
) -> tuple[dict | None, dict[str, list[str]] | None]:
"""Load flowchart from the agent's folder. Returns (original_draft, flowchart_map)."""
if agent_path is None:
return None, None
target = Path(agent_path) / FLOWCHART_FILENAME
if not target.is_file():
return None, None
try:
data = json.loads(target.read_text(encoding="utf-8"))
return data.get("original_draft"), data.get("flowchart_map")
except Exception:
logger.warning("Failed to load flowchart from %s", target, exc_info=True)
return None, None
def _synthesize_draft_from_runtime(
runtime_nodes: list,
runtime_edges: list,
agent_name: str = "",
goal_name: str = "",
) -> tuple[dict, dict[str, list[str]]]:
"""Generate a flowchart draft from a loaded runtime graph.
Used for agents that were never planned through the draft workflow
(e.g., hand-coded or loaded from "my agents"). Produces a valid
DraftGraph structure with auto-classified flowchart types.
"""
nodes: list[dict] = []
edges: list[dict] = []
node_ids = {n.id for n in runtime_nodes}
# Build edge dicts first (needed for classification)
for i, re in enumerate(runtime_edges):
edges.append(
{
"id": f"edge-{i}",
"source": re.source,
"target": re.target,
"condition": str(re.condition.value)
if hasattr(re.condition, "value")
else str(re.condition),
"description": getattr(re, "description", "") or "",
"label": "",
}
)
# Terminal detection — exclude sub-agent nodes (they are leaf helpers, not endpoints)
sub_agent_ids: set[str] = set()
for rn in runtime_nodes:
for sa_id in getattr(rn, "sub_agents", None) or []:
sub_agent_ids.add(sa_id)
sources = {e["source"] for e in edges}
terminal_ids = node_ids - sources - sub_agent_ids
if not terminal_ids and runtime_nodes:
terminal_ids = {runtime_nodes[-1].id}
# Build node dicts with classification
total = len(runtime_nodes)
for i, rn in enumerate(runtime_nodes):
node: dict = {
"id": rn.id,
"name": rn.name,
"description": rn.description or "",
"node_type": getattr(rn, "node_type", "event_loop") or "event_loop",
"tools": list(rn.tools) if rn.tools else [],
"input_keys": list(rn.input_keys) if rn.input_keys else [],
"output_keys": list(rn.output_keys) if rn.output_keys else [],
"success_criteria": getattr(rn, "success_criteria", "") or "",
"sub_agents": list(rn.sub_agents) if getattr(rn, "sub_agents", None) else [],
}
fc_type = _classify_flowchart_node(node, i, total, edges, terminal_ids)
fc_meta = _FLOWCHART_TYPES[fc_type]
node["flowchart_type"] = fc_type
node["flowchart_shape"] = fc_meta["shape"]
node["flowchart_color"] = fc_meta["color"]
nodes.append(node)
# Add visual edges from parent nodes to their sub_agents.
# Sub-agents are connected via the sub_agents field, not via EdgeSpec,
# so they'd appear as disconnected islands without this.
# Two edges per sub-agent: delegate (parent→sub) and report (sub→parent).
edge_counter = len(edges)
for node in nodes:
for sa_id in node.get("sub_agents") or []:
if sa_id in node_ids:
edges.append(
{
"id": f"edge-subagent-{edge_counter}",
"source": node["id"],
"target": sa_id,
"condition": "always",
"description": "sub-agent delegation",
"label": "delegate",
}
)
edge_counter += 1
edges.append(
{
"id": f"edge-subagent-{edge_counter}",
"source": sa_id,
"target": node["id"],
"condition": "always",
"description": "sub-agent report back",
"label": "report",
}
)
edge_counter += 1
# Group sub-agent nodes under their parent in the flowchart map
# (mirrors what _dissolve_planning_nodes does for planned drafts)
sub_agent_ids: set[str] = set()
for node in nodes:
for sa_id in node.get("sub_agents") or []:
if sa_id in node_ids:
sub_agent_ids.add(sa_id)
fmap: dict[str, list[str]] = {}
for node in nodes:
nid = node["id"]
if nid in sub_agent_ids:
continue # skip — will be included via parent
absorbed = [nid]
for sa_id in node.get("sub_agents") or []:
if sa_id in node_ids:
absorbed.append(sa_id)
fmap[nid] = absorbed
draft = {
"agent_name": agent_name,
"goal": goal_name,
"description": "",
"success_criteria": [],
"constraints": [],
"nodes": nodes,
"edges": edges,
"entry_node": nodes[0]["id"] if nodes else "",
"terminal_nodes": sorted(terminal_ids),
"flowchart_legend": {
fc_type: {"shape": meta["shape"], "color": meta["color"]}
for fc_type, meta in _FLOWCHART_TYPES.items()
},
}
return draft, fmap
# --- Flowchart utilities ---------------------------------------------------
# Flowchart persistence, classification, and synthesis functions are now in
# framework.tools.flowchart_utils. Local aliases for backward compatibility
# within this closure:
_save_flowchart_file = save_flowchart_file
_load_flowchart_file = load_flowchart_file
_synthesize_draft_from_runtime = synthesize_draft_from_runtime
_classify_flowchart_node = classify_flowchart_node
# --- save_agent_draft (Planning phase — declarative graph preview) ---------
# Creates a lightweight draft graph with nodes, edges, and business metadata.
# Loose validation: only requires names and descriptions. Emits an event
# so the frontend can render the graph during planning (before any code).
def _classify_flowchart_node(
node: dict,
index: int,
total: int,
edges: list[dict],
terminal_ids: set[str],
) -> str:
"""Auto-detect the ISO 5807 flowchart type for a draft node.
Priority: explicit override > structural detection > heuristic > default.
"""
# Explicit override from the queen
explicit = node.get("flowchart_type", "").strip()
if explicit and explicit in _FLOWCHART_TYPES:
return explicit
node_id = node["id"]
node_type = node.get("node_type", "event_loop")
node_tools = set(node.get("tools") or [])
desc = (node.get("description") or "").lower()
name = (node.get("name") or "").lower()
# GCU / browser automation nodes → hexagon
if node_type == "gcu":
return "browser"
# Entry node (first node or no incoming edges) → start terminator
incoming = {e["target"] for e in edges}
if index == 0 or (node_id not in incoming and index == 0):
return "start"
# Terminal node → end terminator
if node_id in terminal_ids:
return "terminal"
# Decision node: has outgoing edges with branching conditions → diamond
outgoing = [e for e in edges if e["source"] == node_id]
if len(outgoing) >= 2:
conditions = {e.get("condition", "on_success") for e in outgoing}
if len(conditions) > 1 or conditions - {"on_success"}:
return "decision"
# Sub-agent / subprocess nodes → subroutine (double-bordered rect)
if node.get("sub_agents"):
return "subprocess"
# Database / data store nodes → cylinder
db_tool_hints = {
"query_database",
"sql_query",
"read_table",
"write_table",
"save_data",
"load_data",
}
db_desc_hints = {"database", "data store", "storage", "persist", "cache"}
if node_tools & db_tool_hints or any(h in desc for h in db_desc_hints):
return "database"
# Document generation nodes → document shape
doc_tool_hints = {
"generate_report",
"create_document",
"write_report",
"render_template",
"export_pdf",
}
doc_desc_hints = {"report", "document", "summary", "write up", "writeup"}
if node_tools & doc_tool_hints or any(h in desc for h in doc_desc_hints):
return "document"
# I/O nodes: external data ingestion or delivery → parallelogram
io_tool_hints = {
"serve_file_to_user",
"send_email",
"post_message",
"upload_file",
"download_file",
"fetch_url",
"post_to_slack",
"send_notification",
}
io_desc_hints = {"deliver", "send", "output", "notify", "publish"}
if node_tools & io_tool_hints or any(h in desc for h in io_desc_hints):
return "io"
# Manual / human-in-the-loop nodes → trapezoid
manual_desc_hints = {
"human review",
"manual",
"approval",
"human-in-the-loop",
"user review",
"manual check",
}
if any(h in desc for h in manual_desc_hints) or any(h in name for h in manual_desc_hints):
return "manual_operation"
# Preparation / setup nodes → hexagon
prep_desc_hints = {"setup", "initialize", "prepare", "configure", "provision"}
if any(h in desc for h in prep_desc_hints) or any(h in name for h in prep_desc_hints):
return "preparation"
# Delay / wait nodes → D-shape
delay_desc_hints = {"wait", "delay", "pause", "cooldown", "throttle", "sleep"}
if any(h in desc for h in delay_desc_hints):
return "delay"
# Merge nodes → inverted triangle
merge_desc_hints = {"merge", "combine", "aggregate", "consolidate"}
if any(h in desc for h in merge_desc_hints) or any(h in name for h in merge_desc_hints):
return "merge"
# Display nodes → display shape
display_desc_hints = {"display", "show", "present", "render", "visualize"}
display_tool_hints = {"serve_file_to_user", "display_results"}
if node_tools & display_tool_hints or any(h in name for h in display_desc_hints):
return "display"
# Default: process (rectangle)
return "process"
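The function above is a priority cascade: explicit override first, then structural checks (entry, terminal, branching), then keyword heuristics, then the `process` default. A minimal runnable sketch of the same cascade; `ALLOWED` and the hint words here are illustrative stand-ins, not the module's real `_FLOWCHART_TYPES` table:

```python
# Minimal sketch of the detection cascade; ALLOWED and the hint words are
# illustrative stand-ins, not the module's actual tables.
ALLOWED = {"start", "terminal", "decision", "database", "process"}

def detect_type(node: dict, index: int, edges: list[dict], terminal_ids: set[str]) -> str:
    explicit = (node.get("flowchart_type") or "").strip()
    if explicit in ALLOWED:                       # 1. explicit override wins
        return explicit
    incoming = {e["target"] for e in edges}
    if index == 0 or node["id"] not in incoming:  # 2. structural: entry node
        return "start"
    if node["id"] in terminal_ids:                #    structural: terminal node
        return "terminal"
    desc = (node.get("description") or "").lower()
    if any(h in desc for h in ("database", "storage", "persist")):  # 3. keyword heuristic
        return "database"
    return "process"                              # 4. default shape

edges = [{"source": "a", "target": "b"}]
print(detect_type({"id": "a"}, 0, edges, set()))  # start
print(detect_type({"id": "b", "description": "save results to database"}, 1, edges, set()))  # database
```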
def _dissolve_planning_nodes(
draft: dict,
) -> tuple[dict, dict[str, list[str]]]:
@@ -1545,7 +1204,7 @@ def register_queen_lifecycle_tools(
if not predecessors:
# Decision at start: convert to regular process node
d_node["flowchart_type"] = "process"
-fc_meta = _FLOWCHART_TYPES["process"]
+fc_meta = FLOWCHART_TYPES["process"]
d_node["flowchart_shape"] = fc_meta["shape"]
d_node["flowchart_color"] = fc_meta["color"]
if not d_node.get("success_criteria"):
@@ -2097,7 +1756,7 @@ def register_queen_lifecycle_tools(
validated_edges,
terminal_ids,
)
-fc_meta = _FLOWCHART_TYPES[fc_type]
+fc_meta = FLOWCHART_TYPES[fc_type]
node["flowchart_type"] = fc_type
node["flowchart_shape"] = fc_meta["shape"]
node["flowchart_color"] = fc_meta["color"]
@@ -2115,7 +1774,7 @@ def register_queen_lifecycle_tools(
# Color legend for the frontend
"flowchart_legend": {
fc_type: {"shape": meta["shape"], "color": meta["color"]}
-for fc_type, meta in _FLOWCHART_TYPES.items()
+for fc_type, meta in FLOWCHART_TYPES.items()
},
}
@@ -2286,39 +1945,18 @@ def register_queen_lifecycle_tools(
"decision",
"io",
"document",
-"multi_document",
-"subprocess",
-"preparation",
-"manual_input",
-"manual_operation",
-"delay",
-"display",
-"database",
-"stored_data",
-"internal_storage",
-"connector",
-"offpage_connector",
-"merge",
-"extract",
-"sort",
-"collate",
-"summing_junction",
-"or",
"subprocess",
"browser",
"comment",
"alternate_process",
],
"description": (
-"ISO 5807 flowchart symbol type. Auto-detected if omitted. "
-"Core: start (green stadium), terminal (red stadium), "
-"process (blue rect), decision (amber diamond), "
-"io (purple parallelogram), document (grey wavy rect), "
-"subprocess (teal subroutine), preparation (brown hexagon), "
-"manual_operation (pink trapezoid), delay (orange D-shape), "
-"display (cyan), database (green cylinder), "
-"merge (indigo triangle), browser (dark indigo hexagon — "
-"for GCU/browser sub-agents; must be a leaf node connected "
-"only to its managing parent)"
+"Flowchart symbol type. Auto-detected if omitted. "
+"start (sage green stadium), terminal (dusty red stadium), "
+"process (blue-gray rect), decision (amber diamond), "
+"io (purple parallelogram), document (steel blue wavy rect), "
+"database (teal cylinder), subprocess (cyan subroutine), "
+"browser (deep blue hexagon — for GCU/browser "
+"sub-agents; must be a leaf node)"
),
},
"tools": {
@@ -4064,6 +3702,8 @@ def register_queen_lifecycle_tools(
_save_trigger_to_agent(session, trigger_id, tdef)
bus = getattr(session, "event_bus", None)
if bus:
+_runner = getattr(session, "runner", None)
+_graph_entry = _runner.graph.entry_node if _runner else None
await bus.publish(
AgentEvent(
type=EventType.TRIGGER_ACTIVATED,
@@ -4072,6 +3712,8 @@ def register_queen_lifecycle_tools(
"trigger_id": trigger_id,
"trigger_type": t_type,
"trigger_config": t_config,
+"name": tdef.description or trigger_id,
+**({"entry_node": _graph_entry} if _graph_entry else {}),
},
)
)
@@ -4124,6 +3766,8 @@ def register_queen_lifecycle_tools(
# Emit event
bus = getattr(session, "event_bus", None)
if bus:
+_runner = getattr(session, "runner", None)
+_graph_entry = _runner.graph.entry_node if _runner else None
await bus.publish(
AgentEvent(
type=EventType.TRIGGER_ACTIVATED,
@@ -4132,6 +3776,8 @@ def register_queen_lifecycle_tools(
"trigger_id": trigger_id,
"trigger_type": t_type,
"trigger_config": t_config,
+"name": tdef.description or trigger_id,
+**({"entry_node": _graph_entry} if _graph_entry else {}),
},
)
)
@@ -4230,7 +3876,10 @@ def register_queen_lifecycle_tools(
AgentEvent(
type=EventType.TRIGGER_DEACTIVATED,
stream_id="queen",
-data={"trigger_id": trigger_id},
+data={
+"trigger_id": trigger_id,
+"name": tdef.description or trigger_id if tdef else trigger_id,
+},
)
)
+7 -3
@@ -64,10 +64,14 @@ export const sessionsApi = {
`/sessions/${sessionId}/entry-points`,
),
-updateTriggerTask: (sessionId: string, triggerId: string, task: string) =>
-api.patch<{ trigger_id: string; task: string }>(
+updateTrigger: (
+sessionId: string,
+triggerId: string,
+patch: { task?: string; trigger_config?: Record<string, unknown> },
+) =>
+api.patch<{ trigger_id: string; task: string; trigger_config: Record<string, unknown> }>(
`/sessions/${sessionId}/triggers/${triggerId}`,
-{ task },
+patch,
),
graphs: (sessionId: string) =>
+2 -1
@@ -337,7 +337,8 @@ export type EventTypeName =
| "trigger_activated"
| "trigger_deactivated"
| "trigger_fired"
-| "trigger_removed";
+| "trigger_removed"
+| "trigger_updated";
export interface AgentEvent {
type: EventTypeName;
-770
@@ -1,770 +0,0 @@
import { memo, useMemo, useState, useRef, useEffect, useCallback } from "react";
import { Play, Pause, Loader2, CheckCircle2 } from "lucide-react";
export type NodeStatus = "running" | "complete" | "pending" | "error" | "looping";
export type NodeType = "execution" | "trigger";
export interface GraphNode {
id: string;
label: string;
status: NodeStatus;
nodeType?: NodeType;
triggerType?: string;
triggerConfig?: Record<string, unknown>;
next?: string[];
backEdges?: string[];
iterations?: number;
maxIterations?: number;
statusLabel?: string;
edgeLabels?: Record<string, string>;
}
export type RunState = "idle" | "deploying" | "running";
interface AgentGraphProps {
nodes: GraphNode[];
title: string;
onNodeClick?: (node: GraphNode) => void;
onRun?: () => void;
onPause?: () => void;
version?: string;
runState?: RunState;
building?: boolean;
queenPhase?: "planning" | "building" | "staging" | "running";
}
// --- Extracted RunButton so hover state survives parent re-renders ---
export interface RunButtonProps {
runState: RunState;
disabled: boolean;
onRun: () => void;
onPause: () => void;
btnRef: React.Ref<HTMLButtonElement>;
}
export const RunButton = memo(function RunButton({ runState, disabled, onRun, onPause, btnRef }: RunButtonProps) {
const [hovered, setHovered] = useState(false);
const showPause = runState === "running" && hovered;
return (
<button
ref={btnRef}
onClick={runState === "running" ? onPause : onRun}
disabled={runState === "deploying" || disabled}
onMouseEnter={() => setHovered(true)}
onMouseLeave={() => setHovered(false)}
className={`flex items-center gap-1.5 px-2.5 py-1 rounded-md text-[11px] font-semibold transition-all duration-200 ${
showPause
? "bg-amber-500/15 text-amber-400 border border-amber-500/40 hover:bg-amber-500/25 active:scale-95 cursor-pointer"
: runState === "running"
? "bg-green-500/15 text-green-400 border border-green-500/30 cursor-pointer"
: runState === "deploying"
? "bg-primary/10 text-primary border border-primary/20 cursor-default"
: disabled
? "bg-muted/30 text-muted-foreground/40 border border-border/20 cursor-not-allowed"
: "bg-primary/10 text-primary border border-primary/20 hover:bg-primary/20 hover:border-primary/40 active:scale-95"
}`}
>
{runState === "deploying" ? (
<Loader2 className="w-3 h-3 animate-spin" />
) : showPause ? (
<Pause className="w-3 h-3 fill-current" />
) : runState === "running" ? (
<CheckCircle2 className="w-3 h-3" />
) : (
<Play className="w-3 h-3 fill-current" />
)}
{runState === "deploying" ? "Deploying\u2026" : showPause ? "Pause" : runState === "running" ? "Running" : "Run"}
</button>
);
});
const NODE_W_MAX = 180;
const NODE_H = 44;
const GAP_Y = 48;
const TOP_Y = 30;
const MARGIN_LEFT = 20;
const MARGIN_RIGHT = 50; // space for back-edge arcs
const SVG_BASE_W = 320;
const GAP_X = 12;
// Read a CSS custom property value (space-separated HSL components)
function cssVar(name: string): string {
return getComputedStyle(document.documentElement).getPropertyValue(name).trim();
}
type StatusColorSet = Record<NodeStatus, { dot: string; bg: string; border: string; glow: string }>;
type TriggerColorSet = { bg: string; border: string; text: string; icon: string };
function buildStatusColors(): StatusColorSet {
const running = cssVar("--node-running") || "45 95% 58%";
const looping = cssVar("--node-looping") || "38 90% 55%";
const complete = cssVar("--node-complete") || "43 70% 45%";
const pending = cssVar("--node-pending") || "35 15% 28%";
const pendingBg = cssVar("--node-pending-bg") || "35 10% 12%";
const pendingBorder = cssVar("--node-pending-border") || "35 10% 20%";
const error = cssVar("--node-error") || "0 65% 55%";
return {
running: {
dot: `hsl(${running})`,
bg: `hsl(${running} / 0.08)`,
border: `hsl(${running} / 0.5)`,
glow: `hsl(${running} / 0.15)`,
},
looping: {
dot: `hsl(${looping})`,
bg: `hsl(${looping} / 0.08)`,
border: `hsl(${looping} / 0.5)`,
glow: `hsl(${looping} / 0.15)`,
},
complete: {
dot: `hsl(${complete})`,
bg: `hsl(${complete} / 0.05)`,
border: `hsl(${complete} / 0.25)`,
glow: "none",
},
pending: {
dot: `hsl(${pending})`,
bg: `hsl(${pendingBg})`,
border: `hsl(${pendingBorder})`,
glow: "none",
},
error: {
dot: `hsl(${error})`,
bg: `hsl(${error} / 0.06)`,
border: `hsl(${error} / 0.3)`,
glow: `hsl(${error} / 0.1)`,
},
};
}
function buildTriggerColors(): TriggerColorSet {
const bg = cssVar("--trigger-bg") || "210 25% 14%";
const border = cssVar("--trigger-border") || "210 30% 30%";
const text = cssVar("--trigger-text") || "210 30% 65%";
const icon = cssVar("--trigger-icon") || "210 40% 55%";
return {
bg: `hsl(${bg})`,
border: `hsl(${border})`,
text: `hsl(${text})`,
icon: `hsl(${icon})`,
};
}
/** Hook that reads node/trigger colors from CSS vars and updates on theme changes. */
function useThemeColors() {
const [statusColors, setStatusColors] = useState<StatusColorSet>(buildStatusColors);
const [triggerColors, setTriggerColors] = useState<TriggerColorSet>(buildTriggerColors);
useEffect(() => {
const rebuild = () => {
setStatusColors(buildStatusColors());
setTriggerColors(buildTriggerColors());
};
const obs = new MutationObserver(rebuild);
obs.observe(document.documentElement, { attributes: true, attributeFilter: ["class", "style"] });
return () => obs.disconnect();
}, []);
return { statusColors, triggerColors };
}
// Active trigger — brighter, more saturated blue
const activeTriggerColors = {
bg: "hsl(210,30%,18%)",
border: "hsl(210,50%,50%)",
text: "hsl(210,40%,75%)",
icon: "hsl(210,60%,65%)",
};
const triggerIcons: Record<string, string> = {
webhook: "\u26A1", // lightning bolt
timer: "\u23F1", // stopwatch
api: "\u2192", // right arrow
event: "\u223F", // sine wave
};
/** Truncate label to fit within `availablePx` at the given fontSize. */
function truncateLabel(label: string, availablePx: number, fontSize: number): string {
const avgCharW = fontSize * 0.58;
const maxChars = Math.floor(availablePx / avgCharW);
if (label.length <= maxChars) return label;
return label.slice(0, Math.max(maxChars - 1, 1)) + "\u2026";
}
// ─── Pan & Zoom wrapper ───
function PanZoomSvg({ svgW, svgH, className, children }: { svgW: number; svgH: number; className?: string; children: React.ReactNode }) {
const [zoom, setZoom] = useState(1);
const [pan, setPan] = useState({ x: 0, y: 0 });
const [dragging, setDragging] = useState(false);
const dragStart = useRef({ x: 0, y: 0, panX: 0, panY: 0 });
const MIN_ZOOM = 0.4;
const MAX_ZOOM = 3;
const handleWheel = useCallback((e: React.WheelEvent) => {
e.preventDefault();
const delta = e.deltaY > 0 ? 0.9 : 1.1;
setZoom(z => Math.min(MAX_ZOOM, Math.max(MIN_ZOOM, z * delta)));
}, []);
const handleMouseDown = useCallback((e: React.MouseEvent) => {
if (e.button !== 0) return;
setDragging(true);
dragStart.current = { x: e.clientX, y: e.clientY, panX: pan.x, panY: pan.y };
}, [pan]);
const handleMouseMove = useCallback((e: React.MouseEvent) => {
if (!dragging) return;
setPan({
x: dragStart.current.panX + (e.clientX - dragStart.current.x),
y: dragStart.current.panY + (e.clientY - dragStart.current.y),
});
}, [dragging]);
const handleMouseUp = useCallback(() => setDragging(false), []);
const resetView = useCallback(() => {
setZoom(1);
setPan({ x: 0, y: 0 });
}, []);
return (
<div className="flex-1 relative overflow-hidden px-1 pb-5">
<div
onWheel={handleWheel}
onMouseDown={handleMouseDown}
onMouseMove={handleMouseMove}
onMouseUp={handleMouseUp}
onMouseLeave={handleMouseUp}
className="w-full h-full"
style={{ cursor: dragging ? "grabbing" : "grab" }}
>
<svg
width="100%"
viewBox={`0 0 ${svgW} ${svgH}`}
preserveAspectRatio="xMidYMin meet"
className={`select-none ${className || ""}`}
style={{
fontFamily: "'Inter', system-ui, sans-serif",
transform: `translate(${pan.x}px, ${pan.y}px) scale(${zoom})`,
transformOrigin: "center top",
}}
>
{children}
</svg>
</div>
{/* Zoom controls */}
<div className="absolute bottom-7 right-3 flex items-center gap-1 bg-card/80 backdrop-blur-sm border border-border/40 rounded-lg p-0.5 shadow-sm">
<button
onClick={() => setZoom(z => Math.min(MAX_ZOOM, z * 1.2))}
className="w-6 h-6 flex items-center justify-center rounded text-muted-foreground hover:text-foreground hover:bg-muted/60 transition-colors text-xs font-bold"
aria-label="Zoom in"
>+</button>
<button
onClick={resetView}
className="px-1.5 h-6 flex items-center justify-center rounded text-[10px] font-mono text-muted-foreground hover:text-foreground hover:bg-muted/60 transition-colors"
aria-label="Reset zoom"
>{Math.round(zoom * 100)}%</button>
<button
onClick={() => setZoom(z => Math.max(MIN_ZOOM, z * 0.8))}
className="w-6 h-6 flex items-center justify-center rounded text-muted-foreground hover:text-foreground hover:bg-muted/60 transition-colors text-xs font-bold"
aria-label="Zoom out"
>{"\u2212"}</button>
</div>
</div>
);
}
export default function AgentGraph({ nodes, title: _title, onNodeClick, onRun, onPause, version, runState: externalRunState, building, queenPhase }: AgentGraphProps) {
const [localRunState, setLocalRunState] = useState<RunState>("idle");
const runState = externalRunState ?? localRunState;
const runBtnRef = useRef<HTMLButtonElement>(null);
const { statusColors, triggerColors } = useThemeColors();
const handleRun = () => {
if (runState !== "idle") return;
if (onRun) {
onRun();
} else {
setLocalRunState("deploying");
setTimeout(() => setLocalRunState("running"), 1800);
setTimeout(() => setLocalRunState("idle"), 5000);
}
};
const idxMap = useMemo(() => Object.fromEntries(nodes.map((n, i) => [n.id, i])), [nodes]);
const backEdges = useMemo(() => {
const edges: { fromIdx: number; toIdx: number }[] = [];
nodes.forEach((n, i) => {
(n.next || []).forEach((toId) => {
const toIdx = idxMap[toId];
if (toIdx !== undefined && toIdx <= i) edges.push({ fromIdx: i, toIdx });
});
(n.backEdges || []).forEach((toId) => {
const toIdx = idxMap[toId];
if (toIdx !== undefined) edges.push({ fromIdx: i, toIdx });
});
});
return edges;
}, [nodes, idxMap]);
const forwardEdges = useMemo(() => {
const edges: { fromIdx: number; toIdx: number; fanCount: number; fanIndex: number; label?: string }[] = [];
nodes.forEach((n, i) => {
const targets = (n.next || [])
.map((toId) => ({ toId, toIdx: idxMap[toId] }))
.filter((t): t is { toId: string; toIdx: number } => t.toIdx !== undefined && t.toIdx > i);
targets.forEach(({ toId, toIdx }, fi) => {
edges.push({
fromIdx: i,
toIdx,
fanCount: targets.length,
fanIndex: fi,
label: n.edgeLabels?.[toId],
});
});
});
return edges;
}, [nodes, idxMap]);
// --- Layer-based layout computation ---
const layout = useMemo(() => {
if (nodes.length === 0) {
return { layers: [] as number[], cols: [] as number[], maxCols: 1, nodeW: NODE_W_MAX, colSpacing: 0, firstColX: MARGIN_LEFT };
}
// 1. Build reverse adjacency from forward edges (who are the parents of each node)
const parents = new Map<number, number[]>();
nodes.forEach((_, i) => parents.set(i, []));
forwardEdges.forEach((e) => {
parents.get(e.toIdx)!.push(e.fromIdx);
});
// 2. Assign layers via longest-path from entry
const layers = new Array(nodes.length).fill(0);
for (let i = 0; i < nodes.length; i++) {
const pars = parents.get(i) || [];
if (pars.length > 0) {
layers[i] = Math.max(...pars.map((p) => layers[p])) + 1;
}
}
// 3. Group nodes by layer
const layerGroups = new Map<number, number[]>();
layers.forEach((l, i) => {
const group = layerGroups.get(l) || [];
group.push(i);
layerGroups.set(l, group);
});
// 4. Compute max columns and dynamic node width
let maxCols = 1;
layerGroups.forEach((group) => {
maxCols = Math.max(maxCols, group.length);
});
const usableW = SVG_BASE_W - MARGIN_LEFT - MARGIN_RIGHT;
const nodeW = Math.min(NODE_W_MAX, Math.floor((usableW - (maxCols - 1) * GAP_X) / maxCols));
const colSpacing = nodeW + GAP_X;
const totalNodesW = maxCols * nodeW + (maxCols - 1) * GAP_X;
const firstColX = MARGIN_LEFT + (usableW - totalNodesW) / 2;
// 5. Assign columns within each layer (centered, ordered by parent column)
const cols = new Array(nodes.length).fill(0);
layerGroups.forEach((group) => {
if (group.length === 1) {
// Center single node: place at middle column
cols[group[0]] = (maxCols - 1) / 2;
} else {
// Sort group by average parent column to reduce crossings
const sorted = [...group].sort((a, b) => {
const aParents = parents.get(a) || [];
const bParents = parents.get(b) || [];
const aAvg = aParents.length > 0 ? aParents.reduce((s, p) => s + cols[p], 0) / aParents.length : 0;
const bAvg = bParents.length > 0 ? bParents.reduce((s, p) => s + cols[p], 0) / bParents.length : 0;
return aAvg - bAvg;
});
// Spread evenly, centered within maxCols
const offset = (maxCols - group.length) / 2;
sorted.forEach((nodeIdx, i) => {
cols[nodeIdx] = offset + i;
});
}
});
return { layers, cols, maxCols, nodeW, colSpacing, firstColX };
}, [nodes, forwardEdges]);
if (nodes.length === 0) {
return (
<div className="flex flex-col h-full">
<div className="px-5 pt-4 pb-2 flex items-center justify-between">
<div className="flex items-center gap-2">
<p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">Pipeline</p>
{version && (
<span className="text-[10px] font-mono font-medium text-muted-foreground/60 border border-border/30 rounded px-1 py-0.5 leading-none">
{version}
</span>
)}
</div>
<RunButton runState={runState} disabled={nodes.length === 0 || queenPhase === "building" || queenPhase === "planning"} onRun={handleRun} onPause={onPause ?? (() => {})} btnRef={runBtnRef} />
</div>
<div className="flex-1 flex items-center justify-center px-5">
{building ? (
<div className="flex flex-col items-center gap-3">
<Loader2 className="w-6 h-6 animate-spin text-primary/60" />
<p className="text-xs text-muted-foreground/80 text-center">Building agent...</p>
</div>
) : (
<p className="text-xs text-muted-foreground/60 text-center italic">No pipeline configured yet.<br/>Chat with the Queen to get started.</p>
)}
</div>
</div>
);
}
const { layers, cols, nodeW, colSpacing, firstColX } = layout;
const nodePos = (i: number) => ({
x: firstColX + cols[i] * colSpacing,
y: TOP_Y + layers[i] * (NODE_H + GAP_Y),
});
const maxLayer = nodes.length > 0 ? Math.max(...layers) : 0;
const svgHeight = TOP_Y * 2 + (maxLayer + 1) * NODE_H + maxLayer * GAP_Y + 10;
const backEdgeSpace = backEdges.length > 0 ? MARGIN_RIGHT + backEdges.length * 18 : 20;
const svgWidth = Math.max(SVG_BASE_W, firstColX + layout.maxCols * nodeW + (layout.maxCols - 1) * GAP_X + backEdgeSpace);
// Check if a skip-level forward edge would collide with intermediate nodes
const hasCollision = (fromLayer: number, toLayer: number, fromX: number, toX: number): boolean => {
const minX = Math.min(fromX, toX);
const maxX = Math.max(fromX, toX) + nodeW;
for (let i = 0; i < nodes.length; i++) {
const l = layers[i];
if (l > fromLayer && l < toLayer) {
const nx = firstColX + cols[i] * colSpacing;
// Check horizontal overlap
if (nx < maxX && nx + nodeW > minX) return true;
}
}
return false;
};
const renderForwardEdge = (edge: { fromIdx: number; toIdx: number; fanCount: number; fanIndex: number; label?: string }, i: number) => {
const from = nodePos(edge.fromIdx);
const to = nodePos(edge.toIdx);
const fromCenterX = from.x + nodeW / 2;
const toCenterX = to.x + nodeW / 2;
const y1 = from.y + NODE_H;
const y2 = to.y;
// Fan-out: spread exit points across the source node's bottom
let startX = fromCenterX;
if (edge.fanCount > 1) {
const spread = nodeW * 0.5;
const step = edge.fanCount > 1 ? spread / (edge.fanCount - 1) : 0;
startX = fromCenterX - spread / 2 + edge.fanIndex * step;
}
const midY = (y1 + y2) / 2;
const fromLayer = layers[edge.fromIdx];
const toLayer = layers[edge.toIdx];
const skipsLayers = toLayer - fromLayer > 1;
let d: string;
if (skipsLayers && hasCollision(fromLayer, toLayer, from.x, to.x)) {
// Route around intermediate nodes: orthogonal detour to the left
const detourX = Math.min(from.x, to.x) - nodeW * 0.4;
d = `M ${startX} ${y1} L ${startX} ${midY} L ${detourX} ${midY} L ${detourX} ${y2 - 10} L ${toCenterX} ${y2 - 10} L ${toCenterX} ${y2}`;
} else if (Math.abs(startX - toCenterX) < 2) {
// Straight vertical line when aligned
d = `M ${startX} ${y1} L ${toCenterX} ${y2}`;
} else {
// Orthogonal: down, across, down
d = `M ${startX} ${y1} L ${startX} ${midY} L ${toCenterX} ${midY} L ${toCenterX} ${y2}`;
}
const fromNode = nodes[edge.fromIdx];
const isActive = fromNode.status === "complete" || fromNode.status === "running" || fromNode.status === "looping";
const strokeColor = isActive ? statusColors.complete.border : statusColors.pending.border;
const arrowColor = isActive ? statusColors.complete.dot : statusColors.pending.border;
return (
<g key={`fwd-${i}`}>
<path d={d} fill="none" stroke={strokeColor} strokeWidth={1.5} />
<polygon
points={`${toCenterX - 4},${y2 - 6} ${toCenterX + 4},${y2 - 6} ${toCenterX},${y2 - 1}`}
fill={arrowColor}
/>
{edge.label && (
<text
x={(startX + toCenterX) / 2 + 8}
y={midY - 2}
fill={statusColors.pending.dot}
fontSize={9}
fontStyle="italic"
>
{edge.label}
</text>
)}
</g>
);
};
const renderBackEdge = (edge: { fromIdx: number; toIdx: number }, i: number) => {
const from = nodePos(edge.fromIdx);
const to = nodePos(edge.toIdx);
const rightX = Math.max(from.x, to.x) + nodeW;
const rightOffset = 28 + i * 18;
const startX = from.x + nodeW;
const startY = from.y + NODE_H / 2;
const endX = to.x + nodeW;
const endY = to.y + NODE_H / 2;
const curveX = rightX + rightOffset;
const r = 12;
const fromNode = nodes[edge.fromIdx];
const isActive = fromNode.status === "complete" || fromNode.status === "running" || fromNode.status === "looping";
const color = isActive ? statusColors.looping.border : statusColors.pending.border;
// Bezier curve with rounded corners (kept as curves for back edges)
const path = `M ${startX} ${startY} C ${startX + r} ${startY}, ${curveX} ${startY}, ${curveX} ${startY - r} L ${curveX} ${endY + r} C ${curveX} ${endY}, ${endX + r} ${endY}, ${endX + 6} ${endY}`;
return (
<g key={`back-${i}`}>
<path d={path} fill="none" stroke={color} strokeWidth={1.5} strokeDasharray="4 3" />
<polygon
points={`${endX + 6},${endY - 3} ${endX + 6},${endY + 3} ${endX},${endY}`}
fill={isActive ? statusColors.looping.dot : statusColors.pending.border}
/>
</g>
);
};
const renderTriggerNode = (node: GraphNode, i: number) => {
const pos = nodePos(i);
const icon = triggerIcons[node.triggerType || ""] || "\u26A1";
const triggerFontSize = nodeW < 140 ? 10.5 : 11.5;
const triggerAvailW = nodeW - 38;
const triggerDisplayLabel = truncateLabel(node.label, triggerAvailW, triggerFontSize);
const nextFireIn = node.triggerConfig?.next_fire_in as number | undefined;
const isActive = node.status === "running" || node.status === "complete";
const colors = isActive ? activeTriggerColors : triggerColors;
// Format countdown for display below node
let countdownLabel: string | null = null;
if (isActive && nextFireIn != null && nextFireIn > 0) {
const h = Math.floor(nextFireIn / 3600);
const m = Math.floor((nextFireIn % 3600) / 60);
const s = Math.floor(nextFireIn % 60);
countdownLabel = h > 0
? `next in ${h}h ${String(m).padStart(2, "0")}m`
: `next in ${m}m ${String(s).padStart(2, "0")}s`;
}
// Status label below countdown
const statusLabel = isActive ? "active" : "inactive";
const statusColor = isActive ? "hsl(140,40%,50%)" : "hsl(210,20%,40%)";
return (
<g key={node.id} onClick={() => onNodeClick?.(node)} style={{ cursor: onNodeClick ? "pointer" : "default" }}>
<title>{node.label}</title>
{/* Pill-shaped background — solid border when active, dashed when inactive */}
<rect
x={pos.x} y={pos.y}
width={nodeW} height={NODE_H}
rx={NODE_H / 2}
fill={colors.bg}
stroke={colors.border}
strokeWidth={isActive ? 1.5 : 1}
strokeDasharray={isActive ? undefined : "4 2"}
/>
{/* Trigger type icon */}
<text
x={pos.x + 18} y={pos.y + NODE_H / 2}
fill={colors.icon} fontSize={13}
textAnchor="middle" dominantBaseline="middle"
>
{icon}
</text>
{/* Label */}
<text
x={pos.x + 32} y={pos.y + NODE_H / 2}
fill={colors.text}
fontSize={triggerFontSize}
fontWeight={500}
dominantBaseline="middle"
letterSpacing="0.01em"
>
{triggerDisplayLabel}
</text>
{/* Countdown label below node */}
{countdownLabel && (
<text
x={pos.x + nodeW / 2} y={pos.y + NODE_H + 13}
fill={triggerColors.text} fontSize={9.5}
textAnchor="middle" fontStyle="italic" opacity={0.7}
>
{countdownLabel}
</text>
)}
{/* Status label */}
<text
x={pos.x + nodeW / 2} y={pos.y + NODE_H + (countdownLabel ? 25 : 13)}
fill={statusColor} fontSize={9}
textAnchor="middle" opacity={0.8}
>
{statusLabel}
</text>
</g>
);
};
const renderNode = (node: GraphNode, i: number) => {
if (node.nodeType === "trigger") return renderTriggerNode(node, i);
const pos = nodePos(i);
const isActive = node.status === "running" || node.status === "looping";
const isDone = node.status === "complete";
const colors = statusColors[node.status];
const fontSize = nodeW < 140 ? 10.5 : 12.5;
const labelAvailW = nodeW - 38;
const displayLabel = truncateLabel(node.label, labelAvailW, fontSize);
return (
<g key={node.id} onClick={() => onNodeClick?.(node)} style={{ cursor: onNodeClick ? "pointer" : "default" }}>
<title>{node.label}</title>
{/* Ambient glow for active nodes */}
{isActive && (
<>
<rect
x={pos.x - 4} y={pos.y - 4}
width={nodeW + 8} height={NODE_H + 8}
rx={16} fill={colors.glow}
/>
<rect
x={pos.x - 2} y={pos.y - 2}
width={nodeW + 4} height={NODE_H + 4}
rx={14} fill="none" stroke={colors.dot} strokeWidth={1} opacity={0.25}
style={{ animation: "pulse-ring 2.5s ease-out infinite" }}
/>
</>
)}
{/* Node background */}
<rect
x={pos.x} y={pos.y}
width={nodeW} height={NODE_H}
rx={12}
fill={colors.bg}
stroke={colors.border}
strokeWidth={isActive ? 1.5 : 1}
/>
{/* Status dot */}
<circle cx={pos.x + 18} cy={pos.y + NODE_H / 2} r={4.5} fill={colors.dot} />
{isActive && (
<circle cx={pos.x + 18} cy={pos.y + NODE_H / 2} r={7} fill="none" stroke={colors.dot} strokeWidth={1} opacity={0.3}>
<animate attributeName="r" values="7;11;7" dur="2s" repeatCount="indefinite" />
<animate attributeName="opacity" values="0.3;0;0.3" dur="2s" repeatCount="indefinite" />
</circle>
)}
{/* Check mark for complete */}
{isDone && (
<text
x={pos.x + 18} y={pos.y + NODE_H / 2 + 1}
fill={colors.dot} fontSize={8} fontWeight={700}
textAnchor="middle" dominantBaseline="middle"
>
&#x2713;
</text>
)}
{/* Label -- truncated with ellipsis for narrow nodes */}
<text
x={pos.x + 32} y={pos.y + NODE_H / 2}
fill={isActive ? statusColors.running.dot : isDone ? statusColors.complete.dot : statusColors.pending.dot}
fontSize={fontSize}
fontWeight={isActive ? 600 : isDone ? 500 : 400}
dominantBaseline="middle"
letterSpacing="0.01em"
>
{displayLabel}
</text>
{/* Status label for active nodes */}
{node.statusLabel && isActive && (
<text
x={pos.x + nodeW + 10} y={pos.y + NODE_H / 2}
fill={statusColors.running.dot} fontSize={10.5} fontStyle="italic"
dominantBaseline="middle" opacity={0.8}
>
{node.statusLabel}
</text>
)}
{/* Iteration badge */}
{node.iterations !== undefined && node.iterations > 0 && (
<g>
<rect
x={pos.x + nodeW - 36} y={pos.y + NODE_H / 2 - 8}
width={26} height={16} rx={8}
fill={colors.dot} opacity={0.15}
/>
<text
x={pos.x + nodeW - 23} y={pos.y + NODE_H / 2}
fill={colors.dot} fontSize={9} fontWeight={600}
textAnchor="middle" dominantBaseline="middle" opacity={0.8}
>
{node.iterations}{node.maxIterations ? `/${node.maxIterations}` : "\u00d7"}
</text>
</g>
)}
</g>
);
};
return (
<div className="flex flex-col h-full">
{/* Compact sub-label */}
<div className="px-5 pt-4 pb-2 flex items-center justify-between">
<div className="flex items-center gap-2">
<p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">Pipeline</p>
{version && (
<span className="text-[10px] font-mono font-medium text-muted-foreground/60 border border-border/30 rounded px-1 py-0.5 leading-none">
{version}
</span>
)}
</div>
<RunButton runState={runState} disabled={nodes.length === 0} onRun={handleRun} onPause={onPause ?? (() => {})} btnRef={runBtnRef} />
</div>
{/* Graph */}
<PanZoomSvg svgW={svgWidth} svgH={svgHeight} className={building ? "opacity-30" : ""}>
{forwardEdges.map((e, i) => renderForwardEdge(e, i))}
{backEdges.map((e, i) => renderBackEdge(e, i))}
{nodes.map((n, i) => renderNode(n, i))}
</PanZoomSvg>
{building && (
<div className="absolute inset-0 flex items-center justify-center">
<div className="flex flex-col items-center gap-3">
<Loader2 className="w-6 h-6 animate-spin text-primary/60" />
<p className="text-xs text-muted-foreground/80">Rebuilding agent...</p>
</div>
</div>
)}
</div>
);
}
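The deleted component's layout memo assigns each node a layer via longest path from the entry (its step 2), which makes skip-level edges visible and keeps parents above children. A standalone Python sketch of that layering rule, assuming nodes are indexed in topological order (which the component's `toIdx > i` forward-edge filter guarantees):

```python
# Longest-path layering: a node's layer is 1 + the max layer of its parents.
def assign_layers(n: int, forward_edges: list[tuple[int, int]]) -> list[int]:
    parents: dict[int, list[int]] = {i: [] for i in range(n)}
    for src, dst in forward_edges:
        parents[dst].append(src)
    layers = [0] * n
    # Single forward pass works because edges only go from lower to higher index.
    for i in range(n):
        if parents[i]:
            layers[i] = max(layers[p] for p in parents[i]) + 1
    return layers

# Diamond graph 0 -> {1, 2} -> 3: the join node lands one layer below both branches.
print(assign_layers(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # [0, 1, 1, 2]
```

Nodes sharing a layer are then spread across columns (the component sorts them by average parent column to reduce edge crossings, omitted here).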
+280 -158
@@ -1,13 +1,25 @@
-import { useEffect, useMemo, useRef, useState, useCallback } from "react";
+import { useEffect, useLayoutEffect, useMemo, useRef, useState, useCallback } from "react";
import { Loader2 } from "lucide-react";
import type { DraftGraph as DraftGraphData, DraftNode } from "@/api/types";
-import { RunButton } from "./AgentGraph";
-import type { GraphNode, RunState } from "./AgentGraph";
+import { RunButton } from "./RunButton";
+import type { GraphNode, RunState } from "./graph-types";
+import {
+cssVar,
+truncateLabel,
+TRIGGER_ICONS,
+ACTIVE_TRIGGER_COLORS,
+useTriggerColors,
+} from "@/lib/graphUtils";
-// Read a CSS custom property value (space-separated HSL components)
-function cssVar(name: string): string {
-return getComputedStyle(document.documentElement).getPropertyValue(name).trim();
-}
+// ── Trigger layout constants ──
+const TRIGGER_H = 38; // pill height
+const TRIGGER_PILL_GAP_X = 16; // horizontal gap between multiple trigger pills
+const TRIGGER_ICON_X = 16; // icon center offset from pill left edge
+const TRIGGER_LABEL_X = 30; // label start offset from pill left edge
+const TRIGGER_LABEL_INSET = 38; // icon + padding subtracted from pill width for label space
+const TRIGGER_TEXT_Y = 11; // y-offset below pill for first text line (countdown or status)
+const TRIGGER_TEXT_STEP = 11; // additional y-offset for second text line when countdown present
+const TRIGGER_CLEARANCE = 30; // vertical space below pill for countdown + status text
interface DraftChromeColors {
edge: string;
@@ -74,6 +86,8 @@ type DraftNodeStatus = "pending" | "running" | "complete" | "error";
interface DraftGraphProps {
draft: DraftGraphData | null;
/** The post-build originalDraft — animation fires when this changes to a new non-null value. */
originalDraft?: DraftGraphData | null;
onNodeClick?: (node: DraftNode) => void;
/** Runtime node ID → list of original draft node IDs (post-dissolution mapping). */
flowchartMap?: Record<string, string[]>;
@@ -83,8 +97,8 @@ interface DraftGraphProps {
onRuntimeNodeClick?: (runtimeNodeId: string) => void;
/** True while the queen is building the agent from the draft. */
building?: boolean;
/** True while the queen is designing the draft (no draft yet). Shows a spinner. */
loading?: boolean;
/** Message to show with a spinner while loading/designing. Null = no spinner. */
loadingMessage?: string | null;
/** Called when the user clicks Run. */
onRun?: () => void;
/** Called when the user clicks Pause. */
@@ -105,13 +119,6 @@ function formatNodeId(id: string): string {
return id.split("-").map(w => w.charAt(0).toUpperCase() + w.slice(1)).join(" ");
}
function truncateLabel(label: string, availablePx: number, fontSize: number): string {
const avgCharW = fontSize * 0.58;
const maxChars = Math.floor(availablePx / avgCharW);
if (label.length <= maxChars) return label;
return label.slice(0, Math.max(maxChars - 1, 1)) + "\u2026";
}
/** Return the bounding-rect corner radius for a given flowchart shape. */
/**
* Render an ISO 5807 flowchart shape as an SVG element.
@@ -144,13 +151,9 @@ function FlowchartShape({
case "rectangle":
return <rect x={x} y={y} width={w} height={h} rx={4} {...common} />;
case "rounded_rect":
return <rect x={x} y={y} width={w} height={h} rx={12} {...common} />;
case "diamond": {
const cx = x + w / 2;
const cy = y + h / 2;
// Keep diamond within bounding box
return (
<polygon
points={`${cx},${y} ${x + w},${cy} ${cx},${y + h} ${x},${cy}`}
@@ -174,18 +177,6 @@ function FlowchartShape({
return <path d={d} {...common} />;
}
case "multi_document": {
const off = 3;
const d = `M ${x} ${y + 4 + off} Q ${x} ${y + off}, ${x + 8} ${y + off} L ${x + w - 8 - off} ${y + off} Q ${x + w - off} ${y + off}, ${x + w - off} ${y + 4 + off} L ${x + w - off} ${y + h - 8} C ${x + (w - off) * 0.75} ${y + h + 2}, ${x + (w - off) * 0.25} ${y + h - 10}, ${x} ${y + h - 4} Z`;
return (
<g>
<rect x={x + off * 2} y={y} width={w - off * 2} height={h - off} rx={4} fill={fill} stroke={stroke} strokeWidth={1.2} opacity={0.4} />
<rect x={x + off} y={y + off / 2} width={w - off} height={h - off} rx={4} fill={fill} stroke={stroke} strokeWidth={1.2} opacity={0.6} />
<path d={d} {...common} />
</g>
);
}
case "subroutine": {
const inset = 7;
return (
@@ -207,34 +198,6 @@ function FlowchartShape({
);
}
case "manual_input":
return (
<polygon
points={`${x},${y + 10} ${x + w},${y} ${x + w},${y + h} ${x},${y + h}`}
{...common}
/>
);
case "trapezoid": {
const inset = 12;
return (
<polygon
points={`${x},${y} ${x + w},${y} ${x + w - inset},${y + h} ${x + inset},${y + h}`}
{...common}
/>
);
}
case "delay": {
const d = `M ${x} ${y + 4} Q ${x} ${y}, ${x + 4} ${y} L ${x + w * 0.65} ${y} A ${w * 0.35} ${h / 2} 0 0 1 ${x + w * 0.65} ${y + h} L ${x + 4} ${y + h} Q ${x} ${y + h}, ${x} ${y + h - 4} Z`;
return <path d={d} {...common} />;
}
case "display": {
const d = `M ${x + 16} ${y} L ${x + w * 0.65} ${y} A ${w * 0.35} ${h / 2} 0 0 1 ${x + w * 0.65} ${y + h} L ${x + 16} ${y + h} L ${x} ${y + h / 2} Z`;
return <path d={d} {...common} />;
}
case "cylinder": {
const ry = 7;
return (
@@ -249,88 +212,6 @@ function FlowchartShape({
);
}
case "stored_data": {
const d = `M ${x + 14} ${y} L ${x + w} ${y} A 10 ${h / 2} 0 0 0 ${x + w} ${y + h} L ${x + 14} ${y + h} A 10 ${h / 2} 0 0 1 ${x + 14} ${y} Z`;
return <path d={d} {...common} />;
}
case "internal_storage":
return (
<g>
<rect x={x} y={y} width={w} height={h} rx={4} {...common} />
<line x1={x + 10} y1={y} x2={x + 10} y2={y + h} stroke={stroke} strokeWidth={0.8} opacity={0.5} />
<line x1={x} y1={y + 10} x2={x + w} y2={y + 10} stroke={stroke} strokeWidth={0.8} opacity={0.5} />
</g>
);
case "circle": {
const r = Math.min(w, h) / 2 - 2;
return <circle cx={x + w / 2} cy={y + h / 2} r={r} {...common} />;
}
case "pentagon":
return (
<polygon
points={`${x},${y} ${x + w},${y} ${x + w},${y + h * 0.6} ${x + w / 2},${y + h} ${x},${y + h * 0.6}`}
{...common}
/>
);
case "triangle_inv":
return (
<polygon
points={`${x},${y} ${x + w},${y} ${x + w / 2},${y + h}`}
{...common}
/>
);
case "triangle":
return (
<polygon
points={`${x + w / 2},${y} ${x + w},${y + h} ${x},${y + h}`}
{...common}
/>
);
case "hourglass":
return (
<polygon
points={`${x},${y} ${x + w},${y} ${x + w / 2},${y + h / 2} ${x + w},${y + h} ${x},${y + h} ${x + w / 2},${y + h / 2}`}
{...common}
/>
);
case "circle_cross": {
const r = Math.min(w, h) / 2 - 2;
const cx = x + w / 2;
const cy = y + h / 2;
return (
<g>
<circle cx={cx} cy={cy} r={r} {...common} />
<line x1={cx - r * 0.7} y1={cy - r * 0.7} x2={cx + r * 0.7} y2={cy + r * 0.7} stroke={stroke} strokeWidth={1} />
<line x1={cx + r * 0.7} y1={cy - r * 0.7} x2={cx - r * 0.7} y2={cy + r * 0.7} stroke={stroke} strokeWidth={1} />
</g>
);
}
case "circle_bar": {
const r = Math.min(w, h) / 2 - 2;
const cx = x + w / 2;
const cy = y + h / 2;
return (
<g>
<circle cx={cx} cy={cy} r={r} {...common} />
<line x1={cx} y1={cy - r} x2={cx} y2={cy + r} stroke={stroke} strokeWidth={1} />
<line x1={cx - r} y1={cy} x2={cx + r} y2={cy} stroke={stroke} strokeWidth={1} />
</g>
);
}
case "flag": {
const d = `M ${x} ${y} L ${x + w} ${y} L ${x + w - 8} ${y + h / 2} L ${x + w} ${y + h} L ${x} ${y + h} Z`;
return <path d={d} {...common} />;
}
default:
return <rect x={x} y={y} width={w} height={h} rx={8} {...common} />;
}
@@ -357,13 +238,51 @@ function Tooltip({ node, style }: { node: DraftNode; style: React.CSSProperties
);
}
export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNodes, onRuntimeNodeClick, building, loading, onRun, onPause, runState = "idle" }: DraftGraphProps) {
export default function DraftGraph({ draft, originalDraft, onNodeClick, flowchartMap, runtimeNodes, onRuntimeNodeClick, building, loadingMessage, onRun, onPause, runState = "idle" }: DraftGraphProps) {
const [hoveredNode, setHoveredNode] = useState<string | null>(null);
const [mousePos, setMousePos] = useState<{ x: number; y: number } | null>(null);
const containerRef = useRef<HTMLDivElement>(null);
const runBtnRef = useRef<HTMLButtonElement>(null);
const [containerW, setContainerW] = useState(484);
const chrome = useDraftChromeColors();
const triggerColors = useTriggerColors();
// Extract trigger nodes from runtimeNodes
const triggerNodes = useMemo(
() => (runtimeNodes ?? []).filter(n => n.nodeType === "trigger"),
[runtimeNodes],
);
// ── Entrance animation — fires when originalDraft becomes a new non-null value ──
// This covers: agent loaded, build finished, queen modifies flowchart.
// Tab switches remount via React key={activeWorker}, resetting all refs.
const prevOriginalDraft = useRef<DraftGraphData | null>(null);
const pendingAnimation = useRef(false);
const [entrancePhase, setEntrancePhase] = useState<"idle" | "hidden" | "visible">("idle");
const nodes = draft?.nodes ?? [];
useLayoutEffect(() => {
const prev = prevOriginalDraft.current;
prevOriginalDraft.current = originalDraft ?? null;
// Detect a new non-null originalDraft (object identity — each API/SSE response is a fresh object)
if (originalDraft && originalDraft !== prev) {
pendingAnimation.current = true;
}
// Fire when we have a pending animation, nodes are ready, and not mid-build
if (pendingAnimation.current && nodes.length > 0 && !building) {
pendingAnimation.current = false;
setEntrancePhase("hidden");
let raf1 = 0, raf2 = 0;
raf1 = requestAnimationFrame(() => {
raf2 = requestAnimationFrame(() => setEntrancePhase("visible"));
});
const t = setTimeout(() => setEntrancePhase("idle"), nodes.length * 120 + 1000);
return () => { clearTimeout(t); cancelAnimationFrame(raf1); cancelAnimationFrame(raf2); };
}
}, [originalDraft, nodes.length, building]);
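The double `requestAnimationFrame` in the effect above can be sketched in isolation (an illustration, not the component code — `schedule` is an injected stand-in for `requestAnimationFrame`): the first frame commits the "hidden" (opacity 0) styles to the DOM, and only on the second frame does the phase flip to "visible", so the CSS opacity/dashoffset transitions actually animate instead of snapping.

```typescript
type Phase = "idle" | "hidden" | "visible";

// Standalone sketch of the two-frame entrance sequencing.
// In the browser, schedule would be: cb => requestAnimationFrame(cb)
function animateIn(
  setPhase: (p: Phase) => void,
  schedule: (cb: () => void) => void,
): void {
  setPhase("hidden"); // paint everything invisible first
  // one committed frame later, flip to visible so transitions fire
  schedule(() => schedule(() => setPhase("visible")));
}
```

With a synchronous `schedule` the phases resolve strictly in order `hidden` → `visible`; in the component the second frame additionally guarantees the browser has painted the hidden state before the transition begins.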
// Shift-to-pin tooltip
const shiftHeld = useRef(false);
@@ -465,7 +384,6 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
const hasStatusOverlay = Object.keys(nodeStatuses).length > 0;
const nodes = draft?.nodes ?? [];
const edges = draft?.edges ?? [];
const idxMap = useMemo(
@@ -539,6 +457,11 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
layerGroups.forEach((group) => {
maxCols = Math.max(maxCols, group.length);
});
// Ensure maxCols accommodates any parent's children fan-out
// (prevents fan-out scaling from collapsing to zero)
children.forEach((kids) => {
maxCols = Math.max(maxCols, kids.length);
});
// Compute node width — keep back-edge overflow out of node sizing so nodes
// get full width. The viewBox is expanded later to fit back-edge curves.
@@ -644,6 +567,17 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
}
}
// Post-process: enforce minimum spacing within each layer
for (const [, group] of layerGroups) {
if (group.length <= 1) continue;
const sorted = [...group].sort((a, b) => colPos[a] - colPos[b]);
for (let j = 1; j < sorted.length; j++) {
if (colPos[sorted[j]] < colPos[sorted[j - 1]] + 1) {
colPos[sorted[j]] = colPos[sorted[j - 1]] + 1;
}
}
}
// Convert fractional column positions to pixel X positions
const colSpacing = nodeW + GAP_X;
const usedMin = Math.min(...colPos);
@@ -787,22 +721,27 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
return { nodeYOffset: offsets, totalExtraY: totalExtra, groupBoxMaxX: maxGroupX };
}, [nodes, maxLayer, flowchartMap, idxMap, layers, nodeXPositions, nodeW]);
// When triggers are present, push the entire draft graph down to make room
const triggerOffsetY = triggerNodes.length > 0
? TRIGGER_H + TRIGGER_TEXT_Y + TRIGGER_TEXT_STEP + TRIGGER_CLEARANCE
: 0;
const nodePos = (i: number) => ({
x: nodeXPositions[i],
y: TOP_Y + layers[i] * (NODE_H + GAP_Y) + nodeYOffset[i],
y: TOP_Y + triggerOffsetY + layers[i] * (NODE_H + GAP_Y) + nodeYOffset[i],
});
const svgHeight = TOP_Y + (maxLayer + 1) * NODE_H + maxLayer * GAP_Y + totalExtraY + 16;
const svgHeight = TOP_Y + triggerOffsetY + (maxLayer + 1) * NODE_H + maxLayer * GAP_Y + totalExtraY + 16;
// Compute group areas for runtime node boundaries on the draft
const groupAreas = useMemo(() => {
if (!flowchartMap || !runtimeNodes?.length) return [];
if (!flowchartMap) return [];
const groups: { runtimeId: string; label: string; draftIds: string[] }[] = [];
for (const [runtimeId, draftIds] of Object.entries(flowchartMap)) {
groups.push({ runtimeId, label: formatNodeId(runtimeId), draftIds });
}
return groups;
}, [flowchartMap, runtimeNodes]);
}, [flowchartMap]);
// Legend
const usedTypes = (() => {
@@ -840,12 +779,27 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
? `M ${startX} ${y1} L ${toCenterX} ${y2}`
: `M ${startX} ${y1} L ${startX} ${midY} L ${toCenterX} ${midY} L ${toCenterX} ${y2}`;
// Edge draw-in animation (stroke-dashoffset)
const isAnimating = entrancePhase !== "idle";
const pathLength = Math.abs(y2 - y1) + Math.abs(startX - toCenterX) + 1;
const edgeDelay = 200 + i * 80;
const edgeStyle: React.CSSProperties | undefined = isAnimating ? {
strokeDasharray: pathLength,
strokeDashoffset: entrancePhase === "hidden" ? pathLength : 0,
transition: `stroke-dashoffset 400ms ease-in-out ${edgeDelay}ms`,
} : undefined;
const edgeEndStyle: React.CSSProperties | undefined = isAnimating ? {
opacity: entrancePhase === "hidden" ? 0 : 1,
transition: `opacity 100ms ease-out ${edgeDelay + 350}ms`,
} : undefined;
return (
<g key={`fwd-${i}`}>
<path d={d} fill="none" stroke={chrome.edge} strokeWidth={1.2} />
<path d={d} fill="none" stroke={chrome.edge} strokeWidth={1.2} style={edgeStyle} />
<polygon
points={`${toCenterX - 3},${y2 - 5} ${toCenterX + 3},${y2 - 5} ${toCenterX},${y2 - 1}`}
fill={chrome.edgeArrow}
style={edgeEndStyle}
/>
{edge.label && (
<text
@@ -855,6 +809,7 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
fontSize={9}
fontStyle="italic"
textAnchor="middle"
style={edgeEndStyle}
>
{truncateLabel(edge.label, 80, 9)}
</text>
@@ -877,12 +832,26 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
const path = `M ${startX} ${startY} C ${startX + r} ${startY}, ${curveX} ${startY}, ${curveX} ${startY - r} L ${curveX} ${endY + r} C ${curveX} ${endY}, ${endX + r} ${endY}, ${endX + 5} ${endY}`;
// Back-edge draw-in animation (starts after forward edges)
const isAnimating = entrancePhase !== "idle";
const backPathLength = Math.abs(curveX - startX) + Math.abs(startY - endY) + Math.abs(curveX - endX) + 20;
const backDelay = nodes.length * 120 + 300 + i * 80;
const backEdgeStyle: React.CSSProperties | undefined = isAnimating ? {
strokeDashoffset: entrancePhase === "hidden" ? backPathLength : 0,
transition: `stroke-dashoffset 400ms ease-in-out ${backDelay}ms`,
} : undefined;
const backEndStyle: React.CSSProperties | undefined = isAnimating ? {
opacity: entrancePhase === "hidden" ? 0 : 1,
transition: `opacity 100ms ease-out ${backDelay + 350}ms`,
} : undefined;
return (
<g key={`back-${i}`}>
<path d={path} fill="none" stroke={chrome.backEdge} strokeWidth={1.2} strokeDasharray="4 3" />
<path d={path} fill="none" stroke={chrome.backEdge} strokeWidth={1.2} strokeDasharray={isAnimating ? backPathLength : "4 3"} style={backEdgeStyle} />
<polygon
points={`${endX + 5},${endY - 2.5} ${endX + 5},${endY + 2.5} ${endX},${endY}`}
fill={chrome.edge}
style={backEndStyle}
/>
</g>
);
@@ -895,6 +864,131 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
pending: "",
};
// ── Trigger node rendering ──
const triggerW = Math.min(nodeW, 180);
// Shared trigger pill X position (used by both node and edge renderers)
const triggerPillX = (idx: number) => {
const totalW = triggerNodes.length * triggerW + (triggerNodes.length - 1) * TRIGGER_PILL_GAP_X;
return (containerW - totalW) / 2 + idx * (triggerW + TRIGGER_PILL_GAP_X);
};
const renderTriggerNode = (node: GraphNode, triggerIdx: number) => {
const icon = TRIGGER_ICONS[node.triggerType || ""] || "\u26A1";
const isActive = node.status === "running" || node.status === "complete";
const colors = isActive ? ACTIVE_TRIGGER_COLORS : triggerColors;
const nextFireIn = node.triggerConfig?.next_fire_in as number | undefined;
const tx = triggerPillX(triggerIdx);
const ty = TOP_Y;
const fontSize = triggerW < 140 ? 10.5 : 11.5;
const displayLabel = truncateLabel(node.label, triggerW - TRIGGER_LABEL_INSET, fontSize);
// Countdown
let countdownLabel: string | null = null;
if (isActive && nextFireIn != null && nextFireIn > 0) {
const h = Math.floor(nextFireIn / 3600);
const m = Math.floor((nextFireIn % 3600) / 60);
const s = Math.floor(nextFireIn % 60);
countdownLabel = h > 0
? `next in ${h}h ${String(m).padStart(2, "0")}m`
: `next in ${m}m ${String(s).padStart(2, "0")}s`;
}
const statusLabel = isActive ? "active" : "inactive";
const statusColor = isActive ? "hsl(140,40%,50%)" : "hsl(210,20%,40%)";
return (
<g
key={node.id}
onClick={() => onRuntimeNodeClick?.(node.id)}
style={{ cursor: onRuntimeNodeClick ? "pointer" : "default" }}
>
<title>{node.label}</title>
{/* Pill-shaped background */}
<rect
x={tx} y={ty}
width={triggerW} height={TRIGGER_H}
rx={TRIGGER_H / 2}
fill={colors.bg}
stroke={colors.border}
strokeWidth={isActive ? 1.5 : 1}
strokeDasharray={isActive ? undefined : "4 2"}
/>
{/* Icon */}
<text
x={tx + TRIGGER_ICON_X} y={ty + TRIGGER_H / 2}
fill={colors.icon} fontSize={13}
textAnchor="middle" dominantBaseline="middle"
>
{icon}
</text>
{/* Label */}
<text
x={tx + TRIGGER_LABEL_X} y={ty + TRIGGER_H / 2}
fill={colors.text}
fontSize={fontSize}
fontWeight={500}
dominantBaseline="middle"
letterSpacing="0.01em"
>
{displayLabel}
</text>
{/* Countdown */}
{countdownLabel && (
<text
x={tx + triggerW / 2} y={ty + TRIGGER_H + TRIGGER_TEXT_Y}
fill={colors.text} fontSize={9}
textAnchor="middle" fontStyle="italic" opacity={0.7}
>
{countdownLabel}
</text>
)}
{/* Status */}
<text
x={tx + triggerW / 2} y={ty + TRIGGER_H + (countdownLabel ? TRIGGER_TEXT_Y + TRIGGER_TEXT_STEP : TRIGGER_TEXT_Y)}
fill={statusColor} fontSize={8.5}
textAnchor="middle" opacity={0.8}
>
{statusLabel}
</text>
</g>
);
};
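The countdown formatting inside `renderTriggerNode` can be sanity-checked as a standalone sketch (reimplemented here for illustration — not the component code): seconds are bucketed into hours/minutes when over an hour remains, otherwise minutes/seconds, with the trailing unit zero-padded.

```typescript
// Standalone sketch of the "next fire" countdown label logic.
function countdownLabel(nextFireIn: number): string {
  const h = Math.floor(nextFireIn / 3600);
  const m = Math.floor((nextFireIn % 3600) / 60);
  const s = Math.floor(nextFireIn % 60);
  return h > 0
    ? `next in ${h}h ${String(m).padStart(2, "0")}m` // hours granularity
    : `next in ${m}m ${String(s).padStart(2, "0")}s`; // minutes granularity
}

console.log(countdownLabel(5400)); // → "next in 1h 30m"
console.log(countdownLabel(125));  // → "next in 2m 05s"
```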
const renderTriggerEdge = (triggerIdx: number) => {
if (nodes.length === 0) return null;
const triggerNode = triggerNodes[triggerIdx];
const runtimeTargetId = triggerNode?.next?.[0];
const targetDraftId = runtimeTargetId
? flowchartMap?.[runtimeTargetId]?.[0] ?? runtimeTargetId
: draft?.entry_node;
const targetIdx = targetDraftId ? idxMap[targetDraftId] ?? 0 : 0;
const targetPos = nodePos(targetIdx);
const targetX = targetPos.x + nodeW / 2;
const targetY = targetPos.y;
const tx = triggerPillX(triggerIdx) + triggerW / 2;
const ty = TOP_Y + TRIGGER_H + TRIGGER_TEXT_Y + TRIGGER_TEXT_STEP + 4;
const midY = (ty + targetY) / 2;
const d = Math.abs(tx - targetX) < 2
? `M ${tx} ${ty} L ${targetX} ${targetY}`
: `M ${tx} ${ty} L ${tx} ${midY} L ${targetX} ${midY} L ${targetX} ${targetY}`;
return (
<g key={`trigger-edge-${triggerIdx}`}>
<path d={d} fill="none" stroke={chrome.edge} strokeWidth={1.2} strokeDasharray="4 3" />
<polygon
points={`${targetX - 3},${targetY - 5} ${targetX + 3},${targetY - 5} ${targetX},${targetY - 1}`}
fill={chrome.edgeArrow}
/>
</g>
);
};
const renderNode = (node: DraftNode, i: number) => {
const pos = nodePos(i);
const isHovered = hoveredNode === node.id;
@@ -926,7 +1020,13 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
if (rect) setMousePos({ x: e.clientX - rect.left, y: e.clientY - rect.top });
}}
onMouseLeave={() => { if (!shiftHeld.current) { setHoveredNode(null); setMousePos(null); } }}
style={{ cursor: "pointer" }}
style={{
cursor: "pointer",
...(entrancePhase !== "idle" ? {
opacity: entrancePhase === "hidden" ? 0 : 1,
transition: `opacity 300ms ease-out ${i * 120}ms`,
} : {}),
}}
>
<FlowchartShape
@@ -966,18 +1066,17 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
);
};
if (loading || !draft || nodes.length === 0) {
if (!draft || nodes.length === 0) {
return (
<div className="flex flex-col h-full">
<div className="px-4 pt-3 pb-1.5 flex items-center gap-2">
<p className="text-[11px] text-muted-foreground font-medium uppercase tracking-wider">Draft</p>
<span className="text-[9px] font-mono font-medium rounded px-1 py-0.5 leading-none border text-amber-500/60 border-amber-500/20">planning</span>
</div>
<div className="flex-1 flex flex-col items-center justify-center gap-3">
{loading || !draft ? (
{loadingMessage ? (
<>
<Loader2 className="w-5 h-5 animate-spin text-muted-foreground/40" />
<p className="text-xs text-muted-foreground/50">Designing flowchart</p>
<p className="text-xs text-muted-foreground/50">{loadingMessage}</p>
</>
) : (
<p className="text-xs text-muted-foreground/60 text-center italic">
@@ -1004,6 +1103,11 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
<Loader2 className="w-2.5 h-2.5 animate-spin" />
building
</span>
) : loadingMessage ? (
<span className="text-[9px] font-mono font-medium rounded px-1 py-0.5 leading-none border text-amber-500/60 border-amber-500/20 flex items-center gap-1">
<Loader2 className="w-2.5 h-2.5 animate-spin" />
updating
</span>
) : (
<span className={`text-[9px] font-mono font-medium rounded px-1 py-0.5 leading-none border ${hasStatusOverlay ? "text-emerald-500/60 border-emerald-500/20" : "text-amber-500/60 border-amber-500/20"}`}>
{hasStatusOverlay ? "live" : "planning"}
@@ -1023,12 +1127,16 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
onMouseMove={handleMouseMove}
onMouseUp={handleMouseUp}
onMouseLeave={handleMouseUp}
className={`w-full h-full${building ? " opacity-30" : ""}`}
style={{ cursor: dragging ? "grabbing" : "grab" }}
className="w-full h-full"
style={{
opacity: building || loadingMessage ? 0.3 : 1,
transition: building || loadingMessage ? "none" : "opacity 300ms ease-out",
cursor: dragging ? "grabbing" : "grab",
}}
>
<svg
width="100%"
viewBox={`0 0 ${Math.max((maxContentRight ?? 0), groupBoxMaxX) + (backEdgeOverflow ?? 0)} ${totalH}`}
viewBox={`0 0 ${Math.max((maxContentRight ?? 0), groupBoxMaxX, triggerNodes.length > 0 ? triggerPillX(triggerNodes.length - 1) + triggerW : 0) + (backEdgeOverflow ?? 0)} ${totalH}`}
preserveAspectRatio="xMidYMin meet"
className="select-none"
style={{
@@ -1112,6 +1220,11 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
);
})}
{/* Trigger edges (dashed lines from trigger pills to first draft node) */}
{triggerNodes.map((_, i) => renderTriggerEdge(i))}
{/* Trigger pill nodes */}
{triggerNodes.map((tn, i) => renderTriggerNode(tn, i))}
{forwardEdges.map((e, i) => renderEdge(e, i))}
{backEdges.map((e, i) => renderBackEdge(e, i))}
{nodes.map((n, i) => renderNode(n, i))}
@@ -1150,6 +1263,15 @@ export default function DraftGraph({ draft, onNodeClick, flowchartMap, runtimeNo
</div>
)}
{!building && loadingMessage && (
<div className="absolute inset-0 flex items-center justify-center">
<div className="flex flex-col items-center gap-3">
<Loader2 className="w-6 h-6 animate-spin text-muted-foreground/40" />
<p className="text-xs text-muted-foreground/50">{loadingMessage}</p>
</div>
</div>
)}
{/* Zoom controls */}
<div className="absolute bottom-3 right-3 flex items-center gap-1 bg-card/80 backdrop-blur-sm border border-border/40 rounded-lg p-0.5 shadow-sm">
<button
@@ -1,6 +1,6 @@
import { useState, useEffect, useRef } from "react";
import { X, Cpu, Zap, Clock, RotateCcw, CheckCircle2, AlertCircle, Loader2, ChevronDown, ChevronRight, Copy, Check, Terminal, Wrench, BookOpen, GitBranch, Bot } from "lucide-react";
import type { GraphNode, NodeStatus } from "./AgentGraph";
import type { GraphNode, NodeStatus } from "./graph-types";
import type { NodeSpec, ToolInfo, NodeCriteria } from "../api/types";
import { graphsApi } from "../api/graphs";
import { logsApi } from "../api/logs";
@@ -0,0 +1,40 @@
import { memo, useState } from "react";
import { Play, Pause, Loader2, CheckCircle2 } from "lucide-react";
import type { RunButtonProps } from "./graph-types";
export const RunButton = memo(function RunButton({ runState, disabled, onRun, onPause, btnRef }: RunButtonProps) {
const [hovered, setHovered] = useState(false);
const showPause = runState === "running" && hovered;
return (
<button
ref={btnRef}
onClick={runState === "running" ? onPause : onRun}
disabled={runState === "deploying" || disabled}
onMouseEnter={() => setHovered(true)}
onMouseLeave={() => setHovered(false)}
className={`flex items-center gap-1.5 px-2.5 py-1 rounded-md text-[11px] font-semibold transition-all duration-200 ${
showPause
? "bg-amber-500/15 text-amber-400 border border-amber-500/40 hover:bg-amber-500/25 active:scale-95 cursor-pointer"
: runState === "running"
? "bg-green-500/15 text-green-400 border border-green-500/30 cursor-pointer"
: runState === "deploying"
? "bg-primary/10 text-primary border border-primary/20 cursor-default"
: disabled
? "bg-muted/30 text-muted-foreground/40 border border-border/20 cursor-not-allowed"
: "bg-primary/10 text-primary border border-primary/20 hover:bg-primary/20 hover:border-primary/40 active:scale-95"
}`}
>
{runState === "deploying" ? (
<Loader2 className="w-3 h-3 animate-spin" />
) : showPause ? (
<Pause className="w-3 h-3 fill-current" />
) : runState === "running" ? (
<CheckCircle2 className="w-3 h-3" />
) : (
<Play className="w-3 h-3 fill-current" />
)}
{runState === "deploying" ? "Deploying\u2026" : showPause ? "Pause" : runState === "running" ? "Running" : "Run"}
</button>
);
});
@@ -0,0 +1,28 @@
export type NodeStatus = "running" | "complete" | "pending" | "error" | "looping";
export type NodeType = "execution" | "trigger";
export interface GraphNode {
id: string;
label: string;
status: NodeStatus;
nodeType?: NodeType;
triggerType?: string;
triggerConfig?: Record<string, unknown>;
next?: string[];
backEdges?: string[];
iterations?: number;
maxIterations?: number;
statusLabel?: string;
edgeLabels?: Record<string, string>;
}
export type RunState = "idle" | "deploying" | "running";
export interface RunButtonProps {
runState: RunState;
disabled: boolean;
onRun: () => void;
onPause: () => void;
btnRef: React.Ref<HTMLButtonElement>;
}
+2 -2
@@ -1,9 +1,9 @@
import type { GraphTopology, NodeSpec } from "@/api/types";
import type { GraphNode, NodeStatus } from "@/components/AgentGraph";
import type { GraphNode, NodeStatus } from "@/components/graph-types";
/**
* Convert a backend GraphTopology (nodes + edges + entry_node) into
* the GraphNode[] shape that AgentGraph renders.
* the GraphNode[] shape that DraftGraph renders.
*
* Four jobs:
* 1. Synthesize trigger nodes from non-manual entry_points
+88
@@ -0,0 +1,88 @@
import { useEffect, useState } from "react";
// ── Shared graph utilities ──
// Common helpers used by both AgentGraph and DraftGraph.
// AgentGraph still has its own copies for now (separate cleanup PR).
/** Read a CSS custom property value (space-separated HSL components). */
export function cssVar(name: string): string {
return getComputedStyle(document.documentElement).getPropertyValue(name).trim();
}
/** Truncate label to fit within `availablePx` at the given fontSize. */
export function truncateLabel(label: string, availablePx: number, fontSize: number): string {
const avgCharW = fontSize * 0.58;
const maxChars = Math.floor(availablePx / avgCharW);
if (label.length <= maxChars) return label;
return label.slice(0, Math.max(maxChars - 1, 1)) + "\u2026";
}
// ── Trigger styling ──
export type TriggerColorSet = { bg: string; border: string; text: string; icon: string };
export function buildTriggerColors(): TriggerColorSet {
const bg = cssVar("--trigger-bg") || "210 25% 14%";
const border = cssVar("--trigger-border") || "210 30% 30%";
const text = cssVar("--trigger-text") || "210 30% 65%";
const icon = cssVar("--trigger-icon") || "210 40% 55%";
return {
bg: `hsl(${bg})`,
border: `hsl(${border})`,
text: `hsl(${text})`,
icon: `hsl(${icon})`,
};
}
export const ACTIVE_TRIGGER_COLORS: TriggerColorSet = {
bg: "hsl(210,30%,18%)",
border: "hsl(210,50%,50%)",
text: "hsl(210,40%,75%)",
icon: "hsl(210,60%,65%)",
};
export const TRIGGER_ICONS: Record<string, string> = {
webhook: "\u26A1", // lightning bolt
timer: "\u23F1", // stopwatch
api: "\u2192", // right arrow
event: "\u223F", // sine wave
};
/** Format a cron expression into a human-readable schedule label. */
export function cronToLabel(cron: string): string {
const parts = cron.trim().split(/\s+/);
if (parts.length !== 5) return cron;
const [min, hour, dom, mon, dow] = parts;
// */N * * * * -> "Every Nm"
if (min.startsWith("*/") && hour === "*" && dom === "*" && mon === "*" && dow === "*") {
return `Every ${min.slice(2)}m`;
}
// 0 */N * * * -> "Every Nh"
if (min === "0" && hour.startsWith("*/") && dom === "*" && mon === "*" && dow === "*") {
return `Every ${hour.slice(2)}h`;
}
// 0 H * * * -> "Daily at Ham/pm"
if (dom === "*" && mon === "*" && dow === "*" && !min.includes("*") && !hour.includes("*")) {
const h = parseInt(hour, 10);
const m = parseInt(min, 10);
const suffix = h >= 12 ? "PM" : "AM";
const h12 = h % 12 || 12;
return m === 0 ? `Daily at ${h12}${suffix}` : `Daily at ${h12}:${String(m).padStart(2, "0")}${suffix}`;
}
return cron;
}
/** Theme-reactive hook for inactive trigger colors. */
export function useTriggerColors(): TriggerColorSet {
const [colors, setColors] = useState<TriggerColorSet>(buildTriggerColors);
useEffect(() => {
const rebuild = () => setColors(buildTriggerColors());
const obs = new MutationObserver(rebuild);
obs.observe(document.documentElement, { attributes: true, attributeFilter: ["class", "style"] });
return () => obs.disconnect();
}, []);
return colors;
}
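The `cronToLabel` heuristic above recognizes exactly three cron shapes and falls back to the raw string otherwise. A standalone sketch (reimplemented here for illustration, mirroring the function in `graphUtils`):

```typescript
// Sketch of the cron → human-readable label heuristic.
function cronToLabel(cron: string): string {
  const parts = cron.trim().split(/\s+/);
  if (parts.length !== 5) return cron; // not a 5-field cron: show raw
  const [min, hour, dom, mon, dow] = parts;
  // */N * * * * -> "Every Nm"
  if (min.startsWith("*/") && hour === "*" && dom === "*" && mon === "*" && dow === "*") {
    return `Every ${min.slice(2)}m`;
  }
  // 0 */N * * * -> "Every Nh"
  if (min === "0" && hour.startsWith("*/") && dom === "*" && mon === "*" && dow === "*") {
    return `Every ${hour.slice(2)}h`;
  }
  // fixed minute + hour, every day -> "Daily at H[:MM]AM/PM"
  if (dom === "*" && mon === "*" && dow === "*" && !min.includes("*") && !hour.includes("*")) {
    const h = parseInt(hour, 10);
    const m = parseInt(min, 10);
    const suffix = h >= 12 ? "PM" : "AM";
    const h12 = h % 12 || 12;
    return m === 0
      ? `Daily at ${h12}${suffix}`
      : `Daily at ${h12}:${String(m).padStart(2, "0")}${suffix}`;
  }
  return cron; // anything else: show raw
}

console.log(cronToLabel("*/15 * * * *")); // → "Every 15m"
console.log(cronToLabel("0 */6 * * *"));  // → "Every 6h"
console.log(cronToLabel("30 14 * * *"));  // → "Daily at 2:30PM"
```

Unrecognized expressions (day-of-week schedules, lists, ranges) pass through unchanged, which keeps the pill label honest rather than guessing.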
+1 -1
@@ -4,7 +4,7 @@
*/
import type { ChatMessage } from "@/components/ChatPanel";
import type { GraphNode } from "@/components/AgentGraph";
import type { GraphNode } from "@/components/graph-types";
export const TAB_STORAGE_KEY = "hive:workspace-tabs";
+294 -64
@@ -2,7 +2,7 @@ import { useState, useCallback, useRef, useEffect, useMemo } from "react";
import ReactDOM from "react-dom";
import { useSearchParams, useNavigate } from "react-router-dom";
import { Plus, KeyRound, Sparkles, Layers, ChevronLeft, Bot, Loader2, WifiOff, X } from "lucide-react";
import AgentGraph, { type GraphNode, type NodeStatus } from "@/components/AgentGraph";
import type { GraphNode, NodeStatus } from "@/components/graph-types";
import DraftGraph from "@/components/DraftGraph";
import ChatPanel, { type ChatMessage } from "@/components/ChatPanel";
import TopBar from "@/components/TopBar";
@@ -17,6 +17,7 @@ import { useMultiSSE } from "@/hooks/use-sse";
import type { LiveSession, AgentEvent, DiscoverEntry, NodeSpec, DraftGraph as DraftGraphData } from "@/api/types";
import { sseEventToChatMessage, formatAgentDisplayName } from "@/lib/chat-helpers";
import { topologyToGraphNodes } from "@/lib/graph-converter";
import { cronToLabel } from "@/lib/graphUtils";
import { ApiError } from "@/api/client";
const makeId = () => Math.random().toString(36).slice(2, 9);
@@ -327,6 +328,8 @@ interface AgentBackendState {
workerIsTyping: boolean;
llmSnapshots: Record<string, string>;
activeToolCalls: Record<string, { name: string; done: boolean; streamId: string }>;
/** True while save_agent_draft tool is running (between tool_call_started and draft_graph_updated) */
designingDraft: boolean;
/** Agent folder path — set after scaffolding, used for credential queries */
agentPath: string | null;
/** Structured question text from ask_user with options */
@@ -353,6 +356,7 @@ function defaultAgentState(): AgentBackendState {
workerInputMessageId: null,
queenBuilding: false,
queenPhase: "planning",
designingDraft: false,
draftGraph: null,
originalDraft: null,
flowchartMap: null,
@@ -554,9 +558,46 @@ export default function Workspace() {
const [dismissedBanner, setDismissedBanner] = useState<string | null>(null);
const [selectedNode, setSelectedNode] = useState<GraphNode | null>(null);
const [triggerTaskDraft, setTriggerTaskDraft] = useState("");
const [triggerCronDraft, setTriggerCronDraft] = useState("");
const [triggerTaskSaving, setTriggerTaskSaving] = useState(false);
const [triggerScheduleSaving, setTriggerScheduleSaving] = useState(false);
const [triggerCronSaved, setTriggerCronSaved] = useState(false);
const [triggerTaskSaved, setTriggerTaskSaved] = useState(false);
const [newTabOpen, setNewTabOpen] = useState(false);
const newTabBtnRef = useRef<HTMLButtonElement>(null);
const [graphPanelPct, setGraphPanelPct] = useState(30);
const savedGraphPanelPct = useRef(30);
const resizing = useRef(false);
// Drag-to-resize the graph panel
useEffect(() => {
const onMouseMove = (e: MouseEvent) => {
if (!resizing.current) return;
const pct = (e.clientX / window.innerWidth) * 100;
setGraphPanelPct(Math.max(15, Math.min(50, pct)));
};
const onMouseUp = () => {
resizing.current = false;
document.body.style.cursor = "";
};
window.addEventListener("mousemove", onMouseMove);
window.addEventListener("mouseup", onMouseUp);
return () => {
window.removeEventListener("mousemove", onMouseMove);
window.removeEventListener("mouseup", onMouseUp);
};
}, []);
// Shrink graph panel when node detail opens, restore when it closes
const nodeIsSelected = selectedNode !== null;
useEffect(() => {
if (nodeIsSelected) {
savedGraphPanelPct.current = graphPanelPct;
setGraphPanelPct(prev => Math.min(prev, 30));
} else {
setGraphPanelPct(savedGraphPanelPct.current);
}
}, [nodeIsSelected]); // eslint-disable-line react-hooks/exhaustive-deps
// Ref mirror of sessionsByAgent so SSE callback can read current graph
// state without adding sessionsByAgent to its dependency array.
@@ -577,6 +618,9 @@ export default function Workspace() {
// it was created in (avoids stale-closure when phase change and message
// events arrive in the same React batch).
const queenPhaseRef = useRef<Record<string, string>>({});
// Timestamp when designingDraft was set — used to enforce minimum spinner duration.
const designingDraftSinceRef = useRef<Record<string, number>>({});
const designingDraftTimerRef = useRef<Record<string, ReturnType<typeof setTimeout>>>({});
// Synchronous ref to suppress the queen's auto-intro SSE messages
// after a cold-restore (where we already restored the conversation from disk).
@@ -1186,8 +1230,8 @@ export default function Workspace() {
graphsApi.draftGraph(state.sessionId).then(({ draft }) => {
if (draft) updateAgentState(agentType, { draftGraph: draft });
}).catch(() => {});
} else {
// Fetch flowchart map for non-planning phases (staging, running, building)
} else if (state.queenPhase !== "building") {
// Fetch flowchart map for non-building phases (staging, running)
if (state.originalDraft) continue; // already have it
if (fetchedFlowchartMapSessionsRef.current.has(state.sessionId)) continue;
fetchedFlowchartMapSessionsRef.current.add(state.sessionId);
@@ -1196,6 +1240,7 @@ export default function Workspace() {
updateAgentState(agentType, {
flowchartMap: map,
originalDraft: original_draft,
draftGraph: null,
});
}
}).catch(() => {});
@@ -1220,12 +1265,28 @@ export default function Workspace() {
const fireMap = new Map<string, number>();
const taskMap = new Map<string, string>();
const labelMap = new Map<string, string>();
const targetMap = new Map<string, string>();
for (const ep of triggerEps) {
const nodeId = `__trigger_${ep.id}`;
if (ep.next_fire_in != null) {
fireMap.set(`__trigger_${ep.id}`, ep.next_fire_in);
fireMap.set(nodeId, ep.next_fire_in);
}
if (ep.task != null) {
taskMap.set(`__trigger_${ep.id}`, ep.task);
taskMap.set(nodeId, ep.task);
}
const cron = ep.trigger_config?.cron as string | undefined;
const interval = ep.trigger_config?.interval_minutes as number | undefined;
const epLabel = cron
? cronToLabel(cron)
: interval
? `Every ${interval >= 60 ? `${interval / 60}h` : `${interval}m`}`
: ep.name || undefined;
if (epLabel) {
labelMap.set(nodeId, epLabel);
}
if (ep.entry_node) {
targetMap.set(nodeId, ep.entry_node);
}
}
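The interval branch of the label logic formats minutes as `Every Nh` at or above an hour and `Every Nm` below it. A Python sketch of that template (`cronToLabel` is the project's own helper and is not reproduced here):

```python
def interval_label(minutes: float) -> str:
    """Mirror of the `Every ...` template used for interval triggers."""
    if minutes >= 60:
        # %g drops a trailing .0, matching JS number-to-string (120 -> "2h")
        return f"Every {minutes / 60:g}h"
    return f"Every {minutes:g}m"
```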
@@ -1234,14 +1295,18 @@ export default function Workspace() {
if (!ss?.length) return prev;
const existingIds = new Set(ss[0].graphNodes.map(n => n.id));
// Update existing trigger nodes
// Update existing trigger nodes (countdown, task, label, target)
let updated = ss[0].graphNodes.map((n) => {
if (n.nodeType !== "trigger") return n;
const nfi = fireMap.get(n.id);
const task = taskMap.get(n.id);
if (nfi == null && task == null) return n;
const label = labelMap.get(n.id);
const target = targetMap.get(n.id);
if (nfi == null && task == null && !label && !target) return n;
return {
...n,
...(label && label !== n.label ? { label } : {}),
...(target ? { next: [target] } : {}),
triggerConfig: {
...n.triggerConfig,
...(nfi != null ? { next_fire_in: nfi } : {}),
@@ -1251,14 +1316,15 @@ export default function Workspace() {
});
// Discover new triggers not yet in the graph
const entryNode = ss[0].graphNodes.find(n => n.nodeType !== "trigger")?.id;
const fallbackEntry = ss[0].graphNodes.find(n => n.nodeType !== "trigger")?.id;
const newNodes: GraphNode[] = [];
for (const ep of triggerEps) {
const nodeId = `__trigger_${ep.id}`;
if (existingIds.has(nodeId)) continue;
const target = ep.entry_node || fallbackEntry;
newNodes.push({
id: nodeId,
label: ep.name || ep.id,
label: labelMap.get(nodeId) || ep.name || ep.id,
status: "pending",
nodeType: "trigger",
triggerType: ep.trigger_type,
@@ -1267,7 +1333,7 @@ export default function Workspace() {
...(ep.next_fire_in != null ? { next_fire_in: ep.next_fire_in } : {}),
...(ep.task ? { task: ep.task } : {}),
},
...(entryNode ? { next: [entryNode] } : {}),
...(target ? { next: [target] } : {}),
});
}
if (newNodes.length > 0) {
@@ -1839,6 +1905,15 @@ export default function Workspace() {
const toolName = (event.data?.tool_name as string) || "unknown";
const toolUseId = (event.data?.tool_use_id as string) || "";
// Flag when the queen starts designing/updating the flowchart
if (isQueen && toolName === "save_agent_draft") {
designingDraftSinceRef.current[agentType] = Date.now();
// Clear any pending delayed-clear timer from a previous call
const prev = designingDraftTimerRef.current[agentType];
if (prev) clearTimeout(prev);
updateAgentState(agentType, { designingDraft: true });
}
// Track active (in-flight) tools and upsert activity row into chat
const sid = event.stream_id;
setAgentStates(prev => {
@@ -2046,20 +2121,19 @@ export default function Workspace() {
queenBuilding: newPhase === "building",
// Sync workerRunState so the RunButton reflects the phase
workerRunState: newPhase === "running" ? "running" : "idle",
// Clear draft graph once we leave planning/building; keep it during
// building so the DraftGraph can show a loading overlay.
...(newPhase !== "planning" && newPhase !== "building"
? { draftGraph: null }
: newPhase === "planning"
? { originalDraft: null, flowchartMap: null }
: {}),
// Clear originalDraft/flowchartMap when re-entering planning.
// draftGraph is cleared later when originalDraft arrives, so the
// entrance animation has data to render during the handoff.
...(newPhase === "planning"
? { originalDraft: null, flowchartMap: null }
: {}),
// Store agent path for credential queries
...(eventAgentPath ? { agentPath: eventAgentPath } : {}),
});
{
const sid = agentStates[agentType]?.sessionId;
if (sid) {
if (newPhase !== "planning") {
if (newPhase !== "planning" && newPhase !== "building") {
fetchedDraftSessionsRef.current.delete(sid);
fetchedFlowchartMapSessionsRef.current.delete(sid);
// Fetch the flowchart map (original draft + dissolution mapping)
@@ -2069,7 +2143,8 @@ export default function Workspace() {
originalDraft: original_draft,
});
}).catch(() => {});
} else {
} else if (newPhase === "planning") {
// Only clear dedup sets when re-entering planning (not building)
fetchedDraftSessionsRef.current.delete(sid);
fetchedFlowchartMapSessionsRef.current.delete(sid);
}
@@ -2082,7 +2157,28 @@ export default function Workspace() {
// The draft dict is published directly as event.data (not nested under a key)
const draft = event.data as unknown as DraftGraphData | undefined;
if (draft?.nodes) {
updateAgentState(agentType, { draftGraph: draft });
// Ensure the "Designing flowchart…" spinner stays visible for a
// minimum duration so users see feedback before the draft appears.
const MIN_SPINNER_MS = 600;
const since = designingDraftSinceRef.current[agentType] || 0;
const elapsed = Date.now() - since;
const remaining = Math.max(0, MIN_SPINNER_MS - elapsed);
const applyDraft = () => {
delete designingDraftTimerRef.current[agentType];
updateAgentState(agentType, { draftGraph: draft, designingDraft: false });
};
if (remaining > 0 && since > 0) {
// Update draftGraph now (so data is ready) but keep spinner visible
updateAgentState(agentType, { draftGraph: draft });
designingDraftTimerRef.current[agentType] = setTimeout(() => {
updateAgentState(agentType, { designingDraft: false });
delete designingDraftTimerRef.current[agentType];
}, remaining);
} else {
applyDraft();
}
}
break;
}
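The minimum-spinner logic above boils down to one subtraction: how much of `MIN_SPINNER_MS` remains since `save_agent_draft` started. A Python sketch, assuming the same 600 ms floor (names hypothetical):

```python
def spinner_remaining_ms(now_ms: int, since_ms: int,
                         min_spinner_ms: int = 600) -> int:
    """Milliseconds to keep the 'Designing flowchart...' overlay visible."""
    if since_ms <= 0:
        return 0  # no tool-start was recorded; apply the draft immediately
    return max(0, min_spinner_ms - (now_ms - since_ms))
```

A zero result takes the `applyDraft()` path; a positive result schedules the delayed `designingDraft: false`.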
@@ -2093,6 +2189,7 @@ export default function Workspace() {
updateAgentState(agentType, {
flowchartMap: mapData.map ?? null,
originalDraft: mapData.original_draft ?? null,
draftGraph: null,
});
}
break;
@@ -2166,10 +2263,18 @@ export default function Workspace() {
// Synthesize new trigger node at the front of the graph
const triggerType = (event.data?.trigger_type as string) || "timer";
const triggerConfig = (event.data?.trigger_config as Record<string, unknown>) || {};
const entryNode = s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
const entryNode = (event.data?.entry_node as string) || s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
const triggerName = (event.data?.name as string) || triggerId;
const _cron = triggerConfig.cron as string | undefined;
const _interval = triggerConfig.interval_minutes as number | undefined;
const computedLabel = _cron
? cronToLabel(_cron)
: _interval
? `Every ${_interval >= 60 ? `${_interval / 60}h` : `${_interval}m`}`
: triggerName;
const newNode: GraphNode = {
id: nodeId,
label: triggerId,
label: computedLabel,
status: "running",
nodeType: "trigger",
triggerType,
@@ -2234,10 +2339,18 @@ export default function Workspace() {
if (s.graphNodes.some(n => n.id === nodeId)) return s;
const triggerType = (event.data?.trigger_type as string) || "timer";
const triggerConfig = (event.data?.trigger_config as Record<string, unknown>) || {};
const entryNode = s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
const entryNode = (event.data?.entry_node as string) || s.graphNodes.find(n => n.nodeType !== "trigger")?.id;
const triggerName = (event.data?.name as string) || triggerId;
const _cron2 = triggerConfig.cron as string | undefined;
const _interval2 = triggerConfig.interval_minutes as number | undefined;
const computedLabel2 = _cron2
? cronToLabel(_cron2)
: _interval2
? `Every ${_interval2 >= 60 ? `${_interval2 / 60}h` : `${_interval2}m`}`
: triggerName;
const newNode: GraphNode = {
id: nodeId,
label: triggerId,
label: computedLabel2,
status: "pending",
nodeType: "trigger",
triggerType,
@@ -2252,6 +2365,43 @@ export default function Workspace() {
break;
}
case "trigger_updated": {
const triggerId = event.data?.trigger_id as string;
if (triggerId) {
const nodeId = `__trigger_${triggerId}`;
const triggerConfig = (event.data?.trigger_config as Record<string, unknown>) || {};
const cron = triggerConfig.cron as string | undefined;
const interval = triggerConfig.interval_minutes as number | undefined;
const newLabel = cron
? cronToLabel(cron)
: interval
? `Every ${interval >= 60 ? `${interval / 60}h` : `${interval}m`}`
: undefined;
setSessionsByAgent(prev => {
const sessions = prev[agentType] || [];
const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
return {
...prev,
[agentType]: sessions.map(s => {
if (s.id !== activeId) return s;
return {
...s,
graphNodes: s.graphNodes.map(n => {
if (n.id !== nodeId) return n;
return {
...n,
...(newLabel ? { label: newLabel } : {}),
triggerConfig: { ...n.triggerConfig, ...triggerConfig },
};
}),
};
}),
};
});
}
break;
}
case "trigger_removed": {
const triggerId = event.data?.trigger_id as string;
if (triggerId) {
@@ -2305,14 +2455,43 @@ export default function Workspace() {
const liveSelectedNode = selectedNode && currentGraph.nodes.find(n => n.id === selectedNode.id);
const resolvedSelectedNode = liveSelectedNode || selectedNode;
// Sync trigger task draft when selected trigger node changes
// Sync trigger drafts when selected trigger node changes
useEffect(() => {
if (resolvedSelectedNode?.nodeType === "trigger") {
const tc = resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined;
setTriggerTaskDraft((tc?.task as string) || "");
setTriggerCronDraft((tc?.cron as string) || "");
}
}, [resolvedSelectedNode?.id]);
const patchTriggerNode = useCallback((agentType: string, triggerNodeId: string, patch: { task?: string; trigger_config?: Record<string, unknown>; label?: string }) => {
setSessionsByAgent(prev => {
const sessions = prev[agentType] || [];
const activeId = activeSessionRef.current[agentType] || sessions[0]?.id;
return {
...prev,
[agentType]: sessions.map(s => {
if (s.id !== activeId) return s;
return {
...s,
graphNodes: s.graphNodes.map(n => {
if (n.id !== triggerNodeId) return n;
return {
...n,
...(patch.label !== undefined ? { label: patch.label } : {}),
triggerConfig: {
...n.triggerConfig,
...(patch.trigger_config || {}),
...(patch.task !== undefined ? { task: patch.task } : {}),
},
};
}),
};
}),
};
});
}, []);
// Build a flat list of all agent-type tabs for the tab bar
const agentTabs = Object.entries(sessionsByAgent)
.filter(([, sessions]) => sessions.length > 0)
@@ -2827,38 +3006,40 @@ export default function Workspace() {
{/* Main content area */}
<div className="flex flex-1 min-h-0">
{/* ── Pipeline graph + chat ─────────────────────────────────── */}
<div className={`${activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building" || activeAgentState?.originalDraft ? "w-[500px] min-w-[400px]" : "w-[300px] min-w-[240px]"} bg-card/30 flex flex-col border-r border-border/30 transition-[width] duration-200`}>
{/* ── Draft flowchart + chat ─────────────────────────────────── */}
<div
className="bg-card/30 flex flex-col border-r border-border/30 relative"
style={{ width: `${graphPanelPct}%`, minWidth: 240, flexShrink: 0 }}
>
<div className="flex-1 min-h-0">
{activeAgentState?.queenPhase === "planning" || activeAgentState?.queenPhase === "building" ? (
<DraftGraph draft={activeAgentState?.draftGraph ?? null} loading={!activeAgentState?.draftGraph} building={activeAgentState?.queenBuilding} onRun={handleRun} onPause={handlePause} runState={activeAgentState?.workerRunState ?? "idle"} />
) : activeAgentState?.originalDraft ? (
<DraftGraph
draft={activeAgentState.originalDraft}
building={activeAgentState?.queenBuilding}
onRun={handleRun}
onPause={handlePause}
runState={activeAgentState?.workerRunState ?? "idle"}
flowchartMap={activeAgentState.flowchartMap ?? undefined}
runtimeNodes={currentGraph.nodes}
onRuntimeNodeClick={(runtimeNodeId) => {
const node = currentGraph.nodes.find(n => n.id === runtimeNodeId);
if (node) setSelectedNode(prev => prev?.id === node.id ? null : node);
}}
/>
) : (
<AgentGraph
nodes={currentGraph.nodes}
title={currentGraph.title}
onNodeClick={(node) => setSelectedNode(prev => prev?.id === node.id ? null : node)}
onRun={handleRun}
onPause={handlePause}
runState={activeAgentState?.workerRunState ?? "idle"}
building={activeAgentState?.queenBuilding ?? false}
queenPhase={activeAgentState?.queenPhase ?? "building"}
/>
)}
<DraftGraph
key={activeWorker}
draft={activeAgentState?.originalDraft ?? activeAgentState?.draftGraph ?? null}
originalDraft={activeAgentState?.originalDraft ?? null}
loadingMessage={
activeAgentState?.designingDraft
? "Designing flowchart…"
: !activeAgentState?.originalDraft && !activeAgentState?.draftGraph && activeAgentState?.queenPhase !== "planning"
? "Loading flowchart…"
: null
}
building={activeAgentState?.queenBuilding}
onRun={handleRun}
onPause={handlePause}
runState={activeAgentState?.workerRunState ?? "idle"}
flowchartMap={activeAgentState?.flowchartMap ?? undefined}
runtimeNodes={currentGraph.nodes}
onRuntimeNodeClick={(runtimeNodeId) => {
const node = currentGraph.nodes.find(n => n.id === runtimeNodeId);
if (node) setSelectedNode(prev => prev?.id === node.id ? null : node);
}}
/>
</div>
{/* Resize handle */}
<div
className="absolute top-0 right-0 w-1 h-full cursor-col-resize hover:bg-primary/30 active:bg-primary/40 transition-colors z-10"
onMouseDown={() => { resizing.current = true; document.body.style.cursor = "col-resize"; }}
/>
</div>
<div className="flex-1 min-w-0 flex">
<div className="flex-1 min-w-0 relative">
@@ -2979,18 +3160,64 @@ export default function Workspace() {
const interval = tc?.interval_minutes as number | undefined;
const eventTypes = tc?.event_types as string[] | undefined;
const scheduleLabel = cron
? `cron: ${cron}`
? cronToLabel(cron)
: interval
? `Every ${interval >= 60 ? `${interval / 60}h` : `${interval}m`}`
: eventTypes?.length
? eventTypes.join(", ")
: null;
return scheduleLabel ? (
const canEditCron = resolvedSelectedNode.triggerType === "timer";
const cronChanged = canEditCron && triggerCronDraft.trim() !== (cron || "");
return scheduleLabel || canEditCron ? (
<div>
<p className="text-[10px] font-medium text-muted-foreground uppercase tracking-wider mb-1.5">Schedule</p>
<p className="text-xs text-foreground/80 font-mono bg-muted/30 rounded-lg px-3 py-2 border border-border/20">
{scheduleLabel}
</p>
{scheduleLabel && (
<p className="text-xs text-foreground/80 font-mono bg-muted/30 rounded-lg px-3 py-2 border border-border/20">
{scheduleLabel}
</p>
)}
{canEditCron && (
<>
<input
value={triggerCronDraft}
onChange={(e) => setTriggerCronDraft(e.target.value)}
placeholder="0 5 * * *"
className="mt-1.5 w-full text-xs text-foreground/80 bg-muted/30 rounded-lg px-3 py-2 border border-border/20 font-mono focus:outline-none focus:border-primary/40"
/>
<p className="text-[10px] text-muted-foreground/60 mt-1">
Edit the cron expression for this timer trigger.
</p>
{(cronChanged || triggerCronSaved) && (
<button
disabled={triggerScheduleSaving || !cronChanged}
onClick={async () => {
const sessionId = activeAgentState?.sessionId;
const triggerId = resolvedSelectedNode.id.replace("__trigger_", "");
const nextCron = triggerCronDraft.trim();
if (!sessionId || !nextCron) return;
const nextTriggerConfig: Record<string, unknown> = { cron: nextCron };
setTriggerScheduleSaving(true);
try {
await sessionsApi.updateTrigger(sessionId, triggerId, {
trigger_config: nextTriggerConfig,
});
patchTriggerNode(activeWorker, resolvedSelectedNode.id, {
trigger_config: nextTriggerConfig,
label: cronToLabel(nextCron),
});
setTriggerCronSaved(true);
setTimeout(() => setTriggerCronSaved(false), 2000);
} finally {
setTriggerScheduleSaving(false);
}
}}
className="mt-1.5 w-full text-[11px] px-3 py-1.5 rounded-lg border border-primary/30 text-primary hover:bg-primary/10 transition-colors disabled:opacity-50"
>
{triggerScheduleSaving ? "Saving..." : triggerCronSaved ? "Saved" : "Save Cron"}
</button>
)}
</>
)}
</div>
) : null;
})()}
@@ -3017,24 +3244,27 @@ export default function Workspace() {
{(() => {
const currentTask = (resolvedSelectedNode.triggerConfig as Record<string, unknown> | undefined)?.task as string || "";
const hasChanged = triggerTaskDraft !== currentTask;
if (!hasChanged) return null;
if (!hasChanged && !triggerTaskSaved) return null;
return (
<button
disabled={triggerTaskSaving}
disabled={triggerTaskSaving || !hasChanged}
onClick={async () => {
const sessionId = activeAgentState?.sessionId;
const triggerId = resolvedSelectedNode.id.replace("__trigger_", "");
if (!sessionId) return;
setTriggerTaskSaving(true);
try {
await sessionsApi.updateTriggerTask(sessionId, triggerId, triggerTaskDraft);
await sessionsApi.updateTrigger(sessionId, triggerId, { task: triggerTaskDraft });
patchTriggerNode(activeWorker, resolvedSelectedNode.id, { task: triggerTaskDraft });
setTriggerTaskSaved(true);
setTimeout(() => setTriggerTaskSaved(false), 2000);
} finally {
setTriggerTaskSaving(false);
}
}}
className="mt-1.5 w-full text-[11px] px-3 py-1.5 rounded-lg border border-primary/30 text-primary hover:bg-primary/10 transition-colors disabled:opacity-50"
>
{triggerTaskSaving ? "Saving..." : "Save Task"}
{triggerTaskSaving ? "Saving..." : triggerTaskSaved ? "Saved" : "Save Task"}
</button>
);
})()}
@@ -33,6 +33,7 @@ API_KEY_PROVIDERS = [
("TOGETHER_API_KEY", "Together AI", "together_ai/meta-llama/Llama-3.3-70B-Instruct-Turbo"),
("DEEPSEEK_API_KEY", "DeepSeek", "deepseek-chat"),
("MINIMAX_API_KEY", "MiniMax", "MiniMax-M2.5"),
("HIVE_API_KEY", "Hive LLM", "hive/queen"),
]
@@ -0,0 +1,276 @@
"""Tests for framework/tools/flowchart_utils.py."""
import json
from types import SimpleNamespace
from framework.tools.flowchart_utils import (
FLOWCHART_FILENAME,
FLOWCHART_TYPES,
classify_flowchart_node,
generate_fallback_flowchart,
load_flowchart_file,
save_flowchart_file,
synthesize_draft_from_runtime,
)
def _make_node(
id,
name="Node",
description="",
node_type="event_loop",
tools=None,
input_keys=None,
output_keys=None,
success_criteria="",
sub_agents=None,
):
"""Create a minimal node-like object matching NodeSpec interface."""
return SimpleNamespace(
id=id,
name=name,
description=description,
node_type=node_type,
tools=tools or [],
input_keys=input_keys or [],
output_keys=output_keys or [],
success_criteria=success_criteria,
sub_agents=sub_agents or [],
)
def _make_edge(source, target, condition="on_success", description=""):
"""Create a minimal edge-like object matching EdgeSpec interface."""
return SimpleNamespace(
source=source,
target=target,
condition=SimpleNamespace(value=condition),
description=description,
)
def _make_goal(
name="Test Goal", description="A test goal", success_criteria=None, constraints=None
):
"""Create a minimal goal-like object matching Goal interface."""
return SimpleNamespace(
name=name,
description=description,
success_criteria=success_criteria or [],
constraints=constraints or [],
)
def _make_graph(nodes, edges, entry_node=None, terminal_nodes=None):
"""Create a minimal graph-like object matching GraphSpec interface."""
return SimpleNamespace(
nodes=nodes,
edges=edges,
entry_node=entry_node or (nodes[0].id if nodes else ""),
terminal_nodes=terminal_nodes or [],
)
class TestClassifyFlowchartNode:
"""Test flowchart node classification logic."""
def test_first_node_is_start(self):
node = {"id": "n1", "node_type": "event_loop", "tools": []}
result = classify_flowchart_node(node, 0, 3, [], set())
assert result == "start"
def test_terminal_node(self):
node = {"id": "n3", "node_type": "event_loop", "tools": []}
edges = [{"source": "n1", "target": "n3"}]
result = classify_flowchart_node(node, 2, 3, edges, {"n3"})
assert result == "terminal"
def test_gcu_node_is_browser(self):
node = {"id": "n2", "node_type": "gcu", "tools": []}
edges = [{"source": "n1", "target": "n2"}]
result = classify_flowchart_node(node, 1, 3, edges, set())
assert result == "browser"
def test_subprocess_node(self):
node = {"id": "n2", "node_type": "event_loop", "tools": [], "sub_agents": ["sub1"]}
edges = [{"source": "n1", "target": "n2"}, {"source": "n2", "target": "n3"}]
result = classify_flowchart_node(node, 1, 3, edges, set())
assert result == "subprocess"
def test_default_is_process(self):
node = {"id": "n2", "node_type": "event_loop", "tools": [], "description": "do stuff"}
edges = [{"source": "n1", "target": "n2"}, {"source": "n2", "target": "n3"}]
result = classify_flowchart_node(node, 1, 3, edges, set())
assert result == "process"
def test_explicit_override(self):
node = {"id": "n2", "node_type": "event_loop", "tools": [], "flowchart_type": "database"}
edges = [{"source": "n1", "target": "n2"}]
result = classify_flowchart_node(node, 1, 3, edges, set())
assert result == "database"
def test_decision_node_with_branching(self):
node = {"id": "n2", "node_type": "event_loop", "tools": []}
edges = [
{"source": "n1", "target": "n2"},
{"source": "n2", "target": "n3", "condition": "on_success"},
{"source": "n2", "target": "n4", "condition": "on_failure"},
]
result = classify_flowchart_node(node, 1, 4, edges, set())
assert result == "decision"
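The expected classifications above imply a precedence order. The sketch below is inferred from these test cases only, not from the real `classify_flowchart_node` implementation, so the exact ordering of ties is an assumption:

```python
def classify_sketch(node, index, total, edges, terminal_ids):
    """Hypothetical precedence, reconstructed from the test expectations."""
    if node.get("flowchart_type"):          # explicit override wins
        return node["flowchart_type"]
    if index == 0:                          # first node in the graph
        return "start"
    if node["id"] in terminal_ids:          # declared terminal node
        return "terminal"
    if node.get("node_type") == "gcu":      # browser-automation node
        return "browser"
    outgoing = [e for e in edges if e["source"] == node["id"]]
    if len(outgoing) >= 2:                  # branching implies a decision
        return "decision"
    if node.get("sub_agents"):              # delegates to sub-agents
        return "subprocess"
    return "process"
```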
class TestSynthesizeDraftFromRuntime:
"""Test runtime graph to DraftGraph conversion."""
def test_basic_linear_graph(self):
nodes = [
_make_node("intake", "Intake"),
_make_node("process", "Process"),
_make_node("deliver", "Deliver"),
]
edges = [
_make_edge("intake", "process"),
_make_edge("process", "deliver"),
]
draft, fmap = synthesize_draft_from_runtime(
nodes, edges, agent_name="test_agent", goal_name="Test"
)
assert draft["agent_name"] == "test_agent"
assert draft["goal"] == "Test"
assert len(draft["nodes"]) == 3
assert len(draft["edges"]) == 2
assert draft["entry_node"] == "intake"
assert "deliver" in draft["terminal_nodes"]
# First node should be start type
assert draft["nodes"][0]["flowchart_type"] == "start"
# Last node (terminal) should be terminal type
assert draft["nodes"][2]["flowchart_type"] == "terminal"
# Middle node should be process
assert draft["nodes"][1]["flowchart_type"] == "process"
# All nodes should have shape and color
for node in draft["nodes"]:
assert "flowchart_shape" in node
assert "flowchart_color" in node
# Flowchart map should be identity
assert fmap == {"intake": ["intake"], "process": ["process"], "deliver": ["deliver"]}
# Legend should contain all types
assert draft["flowchart_legend"] == {
k: {"shape": v["shape"], "color": v["color"]} for k, v in FLOWCHART_TYPES.items()
}
def test_graph_with_sub_agents(self):
nodes = [
_make_node("main", "Main", sub_agents=["helper"]),
_make_node("helper", "Helper"),
]
edges = [_make_edge("main", "helper")]
draft, fmap = synthesize_draft_from_runtime(nodes, edges)
# Sub-agent edges should be added
assert len(draft["edges"]) > 1
# Helper should be grouped under main in the flowchart map
assert "helper" not in fmap
assert fmap["main"] == ["main", "helper"]
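The sub-agent grouping asserted above (helper folded under main, and absent as its own key) can be sketched as an identity map that skips any node claimed as someone's sub-agent. This is inferred from the assertions, not from the real `synthesize_draft_from_runtime` internals:

```python
from types import SimpleNamespace

def build_flowchart_map(nodes):
    """Identity map per node, with sub-agents folded under their parent."""
    sub_ids = {s for n in nodes for s in n.sub_agents}
    return {n.id: [n.id] + list(n.sub_agents)
            for n in nodes if n.id not in sub_ids}
```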
class TestFlowchartFilePersistence:
"""Test save/load of flowchart.json."""
def test_save_and_load(self, tmp_path):
draft = {"agent_name": "test", "nodes": [], "edges": []}
fmap = {"n1": ["n1"]}
save_flowchart_file(tmp_path, draft, fmap)
loaded_draft, loaded_map = load_flowchart_file(tmp_path)
assert loaded_draft == draft
assert loaded_map == fmap
def test_load_missing_file(self, tmp_path):
draft, fmap = load_flowchart_file(tmp_path)
assert draft is None
assert fmap is None
def test_load_none_path(self):
draft, fmap = load_flowchart_file(None)
assert draft is None
assert fmap is None
def test_save_none_path(self):
# Should not raise
save_flowchart_file(None, {}, {})
class TestGenerateFallbackFlowchart:
"""Test the main entry point for fallback generation."""
def test_generates_file_when_missing(self, tmp_path):
nodes = [
_make_node("n1", "Start Node"),
_make_node("n2", "End Node"),
]
edges = [_make_edge("n1", "n2")]
graph = _make_graph(nodes, edges, entry_node="n1", terminal_nodes=["n2"])
goal = _make_goal()
generate_fallback_flowchart(graph, goal, tmp_path)
flowchart_path = tmp_path / FLOWCHART_FILENAME
assert flowchart_path.exists()
data = json.loads(flowchart_path.read_text())
assert data["original_draft"]["agent_name"] == tmp_path.name
assert data["original_draft"]["goal"] == "A test goal"
assert data["flowchart_map"] is not None
# Entry/terminal from GraphSpec should be used
assert data["original_draft"]["entry_node"] == "n1"
assert "n2" in data["original_draft"]["terminal_nodes"]
def test_skips_when_file_exists(self, tmp_path):
# Pre-create a flowchart.json
existing = {"original_draft": {"agent_name": "existing"}, "flowchart_map": {}}
(tmp_path / FLOWCHART_FILENAME).write_text(json.dumps(existing))
nodes = [_make_node("n1", "Node")]
graph = _make_graph(nodes, [], entry_node="n1")
goal = _make_goal()
generate_fallback_flowchart(graph, goal, tmp_path)
# Should not have been overwritten
data = json.loads((tmp_path / FLOWCHART_FILENAME).read_text())
assert data["original_draft"]["agent_name"] == "existing"
def test_handles_errors_gracefully(self, tmp_path):
# Pass an invalid path (file, not directory)
fake_path = tmp_path / "not_a_dir.txt"
fake_path.write_text("hello")
graph = _make_graph([], [])
goal = _make_goal()
# Should not raise
generate_fallback_flowchart(graph, goal, fake_path)
def test_enriches_with_goal_metadata(self, tmp_path):
nodes = [_make_node("n1", "Node")]
graph = _make_graph(nodes, [], entry_node="n1")
goal = _make_goal(
description="Find bugs",
success_criteria=[SimpleNamespace(description="All bugs found")],
constraints=[SimpleNamespace(description="No false positives")],
)
generate_fallback_flowchart(graph, goal, tmp_path)
data = json.loads((tmp_path / FLOWCHART_FILENAME).read_text())
assert data["original_draft"]["goal"] == "Find bugs"
assert data["original_draft"]["success_criteria"] == ["All bugs found"]
assert data["original_draft"]["constraints"] == ["No false positives"]
@@ -0,0 +1,520 @@
"""Tests for safe_eval — the sandboxed expression evaluator used by edge conditions.
Covers: literals, data structures, arithmetic, comparisons, boolean logic
(including short-circuit semantics), variable lookup, subscript/attribute
access, whitelisted function calls, method calls, ternary expressions,
chained comparisons, and security boundaries (private attrs, disallowed
AST nodes, disallowed function calls).
"""
import pytest
from framework.graph.safe_eval import safe_eval
# ---------------------------------------------------------------------------
# Literals and constants
# ---------------------------------------------------------------------------
class TestLiterals:
def test_integer(self):
assert safe_eval("42") == 42
def test_negative_integer(self):
assert safe_eval("-1") == -1
def test_float(self):
assert safe_eval("3.14") == pytest.approx(3.14)
def test_string(self):
assert safe_eval("'hello'") == "hello"
def test_double_quoted_string(self):
assert safe_eval('"world"') == "world"
def test_boolean_true(self):
assert safe_eval("True") is True
def test_boolean_false(self):
assert safe_eval("False") is False
def test_none(self):
assert safe_eval("None") is None
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
class TestDataStructures:
def test_list(self):
assert safe_eval("[1, 2, 3]") == [1, 2, 3]
def test_empty_list(self):
assert safe_eval("[]") == []
def test_nested_list(self):
assert safe_eval("[[1, 2], [3, 4]]") == [[1, 2], [3, 4]]
def test_tuple(self):
assert safe_eval("(1, 2, 3)") == (1, 2, 3)
def test_dict(self):
assert safe_eval("{'a': 1, 'b': 2}") == {"a": 1, "b": 2}
def test_empty_dict(self):
assert safe_eval("{}") == {}
# ---------------------------------------------------------------------------
# Arithmetic and binary operators
# ---------------------------------------------------------------------------
class TestArithmetic:
def test_addition(self):
assert safe_eval("2 + 3") == 5
def test_subtraction(self):
assert safe_eval("10 - 4") == 6
def test_multiplication(self):
assert safe_eval("3 * 7") == 21
def test_division(self):
assert safe_eval("10 / 4") == 2.5
def test_floor_division(self):
assert safe_eval("10 // 3") == 3
def test_modulo(self):
assert safe_eval("10 % 3") == 1
def test_power(self):
assert safe_eval("2 ** 10") == 1024
def test_complex_expression(self):
assert safe_eval("(2 + 3) * 4 - 1") == 19
# ---------------------------------------------------------------------------
# Unary operators
# ---------------------------------------------------------------------------
class TestUnaryOps:
def test_negation(self):
assert safe_eval("-5") == -5
def test_positive(self):
assert safe_eval("+5") == 5
def test_not_true(self):
assert safe_eval("not True") is False
def test_not_false(self):
assert safe_eval("not False") is True
def test_bitwise_invert(self):
assert safe_eval("~0") == -1
# ---------------------------------------------------------------------------
# Comparisons
# ---------------------------------------------------------------------------
class TestComparisons:
def test_equal(self):
assert safe_eval("1 == 1") is True
def test_not_equal(self):
assert safe_eval("1 != 2") is True
def test_less_than(self):
assert safe_eval("1 < 2") is True
def test_greater_than(self):
assert safe_eval("2 > 1") is True
def test_less_equal(self):
assert safe_eval("2 <= 2") is True
def test_greater_equal(self):
assert safe_eval("3 >= 2") is True
def test_is_none(self):
assert safe_eval("x is None", {"x": None}) is True
def test_is_not_none(self):
assert safe_eval("x is not None", {"x": 42}) is True
def test_in_list(self):
assert safe_eval("'a' in x", {"x": ["a", "b", "c"]}) is True
def test_not_in_list(self):
assert safe_eval("'z' not in x", {"x": ["a", "b"]}) is True
def test_chained_comparison(self):
"""Chained comparisons like 1 < x < 10 should work."""
assert safe_eval("1 < x < 10", {"x": 5}) is True
def test_chained_comparison_false(self):
assert safe_eval("1 < x < 3", {"x": 5}) is False
def test_chained_three_way(self):
assert safe_eval("0 <= x <= 100", {"x": 50}) is True
# ---------------------------------------------------------------------------
# Boolean operators (with short-circuit semantics)
# ---------------------------------------------------------------------------
class TestBooleanOps:
def test_and_true(self):
assert safe_eval("True and True") is True
def test_and_false(self):
assert safe_eval("True and False") is False
def test_or_true(self):
assert safe_eval("False or True") is True
def test_or_false(self):
assert safe_eval("False or False") is False
def test_and_returns_last_truthy(self):
"""Python `and` returns the last value if all truthy."""
assert safe_eval("1 and 2 and 3") == 3
def test_and_returns_first_falsy(self):
"""Python `and` returns the first falsy value."""
assert safe_eval("1 and 0 and 3") == 0
def test_or_returns_first_truthy(self):
"""Python `or` returns the first truthy value."""
assert safe_eval("0 or '' or 42") == 42
def test_or_returns_last_falsy(self):
"""Python `or` returns the last value if all falsy."""
assert safe_eval("0 or '' or None") is None
def test_and_short_circuits(self):
"""and should NOT evaluate the right side if left is falsy.
This is the bug we fixed: previously this would crash with
TypeError because all operands were eagerly evaluated.
"""
# x is None, so `x.get("key")` would crash if evaluated
assert safe_eval("x is not None and x.get('key')", {"x": None}) is False
def test_or_short_circuits(self):
"""or should NOT evaluate the right side if left is truthy."""
# x is truthy, so the crash-prone right side should never run
assert safe_eval("x or y.get('missing')", {"x": "found", "y": {}}) == "found"
def test_and_guard_pattern_truthy(self):
"""Guard pattern: check not None, then access — when value exists."""
ctx = {"x": {"key": "value"}}
assert safe_eval("x is not None and x.get('key')", ctx) == "value"
def test_multi_and(self):
assert safe_eval("True and True and True") is True
def test_multi_or(self):
assert safe_eval("False or False or True") is True
def test_mixed_and_or(self):
assert safe_eval("True or False and False") is True
# ---------------------------------------------------------------------------
# Ternary (if/else) expressions
# ---------------------------------------------------------------------------
class TestTernary:
def test_ternary_true_branch(self):
assert safe_eval("'yes' if True else 'no'") == "yes"
def test_ternary_false_branch(self):
assert safe_eval("'yes' if False else 'no'") == "no"
def test_ternary_with_context(self):
assert safe_eval("x * 2 if x > 0 else -x", {"x": 5}) == 10
def test_ternary_false_with_context(self):
assert safe_eval("x * 2 if x > 0 else -x", {"x": -3}) == 3
# ---------------------------------------------------------------------------
# Variable lookup
# ---------------------------------------------------------------------------
class TestVariables:
def test_simple_variable(self):
assert safe_eval("x", {"x": 42}) == 42
def test_string_variable(self):
assert safe_eval("name", {"name": "Alice"}) == "Alice"
def test_dict_variable(self):
ctx = {"output": {"status": "ok"}}
assert safe_eval("output", ctx) == {"status": "ok"}
def test_undefined_variable_raises(self):
with pytest.raises(NameError, match="not defined"):
safe_eval("undefined_var")
def test_multiple_variables(self):
assert safe_eval("x + y", {"x": 10, "y": 20}) == 30
# ---------------------------------------------------------------------------
# Subscript access (indexing)
# ---------------------------------------------------------------------------
class TestSubscript:
def test_dict_subscript(self):
assert safe_eval("d['key']", {"d": {"key": "value"}}) == "value"
def test_list_subscript(self):
assert safe_eval("items[0]", {"items": [10, 20, 30]}) == 10
def test_nested_subscript(self):
ctx = {"data": {"users": [{"name": "Alice"}]}}
assert safe_eval("data['users'][0]['name']", ctx) == "Alice"
def test_missing_key_raises(self):
with pytest.raises(KeyError):
safe_eval("d['missing']", {"d": {}})
# ---------------------------------------------------------------------------
# Attribute access
# ---------------------------------------------------------------------------
class TestAttributeAccess:
def test_private_attr_blocked(self):
"""Attributes starting with _ must be blocked for security."""
with pytest.raises(ValueError, match="private attribute"):
safe_eval("x.__class__", {"x": 42})
def test_dunder_blocked(self):
with pytest.raises(ValueError, match="private attribute"):
safe_eval("x.__dict__", {"x": {}})
def test_single_underscore_blocked(self):
with pytest.raises(ValueError, match="private attribute"):
safe_eval("x._internal", {"x": {}})
# ---------------------------------------------------------------------------
# Whitelisted function calls
# ---------------------------------------------------------------------------
class TestFunctionCalls:
def test_len(self):
assert safe_eval("len(x)", {"x": [1, 2, 3]}) == 3
def test_int_conversion(self):
assert safe_eval("int('42')") == 42
def test_float_conversion(self):
assert safe_eval("float('3.14')") == pytest.approx(3.14)
def test_str_conversion(self):
assert safe_eval("str(42)") == "42"
def test_bool_conversion(self):
assert safe_eval("bool(1)") is True
def test_abs(self):
assert safe_eval("abs(-5)") == 5
def test_min(self):
assert safe_eval("min(3, 1, 2)") == 1
def test_max(self):
assert safe_eval("max(3, 1, 2)") == 3
def test_sum(self):
assert safe_eval("sum(x)", {"x": [1, 2, 3]}) == 6
def test_round(self):
assert safe_eval("round(3.7)") == 4
def test_all(self):
assert safe_eval("all([True, True, True])") is True
def test_any(self):
assert safe_eval("any([False, False, True])") is True
def test_list_constructor(self):
assert safe_eval("list(x)", {"x": (1, 2, 3)}) == [1, 2, 3]
def test_dict_constructor(self):
assert safe_eval("dict(a=1, b=2)") == {"a": 1, "b": 2}
def test_tuple_constructor(self):
assert safe_eval("tuple(x)", {"x": [1, 2]}) == (1, 2)
def test_set_constructor(self):
assert safe_eval("set(x)", {"x": [1, 2, 2, 3]}) == {1, 2, 3}
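The calls above succeed because the evaluator resolves names against an explicit whitelist rather than Python's builtins, which is also why blocked names fail with `NameError` rather than executing. A minimal sketch of that lookup, assuming a `SAFE_FUNCTIONS` table (the name and contents here are illustrative, not the project's actual table):

```python
# Illustrative whitelist; the real safe_eval may expose a different set.
SAFE_FUNCTIONS = {
    "len": len, "int": int, "float": float, "str": str, "bool": bool,
    "abs": abs, "min": min, "max": max, "sum": sum, "round": round,
    "all": all, "any": any, "list": list, "dict": dict,
    "tuple": tuple, "set": set,
}

def resolve_name(name: str, context: dict):
    """Look up a name in the caller's context, then the whitelist."""
    if name in context:
        return context[name]
    if name in SAFE_FUNCTIONS:
        return SAFE_FUNCTIONS[name]
    # Anything else (including `eval`, `exec`, `__import__`) is unknown.
    raise NameError(f"name {name!r} is not defined")
```

Nothing outside the table is reachable, so there is no need to enumerate dangerous builtins: they were never in scope to begin with.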
# ---------------------------------------------------------------------------
# Whitelisted method calls
# ---------------------------------------------------------------------------
class TestMethodCalls:
def test_dict_get(self):
assert safe_eval("d.get('key', 'default')", {"d": {"key": "val"}}) == "val"
def test_dict_get_missing(self):
assert safe_eval("d.get('missing', 'default')", {"d": {}}) == "default"
def test_dict_keys(self):
result = safe_eval("list(d.keys())", {"d": {"a": 1, "b": 2}})
assert sorted(result) == ["a", "b"]
def test_dict_values(self):
result = safe_eval("list(d.values())", {"d": {"a": 1, "b": 2}})
assert sorted(result) == [1, 2]
def test_string_lower(self):
assert safe_eval("s.lower()", {"s": "HELLO"}) == "hello"
def test_string_upper(self):
assert safe_eval("s.upper()", {"s": "hello"}) == "HELLO"
def test_string_strip(self):
assert safe_eval("s.strip()", {"s": " hi "}) == "hi"
def test_string_split(self):
assert safe_eval("s.split(',')", {"s": "a,b,c"}) == ["a", "b", "c"]
# ---------------------------------------------------------------------------
# Security: disallowed operations
# ---------------------------------------------------------------------------
class TestSecurity:
def test_import_blocked(self):
"""__import__ is not in context, so NameError is raised."""
with pytest.raises(NameError, match="not defined"):
safe_eval("__import__('os')")
def test_lambda_blocked(self):
with pytest.raises(ValueError, match="not allowed"):
safe_eval("(lambda: 1)()")
def test_comprehension_blocked(self):
with pytest.raises(ValueError, match="not allowed"):
safe_eval("[x for x in range(10)]")
def test_assignment_blocked(self):
"""Assignment expressions should not parse in eval mode."""
with pytest.raises(SyntaxError):
safe_eval("x = 5")
def test_disallowed_function_blocked(self):
"""eval is not in safe functions, so NameError is raised."""
with pytest.raises(NameError, match="not defined"):
safe_eval("eval('1+1')")
def test_exec_blocked(self):
"""exec is not in safe functions, so NameError is raised."""
with pytest.raises(NameError, match="not defined"):
safe_eval("exec('x=1')")
def test_type_call_blocked(self):
"""type is not in safe functions, so NameError is raised."""
with pytest.raises(NameError, match="not defined"):
safe_eval("type(42)")
def test_getattr_builtin_blocked(self):
"""getattr is not in safe functions, so NameError is raised."""
with pytest.raises(NameError, match="not defined"):
safe_eval("getattr(x, '__class__')", {"x": 42})
def test_empty_expression_raises(self):
with pytest.raises(SyntaxError):
safe_eval("")
# ---------------------------------------------------------------------------
# Real-world edge condition patterns (from graph executor usage)
# ---------------------------------------------------------------------------
class TestEdgeConditionPatterns:
"""Patterns commonly used in EdgeSpec.condition_expr."""
def test_output_key_exists_and_not_none(self):
ctx = {"output": {"approved_contacts": ["alice@example.com"]}}
assert safe_eval("output.get('approved_contacts') is not None", ctx) is True
def test_output_key_missing(self):
ctx = {"output": {}}
assert safe_eval("output.get('approved_contacts') is not None", ctx) is False
def test_output_key_check_with_fallback(self):
ctx = {"output": {"redo_extraction": True}}
assert safe_eval("output.get('redo_extraction') is not None", ctx) is True
def test_guard_then_length_check(self):
"""Guard pattern: check key exists, then check length."""
ctx = {"output": {"results": [1, 2, 3]}}
assert (
safe_eval(
"output.get('results') is not None and len(output['results']) > 0",
ctx,
)
is True
)
def test_guard_short_circuits_on_none(self):
"""Guard pattern: short-circuit prevents crash on None."""
ctx = {"output": {}}
assert (
safe_eval(
"output.get('results') is not None and len(output['results']) > 0",
ctx,
)
is False
)
def test_success_flag_check(self):
ctx = {"output": {"success": True}, "memory": {"attempts": 2}}
assert safe_eval("output.get('success') == True", ctx) is True
def test_memory_threshold(self):
ctx = {"memory": {"score": 0.85}}
assert safe_eval("memory.get('score', 0) >= 0.8", ctx) is True
def test_string_contains_check(self):
ctx = {"output": {"status": "completed_with_warnings"}}
assert safe_eval("'completed' in output.get('status', '')", ctx) is True
def test_fallback_chain(self):
"""or-chain for fallback values."""
ctx = {"output": {}}
result = safe_eval(
"output.get('primary') or output.get('secondary') or 'default'",
ctx,
)
assert result == "default"
def test_no_context_needed(self):
"""Some edges use constant expressions."""
assert safe_eval("True") is True
assert safe_eval("1 == 1") is True
@@ -1,6 +1,6 @@
# Draft Flowchart System — Complete Reference
The draft flowchart system bridges user-facing workflow design (planning phase) and the runtime agent graph (execution phase). During planning, the queen agent creates an ISO 5807 flowchart that the user reviews. On approval, decision nodes are dissolved into runtime-compatible structures, and the original flowchart is preserved for live status overlay during execution.
The draft flowchart system bridges user-facing workflow design (planning phase) and the runtime agent graph (execution phase). During planning, the queen agent creates a flowchart that the user reviews. On approval, decision nodes are dissolved into runtime-compatible structures, and the original flowchart is preserved for live status overlay during execution.
---
@@ -20,14 +20,15 @@ DraftGraph (SSE) ────► │ Decision diamonds │
│ │ merged into │ Flowchart Map
▼ │ predecessor criteria │ inverts to
Frontend renders │ │ overlay status
ISO 5807 flowchart │ Original draft │ on original
Flowchart with │ Original draft │ on original
with diamond │ preserved │ flowchart
decisions │ │
└──────────────────────┘
```
**Key files:**
- Backend: `core/framework/tools/queen_lifecycle_tools.py` — draft creation, classification, dissolution
- Backend: `core/framework/tools/queen_lifecycle_tools.py` — draft creation, dissolution
- Backend: `core/framework/tools/flowchart_utils.py` — type definitions, classification, persistence
- Backend: `core/framework/server/routes_graphs.py` — REST endpoints
- Frontend: `core/frontend/src/components/DraftGraph.tsx` — SVG flowchart renderer
- Frontend: `core/frontend/src/api/types.ts` — TypeScript interfaces
@@ -114,17 +115,9 @@ decisions │ │
"type": "string",
"enum": [
"start", "terminal", "process", "decision",
"io", "document", "multi_document",
"subprocess", "preparation",
"manual_input", "manual_operation",
"delay", "display",
"database", "stored_data", "internal_storage",
"connector", "offpage_connector",
"merge", "extract", "sort", "collate",
"summing_junction", "or",
"browser", "comment", "alternate_process"
"io", "document", "database", "subprocess", "browser"
],
"description": "ISO 5807 flowchart symbol. Auto-detected if omitted."
"description": "Flowchart symbol type. Auto-detected if omitted."
},
"tools": {
"type": "array",
@@ -213,7 +206,7 @@ After `save_agent_draft` processes the input, it stores and emits an enriched dr
"sub_agents": [],
"flowchart_type": "start",
"flowchart_shape": "stadium",
"flowchart_color": "#4CAF50"
"flowchart_color": "#8aad3f"
},
{
"id": "check-tier",
@@ -223,7 +216,7 @@ After `save_agent_draft` processes the input, it stores and emits an enriched dr
"decision_clause": "Is lead score > 80?",
"flowchart_type": "decision",
"flowchart_shape": "diamond",
"flowchart_color": "#FF9800"
"flowchart_color": "#d89d26"
}
],
"edges": [
@@ -253,10 +246,10 @@ After `save_agent_draft` processes the input, it stores and emits an enriched dr
}
],
"flowchart_legend": {
"start": { "shape": "stadium", "color": "#4CAF50" },
"terminal": { "shape": "stadium", "color": "#F44336" },
"process": { "shape": "rectangle", "color": "#2196F3" },
"decision": { "shape": "diamond", "color": "#FF9800" }
"start": { "shape": "stadium", "color": "#8aad3f" },
"terminal": { "shape": "stadium", "color": "#b5453a" },
"process": { "shape": "rectangle", "color": "#b5a575" },
"decision": { "shape": "diamond", "color": "#d89d26" }
}
}
```
@@ -265,7 +258,7 @@ After `save_agent_draft` processes the input, it stores and emits an enriched dr
| Field | Type | Description |
|---|---|---|
| `flowchart_type` | `string` | The resolved ISO 5807 symbol type |
| `flowchart_type` | `string` | The resolved flowchart symbol type |
| `flowchart_shape` | `string` | SVG shape identifier for the frontend renderer |
| `flowchart_color` | `string` | Hex color code for the symbol |
@@ -290,67 +283,27 @@ Returned by `GET /api/sessions/{id}/flowchart-map` after `confirm_and_build()` d
---
## 2. ISO 5807 Flowchart Types
### Core Symbols
## 2. Flowchart Types
| Type | Shape | Color | SVG Primitive | Description |
|---|---|---|---|---|
| `start` | stadium | `#4CAF50` green | `<rect rx={h/2}>` | Entry point / start terminator |
| `terminal` | stadium | `#F44336` red | `<rect rx={h/2}>` | End point / stop terminator |
| `process` | rectangle | `#2196F3` blue | `<rect rx={4}>` | General processing step |
| `decision` | diamond | `#FF9800` amber | `<polygon>` 4-point | Branching / conditional logic |
| `io` | parallelogram | `#9C27B0` purple | `<polygon>` skewed | Data input or output |
| `document` | document | `#607D8B` blue-grey | `<path>` wavy bottom | Single document output |
| `multi_document` | multi_document | `#78909C` blue-grey | stacked `<rect>` + `<path>` | Multiple documents |
| `subprocess` | subroutine | `#009688` teal | `<rect>` + inner `<line>` | Predefined process / sub-agent |
| `preparation` | hexagon | `#795548` brown | `<polygon>` 6-point | Setup / initialization step |
| `manual_input` | manual_input | `#E91E63` pink | `<polygon>` sloped top | Manual data entry |
| `manual_operation` | trapezoid | `#AD1457` dark pink | `<polygon>` tapered bottom | Human-in-the-loop / approval |
| `delay` | delay | `#FF5722` deep orange | `<path>` D-shape | Wait / pause / cooldown |
| `display` | display | `#00BCD4` cyan | `<path>` pointed left | Display / render output |
### Data Storage Symbols
| Type | Shape | Color | SVG Primitive | Description |
|---|---|---|---|---|
| `database` | cylinder | `#8BC34A` light green | `<path>` + `<ellipse>` top/bottom | Database / direct access storage |
| `stored_data` | stored_data | `#CDDC39` lime | `<path>` curved left | Generic data store |
| `internal_storage` | internal_storage | `#FFC107` amber | `<rect>` + internal `<line>` grid | Internal memory / cache |
### Connectors
| Type | Shape | Color | SVG Primitive | Description |
|---|---|---|---|---|
| `connector` | circle | `#9E9E9E` grey | `<circle>` | On-page connector |
| `offpage_connector` | pentagon | `#757575` dark grey | `<polygon>` 5-point | Off-page connector |
### Flow Operations
| Type | Shape | Color | SVG Primitive | Description |
|---|---|---|---|---|
| `merge` | triangle_inv | `#3F51B5` indigo | `<polygon>` inverted | Merge multiple flows |
| `extract` | triangle | `#5C6BC0` indigo light | `<polygon>` upward | Extract / split flow |
| `sort` | hourglass | `#7986CB` indigo lighter | `<polygon>` X-shape | Sort operation |
| `collate` | hourglass_inv | `#9FA8DA` indigo lightest | `<polygon>` X-shape inv | Collate operation |
| `summing_junction` | circle_cross | `#F06292` pink light | `<circle>` + cross `<line>` | Summing junction |
| `or` | circle_bar | `#CE93D8` purple light | `<circle>` + plus `<line>` | Logical OR |
### Domain-Specific (Hive)
| Type | Shape | Color | SVG Primitive | Description |
|---|---|---|---|---|
| `browser` | hexagon | `#1A237E` dark indigo | `<polygon>` 6-point | Browser automation (GCU node) |
| `comment` | flag | `#BDBDBD` light grey | `<path>` notched right | Annotation / comment |
| `alternate_process` | rounded_rect | `#42A5F5` light blue | `<rect rx={12}>` | Alternate process variant |
| `start` | stadium | `#8aad3f` spring pollen | `<rect rx={h/2}>` | Entry point / start terminator |
| `terminal` | stadium | `#b5453a` propolis red | `<rect rx={h/2}>` | End point / stop terminator |
| `process` | rectangle | `#b5a575` warm wheat | `<rect rx={4}>` | General processing step (default) |
| `decision` | diamond | `#d89d26` royal honey | `<polygon>` 4-point | Branching / conditional logic |
| `io` | parallelogram | `#d06818` burnt orange | `<polygon>` skewed | Data input or output |
| `document` | document | `#c4b830` goldenrod | `<path>` wavy bottom | Document / report generation |
| `database` | cylinder | `#508878` sage teal | `<path>` + `<ellipse>` | Database / data store |
| `subprocess` | subroutine | `#887a48` propolis gold | `<rect>` + inner `<line>` | Predefined process / sub-agent |
| `browser` | hexagon | `#cc8850` honey copper | `<polygon>` 6-point | Browser automation (GCU node) |
---
## 3. Auto-Classification Priority
When `flowchart_type` is omitted from a node, the backend classifies it automatically using this priority (function `_classify_flowchart_node` in `queen_lifecycle_tools.py`):
When `flowchart_type` is omitted from a node, the backend classifies it automatically using this priority (function `classify_flowchart_node` in `flowchart_utils.py`):
1. **Explicit override** — if `flowchart_type` is set and valid, use it
1. **Explicit override** — if `flowchart_type` is set and valid, use it (old type names are remapped automatically)
2. **Node type** — `gcu` nodes become `browser`
3. **Position** — first node becomes `start`
4. **Terminal detection** — nodes in `terminal_nodes` (or with no outgoing edges) become `terminal`
@@ -359,14 +312,8 @@ When `flowchart_type` is omitted from a node, the backend classifies it automati
7. **Tool heuristics** — tool names match known patterns:
- DB tools (`query_database`, `sql_query`, `read_table`, etc.) → `database`
- Doc tools (`generate_report`, `create_document`, etc.) → `document`
- I/O tools (`send_email`, `post_to_slack`, `fetch_url`, etc.) → `io`
- Display tools (`serve_file_to_user`, `display_results`) → `display`
- I/O tools (`send_email`, `post_to_slack`, `fetch_url`, `display_results`, etc.) → `io`
8. **Description keyword heuristics**:
- `"manual"`, `"approval"`, `"human review"` → `manual_operation`
- `"setup"`, `"prepare"`, `"configure"` → `preparation`
- `"wait"`, `"delay"`, `"pause"` → `delay`
- `"merge"`, `"combine"`, `"aggregate"` → `merge`
- `"display"`, `"show"`, `"render"` → `display`
- `"database"`, `"data store"`, `"persist"` → `database`
- `"report"`, `"document"`, `"summary"` → `document`
- `"deliver"`, `"send"`, `"notify"` → `io`
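The priority order above can be sketched as a single fall-through function. This is a simplified illustration only — the tool sets and rule coverage are partial, and the real `classify_flowchart_node` in `flowchart_utils.py` has more rules:

```python
# Partial tool-name sets for the sketch; the real heuristics match more.
DB_TOOLS = {"query_database", "sql_query", "read_table"}
IO_TOOLS = {"send_email", "post_to_slack", "fetch_url", "display_results"}

def classify(node: dict, index: int, terminal_nodes: set[str]) -> str:
    """Return a flowchart type by falling through the priority order."""
    if node.get("flowchart_type"):       # 1. explicit override wins
        return node["flowchart_type"]
    if node.get("node_type") == "gcu":   # 2. node type: gcu -> browser
        return "browser"
    if index == 0:                       # 3. first node is the start
        return "start"
    if node["id"] in terminal_nodes:     # 4. terminal detection
        return "terminal"
    tools = set(node.get("tools", []))
    if tools & DB_TOOLS:                 # 7. tool-name heuristics
        return "database"
    if tools & IO_TOOLS:
        return "io"
    return "process"                     # fallback: generic process step
```

Each rule only fires when every higher-priority rule declined, so an explicit `flowchart_type` always survives reclassification.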
@@ -441,7 +388,7 @@ The runtime Level 2 judge evaluates the decision clause against the node's conve
An SVG-based flowchart renderer that operates in two modes:
1. **Planning mode** — renders the draft graph with ISO 5807 shapes during the planning phase
1. **Planning mode** — renders the draft graph with flowchart shapes during the planning phase
2. **Runtime overlay mode** — renders the original (pre-dissolution) draft with live execution status when `flowchartMap` and `runtimeNodes` props are provided
#### Props
@@ -475,7 +422,7 @@ Constants:
#### Shape Rendering
The `FlowchartShape` component renders each ISO 5807 shape as SVG primitives. Each shape receives:
The `FlowchartShape` component renders each flowchart shape as SVG primitives. Each shape receives:
- `x, y, w, h` — bounding box in SVG units
- `color` — the hex color from the flowchart type
- `selected` — hover state (increases fill opacity from 18% to 28%, brightens stroke)
@@ -535,17 +482,22 @@ const STATUS_COLORS = {
### Workspace Integration (`workspace.tsx`)
The workspace conditionally renders `DraftGraph` in three scenarios:
The workspace always renders a single `<DraftGraph>` component, selecting the best available draft:
| Condition | Renders | Panel Width |
|---|---|---|
| `queenPhase === "planning"` and `draftGraph` exists | `<DraftGraph draft={draftGraph} />` | 500px |
| `originalDraft` exists (post-planning) | `<DraftGraph draft={originalDraft} flowchartMap={...} runtimeNodes={...} />` | 500px |
| Neither | `<AgentGraph ... />` (runtime pipeline view) | 300px |
```tsx
<DraftGraph
draft={activeAgentState?.originalDraft ?? activeAgentState?.draftGraph ?? null}
loading={activeAgentState?.queenPhase === "planning" && !activeAgentState?.draftGraph}
flowchartMap={activeAgentState?.flowchartMap ?? undefined}
runtimeNodes={currentGraph.nodes}
/>
```
The graph panel is user-resizable (drag handle on the right edge, 15%-50% of viewport width, default 30%).
**State management:**
- `draftGraph`: Set by `draft_graph_updated` SSE event during planning; cleared on phase change
- `originalDraft` + `flowchartMap`: Fetched from `GET /api/sessions/{id}/flowchart-map` when phase transitions away from planning
- `originalDraft` + `flowchartMap`: Fetched from `GET /api/sessions/{id}/flowchart-map` when phase transitions away from planning. For template/legacy agents, `originalDraft` is generated at load time via `generate_fallback_flowchart()`.
---
@@ -0,0 +1,307 @@
{
"original_draft": {
"agent_name": "competitive_intel_agent",
"goal": "Monitor competitor websites, news sources, and GitHub repositories to produce a structured weekly digest with key insights, detailed findings per competitor, and 30-day trend analysis.",
"description": "",
"success_criteria": [
"Check multiple source types per competitor (website, news, GitHub)",
"All findings structured with competitor, category, update, source, and date",
"Uses stored data to compare with previous reports for trend analysis",
"User receives a formatted, readable competitive intelligence digest"
],
"constraints": [
"Never fabricate findings, news, or data \u2014 only report what was found",
"Every finding must include a source URL",
"Prioritize findings from the past 7 days; include up to 30 days"
],
"nodes": [
{
"id": "intake",
"name": "Competitor Intake",
"description": "Collect competitor list, focus areas, and report preferences from the user",
"node_type": "event_loop",
"tools": [],
"input_keys": [
"competitors_input"
],
"output_keys": [
"competitors",
"focus_areas",
"report_frequency",
"has_github_competitors"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "start",
"flowchart_shape": "stadium",
"flowchart_color": "#8aad3f"
},
{
"id": "web-scraper",
"name": "Website Monitor",
"description": "Scrape competitor websites for pricing, features, and announcements",
"node_type": "event_loop",
"tools": [
"web_search",
"web_scrape"
],
"input_keys": [
"competitors",
"focus_areas"
],
"output_keys": [
"web_findings"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "process",
"flowchart_shape": "rectangle",
"flowchart_color": "#b5a575"
},
{
"id": "news-search",
"name": "News & Press Monitor",
"description": "Search for competitor mentions in news, press releases, and industry publications",
"node_type": "event_loop",
"tools": [
"web_search",
"web_scrape"
],
"input_keys": [
"competitors",
"focus_areas"
],
"output_keys": [
"news_findings"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "decision",
"flowchart_shape": "diamond",
"flowchart_color": "#d89d26"
},
{
"id": "github-monitor",
"name": "GitHub Activity Monitor",
"description": "Track public GitHub repository activity for competitors with GitHub presence",
"node_type": "event_loop",
"tools": [
"github_list_repos",
"github_get_repo",
"github_search_repos"
],
"input_keys": [
"competitors"
],
"output_keys": [
"github_findings"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "process",
"flowchart_shape": "rectangle",
"flowchart_color": "#b5a575"
},
{
"id": "aggregator",
"name": "Data Aggregator",
"description": "Combine findings from all sources, deduplicate, and structure for analysis",
"node_type": "event_loop",
"tools": [
"save_data",
"load_data",
"list_data_files"
],
"input_keys": [
"competitors",
"web_findings",
"news_findings",
"github_findings"
],
"output_keys": [
"aggregated_findings"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "database",
"flowchart_shape": "cylinder",
"flowchart_color": "#508878"
},
{
"id": "analysis",
"name": "Insight Analysis",
"description": "Extract key insights, detect trends, and compare with historical data",
"node_type": "event_loop",
"tools": [
"load_data",
"save_data",
"list_data_files"
],
"input_keys": [
"aggregated_findings",
"competitors",
"focus_areas"
],
"output_keys": [
"key_highlights",
"trend_analysis",
"detailed_findings"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "database",
"flowchart_shape": "cylinder",
"flowchart_color": "#508878"
},
{
"id": "report",
"name": "Report Generator",
"description": "Generate and deliver the competitive intelligence digest as an HTML report",
"node_type": "event_loop",
"tools": [
"save_data",
"load_data",
"serve_file_to_user",
"list_data_files"
],
"input_keys": [
"key_highlights",
"trend_analysis",
"detailed_findings",
"competitors"
],
"output_keys": [
"delivery_status"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "intake",
"target": "web-scraper",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "web-scraper",
"target": "news-search",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-2",
"source": "news-search",
"target": "github-monitor",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-3",
"source": "news-search",
"target": "aggregator",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-4",
"source": "github-monitor",
"target": "aggregator",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-5",
"source": "aggregator",
"target": "analysis",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-6",
"source": "analysis",
"target": "report",
"condition": "on_success",
"description": "",
"label": ""
}
],
"entry_node": "intake",
"terminal_nodes": [
"report"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"intake": [
"intake"
],
"web-scraper": [
"web-scraper"
],
"news-search": [
"news-search"
],
"github-monitor": [
"github-monitor"
],
"aggregator": [
"aggregator"
],
"analysis": [
"analysis"
],
"report": [
"report"
]
}
}
@@ -0,0 +1,221 @@
{
"original_draft": {
"agent_name": "deep_research_agent",
"goal": "Research any topic by searching diverse sources, analyzing findings, and producing a cited report \u2014 with user checkpoints to guide direction.",
"description": "",
"success_criteria": [
"Use multiple diverse, authoritative sources",
"Every factual claim in the report cites its source",
"User reviews findings before report generation",
"Final report answers the original research questions"
],
"constraints": [
"Only include information found in fetched sources",
"Every claim must cite its source with a numbered reference",
"Present findings to the user before writing the final report"
],
"nodes": [
{
"id": "intake",
"name": "Research Intake",
"description": "Discuss the research topic with the user, clarify scope, and confirm direction",
"node_type": "event_loop",
"tools": [],
"input_keys": [
"user_request"
],
"output_keys": [
"research_brief"
],
"success_criteria": "The research brief is specific and actionable: it states the topic, the key questions to answer, the desired scope, and depth.",
"sub_agents": [],
"flowchart_type": "start",
"flowchart_shape": "stadium",
"flowchart_color": "#8aad3f"
},
{
"id": "research",
"name": "Research",
"description": "Search the web, fetch source content, and compile findings",
"node_type": "event_loop",
"tools": [
"web_search",
"web_scrape",
"load_data",
"save_data",
"append_data",
"list_data_files"
],
"input_keys": [
"research_brief",
"feedback"
],
"output_keys": [
"findings",
"sources",
"gaps"
],
"success_criteria": "Findings reference at least 3 distinct sources with URLs. Key claims are substantiated by fetched content, not generated.",
"sub_agents": [],
"flowchart_type": "database",
"flowchart_shape": "cylinder",
"flowchart_color": "#508878"
},
{
"id": "review",
"name": "Review Findings",
"description": "Present findings to user and decide whether to research more or write the report",
"node_type": "event_loop",
"tools": [],
"input_keys": [
"findings",
"sources",
"gaps",
"research_brief"
],
"output_keys": [
"needs_more_research",
"feedback"
],
"success_criteria": "The user has been presented with findings and has explicitly indicated whether they want more research or are ready for the report.",
"sub_agents": [],
"flowchart_type": "decision",
"flowchart_shape": "diamond",
"flowchart_color": "#d89d26"
},
{
"id": "report",
"name": "Write & Deliver Report",
"description": "Write a cited HTML report from the findings and present it to the user",
"node_type": "event_loop",
"tools": [
"save_data",
"append_data",
"serve_file_to_user",
"load_data",
"list_data_files"
],
"input_keys": [
"findings",
"sources",
"research_brief"
],
"output_keys": [
"delivery_status",
"next_action"
],
"success_criteria": "An HTML report has been saved, the file link has been presented to the user, and the user has indicated what they want to do next.",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "intake",
"target": "research",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "research",
"target": "review",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-2",
"source": "review",
"target": "research",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-3",
"source": "review",
"target": "report",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-4",
"source": "report",
"target": "research",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-5",
"source": "report",
"target": "intake",
"condition": "conditional",
"description": "",
"label": ""
}
],
"entry_node": "intake",
"terminal_nodes": [
"report"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"intake": [
"intake"
],
"research": [
"research"
],
"review": [
"review"
],
"report": [
"report"
]
}
}
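The draft above routes between nodes with three edge conditions (`on_success`, `conditional`, `always`). As an illustration only, not the project's actual executor logic, the selection of the next node could be sketched like this, where a `conditional` edge is resolved by the node's own decision (modeled here as an explicitly chosen target):

```python
def next_nodes(edges, current, succeeded, chosen_target=None):
    """Return the targets reachable from `current` given the node outcome.

    Illustrative sketch of the edge-condition semantics used in these
    drafts; the real graph executor may differ.
    """
    out = []
    for e in edges:
        if e["source"] != current:
            continue
        cond = e["condition"]
        if cond == "always":
            out.append(e["target"])
        elif cond == "on_success" and succeeded:
            out.append(e["target"])
        elif cond == "conditional" and e["target"] == chosen_target:
            # "conditional" edges fire only for the branch the node chose.
            out.append(e["target"])
    return out


# The "review" node above has two conditional edges (back to "research",
# forward to "report"); choosing "report" follows only that branch.
edges = [
    {"source": "review", "target": "research", "condition": "conditional"},
    {"source": "review", "target": "report", "condition": "conditional"},
]
print(next_nodes(edges, "review", True, chosen_target="report"))  # ['report']
```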
@@ -0,0 +1,218 @@
{
"original_draft": {
"agent_name": "email_inbox_management",
"goal": "Manage Gmail inbox emails autonomously using user-defined free-text rules. For every five minutes, fetch inbox emails (configurable batch size, default 100), apply the user's rules to each email, and execute the appropriate Gmail actions \u2014 trash, mark as spam, mark important, mark read/unread, star, draft replies, create/apply custom labels, and more.",
"description": "",
"success_criteria": [
"Gmail actions are applied correctly to the right emails based on the user's rules",
"Produces a summary report showing what was done: how many emails were affected by each action type, with email subjects listed",
"All fetched emails up to the configured max are processed and acted upon; none are silently skipped",
"Custom labels are created and applied correctly when rules require them"
],
"constraints": [
"Must loop through all inbox emails by paginating with max_emails as page size; no emails should be silently skipped",
"Archiving removes from inbox but preserves the email; only explicit trash rules move emails to trash",
"Agent creates draft replies but NEVER sends them automatically"
],
"nodes": [
{
"id": "intake",
"name": "Intake",
"description": "Receive and validate input parameters: rules and max_emails. Present the interpreted rules back to the user for confirmation.",
"node_type": "event_loop",
"tools": [
"gmail_list_labels"
],
"input_keys": [
"rules",
"max_emails"
],
"output_keys": [
"rules",
"max_emails",
"query"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "start",
"flowchart_shape": "stadium",
"flowchart_color": "#8aad3f"
},
{
"id": "fetch-emails",
"name": "Fetch Emails",
"description": "Fetch one page of emails from Gmail inbox. Returns emails filename and next_page_token for pagination. The graph loops back here if more pages remain.",
"node_type": "event_loop",
"tools": [
"bulk_fetch_emails"
],
"input_keys": [
"rules",
"max_emails",
"next_page_token",
"last_processed_timestamp",
"query"
],
"output_keys": [
"emails",
"next_page_token"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "process",
"flowchart_shape": "rectangle",
"flowchart_color": "#b5a575"
},
{
"id": "classify-and-act",
"name": "Classify and Act",
"description": "Apply the user's rules to each email and execute the appropriate Gmail actions.",
"node_type": "event_loop",
"tools": [
"gmail_trash_message",
"gmail_modify_message",
"gmail_batch_modify_messages",
"gmail_create_draft",
"gmail_create_label",
"gmail_list_labels",
"load_data",
"append_data"
],
"input_keys": [
"rules",
"emails"
],
"output_keys": [
"actions_taken"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "decision",
"flowchart_shape": "diamond",
"flowchart_color": "#d89d26"
},
{
"id": "report",
"name": "Report",
"description": "Generate a summary report of all actions taken on the emails and present it to the user.",
"node_type": "event_loop",
"tools": [
"load_data",
"get_current_timestamp"
],
"input_keys": [
"actions_taken",
"rules"
],
"output_keys": [
"summary_report",
"rules",
"last_processed_timestamp"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "intake",
"target": "fetch-emails",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "fetch-emails",
"target": "classify-and-act",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-2",
"source": "classify-and-act",
"target": "fetch-emails",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-3",
"source": "classify-and-act",
"target": "report",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-4",
"source": "report",
"target": "intake",
"condition": "on_success",
"description": "",
"label": ""
}
],
"entry_node": "intake",
"terminal_nodes": [
"report"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"intake": [
"intake"
],
"fetch-emails": [
"fetch-emails"
],
"classify-and-act": [
"classify-and-act"
],
"report": [
"report"
]
}
}
@@ -0,0 +1,168 @@
{
"original_draft": {
"agent_name": "email_reply_agent",
"goal": "Filter unreplied emails by user criteria, confirm recipients, send personalized replies.",
"description": "",
"success_criteria": [
"Accurately finds unreplied emails matching user criteria",
"User confirms recipient list before sending",
"Replies are personalized based on email content and tone guidance"
],
"constraints": [
"Never send emails without explicit user confirmation; always present recipient list and get approval first",
"Process up to 50 emails per batch"
],
"nodes": [
{
"id": "intake",
"name": "Intake",
"description": "Gather email filter criteria from user",
"node_type": "event_loop",
"tools": [],
"input_keys": [
"batch_complete",
"restart"
],
"output_keys": [
"filter_criteria"
],
"success_criteria": "Filter criteria is specific enough to search Gmail (sender, subject, date range, or keywords).",
"sub_agents": [],
"flowchart_type": "start",
"flowchart_shape": "stadium",
"flowchart_color": "#8aad3f"
},
{
"id": "search",
"name": "Search Emails",
"description": "Search Gmail for unreplied emails matching filter criteria",
"node_type": "event_loop",
"tools": [
"gmail_list_messages",
"gmail_get_message",
"gmail_batch_get_messages"
],
"input_keys": [
"filter_criteria"
],
"output_keys": [
"email_list"
],
"success_criteria": "Found unreplied emails matching criteria with sender, subject, snippet, message_id.",
"sub_agents": [],
"flowchart_type": "process",
"flowchart_shape": "rectangle",
"flowchart_color": "#b5a575"
},
{
"id": "confirm-draft",
"name": "Confirm & Reply",
"description": "Present emails for confirmation, send personalized replies",
"node_type": "event_loop",
"tools": [
"gmail_reply_email"
],
"input_keys": [
"email_list",
"filter_criteria"
],
"output_keys": [
"batch_complete",
"restart"
],
"success_criteria": "User confirmed recipients and personalized replies sent for each.",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "intake",
"target": "search",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "search",
"target": "confirm-draft",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-2",
"source": "confirm-draft",
"target": "intake",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-3",
"source": "confirm-draft",
"target": "intake",
"condition": "conditional",
"description": "",
"label": ""
}
],
"entry_node": "intake",
"terminal_nodes": [
"confirm-draft"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"intake": [
"intake"
],
"search": [
"search"
],
"confirm-draft": [
"confirm-draft"
]
}
}
@@ -0,0 +1,186 @@
{
"original_draft": {
"agent_name": "job_hunter",
"goal": "Analyze a user's resume to identify their strongest role fits, find 10 matching job opportunities, let the user select which to pursue, then generate a resume customization list and cold outreach email for each selected job.",
"description": "",
"success_criteria": [
"Identifies 2-3 role types that genuinely match the user's experience",
"Found jobs align with identified roles and user's background",
"Resume changes are specific, actionable, and tailored to each job posting",
"Cold emails are personalized, professional, and reference specific company/role details",
"User approves outputs without major revisions needed"
],
"constraints": [
"Only suggest roles the user is realistically qualified for - no aspirational stretch roles",
"Resume customizations must be truthful - enhance presentation, never fabricate experience",
"Cold emails must be professional and not spammy",
"Only customize for jobs the user explicitly selects"
],
"nodes": [
{
"id": "intake",
"name": "Intake",
"description": "Analyze resume and identify 3-5 strongest role types",
"node_type": "event_loop",
"tools": [],
"input_keys": [
"resume_text"
],
"output_keys": [
"resume_text",
"role_analysis"
],
"success_criteria": "The user's resume has been analyzed and 3-5 target roles identified based on their actual experience.",
"sub_agents": [],
"flowchart_type": "start",
"flowchart_shape": "stadium",
"flowchart_color": "#8aad3f"
},
{
"id": "job-search",
"name": "Job Search",
"description": "Search for 10 jobs matching identified roles by scraping job board sites directly",
"node_type": "event_loop",
"tools": [
"web_scrape"
],
"input_keys": [
"role_analysis"
],
"output_keys": [
"job_listings"
],
"success_criteria": "10 relevant job listings have been found with complete details including title, company, location, description, and URL.",
"sub_agents": [],
"flowchart_type": "process",
"flowchart_shape": "rectangle",
"flowchart_color": "#b5a575"
},
{
"id": "job-review",
"name": "Job Review",
"description": "Present all 10 jobs to the user, let them select which to pursue",
"node_type": "event_loop",
"tools": [],
"input_keys": [
"job_listings",
"resume_text"
],
"output_keys": [
"selected_jobs"
],
"success_criteria": "User has reviewed all job listings and explicitly selected which jobs they want to apply to.",
"sub_agents": [],
"flowchart_type": "process",
"flowchart_shape": "rectangle",
"flowchart_color": "#b5a575"
},
{
"id": "customize",
"name": "Customize",
"description": "For each selected job, generate resume customization list and cold outreach email, create Gmail drafts",
"node_type": "event_loop",
"tools": [
"save_data",
"append_data",
"serve_file_to_user",
"gmail_create_draft"
],
"input_keys": [
"selected_jobs",
"resume_text"
],
"output_keys": [
"application_materials"
],
"success_criteria": "Resume customization list and cold outreach email generated for each selected job, saved as HTML, and Gmail drafts created in user's inbox.",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "intake",
"target": "job-search",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "job-search",
"target": "job-review",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-2",
"source": "job-review",
"target": "customize",
"condition": "on_success",
"description": "",
"label": ""
}
],
"entry_node": "intake",
"terminal_nodes": [
"customize"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"intake": [
"intake"
],
"job-search": [
"job-search"
],
"job-review": [
"job-review"
],
"customize": [
"customize"
]
}
}
@@ -0,0 +1,165 @@
{
"original_draft": {
"agent_name": "local_business_extractor",
"goal": "Find local businesses on Maps, extract contacts, and sync to Google Sheets.",
"description": "",
"success_criteria": [
"Extract business details from Maps",
"Sync data to Google Sheets"
],
"constraints": [
"Must verify website presence before scraping"
],
"nodes": [
{
"id": "map-search-worker",
"name": "Maps Browser Worker",
"description": "Browser subagent that searches Google Maps and extracts business links.",
"node_type": "gcu",
"tools": [],
"input_keys": [
"query"
],
"output_keys": [
"business_list"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "browser",
"flowchart_shape": "hexagon",
"flowchart_color": "#cc8850"
},
{
"id": "extract-contacts",
"name": "Extract Business Details",
"description": "Scrapes business websites and Maps for comprehensive business details.",
"node_type": "event_loop",
"tools": [
"exa_get_contents",
"exa_search"
],
"input_keys": [
"user_request"
],
"output_keys": [
"business_data"
],
"success_criteria": "Comprehensive business details (reviews, hours, contacts) extracted.",
"sub_agents": [
"map-search-worker"
],
"flowchart_type": "subprocess",
"flowchart_shape": "subroutine",
"flowchart_color": "#887a48"
},
{
"id": "sheets-sync",
"name": "Google Sheets Sync",
"description": "Appends the extracted business data to a Google Sheets spreadsheet.",
"node_type": "event_loop",
"tools": [
"google_sheets_create_spreadsheet",
"google_sheets_update_values",
"google_sheets_append_values",
"google_sheets_get_values"
],
"input_keys": [
"business_data"
],
"output_keys": [
"spreadsheet_id"
],
"success_criteria": "Data successfully synced to Google Sheets.",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "extract-contacts",
"target": "sheets-sync",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "sheets-sync",
"target": "extract-contacts",
"condition": "always",
"description": "",
"label": ""
},
{
"id": "edge-subagent-2",
"source": "extract-contacts",
"target": "map-search-worker",
"condition": "always",
"description": "sub-agent delegation",
"label": "delegate"
},
{
"id": "edge-subagent-3",
"source": "map-search-worker",
"target": "extract-contacts",
"condition": "always",
"description": "sub-agent report back",
"label": "report"
}
],
"entry_node": "extract-contacts",
"terminal_nodes": [
"sheets-sync"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"extract-contacts": [
"extract-contacts",
"map-search-worker"
],
"sheets-sync": [
"sheets-sync"
]
}
}
@@ -0,0 +1,172 @@
{
"original_draft": {
"agent_name": "meeting_scheduler",
"goal": "Check calendar availability, find optimal meeting times, record meetings, and send reminders.",
"description": "",
"success_criteria": [
"Meeting time found within requested duration",
"Meeting recorded in spreadsheet accurately",
"Attendee email reminder sent",
"User confirms meeting details"
],
"constraints": [
"Must use Google Calendar API for availability check",
"Meeting duration must match requested time",
"Spreadsheet record must include date, time, attendee, title"
],
"nodes": [
{
"id": "intake",
"name": "Intake",
"description": "Gather meeting details from the user",
"node_type": "event_loop",
"tools": [],
"input_keys": [
"attendee_email",
"meeting_duration_minutes"
],
"output_keys": [
"attendee_email",
"meeting_duration_minutes",
"meeting_title"
],
"success_criteria": "User has provided attendee email, meeting duration, and title.",
"sub_agents": [],
"flowchart_type": "start",
"flowchart_shape": "stadium",
"flowchart_color": "#8aad3f"
},
{
"id": "schedule",
"name": "Schedule",
"description": "Find available time on calendar, book meeting with Google Meet, and log to Google Sheet",
"node_type": "event_loop",
"tools": [
"calendar_check_availability",
"calendar_create_event",
"calendar_list_events",
"google_sheets_create_spreadsheet",
"google_sheets_get_spreadsheet",
"google_sheets_append_values",
"send_email"
],
"input_keys": [
"attendee_email",
"meeting_duration_minutes",
"meeting_title"
],
"output_keys": [
"meeting_time",
"booking_confirmed",
"spreadsheet_recorded",
"email_sent",
"meet_link"
],
"success_criteria": "Meeting time found, Google Meet created, Google Sheet 'Meeting Scheduler' updated with date/time/attendee/title/meet_link, and confirmation email sent.",
"sub_agents": [],
"flowchart_type": "io",
"flowchart_shape": "parallelogram",
"flowchart_color": "#d06818"
},
{
"id": "confirm",
"name": "Confirm",
"description": "Present booking confirmation to user with Google Meet link",
"node_type": "event_loop",
"tools": [],
"input_keys": [
"meeting_time",
"booking_confirmed",
"meet_link"
],
"output_keys": [
"next_action"
],
"success_criteria": "User has acknowledged the booking and received the Google Meet link.",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "intake",
"target": "schedule",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "schedule",
"target": "confirm",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-2",
"source": "confirm",
"target": "intake",
"condition": "conditional",
"description": "",
"label": ""
}
],
"entry_node": "intake",
"terminal_nodes": [
"confirm"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"intake": [
"intake"
],
"schedule": [
"schedule"
],
"confirm": [
"confirm"
]
}
}
@@ -0,0 +1,150 @@
{
"original_draft": {
"agent_name": "tech_news_reporter",
"goal": "Research the latest technology and AI news from the web, summarize key stories, and produce a well-organized report for the user to read.",
"description": "",
"success_criteria": [
"Finds recent, relevant tech/AI news articles",
"Covers diverse topics, not just one story",
"Produces a structured, readable report with sections, summaries, and links",
"Includes source attribution with URLs for every story",
"Delivers the report to the user in a viewable format"
],
"constraints": [
"Never fabricate news stories or URLs",
"Always attribute sources with links",
"Only include news from the past week"
],
"nodes": [
{
"id": "intake",
"name": "Intake",
"description": "Greet the user and ask if they have specific tech/AI topics to focus on, or if they want a general news roundup.",
"node_type": "event_loop",
"tools": [],
"input_keys": [],
"output_keys": [
"research_brief"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "start",
"flowchart_shape": "stadium",
"flowchart_color": "#8aad3f"
},
{
"id": "research",
"name": "Research",
"description": "Scrape well-known tech news sites for recent articles and extract key information including titles, summaries, sources, and topics.",
"node_type": "event_loop",
"tools": [
"web_scrape"
],
"input_keys": [
"research_brief"
],
"output_keys": [
"articles_data"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "process",
"flowchart_shape": "rectangle",
"flowchart_color": "#b5a575"
},
{
"id": "compile-report",
"name": "Compile Report",
"description": "Organize the researched articles into a structured HTML report, save it, and deliver a clickable link to the user.",
"node_type": "event_loop",
"tools": [
"save_data",
"append_data",
"serve_file_to_user"
],
"input_keys": [
"articles_data"
],
"output_keys": [
"report_file"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "intake",
"target": "research",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "research",
"target": "compile-report",
"condition": "on_success",
"description": "",
"label": ""
}
],
"entry_node": "intake",
"terminal_nodes": [
"compile-report"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"intake": [
"intake"
],
"research": [
"research"
],
"compile-report": [
"compile-report"
]
}
}
@@ -0,0 +1,172 @@
{
"original_draft": {
"agent_name": "twitter_news_agent",
"goal": "Achieve an accurate and concise daily news digest based on Twitter feed monitoring.",
"description": "",
"success_criteria": [
"Navigate and extract tweets from at least 3 handles.",
"Provide a summary of the most important stories.",
"Maintain a persistent log of daily digests."
],
"constraints": [
"Respect rate limits and ethical web usage."
],
"nodes": [
{
"id": "fetch-tweets",
"name": "Fetch Tech Tweets",
"description": "Browser subagent to navigate to tech news Twitter profiles and extract latest tweets.",
"node_type": "gcu",
"tools": [],
"input_keys": [
"twitter_handles"
],
"output_keys": [
"raw_tweets"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "browser",
"flowchart_shape": "hexagon",
"flowchart_color": "#cc8850"
},
{
"id": "process-news",
"name": "Process Tech News",
"description": "Analyze and summarize the raw tweets into a daily tech digest.",
"node_type": "event_loop",
"tools": [
"save_data",
"load_data"
],
"input_keys": [
"user_request",
"feedback",
"raw_tweets"
],
"output_keys": [
"daily_digest"
],
"success_criteria": "A high-quality, tech-focused news summary.",
"sub_agents": [
"fetch-tweets"
],
"flowchart_type": "subprocess",
"flowchart_shape": "subroutine",
"flowchart_color": "#887a48"
},
{
"id": "review-digest",
"name": "Review Digest",
"description": "Present the news digest for user review and approval.",
"node_type": "event_loop",
"tools": [],
"input_keys": [
"daily_digest"
],
"output_keys": [
"status",
"feedback"
],
"success_criteria": "User has reviewed the digest and provided feedback or approval.",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "process-news",
"target": "review-digest",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "review-digest",
"target": "process-news",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-2",
"source": "review-digest",
"target": "process-news",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-subagent-3",
"source": "process-news",
"target": "fetch-tweets",
"condition": "always",
"description": "sub-agent delegation",
"label": "delegate"
},
{
"id": "edge-subagent-4",
"source": "fetch-tweets",
"target": "process-news",
"condition": "always",
"description": "sub-agent report back",
"label": "report"
}
],
"entry_node": "process-news",
"terminal_nodes": [
"review-digest"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"process-news": [
"process-news",
"fetch-tweets"
],
"review-digest": [
"review-digest"
]
}
}
@@ -0,0 +1,237 @@
{
"original_draft": {
"agent_name": "vulnerability_assessment",
"goal": "A passive, OSINT-based website vulnerability assessment agent that accepts a website domain, performs non-intrusive security scanning using purpose-built Python tools, produces letter-grade risk scores (A-F) per category, and delivers a structured vulnerability report with remediation guidance. The user is consulted after scanning to decide whether to investigate further or generate the final report.",
"description": "",
"success_criteria": [
"Overall risk grade (A-F) generated from combined scan results",
"At least 5 of 6 security categories scored (SSL/TLS, HTTP Headers, DNS, Network, Technology, Attack Surface)",
"At least 3 security findings identified across different categories",
"Every finding includes clear, actionable remediation steps a developer can follow",
"User is presented findings with risk grades and given checkpoint to continue deeper scanning or generate report"
],
"constraints": [
"Never execute active attacks, send exploit payloads, or perform actions that could trigger WAF/IDS systems. Passive and OSINT-based scanning only \u2014 no nmap, sqlmap, or attack payloads.",
"All findings and remediation steps must be written for developers using clear language, not security jargon"
],
"nodes": [
{
"id": "intake",
"name": "Intake",
"description": "Collect the target website domain from the user and confirm the scanning scope",
"node_type": "event_loop",
"tools": [],
"input_keys": [],
"output_keys": [
"target_domain"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "start",
"flowchart_shape": "stadium",
"flowchart_color": "#8aad3f"
},
{
"id": "passive-recon",
"name": "Passive Reconnaissance",
"description": "Run all 6 passive scanning tools against the target domain: SSL/TLS, HTTP headers, DNS security, port scanning, tech stack detection, and subdomain enumeration",
"node_type": "event_loop",
"tools": [
"ssl_tls_scan",
"http_headers_scan",
"dns_security_scan",
"port_scan",
"tech_stack_detect",
"subdomain_enumerate"
],
"input_keys": [
"target_domain",
"feedback"
],
"output_keys": [
"scan_results"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "process",
"flowchart_shape": "rectangle",
"flowchart_color": "#b5a575"
},
{
"id": "risk-scoring",
"name": "Risk Scoring",
"description": "Calculate weighted letter grades (A-F) per security category and overall risk score from scan results",
"node_type": "event_loop",
"tools": [
"risk_score"
],
"input_keys": [
"scan_results"
],
"output_keys": [
"risk_report"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "process",
"flowchart_shape": "rectangle",
"flowchart_color": "#b5a575"
},
{
"id": "findings-review",
"name": "Findings Review",
"description": "Present risk grades and security findings to the user, ask whether to continue deeper scanning or generate the final report",
"node_type": "event_loop",
"tools": [],
"input_keys": [
"scan_results",
"risk_report",
"target_domain"
],
"output_keys": [
"continue_scanning",
"feedback",
"all_findings"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "decision",
"flowchart_shape": "diamond",
"flowchart_color": "#d89d26"
},
{
"id": "final-report",
"name": "Risk Dashboard Report",
"description": "Generate an HTML risk dashboard with color-coded grades, category breakdown, detailed findings, and remediation steps",
"node_type": "event_loop",
"tools": [
"save_data",
"append_data",
"serve_file_to_user"
],
"input_keys": [
"all_findings",
"risk_report",
"target_domain"
],
"output_keys": [
"report_status"
],
"success_criteria": "",
"sub_agents": [],
"flowchart_type": "terminal",
"flowchart_shape": "stadium",
"flowchart_color": "#b5453a"
}
],
"edges": [
{
"id": "edge-0",
"source": "intake",
"target": "passive-recon",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-1",
"source": "passive-recon",
"target": "risk-scoring",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-2",
"source": "risk-scoring",
"target": "findings-review",
"condition": "on_success",
"description": "",
"label": ""
},
{
"id": "edge-3",
"source": "findings-review",
"target": "passive-recon",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-4",
"source": "findings-review",
"target": "final-report",
"condition": "conditional",
"description": "",
"label": ""
},
{
"id": "edge-5",
"source": "final-report",
"target": "intake",
"condition": "on_success",
"description": "",
"label": ""
}
],
"entry_node": "intake",
"terminal_nodes": [
"final-report"
],
"flowchart_legend": {
"start": {
"shape": "stadium",
"color": "#8aad3f"
},
"terminal": {
"shape": "stadium",
"color": "#b5453a"
},
"process": {
"shape": "rectangle",
"color": "#b5a575"
},
"decision": {
"shape": "diamond",
"color": "#d89d26"
},
"io": {
"shape": "parallelogram",
"color": "#d06818"
},
"document": {
"shape": "document",
"color": "#c4b830"
},
"database": {
"shape": "cylinder",
"color": "#508878"
},
"subprocess": {
"shape": "subroutine",
"color": "#887a48"
},
"browser": {
"shape": "hexagon",
"color": "#cc8850"
}
}
},
"flowchart_map": {
"intake": [
"intake"
],
"passive-recon": [
"passive-recon"
],
"risk-scoring": [
"risk-scoring"
],
"findings-review": [
"findings-review"
],
"final-report": [
"final-report"
]
}
}
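Every draft in this set shares the same referential invariants: `entry_node` and each `terminal_nodes` entry must name a defined node, and every edge's `source`/`target` must resolve to a node `id`. A minimal validation sketch of those invariants (illustrative only; the project's actual validation may check more):

```python
def validate_draft(draft: dict) -> list[str]:
    """Return human-readable problems found in a draft graph dict."""
    problems = []
    node_ids = {n["id"] for n in draft.get("nodes", [])}

    entry = draft.get("entry_node")
    if entry not in node_ids:
        problems.append(f"entry_node {entry!r} is not a defined node")

    for t in draft.get("terminal_nodes", []):
        if t not in node_ids:
            problems.append(f"terminal node {t!r} is not a defined node")

    for edge in draft.get("edges", []):
        for end in ("source", "target"):
            if edge.get(end) not in node_ids:
                problems.append(
                    f"{edge.get('id')}: {end} {edge.get(end)!r} is undefined"
                )

    return problems


# A well-formed two-node draft passes with no problems reported.
draft = {
    "nodes": [{"id": "intake"}, {"id": "report"}],
    "edges": [{"id": "edge-0", "source": "intake", "target": "report"}],
    "entry_node": "intake",
    "terminal_nodes": ["report"],
}
assert validate_draft(draft) == []
```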
@@ -21,6 +21,9 @@ $ErrorActionPreference = "Continue"
$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Definition
$UvHelperPath = Join-Path $ScriptDir "scripts\uv-discovery.ps1"
# Hive LLM router endpoint
$HiveLlmEndpoint = "https://api.adenhq.com"
. $UvHelperPath
# ============================================================
@@ -903,6 +906,11 @@ $kimiKey = [System.Environment]::GetEnvironmentVariable("KIMI_API_KEY", "User")
if (-not $kimiKey) { $kimiKey = $env:KIMI_API_KEY }
if ($kimiKey) { $KimiCredDetected = $true }
$HiveCredDetected = $false
$hiveKey = [System.Environment]::GetEnvironmentVariable("HIVE_API_KEY", "User")
if (-not $hiveKey) { $hiveKey = $env:HIVE_API_KEY }
if ($hiveKey) { $HiveCredDetected = $true }
# Detect API key providers
$ProviderMenuEnvVars = @("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "GEMINI_API_KEY", "GROQ_API_KEY", "CEREBRAS_API_KEY")
$ProviderMenuNames = @("Anthropic (Claude) - Recommended", "OpenAI (GPT)", "Google Gemini - Free tier available", "Groq - Fast, free tier", "Cerebras - Fast, free tier")
@@ -933,6 +941,7 @@ if (Test-Path $HiveConfigFile) {
elseif ($prevLlm.use_kimi_code_subscription) { $PrevSubMode = "kimi_code" }
elseif ($prevLlm.api_base -and $prevLlm.api_base -like "*api.z.ai*") { $PrevSubMode = "zai_code" }
elseif ($prevLlm.api_base -and $prevLlm.api_base -like "*api.kimi.com*") { $PrevSubMode = "kimi_code" }
elseif ($prevLlm.provider -eq "hive" -or ($prevLlm.api_base -and $prevLlm.api_base -like "*adenhq.com*")) { $PrevSubMode = "hive_llm" }
}
} catch { }
}
@@ -946,6 +955,7 @@ if ($PrevSubMode -or $PrevProvider) {
"zai_code" { if ($ZaiCredDetected) { $prevCredValid = $true } }
"codex" { if ($CodexCredDetected) { $prevCredValid = $true } }
"kimi_code" { if ($KimiCredDetected) { $prevCredValid = $true } }
"hive_llm" { if ($HiveCredDetected) { $prevCredValid = $true } }
default {
if ($PrevEnvVar) {
$envVal = [System.Environment]::GetEnvironmentVariable($PrevEnvVar, "Process")
@@ -960,14 +970,15 @@ if ($PrevSubMode -or $PrevProvider) {
"zai_code" { $DefaultChoice = "2" }
"codex" { $DefaultChoice = "3" }
"kimi_code" { $DefaultChoice = "4" }
"hive_llm" { $DefaultChoice = "5" }
}
if (-not $DefaultChoice) {
switch ($PrevProvider) {
"anthropic" { $DefaultChoice = "5" }
"openai" { $DefaultChoice = "6" }
"gemini" { $DefaultChoice = "7" }
"groq" { $DefaultChoice = "8" }
"cerebras" { $DefaultChoice = "9" }
"anthropic" { $DefaultChoice = "6" }
"openai" { $DefaultChoice = "7" }
"gemini" { $DefaultChoice = "8" }
"groq" { $DefaultChoice = "9" }
"cerebras" { $DefaultChoice = "10" }
"kimi" { $DefaultChoice = "4" }
}
}
@@ -1007,12 +1018,19 @@ Write-Host ") Kimi Code Subscription " -NoNewline
Write-Color -Text "(use your Kimi Code plan)" -Color DarkGray -NoNewline
if ($KimiCredDetected) { Write-Color -Text " (credential detected)" -Color Green } else { Write-Host "" }
# 5) Hive LLM
Write-Host " " -NoNewline
Write-Color -Text "5" -Color Cyan -NoNewline
Write-Host ") Hive LLM " -NoNewline
Write-Color -Text "(use your Hive API key)" -Color DarkGray -NoNewline
if ($HiveCredDetected) { Write-Color -Text " (credential detected)" -Color Green } else { Write-Host "" }
Write-Host ""
Write-Color -Text " API key providers:" -Color Cyan
# 5-9) API key providers
# 6-10) API key providers
for ($idx = 0; $idx -lt $ProviderMenuEnvVars.Count; $idx++) {
$num = $idx + 5
$num = $idx + 6
$envVal = [System.Environment]::GetEnvironmentVariable($ProviderMenuEnvVars[$idx], "Process")
if (-not $envVal) { $envVal = [System.Environment]::GetEnvironmentVariable($ProviderMenuEnvVars[$idx], "User") }
Write-Host " " -NoNewline
@@ -1022,7 +1040,7 @@ for ($idx = 0; $idx -lt $ProviderMenuEnvVars.Count; $idx++) {
}
Write-Host " " -NoNewline
Write-Color -Text "10" -Color Cyan -NoNewline
Write-Color -Text "11" -Color Cyan -NoNewline
Write-Host ") Skip for now"
Write-Host ""
@@ -1033,16 +1051,16 @@ if ($DefaultChoice) {
while ($true) {
if ($DefaultChoice) {
$raw = Read-Host "Enter choice (1-10) [$DefaultChoice]"
$raw = Read-Host "Enter choice (1-11) [$DefaultChoice]"
if ([string]::IsNullOrWhiteSpace($raw)) { $raw = $DefaultChoice }
} else {
$raw = Read-Host "Enter choice (1-10)"
$raw = Read-Host "Enter choice (1-11)"
}
if ($raw -match '^\d+$') {
$num = [int]$raw
if ($num -ge 1 -and $num -le 10) { break }
if ($num -ge 1 -and $num -le 11) { break }
}
Write-Color -Text "Invalid choice. Please enter 1-10" -Color Red
Write-Color -Text "Invalid choice. Please enter 1-11" -Color Red
}
switch ($num) {
@@ -1121,9 +1139,33 @@ switch ($num) {
Write-Ok "Using Kimi Code subscription"
Write-Color -Text " Model: kimi-k2.5 | API: api.kimi.com/coding" -Color DarkGray
}
{ $_ -ge 5 -and $_ -le 9 } {
5 {
# Hive LLM
$SubscriptionMode = "hive_llm"
$SelectedProviderId = "hive"
$SelectedEnvVar = "HIVE_API_KEY"
$SelectedMaxTokens = 32768
$SelectedMaxContextTokens = 120000
Write-Host ""
Write-Ok "Using Hive LLM"
Write-Host ""
Write-Host " Select a model:"
Write-Host " " -NoNewline; Write-Color -Text "1)" -Color Cyan -NoNewline; Write-Host " queen " -NoNewline; Write-Color -Text "(default - Hive flagship)" -Color DarkGray
Write-Host " " -NoNewline; Write-Color -Text "2)" -Color Cyan -NoNewline; Write-Host " kimi-2.5"
Write-Host " " -NoNewline; Write-Color -Text "3)" -Color Cyan -NoNewline; Write-Host " GLM-5"
Write-Host ""
$hiveModelChoice = Read-Host " Enter model choice (1-3) [1]"
if (-not $hiveModelChoice) { $hiveModelChoice = "1" }
switch ($hiveModelChoice) {
"2" { $SelectedModel = "kimi-2.5" }
"3" { $SelectedModel = "GLM-5" }
default { $SelectedModel = "queen" }
}
Write-Color -Text " Model: $SelectedModel | API: $HiveLlmEndpoint" -Color DarkGray
}
{ $_ -ge 6 -and $_ -le 10 } {
# API key providers
$provIdx = $num - 5
$provIdx = $num - 6
$SelectedEnvVar = $ProviderMenuEnvVars[$provIdx]
$SelectedProviderId = $ProviderMenuIds[$provIdx]
$providerName = $ProviderMenuNames[$provIdx] -replace ' - .*', '' # strip description
@@ -1194,7 +1236,7 @@ switch ($num) {
}
}
}
10 {
11 {
Write-Host ""
Write-Warn "Skipped. An LLM API key is required to test and use worker agents."
Write-Host " Add your API key later by running:"
@@ -1335,6 +1377,70 @@ if ($SubscriptionMode -eq "kimi_code") {
}
}
# For Hive LLM: prompt for API key with verification + retry
if ($SubscriptionMode -eq "hive_llm") {
while ($true) {
$existingHive = [System.Environment]::GetEnvironmentVariable("HIVE_API_KEY", "User")
if (-not $existingHive) { $existingHive = $env:HIVE_API_KEY }
if ($existingHive) {
$masked = $existingHive.Substring(0, [Math]::Min(4, $existingHive.Length)) + "..." + $existingHive.Substring([Math]::Max(0, $existingHive.Length - 4))
Write-Host ""
Write-Color -Text " $([char]0x2B22) Current Hive key: $masked" -Color Green
Write-Host ""
$apiKey = Read-Host "Paste a new Hive API key (or press Enter to keep current)"
} else {
Write-Host ""
Write-Host " Get your API key from: " -NoNewline
Write-Color -Text "https://discord.com/invite/hQdU7QDkgR" -Color Cyan
Write-Host ""
$apiKey = Read-Host "Paste your Hive API key (or press Enter to skip)"
}
if ($apiKey) {
[System.Environment]::SetEnvironmentVariable("HIVE_API_KEY", $apiKey, "User")
$env:HIVE_API_KEY = $apiKey
Write-Host ""
Write-Ok "Hive API key saved as User environment variable"
# Health check the new key
Write-Host " Verifying Hive API key... " -NoNewline
try {
$hcOutput = & $PythonCmd scripts/check_llm_key.py hive $apiKey "$HiveLlmEndpoint" 2>&1
$hcJson = $hcOutput | ConvertFrom-Json
if ($hcJson.valid -eq $true) {
Write-Color -Text "ok" -Color Green
break
} elseif ($hcJson.valid -eq $false) {
Write-Color -Text "failed" -Color Red
Write-Warn $hcJson.message
[System.Environment]::SetEnvironmentVariable("HIVE_API_KEY", $null, "User")
Remove-Item -Path "Env:\HIVE_API_KEY" -ErrorAction SilentlyContinue
Write-Host ""
Read-Host " Press Enter to try again"
} else {
Write-Color -Text "--" -Color Yellow
Write-Color -Text " Could not verify key (network issue). The key has been saved." -Color DarkGray
break
}
} catch {
Write-Color -Text "--" -Color Yellow
break
}
} elseif (-not $existingHive) {
Write-Host ""
Write-Warn "Skipped. Add your Hive API key later:"
Write-Color -Text " [System.Environment]::SetEnvironmentVariable('HIVE_API_KEY', 'your-key', 'User')" -Color Cyan
$SelectedEnvVar = ""
$SelectedProviderId = ""
$SubscriptionMode = ""
break
} else {
break
}
}
}
# Prompt for model if not already selected (manual provider path)
if ($SelectedProviderId -and -not $SelectedModel) {
$modelSel = Get-ModelSelection $SelectedProviderId
@@ -1375,6 +1481,9 @@ if ($SelectedProviderId) {
} elseif ($SubscriptionMode -eq "kimi_code") {
$config.llm["api_base"] = "https://api.kimi.com/coding"
$config.llm["api_key_env_var"] = $SelectedEnvVar
} elseif ($SubscriptionMode -eq "hive_llm") {
$config.llm["api_base"] = $HiveLlmEndpoint
$config.llm["api_key_env_var"] = $SelectedEnvVar
} else {
$config.llm["api_key_env_var"] = $SelectedEnvVar
}
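The key-masking expression in the hunk above (`Substring(0, Min(4, len)) + "..." + Substring(Max(0, len - 4))`) keeps the first and last four characters of the stored key. A minimal Python sketch mirroring that `Substring` arithmetic (the function name is hypothetical; note that keys shorter than 8 characters display overlapping characters on both sides of the ellipsis):

```python
def mask_key(key: str) -> str:
    # Mirror of the PowerShell masking: first 4 chars + "..." + last 4 chars.
    head = key[: min(4, len(key))]
    tail = key[max(0, len(key) - 4):]
    return head + "..." + tail

# A 15-character key masks cleanly; a 3-character key repeats itself.
print(mask_key("sk-abcdef123456"))  # sk-a...3456
print(mask_key("abc"))              # abc...abc
```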
+67 -19
@@ -32,6 +32,9 @@ NC='\033[0m' # No Color
# Get the directory where this script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# Hive LLM router endpoint
HIVE_LLM_ENDPOINT="https://api.adenhq.com"
# Helper function for prompts
prompt_yes_no() {
local prompt="$1"
@@ -864,6 +867,11 @@ elif [ -n "${KIMI_API_KEY:-}" ]; then
KIMI_CRED_DETECTED=true
fi
HIVE_CRED_DETECTED=false
if [ -n "${HIVE_API_KEY:-}" ]; then
HIVE_CRED_DETECTED=true
fi
# Detect API key providers
if [ "$USE_ASSOC_ARRAYS" = true ]; then
for env_var in "${!PROVIDER_NAMES[@]}"; do
@@ -901,6 +909,7 @@ try:
elif llm.get('use_codex_subscription'): sub = 'codex'
elif llm.get('use_kimi_code_subscription'): sub = 'kimi_code'
elif llm.get('provider', '') == 'minimax' or 'api.minimax.io' in llm.get('api_base', ''): sub = 'minimax_code'
elif llm.get('provider', '') == 'hive' or 'adenhq.com' in llm.get('api_base', ''): sub = 'hive_llm'
elif 'api.z.ai' in llm.get('api_base', ''): sub = 'zai_code'
print(f'PREV_SUB_MODE={sub}')
except Exception:
@@ -917,6 +926,7 @@ if [ -n "$PREV_SUB_MODE" ] || [ -n "$PREV_PROVIDER" ]; then
zai_code) [ "$ZAI_CRED_DETECTED" = true ] && PREV_CRED_VALID=true ;;
codex) [ "$CODEX_CRED_DETECTED" = true ] && PREV_CRED_VALID=true ;;
kimi_code) [ "$KIMI_CRED_DETECTED" = true ] && PREV_CRED_VALID=true ;;
hive_llm) [ "$HIVE_CRED_DETECTED" = true ] && PREV_CRED_VALID=true ;;
*)
# API key provider — check if the env var is set
if [ -n "$PREV_ENV_VAR" ] && [ -n "${!PREV_ENV_VAR}" ]; then
@@ -932,16 +942,18 @@ if [ -n "$PREV_SUB_MODE" ] || [ -n "$PREV_PROVIDER" ]; then
codex) DEFAULT_CHOICE=3 ;;
minimax_code) DEFAULT_CHOICE=4 ;;
kimi_code) DEFAULT_CHOICE=5 ;;
hive_llm) DEFAULT_CHOICE=6 ;;
esac
if [ -z "$DEFAULT_CHOICE" ]; then
case "$PREV_PROVIDER" in
anthropic) DEFAULT_CHOICE=6 ;;
openai) DEFAULT_CHOICE=7 ;;
gemini) DEFAULT_CHOICE=8 ;;
groq) DEFAULT_CHOICE=9 ;;
cerebras) DEFAULT_CHOICE=10 ;;
anthropic) DEFAULT_CHOICE=7 ;;
openai) DEFAULT_CHOICE=8 ;;
gemini) DEFAULT_CHOICE=9 ;;
groq) DEFAULT_CHOICE=10 ;;
cerebras) DEFAULT_CHOICE=11 ;;
minimax) DEFAULT_CHOICE=4 ;;
kimi) DEFAULT_CHOICE=5 ;;
hive) DEFAULT_CHOICE=6 ;;
esac
fi
fi
@@ -987,14 +999,21 @@ else
echo -e " ${CYAN}5)${NC} Kimi Code Subscription ${DIM}(use your Kimi Code plan)${NC}"
fi
# 6) Hive LLM
if [ "$HIVE_CRED_DETECTED" = true ]; then
echo -e " ${CYAN}6)${NC} Hive LLM ${DIM}(use your Hive API key)${NC} ${GREEN}(credential detected)${NC}"
else
echo -e " ${CYAN}6)${NC} Hive LLM ${DIM}(use your Hive API key)${NC}"
fi
echo ""
echo -e " ${CYAN}${BOLD}API key providers:${NC}"
# 6-10) API key providers — show (credential detected) if key already set
# 7-11) API key providers — show (credential detected) if key already set
PROVIDER_MENU_ENVS=(ANTHROPIC_API_KEY OPENAI_API_KEY GEMINI_API_KEY GROQ_API_KEY CEREBRAS_API_KEY)
PROVIDER_MENU_NAMES=("Anthropic (Claude) - Recommended" "OpenAI (GPT)" "Google Gemini - Free tier available" "Groq - Fast, free tier" "Cerebras - Fast, free tier")
for idx in 0 1 2 3 4; do
num=$((idx + 6))
num=$((idx + 7))
env_var="${PROVIDER_MENU_ENVS[$idx]}"
if [ -n "${!env_var}" ]; then
echo -e " ${CYAN}$num)${NC} ${PROVIDER_MENU_NAMES[$idx]} ${GREEN}(credential detected)${NC}"
@@ -1003,7 +1022,7 @@ for idx in 0 1 2 3 4; do
fi
done
echo -e " ${CYAN}11)${NC} Skip for now"
echo -e " ${CYAN}12)${NC} Skip for now"
echo ""
if [ -n "$DEFAULT_CHOICE" ]; then
@@ -1013,15 +1032,15 @@ fi
while true; do
if [ -n "$DEFAULT_CHOICE" ]; then
read -r -p "Enter choice (1-11) [$DEFAULT_CHOICE]: " choice || true
read -r -p "Enter choice (1-12) [$DEFAULT_CHOICE]: " choice || true
choice="${choice:-$DEFAULT_CHOICE}"
else
read -r -p "Enter choice (1-11): " choice || true
read -r -p "Enter choice (1-12): " choice || true
fi
if [[ "$choice" =~ ^[0-9]+$ ]] && [ "$choice" -ge 1 ] && [ "$choice" -le 11 ]; then
if [[ "$choice" =~ ^[0-9]+$ ]] && [ "$choice" -ge 1 ] && [ "$choice" -le 12 ]; then
break
fi
echo -e "${RED}Invalid choice. Please enter 1-11${NC}"
echo -e "${RED}Invalid choice. Please enter 1-12${NC}"
done
case $choice in
@@ -1118,36 +1137,63 @@ case $choice in
echo -e " ${DIM}Model: kimi-k2.5 | API: api.kimi.com/coding${NC}"
;;
6)
# Hive LLM
SUBSCRIPTION_MODE="hive_llm"
SELECTED_PROVIDER_ID="hive"
SELECTED_ENV_VAR="HIVE_API_KEY"
SELECTED_MAX_TOKENS=32768
SELECTED_MAX_CONTEXT_TOKENS=120000
SELECTED_API_BASE="$HIVE_LLM_ENDPOINT"
PROVIDER_NAME="Hive"
SIGNUP_URL="https://discord.com/invite/hQdU7QDkgR"
echo ""
echo -e "${GREEN}${NC} Using Hive LLM"
echo ""
echo -e " Select a model:"
echo -e " ${CYAN}1)${NC} queen ${DIM}(default — Hive flagship)${NC}"
echo -e " ${CYAN}2)${NC} kimi-2.5"
echo -e " ${CYAN}3)${NC} GLM-5"
echo ""
read -r -p " Enter model choice (1-3) [1]: " hive_model_choice || true
hive_model_choice="${hive_model_choice:-1}"
case "$hive_model_choice" in
2) SELECTED_MODEL="kimi-2.5" ;;
3) SELECTED_MODEL="GLM-5" ;;
*) SELECTED_MODEL="queen" ;;
esac
echo -e " ${DIM}Model: $SELECTED_MODEL | API: ${HIVE_LLM_ENDPOINT}${NC}"
;;
7)
SELECTED_ENV_VAR="ANTHROPIC_API_KEY"
SELECTED_PROVIDER_ID="anthropic"
PROVIDER_NAME="Anthropic"
SIGNUP_URL="https://console.anthropic.com/settings/keys"
;;
7)
8)
SELECTED_ENV_VAR="OPENAI_API_KEY"
SELECTED_PROVIDER_ID="openai"
PROVIDER_NAME="OpenAI"
SIGNUP_URL="https://platform.openai.com/api-keys"
;;
8)
9)
SELECTED_ENV_VAR="GEMINI_API_KEY"
SELECTED_PROVIDER_ID="gemini"
PROVIDER_NAME="Google Gemini"
SIGNUP_URL="https://aistudio.google.com/apikey"
;;
9)
10)
SELECTED_ENV_VAR="GROQ_API_KEY"
SELECTED_PROVIDER_ID="groq"
PROVIDER_NAME="Groq"
SIGNUP_URL="https://console.groq.com/keys"
;;
10)
11)
SELECTED_ENV_VAR="CEREBRAS_API_KEY"
SELECTED_PROVIDER_ID="cerebras"
PROVIDER_NAME="Cerebras"
SIGNUP_URL="https://cloud.cerebras.ai/"
;;
11)
12)
echo ""
echo -e "${YELLOW}Skipped.${NC} An LLM API key is required to test and use worker agents."
echo -e "Add your API key later by running:"
@@ -1160,7 +1206,7 @@ case $choice in
esac
# For API-key providers: prompt for key (allow replacement if already set)
if { [ -z "$SUBSCRIPTION_MODE" ] || [ "$SUBSCRIPTION_MODE" = "minimax_code" ] || [ "$SUBSCRIPTION_MODE" = "kimi_code" ]; } && [ -n "$SELECTED_ENV_VAR" ]; then
if { [ -z "$SUBSCRIPTION_MODE" ] || [ "$SUBSCRIPTION_MODE" = "minimax_code" ] || [ "$SUBSCRIPTION_MODE" = "kimi_code" ] || [ "$SUBSCRIPTION_MODE" = "hive_llm" ]; } && [ -n "$SELECTED_ENV_VAR" ]; then
while true; do
CURRENT_KEY="${!SELECTED_ENV_VAR}"
if [ -n "$CURRENT_KEY" ]; then
@@ -1188,7 +1234,7 @@ if { [ -z "$SUBSCRIPTION_MODE" ] || [ "$SUBSCRIPTION_MODE" = "minimax_code" ] ||
echo -e "${GREEN}${NC} API key saved to $SHELL_RC_FILE"
# Health check the new key
echo -n " Verifying API key... "
if { [ "$SUBSCRIPTION_MODE" = "minimax_code" ] || [ "$SUBSCRIPTION_MODE" = "kimi_code" ]; } && [ -n "${SELECTED_API_BASE:-}" ]; then
if { [ "$SUBSCRIPTION_MODE" = "minimax_code" ] || [ "$SUBSCRIPTION_MODE" = "kimi_code" ] || [ "$SUBSCRIPTION_MODE" = "hive_llm" ]; } && [ -n "${SELECTED_API_BASE:-}" ]; then
HC_RESULT=$(uv run python "$SCRIPT_DIR/scripts/check_llm_key.py" "$SELECTED_PROVIDER_ID" "$API_KEY" "$SELECTED_API_BASE" 2>/dev/null) || true
else
HC_RESULT=$(uv run python "$SCRIPT_DIR/scripts/check_llm_key.py" "$SELECTED_PROVIDER_ID" "$API_KEY" 2>/dev/null) || true
@@ -1310,6 +1356,8 @@ if [ -n "$SELECTED_PROVIDER_ID" ]; then
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "$SELECTED_API_BASE" > /dev/null
elif [ "$SUBSCRIPTION_MODE" = "kimi_code" ]; then
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "$SELECTED_API_BASE" > /dev/null
elif [ "$SUBSCRIPTION_MODE" = "hive_llm" ]; then
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" "" "$SELECTED_API_BASE" > /dev/null
else
save_configuration "$SELECTED_PROVIDER_ID" "$SELECTED_ENV_VAR" "$SELECTED_MODEL" "$SELECTED_MAX_TOKENS" "$SELECTED_MAX_CONTEXT_TOKENS" > /dev/null
fi
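The `hive_llm` branch added to the embedded detection snippet is ordered before the generic `api.z.ai` substring check, so order matters when a config matches more than one rule. A standalone sketch of that detection, with branch order copied from the hunk (branches elided in the diff context are omitted, and the function name is hypothetical):

```python
def detect_sub_mode(llm: dict) -> str:
    # Mirrors the embedded-Python detection from the bash installer hunk.
    # Earlier, more specific checks win over the trailing substring check.
    api_base = llm.get("api_base", "")
    if llm.get("use_codex_subscription"):
        return "codex"
    if llm.get("use_kimi_code_subscription"):
        return "kimi_code"
    if llm.get("provider") == "minimax" or "api.minimax.io" in api_base:
        return "minimax_code"
    if llm.get("provider") == "hive" or "adenhq.com" in api_base:
        return "hive_llm"
    if "api.z.ai" in api_base:
        return "zai_code"
    return ""

print(detect_sub_mode({"api_base": "https://api.adenhq.com"}))  # hive_llm
```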
+10
@@ -16,6 +16,8 @@ import sys
import httpx
from framework.config import HIVE_LLM_ENDPOINT
TIMEOUT = 10.0
@@ -135,6 +137,10 @@ PROVIDERS = {
"kimi": lambda key, **kw: check_anthropic_compatible(
key, "https://api.kimi.com/coding/v1/messages", "Kimi"
),
# Hive LLM uses an Anthropic-compatible endpoint
"hive": lambda key, **kw: check_anthropic_compatible(
key, f"{HIVE_LLM_ENDPOINT}/v1/messages", "Hive"
),
}
@@ -162,6 +168,10 @@ def main() -> None:
result = check_anthropic_compatible(
api_key, api_base.rstrip("/") + "/v1/messages", "Kimi"
)
elif api_base and provider_id == "hive":
result = check_anthropic_compatible(
api_key, api_base.rstrip("/") + "/v1/messages", "Hive"
)
elif api_base:
# Custom API base (ZAI or other OpenAI-compatible)
endpoint = api_base.rstrip("/") + "/models"
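Both installers branch on a three-state result from `check_llm_key.py`: `valid` true (keep the key), `valid` false (clear it and retry), and anything else (treated as a network issue, key kept unverified). A minimal sketch of that result contract as a pure function (the real `check_anthropic_compatible` body is not shown in this diff; the status-code thresholds and function name here are assumptions):

```python
import json

def probe_result(status_code: int, name: str) -> dict:
    """Map a probe's HTTP status to the three-state JSON the installers parse."""
    if status_code in (401, 403):
        # Auth failure: installers clear the saved key and prompt again.
        return {"valid": False, "message": f"{name}: API key rejected"}
    if 200 <= status_code < 500:
        # Any non-auth response shows the key was accepted, even a 400
        # from a deliberately minimal probe payload.
        return {"valid": True, "message": f"{name}: ok"}
    # Server/network trouble: installers keep the key but warn it is unverified.
    return {"valid": None, "message": f"{name}: could not verify (HTTP {status_code})"}

# The installers consume this over stdout as JSON:
print(json.dumps(probe_result(200, "Hive")))
```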