Compare commits
125 Commits
| SHA1 | Author | Date |
|---|---|---|
| 90f376136e | |||
| 64830a6720 | |||
| 514d2828fa | |||
| dee3980dbe | |||
| c03f1caa58 | |||
| 648e3cd52a | |||
| b216df76a0 | |||
| 6e88bb0205 | |||
| cf1e26b012 | |||
| 47e02c0821 | |||
| ecbf543e4c | |||
| 7daca39bb2 | |||
| d8712ceb72 | |||
| 5a90a4ba42 | |||
| e69c381331 | |||
| 8f608048f9 | |||
| df29c49bd0 | |||
| b3759db83b | |||
| 8308207be8 | |||
| 6b86c602c7 | |||
| d9644eaa39 | |||
| 3976ea6934 | |||
| cc00ae8999 | |||
| 70bf337c03 | |||
| 6ecdbf47b0 | |||
| e0e1abbb64 | |||
| cb8c26ee18 | |||
| 3d6beca577 | |||
| bed9670395 | |||
| 61bb0b6594 | |||
| e7506fcd25 | |||
| 7cc92eb8c3 | |||
| 3a70243b82 | |||
| b701605a62 | |||
| a5b17a293b | |||
| 7ad25d986b | |||
| db572b9be6 | |||
| 0ee653a164 | |||
| c92662bdb1 | |||
| 19469ff404 | |||
| 7fcb51985d | |||
| 3c9911c25b | |||
| 3dbd20040a | |||
| c9d62139af | |||
| 172b180477 | |||
| 6637bc8d96 | |||
| 93dc35dcbb | |||
| 30ad3edfbf | |||
| d10912be15 | |||
| b9c5191059 | |||
| d9037172d8 | |||
| df41732e95 | |||
| cd9a625041 | |||
| 420d703138 | |||
| 66866e524d | |||
| 33e6c018a3 | |||
| 1ac50ab532 | |||
| 4df924d3d7 | |||
| 8f2d87cc5d | |||
| 4b795584f6 | |||
| 6024ae4241 | |||
| aaa5d661c3 | |||
| 2e5670ace6 | |||
| 634658e829 | |||
| dc64cc68a1 | |||
| e8d56c815d | |||
| cc21780e99 | |||
| 714de59d2a | |||
| ed8d417bef | |||
| 294df7f066 | |||
| b46fe69712 | |||
| 6e6efb97bd | |||
| dc4be4f906 | |||
| 9cb20986d2 | |||
| 7d75c6a09f | |||
| aab38222db | |||
| c46082780f | |||
| ff8123acb9 | |||
| 358b4e1bf2 | |||
| 934c424510 | |||
| f12db37d75 | |||
| 2b263f6e10 | |||
| 4513f5dcd7 | |||
| 1fabd8e8fb | |||
| 043c79e0e4 | |||
| 9193336fd3 | |||
| 59e90d3168 | |||
| ef34b1190a | |||
| 1e848d67bb | |||
| a0e68871f7 | |||
| 767beb4005 | |||
| 95655a4c85 | |||
| 102866780c | |||
| a379ae97c8 | |||
| d5ae7e6c4b | |||
| 68f6b72564 | |||
| 95f1d1abcd | |||
| e0cd16b92b | |||
| 71a71beca7 | |||
| 45cfae5217 | |||
| 4d877469d5 | |||
| c7e85aa9f5 | |||
| 8f042b7ca5 | |||
| 1630c1ee7a | |||
| 08b0cbc208 | |||
| 4bcbebf761 | |||
| 00a3f94315 | |||
| 76fe644cac | |||
| c6c333761b | |||
| 0417e33ab2 | |||
| 42a9c7b0f1 | |||
| 398e86d787 | |||
| 51406e358e | |||
| 137162eada | |||
| 2b7a38f746 | |||
| 6022f6c911 | |||
| dacda3337f | |||
| 267f797abc | |||
| 42fd1ec8d1 | |||
| 81774d5d0e | |||
| d1cbfd1e54 | |||
| fd71501215 | |||
| 406bfb23b9 | |||
| 1e2e6e03dd | |||
| 8b99bb8590 |
@@ -0,0 +1,241 @@
---
name: browser-edge-cases
description: SOP for debugging browser automation failures on complex websites. Use when browser tools fail on specific sites like LinkedIn, Twitter/X, SPAs, or sites with Shadow DOM.
license: MIT
---

# Browser Tool Edge Cases

Standard Operating Procedure for debugging and fixing browser automation failures on complex websites.

## When to Use This Skill

- `browser_scroll` succeeds but page doesn't move
- `browser_click` succeeds but no action triggered
- `browser_type` text disappears or doesn't work
- `browser_snapshot` hangs or returns stale content
- `browser_navigate` loads wrong content

## SOP: Debugging Browser Tool Failures

### Phase 1: Reproduce & Isolate

```
1. Create minimal test case demonstrating failure
2. Test against simple site (example.com) to verify tool works
3. Test against problematic site to confirm issue
```

**Quick isolation test:**
```python
# Test 1: Does the tool work at all?
await browser_navigate(tab_id, "https://example.com")
result = await browser_scroll(tab_id, "down", 100)
# Should work on simple sites

# Test 2: Does it fail on the problematic site?
await browser_navigate(tab_id, "https://linkedin.com/feed")
result = await browser_scroll(tab_id, "down", 100)
# If this fails but example.com works → site-specific edge case
```

### Phase 2: Analyze Root Cause

**Step 2a: Check console for errors**
```python
console = await browser_console(tab_id)
# Look for: CSP violations, React errors, JavaScript exceptions
```

**Step 2b: Inspect DOM structure**
```python
html = await browser_html(tab_id)
snapshot = await browser_snapshot(tab_id)
# Look for:
# - Nested scrollable divs (overflow: scroll/auto)
# - Shadow DOM roots
# - iframes
# - Custom widgets
```

**Step 2c: Identify the pattern**

| Symptom | Likely Cause | Check |
|---------|--------------|-------|
| Scroll doesn't move | Nested scroll container | Look for `overflow: scroll` divs |
| Click no effect | Element covered | Check `getBoundingClientRect` vs viewport |
| Type clears | Autocomplete/React | Check for event listeners on input |
| Snapshot hangs | Huge DOM | Check node count in snapshot |
| Snapshot stale | SPA hydration | Wait after navigation |
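
The checks in this table can be bundled into a single page-side probe so one call tells you which pattern you are in. A minimal sketch, assuming an evaluate-style helper like the `bridge.evaluate(tab_id, js)` used in the test scripts (the helper name and result shape are assumptions, not part of the SOP):

```python
# Hypothetical one-shot probe for the Step 2c table (helper name is an assumption)
PROBE_JS = """
(function() {
    const all = [...document.querySelectorAll('*')];
    return {
        nodeCount: all.length,                                // huge DOM?
        scrollContainers: all.filter(el => {
            const o = getComputedStyle(el).overflow;
            return o.includes('scroll') || o.includes('auto');
        }).length,                                            // nested scroll container?
        shadowRoots: all.filter(el => el.shadowRoot).length,  // Shadow DOM?
        iframes: document.querySelectorAll('iframe').length,  // embedded content?
    };
})()
"""


async def probe_page(bridge, tab_id):
    # Returns a dict of the raw counts; interpret it against the table above
    result = await bridge.evaluate(tab_id, PROBE_JS)
    return result.get("result", {})
```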
### Phase 3: Implement Multi-Layer Fix

**Pattern: Always have fallbacks**

```python
async def robust_operation(tab_id):
    # Method 1: Primary approach
    try:
        result = await primary_method(tab_id)
        if verify_success(result):
            return result
    except Exception:
        pass

    # Method 2: CDP fallback
    try:
        result = await cdp_fallback(tab_id)
        if verify_success(result):
            return result
    except Exception:
        pass

    # Method 3: JavaScript fallback
    return await javascript_fallback(tab_id)
```

**Pattern: Always add timeouts**

```python
# Bad - can hang forever
result = await browser_snapshot(tab_id)

# Good - fails fast with useful error
try:
    result = await browser_snapshot(tab_id, timeout_s=10.0)
except asyncio.TimeoutError:
    # Handle timeout gracefully
    result = await fallback_snapshot(tab_id)
```

### Phase 4: Verify Fix

```
1. Run against problematic site → should work
2. Run against simple site → should still work (regression check)
3. Document in registry.md
```

## Pattern Library

### P1: Nested Scrollable Containers

**Sites:** LinkedIn, Twitter/X, any SPA with scrollable feeds

**Detection:**
```javascript
// Find largest scrollable container
(function() {
    const candidates = [];
    document.querySelectorAll('*').forEach(el => {
        const style = getComputedStyle(el);
        if (style.overflow.includes('scroll') || style.overflow.includes('auto')) {
            const rect = el.getBoundingClientRect();
            if (rect.width > 100 && rect.height > 100) {
                candidates.push({el, area: rect.width * rect.height});
            }
        }
    });
    candidates.sort((a, b) => b.area - a.area);
    return candidates[0]?.el;
})();
```

**Fix:** Dispatch scroll events at the container's center, not the viewport center.
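
A minimal sketch of that fix, assuming the same `bridge.evaluate(tab_id, js)` helper as the test scripts; the container search mirrors the detection snippet above and falls back to the window when no nested container is found:

```python
# Sketch only: the bridge helper and return shape are assumptions
SCROLL_CONTAINER_JS = """
(function(dy) {
    const candidates = [];
    document.querySelectorAll('*').forEach(el => {
        const style = getComputedStyle(el);
        if (style.overflow.includes('scroll') || style.overflow.includes('auto')) {
            const rect = el.getBoundingClientRect();
            if (rect.width > 100 && rect.height > 100) {
                candidates.push({el, area: rect.width * rect.height});
            }
        }
    });
    candidates.sort((a, b) => b.area - a.area);
    // Scroll the largest nested container, or the page itself if none exists
    const target = candidates[0] ? candidates[0].el : document.scrollingElement;
    target.scrollBy(0, dy);
    return target.scrollTop;
})(%d)
"""


async def scroll_largest_container(bridge, tab_id, dy=500):
    # Returns the container's scrollTop so the caller can verify movement
    result = await bridge.evaluate(tab_id, SCROLL_CONTAINER_JS % dy)
    return result.get("result")
```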
### P2: Element Covered by Overlay

**Sites:** Modals, tooltips, SPAs with loading overlays

**Detection:**
```javascript
const rect = element.getBoundingClientRect();
const centerX = rect.left + rect.width / 2;
const centerY = rect.top + rect.height / 2;
const topElement = document.elementFromPoint(centerX, centerY);
return topElement === element || element.contains(topElement);
```

**Fix:** Wait for overlay to disappear, or use JavaScript click.
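
A minimal sketch of that fix, again assuming a `bridge.evaluate(tab_id, js)` helper; it polls until the target is no longer covered, then falls back to a JavaScript click (the polling interval and attempt count are illustrative):

```python
import asyncio

IS_COVERED_JS = """
(function(sel) {
    const el = document.querySelector(sel);
    if (!el) return true;
    const r = el.getBoundingClientRect();
    const top = document.elementFromPoint(r.left + r.width / 2, r.top + r.height / 2);
    return !(top === el || el.contains(top));
})(%r)
"""


async def click_when_uncovered(bridge, tab_id, selector, attempts=10):
    for _ in range(attempts):
        covered = (await bridge.evaluate(tab_id, IS_COVERED_JS % selector)).get("result")
        if not covered:
            break
        await asyncio.sleep(0.5)  # give the overlay time to disappear
    # JavaScript click as the fallback if the element is still covered
    js = "(function(sel){ document.querySelector(sel).click(); })(%r)" % selector
    await bridge.evaluate(tab_id, js)
```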
### P3: React Synthetic Events

**Sites:** React SPAs, modern web apps

**Detection:** If CDP click doesn't trigger handler but manual click works.

**Fix:** Use JavaScript click as primary:
```javascript
element.click();
```

### P4: Huge DOM / Accessibility Tree

**Sites:** LinkedIn, Facebook, Twitter (feeds with 1000s of nodes)

**Detection:**
```javascript
document.querySelectorAll('*').length > 5000
```

**Fix** (see the sketch after this list):
1. Add timeout to snapshot operation
2. Truncate tree at 2000 nodes
3. Fall back to DOM-based snapshot if accessibility tree too large
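
A minimal sketch of the timeout layer, reusing the `browser_snapshot` / `fallback_snapshot` names from Phase 3; the `max_nodes` argument is an assumption standing in for whatever truncation knob the DOM-based fallback exposes:

```python
import asyncio


async def snapshot_with_guard(tab_id, timeout_s=10.0, max_nodes=2000):
    # Layer 1: normal snapshot, but never wait longer than timeout_s
    try:
        return await asyncio.wait_for(browser_snapshot(tab_id), timeout=timeout_s)
    except asyncio.TimeoutError:
        pass
    # Layer 2: DOM-based snapshot, truncated so huge trees can't hang the caller
    return await fallback_snapshot(tab_id, max_nodes=max_nodes)
```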
### P5: SPA Hydration Delay

**Sites:** React, Vue, Angular SPAs after navigation

**Detection:**
```javascript
// Check if React app has hydrated
document.querySelector('[data-reactroot]') ||
document.querySelector('[data-reactid]')
```

**Fix:** Wait for specific selector after navigation:
```python
await browser_navigate(tab_id, url, wait_until="load")
await browser_wait(tab_id, selector='[data-testid="content"]', timeout_ms=5000)
```

### P6: Shadow DOM

**Sites:** Components using Shadow DOM, Lit elements

**Detection:**
```javascript
[...document.querySelectorAll('*')].some(el => el.shadowRoot)
```

**Fix:** Pierce shadow root:
```javascript
function queryShadow(selector) {
    const parts = selector.split('>>>');
    let node = document;
    for (const part of parts) {
        if (node.shadowRoot) {
            node = node.shadowRoot.querySelector(part.trim());
        } else {
            node = node.querySelector(part.trim());
        }
    }
    return node;
}
```

## Quick Reference

| Issue | Primary Fix | Fallback |
|-------|-------------|----------|
| Scroll not working | Find scrollable container | Mouse wheel at container center |
| Click no effect | JavaScript click() | CDP mouse events |
| Type clears | Add delay_ms | Use execCommand |
| Snapshot hangs | Add timeout_s | DOM snapshot fallback |
| Stale content | Wait for selector | Increase wait_until timeout |
| Shadow DOM | Pierce selector | JavaScript traversal |

## References

- [registry.md](registry.md) - Full list of known edge cases
- [scripts/test_case.py](scripts/test_case.py) - Template for testing new cases
- [BROWSER_USE_PATTERNS.md](../../tools/BROWSER_USE_PATTERNS.md) - Implementation patterns from browser-use

@@ -0,0 +1,261 @@
# Browser Edge Case Registry

Curated list of known browser automation edge cases with symptoms, causes, and fixes.

---

## Scroll Issues

### #1: LinkedIn Nested Scroll Container

| Attribute | Value |
|-----------|-------|
| **Site** | LinkedIn (linkedin.com/feed) |
| **Symptom** | `browser_scroll()` returns `{ok: true}` but page doesn't move |
| **Root Cause** | Content is in a nested scrollable div (`overflow: scroll`), not the main window |
| **Detection** | Scanning `document.querySelectorAll('*')` turns up large elements with `overflow: scroll/auto` |
| **Fix** | JavaScript finds largest scrollable container, uses `container.scrollBy()` |
| **Code** | `bridge.py:808-891` - smart scroll with container detection |
| **Verified** | 2026-04-03 ✓ |

### #2: Twitter/X Lazy Loading

| Attribute | Value |
|-----------|-------|
| **Site** | Twitter/X (x.com) |
| **Symptom** | Infinite scroll doesn't load new content |
| **Root Cause** | Lazy loading requires content to be visible before loading more |
| **Detection** | Scroll position at bottom but no new `[data-testid="tweet"]` elements |
| **Fix** | Add `wait_for_selector` between scroll calls with 1s delay |
| **Code** | Test file: `tests/test_x_page_load_repro.py` |
| **Verified** | - |
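
A minimal sketch of this fix, assuming bridge helpers like `bridge.scroll`, `bridge.evaluate`, and `bridge.wait_for_selector` (exact signatures are assumptions based on the accompanying test scripts):

```python
import asyncio


async def scroll_until_new_tweets(bridge, tab_id, rounds=3):
    count_js = "(function() { return document.querySelectorAll('[data-testid=\"tweet\"]').length; })()"
    before = (await bridge.evaluate(tab_id, count_js)).get("result", 0)
    for _ in range(rounds):
        await bridge.scroll(tab_id, "down", 500)
        await asyncio.sleep(1)  # ~1s delay so lazy loading can fire
        await bridge.wait_for_selector(tab_id, '[data-testid="tweet"]', timeout_ms=5000)
    after = (await bridge.evaluate(tab_id, count_js)).get("result", 0)
    return after - before  # > 0 means new content actually loaded
```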
### #3: Modal/Dialog Scroll Container

| Attribute | Value |
|-----------|-------|
| **Site** | Any site with modal dialogs |
| **Symptom** | Scroll scrolls background page, not modal content |
| **Root Cause** | Modal has its own scroll container with `overflow: scroll` |
| **Detection** | Visible element with `position: fixed` and scrollable content |
| **Fix** | Find visible modal container (highest z-index scrollable), scroll that |
| **Code** | - |
| **Verified** | - |

---

## Click Issues

### #4: Element Covered by Overlay

| Attribute | Value |
|-----------|-------|
| **Site** | SPAs, sites with loading overlays |
| **Symptom** | Click succeeds but no action triggered |
| **Root Cause** | Element is covered by transparent overlay, tooltip, or iframe |
| **Detection** | `document.elementFromPoint(x, y) !== target` |
| **Fix** | Wait for overlay to disappear, or use JavaScript `element.click()` |
| **Code** | `bridge.py:394-591` - JavaScript click as primary |
| **Verified** | - |

### #5: React Synthetic Events

| Attribute | Value |
|-----------|-------|
| **Site** | React applications |
| **Symptom** | CDP click doesn't trigger React handler |
| **Root Cause** | React uses synthetic events that don't respond to CDP events |
| **Detection** | Site uses React (check for `__reactFiber$` or `data-reactroot`) |
| **Fix** | Use JavaScript `element.click()` as primary method |
| **Code** | `bridge.py:394-591` - JavaScript-first click |
| **Verified** | - |

### #6: Shadow DOM Elements

| Attribute | Value |
|-----------|-------|
| **Site** | Components using Shadow DOM, Lit elements |
| **Symptom** | `querySelector` can't find element |
| **Root Cause** | Element is inside a shadow root, not main DOM tree |
| **Detection** | `element.shadowRoot !== null` on parent elements |
| **Fix** | Use piercing selector (`host >>> target`) or traverse shadow roots |
| **Code** | See SKILL.md P6 pattern |
| **Verified** | 2026-04-03 ✓ |

---

## Input Issues

### #7: ContentEditable / Rich Text Editors

| Attribute | Value |
|-----------|-------|
| **Site** | Rich text editors (Notion, Slack web, etc.) |
| **Symptom** | `browser_type()` doesn't insert text |
| **Root Cause** | Element is `contenteditable`, not an `<input>` or `<textarea>` |
| **Detection** | `element.contentEditable === 'true'` |
| **Fix** | Focus via JavaScript, use `execCommand('insertText')` or `Input.dispatchKeyEvent` |
| **Code** | `bridge.py:616-694` - contentEditable handling |
| **Verified** | 2026-04-03 ✓ |

### #8: Autocomplete Field Clearing

| Attribute | Value |
|-----------|-------|
| **Site** | Search fields with autocomplete, address forms |
| **Symptom** | Typed text gets cleared immediately |
| **Root Cause** | Field expects realistic keystroke timing for autocomplete |
| **Detection** | Field has autocomplete listeners or dropdown appears |
| **Fix** | Add `delay_ms=50` between keystrokes |
| **Code** | `bridge.py:type()` - delay_ms parameter |
| **Verified** | 2026-04-03 ✓ |

### #9: Custom Date Pickers

| Attribute | Value |
|-----------|-------|
| **Site** | Forms with custom date widgets |
| **Symptom** | Can't type date into date field |
| **Root Cause** | Custom widget intercepts and blocks keyboard input |
| **Detection** | Typing doesn't change field value |
| **Fix** | Click calendar widget icon, select date from dropdown |
| **Code** | - |
| **Verified** | - |

---

## Snapshot Issues

### #10: LinkedIn Huge DOM Tree

| Attribute | Value |
|-----------|-------|
| **Site** | LinkedIn, Facebook, Twitter feeds |
| **Symptom** | `browser_snapshot()` hangs forever |
| **Root Cause** | 10k+ DOM nodes, accessibility tree has 50k+ nodes |
| **Detection** | `document.querySelectorAll('*').length > 5000` |
| **Fix** | Add `timeout_s` param with `asyncio.timeout()`, proper error handling |
| **Code** | `bridge.py:1041-1028` - snapshot with timeout protection |
| **Verified** | 2026-04-03 ✓ (0.08s on LinkedIn) |

### #11: SPA Hydration Delay

| Attribute | Value |
|-----------|-------|
| **Site** | React/Vue/Angular SPAs |
| **Symptom** | Snapshot shows old content after navigation |
| **Root Cause** | Client-side hydration hasn't completed when snapshot runs |
| **Detection** | `document.readyState === 'complete'` but content missing |
| **Fix** | Wait for specific selector after navigation |
| **Code** | Test file: `tests/test_x_page_load_repro.py` |
| **Verified** | - |

### #12: iframe Content Missing

| Attribute | Value |
|-----------|-------|
| **Site** | Sites with embedded content |
| **Symptom** | Snapshot missing iframe content |
| **Root Cause** | Accessibility tree doesn't include iframe content |
| **Detection** | `document.querySelectorAll('iframe')` has results |
| **Fix** | Use `DOM.getFrameOwner` + separate snapshot for each iframe |
| **Code** | - |
| **Verified** | - |

---

## Navigation Issues

### #13: SPA Navigation Events

| Attribute | Value |
|-----------|-------|
| **Site** | React Router, Vue Router SPAs |
| **Symptom** | `wait_until="load"` fires before content ready |
| **Root Cause** | SPA uses client-side routing, no full page load |
| **Detection** | URL changes but `load` event already fired |
| **Fix** | Use `wait_until="networkidle"` or `wait_for_selector` |
| **Code** | `bridge.py:navigate()` - wait_until options |
| **Verified** | - |

### #14: Cross-Origin Redirects

| Attribute | Value |
|-----------|-------|
| **Site** | OAuth flows, SSO logins |
| **Symptom** | Navigation fails during redirect |
| **Root Cause** | Cross-origin security prevents CDP tracking |
| **Detection** | URL changes to different domain |
| **Fix** | Use `wait_for_url` with pattern matching instead of exact URL |
| **Code** | - |
| **Verified** | - |

---

## Screenshot Issues

### #15: Selector Screenshot Not Implemented

| Attribute | Value |
|-----------|-------|
| **Site** | Any site |
| **Symptom** | `browser_screenshot(selector="h1")` takes full viewport instead of element |
| **Root Cause** | `selector` param existed in signature but was silently ignored in both `bridge.py` and `inspection.py` |
| **Detection** | Screenshot with selector same byte size as screenshot without selector |
| **Fix** | Use CDP `Runtime.evaluate` to call `getBoundingClientRect()` on the element, pass result as `clip` to `Page.captureScreenshot` |
| **Code** | `bridge.py:1315-1344` - selector clip logic; `inspection.py:94-96` - pass selector to bridge |
| **Verified** | 2026-04-03 ✓ (JS rect query returns correct viewport coords; requires server restart) |

### #16: Stale Browser Context (Group ID Mismatch)

| Attribute | Value |
|-----------|-------|
| **Site** | Any |
| **Symptom** | `browser_open()` returns `"No group with id: XXXXXXX"` even though `browser_status` shows `running: true` |
| **Root Cause** | In-memory `_contexts` dict has a stale `groupId` from a Chrome tab group that was closed outside the tool (e.g. user closed the tab group) |
| **Detection** | `browser_status` returns `running: true` but `browser_open` fails with "No group with id" |
| **Fix** | Call `browser_stop()` to clear stale context from `_contexts`, then `browser_start()` again |
| **Code** | `tools/lifecycle.py:144-160` - `already_running` check uses cached dict without validating against Chrome |
| **Verified** | 2026-04-03 ✓ |
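
A minimal recovery sketch; `browser_open`, `browser_stop`, and `browser_start` are the tool names referenced above, and the error-string check is an assumption about how the failure surfaces:

```python
async def reopen_with_recovery(url):
    result = await browser_open(url)
    if "No group with id" in str(result):
        # Stale groupId in _contexts: restart the browser context and retry once
        await browser_stop()
        await browser_start()
        result = await browser_open(url)
    return result
```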
---

## How to Add New Edge Cases

1. **Reproduce** the issue with minimal test case
2. **Document** using the template below
3. **Implement** fix with multi-layer fallback
4. **Verify** against both problematic and simple sites
5. **Submit** by appending to this file

### Template

```markdown
### #N: [Short Title]

| Attribute | Value |
|-----------|-------|
| **Site** | [URL or site type] |
| **Symptom** | [What the user observes] |
| **Root Cause** | [Technical explanation] |
| **Detection** | [JavaScript to detect this case] |
| **Fix** | [Solution approach] |
| **Code** | [File:line reference if implemented] |
| **Verified** | [Date or "pending"] |
```

---

## Statistics

| Category | Count |
|----------|-------|
| Scroll Issues | 3 |
| Click Issues | 3 |
| Input Issues | 3 |
| Snapshot Issues | 3 |
| Navigation Issues | 2 |
| Screenshot Issues | 2 |
| **Total** | **16** |

Last updated: 2026-04-03
@@ -0,0 +1,113 @@
#!/usr/bin/env python
"""
Test #2: Twitter/X Lazy Loading Scroll

Symptom: Infinite scroll doesn't load new content
Root Cause: Lazy loading requires content to be visible before loading more
Fix: Add wait_for_selector between scroll calls
"""

import asyncio
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))

from gcu.browser.bridge import BeelineBridge

BRIDGE_PORT = 9229
CONTEXT_NAME = "twitter-scroll-test"


async def test_twitter_lazy_scroll():
    """Test that repeated scrolls with waits load new content."""
    print("=" * 70)
    print("TEST #2: Twitter/X Lazy Loading Scroll")
    print("=" * 70)

    bridge = BeelineBridge()

    try:
        await bridge.start()

        for i in range(10):
            await asyncio.sleep(1)
            if bridge.is_connected:
                print("✓ Extension connected!")
                break
            print(f"Waiting for extension... ({i + 1}/10)")
        else:
            print("✗ Extension not connected")
            return

        context = await bridge.create_context(CONTEXT_NAME)
        tab_id = context.get("tabId")
        group_id = context.get("groupId")
        print(f"✓ Created tab: {tab_id}")

        # Navigate to Twitter/X
        print("\n--- Navigating to X.com ---")
        await bridge.navigate(tab_id, "https://x.com", wait_until="networkidle", timeout_ms=30000)
        print("✓ Page loaded")

        # Wait for tweets to appear
        print("\n--- Waiting for tweets ---")
        await bridge.wait_for_selector(tab_id, '[data-testid="tweet"]', timeout_ms=10000)

        # Count initial tweets
        initial_count = await bridge.evaluate(
            tab_id,
            "(function() { return document.querySelectorAll("
            "'[data-testid=\"tweet\"]').length; })()",
        )
        print(f"Initial tweet count: {initial_count.get('result', 0)}")

        # Take screenshot of initial state
        screenshot = await bridge.screenshot(tab_id)
        print(f"Screenshot: {len(screenshot.get('data', ''))} bytes")

        # Scroll multiple times with waits
        print("\n--- Scrolling with waits ---")
        for i in range(3):
            result = await bridge.scroll(tab_id, "down", 500)
            print(f"  Scroll {i + 1}: {result.get('method', 'unknown')} method")

            # Wait for new content to load
            await asyncio.sleep(2)

            # Count tweets after scroll
            count_result = await bridge.evaluate(
                tab_id,
                "(function() { return document.querySelectorAll("
                "'[data-testid=\"tweet\"]').length; })()",
            )
            count = count_result.get("result", 0)
            print(f"  Tweet count after scroll: {count}")

        # Final count
        final_count = await bridge.evaluate(
            tab_id,
            "(function() { return document.querySelectorAll("
            "'[data-testid=\"tweet\"]').length; })()",
        )
        final = final_count.get("result", 0)
        initial = initial_count.get("result", 0)

        print("\n--- Results ---")
        print(f"Initial tweets: {initial}")
        print(f"Final tweets: {final}")

        if final > initial:
            print(f"✓ PASS: Loaded {final - initial} new tweets")
        else:
            print("✗ FAIL: No new tweets loaded (may need login)")

        await bridge.destroy_context(group_id)
        print("\n✓ Context destroyed")

    finally:
        await bridge.stop()


if __name__ == "__main__":
    asyncio.run(test_twitter_lazy_scroll())
@@ -0,0 +1,96 @@
#!/usr/bin/env python
"""
Test #3: Modal/Dialog Scroll Container

Symptom: Scroll scrolls background page, not modal content
Root Cause: Modal has its own scroll container with overflow: scroll
Fix: Find visible modal container (highest z-index scrollable), scroll that
"""

import asyncio
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))

from gcu.browser.bridge import BeelineBridge

BRIDGE_PORT = 9229
CONTEXT_NAME = "modal-scroll-test"

# Test site with modal - using a demo site
MODAL_DEMO_URL = "https://www.w3schools.com/howto/howto_css_modals.asp"


async def test_modal_scroll():
    """Test that scroll targets modal content, not background."""
    print("=" * 70)
    print("TEST #3: Modal/Dialog Scroll Container")
    print("=" * 70)

    bridge = BeelineBridge()

    try:
        await bridge.start()

        for i in range(10):
            await asyncio.sleep(1)
            if bridge.is_connected:
                print("✓ Extension connected!")
                break
        else:
            print("✗ Extension not connected")
            return

        context = await bridge.create_context(CONTEXT_NAME)
        tab_id = context.get("tabId")
        group_id = context.get("groupId")
        print(f"✓ Created tab: {tab_id}")

        # Navigate to modal demo
        print("\n--- Navigating to modal demo ---")
        await bridge.navigate(tab_id, MODAL_DEMO_URL, wait_until="load")
        print("✓ Page loaded")

        # Take screenshot before
        screenshot_before = await bridge.screenshot(tab_id)
        print(f"Screenshot before: {len(screenshot_before.get('data', ''))} bytes")

        # Click button to open modal
        print("\n--- Opening modal ---")
        # Find and click the "Open Modal" button
        result = await bridge.click(tab_id, ".ws-btn", timeout_ms=5000)
        print(f"Click result: {result}")

        await asyncio.sleep(1)

        # Take screenshot with modal open
        screenshot_modal = await bridge.screenshot(tab_id)
        print(f"Screenshot modal open: {len(screenshot_modal.get('data', ''))} bytes")

        # Try to scroll within modal
        print("\n--- Scrolling modal content ---")
        result = await bridge.scroll(tab_id, "down", 100)
        print(f"Scroll result: {result}")

        await asyncio.sleep(0.5)

        # Take screenshot after scroll
        screenshot_after = await bridge.screenshot(tab_id)
        print(f"Screenshot after scroll: {len(screenshot_after.get('data', ''))} bytes")

        # Check if modal content scrolled (not background)
        # This is a visual check - we can verify by comparing screenshots
        print("\n--- Results ---")
        print(f"Modal scroll test completed. Method used: {result.get('method', 'unknown')}")
        print("Visual verification needed: Check if modal content scrolled vs background")

        await bridge.destroy_context(group_id)
        print("\n✓ Context destroyed")

    finally:
        await bridge.stop()


if __name__ == "__main__":
    asyncio.run(test_modal_scroll())
@@ -0,0 +1,123 @@
#!/usr/bin/env python
"""
Test #4: Element Covered by Overlay

Symptom: Click succeeds but no action triggered
Root Cause: Element is covered by transparent overlay, tooltip, or iframe
Detection: document.elementFromPoint(x, y) !== target
Fix: Wait for overlay to disappear, or use JavaScript element.click()
"""

import asyncio
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))

from gcu.browser.bridge import BeelineBridge

CONTEXT_NAME = "overlay-click-test"


async def test_overlay_click():
    """Test clicking elements that are covered by overlays."""
    print("=" * 70)
    print("TEST #4: Element Covered by Overlay")
    print("=" * 70)

    bridge = BeelineBridge()

    try:
        await bridge.start()

        for i in range(10):
            await asyncio.sleep(1)
            if bridge.is_connected:
                print("✓ Extension connected!")
                break
        else:
            print("✗ Extension not connected")
            return

        context = await bridge.create_context(CONTEXT_NAME)
        tab_id = context.get("tabId")
        group_id = context.get("groupId")
        print(f"✓ Created tab: {tab_id}")

        # Create a test page with overlay
        print("\n--- Creating test page with overlay ---")
        test_html = """
        <!DOCTYPE html>
        <html>
        <head><title>Overlay Test</title></head>
        <body>
            <button id="target-btn" onclick="alert('Clicked!')">Click Me</button>
            <div id="overlay" style="position:fixed;top:0;left:0;
                width:100%;height:100%;
                background:rgba(0,0,0,0.3);z-index:1000;"></div>
            <script>
                window.clickCount = 0;
                document.getElementById('target-btn').addEventListener('click', () => {
                    window.clickCount++;
                });
            </script>
        </body>
        </html>
        """

        # Navigate to data URL
        import base64

        data_url = f"data:text/html;base64,{base64.b64encode(test_html.encode()).decode()}"
        await bridge.navigate(tab_id, data_url, wait_until="load")

        # Screenshot before
        screenshot = await bridge.screenshot(tab_id)
        print(f"Screenshot: {len(screenshot.get('data', ''))} bytes")

        # Try to click the covered button
        print("\n--- Attempting to click covered button ---")

        # First, check if element is covered
        coverage_check = await bridge.evaluate(
            tab_id,
            """
            (function() {
                const btn = document.getElementById('target-btn');
                const rect = btn.getBoundingClientRect();
                const centerX = rect.left + rect.width / 2;
                const centerY = rect.top + rect.height / 2;
                const topElement = document.elementFromPoint(centerX, centerY);
                return {
                    isCovered: topElement !== btn && !btn.contains(topElement),
                    topElement: topElement?.tagName,
                    targetElement: btn.tagName
                };
            })();
            """,
        )
        print(f"Coverage check: {coverage_check.get('result', {})}")

        # Try CDP click (may fail due to overlay)
        click_result = await bridge.click(tab_id, "#target-btn", timeout_ms=5000)
        print(f"Click result: {click_result}")

        # Check if click registered
        count_result = await bridge.evaluate(tab_id, "(function() { return window.clickCount; })()")
        count = count_result.get("result", 0)
        print(f"Click count after CDP click: {count}")

        if count > 0:
            print("✓ PASS: JavaScript click penetrated overlay")
        else:
            print("✗ FAIL: Click did not reach button (overlay blocked it)")

        await bridge.destroy_context(group_id)
        print("\n✓ Context destroyed")

    finally:
        await bridge.stop()


if __name__ == "__main__":
    asyncio.run(test_overlay_click())
@@ -0,0 +1,152 @@
#!/usr/bin/env python
"""
Test #6: Shadow DOM Elements

Symptom: querySelector can't find element
Root Cause: Element is inside a shadow root, not main DOM tree
Detection: element.shadowRoot !== null on parent elements
Fix: Use piercing selector (host >>> target) or traverse shadow roots
"""

import asyncio
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))

from gcu.browser.bridge import BeelineBridge

CONTEXT_NAME = "shadow-dom-test"


async def test_shadow_dom():
    """Test clicking elements inside Shadow DOM."""
    print("=" * 70)
    print("TEST #6: Shadow DOM Elements")
    print("=" * 70)

    bridge = BeelineBridge()

    try:
        await bridge.start()

        for i in range(10):
            await asyncio.sleep(1)
            if bridge.is_connected:
                print("✓ Extension connected!")
                break
        else:
            print("✗ Extension not connected")
            return

        context = await bridge.create_context(CONTEXT_NAME)
        tab_id = context.get("tabId")
        group_id = context.get("groupId")
        print(f"✓ Created tab: {tab_id}")

        # Create test page with Shadow DOM
        print("\n--- Creating test page with Shadow DOM ---")
        test_html = """
        <!DOCTYPE html>
        <html>
        <head><title>Shadow DOM Test</title></head>
        <body>
            <div id="shadow-host"></div>
            <script>
                const host = document.getElementById('shadow-host');
                const shadow = host.attachShadow({ mode: 'open' });
                shadow.innerHTML = `
                    <style>
                        button { padding: 10px 20px; font-size: 16px; }
                    </style>
                    <button id="shadow-btn">Shadow Button</button>
                `;
                shadow.getElementById('shadow-btn').addEventListener('click', () => {
                    window.shadowClickCount = (window.shadowClickCount || 0) + 1;
                    console.log('Shadow button clicked:', window.shadowClickCount);
                });
            </script>
        </body>
        </html>
        """

        # Write to file and use file:// URL (data: URLs don't work well with extension)
        test_file = Path("/tmp/shadow_dom_test.html")
        test_file.write_text(test_html.strip())
        file_url = f"file://{test_file}"
        await bridge.navigate(tab_id, file_url, wait_until="load")
        print("✓ Page loaded")

        # Screenshot
        screenshot = await bridge.screenshot(tab_id)
        print(f"Screenshot: {len(screenshot.get('data', ''))} bytes")

        # Detect Shadow DOM
        print("\n--- Detecting Shadow DOM ---")
        detection = await bridge.evaluate(
            tab_id,
            """
            (function() {
                const hosts = [];
                document.querySelectorAll('*').forEach(el => {
                    if (el.shadowRoot) {
                        hosts.push({
                            tag: el.tagName,
                            id: el.id,
                            hasButton: el.shadowRoot.querySelector('button') !== null
                        });
                    }
                });
                return { count: hosts.length, hosts };
            })();
            """,
        )
        print(f"Shadow DOM detection: {detection.get('result', {})}")

        # Try to click shadow button using regular selector (should fail)
        print("\n--- Attempting click with regular selector ---")
        try:
            result = await bridge.click(tab_id, "#shadow-btn", timeout_ms=3000)
            print(f"Result: {result}")
        except Exception as e:
            print(f"Expected failure: {e}")

        # Try to click using JavaScript that pierces shadow DOM
        print("\n--- Clicking via JavaScript shadow piercing ---")
        click_result = await bridge.evaluate(
            tab_id,
            """
            (function() {
                const host = document.getElementById('shadow-host');
                const btn = host.shadowRoot.getElementById('shadow-btn');
                if (btn) {
                    btn.click();
                    return { success: true, clicked: 'shadow-btn' };
                }
                return { success: false, error: 'Button not found' };
            })();
            """,
        )
        print(f"JS click result: {click_result.get('result', {})}")

        # Verify click was registered
        count_result = await bridge.evaluate(
            tab_id, "(function() { return window.shadowClickCount || 0; })()"
        )
        count = count_result.get("result") or 0
        print(f"Shadow click count: {count}")

        if count and count > 0:
            print("✓ PASS: Shadow DOM element clicked successfully")
        else:
            print("✗ FAIL: Could not click Shadow DOM element")

        await bridge.destroy_context(group_id)
        print("\n✓ Context destroyed")

    finally:
        await bridge.stop()


if __name__ == "__main__":
    asyncio.run(test_shadow_dom())
@@ -0,0 +1,180 @@
#!/usr/bin/env python
"""
Test #7: ContentEditable / Rich Text Editors

Symptom: browser_type() doesn't insert text
Root Cause: Element is contenteditable, not an <input> or <textarea>
Detection: element.contentEditable === 'true'
Fix: Focus via JavaScript, use execCommand('insertText') or Input.dispatchKeyEvent
"""

import asyncio
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))

from gcu.browser.bridge import BeelineBridge

CONTEXT_NAME = "contenteditable-test"


async def test_contenteditable():
    """Test typing into contenteditable elements."""
    print("=" * 70)
    print("TEST #7: ContentEditable / Rich Text Editors")
    print("=" * 70)

    bridge = BeelineBridge()

    try:
        await bridge.start()

        for i in range(10):
            await asyncio.sleep(1)
            if bridge.is_connected:
                print("✓ Extension connected!")
                break
        else:
            print("✗ Extension not connected")
            return

        context = await bridge.create_context(CONTEXT_NAME)
        tab_id = context.get("tabId")
        group_id = context.get("groupId")
        print(f"✓ Created tab: {tab_id}")

        # Create test page with contenteditable
        test_html = """
        <!DOCTYPE html>
        <html>
        <head><title>ContentEditable Test</title></head>
        <body>
            <h2>ContentEditable Test</h2>

            <h3>1. Simple contenteditable div</h3>
            <div id="editor1" contenteditable="true"
                style="border:1px solid #ccc;padding:10px;
                min-height:50px;">Start text</div>

            <h3>2. Rich text editor (like Notion)</h3>
            <div id="editor2" contenteditable="true"
                style="border:1px solid #ccc;padding:10px;
                min-height:50px;">
                <p>Type here...</p>
            </div>

            <h3>3. Regular input (for comparison)</h3>
            <input id="input1" type="text" placeholder="Regular input" />

            <script>
                // Track content changes
                window.editor1Content = '';
                window.editor2Content = '';

                document.getElementById('editor1').addEventListener('input', (e) => {
                    window.editor1Content = e.target.innerText;
                });
                document.getElementById('editor2').addEventListener('input', (e) => {
                    window.editor2Content = e.target.innerText;
                });
            </script>
        </body>
        </html>
        """

        # Write to file and use file:// URL (data: URLs don't work well with extension)
        test_file = Path("/tmp/contenteditable_test.html")
        test_file.write_text(test_html.strip())
        file_url = f"file://{test_file}"
        await bridge.navigate(tab_id, file_url, wait_until="load")
        print("✓ Page loaded")

        # Screenshot with timeout protection
        try:
            screenshot = await asyncio.wait_for(bridge.screenshot(tab_id), timeout=10.0)
            print(f"Screenshot: {len(screenshot.get('data', ''))} bytes")
        except asyncio.TimeoutError:
            print("Screenshot timed out (skipping)")

        # Detect contenteditable
        print("\n--- Detecting contenteditable elements ---")
        detection = await bridge.evaluate(
            tab_id,
            """
            (function() {
                const editables = document.querySelectorAll('[contenteditable="true"]');
                return {
                    count: editables.length,
                    ids: Array.from(editables).map(el => el.id)
                };
            })();
            """,
        )
        print(f"Contenteditable detection: {detection.get('result', {})}")

        # Test 1: Type into regular input (baseline)
        print("\n--- Test 1: Regular input ---")
        await bridge.click(tab_id, "#input1")
        await bridge.type_text(tab_id, "#input1", "Hello input")
        input_result = await bridge.evaluate(
            tab_id, "(function() { return document.getElementById('input1').value; })()"
        )
        print(f"Input value: {input_result.get('result', '')}")

        # Test 2: Type into contenteditable div
        print("\n--- Test 2: Contenteditable div ---")
        await bridge.click(tab_id, "#editor1")
        await bridge.type_text(tab_id, "#editor1", "Hello contenteditable", clear_first=True)
        editor_result = await bridge.evaluate(
            tab_id,
            "(function() { return document.getElementById('editor1').innerText; })()",
        )
        print(f"Editor1 innerText: {editor_result.get('result', '')}")

        # Test 3: Use JavaScript insertText for rich editor
        print("\n--- Test 3: JavaScript insertText for rich editor ---")
        insert_result = await bridge.evaluate(
            tab_id,
            """
            (function() {
                const editor = document.getElementById('editor2');
                editor.focus();
                document.execCommand('selectAll', false, null);
                document.execCommand('insertText', false, 'Hello from execCommand');
                return editor.innerText;
            })();
            """,
        )
        print(f"Editor2 after execCommand: {insert_result.get('result', '')}")

        # Screenshot after with timeout protection
        try:
            screenshot_after = await asyncio.wait_for(bridge.screenshot(tab_id), timeout=10.0)
            print(f"Screenshot after: {len(screenshot_after.get('data', ''))} bytes")
        except asyncio.TimeoutError:
            print("Screenshot after timed out (skipping)")

        # Results
        print("\n--- Results ---")
        input_val = input_result.get("result", "")
        editor1_val = editor_result.get("result", "")
        editor2_val = insert_result.get("result", "")

        input_pass = "Hello input" in input_val
        editor1_pass = "Hello contenteditable" in editor1_val
        editor2_pass = "execCommand" in editor2_val

        print(f"Input: {'✓ PASS' if input_pass else '✗ FAIL'} - {input_val}")
        print(f"Editor1: {'✓ PASS' if editor1_pass else '✗ FAIL'} - {editor1_val}")
        print(f"Editor2: {'✓ PASS' if editor2_pass else '✗ FAIL'} - {editor2_val}")

        await bridge.destroy_context(group_id)
        print("\n✓ Context destroyed")

    finally:
        await bridge.stop()


if __name__ == "__main__":
    asyncio.run(test_contenteditable())
@@ -0,0 +1,253 @@
#!/usr/bin/env python
"""
Test #8: Autocomplete Field Clearing

Symptom: Typed text gets cleared immediately
Root Cause: Field expects realistic keystroke timing for autocomplete
Detection: Field has autocomplete listeners or dropdown appears
Fix: Add delay_ms between keystrokes
"""

import asyncio
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))

from gcu.browser.bridge import BeelineBridge

CONTEXT_NAME = "autocomplete-test"


async def test_autocomplete():
    """Test typing into fields with autocomplete behavior."""
    print("=" * 70)
    print("TEST #8: Autocomplete Field Clearing")
    print("=" * 70)

    bridge = BeelineBridge()

    try:
        await bridge.start()

        for i in range(10):
            await asyncio.sleep(1)
            if bridge.is_connected:
                print("✓ Extension connected!")
                break
        else:
            print("✗ Extension not connected")
            return

        context = await bridge.create_context(CONTEXT_NAME)
        tab_id = context.get("tabId")
        group_id = context.get("groupId")
        print(f"✓ Created tab: {tab_id}")

        # Create test page with autocomplete behavior
        test_html = """
        <!DOCTYPE html>
        <html>
        <head><title>Autocomplete Test</title>
        <style>
            .autocomplete-items {
                position: absolute;
                border: 1px solid #d4d4d4;
                border-top: none;
                z-index: 99;
                top: 100%;
                left: 0;
                right: 0;
                max-height: 200px;
                overflow-y: auto;
                background: white;
            }
            .autocomplete-items div {
                padding: 10px;
                cursor: pointer;
            }
            .autocomplete-items div:hover {
                background-color: #e9e9e9;
            }
            .autocomplete-active {
                background-color: DodgerBlue !important;
                color: white;
            }
            .autocomplete { position: relative; display: inline-block; }
            input { width: 300px; padding: 10px; font-size: 16px; }
        </style></head>
        <body>
            <h2>Autocomplete Test</h2>

            <div class="autocomplete">
                <input id="search" type="text" placeholder="Search countries..." autocomplete="off">
            </div>

            <div id="log" style="margin-top:20px;font-family:monospace;"></div>

            <script>
                const countries = [
                    "Afghanistan","Albania","Algeria",
                    "Andorra","Angola","Argentina",
                    "Armenia","Australia","Austria",
                    "Azerbaijan","Bahamas","Bahrain",
                    "Bangladesh","Belarus","Belgium",
                    "Belize","Benin","Bhutan",
                    "Bolivia","Brazil","Canada",
                    "China","Colombia","Denmark",
                    "Egypt","France","Germany",
                    "India","Indonesia","Italy",
                    "Japan","Mexico","Netherlands",
                    "Nigeria","Norway","Pakistan",
                    "Peru","Philippines","Poland",
                    "Portugal","Russia","Spain",
                    "Sweden","Switzerland","Thailand",
                    "Turkey","Ukraine",
                    "United Kingdom","United States",
                    "Vietnam"
                ];

                const input = document.getElementById('search');
                const log = document.getElementById('log');
                let currentFocus = -1;
                let typingTimeout = null;

                // Track events for testing
                window.inputEvents = [];
                window.inputValue = '';

                function logEvent(type, value) {
                    window.inputEvents.push({ type, value, time: Date.now() });
                    const entry = document.createElement('div');
                    entry.textContent = type + ': ' + value;
                    log.insertBefore(entry, log.firstChild);
                }

                // Simulate autocomplete that clears fast typing
                input.addEventListener('input', function(e) {
                    const val = this.value;

                    // Clear previous dropdown
                    closeAllLists();

                    if (!val) return;

                    // If typing too fast (autocomplete-style), clear and restart
                    clearTimeout(typingTimeout);
                    typingTimeout = setTimeout(() => {
                        logEvent('input', val);
                        window.inputValue = val;

                        // Create dropdown
                        const div = document.createElement('div');
                        div.setAttribute('id', this.id + 'autocomplete-list');
                        div.setAttribute('class', 'autocomplete-items');
                        this.parentNode.appendChild(div);

                        countries.filter(
                            c => c.substr(0, val.length).toUpperCase()
                            === val.toUpperCase()
                        ).slice(0, 5).forEach(country => {
                            const item = document.createElement('div');
                            item.innerHTML = '<strong>'
                                + country.substr(0, val.length)
                                + '</strong>'
                                + country.substr(val.length);
                            item.addEventListener('click', function() {
                                input.value = country;
                                closeAllLists();
                                logEvent('select', country);
                                window.inputValue = country;
                            });
                            div.appendChild(item);
                        });
                    }, 100); // 100ms debounce
                });

                function closeAllLists() {
                    document.querySelectorAll('.autocomplete-items').forEach(el => el.remove());
                }

                document.addEventListener('click', function() {
                    closeAllLists();
                });
            </script>
        </body>
        </html>
        """

        # Write to file and use file:// URL (data: URLs don't work well with extension)
        test_file = Path("/tmp/autocomplete_test.html")
        test_file.write_text(test_html.strip())
        file_url = f"file://{test_file}"
        await bridge.navigate(tab_id, file_url, wait_until="load")
        print("✓ Page loaded")

        # Screenshot
        screenshot = await bridge.screenshot(tab_id)
        print(f"Screenshot: {len(screenshot.get('data', ''))} bytes")

        # Test 1: Fast typing (no delay) - may fail
        print("\n--- Test 1: Fast typing (delay_ms=0) ---")
        await bridge.click(tab_id, "#search")
        await bridge.type_text(tab_id, "#search", "Ger", clear_first=True, delay_ms=0)
        await asyncio.sleep(0.5)

        fast_result = await bridge.evaluate(
            tab_id, "(function() { return document.getElementById('search').value; })()"
        )
        fast_value = fast_result.get("result", "")
        print(f"Value after fast typing: '{fast_value}'")

        # Check events
        events_result = await bridge.evaluate(
            tab_id, "(function() { return window.inputEvents; })()"
        )
        print(f"Events logged: {events_result.get('result', [])}")

        # Test 2: Slow typing (with delay) - should work
        print("\n--- Test 2: Slow typing (delay_ms=100) ---")
        await bridge.click(tab_id, "#search")
        await bridge.type_text(tab_id, "#search", "United", clear_first=True, delay_ms=100)
        await asyncio.sleep(0.5)

        slow_result = await bridge.evaluate(
            tab_id, "(function() { return document.getElementById('search').value; })()"
        )
        slow_value = slow_result.get("result", "")
        print(f"Value after slow typing: '{slow_value}'")

        # Check if dropdown appeared
        dropdown_result = await bridge.evaluate(
            tab_id,
            "(function() { return document.querySelectorAll("
            "'.autocomplete-items div').length; })()",
        )
        dropdown_count = dropdown_result.get("result", 0)
        print(f"Dropdown items: {dropdown_count}")

        # Screenshot with dropdown
        screenshot_dropdown = await bridge.screenshot(tab_id)
        print(f"Screenshot with dropdown: {len(screenshot_dropdown.get('data', ''))} bytes")

        # Results
        print("\n--- Results ---")
        if "United" in slow_value:
            print("✓ PASS: Slow typing with delay_ms worked")
        else:
            print("✗ FAIL: Slow typing still didn't work")

        if dropdown_count > 0:
            print("✓ PASS: Autocomplete dropdown appeared")
        else:
            print("⚠ WARNING: No autocomplete dropdown")

        await bridge.destroy_context(group_id)
        print("\n✓ Context destroyed")

    finally:
        await bridge.stop()


if __name__ == "__main__":
    asyncio.run(test_autocomplete())
@@ -0,0 +1,162 @@
#!/usr/bin/env python
"""
Test #10: LinkedIn Huge DOM Tree

Symptom: browser_snapshot() hangs forever
Root Cause: 10k+ DOM nodes, accessibility tree has 50k+ nodes
Detection: document.querySelectorAll('*').length > 5000
Fix: Add timeout (10s default), truncate tree at 2000 nodes
"""

import asyncio
import sys
import time
import base64
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))

from gcu.browser.bridge import BeelineBridge

CONTEXT_NAME = "huge-dom-test"


async def test_huge_dom():
    """Test snapshot performance on huge DOM trees."""
    print("=" * 70)
    print("TEST #10: Huge DOM Tree (LinkedIn-style)")
    print("=" * 70)

    bridge = BeelineBridge()

    try:
        await bridge.start()

        for i in range(10):
            await asyncio.sleep(1)
            if bridge.is_connected:
                print("✓ Extension connected!")
                break
        else:
            print("✗ Extension not connected")
            return

        context = await bridge.create_context(CONTEXT_NAME)
        tab_id = context.get("tabId")
        group_id = context.get("groupId")
        print(f"✓ Created tab: {tab_id}")

        # Test 1: Small DOM (baseline)
        print("\n--- Test 1: Small DOM (baseline) ---")
        small_html = """
        <!DOCTYPE html>
        <html><body>
            <h1>Small Page</h1>
            <p>A few elements</p>
            <button>Click me</button>
        </body></html>
        """
        data_url = f"data:text/html;base64,{base64.b64encode(small_html.encode()).decode()}"
        await bridge.navigate(tab_id, data_url, wait_until="load")

        start = time.perf_counter()
        snapshot = await bridge.snapshot(tab_id, timeout_s=5.0)
        elapsed = time.perf_counter() - start
        tree_len = len(snapshot.get("tree", ""))
        print(f"Small DOM snapshot: {elapsed:.3f}s, {tree_len} chars")

        # Test 2: Generate huge DOM
        print("\n--- Test 2: Huge DOM (5000+ elements) ---")
        huge_html = """
        <!DOCTYPE html>
        <html><body>
            <h1>Huge DOM Test</h1>
            <div id="container"></div>
            <script>
                const container = document.getElementById('container');
                for (let i = 0; i < 5000; i++) {
                    const div = document.createElement('div');
                    div.className = 'item-' + i;
                    div.innerHTML = '<span>Item ' + i + '</span><button>Action</button>';
                    container.appendChild(div);
                }
            </script>
        </body></html>
        """
        data_url = f"data:text/html;base64,{base64.b64encode(huge_html.encode()).decode()}"
        await bridge.navigate(tab_id, data_url, wait_until="load")

        # Count elements
        count_result = await bridge.evaluate(
            tab_id, "(function() { return document.querySelectorAll('*').length; })()"
        )
        elem_count = count_result.get("result", 0)
        print(f"DOM elements: {elem_count}")

        # Skip screenshot on huge DOM - it can timeout
        # Instead verify page loaded by checking DOM
        print("✓ Page verified (skipping screenshot on huge DOM)")

        # Test snapshot with timeout
        print("\n--- Testing snapshot with 10s timeout ---")
        start = time.perf_counter()
        try:
            snapshot = await bridge.snapshot(tab_id, timeout_s=10.0)
            elapsed = time.perf_counter() - start
            tree_len = len(snapshot.get("tree", ""))
            truncated = "(truncated)" in snapshot.get("tree", "")
            print(f"✓ Huge DOM snapshot: {elapsed:.3f}s, {tree_len} chars, truncated={truncated}")

            if elapsed < 5.0:
                print("✓ PASS: Snapshot completed quickly")
            else:
                print(f"⚠ WARNING: Snapshot took {elapsed:.1f}s")

            if truncated:
                print("✓ PASS: Tree was truncated to prevent hang")
            else:
                print("⚠ WARNING: Tree not truncated (may need adjustment)")

        except asyncio.TimeoutError:
            print("✗ FAIL: Snapshot timed out (this shouldn't happen)")

        # Test 3: Real LinkedIn
        print("\n--- Test 3: Real LinkedIn Feed ---")
        await bridge.navigate(
            tab_id, "https://www.linkedin.com/feed", wait_until="load", timeout_ms=30000
        )
        await asyncio.sleep(2)

        count_result = await bridge.evaluate(
            tab_id, "(function() { return document.querySelectorAll('*').length; })()"
        )
        elem_count = count_result.get("result", 0)
        print(f"LinkedIn DOM elements: {elem_count}")

        start = time.perf_counter()
        try:
            snapshot = await bridge.snapshot(tab_id, timeout_s=15.0)
            elapsed = time.perf_counter() - start
            tree_len = len(snapshot.get("tree", ""))
            truncated = "(truncated)" in snapshot.get("tree", "")
            print(f"LinkedIn snapshot: {elapsed:.3f}s, {tree_len} chars, truncated={truncated}")

            if elapsed < 5.0:
                print("✓ PASS: LinkedIn snapshot fast enough")
            elif elapsed < 15.0:
                print("⚠ WARNING: LinkedIn snapshot slow but within timeout")
            else:
                print("✗ FAIL: LinkedIn snapshot too slow")

        except asyncio.TimeoutError:
            print("✗ FAIL: LinkedIn snapshot timed out")

        await bridge.destroy_context(group_id)
        print("\n✓ Context destroyed")

    finally:
        await bridge.stop()


if __name__ == "__main__":
    asyncio.run(test_huge_dom())
@@ -0,0 +1,190 @@
|
||||
#!/usr/bin/env python
|
||||
"""
|
||||
Test #13: SPA Navigation Events
|
||||
|
||||
Symptom: wait_until="load" fires before content ready
|
||||
Root Cause: SPA uses client-side routing, no full page load
|
||||
Detection: URL changes but load event already fired
|
||||
Fix: Use wait_until="networkidle" or wait_for_selector
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import sys
|
||||
import time
|
||||
from pathlib import Path
|
||||
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))
|
||||
|
||||
from gcu.browser.bridge import BeelineBridge
|
||||
|
||||
CONTEXT_NAME = "spa-nav-test"
|
||||
|
||||
|
||||
async def test_spa_navigation():
|
||||
"""Test navigation timing on SPA pages."""
|
||||
print("=" * 70)
|
||||
print("TEST #13: SPA Navigation Events")
|
||||
print("=" * 70)
|
||||
|
||||
bridge = BeelineBridge()
|
||||
|
||||
try:
|
||||
await bridge.start()
|
||||
|
||||
for i in range(10):
|
||||
await asyncio.sleep(1)
|
||||
if bridge.is_connected:
|
||||
print("✓ Extension connected!")
|
||||
break
|
||||
else:
|
||||
print("✗ Extension not connected")
|
||||
return
|
||||
|
||||
context = await bridge.create_context(CONTEXT_NAME)
|
||||
tab_id = context.get("tabId")
|
||||
group_id = context.get("groupId")
|
||||
print(f"✓ Created tab: {tab_id}")
|
||||
|
||||
# Create a test SPA
|
||||
spa_html = """
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>SPA Test</title>
|
||||
<style>
|
||||
nav a { margin-right: 10px; }
|
||||
.page { padding: 20px; border: 1px solid #ccc; margin-top: 10px; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<nav>
|
||||
<a href="#home" onclick="navigate('home')">Home</a>
|
||||
<a href="#about" onclick="navigate('about')">About</a>
|
||||
<a href="#contact" onclick="navigate('contact')">Contact</a>
|
||||
</nav>
|
||||
<div id="app" class="page">
|
||||
<h1>Loading...</h1>
|
||||
</div>
|
||||
<script>
|
||||
// Simulate SPA routing
|
||||
let currentPage = '';
|
||||
|
||||
async function navigate(page) {
|
||||
if (window.event) window.event.preventDefault();  // guard: navigate() is also called outside an event handler on initial load
|
||||
currentPage = page;
|
||||
|
||||
// Show loading state
|
||||
document.getElementById('app').innerHTML = '<h1>Loading...</h1>';
|
||||
|
||||
// Simulate async content loading (like real SPAs)
|
||||
await new Promise(r => setTimeout(r, 500));
|
||||
|
||||
// Render content
|
||||
const content = {
|
||||
home: '<h1>Home Page</h1><p>Welcome!</p>'
|
||||
+ '<button id="home-btn">Home Action</button>',
|
||||
about: '<h1>About Page</h1><p>Simulated SPA.</p>'
|
||||
+ '<button id="about-btn">About Action</button>',
|
||||
contact: '<h1>Contact Page</h1>'
|
||||
+ '<p>Contact us at test@example.com</p>'
|
||||
+ '<button id="contact-btn">Contact Action</button>'
|
||||
};
|
||||
|
||||
document.getElementById('app').innerHTML = content[page] || '<h1>404</h1>';
|
||||
window.location.hash = page;
|
||||
}
|
||||
|
||||
// Initial load with delay (simulates SPA hydration)
|
||||
setTimeout(() => {
|
||||
navigate('home');
|
||||
}, 1000);
|
||||
|
||||
// Track for testing
|
||||
window.pageLoads = [];
|
||||
window.addEventListener('hashchange', () => {
|
||||
window.pageLoads.push(window.location.hash);
|
||||
});
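// window.pageLoads can later be read via bridge.evaluate(...) if a test
// needs to assert on the hash-navigation history.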
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
"""
|
||||
|
||||
# Write to file and use file:// URL (data: URLs don't work well with extension)
|
||||
test_file = Path("/tmp/spa_test.html")
|
||||
test_file.write_text(spa_html.strip())
|
||||
file_url = f"file://{test_file}"
|
||||
|
||||
# Test 1: wait_until="load" - may fire before content ready
|
||||
print("\n--- Test 1: wait_until='load' ---")
|
||||
start = time.perf_counter()
|
||||
await bridge.navigate(tab_id, file_url, wait_until="load")
|
||||
elapsed = time.perf_counter() - start
|
||||
print(f"Navigation completed in {elapsed:.3f}s")
|
||||
|
||||
# Check content immediately
|
||||
content = await bridge.evaluate(
|
||||
tab_id,
|
||||
"(function() { return document.getElementById('app').innerText; })()",
|
||||
)
|
||||
print(f"Content immediately after load: '{content.get('result', '')}'")
|
||||
|
||||
# Screenshot
|
||||
screenshot = await bridge.screenshot(tab_id)
|
||||
print(f"Screenshot: {len(screenshot.get('data', ''))} bytes")
|
||||
|
||||
# Wait for content
|
||||
print("\n--- Waiting for content to hydrate ---")
|
||||
await bridge.wait_for_selector(tab_id, "#home-btn", timeout_ms=5000)
|
||||
print("✓ Content loaded")
|
||||
|
||||
# Check content after wait
|
||||
content_after = await bridge.evaluate(
|
||||
tab_id,
|
||||
"(function() { return document.getElementById('app').innerText; })()",
|
||||
)
|
||||
print(f"Content after wait: '{content_after.get('result', '')}'")
|
||||
|
||||
# Test 2: SPA navigation (no full page load)
|
||||
print("\n--- Test 2: SPA client-side navigation ---")
|
||||
|
||||
# Click "About" link
|
||||
await bridge.click(tab_id, 'a[href="#about"]')
|
||||
await asyncio.sleep(1)
|
||||
|
||||
# Check if content changed
|
||||
about_content = await bridge.evaluate(
|
||||
tab_id,
|
||||
"(function() { return document.getElementById('app').innerText; })()",
|
||||
)
|
||||
print(f"Content after SPA nav: '{about_content.get('result', '')}'")
|
||||
|
||||
if "About Page" in about_content.get("result", ""):
|
||||
print("✓ PASS: SPA navigation worked")
|
||||
else:
|
||||
print("✗ FAIL: SPA navigation didn't update content")
|
||||
|
||||
# Test 3: wait_until="networkidle"
|
||||
print("\n--- Test 3: wait_until='networkidle' ---")
|
||||
await bridge.navigate(tab_id, file_url, wait_until="networkidle", timeout_ms=10000)
|
||||
|
||||
# Check content immediately
|
||||
content_networkidle = await bridge.evaluate(
|
||||
tab_id,
|
||||
"(function() { return document.getElementById('app').innerText; })()",
|
||||
)
|
||||
print(f"Content after networkidle: '{content_networkidle.get('result', '')}'")
|
||||
|
||||
if "Home Page" in content_networkidle.get("result", ""):
|
||||
print("✓ PASS: networkidle waited for content")
|
||||
else:
|
||||
print("⚠ WARNING: networkidle didn't wait long enough")
|
||||
|
||||
await bridge.destroy_context(group_id)
|
||||
print("\n✓ Context destroyed")
|
||||
|
||||
finally:
|
||||
await bridge.stop()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(test_spa_navigation())
|
||||
@@ -0,0 +1,267 @@
|
||||
#!/usr/bin/env python
|
||||
"""
|
||||
Test #15: Screenshot Functionality
|
||||
|
||||
Tests browser_screenshot across multiple scenarios:
|
||||
- Basic viewport screenshot
|
||||
- Full-page screenshot
|
||||
- Selector-based screenshot
|
||||
- Screenshot on complex DOM
|
||||
- Timeout handling
|
||||
|
||||
Category: screenshot
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import base64
|
||||
import sys
|
||||
import time
|
||||
from pathlib import Path
|
||||
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))
|
||||
|
||||
from gcu.browser.bridge import BeelineBridge
|
||||
|
||||
CONTEXT_NAME = "screenshot-test"
|
||||
|
||||
SIMPLE_HTML = """<!DOCTYPE html>
|
||||
<html>
|
||||
<head><style>
|
||||
body { margin: 0; background: #fff; font-family: sans-serif; }
|
||||
h1 { color: #333; padding: 20px; }
|
||||
.box { width: 200px; height: 100px; background: #4a90e2; margin: 20px; }
|
||||
.long-content { height: 2000px; background: linear-gradient(blue, red); }
|
||||
</style></head>
|
||||
<body>
|
||||
<h1 id="title">Screenshot Test Page</h1>
|
||||
<div class="box" id="target-box">Target Box</div>
|
||||
<div class="long-content"></div>
|
||||
</body>
|
||||
</html>"""
|
||||
|
||||
|
||||
def check_png(data: str) -> bool:
|
||||
"""Verify that base64 data decodes to a valid PNG."""
|
||||
try:
|
||||
raw = base64.b64decode(data)
|
||||
return raw[:8] == b"\x89PNG\r\n\x1a\n"
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
async def test_basic_screenshot(bridge: BeelineBridge, tab_id: int, data_url: str):
|
||||
print("\n--- Test 1: Basic Viewport Screenshot ---")
|
||||
await bridge.navigate(tab_id, data_url, wait_until="load")
|
||||
await asyncio.sleep(0.5)
|
||||
|
||||
start = time.perf_counter()
|
||||
result = await bridge.screenshot(tab_id)
|
||||
elapsed = time.perf_counter() - start
|
||||
|
||||
ok = result.get("ok")
|
||||
data = result.get("data", "")
|
||||
mime = result.get("mimeType", "")
|
||||
|
||||
print(f" ok={ok}, mimeType={mime}, elapsed={elapsed:.3f}s")
|
||||
print(f" data length: {len(data)} chars")
|
||||
|
||||
if ok and data:
|
||||
valid_png = check_png(data)
|
||||
print(f" valid PNG: {valid_png}")
|
||||
if valid_png:
|
||||
raw = base64.b64decode(data)
|
||||
print(f" PNG size: {len(raw)} bytes")
|
||||
print(" ✓ PASS: Basic screenshot works")
|
||||
return True
|
||||
else:
|
||||
print(" ✗ FAIL: Data is not a valid PNG")
|
||||
else:
|
||||
print(f" ✗ FAIL: {result.get('error', 'no data')}")
|
||||
return False
|
||||
|
||||
|
||||
async def test_full_page_screenshot(bridge: BeelineBridge, tab_id: int, data_url: str):
|
||||
print("\n--- Test 2: Full Page Screenshot ---")
|
||||
await bridge.navigate(tab_id, data_url, wait_until="load")
|
||||
await asyncio.sleep(0.5)
|
||||
|
||||
viewport_result = await bridge.screenshot(tab_id, full_page=False)
|
||||
full_result = await bridge.screenshot(tab_id, full_page=True)
|
||||
|
||||
v_data = viewport_result.get("data", "")
|
||||
f_data = full_result.get("data", "")
|
||||
|
||||
if not v_data or not f_data:
|
||||
print(f" ✗ FAIL: viewport ok={viewport_result.get('ok')}, full ok={full_result.get('ok')}")
|
||||
return False
|
||||
|
||||
v_size = len(base64.b64decode(v_data))
|
||||
f_size = len(base64.b64decode(f_data))
|
||||
print(f" Viewport PNG: {v_size} bytes")
|
||||
print(f" Full page PNG: {f_size} bytes")
|
||||
|
||||
if f_size > v_size:
|
||||
print(" ✓ PASS: Full page larger than viewport")
|
||||
return True
|
||||
else:
|
||||
print(" ✗ FAIL: Full page not larger than viewport (may not capture long pages)")
|
||||
return False
|
||||
|
||||
|
||||
async def test_selector_screenshot(bridge: BeelineBridge, tab_id: int, data_url: str):
|
||||
print("\n--- Test 3: Selector Screenshot ---")
|
||||
await bridge.navigate(tab_id, data_url, wait_until="load")
|
||||
await asyncio.sleep(0.5)
|
||||
|
||||
# selector param exists in signature but may not be implemented
|
||||
result = await bridge.screenshot(tab_id, selector="#target-box")
|
||||
|
||||
ok = result.get("ok")
|
||||
data = result.get("data", "")
|
||||
|
||||
if ok and data:
|
||||
# If implemented, the box screenshot should be smaller than a full viewport screenshot
|
||||
full_result = await bridge.screenshot(tab_id)
|
||||
full_data = full_result.get("data", "")
|
||||
|
||||
if full_data:
|
||||
sel_size = len(base64.b64decode(data))
|
||||
full_size = len(base64.b64decode(full_data))
|
||||
print(f" Selector PNG: {sel_size} bytes")
|
||||
print(f" Full page PNG: {full_size} bytes")
|
||||
if sel_size < full_size:
|
||||
print(" ✓ PASS: Selector screenshot smaller than full page")
|
||||
return True
|
||||
else:
|
||||
print(" ⚠ WARNING: Selector screenshot not smaller (may be full page)")
|
||||
return False
|
||||
else:
|
||||
print(
|
||||
" ⚠ NOT IMPLEMENTED: selector param ignored"
|
||||
f" (returns full page) - error={result.get('error')}"
|
||||
)
|
||||
print(" NOTE: selector parameter exists in signature but is not used in implementation")
|
||||
return False
|
||||
|
||||
|
||||
async def test_screenshot_url_metadata(bridge: BeelineBridge, tab_id: int):
|
||||
print("\n--- Test 4: Screenshot URL Metadata ---")
|
||||
await bridge.navigate(tab_id, "https://example.com", wait_until="load")
|
||||
await asyncio.sleep(1)
|
||||
|
||||
result = await bridge.screenshot(tab_id)
|
||||
url = result.get("url", "")
|
||||
tab = result.get("tabId")
|
||||
|
||||
print(f" url={url!r}, tabId={tab}")
|
||||
|
||||
if "example.com" in url:
|
||||
print(" ✓ PASS: URL metadata captured correctly")
|
||||
return True
|
||||
else:
|
||||
print(f" ✗ FAIL: Expected example.com in URL, got {url!r}")
|
||||
return False
|
||||
|
||||
|
||||
async def test_screenshot_timeout(bridge: BeelineBridge, tab_id: int, data_url: str):
|
||||
print("\n--- Test 5: Timeout Handling ---")
|
||||
await bridge.navigate(tab_id, data_url, wait_until="load")
|
||||
|
||||
# Very short timeout; a simple page will often finish before it expires
|
||||
start = time.perf_counter()
|
||||
result = await bridge.screenshot(tab_id, timeout_s=0.001)
|
||||
elapsed = time.perf_counter() - start
|
||||
|
||||
if not result.get("ok"):
|
||||
err = result.get("error", "")
|
||||
if "timed out" in err or "cancelled" in err:
|
||||
print(f" ✓ PASS: Timeout handled gracefully: {err!r}")
|
||||
return True
|
||||
else:
|
||||
print(f" ⚠ Fast enough to beat timeout: {err!r} in {elapsed:.3f}s")
|
||||
return True # Not a failure, just fast
|
||||
else:
|
||||
print(
|
||||
f" ⚠ Screenshot completed before timeout ({elapsed:.3f}s) - too fast to test timeout"
|
||||
)
|
||||
return True # Still ok, just very fast
|
||||
|
||||
|
||||
async def test_screenshot_complex_site(bridge: BeelineBridge, tab_id: int):
|
||||
print("\n--- Test 6: Complex Site (example.com) ---")
|
||||
await bridge.navigate(tab_id, "https://example.com", wait_until="load")
|
||||
await asyncio.sleep(1)
|
||||
|
||||
start = time.perf_counter()
|
||||
result = await bridge.screenshot(tab_id)
|
||||
elapsed = time.perf_counter() - start
|
||||
|
||||
ok = result.get("ok")
|
||||
data = result.get("data", "")
|
||||
|
||||
print(f" ok={ok}, elapsed={elapsed:.3f}s, data_len={len(data)}")
|
||||
if ok and check_png(data):
|
||||
print(" ✓ PASS: Screenshot on real site works")
|
||||
return True
|
||||
else:
|
||||
print(f" ✗ FAIL: {result.get('error', 'bad data')}")
|
||||
return False
|
||||
|
||||
|
||||
async def main():
|
||||
print("=" * 70)
|
||||
print("TEST #15: Screenshot Functionality")
|
||||
print("=" * 70)
|
||||
|
||||
bridge = BeelineBridge()
|
||||
|
||||
try:
|
||||
await bridge.start()
|
||||
|
||||
for i in range(10):
|
||||
await asyncio.sleep(1)
|
||||
if bridge.is_connected:
|
||||
print("✓ Extension connected!")
|
||||
break
|
||||
print(f"Waiting for extension... ({i + 1}/10)")
|
||||
else:
|
||||
print("✗ Extension not connected. Ensure Chrome with Beeline extension is running.")
|
||||
return
|
||||
|
||||
context = await bridge.create_context(CONTEXT_NAME)
|
||||
tab_id = context.get("tabId")
|
||||
group_id = context.get("groupId")
|
||||
print(f"✓ Created tab: {tab_id}")
|
||||
|
||||
data_url = f"data:text/html;base64,{base64.b64encode(SIMPLE_HTML.encode()).decode()}"
|
||||
|
||||
results = {
|
||||
"basic": await test_basic_screenshot(bridge, tab_id, data_url),
|
||||
"full_page": await test_full_page_screenshot(bridge, tab_id, data_url),
|
||||
"selector": await test_selector_screenshot(bridge, tab_id, data_url),
|
||||
"metadata": await test_screenshot_url_metadata(bridge, tab_id),
|
||||
"timeout": await test_screenshot_timeout(bridge, tab_id, data_url),
|
||||
"complex_site": await test_screenshot_complex_site(bridge, tab_id),
|
||||
}
|
||||
|
||||
print("\n" + "=" * 70)
|
||||
print("SUMMARY")
|
||||
print("=" * 70)
|
||||
for name, passed in results.items():
|
||||
status = "✓ PASS" if passed else "✗ FAIL"
|
||||
print(f" {status}: {name}")
|
||||
|
||||
passed_count = sum(1 for v in results.values() if v)
|
||||
total = len(results)
|
||||
print(f"\n {passed_count}/{total} tests passed")
|
||||
|
||||
await bridge.destroy_context(group_id)
|
||||
print("\n✓ Context destroyed")
|
||||
|
||||
finally:
|
||||
await bridge.stop()
|
||||
print("✓ Bridge stopped")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
@@ -0,0 +1,333 @@
|
||||
#!/usr/bin/env python
|
||||
"""
|
||||
Browser Edge Case Test Template
|
||||
|
||||
This script provides a template for testing and debugging browser tool failures
|
||||
on specific websites. Use this to reproduce, isolate, and verify fixes.
|
||||
|
||||
Usage:
|
||||
1. Copy this file: cp test_case.py test_#[number]_[site].py
|
||||
2. Fill in the CONFIG section with your test details
|
||||
3. Run: uv run python test_#[number]_[site].py
|
||||
|
||||
Example:
|
||||
uv run python test_01_linkedin_scroll.py
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import sys
|
||||
import time
|
||||
from pathlib import Path
|
||||
|
||||
# Add tools to path
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent / "tools" / "src"))
|
||||
|
||||
from gcu.browser.bridge import BeelineBridge
|
||||
|
||||
# ═══════════════════════════════════════════════════════════════════════════════
|
||||
# CONFIG: Fill in these values for your test case
|
||||
# ═══════════════════════════════════════════════════════════════════════════════
|
||||
|
||||
TEST_CASE = {
|
||||
"number": 1,
|
||||
"name": "LinkedIn Nested Scroll Container",
|
||||
"site": "https://www.linkedin.com/feed",
|
||||
"simple_site": "https://example.com",
|
||||
"category": "scroll", # scroll, click, input, snapshot, navigation
|
||||
"symptom": "scroll() returns success but page doesn't move",
|
||||
}
|
||||
|
||||
BRIDGE_PORT = 9229
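# NOTE: BRIDGE_PORT is informational in this template; BeelineBridge() below is
# constructed with its defaults.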
|
||||
CONTEXT_NAME = "edge-case-test"
|
||||
|
||||
|
||||
# ═══════════════════════════════════════════════════════════════════════════════
|
||||
# TEST FUNCTIONS
|
||||
# ═══════════════════════════════════════════════════════════════════════════════
|
||||
|
||||
|
||||
async def test_simple_site(bridge: BeelineBridge, tab_id: int) -> dict:
|
||||
"""Test that the tool works on a simple site (baseline)."""
|
||||
print("\n--- Baseline Test (Simple Site) ---")
|
||||
|
||||
await bridge.navigate(tab_id, TEST_CASE["simple_site"], wait_until="load")
|
||||
await asyncio.sleep(1)
|
||||
|
||||
# Adjust this based on category
|
||||
if TEST_CASE["category"] == "scroll":
|
||||
result = await bridge.scroll(tab_id, "down", 100)
|
||||
print(f" Scroll result: {result}")
|
||||
return result
|
||||
elif TEST_CASE["category"] == "click":
|
||||
# Add click test
|
||||
pass
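# Sketch (assumed selector; adapt to the page under test):
#   result = await bridge.click(tab_id, "a")
#   print(f"  Click result: {result}")
#   return result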
|
||||
elif TEST_CASE["category"] == "snapshot":
|
||||
result = await bridge.snapshot(tab_id, timeout_s=5.0)
|
||||
print(f" Snapshot length: {len(result.get('tree', ''))}")
|
||||
return result
|
||||
|
||||
return {"ok": True}
|
||||
|
||||
|
||||
async def test_problematic_site(bridge: BeelineBridge, tab_id: int) -> dict:
|
||||
"""Test the tool on the problematic site."""
|
||||
print("\n--- Problem Site Test ---")
|
||||
|
||||
await bridge.navigate(tab_id, TEST_CASE["site"], wait_until="load", timeout_ms=30000)
|
||||
await asyncio.sleep(2)
|
||||
|
||||
# Adjust this based on category
|
||||
if TEST_CASE["category"] == "scroll":
|
||||
# Get scroll positions before
|
||||
before = await bridge.evaluate(
|
||||
tab_id,
|
||||
"""
|
||||
(function() {
|
||||
const results = { window: { y: window.scrollY } };
|
||||
document.querySelectorAll('*').forEach((el, i) => {
|
||||
const style = getComputedStyle(el);
|
||||
if ((style.overflowY === 'scroll' || style.overflowY === 'auto') &&
|
||||
el.scrollHeight > el.clientHeight) {
|
||||
results['el_' + i] = {
|
||||
tag: el.tagName,
|
||||
scrollTop: el.scrollTop,
|
||||
class: String(el.className).substring(0, 30)  // String() handles SVG elements whose className is not a string
|
||||
};
|
||||
}
|
||||
});
|
||||
return results;
|
||||
})();
|
||||
""",
|
||||
)
|
||||
print(f" Before scroll: {before.get('result', {})}")
|
||||
|
||||
# Try to scroll
|
||||
result = await bridge.scroll(tab_id, "down", 500)
|
||||
print(f" Scroll result: {result}")
|
||||
|
||||
await asyncio.sleep(1)
|
||||
|
||||
# Get scroll positions after
|
||||
after = await bridge.evaluate(
|
||||
tab_id,
|
||||
"""
|
||||
(function() {
|
||||
const results = { window: { y: window.scrollY } };
|
||||
document.querySelectorAll('*').forEach((el, i) => {
|
||||
const style = getComputedStyle(el);
|
||||
if ((style.overflowY === 'scroll' || style.overflowY === 'auto') &&
|
||||
el.scrollHeight > el.clientHeight) {
|
||||
results['el_' + i] = {
|
||||
tag: el.tagName,
|
||||
scrollTop: el.scrollTop,
|
||||
class: String(el.className).substring(0, 30)
|
||||
};
|
||||
}
|
||||
});
|
||||
return results;
|
||||
})();
|
||||
""",
|
||||
)
|
||||
print(f" After scroll: {after.get('result', {})}")
|
||||
|
||||
# Check if anything changed
|
||||
before_data = before.get("result", {}) or {}
|
||||
after_data = after.get("result", {}) or {}
|
||||
|
||||
changed = False
|
||||
for key in after_data:
|
||||
if key in before_data:
|
||||
b_val = (
|
||||
before_data[key].get("scrollTop", 0)
|
||||
if isinstance(before_data[key], dict)
|
||||
else 0
|
||||
)
|
||||
a_val = (
|
||||
after_data[key].get("scrollTop", 0) if isinstance(after_data[key], dict) else 0
|
||||
)
|
||||
if a_val != b_val:
|
||||
print(f" ✓ CHANGE DETECTED: {key} scrolled from {b_val} to {a_val}")
|
||||
changed = True
|
||||
|
||||
if not changed:
|
||||
print(" ✗ NO CHANGE: Scroll did not affect any container")
|
||||
|
||||
return {"ok": changed, "scroll_result": result}
|
||||
|
||||
elif TEST_CASE["category"] == "snapshot":
|
||||
start = time.perf_counter()
|
||||
try:
|
||||
result = await bridge.snapshot(tab_id, timeout_s=15.0)
|
||||
elapsed = time.perf_counter() - start
|
||||
tree_len = len(result.get("tree", ""))
|
||||
print(f" Snapshot completed in {elapsed:.2f}s, {tree_len} chars")
|
||||
return {"ok": True, "elapsed": elapsed, "tree_length": tree_len}
|
||||
except asyncio.TimeoutError:
|
||||
print(" ✗ SNAPSHOT TIMED OUT")
|
||||
return {"ok": False, "error": "timeout"}
|
||||
|
||||
return {"ok": True}
|
||||
|
||||
|
||||
async def detect_root_cause(bridge: BeelineBridge, tab_id: int) -> dict:
|
||||
"""Run detection scripts to identify the root cause."""
|
||||
print("\n--- Root Cause Detection ---")
|
||||
|
||||
detections = {}
|
||||
|
||||
# Detection 1: Nested scrollable containers
|
||||
scroll_check = await bridge.evaluate(
|
||||
tab_id,
|
||||
"""
|
||||
(function() {
|
||||
const candidates = [];
|
||||
document.querySelectorAll('*').forEach(el => {
|
||||
const style = getComputedStyle(el);
|
||||
if (style.overflow.includes('scroll') || style.overflow.includes('auto')) {
|
||||
const rect = el.getBoundingClientRect();
|
||||
if (rect.width > 100 && rect.height > 100) {
|
||||
candidates.push({
|
||||
tag: el.tagName,
|
||||
area: rect.width * rect.height,
|
||||
class: String(el.className).substring(0, 30)
|
||||
});
|
||||
}
|
||||
}
|
||||
});
|
||||
candidates.sort((a, b) => b.area - a.area);
|
||||
return {
|
||||
count: candidates.length,
|
||||
largest: candidates[0]
|
||||
};
|
||||
})();
|
||||
""",
|
||||
)
|
||||
detections["nested_scroll"] = scroll_check.get("result", {})
|
||||
print(f" Nested scroll containers: {detections['nested_scroll']}")
|
||||
|
||||
# Detection 2: Shadow DOM
|
||||
shadow_check = await bridge.evaluate(
|
||||
tab_id,
|
||||
"""
|
||||
(function() {
|
||||
const withShadow = [];
|
||||
document.querySelectorAll('*').forEach(el => {
|
||||
if (el.shadowRoot) {
|
||||
withShadow.push(el.tagName);
|
||||
}
|
||||
});
|
||||
return { count: withShadow.length, elements: withShadow.slice(0, 5) };
|
||||
})();
|
||||
""",
|
||||
)
|
||||
detections["shadow_dom"] = shadow_check.get("result", {})
|
||||
print(f" Shadow DOM: {detections['shadow_dom']}")
|
||||
|
||||
# Detection 3: iframes
|
||||
iframe_check = await bridge.evaluate(
|
||||
tab_id,
|
||||
"""
|
||||
(function() {
|
||||
const iframes = document.querySelectorAll('iframe');
|
||||
return { count: iframes.length };
|
||||
})();
|
||||
""",
|
||||
)
|
||||
detections["iframes"] = iframe_check.get("result", {})
|
||||
print(f" iframes: {detections['iframes']}")
|
||||
|
||||
# Detection 4: DOM size
|
||||
dom_check = await bridge.evaluate(
|
||||
tab_id,
|
||||
"""
|
||||
(function() {
|
||||
return {
|
||||
elements: document.querySelectorAll('*').length,
|
||||
body_children: document.body.children.length
|
||||
};
|
||||
})();
|
||||
""",
|
||||
)
|
||||
detections["dom_size"] = dom_check.get("result", {})
|
||||
print(f" DOM size: {detections['dom_size']}")
|
||||
|
||||
# Detection 5: Framework detection
|
||||
framework_check = await bridge.evaluate(
|
||||
tab_id,
|
||||
"""
|
||||
(function() {
|
||||
return {
|
||||
react: !!document.querySelector('[data-reactroot], [data-reactid]'),
|
||||
vue: !!document.querySelector('[data-v-app], [data-server-rendered]'),
|
||||
angular: !!document.querySelector('[ng-app], [ng-version]')
|
||||
};
|
||||
})();
|
||||
""",
|
||||
)
|
||||
detections["frameworks"] = framework_check.get("result", {})
|
||||
print(f" Frameworks: {detections['frameworks']}")
|
||||
|
||||
return detections
|
||||
|
||||
|
||||
# ═══════════════════════════════════════════════════════════════════════════════
|
||||
# MAIN
|
||||
# ═══════════════════════════════════════════════════════════════════════════════
|
||||
|
||||
|
||||
async def main():
|
||||
print("=" * 70)
|
||||
print(f"EDGE CASE TEST #{TEST_CASE['number']}: {TEST_CASE['name']}")
|
||||
print("=" * 70)
|
||||
print(f"Site: {TEST_CASE['site']}")
|
||||
print(f"Category: {TEST_CASE['category']}")
|
||||
print(f"Symptom: {TEST_CASE['symptom']}")
|
||||
|
||||
bridge = BeelineBridge()
|
||||
|
||||
try:
|
||||
print("\n--- Starting Bridge ---")
|
||||
await bridge.start()
|
||||
|
||||
# Wait for extension connection
|
||||
for i in range(10):
|
||||
await asyncio.sleep(1)
|
||||
if bridge.is_connected:
|
||||
print("✓ Extension connected!")
|
||||
break
|
||||
print(f"Waiting for extension... ({i + 1}/10)")
|
||||
else:
|
||||
print("✗ Extension not connected. Ensure Chrome with Beeline extension is running.")
|
||||
return
|
||||
|
||||
# Create browser context
|
||||
context = await bridge.create_context(CONTEXT_NAME)
|
||||
tab_id = context.get("tabId")
|
||||
group_id = context.get("groupId")
|
||||
print(f"✓ Created tab: {tab_id}")
|
||||
|
||||
# Run tests
|
||||
baseline_result = await test_simple_site(bridge, tab_id)
|
||||
problem_result = await test_problematic_site(bridge, tab_id)
|
||||
detections = await detect_root_cause(bridge, tab_id)
|
||||
|
||||
# Summary
|
||||
print("\n" + "=" * 70)
|
||||
print("SUMMARY")
|
||||
print("=" * 70)
|
||||
print(f"Baseline test: {'✓ PASS' if baseline_result.get('ok') else '✗ FAIL'}")
|
||||
print(f"Problem test: {'✓ PASS' if problem_result.get('ok') else '✗ FAIL'}")
|
||||
print(f"Root cause indicators: {list(k for k, v in detections.items() if v)}")
|
||||
|
||||
# Cleanup
|
||||
print("\n--- Cleanup ---")
|
||||
await bridge.destroy_context(group_id)
|
||||
print("✓ Context destroyed")
|
||||
|
||||
finally:
|
||||
await bridge.stop()
|
||||
print("✓ Bridge stopped")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
@@ -63,7 +63,7 @@ jobs:
|
||||
working-directory: core
|
||||
run: |
|
||||
uv sync
|
||||
uv run pytest tests/ -v
|
||||
uv run pytest tests/ -v --ignore=tests/dummy_agents
|
||||
|
||||
test-tools:
|
||||
name: Test Tools (${{ matrix.os }})
|
||||
|
||||
@@ -70,6 +70,8 @@ tmp/
|
||||
temp/
|
||||
|
||||
exports/*
|
||||
exports.old*
|
||||
artifacts/*
|
||||
|
||||
.claude/settings.local.json
|
||||
|
||||
@@ -79,3 +81,4 @@ core/tests/*dumps/*
|
||||
screenshots/*
|
||||
|
||||
.gemini/*
|
||||
.coverage
|
||||
|
||||
@@ -0,0 +1,9 @@
|
||||
{"type": "connection", "event": "connect", "ts": "2026-04-04T01:10:38.245667+00:00", "profile": "default"}
|
||||
{"type": "connection", "event": "hello", "details": {"version": "1.0"}, "ts": "2026-04-04T01:10:38.247207+00:00", "profile": "default"}
|
||||
{"type": "connection", "event": "disconnect", "ts": "2026-04-04T01:11:57.148273+00:00", "profile": "default"}
|
||||
{"type": "connection", "event": "connect", "ts": "2026-04-04T01:12:09.162378+00:00", "profile": "default"}
|
||||
{"type": "connection", "event": "hello", "details": {"version": "1.0"}, "ts": "2026-04-04T01:12:09.163899+00:00", "profile": "default"}
|
||||
{"type": "connection", "event": "disconnect", "ts": "2026-04-04T01:15:12.826042+00:00", "profile": "default"}
|
||||
{"type": "connection", "event": "connect", "ts": "2026-04-04T01:15:30.842533+00:00", "profile": "default"}
|
||||
{"type": "connection", "event": "hello", "details": {"version": "1.0"}, "ts": "2026-04-04T01:15:30.845025+00:00", "profile": "default"}
|
||||
{"type": "tool_call", "tool": "browser_stop", "params": {"profile": "gcu-browser-worker:3"}, "result": {"ok": true, "status": "not_running", "profile": "gcu-browser-worker:3"}, "ok": true, "duration_ms": 0.01, "ts": "2026-04-04T01:29:04.294954+00:00", "profile": "default"}
|
||||
@@ -333,6 +333,22 @@ make test-live # Run live API integration tests (requires credentials)
|
||||
- **WebSocket** for real-time updates
|
||||
- **Tailwind CSS** for styling
|
||||
|
||||
### Frontend Dev Workflow
|
||||
|
||||
> **Note:** `./quickstart.sh` handles the full setup including the web UI.
|
||||
> The commands below are for contributors iterating on the frontend code after
|
||||
> initial setup is complete.
|
||||
|
||||
```bash
|
||||
# Start the backend server
|
||||
hive serve
|
||||
|
||||
# In a separate terminal, run the frontend dev server with hot-reload
|
||||
cd core/frontend
|
||||
npm install # only needed after dependency changes
|
||||
npm run dev
|
||||
```
|
||||
|
||||
### Useful Development Commands
|
||||
|
||||
```bash
|
||||
|
||||
@@ -28,7 +28,7 @@ check: ## Run all checks without modifying files (CI-safe)
|
||||
cd tools && uv run ruff format --check .
|
||||
|
||||
test: ## Run all tests (core + tools, excludes live)
|
||||
cd core && uv run python -m pytest tests/ -v
|
||||
cd core && uv run python -m pytest tests/ -v --ignore=tests/dummy_agents
|
||||
cd tools && uv run python -m pytest -v
|
||||
|
||||
test-tools: ## Run tool tests only (mocked, no credentials needed)
|
||||
@@ -38,7 +38,7 @@ test-live: ## Run live integration tests (requires real API credentials)
|
||||
cd tools && uv run python -m pytest -m live -s -o "addopts=" --log-cli-level=INFO
|
||||
|
||||
test-all: ## Run everything including live tests
|
||||
cd core && uv run python -m pytest tests/ -v
|
||||
cd core && uv run python -m pytest tests/ -v --ignore=tests/dummy_agents
|
||||
cd tools && uv run python -m pytest -v
|
||||
cd tools && uv run python -m pytest -m live -s -o "addopts=" --log-cli-level=INFO
|
||||
|
||||
|
||||
@@ -51,7 +51,7 @@ https://github.com/user-attachments/assets/bf10edc3-06ba-48b6-98ba-d069b15fb69d
|
||||
|
||||
## Who Is Hive For?
|
||||
|
||||
Hive is the harness layer for teams moving AI agents from prototype to production. Models are getting better on their own — the bottleneck is the infrastructure around them: state management, failure recovery, cost control, and observability.
|
||||
Hive is the multi-agent harness layer for teams moving AI agents from prototype to production. Single agents like Openclaw and Cowork handle personal tasks well, but they lack the rigor needed to run business processes.
|
||||
|
||||
Hive is a good fit if you:
|
||||
|
||||
@@ -194,18 +194,6 @@ flowchart LR
|
||||
style V6 fill:#fff,stroke:#ed8c00,stroke-width:1px,color:#cc5d00
|
||||
```
|
||||
|
||||
### The Hive Advantage
|
||||
|
||||
| Typical Agent Frameworks | Hive |
|
||||
| -------------------------- | -------------------------------------- |
|
||||
| Focus on model orchestration | **Production harness**: state, recovery, observability |
|
||||
| Hardcode agent workflows | Describe goals in natural language |
|
||||
| Manual graph definition | Auto-generated agent graphs |
|
||||
| Reactive error handling | Outcome-evaluation and adaptiveness |
|
||||
| Static tool configurations | Dynamic SDK-wrapped nodes |
|
||||
| Separate monitoring setup | Built-in real-time observability |
|
||||
| DIY budget management | Integrated cost controls & degradation |
|
||||
|
||||
### How It Works
|
||||
|
||||
1. **[Define Your Goal](docs/key_concepts/goals_outcome.md)** → Describe what you want to achieve in plain English
|
||||
|
||||
+17
-60
@@ -1,66 +1,23 @@
|
||||
"""
|
||||
Aden Hive Framework: A goal-driven agent runtime optimized for Builder observability.
|
||||
"""Hive Agent Framework.
|
||||
|
||||
The runtime is designed around DECISIONS, not just actions. Every significant
|
||||
choice the agent makes is captured with:
|
||||
- What it was trying to do (intent)
|
||||
- What options it considered
|
||||
- What it chose and why
|
||||
- What happened as a result
|
||||
- Whether that was good or bad (evaluated post-hoc)
|
||||
|
||||
This gives the Builder LLM the information it needs to improve agent behavior.
|
||||
|
||||
## Testing Framework
|
||||
|
||||
The framework includes a Goal-Based Testing system (Goal → Agent → Eval):
|
||||
- Generate tests from Goal success_criteria and constraints
|
||||
- Mandatory user approval before tests are stored
|
||||
- Parallel test execution with error categorization
|
||||
- Debug tools with fix suggestions
|
||||
|
||||
See `framework.testing` for details.
|
||||
Core classes:
|
||||
AgentHost -- hosts agents, manages entry points and pipeline
|
||||
Orchestrator -- routes between nodes in a graph
|
||||
AgentLoop -- the LLM + tool execution loop (one per node)
|
||||
AgentLoader -- loads agent.json from disk, builds pipeline
|
||||
DecisionTracker -- records decisions for post-hoc analysis
|
||||
"""
|
||||
|
||||
from framework.llm import AnthropicProvider, LLMProvider
|
||||
from framework.runner import AgentRunner
|
||||
from framework.runtime.core import Runtime
|
||||
from framework.schemas.decision import Decision, DecisionEvaluation, Option, Outcome
|
||||
from framework.schemas.run import Problem, Run, RunSummary
|
||||
|
||||
# Testing framework
|
||||
from framework.testing import (
|
||||
ApprovalStatus,
|
||||
DebugTool,
|
||||
ErrorCategory,
|
||||
Test,
|
||||
TestResult,
|
||||
TestStorage,
|
||||
TestSuiteResult,
|
||||
)
|
||||
from framework.agent_loop import AgentLoop
|
||||
from framework.host import AgentHost
|
||||
from framework.loader import AgentLoader
|
||||
from framework.orchestrator import Orchestrator
|
||||
from framework.tracker import DecisionTracker
|
||||
|
||||
__all__ = [
|
||||
# Schemas
|
||||
"Decision",
|
||||
"Option",
|
||||
"Outcome",
|
||||
"DecisionEvaluation",
|
||||
"Run",
|
||||
"RunSummary",
|
||||
"Problem",
|
||||
# Runtime
|
||||
"Runtime",
|
||||
# LLM
|
||||
"LLMProvider",
|
||||
"AnthropicProvider",
|
||||
# Runner
|
||||
"AgentRunner",
|
||||
# Testing
|
||||
"Test",
|
||||
"TestResult",
|
||||
"TestSuiteResult",
|
||||
"TestStorage",
|
||||
"ApprovalStatus",
|
||||
"ErrorCategory",
|
||||
"DebugTool",
|
||||
"AgentHost",
|
||||
"AgentLoader",
|
||||
"AgentLoop",
|
||||
"DecisionTracker",
|
||||
"Orchestrator",
|
||||
]
|
||||
|
||||
@@ -0,0 +1,32 @@
|
||||
"""Agent loop -- the core agent execution primitive."""
|
||||
|
||||
from framework.agent_loop.conversation import ( # noqa: F401
|
||||
ConversationStore,
|
||||
Message,
|
||||
NodeConversation,
|
||||
)
|
||||
|
||||
# Lazy import to avoid circular dependency with graph/event_loop/
|
||||
# (graph/event_loop/* imports framework.graph.conversation which is a shim
|
||||
# pointing here, which would trigger agent_loop.py loading, which imports
|
||||
# graph/event_loop/* again)
|
||||
|
||||
|
||||
def __getattr__(name: str):
|
||||
if name in ("AgentLoop", "JudgeProtocol", "JudgeVerdict", "LoopConfig", "OutputAccumulator"):
|
||||
from framework.agent_loop.agent_loop import (
|
||||
AgentLoop,
|
||||
JudgeProtocol,
|
||||
JudgeVerdict,
|
||||
LoopConfig,
|
||||
OutputAccumulator,
|
||||
)
|
||||
_exports = {
|
||||
"AgentLoop": AgentLoop,
|
||||
"JudgeProtocol": JudgeProtocol,
|
||||
"JudgeVerdict": JudgeVerdict,
|
||||
"LoopConfig": LoopConfig,
|
||||
"OutputAccumulator": OutputAccumulator,
|
||||
}
|
||||
return _exports[name]
|
||||
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
|
||||
+293
-360
File diff suppressed because it is too large
@@ -324,7 +324,7 @@ def _try_extract_key(content: str, key: str) -> str | None:
|
||||
3. Colon format: ``key: value``.
|
||||
4. Equals format: ``key = value``.
|
||||
"""
|
||||
from framework.graph.node import find_json_object
|
||||
from framework.orchestrator.node import find_json_object
|
||||
|
||||
# 1. Whole message is JSON
|
||||
try:
|
||||
@@ -453,6 +453,9 @@ class NodeConversation:
|
||||
)
|
||||
self._messages.append(msg)
|
||||
self._next_seq += 1
|
||||
# Invalidate stale API token count so estimate_tokens() uses
|
||||
# the char-based heuristic which reflects the new message.
|
||||
self._last_api_input_tokens = None
|
||||
await self._persist(msg)
|
||||
return msg
|
||||
|
||||
@@ -471,6 +474,7 @@ class NodeConversation:
|
||||
)
|
||||
self._messages.append(msg)
|
||||
self._next_seq += 1
|
||||
self._last_api_input_tokens = None
|
||||
await self._persist(msg)
|
||||
return msg
|
||||
|
||||
@@ -495,6 +499,7 @@ class NodeConversation:
|
||||
)
|
||||
self._messages.append(msg)
|
||||
self._next_seq += 1
|
||||
self._last_api_input_tokens = None
|
||||
await self._persist(msg)
|
||||
return msg
|
||||
|
||||
@@ -575,12 +580,15 @@ class NodeConversation:
|
||||
|
||||
Uses actual API input token count when available (set via
|
||||
:meth:`update_token_count`), otherwise falls back to a
|
||||
``total_chars / 4`` heuristic that includes both message content
|
||||
AND tool_call argument sizes.
|
||||
character-based heuristic that includes message content, tool_call
|
||||
arguments, and image blocks. The heuristic applies a 4/3 safety
|
||||
margin to avoid under-counting (inspired by Claude Code's compact
|
||||
service).
|
||||
"""
|
||||
if self._last_api_input_tokens is not None:
|
||||
return self._last_api_input_tokens
|
||||
total_chars = 0
|
||||
image_tokens = 0
|
||||
for m in self._messages:
|
||||
total_chars += len(m.content)
|
||||
if m.tool_calls:
|
||||
@@ -588,7 +596,11 @@ class NodeConversation:
|
||||
func = tc.get("function", {})
|
||||
total_chars += len(func.get("arguments", ""))
|
||||
total_chars += len(func.get("name", ""))
|
||||
return total_chars // 4
|
||||
if m.image_content:
|
||||
# Images/documents have a fixed token cost per block
|
||||
image_tokens += len(m.image_content) * 2000
|
||||
# Apply 4/3 safety margin to character-based estimate
|
||||
return (total_chars * 4) // (3 * 4) + image_tokens
|
||||
|
||||
def update_token_count(self, actual_input_tokens: int) -> None:
|
||||
"""Store actual API input token count for more accurate compaction.
|
||||
@@ -877,6 +889,15 @@ class NodeConversation:
|
||||
freeform_lines: list[str] = []
|
||||
collapsed_msgs: list[Message] = []
|
||||
|
||||
# Collect all tool_use IDs present in old messages so we can detect
|
||||
# orphaned tool results whose parent assistant message was already
|
||||
# compacted away (API invariant protection).
|
||||
old_tc_ids: set[str] = set()
|
||||
for msg in old_messages:
|
||||
if msg.tool_calls:
|
||||
for tc in msg.tool_calls:
|
||||
old_tc_ids.add(tc.get("id", ""))
|
||||
|
||||
if aggressive:
|
||||
# Aggressive: only keep set_output tool pairs and error results.
|
||||
# Everything else is collapsed into a tool-call history summary.
|
||||
@@ -898,9 +919,17 @@ class NodeConversation:
|
||||
else:
|
||||
collapsible_tc_ids |= tc_ids
|
||||
|
||||
# Skill content and transition markers are always protected
|
||||
for msg in old_messages:
|
||||
if msg.role == "tool" and msg.is_skill_content and msg.tool_use_id:
|
||||
protected_tc_ids.add(msg.tool_use_id)
|
||||
|
||||
# Second pass: classify all messages
|
||||
for msg in old_messages:
|
||||
if msg.role == "tool":
|
||||
if msg.is_transition_marker:
|
||||
# Transition markers are always kept (phase boundaries)
|
||||
kept_structural.append(msg)
|
||||
elif msg.role == "tool":
|
||||
tc_id = msg.tool_use_id or ""
|
||||
if tc_id in protected_tc_ids:
|
||||
kept_structural.append(msg)
|
||||
@@ -909,6 +938,12 @@ class NodeConversation:
|
||||
kept_structural.append(msg)
|
||||
# Protect the parent assistant message too
|
||||
protected_tc_ids.add(tc_id)
|
||||
elif msg.is_skill_content:
|
||||
kept_structural.append(msg)
|
||||
elif tc_id and tc_id not in old_tc_ids:
|
||||
# Orphaned tool result — parent tool_use not in old msgs.
|
||||
# Keep it to maintain API invariants.
|
||||
kept_structural.append(msg)
|
||||
else:
|
||||
collapsed_msgs.append(msg)
|
||||
elif msg.role == "assistant" and msg.tool_calls:
|
||||
@@ -940,7 +975,10 @@ class NodeConversation:
|
||||
else:
|
||||
# Standard mode: keep all tool call pairs as structural
|
||||
for msg in old_messages:
|
||||
if msg.role == "tool":
|
||||
if msg.is_transition_marker:
|
||||
# Transition markers are always kept (phase boundaries)
|
||||
kept_structural.append(msg)
|
||||
elif msg.role == "tool":
|
||||
kept_structural.append(msg)
|
||||
elif msg.role == "assistant" and msg.tool_calls:
|
||||
compact_tcs = _compact_tool_calls(msg.tool_calls)
|
||||
@@ -0,0 +1,7 @@
|
||||
"""Agent loop internals -- compaction, judge, tools, subagent execution.
|
||||
|
||||
Re-exports from legacy locations for the new import path.
|
||||
"""
|
||||
|
||||
from framework.agent_loop.internals.compaction import * # noqa: F401, F403
|
||||
from framework.agent_loop.internals.synthetic_tools import * # noqa: F401, F403
|
||||
+244
-19
@@ -1,7 +1,8 @@
|
||||
"""Conversation compaction pipeline.
|
||||
|
||||
Implements the multi-level compaction strategy:
|
||||
1. Prune old tool results
|
||||
0. Microcompaction (count-based tool result clearing — cheapest)
|
||||
1. Prune old tool results (token-budget based)
|
||||
2. Structure-preserving compaction (spillover)
|
||||
3. LLM summary compaction (with recursive splitting)
|
||||
4. Emergency deterministic summary (no LLM)
|
||||
@@ -13,15 +14,16 @@ import json
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
import time
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
from framework.graph.conversation import NodeConversation
|
||||
from framework.graph.event_loop.event_publishing import publish_context_usage
|
||||
from framework.graph.event_loop.types import LoopConfig, OutputAccumulator
|
||||
from framework.graph.node import NodeContext
|
||||
from framework.runtime.event_bus import EventBus
|
||||
from framework.agent_loop.conversation import Message, NodeConversation
|
||||
from framework.agent_loop.internals.event_publishing import publish_context_usage
|
||||
from framework.agent_loop.internals.types import LoopConfig, OutputAccumulator
|
||||
from framework.orchestrator.node import NodeContext
|
||||
from framework.host.event_bus import EventBus
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -29,6 +31,121 @@ logger = logging.getLogger(__name__)
|
||||
LLM_COMPACT_CHAR_LIMIT: int = 240_000
|
||||
LLM_COMPACT_MAX_DEPTH: int = 10
|
||||
|
||||
# Microcompaction: tools whose results can be safely cleared
|
||||
COMPACTABLE_TOOLS: frozenset[str] = frozenset(
|
||||
{
|
||||
"read_file",
|
||||
"run_command",
|
||||
"web_search",
|
||||
"web_fetch",
|
||||
"grep_search",
|
||||
"glob_search",
|
||||
"write_file",
|
||||
"edit_file",
|
||||
"browser_screenshot",
|
||||
"list_directory",
|
||||
}
|
||||
)
|
||||
|
||||
# Keep at most this many compactable tool results; clear older ones
|
||||
MICROCOMPACT_KEEP_RECENT: int = 8
|
||||
|
||||
# Circuit-breaker: stop auto-compacting after this many consecutive failures
|
||||
MAX_CONSECUTIVE_FAILURES: int = 3
|
||||
|
||||
# Track consecutive compaction failures per conversation (module-level)
|
||||
_failure_counts: dict[int, int] = {}
|
||||
|
||||
# Track last compaction time per conversation for recompaction detection
|
||||
_last_compact_times: dict[int, float] = {}
|
||||
|
||||
|
||||
def microcompact(
|
||||
conversation: NodeConversation,
|
||||
*,
|
||||
keep_recent: int = MICROCOMPACT_KEEP_RECENT,
|
||||
) -> int:
|
||||
"""Clear old compactable tool results by count, keeping only the most recent.
|
||||
|
||||
This is the cheapest possible compaction — no LLM call, no structural
|
||||
changes, just replaces old tool result content with a short placeholder.
|
||||
Inspired by Claude Code's cached-microcompact strategy.
|
||||
|
||||
Returns the number of tool results cleared.
|
||||
"""
|
||||
# Collect indices of compactable tool results (newest first)
|
||||
compactable_indices: list[int] = []
|
||||
messages = conversation.messages
|
||||
for i in range(len(messages) - 1, -1, -1):
|
||||
msg = messages[i]
|
||||
if msg.role != "tool" or msg.is_error or msg.is_skill_content:
|
||||
continue
|
||||
if msg.content.startswith(("[Pruned tool result", "[Old tool result")):
|
||||
continue
|
||||
if len(msg.content) < 100:
|
||||
continue
|
||||
|
||||
# Check if the tool that produced this result is compactable
|
||||
tool_name = _find_tool_name_for_result(messages, msg)
|
||||
if tool_name and tool_name in COMPACTABLE_TOOLS:
|
||||
compactable_indices.append(i)
|
||||
|
||||
# Keep the most recent N, clear the rest
|
||||
to_clear = compactable_indices[keep_recent:]
|
||||
if not to_clear:
|
||||
return 0
|
||||
|
||||
cleared = 0
|
||||
for i in to_clear:
|
||||
msg = messages[i]
|
||||
spillover = _extract_spillover_filename_inline(msg.content)
|
||||
orig_len = len(msg.content)
|
||||
if spillover:
|
||||
placeholder = (
|
||||
f"[Old tool result cleared: {orig_len} chars. "
|
||||
f"Full data in '{spillover}'. "
|
||||
f"Use load_data('{spillover}') to retrieve.]"
|
||||
)
|
||||
else:
|
||||
placeholder = f"[Old tool result cleared: {orig_len} chars.]"
|
||||
|
||||
# Mutate in-place (microcompact is synchronous, no store writes)
|
||||
conversation._messages[i] = Message(
|
||||
seq=msg.seq,
|
||||
role=msg.role,
|
||||
content=placeholder,
|
||||
tool_use_id=msg.tool_use_id,
|
||||
tool_calls=msg.tool_calls,
|
||||
is_error=msg.is_error,
|
||||
phase_id=msg.phase_id,
|
||||
is_transition_marker=msg.is_transition_marker,
|
||||
)
|
||||
cleared += 1
|
||||
|
||||
if cleared > 0:
|
||||
# Invalidate cached token count
|
||||
conversation._last_api_input_tokens = None
|
||||
|
||||
return cleared
|
||||
|
||||
|
||||
def _find_tool_name_for_result(messages: list[Message], tool_msg: Message) -> str | None:
|
||||
"""Find the tool name from the assistant message that triggered this tool result."""
|
||||
if not tool_msg.tool_use_id:
|
||||
return None
|
||||
for msg in messages:
|
||||
if msg.tool_calls:
|
||||
for tc in msg.tool_calls:
|
||||
if tc.get("id") == tool_msg.tool_use_id:
|
||||
return tc.get("function", {}).get("name")
|
||||
return None
|
||||
|
||||
|
||||
def _extract_spillover_filename_inline(content: str) -> str | None:
|
||||
"""Quick inline check for spillover filename in tool result content."""
|
||||
match = re.search(r"saved to '([^']+)'", content, re.IGNORECASE)
|
||||
return match.group(1) if match else None
|
||||
|
||||
|
||||
async def compact(
|
||||
ctx: NodeContext,
|
||||
@@ -43,11 +160,31 @@ async def compact(
|
||||
"""Run the full compaction pipeline if conversation needs compaction.
|
||||
|
||||
Pipeline stages (in order, short-circuits when budget is restored):
|
||||
1. Prune old tool results
|
||||
0. Microcompaction (count-based tool result clearing — cheapest)
|
||||
1. Prune old tool results (token-budget based)
|
||||
2. Structure-preserving compaction (free, no LLM)
|
||||
3. LLM summary compaction (recursive split if too large)
|
||||
4. Emergency deterministic summary (fallback)
|
||||
"""
|
||||
conv_id = id(conversation)
|
||||
|
||||
# Circuit breaker: stop auto-compacting after repeated failures
|
||||
if _failure_counts.get(conv_id, 0) >= MAX_CONSECUTIVE_FAILURES:
|
||||
logger.warning(
|
||||
"Circuit breaker: skipping compaction after %d consecutive failures",
|
||||
_failure_counts[conv_id],
|
||||
)
|
||||
return
|
||||
|
||||
# Recompaction detection
|
||||
now = time.monotonic()
|
||||
last_time = _last_compact_times.get(conv_id)
|
||||
if last_time is not None and (now - last_time) < 30:
|
||||
logger.warning(
|
||||
"Recompaction chain detected: only %.1fs since last compaction",
|
||||
now - last_time,
|
||||
)
|
||||
|
||||
ratio_before = conversation.usage_ratio()
|
||||
phase_grad = getattr(ctx, "continuous_mode", False)
|
||||
pre_inventory: list[dict[str, Any]] | None = None
|
||||
@@ -55,6 +192,26 @@ async def compact(
|
||||
if ratio_before >= 1.0:
|
||||
pre_inventory = build_message_inventory(conversation)
|
||||
|
||||
# --- Step 0: Microcompaction (count-based, cheapest) ---
|
||||
mc_cleared = microcompact(conversation)
|
||||
if mc_cleared > 0:
|
||||
logger.info(
|
||||
"Microcompact cleared %d old tool results: %.0f%% -> %.0f%%",
|
||||
mc_cleared,
|
||||
ratio_before * 100,
|
||||
conversation.usage_ratio() * 100,
|
||||
)
|
||||
if not conversation.needs_compaction():
|
||||
_record_success(conv_id, now)
|
||||
await log_compaction(
|
||||
ctx,
|
||||
conversation,
|
||||
ratio_before,
|
||||
event_bus,
|
||||
pre_inventory=pre_inventory,
|
||||
)
|
||||
return
|
||||
|
||||
# --- Step 1: Prune old tool results (free, fast) ---
|
||||
protect = max(2000, config.max_context_tokens // 12)
|
||||
pruned = await conversation.prune_old_tool_results(
|
||||
@@ -69,6 +226,7 @@ async def compact(
|
||||
conversation.usage_ratio() * 100,
|
||||
)
|
||||
if not conversation.needs_compaction():
|
||||
_record_success(conv_id, now)
|
||||
await log_compaction(
|
||||
ctx,
|
||||
conversation,
|
||||
@@ -87,6 +245,7 @@ async def compact(
|
||||
phase_graduated=phase_grad,
|
||||
)
|
||||
if not conversation.needs_compaction():
|
||||
_record_success(conv_id, now)
|
||||
await log_compaction(
|
||||
ctx,
|
||||
conversation,
|
||||
@@ -118,8 +277,10 @@ async def compact(
|
||||
)
|
||||
except Exception as e:
|
||||
logger.warning("LLM compaction failed: %s", e)
|
||||
_failure_counts[conv_id] = _failure_counts.get(conv_id, 0) + 1
|
||||
|
||||
if not conversation.needs_compaction():
|
||||
_record_success(conv_id, now)
|
||||
await log_compaction(
|
||||
ctx,
|
||||
conversation,
|
||||
@@ -140,6 +301,7 @@ async def compact(
|
||||
keep_recent=1,
|
||||
phase_graduated=phase_grad,
|
||||
)
|
||||
_record_success(conv_id, now)
|
||||
await log_compaction(
|
||||
ctx,
|
||||
conversation,
|
||||
@@ -149,9 +311,46 @@ async def compact(
|
||||
)
|
||||
|
||||
|
||||
def _record_success(conv_id: int, timestamp: float) -> None:
|
||||
"""Reset failure counter and record compaction time on success."""
|
||||
_failure_counts.pop(conv_id, None)
|
||||
_last_compact_times[conv_id] = timestamp
|
||||
|
||||
|
||||
# --- LLM compaction with binary-search splitting ----------------------
|
||||
|
||||
|
||||
def strip_images_from_messages(messages: list[Message]) -> list[Message]:
|
||||
"""Strip image_content from messages before LLM summarisation.
|
||||
|
||||
Images/documents are replaced with ``[image]`` markers so the summary
|
||||
notes they existed without wasting tokens sending binary data to the
|
||||
compaction LLM. Returns a new list (original messages are not mutated).
|
||||
"""
|
||||
stripped: list[Message] = []
|
||||
for msg in messages:
|
||||
if msg.image_content:
|
||||
n_images = len(msg.image_content)
|
||||
marker = " ".join("[image]" for _ in range(n_images))
|
||||
content = f"{msg.content}\n{marker}" if msg.content else marker
|
||||
stripped.append(
|
||||
Message(
|
||||
seq=msg.seq,
|
||||
role=msg.role,
|
||||
content=content,
|
||||
tool_use_id=msg.tool_use_id,
|
||||
tool_calls=msg.tool_calls,
|
||||
is_error=msg.is_error,
|
||||
phase_id=msg.phase_id,
|
||||
is_transition_marker=msg.is_transition_marker,
|
||||
image_content=None, # stripped
|
||||
)
|
||||
)
|
||||
else:
|
||||
stripped.append(msg)
|
||||
return stripped
|
||||
|
||||
|
||||
async def llm_compact(
|
||||
ctx: NodeContext,
|
||||
messages: list,
|
||||
@@ -169,12 +368,16 @@ async def llm_compact(
|
||||
in half and each half is summarised independently. Tool history is
|
||||
appended once at the top-level call (``_depth == 0``).
|
||||
"""
|
||||
from framework.graph.conversation import extract_tool_call_history
|
||||
from framework.graph.event_loop.tool_result_handler import is_context_too_large_error
|
||||
from framework.agent_loop.conversation import extract_tool_call_history
|
||||
from framework.agent_loop.internals.tool_result_handler import is_context_too_large_error
|
||||
|
||||
if _depth > max_depth:
|
||||
raise RuntimeError(f"LLM compaction recursion limit ({max_depth})")
|
||||
|
||||
# Strip images before summarisation to avoid wasting tokens
|
||||
if _depth == 0:
|
||||
messages = strip_images_from_messages(messages)
|
||||
|
||||
formatted = format_messages_for_summary(messages)
|
||||
|
||||
# Proactive split: avoid wasting an API call on oversized input
|
||||
@@ -297,7 +500,12 @@ def build_llm_compaction_prompt(
|
||||
*,
|
||||
max_context_tokens: int = 128_000,
|
||||
) -> str:
|
||||
"""Build prompt for LLM compaction targeting 50% of token budget."""
|
||||
"""Build prompt for LLM compaction targeting 50% of token budget.
|
||||
|
||||
Uses a structured section format inspired by Claude Code's compact
|
||||
service. Each section focuses on a different aspect of the conversation
|
||||
so the summariser produces consistently useful, well-organised output.
|
||||
"""
|
||||
spec = ctx.node_spec
|
||||
ctx_lines = [f"NODE: {spec.name} (id={spec.id})"]
|
||||
if spec.description:
|
||||
@@ -330,13 +538,30 @@ def build_llm_compaction_prompt(
|
||||
f"CONVERSATION MESSAGES:\n{formatted_messages}\n\n"
|
||||
"INSTRUCTIONS:\n"
|
||||
f"Write a summary of approximately {target_chars} characters "
|
||||
f"(~{target_tokens} tokens).\n"
|
||||
"1. Preserve ALL user-stated rules, constraints, and preferences "
|
||||
"verbatim.\n"
|
||||
"2. Preserve key decisions made and results obtained.\n"
|
||||
"3. Preserve in-progress work state so the agent can continue.\n"
|
||||
"4. Be detailed enough that the agent can resume without "
|
||||
"re-doing work.\n"
|
||||
f"(~{target_tokens} tokens).\n\n"
|
||||
"Organise the summary into these sections (omit empty ones):\n\n"
|
||||
"1. **Primary Request and Intent** — What the user originally asked "
|
||||
"for and the high-level goal the agent is working toward.\n"
|
||||
"2. **Key Technical Concepts** — Important domain-specific terms, "
|
||||
"patterns, or architectural decisions established in the conversation.\n"
|
||||
"3. **Files and Code Sections** — Specific files read/written/edited "
|
||||
"with brief descriptions of changes. Include short code snippets only "
|
||||
"when they capture critical logic.\n"
|
||||
"4. **Errors and Fixes** — Problems encountered and how they were "
|
||||
"resolved. Include root causes so the agent doesn't repeat them.\n"
|
||||
"5. **Problem Solving Efforts** — Approaches tried, dead ends hit, "
|
||||
"and reasoning behind the current strategy.\n"
|
||||
"6. **User Messages** — Preserve ALL user-stated rules, constraints, "
|
||||
"identity preferences, and account details verbatim.\n"
|
||||
"7. **Pending Tasks** — Work remaining, outputs still needed, and "
|
||||
"any blockers.\n"
|
||||
"8. **Current Work** — The most recent action taken and the immediate "
|
||||
"next step the agent should perform. This section is the most important "
|
||||
"for seamless resumption.\n\n"
|
||||
"Additional rules:\n"
|
||||
"- Be detailed enough that the agent can resume without re-doing work.\n"
|
||||
"- Preserve key decisions made and results obtained.\n"
|
||||
"- When in doubt, keep information rather than discard it.\n"
|
||||
)
|
||||
|
||||
|
||||
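As a rough guide to how the "approximately {target_chars} characters (~{target_tokens} tokens)" figures in the prompt above could follow from the 50%-of-budget target, here is a small sketch; the 4-characters-per-token ratio is an assumption, not a framework constant:

```python
def summary_targets(max_context_tokens: int = 128_000) -> tuple[int, int]:
    """Return (target_chars, target_tokens) for a summary sized at half the budget."""
    target_tokens = max_context_tokens // 2  # 50% of the token budget
    target_chars = target_tokens * 4         # assumed ~4 chars per token
    return target_chars, target_tokens

print(summary_targets())  # -> (256000, 64000)
```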
@@ -499,7 +724,7 @@ async def log_compaction(
|
||||
)
|
||||
|
||||
if event_bus:
|
||||
from framework.runtime.event_bus import AgentEvent, EventType
|
||||
from framework.host.event_bus import AgentEvent, EventType
|
||||
|
||||
event_data: dict[str, Any] = {
|
||||
"level": level,
|
||||
@@ -636,6 +861,6 @@ def _extract_tool_call_history(conversation: NodeConversation) -> str:
|
||||
directly (vs. the module-level extract_tool_call_history in conversation.py
|
||||
which works on raw message lists).
|
||||
"""
|
||||
from framework.graph.conversation import extract_tool_call_history
|
||||
from framework.agent_loop.conversation import extract_tool_call_history
|
||||
|
||||
return extract_tool_call_history(list(conversation.messages))
|
||||
+7
-4
@@ -14,9 +14,9 @@ from collections.abc import Awaitable, Callable
|
||||
from dataclasses import dataclass
|
||||
from typing import Any
|
||||
|
||||
from framework.graph.conversation import ConversationStore, NodeConversation
|
||||
from framework.graph.event_loop.types import LoopConfig, OutputAccumulator, TriggerEvent
|
||||
from framework.graph.node import NodeContext
|
||||
from framework.agent_loop.conversation import ConversationStore, NodeConversation
|
||||
from framework.agent_loop.internals.types import LoopConfig, OutputAccumulator, TriggerEvent
|
||||
from framework.orchestrator.node import NodeContext
|
||||
from framework.llm.capabilities import supports_image_tool_results
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -153,7 +153,10 @@ async def drain_injection_queue(
) -> int:
"""Drain all pending injected events as user messages. Returns count."""
count = 0
logger.debug("[drain_injection_queue] Starting to drain queue, initial queue size: %s", queue.qsize() if hasattr(queue, 'qsize') else 'unknown')
logger.debug(
"[drain_injection_queue] Starting to drain queue, initial queue size: %s",
queue.qsize() if hasattr(queue, "qsize") else "unknown",
)
while not queue.empty():
try:
content, is_client_input, image_content = queue.get_nowait()
|
||||
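The drain loop above pairs `queue.empty()` with `get_nowait()`; a self-contained sketch of that pattern, using the standard-library queue rather than the framework's injection queue:

```python
import queue

def drain(q: queue.Queue) -> list[tuple]:
    """Drain all pending items without blocking, tolerating races on empty()."""
    drained: list[tuple] = []
    while not q.empty():
        try:
            drained.append(q.get_nowait())
        except queue.Empty:
            break  # another consumer grabbed the item between empty() and get_nowait()
    return drained

q: queue.Queue = queue.Queue()
q.put(("hello", True, None))  # (content, is_client_input, image_content)
print(len(drain(q)))  # -> 1
```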
+5
-5
@@ -9,10 +9,10 @@ from __future__ import annotations
|
||||
import logging
|
||||
import time
|
||||
|
||||
from framework.graph.conversation import NodeConversation
|
||||
from framework.graph.event_loop.types import HookContext
|
||||
from framework.graph.node import NodeContext
|
||||
from framework.runtime.event_bus import EventBus
|
||||
from framework.agent_loop.conversation import NodeConversation
|
||||
from framework.agent_loop.internals.types import HookContext
|
||||
from framework.orchestrator.node import NodeContext
|
||||
from framework.host.event_bus import EventBus
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -177,7 +177,7 @@ async def publish_context_usage(
if not event_bus:
return

from framework.runtime.event_bus import AgentEvent, EventType
from framework.host.event_bus import AgentEvent, EventType

estimated = conversation.estimate_tokens()
max_tokens = conversation._max_context_tokens
|
||||
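A tiny sketch of the usage figure the hunk above computes before publishing; the payload shape here is assumed for illustration, not taken from the real AgentEvent:

```python
def context_usage_event(estimated: int, max_tokens: int) -> dict:
    # Guard against a zero budget so the ratio is always well defined.
    ratio = estimated / max_tokens if max_tokens else 0.0
    return {"estimated_tokens": estimated,
            "max_context_tokens": max_tokens,
            "usage_ratio": round(ratio, 3)}

print(context_usage_event(45_000, 180_000))  # usage_ratio -> 0.25
```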
+4
-4
@@ -5,9 +5,9 @@ from __future__ import annotations
|
||||
import logging
|
||||
from collections.abc import Callable
|
||||
|
||||
from framework.graph.conversation import NodeConversation
|
||||
from framework.graph.event_loop.types import JudgeProtocol, JudgeVerdict, OutputAccumulator
|
||||
from framework.graph.node import NodeContext
|
||||
from framework.agent_loop.conversation import NodeConversation
|
||||
from framework.agent_loop.internals.types import JudgeProtocol, JudgeVerdict, OutputAccumulator
|
||||
from framework.orchestrator.node import NodeContext
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -155,7 +155,7 @@ async def judge_turn(
|
||||
|
||||
# Level 2b: conversation-aware quality check (if success_criteria set)
|
||||
if ctx.node_spec.success_criteria and ctx.llm:
|
||||
from framework.graph.conversation_judge import evaluate_phase_completion
|
||||
from framework.orchestrator.conversation_judge import evaluate_phase_completion
|
||||
|
||||
verdict = await evaluate_phase_completion(
|
||||
llm=ctx.llm,
|
||||
-112
@@ -204,118 +204,6 @@ def build_escalate_tool() -> Tool:
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
def build_delegate_tool(sub_agents: list[str], node_registry: dict[str, Any]) -> Tool | None:
|
||||
"""Build the synthetic delegate_to_sub_agent tool for subagent invocation.
|
||||
|
||||
Args:
|
||||
sub_agents: List of node IDs that can be invoked as subagents.
|
||||
node_registry: Map of node_id -> NodeSpec for looking up subagent descriptions.
|
||||
|
||||
Returns:
|
||||
Tool definition if sub_agents is non-empty, None otherwise.
|
||||
"""
|
||||
if not sub_agents:
|
||||
return None
|
||||
|
||||
agent_descriptions = []
|
||||
for agent_id in sub_agents:
|
||||
spec = node_registry.get(agent_id)
|
||||
if spec:
|
||||
desc = getattr(spec, "description", "(no description)")
|
||||
agent_descriptions.append(f"- {agent_id}: {desc}")
|
||||
else:
|
||||
agent_descriptions.append(f"- {agent_id}: (not found in registry)")
|
||||
|
||||
return Tool(
|
||||
name="delegate_to_sub_agent",
|
||||
description=(
|
||||
"Delegate a task to a specialized sub-agent. The sub-agent runs "
|
||||
"autonomously with read-only access to current memory and returns "
|
||||
"its result. Use this to parallelize work or leverage specialized capabilities.\n\n"
|
||||
"Available sub-agents:\n" + "\n".join(agent_descriptions)
|
||||
),
|
||||
parameters={
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"agent_id": {
|
||||
"type": "string",
|
||||
"description": f"The sub-agent to invoke. Must be one of: {sub_agents}",
|
||||
"enum": sub_agents,
|
||||
},
|
||||
"task": {
|
||||
"type": "string",
|
||||
"description": (
|
||||
"The task description for the sub-agent to execute. "
|
||||
"Be specific about what you want the sub-agent to do and "
|
||||
"what information to return."
|
||||
),
|
||||
},
|
||||
},
|
||||
"required": ["agent_id", "task"],
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
def build_report_to_parent_tool() -> Tool:
|
||||
"""Build the synthetic report_to_parent tool for sub-agent progress reports.
|
||||
|
||||
Sub-agents call this to send one-way progress updates, partial findings,
|
||||
or status reports to the parent node (and external observers via event bus)
|
||||
without blocking execution.
|
||||
|
||||
When ``wait_for_response`` is True, the sub-agent blocks until the parent
|
||||
relays the user's response — used for escalation (e.g. login pages, CAPTCHAs).
|
||||
|
||||
When ``mark_complete`` is True, the sub-agent terminates immediately after
|
||||
sending the report — no need to call set_output for each output key.
|
||||
"""
|
||||
return Tool(
|
||||
name="report_to_parent",
|
||||
description=(
|
||||
"Send a report to the parent agent. By default this is fire-and-forget: "
|
||||
"the parent receives the report but does not respond. "
|
||||
"Set wait_for_response=true to BLOCK until the user replies — use this "
|
||||
"when you need human intervention (e.g. login pages, CAPTCHAs, "
|
||||
"authentication walls). The user's response is returned as the tool result. "
|
||||
"Set mark_complete=true to finish your task and terminate immediately "
|
||||
"after sending the report — use this when your findings are in the "
|
||||
"message/data fields and you don't need to call set_output."
|
||||
),
|
||||
parameters={
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"message": {
|
||||
"type": "string",
|
||||
"description": "A human-readable status or progress message.",
|
||||
},
|
||||
"data": {
|
||||
"type": "object",
|
||||
"description": "Optional structured data to include with the report.",
|
||||
},
|
||||
"wait_for_response": {
|
||||
"type": "boolean",
|
||||
"description": (
|
||||
"If true, block execution until the user responds. "
|
||||
"Use for escalation scenarios requiring human intervention."
|
||||
),
|
||||
"default": False,
|
||||
},
|
||||
"mark_complete": {
|
||||
"type": "boolean",
|
||||
"description": (
|
||||
"If true, terminate the sub-agent immediately after sending "
|
||||
"this report. The report message and data are delivered to the "
|
||||
"parent as the final result. No set_output calls are needed."
|
||||
),
|
||||
"default": False,
|
||||
},
|
||||
},
|
||||
"required": ["message"],
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
def handle_set_output(
|
||||
tool_input: dict[str, Any],
|
||||
output_keys: list[str] | None,
|
||||
+15
-12
@@ -222,7 +222,7 @@ def truncate_tool_result(
|
||||
- Small results (≤ limit): full content kept + file annotation
|
||||
- Large results (> limit): preview + file reference
|
||||
- Errors: pass through unchanged
|
||||
- load_data results: truncate with pagination hint (no re-spill)
|
||||
- read_file/load_data results: truncate with pagination hint (no re-spill)
|
||||
"""
|
||||
limit = max_tool_result_chars
|
||||
|
||||
@@ -230,12 +230,12 @@ def truncate_tool_result(
|
||||
if result.is_error:
|
||||
return result
|
||||
|
||||
# load_data reads FROM spilled files — never re-spill (circular).
|
||||
# read_file/load_data reads FROM spilled files — never re-spill (circular).
|
||||
# Just truncate with a pagination hint if the result is too large.
|
||||
if tool_name == "load_data":
|
||||
if tool_name in ("load_data", "read_file"):
|
||||
if limit <= 0 or len(result.content) <= limit:
|
||||
return result # Small load_data result — pass through as-is
|
||||
# Large load_data result — truncate with smart preview
|
||||
return result # Small result — pass through as-is
|
||||
# Large result — truncate with smart preview
|
||||
PREVIEW_CAP = min(5000, max(limit - 500, limit // 2))
|
||||
|
||||
metadata_str = ""
|
||||
@@ -284,7 +284,7 @@ def truncate_tool_result(
|
||||
spill_path.mkdir(parents=True, exist_ok=True)
|
||||
filename = next_spill_filename_fn(tool_name)
|
||||
|
||||
# Pretty-print JSON content so load_data's line-based
|
||||
# Pretty-print JSON content so read_file's line-based
|
||||
# pagination works correctly.
|
||||
write_content = result.content
|
||||
parsed_json: Any = None # track for metadata extraction
|
||||
@@ -294,7 +294,10 @@ def truncate_tool_result(
|
||||
except (json.JSONDecodeError, TypeError, ValueError):
|
||||
pass # Not JSON — write as-is
|
||||
|
||||
(spill_path / filename).write_text(write_content, encoding="utf-8")
|
||||
file_path = spill_path / filename
|
||||
file_path.write_text(write_content, encoding="utf-8")
|
||||
# Use absolute path so parent agents can find files from subagents
|
||||
abs_path = str(file_path.resolve())
|
||||
|
||||
if limit > 0 and len(result.content) > limit:
|
||||
# Large result: build a small, metadata-rich preview so the
|
||||
@@ -316,14 +319,14 @@ def truncate_tool_result(
|
||||
# Assemble header with structural info + warning
|
||||
header = (
|
||||
f"[Result from {tool_name}: {len(result.content):,} chars — "
|
||||
f"too large for context, saved to '{filename}'.]\n"
|
||||
f"too large for context, saved to '{abs_path}'.]\n"
|
||||
)
|
||||
if metadata_str:
|
||||
header += f"\nData structure:\n{metadata_str}"
|
||||
header += (
|
||||
f"\n\nWARNING: The preview below is INCOMPLETE. "
|
||||
f"Do NOT draw conclusions or counts from it. "
|
||||
f"Use load_data(filename='{filename}') to read the "
|
||||
f"Use read_file(path='{abs_path}') to read the "
|
||||
f"full data before analysis."
|
||||
)
|
||||
|
||||
@@ -332,11 +335,11 @@ def truncate_tool_result(
|
||||
"Tool result spilled to file: %s (%d chars → %s)",
|
||||
tool_name,
|
||||
len(result.content),
|
||||
filename,
|
||||
abs_path,
|
||||
)
|
||||
else:
|
||||
# Small result: keep full content + annotation
|
||||
content = f"{result.content}\n\n[Saved to '{filename}']"
|
||||
# Small result: keep full content + annotation with absolute path
|
||||
content = f"{result.content}\n\n[Saved to '{abs_path}']"
|
||||
logger.info(
|
||||
"Tool result saved to file: %s (%d chars → %s)",
|
||||
tool_name,
|
||||
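The spill behaviour changed above (absolute paths, read_file pagination hint) can be sketched as follows; the character limit, preview length, and hint wording are illustrative rather than copied from the real handler:

```python
import json
from pathlib import Path

def spill_large_result(content: str, spill_dir: Path, filename: str,
                       limit: int = 30_000, preview_chars: int = 2_000) -> str:
    """Write an oversized tool result to disk and return a preview that points at it."""
    if limit <= 0 or len(content) <= limit:
        return content  # small result: pass through unchanged
    spill_dir.mkdir(parents=True, exist_ok=True)
    try:
        # Pretty-print JSON so line-based pagination in read_file stays useful.
        text = json.dumps(json.loads(content), indent=2)
    except (json.JSONDecodeError, TypeError, ValueError):
        text = content
    file_path = spill_dir / filename
    file_path.write_text(text, encoding="utf-8")
    abs_path = file_path.resolve()
    return (f"[Result: {len(content):,} chars, too large for context; saved to '{abs_path}'.]\n"
            f"WARNING: the preview below is INCOMPLETE. "
            f"Use read_file(path='{abs_path}') to read the full data.\n"
            + content[:preview_chars])
```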
+18
-10
@@ -9,10 +9,8 @@ from dataclasses import dataclass, field
|
||||
from pathlib import Path
|
||||
from typing import Any, Literal, Protocol, runtime_checkable
|
||||
|
||||
from framework.graph.conversation import (
|
||||
from framework.agent_loop.conversation import (
|
||||
ConversationStore,
|
||||
get_run_cursor,
|
||||
update_run_cursor,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -70,7 +68,7 @@ class LoopConfig:
|
||||
max_output_value_chars: int = 2_000
|
||||
|
||||
# Stream retry.
max_stream_retries: int = 3
max_stream_retries: int = 5
stream_retry_backoff_base: float = 2.0
stream_retry_max_delay: float = 60.0
|
||||
|
||||
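For reference, the retry settings above (base 2.0s, cap 60s, now 5 attempts) imply a capped exponential backoff along these lines; the exact formula used by the stream retry loop is not shown in this diff, so treat this as an assumption:

```python
def retry_delays(retries: int = 5, base: float = 2.0, max_delay: float = 60.0) -> list[float]:
    """Capped exponential backoff schedule: base * 2**attempt, clamped to max_delay."""
    return [min(base * (2 ** attempt), max_delay) for attempt in range(retries)]

print(retry_delays())  # -> [2.0, 4.0, 8.0, 16.0, 32.0]
```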
@@ -79,13 +77,20 @@ class LoopConfig:
|
||||
|
||||
# Client-facing auto-block grace period.
|
||||
cf_grace_turns: int = 1
|
||||
# Worker auto-escalation: text-only turns before escalating to queen.
|
||||
worker_escalation_grace_turns: int = 1
|
||||
tool_doom_loop_enabled: bool = True
|
||||
|
||||
# Per-tool-call timeout.
|
||||
tool_call_timeout_seconds: float = 60.0
|
||||
|
||||
# Subagent delegation timeout.
|
||||
subagent_timeout_seconds: float = 600.0
|
||||
# Subagent delegation timeout (wall-clock max).
|
||||
subagent_timeout_seconds: float = 3600.0
|
||||
|
||||
# Subagent inactivity timeout - only timeout if no activity for this duration.
|
||||
# This resets whenever the subagent makes progress (tool calls, LLM responses).
|
||||
# Set to 0 to use only the wall-clock timeout.
|
||||
subagent_inactivity_timeout_seconds: float = 300.0
|
||||
|
||||
# Lifecycle hooks.
|
||||
hooks: dict[str, list] | None = None
|
||||
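The two subagent timeouts introduced above (a wall-clock cap plus an inactivity window that resets on progress) combine roughly like this; the function and field names are stand-ins, not the framework's:

```python
import time

def subagent_timed_out(started_at: float, last_activity_at: float,
                       wall_clock_s: float = 3600.0, inactivity_s: float = 300.0) -> bool:
    now = time.monotonic()
    if now - started_at > wall_clock_s:
        return True  # hard cap regardless of progress
    if inactivity_s > 0 and now - last_activity_at > inactivity_s:
        return True  # no tool calls or LLM responses recently
    return False     # inactivity_s == 0 falls back to the wall-clock cap only

start = time.monotonic()
print(subagent_timed_out(start, start))  # -> False (just started)
```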
@@ -151,8 +156,9 @@ class OutputAccumulator:
|
||||
if isinstance(value, (dict, list))
|
||||
else str(value)
|
||||
)
|
||||
(spill_path / filename).write_text(write_content, encoding="utf-8")
|
||||
file_size = (spill_path / filename).stat().st_size
|
||||
file_path = spill_path / filename
|
||||
file_path.write_text(write_content, encoding="utf-8")
|
||||
file_size = file_path.stat().st_size
|
||||
logger.info(
|
||||
"set_output value auto-spilled: key=%s, %d chars -> %s (%d bytes)",
|
||||
key,
|
||||
@@ -160,9 +166,11 @@ class OutputAccumulator:
|
||||
filename,
|
||||
file_size,
|
||||
)
|
||||
# Use absolute path so parent agents can find files from subagents
|
||||
abs_path = str(file_path.resolve())
|
||||
return (
|
||||
f"[Saved to '{filename}' ({file_size:,} bytes). "
|
||||
f"Use load_data(filename='{filename}') "
|
||||
f"[Saved to '{abs_path}' ({file_size:,} bytes). "
|
||||
f"Use read_file(path='{abs_path}') "
|
||||
f"to access full data.]"
|
||||
)
|
||||
|
||||
@@ -8,6 +8,14 @@ FRAMEWORK_AGENTS_DIR = Path(__file__).parent
|
||||
def list_framework_agents() -> list[Path]:
|
||||
"""List all framework agent directories."""
|
||||
return sorted(
|
||||
[p for p in FRAMEWORK_AGENTS_DIR.iterdir() if p.is_dir() and (p / "agent.py").exists()],
|
||||
[
|
||||
p
|
||||
for p in FRAMEWORK_AGENTS_DIR.iterdir()
|
||||
if p.is_dir()
|
||||
and (
|
||||
(p / "agent.json").exists()
|
||||
or (p / "agent.py").exists()
|
||||
)
|
||||
],
|
||||
key=lambda p: p.name,
|
||||
)
|
||||
|
||||
@@ -21,15 +21,15 @@ from pathlib import Path
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
from framework.config import get_max_context_tokens
|
||||
from framework.graph import Goal, NodeSpec, SuccessCriterion
|
||||
from framework.graph.checkpoint_config import CheckpointConfig
|
||||
from framework.graph.edge import GraphSpec
|
||||
from framework.graph.executor import ExecutionResult
|
||||
from framework.orchestrator import Goal, NodeSpec, SuccessCriterion
|
||||
from framework.orchestrator.checkpoint_config import CheckpointConfig
|
||||
from framework.orchestrator.edge import GraphSpec
|
||||
from framework.orchestrator.orchestrator import ExecutionResult
|
||||
from framework.llm import LiteLLMProvider
|
||||
from framework.runner.mcp_registry import MCPRegistry
|
||||
from framework.runner.tool_registry import ToolRegistry
|
||||
from framework.runtime.agent_runtime import AgentRuntime, create_agent_runtime
|
||||
from framework.runtime.execution_stream import EntryPointSpec
|
||||
from framework.loader.mcp_registry import MCPRegistry
|
||||
from framework.loader.tool_registry import ToolRegistry
|
||||
from framework.host.agent_host import AgentHost
|
||||
from framework.host.execution_manager import EntryPointSpec
|
||||
|
||||
from .config import default_config
|
||||
from .nodes import build_tester_node
|
||||
@@ -37,7 +37,7 @@ from .nodes import build_tester_node
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.runner import AgentRunner
|
||||
from framework.loader import AgentLoader
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -233,7 +233,7 @@ requires_account_selection = True
|
||||
"""Signal TUI to show account picker before starting the agent."""
|
||||
|
||||
|
||||
def configure_for_account(runner: AgentRunner, account: dict) -> None:
|
||||
def configure_for_account(runner: AgentLoader, account: dict) -> None:
|
||||
"""Scope the tester node's tools to the selected provider.
|
||||
|
||||
Handles both Aden accounts (account= routing) and local accounts
|
||||
@@ -325,7 +325,7 @@ def _activate_local_account(credential_id: str, alias: str) -> None:
|
||||
|
||||
|
||||
def _configure_aden_node(
|
||||
runner: AgentRunner,
|
||||
runner: AgentLoader,
|
||||
provider: str,
|
||||
alias: str,
|
||||
detail: str,
|
||||
@@ -368,7 +368,7 @@ or any other identifier — always use the alias exactly as shown.
|
||||
|
||||
|
||||
def _configure_local_node(
|
||||
runner: AgentRunner,
|
||||
runner: AgentLoader,
|
||||
provider: str,
|
||||
alias: str,
|
||||
identity: dict,
|
||||
@@ -497,7 +497,7 @@ class CredentialTesterAgent:
|
||||
def __init__(self, config=None):
|
||||
self.config = config or default_config
|
||||
self._selected_account: dict | None = None
|
||||
self._agent_runtime: AgentRuntime | None = None
|
||||
self._agent_runtime: AgentHost | None = None
|
||||
self._tool_registry: ToolRegistry | None = None
|
||||
self._storage_path: Path | None = None
|
||||
|
||||
@@ -613,7 +613,7 @@ class CredentialTesterAgent:
|
||||
|
||||
graph = self._build_graph()
|
||||
|
||||
self._agent_runtime = create_agent_runtime(
|
||||
self._agent_runtime = AgentHost(
|
||||
graph=graph,
|
||||
goal=goal,
|
||||
storage_path=self._storage_path,
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
"""Node definitions for Credential Tester agent."""
|
||||
|
||||
from framework.graph import NodeSpec
|
||||
from framework.orchestrator import NodeSpec
|
||||
|
||||
|
||||
def build_tester_node(
|
||||
|
||||
@@ -27,8 +27,8 @@ def _get_last_active(agent_path: Path) -> str | None:
|
||||
"""Return the most recent updated_at timestamp across all sessions.
|
||||
|
||||
Checks both worker sessions (``~/.hive/agents/{name}/sessions/``) and
|
||||
queen sessions (``~/.hive/queen/session/``) whose ``meta.json`` references
|
||||
the same *agent_path*.
|
||||
queen sessions (``~/.hive/agents/queens/default/sessions/``) whose
|
||||
``meta.json`` references the same *agent_path*.
|
||||
"""
|
||||
from datetime import datetime
|
||||
|
||||
@@ -52,26 +52,33 @@ def _get_last_active(agent_path: Path) -> str | None:
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
# 2. Queen sessions
|
||||
queen_sessions_dir = Path.home() / ".hive" / "queen" / "session"
|
||||
if queen_sessions_dir.exists():
|
||||
# 2. Queen sessions (scan all queen identity directories)
|
||||
from framework.config import QUEENS_DIR
|
||||
|
||||
if QUEENS_DIR.exists():
|
||||
resolved = agent_path.resolve()
|
||||
for d in queen_sessions_dir.iterdir():
|
||||
if not d.is_dir():
|
||||
for queen_dir in QUEENS_DIR.iterdir():
|
||||
if not queen_dir.is_dir():
|
||||
continue
|
||||
meta_file = d / "meta.json"
|
||||
if not meta_file.exists():
|
||||
sessions_dir = queen_dir / "sessions"
|
||||
if not sessions_dir.exists():
|
||||
continue
|
||||
try:
|
||||
meta = json.loads(meta_file.read_text(encoding="utf-8"))
|
||||
stored = meta.get("agent_path")
|
||||
if not stored or Path(stored).resolve() != resolved:
|
||||
for d in sessions_dir.iterdir():
|
||||
if not d.is_dir():
|
||||
continue
|
||||
meta_file = d / "meta.json"
|
||||
if not meta_file.exists():
|
||||
continue
|
||||
try:
|
||||
meta = json.loads(meta_file.read_text(encoding="utf-8"))
|
||||
stored = meta.get("agent_path")
|
||||
if not stored or Path(stored).resolve() != resolved:
|
||||
continue
|
||||
ts = datetime.fromtimestamp(d.stat().st_mtime).isoformat()
|
||||
if latest is None or ts > latest:
|
||||
latest = ts
|
||||
except Exception:
|
||||
continue
|
||||
ts = datetime.fromtimestamp(d.stat().st_mtime).isoformat()
|
||||
if latest is None or ts > latest:
|
||||
latest = ts
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
return latest
|
||||
|
||||
@@ -112,13 +119,33 @@ def _count_runs(agent_name: str) -> int:
|
||||
def _extract_agent_stats(agent_path: Path) -> tuple[int, int, list[str]]:
|
||||
"""Extract node count, tool count, and tags from an agent directory.
|
||||
|
||||
Prefers agent.py (AST-parsed) over agent.json for node/tool counts
|
||||
since agent.json may be stale. Tags are only available from agent.json.
|
||||
Checks agent.json (declarative) first, then agent.py (legacy).
|
||||
"""
|
||||
import ast
|
||||
|
||||
node_count, tool_count, tags = 0, 0, []
|
||||
|
||||
# Declarative JSON agents (preferred)
|
||||
agent_json = agent_path / "agent.json"
|
||||
if agent_json.exists():
|
||||
try:
|
||||
data = json.loads(agent_json.read_text(encoding="utf-8"))
|
||||
if isinstance(data, dict):
|
||||
json_nodes = data.get("nodes", [])
|
||||
node_count = len(json_nodes)
|
||||
tools: set[str] = set()
|
||||
for n in json_nodes:
|
||||
node_tools = n.get("tools", {})
|
||||
if isinstance(node_tools, dict):
|
||||
tools.update(node_tools.get("allowed", []))
|
||||
elif isinstance(node_tools, list):
|
||||
tools.update(node_tools)
|
||||
tool_count = len(tools)
|
||||
return node_count, tool_count, tags
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Legacy: agent.py (AST-parsed)
|
||||
agent_py = agent_path / "agent.py"
|
||||
if agent_py.exists():
|
||||
try:
|
||||
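To make the new counting logic above concrete, here is a hypothetical agent.json payload (written as a Python dict) exercising both the dict-style and list-style `tools` field; the field names follow the code in this hunk, not a published schema:

```python
minimal_agent_json = {
    "name": "demo-agent",
    "nodes": [
        {"id": "research", "tools": {"policy": "explicit", "allowed": ["web_search"]}},
        {"id": "report", "tools": ["write_file"]},
    ],
}

json_nodes = minimal_agent_json.get("nodes", [])
tools: set[str] = set()
for n in json_nodes:
    node_tools = n.get("tools", {})
    if isinstance(node_tools, dict):
        tools.update(node_tools.get("allowed", []))
    elif isinstance(node_tools, list):
        tools.update(node_tools)
print(len(json_nodes), len(tools))  # -> 2 2
```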
@@ -132,39 +159,31 @@ def _extract_agent_stats(agent_path: Path) -> tuple[int, int, list[str]]:
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
agent_json = agent_path / "agent.json"
|
||||
if agent_json.exists():
|
||||
try:
|
||||
data = json.loads(agent_json.read_text(encoding="utf-8"))
|
||||
json_nodes = data.get("graph", {}).get("nodes", []) or data.get("nodes", [])
|
||||
if node_count == 0:
|
||||
node_count = len(json_nodes)
|
||||
tools: set[str] = set()
|
||||
for n in json_nodes:
|
||||
tools.update(n.get("tools", []))
|
||||
tool_count = len(tools)
|
||||
tags = data.get("agent", {}).get("tags", [])
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
return node_count, tool_count, tags
|
||||
|
||||
|
||||
def discover_agents() -> dict[str, list[AgentEntry]]:
|
||||
"""Discover agents from all known sources grouped by category."""
|
||||
from framework.runner.cli import (
|
||||
from framework.loader.cli import (
|
||||
_extract_python_agent_metadata,
|
||||
_get_framework_agents_dir,
|
||||
_is_valid_agent_dir,
|
||||
)
|
||||
|
||||
from framework.config import COLONIES_DIR
|
||||
|
||||
groups: dict[str, list[AgentEntry]] = {}
|
||||
sources = [
|
||||
("Your Agents", Path("exports")),
|
||||
("Your Agents", COLONIES_DIR),
|
||||
("Your Agents", Path("exports")), # compat fallback
|
||||
("Framework", _get_framework_agents_dir()),
|
||||
("Examples", Path("examples/templates")),
|
||||
]
|
||||
|
||||
# Track seen agent directory names to avoid duplicates when the same
|
||||
# agent exists in both colonies/ and exports/ (colonies takes priority).
|
||||
_seen_agent_names: set[str] = set()
|
||||
|
||||
for category, base_dir in sources:
|
||||
if not base_dir.exists():
|
||||
continue
|
||||
@@ -172,6 +191,9 @@ def discover_agents() -> dict[str, list[AgentEntry]]:
|
||||
for path in sorted(base_dir.iterdir(), key=lambda p: p.name):
|
||||
if not _is_valid_agent_dir(path):
|
||||
continue
|
||||
if path.name in _seen_agent_names:
|
||||
continue
|
||||
_seen_agent_names.add(path.name)
|
||||
|
||||
name, desc = _extract_python_agent_metadata(path)
|
||||
config_fallback_name = path.name.replace("_", " ").title()
|
||||
@@ -179,13 +201,19 @@ def discover_agents() -> dict[str, list[AgentEntry]]:
|
||||
|
||||
node_count, tool_count, tags = _extract_agent_stats(path)
|
||||
if not used_config:
|
||||
agent_json = path / "agent.json"
|
||||
if agent_json.exists():
|
||||
# Try agent.json (declarative) for metadata
|
||||
agent_json_path = path / "agent.json"
|
||||
if agent_json_path.exists():
|
||||
try:
|
||||
data = json.loads(agent_json.read_text(encoding="utf-8"))
|
||||
meta = data.get("agent", {})
|
||||
name = meta.get("name", name)
|
||||
desc = meta.get("description", desc)
|
||||
data = json.loads(
|
||||
agent_json_path.read_text(encoding="utf-8"),
|
||||
)
|
||||
if isinstance(data, dict):
|
||||
raw_name = data.get("name", name)
|
||||
if "-" in raw_name and " " not in raw_name:
|
||||
raw_name = raw_name.replace("-", " ").title()
|
||||
name = raw_name
|
||||
desc = data.get("description", desc)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
@@ -204,6 +232,8 @@ def discover_agents() -> dict[str, list[AgentEntry]]:
|
||||
)
|
||||
)
|
||||
if entries:
|
||||
groups[category] = entries
|
||||
existing = groups.get(category, [])
|
||||
existing.extend(entries)
|
||||
groups[category] = existing
|
||||
|
||||
return groups
|
||||
|
||||
@@ -1,19 +1,13 @@
|
||||
"""
|
||||
Queen — Native agent builder for the Hive framework.
|
||||
"""Queen -- the agent builder for the Hive framework."""
|
||||
|
||||
Deeply understands the agent framework and produces complete Python packages
|
||||
with goals, nodes, edges, system prompts, MCP configuration, and tests
|
||||
from natural language specifications.
|
||||
"""
|
||||
|
||||
from .agent import queen_goal, queen_graph
|
||||
from .agent import queen_goal, queen_loop_config
|
||||
from .config import AgentMetadata, RuntimeConfig, default_config, metadata
|
||||
|
||||
__version__ = "1.0.0"
|
||||
|
||||
__all__ = [
|
||||
"queen_goal",
|
||||
"queen_graph",
|
||||
"queen_loop_config",
|
||||
"RuntimeConfig",
|
||||
"AgentMetadata",
|
||||
"default_config",
|
||||
|
||||
@@ -1,38 +1,29 @@
|
||||
"""Queen graph definition."""
|
||||
"""Queen agent definition.
|
||||
|
||||
from framework.graph import Goal
|
||||
from framework.graph.edge import GraphSpec
|
||||
The queen is a single AgentLoop -- no graph, no orchestrator.
|
||||
Loaded by queen_orchestrator.create_queen().
|
||||
"""
|
||||
|
||||
from framework.orchestrator.goal import Goal
|
||||
|
||||
from .nodes import queen_node
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Queen graph — the primary persistent conversation.
|
||||
# Loaded by queen_orchestrator.create_queen(), NOT by AgentRunner.
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
queen_goal = Goal(
|
||||
id="queen-manager",
|
||||
name="Queen Manager",
|
||||
description=(
|
||||
"Manage the worker agent lifecycle and serve as the user's primary interactive interface."
|
||||
"Manage the worker agent lifecycle and serve as the "
|
||||
"user's primary interactive interface."
|
||||
),
|
||||
success_criteria=[],
|
||||
constraints=[],
|
||||
)
|
||||
|
||||
queen_graph = GraphSpec(
|
||||
id="queen-graph",
|
||||
goal_id=queen_goal.id,
|
||||
version="1.0.0",
|
||||
entry_node="queen",
|
||||
entry_points={"start": "queen"},
|
||||
terminal_nodes=[],
|
||||
pause_nodes=[],
|
||||
nodes=[queen_node],
|
||||
edges=[],
|
||||
conversation_mode="continuous",
|
||||
loop_config={
|
||||
"max_iterations": 999_999,
|
||||
"max_tool_calls_per_turn": 30,
|
||||
},
|
||||
)
|
||||
# Loop config -- used by queen_orchestrator to build LoopConfig
queen_loop_config = {
"max_iterations": 999_999,
"max_tool_calls_per_turn": 30,
"max_context_tokens": 180_000,
}

__all__ = ["queen_goal", "queen_loop_config", "queen_node"]
|
||||
|
||||
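A sketch of how queen_orchestrator might consume `queen_loop_config`; the trimmed LoopConfig stand-in below only mirrors the three keys used here, and create_queen()'s real wiring may differ:

```python
from dataclasses import dataclass

@dataclass
class LoopConfigSketch:  # stand-in for framework.agent_loop.internals.types.LoopConfig
    max_iterations: int = 100
    max_tool_calls_per_turn: int = 10
    max_context_tokens: int = 128_000

queen_loop_config = {
    "max_iterations": 999_999,
    "max_tool_calls_per_turn": 30,
    "max_context_tokens": 180_000,
}
config = LoopConfigSketch(**queen_loop_config)
print(config.max_context_tokens)  # -> 180000
```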
@@ -0,0 +1,3 @@
{
"include": ["gcu-tools"]
}
@@ -2,7 +2,7 @@
|
||||
|
||||
from pathlib import Path
|
||||
|
||||
from framework.graph import NodeSpec
|
||||
from framework.orchestrator import NodeSpec
|
||||
|
||||
# Load reference docs at import time so they're always in the system prompt.
|
||||
# No voluntary read_file() calls needed — the LLM gets everything upfront.
|
||||
@@ -37,7 +37,7 @@ _appendices = _build_appendices()
|
||||
|
||||
# GCU guide — shared between planning and building via _shared_building_knowledge.
|
||||
_gcu_section = (
|
||||
("\n\n# GCU Nodes — Browser Automation\n\n" + _gcu_guide)
|
||||
("\n\n# Browser Automation Nodes\n\n" + _gcu_guide)
|
||||
if _is_gcu_enabled() and _gcu_guide
|
||||
else ""
|
||||
)
|
||||
@@ -81,25 +81,20 @@ _QUEEN_PLANNING_TOOLS = [
|
||||
"save_agent_draft",
|
||||
"confirm_and_build",
|
||||
# Scaffold + transition to building (requires confirm_and_build first)
|
||||
"initialize_and_build_agent",
|
||||
# Load existing agent (after user confirms)
|
||||
"load_built_agent",
|
||||
"save_global_memory",
|
||||
]
|
||||
|
||||
# Building phase: full coding + agent construction tools.
|
||||
_QUEEN_BUILDING_TOOLS = (
|
||||
_SHARED_TOOLS
|
||||
+ [
|
||||
"load_built_agent",
|
||||
"list_credentials",
|
||||
"replan_agent",
|
||||
"save_agent_draft", # Re-draft during building → auto-dissolves + updates flowchart
|
||||
"save_global_memory",
|
||||
]
|
||||
)
|
||||
_QUEEN_BUILDING_TOOLS = _SHARED_TOOLS + [
|
||||
"load_built_agent",
|
||||
"list_credentials",
|
||||
"replan_agent",
|
||||
"save_agent_draft", # Re-draft during building → auto-dissolves + updates flowchart
|
||||
]
|
||||
|
||||
# Staging phase: agent loaded but not yet running — inspect, configure, launch.
|
||||
# No backward transitions — staging only goes forward to running.
|
||||
_QUEEN_STAGING_TOOLS = [
|
||||
# Read-only (inspect agent files, logs)
|
||||
"read_file",
|
||||
@@ -109,20 +104,16 @@ _QUEEN_STAGING_TOOLS = [
|
||||
# Agent inspection
|
||||
"list_credentials",
|
||||
"get_graph_status",
|
||||
# Launch or go back
|
||||
# Launch
|
||||
"run_agent_with_input",
|
||||
"stop_graph_and_edit",
|
||||
"stop_graph_and_plan",
|
||||
# Trigger management
|
||||
"set_trigger",
|
||||
"remove_trigger",
|
||||
"list_triggers",
|
||||
"save_global_memory",
|
||||
]
|
||||
|
||||
# Running phase: worker is executing — monitor and control.
|
||||
# Note: stop_graph_and_edit / stop_graph_and_plan are NOT available here.
|
||||
# The queen must go through INCUBATING first before dropping to building/planning.
|
||||
# Running phase: worker is executing — monitor, control, or switch to editing.
|
||||
# switch_to_editing lets the queen explicitly stop and tweak without rebuilding.
|
||||
_QUEEN_RUNNING_TOOLS = [
|
||||
# Read-only coding (for inspecting logs, files)
|
||||
"read_file",
|
||||
@@ -133,6 +124,7 @@ _QUEEN_RUNNING_TOOLS = [
|
||||
"list_credentials",
|
||||
# Worker lifecycle
|
||||
"stop_graph",
|
||||
"switch_to_editing",
|
||||
"get_graph_status",
|
||||
"run_agent_with_input",
|
||||
"inject_message",
|
||||
@@ -141,13 +133,12 @@ _QUEEN_RUNNING_TOOLS = [
|
||||
"set_trigger",
|
||||
"remove_trigger",
|
||||
"list_triggers",
|
||||
"save_global_memory",
|
||||
]
|
||||
|
||||
# Incubating phase: worker done, still loaded — tweak config and re-run.
|
||||
# Editing phase: worker done, still loaded — tweak config and re-run.
|
||||
# Has inject_message for live adjustments. stop_graph_and_edit/plan available
|
||||
# here (not in running) to escalate when a deeper change is needed.
|
||||
_QUEEN_INCUBATING_TOOLS = [
|
||||
# here to escalate when a deeper change is needed.
|
||||
_QUEEN_EDITING_TOOLS = [
|
||||
# Read-only (inspect)
|
||||
"read_file",
|
||||
"list_directory",
|
||||
@@ -159,15 +150,11 @@ _QUEEN_INCUBATING_TOOLS = [
|
||||
# Re-run or tweak
|
||||
"run_agent_with_input",
|
||||
"inject_message",
|
||||
# Escalate to building/planning (kills worker)
|
||||
"stop_graph_and_edit",
|
||||
"stop_graph_and_plan",
|
||||
# Monitoring
|
||||
"get_worker_health_summary",
|
||||
"set_trigger",
|
||||
"remove_trigger",
|
||||
"list_triggers",
|
||||
"save_global_memory",
|
||||
]
|
||||
|
||||
|
||||
@@ -184,7 +171,7 @@ _shared_building_knowledge = (
|
||||
|
||||
## Paths (MANDATORY)
|
||||
**Always use RELATIVE paths** \
|
||||
(e.g. `exports/agent_name/config.py`, `exports/agent_name/nodes/__init__.py`).
|
||||
(e.g. `exports/agent_name/agent.json`).
|
||||
**Never use absolute paths** like `/mnt/data/...` or `/workspace/...` — they fail.
|
||||
The project root is implicit.
|
||||
|
||||
@@ -194,14 +181,18 @@ When designing worker nodes or writing worker system prompts, reference these \
|
||||
tool names — NOT the coder-tools names (read_file, write_file, etc.).
|
||||
|
||||
Worker data tools (for large results and spillover):
|
||||
- save_data(filename, data, data_dir) — save data to a file for later retrieval
|
||||
- load_data(filename, data_dir, offset_bytes?, limit_bytes?) — load data \
|
||||
with byte-based pagination
|
||||
- list_data_files(data_dir) — list available data files
|
||||
- append_data(filename, data, data_dir) — append to a file incrementally
|
||||
- edit_data(filename, old_text, new_text, data_dir) — find-and-replace in a data file
|
||||
- serve_file_to_user(filename, data_dir, label?, open_in_browser?) — \
|
||||
generate a clickable file URI for the user
|
||||
Worker data tools (from files-tools MCP server):
|
||||
- read_file(path) — read a file
|
||||
- write_file(path, content) — write/create a file
|
||||
- list_files(path) — list directory contents
|
||||
- search_files(pattern, path) — regex search in files
|
||||
|
||||
Worker data tools (from hive-tools MCP server):
|
||||
- csv_read, csv_write, csv_append — CSV operations
|
||||
- pdf_read — read PDF files
|
||||
|
||||
All tools are registered in the global MCP registry (~/.hive/mcp_registry/). \
|
||||
Workers get tools from: hive-tools, gcu-tools, files-tools.
|
||||
|
||||
IMPORTANT: Do NOT tell workers to use read_file, write_file, edit_file, \
|
||||
search_files, or list_directory — those are YOUR tools, not theirs.
|
||||
@@ -216,7 +207,7 @@ _planning_knowledge = """\
|
||||
# Core Mandates (Planning)
|
||||
- **DO NOT propose a complete goal on your own.** Instead, \
|
||||
collaborate with the user to define it.
|
||||
- **NEVER call `initialize_and_build_agent` without explicit user approval.** \
|
||||
- **NEVER call `confirm_and_build` without explicit user approval.** \
|
||||
Present the full design first and wait for the user to confirm before building.
|
||||
- **Discover tools dynamically.** NEVER reference tools from static \
|
||||
docs. Always run list_agent_tools() to see what actually exists.
|
||||
@@ -264,9 +255,9 @@ When the stakeholder describes what they want, mentally construct:
|
||||
|
||||
**After the user responds, assess fit and gaps together.** Be honest and specific. \
|
||||
Reference tools from list_agent_tools() AND built-in capabilities:
|
||||
- **GCU browser automation** (`node_type="gcu"`) provides full Playwright-based \
|
||||
- **Browser automation provides full Playwright-based \
|
||||
browser control (navigation, clicking, typing, scrolling, JS-rendered pages, \
|
||||
multi-tab). Do NOT list browser automation as missing — use GCU nodes.
|
||||
multi-tab). Do NOT list browser automation as missing — use browser nodes with tools: {policy: "all"}.
|
||||
|
||||
Present a short **Framework Fit Assessment**:
|
||||
- **Works well**: 2-4 strengths for this use case
|
||||
@@ -318,14 +309,11 @@ explicitly on a node. Available types:
|
||||
- **io** (dusty purple, parallelogram): External data input/output
|
||||
- **document** (steel blue, wavy rect): Report or document generation
|
||||
- **database** (muted teal, cylinder): Database or data store
|
||||
- **subprocess** (dark cyan, subroutine): Delegated sub-agent / predefined process
|
||||
- **browser** (deep blue, hexagon): GCU browser automation / sub-agent \
|
||||
delegation. At build time, browser nodes are dissolved into the parent \
|
||||
node's sub_agents list. Use for any GCU or sub-agent leaf node.
|
||||
- **browser** (deep blue, hexagon): Browser automation node (uses gcu-tools).
|
||||
|
||||
Auto-detection works well for most cases: first node → start, nodes with \
|
||||
no outgoing edges → terminal, nodes with multiple conditional outgoing \
|
||||
edges → decision, GCU nodes → browser, nodes mentioning "database" → \
|
||||
edges → decision, browser tool nodes → browser, nodes mentioning "database" → \
|
||||
database, nodes mentioning "report/document" → document, I/O tools like \
|
||||
send_email → io. Everything else defaults to process. Set flowchart_type \
|
||||
explicitly only when auto-detection would be wrong.
|
||||
@@ -366,48 +354,19 @@ gather → [Valid data?] →Yes→ transform → deliver
|
||||
In the draft: the `[Valid data?]` node has `flowchart_type: "decision"`, \
|
||||
`decision_clause: "Data passes validation checks?"`, with labeled yes/no edges.
|
||||
|
||||
## Sub-Agent Nodes — Planning-Only Delegation
|
||||
## Browser Automation Nodes
|
||||
|
||||
Sub-agent nodes (dark teal subroutines) are **planning-only** visual elements \
|
||||
that show which nodes delegate to sub-agents. At `confirm_and_build()`, \
|
||||
sub-agent nodes are **dissolved** into their parent node:
|
||||
|
||||
- The sub-agent node's ID is added to the predecessor's `sub_agents` list
|
||||
- The sub-agent node and its connecting edge are removed
|
||||
- At runtime, the parent node can invoke the sub-agent via `delegate_to_sub_agent`
|
||||
|
||||
**Rules for sub-agent nodes (INCLUDING GCU nodes):**
|
||||
- GCU nodes are auto-detected as `flowchart_type: "browser"` (hexagon)
|
||||
- Connect from the managing parent node to the sub-agent node
|
||||
- Sub-agent nodes must be **leaf nodes** — NO outgoing edges to other nodes
|
||||
- At build time, browser/GCU nodes are dissolved into the parent's \
|
||||
`sub_agents` list, just like decision nodes are dissolved into criteria
|
||||
|
||||
**CRITICAL: GCU nodes (`node_type: "gcu"`) are ALWAYS sub-agents.** \
|
||||
They MUST NOT appear in the linear flow. NEVER chain GCU nodes \
|
||||
sequentially (A → gcu1 → gcu2 → B is WRONG). Instead, attach them \
|
||||
as leaves to the parent that orchestrates them:
|
||||
Browser nodes are regular `event_loop` nodes with browser tools \
|
||||
(from the gcu-tools MCP server) in their tool list. They are wired \
|
||||
into the graph with edges like any other node:
|
||||
```
|
||||
WRONG: intake → gcu_find_prospect → gcu_scan_mutuals → check_results
|
||||
WRONG: decision_node → gcu_node (as a yes/no branch)
|
||||
RIGHT: intake (sub_agents: [gcu_find, gcu_scan]) → check_results
|
||||
research → browser_scan → analyze_results
|
||||
```
|
||||
The parent node delegates to its GCU sub-agents and collects results. \
|
||||
The main flow continues from the parent, not from the GCU node. \
|
||||
GCU nodes MUST NOT be children of decision nodes — decision nodes \
|
||||
dissolve at build time, which would leave the GCU as a dangling \
|
||||
workflow step.
|
||||
Use `tools: {policy: "all"}` to give browser nodes access to all \
|
||||
browser tools, or list specific ones with `policy: "explicit"`.
|
||||
|
||||
**How to show delegation in the flowchart:**
|
||||
```
|
||||
research → (deep_searcher) ← browser/GCU node, leaf
|
||||
research → [Enough results?] ← decision node
|
||||
```
|
||||
After dissolution: `research` node gets `sub_agents: ["deep_searcher"]` \
|
||||
and `success_criteria: "Enough results?"`.
|
||||
|
||||
If the worker agent start from some initial input it is okay. \
|
||||
The queen(you) owns intake: you gathers user requirements, then calls \
|
||||
If the worker agent starts from some initial input it is okay. \
|
||||
The queen(you) owns intake: you gather user requirements, then call \
|
||||
`run_agent_with_input(task)` with a structured task description. \
|
||||
When building the agent, design the entry node's `input_keys` to \
|
||||
match what the queen will provide at run time. Worker nodes should \
|
||||
@@ -423,14 +382,14 @@ You MUST get explicit user approval before ANY code is generated.
|
||||
2. **WAIT for user response.** Do NOT proceed without it.
|
||||
3. Handle the response:
|
||||
- If **Approve / Proceed**: Call confirm_and_build(), then \
|
||||
initialize_and_build_agent(agent_name, nodes)
|
||||
confirm_and_build(agent_name)
|
||||
- If **Adjust scope**: Discuss changes, update the draft with \
|
||||
save_agent_draft() again, and re-ask
|
||||
- If **More questions**: Answer them honestly, then ask again
|
||||
- If **Reconsider**: Discuss alternatives. If they decide to proceed, \
|
||||
that's their informed choice
|
||||
|
||||
**NEVER call initialize_and_build_agent without first calling \
|
||||
**NEVER call confirm_and_build without first calling \
|
||||
confirm_and_build().** The system will block the transition if you try.
|
||||
"""
|
||||
|
||||
@@ -489,53 +448,75 @@ When a user says "my agent is failing" or "debug this agent":
|
||||
## 5. Implement
|
||||
|
||||
**You should only reach this step after the user has approved the draft design \
|
||||
in the planning phase. The draft metadata will pre-populate descriptions, \
|
||||
goals, success criteria, and node metadata in the generated files.**
|
||||
and you have called `confirm_and_build(agent_name="my_agent")`.**
|
||||
|
||||
Call `initialize_and_build_agent(agent_name, nodes)` to generate all package \
|
||||
files. The agent_name must be snake_case (e.g., "my_agent"). Pass node names \
|
||||
as comma-separated string (e.g., "gather,process,review").
|
||||
The tool creates: config.py, nodes/__init__.py, agent.py, \
|
||||
__init__.py, __main__.py, mcp_servers.json, tests/conftest.py.
|
||||
`confirm_and_build` created the agent directory (returned in agent_path). \
|
||||
Now write the complete agent config directly:
|
||||
|
||||
The generated files are **structurally complete** with correct imports, \
|
||||
class definition, `validate()` method, `default_agent` export, and \
|
||||
`__init__.py` re-exports. They pass validation as-is.
|
||||
```
|
||||
write_file("<colony_path>/agent.json", <complete JSON config>)
|
||||
```
|
||||
|
||||
`mcp_servers.json` is auto-generated with hive-tools as the default. \
|
||||
Do NOT manually create or overwrite `mcp_servers.json`.
|
||||
The agent.json must include ALL of these in one write:
|
||||
- `name`, `version`, `description`
|
||||
- `goal` with `description`, `success_criteria`, `constraints`
|
||||
- `identity_prompt` (agent-level behavior)
|
||||
- `nodes` — each with `id`, `description`, `system_prompt`, `tools`, \
|
||||
`input_keys`, `output_keys`, `success_criteria`
|
||||
- `edges` — connecting all nodes with proper conditions
|
||||
- `entry_node`, `terminal_nodes`
|
||||
- `mcp_servers` — REQUIRED. Always include all three: \
|
||||
`[{"name": "hive-tools"}, {"name": "gcu-tools"}, {"name": "files-tools"}]`
|
||||
- `loop_config` — `max_iterations`, `max_context_tokens`
|
||||
|
||||
### Customizing generated files
|
||||
**Write the COMPLETE config in one `write_file` call. No TODOs, no placeholders.** \
|
||||
The queen writes final production-ready system prompts directly.
|
||||
|
||||
**CRITICAL: Use `edit_file` to customize TODO placeholders. \
|
||||
NEVER use `write_file` to rewrite generated files from scratch. \
|
||||
Rewriting breaks imports, class structure, and causes validation failures.**
|
||||
**There are NO Python files.** The framework loads agent.json directly.
|
||||
|
||||
Safe to edit with `edit_file`:
|
||||
- System prompts, tools, input_keys, output_keys, success_criteria in \
|
||||
nodes/__init__.py
|
||||
- Goal description, success criteria values, constraint values, edge \
|
||||
definitions, identity_prompt in agent.py
|
||||
- CLI options in __main__.py
|
||||
- For triggers (timers/webhooks), add entries to triggers.json in the \
|
||||
agent's export directory
|
||||
MCP servers are loaded from the global registry by name. Available servers:
|
||||
- `hive-tools` — web search, email, CRM, calendar, 100+ integrations
|
||||
- `gcu-tools` — browser automation (click, type, navigate, screenshot)
|
||||
- `files-tools` — file I/O (read, write, edit, search, list)
|
||||
|
||||
Do NOT modify or rewrite:
|
||||
- Import statements at top of agent.py (they are correct)
|
||||
- The agent class definition, `validate()`, `_build_graph()`, `_setup()`, \
|
||||
or lifecycle methods (start/stop/run)
|
||||
- `__init__.py` exports (all required variables are already re-exported)
|
||||
- `default_agent = ClassName()` at bottom of agent.py
|
||||
**Template variables:** Add a `variables:` section at the top of agent.json \
and use `{{variable_name}}` in system prompts for config injection:
```yaml
variables:
spreadsheet_id: "1ZVx..."
nodes:
- id: start
system_prompt: |
Use spreadsheet: {{spreadsheet_id}}
```
|
||||
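A minimal sketch of the `{{variable_name}}` injection described above; the real loader's substitution rules (escaping, missing keys) are not specified in this diff, so this regex version is an assumption:

```python
import re

def render_prompt(template: str, variables: dict[str, str]) -> str:
    # Replace {{name}} with the configured value; leave unknown names untouched.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(variables.get(m.group(1), m.group(0))),
                  template)

print(render_prompt("Use spreadsheet: {{spreadsheet_id}}", {"spreadsheet_id": "1ZVx..."}))
```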
|
||||
### Tool access in nodes
|
||||
|
||||
Each node declares its tool access policy:
|
||||
```yaml
|
||||
# Explicit list (recommended)
|
||||
tools:
|
||||
policy: explicit
|
||||
allowed: [web_search, write_file]
|
||||
|
||||
# All tools (for browser automation nodes)
|
||||
tools:
|
||||
policy: all
|
||||
|
||||
# No tools (for handoff/summary nodes)
|
||||
tools:
|
||||
policy: none
|
||||
```
|
||||
|
||||
## 6. Verify and Load
|
||||
|
||||
Call `validate_agent_package("{name}")` after initialization. \
|
||||
It runs structural checks (class validation, graph validation, tool \
|
||||
validation, tests) and returns a consolidated result. If anything \
|
||||
fails: read the error, fix with edit_file, re-validate. Up to 3x.
|
||||
fails: read the error, fix with read_file+write_file, re-validate. Up to 3x.
|
||||
|
||||
When validation passes, immediately call \
|
||||
`load_built_agent("exports/{name}")` to load the agent into the \
|
||||
`load_built_agent("<agent_path>")` to load the agent into the \
|
||||
session. This switches to STAGING phase and shows the graph in the \
|
||||
visualizer. Do NOT wait for user input between validation and loading.
|
||||
"""
|
||||
@@ -548,52 +529,54 @@ _package_builder_knowledge = _shared_building_knowledge + _planning_knowledge +
|
||||
# Queen-specific: extra tool docs, behavior, phase 7, style
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
# -- Phase-specific identities --
|
||||
# -- Character core (immutable across all phases) --
|
||||
|
||||
_queen_identity_planning = """\
|
||||
You are an experienced, responsible and curious Solution Architect. \
|
||||
"Queen" is the internal alias. \
|
||||
You ask smart questions to guide user to the solution \
|
||||
You are in PLANNING phase — your job is to either: \
|
||||
(a) understand what the user wants and design a new agent, or \
|
||||
(b) diagnose issues with an existing agent, discuss a fix plan with the user, \
|
||||
then transition to building to implement. \
|
||||
You have read-only tools for exploration but no write/edit tools. \
|
||||
Focus on conversation, research, and design. \
|
||||
_queen_character_core = """\
|
||||
You are the advisor defined in <core_identity> above. Stay in character.
|
||||
|
||||
Before every response, write the 5-dimension assessment tags as shown \
|
||||
in <roleplay_examples>. These tags are stripped from user view but kept \
|
||||
in conversation history -- you will see them on subsequent turns:
|
||||
<social_distance> <context> <mood_filter> <physical_presence> <language_engine>
|
||||
Then write your visible response. Direct, in character, no preamble.
|
||||
|
||||
You remember people. When you've worked with someone before, build on \
|
||||
what you know. The instructions that follow tell you what to DO in each \
|
||||
phase. Your identity tells you WHO you are.\
|
||||
"""
|
||||
|
||||
# -- Phase-specific work roles (what you DO, not who you ARE) --
|
||||
|
||||
_queen_role_planning = """\
|
||||
You are in PLANNING phase. Your work: understand what the user wants, \
|
||||
research available tools, and design the agent architecture. \
|
||||
You have read-only tools — no write/edit. Focus on conversation, \
|
||||
research, and design. \
|
||||
You MUST use ask_user / ask_user_multiple tools for ALL questions — \
|
||||
never ask questions in plain text without calling the tool.\
|
||||
"""
|
||||
|
||||
_queen_identity_building = """\
|
||||
You are an experienced, responsible and curious Solution Architect. \
|
||||
"Queen" is the internal alias.\
|
||||
You design and build production-ready agent systems \
|
||||
from natural language requirements. You understand the Hive framework at the \
|
||||
source code level and create agents that are robust, well-tested, and follow \
|
||||
best practices. You collaborate with users to refine requirements, assess fit, \
|
||||
and deliver complete solutions. \
|
||||
You design and build the agent to do the job but don't do the job on your own
|
||||
_queen_role_building = """\
|
||||
You are in BUILDING phase. Your work: implement the approved design as \
|
||||
production-ready code, validate it, and load the agent for staging. \
|
||||
You have full coding tools. \
|
||||
You design and build the agent to do the job but don't do the job yourself.\
|
||||
"""
|
||||
|
||||
_queen_identity_staging = """\
|
||||
You are a Solution Engineer preparing an agent for deployment. \
|
||||
"Queen" is your internal alias. \
|
||||
The agent is loaded and ready. \
|
||||
Your role is to verify configuration, confirm credentials, and ensure the user \
|
||||
understands what the agent will do. You guide the user through the final checks \
|
||||
before execution.
|
||||
_queen_role_staging = """\
|
||||
You are in STAGING phase. The agent is loaded and ready. \
|
||||
Your work: verify configuration, confirm credentials, and launch \
|
||||
when the user is ready.\
|
||||
"""
|
||||
|
||||
_queen_identity_running = """\
|
||||
You are a Solution Engineer running agents on behalf of the user. \
|
||||
"Queen" is your internal alias. You monitor execution, handle \
|
||||
escalations when the agent gets stuck, and care deeply about outcomes. When the \
|
||||
agent finishes, you report results clearly and help the user decide what to do next.
|
||||
_queen_role_running = """\
|
||||
You are in RUNNING phase. The agent is executing. \
|
||||
Your work: monitor progress, handle escalations when the agent gets stuck, \
|
||||
and report outcomes clearly. Help the user decide what to do next.\
|
||||
"""
|
||||
|
||||
_queen_identity_incubating = """\
|
||||
You are a Solution Engineer in INCUBATING mode. \
|
||||
"Queen" is your internal alias. The worker has finished executing and is still loaded. \
|
||||
_queen_identity_editing = """\
|
||||
You are in EDITING mode. The worker has finished executing and is still loaded. \
|
||||
You can tweak configuration, inject messages, and re-run with different input \
|
||||
without rebuilding. If a deeper change is needed (code edits, new tools), \
|
||||
escalate to BUILDING via stop_graph_and_edit or to PLANNING via stop_graph_and_plan.
|
||||
@@ -627,13 +610,11 @@ document, database, subprocess, etc.) with unique shapes and colors. Set \
|
||||
flowchart_type on a node to override. Nodes need only an id. \
|
||||
Use decision nodes (flowchart_type: "decision", with decision_clause and \
|
||||
labeled yes/no edges) to make conditional branching explicit. \
|
||||
GCU/sub-agent nodes (node_type: "gcu") are auto-detected as browser \
|
||||
hexagons — connect them as leaf nodes to their parent.
|
||||
- confirm_and_build() — Record user confirmation of the draft. Dissolves \
|
||||
planning-only nodes (decision → predecessor criteria; browser/GCU → \
|
||||
predecessor sub_agents list). Call this ONLY after the user explicitly \
|
||||
approves via ask_user.
|
||||
- initialize_and_build_agent(agent_name?, nodes?) — Scaffold the agent package \
|
||||
- confirm_and_build(agent_name) — Scaffold the agent package \
|
||||
and transition to BUILDING phase. For new agents, this REQUIRES \
|
||||
save_agent_draft() + confirm_and_build() first. The draft metadata is used to \
|
||||
pre-populate the generated files. Without agent_name: transition to BUILDING \
|
||||
@@ -643,16 +624,14 @@ to fix the currently loaded agent (no draft required).
|
||||
- load_built_agent(agent_path) — Load an existing agent and switch to STAGING \
|
||||
phase. Only use this when the user explicitly asks to work with an existing agent \
|
||||
(e.g. "load my_agent", "run the research agent"). Confirm with the user first.
|
||||
- save_global_memory(category, description, content, name?) — Save durable \
|
||||
cross-queen memory about the user only (profile, preferences, environment, feedback)
|
||||
|
||||
## Workflow summary
|
||||
1. Understand requirements → discover tools → design graph
|
||||
2. Call save_agent_draft() to create visual draft → present to user
|
||||
3. Call ask_user() to get explicit approval
|
||||
4. Call confirm_and_build() to record approval
|
||||
5. Call initialize_and_build_agent() to scaffold and start building
|
||||
For diagnosis of existing agents, call initialize_and_build_agent() \
|
||||
5. Call confirm_and_build() to scaffold and start building
|
||||
For diagnosis of existing agents, call confirm_and_build() \
|
||||
(no args) after agreeing on a fix plan with the user.
|
||||
"""
|
||||
|
||||
@@ -676,8 +655,6 @@ updated flowchart immediately. Use this when you make structural changes \
|
||||
restored (with decision/browser nodes intact) so you can edit it. Use \
|
||||
when the user wants to change integrations, swap tools, rethink the \
|
||||
flow, or discuss any design changes before you build them.
|
||||
- save_global_memory(category, description, content, name?) — Save durable \
|
||||
cross-queen memory about the user only
|
||||
|
||||
When you finish building an agent, call load_built_agent(path) to stage it.
|
||||
"""
|
||||
@@ -688,19 +665,13 @@ _queen_tools_staging = """
|
||||
The agent is loaded and ready to run. You can inspect it and launch it:
|
||||
- Read-only: read_file, list_directory, search_files, run_command
|
||||
- list_credentials(credential_id?) — Verify credentials are configured
|
||||
- get_graph_status(focus?) — Brief status. Drill in with focus: memory, tools, issues, progress
|
||||
- get_graph_status(focus?) — Brief status
|
||||
- run_agent_with_input(task) — Start the worker and switch to RUNNING phase
|
||||
- stop_graph_and_plan() — Go to PLANNING phase to discuss changes with the user \
|
||||
first (DEFAULT for most modification requests)
|
||||
- stop_graph_and_edit() — Go to BUILDING phase for immediate, specific fixes
|
||||
- set_trigger(trigger_id, trigger_type?, trigger_config?) — Activate a trigger (timer)
|
||||
- remove_trigger(trigger_id) — Deactivate a trigger
|
||||
- list_triggers() — List all triggers and their active/inactive status
|
||||
- save_global_memory(category, description, content, name?) — Save durable \
|
||||
cross-queen memory about the user only
|
||||
- set_trigger / remove_trigger / list_triggers — Timer management
|
||||
|
||||
You do NOT have write tools. To modify the agent, prefer \
|
||||
stop_graph_and_plan() unless the user gave a specific instruction.
|
||||
You do NOT have write tools or backward transition tools in staging. \
|
||||
To modify the agent, run it first — after it finishes you enter EDITING \
|
||||
phase where you can escalate to building or planning.
|
||||
"""
|
||||
|
||||
_queen_tools_running = """
|
||||
@@ -708,156 +679,64 @@ _queen_tools_running = """
|
||||
|
||||
The worker is running. You have monitoring and lifecycle tools:
|
||||
- Read-only: read_file, list_directory, search_files, run_command
|
||||
- get_graph_status(focus?) — Brief status. Drill in: activity, memory, tools, issues, progress
|
||||
- get_graph_status(focus?) — Brief status
|
||||
- inject_message(content) — Send a message to the running worker
|
||||
- get_worker_health_summary() — Read the latest health data
|
||||
- stop_graph() — Stop the worker and return to STAGING phase, then ask the user what to do next
|
||||
- stop_graph_and_plan() — Stop and switch to PLANNING phase to discuss changes \
|
||||
with the user first (DEFAULT for most modification requests)
|
||||
- stop_graph_and_edit() — Stop and switch to BUILDING phase for specific fixes
|
||||
- stop_graph() — Stop the worker immediately
|
||||
- switch_to_editing() — Stop the worker and enter EDITING phase \
|
||||
for config tweaks, re-runs, or escalation to building/planning
|
||||
- run_agent_with_input(task) — Re-run the worker with new input
|
||||
- set_trigger / remove_trigger / list_triggers — Timer management
|
||||
|
||||
You do NOT have write tools. To modify the agent, prefer \
|
||||
stop_graph_and_plan() unless the user gave a specific instruction. \
|
||||
To just stop without modifying, call stop_graph().
|
||||
- stop_graph_and_edit() — Stop the worker and switch back to BUILDING phase
|
||||
- set_trigger(trigger_id, trigger_type?, trigger_config?) — Activate a trigger (timer)
|
||||
- remove_trigger(trigger_id) — Deactivate a trigger
|
||||
- list_triggers() — List all triggers and their active/inactive status
|
||||
- save_global_memory(category, description, content, name?) — Save durable \
|
||||
cross-queen memory about the user only
|
||||
|
||||
You do NOT have write tools or agent construction tools. \
|
||||
If you need to modify the agent, call stop_graph_and_edit() to switch back \
|
||||
to BUILDING phase. To stop the worker and ask the user what to do next, call \
|
||||
stop_graph() to return to STAGING phase.
|
||||
When the worker finishes on its own, you automatically move to EDITING \
|
||||
phase. You can also call switch_to_editing() to stop early and tweak.
|
||||
"""
|
||||
|
||||
_queen_tools_incubating = """
|
||||
# Tools (INCUBATING phase)
|
||||
_queen_tools_editing = """
|
||||
# Tools (EDITING phase)
|
||||
|
||||
The worker has finished executing and is still loaded. You can tweak and re-run:
|
||||
- Read-only: read_file, list_directory, search_files, run_command
|
||||
- get_graph_status(focus?) — Brief status of the loaded agent
|
||||
- inject_message(content) — Send a config tweak or prompt adjustment to the worker
|
||||
- inject_message(content) — Send a config tweak or prompt adjustment
|
||||
- run_agent_with_input(task) — Re-run the worker with new input
|
||||
- get_worker_health_summary() — Review last run's health data
|
||||
- set_trigger / remove_trigger / list_triggers — Timer management
|
||||
|
||||
To escalate when a deeper change is needed (code edits, new tools, redesign):
|
||||
- stop_graph_and_edit() — Kill the worker and switch to BUILDING phase
|
||||
- stop_graph_and_plan() — Kill the worker and switch to PLANNING phase
|
||||
|
||||
You do NOT have write/edit file tools. If you need to modify code, \
|
||||
call stop_graph_and_edit() to switch to BUILDING phase.
|
||||
You do NOT have write/edit file tools or backward transition tools. \
|
||||
You can only re-run or tweak from this phase.
|
||||
"""
|
||||
|
||||
_queen_behavior_incubating = """
|
||||
## Incubating — tweak and re-run
|
||||
_queen_behavior_editing = """
|
||||
## Editing — tweak and re-run
|
||||
|
||||
The worker finished. Review the results and decide:
|
||||
1. **Re-run** with different input: call run_agent_with_input(task)
|
||||
2. **Inject adjustments**: use inject_message to tweak prompts or config
|
||||
3. **Escalate**: if the agent needs code changes, call stop_graph_and_edit()
|
||||
|
||||
Do NOT suggest rebuilding unless the user explicitly asks. Default to re-running.
|
||||
Do NOT suggest rebuilding. You cannot go back to building or planning \
|
||||
from this phase. Default to re-running with adjusted input.
|
||||
Report the last run's results to the user and ask what they want to do next.
|
||||
"""
|
||||
|
||||
# -- Behavior shared across all phases --
|
||||
|
||||
_queen_behavior_always = """
|
||||
# Behavior
|
||||
# System Rules
|
||||
|
||||
## Images attached by the user
|
||||
## ask_user (CRITICAL)
|
||||
|
||||
Users can attach images directly to their chat messages. When you see an \
|
||||
image in the conversation, analyze it using your native vision capability — \
|
||||
do NOT say you cannot see images or that you lack access to files. The image \
|
||||
is embedded in the message; no tool call is needed to view it. Describe what \
|
||||
you see, answer questions about it, and use the visual content to inform your \
|
||||
response just as you would text.
|
||||
Any response that expects user input MUST end with ask_user or \
|
||||
ask_user_multiple. The system cannot detect you're waiting otherwise. \
|
||||
Never write questions as plain text without the tool call. \
|
||||
For 2+ questions, use ask_user_multiple so users answer in one go. \
|
||||
Keep your text to a brief intro -- the widget renders the questions. \
|
||||
Always provide 2-4 short options; users can type custom responses.
|
||||
|
||||
## CRITICAL RULE — ask_user / ask_user_multiple
|
||||
## Images
|
||||
|
||||
Every response that ends with a question, a prompt, or expects user \
|
||||
input MUST finish with a call to ask_user or ask_user_multiple. \
|
||||
The system CANNOT detect that you are waiting for \
|
||||
input unless you call one of these tools. You MUST call it as the LAST \
|
||||
action in your response.
|
||||
|
||||
NEVER end a response with a question in text without calling ask_user. \
|
||||
NEVER rely on the user seeing your text and replying — call ask_user. \
|
||||
NEVER list options as text bullets — the tool renders interactive buttons.
|
||||
|
||||
**When you have 2+ questions**, use ask_user_multiple instead of ask_user. \
|
||||
This renders all questions at once so the user answers in one interaction \
|
||||
instead of going back and forth. ALWAYS prefer ask_user_multiple when \
|
||||
you need to clarify multiple things. \
|
||||
**IMPORTANT: When using ask_user_multiple, do NOT repeat the questions \
|
||||
in your text response.** The widget renders the questions with options — \
|
||||
duplicating them in text wastes the user's time and delays the widget \
|
||||
appearing. Keep your text to a brief context/intro sentence only.
|
||||
|
||||
Always provide 2-4 short options that cover the most likely answers. \
|
||||
The user can always type a custom response.
|
||||
|
||||
### WRONG — never do this:
|
||||
```
|
||||
I need a few details:
|
||||
- Documentation Source: Where should the agent look?
|
||||
- Trigger: Should the agent poll or get a URL?
|
||||
- Review Channel: Slack, Email, or Sheets?
|
||||
|
||||
Which of these would you like to define first?
|
||||
1. Documentation source
|
||||
2. Trigger
|
||||
3. Review channel
|
||||
```
|
||||
This lists questions as plain text with NO tool call — the user has no \
|
||||
interactive widget and the system doesn't know you're waiting for input.
|
||||
|
||||
### RIGHT — always do this:
|
||||
Write a brief intro (1-2 sentences), then call the tool:
|
||||
- ask_user_multiple(questions=[
|
||||
{"id": "docs", "prompt": "Where should the agent find answers?",
|
||||
"options": ["GitHub repo", "Documentation website", "Internal wiki"]},
|
||||
{"id": "trigger", "prompt": "How should questions be discovered?",
|
||||
"options": ["Poll search automatically", "I provide a URL"]},
|
||||
{"id": "review", "prompt": "Where to send drafted responses?",
|
||||
"options": ["Slack", "Email", "Google Sheets"]}
|
||||
])
|
||||
|
||||
Examples (single question):
|
||||
- ask_user("Ready to proceed?",
|
||||
["Yes, go ahead", "Let me change something"])
|
||||
|
||||
## Greeting
|
||||
|
||||
When the user greets you, respond concisely (under 10 lines) with worker \
|
||||
status only:
|
||||
1. Use plain, user-facing wording about load/run state; avoid internal phase \
|
||||
labels ("staging phase", "building phase", "running phase") unless the user \
|
||||
explicitly asks for phase details.
|
||||
2. If loaded, prefer this format: "<graph_name> has been loaded. <one sentence \
|
||||
on what it does from Worker Profile>."
|
||||
3. Do NOT include identity details unless the user explicitly asks about identity.
|
||||
4. THEN call ask_user to prompt them — do NOT just write text.
|
||||
5. Preferred loaded example:
|
||||
local_business_extractor (the agent name) has been loaded. It finds local businesses on \
|
||||
Google Maps, extracts contact details, and syncs them to Google Sheets.
|
||||
ask_user("Do you want to run it?", ["Yes, run it", "Check credentials first",
|
||||
"Modify the worker"])
|
||||
|
||||
## When the user asks about identity and responsibility
|
||||
|
||||
Only answer identity questions when the user explicitly asks (for example: "who are you?", \
|
||||
"what is your identity?", "what does Queen mean?").
|
||||
1. Use the alias "Queen" and "Worker" in the response.
|
||||
2. Explain role/responsibility for the current phase:
|
||||
- PLANNING: understand requirements, negotiate scope, design agent architecture.
|
||||
- BUILDING: architect and implement agents.
|
||||
- STAGING: verify readiness, credentials, and launch conditions.
|
||||
- RUNNING: monitor execution, handle escalations, and report outcomes.
|
||||
3. Keep identity responses concise and do NOT include extra process details.
|
||||
Users can attach images to messages. Analyze them directly using your \
|
||||
vision capability -- the image is embedded, no tool call needed.
|
||||
"""
|
||||
|
||||
# -- PLANNING phase behavior --
|
||||
@@ -878,7 +757,7 @@ that changes the structure, call save_agent_draft() again so they see the \
|
||||
update in real-time. The flowchart is a live collaboration tool.
|
||||
8. When the design is stable, use ask_user to get explicit approval
|
||||
9. Call confirm_and_build() after the user approves
|
||||
10. Call initialize_and_build_agent(agent_name, nodes) to scaffold and start building
|
||||
10. Call confirm_and_build(agent_name) to scaffold and start building
|
||||
|
||||
**The flowchart is your shared whiteboard.** Don't describe changes in text \
|
||||
and then ask "should I update the draft?" — just update it. If the user says \
|
||||
@@ -889,7 +768,7 @@ see every structural change reflected in the visualizer as you discuss it.
|
||||
**CRITICAL: Planning → Building boundary.** You MUST get explicit user \
|
||||
confirmation before moving to building. The sequence is:
|
||||
save_agent_draft() → iterate with user → ask_user() → confirm_and_build() → \
|
||||
initialize_and_build_agent()
|
||||
confirm_and_build()
|
||||
Skipping any of these steps will be blocked by the system.
|
||||
|
||||
Remember: DO NOT write or edit any files yet. This is a read-only exploration \
|
||||
@@ -905,7 +784,7 @@ your priority is diagnosis, not new design:
|
||||
2. Summarize the root cause to the user
|
||||
3. Propose a fix plan (what to change, what behavior to adjust)
|
||||
4. Get user approval via ask_user
|
||||
5. Call initialize_and_build_agent() (no args) to transition to building and implement the fix
|
||||
5. Call confirm_and_build() (no args) to transition to building and implement the fix
|
||||
|
||||
Do NOT start the full discovery workflow (tool discovery, gap analysis) in \
|
||||
diagnosis mode — you already have a built agent, you just need to fix it.
|
||||
@@ -914,26 +793,10 @@ diagnosis mode — you already have a built agent, you just need to fix it.
|
||||
_queen_memory_instructions = """
|
||||
## Your Memory
|
||||
|
||||
Relevant colony memories from this queen session may appear in context under \
|
||||
"--- Colony Memories ---". Relevant global user memories may appear under \
|
||||
"--- Global Memories ---".
|
||||
|
||||
Colony memories are shared with the worker for this queen session. Use them \
|
||||
for continuity about what this user is trying to do, what has worked, and \
|
||||
what the colony has learned together.
|
||||
|
||||
Global memories are shared across queens and are only for durable knowledge \
|
||||
about the user: who they are, their preferences, their environment, and \
|
||||
their feedback.
|
||||
|
||||
Memories older than 1 day include a staleness warning. Treat these as \
|
||||
point-in-time observations — verify current details before asserting them \
|
||||
as fact.
|
||||
|
||||
You do NOT need to manually save or recall colony memories. A background \
|
||||
reflection agent automatically extracts colony learnings from each \
|
||||
conversation turn. Use `save_global_memory` only when you learn something \
|
||||
durable about the user that should help future queens.
|
||||
Relevant global memories about the user may appear at the end of this prompt \
|
||||
under "--- Global Memories ---". These are automatically maintained across \
|
||||
sessions. Use them to inform your responses but verify stale claims before \
|
||||
asserting them as fact.
|
||||
"""
|
||||
|
||||
_queen_behavior_always = _queen_behavior_always + _queen_memory_instructions
|
||||
@@ -957,7 +820,7 @@ delegate agent construction to the worker, even as a "research" subtask.
|
||||
## Keeping the flowchart in sync during building
|
||||
|
||||
When you make structural changes to the agent (add/remove/rename nodes, \
|
||||
change edges, modify sub-agent assignments), call save_agent_draft() to \
|
||||
change edges, modify node connections), call save_agent_draft() to \
|
||||
update the flowchart. During building, this auto-dissolves planning-only \
|
||||
nodes without needing user re-confirmation. The user sees the updated \
|
||||
flowchart immediately.
|
||||
@@ -976,15 +839,15 @@ user says "replan", "go back", "let's redesign", "change the approach", \
|
||||
|
||||
## CRITICAL — Graph topology errors require replanning, not code edits
|
||||
|
||||
If you discover that the agent graph has structural problems — GCU nodes \
|
||||
If you discover that the agent graph has structural problems — browser nodes \
|
||||
in the linear flow, missing edges, wrong node connections, incorrect \
|
||||
sub-agent assignments — you MUST call replan_agent() and fix the draft. \
|
||||
Do NOT attempt to fix topology by editing agent.py directly. The graph \
|
||||
node connections — you MUST call replan_agent() and fix the draft. \
|
||||
Do NOT attempt to fix topology by editing agent.json directly. The graph \
|
||||
structure is defined by the draft → dissolution → code-gen pipeline. \
|
||||
Editing code to rewire nodes bypasses the flowchart and creates drift \
|
||||
between what the user sees and what the code does.
|
||||
Editing the config to rewire nodes bypasses the flowchart and creates drift \
|
||||
between what the user sees and what the config does.
|
||||
|
||||
**WRONG:** "Let me fix agent.py to remove GCU nodes from edges..."
|
||||
**WRONG:** "Let me fix agent.json to remove browser nodes from edges..."
|
||||
**RIGHT:** Call replan_agent(), fix the draft with save_agent_draft(), \
|
||||
get user approval, then confirm_and_build() → the corrected code is \
|
||||
generated automatically.
|
||||
@@ -1002,8 +865,7 @@ prompt). It can ONLY do what its goal and tools allow.
|
||||
run_agent_with_input(task) (if in staging) or load then run (if in building)
|
||||
- Anything else → do it yourself. Do NOT reframe user requests into \
|
||||
subtasks to justify delegation.
|
||||
- Building, modifying, or configuring agents is ALWAYS your job. \
|
||||
Use stop_graph_and_edit when you need to.
|
||||
- Building, modifying, or configuring agents is ALWAYS your job.
|
||||
|
||||
## When the user says "run", "execute", or "start" (without specifics)
|
||||
|
||||
@@ -1027,7 +889,8 @@ or assume what the user wants. Use ask_user to collect the task details \
|
||||
compose a structured task description from their input and call \
|
||||
run_agent_with_input(task). The worker has no intake node — it receives \
|
||||
your task and starts processing.
|
||||
- If the user wants to modify the agent, call stop_graph_and_edit().
|
||||
- If the user wants to modify the agent, wait for EDITING phase \
|
||||
(after worker finishes) where you will have stop_graph_and_edit().
|
||||
|
||||
## When idle (worker not running):
|
||||
- Greet the user. Mention what the worker can do in one sentence.
|
||||
@@ -1055,16 +918,15 @@ building something new.
|
||||
|
||||
## Fixing or Modifying the loaded worker
|
||||
|
||||
Use stop_graph_and_plan() when:
|
||||
- The user says "modify", "improve", "fix", or "change" without specifics
|
||||
- The request is vague or open-ended ("make it better", "it's not working right")
|
||||
- You need to understand the user's intent before making changes
|
||||
- The issue requires inspecting logs, checkpoints, or past runs first
|
||||
During RUNNING phase, you cannot directly switch to building or planning. \
|
||||
When the worker finishes, you move to EDITING where you can:
|
||||
- Re-run with different input via run_agent_with_input(task)
|
||||
- Tweak config via inject_message(content)
|
||||
- Escalate to stop_graph_and_edit() or stop_graph_and_plan() if deeper changes are needed
|
||||
|
||||
Use stop_graph_and_edit() only when:
|
||||
- The user gave a specific, concrete instruction ("add save_data to the gather node")
|
||||
- You already discussed the fix in a previous planning session
|
||||
- The change is trivial and unambiguous (rename, toggle a flag)
|
||||
During STAGING or EDITING phase:
|
||||
- Use stop_graph_and_plan() when the request is vague or needs discussion
|
||||
- Use stop_graph_and_edit() when the user gave a specific, concrete instruction
|
||||
|
||||
## Trigger Management
|
||||
|
||||
@@ -1111,18 +973,15 @@ You wake up when:
|
||||
If the user asks for progress, call get_graph_status() ONCE and report. \
|
||||
If the summary mentions issues, follow up with get_graph_status(focus="issues").
|
||||
|
||||
## Subagent delegations (browser automation, GCU)
|
||||
## Browser automation nodes
|
||||
|
||||
When the worker delegates to a subagent (e.g., GCU browser automation), expect it \
|
||||
to take 2-5 minutes. During this time:
|
||||
- Progress will show 0% — this is NORMAL. The subagent only calls set_output at the end.
|
||||
- Check get_graph_status(focus="full") for "subagent_activity" — this shows the \
|
||||
subagent's latest reasoning text and confirms it is making real progress.
|
||||
- Do NOT conclude the subagent is stuck just because progress is 0% or because \
|
||||
you see repeated browser_click/browser_snapshot calls — that is the expected \
|
||||
pattern for web scraping.
|
||||
- Only intervene if: the subagent has been running for 5+ minutes with no new \
|
||||
subagent_activity updates, OR the judge escalates.
|
||||
Browser nodes may take 2-5 minutes for web scraping tasks. During this time:
|
||||
- Progress will show 0% until the node calls set_output at the end.
|
||||
- Check get_graph_status(focus="full") for activity updates.
|
||||
- Do NOT conclude it is stuck just because you see repeated \
|
||||
browser_click/browser_snapshot calls — that is expected for web scraping.
|
||||
- Only intervene if: the node has been running for 5+ minutes with no new \
|
||||
activity updates, OR the judge escalates.
|
||||
|
||||
## Handling worker termination ([WORKER_TERMINAL])
|
||||
|
||||
@@ -1154,11 +1013,11 @@ escalations. If the user gave you instructions (e.g., "just retry on errors", \
|
||||
|
||||
CRITICAL — escalation relay protocol:
|
||||
When an escalation requires user input (auth blocks, human review), the worker \
|
||||
or its subagent is BLOCKED and waiting for your response. You MUST follow this \
|
||||
or is BLOCKED and waiting for your response. You MUST follow this \
|
||||
exact two-step sequence:
|
||||
Step 1: call ask_user() to get the user's answer.
|
||||
Step 2: call inject_message() with the user's answer IMMEDIATELY after.
|
||||
If you skip Step 2, the worker/subagent stays blocked FOREVER and the task hangs. \
|
||||
If you skip Step 2, the worker stays blocked FOREVER and the task hangs. \
|
||||
NEVER respond to the user without also calling inject_message() to unblock \
|
||||
the worker. Even if the user says "skip" or "cancel", you must still relay that \
|
||||
decision via inject_message() so the worker can clean up.
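Illustrative relay (an assumed auth-block escalation — the wording and options are examples only):
- ask_user("The worker is blocked by a login page. How should it proceed?",
    ["I'll log in manually", "Skip this site", "Abort the run"])
- inject_message("User decision: skip this site. Clean up and continue with the remaining work.")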
|
||||
@@ -1179,8 +1038,10 @@ decision via inject_message() so the worker can clean up.
|
||||
|
||||
**Errors / unexpected failures:**
|
||||
- Explain what went wrong in plain terms.
|
||||
- Ask the user: "Fix the agent and retry?" → use stop_graph_and_edit() if yes.
|
||||
- Or offer: "Diagnose the issue" → use stop_graph_and_plan() to investigate first.
|
||||
- Ask the user: "Fix the agent and retry?" → in EDITING phase, \
|
||||
use stop_graph_and_edit().
|
||||
- Or offer: "Diagnose the issue" → in EDITING phase, \
|
||||
use stop_graph_and_plan().
|
||||
- Or offer: "Retry as-is", "Skip this task", "Abort run"
|
||||
- (Skip asking if user explicitly told you to auto-retry or auto-skip errors.)
|
||||
- If the escalation had wait_for_response: inject_message() with the decision.
|
||||
@@ -1200,14 +1061,12 @@ building something new.
|
||||
|
||||
- Call get_graph_status(focus="issues") for more details when needed.
|
||||
|
||||
## Fixing or Modifying the loaded worker
|
||||
## Fixing or Modifying the loaded worker (while running)
|
||||
|
||||
When the user asks to fix, change, modify, or update the loaded worker \
|
||||
(e.g., "change the report node", "add a node", "delete node X"):
|
||||
|
||||
**Default: use stop_graph_and_plan().** Most modification requests need \
|
||||
discussion first. Only use stop_graph_and_edit() when the user gave a \
|
||||
specific, unambiguous instruction or you already agreed on the fix.
|
||||
When the user asks to fix or modify the worker while it is running, \
|
||||
do NOT attempt to switch phases. Wait for the worker to finish — \
|
||||
you will move to EDITING phase automatically. From there you can \
|
||||
use stop_graph_and_edit() or stop_graph_and_plan().
|
||||
|
||||
## Trigger Handling
|
||||
|
||||
@@ -1244,7 +1103,7 @@ _queen_tools_docs = (
|
||||
+ "\n\n### Phase transitions\n"
|
||||
"- save_agent_draft(...) → creates visual-only draft graph (stays in PLANNING)\n"
|
||||
"- confirm_and_build() → records user approval of draft (stays in PLANNING)\n"
|
||||
"- initialize_and_build_agent(agent_name?, nodes?) → scaffolds package + switches to "
|
||||
"- confirm_and_build(agent_name) → scaffolds package + switches to "
|
||||
"BUILDING (requires draft + confirmation for new agents)\n"
|
||||
"- replan_agent() → switches back to PLANNING phase (only when user explicitly requests)\n"
|
||||
"- load_built_agent(path) → switches to STAGING phase\n"
|
||||
@@ -1271,9 +1130,21 @@ Do NOT tell the user to run `python -m {name} run` — run it here.
|
||||
"""
|
||||
|
||||
_queen_style = """
|
||||
# Style
|
||||
- Responsible and thoughtful
|
||||
- Concise. No fluff. Direct. No emojis.
|
||||
# Communication
|
||||
|
||||
## Adaptive Calibration
|
||||
|
||||
Read the user's signals and calibrate your register:
|
||||
- Short responses -> they want brevity. Match it.
|
||||
- "Why?" questions -> they want reasoning. Provide it.
|
||||
- Correct technical terms -> they know the domain. Skip basics.
|
||||
- Terse or frustrated ("just do X") -> acknowledge and simplify.
|
||||
- Exploratory ("what if...", "could we also...") -> slow down and explore.
|
||||
|
||||
If your cross-session memory describes how this person communicates, \
|
||||
start from that -- don't rediscover it.
|
||||
|
||||
## Operational Style
|
||||
- When starting the worker, describe what you told it in one sentence.
|
||||
- When an escalation arrives, lead with severity and recommended action.
|
||||
"""
|
||||
@@ -1299,11 +1170,12 @@ queen_node = NodeSpec(
|
||||
+ _QUEEN_BUILDING_TOOLS
|
||||
+ _QUEEN_STAGING_TOOLS
|
||||
+ _QUEEN_RUNNING_TOOLS
|
||||
+ _QUEEN_INCUBATING_TOOLS
|
||||
+ _QUEEN_EDITING_TOOLS
|
||||
)
|
||||
),
|
||||
system_prompt=(
|
||||
_queen_identity_building
|
||||
_queen_character_core
|
||||
+ _queen_role_building
|
||||
+ _queen_style
|
||||
+ _package_builder_knowledge
|
||||
+ _queen_tools_docs
|
||||
@@ -1319,7 +1191,7 @@ ALL_QUEEN_TOOLS = sorted(
|
||||
+ _QUEEN_BUILDING_TOOLS
|
||||
+ _QUEEN_STAGING_TOOLS
|
||||
+ _QUEEN_RUNNING_TOOLS
|
||||
+ _QUEEN_INCUBATING_TOOLS
|
||||
+ _QUEEN_EDITING_TOOLS
|
||||
)
|
||||
)
|
||||
|
||||
@@ -1330,23 +1202,24 @@ __all__ = [
|
||||
"_QUEEN_BUILDING_TOOLS",
|
||||
"_QUEEN_STAGING_TOOLS",
|
||||
"_QUEEN_RUNNING_TOOLS",
|
||||
"_QUEEN_INCUBATING_TOOLS",
|
||||
# Phase-specific prompt segments (used by session_manager for dynamic prompts)
|
||||
"_queen_identity_planning",
|
||||
"_queen_identity_building",
|
||||
"_queen_identity_staging",
|
||||
"_queen_identity_running",
|
||||
"_queen_identity_incubating",
|
||||
"_QUEEN_EDITING_TOOLS",
|
||||
# Character + phase-specific prompt segments (used by session_manager for dynamic prompts)
|
||||
"_queen_character_core",
|
||||
"_queen_role_planning",
|
||||
"_queen_role_building",
|
||||
"_queen_role_staging",
|
||||
"_queen_role_running",
|
||||
"_queen_identity_editing",
|
||||
"_queen_tools_planning",
|
||||
"_queen_tools_building",
|
||||
"_queen_tools_staging",
|
||||
"_queen_tools_running",
|
||||
"_queen_tools_incubating",
|
||||
"_queen_tools_editing",
|
||||
"_queen_behavior_always",
|
||||
"_queen_behavior_building",
|
||||
"_queen_behavior_staging",
|
||||
"_queen_behavior_running",
|
||||
"_queen_behavior_incubating",
|
||||
"_queen_behavior_editing",
|
||||
"_queen_phase_7",
|
||||
"_queen_style",
|
||||
"_shared_building_knowledge",
|
||||
|
||||
@@ -1,80 +0,0 @@
|
||||
"""Queen thinking hook — HR persona classifier.
|
||||
|
||||
Fires once when the queen enters building mode at session start.
|
||||
Makes a single non-streaming LLM call (acting as an HR Director) to select
|
||||
the best-fit expert persona for the user's request, then returns a persona
|
||||
prefix string that replaces the queen's default "Solution Architect" identity.
|
||||
|
||||
This is designed to activate the model's latent domain expertise — a CFO
|
||||
persona on a financial question, a Lawyer on a legal question, etc.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.llm.provider import LLMProvider
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
_HR_SYSTEM_PROMPT = """\
|
||||
You are an expert HR Director and talent consultant at a world-class firm.
|
||||
A new request has arrived and you must identify which professional's expertise
|
||||
would produce the highest-quality response.
|
||||
|
||||
Reply with ONLY a valid JSON object — no markdown, no prose, no explanation:
|
||||
{"role": "<job title>", "persona": "<2-3 sentence first-person identity statement>"}
|
||||
|
||||
Rules:
|
||||
- Choose from any real professional role: CFO, CEO, CTO, Lawyer, Data Scientist,
|
||||
Product Manager, Security Engineer, DevOps Engineer, Software Architect,
|
||||
HR Director, Marketing Director, Business Analyst, UX Designer,
|
||||
Financial Analyst, Operations Director, Legal Counsel, etc.
|
||||
- The persona statement must be written in first person ("I am..." or "I have...").
|
||||
- Select the role whose domain knowledge most directly applies to solving the request.
|
||||
- If the request is clearly about coding or building software systems, pick Software Architect.
|
||||
- "Queen" is your internal alias — do not include it in the persona.
|
||||
"""
|
||||
|
||||
|
||||
async def select_expert_persona(user_message: str, llm: LLMProvider) -> str:
|
||||
"""Run the HR classifier and return a persona prefix string.
|
||||
|
||||
Makes a single non-streaming acomplete() call with the session LLM.
|
||||
Returns an empty string on any failure so the queen falls back
|
||||
gracefully to its default "Solution Architect" identity.
|
||||
|
||||
Args:
|
||||
user_message: The user's opening message for the session.
|
||||
llm: The session LLM provider.
|
||||
|
||||
Returns:
|
||||
A persona prefix like "You are a CFO. I am a CFO with 20 years..."
|
||||
or "" on failure.
|
||||
"""
|
||||
if not user_message.strip():
|
||||
return ""
|
||||
|
||||
try:
|
||||
response = await llm.acomplete(
|
||||
messages=[{"role": "user", "content": user_message}],
|
||||
system=_HR_SYSTEM_PROMPT,
|
||||
max_tokens=1024,
|
||||
json_mode=True,
|
||||
)
|
||||
raw = response.content.strip()
|
||||
parsed = json.loads(raw)
|
||||
role = parsed.get("role", "").strip()
|
||||
persona = parsed.get("persona", "").strip()
|
||||
if not role or not persona:
|
||||
logger.warning("Thinking hook: empty role/persona in response: %r", raw)
|
||||
return ""
|
||||
result = f"You are a {role}. {persona}"
|
||||
logger.info("Thinking hook: selected persona — %s", role)
|
||||
return result
|
||||
except Exception:
|
||||
logger.warning("Thinking hook: persona classification failed", exc_info=True)
|
||||
return ""
|
||||
@@ -1,25 +1,23 @@
|
||||
"""Shared memory helpers for queen/worker recall and reflection.
|
||||
"""Queen global memory helpers.
|
||||
|
||||
Each memory is an individual ``.md`` file in ``~/.hive/queen/memories/``
|
||||
with optional YAML frontmatter (name, type, description). Frontmatter
|
||||
is a convention enforced by prompt instructions — parsing is lenient and
|
||||
malformed files degrade gracefully (appear in scans with ``None`` metadata).
|
||||
Memory hierarchy::
|
||||
|
||||
Cursor-based incremental processing tracks which conversation messages
|
||||
have already been processed by the reflection agent.
|
||||
~/.hive/memories/
|
||||
global/ # shared across all queens and colonies
|
||||
colonies/{name}/ # colony-scoped memories
|
||||
agents/queens/{name}/ # queen-specific memories
|
||||
agents/{name}/ # per-worker-agent memories
|
||||
|
||||
Each memory is an individual ``.md`` file with optional YAML frontmatter
|
||||
(name, type, description).
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import re
|
||||
import shutil
|
||||
import time
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import date
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -27,54 +25,35 @@ logger = logging.getLogger(__name__)
|
||||
# Constants
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
MEMORY_TYPES: tuple[str, ...] = ("goal", "environment", "technique", "reference", "diary")
|
||||
GLOBAL_MEMORY_CATEGORIES: tuple[str, ...] = ("profile", "preference", "environment", "feedback")
|
||||
|
||||
_HIVE_QUEEN_DIR = Path.home() / ".hive" / "queen"
|
||||
# Legacy shared v2 root. Colony memory now lives under queen sessions.
|
||||
MEMORY_DIR: Path = _HIVE_QUEEN_DIR / "memories"
|
||||
from framework.config import MEMORIES_DIR
|
||||
|
||||
MAX_FILES: int = 200
|
||||
MAX_FILE_SIZE_BYTES: int = 4096 # 4 KB hard limit per memory file
|
||||
|
||||
# How many lines of a memory file to read for header scanning.
|
||||
_HEADER_LINE_LIMIT: int = 30
|
||||
_MIGRATION_MARKER = ".migrated-from-shared-memory"
|
||||
_GLOBAL_MEMORY_CODE_PATTERN = re.compile(
|
||||
r"(/Users/|~/.hive|\.py\b|\.ts\b|\.tsx\b|\.js\b|"
|
||||
r"\b(graph|node|runtime|session|execution|worker|queen|subagent|checkpoint|flowchart)\b)",
|
||||
re.IGNORECASE,
|
||||
)
|
||||
|
||||
# Frontmatter example provided to the reflection agent via prompt.
|
||||
MEMORY_FRONTMATTER_EXAMPLE: list[str] = [
|
||||
"```markdown",
|
||||
"---",
|
||||
"name: {{memory name}}",
|
||||
(
|
||||
"description: {{one-line description — used to decide "
|
||||
"relevance in future conversations, so be specific}}"
|
||||
),
|
||||
f"type: {{{{{', '.join(MEMORY_TYPES)}}}}}",
|
||||
"---",
|
||||
"",
|
||||
(
|
||||
"{{memory content — for feedback/project types, "
|
||||
"structure as: rule/fact, then **Why:** "
|
||||
"and **How to apply:** lines}}"
|
||||
),
|
||||
"```",
|
||||
]
|
||||
|
||||
|
||||
def colony_memory_dir(colony_id: str) -> Path:
|
||||
"""Return the colony memory directory for a queen session."""
|
||||
return _HIVE_QUEEN_DIR / "session" / colony_id / "memory" / "colony"
|
||||
|
||||
|
||||
def global_memory_dir() -> Path:
|
||||
"""Return the queen-global memory directory."""
|
||||
return _HIVE_QUEEN_DIR / "global_memory"
|
||||
"""Return the global memory directory (shared across all queens/colonies)."""
|
||||
return MEMORIES_DIR / "global"
|
||||
|
||||
|
||||
def colony_memory_dir(colony_name: str) -> Path:
|
||||
"""Return the memory directory for a named colony."""
|
||||
return MEMORIES_DIR / "colonies" / colony_name
|
||||
|
||||
|
||||
def queen_memory_dir(queen_name: str = "default") -> Path:
|
||||
"""Return the memory directory for a named queen."""
|
||||
return MEMORIES_DIR / "agents" / "queens" / queen_name
|
||||
|
||||
|
||||
def agent_memory_dir(agent_name: str) -> Path:
|
||||
"""Return the memory directory for a worker agent."""
|
||||
return MEMORIES_DIR / "agents" / agent_name
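For illustration, the helpers above resolve to paths like these (the colony, queen, and agent names are placeholders):

```python
# Illustrative only — the names below are placeholders, not real colonies or agents.
global_memory_dir()                  # MEMORIES_DIR / "global"
colony_memory_dir("support-colony")  # MEMORIES_DIR / "colonies" / "support-colony"
queen_memory_dir()                   # MEMORIES_DIR / "agents" / "queens" / "default"
agent_memory_dir("lead_scraper")     # MEMORIES_DIR / "agents" / "lead_scraper"
```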
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
@@ -108,15 +87,6 @@ def parse_frontmatter(text: str) -> dict[str, str]:
|
||||
return result
|
||||
|
||||
|
||||
def parse_memory_type(raw: str | None) -> str | None:
|
||||
"""Validate *raw* against supported memory categories."""
|
||||
if raw is None:
|
||||
return None
|
||||
normalized = raw.strip().lower()
|
||||
allowed = set(MEMORY_TYPES) | set(GLOBAL_MEMORY_CATEGORIES)
|
||||
return normalized if normalized in allowed else None
|
||||
|
||||
|
||||
def parse_global_memory_category(raw: str | None) -> str | None:
|
||||
"""Validate *raw* against ``GLOBAL_MEMORY_CATEGORIES``."""
|
||||
if raw is None:
|
||||
@@ -165,7 +135,7 @@ class MemoryFile:
|
||||
filename=path.name,
|
||||
path=path,
|
||||
name=fm.get("name"),
|
||||
type=parse_memory_type(fm.get("type")),
|
||||
type=parse_global_memory_category(fm.get("type")),
|
||||
description=fm.get("description"),
|
||||
header_lines=lines,
|
||||
mtime=mtime,
|
||||
@@ -183,7 +153,7 @@ def scan_memory_files(memory_dir: Path | None = None) -> list[MemoryFile]:
|
||||
Files are sorted by modification time (newest first). Dotfiles and
|
||||
subdirectories are ignored.
|
||||
"""
|
||||
d = memory_dir or MEMORY_DIR
|
||||
d = memory_dir or global_memory_dir()
|
||||
if not d.is_dir():
|
||||
return []
|
||||
|
||||
@@ -236,318 +206,30 @@ def build_memory_document(
|
||||
)
|
||||
|
||||
|
||||
def diary_filename(d: date | None = None) -> str:
|
||||
"""Return the diary memory filename for date *d* (default: today)."""
|
||||
d = d or date.today()
|
||||
return f"MEMORY-{d.strftime('%Y-%m-%d')}.md"
|
||||
|
||||
|
||||
def build_diary_document(*, date_str: str, body: str) -> str:
|
||||
"""Build a diary memory file with frontmatter."""
|
||||
return build_memory_document(
|
||||
name=f"diary-{date_str}",
|
||||
description=f"Daily session narrative for {date_str}",
|
||||
mem_type="diary",
|
||||
body=body,
|
||||
)
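For instance (the date and body text are arbitrary illustrations):

```python
# Illustrative values — the date and body text are arbitrary.
from datetime import date

diary_filename(date(2025, 3, 14))   # -> "MEMORY-2025-03-14.md"
build_diary_document(
    date_str="2025-03-14",
    body="Reviewed the last run's results and scheduled a re-run for tomorrow.",
)
```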
|
||||
|
||||
|
||||
def validate_global_memory_payload(
|
||||
*,
|
||||
category: str,
|
||||
description: str,
|
||||
content: str,
|
||||
) -> str:
|
||||
"""Validate a queen-global memory save request."""
|
||||
parsed = parse_global_memory_category(category)
|
||||
if parsed is None:
|
||||
raise ValueError(
|
||||
"Invalid global memory category. Use one of: "
|
||||
+ ", ".join(GLOBAL_MEMORY_CATEGORIES)
|
||||
)
|
||||
if not description.strip():
|
||||
raise ValueError("Global memory description cannot be empty.")
|
||||
if not content.strip():
|
||||
raise ValueError("Global memory content cannot be empty.")
|
||||
|
||||
probe = f"{description}\n{content}"
|
||||
if _GLOBAL_MEMORY_CODE_PATTERN.search(probe):
|
||||
raise ValueError(
|
||||
"Global memory is only for durable user profile, preferences, "
|
||||
"environment, or feedback — not task/code/runtime details."
|
||||
)
|
||||
return parsed
|
||||
|
||||
|
||||
def save_global_memory(
|
||||
*,
|
||||
category: str,
|
||||
description: str,
|
||||
content: str,
|
||||
name: str | None = None,
|
||||
memory_dir: Path | None = None,
|
||||
) -> tuple[str, Path]:
|
||||
"""Persist one queen-global memory entry."""
|
||||
parsed = validate_global_memory_payload(
|
||||
category=category,
|
||||
description=description,
|
||||
content=content,
|
||||
)
|
||||
target_dir = memory_dir or global_memory_dir()
|
||||
target_dir.mkdir(parents=True, exist_ok=True)
|
||||
memory_name = (name or description).strip()
|
||||
filename = allocate_memory_filename(target_dir, memory_name)
|
||||
doc = build_memory_document(
|
||||
name=memory_name,
|
||||
description=description,
|
||||
mem_type=parsed,
|
||||
body=content,
|
||||
)
|
||||
if len(doc.encode("utf-8")) > MAX_FILE_SIZE_BYTES:
|
||||
raise ValueError(
|
||||
f"Global memory entry exceeds the {MAX_FILE_SIZE_BYTES} byte limit."
|
||||
)
|
||||
path = target_dir / filename
|
||||
path.write_text(doc, encoding="utf-8")
|
||||
return filename, path
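A minimal save sketch, assuming a durable user preference (the description and content values are illustrative):

```python
# Hypothetical save — description/content are examples of durable user-profile facts.
filename, path = save_global_memory(
    category="preference",
    description="Prefers short, bullet-point status updates",
    content="Asked twice for briefer summaries; keep status updates under five lines.",
)
```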
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Manifest formatting
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
def _age_label(mtime: float) -> str:
|
||||
"""Human-readable age string from an mtime."""
|
||||
age_days = memory_age_days(mtime)
|
||||
if age_days <= 0:
|
||||
return "today"
|
||||
if age_days == 1:
|
||||
return "1 day ago"
|
||||
return f"{age_days} days ago"
|
||||
|
||||
|
||||
def format_memory_manifest(files: list[MemoryFile]) -> str:
|
||||
"""One-line-per-file text manifest for the recall selector / reflection agent.
|
||||
"""One-line-per-file text manifest.
|
||||
|
||||
Format: ``[type] filename (age): description``
|
||||
Format: ``[type] filename: description``
|
||||
"""
|
||||
lines: list[str] = []
|
||||
for mf in files:
|
||||
t = mf.type or "unknown"
|
||||
desc = mf.description or "(no description)"
|
||||
age = _age_label(mf.mtime)
|
||||
lines.append(f"[{t}] {mf.filename} ({age}): {desc}")
|
||||
lines.append(f"[{t}] {mf.filename}: {desc}")
|
||||
return "\n".join(lines)
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Freshness / staleness
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
_SECONDS_PER_DAY = 86_400
|
||||
|
||||
|
||||
def memory_age_days(mtime: float) -> int:
|
||||
"""Return the age of a memory file in whole days."""
|
||||
if mtime <= 0:
|
||||
return 0
|
||||
return int((time.time() - mtime) / _SECONDS_PER_DAY)
|
||||
|
||||
|
||||
def memory_freshness_text(mtime: float) -> str:
|
||||
"""Return a staleness warning for injection, or empty string if fresh."""
|
||||
d = memory_age_days(mtime)
|
||||
if d <= 1:
|
||||
return ""
|
||||
return (
|
||||
f"This memory is {d} days old. "
|
||||
"Memories are point-in-time observations, not live state — "
|
||||
"claims about code behavior or file:line citations may be outdated. "
|
||||
"Verify against current code before asserting as fact."
|
||||
)
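Illustrative thresholds — a memory written today yields no warning, while an older one does:

```python
import time

memory_freshness_text(time.time())               # -> "" (fresh)
memory_freshness_text(time.time() - 5 * 86_400)  # -> "This memory is 5 days old. ..."
```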
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Cursor-based incremental processing
|
||||
# Initialisation
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
async def read_conversation_parts(session_dir: Path) -> list[dict[str, Any]]:
|
||||
"""Read all conversation parts for a session using FileConversationStore.
|
||||
|
||||
Returns a list of raw message dicts in sequence order.
|
||||
"""
|
||||
from framework.storage.conversation_store import FileConversationStore
|
||||
|
||||
store = FileConversationStore(session_dir / "conversations")
|
||||
return await store.read_parts()
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Initialisation and legacy migration
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
def init_memory_dir(
|
||||
memory_dir: Path | None = None,
|
||||
*,
|
||||
migrate_legacy: bool = False,
|
||||
) -> None:
|
||||
"""Create the memory directory if missing.
|
||||
|
||||
When ``migrate_legacy`` is true, migrate both v1 memory files and the
|
||||
previous shared v2 queen memory store into this directory.
|
||||
"""
|
||||
d = memory_dir or MEMORY_DIR
|
||||
first_run = not d.exists()
|
||||
def init_memory_dir(memory_dir: Path | None = None) -> None:
|
||||
"""Create the memory directory if missing."""
|
||||
d = memory_dir or global_memory_dir()
|
||||
d.mkdir(parents=True, exist_ok=True)
|
||||
if migrate_legacy:
|
||||
migrate_legacy_memories(d)
|
||||
migrate_shared_v2_memories(d)
|
||||
elif first_run and d == MEMORY_DIR:
|
||||
migrate_legacy_memories(d)
|
||||
|
||||
|
||||
def migrate_legacy_memories(memory_dir: Path | None = None) -> None:
|
||||
"""Convert old MEMORY.md + MEMORY-YYYY-MM-DD.md files to individual memory files.
|
||||
|
||||
Originals are moved to ``{memory_dir}/.legacy/``.
|
||||
"""
|
||||
d = memory_dir or MEMORY_DIR
|
||||
queen_dir = _HIVE_QUEEN_DIR
|
||||
legacy_archive = d / ".legacy"
|
||||
|
||||
migrated_any = False
|
||||
|
||||
# --- Semantic memory (MEMORY.md) ---
|
||||
semantic = queen_dir / "MEMORY.md"
|
||||
if semantic.exists():
|
||||
content = semantic.read_text(encoding="utf-8").strip()
|
||||
# Skip the blank seed template.
|
||||
if content and not content.startswith("# My Understanding of the User\n\n*No sessions"):
|
||||
_write_migration_file(
|
||||
d,
|
||||
filename="legacy-semantic-memory.md",
|
||||
name="legacy-semantic-memory",
|
||||
mem_type="reference",
|
||||
description="Migrated semantic memory from previous memory system",
|
||||
body=content,
|
||||
)
|
||||
migrated_any = True
|
||||
# Archive original.
|
||||
legacy_archive.mkdir(parents=True, exist_ok=True)
|
||||
semantic.rename(legacy_archive / "MEMORY.md")
|
||||
|
||||
# --- Episodic memories (MEMORY-YYYY-MM-DD.md) ---
|
||||
old_memories_dir = queen_dir / "memories"
|
||||
if old_memories_dir.is_dir():
|
||||
for ep_file in sorted(old_memories_dir.glob("MEMORY-*.md")):
|
||||
content = ep_file.read_text(encoding="utf-8").strip()
|
||||
if not content:
|
||||
continue
|
||||
date_part = ep_file.stem.replace("MEMORY-", "")
|
||||
slug = f"legacy-diary-{date_part}.md"
|
||||
_write_migration_file(
|
||||
d,
|
||||
filename=slug,
|
||||
name=f"legacy-diary-{date_part}",
|
||||
mem_type="diary",
|
||||
description=f"Migrated diary entry from {date_part}",
|
||||
body=content,
|
||||
)
|
||||
migrated_any = True
|
||||
# Archive original.
|
||||
legacy_archive.mkdir(parents=True, exist_ok=True)
|
||||
ep_file.rename(legacy_archive / ep_file.name)
|
||||
|
||||
if migrated_any:
|
||||
logger.info("queen_memory_v2: migrated legacy memory files to %s", d)
|
||||
|
||||
|
||||
def migrate_shared_v2_memories(
|
||||
memory_dir: Path | None = None,
|
||||
*,
|
||||
source_dir: Path | None = None,
|
||||
) -> None:
|
||||
"""Move shared queen v2 memory files into a colony directory once."""
|
||||
d = memory_dir or MEMORY_DIR
|
||||
d.mkdir(parents=True, exist_ok=True)
|
||||
src = source_dir or MEMORY_DIR
|
||||
if d.resolve() == src.resolve():
|
||||
return
|
||||
|
||||
marker = d / _MIGRATION_MARKER
|
||||
if marker.exists():
|
||||
return
|
||||
|
||||
if not src.is_dir():
|
||||
return
|
||||
|
||||
md_files = sorted(
|
||||
f for f in src.glob("*.md")
|
||||
if f.is_file() and not f.name.startswith(".")
|
||||
)
|
||||
if not md_files:
|
||||
marker.write_text("no shared memories found\n", encoding="utf-8")
|
||||
return
|
||||
|
||||
archive = src / ".legacy_colony_migration"
|
||||
archive.mkdir(parents=True, exist_ok=True)
|
||||
migrated_any = False
|
||||
|
||||
for src_file in md_files:
|
||||
target = d / src_file.name
|
||||
if not target.exists():
|
||||
try:
|
||||
shutil.copy2(src_file, target)
|
||||
migrated_any = True
|
||||
except OSError:
|
||||
logger.debug("shared memory migration copy failed for %s", src_file, exc_info=True)
|
||||
continue
|
||||
|
||||
archived = archive / src_file.name
|
||||
counter = 2
|
||||
while archived.exists():
|
||||
archived = archive / f"{src_file.stem}-{counter}{src_file.suffix}"
|
||||
counter += 1
|
||||
try:
|
||||
src_file.rename(archived)
|
||||
except OSError:
|
||||
logger.debug("shared memory migration archive failed for %s", src_file, exc_info=True)
|
||||
|
||||
if migrated_any:
|
||||
logger.info("queen_memory_v2: migrated shared queen memories to %s", d)
|
||||
marker.write_text(
|
||||
f"migrated_at={int(time.time())}\nsource={src}\n",
|
||||
encoding="utf-8",
|
||||
)
|
||||
|
||||
|
||||
def _write_migration_file(
|
||||
memory_dir: Path,
|
||||
filename: str,
|
||||
name: str,
|
||||
mem_type: str,
|
||||
description: str,
|
||||
body: str,
|
||||
) -> None:
|
||||
"""Write a single migrated memory file with frontmatter."""
|
||||
# Truncate body to respect file size limit (leave room for frontmatter).
|
||||
header = (
|
||||
f"---\n"
|
||||
f"name: {name}\n"
|
||||
f"description: {description}\n"
|
||||
f"type: {mem_type}\n"
|
||||
f"---\n\n"
|
||||
)
|
||||
max_body = MAX_FILE_SIZE_BYTES - len(header.encode("utf-8"))
|
||||
if len(body.encode("utf-8")) > max_body:
|
||||
# Rough truncation — cut at character level then trim to last newline.
|
||||
body = body[: max_body - 20]
|
||||
nl = body.rfind("\n")
|
||||
if nl > 0:
|
||||
body = body[:nl]
|
||||
body += "\n\n...(truncated during migration)"
|
||||
|
||||
path = memory_dir / filename
|
||||
path.write_text(header + body + "\n", encoding="utf-8")
|
||||
|
||||
File diff suppressed because it is too large
@@ -1,11 +1,11 @@
|
||||
"""Recall selector — pre-turn memory selection for queen and worker memory.
|
||||
"""Recall selector — pre-turn global memory selection for the queen.
|
||||
|
||||
Before each conversation turn the system:
|
||||
1. Scans the memory directory for ``.md`` files (cap: 200).
|
||||
1. Scans the global memory directory for ``.md`` files (cap: 200).
|
||||
2. Reads headers (frontmatter + first 30 lines).
|
||||
3. Uses a single LLM call with structured JSON output to pick the ~5
|
||||
most relevant memories.
|
||||
4. Injects them into context with staleness warnings for older ones.
|
||||
4. Injects them into the system prompt.
|
||||
|
||||
The selector only sees the user's query string — no full conversation
|
||||
context. This keeps it cheap and fast. Errors are caught and return
|
||||
@@ -20,9 +20,8 @@ from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
from framework.agents.queen.queen_memory_v2 import (
|
||||
MEMORY_DIR,
|
||||
format_memory_manifest,
|
||||
memory_freshness_text,
|
||||
global_memory_dir,
|
||||
scan_memory_files,
|
||||
)
|
||||
|
||||
@@ -32,29 +31,6 @@ logger = logging.getLogger(__name__)
|
||||
# Structured output schema
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
RECALL_SCHEMA: dict[str, Any] = {
|
||||
"type": "json_schema",
|
||||
"json_schema": {
|
||||
"name": "memory_selection",
|
||||
"strict": True,
|
||||
"schema": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"selected_memories": {
|
||||
"type": "array",
|
||||
"items": {"type": "string"},
|
||||
},
|
||||
},
|
||||
"required": ["selected_memories"],
|
||||
"additionalProperties": False,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# System prompt
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
SELECT_MEMORIES_SYSTEM_PROMPT = """\
|
||||
You are selecting memories that will be useful to the Queen agent as it \
|
||||
processes a user's query.
|
||||
@@ -72,9 +48,6 @@ name and description.
|
||||
query, then do not include it in your list. Be selective and discerning.
|
||||
- If there are no memories in the list that would clearly be useful, \
|
||||
return an empty list.
|
||||
- If a list of recently-used tools is provided, do not select memories \
|
||||
that are usage reference or API documentation for those tools (the Queen \
|
||||
is already exercising them). Still select warnings or gotchas about them.
|
||||
"""
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
@@ -86,7 +59,6 @@ async def select_memories(
|
||||
query: str,
|
||||
llm: Any,
|
||||
memory_dir: Path | None = None,
|
||||
active_tools: list[str] | None = None,
|
||||
*,
|
||||
max_results: int = 5,
|
||||
) -> list[str]:
|
||||
@@ -94,51 +66,60 @@ async def select_memories(
|
||||
|
||||
Returns a list of filenames. Best-effort: on any error returns ``[]``.
|
||||
"""
|
||||
mem_dir = memory_dir or MEMORY_DIR
|
||||
mem_dir = memory_dir or global_memory_dir()
|
||||
files = scan_memory_files(mem_dir)
|
||||
if not files:
|
||||
logger.debug("recall: no memory files found, skipping selection")
|
||||
return []
|
||||
|
||||
logger.debug("recall: selecting from %d memory files for query: %.80s", len(files), query)
|
||||
logger.debug("recall: selecting from %d memories for query: %.100s", len(files), query)
|
||||
manifest = format_memory_manifest(files)
|
||||
|
||||
user_msg_parts = [f"## User query\n\n{query}\n\n## Available memories\n\n{manifest}"]
|
||||
if active_tools:
|
||||
user_msg_parts.append(f"\n\n## Recently-used tools\n\n{', '.join(active_tools)}")
|
||||
|
||||
user_msg = "".join(user_msg_parts)
|
||||
user_msg = f"## User query\n\n{query}\n\n## Available memories\n\n{manifest}"
|
||||
|
||||
try:
|
||||
resp = await llm.acomplete(
|
||||
messages=[{"role": "user", "content": user_msg}],
|
||||
system=SELECT_MEMORIES_SYSTEM_PROMPT,
|
||||
max_tokens=512,
|
||||
response_format=RECALL_SCHEMA,
|
||||
max_tokens=1024,
|
||||
response_format={"type": "json_object"},
|
||||
)
|
||||
data = json.loads(resp.content)
|
||||
raw = (resp.content or "").strip()
|
||||
if not raw:
|
||||
logger.warning(
|
||||
"recall: LLM returned empty response (model=%s, stop=%s)",
|
||||
resp.model,
|
||||
resp.stop_reason,
|
||||
)
|
||||
return []
|
||||
# Some models wrap JSON in markdown fences or add preamble text.
|
||||
# Try to extract the JSON object if raw parse fails.
|
||||
try:
|
||||
data = json.loads(raw)
|
||||
except json.JSONDecodeError:
|
||||
import re
|
||||
|
||||
m = re.search(r"\{.*\}", raw, re.DOTALL)
|
||||
if m:
|
||||
data = json.loads(m.group())
|
||||
else:
|
||||
logger.warning("recall: LLM returned non-JSON: %.200s", raw)
|
||||
return []
|
||||
selected = data.get("selected_memories", [])
|
||||
# Validate: only return filenames that actually exist.
|
||||
valid_names = {f.filename for f in files}
|
||||
result = [s for s in selected if s in valid_names][:max_results]
|
||||
logger.debug("recall: selected %d memories: %s", len(result), result)
|
||||
return result
|
||||
except Exception:
|
||||
logger.debug("recall: memory selection failed, returning []", exc_info=True)
|
||||
except Exception as exc:
|
||||
logger.warning("recall: memory selection failed (%s), returning []", exc)
|
||||
return []
|
||||
|
||||
|
||||
def format_recall_injection(
|
||||
filenames: list[str],
|
||||
memory_dir: Path | None = None,
|
||||
*,
|
||||
heading: str = "Selected Memories",
|
||||
) -> str:
|
||||
"""Read selected memory files and format for system prompt injection.
|
||||
|
||||
Prepends a staleness warning for memories older than 1 day.
|
||||
"""
|
||||
mem_dir = memory_dir or MEMORY_DIR
|
||||
"""Read selected memory files and format for system prompt injection."""
|
||||
mem_dir = memory_dir or global_memory_dir()
|
||||
if not filenames:
|
||||
return ""
|
||||
|
||||
@@ -151,86 +132,10 @@ def format_recall_injection(
|
||||
content = path.read_text(encoding="utf-8").strip()
|
||||
except OSError:
|
||||
continue
|
||||
|
||||
try:
|
||||
mtime = path.stat().st_mtime
|
||||
except OSError:
|
||||
mtime = 0.0
|
||||
|
||||
freshness = memory_freshness_text(mtime)
|
||||
header = f"### {fname}"
|
||||
if freshness:
|
||||
header += f"\n\n> {freshness}"
|
||||
blocks.append(f"{header}\n\n{content}")
|
||||
blocks.append(f"### {fname}\n\n{content}")
|
||||
|
||||
if not blocks:
|
||||
return ""
|
||||
|
||||
body = "\n\n---\n\n".join(blocks)
|
||||
logger.debug("recall: injecting %d memory blocks into context", len(blocks))
|
||||
return f"--- {heading} ---\n\n{body}\n\n--- End {heading} ---"
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Cache update (called after each queen turn)
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
async def update_recall_cache(
|
||||
session_dir: Path,
|
||||
llm: Any,
|
||||
phase_state: Any | None = None,
|
||||
memory_dir: Path | None = None,
|
||||
*,
|
||||
cache_setter: Any = None,
|
||||
heading: str = "Selected Memories",
|
||||
active_tools: list[str] | None = None,
|
||||
) -> None:
|
||||
"""Update the recall cache on *phase_state* for the next turn.
|
||||
|
||||
Reads the latest user message from conversation parts to use as the
|
||||
query for memory selection.
|
||||
"""
|
||||
mem_dir = memory_dir or MEMORY_DIR
|
||||
|
||||
# Extract latest user message as the query.
|
||||
query = _extract_latest_user_query(session_dir)
|
||||
if not query:
|
||||
logger.debug("recall: no user query found, skipping cache update")
|
||||
return
|
||||
logger.debug("recall: updating cache for query: %.80s", query)
|
||||
|
||||
try:
|
||||
selected = await select_memories(
|
||||
query,
|
||||
llm,
|
||||
mem_dir,
|
||||
active_tools=active_tools,
|
||||
)
|
||||
injection = format_recall_injection(selected, mem_dir, heading=heading)
|
||||
if cache_setter is not None:
|
||||
cache_setter(injection)
|
||||
elif phase_state is not None:
|
||||
phase_state._cached_recall_block = injection
|
||||
except Exception:
|
||||
logger.debug("recall: cache update failed", exc_info=True)
|
||||
|
||||
|
||||
def _extract_latest_user_query(session_dir: Path) -> str:
|
||||
"""Read the most recent user message from conversation parts."""
|
||||
parts_dir = session_dir / "conversations" / "parts"
|
||||
if not parts_dir.is_dir():
|
||||
return ""
|
||||
|
||||
part_files = sorted(parts_dir.glob("*.json"), reverse=True)
|
||||
for f in part_files[:20]: # Look back at most 20 messages.
|
||||
try:
|
||||
data = json.loads(f.read_text(encoding="utf-8"))
|
||||
if data.get("role") == "user":
|
||||
content = str(data.get("content", "")).strip()
|
||||
if content:
|
||||
# Truncate very long queries.
|
||||
return content[:1000] if len(content) > 1000 else content
|
||||
except (json.JSONDecodeError, OSError):
|
||||
continue
|
||||
return ""
|
||||
return f"--- Global Memories ---\n\n{body}\n\n--- End Global Memories ---"
|
||||
|
||||
@@ -25,10 +25,7 @@
|
||||
14. **Forgetting sys.path setup in conftest.py** — Tests need `exports/` and `core/` on sys.path.
|
||||
|
||||
## GCU Errors
|
||||
15. **Manually wiring browser tools on event_loop nodes** — Use `node_type="gcu"` which auto-includes browser tools. Do NOT manually list browser tool names.
|
||||
16. **Using GCU nodes as regular graph nodes** — GCU nodes are subagents only. They must ONLY appear in `sub_agents=["gcu-node-id"]` and be invoked via `delegate_to_sub_agent()`. Never connect via edges or use as entry/terminal nodes.
|
||||
17. **Reusing the same GCU node ID for parallel tasks** — Each concurrent browser task needs a distinct GCU node ID (e.g. `gcu-site-a`, `gcu-site-b`). Two `delegate_to_sub_agent` calls with the same `agent_id` share a browser profile and will interfere with each other's pages.
|
||||
18. **Passing `profile=` in GCU tool calls** — Profile isolation for parallel subagents is automatic. The framework injects a unique profile per subagent via an asyncio `ContextVar`. Hardcoding `profile="default"` in a GCU system prompt breaks this isolation.
|
||||
15. **Manually wiring browser tools on event_loop nodes** — Browser nodes use `tools: {policy: "all"}` to get all browser tools.
|
||||
|
||||
## Worker Agent Errors
|
||||
19. **Adding client-facing intake node to workers** — The queen owns intake. Workers should start with an autonomous processing node. Route worker review/approval through queen escalation instead of direct worker HITL.
|
||||
|
||||
@@ -0,0 +1,227 @@
|
||||
# Declarative Agent File Templates
|
||||
|
||||
Agents are defined as a single `agent.yaml` file. No Python code needed.
|
||||
The runner loads this file directly -- no `agent.py`, `config.py`, or
|
||||
`nodes/__init__.py` required.
|
||||
|
||||
## agent.yaml -- Complete Agent Definition
|
||||
|
||||
```yaml
|
||||
name: my-agent
|
||||
version: 1.0.0
|
||||
description: What this agent does.
|
||||
|
||||
metadata:
|
||||
intro_message: Welcome! What would you like me to do?
|
||||
|
||||
# Template variables -- substituted into system_prompt and identity_prompt
|
||||
# via {{variable_name}} syntax. Use this for config values that appear
|
||||
# in prompts (spreadsheet IDs, API endpoints, account names, etc.)
|
||||
variables:
|
||||
spreadsheet_id: "1ZVxWDL..."
|
||||
sheet_name: "contacts"
|
||||
|
||||
goal:
|
||||
description: What this agent achieves.
|
||||
success_criteria:
|
||||
- "First success criterion"
|
||||
- "Second success criterion"
|
||||
constraints:
|
||||
- "Hard constraint the agent must respect"
|
||||
|
||||
identity_prompt: |
|
||||
You are a helpful agent.
|
||||
|
||||
conversation_mode: continuous # always "continuous" for Hive agents
|
||||
|
||||
loop_config:
|
||||
max_iterations: 100
|
||||
max_tool_calls_per_turn: 30
|
||||
max_context_tokens: 32000
|
||||
|
||||
# MCP servers to connect (resolved by name from ~/.hive/mcp_registry/)
|
||||
mcp_servers:
|
||||
- name: hive-tools
|
||||
- name: gcu-tools
|
||||
|
||||
nodes:
|
||||
# Node 1: Process (autonomous entry node)
|
||||
# The queen handles intake and passes structured input via
|
||||
# run_agent_with_input(task). NO client-facing intake node.
|
||||
- id: process
|
||||
name: Process
|
||||
description: Execute the task using available tools
|
||||
max_node_visits: 0 # 0 = unlimited (forever-alive agents)
|
||||
input_keys: [user_request, feedback]
|
||||
output_keys: [results]
|
||||
nullable_output_keys: [feedback]
|
||||
tools:
|
||||
policy: explicit
|
||||
allowed: [web_search, web_scrape, save_data, load_data, list_data_files]
|
||||
success_criteria: Results are complete and accurate.
|
||||
system_prompt: |
|
||||
You are a processing agent. Your task is in memory under "user_request".
|
||||
If "feedback" is present, this is a revision.
|
||||
|
||||
Work in phases:
|
||||
1. Use tools to gather/process data
|
||||
2. Analyze results
|
||||
3. Call set_output in a SEPARATE turn:
|
||||
- set_output("results", "structured results")
|
||||
|
||||
# Node 2: Handoff (autonomous)
|
||||
- id: handoff
|
||||
name: Handoff
|
||||
description: Prepare worker results for queen review
|
||||
max_node_visits: 0
|
||||
input_keys: [results, user_request]
|
||||
output_keys: [next_action, feedback, worker_summary]
|
||||
nullable_output_keys: [feedback, worker_summary]
|
||||
tools:
|
||||
policy: none # handoff nodes don't need tools
|
||||
success_criteria: Results are packaged for queen decision-making.
|
||||
system_prompt: |
|
||||
Do NOT talk to the user directly. The queen is the only user interface.
|
||||
|
||||
If blocked, call escalate(reason, context) then set:
|
||||
- set_output("next_action", "escalated")
|
||||
- set_output("feedback", "what help is needed")
|
||||
|
||||
Otherwise summarize and set:
|
||||
- set_output("worker_summary", "short summary for queen")
|
||||
- set_output("next_action", "done") or "revise"
|
||||
- set_output("feedback", "what to revise") only when revising
|
||||
|
||||
edges:
|
||||
- from_node: process
|
||||
to_node: handoff
|
||||
# Feedback loop
|
||||
- from_node: handoff
|
||||
to_node: process
|
||||
condition: conditional
|
||||
condition_expr: "str(next_action).lower() == 'revise'"
|
||||
priority: 2
|
||||
# Escalation loop
|
||||
- from_node: handoff
|
||||
to_node: process
|
||||
condition: conditional
|
||||
condition_expr: "str(next_action).lower() == 'escalated'"
|
||||
priority: 3
|
||||
# Loop back for next task
|
||||
- from_node: handoff
|
||||
to_node: process
|
||||
condition: conditional
|
||||
condition_expr: "str(next_action).lower() == 'done'"
|
||||
|
||||
entry_node: process
|
||||
terminal_nodes: [] # [] = forever-alive
|
||||
```
|
||||
|
||||
## Key differences from Python templates
|
||||
|
||||
| Before (Python) | After (YAML) |
|
||||
|-------------------------------------|----------------------------------------|
|
||||
| `agent.py` (250 lines boilerplate) | Not needed |
|
||||
| `config.py` (dataclass + metadata) | `variables:` + `metadata:` in YAML |
|
||||
| `nodes/__init__.py` (NodeSpec calls)| `nodes:` list in YAML |
|
||||
| `__init__.py`, `__main__.py` | Not needed |
|
||||
| f-string config injection | `{{variable_name}}` templates |
|
||||
| `mcp_servers.json` (separate file) | `mcp_servers:` in YAML (or keep file) |
|
||||
|
||||
## Node types
|
||||
|
||||
| Type | Description | Tools |
|
||||
|--------------|---------------------------------------|--------------------------|
|
||||
| `event_loop` | LLM-driven orchestration (default) | Explicit list or `none` |
|
||||
| `gcu` | Browser automation via GCU tools | `policy: all` (auto) |
|
||||
|
||||
## Tool access policies
|
||||
|
||||
```yaml
|
||||
# Explicit list (recommended for most nodes)
|
||||
tools:
|
||||
policy: explicit
|
||||
allowed: [web_search, save_data]
|
||||
|
||||
# All tools (for browser automation nodes)
|
||||
tools:
|
||||
policy: all
|
||||
|
||||
# No tools (for handoff/summary nodes)
|
||||
tools:
|
||||
policy: none
|
||||
```
|
||||
|
||||
## Edge conditions
|
||||
|
||||
| Condition | When to use |
|
||||
|---------------|-------------------------------------------------------|
|
||||
| `on_success` | Default. Next node after current succeeds. |
|
||||
| `on_failure` | Fallback path when current node fails. |
|
||||
| `always` | Always traverse regardless of outcome. |
|
||||
| `conditional` | Evaluate `condition_expr` against shared memory keys. |
|
||||
| `llm_decide` | Let the LLM decide at runtime. |
|
||||
|
||||
## Template variables
|
||||
|
||||
Use `{{variable_name}}` in `system_prompt` and `identity_prompt`.
|
||||
Variables are defined in the top-level `variables:` map.
|
||||
|
||||
```yaml
|
||||
variables:
|
||||
spreadsheet_id: "1ZVxWDL..."
|
||||
api_endpoint: "https://api.example.com"
|
||||
|
||||
nodes:
|
||||
- id: start
|
||||
system_prompt: |
|
||||
Connect to spreadsheet: {{spreadsheet_id}}
|
||||
API endpoint: {{api_endpoint}}
|
||||
```
|
||||
|
||||
## Entry points
|
||||
|
||||
Default is a single manual entry point. For timer/scheduled triggers:
|
||||
|
||||
```yaml
|
||||
entry_points:
|
||||
- id: default
|
||||
trigger_type: manual
|
||||
- id: daily-check
|
||||
trigger_type: timer
|
||||
trigger_config:
|
||||
interval_minutes: 30
|
||||
```
|
||||
|
||||
## mcp_servers.json -- Still Supported
|
||||
|
||||
The `mcp_servers.json` file is still loaded automatically if present alongside
|
||||
`agent.yaml`. You can also inline servers in the YAML:
|
||||
|
||||
```yaml
|
||||
mcp_servers:
|
||||
- name: hive-tools
|
||||
- name: gcu-tools
|
||||
```
|
||||
|
||||
Both approaches work. The JSON file takes precedence for backward compatibility.
|
||||
|
||||
## Migration from Python agents
|
||||
|
||||
Run the migration tool to convert existing agents:
|
||||
|
||||
```bash
|
||||
uv run python -m framework.tools.migrate_agent exports/my_agent
|
||||
```
|
||||
|
||||
This generates `agent.yaml` from the existing `agent.py` + `nodes/` + `config.py`.
|
||||
The original files are left untouched. Once verified, you can delete the Python files.
|
||||
|
||||
## Files after migration
|
||||
|
||||
```
|
||||
my_agent/
|
||||
agent.yaml # The only required file
|
||||
mcp_servers.json # Optional (can inline in YAML)
|
||||
flowchart.json # Optional (auto-generated)
|
||||
```
|
||||
@@ -1,306 +1,193 @@
|
||||
# Hive Agent Framework — Condensed Reference
|
||||
# Hive Agent Framework -- Condensed Reference
|
||||
|
||||
## Architecture
|
||||
|
||||
Agents are Python packages in `exports/`:
|
||||
Agents are declarative JSON configs in `exports/`:
|
||||
```
|
||||
exports/my_agent/
|
||||
├── __init__.py # MUST re-export ALL module-level vars from agent.py
|
||||
├── __main__.py # CLI (run, tui, info, validate, shell)
|
||||
├── agent.py # Graph construction (goal, edges, agent class)
|
||||
├── config.py # Runtime config
|
||||
├── nodes/__init__.py # Node definitions (NodeSpec)
|
||||
├── mcp_servers.json # MCP tool server config
|
||||
└── tests/ # pytest tests
|
||||
agent.json # The entire agent definition
|
||||
mcp_servers.json # MCP tool server config (optional, prefer registry refs)
|
||||
```
|
||||
|
||||
## Agent Loading Contract
|
||||
No Python files. No `__init__.py`, `__main__.py`, `config.py`, or `nodes/`.
|
||||
|
||||
`AgentRunner.load()` imports the package (`__init__.py`) and reads these
|
||||
module-level variables via `getattr()`:
|
||||
## Agent Loading
|
||||
|
||||
| Variable | Required | Default if missing | Consequence |
|
||||
|----------|----------|--------------------|-------------|
|
||||
| `goal` | YES | `None` | **FATAL** — "must define goal, nodes, edges" |
|
||||
| `nodes` | YES | `None` | **FATAL** — same error |
|
||||
| `edges` | YES | `None` | **FATAL** — same error |
|
||||
| `entry_node` | no | `nodes[0].id` | Probably wrong node |
|
||||
| `entry_points` | no | `{}` | **Nodes unreachable** — validation fails |
|
||||
| `terminal_nodes` | **YES** | `[]` | **FATAL** — graph must have at least one terminal node |
|
||||
| `pause_nodes` | no | `[]` | OK |
|
||||
| `conversation_mode` | no | not passed | Isolated mode (no context carryover) |
|
||||
| `identity_prompt` | no | not passed | No agent-level identity |
|
||||
| `loop_config` | no | `{}` | No iteration limits |
|
||||
| `triggers.json` (file) | no | not present | No triggers (timers, webhooks) |
|
||||
`AgentLoader.load()` reads `agent.json` and builds the execution graph.
|
||||
If `agent.py` exists (legacy), it's loaded as a Python module instead.
|
||||
|
||||
**CRITICAL:** `__init__.py` MUST import and re-export ALL of these from
|
||||
`agent.py`. Missing exports silently fall back to defaults, causing
|
||||
hard-to-debug failures.
|
||||
## agent.json Schema
|
||||
|
||||
**Why `default_agent.validate()` is NOT sufficient:**
|
||||
`validate()` checks the agent CLASS's internal graph (self.nodes, self.edges).
|
||||
These are always correct because the constructor references agent.py's module
|
||||
vars directly. But `AgentRunner.load()` reads from the PACKAGE (`__init__.py`),
|
||||
not the class. So `validate()` passes while `AgentRunner.load()` fails.
|
||||
Always test with `AgentRunner.load("exports/{name}")` — this is the same
|
||||
code path the TUI and `hive run` use.
|
||||
|
||||
## Goal
|
||||
|
||||
Defines success criteria and constraints:
|
||||
```python
|
||||
goal = Goal(
|
||||
id="kebab-case-id",
|
||||
name="Display Name",
|
||||
description="What the agent does",
|
||||
success_criteria=[
|
||||
SuccessCriterion(id="sc-id", description="...", metric="...", target="...", weight=0.25),
|
||||
],
|
||||
constraints=[
|
||||
Constraint(id="c-id", description="...", constraint_type="hard", category="quality"),
|
||||
],
|
||||
)
|
||||
```json
|
||||
{
|
||||
"name": "my-agent",
|
||||
"version": "1.0.0",
|
||||
"description": "What this agent does",
|
||||
"goal": {
|
||||
"description": "What to achieve",
|
||||
"success_criteria": ["criterion 1", "criterion 2"],
|
||||
"constraints": ["constraint 1"]
|
||||
},
|
||||
"identity_prompt": "You are a helpful agent.",
|
||||
"conversation_mode": "continuous",
|
||||
"loop_config": {
|
||||
"max_iterations": 100,
|
||||
"max_tool_calls_per_turn": 30,
|
||||
"max_context_tokens": 32000
|
||||
},
|
||||
"mcp_servers": [
|
||||
{"name": "hive-tools"},
|
||||
{"name": "gcu-tools"}
|
||||
],
|
||||
"variables": {
|
||||
"spreadsheet_id": "1ZVx..."
|
||||
},
|
||||
"nodes": [...],
|
||||
"edges": [...],
|
||||
"entry_node": "process",
|
||||
"terminal_nodes": []
|
||||
}
|
||||
```
|
||||
- 3-5 success criteria, weights sum to 1.0
|
||||
- 1-5 constraints (hard/soft, categories: quality, accuracy, interaction, functional)
|
||||
|
||||
## NodeSpec Fields
|
||||
## Template Variables
|
||||
|
||||
Use `{{variable_name}}` in `system_prompt` and `identity_prompt`. Variables
|
||||
are defined in the top-level `variables` object:
|
||||
|
||||
```json
|
||||
{
|
||||
"variables": {"sheet_id": "1ZVx..."},
|
||||
"nodes": [{
|
||||
"id": "start",
|
||||
"system_prompt": "Use sheet: {{sheet_id}}"
|
||||
}]
|
||||
}
|
||||
```
|
||||
|
||||
## Node Fields
|
||||
|
||||
| Field | Type | Default | Description |
|
||||
|-------|------|---------|-------------|
|
||||
| id | str | required | kebab-case identifier |
|
||||
| name | str | required | Display name |
|
||||
| name | str | id | Display name |
|
||||
| description | str | required | What the node does |
|
||||
| node_type | str | required | `"event_loop"` or `"gcu"` (browser automation — see GCU Guide appendix) |
|
||||
| input_keys | list[str] | required | Memory keys this node reads |
|
||||
| output_keys | list[str] | required | Memory keys this node writes via set_output |
|
||||
| node_type | str | "event_loop" | `"event_loop"` |
|
||||
| input_keys | list | [] | Memory keys this node reads |
|
||||
| output_keys | list | [] | Memory keys this node writes via set_output |
|
||||
| system_prompt | str | "" | LLM instructions |
|
||||
| tools | list[str] | [] | Tool names from MCP servers |
|
||||
| client_facing | bool | False | Deprecated compatibility field. Queen interactivity is implicit; workers should escalate instead |
|
||||
| nullable_output_keys | list[str] | [] | Keys that may remain unset |
|
||||
| max_node_visits | int | 0 | 0=unlimited (default); >1 for one-shot feedback loops |
|
||||
| max_retries | int | 3 | Retries on failure |
|
||||
| tools | object | {} | Tool access policy (see below) |
|
||||
| nullable_output_keys | list | [] | Keys that may remain unset |
|
||||
| max_node_visits | int | 1 | 0=unlimited (for forever-alive agents) |
|
||||
| success_criteria | str | "" | Natural language for judge evaluation |
|
||||
| client_facing | bool | false | Whether output is shown to user |
|
||||
|
||||
## EdgeSpec Fields
|
||||
## Tool Access Policies
|
||||
|
||||
Each node declares its tools via a policy object:
|
||||
|
||||
```json
|
||||
{"tools": {"policy": "explicit", "allowed": ["web_search", "save_data"]}}
|
||||
{"tools": {"policy": "all"}}
|
||||
{"tools": {"policy": "none"}}
|
||||
```
|
||||
|
||||
- `explicit` (default): only named tools. Empty `allowed` = zero tools.
|
||||
- `all`: all tools from registry (e.g. for browser automation nodes).
|
||||
- `none`: no tools (for handoff/summary nodes).
|
||||
|
||||
## Edge Fields
|
||||
|
||||
| Field | Type | Description |
|
||||
|-------|------|-------------|
|
||||
| id | str | kebab-case identifier |
|
||||
| source | str | Source node ID |
|
||||
| target | str | Target node ID |
|
||||
| condition | EdgeCondition | ON_SUCCESS, ON_FAILURE, ALWAYS, CONDITIONAL |
|
||||
| condition_expr | str | Python expression evaluated against memory (for CONDITIONAL) |
|
||||
| priority | int | Positive=forward (evaluated first), negative=feedback (loop-back) |
|
||||
| from_node | str | Source node ID |
|
||||
| to_node | str | Target node ID |
|
||||
| condition | str | `on_success`, `on_failure`, `always`, `conditional` |
|
||||
| condition_expr | str | Python expression for conditional routing |
|
||||
| priority | int | Higher = evaluated first |
|
||||
|
||||
condition_expr examples:
|
||||
- `"needs_more_research == True"`
|
||||
- `"str(next_action).lower() == 'revise'"`
|
||||
|
||||
## Key Patterns
|
||||
|
||||
### STEP 1/STEP 2 (Client-Facing Nodes)
|
||||
```
|
||||
**STEP 1 — Respond to the user (text only, NO tool calls):**
|
||||
[Present information, ask questions]
|
||||
|
||||
**STEP 2 — After the user responds, call set_output:**
|
||||
- set_output("key", "value based on user response")
|
||||
```
|
||||
This prevents premature set_output before user interaction.
|
||||
|
||||
### Fewer, Richer Nodes (CRITICAL)
|
||||
|
||||
**Hard limit: 3-6 nodes for most agents.** Never exceed 6 unless the user
|
||||
explicitly requests a complex multi-phase pipeline.
|
||||
**Hard limit: 3-6 nodes for most agents.** Each node boundary serializes
|
||||
outputs and destroys in-context information. Merge unless:
|
||||
1. Client-facing boundary (different interaction models)
|
||||
2. Disjoint tool sets
|
||||
3. Parallel execution (fan-out branches)
|
||||
|
||||
Each node boundary serializes outputs to the shared buffer and **destroys** all
|
||||
in-context information: tool call results, intermediate reasoning, conversation
|
||||
history. A research node that searches, fetches, and analyzes in ONE node keeps
|
||||
all source material in its conversation context. If the same work is split across 3 nodes, each
|
||||
downstream node only sees the serialized summary string.
|
||||
|
||||
**Decision framework — merge unless ANY of these apply:**
|
||||
1. **Client-facing boundary** — Autonomous and client-facing work MUST be
|
||||
separate nodes (different interaction models)
|
||||
2. **Disjoint tool sets** — If tools are fundamentally different (e.g., web
|
||||
search vs database), separate nodes make sense
|
||||
3. **Parallel execution** — Fan-out branches must be separate nodes
|
||||
|
||||
**Red flags that you have too many nodes:**
|
||||
- A node with 0 tools (pure LLM reasoning) → merge into predecessor/successor
|
||||
- A node that sets only 1 trivial output → collapse into predecessor
|
||||
- Multiple consecutive autonomous nodes → combine into one rich node
|
||||
- A "report" node that presents analysis → merge into the client-facing node
|
||||
- A "confirm" or "schedule" node that doesn't call any external service → remove
|
||||
|
||||
**Typical agent structure (2 nodes):**
|
||||
**Typical structure (2 nodes):**
|
||||
```
|
||||
process (autonomous) ←→ review (queen-mediated)
|
||||
```
|
||||
The queen owns intake — she gathers requirements from the user, then
|
||||
passes structured input via `run_agent_with_input(task)`. When building
|
||||
the agent, design the entry node's `input_keys` to match what the queen
|
||||
will provide at run time. Worker agents should NOT have a client-facing
|
||||
intake node. Mid-execution review/approval should happen through queen
|
||||
escalation rather than direct worker HITL.
|
||||
|
||||
For simpler agents, just 1 autonomous node:
|
||||
```
|
||||
process (autonomous) — loops back to itself
|
||||
process (autonomous) <-> review (queen-mediated)
|
||||
```
|
||||
|
||||
### nullable_output_keys
|
||||
For inputs that only arrive on certain edges:
|
||||
```python
|
||||
research_node = NodeSpec(
|
||||
input_keys=["brief", "feedback"],
|
||||
nullable_output_keys=["feedback"], # Only present on feedback edge
|
||||
max_node_visits=3,
|
||||
)
|
||||
```
|
||||
|
||||
### Mutually Exclusive Outputs
|
||||
For routing decisions:
|
||||
```python
|
||||
review_node = NodeSpec(
|
||||
output_keys=["approved", "feedback"],
|
||||
nullable_output_keys=["approved", "feedback"], # Node sets one or the other
|
||||
)
|
||||
```
|
||||
|
||||
### Continuous Loop Pattern
|
||||
Mark the primary event_loop node as terminal: `terminal_nodes=["process"]`.
|
||||
The node has `output_keys` and can complete when the agent finishes its work.
|
||||
Use `conversation_mode="continuous"` to preserve context across transitions.
|
||||
The queen owns intake. Worker agents should NOT have a client-facing intake
|
||||
node. Mid-execution review should happen through queen escalation.
|
||||
|
||||
### set_output
|
||||
- Synthetic tool injected by framework
|
||||
- Call separately from real tool calls (separate turn)
|
||||
- `set_output("key", "value")` stores to the shared buffer
|
||||
|
||||
## Edge Conditions
|
||||
|
||||
| Condition | When |
|
||||
|-----------|------|
|
||||
| ON_SUCCESS | Node completed successfully |
|
||||
| ON_FAILURE | Node failed |
|
||||
| ALWAYS | Unconditional |
|
||||
| CONDITIONAL | condition_expr evaluates to True against memory |
|
||||
|
||||
condition_expr examples:
|
||||
- `"needs_more_research == True"`
|
||||
- `"str(next_action).lower() == 'new_agent'"`
|
||||
- `"feedback is not None"`
|
||||
|
||||
## Graph Lifecycle
|
||||
### Graph Lifecycle
|
||||
|
||||
| Pattern | terminal_nodes | When |
|
||||
|---------|---------------|------|
|
||||
| **Continuous loop** | `["node-with-output-keys"]` | **DEFAULT for all agents** |
|
||||
| Continuous loop | `["node-with-output-keys"]` | DEFAULT for all agents |
|
||||
| Linear | `["last-node"]` | One-shot/batch agents |
|
||||
|
||||
**Every graph must have at least one terminal node.** Terminal nodes
|
||||
define where execution ends. For interactive agents that loop continuously,
|
||||
mark the primary event_loop node as terminal (it has `output_keys` and can
|
||||
complete at any point). The framework default for `max_node_visits` is 0
|
||||
(unbounded), so nodes work correctly in continuous loops without explicit
|
||||
override. Only set `max_node_visits > 0` in one-shot agents with feedback loops.
|
||||
Every node must have at least one outgoing edge — no dead ends.
|
||||
Every graph must have at least one terminal node.
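
A toy check of the invariants described in this section (plain lists and tuples stand in for the real NodeSpec/EdgeSpec objects; the framework's own validator is more thorough):

```python
# Toy graph validation: at least one terminal node, and no node left
# without an outgoing edge. Data shapes are invented for illustration.
def validate_graph(
    nodes: list[str],
    edges: list[tuple[str, str]],
    terminal_nodes: list[str],
) -> list[str]:
    errors: list[str] = []
    if not terminal_nodes:
        errors.append("graph must declare at least one terminal node")
    sources = {src for src, _ in edges}
    for node in nodes:
        if node not in sources:
            errors.append(f"dead end: node '{node}' has no outgoing edge")
    return errors


# The typical 2-node worker shape: process <-> handoff, process marked terminal.
print(validate_graph(
    nodes=["process", "handoff"],
    edges=[("process", "handoff"), ("handoff", "process")],
    terminal_nodes=["process"],
))  # -> []
```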
|
||||
|
||||
## Continuous Conversation Mode
|
||||
### Continuous Conversation Mode
|
||||
|
||||
`conversation_mode` has ONLY two valid states:
|
||||
- `"continuous"` — recommended for interactive agents
|
||||
- Omit entirely — isolated per-node conversations (each node starts fresh)
|
||||
- `"continuous"` -- recommended (context carries across node transitions)
|
||||
- Omit entirely -- isolated per-node conversations
|
||||
|
||||
**INVALID values** (do NOT use): `"client_facing"`, `"interactive"`,
|
||||
`"adaptive"`, `"shared"`. These do not exist in the framework.
|
||||
|
||||
When `conversation_mode="continuous"`:
|
||||
- Same conversation thread carries across node transitions
|
||||
- Layered system prompts: identity (agent-level) + narrative + focus (per-node)
|
||||
- Transition markers inserted at boundaries
|
||||
- Compaction happens opportunistically at phase transitions
|
||||
**INVALID values:** `"client_facing"`, `"interactive"`, `"shared"`.
|
||||
|
||||
## loop_config
|
||||
|
||||
Only three valid keys:
|
||||
```python
|
||||
loop_config = {
|
||||
"max_iterations": 100, # Max LLM turns per node visit
|
||||
"max_tool_calls_per_turn": 20, # Max tool calls per LLM response
|
||||
"max_context_tokens": 32000, # Triggers conversation compaction
|
||||
```json
|
||||
{
|
||||
"max_iterations": 100,
|
||||
"max_tool_calls_per_turn": 20,
|
||||
"max_context_tokens": 32000
|
||||
}
|
||||
```
|
||||
**INVALID keys** (do NOT use): `"strategy"`, `"mode"`, `"timeout"`,
|
||||
`"temperature"`. These are silently ignored or cause errors.
|
||||
|
||||
## Data Tools (Spillover)
|
||||
|
||||
For large data that exceeds context:
|
||||
- `save_data(filename, data)` — Write to session data dir
|
||||
- `load_data(filename, offset, limit)` — Read with pagination
|
||||
- `list_data_files()` — List files
|
||||
- `serve_file_to_user(filename, label)` — Clickable file:// URI
|
||||
- `save_data(filename, data)` -- write to session data dir
|
||||
- `load_data(filename, offset, limit)` -- read with pagination
|
||||
- `list_data_files()` -- list files
|
||||
- `serve_file_to_user(filename, label)` -- clickable file URI
|
||||
|
||||
`data_dir` is auto-injected by framework — LLM never sees it.
|
||||
`data_dir` is auto-injected by framework.
|
||||
|
||||
## Fan-Out / Fan-In
|
||||
|
||||
Multiple ON_SUCCESS edges from same source → parallel execution via asyncio.gather().
|
||||
- Parallel nodes must have disjoint output_keys
|
||||
- Only one branch may have client_facing nodes
|
||||
- Fan-in node gets all outputs in the shared buffer
|
||||
Multiple `on_success` edges from same source = parallel execution.
|
||||
Parallel nodes must have disjoint output_keys.
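
A rough sketch of the fan-out/fan-in behaviour described above (the branch coroutines and the merge step are invented for the example; the framework's real scheduler differs in detail):

```python
# Illustrative only: two parallel branches with disjoint output keys are
# awaited together and their outputs merged into one shared buffer.
import asyncio
from typing import Any


async def scrape_branch() -> dict[str, Any]:
    await asyncio.sleep(0.1)  # stand-in for real node work
    return {"scrape_results": ["..."]}


async def search_branch() -> dict[str, Any]:
    await asyncio.sleep(0.1)
    return {"search_results": ["..."]}


async def fan_out_fan_in() -> dict[str, Any]:
    branch_outputs = await asyncio.gather(scrape_branch(), search_branch())
    shared_buffer: dict[str, Any] = {}
    for outputs in branch_outputs:
        shared_buffer.update(outputs)  # disjoint keys, so no branch overwrites another
    return shared_buffer


print(asyncio.run(fan_out_fan_in()))
```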
|
||||
|
||||
## Judge System
|
||||
|
||||
- **Implicit** (default): ACCEPTs when LLM finishes with no tool calls and all required outputs set
|
||||
- **SchemaJudge**: Validates against Pydantic model
|
||||
- **Custom**: Implement `evaluate(context) -> JudgeVerdict`
|
||||
|
||||
Judge is the SOLE acceptance mechanism — no ad-hoc framework gating.
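
For the custom case, a minimal sketch (the `JudgeVerdict` fields and the shape of `context` are assumptions, not taken from the framework source):

```python
# Hypothetical custom judge implementing evaluate(context) -> JudgeVerdict.
from dataclasses import dataclass
from typing import Any


@dataclass
class JudgeVerdict:
    accepted: bool
    reason: str = ""


class RequiredOutputsJudge:
    """Accepts a node's work only when every required output key is non-empty."""

    def __init__(self, required_keys: list[str]) -> None:
        self.required_keys = required_keys

    def evaluate(self, context: dict[str, Any]) -> JudgeVerdict:
        outputs = context.get("outputs", {})
        missing = [k for k in self.required_keys if not outputs.get(k)]
        if missing:
            return JudgeVerdict(accepted=False, reason=f"missing outputs: {missing}")
        return JudgeVerdict(accepted=True)


print(RequiredOutputsJudge(["results"]).evaluate({"outputs": {"results": ""}}))
```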
|
||||
|
||||
## Triggers (Timers, Webhooks)
|
||||
|
||||
For agents that react to external events, create a `triggers.json` file
|
||||
in the agent's export directory:
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
"id": "daily-check",
|
||||
"name": "Daily Check",
|
||||
"trigger_type": "timer",
|
||||
"trigger_config": {"cron": "0 9 * * *"},
|
||||
"task": "Run the daily check process"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
### Key Fields
|
||||
- `trigger_type`: `"timer"` or `"webhook"`
|
||||
- `trigger_config`: `{"cron": "0 9 * * *"}` or `{"interval_minutes": 20}`
|
||||
- `task`: describes what the worker should do when the trigger fires
|
||||
- Triggers can also be created/removed at runtime via `set_trigger` / `remove_trigger` queen tools
|
||||
|
||||
## Tool Discovery
|
||||
|
||||
Do NOT rely on a static tool list — it will be outdated. Always call
|
||||
`list_agent_tools()` with NO arguments first to see ALL available tools.
|
||||
Only use `group=` or `output_schema=` as follow-up calls after seeing the
|
||||
full list.
|
||||
Always call `list_agent_tools()` first to see available tools.
|
||||
Do NOT rely on a static tool list.
|
||||
|
||||
```
|
||||
list_agent_tools() # ALWAYS call this first
|
||||
list_agent_tools(group="gmail", output_schema="full") # then drill into a category
|
||||
list_agent_tools("exports/my_agent/mcp_servers.json") # specific agent's tools
|
||||
list_agent_tools() # full summary
|
||||
list_agent_tools(group="gmail", output_schema="full") # drill into category
|
||||
```
|
||||
|
||||
After building, run `validate_agent_package("{name}")` to check everything at once.
|
||||
|
||||
Common tool categories (verify via list_agent_tools):
|
||||
- **Web**: search, scrape, PDF
|
||||
- **Data**: save/load/append/list data files, serve to user
|
||||
- **File**: view, write, replace, diff, list, grep
|
||||
- **Communication**: email, gmail, slack, telegram
|
||||
- **CRM**: hubspot, apollo, calcom
|
||||
- **GitHub**: stargazers, user profiles, repos
|
||||
- **Vision**: image analysis
|
||||
- **Time**: current time
|
||||
After building, run `validate_agent_package("{name}")` to check everything.
|
||||
|
||||
@@ -1,158 +1,53 @@
|
||||
# GCU Browser Automation Guide
|
||||
# Browser Automation Guide
|
||||
|
||||
## When to Use GCU Nodes
|
||||
## When to Use Browser Nodes
|
||||
|
||||
Use `node_type="gcu"` when:
|
||||
- The user's workflow requires **navigating real websites** (scraping, form-filling, social media interaction, testing web UIs)
|
||||
- The task involves **dynamic/JS-rendered pages** that `web_scrape` cannot handle (SPAs, infinite scroll, login-gated content)
|
||||
- The agent needs to **interact with a website** — clicking, typing, scrolling, selecting, uploading files
|
||||
Use browser nodes (with `tools: {policy: "all"}`) when:
|
||||
- The task requires interacting with web pages (clicking, typing, navigating)
|
||||
- No API is available for the target service
|
||||
- The user is already logged in to the target site
|
||||
|
||||
Do NOT use GCU for:
|
||||
- Static content that `web_scrape` handles fine
|
||||
- API-accessible data (use the API directly)
|
||||
- PDF/file processing
|
||||
- Anything that doesn't require a browser UI
|
||||
## What Browser Nodes Are
|
||||
|
||||
## What GCU Nodes Are
|
||||
- Regular `event_loop` nodes with browser tools from gcu-tools MCP server
|
||||
- Set `tools: {policy: "all"}` to give access to all browser tools
|
||||
- Wire into the graph with edges like any other node
|
||||
- No special node_type needed
|
||||
|
||||
- `node_type="gcu"` — a declarative enhancement over `event_loop`
|
||||
- Framework auto-prepends browser best-practices system prompt
|
||||
- Framework auto-includes all 31 browser tools from `gcu-tools` MCP server
|
||||
- Same underlying `EventLoopNode` class — no new imports needed
|
||||
- `tools=[]` is correct — tools are auto-populated at runtime
|
||||
## Available Browser Tools
|
||||
|
||||
## GCU Architecture Pattern
|
||||
All tools are prefixed with `browser_`:
|
||||
- `browser_start`, `browser_open` -- launch/navigate
|
||||
- `browser_click`, `browser_fill`, `browser_type` -- interact
|
||||
- `browser_snapshot` -- read page content (preferred over screenshot)
|
||||
- `browser_screenshot` -- visual capture
|
||||
- `browser_scroll`, `browser_wait` -- navigation helpers
|
||||
- `browser_evaluate` -- run JavaScript
|
||||
|
||||
GCU nodes are **subagents** — invoked via `delegate_to_sub_agent()`, not connected via edges.
|
||||
## System Prompt Tips for Browser Nodes
|
||||
|
||||
- Primary nodes (`event_loop`, client-facing) orchestrate; GCU nodes do browser work
|
||||
- Parent node declares `sub_agents=["gcu-node-id"]` and calls `delegate_to_sub_agent(agent_id="gcu-node-id", task="...")`
|
||||
- GCU nodes set `max_node_visits=1` (single execution per delegation), `client_facing=False`
|
||||
- GCU nodes use `output_keys=["result"]` and return structured JSON via `set_output("result", ...)`
|
||||
|
||||
## GCU Node Definition Template
|
||||
|
||||
```python
|
||||
gcu_browser_node = NodeSpec(
|
||||
id="gcu-browser-worker",
|
||||
name="Browser Worker",
|
||||
description="Browser subagent that does X.",
|
||||
node_type="gcu",
|
||||
client_facing=False,
|
||||
max_node_visits=1,
|
||||
input_keys=[],
|
||||
output_keys=["result"],
|
||||
tools=[], # Auto-populated with all browser tools
|
||||
system_prompt="""\
|
||||
You are a browser agent. Your job: [specific task].
|
||||
|
||||
## Workflow
|
||||
1. browser_start (only if no browser is running yet)
|
||||
2. browser_open(url=TARGET_URL) — note the returned targetId
|
||||
3. browser_snapshot to read the page
|
||||
4. [task-specific steps]
|
||||
5. set_output("result", JSON)
|
||||
|
||||
## Output format
|
||||
set_output("result", JSON) with:
|
||||
- [field]: [type and description]
|
||||
""",
|
||||
)
|
||||
```
|
||||
1. Use browser_snapshot() to read page content (NOT browser_get_text)
|
||||
2. Use browser_wait(seconds=2-3) after navigation for page load
|
||||
3. If you hit an auth wall, call set_output with an error and move on
|
||||
4. Keep tool calls per turn <= 10 for reliability
|
||||
```
|
||||
|
||||
## Parent Node Template (orchestrating GCU subagents)
|
||||
|
||||
```python
|
||||
orchestrator_node = NodeSpec(
|
||||
id="orchestrator",
|
||||
...
|
||||
node_type="event_loop",
|
||||
sub_agents=["gcu-browser-worker"],
|
||||
system_prompt="""\
|
||||
...
|
||||
delegate_to_sub_agent(
|
||||
agent_id="gcu-browser-worker",
|
||||
task="Navigate to [URL]. Do [specific task]. Return JSON with [fields]."
|
||||
)
|
||||
...
|
||||
""",
|
||||
tools=[], # Orchestrator doesn't need browser tools
|
||||
)
|
||||
```
|
||||
|
||||
## mcp_servers.json with GCU
|
||||
## Example
|
||||
|
||||
```json
|
||||
{
|
||||
"hive-tools": { ... },
|
||||
"gcu-tools": {
|
||||
"transport": "stdio",
|
||||
"command": "uv",
|
||||
"args": ["run", "python", "-m", "gcu.server", "--stdio"],
|
||||
"cwd": "../../tools",
|
||||
"description": "GCU tools for browser automation"
|
||||
}
|
||||
"id": "scan-profiles",
|
||||
"name": "Scan LinkedIn Profiles",
|
||||
"description": "Navigate LinkedIn search results and collect profile data",
|
||||
"tools": {"policy": "all"},
|
||||
"input_keys": ["search_url"],
|
||||
"output_keys": ["profiles"],
|
||||
"system_prompt": "Navigate to the search URL, paginate through results..."
|
||||
}
|
||||
```
|
||||
|
||||
Note: `gcu-tools` is auto-added if any node uses `node_type="gcu"`, but including it explicitly is fine.
|
||||
|
||||
## GCU System Prompt Best Practices
|
||||
|
||||
Key rules to bake into GCU node prompts:
|
||||
|
||||
- Prefer `browser_snapshot` over `browser_get_text("body")` — compact accessibility tree vs 100KB+ raw HTML
|
||||
- Always `browser_wait` after navigation
|
||||
- Use large scroll amounts (~2000-5000) for lazy-loaded content
|
||||
- For spillover files, use `run_command` with grep, not `read_file`
|
||||
- If auth wall detected, report immediately — don't attempt login
|
||||
- Keep tool calls per turn ≤10
|
||||
- Tab isolation: when browser is already running, use `browser_open(background=true)` and pass `target_id` to every call
|
||||
|
||||
## Multiple Concurrent GCU Subagents
|
||||
|
||||
When a task can be parallelized across multiple sites or profiles, declare a distinct GCU
|
||||
node for each and invoke them all in the same LLM turn. The framework batches all
|
||||
`delegate_to_sub_agent` calls made in one turn and runs them with `asyncio.gather`, so
|
||||
they execute concurrently — not sequentially.
|
||||
|
||||
**Each GCU subagent automatically gets its own isolated browser context** — no `profile=`
|
||||
argument is needed in tool calls. The framework derives a unique profile from the subagent's
|
||||
node ID and instance counter and injects it via an asyncio `ContextVar` before the subagent
|
||||
runs.
|
||||
|
||||
### Example: three sites in parallel
|
||||
|
||||
```python
|
||||
# Three distinct GCU nodes
|
||||
gcu_site_a = NodeSpec(id="gcu-site-a", node_type="gcu", ...)
|
||||
gcu_site_b = NodeSpec(id="gcu-site-b", node_type="gcu", ...)
|
||||
gcu_site_c = NodeSpec(id="gcu-site-c", node_type="gcu", ...)
|
||||
|
||||
orchestrator = NodeSpec(
|
||||
id="orchestrator",
|
||||
node_type="event_loop",
|
||||
sub_agents=["gcu-site-a", "gcu-site-b", "gcu-site-c"],
|
||||
system_prompt="""\
|
||||
Call all three subagents in a single response to run them in parallel:
|
||||
delegate_to_sub_agent(agent_id="gcu-site-a", task="Scrape prices from site A")
|
||||
delegate_to_sub_agent(agent_id="gcu-site-b", task="Scrape prices from site B")
|
||||
delegate_to_sub_agent(agent_id="gcu-site-c", task="Scrape prices from site C")
|
||||
""",
|
||||
)
|
||||
Connected via regular edges:
|
||||
```
|
||||
search-setup -> scan-profiles -> process-results
|
||||
```
|
||||
|
||||
**Rules:**
|
||||
- Use distinct node IDs for each concurrent task — sharing an ID shares the browser context.
|
||||
- The GCU node prompts do not need to mention `profile=`; isolation is automatic.
|
||||
- Cleanup is automatic at session end, but GCU nodes can call `browser_stop()` explicitly
|
||||
if they want to release resources mid-run.
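
The per-subagent profile isolation described at the start of this section can be pictured with a small `ContextVar` sketch. The variable name, the profile derivation scheme, and the `run_subagent` helper are invented for illustration; each task spawned by `asyncio.gather` gets its own copy of the context, so a `set()` in one subagent never leaks into another:

```python
# Sketch of ContextVar-based browser profile isolation (not framework code).
import asyncio
from contextvars import ContextVar

browser_profile: ContextVar[str] = ContextVar("browser_profile", default="default")


def current_profile() -> str:
    # What a browser tool would read instead of an explicit profile= argument.
    return browser_profile.get()


async def run_subagent(node_id: str, instance: int) -> str:
    browser_profile.set(f"{node_id}-{instance}")  # injected before the subagent runs
    await asyncio.sleep(0.05)                     # stand-in for real browser work
    return f"{node_id} used profile {current_profile()}"


async def main() -> None:
    results = await asyncio.gather(
        run_subagent("gcu-site-a", 1),
        run_subagent("gcu-site-b", 1),
    )
    print(results)


asyncio.run(main())
```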
|
||||
|
||||
## GCU Anti-Patterns
|
||||
|
||||
- Using `browser_screenshot` to read text (use `browser_snapshot` instead; screenshots are for visual context only)
|
||||
- Re-navigating after scrolling (resets scroll position)
|
||||
- Attempting login on auth walls
|
||||
- Forgetting `target_id` in multi-tab scenarios
|
||||
- Putting browser tools directly on `event_loop` nodes instead of using GCU subagent pattern
|
||||
- Making GCU nodes `client_facing=True` (they should be autonomous subagents)
|
||||
|
||||
@@ -1,21 +1,20 @@
|
||||
"""Reflect agent — background memory extraction for queen and worker memory.
|
||||
"""Reflection agent — background global memory extraction for the queen.
|
||||
|
||||
A lightweight side agent that runs after each queen LLM turn. It
|
||||
inspects recent conversation messages (cursor-based incremental
|
||||
processing) and extracts learnings into individual memory files.
|
||||
A lightweight side agent that runs after each queen LLM turn. It inspects
|
||||
recent conversation messages and extracts durable user knowledge into
|
||||
individual memory files in ``~/.hive/memories/global/``.
|
||||
|
||||
Two reflection types:
|
||||
- **Short reflection**: every queen turn. Distills learnings. Nudged
|
||||
toward a 2-turn pattern (batch reads → batch writes).
|
||||
- **Long reflection**: every 5 short reflections, on CONTEXT_COMPACTED,
|
||||
and at session end. Organises, deduplicates, trims holistically.
|
||||
- **Short reflection**: after conversational queen turns. Distills
|
||||
learnings about the user (profile, preferences, environment, feedback).
|
||||
- **Long reflection**: every 5 short reflections and on CONTEXT_COMPACTED.
|
||||
Organises, deduplicates, trims the global memory directory.
|
||||
|
||||
The agent has restricted tool access: it can only read/write/delete
|
||||
memory files in ``~/.hive/queen/memories/`` and list them.
|
||||
Concurrency: an ``asyncio.Lock`` prevents overlapping runs. If a trigger
|
||||
fires while a reflection is already active the event is skipped.
|
||||
|
||||
Concurrency: an ``asyncio.Lock`` prevents overlapping runs. If a
|
||||
trigger fires while a reflection is already active the event is skipped
|
||||
(cursor hasn't advanced, so messages will be reconsidered next time).
|
||||
All reflections are fire-and-forget (spawned via ``asyncio.create_task``)
|
||||
so they never block the queen's event loop.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
@@ -23,22 +22,18 @@ from __future__ import annotations
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import re
|
||||
import traceback
|
||||
from datetime import datetime
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
from framework.agents.queen.queen_memory_v2 import (
|
||||
GLOBAL_MEMORY_CATEGORIES,
|
||||
MAX_FILE_SIZE_BYTES,
|
||||
MAX_FILES,
|
||||
MEMORY_DIR,
|
||||
MEMORY_FRONTMATTER_EXAMPLE,
|
||||
MEMORY_TYPES,
|
||||
build_diary_document,
|
||||
diary_filename,
|
||||
format_memory_manifest,
|
||||
read_conversation_parts,
|
||||
global_memory_dir,
|
||||
parse_frontmatter,
|
||||
scan_memory_files,
|
||||
)
|
||||
from framework.llm.provider import LLMResponse, Tool
|
||||
@@ -53,7 +48,7 @@ _REFLECTION_TOOLS: list[Tool] = [
|
||||
Tool(
|
||||
name="list_memory_files",
|
||||
description=(
|
||||
"List all memory files with their type, name, age, and description. "
|
||||
"List all memory files with their type, name, and description. "
|
||||
"Returns a text manifest — one line per file."
|
||||
),
|
||||
parameters={
|
||||
@@ -161,6 +156,14 @@ def _execute_tool(name: str, args: dict[str, Any], memory_dir: Path) -> str:
|
||||
content = args.get("content", "")
|
||||
if not filename.endswith(".md"):
|
||||
return "ERROR: Filename must end with .md"
|
||||
# Enforce global memory type restrictions.
|
||||
fm = parse_frontmatter(content)
|
||||
mem_type = (fm.get("type") or "").strip().lower()
|
||||
if mem_type and mem_type not in GLOBAL_MEMORY_CATEGORIES:
|
||||
return (
|
||||
f"ERROR: Invalid memory type '{mem_type}'. "
|
||||
f"Allowed types: {', '.join(GLOBAL_MEMORY_CATEGORIES)}."
|
||||
)
|
||||
# Enforce file size limit.
|
||||
if len(content.encode("utf-8")) > MAX_FILE_SIZE_BYTES:
|
||||
return f"ERROR: Content exceeds {MAX_FILE_SIZE_BYTES} byte limit."
|
||||
@@ -206,19 +209,17 @@ async def _reflection_loop(
|
||||
user_msg: str,
|
||||
memory_dir: Path,
|
||||
max_turns: int = _MAX_TURNS,
|
||||
) -> bool:
|
||||
) -> tuple[bool, list[str], str]:
|
||||
"""Run a mini tool-use loop: LLM → tool calls → repeat.
|
||||
|
||||
Hard cap of *max_turns* iterations. Prompt nudges the LLM toward a
|
||||
2-turn pattern (batch reads in turn 1, batch writes in turn 2).
|
||||
|
||||
Returns ``True`` if the loop completed without LLM errors, ``False``
|
||||
if an LLM call failed (cursor should not advance).
|
||||
Returns (success, changed_files, last_text).
|
||||
"""
|
||||
messages: list[dict[str, Any]] = [{"role": "user", "content": user_msg}]
|
||||
logger.debug("reflect: starting loop (max %d turns)", max_turns)
|
||||
changed_files: list[str] = []
|
||||
last_text: str = ""
|
||||
|
||||
for _turn in range(max_turns):
|
||||
logger.info("reflect: loop turn %d/%d (msgs=%d)", _turn + 1, max_turns, len(messages))
|
||||
try:
|
||||
resp: LLMResponse = await llm.acomplete(
|
||||
messages=messages,
|
||||
@@ -226,21 +227,49 @@ async def _reflection_loop(
|
||||
tools=_REFLECTION_TOOLS,
|
||||
max_tokens=2048,
|
||||
)
|
||||
except asyncio.CancelledError:
|
||||
logger.warning("reflect: LLM call cancelled (task cancelled)")
|
||||
return False, changed_files, last_text
|
||||
except Exception:
|
||||
logger.warning("reflect: LLM call failed", exc_info=True)
|
||||
return False
|
||||
return False, changed_files, last_text
|
||||
|
||||
# Build assistant message.
|
||||
# Extract tool calls from litellm/OpenAI response object.
|
||||
tool_calls_raw: list[dict[str, Any]] = []
|
||||
if resp.raw_response and isinstance(resp.raw_response, dict):
|
||||
tool_calls_raw = resp.raw_response.get("tool_calls", [])
|
||||
raw = resp.raw_response
|
||||
if raw is not None:
|
||||
# litellm returns a ModelResponse object; tool calls live on
|
||||
# choices[0].message.tool_calls as a list of ChatCompletionMessageToolCall.
|
||||
try:
|
||||
msg_obj = raw.choices[0].message
|
||||
if hasattr(msg_obj, "tool_calls") and msg_obj.tool_calls:
|
||||
for tc in msg_obj.tool_calls:
|
||||
fn = tc.function
|
||||
try:
|
||||
args = json.loads(fn.arguments) if fn.arguments else {}
|
||||
except (json.JSONDecodeError, TypeError):
|
||||
args = {}
|
||||
tool_calls_raw.append(
|
||||
{
|
||||
"id": tc.id,
|
||||
"name": fn.name,
|
||||
"input": args,
|
||||
}
|
||||
)
|
||||
except (AttributeError, IndexError):
|
||||
pass
|
||||
|
||||
assistant_msg: dict[str, Any] = {
|
||||
"role": "assistant",
|
||||
"content": resp.content or "",
|
||||
}
|
||||
logger.info(
|
||||
"reflect: LLM responded, text=%d chars, tool_calls=%d",
|
||||
len(resp.content or ""),
|
||||
len(tool_calls_raw),
|
||||
)
|
||||
|
||||
turn_text = resp.content or ""
|
||||
if turn_text:
|
||||
last_text = turn_text
|
||||
assistant_msg: dict[str, Any] = {"role": "assistant", "content": turn_text}
|
||||
if tool_calls_raw:
|
||||
# Convert to OpenAI format for the conversation.
|
||||
assistant_msg["tool_calls"] = [
|
||||
{
|
||||
"id": tc["id"],
|
||||
@@ -254,71 +283,73 @@ async def _reflection_loop(
|
||||
]
|
||||
messages.append(assistant_msg)
|
||||
|
||||
# No tool calls → agent is done.
|
||||
if not tool_calls_raw:
|
||||
logger.debug("reflect: loop done after %d turn(s) (no tool calls)", _turn + 1)
|
||||
break
|
||||
|
||||
# Execute each tool call and append results.
|
||||
logger.debug("reflect: turn %d — executing %d tool call(s): %s", _turn + 1, len(tool_calls_raw), [tc["name"] for tc in tool_calls_raw])
|
||||
for tc in tool_calls_raw:
|
||||
result = _execute_tool(tc["name"], tc.get("input", {}), memory_dir)
|
||||
messages.append({
|
||||
"role": "tool",
|
||||
"tool_call_id": tc["id"],
|
||||
"content": result,
|
||||
})
|
||||
if tc["name"] in ("write_memory_file", "delete_memory_file"):
|
||||
fname = tc.get("input", {}).get("filename", "")
|
||||
if fname and not result.startswith("ERROR"):
|
||||
changed_files.append(fname)
|
||||
messages.append({"role": "tool", "tool_call_id": tc["id"], "content": result})
|
||||
|
||||
return True
|
||||
return True, changed_files, last_text
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# System prompts
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
_FRONTMATTER_EXAMPLE = "\n".join(MEMORY_FRONTMATTER_EXAMPLE)
|
||||
_CATEGORIES_STR = ", ".join(GLOBAL_MEMORY_CATEGORIES)
|
||||
|
||||
_SHORT_REFLECT_SYSTEM = f"""\
|
||||
You are a reflection agent that distills learnings from a conversation into
|
||||
persistent memory files. You run in the background after each assistant turn.
|
||||
You are a reflection agent that distills durable knowledge about the USER
|
||||
into persistent global memory files. You run in the background after each
|
||||
assistant turn.
|
||||
|
||||
Your goal: identify anything from the recent messages worth remembering across
|
||||
future sessions — user preferences, project context, techniques that worked,
|
||||
goals, environment details, reference pointers.
|
||||
Your goal: identify anything from the recent messages worth remembering
|
||||
about the user across ALL future sessions — their profile, preferences,
|
||||
environment setup, or feedback on assistant behavior.
|
||||
|
||||
Memory types: {', '.join(MEMORY_TYPES)}
|
||||
Memory categories: {_CATEGORIES_STR}
|
||||
|
||||
Expected format for each memory file:
|
||||
{_FRONTMATTER_EXAMPLE}
|
||||
```markdown
|
||||
---
|
||||
name: {{{{memory name}}}}
|
||||
description: {{{{one-line description — specific and search-friendly}}}}
|
||||
type: {{{{{_CATEGORIES_STR}}}}}
|
||||
---
|
||||
|
||||
{{{{memory content}}}}
|
||||
```
|
||||
|
||||
Workflow (aim for 2 turns):
|
||||
Turn 1 — call list_memory_files to see what already exists, then
|
||||
read_memory_file for any that might need updating.
|
||||
Turn 1 — call list_memory_files to see what exists, then read_memory_file
|
||||
for any that might need updating.
|
||||
Turn 2 — call write_memory_file for new/updated memories.
|
||||
|
||||
Rules:
|
||||
- Only persist information that would be useful in a *future* conversation.
|
||||
Skip ephemeral task details, routine tool output, and anything obvious
|
||||
from the code or git history.
|
||||
- ONLY persist durable knowledge about the USER — who they are, how they
|
||||
like to work, their tech environment, their feedback on your behavior.
|
||||
- Do NOT store task-specific details, code patterns, file paths, or
|
||||
ephemeral session state.
|
||||
- Keep files concise. Each file should cover ONE topic.
|
||||
- If an existing memory already covers the learning, UPDATE it rather than
|
||||
creating a duplicate.
|
||||
- If there is nothing worth remembering from these messages, do nothing
|
||||
(just respond with a short note — no tool calls needed).
|
||||
- If there is nothing worth remembering, do nothing (respond with a brief
|
||||
reason — no tool calls needed).
|
||||
- File names should be kebab-case slugs ending in .md.
|
||||
- Include a specific, search-friendly description in the frontmatter.
|
||||
- Do NOT exceed {MAX_FILE_SIZE_BYTES} bytes per file or {MAX_FILES} total files.
|
||||
"""
|
||||
|
||||
_LONG_REFLECT_SYSTEM = f"""\
|
||||
You are a reflection agent performing a periodic housekeeping pass over the
|
||||
memory directory. Your job is to organise, deduplicate, and trim noise from
|
||||
the accumulated memory files.
|
||||
global memory directory. Your job is to organise, deduplicate, and trim
|
||||
noise from the accumulated memory files.
|
||||
|
||||
Memory types: {', '.join(MEMORY_TYPES)}
|
||||
|
||||
Expected format for each memory file:
|
||||
{_FRONTMATTER_EXAMPLE}
|
||||
Memory categories: {_CATEGORIES_STR}
|
||||
|
||||
Workflow:
|
||||
1. list_memory_files to get the full manifest.
|
||||
@@ -332,29 +363,6 @@ Rules:
|
||||
- Remove memories that are no longer relevant or are superseded.
|
||||
- Keep the total collection lean and high-signal.
|
||||
- Do NOT invent new information — only reorganise what exists.
|
||||
- Do NOT delete or merge MEMORY-*.md diary files. These are daily narratives
|
||||
managed by a separate process. You may read them for context but should not
|
||||
modify them.
|
||||
"""
|
||||
|
||||
_DIARY_SYSTEM = """\
|
||||
You maintain a daily diary entry for an AI colony session. You receive:
|
||||
(1) Today's existing diary content (may be empty if this is the first entry).
|
||||
(2) A transcript of recent conversation messages.
|
||||
|
||||
Write a cohesive 3-8 sentence narrative about what happened in this session today.
|
||||
Cover: what the user asked for, what was accomplished, key decisions or obstacles,
|
||||
and current status.
|
||||
|
||||
Rules:
|
||||
- If an existing diary is provided, rewrite it as a unified narrative incorporating
|
||||
the new developments. Merge and deduplicate — do not simply append.
|
||||
- Keep the total narrative under 3000 characters.
|
||||
- Focus on the story arc of the day, not individual tool calls or code details.
|
||||
- If the recent messages contain nothing substantive (greetings, routine
|
||||
confirmations), return the existing diary text unchanged.
|
||||
- Output only the diary prose. No headings, no timestamps, no code fences, no
|
||||
frontmatter.
|
||||
"""
|
||||
|
||||
|
||||
@@ -363,29 +371,33 @@ Rules:
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
async def _read_conversation_parts(session_dir: Path) -> list[dict[str, Any]]:
|
||||
"""Read conversation parts from the queen session directory."""
|
||||
from framework.storage.conversation_store import FileConversationStore
|
||||
|
||||
store = FileConversationStore(session_dir / "conversations")
|
||||
return await store.read_parts()
|
||||
|
||||
|
||||
async def run_short_reflection(
|
||||
session_dir: Path,
|
||||
llm: Any,
|
||||
memory_dir: Path | None = None,
|
||||
) -> None:
|
||||
"""Run a short reflection: extract learnings from conversation."""
|
||||
mem_dir = memory_dir or MEMORY_DIR
|
||||
"""Run a short reflection: extract user knowledge from conversation."""
|
||||
logger.info("reflect: starting short reflection for %s", session_dir)
|
||||
mem_dir = memory_dir or global_memory_dir()
|
||||
|
||||
messages = await read_conversation_parts(session_dir)
|
||||
messages = await _read_conversation_parts(session_dir)
|
||||
if not messages:
|
||||
logger.debug("reflect: short — no conversation parts")
|
||||
logger.info("reflect: no conversation parts found in %s, skipping", session_dir)
|
||||
return
|
||||
|
||||
logger.debug("reflect: short — %d conversation parts", len(messages))
|
||||
|
||||
# Build a readable transcript from recent messages.
|
||||
transcript_lines: list[str] = []
|
||||
for msg in messages[-50:]:
|
||||
role = msg.get("role", "")
|
||||
content = str(msg.get("content", "")).strip()
|
||||
if role == "tool":
|
||||
continue # Skip verbose tool results.
|
||||
if not content:
|
||||
if role == "tool" or not content:
|
||||
continue
|
||||
label = "user" if role == "user" else "assistant"
|
||||
if len(content) > 800:
|
||||
@@ -393,6 +405,7 @@ async def run_short_reflection(
|
||||
transcript_lines.append(f"[{label}]: {content}")
|
||||
|
||||
if not transcript_lines:
|
||||
logger.info("reflect: no transcript lines after filtering, skipping")
|
||||
return
|
||||
|
||||
transcript = "\n".join(transcript_lines)
|
||||
@@ -402,23 +415,26 @@ async def run_short_reflection(
|
||||
f"Timestamp: {datetime.now().isoformat(timespec='minutes')}"
|
||||
)
|
||||
|
||||
await _reflection_loop(llm, _SHORT_REFLECT_SYSTEM, user_msg, mem_dir)
|
||||
logger.debug("reflect: short reflection done")
|
||||
_, changed, reason = await _reflection_loop(llm, _SHORT_REFLECT_SYSTEM, user_msg, mem_dir)
|
||||
if changed:
|
||||
logger.info("reflect: short reflection done, changed files: %s", changed)
|
||||
else:
|
||||
logger.info("reflect: short reflection done, no changes — %s", reason or "no reason")
|
||||
|
||||
|
||||
async def run_long_reflection(
|
||||
llm: Any,
|
||||
memory_dir: Path | None = None,
|
||||
) -> None:
|
||||
"""Run a long reflection: organise and deduplicate all memories."""
|
||||
mem_dir = memory_dir or MEMORY_DIR
|
||||
"""Run a long reflection: organise and deduplicate all global memories."""
|
||||
logger.debug("reflect: starting long reflection")
|
||||
mem_dir = memory_dir or global_memory_dir()
|
||||
files = scan_memory_files(mem_dir)
|
||||
|
||||
if not files:
|
||||
logger.debug("reflect: long — no memory files to organise")
|
||||
logger.debug("reflect: no memory files, skipping long reflection")
|
||||
return
|
||||
|
||||
logger.debug("reflect: long — organising %d memory files", len(files))
|
||||
manifest = format_memory_manifest(files)
|
||||
user_msg = (
|
||||
f"## Current memory manifest ({len(files)} files)\n\n"
|
||||
@@ -426,86 +442,43 @@ async def run_long_reflection(
|
||||
f"Timestamp: {datetime.now().isoformat(timespec='minutes')}"
|
||||
)
|
||||
|
||||
await _reflection_loop(llm, _LONG_REFLECT_SYSTEM, user_msg, mem_dir)
|
||||
logger.debug("reflect: long reflection done (%d files)", len(files))
|
||||
_, changed, reason = await _reflection_loop(llm, _LONG_REFLECT_SYSTEM, user_msg, mem_dir)
|
||||
if changed:
|
||||
logger.debug("reflect: long reflection done (%d files), changed: %s", len(files), changed)
|
||||
else:
|
||||
logger.debug(
|
||||
"reflect: long reflection done (%d files), no changes — %s",
|
||||
len(files),
|
||||
reason or "no reason",
|
||||
)
|
||||
|
||||
|
||||
async def run_diary_update(
|
||||
async def run_shutdown_reflection(
|
||||
session_dir: Path,
|
||||
llm: Any,
|
||||
memory_dir: Path | None = None,
|
||||
) -> None:
|
||||
"""Update today's diary file with a narrative of recent activity."""
|
||||
mem_dir = memory_dir or MEMORY_DIR
|
||||
|
||||
fname = diary_filename()
|
||||
diary_path = mem_dir / fname
|
||||
today_str = datetime.now().strftime("%Y-%m-%d")
|
||||
|
||||
# Read existing diary body (strip frontmatter).
|
||||
existing_body = ""
|
||||
if diary_path.exists():
|
||||
try:
|
||||
raw = diary_path.read_text(encoding="utf-8")
|
||||
m = re.match(r"^---\s*\n.*?\n---\s*\n?", raw, re.DOTALL)
|
||||
existing_body = raw[m.end() :].strip() if m else raw.strip()
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
# Read all conversation messages for context.
|
||||
messages = await read_conversation_parts(session_dir)
|
||||
transcript_lines: list[str] = []
|
||||
for msg in messages[-40:]:
|
||||
role = msg.get("role", "")
|
||||
content = str(msg.get("content", "")).strip()
|
||||
if role == "tool" or not content:
|
||||
continue
|
||||
label = "user" if role == "user" else "assistant"
|
||||
if len(content) > 600:
|
||||
content = content[:600] + "..."
|
||||
transcript_lines.append(f"[{label}]: {content}")
|
||||
|
||||
if not transcript_lines:
|
||||
return
|
||||
|
||||
transcript = "\n".join(transcript_lines)
|
||||
user_msg = (
|
||||
f"## Today's Diary So Far\n\n"
|
||||
f"{existing_body or '(no entries yet)'}\n\n"
|
||||
f"## Recent Conversation\n\n"
|
||||
f"{transcript}\n\n"
|
||||
f"Date: {today_str}"
|
||||
)
|
||||
"""Run a final short reflection on session shutdown.
|
||||
|
||||
Called during session teardown so recent conversation insights are
|
||||
persisted before the session is destroyed.
|
||||
"""
|
||||
logger.info("reflect: running shutdown reflection for %s", session_dir)
|
||||
mem_dir = memory_dir or global_memory_dir()
|
||||
try:
|
||||
from framework.agents.queen.config import default_config
|
||||
|
||||
resp = await llm.acomplete(
|
||||
messages=[{"role": "user", "content": user_msg}],
|
||||
system=_DIARY_SYSTEM,
|
||||
max_tokens=min(default_config.max_tokens, 1024),
|
||||
)
|
||||
new_body = (resp.content or "").strip()
|
||||
if not new_body:
|
||||
return
|
||||
|
||||
doc = build_diary_document(date_str=today_str, body=new_body)
|
||||
if len(doc.encode("utf-8")) > MAX_FILE_SIZE_BYTES:
|
||||
new_body = new_body[:2800]
|
||||
doc = build_diary_document(date_str=today_str, body=new_body)
|
||||
|
||||
mem_dir.mkdir(parents=True, exist_ok=True)
|
||||
diary_path.write_text(doc, encoding="utf-8")
|
||||
logger.debug("diary: updated %s (%d chars)", fname, len(doc))
|
||||
await run_short_reflection(session_dir, llm, mem_dir)
|
||||
logger.info("reflect: shutdown reflection completed for %s", session_dir)
|
||||
except asyncio.CancelledError:
|
||||
logger.warning("reflect: shutdown reflection cancelled for %s", session_dir)
|
||||
except Exception:
|
||||
logger.warning("diary: update failed", exc_info=True)
|
||||
logger.warning("reflect: shutdown reflection failed", exc_info=True)
|
||||
_write_error("shutdown reflection")
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Event-bus integration
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
# Run a long reflection every N short reflections.
|
||||
_LONG_REFLECT_INTERVAL = 5
|
||||
|
||||
|
||||
@@ -514,35 +487,23 @@ async def subscribe_reflection_triggers(
|
||||
session_dir: Path,
|
||||
llm: Any,
|
||||
memory_dir: Path | None = None,
|
||||
phase_state: Any = None,
|
||||
) -> list[str]:
|
||||
"""Subscribe to queen turn events and return subscription IDs.
|
||||
|
||||
Call this once during queen setup. Returns a list of event-bus
|
||||
subscription IDs for cleanup during session teardown.
|
||||
"""
|
||||
from framework.runtime.event_bus import EventType
|
||||
from framework.host.event_bus import EventType
|
||||
|
||||
mem_dir = memory_dir or MEMORY_DIR
|
||||
mem_dir = memory_dir or global_memory_dir()
|
||||
_lock = asyncio.Lock()
|
||||
_short_count = 0
|
||||
_background_tasks: set[asyncio.Task] = set()
|
||||
|
||||
async def _on_turn_complete(event: Any) -> None:
|
||||
nonlocal _short_count
|
||||
|
||||
# Only process queen turns.
|
||||
if getattr(event, "stream_id", None) != "queen":
|
||||
return
|
||||
|
||||
if _lock.locked():
|
||||
logger.debug("reflect: skipping — reflection already in progress")
|
||||
return
|
||||
|
||||
async def _do_turn_reflect(is_interval: bool, count: int) -> None:
|
||||
async with _lock:
|
||||
try:
|
||||
_short_count += 1
|
||||
logger.debug("reflect: turn complete — short count %d/%d", _short_count, _LONG_REFLECT_INTERVAL)
|
||||
if _short_count % _LONG_REFLECT_INTERVAL == 0:
|
||||
if is_interval:
|
||||
await run_short_reflection(session_dir, llm, mem_dir)
|
||||
await run_long_reflection(llm, mem_dir)
|
||||
else:
|
||||
@@ -551,46 +512,7 @@ async def subscribe_reflection_triggers(
|
||||
logger.warning("reflect: reflection failed", exc_info=True)
|
||||
_write_error("short/long reflection")
|
||||
|
||||
# Update daily diary after reflection.
|
||||
try:
|
||||
await run_diary_update(session_dir, llm, mem_dir)
|
||||
except Exception:
|
||||
logger.warning("reflect: diary update failed", exc_info=True)
|
||||
|
||||
# Update recall cache after reflection completes, guaranteeing
|
||||
# recall sees the current turn's extracted memories.
|
||||
if phase_state is not None:
|
||||
try:
|
||||
from framework.agents.queen.recall_selector import update_recall_cache
|
||||
await update_recall_cache(
|
||||
session_dir,
|
||||
llm,
|
||||
cache_setter=lambda block: (
|
||||
setattr(phase_state, "_cached_colony_recall_block", block),
|
||||
setattr(phase_state, "_cached_recall_block", block),
|
||||
),
|
||||
memory_dir=mem_dir,
|
||||
heading="Colony Memories",
|
||||
)
|
||||
await update_recall_cache(
|
||||
session_dir,
|
||||
llm,
|
||||
cache_setter=lambda block: setattr(
|
||||
phase_state, "_cached_global_recall_block", block
|
||||
),
|
||||
memory_dir=getattr(phase_state, "global_memory_dir", None),
|
||||
heading="Global Memories",
|
||||
)
|
||||
except Exception:
|
||||
logger.debug("recall: cache update failed", exc_info=True)
|
||||
|
||||
async def _on_compaction(event: Any) -> None:
|
||||
if getattr(event, "stream_id", None) != "queen":
|
||||
return
|
||||
|
||||
if _lock.locked():
|
||||
return
|
||||
|
||||
async def _do_compaction_reflect() -> None:
|
||||
async with _lock:
|
||||
try:
|
||||
await run_long_reflection(llm, mem_dir)
|
||||
@@ -598,6 +520,50 @@ async def subscribe_reflection_triggers(
|
||||
logger.warning("reflect: compaction-triggered reflection failed", exc_info=True)
|
||||
_write_error("compaction reflection")
|
||||
|
||||
def _fire_and_forget(coro: Any) -> None:
|
||||
"""Spawn a background task and prevent GC before it finishes."""
|
||||
task = asyncio.create_task(coro)
|
||||
_background_tasks.add(task)
|
||||
task.add_done_callback(_background_tasks.discard)
|
||||
|
||||
async def _on_turn_complete(event: Any) -> None:
|
||||
nonlocal _short_count
|
||||
|
||||
if getattr(event, "stream_id", None) != "queen":
|
||||
return
|
||||
|
||||
_short_count += 1
|
||||
|
||||
event_data = getattr(event, "data", {}) or {}
|
||||
stop_reason = event_data.get("stop_reason", "")
|
||||
is_tool_turn = stop_reason in ("tool_use", "tool_calls")
|
||||
is_interval = _short_count % _LONG_REFLECT_INTERVAL == 0
|
||||
|
||||
if is_tool_turn and not is_interval:
|
||||
logger.debug("reflect: skipping tool turn (count=%d)", _short_count)
|
||||
return
|
||||
|
||||
if _lock.locked():
|
||||
logger.debug("reflect: skipping, already running (count=%d)", _short_count)
|
||||
return
|
||||
|
||||
logger.debug(
|
||||
"reflect: triggered (count=%d, interval=%s, stop_reason=%s)",
|
||||
_short_count,
|
||||
is_interval,
|
||||
stop_reason,
|
||||
)
|
||||
_fire_and_forget(_do_turn_reflect(is_interval, _short_count))
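A worked trace of the gating above, assuming every event is a queen turn and the default `_LONG_REFLECT_INTERVAL = 5`: the counter increments on every turn; a turn that stops with `tool_use` or `tool_calls` is skipped unless the counter is a multiple of 5; a turn with any other stop reason fires a short reflection; and on the 5th, 10th, ... turn `is_interval` is true, so the reflection runs regardless of stop reason and additionally performs the long reflection.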
|
||||
|
||||
async def _on_compaction(event: Any) -> None:
|
||||
if getattr(event, "stream_id", None) != "queen":
|
||||
return
|
||||
if _lock.locked():
|
||||
logger.debug("reflect: skipping compaction trigger, already running")
|
||||
return
|
||||
logger.debug("reflect: compaction triggered long reflection")
|
||||
_fire_and_forget(_do_compaction_reflect())
|
||||
|
||||
sub_ids: list[str] = []
|
||||
|
||||
sub1 = event_bus.subscribe(
|
||||
@@ -615,68 +581,10 @@ async def subscribe_reflection_triggers(
|
||||
return sub_ids
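A hedged setup/teardown sketch follows. The wrapper names are illustrative, the event bus is assumed to be the first positional parameter (the hunk header truncates the start of the signature), and `event_bus.unsubscribe(...)` is an assumed cleanup API for the returned IDs:

```python
from pathlib import Path
from typing import Any


async def setup_queen_reflection(
    event_bus: Any, llm: Any, session_dir: Path, phase_state: Any
) -> list[str]:
    # Subscribe once during queen setup; keep the IDs for teardown.
    return await subscribe_reflection_triggers(
        event_bus,
        session_dir=session_dir,
        llm=llm,
        memory_dir=None,          # falls back to global_memory_dir()
        phase_state=phase_state,  # enables the post-reflection recall-cache refresh
    )


async def teardown_queen_reflection(event_bus: Any, sub_ids: list[str]) -> None:
    for sub_id in sub_ids:
        event_bus.unsubscribe(sub_id)  # assumed API; the IDs are returned for cleanup
```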
|
||||
|
||||
|
||||
async def subscribe_worker_memory_triggers(
|
||||
event_bus: Any,
|
||||
llm: Any,
|
||||
*,
|
||||
worker_sessions_dir: Path,
|
||||
colony_memory_dir: Path,
|
||||
recall_cache: dict[str, str],
|
||||
) -> list[str]:
|
||||
"""Subscribe colony memory lifecycle events for worker runs.
|
||||
|
||||
Short reflection is now handled synchronously at node handoff in
|
||||
``WorkerAgent._reflect_colony_memory()``. This function only manages:
|
||||
- Recall cache initialisation on execution start
|
||||
- Final long reflection + cleanup on execution end
|
||||
"""
|
||||
from framework.runtime.event_bus import EventType
|
||||
|
||||
_terminal_lock = asyncio.Lock()
|
||||
|
||||
def _is_worker_event(event: Any) -> bool:
|
||||
return bool(
|
||||
getattr(event, "execution_id", None)
|
||||
and getattr(event, "stream_id", None) not in ("queen", "judge")
|
||||
)
|
||||
|
||||
async def _on_execution_started(event: Any) -> None:
|
||||
if not _is_worker_event(event):
|
||||
return
|
||||
if event.execution_id is not None:
|
||||
recall_cache[event.execution_id] = ""
|
||||
|
||||
async def _on_execution_terminal(event: Any) -> None:
|
||||
if not _is_worker_event(event):
|
||||
return
|
||||
execution_id = event.execution_id
|
||||
if execution_id is None:
|
||||
return
|
||||
async with _terminal_lock:
|
||||
try:
|
||||
await run_long_reflection(llm, colony_memory_dir)
|
||||
except Exception:
|
||||
logger.warning("reflect: worker final reflection failed", exc_info=True)
|
||||
_write_error("worker final reflection")
|
||||
finally:
|
||||
recall_cache.pop(execution_id, None)
|
||||
|
||||
return [
|
||||
event_bus.subscribe(
|
||||
event_types=[EventType.EXECUTION_STARTED],
|
||||
handler=_on_execution_started,
|
||||
),
|
||||
event_bus.subscribe(
|
||||
event_types=[EventType.EXECUTION_COMPLETED, EventType.EXECUTION_FAILED],
|
||||
handler=_on_execution_terminal,
|
||||
),
|
||||
]
|
||||
|
||||
|
||||
def _write_error(context: str) -> None:
|
||||
"""Best-effort write of the last traceback to an error file."""
|
||||
try:
|
||||
error_path = MEMORY_DIR / ".reflection_error.txt"
|
||||
error_path = global_memory_dir() / ".reflection_error.txt"
|
||||
error_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
error_path.write_text(
|
||||
f"context: {context}\ntime: {datetime.now().isoformat()}\n\n{traceback.format_exc()}",
|
||||
|
||||
@@ -22,10 +22,10 @@ def mock_mode():
|
||||
|
||||
@pytest_asyncio.fixture(scope="session")
|
||||
async def runner(tmp_path_factory, mock_mode):
|
||||
from framework.runner.runner import AgentRunner
|
||||
from framework.loader.agent_loader import AgentLoader
|
||||
|
||||
storage = tmp_path_factory.mktemp("agent_storage")
|
||||
r = AgentRunner.load(AGENT_PATH, mock_mode=mock_mode, storage_path=storage)
|
||||
r = AgentLoader.load(AGENT_PATH, mock_mode=mock_mode, storage_path=storage)
|
||||
r._setup()
|
||||
yield r
|
||||
await r.cleanup_async()
|
||||
|
||||
@@ -79,7 +79,7 @@ def main():
|
||||
subparsers = parser.add_subparsers(dest="command", required=True)
|
||||
|
||||
# Register runner commands (run, info, validate, list, shell)
|
||||
from framework.runner.cli import register_commands
|
||||
from framework.loader.cli import register_commands
|
||||
|
||||
register_commands(subparsers)
|
||||
|
||||
@@ -99,7 +99,7 @@ def main():
|
||||
register_debugger_commands(subparsers)
|
||||
|
||||
# Register MCP registry commands (mcp install, mcp add, ...)
|
||||
from framework.runner.mcp_registry_cli import register_mcp_commands
|
||||
from framework.loader.mcp_registry_cli import register_mcp_commands
|
||||
|
||||
register_mcp_commands(subparsers)
|
||||
|
||||
|
||||
+115
-14
@@ -12,13 +12,47 @@ from dataclasses import dataclass, field
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
from framework.graph.edge import DEFAULT_MAX_TOKENS
|
||||
from framework.orchestrator.edge import DEFAULT_MAX_TOKENS
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Hive home directory structure
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
HIVE_HOME = Path.home() / ".hive"
|
||||
QUEENS_DIR = HIVE_HOME / "agents" / "queens"
|
||||
COLONIES_DIR = HIVE_HOME / "colonies"
|
||||
MEMORIES_DIR = HIVE_HOME / "memories"
|
||||
|
||||
|
||||
def queen_dir(queen_name: str = "default") -> Path:
|
||||
"""Return the storage directory for a named queen agent."""
|
||||
return QUEENS_DIR / queen_name
|
||||
|
||||
|
||||
def colony_dir(colony_name: str) -> Path:
|
||||
"""Return the directory for a named colony."""
|
||||
return COLONIES_DIR / colony_name
|
||||
|
||||
|
||||
def memory_dir(scope: str, name: str | None = None) -> Path:
|
||||
"""Return memory dir for a scope.
|
||||
|
||||
Examples::
|
||||
|
||||
memory_dir("global") -> ~/.hive/memories/global
|
||||
memory_dir("colonies", "my_agent") -> ~/.hive/memories/colonies/my_agent
|
||||
memory_dir("agents/queens", "default")-> ~/.hive/memories/agents/queens/default
|
||||
memory_dir("agents", "worker_name") -> ~/.hive/memories/agents/worker_name
|
||||
"""
|
||||
base = MEMORIES_DIR / scope
|
||||
return base / name if name else base
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Low-level config file access
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
HIVE_CONFIG_FILE = Path.home() / ".hive" / "configuration.json"
|
||||
HIVE_CONFIG_FILE = HIVE_HOME / "configuration.json"
|
||||
|
||||
# Hive LLM router endpoint (Anthropic-compatible).
|
||||
# litellm's Anthropic handler appends /v1/messages, so this is just the base host.
|
||||
@@ -42,6 +76,48 @@ def get_hive_config() -> dict[str, Any]:
|
||||
return {}
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Credential store helpers (for BYOK keys)
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
# Provider name → credential store ID mapping
_PROVIDER_CRED_MAP: dict[str, str] = {
    "anthropic": "anthropic",
    "openai": "openai",
    "gemini": "gemini",
    "google": "gemini",
    "minimax": "minimax",
    "groq": "groq",
    "cerebras": "cerebras",
    "openrouter": "openrouter",
    "mistral": "mistral",
    "together": "together",
    "together_ai": "together",
    "deepseek": "deepseek",
    "kimi": "kimi",
    "hive": "hive",
}


def _get_api_key_from_credential_store(provider: str) -> str | None:
    """Look up a BYOK API key from the encrypted credential store.

    Returns None if no key is found or the credential store is unavailable.
    """
    if not os.environ.get("HIVE_CREDENTIAL_KEY"):
        return None
    cred_id = _PROVIDER_CRED_MAP.get(provider.lower())
    if not cred_id:
        return None
    try:
        from framework.credentials import CredentialStore

        store = CredentialStore.with_encrypted_storage()
        return store.get(cred_id)
    except Exception:
        return None
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Derived helpers
|
||||
# ---------------------------------------------------------------------------
|
||||
@@ -88,7 +164,7 @@ def get_worker_api_key() -> str | None:
|
||||
# Worker-specific subscription / env var
|
||||
if worker_llm.get("use_claude_code_subscription"):
|
||||
try:
|
||||
from framework.runner.runner import get_claude_code_token
|
||||
from framework.loader.agent_loader import get_claude_code_token
|
||||
|
||||
token = get_claude_code_token()
|
||||
if token:
|
||||
@@ -98,7 +174,7 @@ def get_worker_api_key() -> str | None:
|
||||
|
||||
if worker_llm.get("use_codex_subscription"):
|
||||
try:
|
||||
from framework.runner.runner import get_codex_token
|
||||
from framework.loader.agent_loader import get_codex_token
|
||||
|
||||
token = get_codex_token()
|
||||
if token:
|
||||
@@ -108,7 +184,7 @@ def get_worker_api_key() -> str | None:
|
||||
|
||||
if worker_llm.get("use_kimi_code_subscription"):
|
||||
try:
|
||||
from framework.runner.runner import get_kimi_code_token
|
||||
from framework.loader.agent_loader import get_kimi_code_token
|
||||
|
||||
token = get_kimi_code_token()
|
||||
if token:
|
||||
@@ -118,7 +194,7 @@ def get_worker_api_key() -> str | None:
|
||||
|
||||
if worker_llm.get("use_antigravity_subscription"):
|
||||
try:
|
||||
from framework.runner.runner import get_antigravity_token
|
||||
from framework.loader.agent_loader import get_antigravity_token
|
||||
|
||||
token = get_antigravity_token()
|
||||
if token:
|
||||
@@ -174,7 +250,7 @@ def get_worker_llm_extra_kwargs() -> dict[str, Any]:
|
||||
"User-Agent": "CodexBar",
|
||||
}
|
||||
try:
|
||||
from framework.runner.runner import get_codex_account_id
|
||||
from framework.loader.agent_loader import get_codex_account_id
|
||||
|
||||
account_id = get_codex_account_id()
|
||||
if account_id:
|
||||
@@ -221,22 +297,43 @@ def get_max_context_tokens() -> int:
|
||||
return get_hive_config().get("llm", {}).get("max_context_tokens", DEFAULT_MAX_CONTEXT_TOKENS)
|
||||
|
||||
|
||||
def get_api_keys() -> list[str] | None:
    """Return a list of API keys if ``api_keys`` is configured, else ``None``.

    This supports key-pool rotation: configure multiple keys in
    ``~/.hive/configuration.json`` under ``llm.api_keys`` and the
    :class:`~framework.llm.key_pool.KeyPool` will rotate through them.
    """
    llm = get_hive_config().get("llm", {})
    keys = llm.get("api_keys")
    if keys and isinstance(keys, list) and len(keys) > 0:
        return [k for k in keys if k]  # filter empties
    return None
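A hedged sketch of configuring the pool. Only ``llm.api_keys`` and the ``~/.hive/configuration.json`` location come from the code above; the key values are placeholders and the snippet would overwrite any existing configuration:

```python
import json
from pathlib import Path

config_path = Path.home() / ".hive" / "configuration.json"
config = {"llm": {"api_keys": ["sk-key-one", "sk-key-two", "sk-key-three"]}}

config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2), encoding="utf-8")

# get_api_keys() now returns all three keys, get_api_key() returns the first,
# and LiteLLMProvider(api_keys=...) builds a KeyPool that rotates on rate limits.
```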
|
||||
|
||||
|
||||
def get_api_key() -> str | None:
|
||||
"""Return the API key, supporting env var, Claude Code subscription, Codex, and ZAI Code.
|
||||
|
||||
Priority:
|
||||
0. Explicit key pool (``api_keys`` list) -- returns first key for
|
||||
single-key callers; full pool available via :func:`get_api_keys`.
|
||||
1. Claude Code subscription (``use_claude_code_subscription: true``)
|
||||
reads the OAuth token from ``~/.claude/.credentials.json``.
|
||||
2. Codex subscription (``use_codex_subscription: true``)
|
||||
reads the OAuth token from macOS Keychain or ``~/.codex/auth.json``.
|
||||
3. Environment variable named in ``api_key_env_var``.
|
||||
"""
|
||||
# If an explicit key pool is configured, use the first key.
|
||||
pool_keys = get_api_keys()
|
||||
if pool_keys:
|
||||
return pool_keys[0]
|
||||
|
||||
llm = get_hive_config().get("llm", {})
|
||||
|
||||
# Claude Code subscription: read OAuth token directly
|
||||
if llm.get("use_claude_code_subscription"):
|
||||
try:
|
||||
from framework.runner.runner import get_claude_code_token
|
||||
from framework.loader.agent_loader import get_claude_code_token
|
||||
|
||||
token = get_claude_code_token()
|
||||
if token:
|
||||
@@ -247,7 +344,7 @@ def get_api_key() -> str | None:
|
||||
# Codex subscription: read OAuth token from Keychain / auth.json
|
||||
if llm.get("use_codex_subscription"):
|
||||
try:
|
||||
from framework.runner.runner import get_codex_token
|
||||
from framework.loader.agent_loader import get_codex_token
|
||||
|
||||
token = get_codex_token()
|
||||
if token:
|
||||
@@ -258,7 +355,7 @@ def get_api_key() -> str | None:
|
||||
# Kimi Code subscription: read API key from ~/.kimi/config.toml
|
||||
if llm.get("use_kimi_code_subscription"):
|
||||
try:
|
||||
from framework.runner.runner import get_kimi_code_token
|
||||
from framework.loader.agent_loader import get_kimi_code_token
|
||||
|
||||
token = get_kimi_code_token()
|
||||
if token:
|
||||
@@ -269,7 +366,7 @@ def get_api_key() -> str | None:
|
||||
# Antigravity subscription: read OAuth token from accounts JSON
|
||||
if llm.get("use_antigravity_subscription"):
|
||||
try:
|
||||
from framework.runner.runner import get_antigravity_token
|
||||
from framework.loader.agent_loader import get_antigravity_token
|
||||
|
||||
token = get_antigravity_token()
|
||||
if token:
|
||||
@@ -280,8 +377,12 @@ def get_api_key() -> str | None:
|
||||
# Standard env-var path (covers ZAI Code and all API-key providers)
|
||||
api_key_env_var = llm.get("api_key_env_var")
|
||||
if api_key_env_var:
|
||||
return os.environ.get(api_key_env_var)
|
||||
return None
|
||||
key = os.environ.get(api_key_env_var)
|
||||
if key:
|
||||
return key
|
||||
|
||||
# Credential store fallback — BYOK keys stored via the UI
|
||||
return _get_api_key_from_credential_store(llm.get("provider", ""))
|
||||
|
||||
|
||||
# OAuth credentials for Antigravity are fetched from the opencode-antigravity-auth project.
|
||||
@@ -422,7 +523,7 @@ def get_llm_extra_kwargs() -> dict[str, Any]:
|
||||
"User-Agent": "CodexBar",
|
||||
}
|
||||
try:
|
||||
from framework.runner.runner import get_codex_account_id
|
||||
from framework.loader.agent_loader import get_codex_account_id
|
||||
|
||||
account_id = get_codex_account_id()
|
||||
if account_id:
|
||||
|
||||
@@ -36,7 +36,7 @@ from pathlib import Path
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.graph import NodeSpec
|
||||
from framework.orchestrator import NodeSpec
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -533,7 +533,9 @@ class CredentialSetupSession:
|
||||
|
||||
|
||||
def load_agent_nodes(agent_path: str | Path) -> list:
|
||||
"""Load NodeSpec list from an agent's agent.py or agent.json.
|
||||
"""Load NodeSpec list from an agent directory.
|
||||
|
||||
Checks agent.json (declarative) first, then agent.py (legacy).
|
||||
|
||||
Args:
|
||||
agent_path: Path to agent directory.
|
||||
@@ -542,16 +544,28 @@ def load_agent_nodes(agent_path: str | Path) -> list:
|
||||
List of NodeSpec objects (empty list if agent can't be loaded).
|
||||
"""
|
||||
agent_path = Path(agent_path)
|
||||
agent_json_file = agent_path / "agent.json"
|
||||
agent_py = agent_path / "agent.py"
|
||||
agent_json = agent_path / "agent.json"
|
||||
|
||||
if agent_py.exists():
|
||||
if agent_json_file.exists():
|
||||
return _load_nodes_from_json_declarative(agent_json_file)
|
||||
elif agent_py.exists():
|
||||
return _load_nodes_from_python_agent(agent_path)
|
||||
elif agent_json.exists():
|
||||
return _load_nodes_from_json_agent(agent_json)
|
||||
return []
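A minimal usage sketch (the agent directory below is illustrative and follows the ~/.hive layout described earlier):

```python
from pathlib import Path

agent_dir = Path.home() / ".hive" / "agents" / "queens" / "default"
nodes = load_agent_nodes(agent_dir)

# agent.json (declarative) wins over agent.py (legacy); an unloadable agent yields [].
for spec in nodes:
    print(spec)
```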
|
||||
|
||||
|
||||
def _load_nodes_from_json_declarative(agent_json: Path) -> list:
|
||||
"""Load nodes from a declarative JSON agent."""
|
||||
try:
|
||||
from framework.loader.agent_loader import load_agent_config
|
||||
|
||||
data = json.loads(agent_json.read_text(encoding="utf-8"))
|
||||
graph, _ = load_agent_config(data)
|
||||
return list(graph.nodes)
|
||||
except Exception:
|
||||
return []
|
||||
|
||||
|
||||
def _load_nodes_from_python_agent(agent_path: Path) -> list:
|
||||
"""Load nodes from a Python-based agent."""
|
||||
import importlib.util
|
||||
@@ -590,7 +604,7 @@ def _load_nodes_from_json_agent(agent_json: Path) -> list:
|
||||
with open(agent_json, encoding="utf-8-sig") as f:
|
||||
data = json.load(f)
|
||||
|
||||
from framework.graph import NodeSpec
|
||||
from framework.orchestrator import NodeSpec
|
||||
|
||||
nodes_data = data.get("graph", {}).get("nodes", [])
|
||||
nodes = []
|
||||
|
||||
@@ -1,65 +0,0 @@
|
||||
"""Graph structures: Goals, Nodes, Edges, and Execution."""
|
||||
|
||||
from framework.graph.context import GraphContext
|
||||
from framework.graph.context_handoff import ContextHandoff, HandoffContext
|
||||
from framework.graph.conversation import ConversationStore, Message, NodeConversation
|
||||
from framework.graph.edge import DEFAULT_MAX_TOKENS, EdgeCondition, EdgeSpec, GraphSpec
|
||||
from framework.graph.event_loop_node import (
|
||||
EventLoopNode,
|
||||
JudgeProtocol,
|
||||
JudgeVerdict,
|
||||
LoopConfig,
|
||||
OutputAccumulator,
|
||||
)
|
||||
from framework.graph.executor import GraphExecutor
|
||||
from framework.graph.goal import Constraint, Goal, GoalStatus, SuccessCriterion
|
||||
from framework.graph.node import NodeContext, NodeProtocol, NodeResult, NodeSpec
|
||||
from framework.graph.worker_agent import (
|
||||
Activation,
|
||||
FanOutTag,
|
||||
FanOutTracker,
|
||||
WorkerAgent,
|
||||
WorkerCompletion,
|
||||
WorkerLifecycle,
|
||||
)
|
||||
|
||||
__all__ = [
|
||||
# Goal
|
||||
"Goal",
|
||||
"SuccessCriterion",
|
||||
"Constraint",
|
||||
"GoalStatus",
|
||||
# Node
|
||||
"NodeSpec",
|
||||
"NodeContext",
|
||||
"NodeResult",
|
||||
"NodeProtocol",
|
||||
# Edge
|
||||
"EdgeSpec",
|
||||
"EdgeCondition",
|
||||
"GraphSpec",
|
||||
"DEFAULT_MAX_TOKENS",
|
||||
# Executor
|
||||
"GraphExecutor",
|
||||
# Conversation
|
||||
"NodeConversation",
|
||||
"ConversationStore",
|
||||
"Message",
|
||||
# Event Loop
|
||||
"EventLoopNode",
|
||||
"LoopConfig",
|
||||
"OutputAccumulator",
|
||||
"JudgeProtocol",
|
||||
"JudgeVerdict",
|
||||
# Context Handoff
|
||||
"ContextHandoff",
|
||||
"HandoffContext",
|
||||
# Worker Agent
|
||||
"WorkerAgent",
|
||||
"WorkerLifecycle",
|
||||
"WorkerCompletion",
|
||||
"Activation",
|
||||
"FanOutTag",
|
||||
"FanOutTracker",
|
||||
"GraphContext",
|
||||
]
|
||||
@@ -1,6 +0,0 @@
|
||||
"""EventLoopNode subpackage — modular components of the event loop orchestrator.
|
||||
|
||||
All public symbols are re-exported by the parent ``event_loop_node.py`` for
|
||||
backward compatibility. Internal consumers may import directly from these
|
||||
submodules for clarity.
|
||||
"""
|
||||
@@ -1,378 +0,0 @@
|
||||
"""Subagent execution for the event loop.
|
||||
|
||||
Handles the full subagent lifecycle: validation, context setup, tool filtering,
|
||||
conversation store derivation, execution, and cleanup.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import time
|
||||
from collections.abc import Awaitable, Callable
|
||||
from pathlib import Path
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
from framework.graph.conversation import ConversationStore
|
||||
from framework.graph.event_loop.judge_pipeline import SubagentJudge
|
||||
from framework.graph.event_loop.types import LoopConfig, OutputAccumulator
|
||||
from framework.graph.node import DataBuffer, NodeContext
|
||||
from framework.llm.provider import ToolResult, ToolUse
|
||||
from framework.runtime.event_bus import EventBus
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.graph.event_loop_node import EventLoopNode
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
async def execute_subagent(
|
||||
ctx: NodeContext,
|
||||
agent_id: str,
|
||||
task: str,
|
||||
*,
|
||||
config: LoopConfig,
|
||||
event_loop_node_cls: type[EventLoopNode],
|
||||
escalation_receiver_cls: Callable[[], Any],
|
||||
accumulator: OutputAccumulator | None = None,
|
||||
event_bus: EventBus | None = None,
|
||||
tool_executor: Callable[[ToolUse], ToolResult | Awaitable[ToolResult]] | None = None,
|
||||
conversation_store: ConversationStore | None = None,
|
||||
subagent_instance_counter: dict[str, int] | None = None,
|
||||
) -> ToolResult:
|
||||
"""Execute a subagent and return the result as a ToolResult.
|
||||
|
||||
The subagent:
|
||||
- Gets a fresh conversation with just the task
|
||||
- Has read-only access to the parent's readable memory
|
||||
- Cannot delegate to its own subagents (prevents recursion)
|
||||
- Returns its output in structured JSON format
|
||||
|
||||
Args:
|
||||
ctx: Parent node's context (for memory, tools, LLM access).
|
||||
agent_id: The node ID of the subagent to invoke.
|
||||
task: The task description to give the subagent.
|
||||
accumulator: Parent's OutputAccumulator.
|
||||
event_bus: EventBus for lifecycle events.
|
||||
config: LoopConfig for iteration/tool limits.
|
||||
tool_executor: Tool executor callable.
|
||||
conversation_store: Parent conversation store (for deriving subagent store).
|
||||
subagent_instance_counter: Mutable counter dict for unique subagent paths.
|
||||
|
||||
Returns:
|
||||
ToolResult with structured JSON output.
|
||||
"""
|
||||
# Log subagent invocation start
|
||||
logger.info(
|
||||
"\n" + "=" * 60 + "\n"
|
||||
"🤖 SUBAGENT INVOCATION\n"
|
||||
"=" * 60 + "\n"
|
||||
"Parent Node: %s\n"
|
||||
"Subagent ID: %s\n"
|
||||
"Task: %s\n" + "=" * 60,
|
||||
ctx.node_id,
|
||||
agent_id,
|
||||
task[:500] + "..." if len(task) > 500 else task,
|
||||
)
|
||||
|
||||
# 1. Validate agent exists in registry
|
||||
if agent_id not in ctx.node_registry:
|
||||
return ToolResult(
|
||||
tool_use_id="",
|
||||
content=json.dumps(
|
||||
{
|
||||
"message": f"Sub-agent '{agent_id}' not found in registry",
|
||||
"data": None,
|
||||
"metadata": {"agent_id": agent_id, "success": False, "error": "not_found"},
|
||||
}
|
||||
),
|
||||
is_error=True,
|
||||
)
|
||||
|
||||
subagent_spec = ctx.node_registry[agent_id]
|
||||
|
||||
# 2. Create read-only memory snapshot
|
||||
parent_data = ctx.buffer.read_all()
|
||||
|
||||
# Merge in-flight outputs from the parent's accumulator.
|
||||
if accumulator:
|
||||
for key, value in accumulator.to_dict().items():
|
||||
if key not in parent_data:
|
||||
parent_data[key] = value
|
||||
|
||||
subagent_buffer = DataBuffer()
|
||||
for key, value in parent_data.items():
|
||||
subagent_buffer.write(key, value, validate=False)
|
||||
|
||||
read_keys = set(parent_data.keys()) | set(subagent_spec.input_keys or [])
|
||||
scoped_buffer = subagent_buffer.with_permissions(
|
||||
read_keys=list(read_keys),
|
||||
write_keys=[], # Read-only!
|
||||
)
|
||||
|
||||
# 2b. Compute instance counter early so the callback and child context
|
||||
# share the same stable node_id for this subagent invocation.
|
||||
if subagent_instance_counter is not None:
|
||||
subagent_instance_counter.setdefault(agent_id, 0)
|
||||
subagent_instance_counter[agent_id] += 1
|
||||
subagent_instance = str(subagent_instance_counter[agent_id])
|
||||
else:
|
||||
subagent_instance = "1"
|
||||
|
||||
if subagent_instance == "1":
|
||||
sa_node_id = f"{ctx.node_id}:subagent:{agent_id}"
|
||||
else:
|
||||
sa_node_id = f"{ctx.node_id}:subagent:{agent_id}:{subagent_instance}"
|
||||
|
||||
# 2c. Set up report callback (one-way channel to parent / event bus)
|
||||
subagent_reports: list[dict] = []
|
||||
|
||||
async def _report_callback(
|
||||
message: str,
|
||||
data: dict | None = None,
|
||||
*,
|
||||
wait_for_response: bool = False,
|
||||
) -> str | None:
|
||||
subagent_reports.append({"message": message, "data": data, "timestamp": time.time()})
|
||||
if event_bus:
|
||||
await event_bus.emit_subagent_report(
|
||||
stream_id=ctx.node_id,
|
||||
node_id=sa_node_id,
|
||||
subagent_id=agent_id,
|
||||
message=message,
|
||||
data=data,
|
||||
execution_id=ctx.execution_id,
|
||||
)
|
||||
|
||||
if not wait_for_response:
|
||||
return None
|
||||
|
||||
if not event_bus:
|
||||
logger.warning(
|
||||
"Subagent '%s' requested user response but no event_bus available",
|
||||
agent_id,
|
||||
)
|
||||
return None
|
||||
|
||||
# Create isolated receiver and register for input routing
|
||||
import uuid
|
||||
|
||||
escalation_id = f"{ctx.node_id}:escalation:{uuid.uuid4().hex[:8]}"
|
||||
receiver = escalation_receiver_cls()
|
||||
registry = ctx.shared_node_registry
|
||||
|
||||
registry[escalation_id] = receiver
|
||||
try:
|
||||
await event_bus.emit_escalation_requested(
|
||||
stream_id=ctx.stream_id or ctx.node_id,
|
||||
node_id=escalation_id,
|
||||
reason=f"Subagent report (wait_for_response) from {agent_id}",
|
||||
context=message,
|
||||
execution_id=ctx.execution_id,
|
||||
)
|
||||
# Block until queen responds
|
||||
return await receiver.wait()
|
||||
finally:
|
||||
registry.pop(escalation_id, None)
|
||||
|
||||
# 3. Filter tools for subagent
|
||||
subagent_tool_names = set(subagent_spec.tools or [])
|
||||
tool_source = ctx.all_tools if ctx.all_tools else ctx.available_tools
|
||||
|
||||
# GCU auto-population
|
||||
if subagent_spec.node_type == "gcu" and not subagent_tool_names:
|
||||
subagent_tools = [t for t in tool_source if t.name != "delegate_to_sub_agent"]
|
||||
else:
|
||||
subagent_tools = [
|
||||
t
|
||||
for t in tool_source
|
||||
if t.name in subagent_tool_names and t.name != "delegate_to_sub_agent"
|
||||
]
|
||||
|
||||
missing = subagent_tool_names - {t.name for t in subagent_tools}
|
||||
if missing:
|
||||
logger.warning(
|
||||
"Subagent '%s' requested tools not found in catalog: %s",
|
||||
agent_id,
|
||||
sorted(missing),
|
||||
)
|
||||
|
||||
logger.info(
|
||||
"📦 Subagent '%s' configuration:\n"
|
||||
" - System prompt: %s\n"
|
||||
" - Tools available (%d): %s\n"
|
||||
" - Memory keys inherited: %s",
|
||||
agent_id,
|
||||
(subagent_spec.system_prompt[:200] + "...")
|
||||
if subagent_spec.system_prompt and len(subagent_spec.system_prompt) > 200
|
||||
else subagent_spec.system_prompt,
|
||||
len(subagent_tools),
|
||||
[t.name for t in subagent_tools],
|
||||
list(parent_data.keys()),
|
||||
)
|
||||
|
||||
# 4. Build subagent context
|
||||
max_iter = min(config.max_iterations, 10)
|
||||
subagent_ctx = NodeContext(
|
||||
runtime=ctx.runtime,
|
||||
node_id=sa_node_id,
|
||||
node_spec=subagent_spec,
|
||||
buffer=scoped_buffer,
|
||||
input_data={"task": task, **parent_data},
|
||||
llm=ctx.llm,
|
||||
available_tools=subagent_tools,
|
||||
goal_context=(
|
||||
f"Your specific task: {task}\n\n"
|
||||
f"COMPLETION REQUIREMENTS:\n"
|
||||
f"When your task is done, you MUST call set_output() "
|
||||
f"for each required key: {subagent_spec.output_keys}\n"
|
||||
f"Alternatively, call report_to_parent(mark_complete=true) "
|
||||
f"with your findings in message/data.\n"
|
||||
f"You have a maximum of {max_iter} turns to complete this task."
|
||||
),
|
||||
goal=ctx.goal,
|
||||
max_tokens=ctx.max_tokens,
|
||||
runtime_logger=ctx.runtime_logger,
|
||||
is_subagent_mode=True, # Prevents nested delegation
|
||||
report_callback=_report_callback,
|
||||
node_registry={}, # Empty - no nested subagents
|
||||
shared_node_registry=ctx.shared_node_registry, # For escalation routing
|
||||
)
|
||||
|
||||
# 5. Create and execute subagent EventLoopNode
|
||||
subagent_conv_store = None
|
||||
if conversation_store is not None:
|
||||
from framework.storage.conversation_store import FileConversationStore
|
||||
|
||||
parent_base = getattr(conversation_store, "_base", None)
|
||||
if parent_base is not None:
|
||||
conversations_dir = parent_base.parent
|
||||
subagent_dir_name = f"{agent_id}-{subagent_instance}"
|
||||
subagent_store_path = conversations_dir / subagent_dir_name
|
||||
subagent_conv_store = FileConversationStore(base_path=subagent_store_path)
|
||||
|
||||
# Derive a subagent-scoped spillover dir
|
||||
subagent_spillover = None
|
||||
if config.spillover_dir:
|
||||
subagent_spillover = str(Path(config.spillover_dir) / agent_id / subagent_instance)
|
||||
|
||||
subagent_node = event_loop_node_cls(
|
||||
event_bus=event_bus,
|
||||
judge=SubagentJudge(task=task, max_iterations=max_iter),
|
||||
config=LoopConfig(
|
||||
max_iterations=max_iter,
|
||||
max_tool_calls_per_turn=config.max_tool_calls_per_turn,
|
||||
tool_call_overflow_margin=config.tool_call_overflow_margin,
|
||||
max_context_tokens=config.max_context_tokens,
|
||||
stall_detection_threshold=config.stall_detection_threshold,
|
||||
max_tool_result_chars=config.max_tool_result_chars,
|
||||
spillover_dir=subagent_spillover,
|
||||
),
|
||||
tool_executor=tool_executor,
|
||||
conversation_store=subagent_conv_store,
|
||||
)
|
||||
|
||||
# Inject a unique GCU browser profile for this subagent
|
||||
_profile_token = None
|
||||
try:
|
||||
from gcu.browser.session import set_active_profile as _set_gcu_profile
|
||||
|
||||
_profile_token = _set_gcu_profile(f"{agent_id}-{subagent_instance}")
|
||||
except ImportError:
|
||||
pass # GCU tools not installed; no-op
|
||||
|
||||
try:
|
||||
logger.info("🚀 Starting subagent '%s' execution...", agent_id)
|
||||
start_time = time.time()
|
||||
result = await subagent_node.execute(subagent_ctx)
|
||||
latency_ms = int((time.time() - start_time) * 1000)
|
||||
|
||||
separator = "-" * 60
|
||||
logger.info(
|
||||
"\n%s\n"
|
||||
"✅ SUBAGENT '%s' COMPLETED\n"
|
||||
"%s\n"
|
||||
"Success: %s\n"
|
||||
"Latency: %dms\n"
|
||||
"Tokens used: %s\n"
|
||||
"Output keys: %s\n"
|
||||
"%s",
|
||||
separator,
|
||||
agent_id,
|
||||
separator,
|
||||
result.success,
|
||||
latency_ms,
|
||||
result.tokens_used,
|
||||
list(result.output.keys()) if result.output else [],
|
||||
separator,
|
||||
)
|
||||
|
||||
result_json = {
|
||||
"message": (
|
||||
f"Sub-agent '{agent_id}' completed successfully"
|
||||
if result.success
|
||||
else f"Sub-agent '{agent_id}' failed: {result.error}"
|
||||
),
|
||||
"data": result.output,
|
||||
"reports": subagent_reports if subagent_reports else None,
|
||||
"metadata": {
|
||||
"agent_id": agent_id,
|
||||
"success": result.success,
|
||||
"tokens_used": result.tokens_used,
|
||||
"latency_ms": latency_ms,
|
||||
"report_count": len(subagent_reports),
|
||||
},
|
||||
}
|
||||
|
||||
return ToolResult(
|
||||
tool_use_id="",
|
||||
content=json.dumps(result_json, indent=2, default=str),
|
||||
is_error=not result.success,
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.exception(
|
||||
"\n" + "!" * 60 + "\n❌ SUBAGENT '%s' FAILED\nError: %s\n" + "!" * 60,
|
||||
agent_id,
|
||||
str(e),
|
||||
)
|
||||
result_json = {
|
||||
"message": f"Sub-agent '{agent_id}' raised exception: {e}",
|
||||
"data": None,
|
||||
"metadata": {
|
||||
"agent_id": agent_id,
|
||||
"success": False,
|
||||
"error": str(e),
|
||||
},
|
||||
}
|
||||
return ToolResult(
|
||||
tool_use_id="",
|
||||
content=json.dumps(result_json, indent=2),
|
||||
is_error=True,
|
||||
)
|
||||
finally:
|
||||
# Restore the GCU profile context
|
||||
if _profile_token is not None:
|
||||
from gcu.browser.session import _active_profile as _gcu_profile_var
|
||||
|
||||
_gcu_profile_var.reset(_profile_token)
|
||||
|
||||
# Stop the browser session for this subagent's profile
|
||||
if tool_executor is not None:
|
||||
_subagent_profile = f"{agent_id}-{subagent_instance}"
|
||||
try:
|
||||
_stop_use = ToolUse(
|
||||
id="gcu-cleanup",
|
||||
name="browser_stop",
|
||||
input={"profile": _subagent_profile},
|
||||
)
|
||||
_stop_result = tool_executor(_stop_use)
|
||||
if asyncio.iscoroutine(_stop_result) or asyncio.isfuture(_stop_result):
|
||||
await _stop_result
|
||||
except Exception as _gcu_exc:
|
||||
logger.warning(
|
||||
"GCU browser_stop failed for profile %r: %s",
|
||||
_subagent_profile,
|
||||
_gcu_exc,
|
||||
)
|
||||
@@ -0,0 +1,11 @@
|
||||
"""Host layer -- how agents are triggered and hosted."""
|
||||
|
||||
from framework.host.agent_host import ( # noqa: F401
|
||||
AgentHost,
|
||||
AgentRuntimeConfig,
|
||||
)
|
||||
from framework.host.event_bus import AgentEvent, EventBus, EventType # noqa: F401
|
||||
from framework.host.execution_manager import ( # noqa: F401
|
||||
EntryPointSpec,
|
||||
ExecutionManager,
|
||||
)
|
||||
File diff suppressed because it is too large
@@ -148,8 +148,8 @@ class EventType(StrEnum):
|
||||
# Queen phase changes (building <-> staging <-> running)
|
||||
QUEEN_PHASE_CHANGED = "queen_phase_changed"
|
||||
|
||||
# Queen thinking hook — persona selected for the current building session
|
||||
QUEEN_PERSONA_SELECTED = "queen_persona_selected"
|
||||
# Queen identity — which queen profile was selected for this session
|
||||
QUEEN_IDENTITY_SELECTED = "queen_identity_selected"
|
||||
|
||||
# Subagent reports (one-way progress updates from sub-agents)
|
||||
SUBAGENT_REPORT = "subagent_report"
|
||||
+32
-37
@@ -18,18 +18,18 @@ from dataclasses import dataclass, field
|
||||
from datetime import datetime
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
from framework.graph.checkpoint_config import CheckpointConfig
|
||||
from framework.graph.executor import ExecutionResult, GraphExecutor
|
||||
from framework.runtime.event_bus import EventBus
|
||||
from framework.runtime.shared_state import IsolationLevel, SharedBufferManager
|
||||
from framework.runtime.stream_runtime import StreamRuntime, StreamRuntimeAdapter
|
||||
from framework.orchestrator.checkpoint_config import CheckpointConfig
|
||||
from framework.orchestrator.orchestrator import ExecutionResult, Orchestrator
|
||||
from framework.host.event_bus import EventBus
|
||||
from framework.host.shared_state import IsolationLevel, SharedBufferManager
|
||||
from framework.host.stream_runtime import StreamDecisionTracker, StreamRuntimeAdapter
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.graph.edge import GraphSpec
|
||||
from framework.graph.goal import Goal
|
||||
from framework.orchestrator.edge import GraphSpec
|
||||
from framework.orchestrator.goal import Goal
|
||||
from framework.llm.provider import LLMProvider, Tool
|
||||
from framework.runtime.event_bus import AgentEvent
|
||||
from framework.runtime.outcome_aggregator import OutcomeAggregator
|
||||
from framework.host.event_bus import AgentEvent
|
||||
from framework.host.outcome_aggregator import OutcomeAggregator
|
||||
from framework.storage.concurrent import ConcurrentStorage
|
||||
from framework.storage.session_store import SessionStore
|
||||
|
||||
@@ -133,7 +133,7 @@ class ExecutionContext:
|
||||
status: str = "pending" # pending, running, completed, failed, paused
|
||||
|
||||
|
||||
class ExecutionStream:
|
||||
class ExecutionManager:
|
||||
"""
|
||||
Manages concurrent executions for a single entry point.
|
||||
|
||||
@@ -192,10 +192,6 @@ class ExecutionStream:
|
||||
context_warn_ratio: float | None = None,
|
||||
batch_init_nudge: str | None = None,
|
||||
dynamic_memory_provider_factory: Callable[[str], Callable[[], str] | None] | None = None,
|
||||
colony_memory_dir: Any = None,
|
||||
colony_worker_sessions_dir: Any = None,
|
||||
colony_recall_cache: dict[str, str] | None = None,
|
||||
colony_reflect_llm: Any = None,
|
||||
):
|
||||
"""
|
||||
Initialize execution stream.
|
||||
@@ -251,10 +247,6 @@ class ExecutionStream:
|
||||
self._context_warn_ratio: float | None = context_warn_ratio
|
||||
self._batch_init_nudge: str | None = batch_init_nudge
|
||||
self._dynamic_memory_provider_factory = dynamic_memory_provider_factory
|
||||
self._colony_memory_dir = colony_memory_dir
|
||||
self._colony_worker_sessions_dir = colony_worker_sessions_dir
|
||||
self._colony_recall_cache = colony_recall_cache
|
||||
self._colony_reflect_llm = colony_reflect_llm
|
||||
|
||||
_es_logger = logging.getLogger(__name__)
|
||||
if protocols_prompt:
|
||||
@@ -270,7 +262,7 @@ class ExecutionStream:
|
||||
)
|
||||
|
||||
# Create stream-scoped runtime
|
||||
self._runtime = StreamRuntime(
|
||||
self._runtime = StreamDecisionTracker(
|
||||
stream_id=stream_id,
|
||||
storage=storage,
|
||||
outcome_aggregator=outcome_aggregator,
|
||||
@@ -279,7 +271,7 @@ class ExecutionStream:
|
||||
# Execution tracking
|
||||
self._active_executions: dict[str, ExecutionContext] = {}
|
||||
self._execution_tasks: dict[str, asyncio.Task] = {}
|
||||
self._active_executors: dict[str, GraphExecutor] = {}
|
||||
self._active_executors: dict[str, Orchestrator] = {}
|
||||
self._cancel_reasons: dict[str, str] = {}
|
||||
self._execution_results: OrderedDict[str, ExecutionResult] = OrderedDict()
|
||||
self._execution_result_times: dict[str, float] = {}
|
||||
@@ -309,7 +301,7 @@ class ExecutionStream:
|
||||
|
||||
# Emit stream started event
|
||||
if self._scoped_event_bus:
|
||||
from framework.runtime.event_bus import AgentEvent, EventType
|
||||
from framework.host.event_bus import AgentEvent, EventType
|
||||
|
||||
await self._scoped_event_bus.publish(
|
||||
AgentEvent(
|
||||
@@ -434,7 +426,7 @@ class ExecutionStream:
|
||||
|
||||
# Emit stream stopped event
|
||||
if self._scoped_event_bus:
|
||||
from framework.runtime.event_bus import AgentEvent, EventType
|
||||
from framework.host.event_bus import AgentEvent, EventType
|
||||
|
||||
await self._scoped_event_bus.publish(
|
||||
AgentEvent(
|
||||
@@ -676,7 +668,7 @@ class ExecutionStream:
|
||||
# Create per-execution runtime logger
|
||||
runtime_logger = None
|
||||
if self._runtime_log_store:
|
||||
from framework.runtime.runtime_logger import RuntimeLogger
|
||||
from framework.tracker.runtime_logger import RuntimeLogger
|
||||
|
||||
runtime_logger = RuntimeLogger(
|
||||
store=self._runtime_log_store, agent_id=self.graph.id
|
||||
@@ -705,12 +697,7 @@ class ExecutionStream:
|
||||
# forward so the next attempt resumes at the failed node.
|
||||
while True:
|
||||
# Create executor for this execution.
|
||||
# Each execution gets its own storage under sessions/{exec_id}/
|
||||
# so conversations, spillover, and data files are all scoped
|
||||
# to this execution. The executor sets data_dir via execution
|
||||
# context (contextvars) so data tools and spillover share the
|
||||
# same session-scoped directory.
|
||||
executor = GraphExecutor(
|
||||
executor = Orchestrator(
|
||||
runtime=runtime_adapter,
|
||||
llm=self._llm,
|
||||
tools=self._tools,
|
||||
@@ -735,10 +722,6 @@ class ExecutionStream:
|
||||
if self._dynamic_memory_provider_factory is not None
|
||||
else None
|
||||
),
|
||||
colony_memory_dir=self._colony_memory_dir,
|
||||
colony_worker_sessions_dir=self._colony_worker_sessions_dir,
|
||||
colony_recall_cache=self._colony_recall_cache,
|
||||
colony_reflect_llm=self._colony_reflect_llm,
|
||||
)
|
||||
# Track executor so inject_input() can reach EventLoopNode instances
|
||||
self._active_executors[execution_id] = executor
|
||||
@@ -775,7 +758,7 @@ class ExecutionStream:
|
||||
|
||||
# Emit resurrection event
|
||||
if self._scoped_event_bus:
|
||||
from framework.runtime.event_bus import AgentEvent, EventType
|
||||
from framework.host.event_bus import AgentEvent, EventType
|
||||
|
||||
await self._scoped_event_bus.publish(
|
||||
AgentEvent(
|
||||
@@ -1131,7 +1114,7 @@ class ExecutionStream:
|
||||
Each stream only executes from its own entry_node, but the full
|
||||
graph must validate with all entry points accounted for.
|
||||
"""
|
||||
from framework.graph.edge import GraphSpec
|
||||
from framework.orchestrator.edge import GraphSpec
|
||||
|
||||
# Merge entry points: this stream's entry + original graph's primary
|
||||
# entry + any other entry points. This ensures all nodes are
|
||||
@@ -1228,9 +1211,21 @@ class ExecutionStream:
|
||||
task.cancel()
|
||||
# Wait briefly for the task to finish. Don't block indefinitely —
|
||||
# the task may be stuck in a long LLM API call that doesn't
|
||||
# respond to cancellation quickly. The cancellation is already
|
||||
# requested; the task will clean up in the background.
|
||||
# respond to cancellation quickly.
|
||||
done, _ = await asyncio.wait({task}, timeout=5.0)
|
||||
if not done:
|
||||
# Task didn't finish within timeout — clean up bookkeeping now
|
||||
# so the session doesn't think it still has running executions.
|
||||
# The task will continue winding down in the background and its
|
||||
# finally block will harmlessly pop already-removed keys.
|
||||
logger.warning(
|
||||
"Execution %s did not finish within cancel timeout; force-cleaning bookkeeping",
|
||||
execution_id,
|
||||
)
|
||||
async with self._lock:
|
||||
self._active_executions.pop(execution_id, None)
|
||||
self._execution_tasks.pop(execution_id, None)
|
||||
self._active_executors.pop(execution_id, None)
|
||||
return True
|
||||
return False
|
||||
|
||||
+2
-2
@@ -14,8 +14,8 @@ from typing import TYPE_CHECKING, Any
|
||||
from framework.schemas.decision import Decision, Outcome
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.graph.goal import Goal
|
||||
from framework.runtime.event_bus import EventBus
|
||||
from framework.orchestrator.goal import Goal
|
||||
from framework.host.event_bus import EventBus
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -18,12 +18,12 @@ from framework.schemas.run import Run, RunStatus
|
||||
from framework.storage.concurrent import ConcurrentStorage
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.runtime.outcome_aggregator import OutcomeAggregator
|
||||
from framework.host.outcome_aggregator import OutcomeAggregator
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class StreamRuntime:
|
||||
class StreamDecisionTracker:
|
||||
"""
|
||||
Thread-safe runtime for a single execution stream.
|
||||
|
||||
@@ -431,7 +431,7 @@ class StreamRuntimeAdapter:
|
||||
by providing the same API as Runtime but routing to a specific execution.
|
||||
"""
|
||||
|
||||
def __init__(self, stream_runtime: StreamRuntime, execution_id: str):
|
||||
def __init__(self, stream_runtime: StreamDecisionTracker, execution_id: str):
|
||||
"""
|
||||
Create adapter for a specific execution.
|
||||
|
||||
@@ -13,7 +13,7 @@ from dataclasses import dataclass
|
||||
|
||||
from aiohttp import web
|
||||
|
||||
from framework.runtime.event_bus import EventBus
|
||||
from framework.host.event_bus import EventBus
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -0,0 +1,101 @@
|
||||
"""Thread-safe API key pool with round-robin rotation and health tracking.
|
||||
|
||||
When multiple API keys are configured, the pool rotates through them on each
|
||||
request. Keys that hit rate limits are temporarily cooled down so the next
|
||||
call automatically uses a healthy key -- no sleep required.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import threading
|
||||
import time
|
||||
from dataclasses import dataclass
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@dataclass
|
||||
class KeyHealth:
|
||||
"""Per-key health counters."""
|
||||
|
||||
rate_limited_until: float = 0.0 # monotonic timestamp
|
||||
consecutive_errors: int = 0
|
||||
total_requests: int = 0
|
||||
total_successes: int = 0
|
||||
|
||||
|
||||
class KeyPool:
|
||||
"""Round-robin key pool with health tracking.
|
||||
|
||||
Thread-safe: all mutations protected by a lock so concurrent LLM calls
|
||||
(e.g. parallel tool execution in EventLoopNode) don't race.
|
||||
"""
|
||||
|
||||
def __init__(self, keys: list[str]) -> None:
|
||||
if not keys:
|
||||
raise ValueError("KeyPool requires at least one key")
|
||||
self._keys = list(keys)
|
||||
self._index = 0
|
||||
self._health: dict[str, KeyHealth] = {k: KeyHealth() for k in keys}
|
||||
self._lock = threading.Lock()
|
||||
|
||||
@property
|
||||
def size(self) -> int:
|
||||
return len(self._keys)
|
||||
|
||||
def get_key(self) -> str:
|
||||
"""Return the next healthy key (round-robin).
|
||||
|
||||
If every key is currently rate-limited, returns the one whose cooldown
|
||||
expires soonest so the caller can proceed with minimal delay.
|
||||
"""
|
||||
with self._lock:
|
||||
now = time.monotonic()
|
||||
for _ in range(len(self._keys)):
|
||||
key = self._keys[self._index]
|
||||
self._index = (self._index + 1) % len(self._keys)
|
||||
health = self._health[key]
|
||||
if health.rate_limited_until <= now:
|
||||
health.total_requests += 1
|
||||
return key
|
||||
# All rate-limited -- pick the one that expires soonest.
|
||||
soonest = min(self._keys, key=lambda k: self._health[k].rate_limited_until)
|
||||
self._health[soonest].total_requests += 1
|
||||
return soonest
|
||||
|
||||
def mark_rate_limited(self, key: str, retry_after: float = 60.0) -> None:
|
||||
"""Mark *key* as rate-limited for *retry_after* seconds."""
|
||||
with self._lock:
|
||||
health = self._health.get(key)
|
||||
if health:
|
||||
health.rate_limited_until = time.monotonic() + retry_after
|
||||
health.consecutive_errors += 1
|
||||
logger.info(
|
||||
"[key-pool] Key ...%s rate-limited for %.0fs (errors=%d)",
|
||||
key[-6:],
|
||||
retry_after,
|
||||
health.consecutive_errors,
|
||||
)
|
||||
|
||||
def mark_success(self, key: str) -> None:
|
||||
"""Record a successful call on *key*."""
|
||||
with self._lock:
|
||||
health = self._health.get(key)
|
||||
if health:
|
||||
health.consecutive_errors = 0
|
||||
health.total_successes += 1
|
||||
|
||||
def get_stats(self) -> dict[str, dict]:
|
||||
"""Return health stats keyed by the last 6 chars of each key."""
|
||||
with self._lock:
|
||||
now = time.monotonic()
|
||||
return {
|
||||
f"...{k[-6:]}": {
|
||||
"healthy": self._health[k].rate_limited_until <= now,
|
||||
"requests": self._health[k].total_requests,
|
||||
"successes": self._health[k].total_successes,
|
||||
"consecutive_errors": self._health[k].consecutive_errors,
|
||||
}
|
||||
for k in self._keys
|
||||
}
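A minimal usage sketch of the pool above (the key strings are placeholders; the import path matches the one used elsewhere in this changeset):

```python
from framework.llm.key_pool import KeyPool

pool = KeyPool(["sk-key-a", "sk-key-b"])       # placeholder keys

key = pool.get_key()                           # round-robin pick of a healthy key
pool.mark_rate_limited(key, retry_after=30.0)  # e.g. after catching a 429
other = pool.get_key()                         # immediately rotates to the other key
pool.mark_success(other)                       # resets its consecutive_errors counter

print(pool.size)                               # 2
print(pool.get_stats())                        # health keyed by "...<last 6 chars>"
```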
|
||||
+253
-31
@@ -7,6 +7,8 @@ Groq, and local models.
|
||||
See: https://docs.litellm.ai/docs/providers
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import ast
|
||||
import asyncio
|
||||
import hashlib
|
||||
@@ -18,7 +20,10 @@ import time
|
||||
from collections.abc import AsyncIterator
|
||||
from datetime import datetime
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.llm.key_pool import KeyPool
|
||||
|
||||
try:
|
||||
import litellm
|
||||
@@ -272,6 +277,10 @@ OPENROUTER_TOOL_COMPAT_CACHE_TTL_SECONDS = 3600
|
||||
# OpenRouter routing can change over time, so tool-compat caching must expire.
|
||||
OPENROUTER_TOOL_COMPAT_MODEL_CACHE: dict[str, float] = {}
|
||||
|
||||
# Transient stream errors (network blips, timeouts) use a separate cap
|
||||
# from rate-limit retries — 3 retries is sufficient for connection failures.
|
||||
STREAM_TRANSIENT_MAX_RETRIES = 3
|
||||
|
||||
# Directory for dumping failed requests
|
||||
FAILED_REQUESTS_DIR = Path.home() / ".hive" / "failed_requests"
|
||||
|
||||
@@ -338,34 +347,38 @@ def _dump_failed_request(
|
||||
attempt: int,
|
||||
) -> str:
|
||||
"""Dump failed request to a file for debugging. Returns the file path."""
|
||||
FAILED_REQUESTS_DIR.mkdir(parents=True, exist_ok=True)
|
||||
try:
|
||||
FAILED_REQUESTS_DIR.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
|
||||
filename = f"{error_type}_{model.replace('/', '_')}_{timestamp}.json"
|
||||
filepath = FAILED_REQUESTS_DIR / filename
|
||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
|
||||
filename = f"{error_type}_{model.replace('/', '_')}_{timestamp}.json"
|
||||
filepath = FAILED_REQUESTS_DIR / filename
|
||||
|
||||
# Build dump data
|
||||
messages = kwargs.get("messages", [])
|
||||
dump_data = {
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"model": model,
|
||||
"error_type": error_type,
|
||||
"attempt": attempt,
|
||||
"estimated_tokens": _estimate_tokens(model, messages),
|
||||
"num_messages": len(messages),
|
||||
"messages": messages,
|
||||
"tools": kwargs.get("tools"),
|
||||
"max_tokens": kwargs.get("max_tokens"),
|
||||
"temperature": kwargs.get("temperature"),
|
||||
}
|
||||
# Build dump data
|
||||
messages = kwargs.get("messages", [])
|
||||
dump_data = {
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"model": model,
|
||||
"error_type": error_type,
|
||||
"attempt": attempt,
|
||||
"estimated_tokens": _estimate_tokens(model, messages),
|
||||
"num_messages": len(messages),
|
||||
"messages": messages,
|
||||
"tools": kwargs.get("tools"),
|
||||
"max_tokens": kwargs.get("max_tokens"),
|
||||
"temperature": kwargs.get("temperature"),
|
||||
}
|
||||
|
||||
with open(filepath, "w", encoding="utf-8") as f:
|
||||
json.dump(dump_data, f, indent=2, default=str)
|
||||
with open(filepath, "w", encoding="utf-8") as f:
|
||||
json.dump(dump_data, f, indent=2, default=str)
|
||||
|
||||
# Prune old dumps to prevent unbounded disk growth
|
||||
_prune_failed_request_dumps()
|
||||
# Prune old dumps to prevent unbounded disk growth
|
||||
_prune_failed_request_dumps()
|
||||
|
||||
return str(filepath)
|
||||
return str(filepath)
|
||||
except OSError as e:
|
||||
logger.warning(f"Failed to dump request debug log to {FAILED_REQUESTS_DIR}: {e}")
|
||||
return "log_write_failed"
|
||||
|
||||
|
||||
def _compute_retry_delay(
|
||||
@@ -458,6 +471,59 @@ def _is_stream_transient_error(exc: BaseException) -> bool:
|
||||
return isinstance(exc, transient_types)
|
||||
|
||||
|
||||
def _extract_text_tool_calls(
|
||||
text: str,
|
||||
) -> tuple[list, str]:
|
||||
"""Extract hallucinated tool calls from ``<tool_code>`` blocks in LLM text.
|
||||
|
||||
Some models (notably Gemini) emit tool invocations as text instead of using
|
||||
the structured function-calling API. This function parses those blocks and
|
||||
returns ``(tool_call_events, cleaned_text)`` where *cleaned_text* has the
|
||||
``<tool_code>`` blocks removed.
|
||||
|
||||
Expected format::
|
||||
|
||||
<tool_code>
|
||||
{
|
||||
"tool_name": { ...args }
|
||||
}
|
||||
</tool_code>
|
||||
"""
|
||||
from framework.llm.stream_events import ToolCallEvent
|
||||
|
||||
pattern = re.compile(r"<tool_code>\s*(.*?)\s*</tool_code>", re.DOTALL)
|
||||
events: list[ToolCallEvent] = []
|
||||
cleaned = text
|
||||
|
||||
for match in pattern.finditer(text):
|
||||
raw = match.group(1).strip()
|
||||
try:
|
||||
payload = json.loads(raw)
|
||||
except json.JSONDecodeError:
|
||||
logger.warning("[_extract_text_tool_calls] failed to parse JSON: %s", raw[:200])
|
||||
continue
|
||||
|
||||
if not isinstance(payload, dict):
|
||||
continue
|
||||
|
||||
for tool_name, tool_args in payload.items():
|
||||
key = f"{tool_name}:{json.dumps(tool_args, sort_keys=True)}"
|
||||
digest = hashlib.md5(key.encode()).hexdigest()[:12]
|
||||
call_id = f"synth_{digest}"
|
||||
events.append(
|
||||
ToolCallEvent(
|
||||
tool_use_id=call_id,
|
||||
tool_name=tool_name,
|
||||
tool_input=tool_args if isinstance(tool_args, dict) else {},
|
||||
)
|
||||
)
|
||||
|
||||
if events:
|
||||
cleaned = pattern.sub("", text).strip()
|
||||
|
||||
return events, cleaned
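A minimal sketch of the parser's behaviour, using a made-up tool name and arguments in the ``<tool_code>`` format documented above:

```python
raw_text = (
    "Let me check the weather first.\n"
    "<tool_code>\n"
    '{"get_weather": {"city": "Berlin"}}\n'
    "</tool_code>"
)
events, cleaned = _extract_text_tool_calls(raw_text)

print(cleaned)                # "Let me check the weather first."
print(events[0].tool_name)    # "get_weather"
print(events[0].tool_input)   # {"city": "Berlin"}
print(events[0].tool_use_id)  # deterministic "synth_<md5 prefix>" id
```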
|
||||
|
||||
|
||||
class LiteLLMProvider(LLMProvider):
"""
LiteLLM-based LLM provider for multi-provider support.
@@ -500,6 +566,7 @@ class LiteLLMProvider(LLMProvider):
model: str = "gpt-4o-mini",
api_key: str | None = None,
api_base: str | None = None,
api_keys: list[str] | None = None,
**kwargs: Any,
):
"""
@@ -512,6 +579,9 @@ class LiteLLMProvider(LLMProvider):
look for the appropriate env var (OPENAI_API_KEY,
ANTHROPIC_API_KEY, etc.)
api_base: Custom API base URL (for proxies or local deployments)
api_keys: Optional list of API keys for key-pool rotation. When
provided with 2+ keys, a :class:`KeyPool` is created and
keys are rotated on rate-limit errors.
**kwargs: Additional arguments passed to litellm.completion()
"""
# Kimi For Coding exposes an Anthropic-compatible endpoint at
@@ -533,11 +603,24 @@ class LiteLLMProvider(LLMProvider):
if api_base and api_base.rstrip("/").endswith("/v1"):
api_base = api_base.rstrip("/")[:-3]
self.model = model
self.api_key = api_key
# Key pool: when multiple keys are provided, enable rotation.
self._key_pool: KeyPool | None = None
if api_keys and len(api_keys) > 1:
from framework.llm.key_pool import KeyPool

self._key_pool = KeyPool(api_keys)
self.api_key = api_keys[0] # default for OAuth detection below
logger.info(
"[litellm] Key pool enabled with %d keys for model %s",
len(api_keys),
model,
)
else:
self.api_key = api_key or (api_keys[0] if api_keys else None)
self.api_base = api_base or self._default_api_base_for_model(_original_model)
self.extra_kwargs = kwargs
# Detect Claude Code OAuth subscription by checking the api_key prefix.
self._claude_code_oauth = bool(api_key and api_key.startswith("sk-ant-oat"))
self._claude_code_oauth = bool(self.api_key and self.api_key.startswith("sk-ant-oat"))
if self._claude_code_oauth:
# Anthropic requires a specific User-Agent for OAuth requests.
eh = self.extra_kwargs.setdefault("extra_headers", {})
@@ -555,6 +638,36 @@ class LiteLLMProvider(LLMProvider):
"LiteLLM is not installed. Please install it with: uv pip install litellm"
)
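A hedged usage sketch of the new `api_keys` parameter; the model name and key values are placeholders:

```python
provider = LiteLLMProvider(
    model="gpt-4o-mini",
    api_keys=["sk-key-one", "sk-key-two", "sk-key-three"],  # enables KeyPool rotation
)
# With 2+ keys a KeyPool is created and api_key defaults to api_keys[0],
# so the OAuth-prefix detection above still works unchanged.
```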
def reconfigure(self, model: str, api_key: str | None = None, api_base: str | None = None) -> None:
"""Hot-swap the model, API key, and/or base URL on this provider instance.

Since the same LiteLLMProvider object is shared by reference across the
session, queen runner, agent runtime, and execution streams, mutating
these attributes in-place propagates to all callers on the next LLM call.
"""
_original_model = model
if _is_ollama_model(model):
model = _ensure_ollama_chat_prefix(model)
elif model.lower().startswith("kimi/"):
model = "anthropic/" + model[len("kimi/"):]
if api_base and api_base.rstrip("/").endswith("/v1"):
api_base = api_base.rstrip("/")[:-3]
elif model.lower().startswith("hive/"):
model = "anthropic/" + model[len("hive/"):]
if api_base and api_base.rstrip("/").endswith("/v1"):
api_base = api_base.rstrip("/")[:-3]
self.model = model
self.api_key = api_key
self.api_base = api_base or self._default_api_base_for_model(_original_model)
self._claude_code_oauth = bool(api_key and api_key.startswith("sk-ant-oat"))
if self._claude_code_oauth:
eh = self.extra_kwargs.setdefault("extra_headers", {})
eh.setdefault("user-agent", CLAUDE_CODE_USER_AGENT)
self._codex_backend = bool(
self.api_base and "chatgpt.com/backend-api/codex" in self.api_base
)
self._antigravity = bool(self.api_base and "localhost:8069" in self.api_base)

# Note: The Codex ChatGPT backend is a Responses API endpoint at
# chatgpt.com/backend-api/codex/responses. LiteLLM's model registry
# correctly marks codex models with mode="responses", so we do NOT
@@ -578,10 +691,20 @@ class LiteLLMProvider(LLMProvider):
def _completion_with_rate_limit_retry(
self, max_retries: int | None = None, **kwargs: Any
) -> Any:
"""Call litellm.completion with retry on 429 rate limit errors and empty responses."""
"""Call litellm.completion with retry on 429 rate limit errors and empty responses.

When a :class:`KeyPool` is configured, rate-limited keys are rotated
automatically so the next attempt uses a different key -- no sleep
needed between attempts.
"""
model = kwargs.get("model", self.model)
retries = max_retries if max_retries is not None else RATE_LIMIT_MAX_RETRIES
for attempt in range(retries + 1):
# Rotate key from pool when available.
current_key: str | None = None
if self._key_pool:
current_key = self._key_pool.get_key()
kwargs["api_key"] = current_key
try:
response = litellm.completion(**kwargs) # type: ignore[union-attr]
@@ -656,8 +779,22 @@ class LiteLLMProvider(LLMProvider):
time.sleep(wait)
continue

if self._key_pool and current_key:
self._key_pool.mark_success(current_key)
return response
except RateLimitError as e:
# Key pool: mark the offending key and rotate immediately.
if self._key_pool and current_key:
self._key_pool.mark_rate_limited(current_key, retry_after=60.0)
# When we have other healthy keys, skip the sleep -- the
# next iteration will pick a different key automatically.
if attempt < retries:
logger.info(
"[retry] Key pool rotating away from ...%s on 429",
current_key[-6:],
)
continue

# Dump full request to file for debugging
messages = kwargs.get("messages", [])
token_count, token_method = _estimate_tokens(model, messages)
@@ -670,7 +807,7 @@ class LiteLLMProvider(LLMProvider):
if attempt == retries:
logger.error(
f"[retry] GAVE UP on {model} after {retries + 1} "
f"attempts — rate limit error: {e!s}. "
f"attempts -- rate limit error: {e!s}. "
f"~{token_count} tokens ({token_method}). "
f"Full request dumped to: {dump_path}"
)
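The `KeyPool` interface used above (`get_key`, `mark_success`, `mark_rate_limited`) lives in `framework.llm.key_pool`, which is not part of this hunk. A minimal sketch of what such a pool could look like, under the assumption that it simply skips keys whose rate-limit cooldown has not expired; the real class may differ:

```python
import time


class KeyPool:
    """Round-robin pool that avoids keys currently in a rate-limit cooldown.

    Sketch only -- the real framework.llm.key_pool.KeyPool may differ.
    """

    def __init__(self, keys: list[str]):
        self._keys = list(keys)
        self._cooldown_until: dict[str, float] = {}
        self._next = 0

    def get_key(self) -> str:
        # Prefer the first key that is not cooling down; otherwise fall back
        # to plain round-robin so callers always get *some* key.
        for _ in range(len(self._keys)):
            key = self._keys[self._next % len(self._keys)]
            self._next += 1
            if time.monotonic() >= self._cooldown_until.get(key, 0.0):
                return key
        return self._keys[self._next % len(self._keys)]

    def mark_rate_limited(self, key: str, retry_after: float = 60.0) -> None:
        self._cooldown_until[key] = time.monotonic() + retry_after

    def mark_success(self, key: str) -> None:
        self._cooldown_until.pop(key, None)
```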
@@ -789,10 +926,16 @@ class LiteLLMProvider(LLMProvider):
"""Async version of _completion_with_rate_limit_retry.

Uses litellm.acompletion and asyncio.sleep instead of blocking calls.
When a :class:`KeyPool` is configured, rate-limited keys are rotated.
"""
model = kwargs.get("model", self.model)
retries = max_retries if max_retries is not None else RATE_LIMIT_MAX_RETRIES
for attempt in range(retries + 1):
# Rotate key from pool when available.
current_key: str | None = None
if self._key_pool:
current_key = self._key_pool.get_key()
kwargs["api_key"] = current_key
try:
response = await litellm.acompletion(**kwargs) # type: ignore[union-attr]

@@ -861,8 +1004,20 @@ class LiteLLMProvider(LLMProvider):
await asyncio.sleep(wait)
continue

if self._key_pool and current_key:
self._key_pool.mark_success(current_key)
return response
except RateLimitError as e:
# Key pool: mark the offending key and rotate immediately.
if self._key_pool and current_key:
self._key_pool.mark_rate_limited(current_key, retry_after=60.0)
if attempt < retries:
logger.info(
"[async-retry] Key pool rotating away from ...%s on 429",
current_key[-6:],
)
continue

messages = kwargs.get("messages", [])
token_count, token_method = _estimate_tokens(model, messages)
dump_path = _dump_failed_request(
@@ -874,7 +1029,7 @@ class LiteLLMProvider(LLMProvider):
if attempt == retries:
logger.error(
f"[async-retry] GAVE UP on {model} after {retries + 1} "
f"attempts — rate limit error: {e!s}. "
f"attempts -- rate limit error: {e!s}. "
f"~{token_count} tokens ({token_method}). "
f"Full request dumped to: {dump_path}"
)
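`_estimate_tokens` is called in both retry paths but not shown in this hunk. A plausible sketch of its contract (a count plus the method used to obtain it), assuming `litellm.token_counter` is available and falling back to a rough characters/4 heuristic; the real helper may differ:

```python
def _estimate_tokens(model: str, messages: list[dict]) -> tuple[int, str]:
    """Return (approximate_token_count, method_name). Sketch only."""
    try:
        import litellm

        return litellm.token_counter(model=model, messages=messages), "litellm"
    except Exception:
        # Rough heuristic: ~4 characters per token across all message content.
        chars = sum(len(str(m.get("content", ""))) for m in messages)
        return chars // 4, "chars/4"
```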
@@ -1751,6 +1906,10 @@ class LiteLLMProvider(LLMProvider):

# --- Finish ---
if choice.finish_reason:
# Kimi's 'pause_turn' means the model emitted tool
# calls and expects results — equivalent to 'tool_calls'.
if choice.finish_reason == "pause_turn":
choice.finish_reason = "tool_calls" if tool_calls_acc else "stop"
stream_finish_reason = choice.finish_reason
for _idx, tc_data in sorted(tool_calls_acc.items()):
parsed_args = self._parse_tool_call_arguments(
@@ -1918,6 +2077,39 @@ class LiteLLMProvider(LLMProvider):
f"(last_role={last_role}). Returning empty result."
)

# Gemini sometimes outputs tool calls as text in
# <tool_code>{"name": {...args}}</tool_code> blocks
# instead of using the function-calling API. Extract
# these as real ToolCallEvents and strip them from the
# text so the rest of the system treats them normally.
if accumulated_text and "<tool_code>" in accumulated_text:
extracted, cleaned = _extract_text_tool_calls(accumulated_text)
if extracted:
tool_names = [tc.tool_name for tc in extracted]
logger.info(
"[stream] Model emitted %d tool call(s) as <tool_code> text "
"instead of structured function calls; converting to "
"synthetic ToolCallEvents: %s",
len(extracted),
tool_names,
)
accumulated_text = cleaned
# Emit a corrected TextDeltaEvent so the caller's
# accumulated_text is overwritten with the cleaned text.
yield TextDeltaEvent(content="", snapshot=cleaned)
# Insert synthetic ToolCallEvents before FinishEvent.
finish_idx = next(
(i for i, ev in enumerate(tail_events) if isinstance(ev, FinishEvent)),
len(tail_events),
)
for tc_ev in reversed(extracted):
tail_events.insert(finish_idx, tc_ev)
# Update TextEndEvent if present.
for _i, _ev in enumerate(tail_events):
if isinstance(_ev, TextEndEvent):
tail_events[_i] = TextEndEvent(full_text=cleaned)
break

# Success (or empty after exhausted retries) — flush events.
for event in tail_events:
yield event
@@ -1937,6 +2129,36 @@ class LiteLLMProvider(LLMProvider):
return

except Exception as e:
# Some providers return non-standard finish_reason values
# (e.g., kimi-k2.5 sends 'pause_turn') that LiteLLM's
# internal stream_chunk_builder rejects via Pydantic
# validation. If we already accumulated content and built
# tail_events before the error, the stream was successful —
# yield what we have instead of discarding it.
if (accumulated_text or tool_calls_acc) and tail_events:
# LiteLLM may wrap the original ValidationError in an
# APIError with a different message. Check the full
# exception chain (str(e) + str(__cause__)).
_err_chain = f"{e} {e.__cause__}" if e.__cause__ else str(e)
_is_finish_reason_err = (
"finish_reason" in _err_chain and "validation error" in _err_chain.lower()
) or (
# Fallback: the APIError wrapper message for chunk-building failures
"building chunks" in str(e).lower() and (accumulated_text or tool_calls_acc)
)
if _is_finish_reason_err:
logger.warning(
"[stream] %s: LiteLLM finish_reason validation "
"error (non-standard provider value). "
"Content was streamed successfully — "
"using accumulated result. Error: %s",
self.model,
e,
)
for event in tail_events:
yield event
return

if self._should_use_openrouter_tool_compat(e, tools):
_remember_openrouter_tool_compat_model(self.model)
async for event in self._stream_via_openrouter_tool_compat(
@@ -1947,13 +2169,13 @@ class LiteLLMProvider(LLMProvider):
):
yield event
return
if _is_stream_transient_error(e) and attempt < RATE_LIMIT_MAX_RETRIES:
if _is_stream_transient_error(e) and attempt < STREAM_TRANSIENT_MAX_RETRIES:
wait = _compute_retry_delay(attempt, exception=e)
logger.warning(
f"[stream-retry] {self.model} transient error "
f"({type(e).__name__}): {e!s}. "
f"Retrying in {wait:.1f}s "
f"(attempt {attempt + 1}/{RATE_LIMIT_MAX_RETRIES})"
f"(attempt {attempt + 1}/{STREAM_TRANSIENT_MAX_RETRIES})"
)
await asyncio.sleep(wait)
continue
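`_compute_retry_delay` and `STREAM_TRANSIENT_MAX_RETRIES` are defined elsewhere in the module. As a rough illustration of the retry maths, a hedged sketch of exponential backoff with jitter that honours a provider-supplied retry hint when the exception exposes one; the attribute name and the exact policy are assumptions:

```python
import random


def _compute_retry_delay(attempt: int, exception: Exception | None = None) -> float:
    """Exponential backoff with jitter. Sketch only -- the real helper may differ."""
    retry_after = getattr(exception, "retry_after", None)  # assumed attribute
    if retry_after:
        return float(retry_after)
    base = min(2.0**attempt, 30.0)           # 1, 2, 4, 8, ... capped at 30s
    return base + random.uniform(0.0, 1.0)   # jitter to avoid thundering-herd retries
```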
@@ -0,0 +1,4 @@
"""Loader layer -- agent loading from disk (JSON config, MCP, credentials)."""

from framework.loader.agent_loader import AgentLoader # noqa: F401
from framework.loader.tool_registry import ToolRegistry # noqa: F401
@@ -13,21 +13,20 @@ from framework.config import get_hive_config, get_max_context_tokens, get_prefer
from framework.credentials.validation import (
ensure_credential_key_env as _ensure_credential_key_env,
)
from framework.graph import Goal
from framework.graph.edge import (
from framework.orchestrator import Goal
from framework.orchestrator.edge import (
DEFAULT_MAX_TOKENS,
EdgeCondition,
EdgeSpec,
GraphSpec,
)
from framework.graph.executor import ExecutionResult
from framework.graph.node import NodeSpec
from framework.orchestrator.orchestrator import ExecutionResult
from framework.orchestrator.node import NodeSpec
from framework.llm.provider import LLMProvider, Tool
from framework.runner.preload_validation import run_preload_validation
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.agent_runtime import AgentRuntime, AgentRuntimeConfig, create_agent_runtime
from framework.runtime.execution_stream import EntryPointSpec
from framework.runtime.runtime_log_store import RuntimeLogStore
from framework.loader.preload_validation import run_preload_validation
from framework.loader.tool_registry import ToolRegistry
from framework.host.agent_host import AgentHost, AgentRuntimeConfig
from framework.host.execution_manager import EntryPointSpec
from framework.tools.flowchart_utils import generate_fallback_flowchart

logger = logging.getLogger(__name__)
@@ -881,6 +880,167 @@ class ValidationResult:
missing_credentials: list[str] = field(default_factory=list)


def _resolve_template_vars(text: str | None, variables: dict[str, str]) -> str | None:
"""Resolve ``{{variable_name}}`` placeholders in *text*."""
if text is None or not variables:
return text
import re

def _replace(m: re.Match) -> str:
key = m.group(1).strip()
return variables.get(key, m.group(0))

return re.sub(r"\{\{(.+?)\}\}", _replace, text)

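A small usage sketch of the resolver above, with hypothetical values:

```python
prompt = "You are {{agent_role}} for {{company}}. Unknown: {{missing}}"
variables = {"agent_role": "a support agent", "company": "Acme"}

print(_resolve_template_vars(prompt, variables))
# -> "You are a support agent for Acme. Unknown: {{missing}}"
# Unresolved placeholders are left intact (m.group(0) is returned unchanged).
```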
def load_agent_config(data: str | dict) -> tuple[GraphSpec, Goal]:
|
||||
"""Load ``GraphSpec`` and ``Goal`` from a declarative :class:`AgentConfig`.
|
||||
|
||||
The declarative format uses a ``name`` key at the top level, unlike the
|
||||
legacy export format which uses ``graph``/``goal`` keys. The runner
|
||||
auto-detects the format in :meth:`AgentLoader.load`.
|
||||
|
||||
Template variables in ``config.variables`` are resolved in all
|
||||
``system_prompt`` and ``identity_prompt`` fields via ``{{var_name}}``.
|
||||
|
||||
Returns:
|
||||
Tuple of (GraphSpec, Goal)
|
||||
"""
|
||||
from framework.orchestrator.edge import EdgeCondition, EdgeSpec
|
||||
from framework.orchestrator.goal import Constraint, Goal as GoalModel, SuccessCriterion
|
||||
from framework.schemas.agent_config import AgentConfig
|
||||
|
||||
if isinstance(data, str):
|
||||
data = json.loads(data)
|
||||
|
||||
config = AgentConfig.model_validate(data)
|
||||
tvars = config.variables
|
||||
|
||||
# Build Goal
|
||||
success_criteria = [
|
||||
SuccessCriterion(
|
||||
id=f"sc-{i}",
|
||||
description=sc,
|
||||
metric="llm_judge",
|
||||
target="",
|
||||
)
|
||||
for i, sc in enumerate(config.goal.success_criteria)
|
||||
]
|
||||
constraints = [
|
||||
Constraint(
|
||||
id=f"c-{i}",
|
||||
description=c,
|
||||
constraint_type="hard",
|
||||
category="general",
|
||||
)
|
||||
for i, c in enumerate(config.goal.constraints)
|
||||
]
|
||||
goal = GoalModel(
|
||||
id=f"{config.name}-goal",
|
||||
name=config.name,
|
||||
description=config.goal.description,
|
||||
success_criteria=success_criteria,
|
||||
constraints=constraints,
|
||||
)
|
||||
|
||||
# Build nodes
|
||||
condition_map = {
|
||||
"always": EdgeCondition.ALWAYS,
|
||||
"on_success": EdgeCondition.ON_SUCCESS,
|
||||
"on_failure": EdgeCondition.ON_FAILURE,
|
||||
"conditional": EdgeCondition.CONDITIONAL,
|
||||
"llm_decide": EdgeCondition.LLM_DECIDE,
|
||||
}
|
||||
|
||||
nodes = []
|
||||
for nc in config.nodes:
|
||||
# Resolve tool access: node-level config -> agent-level fallback
|
||||
if nc.tools.policy == "explicit" and nc.tools.allowed:
|
||||
tools_list = nc.tools.allowed
|
||||
tool_policy = "explicit"
|
||||
elif nc.tools.policy == "none":
|
||||
tools_list = []
|
||||
tool_policy = "none"
|
||||
else:
|
||||
# Inherit agent-level tool config
|
||||
if config.tools.policy == "explicit" and config.tools.allowed:
|
||||
tools_list = config.tools.allowed
|
||||
else:
|
||||
tools_list = []
|
||||
tool_policy = config.tools.policy
|
||||
|
||||
node_kwargs: dict = {
|
||||
"id": nc.id,
|
||||
"name": nc.name or nc.id,
|
||||
"description": nc.description or "",
|
||||
"node_type": nc.node_type,
|
||||
"system_prompt": _resolve_template_vars(nc.system_prompt, tvars),
|
||||
"tools": tools_list,
|
||||
"tool_access_policy": tool_policy,
|
||||
"model": nc.model,
|
||||
"input_keys": nc.input_keys,
|
||||
"output_keys": nc.output_keys,
|
||||
"nullable_output_keys": nc.nullable_output_keys,
|
||||
"max_iterations": nc.max_iterations,
|
||||
"success_criteria": nc.success_criteria,
|
||||
"skip_judge": nc.skip_judge,
|
||||
}
|
||||
# Optional fields -- only pass when set (avoids overriding defaults)
|
||||
if nc.client_facing:
|
||||
node_kwargs["client_facing"] = nc.client_facing
|
||||
if nc.max_node_visits != 1:
|
||||
node_kwargs["max_node_visits"] = nc.max_node_visits
|
||||
if nc.failure_criteria:
|
||||
node_kwargs["failure_criteria"] = nc.failure_criteria
|
||||
if nc.max_retries is not None:
|
||||
node_kwargs["max_retries"] = nc.max_retries
|
||||
|
||||
nodes.append(NodeSpec(**node_kwargs))
|
||||
|
||||
# Build edges
|
||||
edges = []
|
||||
for i, ec in enumerate(config.edges):
|
||||
edges.append(
|
||||
EdgeSpec(
|
||||
id=f"e-{i}-{ec.from_node}-{ec.to_node}",
|
||||
source=ec.from_node,
|
||||
target=ec.to_node,
|
||||
condition=condition_map.get(ec.condition, EdgeCondition.ON_SUCCESS),
|
||||
condition_expr=ec.condition_expr,
|
||||
priority=ec.priority,
|
||||
input_mapping=ec.input_mapping,
|
||||
)
|
||||
)
|
||||
|
||||
# Build entry_points dict for GraphSpec
|
||||
entry_points_dict: dict = {}
|
||||
if config.entry_points:
|
||||
for ep in config.entry_points:
|
||||
entry_points_dict[ep.id] = ep.entry_node or config.entry_node
|
||||
else:
|
||||
entry_points_dict = {"default": config.entry_node}
|
||||
|
||||
# Build GraphSpec
|
||||
graph_kwargs: dict = {
|
||||
"id": f"{config.name}-graph",
|
||||
"goal_id": goal.id,
|
||||
"version": config.version,
|
||||
"entry_node": config.entry_node,
|
||||
"entry_points": entry_points_dict,
|
||||
"terminal_nodes": config.terminal_nodes,
|
||||
"pause_nodes": config.pause_nodes,
|
||||
"nodes": nodes,
|
||||
"edges": edges,
|
||||
"max_tokens": config.max_tokens,
|
||||
"loop_config": dict(config.loop_config),
|
||||
"conversation_mode": config.conversation_mode,
|
||||
"identity_prompt": _resolve_template_vars(config.identity_prompt, tvars) or "",
|
||||
}
|
||||
|
||||
graph = GraphSpec(**graph_kwargs)
|
||||
return graph, goal
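To make the declarative shape concrete, a minimal sketch of a config passed to `load_agent_config`. The field names come from the code above, but required fields, defaults, and the exact `AgentConfig` schema are assumptions, so a real config may need more:

```python
minimal_config = {
    "name": "faq-bot",
    "goal": {
        "description": "Answer product FAQs",
        "success_criteria": ["User question is answered"],
        "constraints": ["Never invent pricing"],
    },
    "nodes": [
        {
            "id": "answer",
            "node_type": "event_loop",
            "system_prompt": "You are {{persona}}.",
            "tools": {"policy": "none"},
        }
    ],
    "edges": [],
    "entry_node": "answer",
    "terminal_nodes": ["answer"],
    "variables": {"persona": "a concise FAQ assistant"},
}

graph, goal = load_agent_config(minimal_config)
```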
|
||||
|
||||
|
||||
def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
|
||||
"""
|
||||
Load GraphSpec and Goal from export_graph() output.
|
||||
@@ -942,7 +1102,7 @@ def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
|
||||
)
|
||||
|
||||
# Build Goal
|
||||
from framework.graph.goal import Constraint, SuccessCriterion
|
||||
from framework.orchestrator.goal import Constraint, SuccessCriterion
|
||||
|
||||
success_criteria = []
|
||||
for sc_data in goal_data.get("success_criteria", []):
|
||||
@@ -979,7 +1139,7 @@ def load_agent_export(data: str | dict) -> tuple[GraphSpec, Goal]:
|
||||
return graph, goal
|
||||
|
||||
|
||||
class AgentRunner:
|
||||
class AgentLoader:
|
||||
"""
|
||||
Loads and runs exported agents with minimal boilerplate.
|
||||
|
||||
@@ -991,15 +1151,15 @@ class AgentRunner:
|
||||
|
||||
Usage:
|
||||
# Simple usage
|
||||
runner = AgentRunner.load("exports/outbound-sales-agent")
|
||||
runner = AgentLoader.load("exports/outbound-sales-agent")
|
||||
result = await runner.run({"lead_id": "123"})
|
||||
|
||||
# With context manager
|
||||
async with AgentRunner.load("exports/outbound-sales-agent") as runner:
|
||||
async with AgentLoader.load("exports/outbound-sales-agent") as runner:
|
||||
result = await runner.run({"lead_id": "123"})
|
||||
|
||||
# With custom tools
|
||||
runner = AgentRunner.load("exports/outbound-sales-agent")
|
||||
runner = AgentLoader.load("exports/outbound-sales-agent")
|
||||
runner.register_tool("my_tool", my_tool_func)
|
||||
result = await runner.run({"lead_id": "123"})
|
||||
"""
|
||||
@@ -1027,7 +1187,7 @@ class AgentRunner:
|
||||
credential_store: Any | None = None,
|
||||
):
|
||||
"""
|
||||
Initialize the runner (use AgentRunner.load() instead).
|
||||
Initialize the runner (use AgentLoader.load() instead).
|
||||
|
||||
Args:
|
||||
agent_path: Path to agent folder
|
||||
@@ -1082,7 +1242,7 @@ class AgentRunner:
|
||||
self._approval_callback: Callable | None = None
|
||||
|
||||
# AgentRuntime — unified execution path for all agents
|
||||
self._agent_runtime: AgentRuntime | None = None
|
||||
self._agent_runtime: AgentHost | None = None
|
||||
# Pre-load validation: structural checks + credentials.
|
||||
# Fails fast with actionable guidance — no MCP noise on screen.
|
||||
run_preload_validation(
|
||||
@@ -1101,13 +1261,7 @@ class AgentRunner:
|
||||
os.environ["HIVE_AGENT_NAME"] = agent_path.name
|
||||
os.environ["HIVE_STORAGE_PATH"] = str(self._storage_path)
|
||||
|
||||
# Auto-discover MCP servers from mcp_servers.json
|
||||
mcp_config_path = agent_path / "mcp_servers.json"
|
||||
if mcp_config_path.exists():
|
||||
self._load_mcp_servers_from_config(mcp_config_path)
|
||||
|
||||
# Auto-discover registry-selected MCP servers from mcp_registry.json
|
||||
self._load_registry_mcp_servers(agent_path)
|
||||
# MCP tools are loaded by McpRegistryStage in the pipeline during AgentHost.start()
|
||||
|
||||
@staticmethod
|
||||
def _import_agent_module(agent_path: Path):
|
||||
@@ -1158,7 +1312,7 @@ class AgentRunner:
|
||||
interactive: bool = True,
|
||||
skip_credential_validation: bool | None = None,
|
||||
credential_store: Any | None = None,
|
||||
) -> "AgentRunner":
|
||||
) -> "AgentLoader":
|
||||
"""
|
||||
Load an agent from an export folder.
|
||||
|
||||
@@ -1299,21 +1453,22 @@ class AgentRunner:
|
||||
runner._agent_skills = agent_skills
|
||||
return runner
|
||||
|
||||
# Fallback: load from agent.json (legacy JSON-based agents)
|
||||
# Fallback: load from agent.json (declarative config)
|
||||
agent_json_path = agent_path / "agent.json"
|
||||
|
||||
if not agent_json_path.is_file():
|
||||
raise FileNotFoundError(f"No agent.py or agent.json found in {agent_path}")
|
||||
|
||||
with open(agent_json_path, encoding="utf-8") as f:
|
||||
export_data = f.read()
|
||||
|
||||
export_data = agent_json_path.read_text(encoding="utf-8")
|
||||
if not export_data.strip():
|
||||
raise ValueError(f"Empty agent export file: {agent_json_path}")
|
||||
raise ValueError(f"Empty agent.json: {agent_json_path}")
|
||||
|
||||
try:
|
||||
graph, goal = load_agent_export(export_data)
|
||||
except json.JSONDecodeError as exc:
|
||||
raise ValueError(f"Invalid JSON in agent export file: {agent_json_path}") from exc
|
||||
parsed = json.loads(export_data)
|
||||
graph, goal = load_agent_config(parsed)
|
||||
logger.info(
|
||||
"Loaded declarative agent config from agent.json (name=%s)",
|
||||
parsed.get("name"),
|
||||
)
|
||||
|
||||
# Generate flowchart.json if missing (for legacy JSON-based agents)
|
||||
generate_fallback_flowchart(graph, goal, agent_path)
|
||||
@@ -1396,60 +1551,6 @@ class AgentRunner:
|
||||
}
|
||||
return self._tool_registry.register_mcp_server(server_config)
|
||||
|
||||
def _load_mcp_servers_from_config(self, config_path: Path) -> None:
|
||||
"""Load and register MCP servers from a configuration file."""
|
||||
self._tool_registry.load_mcp_config(config_path)
|
||||
|
||||
def _load_registry_mcp_servers(self, agent_path: Path) -> None:
|
||||
"""Load and register MCP servers selected via ``mcp_registry.json``."""
|
||||
registry_json = agent_path / "mcp_registry.json"
|
||||
if registry_json.is_file():
|
||||
self._tool_registry.set_mcp_registry_agent_path(agent_path)
|
||||
else:
|
||||
self._tool_registry.set_mcp_registry_agent_path(None)
|
||||
|
||||
from framework.runner.mcp_registry import MCPRegistry
|
||||
|
||||
try:
|
||||
registry = MCPRegistry()
|
||||
registry.initialize()
|
||||
server_configs, selection_max_tools = registry.load_agent_selection(agent_path)
|
||||
except Exception as exc:
|
||||
logger.warning(
|
||||
"Failed to load MCP registry servers for '%s': %s",
|
||||
agent_path.name,
|
||||
exc,
|
||||
)
|
||||
return
|
||||
|
||||
if not server_configs:
|
||||
return
|
||||
|
||||
results = self._tool_registry.load_registry_servers(
|
||||
server_configs,
|
||||
preserve_existing_tools=True,
|
||||
log_collisions=True,
|
||||
max_tools=selection_max_tools,
|
||||
)
|
||||
loaded = [result for result in results if result["status"] == "loaded"]
|
||||
skipped = [result for result in results if result["status"] != "loaded"]
|
||||
|
||||
logger.info(
|
||||
"Loaded %d/%d MCP registry server(s) for agent '%s'",
|
||||
len(loaded),
|
||||
len(results),
|
||||
agent_path.name,
|
||||
)
|
||||
if skipped:
|
||||
logger.info(
|
||||
"Skipped MCP registry servers for agent '%s': %s",
|
||||
agent_path.name,
|
||||
[
|
||||
{"server": result["server"], "reason": result["skipped_reason"]}
|
||||
for result in skipped
|
||||
],
|
||||
)
|
||||
|
||||
def set_approval_callback(self, callback: Callable) -> None:
|
||||
"""
|
||||
Set a callback for human-in-the-loop approval during execution.
|
||||
@@ -1460,272 +1561,119 @@ class AgentRunner:
|
||||
self._approval_callback = callback
|
||||
|
||||
def _setup(self, event_bus=None) -> None:
|
||||
"""Set up runtime, LLM, and executor."""
|
||||
# Configure structured logging (auto-detects JSON vs human-readable)
|
||||
"""Set up runtime via pipeline stages.
|
||||
|
||||
Builds a pipeline with the default stages (LLM, credentials, MCP,
|
||||
skills) and passes it to AgentHost. The stages initialize during
|
||||
``AgentHost.start()`` and inject tools/LLM/credentials/skills.
|
||||
"""
|
||||
from framework.observability import configure_logging
|
||||
from framework.pipeline.stages.credential_resolver import CredentialResolverStage
|
||||
from framework.pipeline.stages.llm_provider import LlmProviderStage
|
||||
from framework.pipeline.stages.mcp_registry import McpRegistryStage
|
||||
from framework.pipeline.stages.skill_registry import SkillRegistryStage
|
||||
from framework.skills.config import SkillsConfig
|
||||
|
||||
configure_logging(level="INFO", format="auto")
|
||||
|
||||
# Set up session context for tools (agent_id)
|
||||
# Set up session context for tools
|
||||
agent_id = self.graph.id or "unknown"
|
||||
self._tool_registry.set_session_context(agent_id=agent_id)
|
||||
|
||||
self._tool_registry.set_session_context(
|
||||
agent_id=agent_id,
|
||||
)
|
||||
# Read MCP server refs from agent.json
|
||||
mcp_refs = []
|
||||
agent_json = self.agent_path / "agent.json"
|
||||
if agent_json.exists():
|
||||
try:
|
||||
import json as _json
|
||||
|
||||
# Create LLM provider
|
||||
# Uses LiteLLM which auto-detects the provider from model name
|
||||
# Skip if already injected (e.g. worker agents with a pre-built LLM)
|
||||
if self._llm is not None:
|
||||
pass # LLM already configured externally
|
||||
elif self.mock_mode:
|
||||
# Use mock LLM for testing without real API calls
|
||||
from framework.llm.mock import MockLLMProvider
|
||||
data = _json.loads(agent_json.read_text(encoding="utf-8"))
|
||||
mcp_refs = data.get("mcp_servers", [])
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
self._llm = MockLLMProvider(model=self.model)
|
||||
else:
|
||||
from framework.llm.litellm import LiteLLMProvider
|
||||
|
||||
# Check if a subscription mode is configured
|
||||
config = get_hive_config()
|
||||
llm_config = config.get("llm", {})
|
||||
use_claude_code = llm_config.get("use_claude_code_subscription", False)
|
||||
use_codex = llm_config.get("use_codex_subscription", False)
|
||||
use_kimi_code = llm_config.get("use_kimi_code_subscription", False)
|
||||
use_antigravity = llm_config.get("use_antigravity_subscription", False)
|
||||
api_base = llm_config.get("api_base")
|
||||
|
||||
api_key = None
|
||||
if use_claude_code:
|
||||
# Get OAuth token from Claude Code subscription
|
||||
api_key = get_claude_code_token()
|
||||
if not api_key:
|
||||
logger.warning(
|
||||
"Claude Code subscription configured but no token found. "
|
||||
"Run 'claude' to authenticate, then try again."
|
||||
)
|
||||
elif use_codex:
|
||||
# Get OAuth token from Codex subscription
|
||||
api_key = get_codex_token()
|
||||
if not api_key:
|
||||
logger.warning(
|
||||
"Codex subscription configured but no token found. "
|
||||
"Run 'codex' to authenticate, then try again."
|
||||
)
|
||||
elif use_kimi_code:
|
||||
# Get API key from Kimi Code CLI config (~/.kimi/config.toml)
|
||||
api_key = get_kimi_code_token()
|
||||
if not api_key:
|
||||
logger.warning(
|
||||
"Kimi Code subscription configured but no key found. "
|
||||
"Run 'kimi /login' to authenticate, then try again."
|
||||
)
|
||||
elif use_antigravity:
|
||||
pass # AntigravityProvider handles credentials internally
|
||||
|
||||
if api_key and use_claude_code:
|
||||
# Use litellm's built-in Anthropic OAuth support.
|
||||
# The lowercase "authorization" key triggers OAuth detection which
|
||||
# adds the required anthropic-beta and browser-access headers.
|
||||
self._llm = LiteLLMProvider(
|
||||
model=self.model,
|
||||
api_key=api_key,
|
||||
api_base=api_base,
|
||||
extra_headers={"authorization": f"Bearer {api_key}"},
|
||||
)
|
||||
elif api_key and use_codex:
|
||||
# OpenAI Codex subscription routes through the ChatGPT backend
|
||||
# (chatgpt.com/backend-api/codex/responses), NOT the standard
|
||||
# OpenAI API. The consumer OAuth token lacks platform API scopes.
|
||||
extra_headers: dict[str, str] = {
|
||||
"Authorization": f"Bearer {api_key}",
|
||||
"User-Agent": "CodexBar",
|
||||
}
|
||||
account_id = get_codex_account_id()
|
||||
if account_id:
|
||||
extra_headers["ChatGPT-Account-Id"] = account_id
|
||||
self._llm = LiteLLMProvider(
|
||||
model=self.model,
|
||||
api_key=api_key,
|
||||
api_base="https://chatgpt.com/backend-api/codex",
|
||||
extra_headers=extra_headers,
|
||||
store=False,
|
||||
allowed_openai_params=["store"],
|
||||
)
|
||||
elif api_key and use_kimi_code:
|
||||
# Kimi Code subscription uses the Kimi coding API (OpenAI-compatible).
|
||||
# The api_base is set automatically by LiteLLMProvider for kimi/ models.
|
||||
self._llm = LiteLLMProvider(
|
||||
model=self.model,
|
||||
api_key=api_key,
|
||||
api_base=api_base,
|
||||
)
|
||||
elif use_antigravity:
|
||||
# Direct OAuth to Google's internal Cloud Code Assist gateway.
|
||||
# No local proxy required — AntigravityProvider handles token
|
||||
# refresh and Gemini-format request/response conversion natively.
|
||||
from framework.llm.antigravity import AntigravityProvider # noqa: PLC0415
|
||||
|
||||
provider = AntigravityProvider(model=self.model)
|
||||
if not provider.has_credentials():
|
||||
print(
|
||||
"Warning: Antigravity credentials not found. "
|
||||
"Run: uv run python core/antigravity_auth.py auth account add"
|
||||
)
|
||||
self._llm = provider
|
||||
else:
|
||||
# Local models (e.g. Ollama) don't need an API key
|
||||
if self._is_local_model(self.model):
|
||||
self._llm = LiteLLMProvider(
|
||||
model=self.model,
|
||||
api_base=api_base,
|
||||
)
|
||||
else:
|
||||
# Fall back to environment variable
|
||||
# First check api_key_env_var from config (set by quickstart)
|
||||
api_key_env = llm_config.get("api_key_env_var") or self._get_api_key_env_var(
|
||||
self.model
|
||||
)
|
||||
if api_key_env and os.environ.get(api_key_env):
|
||||
self._llm = LiteLLMProvider(
|
||||
model=self.model,
|
||||
api_key=os.environ[api_key_env],
|
||||
api_base=api_base,
|
||||
)
|
||||
else:
|
||||
# Fall back to credential store
|
||||
api_key = self._get_api_key_from_credential_store()
|
||||
if api_key:
|
||||
self._llm = LiteLLMProvider(
|
||||
model=self.model, api_key=api_key, api_base=api_base
|
||||
)
|
||||
# Set env var so downstream code (e.g. cleanup LLM in
|
||||
# node._extract_json) can also find it
|
||||
if api_key_env:
|
||||
os.environ[api_key_env] = api_key
|
||||
elif api_key_env:
|
||||
logger.warning(
|
||||
"%s not set. LLM calls will fail. "
|
||||
"Set it with: export %s=your-api-key",
|
||||
api_key_env,
|
||||
api_key_env,
|
||||
)
|
||||
|
||||
# Fail fast if the agent needs an LLM but none was configured
|
||||
if self._llm is None:
|
||||
has_llm_nodes = any(
|
||||
node.node_type in ("event_loop", "gcu") for node in self.graph.nodes
|
||||
)
|
||||
if has_llm_nodes:
|
||||
from framework.credentials.models import CredentialError
|
||||
|
||||
if self._is_local_model(self.model):
|
||||
raise CredentialError(
|
||||
f"Failed to initialize LLM for local model '{self.model}'. "
|
||||
f"Ensure your local LLM server is running "
|
||||
f"(e.g. 'ollama serve' for Ollama)."
|
||||
)
|
||||
api_key_env = self._get_api_key_env_var(self.model)
|
||||
hint = (
|
||||
f"Set it with: export {api_key_env}=your-api-key"
|
||||
if api_key_env
|
||||
else "Configure an API key for your LLM provider."
|
||||
)
|
||||
raise CredentialError(f"LLM API key not found for model '{self.model}'. {hint}")
|
||||
|
||||
# For GCU nodes: auto-register GCU MCP server if needed, then expand tool lists
|
||||
has_gcu_nodes = any(node.node_type == "gcu" for node in self.graph.nodes)
|
||||
if has_gcu_nodes:
|
||||
from framework.graph.gcu import GCU_MCP_SERVER_CONFIG, GCU_SERVER_NAME
|
||||
|
||||
# Auto-register GCU MCP server if tools aren't loaded yet
|
||||
gcu_tool_names = self._tool_registry.get_server_tool_names(GCU_SERVER_NAME)
|
||||
if not gcu_tool_names:
|
||||
# Resolve cwd to repo-level tools/ (not relative to agent_path)
|
||||
gcu_config = dict(GCU_MCP_SERVER_CONFIG)
|
||||
_repo_root = Path(__file__).resolve().parent.parent.parent.parent
|
||||
gcu_config["cwd"] = str(_repo_root / "tools")
|
||||
self._tool_registry.register_mcp_server(gcu_config)
|
||||
gcu_tool_names = self._tool_registry.get_server_tool_names(GCU_SERVER_NAME)
|
||||
|
||||
# Expand each GCU node's tools list to include all GCU server tools
|
||||
if gcu_tool_names:
|
||||
for node in self.graph.nodes:
|
||||
if node.node_type == "gcu":
|
||||
existing = set(node.tools)
|
||||
for tool_name in sorted(gcu_tool_names):
|
||||
if tool_name not in existing:
|
||||
node.tools.append(tool_name)
|
||||
|
||||
# For event_loop/gcu nodes: auto-register file tools MCP server, then expand tool lists
|
||||
has_loop_nodes = any(node.node_type in ("event_loop", "gcu") for node in self.graph.nodes)
|
||||
if has_loop_nodes:
|
||||
from framework.graph.files import FILES_MCP_SERVER_CONFIG, FILES_MCP_SERVER_NAME
|
||||
|
||||
files_tool_names = self._tool_registry.get_server_tool_names(FILES_MCP_SERVER_NAME)
|
||||
if not files_tool_names:
|
||||
# Resolve cwd to repo-level tools/ (not relative to agent_path)
|
||||
files_config = dict(FILES_MCP_SERVER_CONFIG)
|
||||
_repo_root = Path(__file__).resolve().parent.parent.parent.parent
|
||||
files_config["cwd"] = str(_repo_root / "tools")
|
||||
self._tool_registry.register_mcp_server(files_config)
|
||||
files_tool_names = self._tool_registry.get_server_tool_names(FILES_MCP_SERVER_NAME)
|
||||
|
||||
if files_tool_names:
|
||||
for node in self.graph.nodes:
|
||||
if node.node_type in ("event_loop", "gcu"):
|
||||
existing = set(node.tools)
|
||||
for tool_name in sorted(files_tool_names):
|
||||
if tool_name not in existing:
|
||||
node.tools.append(tool_name)
|
||||
|
||||
# Get tools for runtime
|
||||
tools = list(self._tool_registry.get_tools().values())
|
||||
tool_executor = self._tool_registry.get_executor()
|
||||
|
||||
# Collect connected account info for system prompt injection
|
||||
accounts_prompt = ""
|
||||
accounts_data: list[dict] | None = None
|
||||
tool_provider_map: dict[str, str] | None = None
|
||||
try:
|
||||
from aden_tools.credentials.store_adapter import CredentialStoreAdapter
|
||||
|
||||
if self._credential_store is not None:
|
||||
adapter = CredentialStoreAdapter(store=self._credential_store)
|
||||
else:
|
||||
adapter = CredentialStoreAdapter.default()
|
||||
accounts_data = adapter.get_all_account_info()
|
||||
tool_provider_map = adapter.get_tool_provider_map()
|
||||
if accounts_data:
|
||||
from framework.graph.prompting import build_accounts_prompt
|
||||
|
||||
accounts_prompt = build_accounts_prompt(accounts_data, tool_provider_map)
|
||||
except Exception:
|
||||
pass # Best-effort — agent works without account info
|
||||
|
||||
# Skill configuration — the runtime handles discovery, loading, trust-gating and
|
||||
# prompt rasterization. The runner just builds the config.
|
||||
from framework.skills.config import SkillsConfig
|
||||
from framework.skills.manager import SkillsManagerConfig
|
||||
|
||||
skills_manager_config = SkillsManagerConfig(
|
||||
skills_config=SkillsConfig.from_agent_vars(
|
||||
default_skills=getattr(self, "_agent_default_skills", None),
|
||||
skills=getattr(self, "_agent_skills", None),
|
||||
# Build default pipeline stages
|
||||
# Default infrastructure stages (always present)
|
||||
pipeline_stages = [
|
||||
LlmProviderStage(
|
||||
model=self.model,
|
||||
mock_mode=self.mock_mode,
|
||||
llm=self._llm,
|
||||
),
|
||||
project_root=self.agent_path,
|
||||
interactive=self._interactive,
|
||||
)
|
||||
CredentialResolverStage(
|
||||
credential_store=self._credential_store,
|
||||
),
|
||||
McpRegistryStage(
|
||||
server_refs=mcp_refs,
|
||||
agent_path=self.agent_path,
|
||||
tool_registry=self._tool_registry,
|
||||
),
|
||||
SkillRegistryStage(
|
||||
project_root=self.agent_path,
|
||||
interactive=self._interactive,
|
||||
skills_config=SkillsConfig.from_agent_vars(
|
||||
default_skills=getattr(self, "_agent_default_skills", None),
|
||||
skills=getattr(self, "_agent_skills", None),
|
||||
),
|
||||
),
|
||||
]
|
||||
|
||||
self._setup_agent_runtime(
|
||||
tools,
|
||||
tool_executor,
|
||||
accounts_prompt=accounts_prompt,
|
||||
accounts_data=accounts_data,
|
||||
tool_provider_map=tool_provider_map,
|
||||
# Merge user-configured stages from ~/.hive/configuration.json
|
||||
from framework.config import get_hive_config
|
||||
from framework.pipeline.registry import build_pipeline_from_config
|
||||
|
||||
hive_config = get_hive_config()
|
||||
user_stages_config = hive_config.get("pipeline", {}).get("stages", [])
|
||||
if user_stages_config:
|
||||
user_pipeline = build_pipeline_from_config(user_stages_config)
|
||||
pipeline_stages.extend(user_pipeline.stages)
|
||||
|
||||
# Merge agent-level overrides from agent.json pipeline field
|
||||
if agent_json.exists():
|
||||
try:
|
||||
agent_pipeline = (
|
||||
_json.loads(agent_json.read_text(encoding="utf-8"))
|
||||
.get("pipeline", {})
|
||||
.get("stages", [])
|
||||
)
|
||||
if agent_pipeline:
|
||||
agent_stages = build_pipeline_from_config(agent_pipeline)
|
||||
pipeline_stages.extend(agent_stages.stages)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Create AgentHost directly (no wrapper)
|
||||
from framework.host.execution_manager import EntryPointSpec
|
||||
from framework.orchestrator.checkpoint_config import CheckpointConfig
|
||||
from framework.tracker.runtime_log_store import RuntimeLogStore
|
||||
|
||||
self._agent_runtime = AgentHost(
|
||||
graph=self.graph,
|
||||
goal=self.goal,
|
||||
storage_path=self._storage_path,
|
||||
runtime_log_store=RuntimeLogStore(
|
||||
base_path=self._storage_path / "runtime_logs",
|
||||
),
|
||||
checkpoint_config=CheckpointConfig(
|
||||
enabled=True,
|
||||
checkpoint_on_node_complete=True,
|
||||
checkpoint_max_age_days=7,
|
||||
async_checkpoint=True,
|
||||
),
|
||||
graph_id=self.graph.id or self.agent_path.name,
|
||||
event_bus=event_bus,
|
||||
skills_manager_config=skills_manager_config,
|
||||
pipeline_stages=pipeline_stages,
|
||||
)
|
||||
self._agent_runtime.register_entry_point(
|
||||
EntryPointSpec(
|
||||
id="default",
|
||||
name="Default",
|
||||
entry_node=self.graph.entry_node,
|
||||
trigger_type="manual",
|
||||
isolation_level="shared",
|
||||
),
|
||||
)
|
||||
self._agent_runtime.intro_message = self.intro_message
|
||||
|
||||
def _get_api_key_env_var(self, model: str) -> str | None:
|
||||
"""Get the environment variable name for the API key based on model name."""
|
||||
@@ -1781,13 +1729,28 @@ class AgentRunner:
|
||||
cred_id = None
|
||||
if model_lower.startswith("anthropic/") or model_lower.startswith("claude"):
|
||||
cred_id = "anthropic"
|
||||
elif model_lower.startswith("openai/") or model_lower.startswith("gpt"):
|
||||
cred_id = "openai"
|
||||
elif model_lower.startswith("gemini/") or model_lower.startswith("gemini"):
|
||||
cred_id = "gemini"
|
||||
elif model_lower.startswith("minimax/") or model_lower.startswith("minimax-"):
|
||||
cred_id = "minimax"
|
||||
elif model_lower.startswith("groq/"):
|
||||
cred_id = "groq"
|
||||
elif model_lower.startswith("cerebras/"):
|
||||
cred_id = "cerebras"
|
||||
elif model_lower.startswith("openrouter/"):
|
||||
cred_id = "openrouter"
|
||||
elif model_lower.startswith("mistral/"):
|
||||
cred_id = "mistral"
|
||||
elif model_lower.startswith("together_ai/") or model_lower.startswith("together/"):
|
||||
cred_id = "together"
|
||||
elif model_lower.startswith("deepseek/"):
|
||||
cred_id = "deepseek"
|
||||
elif model_lower.startswith("kimi/"):
|
||||
cred_id = "kimi"
|
||||
elif model_lower.startswith("hive/"):
|
||||
cred_id = "hive"
|
||||
# Add more mappings as providers are added to LLM_CREDENTIALS
|
||||
|
||||
if cred_id is None:
|
||||
return None
|
||||
@@ -1818,83 +1781,6 @@ class AgentRunner:
|
||||
)
|
||||
return model.lower().startswith(LOCAL_PREFIXES)
|
||||
|
||||
def _setup_agent_runtime(
|
||||
self,
|
||||
tools: list,
|
||||
tool_executor: Callable | None,
|
||||
accounts_prompt: str = "",
|
||||
accounts_data: list[dict] | None = None,
|
||||
tool_provider_map: dict[str, str] | None = None,
|
||||
event_bus=None,
|
||||
skills_catalog_prompt: str = "",
|
||||
protocols_prompt: str = "",
|
||||
skill_dirs: list[str] | None = None,
|
||||
skills_manager_config=None,
|
||||
) -> None:
|
||||
"""Set up multi-entry-point execution using AgentRuntime."""
|
||||
entry_points = []
|
||||
|
||||
# Always create a primary entry point for the graph's entry node.
|
||||
# For multi-entry-point agents this ensures the primary path (e.g.
|
||||
# user-facing rule setup) is reachable alongside async entry points.
|
||||
if self.graph.entry_node:
|
||||
entry_points.insert(
|
||||
0,
|
||||
EntryPointSpec(
|
||||
id="default",
|
||||
name="Default",
|
||||
entry_node=self.graph.entry_node,
|
||||
trigger_type="manual",
|
||||
isolation_level="shared",
|
||||
),
|
||||
)
|
||||
|
||||
# Create AgentRuntime with all entry points
|
||||
log_store = RuntimeLogStore(base_path=self._storage_path / "runtime_logs")
|
||||
|
||||
# Enable checkpointing by default for resumable sessions
|
||||
from framework.graph.checkpoint_config import CheckpointConfig
|
||||
|
||||
checkpoint_config = CheckpointConfig(
|
||||
enabled=True,
|
||||
checkpoint_on_node_start=False, # Only checkpoint after nodes complete
|
||||
checkpoint_on_node_complete=True,
|
||||
checkpoint_max_age_days=7,
|
||||
async_checkpoint=True, # Non-blocking
|
||||
)
|
||||
|
||||
# Handle runtime_config - only pass through if it's actually an AgentRuntimeConfig.
|
||||
# Agents may export a RuntimeConfig (LLM settings) or queen-generated custom classes
|
||||
# that would crash AgentRuntime if passed through.
|
||||
runtime_config = None
|
||||
if self.runtime_config is not None:
|
||||
from framework.runtime.agent_runtime import AgentRuntimeConfig
|
||||
|
||||
if isinstance(self.runtime_config, AgentRuntimeConfig):
|
||||
runtime_config = self.runtime_config
|
||||
|
||||
self._agent_runtime = create_agent_runtime(
|
||||
graph=self.graph,
|
||||
goal=self.goal,
|
||||
storage_path=self._storage_path,
|
||||
entry_points=entry_points,
|
||||
llm=self._llm,
|
||||
tools=tools,
|
||||
tool_executor=tool_executor,
|
||||
runtime_log_store=log_store,
|
||||
checkpoint_config=checkpoint_config,
|
||||
config=runtime_config,
|
||||
graph_id=self.graph.id or self.agent_path.name,
|
||||
accounts_prompt=accounts_prompt,
|
||||
accounts_data=accounts_data,
|
||||
tool_provider_map=tool_provider_map,
|
||||
event_bus=event_bus,
|
||||
skills_manager_config=skills_manager_config,
|
||||
)
|
||||
|
||||
# Pass intro_message through for TUI display
|
||||
self._agent_runtime.intro_message = self.intro_message
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Execution modes
|
||||
#
|
||||
@@ -1975,7 +1861,7 @@ class AgentRunner:
|
||||
sub_ids: list[str] = []
|
||||
|
||||
if has_queen and sys.stdin.isatty():
|
||||
from framework.runtime.event_bus import EventType
|
||||
from framework.host.event_bus import EventType
|
||||
|
||||
runtime = self._agent_runtime
|
||||
|
||||
@@ -2230,9 +2116,7 @@ class AgentRunner:
|
||||
warnings.append(warning_msg)
|
||||
except ImportError:
|
||||
# aden_tools not installed - fall back to direct check
|
||||
has_llm_nodes = any(
|
||||
node.node_type in ("event_loop", "gcu") for node in self.graph.nodes
|
||||
)
|
||||
has_llm_nodes = any(node.node_type == "event_loop" for node in self.graph.nodes)
|
||||
if has_llm_nodes:
|
||||
api_key_env = self._get_api_key_env_var(self.model)
|
||||
if api_key_env and not os.environ.get(api_key_env):
|
||||
@@ -2268,7 +2152,7 @@ class AgentRunner:
|
||||
# Run synchronous cleanup
|
||||
self.cleanup()
|
||||
|
||||
async def __aenter__(self) -> "AgentRunner":
|
||||
async def __aenter__(self) -> "AgentLoader":
|
||||
"""Context manager entry."""
|
||||
self._setup()
|
||||
if self._agent_runtime is not None:
|
||||
@@ -19,7 +19,7 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
|
||||
run_parser.add_argument(
|
||||
"agent_path",
|
||||
type=str,
|
||||
help="Path to agent folder (containing agent.json)",
|
||||
help="Path to agent folder (containing agent.json or agent.py)",
|
||||
)
|
||||
run_parser.add_argument(
|
||||
"--input",
|
||||
@@ -87,7 +87,7 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
|
||||
info_parser.add_argument(
|
||||
"agent_path",
|
||||
type=str,
|
||||
help="Path to agent folder (containing agent.json)",
|
||||
help="Path to agent folder (containing agent.json or agent.py)",
|
||||
)
|
||||
info_parser.add_argument(
|
||||
"--json",
|
||||
@@ -105,7 +105,7 @@ def register_commands(subparsers: argparse._SubParsersAction) -> None:
|
||||
validate_parser.add_argument(
|
||||
"agent_path",
|
||||
type=str,
|
||||
help="Path to agent folder (containing agent.json)",
|
||||
help="Path to agent folder (containing agent.json or agent.py)",
|
||||
)
|
||||
validate_parser.set_defaults(func=cmd_validate)
|
||||
|
||||
@@ -310,7 +310,7 @@ def _prompt_before_start(agent_path: str, runner, model: str | None = None):
|
||||
Updated runner if user proceeds, None if user aborts.
|
||||
"""
|
||||
from framework.credentials.setup import CredentialSetupSession
|
||||
from framework.runner import AgentRunner
|
||||
from framework.loader import AgentLoader
|
||||
|
||||
while True:
|
||||
print()
|
||||
@@ -328,7 +328,7 @@ def _prompt_before_start(agent_path: str, runner, model: str | None = None):
|
||||
if result.success:
|
||||
# Reload runner with updated credentials
|
||||
try:
|
||||
runner = AgentRunner.load(agent_path, model=model)
|
||||
runner = AgentLoader.load(agent_path, model=model)
|
||||
except Exception as e:
|
||||
print(f"Error reloading agent: {e}")
|
||||
return None
|
||||
@@ -342,7 +342,7 @@ def cmd_run(args: argparse.Namespace) -> int:
|
||||
|
||||
from framework.credentials.models import CredentialError
|
||||
from framework.observability import configure_logging
|
||||
from framework.runner import AgentRunner
|
||||
from framework.loader import AgentLoader
|
||||
|
||||
# Set logging level (quiet by default for cleaner output)
|
||||
if args.quiet:
|
||||
@@ -390,7 +390,7 @@ def cmd_run(args: argparse.Namespace) -> int:
|
||||
# Standard execution
|
||||
# AgentRunner handles credential setup interactively when stdin is a TTY.
|
||||
try:
|
||||
runner = AgentRunner.load(
|
||||
runner = AgentLoader.load(
|
||||
args.agent_path,
|
||||
model=args.model,
|
||||
)
|
||||
@@ -528,10 +528,10 @@ def cmd_run(args: argparse.Namespace) -> int:
|
||||
def cmd_info(args: argparse.Namespace) -> int:
|
||||
"""Show agent information."""
|
||||
from framework.credentials.models import CredentialError
|
||||
from framework.runner import AgentRunner
|
||||
from framework.loader import AgentLoader
|
||||
|
||||
try:
|
||||
runner = AgentRunner.load(args.agent_path)
|
||||
runner = AgentLoader.load(args.agent_path)
|
||||
except CredentialError as e:
|
||||
print(f"\n{e}", file=sys.stderr)
|
||||
return 1
|
||||
@@ -595,10 +595,10 @@ def cmd_info(args: argparse.Namespace) -> int:
|
||||
def cmd_validate(args: argparse.Namespace) -> int:
|
||||
"""Validate an exported agent."""
|
||||
from framework.credentials.models import CredentialError
|
||||
from framework.runner import AgentRunner
|
||||
from framework.loader import AgentLoader
|
||||
|
||||
try:
|
||||
runner = AgentRunner.load(args.agent_path)
|
||||
runner = AgentLoader.load(args.agent_path)
|
||||
except CredentialError as e:
|
||||
print(f"\n{e}", file=sys.stderr)
|
||||
return 1
|
||||
@@ -632,7 +632,7 @@ def cmd_validate(args: argparse.Namespace) -> int:
|
||||
|
||||
def cmd_list(args: argparse.Namespace) -> int:
|
||||
"""List available agents."""
|
||||
from framework.runner import AgentRunner
|
||||
from framework.loader import AgentLoader
|
||||
|
||||
directory = Path(args.directory)
|
||||
if not directory.exists():
|
||||
@@ -644,7 +644,7 @@ def cmd_list(args: argparse.Namespace) -> int:
|
||||
for path in directory.iterdir():
|
||||
if _is_valid_agent_dir(path):
|
||||
try:
|
||||
runner = AgentRunner.load(path)
|
||||
runner = AgentLoader.load(path)
|
||||
info = runner.info()
|
||||
agents.append(
|
||||
{
|
||||
@@ -686,7 +686,7 @@ def cmd_list(args: argparse.Namespace) -> int:
|
||||
|
||||
def _interactive_approval(request):
|
||||
"""Interactive approval callback for HITL mode."""
|
||||
from framework.graph import ApprovalDecision, ApprovalResult
|
||||
from framework.orchestrator import ApprovalDecision, ApprovalResult
|
||||
|
||||
print()
|
||||
print("=" * 60)
|
||||
@@ -775,7 +775,7 @@ def cmd_shell(args: argparse.Namespace) -> int:
|
||||
|
||||
from framework.credentials.models import CredentialError
|
||||
from framework.observability import configure_logging
|
||||
from framework.runner import AgentRunner
|
||||
from framework.loader import AgentLoader
|
||||
|
||||
configure_logging(level="INFO")
|
||||
|
||||
@@ -789,7 +789,7 @@ def cmd_shell(args: argparse.Namespace) -> int:
|
||||
return 1
|
||||
|
||||
try:
|
||||
runner = AgentRunner.load(agent_path)
|
||||
runner = AgentLoader.load(agent_path)
|
||||
except CredentialError as e:
|
||||
print(f"\n{e}", file=sys.stderr)
|
||||
return 1
|
||||
@@ -1004,17 +1004,35 @@ def _get_framework_agents_dir() -> Path:
|
||||
|
||||
|
||||
def _extract_python_agent_metadata(agent_path: Path) -> tuple[str, str]:
|
||||
"""Extract name and description from a Python-based agent's config.py.
|
||||
"""Extract name and description from an agent directory.
|
||||
|
||||
Uses AST parsing to safely extract values without executing code.
|
||||
Checks agent.json first (declarative), then falls back to config.py
|
||||
(legacy Python). Uses AST parsing for Python to avoid executing code.
|
||||
Returns (name, description) tuple, with fallbacks if parsing fails.
|
||||
"""
|
||||
import ast
|
||||
|
||||
config_path = agent_path / "config.py"
|
||||
fallback_name = agent_path.name.replace("_", " ").title()
|
||||
fallback_desc = "(Python-based agent)"
|
||||
|
||||
# Declarative agent: read from agent.json
|
||||
agent_json = agent_path / "agent.json"
|
||||
if agent_json.exists():
|
||||
try:
|
||||
import json
|
||||
|
||||
data = json.loads(agent_json.read_text(encoding="utf-8"))
|
||||
if isinstance(data, dict):
|
||||
name = data.get("name", fallback_name)
|
||||
# Convert kebab-case to Title Case for display
|
||||
if "-" in name and " " not in name:
|
||||
name = name.replace("-", " ").title()
|
||||
desc = data.get("description", fallback_desc)
|
||||
return name, desc
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
config_path = agent_path / "config.py"
|
||||
if not config_path.exists():
|
||||
return fallback_name, fallback_desc
|
||||
|
||||
@@ -1083,7 +1101,7 @@ def _is_valid_agent_dir(path: Path) -> bool:
|
||||
|
||||
|
||||
def _has_agents(directory: Path) -> bool:
|
||||
"""Check if a directory contains any valid agents (folders with agent.json or agent.py)."""
|
||||
"""Check if a directory contains any valid agents."""
|
||||
if not directory.exists():
|
||||
return False
|
||||
return any(_is_valid_agent_dir(p) for p in directory.iterdir())
|
||||
@@ -1253,6 +1271,7 @@ def _select_agent(agents_dir: Path) -> str | None:
|
||||
print()
|
||||
return None
|
||||
|
||||
|
||||
def cmd_setup_credentials(args: argparse.Namespace) -> int:
|
||||
"""Interactive credential setup for an agent."""
|
||||
from framework.credentials.setup import CredentialSetupSession
|
||||
@@ -1275,10 +1294,51 @@ def cmd_setup_credentials(args: argparse.Namespace) -> int:
|
||||
return 0 if result.success else 1
|
||||
|
||||
|
||||
def _find_chrome_bin() -> str | None:
|
||||
"""Return the path to a Chrome/Chromium binary, or None if not found."""
|
||||
import shutil
|
||||
|
||||
for candidate in (
|
||||
"google-chrome",
|
||||
"google-chrome-stable",
|
||||
"chromium",
|
||||
"chromium-browser",
|
||||
"microsoft-edge",
|
||||
"microsoft-edge-stable",
|
||||
):
|
||||
if shutil.which(candidate):
|
||||
return candidate
|
||||
|
||||
mac_paths = [
|
||||
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
|
||||
Path.home() / "Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
|
||||
"/Applications/Microsoft Edge.app/Contents/MacOS/Microsoft Edge",
|
||||
]
|
||||
for p in mac_paths:
|
||||
if Path(p).exists():
|
||||
return str(p)
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def _open_browser(url: str) -> None:
|
||||
"""Open URL in the default browser (best-effort, non-blocking)."""
|
||||
"""Open URL in the browser (best-effort, non-blocking)."""
|
||||
import subprocess
|
||||
|
||||
chrome = _find_chrome_bin()
|
||||
|
||||
try:
|
||||
if chrome:
|
||||
subprocess.Popen(
|
||||
[chrome, url],
|
||||
stdout=subprocess.DEVNULL,
|
||||
stderr=subprocess.DEVNULL,
|
||||
)
|
||||
return
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Fallback: open with system default browser
|
||||
try:
|
||||
if sys.platform == "darwin":
|
||||
subprocess.Popen(
|
||||
@@ -1304,6 +1364,21 @@ def _open_browser(url: str) -> None:
|
||||
pass # Best-effort — don't crash if browser can't open
|
||||
|
||||
|
||||
def _ping_hive_gateway_availability(from_source: str) -> None:
|
||||
"""Ping Hive gateway availability for lightweight reachability logging."""
|
||||
from urllib import error, parse, request
|
||||
|
||||
base_url = "https://api.adenhq.com/v1/gateway/availability"
|
||||
query = parse.urlencode({"from": from_source})
|
||||
url = f"{base_url}?{query}"
|
||||
|
||||
try:
|
||||
with request.urlopen(url, timeout=5) as response:
|
||||
response.read()
|
||||
except (error.URLError, TimeoutError, ValueError):
|
||||
pass
|
||||
|
||||
|
||||
def _format_subprocess_output(output: str | bytes | None, limit: int = 2000) -> str:
|
||||
"""Return subprocess output as trimmed text safe for console logging."""
|
||||
if not output:
|
||||
@@ -1472,5 +1547,6 @@ def cmd_serve(args: argparse.Namespace) -> int:
|
||||
|
||||
def cmd_open(args: argparse.Namespace) -> int:
|
||||
"""Start the HTTP API server and open the dashboard in the browser."""
|
||||
_ping_hive_gateway_availability("hive-open")
|
||||
args.open = True
|
||||
return cmd_serve(args)
|
||||
@@ -14,7 +14,7 @@ from typing import Any, Literal
|
||||
|
||||
import httpx
|
||||
|
||||
from framework.runner.mcp_errors import MCPToolNotFoundError
|
||||
from framework.loader.mcp_errors import MCPToolNotFoundError
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -5,7 +5,7 @@ import threading
|
||||
|
||||
import httpx
|
||||
|
||||
from framework.runner.mcp_client import MCPClient, MCPServerConfig
|
||||
from framework.loader.mcp_client import MCPClient, MCPServerConfig
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -14,9 +14,9 @@ from typing import Any, Literal
|
||||
|
||||
import httpx
|
||||
|
||||
from framework.runner.mcp_client import MCPClient, MCPServerConfig
|
||||
from framework.runner.mcp_connection_manager import MCPConnectionManager
|
||||
from framework.runner.mcp_errors import (
|
||||
from framework.loader.mcp_client import MCPClient, MCPServerConfig
|
||||
from framework.loader.mcp_connection_manager import MCPConnectionManager
|
||||
from framework.loader.mcp_errors import (
|
||||
MCPError,
|
||||
MCPErrorCode,
|
||||
MCPInstallError,
|
||||
+1
-1
@@ -28,7 +28,7 @@ from typing import Any
|
||||
|
||||
def _get_registry(base_path: Path | None = None):
|
||||
"""Initialize and return an MCPRegistry instance."""
|
||||
from framework.runner.mcp_registry import MCPRegistry
|
||||
from framework.loader.mcp_registry import MCPRegistry
|
||||
|
||||
registry = MCPRegistry(base_path=base_path)
|
||||
registry.initialize()
|
||||
+2
-2
@@ -11,8 +11,8 @@ from dataclasses import dataclass, field
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from framework.graph.edge import GraphSpec
    from framework.graph.node import NodeSpec
    from framework.orchestrator.edge import GraphSpec
    from framework.orchestrator.node import NodeSpec

logger = logging.getLogger(__name__)

@@ -48,7 +48,7 @@ class ToolRegistry:
    # Framework-internal context keys injected into tool calls.
    # Stripped from LLM-facing schemas (the LLM doesn't know these values)
    # and auto-injected at call time for tools that accept them.
    CONTEXT_PARAMS = frozenset({"agent_id", "data_dir"})
    CONTEXT_PARAMS = frozenset({"agent_id", "data_dir", "profile"})

    # Credential directory used for change detection
    _CREDENTIAL_DIR = Path("~/.hive/credentials/credentials").expanduser()
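To make the schema-stripping and call-time injection concrete, here is a minimal sketch; `strip_context_params` and `inject_context` are invented names for illustration, not the registry's real methods:

```python
CONTEXT_PARAMS = frozenset({"agent_id", "data_dir", "profile"})


def strip_context_params(schema: dict) -> dict:
    """Drop framework-internal keys so the LLM-facing schema never exposes them."""
    props = {k: v for k, v in schema.get("properties", {}).items() if k not in CONTEXT_PARAMS}
    required = [k for k in schema.get("required", []) if k not in CONTEXT_PARAMS]
    return {**schema, "properties": props, "required": required}


def inject_context(tool_args: dict, context: dict) -> dict:
    """Merge framework-provided values back into the arguments at call time."""
    return {**tool_args, **{k: v for k, v in context.items() if k in CONTEXT_PARAMS}}
```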
@@ -262,15 +262,21 @@ class ToolRegistry:
|
||||
is_error=False,
|
||||
)
|
||||
|
||||
registry_ref = self
|
||||
|
||||
def executor(tool_use: ToolUse) -> ToolResult:
|
||||
if tool_use.name not in self._tools:
|
||||
# Check if credential files changed (lightweight dir listing).
|
||||
# If new OAuth tokens appeared, restarts MCP servers to pick them up.
|
||||
registry_ref.resync_mcp_servers_if_needed()
|
||||
|
||||
if tool_use.name not in registry_ref._tools:
|
||||
return ToolResult(
|
||||
tool_use_id=tool_use.id,
|
||||
content=json.dumps({"error": f"Unknown tool: {tool_use.name}"}),
|
||||
is_error=True,
|
||||
)
|
||||
|
||||
registered = self._tools[tool_use.name]
|
||||
registered = registry_ref._tools[tool_use.name]
|
||||
try:
|
||||
result = registered.executor(tool_use.input)
|
||||
|
||||
@@ -635,8 +641,8 @@ class ToolRegistry:
|
||||
Number of tools registered from this server
|
||||
"""
|
||||
try:
|
||||
from framework.runner.mcp_client import MCPClient, MCPServerConfig
|
||||
from framework.runner.mcp_connection_manager import MCPConnectionManager
|
||||
from framework.loader.mcp_client import MCPClient, MCPServerConfig
|
||||
from framework.loader.mcp_connection_manager import MCPConnectionManager
|
||||
|
||||
# Build config object
|
||||
config = MCPServerConfig(
|
||||
@@ -883,7 +889,7 @@ class ToolRegistry:
|
||||
"""Re-run ``mcp_registry.json`` resolution and register servers (post-resync)."""
|
||||
if self._mcp_registry_agent_path is None:
|
||||
return
|
||||
from framework.runner.mcp_registry import MCPRegistry
|
||||
from framework.loader.mcp_registry import MCPRegistry
|
||||
|
||||
try:
|
||||
reg = MCPRegistry()
|
||||
@@ -922,6 +928,11 @@ class ToolRegistry:
|
||||
clients and re-loads them so the new subprocess picks up the fresh
|
||||
credentials.
|
||||
|
||||
Note: Individual credential TTL/refresh is handled by the MCP server
|
||||
process internally -- it resolves tokens from the credential store
|
||||
on every tool call, not at startup. This method only handles the case
|
||||
where entirely new credential files appear.
|
||||
|
||||
Returns True if a resync was performed, False otherwise.
|
||||
"""
|
||||
if not self._mcp_clients or self._mcp_config_path is None:
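The change detection this resync relies on is the "lightweight dir listing" mentioned in the executor above. As a sketch under the assumption that only the set of credential file names matters (this is not the registry's actual logic):

```python
from pathlib import Path

_CREDENTIAL_DIR = Path("~/.hive/credentials/credentials").expanduser()


def credential_snapshot(cred_dir: Path = _CREDENTIAL_DIR) -> frozenset[str]:
    """Cheap fingerprint of the credential store: the file names currently present."""
    if not cred_dir.is_dir():
        return frozenset()
    return frozenset(p.name for p in cred_dir.iterdir() if p.is_file())


before = credential_snapshot()
# ... a tool call happens here ...
if credential_snapshot() - before:
    print("new credential files appeared -> restart MCP server subprocesses")
```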
@@ -975,7 +986,7 @@ class ToolRegistry:
|
||||
server_name = self._mcp_client_servers.get(client_id, client.config.name)
|
||||
try:
|
||||
if client_id in self._mcp_managed_clients:
|
||||
from framework.runner.mcp_connection_manager import MCPConnectionManager
|
||||
from framework.loader.mcp_connection_manager import MCPConnectionManager
|
||||
|
||||
MCPConnectionManager.get_instance().release(server_name)
|
||||
else:
|
||||
@@ -0,0 +1,27 @@
"""Orchestrator layer -- how agents are composed via graphs.

Lazy imports to avoid circular dependencies with graph/event_loop/*.
"""


def __getattr__(name: str):
    if name in ("GraphContext",):
        from framework.orchestrator.context import GraphContext
        return GraphContext
    if name in ("DEFAULT_MAX_TOKENS", "EdgeCondition", "EdgeSpec", "GraphSpec"):
        from framework.orchestrator import edge as _e
        return getattr(_e, name)
    if name in ("Orchestrator", "ExecutionResult"):
        from framework.orchestrator import orchestrator as _o
        return getattr(_o, name)
    if name in ("Constraint", "Goal", "GoalStatus", "SuccessCriterion"):
        from framework.orchestrator import goal as _g
        return getattr(_g, name)
    if name in ("DataBuffer", "NodeContext", "NodeProtocol", "NodeResult", "NodeSpec"):
        from framework.orchestrator import node as _n
        return getattr(_n, name)
    if name in ("NodeWorker", "Activation", "FanOutTag", "FanOutTracker",
                "WorkerCompletion", "WorkerLifecycle"):
        from framework.orchestrator import node_worker as _nw
        return getattr(_nw, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
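This module-level `__getattr__` (PEP 562) keeps the package import cheap and breaks the circular-import knot described in the docstring: submodules load only when first referenced. A small usage sketch, assuming the package layout this diff introduces:

```python
from framework import orchestrator

# First attribute access triggers __getattr__, which lazily imports
# framework.orchestrator.edge and hands back the class.
GraphSpec = orchestrator.GraphSpec

# Unknown names still raise, exactly as a normal module attribute would.
try:
    orchestrator.NotAThing
except AttributeError as exc:
    print(exc)
```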
@@ -0,0 +1,182 @@
"""
Client I/O gateway for graph nodes.

Provides the bridge between node code and external clients:
- ActiveNodeClientIO: for client_facing=True nodes (streams output, accepts input)
- InertNodeClientIO: for client_facing=False nodes (logs internally, redirects input)
- ClientIOGateway: factory that creates the right variant per node
"""

from __future__ import annotations

import asyncio
import logging
from abc import ABC, abstractmethod
from collections.abc import AsyncIterator
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from framework.host.event_bus import EventBus

logger = logging.getLogger(__name__)


class NodeClientIO(ABC):
    """Abstract base for node client I/O."""

    @abstractmethod
    async def emit_output(self, content: str, is_final: bool = False) -> None:
        """Emit output content. If is_final=True, signal end of stream."""

    @abstractmethod
    async def request_input(self, prompt: str = "", timeout: float | None = None) -> str:
        """Request input. Behavior depends on whether the node is client-facing."""
|
||||
|
||||
|
||||
class ActiveNodeClientIO(NodeClientIO):
|
||||
"""
|
||||
Client I/O for client_facing=True nodes.
|
||||
|
||||
- emit_output() queues content and publishes CLIENT_OUTPUT_DELTA.
|
||||
- request_input() publishes CLIENT_INPUT_REQUESTED, then awaits provide_input().
|
||||
- output_stream() yields queued content until the final sentinel.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
node_id: str,
|
||||
event_bus: EventBus | None = None,
|
||||
execution_id: str = "",
|
||||
) -> None:
|
||||
self.node_id = node_id
|
||||
self._event_bus = event_bus
|
||||
self._execution_id = execution_id
|
||||
|
||||
self._output_queue: asyncio.Queue[str | None] = asyncio.Queue()
|
||||
self._output_snapshot = ""
|
||||
|
||||
self._input_event: asyncio.Event | None = None
|
||||
self._input_result: str | None = None
|
||||
|
||||
async def emit_output(self, content: str, is_final: bool = False) -> None:
|
||||
# Strip leading whitespace from first output chunk to avoid leading spaces
|
||||
# (some LLMs like Kimi output leading whitespace before text)
|
||||
if not self._output_snapshot and content:
|
||||
content = content.lstrip()
|
||||
if not content: # Content was all whitespace
|
||||
return
|
||||
|
||||
self._output_snapshot += content
|
||||
await self._output_queue.put(content)
|
||||
|
||||
if self._event_bus is not None:
|
||||
await self._event_bus.emit_client_output_delta(
|
||||
stream_id=self.node_id,
|
||||
node_id=self.node_id,
|
||||
content=content,
|
||||
snapshot=self._output_snapshot,
|
||||
execution_id=self._execution_id or None,
|
||||
)
|
||||
|
||||
if is_final:
|
||||
await self._output_queue.put(None)
|
||||
|
||||
async def request_input(self, prompt: str = "", timeout: float | None = None) -> str:
|
||||
if self._input_event is not None:
|
||||
raise RuntimeError("request_input already pending for this node")
|
||||
|
||||
self._input_event = asyncio.Event()
|
||||
self._input_result = None
|
||||
|
||||
if self._event_bus is not None:
|
||||
await self._event_bus.emit_client_input_requested(
|
||||
stream_id=self.node_id,
|
||||
node_id=self.node_id,
|
||||
prompt=prompt,
|
||||
execution_id=self._execution_id or None,
|
||||
)
|
||||
|
||||
try:
|
||||
if timeout is not None:
|
||||
await asyncio.wait_for(self._input_event.wait(), timeout=timeout)
|
||||
else:
|
||||
await self._input_event.wait()
|
||||
finally:
|
||||
self._input_event = None
|
||||
|
||||
if self._input_result is None:
|
||||
raise RuntimeError("input event was set but no input was provided")
|
||||
result = self._input_result
|
||||
self._input_result = None
|
||||
return result
|
||||
|
||||
async def provide_input(self, content: str) -> None:
|
||||
"""Called externally to fulfill a pending request_input()."""
|
||||
if self._input_event is None:
|
||||
raise RuntimeError("no pending request_input to fulfill")
|
||||
self._input_result = content
|
||||
self._input_event.set()
|
||||
|
||||
async def output_stream(self) -> AsyncIterator[str]:
|
||||
"""Async iterator that yields output chunks until the final sentinel."""
|
||||
while True:
|
||||
chunk = await self._output_queue.get()
|
||||
if chunk is None:
|
||||
break
|
||||
yield chunk
|
||||
|
||||
|
||||
class InertNodeClientIO(NodeClientIO):
|
||||
"""
|
||||
Client I/O for client_facing=False nodes.
|
||||
|
||||
- emit_output() publishes NODE_INTERNAL_OUTPUT (content is not discarded).
|
||||
- request_input() publishes NODE_INPUT_BLOCKED and returns a redirect string.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
node_id: str,
|
||||
event_bus: EventBus | None = None,
|
||||
) -> None:
|
||||
self.node_id = node_id
|
||||
self._event_bus = event_bus
|
||||
|
||||
async def emit_output(self, content: str, is_final: bool = False) -> None:
|
||||
if self._event_bus is not None:
|
||||
await self._event_bus.emit_node_internal_output(
|
||||
stream_id=self.node_id,
|
||||
node_id=self.node_id,
|
||||
content=content,
|
||||
)
|
||||
|
||||
async def request_input(self, prompt: str = "", timeout: float | None = None) -> str:
|
||||
if self._event_bus is not None:
|
||||
await self._event_bus.emit_node_input_blocked(
|
||||
stream_id=self.node_id,
|
||||
node_id=self.node_id,
|
||||
prompt=prompt,
|
||||
)
|
||||
return (
|
||||
"You are an internal processing node. There is no user to interact with."
|
||||
" Work with the data provided in your inputs to complete your task."
|
||||
)
|
||||
|
||||
|
||||
class ClientIOGateway:
    """Factory that creates the appropriate NodeClientIO for a node."""

    def __init__(self, event_bus: EventBus | None = None) -> None:
        self._event_bus = event_bus

    def create_io(self, node_id: str, client_facing: bool, execution_id: str = "") -> NodeClientIO:
        if client_facing:
            return ActiveNodeClientIO(
                node_id=node_id,
                event_bus=self._event_bus,
                execution_id=execution_id,
            )
        return InertNodeClientIO(
            node_id=node_id,
            event_bus=self._event_bus,
        )
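A minimal usage sketch of the gateway, assuming the classes above are importable and noting that the event bus is optional (queueing works without it): a client-facing node streams deltas to a consumer, while an internal node's input request is redirected.

```python
import asyncio


async def demo() -> None:
    gateway = ClientIOGateway(event_bus=None)  # no event bus: output queueing still works

    io = gateway.create_io(node_id="chat", client_facing=True)

    async def consume() -> None:
        async for chunk in io.output_stream():  # ends when the final sentinel arrives
            print(chunk, end="")

    consumer = asyncio.create_task(consume())
    await io.emit_output("Hello, ")
    await io.emit_output("world!", is_final=True)
    await consumer

    # Internal nodes never reach a user; request_input returns a redirect message.
    inert = gateway.create_io(node_id="etl", client_facing=False)
    print(await inert.request_input("need data?"))


asyncio.run(demo())
```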
@@ -10,13 +10,32 @@ This module centralizes:
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import logging
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Any
|
||||
|
||||
from framework.graph.edge import GraphSpec
|
||||
from framework.graph.goal import Goal
|
||||
from framework.graph.node import DataBuffer, NodeContext, NodeProtocol, NodeSpec
|
||||
from framework.runtime.core import Runtime
|
||||
from framework.orchestrator.edge import GraphSpec
|
||||
from framework.orchestrator.goal import Goal
|
||||
from framework.orchestrator.node import DataBuffer, NodeContext, NodeProtocol, NodeSpec
|
||||
from framework.tracker.decision_tracker import DecisionTracker
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Tool names that are ALWAYS available to every node, regardless of
|
||||
# the node's explicit tool policy. These are framework essentials that
|
||||
# agents need unconditionally.
|
||||
_ALWAYS_AVAILABLE_TOOLS: frozenset[str] = frozenset(
|
||||
{
|
||||
"read_file",
|
||||
"write_file",
|
||||
"edit_file",
|
||||
"list_directory",
|
||||
"search_files",
|
||||
"hashline_edit",
|
||||
"set_output",
|
||||
"escalate",
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
@dataclass
|
||||
@@ -26,7 +45,7 @@ class GraphContext:
|
||||
graph: GraphSpec
|
||||
goal: Goal
|
||||
buffer: DataBuffer
|
||||
runtime: Runtime
|
||||
runtime: DecisionTracker
|
||||
llm: Any # LLMProvider
|
||||
tools: list[Any] # list[Tool]
|
||||
tool_executor: Any # Callable
|
||||
@@ -67,12 +86,6 @@ class GraphContext:
|
||||
# Retry tracking: worker_id → retry_count (for execution quality assessment)
|
||||
retry_counts: dict[str, int] = field(default_factory=dict)
|
||||
nodes_with_retries: set[str] = field(default_factory=set)
|
||||
# Colony memory reflection at node handoff
|
||||
colony_memory_dir: Any = None # Path | None
|
||||
worker_sessions_dir: Any = None # Path | None
|
||||
colony_recall_cache: dict[str, str] = field(default_factory=dict)
|
||||
colony_reflect_llm: Any = None # LLMProvider for reflection
|
||||
_colony_reflect_lock: asyncio.Lock = field(default_factory=asyncio.Lock)
|
||||
|
||||
|
||||
def build_scoped_buffer(buffer: DataBuffer, node_spec: NodeSpec) -> DataBuffer:
|
||||
@@ -112,7 +125,7 @@ def build_node_accounts_prompt(
|
||||
|
||||
resolved = accounts_prompt
|
||||
if accounts_data and tool_provider_map:
|
||||
from framework.graph.prompting import build_accounts_prompt
|
||||
from framework.orchestrator.prompting import build_accounts_prompt
|
||||
|
||||
filtered = build_accounts_prompt(
|
||||
accounts_data,
|
||||
@@ -131,15 +144,41 @@ def _resolve_available_tools(
    tools: list[Any],
    override_tools: list[Any] | None,
) -> list[Any]:
    """Select tools available to the current node."""
    """Select tools available to the current node.

    Respects ``node_spec.tool_access_policy``:
    - ``"explicit"`` -- only tools whose name appears in ``node_spec.tools``
      PLUS framework-default tools (read_file, set_output, etc.).
      If the list is empty, only defaults are given.
    - ``"none"`` -- only framework-default tools (read_file, set_output, etc.).

    Framework-default tools (``_ALWAYS_AVAILABLE_TOOLS``) are always included
    regardless of policy — agents need file I/O and output/escalate to function.
    """

    if override_tools is not None:
        return list(override_tools)
        # Merge override with always-available, dedup by name
        names = {t.name for t in override_tools}
        extra = [t for t in tools if t.name in _ALWAYS_AVAILABLE_TOOLS and t.name not in names]
        return list(override_tools) + extra

    policy = getattr(node_spec, "tool_access_policy", "explicit")

    # Always include framework-default tools
    always_tools = [t for t in tools if t.name in _ALWAYS_AVAILABLE_TOOLS]

    if policy == "none":
        return always_tools

    # "explicit" (default): declared tools + framework defaults
    if not node_spec.tools:
        return []
        return always_tools

    return [tool for tool in tools if tool.name in node_spec.tools]
    declared = set(node_spec.tools)
    declared_tools = [
        t for t in tools if t.name in declared and t.name not in _ALWAYS_AVAILABLE_TOOLS
    ]
    return always_tools + declared_tools
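As a worked illustration of the resolution rules above; this is a sketch with made-up tool names, a tiny stand-in for the real Tool type, and the assumption that the function's first parameter (elided by the hunk) is the node spec:

```python
from types import SimpleNamespace

tools = [SimpleNamespace(name=n) for n in ("read_file", "set_output", "web_search", "send_email")]

# policy="explicit", tools=["web_search"]  ->  framework defaults + web_search
# policy="explicit", tools=[]              ->  framework defaults only
# policy="none"                            ->  framework defaults only
node = SimpleNamespace(tool_access_policy="explicit", tools=["web_search"])
resolved = _resolve_available_tools(node, tools, override_tools=None)
print([t.name for t in resolved])  # ['read_file', 'set_output', 'web_search']
```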


def _derive_input_data(buffer: DataBuffer, input_keys: list[str]) -> dict[str, Any]:
@@ -155,7 +194,7 @@ def _derive_input_data(buffer: DataBuffer, input_keys: list[str]) -> dict[str, A
|
||||
|
||||
def build_node_context(
|
||||
*,
|
||||
runtime: Runtime,
|
||||
runtime: DecisionTracker,
|
||||
node_spec: NodeSpec,
|
||||
buffer: DataBuffer,
|
||||
goal: Goal,
|
||||
@@ -240,9 +279,6 @@ def build_node_context(
|
||||
execution_id=execution_id,
|
||||
run_id=run_id,
|
||||
stream_id=stream_id,
|
||||
node_registry=node_registry or {},
|
||||
all_tools=list(all_tools or tools),
|
||||
shared_node_registry=shared_node_registry or {},
|
||||
dynamic_tools_provider=dynamic_tools_provider,
|
||||
dynamic_prompt_provider=dynamic_prompt_provider,
|
||||
dynamic_memory_provider=dynamic_memory_provider,
|
||||
@@ -276,7 +312,11 @@ def build_node_context_from_graph_context(
|
||||
gc = graph_context
|
||||
resolved_override_tools = override_tools
|
||||
if resolved_override_tools is None and gc.is_continuous and gc.cumulative_tools:
|
||||
resolved_override_tools = list(gc.cumulative_tools)
|
||||
if node_spec.tool_access_policy == "explicit" and node_spec.tools:
|
||||
declared = set(node_spec.tools) | _ALWAYS_AVAILABLE_TOOLS
|
||||
resolved_override_tools = [t for t in gc.cumulative_tools if t.name in declared]
|
||||
else:
|
||||
resolved_override_tools = list(gc.cumulative_tools)
|
||||
|
||||
resolved_inherited_conversation = inherited_conversation
|
||||
if resolved_inherited_conversation is None and gc.is_continuous:
|
||||
@@ -307,14 +347,13 @@ def build_node_context_from_graph_context(
|
||||
accounts_data=gc.accounts_data,
|
||||
tool_provider_map=gc.tool_provider_map,
|
||||
fallback_to_default_accounts_prompt=fallback_to_default_accounts_prompt,
|
||||
identity_prompt=identity_prompt if identity_prompt is not None else getattr(gc.graph, "identity_prompt", "") or "",
|
||||
identity_prompt=identity_prompt
|
||||
if identity_prompt is not None
|
||||
else getattr(gc.graph, "identity_prompt", "") or "",
|
||||
narrative=narrative,
|
||||
execution_id=gc.execution_id,
|
||||
run_id=gc.run_id,
|
||||
stream_id=gc.stream_id,
|
||||
node_registry=node_registry or gc.node_spec_registry,
|
||||
all_tools=gc.tools,
|
||||
shared_node_registry=gc.node_registry,
|
||||
dynamic_tools_provider=gc.dynamic_tools_provider,
|
||||
dynamic_prompt_provider=gc.dynamic_prompt_provider,
|
||||
dynamic_memory_provider=gc.dynamic_memory_provider,
|
||||
+2
-2
@@ -6,10 +6,10 @@ import logging
|
||||
from dataclasses import dataclass
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
from framework.graph.conversation import _try_extract_key
|
||||
from framework.agent_loop.conversation import _try_extract_key
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.graph.conversation import NodeConversation
|
||||
from framework.agent_loop.conversation import NodeConversation
|
||||
from framework.llm.provider import LLMProvider
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
+1
-1
@@ -15,7 +15,7 @@ import logging
|
||||
from dataclasses import dataclass
|
||||
from typing import Any
|
||||
|
||||
from framework.graph.conversation import NodeConversation
|
||||
from framework.agent_loop.conversation import NodeConversation
|
||||
from framework.llm.provider import LLMProvider
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -29,7 +29,7 @@ from typing import Any
|
||||
|
||||
from pydantic import BaseModel, Field, model_validator
|
||||
|
||||
from framework.graph.safe_eval import safe_eval
|
||||
from framework.orchestrator.safe_eval import safe_eval
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -302,6 +302,7 @@ Respond with ONLY a JSON object:
|
||||
|
||||
return result
|
||||
|
||||
|
||||
class GraphSpec(BaseModel):
|
||||
"""
|
||||
Complete specification of an agent graph.
|
||||
@@ -537,13 +538,6 @@ class GraphSpec(BaseModel):
|
||||
for edge in self.get_outgoing_edges(current):
|
||||
to_visit.append(edge.target)
|
||||
|
||||
# Also mark sub-agents as reachable (they're invoked via delegate_to_sub_agent, not edges)
|
||||
for node in self.nodes:
|
||||
if node.id in reachable:
|
||||
sub_agents = getattr(node, "sub_agents", []) or []
|
||||
for sub_agent_id in sub_agents:
|
||||
reachable.add(sub_agent_id)
|
||||
|
||||
for node in self.nodes:
|
||||
if node.id not in reachable:
|
||||
# Skip if node is a pause node or entry point target
|
||||
@@ -582,48 +576,4 @@ class GraphSpec(BaseModel):
|
||||
else:
|
||||
seen_keys[key] = node_id
|
||||
|
||||
# GCU nodes must only be used as subagents
|
||||
gcu_node_ids = {n.id for n in self.nodes if n.node_type == "gcu"}
|
||||
if gcu_node_ids:
|
||||
# GCU nodes must not be entry nodes
|
||||
if self.entry_node in gcu_node_ids:
|
||||
errors.append(
|
||||
f"GCU node '{self.entry_node}' is used as entry node. "
|
||||
"GCU nodes must only be used as subagents via delegate_to_sub_agent()."
|
||||
)
|
||||
|
||||
# GCU nodes must not be terminal nodes
|
||||
for term in self.terminal_nodes:
|
||||
if term in gcu_node_ids:
|
||||
errors.append(
|
||||
f"GCU node '{term}' is used as terminal node. "
|
||||
"GCU nodes must only be used as subagents."
|
||||
)
|
||||
|
||||
# GCU nodes must not be connected via edges
|
||||
for edge in self.edges:
|
||||
if edge.source in gcu_node_ids:
|
||||
errors.append(
|
||||
f"GCU node '{edge.source}' is used as edge source (edge '{edge.id}'). "
|
||||
"GCU nodes must only be used as subagents, not connected via edges."
|
||||
)
|
||||
if edge.target in gcu_node_ids:
|
||||
errors.append(
|
||||
f"GCU node '{edge.target}' is used as edge target (edge '{edge.id}'). "
|
||||
"GCU nodes must only be used as subagents, not connected via edges."
|
||||
)
|
||||
|
||||
# GCU nodes must be referenced in at least one parent's sub_agents
|
||||
referenced_subagents = set()
|
||||
for node in self.nodes:
|
||||
for sa_id in node.sub_agents or []:
|
||||
referenced_subagents.add(sa_id)
|
||||
|
||||
orphaned = gcu_node_ids - referenced_subagents
|
||||
for nid in orphaned:
|
||||
errors.append(
|
||||
f"GCU node '{nid}' is not referenced in any node's sub_agents list. "
|
||||
"GCU nodes must be declared as subagents of a parent node."
|
||||
)
|
||||
|
||||
return {"errors": errors, "warnings": warnings}
|
||||
@@ -1,34 +1,14 @@
"""GCU (browser automation) node type constants.
"""Browser automation best-practices prompt.

A ``gcu`` node is an ``event_loop`` node with two automatic enhancements:
1. A canonical browser best-practices system prompt is prepended.
2. All tools from the GCU MCP server are auto-included.
This module provides ``GCU_BROWSER_SYSTEM_PROMPT`` -- a canonical set of
browser automation guidelines that can be included in any node's system
prompt that uses browser tools from the gcu-tools MCP server.

No new ``NodeProtocol`` subclass — the ``gcu`` type is purely a declarative
signal processed by the runner and executor at setup time.
Browser tools are registered via the global MCP registry (gcu-tools).
Nodes that need browser access declare ``tools: {policy: "all"}`` in their
agent.json config.
"""

# ---------------------------------------------------------------------------
# MCP server identity
# ---------------------------------------------------------------------------

GCU_SERVER_NAME = "gcu-tools"
"""Name used to identify the GCU MCP server in ``mcp_servers.json``."""

GCU_MCP_SERVER_CONFIG: dict = {
    "name": GCU_SERVER_NAME,
    "transport": "stdio",
    "command": "uv",
    "args": ["run", "python", "-m", "gcu.server", "--stdio"],
    "cwd": "../../tools",
    "description": "GCU tools for browser automation",
}
"""Default stdio config for the GCU MCP server (relative to exports/<agent>/)."""

# ---------------------------------------------------------------------------
# Browser best-practices system prompt
# ---------------------------------------------------------------------------

GCU_BROWSER_SYSTEM_PROMPT = """\
# Browser Automation Best Practices

@@ -114,6 +94,82 @@ After reading or extracting data from a tab, close it immediately.

Never accumulate tabs. Treat every tab you open as a resource you must free.

## Shadow DOM & Overlays

Some sites (LinkedIn messaging, etc.) render content inside closed shadow roots that are
invisible to regular DOM queries and `browser_snapshot` coordinates.

**Detecting shadow DOM**: `document.elementFromPoint(x, y)` returns a zero-height host element
(e.g. `#interop-outlet`) for the entire overlay area — this is normal, not a bug.
`document.body.innerText` and `document.querySelectorAll` return nothing for shadow content.
`browser_snapshot` CAN read shadow DOM text but cannot return coordinates.

**Querying into shadow DOM:**
```
browser_shadow_query("#interop-outlet >>> #msg-overlay >>> p")
```
Uses `>>>` to pierce shadow roots. Returns `rect` in CSS pixels and `physicalRect` ready for
`browser_click_coordinate` / `browser_hover_coordinate`.

**Getting physical rect for any element (including shadow DOM):**
```
browser_get_rect(selector="#interop-outlet >>> .msg-convo-wrapper", pierce_shadow=true)
```

**Manual JS traversal when selector is dynamic:**
```js
const shadow = document.getElementById('interop-outlet').shadowRoot;
const convo = shadow.querySelector('#ember37');
const rect = convo.querySelector('p').getBoundingClientRect();
// rect is in CSS pixels — multiply by DPR for physical pixels
```
Pass this as a multi-statement script to `browser_evaluate`; it wraps automatically in an IIFE.
Use `JSON.stringify(rect)` to serialize the result.

## Coordinate System

There are THREE coordinate spaces. Using the wrong one causes clicks/hovers to land in the
wrong place.

| Space | Used by | How to get |
|---|---|---|
| Physical pixels | `browser_click_coordinate` | `browser_coords` `physical_x/y` |
| CSS pixels | `getBoundingClientRect()`, `elementFromPoint` | `browser_coords` `css_x/y` |
| Screenshot pixels | What you see in the 800px image | Raw position in screenshot |

**Converting screenshot → physical**: `browser_coords(x, y)` → use `physical_x/y`.
**Converting CSS → physical**: multiply by `window.devicePixelRatio` (typically 1.6 on HiDPI).
**Never** pass raw `getBoundingClientRect()` values to `browser_hover_coordinate` without
multiplying by DPR first.

## Screenshots

Screenshot data is base64-encoded PNG. To view it:
```
run_command("echo '<base64_data>' | base64 -d > /tmp/screenshot.png")
```
Then use `read_file("/tmp/screenshot.png")` to view the image.

Always use `full_page=false` (default) unless you specifically need the full scrolled page.

## JavaScript Evaluation

`browser_evaluate` wraps your script in an IIFE automatically:
- Single expression (`document.title`) → wrapped with `return`
- Multi-statement or contains `;`/`\n` → wrapped without return (add explicit `return` yourself)
- Already an IIFE → run as-is

**Avoid**: complex closures with `return` inside `for` loops — Chrome CDP returns `null`.
**Use instead**: `Array.from(...).map(...).join(...)` chains, or build result objects and
`JSON.stringify()` them.

**For shadow DOM traversal with dynamic selectors**, write the full JS path:
```js
const s = document.getElementById('interop-outlet').shadowRoot;
const el = s.querySelector('.msg-convo-wrapper');
return JSON.stringify(el.getBoundingClientRect());
```

## Login & Auth Walls
- If you see a "Log in" or "Sign up" prompt instead of expected
  content, report the auth wall immediately — do NOT attempt to log in.
|
||||
@@ -25,7 +25,7 @@ from typing import Any
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from framework.llm.provider import LLMProvider, Tool
|
||||
from framework.runtime.core import Runtime
|
||||
from framework.tracker.decision_tracker import DecisionTracker
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -144,15 +144,19 @@ class NodeSpec(BaseModel):
|
||||
# For LLM nodes
|
||||
system_prompt: str | None = Field(default=None, description="System prompt for LLM nodes")
|
||||
tools: list[str] = Field(default_factory=list, description="Tool names this node can use")
|
||||
tool_access_policy: str = Field(
|
||||
default="explicit",
|
||||
description=(
|
||||
"Tool access policy for this node. "
|
||||
"'all' = all tools from registry, "
|
||||
"'explicit' = only tools listed in `tools` (default, recommended), "
|
||||
"'none' = no tools at all."
|
||||
),
|
||||
)
|
||||
model: str | None = Field(
|
||||
default=None, description="Specific model to use (defaults to graph default)"
|
||||
)
|
||||
|
||||
# For subagent delegation
|
||||
sub_agents: list[str] = Field(
|
||||
default_factory=list,
|
||||
description="Node IDs that can be invoked as subagents from this node",
|
||||
)
|
||||
# For function nodes
|
||||
function: str | None = Field(
|
||||
default=None, description="Function name or path for function nodes"
|
||||
@@ -459,7 +463,7 @@ class NodeContext:
|
||||
"""
|
||||
|
||||
# Core runtime
|
||||
runtime: Runtime
|
||||
runtime: DecisionTracker
|
||||
|
||||
# Node identity
|
||||
node_id: str
|
||||
@@ -526,20 +530,6 @@ class NodeContext:
|
||||
# Falls back to node_id when not set (legacy / standalone executor).
|
||||
stream_id: str = ""
|
||||
|
||||
# Subagent mode
|
||||
is_subagent_mode: bool = False # True when running as a subagent (prevents nested delegation)
|
||||
report_callback: Any = None # async (message: str, data: dict | None) -> None
|
||||
node_registry: dict[str, "NodeSpec"] = field(default_factory=dict) # For subagent lookup
|
||||
|
||||
# Full tool catalog (unfiltered) — used by _execute_subagent to resolve
|
||||
# subagent tools that aren't in the parent node's filtered available_tools.
|
||||
all_tools: list[Tool] = field(default_factory=list)
|
||||
|
||||
# Shared reference to the executor's node_registry — used by subagent
|
||||
# escalation (_EscalationReceiver) to register temporary receivers that
|
||||
# the inject_input() routing chain can find.
|
||||
shared_node_registry: dict[str, Any] = field(default_factory=dict)
|
||||
|
||||
# Dynamic tool provider — when set, EventLoopNode rebuilds the tool
|
||||
# list from this callback at the start of each iteration. Used by
|
||||
# the queen to switch between building-mode and running-mode tools.
|
||||
@@ -19,15 +19,15 @@ from dataclasses import dataclass, field
|
||||
from enum import StrEnum
|
||||
from typing import Any
|
||||
|
||||
from framework.graph.context import GraphContext, build_node_context_from_graph_context
|
||||
from framework.graph.edge import EdgeCondition, EdgeSpec
|
||||
from framework.graph.node import (
|
||||
from framework.orchestrator.context import GraphContext, build_node_context_from_graph_context
|
||||
from framework.orchestrator.edge import EdgeCondition, EdgeSpec
|
||||
from framework.orchestrator.node import (
|
||||
NodeContext,
|
||||
NodeProtocol,
|
||||
NodeResult,
|
||||
NodeSpec,
|
||||
)
|
||||
from framework.graph.validator import OutputValidator
|
||||
from framework.orchestrator.validator import OutputValidator
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -109,7 +109,7 @@ class RetryState:
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
class WorkerAgent:
|
||||
class NodeWorker:
|
||||
"""First-class autonomous worker for one node in the graph.
|
||||
|
||||
Lifecycle:
|
||||
@@ -256,7 +256,9 @@ class WorkerAgent:
|
||||
if node_spec.max_node_visits > 0 and visit_count > node_spec.max_node_visits:
|
||||
logger.info(
|
||||
"Worker %s: visit %d exceeds max_node_visits=%d, skipping",
|
||||
node_spec.id, visit_count, node_spec.max_node_visits,
|
||||
node_spec.id,
|
||||
visit_count,
|
||||
node_spec.max_node_visits,
|
||||
)
|
||||
# Build a synthetic success result from current buffer state
|
||||
existing_output: dict[str, Any] = {}
|
||||
@@ -318,8 +320,6 @@ class WorkerAgent:
|
||||
self.lifecycle = WorkerLifecycle.COMPLETED
|
||||
self._last_result = result
|
||||
self._last_activations = activations
|
||||
# Colony memory reflection — runs before downstream activation
|
||||
await self._reflect_colony_memory()
|
||||
completion = WorkerCompletion(
|
||||
worker_id=node_spec.id,
|
||||
success=True,
|
||||
@@ -340,8 +340,6 @@ class WorkerAgent:
|
||||
self.lifecycle = WorkerLifecycle.FAILED
|
||||
self._last_result = result
|
||||
self._last_activations = activations
|
||||
# Colony memory reflection — capture learnings even on failure
|
||||
await self._reflect_colony_memory()
|
||||
await self._publish_failure(result.error or "Unknown error")
|
||||
except Exception as exc:
|
||||
error = str(exc) or type(exc).__name__
|
||||
@@ -351,15 +349,14 @@ class WorkerAgent:
|
||||
self._last_activations = []
|
||||
await self._publish_failure(error)
|
||||
|
||||
async def _execute_with_retries(
|
||||
self, node_impl: NodeProtocol, ctx: NodeContext
|
||||
) -> NodeResult:
|
||||
async def _execute_with_retries(self, node_impl: NodeProtocol, ctx: NodeContext) -> NodeResult:
|
||||
"""Execute node with exponential backoff retry."""
|
||||
gc = self._gc
|
||||
# Only skip retries for actual EventLoopNode instances (they handle
|
||||
# retries internally). Custom NodeProtocol impls registered via
|
||||
# register_node should be retried by the executor.
|
||||
from framework.graph.event_loop_node import EventLoopNode as _ELN
|
||||
from framework.agent_loop.agent_loop import AgentLoop as _ELN
|
||||
|
||||
if isinstance(node_impl, _ELN):
|
||||
max_retries = 0
|
||||
else:
|
||||
@@ -382,7 +379,9 @@ class WorkerAgent:
|
||||
|
||||
# Failure
|
||||
if attempt + 1 < total_attempts:
|
||||
gc.retry_counts[self.node_spec.id] = gc.retry_counts.get(self.node_spec.id, 0) + 1
|
||||
gc.retry_counts[self.node_spec.id] = (
|
||||
gc.retry_counts.get(self.node_spec.id, 0) + 1
|
||||
)
|
||||
gc.nodes_with_retries.add(self.node_spec.id)
|
||||
delay = 1.0 * (2**attempt)
|
||||
logger.warning(
|
||||
@@ -412,7 +411,9 @@ class WorkerAgent:
|
||||
|
||||
except Exception as exc:
|
||||
if attempt + 1 < total_attempts:
|
||||
gc.retry_counts[self.node_spec.id] = gc.retry_counts.get(self.node_spec.id, 0) + 1
|
||||
gc.retry_counts[self.node_spec.id] = (
|
||||
gc.retry_counts.get(self.node_spec.id, 0) + 1
|
||||
)
|
||||
gc.nodes_with_retries.add(self.node_spec.id)
|
||||
delay = 1.0 * (2**attempt)
|
||||
logger.warning(
|
||||
@@ -439,9 +440,7 @@ class WorkerAgent:
|
||||
# Edge evaluation (source-side)
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
async def _evaluate_outgoing_edges(
|
||||
self, result: NodeResult
|
||||
) -> list[Activation]:
|
||||
async def _evaluate_outgoing_edges(self, result: NodeResult) -> list[Activation]:
|
||||
"""Evaluate outgoing edges and create activations for downstream.
|
||||
|
||||
Same logic as current _get_all_traversable_edges() plus
|
||||
@@ -573,14 +572,17 @@ class WorkerAgent:
|
||||
elif conflict_strategy == "first_wins":
|
||||
logger.debug(
|
||||
"Skipping write to '%s' (first_wins: already set by %s)",
|
||||
key, prior_worker,
|
||||
key,
|
||||
prior_worker,
|
||||
)
|
||||
continue
|
||||
else:
|
||||
# last_wins: log and overwrite
|
||||
logger.debug(
|
||||
"Key '%s' overwritten (last_wins: %s -> %s)",
|
||||
key, prior_worker, node_spec.id,
|
||||
key,
|
||||
prior_worker,
|
||||
node_spec.id,
|
||||
)
|
||||
gc._fanout_written_keys[key] = node_spec.id
|
||||
gc.buffer.write(key, value, validate=False)
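A compact way to picture the two conflict strategies applied to fan-out writes above; `resolve_write` is a hypothetical helper, not the worker's real method:

```python
def resolve_write(strategy: str, key: str, value, buffer: dict, owners: dict, worker_id: str) -> None:
    """first_wins keeps the earliest branch's value; last_wins lets later branches overwrite."""
    if key in owners and strategy == "first_wins":
        return  # skip: the key already belongs to an earlier branch
    owners[key] = worker_id
    buffer[key] = value
```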
@@ -601,10 +603,10 @@ class WorkerAgent:
|
||||
return self._node_impl
|
||||
|
||||
# Auto-create EventLoopNode
|
||||
if self.node_spec.node_type in ("event_loop", "gcu"):
|
||||
from framework.graph.event_loop_node import EventLoopNode
|
||||
from framework.graph.event_loop.types import LoopConfig
|
||||
from framework.graph.node import warn_if_deprecated_client_facing
|
||||
if self.node_spec.node_type == "event_loop":
|
||||
from framework.agent_loop.internals.types import LoopConfig
|
||||
from framework.agent_loop.agent_loop import AgentLoop
|
||||
from framework.orchestrator.node import warn_if_deprecated_client_facing
|
||||
|
||||
conv_store = None
|
||||
if gc.storage_path:
|
||||
@@ -617,7 +619,7 @@ class WorkerAgent:
|
||||
warn_if_deprecated_client_facing(self.node_spec)
|
||||
default_max_iter = 100 if self.node_spec.supports_direct_user_io() else 50
|
||||
|
||||
node = EventLoopNode(
|
||||
node = AgentLoop(
|
||||
event_bus=gc.event_bus,
|
||||
judge=None,
|
||||
config=LoopConfig(
|
||||
@@ -641,8 +643,7 @@ class WorkerAgent:
|
||||
return node
|
||||
|
||||
raise RuntimeError(
|
||||
f"No implementation for node '{self.node_spec.id}' "
|
||||
f"(type: {self.node_spec.node_type})"
|
||||
f"No implementation for node '{self.node_spec.id}' (type: {self.node_spec.node_type})"
|
||||
)
|
||||
|
||||
def _build_node_context(self) -> NodeContext:
|
||||
@@ -653,58 +654,6 @@ class WorkerAgent:
|
||||
pause_event=self._pause_requested,
|
||||
)
|
||||
|
||||
async def _reflect_colony_memory(self) -> None:
|
||||
"""Run colony memory reflection at node handoff.
|
||||
|
||||
Awaits the shared colony lock so parallel workers queue (never skip).
|
||||
"""
|
||||
gc = self._gc
|
||||
if gc.colony_memory_dir is None or gc.colony_reflect_llm is None:
|
||||
return
|
||||
if gc.worker_sessions_dir is None:
|
||||
return
|
||||
|
||||
from pathlib import Path
|
||||
|
||||
session_dir = Path(gc.worker_sessions_dir) / gc.execution_id
|
||||
if not session_dir.exists():
|
||||
return
|
||||
|
||||
# Await lock — serializes reflection but never skips
|
||||
async with gc._colony_reflect_lock:
|
||||
try:
|
||||
from framework.agents.queen.reflection_agent import run_short_reflection
|
||||
|
||||
await run_short_reflection(
|
||||
session_dir, gc.colony_reflect_llm, gc.colony_memory_dir
|
||||
)
|
||||
except Exception:
|
||||
logger.warning(
|
||||
"Worker %s: colony reflection failed",
|
||||
self.node_spec.id,
|
||||
exc_info=True,
|
||||
)
|
||||
|
||||
# Update recall cache outside lock (per-execution key, no write races)
|
||||
try:
|
||||
from framework.agents.queen.recall_selector import update_recall_cache
|
||||
|
||||
await update_recall_cache(
|
||||
session_dir,
|
||||
gc.colony_reflect_llm,
|
||||
memory_dir=gc.colony_memory_dir,
|
||||
cache_setter=lambda block: gc.colony_recall_cache.__setitem__(
|
||||
gc.execution_id, block
|
||||
),
|
||||
heading="Colony Memories",
|
||||
)
|
||||
except Exception:
|
||||
logger.warning(
|
||||
"Worker %s: recall cache update failed",
|
||||
self.node_spec.id,
|
||||
exc_info=True,
|
||||
)
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Event publishing
|
||||
# ------------------------------------------------------------------
|
||||
@@ -720,21 +669,23 @@ class WorkerAgent:
|
||||
# Serialize activations to dicts for event data
|
||||
activations_data = []
|
||||
for act in completion.activations:
|
||||
activations_data.append({
|
||||
"source_id": act.source_id,
|
||||
"target_id": act.target_id,
|
||||
"edge_id": act.edge_id,
|
||||
"mapped_inputs": act.mapped_inputs,
|
||||
"fan_out_tags": [
|
||||
{
|
||||
"fan_out_id": t.fan_out_id,
|
||||
"fan_out_source": t.fan_out_source,
|
||||
"branches": list(t.branches),
|
||||
"via_branch": t.via_branch,
|
||||
}
|
||||
for t in act.fan_out_tags
|
||||
],
|
||||
})
|
||||
activations_data.append(
|
||||
{
|
||||
"source_id": act.source_id,
|
||||
"target_id": act.target_id,
|
||||
"edge_id": act.edge_id,
|
||||
"mapped_inputs": act.mapped_inputs,
|
||||
"fan_out_tags": [
|
||||
{
|
||||
"fan_out_id": t.fan_out_id,
|
||||
"fan_out_source": t.fan_out_source,
|
||||
"branches": list(t.branches),
|
||||
"via_branch": t.via_branch,
|
||||
}
|
||||
for t in act.fan_out_tags
|
||||
],
|
||||
}
|
||||
)
|
||||
|
||||
await gc.event_bus.emit_worker_completed(
|
||||
stream_id=gc.stream_id,
|
||||
@@ -783,7 +734,7 @@ class WorkerAgent:
|
||||
if not next_spec or next_spec.node_type != "event_loop":
|
||||
return
|
||||
|
||||
from framework.graph.prompting import (
|
||||
from framework.orchestrator.prompting import (
|
||||
TransitionSpec,
|
||||
build_narrative,
|
||||
build_system_prompt_for_node_context,
|
||||
@@ -16,21 +16,21 @@ from dataclasses import dataclass, field
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
from framework.graph.checkpoint_config import CheckpointConfig
|
||||
from framework.graph.context import GraphContext, build_node_context
|
||||
from framework.graph.edge import EdgeCondition, EdgeSpec, GraphSpec
|
||||
from framework.graph.goal import Goal
|
||||
from framework.graph.conversation import LEGACY_RUN_ID, get_run_cursor
|
||||
from framework.graph.node import (
|
||||
from framework.orchestrator.checkpoint_config import CheckpointConfig
|
||||
from framework.orchestrator.context import GraphContext, build_node_context
|
||||
from framework.agent_loop.conversation import LEGACY_RUN_ID
|
||||
from framework.orchestrator.edge import EdgeCondition, EdgeSpec, GraphSpec
|
||||
from framework.orchestrator.goal import Goal
|
||||
from framework.orchestrator.node import (
|
||||
DataBuffer,
|
||||
NodeProtocol,
|
||||
NodeResult,
|
||||
NodeSpec,
|
||||
DataBuffer,
|
||||
)
|
||||
from framework.graph.validator import OutputValidator
|
||||
from framework.llm.provider import LLMProvider, Tool, ToolUse
|
||||
from framework.orchestrator.validator import OutputValidator
|
||||
from framework.llm.provider import LLMProvider, Tool
|
||||
from framework.observability import set_trace_context
|
||||
from framework.runtime.core import Runtime
|
||||
from framework.tracker.decision_tracker import DecisionTracker
|
||||
from framework.schemas.checkpoint import Checkpoint
|
||||
from framework.storage.checkpoint_store import CheckpointStore
|
||||
from framework.utils.io import atomic_write
|
||||
@@ -112,7 +112,7 @@ class ParallelExecutionConfig:
|
||||
branch_timeout_seconds: float = 300.0
|
||||
|
||||
|
||||
class GraphExecutor:
|
||||
class Orchestrator:
|
||||
"""
|
||||
Executes agent graphs.
|
||||
|
||||
@@ -133,7 +133,7 @@ class GraphExecutor:
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
runtime: Runtime,
|
||||
runtime: DecisionTracker,
|
||||
llm: LLMProvider | None = None,
|
||||
tools: list[Tool] | None = None,
|
||||
tool_executor: Callable | None = None,
|
||||
@@ -160,16 +160,12 @@ class GraphExecutor:
|
||||
skill_dirs: list[str] | None = None,
|
||||
context_warn_ratio: float | None = None,
|
||||
batch_init_nudge: str | None = None,
|
||||
colony_memory_dir: Any = None,
|
||||
colony_worker_sessions_dir: Any = None,
|
||||
colony_recall_cache: dict[str, str] | None = None,
|
||||
colony_reflect_llm: Any = None,
|
||||
):
|
||||
"""
|
||||
Initialize the executor.
|
||||
|
||||
Args:
|
||||
runtime: Runtime for decision logging
|
||||
runtime: DecisionTracker for decision logging
|
||||
llm: LLM provider for LLM nodes
|
||||
tools: Available tools
|
||||
tool_executor: Function to execute tools
|
||||
@@ -205,7 +201,14 @@ class GraphExecutor:
|
||||
self.approval_callback = approval_callback
|
||||
self.validator = OutputValidator()
|
||||
self.logger = logging.getLogger(__name__)
|
||||
self.logger.debug("[GraphExecutor.__init__] Created with stream_id=%s, execution_id=%s, initial node_registry keys: %s", stream_id, execution_id, list(self.node_registry.keys()))
|
||||
self.logger.debug(
|
||||
"[Orchestrator.__init__] Created with"
|
||||
" stream_id=%s, execution_id=%s,"
|
||||
" initial node_registry keys: %s",
|
||||
stream_id,
|
||||
execution_id,
|
||||
list(self.node_registry.keys()),
|
||||
)
|
||||
self._event_bus = event_bus
|
||||
self._stream_id = stream_id
|
||||
self._execution_id = execution_id or getattr(runtime, "execution_id", "")
|
||||
@@ -225,11 +228,6 @@ class GraphExecutor:
|
||||
self.skill_dirs: list[str] = skill_dirs or []
|
||||
self.context_warn_ratio: float | None = context_warn_ratio
|
||||
self.batch_init_nudge: str | None = batch_init_nudge
|
||||
self.colony_memory_dir = colony_memory_dir
|
||||
self.colony_worker_sessions_dir = colony_worker_sessions_dir
|
||||
self.colony_recall_cache = colony_recall_cache or {}
|
||||
self.colony_reflect_llm = colony_reflect_llm
|
||||
|
||||
if protocols_prompt:
|
||||
self.logger.info(
|
||||
"GraphExecutor[%s] received protocols_prompt (%d chars)",
|
||||
@@ -363,8 +361,8 @@ class GraphExecutor:
|
||||
|
||||
Uses the same recursive binary-search splitting as EventLoopNode.
|
||||
"""
|
||||
from framework.graph.conversation import extract_tool_call_history
|
||||
from framework.graph.event_loop_node import _is_context_too_large_error
|
||||
from framework.agent_loop.conversation import extract_tool_call_history
|
||||
from framework.agent_loop.agent_loop import _is_context_too_large_error
|
||||
|
||||
if _depth > self._PHASE_LLM_MAX_DEPTH:
|
||||
raise RuntimeError("Phase LLM compaction recursion limit")
|
||||
@@ -529,13 +527,15 @@ class GraphExecutor:
|
||||
|
||||
# Continuous conversation mode state
|
||||
is_continuous = getattr(graph, "conversation_mode", "isolated") == "continuous"
|
||||
continuous_conversation = None # NodeConversation threaded across nodes
|
||||
cumulative_tools: list = [] # Tools accumulate, never removed
|
||||
cumulative_tool_names: set[str] = set()
|
||||
cumulative_output_keys: list[str] = [] # Output keys from all visited nodes
|
||||
continuous_conversation = None # NodeConversation threaded across nodes # noqa: F841
|
||||
cumulative_tools: list = [] # Tools accumulate, never removed # noqa: F841
|
||||
cumulative_tool_names: set[str] = set() # noqa: F841
|
||||
cumulative_output_keys: list[str] = [] # noqa: F841
|
||||
|
||||
# Build node registry for subagent lookup
|
||||
node_registry: dict[str, NodeSpec] = {node.id: node for node in graph.nodes}
|
||||
node_registry: dict[str, NodeSpec] = { # noqa: F841
|
||||
node.id: node for node in graph.nodes
|
||||
}
|
||||
|
||||
# Initialize checkpoint store if checkpointing is enabled
|
||||
checkpoint_store: CheckpointStore | None = None
|
||||
@@ -575,10 +575,7 @@ class GraphExecutor:
|
||||
# input_data would overwrite intermediate results with stale values.
|
||||
_is_resuming = bool(
|
||||
session_state
|
||||
and (
|
||||
session_state.get("paused_at")
|
||||
or session_state.get("resume_from_checkpoint")
|
||||
)
|
||||
and (session_state.get("paused_at") or session_state.get("resume_from_checkpoint"))
|
||||
)
|
||||
if input_data and not _is_resuming:
|
||||
for key, value in input_data.items():
|
||||
@@ -588,9 +585,9 @@ class GraphExecutor:
|
||||
_event_triggered = bool(input_data and isinstance(input_data.get("event"), dict))
|
||||
|
||||
path: list[str] = []
|
||||
total_tokens = 0
|
||||
total_latency = 0
|
||||
node_retry_counts: dict[str, int] = {} # Track retries per node
|
||||
total_tokens = 0 # noqa: F841
|
||||
total_latency = 0 # noqa: F841
|
||||
node_retry_counts: dict[str, int] = {} # noqa: F841
|
||||
node_visit_counts: dict[str, int] = {} # Track visits for feedback loops
|
||||
_is_retry = False # True when looping back for a retry (not a new visit)
|
||||
|
||||
@@ -661,7 +658,7 @@ class GraphExecutor:
|
||||
else:
|
||||
current_node_id = graph.get_entry_point(session_state)
|
||||
|
||||
steps = 0
|
||||
steps = 0 # noqa: F841
|
||||
|
||||
if session_state and current_node_id != graph.entry_node:
|
||||
self.logger.info(f"🔄 Resuming from: {current_node_id}")
|
||||
@@ -693,7 +690,7 @@ class GraphExecutor:
|
||||
# and spillover files share the same session-scoped directory.
|
||||
_ctx_token = None
|
||||
if self._storage_path:
|
||||
from framework.runner.tool_registry import ToolRegistry
|
||||
from framework.loader.tool_registry import ToolRegistry
|
||||
|
||||
_ctx_token = ToolRegistry.set_execution_context(
|
||||
data_dir=str(self._storage_path / "data"),
|
||||
@@ -715,14 +712,12 @@ class GraphExecutor:
|
||||
|
||||
finally:
|
||||
if _ctx_token is not None:
|
||||
from framework.runner.tool_registry import ToolRegistry
|
||||
from framework.loader.tool_registry import ToolRegistry
|
||||
|
||||
ToolRegistry.reset_execution_context(_ctx_token)
|
||||
|
||||
|
||||
VALID_NODE_TYPES = {
|
||||
"event_loop",
|
||||
"gcu",
|
||||
}
|
||||
# Node types removed in v0.5 — provide migration guidance
|
||||
REMOVED_NODE_TYPES = {
|
||||
@@ -739,9 +734,17 @@ class GraphExecutor:
|
||||
"""Get or create a node implementation."""
|
||||
# Check registry first
|
||||
if node_spec.id in self.node_registry:
|
||||
logger.debug("[GraphExecutor._get_node_implementation] Found node '%s' in registry", node_spec.id)
|
||||
logger.debug(
|
||||
"[Orchestrator._get_node_implementation] Found node '%s' in registry", node_spec.id
|
||||
)
|
||||
return self.node_registry[node_spec.id]
|
||||
logger.debug("[GraphExecutor._get_node_implementation] Node '%s' not in registry (keys: %s), creating new", node_spec.id, list(self.node_registry.keys()))
|
||||
logger.debug(
|
||||
"[Orchestrator._get_node_implementation]"
|
||||
" Node '%s' not in registry (keys: %s),"
|
||||
" creating new",
|
||||
node_spec.id,
|
||||
list(self.node_registry.keys()),
|
||||
)
|
||||
|
||||
# Reject removed node types with migration guidance
|
||||
if node_spec.node_type in self.REMOVED_NODE_TYPES:
|
||||
@@ -760,10 +763,10 @@ class GraphExecutor:
|
||||
)
|
||||
|
||||
# Create based on type
|
||||
if node_spec.node_type in ("event_loop", "gcu"):
|
||||
if node_spec.node_type == "event_loop":
|
||||
# Auto-create EventLoopNode with sensible defaults.
|
||||
# Custom configs can still be pre-registered via node_registry.
|
||||
from framework.graph.event_loop_node import EventLoopNode, LoopConfig
|
||||
from framework.agent_loop.agent_loop import AgentLoop, LoopConfig
|
||||
|
||||
# Create a FileConversationStore if a storage path is available
|
||||
conv_store = None
|
||||
@@ -783,13 +786,13 @@ class GraphExecutor:
|
||||
if self._storage_path:
|
||||
spillover = str(self._storage_path / "data")
|
||||
|
||||
from framework.graph.node import warn_if_deprecated_client_facing
|
||||
from framework.orchestrator.node import warn_if_deprecated_client_facing
|
||||
|
||||
warn_if_deprecated_client_facing(node_spec)
|
||||
|
||||
lc = self._loop_config
|
||||
default_max_iter = 100 if node_spec.supports_direct_user_io() else 50
|
||||
node = EventLoopNode(
|
||||
node = AgentLoop(
|
||||
event_bus=self._event_bus,
|
||||
judge=None, # implicit judge: accept when output_keys are filled
|
||||
config=LoopConfig(
|
||||
@@ -807,7 +810,13 @@ class GraphExecutor:
|
||||
)
|
||||
# Cache so inject_event() is reachable for queen interaction and escalation routing
|
||||
self.node_registry[node_spec.id] = node
|
||||
logger.debug("[GraphExecutor._get_node_implementation] Cached node '%s' in node_registry, registry now has keys: %s", node_spec.id, list(self.node_registry.keys()))
|
||||
logger.debug(
|
||||
"[Orchestrator._get_node_implementation]"
|
||||
" Cached node '%s' in node_registry,"
|
||||
" registry now has keys: %s",
|
||||
node_spec.id,
|
||||
list(self.node_registry.keys()),
|
||||
)
|
||||
return node
|
||||
|
||||
# Should never reach here due to validation above
|
||||
@@ -988,10 +997,10 @@ class GraphExecutor:
|
||||
branch_impl = self._get_node_implementation(node_spec, graph.cleanup_llm_model)
|
||||
|
||||
effective_max_retries = node_spec.max_retries
|
||||
# Only override for actual EventLoopNode instances, not custom NodeProtocol impls
|
||||
from framework.graph.event_loop_node import EventLoopNode
|
||||
# Only override for actual AgentLoop instances, not custom NodeProtocol impls
|
||||
from framework.agent_loop.agent_loop import AgentLoop as _AgentLoop # noqa: F811
|
||||
|
||||
if isinstance(branch_impl, EventLoopNode) and effective_max_retries > 1:
|
||||
if isinstance(branch_impl, _AgentLoop) and effective_max_retries > 1:
|
||||
self.logger.warning(
|
||||
f"EventLoopNode '{node_spec.id}' has "
|
||||
f"max_retries={effective_max_retries}. Overriding "
|
||||
@@ -1032,9 +1041,6 @@ class GraphExecutor:
|
||||
execution_id=self._execution_id,
|
||||
run_id=self._run_id,
|
||||
stream_id=self._stream_id,
|
||||
node_registry=node_registry,
|
||||
all_tools=self.tools,
|
||||
shared_node_registry=self.node_registry,
|
||||
dynamic_tools_provider=self.dynamic_tools_provider,
|
||||
dynamic_prompt_provider=self.dynamic_prompt_provider,
|
||||
dynamic_memory_provider=self.dynamic_memory_provider,
|
||||
@@ -1283,14 +1289,14 @@ class GraphExecutor:
|
||||
Replaces the imperative while-loop with autonomous workers that
|
||||
self-activate based on edge conditions and fan-out tracking.
|
||||
"""
|
||||
from framework.graph.worker_agent import (
|
||||
from framework.orchestrator.node_worker import (
|
||||
Activation,
|
||||
FanOutTag,
|
||||
WorkerAgent,
|
||||
NodeWorker,
|
||||
WorkerCompletion,
|
||||
WorkerLifecycle,
|
||||
)
|
||||
from framework.runtime.event_bus import AgentEvent, EventType
|
||||
from framework.host.event_bus import AgentEvent, EventType
|
||||
|
||||
# Build shared graph context
|
||||
gc = GraphContext(
|
||||
@@ -1326,16 +1332,12 @@ class GraphExecutor:
|
||||
iteration_metadata_provider=self.iteration_metadata_provider,
|
||||
loop_config=self._loop_config,
|
||||
node_visit_counts=dict(node_visit_counts),
|
||||
colony_memory_dir=self.colony_memory_dir,
|
||||
worker_sessions_dir=self.colony_worker_sessions_dir,
|
||||
colony_recall_cache=self.colony_recall_cache,
|
||||
colony_reflect_llm=self.colony_reflect_llm,
|
||||
)
|
||||
|
||||
# Create one WorkerAgent per node
|
||||
workers: dict[str, WorkerAgent] = {}
|
||||
workers: dict[str, NodeWorker] = {}
|
||||
for node_spec in graph.nodes:
|
||||
workers[node_spec.id] = WorkerAgent(node_spec=node_spec, graph_context=gc)
|
||||
workers[node_spec.id] = NodeWorker(node_spec=node_spec, graph_context=gc)
|
||||
|
||||
# Identify entry workers (graph entry node, not based on edge count)
|
||||
# A node can be the entry point AND have incoming feedback edges.
|
||||
@@ -1408,8 +1410,7 @@ class GraphExecutor:
|
||||
if any(w.lifecycle == WorkerLifecycle.RUNNING for w in workers.values()):
|
||||
return False
|
||||
return any(
|
||||
tid in completed_terminals or tid in failed_workers
|
||||
for tid in terminal_worker_ids
|
||||
tid in completed_terminals or tid in failed_workers for tid in terminal_worker_ids
|
||||
)
|
||||
|
||||
def _mark_quiescent_terminal_failure() -> bool:
|
||||
@@ -1419,8 +1420,7 @@ class GraphExecutor:
|
||||
if any(w.lifecycle == WorkerLifecycle.RUNNING for w in workers.values()):
|
||||
return False
|
||||
if any(
|
||||
tid in completed_terminals or tid in failed_workers
|
||||
for tid in terminal_worker_ids
|
||||
tid in completed_terminals or tid in failed_workers for tid in terminal_worker_ids
|
||||
):
|
||||
return False
|
||||
execution_error = (
|
||||
@@ -1432,11 +1432,13 @@ class GraphExecutor:
|
||||
|
||||
# Track fan-out branch workers for per-branch timeout enforcement
|
||||
_fanout_branch_tasks: dict[str, asyncio.Task] = {} # worker_id → timeout-wrapper task
|
||||
branch_timeout = self._parallel_config.branch_timeout_seconds if self._parallel_config else 300.0
|
||||
branch_timeout = (
|
||||
self._parallel_config.branch_timeout_seconds if self._parallel_config else 300.0
|
||||
)
|
||||
|
||||
def _route_activation(
|
||||
activation: Activation,
|
||||
workers_map: dict[str, WorkerAgent],
|
||||
workers_map: dict[str, NodeWorker],
|
||||
pending_tasks_map: dict[str, asyncio.Task],
|
||||
*,
|
||||
has_event_subscription: bool,
|
||||
@@ -1468,8 +1470,7 @@ class GraphExecutor:
|
||||
if target_worker._task is not None:
|
||||
# Fan-out branch: wrap with timeout
|
||||
is_fanout_branch = any(
|
||||
tag.via_branch == activation.target_id
|
||||
for tag in activation.fan_out_tags
|
||||
tag.via_branch == activation.target_id for tag in activation.fan_out_tags
|
||||
)
|
||||
if is_fanout_branch and branch_timeout > 0:
|
||||
timed_task = asyncio.ensure_future(
|
||||
@@ -1526,14 +1527,15 @@ class GraphExecutor:
|
||||
gc.continuous_conversation = completion.conversation
|
||||
|
||||
self.logger.info(
|
||||
f" ✓ Worker completed: {worker_id} "
|
||||
f"({len(activations)} outgoing activation(s))"
|
||||
f" ✓ Worker completed: {worker_id} ({len(activations)} outgoing activation(s))"
|
||||
)
|
||||
|
||||
# Route activations to target workers
|
||||
for activation in activations:
|
||||
_route_activation(
|
||||
activation, workers, {},
|
||||
activation,
|
||||
workers,
|
||||
{},
|
||||
has_event_subscription=True,
|
||||
)
|
||||
|
||||
@@ -1567,8 +1569,8 @@ class GraphExecutor:
|
||||
completion_event.set()
|
||||
|
||||
# Subscribe to events (only if event bus has subscribe capability)
|
||||
has_event_subscription = (
|
||||
self._event_bus is not None and hasattr(self._event_bus, "subscribe")
|
||||
has_event_subscription = self._event_bus is not None and hasattr(
|
||||
self._event_bus, "subscribe"
|
||||
)
|
||||
if has_event_subscription:
|
||||
sub_completed = self._event_bus.subscribe(
|
||||
@@ -1597,9 +1599,7 @@ class GraphExecutor:
|
||||
else:
|
||||
# No event bus: wait on worker tasks directly and route completions inline.
|
||||
pending_tasks: dict[str, asyncio.Task] = {
|
||||
wid: w._task
|
||||
for wid, w in workers.items()
|
||||
if w._task is not None
|
||||
wid: w._task for wid, w in workers.items() if w._task is not None
|
||||
}
|
||||
while True:
|
||||
if _check_graph_done():
|
||||
@@ -1651,7 +1651,10 @@ class GraphExecutor:
|
||||
task_error = exc
|
||||
|
||||
# Check for fan-out branch timeout
|
||||
if isinstance(task_error, asyncio.TimeoutError) and wid in _fanout_branch_tasks:
|
||||
if (
|
||||
isinstance(task_error, asyncio.TimeoutError)
|
||||
and wid in _fanout_branch_tasks
|
||||
):
|
||||
error = f"Branch failed (timed out after {branch_timeout}s)"
|
||||
failed_workers[wid] = error
|
||||
worker.lifecycle = WorkerLifecycle.FAILED
|
||||
@@ -1716,7 +1719,9 @@ class GraphExecutor:
|
||||
# Route activations
|
||||
for activation in outgoing_activations:
|
||||
_route_activation(
|
||||
activation, workers, pending_tasks,
|
||||
activation,
|
||||
workers,
|
||||
pending_tasks,
|
||||
has_event_subscription=False,
|
||||
)
|
||||
|
||||
@@ -1744,7 +1749,9 @@ class GraphExecutor:
|
||||
if outgoing_activations:
|
||||
for activation in outgoing_activations:
|
||||
_route_activation(
|
||||
activation, workers, pending_tasks,
|
||||
activation,
|
||||
workers,
|
||||
pending_tasks,
|
||||
has_event_subscription=False,
|
||||
)
|
||||
elif task_error is not None:
|
||||
+4
-7
@@ -9,7 +9,7 @@ import json
|
||||
from pathlib import Path
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
from framework.graph.prompting import (
|
||||
from framework.orchestrator.prompting import (
|
||||
EXECUTION_SCOPE_PREAMBLE,
|
||||
TransitionSpec,
|
||||
build_accounts_prompt,
|
||||
@@ -19,8 +19,7 @@ from framework.graph.prompting import (
|
||||
)
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.graph.edge import GraphSpec
|
||||
from framework.graph.node import DataBuffer, NodeSpec
|
||||
from framework.orchestrator.node import DataBuffer, NodeSpec
|
||||
|
||||
|
||||
_with_datetime = stamp_prompt_datetime
|
||||
@@ -37,7 +36,7 @@ def compose_system_prompt(
|
||||
node_type_preamble: str | None = None,
|
||||
) -> str:
|
||||
"""Compatibility wrapper for the legacy function signature."""
|
||||
from framework.graph.prompting import NodePromptSpec
|
||||
from framework.orchestrator.prompting import NodePromptSpec
|
||||
|
||||
spec = NodePromptSpec(
|
||||
identity_prompt=identity_prompt or "",
|
||||
@@ -67,7 +66,6 @@ def compose_system_prompt(
|
||||
protocols_prompt=spec.protocols_prompt,
|
||||
node_type=spec.node_type,
|
||||
output_keys=spec.output_keys,
|
||||
is_subagent_mode=spec.is_subagent_mode,
|
||||
)
|
||||
return build_system_prompt(spec)
|
||||
|
||||
@@ -136,8 +134,7 @@ def build_transition_marker(
|
||||
)
|
||||
|
||||
|
||||
from framework.graph.prompting import build_transition_message
|
||||
|
||||
from framework.orchestrator.prompting import build_transition_message # noqa: E402
|
||||
|
||||
__all__ = [
|
||||
"EXECUTION_SCOPE_PREAMBLE",
|
||||
@@ -12,8 +12,8 @@ from datetime import datetime
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from framework.graph.edge import GraphSpec
|
||||
from framework.graph.node import DataBuffer
|
||||
from framework.orchestrator.edge import GraphSpec
|
||||
from framework.orchestrator.node import DataBuffer
|
||||
|
||||
|
||||
# Injected into every worker node's system prompt so the LLM understands
|
||||
@@ -40,7 +40,6 @@ class NodePromptSpec:
|
||||
memory_prompt: str = ""
|
||||
node_type: str = "event_loop"
|
||||
output_keys: tuple[str, ...] = ()
|
||||
is_subagent_mode: bool = False
|
||||
|
||||
|
||||
@dataclass(frozen=True)
|
||||
@@ -104,7 +103,9 @@ def build_accounts_prompt(
|
||||
tools_for_provider = sorted(provider_tools.get(provider, []))
|
||||
|
||||
if node_tool_set is not None:
|
||||
relevant_tools = [tool_name for tool_name in tools_for_provider if tool_name in node_tool_set]
|
||||
relevant_tools = [
|
||||
tool_name for tool_name in tools_for_provider if tool_name in node_tool_set
|
||||
]
|
||||
if not relevant_tools:
|
||||
continue
|
||||
tools_for_provider = relevant_tools
|
||||
@@ -153,7 +154,9 @@ def build_prompt_spec_from_node_context(
|
||||
resolved_memory_prompt = getattr(ctx, "memory_prompt", "") or ""
|
||||
return NodePromptSpec(
|
||||
identity_prompt=ctx.identity_prompt or "",
|
||||
focus_prompt=focus_prompt if focus_prompt is not None else (ctx.node_spec.system_prompt or ""),
|
||||
focus_prompt=focus_prompt
|
||||
if focus_prompt is not None
|
||||
else (ctx.node_spec.system_prompt or ""),
|
||||
narrative=narrative if narrative is not None else (ctx.narrative or ""),
|
||||
accounts_prompt=ctx.accounts_prompt or "",
|
||||
skills_catalog_prompt=ctx.skills_catalog_prompt or "",
|
||||
@@ -161,7 +164,6 @@ def build_prompt_spec_from_node_context(
|
||||
memory_prompt=resolved_memory_prompt,
|
||||
node_type=ctx.node_spec.node_type,
|
||||
output_keys=tuple(ctx.node_spec.output_keys or ()),
|
||||
is_subagent_mode=bool(getattr(ctx, "is_subagent_mode", False)),
|
||||
)
|
||||
|
||||
|
||||
@@ -191,17 +193,10 @@ def build_system_prompt(spec: NodePromptSpec) -> str:
|
||||
if spec.narrative:
|
||||
parts.append(f"\n--- Context (what has happened so far) ---\n{spec.narrative}")
|
||||
|
||||
if (
|
||||
not spec.is_subagent_mode
|
||||
and spec.node_type in ("event_loop", "gcu")
|
||||
and spec.output_keys
|
||||
):
|
||||
if not False and spec.node_type == "event_loop" and spec.output_keys:
|
||||
parts.append(f"\n{EXECUTION_SCOPE_PREAMBLE}")
|
||||
|
||||
if spec.node_type == "gcu":
|
||||
from framework.graph.gcu import GCU_BROWSER_SYSTEM_PROMPT
|
||||
|
||||
parts.append(f"\n{GCU_BROWSER_SYSTEM_PROMPT}")
|
||||
|
||||
if spec.focus_prompt:
|
||||
parts.append(f"\n--- Current Focus ---\n{spec.focus_prompt}")
|
||||
@@ -1,7 +1,84 @@
|
||||
import ast
|
||||
import operator
|
||||
import signal
|
||||
import threading
|
||||
import time
|
||||
from contextlib import contextmanager
|
||||
from typing import Any
|
||||
|
||||
# Power operations can allocate extremely large integers. Keep conservative
|
||||
# limits here so untrusted edge conditions cannot exhaust CPU or memory.
|
||||
MAX_POWER_ABS_EXPONENT = 1_000
|
||||
MAX_POWER_RESULT_BITS = 4_096
|
||||
# Typical edge-condition evaluations in this repo complete well under 1ms.
|
||||
# 100ms leaves ample headroom for legitimate checks while failing fast on abuse.
|
||||
DEFAULT_TIMEOUT_MS = 100
|
||||
|
||||
|
||||
def _safe_pow(base: Any, exp: Any) -> Any:
|
||||
if isinstance(exp, (int, float)) and abs(exp) > MAX_POWER_ABS_EXPONENT:
|
||||
raise ValueError(f"Power exponent exceeds safe limit ({MAX_POWER_ABS_EXPONENT})")
|
||||
|
||||
if isinstance(base, int) and isinstance(exp, int) and exp > 0:
|
||||
abs_base = abs(base)
|
||||
if abs_base > 1:
|
||||
# Estimate bit growth instead of materializing a huge integer.
|
||||
estimated_bits = exp * abs_base.bit_length()
|
||||
if estimated_bits > MAX_POWER_RESULT_BITS:
|
||||
raise ValueError("Power operation exceeds safe size limit")
|
||||
|
||||
return operator.pow(base, exp)
|
||||
|
||||
|
||||
def _timeout_message(timeout_ms: int) -> str:
|
||||
return f"safe_eval exceeded {timeout_ms}ms execution timeout"
|
||||
|
||||
|
||||
def _check_timeout(deadline: float | None, timeout_ms: int | None) -> None:
|
||||
if deadline is not None and timeout_ms is not None and time.perf_counter() >= deadline:
|
||||
raise TimeoutError(_timeout_message(timeout_ms))
|
||||
|
||||
|
||||
@contextmanager
|
||||
def _execution_timeout(timeout_ms: int | None):
|
||||
if timeout_ms is None:
|
||||
yield
|
||||
return
|
||||
|
||||
if timeout_ms <= 0:
|
||||
raise ValueError("timeout_ms must be greater than 0")
|
||||
|
||||
can_use_alarm = (
|
||||
hasattr(signal, "SIGALRM")
|
||||
and hasattr(signal, "ITIMER_REAL")
|
||||
and hasattr(signal, "getitimer")
|
||||
and hasattr(signal, "setitimer")
|
||||
and threading.current_thread() is threading.main_thread()
|
||||
)
|
||||
if not can_use_alarm:
|
||||
yield
|
||||
return
|
||||
|
||||
current_delay, current_interval = signal.getitimer(signal.ITIMER_REAL)
|
||||
if current_delay > 0 or current_interval > 0:
|
||||
# safe_eval runs inside a shared framework process, so it must not
|
||||
# replace a timer another subsystem already owns.
|
||||
yield
|
||||
return
|
||||
|
||||
def _handle_timeout(signum, frame):
|
||||
raise TimeoutError(_timeout_message(timeout_ms))
|
||||
|
||||
old_handler = signal.getsignal(signal.SIGALRM)
|
||||
signal.signal(signal.SIGALRM, _handle_timeout)
|
||||
old_delay, old_interval = signal.setitimer(signal.ITIMER_REAL, timeout_ms / 1000)
|
||||
try:
|
||||
yield
|
||||
finally:
|
||||
signal.signal(signal.SIGALRM, old_handler)
|
||||
signal.setitimer(signal.ITIMER_REAL, old_delay, old_interval)
|
||||
|
||||
|
||||
# Safe operators whitelist
|
||||
SAFE_OPERATORS = {
|
||||
ast.Add: operator.add,
|
||||
@@ -10,7 +87,7 @@ SAFE_OPERATORS = {
|
||||
ast.Div: operator.truediv,
|
||||
ast.FloorDiv: operator.floordiv,
|
||||
ast.Mod: operator.mod,
|
||||
ast.Pow: operator.pow,
|
||||
ast.Pow: _safe_pow,
|
||||
ast.LShift: operator.lshift,
|
||||
ast.RShift: operator.rshift,
|
||||
ast.BitOr: operator.or_,
|
||||
@@ -54,10 +131,19 @@ SAFE_FUNCTIONS = {
|
||||
|
||||
|
||||
class SafeEvalVisitor(ast.NodeVisitor):
|
||||
def __init__(self, context: dict[str, Any]):
|
||||
def __init__(
|
||||
self,
|
||||
context: dict[str, Any],
|
||||
*,
|
||||
deadline: float | None = None,
|
||||
timeout_ms: int | None = None,
|
||||
):
|
||||
self.context = context
|
||||
self.deadline = deadline
|
||||
self.timeout_ms = timeout_ms
|
||||
|
||||
def visit(self, node: ast.AST) -> Any:
|
||||
_check_timeout(self.deadline, self.timeout_ms)
|
||||
# Override visit to prevent default behavior and ensure only explicitly allowed nodes work
|
||||
method = "visit_" + node.__class__.__name__
|
||||
visitor = getattr(self, method, self.generic_visit)
|
||||
@@ -183,6 +269,7 @@ class SafeEvalVisitor(ast.NodeVisitor):
|
||||
raise AttributeError(f"Object has no attribute '{node.attr}'")
|
||||
|
||||
def visit_Call(self, node: ast.Call) -> Any:
|
||||
_check_timeout(self.deadline, self.timeout_ms)
|
||||
# Only allow calling whitelisted functions
|
||||
func = self.visit(node.func)
|
||||
|
||||
@@ -226,16 +313,24 @@ class SafeEvalVisitor(ast.NodeVisitor):
|
||||
args = [self.visit(arg) for arg in node.args]
|
||||
keywords = {kw.arg: self.visit(kw.value) for kw in node.keywords}
|
||||
|
||||
_check_timeout(self.deadline, self.timeout_ms)
|
||||
return func(*args, **keywords)
|
||||
|
||||
|
||||
def safe_eval(expr: str, context: dict[str, Any] | None = None) -> Any:
|
||||
def safe_eval(
|
||||
expr: str,
|
||||
context: dict[str, Any] | None = None,
|
||||
*,
|
||||
timeout_ms: int | None = DEFAULT_TIMEOUT_MS,
|
||||
) -> Any:
|
||||
"""
|
||||
Safely evaluate a python expression string.
|
||||
|
||||
Args:
|
||||
expr: The expression string to evaluate.
|
||||
context: Dictionary of variables available in the expression.
|
||||
timeout_ms: Maximum evaluation time in milliseconds. Use ``None`` to
|
||||
disable the timeout.
|
||||
|
||||
Returns:
|
||||
The result of the evaluation.
|
||||
@@ -251,10 +346,18 @@ def safe_eval(expr: str, context: dict[str, Any] | None = None) -> Any:
|
||||
full_context = context.copy()
|
||||
full_context.update(SAFE_FUNCTIONS)
|
||||
|
||||
try:
|
||||
tree = ast.parse(expr, mode="eval")
|
||||
except SyntaxError as e:
|
||||
raise SyntaxError(f"Invalid syntax in expression: {e}") from e
|
||||
deadline = None if timeout_ms is None else time.perf_counter() + (timeout_ms / 1000)
|
||||
|
||||
visitor = SafeEvalVisitor(full_context)
|
||||
return visitor.visit(tree)
|
||||
with _execution_timeout(timeout_ms):
|
||||
try:
|
||||
tree = ast.parse(expr, mode="eval")
|
||||
except SyntaxError as e:
|
||||
raise SyntaxError(f"Invalid syntax in expression: {e}") from e
|
||||
|
||||
_check_timeout(deadline, timeout_ms)
|
||||
visitor = SafeEvalVisitor(
|
||||
full_context,
|
||||
deadline=deadline,
|
||||
timeout_ms=timeout_ms,
|
||||
)
|
||||
return visitor.visit(tree)
|
||||
@@ -0,0 +1,32 @@
|
||||
"""Pipeline middleware for the agent runtime.
|
||||
|
||||
Stages run in order when :meth:`AgentRuntime.trigger` receives a request.
|
||||
Each stage can pass the context through, transform the input data, or reject
|
||||
the request entirely. This is the runtime-level analogue of AstrBot's
|
||||
pipeline architecture and lets operators compose rate limiting, validation,
|
||||
cost guards, and custom pre/post-processing without patching core code.
|
||||
"""
|
||||
|
||||
from framework.pipeline.registry import (
|
||||
build_pipeline_from_config,
|
||||
build_stage,
|
||||
register,
|
||||
)
|
||||
from framework.pipeline.runner import PipelineRunner
|
||||
from framework.pipeline.stage import (
|
||||
PipelineContext,
|
||||
PipelineRejectedError,
|
||||
PipelineResult,
|
||||
PipelineStage,
|
||||
)
|
||||
|
||||
__all__ = [
|
||||
"PipelineContext",
|
||||
"PipelineRejectedError",
|
||||
"PipelineResult",
|
||||
"PipelineRunner",
|
||||
"PipelineStage",
|
||||
"build_pipeline_from_config",
|
||||
"build_stage",
|
||||
"register",
|
||||
]
|
||||
@@ -0,0 +1,44 @@
|
||||
"""Execution-level middleware protocol.
|
||||
|
||||
Unlike :class:`PipelineStage` (which gates ``AgentHost.trigger()`` at the
|
||||
request level), execution middleware runs at the start of **every** execution
|
||||
attempt inside ``ExecutionManager._run_execution()`` -- including resurrection
|
||||
retries.
|
||||
|
||||
Use this for concerns that must re-evaluate per attempt:
|
||||
- Cost tracking (charge per attempt, not per trigger)
|
||||
- Tool scoping (different tools on retry)
|
||||
- Checkpoint config overrides
|
||||
- Per-execution logging/tracing setup
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from abc import ABC, abstractmethod
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Any
|
||||
|
||||
|
||||
@dataclass
|
||||
class ExecutionContext:
|
||||
"""Context passed to execution middleware."""
|
||||
|
||||
execution_id: str
|
||||
stream_id: str
|
||||
run_id: str
|
||||
input_data: dict[str, Any]
|
||||
session_state: dict[str, Any] | None = None
|
||||
attempt: int = 1
|
||||
metadata: dict[str, Any] = field(default_factory=dict)
|
||||
|
||||
|
||||
class ExecutionMiddleware(ABC):
|
||||
"""Base class for per-execution middleware."""
|
||||
|
||||
@abstractmethod
|
||||
async def on_execution_start(self, ctx: ExecutionContext) -> ExecutionContext:
|
||||
"""Called before each execution attempt (including resurrections).
|
||||
|
||||
Modify and return *ctx* to transform execution parameters.
|
||||
Raise to abort the execution.
|
||||
"""
|
||||
@@ -0,0 +1,107 @@
|
||||
"""Pipeline stage registry -- maps type names to stage classes.
|
||||
|
||||
Stages self-register via the ``@register`` decorator. The
|
||||
``build_pipeline_from_config`` function reads a declarative config
|
||||
(from ``~/.hive/configuration.json`` or ``agent.json``) and
|
||||
instantiates the corresponding stage objects.
|
||||
|
||||
Example config::
|
||||
|
||||
{
|
||||
"pipeline": {
|
||||
"stages": [
|
||||
{"type": "rate_limit", "order": 200, "config": {"max_requests_per_minute": 60}},
|
||||
{"type": "cost_guard", "order": 300, "config": {"max_cost_per_request": 0.50}}
|
||||
]
|
||||
}
|
||||
}
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from typing import Any
|
||||
|
||||
from framework.pipeline.runner import PipelineRunner
|
||||
from framework.pipeline.stage import PipelineStage
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
_STAGE_REGISTRY: dict[str, type[PipelineStage]] = {}
|
||||
|
||||
|
||||
def register(name: str):
|
||||
"""Decorator to register a pipeline stage class by type name.
|
||||
|
||||
Usage::
|
||||
|
||||
@register("rate_limit")
|
||||
class RateLimitStage(PipelineStage):
|
||||
...
|
||||
"""
|
||||
|
||||
def decorator(cls: type[PipelineStage]) -> type[PipelineStage]:
|
||||
_STAGE_REGISTRY[name] = cls
|
||||
return cls
|
||||
|
||||
return decorator
|
||||
|
||||
|
||||
def get_registered_stages() -> dict[str, type[PipelineStage]]:
|
||||
"""Return a copy of the stage registry."""
|
||||
return dict(_STAGE_REGISTRY)
|
||||
|
||||
|
||||
def build_stage(spec: dict[str, Any]) -> PipelineStage:
|
||||
"""Instantiate a single stage from a config spec.
|
||||
|
||||
Args:
|
||||
spec: Dict with ``type`` (required), ``order`` (optional),
|
||||
and ``config`` (optional kwargs dict).
|
||||
|
||||
Raises:
|
||||
KeyError: If the stage type is not registered.
|
||||
"""
|
||||
stage_type = spec["type"]
|
||||
if stage_type not in _STAGE_REGISTRY:
|
||||
available = ", ".join(sorted(_STAGE_REGISTRY)) or "(none)"
|
||||
raise KeyError(
|
||||
f"Unknown pipeline stage type '{stage_type}'. "
|
||||
f"Available: {available}"
|
||||
)
|
||||
cls = _STAGE_REGISTRY[stage_type]
|
||||
config = spec.get("config", {})
|
||||
stage = cls(**config)
|
||||
if "order" in spec:
|
||||
stage.order = spec["order"]
|
||||
return stage
|
||||
|
||||
|
||||
def build_pipeline_from_config(
|
||||
stages_config: list[dict[str, Any]],
|
||||
) -> PipelineRunner:
|
||||
"""Build a ``PipelineRunner`` from a declarative stages list.
|
||||
|
||||
Each entry is ``{"type": "...", "order": N, "config": {...}}``.
|
||||
"""
|
||||
# Import built-in stages so they self-register
|
||||
_ensure_builtins_registered()
|
||||
|
||||
stages = [build_stage(s) for s in stages_config]
|
||||
return PipelineRunner(stages)
|
||||
|
||||
|
||||
def _ensure_builtins_registered() -> None:
|
||||
"""Import built-in stage modules so their ``@register`` decorators fire."""
|
||||
if _STAGE_REGISTRY:
|
||||
return # already populated
|
||||
try:
|
||||
import framework.pipeline.stages.cost_guard # noqa: F401
|
||||
import framework.pipeline.stages.credential_resolver # noqa: F401
|
||||
import framework.pipeline.stages.input_validation # noqa: F401
|
||||
import framework.pipeline.stages.llm_provider # noqa: F401
|
||||
import framework.pipeline.stages.mcp_registry # noqa: F401
|
||||
import framework.pipeline.stages.rate_limit # noqa: F401
|
||||
import framework.pipeline.stages.skill_registry # noqa: F401
|
||||
except ImportError:
|
||||
pass
|
||||
@@ -0,0 +1,111 @@
|
||||
"""Pipeline runner -- executes registered stages in order."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from typing import Any
|
||||
|
||||
from framework.pipeline.stage import (
|
||||
PipelineContext,
|
||||
PipelineRejectedError,
|
||||
PipelineStage,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class PipelineRunner:
|
||||
"""Executes a list of :class:`PipelineStage` instances in ``order``.
|
||||
|
||||
The runner is the orchestration layer that :class:`AgentRuntime` calls
|
||||
on every trigger. Stages execute in ascending ``order`` (ties broken by
|
||||
registration order). A stage returning ``reject`` short-circuits the
|
||||
pipeline and causes the trigger to raise :class:`PipelineRejectedError`.
|
||||
"""
|
||||
|
||||
def __init__(self, stages: list[PipelineStage] | None = None) -> None:
|
||||
self._stages: list[PipelineStage] = sorted(stages or [], key=lambda s: s.order)
|
||||
|
||||
@property
|
||||
def stages(self) -> list[PipelineStage]:
|
||||
return list(self._stages)
|
||||
|
||||
def add_stage(self, stage: PipelineStage) -> None:
|
||||
"""Add a stage after construction (for dynamic registration)."""
|
||||
self._stages.append(stage)
|
||||
self._stages.sort(key=lambda s: s.order)
|
||||
|
||||
async def initialize_all(self) -> None:
|
||||
"""Call ``initialize`` on every registered stage."""
|
||||
for stage in self._stages:
|
||||
name = stage.__class__.__name__
|
||||
logger.info("[pipeline] Initializing %s (order=%d)", name, stage.order)
|
||||
await stage.initialize()
|
||||
logger.info("[pipeline] %s initialized", name)
|
||||
if self._stages:
|
||||
logger.info(
|
||||
"[pipeline] Ready: %d stages [%s]",
|
||||
len(self._stages),
|
||||
" -> ".join(s.__class__.__name__ for s in self._stages),
|
||||
)
|
||||
|
||||
async def run(self, ctx: PipelineContext) -> PipelineContext:
|
||||
"""Run all stages. Raises ``PipelineRejectedError`` on rejection.
|
||||
|
||||
Returns the (possibly transformed) context.
|
||||
"""
|
||||
if not self._stages:
|
||||
return ctx
|
||||
import time
|
||||
|
||||
pipeline_start = time.perf_counter()
|
||||
logger.info(
|
||||
"[pipeline] Running %d stages for entry_point=%s",
|
||||
len(self._stages),
|
||||
ctx.entry_point_id,
|
||||
)
|
||||
for stage in self._stages:
|
||||
stage_name = stage.__class__.__name__
|
||||
t0 = time.perf_counter()
|
||||
result = await stage.process(ctx)
|
||||
elapsed_ms = (time.perf_counter() - t0) * 1000
|
||||
if result.action == "reject":
|
||||
reason = result.rejection_reason or "(no reason given)"
|
||||
logger.warning(
|
||||
"[pipeline] REJECTED by %s (%.1fms): %s",
|
||||
stage_name, elapsed_ms, reason,
|
||||
)
|
||||
raise PipelineRejectedError(stage_name, reason)
|
||||
if result.action == "transform":
|
||||
logger.info(
|
||||
"[pipeline] %s TRANSFORMED input (%.1fms)",
|
||||
stage_name, elapsed_ms,
|
||||
)
|
||||
if result.input_data is not None:
|
||||
ctx.input_data = result.input_data
|
||||
else:
|
||||
logger.info(
|
||||
"[pipeline] %s passed (%.1fms)",
|
||||
stage_name, elapsed_ms,
|
||||
)
|
||||
total_ms = (time.perf_counter() - pipeline_start) * 1000
|
||||
logger.info("[pipeline] Complete (%.1fms total)", total_ms)
|
||||
return ctx
|
||||
|
||||
async def run_post(self, ctx: PipelineContext, result: Any) -> Any:
|
||||
"""Run all stages' ``post_process`` hooks in order.
|
||||
|
||||
Each stage can transform the result; the final value is returned.
|
||||
Exceptions are logged and swallowed -- post-processing must not
|
||||
break a successful execution.
|
||||
"""
|
||||
current = result
|
||||
for stage in self._stages:
|
||||
try:
|
||||
current = await stage.post_process(ctx, current)
|
||||
except Exception:
|
||||
logger.exception(
|
||||
"Pipeline post_process raised in %s; continuing with previous result",
|
||||
stage.__class__.__name__,
|
||||
)
|
||||
return current
|
||||
@@ -0,0 +1,77 @@
|
||||
"""Pipeline stage base class and request/response types."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from abc import ABC, abstractmethod
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Any, Literal
|
||||
|
||||
|
||||
class PipelineRejectedError(Exception):
|
||||
"""Raised by ``AgentHost.trigger`` when a stage rejects the request."""
|
||||
|
||||
def __init__(self, stage_name: str, reason: str) -> None:
|
||||
super().__init__(f"Pipeline rejected by {stage_name}: {reason}")
|
||||
self.stage_name = stage_name
|
||||
self.reason = reason
|
||||
|
||||
|
||||
@dataclass
|
||||
class PipelineContext:
|
||||
"""Carries request data through the pipeline."""
|
||||
|
||||
entry_point_id: str
|
||||
input_data: dict[str, Any]
|
||||
correlation_id: str | None = None
|
||||
session_state: dict[str, Any] | None = None
|
||||
metadata: dict[str, Any] = field(default_factory=dict)
|
||||
|
||||
|
||||
@dataclass
|
||||
class PipelineResult:
|
||||
"""Outcome of a stage's ``process`` call."""
|
||||
|
||||
action: Literal["continue", "reject", "transform"] = "continue"
|
||||
input_data: dict[str, Any] | None = None
|
||||
rejection_reason: str | None = None
|
||||
|
||||
|
||||
class PipelineStage(ABC):
|
||||
"""Base class for all middleware stages.
|
||||
|
||||
Infrastructure stages (LLM, MCP, credentials, skills) set typed
|
||||
attributes during ``initialize()`` that the host reads after all
|
||||
stages have initialized. Request-level stages (rate limit, input
|
||||
validation, cost guard) implement ``process()``.
|
||||
|
||||
Attributes set by infrastructure stages:
|
||||
llm: LLM provider instance (set by LlmProviderStage)
|
||||
tool_registry: ToolRegistry with discovered MCP tools (set by McpRegistryStage)
|
||||
accounts_prompt: Connected accounts system prompt block (set by CredentialResolverStage)
|
||||
accounts_data: Raw account info list (set by CredentialResolverStage)
|
||||
tool_provider_map: Tool name -> provider mapping (set by CredentialResolverStage)
|
||||
skills_manager: SkillsManager instance (set by SkillRegistryStage)
|
||||
"""
|
||||
|
||||
order: int = 100
|
||||
|
||||
# Infrastructure stage outputs -- typed so _apply_pipeline_results
|
||||
# doesn't need hasattr() sniffing.
|
||||
llm: Any = None
|
||||
tool_registry: Any = None
|
||||
accounts_prompt: str = ""
|
||||
accounts_data: list[dict] | None = None
|
||||
tool_provider_map: dict[str, str] | None = None
|
||||
skills_manager: Any = None
|
||||
|
||||
async def initialize(self) -> None:
|
||||
"""Called once when the runtime starts."""
|
||||
return None
|
||||
|
||||
@abstractmethod
|
||||
async def process(self, ctx: PipelineContext) -> PipelineResult:
|
||||
"""Process the incoming request."""
|
||||
|
||||
async def post_process(self, ctx: PipelineContext, result: Any) -> Any:
|
||||
"""Optional post-execution hook. Default: pass-through."""
|
||||
return result
|
||||
@@ -0,0 +1,19 @@
|
||||
"""Built-in pipeline stages."""
|
||||
|
||||
from framework.pipeline.stages.cost_guard import CostGuardStage
|
||||
from framework.pipeline.stages.credential_resolver import CredentialResolverStage
|
||||
from framework.pipeline.stages.input_validation import InputValidationStage
|
||||
from framework.pipeline.stages.llm_provider import LlmProviderStage
|
||||
from framework.pipeline.stages.mcp_registry import McpRegistryStage
|
||||
from framework.pipeline.stages.rate_limit import RateLimitStage
|
||||
from framework.pipeline.stages.skill_registry import SkillRegistryStage
|
||||
|
||||
__all__ = [
|
||||
"CostGuardStage",
|
||||
"CredentialResolverStage",
|
||||
"InputValidationStage",
|
||||
"LlmProviderStage",
|
||||
"McpRegistryStage",
|
||||
"RateLimitStage",
|
||||
"SkillRegistryStage",
|
||||
]
|
||||
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user