Compare commits

21 Commits

| SHA1 |
|---|
| 0cf17e1c63 |
| b459a2f7a9 |
| 20bea9cd7f |
| a7709d489c |
| 18dfc997b8 |
| 92d0b6addf |
| f305745295 |
| fc22586752 |
| 646440eba3 |
| 53e5579326 |
| 29a1630d0f |
| 171f4ab2ae |
| a5ae071a03 |
| b6e2634537 |
| 9f424f2fc0 |
| 94197cbcb9 |
| 3ee6d98905 |
| a96cd546c8 |
| eb33d4f1c2 |
| 4253956326 |
| d6b05bf337 |
```diff
@@ -65,10 +65,31 @@ jobs:
           cd core
           pytest tests/ -v
 
+  test-tools:
+    name: Test Tools
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Setup Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: '3.11'
+
+      - name: Install uv
+        uses: astral-sh/setup-uv@v4
+
+      - name: Install dependencies and run tests
+        run: |
+          cd tools
+          uv sync --extra dev
+          uv pip install --python .venv/bin/python -e ../core
+          uv run --extra dev pytest tests/ -v
+
   validate:
     name: Validate Agent Exports
     runs-on: ubuntu-latest
-    needs: [lint, test]
+    needs: [lint, test, test-tools]
     steps:
       - uses: actions/checkout@v4
```
+20 -22
@@ -44,7 +44,7 @@ Aden Agent Framework is a Python-based system for building goal-driven, self-improving agents

Ensure you have installed:

- **Python 3.11+** - [Download](https://www.python.org/downloads/) (3.12 or 3.13 recommended)
- **pip** - Package installer for Python (comes with Python)
- **uv** - Python package manager ([Install](https://docs.astral.sh/uv/getting-started/installation/))
- **git** - Version control
- **Claude Code** - [Install](https://docs.anthropic.com/claude/docs/claude-code) (optional, for using the agent-building skills)
@@ -52,7 +52,7 @@ Verify installation:

```bash
python --version  # Should be 3.11+
pip --version     # Should be latest
uv --version      # Should be latest
git --version     # Any recent version
```
```diff
@@ -128,8 +128,12 @@ hive/                            # Repository root
 │
 ├── .github/                      # GitHub configuration
 │   ├── workflows/
-│   │   ├── ci.yml                # Runs on every PR
-│   │   └── release.yml           # Runs on tags
+│   │   ├── ci.yml                # Lint, test, validate on every PR
+│   │   ├── release.yml           # Runs on tags
+│   │   ├── pr-requirements.yml   # PR requirement checks
+│   │   ├── pr-check-command.yml  # PR check commands
+│   │   ├── claude-issue-triage.yml    # Automated issue triage
+│   │   └── auto-close-duplicates.yml  # Close duplicate issues
 │   ├── ISSUE_TEMPLATE/           # Bug report & feature request templates
 │   ├── PULL_REQUEST_TEMPLATE.md  # PR description template
 │   └── CODEOWNERS                # Auto-assign reviewers
```
```diff
@@ -166,7 +170,6 @@ hive/                            # Repository root
 │   │   ├── testing/              # Testing utilities
 │   │   └── __init__.py
 │   ├── pyproject.toml            # Package metadata and dependencies
-│   ├── requirements.txt          # Python dependencies
 │   ├── README.md                 # Framework documentation
 │   ├── MCP_INTEGRATION_GUIDE.md  # MCP server integration guide
 │   └── docs/                     # Protocol documentation
```
```diff
@@ -182,7 +185,6 @@ hive/                            # Repository root
 │   │   ├── mcp_server.py         # HTTP MCP server
 │   │   └── __init__.py
 │   ├── pyproject.toml            # Package metadata
-│   ├── requirements.txt          # Python dependencies
 │   └── README.md                 # Tools documentation
 │
 ├── exports/                      # AGENT PACKAGES (user-created, gitignored)
```
```diff
@@ -191,14 +193,16 @@ hive/                            # Repository root
 ├── docs/                        # Documentation
 │   ├── getting-started.md       # Quick start guide
 │   ├── configuration.md         # Configuration reference
-│   ├── architecture.md          # System architecture
-│   └── articles/                # Technical articles
+│   ├── architecture/            # System architecture
+│   ├── articles/                # Technical articles
+│   ├── quizzes/                 # Developer quizzes
+│   └── i18n/                    # Translations
 │
 ├── scripts/                     # Build & utility scripts
 │   ├── setup-python.sh          # Python environment setup
 │   └── setup.sh                 # Legacy setup script
 │
-├── quickstart.sh                # Install Claude Code skills
+├── quickstart.sh                # Interactive setup wizard
+├── ENVIRONMENT_SETUP.md         # Complete Python setup guide
 ├── README.md                    # Project overview
 ├── DEVELOPER.md                 # This file
```
@@ -375,7 +379,7 @@ def test_ticket_categorization():

```diff
 - **PEP 8** - Follow Python style guide
 - **Type hints** - Use for function signatures and class attributes
 - **Docstrings** - Document classes and public functions
-- **Black** - Code formatter (run with `black .`)
+- **Ruff** - Linter and formatter (run with `make check`)
```

```python
# Good
```
@@ -509,8 +513,8 @@ chore(deps): update React to 18.2.0

```diff
 1. Create a feature branch from `main`
 2. Make your changes with clear commits
-3. Run tests locally: `PYTHONPATH=core:exports python -m pytest`
-4. Run linting: `black --check .`
+3. Run tests locally: `make test`
+4. Run linting: `make check`
 5. Push and create a PR
 6. Fill out the PR template
 7. Request review from CODEOWNERS
```
```diff
@@ -528,16 +532,11 @@ chore(deps): update React to 18.2.0
 # Add to core framework
 cd core
-pip install <package>
-# Then add to requirements.txt or pyproject.toml
+uv add <package>
 
 # Add to tools package
 cd tools
-pip install <package>
-# Then add to requirements.txt or pyproject.toml
-
-# Reinstall in editable mode
-pip install -e .
+uv add <package>
```
### Creating a New Agent
```diff
@@ -670,9 +669,8 @@ cat .env
 # Or check shell environment
 echo $ANTHROPIC_API_KEY
 
-# Copy from .env.example if needed
-cp .env.example .env
-# Then edit .env with your API keys
+# Create .env if needed
+# Then add your API keys
```
+86 -3
@@ -21,6 +21,43 @@ This will:

- Fix package compatibility issues (openai + litellm)
- Verify all installations

## Quick Setup (Windows – PowerShell)

Windows users can use the native PowerShell setup script.

Before running the script, allow script execution for the current session:

```powershell
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
```

Run setup from the project root:

```powershell
./scripts/setup-python.ps1
```

This will:

- Check Python version (requires 3.11+)
- Create a local `.venv` virtual environment
- Install the core framework package (`framework`)
- Install the tools package (`aden_tools`)
- Fix package compatibility issues (openai + litellm)
- Verify all installations

After setup, activate the virtual environment:

```powershell
.\.venv\Scripts\Activate.ps1
```

Set `PYTHONPATH` (required in every new PowerShell session):

```powershell
$env:PYTHONPATH="core;exports"
```
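The `core;exports` value differs from the Linux/macOS `core:exports` only in the path separator. If you ever generate `PYTHONPATH` from a script, Python's `os.pathsep` picks the right separator portably; a small illustration:

```python
import os

# PYTHONPATH entries are joined with the platform path separator:
# ":" on Linux/macOS, ";" on Windows. os.pathsep selects the right one.
entries = ["core", "exports"]
print(os.pathsep.join(entries))  # core:exports on POSIX, core;exports on Windows
```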
## Alpine Linux Setup

If you are using Alpine Linux (e.g., inside a Docker container), you must install system dependencies and use a virtual environment before running the setup script:
@@ -100,6 +137,12 @@ For running agents with real LLMs:

```bash
export ANTHROPIC_API_KEY="your-key-here"
```

Windows (PowerShell):

```powershell
$env:ANTHROPIC_API_KEY="your-key-here"
```

## Running Agents

All agent commands must be run from the project root with `PYTHONPATH` set:
~~~diff
@@ -109,9 +152,14 @@ All agent commands must be run from the project root with `PYTHONPATH` set:
 PYTHONPATH=core:exports python -m agent_name COMMAND
 ```
 
-### Example Commands
+Windows (PowerShell):
 
-After building an agent via `/building-agents-construction`, use these commands:
+```powershell
+$env:PYTHONPATH="core;exports"
+python -m agent_name COMMAND
+```
+
+### Example: Support Ticket Agent
 
 ```bash
 # Validate agent structure
~~~
@@ -248,6 +296,14 @@ source .venv/bin/activate

```bash
PYTHONPATH=core:exports python -m your_agent_name demo
```

### PowerShell: “running scripts is disabled on this system”

Run once per session:

```powershell
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
```

### "ModuleNotFoundError: No module named 'framework'"

**Solution:** Install the core package:
@@ -270,6 +326,12 @@ Or run the setup script:

```bash
./quickstart.sh
```

Windows:

```powershell
./scripts/setup-python.ps1
```

### "ModuleNotFoundError: No module named 'openai._models'"

**Cause:** Outdated `openai` package (0.27.x) incompatible with `litellm`
~~~diff
@@ -284,12 +346,21 @@ pip install --upgrade "openai>=1.0.0"
 
 **Cause:** Not running from project root, missing PYTHONPATH, or agent not yet created
 
-**Solution:** Ensure you're in the project root directory, have built an agent, and use:
+**Solution:** Ensure you're in `/hive/` and use:
+
+Linux/macOS:
 
 ```bash
 PYTHONPATH=core:exports python -m your_agent_name validate
 ```
 
+Windows:
+
+```powershell
+$env:PYTHONPATH="core;exports"
+python -m support_ticket_agent validate
+```
+
 ### Agent imports fail with "broken installation"
 
 **Symptom:** `pip list` shows packages pointing to non-existent directories
~~~
@@ -304,6 +375,12 @@ pip uninstall -y framework tools

```bash
./quickstart.sh
```

Windows:

```powershell
./scripts/setup-python.ps1
```

## Package Structure

The Hive framework consists of three Python packages:
@@ -402,6 +479,12 @@ This design allows agents in `exports/` to be:

```bash
./quickstart.sh
```

Windows:

```powershell
./scripts/setup-python.ps1
```

### 2. Build Agent (Claude Code)
@@ -15,7 +15,6 @@

[MIT License](https://github.com/adenhq/hive/blob/main/LICENSE)
[Y Combinator](https://www.ycombinator.com/companies/aden)
[Docker Hub](https://hub.docker.com/u/adenhq)
[Discord](https://discord.com/invite/MXE49hrKDk)
[X](https://x.com/aden_hq)
[LinkedIn](https://www.linkedin.com/company/teamaden/)
@@ -40,6 +39,31 @@ Build reliable, self-improving AI agents without hardcoding workflows. Define yo

Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.

## Who Is Hive For?

Hive is designed for developers and teams who want to build **production-grade AI agents** without manually wiring complex workflows.

Hive is a good fit if you:

- Want AI agents that **execute real business processes**, not demos
- Prefer **goal-driven development** over hardcoded workflows
- Need **self-healing and adaptive agents** that improve over time
- Require **human-in-the-loop control**, observability, and cost limits
- Plan to run agents in **production environments**

Hive may not be the best fit if you’re only experimenting with simple agent chains or one-off scripts.

## When Should You Use Hive?

Use Hive when you need:

- Long-running, autonomous agents
- Multi-agent coordination
- Continuous improvement based on failures
- Strong monitoring, safety, and budget controls
- A framework that evolves with your goals

## What is Aden

<p align="center">
+1 -1
```diff
@@ -268,7 +268,7 @@ classDef done fill:#9e9e9e,color:#fff,stroke:#757575
 - [ ] Wake-up Tool (resume agent tasks)
 
 ### Deployment (Self-Hosted)
-- [ ] Docker container standardization
+- [ ] Worker agent Docker container standardization
 - [ ] Headless backend execution
 - [ ] Exposed API for frontend attachment
 - [ ] Local monitoring & observability
```
```diff
@@ -1348,7 +1348,9 @@ Expected output keys: {output_keys}
 LLM Response:
 {raw_response}
 
-Output ONLY the JSON object, nothing else."""
+Output ONLY the JSON object, nothing else.
+If no valid JSON object exists in the response, output exactly: {{"error": "NO_JSON_FOUND"}}
+Do NOT fabricate data or return empty objects."""
 
     try:
         result = cleaner_llm.complete(
@@ -1395,6 +1397,14 @@ Output ONLY the JSON object, nothing else."""
         parsed = json.loads(cleaned)
     except json.JSONDecodeError:
         parsed = json.loads(_fix_unescaped_newlines_in_json(cleaned))
 
+    # Validate LLM didn't return empty or fabricated data
+    if parsed.get("error") == "NO_JSON_FOUND":
+        raise ValueError("Cannot parse JSON from response")
+    if not parsed or parsed == {}:
+        raise ValueError("Cannot parse JSON from response")
+    if all(v is None for v in parsed.values()):
+        raise ValueError("Cannot parse JSON from response")
+
     logger.info("  ✓ LLM cleaned JSON output")
     return parsed
```
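The added sentinel checks can be exercised in isolation; a minimal sketch, where the `validate_parsed` helper name is illustrative rather than taken from the codebase:

```python
def validate_parsed(parsed: dict) -> dict:
    """Mirror the cleaner's guards: reject sentinel, empty, or all-None payloads."""
    if parsed.get("error") == "NO_JSON_FOUND":
        raise ValueError("Cannot parse JSON from response")
    if not parsed:
        raise ValueError("Cannot parse JSON from response")
    if all(v is None for v in parsed.values()):
        raise ValueError("Cannot parse JSON from response")
    return parsed

print(validate_parsed({"summary": "ok"}))  # valid payloads pass through unchanged
```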
```diff
@@ -75,16 +75,6 @@ class SafeEvalVisitor(ast.NodeVisitor):
     def visit_Constant(self, node: ast.Constant) -> Any:
         return node.value
 
-    # --- Number/String/Bytes/NameConstant (Python < 3.8 compat if needed) ---
-    def visit_Num(self, node: ast.Num) -> Any:
-        return node.n
-
-    def visit_Str(self, node: ast.Str) -> Any:
-        return node.s
-
-    def visit_NameConstant(self, node: ast.NameConstant) -> Any:
-        return node.value
-
     # --- Data Structures ---
     def visit_List(self, node: ast.List) -> list:
         return [self.visit(elt) for elt in node.elts]
```
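Context for the deletion: since Python 3.8, `ast.Num`, `ast.Str`, and `ast.NameConstant` are deprecated aliases and every literal parses as `ast.Constant`, so the removed visitors were dead code. A quick check:

```python
import ast

# Numeric, string, bool, and None literals all parse as ast.Constant on
# Python 3.8+, so visit_Num / visit_Str / visit_NameConstant never fire.
for src in ["42", "'hi'", "True", "None"]:
    node = ast.parse(src, mode="eval").body
    print(type(node).__name__)  # Constant
```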
```diff
@@ -378,11 +378,18 @@ class LiteLLMProvider(LLMProvider):
 
         # Execute tools and add results.
         for tool_call in message.tool_calls:
             # Parse arguments
             try:
                 args = json.loads(tool_call.function.arguments)
             except json.JSONDecodeError:
-                args = {}
+                # Surface error to LLM and skip tool execution
+                current_messages.append(
+                    {
+                        "role": "tool",
+                        "tool_call_id": tool_call.id,
+                        "content": "Invalid JSON arguments provided to tool.",
+                    }
+                )
+                continue
 
             tool_use = ToolUse(
                 id=tool_call.id,
```
```diff
@@ -167,14 +167,18 @@ class ConcurrentStorage:
             run: Run to save
             immediate: If True, save immediately (bypasses batching)
         """
+        # Invalidate summary cache since the run data is changing
+        # This ensures load_summary() fetches fresh data after the save
+        self._cache.pop(f"summary:{run.id}", None)
+
         if immediate or not self._running:
             await self._save_run_locked(run)
+            # Update cache only after successful immediate write
+            self._cache[f"run:{run.id}"] = CacheEntry(run, time.time())
         else:
+            # For batched writes, cache will be updated in _flush_batch after successful write
             await self._write_queue.put(("run", run))
 
-        # Update cache
-        self._cache[f"run:{run.id}"] = CacheEntry(run, time.time())
-
     async def _save_run_locked(self, run: Run) -> None:
         """Save a run with file locking, including index locks."""
         lock_key = f"run:{run.id}"
@@ -363,8 +367,12 @@ class ConcurrentStorage:
             try:
                 if item_type == "run":
                     await self._save_run_locked(item)
+                    # Update cache only after successful batched write
+                    # This fixes the race condition where cache was updated before write completed
+                    self._cache[f"run:{item.id}"] = CacheEntry(item, time.time())
             except Exception as e:
                 logger.error(f"Failed to save {item_type}: {e}")
+                # Cache is NOT updated on failure - prevents stale/inconsistent cache state
 
     async def _flush_pending(self) -> None:
         """Flush all pending writes."""
```
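The ordering the diff enforces (invalidate the derived summary first, cache the run only after the write succeeds) can be sketched with a plain dict; the key names mirror the storage class above, but the helper itself is illustrative:

```python
import time

cache: dict = {}

def save_run(run_id: str, data: dict) -> None:
    # Invalidate the derived summary first, so readers cannot see stale data
    cache.pop(f"summary:{run_id}", None)
    # ... the durable write to disk would happen here ...
    # Only cache the run once the write has succeeded
    cache[f"run:{run_id}"] = (data, time.time())

cache["summary:r1"] = ("stale summary", 0.0)
save_run("r1", {"status": "COMPLETED"})
print("summary:r1" in cache, "run:r1" in cache)  # False True
```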
@@ -0,0 +1,162 @@

```python
"""Tests for ConcurrentStorage race condition and cache invalidation fixes."""

import asyncio
from pathlib import Path

import pytest

from framework.schemas.run import Run, RunMetrics, RunStatus
from framework.storage.concurrent import ConcurrentStorage


def create_test_run(
    run_id: str, goal_id: str = "test-goal", status: RunStatus = RunStatus.RUNNING
) -> Run:
    """Create a minimal test Run object."""
    return Run(
        id=run_id,
        goal_id=goal_id,
        status=status,
        narrative="Test run",
        metrics=RunMetrics(
            nodes_executed=[],
        ),
        decisions=[],
        problems=[],
    )


@pytest.mark.asyncio
async def test_cache_invalidation_on_save(tmp_path: Path):
    """Test that summary cache is invalidated when a run is saved.

    This tests the fix for the cache invalidation bug where load_summary()
    would return stale data after a run was updated.
    """
    storage = ConcurrentStorage(tmp_path)
    await storage.start()

    try:
        run_id = "test-run-1"

        # Create and save initial run
        run = create_test_run(run_id, status=RunStatus.RUNNING)
        await storage.save_run(run, immediate=True)

        # Load summary to populate the cache
        summary = await storage.load_summary(run_id)
        assert summary is not None
        assert summary.status == RunStatus.RUNNING

        # Update run with new status
        run.status = RunStatus.COMPLETED
        await storage.save_run(run, immediate=True)

        # Load summary again - should get fresh data, not cached stale data
        summary = await storage.load_summary(run_id)
        assert summary is not None
        assert summary.status == RunStatus.COMPLETED, (
            "Summary cache should be invalidated on save - got stale data"
        )
    finally:
        await storage.stop()


@pytest.mark.asyncio
async def test_batched_write_cache_consistency(tmp_path: Path):
    """Test that cache is only updated after successful batched write.

    This tests the fix for the race condition where cache was updated
    before the batched write completed.
    """
    storage = ConcurrentStorage(tmp_path, batch_interval=0.05)
    await storage.start()

    try:
        run_id = "test-run-2"

        # Save via batching (immediate=False)
        run = create_test_run(run_id, status=RunStatus.RUNNING)
        await storage.save_run(run, immediate=False)

        # Before batch flush, cache should NOT contain the run
        # (This is the fix - previously cache was updated immediately)
        cache_key = f"run:{run_id}"
        assert cache_key not in storage._cache, (
            "Cache should not be updated before batch is flushed"
        )

        # Wait for batch to flush
        await asyncio.sleep(0.1)

        # After batch flush, cache should contain the run
        assert cache_key in storage._cache, "Cache should be updated after batch flush"

        # Verify data on disk matches cache
        loaded_run = await storage.load_run(run_id, use_cache=False)
        assert loaded_run is not None
        assert loaded_run.id == run_id
        assert loaded_run.status == RunStatus.RUNNING
    finally:
        await storage.stop()


@pytest.mark.asyncio
async def test_immediate_write_updates_cache(tmp_path: Path):
    """Test that immediate writes still update cache correctly."""
    storage = ConcurrentStorage(tmp_path)
    await storage.start()

    try:
        run_id = "test-run-3"

        # Save with immediate=True
        run = create_test_run(run_id, status=RunStatus.COMPLETED)
        await storage.save_run(run, immediate=True)

        # Cache should be updated immediately for immediate writes
        cache_key = f"run:{run_id}"
        assert cache_key in storage._cache, "Cache should be updated after immediate write"

        # Verify cached value is correct
        cached_run = storage._cache[cache_key].value
        assert cached_run.id == run_id
        assert cached_run.status == RunStatus.COMPLETED
    finally:
        await storage.stop()


@pytest.mark.asyncio
async def test_summary_cache_invalidated_on_multiple_saves(tmp_path: Path):
    """Test that summary cache is invalidated on each save, not just the first."""
    storage = ConcurrentStorage(tmp_path)
    await storage.start()

    try:
        run_id = "test-run-4"

        # First save
        run = create_test_run(run_id, status=RunStatus.RUNNING)
        await storage.save_run(run, immediate=True)

        # Load summary to cache it
        summary1 = await storage.load_summary(run_id)
        assert summary1.status == RunStatus.RUNNING

        # Second save with the same status
        run.status = RunStatus.RUNNING
        await storage.save_run(run, immediate=True)

        # Load summary - should be fresh
        summary2 = await storage.load_summary(run_id)
        assert summary2.status == RunStatus.RUNNING

        # Third save with final status
        run.status = RunStatus.COMPLETED
        await storage.save_run(run, immediate=True)

        # Load summary - should be fresh again
        summary3 = await storage.load_summary(run_id)
        assert summary3.status == RunStatus.COMPLETED
    finally:
        await storage.stop()
```
```diff
@@ -209,6 +209,62 @@ class TestLiteLLMProviderToolUse:
         assert result.output_tokens == 25  # 15 + 10
         assert mock_completion.call_count == 2
 
+    @patch("litellm.completion")
+    def test_complete_with_tools_invalid_json_arguments_are_handled(self, mock_completion):
+        """Test that invalid JSON tool arguments do not execute the tool."""
+        # Mock response with invalid JSON arguments
+        tool_call_response = MagicMock()
+        tool_call_response.choices = [MagicMock()]
+        tool_call_response.choices[0].message.content = None
+        tool_call_response.choices[0].message.tool_calls = [MagicMock()]
+        tool_call_response.choices[0].message.tool_calls[0].id = "call_123"
+        tool_call_response.choices[0].message.tool_calls[0].function.name = "test_tool"
+        tool_call_response.choices[0].message.tool_calls[0].function.arguments = "{invalid json"
+        tool_call_response.choices[0].finish_reason = "tool_calls"
+        tool_call_response.model = "gpt-4o-mini"
+        tool_call_response.usage.prompt_tokens = 10
+        tool_call_response.usage.completion_tokens = 5
+
+        # Final response (LLM continues after tool error)
+        final_response = MagicMock()
+        final_response.choices = [MagicMock()]
+        final_response.choices[0].message.content = "Handled error"
+        final_response.choices[0].message.tool_calls = None
+        final_response.choices[0].finish_reason = "stop"
+        final_response.model = "gpt-4o-mini"
+        final_response.usage.prompt_tokens = 5
+        final_response.usage.completion_tokens = 5
+
+        mock_completion.side_effect = [tool_call_response, final_response]
+
+        provider = LiteLLMProvider(model="gpt-4o-mini", api_key="test-key")
+
+        tools = [
+            Tool(
+                name="test_tool",
+                description="Test tool",
+                parameters={"properties": {}, "required": []},
+            )
+        ]
+
+        called = {"value": False}
+
+        def tool_executor(tool_use: ToolUse) -> ToolResult:
+            called["value"] = True
+            return ToolResult(
+                tool_use_id=tool_use.id, content="should not be called", is_error=False
+            )
+
+        result = provider.complete_with_tools(
+            messages=[{"role": "user", "content": "Run tool"}],
+            system="You are a test assistant.",
+            tools=tools,
+            tool_executor=tool_executor,
+        )
+
+        assert called["value"] is False
+        assert result.content == "Handled error"
+
 
 class TestToolConversion:
     """Test tool format conversion."""
```
@@ -362,7 +362,6 @@ class AgentRequest(BaseModel):

```python
            raise ValueError('max_tokens too high')
        return v
```

### Output Sanitization

> **Note:** The following snippet is illustrative and shows a simplified example
> of output sanitization logic. Actual implementations may differ.
@@ -14,7 +14,6 @@

[MIT License](https://github.com/adenhq/hive/blob/main/LICENSE)
[Y Combinator](https://www.ycombinator.com/companies/aden)
[Docker Hub](https://hub.docker.com/u/adenhq)
[Discord](https://discord.com/invite/MXE49hrKDk)
[X](https://x.com/aden_hq)
[LinkedIn](https://www.linkedin.com/company/teamaden/)

@@ -66,7 +65,6 @@ Aden es una plataforma para construir, desplegar, operar y adaptar agentes de IA

### Prerequisites

- [Python 3.11+](https://www.python.org/downloads/) - For agent development
- [Docker](https://docs.docker.com/get-docker/) (v20.10+) - Optional, for containerized tools

### Installation
@@ -0,0 +1,41 @@

# Examples

This directory contains two types of examples to help you build agents with the Hive framework.

## Recipes vs Templates

### [recipes/](recipes/) — "How to make it"

A recipe is a **prompt-only** description of an agent. It tells you the goal, the nodes, the prompts, the edge routing logic, and what tools to wire in — but it's not runnable code. You read the recipe, then build the agent yourself.

Use recipes when you want to:

- Understand a pattern before committing to an implementation
- Adapt an idea to your own codebase or tooling
- Learn how to think about agent design (goals, nodes, edges, prompts)

### [templates/](templates/) — "Ready to eat"

A template is a **working agent scaffold** that follows the standard Hive export structure. Copy the folder, rename it, swap in your own prompts and tools, and run it.

Use templates when you want to:

- Get a new agent running quickly
- Start from a known-good structure instead of from scratch
- See how all the pieces (goal, nodes, edges, config, CLI) fit together in real code

## How to use a template

```bash
# 1. Copy the template
cp -r examples/templates/marketing_agent exports/my_agent

# 2. Edit the goal, nodes, and edges in agent.py and nodes/__init__.py

# 3. Run it
PYTHONPATH=core python -m exports.my_agent --help
```

## How to use a recipe

1. Read the recipe markdown file
2. Use the patterns described to build your own agent — either manually or with the builder agent (`/agent-workflow`)
3. Refer to the [core README](../core/README.md) for framework API details
@@ -0,0 +1,27 @@

# Recipes

A recipe describes an agent's design — the goal, nodes, prompts, edge logic, and tools — without providing runnable code. Think of it as a blueprint: it tells you *how* to build the agent, but you do the building.

## What's in a recipe

Each recipe is a markdown file (or folder with a markdown file) containing:

- **Goal**: What the agent accomplishes, including success criteria and constraints
- **Nodes**: Each step in the workflow, with the system prompt, node type, and input/output keys
- **Edges**: How nodes connect, including conditions and routing logic
- **Tools**: What external tools or MCP servers the agent needs
- **Usage notes**: Tips, gotchas, and suggested variations

## How to use a recipe

1. Read through the recipe to understand the design
2. Create a new agent using the standard export structure (see [templates/](../templates/) for a scaffold)
3. Translate the recipe's goal, nodes, and edges into code
4. Wire in the tools described
5. Test and iterate

## Available recipes

| Recipe | Description |
|--------|-------------|
| [marketing_agent](marketing_agent/) | Multi-channel marketing content generator with audience analysis and A/B copy variants |
@@ -0,0 +1,156 @@

# Recipe: Marketing Content Agent

A multi-channel marketing content generator. Given a product description and target audience, this agent analyzes the audience, generates tailored copy for multiple channels, and produces A/B variants.

## Goal

```
Name: Marketing Content Generator
Description: Generate targeted marketing content across multiple channels
for a given product and audience.

Success criteria:
- Audience analysis is produced with demographics and pain points
- At least 2 channel-specific content pieces are generated
- A/B variants are provided for each piece
- All content aligns with the specified brand voice

Constraints:
- (hard) No competitor brand names in generated content
- (soft) Content should be under 280 characters for social media channels
```

## Input / Output

**Input:**

- `product_description` (str) — What the product is and does
- `target_audience` (str) — Who the content is for
- `brand_voice` (str) — Tone and style guidelines (e.g., "professional but approachable")
- `channels` (list[str]) — Target channels, e.g. `["email", "twitter", "linkedin"]`

**Output:**

- `audience_analysis` (dict) — Demographics, pain points, motivations
- `content` (list[dict]) — Per-channel content with A/B variants

## Workflow

```
[analyze_audience] → [generate_content] → [review_and_refine]
                                                 |
                                          (conditional)
                                                 |
                        needs_revision == True  → [generate_content]
                        needs_revision == False → (done)
```

## Nodes

### 1. analyze_audience

| Field | Value |
|-------|-------|
| Type | `llm_generate` |
| Input keys | `product_description`, `target_audience` |
| Output keys | `audience_analysis` |
| Tools | None |

**System prompt:**

```
You are a marketing strategist. Analyze the target audience for a product.

Product: {product_description}
Target audience: {target_audience}

Produce a structured analysis in JSON:
{{
  "audience_analysis": {{
    "demographics": "...",
    "pain_points": ["..."],
    "motivations": ["..."],
    "preferred_channels": ["..."],
    "messaging_angle": "..."
  }}
}}
```
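The doubled braces in the prompt exist to survive template substitution; assuming the framework fills placeholders with Python's `str.format`, `{{` renders as a literal `{` while single-brace keys are replaced:

```python
# Doubled braces escape literal JSON braces when a prompt is a format template.
template = 'Target audience: {target_audience}\nOutput JSON: {{"demographics": "..."}}'
print(template.format(target_audience="developers"))
# Target audience: developers
# Output JSON: {"demographics": "..."}
```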
### 2. generate_content

| Field | Value |
|-------|-------|
| Type | `llm_generate` |
| Input keys | `product_description`, `audience_analysis`, `brand_voice`, `channels` |
| Output keys | `content` |
| Tools | None |

**System prompt:**

```
You are a marketing copywriter. Generate content for each channel.

Product: {product_description}
Audience analysis: {audience_analysis}
Brand voice: {brand_voice}
Channels: {channels}

For each channel, produce two variants (A and B).

Output as JSON:
{{
  "content": [
    {{
      "channel": "twitter",
      "variant_a": "...",
      "variant_b": "..."
    }}
  ]
}}
```

### 3. review_and_refine

| Field | Value |
|-------|-------|
| Type | `llm_generate` |
| Input keys | `content`, `brand_voice` |
| Output keys | `content`, `needs_revision` |
| Tools | None |

**System prompt:**

```
You are a senior marketing editor. Review the following content for brand
voice alignment, clarity, and channel appropriateness.

Content: {content}
Brand voice: {brand_voice}

If any piece needs revision, fix it and set needs_revision to true.
If everything looks good, return the content unchanged with needs_revision false.

Output as JSON:
{{
  "content": [...],
  "needs_revision": false
}}
```

## Edges

| Source | Target | Condition | Priority |
|--------|--------|-----------|----------|
| analyze_audience | generate_content | `on_success` | 0 |
| generate_content | review_and_refine | `on_success` | 0 |
| review_and_refine | generate_content | `conditional: needs_revision == True` | 10 |

The `review_and_refine → generate_content` loop has higher priority, so it is checked first. If `needs_revision` is false, execution ends at `review_and_refine` (a terminal node).

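The priority rule can be sketched in plain Python. This is a hedged illustration, not the framework's actual executor; `Edge` and `pick_next` are names invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Edge:
    source: str
    target: str
    priority: int = 0
    # Predicate over the shared state; None stands in for "on_success".
    condition: callable = None

def pick_next(edges, source, state):
    """Return the target of the first matching edge, checking higher priorities first."""
    candidates = sorted(
        (e for e in edges if e.source == source),
        key=lambda e: -e.priority,
    )
    for e in candidates:
        if e.condition is None or e.condition(state):
            return e.target
    return None  # no edge matched: the node is terminal

edges = [
    Edge("review_and_refine", "generate_content", priority=10,
         condition=lambda s: s.get("needs_revision")),
]

# Loop back while revision is needed; stop when it is not.
assert pick_next(edges, "review_and_refine", {"needs_revision": True}) == "generate_content"
assert pick_next(edges, "review_and_refine", {"needs_revision": False}) is None
```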
## Tools

This recipe uses no external tools — all nodes are `llm_generate`. To extend it, consider adding:

- A web search tool for competitive analysis in the `analyze_audience` node
- A URL shortener tool for social media content
- An image generation tool for visual content variants

## Variations

- **Single-channel mode**: Remove the `channels` input and hardcode one channel for simpler output
- **With approval gate**: Add a `human_input` node between `review_and_refine` and the terminal to require human sign-off
- **With analytics**: Add a `function` node that logs generated content to a tracking system

@@ -0,0 +1,38 @@
# Templates

A template is a working agent scaffold that follows the standard Hive export structure. Copy it, rename it, customize the goal/nodes/edges, and run it.

## What's in a template

Each template is a complete agent package:

```
template_name/
├── __init__.py      # Package exports
├── __main__.py      # CLI entry point
├── agent.py         # Goal, edges, graph spec, agent class
├── config.py        # Runtime configuration
├── nodes/
│   └── __init__.py  # Node definitions (NodeSpec instances)
└── README.md        # What this template demonstrates
```

## How to use a template

```bash
# 1. Copy to your exports directory
cp -r examples/templates/marketing_agent exports/my_marketing_agent

# 2. Update the module references in __main__.py and __init__.py

# 3. Customize goal, nodes, edges, and prompts

# 4. Run it
PYTHONPATH=core python -m exports.my_marketing_agent --input '{"product_description": "..."}'
```

## Available templates

| Template | Description |
|----------|-------------|
| [marketing_agent](marketing_agent/) | Multi-channel marketing content generator with audience analysis, content generation, and editorial review nodes |

@@ -0,0 +1,57 @@
# Template: Marketing Content Agent

A multi-channel marketing content generator. Given a product and audience, this agent analyzes the audience, generates tailored copy for multiple channels with A/B variants, and reviews the output for quality.

## Workflow

```
[analyze-audience] → [generate-content] → [review-and-refine]
                                                  |
                                            (conditional)
                                                  |
                          needs_revision == True  → [generate-content]
                          needs_revision == False → (done)
```

## Nodes

| Node | Type | Description |
|------|------|-------------|
| `analyze-audience` | `llm_generate` | Produces structured audience analysis |
| `generate-content` | `llm_generate` | Creates per-channel copy with A/B variants |
| `review-and-refine` | `llm_generate` | Reviews and optionally revises content |

## Usage

```bash
# From the repo root
PYTHONPATH=core python -m examples.templates.marketing_agent

# With custom input
PYTHONPATH=core python -m examples.templates.marketing_agent --input '{
  "product_description": "A fitness tracking app",
  "target_audience": "Health-conscious millennials",
  "brand_voice": "Energetic and motivational",
  "channels": ["instagram", "email"]
}'
```

## Customization ideas

- Add a `function` node to call an analytics API and inform audience analysis with real data
- Add a `human_input` pause node before final output for editorial approval
- Swap `llm_generate` nodes to `llm_tool_use` and add web search tools for competitive research
- Add an image generation tool to produce visual assets alongside copy

## File structure

```
marketing_agent/
├── __init__.py      # Package exports
├── __main__.py      # CLI entry point
├── agent.py         # Goal, edges, graph spec, MarketingAgent class
├── config.py        # RuntimeConfig and AgentMetadata
├── nodes/
│   └── __init__.py  # NodeSpec definitions
└── README.md        # This file
```

@@ -0,0 +1,6 @@
"""Marketing Content Agent — template example."""

from .agent import MarketingAgent, goal, edges, nodes
from .config import default_config

__all__ = ["MarketingAgent", "goal", "edges", "nodes", "default_config"]

@@ -0,0 +1,31 @@
"""CLI entry point for Marketing Content Agent."""

import asyncio
import json
import sys


def main():
    from .agent import MarketingAgent
    from .config import default_config

    # Simple CLI — replace with Click for production use
    input_data = {
        "product_description": "An AI-powered project management tool for remote teams",
        "target_audience": "Engineering managers at mid-size tech companies",
        "brand_voice": "Professional but approachable, concise, data-driven",
        "channels": ["email", "twitter", "linkedin"],
    }

    # Accept JSON input from the command line: --input '{"key": "value"}'
    # (require the value argument too, so a bare --input doesn't raise IndexError)
    if len(sys.argv) > 2 and sys.argv[1] == "--input":
        input_data = json.loads(sys.argv[2])

    agent = MarketingAgent(config=default_config)
    result = asyncio.run(agent.run(input_data))

    print(json.dumps(result, indent=2))


if __name__ == "__main__":
    main()

@@ -0,0 +1,161 @@
"""Marketing Content Agent — goal, edges, graph spec, and agent class."""

from pathlib import Path

from framework.graph import EdgeCondition, EdgeSpec, Goal, SuccessCriterion, Constraint
from framework.graph.edge import GraphSpec
from framework.graph.executor import GraphExecutor
from framework.runtime.core import Runtime
from framework.llm.anthropic import AnthropicProvider

from .config import default_config, RuntimeConfig
from .nodes import all_nodes

# ---------------------------------------------------------------------------
# Goal
# ---------------------------------------------------------------------------
goal = Goal(
    id="marketing-content",
    name="Marketing Content Generator",
    description=(
        "Generate targeted marketing content across multiple channels "
        "for a given product and audience."
    ),
    success_criteria=[
        SuccessCriterion(
            id="audience-analyzed",
            description="Audience analysis is produced with demographics and pain points",
            metric="output_contains",
            target="audience_analysis",
        ),
        SuccessCriterion(
            id="content-generated",
            description="At least 2 channel-specific content pieces are generated",
            metric="custom",
            target="len(content) >= 2",
        ),
        SuccessCriterion(
            id="variants-provided",
            description="A/B variants are provided for each content piece",
            metric="custom",
            target="all variants present",
        ),
    ],
    constraints=[
        Constraint(
            id="no-competitor-names",
            description="No competitor brand names in generated content",
            constraint_type="hard",
            category="safety",
        ),
        Constraint(
            id="social-length",
            description="Social media content should be under 280 characters",
            constraint_type="soft",
            category="quality",
        ),
    ],
    input_schema={
        "product_description": {"type": "string"},
        "target_audience": {"type": "string"},
        "brand_voice": {"type": "string"},
        "channels": {"type": "array", "items": {"type": "string"}},
    },
    output_schema={
        "audience_analysis": {"type": "object"},
        "content": {"type": "array"},
    },
)

# ---------------------------------------------------------------------------
# Edges
# ---------------------------------------------------------------------------
edges = [
    EdgeSpec(
        id="analyze-to-generate",
        source="analyze-audience",
        target="generate-content",
        condition=EdgeCondition.ON_SUCCESS,
        description="After audience analysis, generate content",
    ),
    EdgeSpec(
        id="generate-to-review",
        source="generate-content",
        target="review-and-refine",
        condition=EdgeCondition.ON_SUCCESS,
        description="After content generation, review and refine",
    ),
    EdgeSpec(
        id="review-to-regenerate",
        source="review-and-refine",
        target="generate-content",
        condition=EdgeCondition.CONDITIONAL,
        condition_expr="needs_revision == True",
        priority=10,
        description="If revision needed, loop back to content generation",
    ),
]

# ---------------------------------------------------------------------------
# Graph structure
# ---------------------------------------------------------------------------
entry_node = "analyze-audience"
entry_points = {"start": "analyze-audience"}
terminal_nodes = ["review-and-refine"]
pause_nodes = []
nodes = all_nodes


# ---------------------------------------------------------------------------
# Agent class
# ---------------------------------------------------------------------------
class MarketingAgent:
    """Multi-channel marketing content generator agent."""

    def __init__(self, config: RuntimeConfig | None = None):
        self.config = config or default_config
        self.goal = goal
        self.nodes = nodes
        self.edges = edges
        self.entry_node = entry_node
        self.terminal_nodes = terminal_nodes
        self.executor = None

    def _build_graph(self) -> GraphSpec:
        return GraphSpec(
            id="marketing-content-graph",
            goal_id=self.goal.id,
            entry_node=self.entry_node,
            entry_points=entry_points,
            terminal_nodes=self.terminal_nodes,
            pause_nodes=pause_nodes,
            nodes=self.nodes,
            edges=self.edges,
            default_model=self.config.model,
            max_tokens=self.config.max_tokens,
            description="Marketing content generation workflow",
        )

    def _create_executor(self):
        runtime = Runtime(storage_path=Path(self.config.storage_path).expanduser())
        llm = AnthropicProvider(model=self.config.model)
        self.executor = GraphExecutor(runtime=runtime, llm=llm)
        return self.executor

    async def run(self, context: dict, mock_mode: bool = False) -> dict:
        graph = self._build_graph()
        executor = self._create_executor()
        result = await executor.execute(
            graph=graph,
            goal=self.goal,
            input_data=context,
        )
        return {
            "success": result.success,
            "output": result.output,
            "steps": result.steps_executed,
            "path": result.path,
        }


default_agent = MarketingAgent()

@@ -0,0 +1,24 @@
"""Runtime configuration for Marketing Content Agent."""

from dataclasses import dataclass, field


@dataclass
class RuntimeConfig:
    model: str = "claude-haiku-4-5-20251001"
    max_tokens: int = 2048
    storage_path: str = "~/.hive/storage"
    mock_mode: bool = False


@dataclass
class AgentMetadata:
    name: str = "marketing_agent"
    version: str = "0.1.0"
    description: str = "Multi-channel marketing content generator"
    author: str = ""
    tags: list[str] = field(default_factory=lambda: ["marketing", "content", "template"])


default_config = RuntimeConfig()
metadata = AgentMetadata()

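Because `RuntimeConfig` is a plain dataclass, variants can be derived with `dataclasses.replace` instead of mutating the shared `default_config`. A self-contained sketch (the dataclass is restated here so the snippet runs on its own):

```python
from dataclasses import dataclass, replace

@dataclass
class RuntimeConfig:
    model: str = "claude-haiku-4-5-20251001"
    max_tokens: int = 2048
    storage_path: str = "~/.hive/storage"
    mock_mode: bool = False

default_config = RuntimeConfig()

# Derive a test config without touching the shared default
test_config = replace(default_config, mock_mode=True, max_tokens=512)
assert default_config.mock_mode is False
assert test_config.mock_mode is True and test_config.max_tokens == 512
```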
@@ -0,0 +1,106 @@
"""Node definitions for Marketing Content Agent."""

from framework.graph import NodeSpec

# ---------------------------------------------------------------------------
# Node 1: Analyze the target audience
# ---------------------------------------------------------------------------
analyze_audience_node = NodeSpec(
    id="analyze-audience",
    name="Analyze Audience",
    description="Produce a structured audience analysis from the product and target audience description.",
    node_type="llm_generate",
    input_keys=["product_description", "target_audience"],
    output_keys=["audience_analysis"],
    system_prompt="""\
You are a marketing strategist. Analyze the target audience for a product.

Product: {product_description}
Target audience: {target_audience}

Produce a structured analysis as raw JSON (no markdown):
{{
  "audience_analysis": {{
    "demographics": "...",
    "pain_points": ["..."],
    "motivations": ["..."],
    "preferred_channels": ["..."],
    "messaging_angle": "..."
  }}
}}
""",
    tools=[],
    max_retries=2,
)

# ---------------------------------------------------------------------------
# Node 2: Generate channel-specific content with A/B variants
# ---------------------------------------------------------------------------
generate_content_node = NodeSpec(
    id="generate-content",
    name="Generate Content",
    description="Create marketing copy for each requested channel with two variants per channel.",
    node_type="llm_generate",
    input_keys=["product_description", "audience_analysis", "brand_voice", "channels"],
    output_keys=["content"],
    system_prompt="""\
You are a marketing copywriter. Generate content for each channel.

Product: {product_description}
Audience analysis: {audience_analysis}
Brand voice: {brand_voice}
Channels: {channels}

For each channel, produce two variants (A and B).

Output as raw JSON (no markdown):
{{
  "content": [
    {{
      "channel": "twitter",
      "variant_a": "...",
      "variant_b": "..."
    }}
  ]
}}
""",
    tools=[],
    max_retries=2,
)

# ---------------------------------------------------------------------------
# Node 3: Review and refine content
# ---------------------------------------------------------------------------
review_and_refine_node = NodeSpec(
    id="review-and-refine",
    name="Review and Refine",
    description="Review generated content for brand voice alignment and channel fit. Revise if needed.",
    node_type="llm_generate",
    input_keys=["content", "brand_voice"],
    output_keys=["content", "needs_revision"],
    system_prompt="""\
You are a senior marketing editor. Review the following content for brand
voice alignment, clarity, and channel appropriateness.

Content: {content}
Brand voice: {brand_voice}

If any piece needs revision, fix it and set needs_revision to true.
If everything looks good, return the content unchanged with needs_revision false.

Output as raw JSON (no markdown):
{{
  "content": [...],
  "needs_revision": false
}}
""",
    tools=[],
    max_retries=2,
)

# All nodes for easy import
all_nodes = [
    analyze_audience_node,
    generate_content_node,
    review_and_refine_node,
]

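The doubled braces in the prompts above suggest the framework renders `system_prompt` with Python's `str.format`, so literal JSON braces must be written as `{{` and `}}` (the rendering mechanism is an assumption inferred from that escaping):

```python
# {product} is substituted; {{ }} collapse to literal braces after .format()
template = 'Product: {product}\nOutput JSON: {{"name": "..."}}'
rendered = template.format(product="Fitness app")
assert rendered == 'Product: Fitness app\nOutput JSON: {"name": "..."}'
```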
@@ -0,0 +1,251 @@
<#
setup-python.ps1 - Python Environment Setup for Aden Agent Framework

This script sets up the Python environment with all required packages
for building and running goal-driven agents.
#>

$ErrorActionPreference = "Stop"

# Colors for output
$RED = "Red"
$GREEN = "Green"
$YELLOW = "Yellow"
$BLUE = "Cyan"

# Get the directory where this script is located
$SCRIPT_DIR = Split-Path -Parent $MyInvocation.MyCommand.Path
$PROJECT_ROOT = Split-Path -Parent $SCRIPT_DIR

Write-Host ""
Write-Host "=================================================="
Write-Host " Aden Agent Framework - Python Setup"
Write-Host "=================================================="
Write-Host ""

# Check for Python
$pythonCmd = $null
if (Get-Command python -ErrorAction SilentlyContinue) {
    $pythonCmd = "python"
}

if (-not $pythonCmd) {
    Write-Host "Error: Python is not installed." -ForegroundColor $RED
    Write-Host "Please install Python 3.11+ from https://python.org"
    exit 1
}

# Check Python version
$versionInfo = & $pythonCmd -c "import sys; print(f'{sys.version_info.major}.{sys.version_info.minor}')"
$major = & $pythonCmd -c "import sys; print(sys.version_info.major)"
$minor = & $pythonCmd -c "import sys; print(sys.version_info.minor)"

Write-Host "Detected Python: $versionInfo" -ForegroundColor $BLUE

if ($major -lt 3 -or ($major -eq 3 -and $minor -lt 11)) {
    Write-Host "Error: Python 3.11+ is required (found $versionInfo)" -ForegroundColor $RED
    Write-Host "Please upgrade your Python installation"
    exit 1
}

Write-Host "[OK] Python version check passed" -ForegroundColor $GREEN
Write-Host ""

# Create and activate virtual environment
Write-Host "=================================================="
Write-Host "Setting up Python Virtual Environment"
Write-Host "=================================================="
Write-Host ""

$VENV_PATH = Join-Path $PROJECT_ROOT ".venv"
$VENV_PYTHON = Join-Path $VENV_PATH "Scripts\python.exe"
$VENV_ACTIVATE = Join-Path $VENV_PATH "Scripts\Activate.ps1"

if (-not (Test-Path $VENV_PYTHON)) {
    Write-Host "Creating virtual environment at .venv..."
    & $pythonCmd -m venv $VENV_PATH
    Write-Host "[OK] Virtual environment created" -ForegroundColor $GREEN
}
else {
    Write-Host "[OK] Virtual environment already exists" -ForegroundColor $GREEN
}

# Activate venv (dot-source so the activation persists in this session;
# the call operator `&` would run it in a child scope and have no effect)
Write-Host "Activating virtual environment..."
. $VENV_ACTIVATE
Write-Host "[OK] Virtual environment activated" -ForegroundColor $GREEN

# From here on, always use venv python
$pythonCmd = $VENV_PYTHON

Write-Host ""

# Check for pip
try {
    & $pythonCmd -m pip --version | Out-Null
}
catch {
    Write-Host "Error: pip is not installed" -ForegroundColor $RED
    Write-Host "Please install pip for Python $versionInfo"
    exit 1
}

Write-Host "[OK] pip detected" -ForegroundColor $GREEN
Write-Host ""

# Upgrade pip, setuptools, and wheel
Write-Host "Upgrading pip, setuptools, and wheel..."
& $pythonCmd -m pip install --upgrade pip setuptools wheel
Write-Host "[OK] Core packages upgraded" -ForegroundColor $GREEN
Write-Host ""

# Install core framework package
Write-Host "=================================================="
Write-Host "Installing Core Framework Package"
Write-Host "=================================================="
Write-Host ""

Set-Location "$PROJECT_ROOT\core"

if (Test-Path "pyproject.toml") {
    Write-Host "Installing framework from core/ (editable mode)..."
    & $pythonCmd -m pip install -e . | Out-Null
    Write-Host "[OK] Framework package installed" -ForegroundColor $GREEN
}
else {
    Write-Host "[WARN] No pyproject.toml found in core/, skipping framework installation" -ForegroundColor $YELLOW
}

Write-Host ""

# Install tools package
Write-Host "=================================================="
Write-Host "Installing Tools Package (aden_tools)"
Write-Host "=================================================="
Write-Host ""

Set-Location "$PROJECT_ROOT\tools"

if (Test-Path "pyproject.toml") {
    Write-Host "Installing aden_tools from tools/ (editable mode)..."
    & $pythonCmd -m pip install -e . | Out-Null
    Write-Host "[OK] Tools package installed" -ForegroundColor $GREEN
}
else {
    Write-Host "Error: No pyproject.toml found in tools/" -ForegroundColor $RED
    exit 1
}

Write-Host ""

# Fix openai version compatibility with litellm
Write-Host "=================================================="
Write-Host "Fixing Package Compatibility"
Write-Host "=================================================="
Write-Host ""

try {
    $openaiVersion = & $pythonCmd -c "import openai; print(openai.__version__)"
}
catch {
    $openaiVersion = "not_installed"
}

if ($openaiVersion -eq "not_installed") {
    Write-Host "Installing openai package..."
    & $pythonCmd -m pip install "openai>=1.0.0" | Out-Null
    Write-Host "[OK] openai package installed" -ForegroundColor $GREEN
}
elseif ($openaiVersion.StartsWith("0.")) {
    Write-Host "Found old openai version: $openaiVersion" -ForegroundColor $YELLOW
    Write-Host "Upgrading to openai 1.x+ for litellm compatibility..."
    & $pythonCmd -m pip install --upgrade "openai>=1.0.0" | Out-Null
    $openaiVersion = & $pythonCmd -c "import openai; print(openai.__version__)"
    Write-Host "[OK] openai upgraded to $openaiVersion" -ForegroundColor $GREEN
}
else {
    Write-Host "[OK] openai $openaiVersion is compatible" -ForegroundColor $GREEN
}

Write-Host ""

# Verify installations
Write-Host "=================================================="
Write-Host "Verifying Installation"
Write-Host "=================================================="
Write-Host ""

Set-Location $PROJECT_ROOT

# Test framework import
& $pythonCmd -c "import framework" 2>$null
if ($LASTEXITCODE -eq 0) {
    Write-Host "[OK] framework package imports successfully" -ForegroundColor $GREEN
}
else {
    Write-Host "[FAIL] framework package import failed" -ForegroundColor $RED
}

# Test aden_tools import
& $pythonCmd -c "import aden_tools" 2>$null
if ($LASTEXITCODE -eq 0) {
    Write-Host "[OK] aden_tools package imports successfully" -ForegroundColor $GREEN
}
else {
    Write-Host "[FAIL] aden_tools package import failed" -ForegroundColor $RED
    exit 1
}

# Test litellm
& $pythonCmd -c "import litellm" 2>$null
if ($LASTEXITCODE -eq 0) {
    Write-Host "[OK] litellm package imports successfully" -ForegroundColor $GREEN
}
else {
    Write-Host "[WARN] litellm import had issues (may be OK if not using LLM features)" -ForegroundColor $YELLOW
}

Write-Host ""

# Print agent commands
Write-Host "=================================================="
Write-Host " Setup Complete!"
Write-Host "=================================================="
Write-Host ""
Write-Host "Python packages installed:"
Write-Host " - framework (core agent runtime)"
Write-Host " - aden_tools (tools and MCP servers)"
Write-Host " - All dependencies and compatibility fixes applied"
Write-Host ""
Write-Host "To run agents on Windows (PowerShell):"
Write-Host ""
Write-Host "1. From the project root, set PYTHONPATH:"
Write-Host " `$env:PYTHONPATH=`"core;exports`""
Write-Host ""
Write-Host "2. Run an agent command:"
Write-Host " python -m agent_name validate"
Write-Host " python -m agent_name info"
Write-Host " python -m agent_name run --input '{...}'"
Write-Host ""
Write-Host "Example (support_ticket_agent):"
Write-Host " python -m support_ticket_agent validate"
Write-Host " python -m support_ticket_agent info"
Write-Host " python -m support_ticket_agent run --input '{""ticket_content"":""..."",""customer_id"":""..."",""ticket_id"":""...""}'"
Write-Host ""
Write-Host "Notes:"
Write-Host " - Ensure the virtual environment is activated (.venv)"
Write-Host " - PYTHONPATH must be set in each new PowerShell session"
Write-Host ""
Write-Host "Documentation:"
Write-Host " $PROJECT_ROOT\README.md"
Write-Host ""
Write-Host "Agent Examples:"
Write-Host " $PROJECT_ROOT\exports\"
Write-Host ""

@@ -1,13 +0,0 @@
-# MCP Server
-fastmcp
-
-# Tool dependencies
-diff-match-patch
-pypdf
-beautifulsoup4
-lxml
-playwright
-playwright-stealth
-requests
-
-# Note: After installing, run `playwright install` to download browser binaries

@@ -9,7 +9,7 @@ Philosophy: Google Strictness + Apple UX

 Usage:
     from aden_tools.credentials import CredentialStoreAdapter
-    from core.framework.credentials import CredentialStore
+    from framework.credentials import CredentialStore

     # With encrypted storage (production)
     store = CredentialStore.with_encrypted_storage()  # defaults to ~/.hive/credentials

@@ -5,7 +5,7 @@ This provides backward compatibility, allowing existing tools to work unchanged
 while enabling new features (template resolution, multi-key credentials, etc.).

 Usage:
-    from core.framework.credentials import CredentialStore
+    from framework.credentials import CredentialStore
     from aden_tools.credentials.store_adapter import CredentialStoreAdapter

     # Create new credential store

@@ -31,7 +31,7 @@ from typing import TYPE_CHECKING
 from .base import CredentialError, CredentialSpec

 if TYPE_CHECKING:
-    from core.framework.credentials import CredentialStore
+    from framework.credentials import CredentialStore


 class CredentialStoreAdapter:

@@ -368,7 +368,7 @@ class CredentialStoreAdapter:
             credentials = CredentialStoreAdapter.for_testing({"brave_search": "test-key"})
             assert credentials.get("brave_search") == "test-key"
         """
-        from core.framework.credentials import CredentialStore
+        from framework.credentials import CredentialStore

         # Convert to CredentialStore.for_testing format
         # Simple credentials get a single "api_key" key

@@ -395,13 +395,14 @@ class CredentialStoreAdapter:
         Returns:
             CredentialStoreAdapter using env vars for storage
         """
-        from core.framework.credentials import CredentialStore
+        from framework.credentials import CredentialStore

         # Build env mapping from specs if not provided
-        if env_mapping is None and specs is None:
-            from . import CREDENTIAL_SPECS
-            specs = CREDENTIAL_SPECS
+        if env_mapping is None:
+            if specs is None:
+                from . import CREDENTIAL_SPECS
+                specs = CREDENTIAL_SPECS
+            env_mapping = {name: spec.env_var for name, spec in specs.items()}

         store = CredentialStore.with_env_storage(env_mapping)

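The hunk above fixes a gating bug: previously the mapping was only built when both `env_mapping` and `specs` were `None`, so passing explicit `specs` never produced an `env_mapping` at all. A self-contained sketch of the corrected control flow (`DEFAULT_SPECS` and `build_env_mapping` are stand-ins invented for this illustration):

```python
from types import SimpleNamespace

# Stand-in for the module-level CREDENTIAL_SPECS registry
DEFAULT_SPECS = {"brave_search": SimpleNamespace(env_var="BRAVE_SEARCH_API_KEY")}

def build_env_mapping(env_mapping=None, specs=None):
    """Corrected logic: derive env_mapping whenever it is missing,
    falling back to the default specs only when specs is also missing."""
    if env_mapping is None:
        if specs is None:
            specs = DEFAULT_SPECS
        env_mapping = {name: spec.env_var for name, spec in specs.items()}
    return env_mapping

# Explicit specs now produce a mapping too (the old code skipped this case).
custom = {"hubspot": SimpleNamespace(env_var="HUBSPOT_TOKEN")}
assert build_env_mapping(specs=custom) == {"hubspot": "HUBSPOT_TOKEN"}
assert build_env_mapping() == {"brave_search": "BRAVE_SEARCH_API_KEY"}
```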
@@ -415,13 +415,13 @@ class TestDealTools:

 class TestHubSpotOAuth2Provider:
     def test_provider_id(self):
-        from core.framework.credentials.oauth2.hubspot_provider import HubSpotOAuth2Provider
+        from framework.credentials.oauth2.hubspot_provider import HubSpotOAuth2Provider

         provider = HubSpotOAuth2Provider(client_id="cid", client_secret="csecret")
         assert provider.provider_id == "hubspot_oauth2"

     def test_default_scopes(self):
-        from core.framework.credentials.oauth2.hubspot_provider import (
+        from framework.credentials.oauth2.hubspot_provider import (
             HUBSPOT_DEFAULT_SCOPES,
             HubSpotOAuth2Provider,
         )
@@ -430,7 +430,7 @@ class TestHubSpotOAuth2Provider:
         assert provider.config.default_scopes == HUBSPOT_DEFAULT_SCOPES

     def test_custom_scopes(self):
-        from core.framework.credentials.oauth2.hubspot_provider import HubSpotOAuth2Provider
+        from framework.credentials.oauth2.hubspot_provider import HubSpotOAuth2Provider

         provider = HubSpotOAuth2Provider(
             client_id="cid",
@@ -440,7 +440,7 @@ class TestHubSpotOAuth2Provider:
         assert provider.config.default_scopes == ["crm.objects.contacts.read"]

     def test_endpoints(self):
-        from core.framework.credentials.oauth2.hubspot_provider import (
+        from framework.credentials.oauth2.hubspot_provider import (
             HUBSPOT_AUTHORIZATION_URL,
             HUBSPOT_TOKEN_URL,
             HubSpotOAuth2Provider,
@@ -451,15 +451,15 @@ class TestHubSpotOAuth2Provider:
         assert provider.config.authorization_url == HUBSPOT_AUTHORIZATION_URL

     def test_supported_types(self):
-        from core.framework.credentials.models import CredentialType
-        from core.framework.credentials.oauth2.hubspot_provider import HubSpotOAuth2Provider
+        from framework.credentials.models import CredentialType
+        from framework.credentials.oauth2.hubspot_provider import HubSpotOAuth2Provider

         provider = HubSpotOAuth2Provider(client_id="cid", client_secret="csecret")
         assert CredentialType.OAUTH2 in provider.supported_types

     def test_validate_no_access_token(self):
-        from core.framework.credentials.models import CredentialObject
-        from core.framework.credentials.oauth2.hubspot_provider import HubSpotOAuth2Provider
+        from framework.credentials.models import CredentialObject
+        from framework.credentials.oauth2.hubspot_provider import HubSpotOAuth2Provider

         provider = HubSpotOAuth2Provider(client_id="cid", client_secret="csecret")
         cred = CredentialObject(id="test")

@@ -25,8 +25,6 @@ pip install playwright playwright-stealth
 playwright install chromium
 ```

-In Docker, add `RUN playwright install chromium --with-deps` to the Dockerfile.
-
 ## Environment Variables

 This tool does not require any environment variables.
@@ -95,6 +95,7 @@ class TestPdfReadTool:
             def __init__(self, path: Path) -> None:  # noqa: ARG002
                 self.pages = [FakePage(f"Page {i + 1}") for i in range(50)]
                 self.is_encrypted = False
+                self.metadata = None

         # Patch PdfReader used inside the tool so we don't need a real PDF
         from aden_tools.tools.pdf_read_tool import pdf_read_tool
@@ -1,6 +1,6 @@
 """Tests for web_scrape tool (FastMCP)."""

-from unittest.mock import MagicMock, patch
+from unittest.mock import AsyncMock, MagicMock, patch

 import pytest
 from fastmcp import FastMCP
@@ -15,60 +15,135 @@ def web_scrape_fn(mcp: FastMCP):
     return mcp._tool_manager._tools["web_scrape"].fn


+def _make_playwright_mocks(html, status=200, final_url="https://example.com/page"):
+    """Build a full playwright mock chain and return (context_manager, response, page)."""
+    mock_response = MagicMock(
+        status=status,
+        url=final_url,
+        headers={"content-type": "text/html; charset=utf-8"},
+    )
+
+    mock_page = AsyncMock()
+    mock_page.goto.return_value = mock_response
+    mock_page.content.return_value = html
+    mock_page.wait_for_timeout.return_value = None
+
+    mock_context = AsyncMock()
+    mock_context.new_page.return_value = mock_page
+
+    mock_browser = AsyncMock()
+    mock_browser.new_context.return_value = mock_context
+
+    mock_pw = MagicMock()
+    mock_pw.chromium.launch = AsyncMock(return_value=mock_browser)
+
+    # async context manager for async_playwright()
+    mock_cm = MagicMock()
+    mock_cm.__aenter__ = AsyncMock(return_value=mock_pw)
+    mock_cm.__aexit__ = AsyncMock(return_value=False)
+
+    return mock_cm, mock_response, mock_page
+
+
+_PW_PATH = "aden_tools.tools.web_scrape_tool.web_scrape_tool.async_playwright"
+_STEALTH_PATH = "aden_tools.tools.web_scrape_tool.web_scrape_tool.Stealth"
+
+
 class TestWebScrapeTool:
     """Tests for web_scrape tool."""

-    def test_url_auto_prefixed_with_https(self, web_scrape_fn):
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_url_auto_prefixed_with_https(self, mock_pw, mock_stealth, web_scrape_fn):
         """URLs without scheme get https:// prefix."""
-        # This will fail to connect, but we can verify the behavior
-        result = web_scrape_fn(url="example.com")
-        # Should either succeed or have a network error (not a validation error)
-        assert isinstance(result, dict)
+        html = "<html><body>Hello</body></html>"
+        mock_cm, _, _ = _make_playwright_mocks(html, final_url="https://example.com")
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-    def test_max_length_clamped_low(self, web_scrape_fn):
+        result = await web_scrape_fn(url="example.com")
+        assert isinstance(result, dict)
+        assert "error" not in result
+
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_max_length_clamped_low(self, mock_pw, mock_stealth, web_scrape_fn):
         """max_length below 1000 is clamped to 1000."""
-        # Test with a very low max_length - implementation clamps to 1000
-        result = web_scrape_fn(url="https://example.com", max_length=500)
-        # Should not error due to invalid max_length
-        assert isinstance(result, dict)
+        html = "<html><body>Hello</body></html>"
+        mock_cm, _, _ = _make_playwright_mocks(html, final_url="https://example.com")
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-    def test_max_length_clamped_high(self, web_scrape_fn):
+        result = await web_scrape_fn(url="https://example.com", max_length=500)
+        assert isinstance(result, dict)
+        assert "error" not in result
+
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_max_length_clamped_high(self, mock_pw, mock_stealth, web_scrape_fn):
         """max_length above 500000 is clamped to 500000."""
-        # Test with a very high max_length - implementation clamps to 500000
-        result = web_scrape_fn(url="https://example.com", max_length=600000)
-        # Should not error due to invalid max_length
-        assert isinstance(result, dict)
+        html = "<html><body>Hello</body></html>"
+        mock_cm, _, _ = _make_playwright_mocks(html, final_url="https://example.com")
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-    def test_valid_max_length_accepted(self, web_scrape_fn):
+        result = await web_scrape_fn(url="https://example.com", max_length=600000)
+        assert isinstance(result, dict)
+        assert "error" not in result
+
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_valid_max_length_accepted(self, mock_pw, mock_stealth, web_scrape_fn):
         """Valid max_length values are accepted."""
-        result = web_scrape_fn(url="https://example.com", max_length=10000)
-        assert isinstance(result, dict)
+        html = "<html><body>Hello</body></html>"
+        mock_cm, _, _ = _make_playwright_mocks(html, final_url="https://example.com")
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-    def test_include_links_option(self, web_scrape_fn):
+        result = await web_scrape_fn(url="https://example.com", max_length=10000)
+        assert isinstance(result, dict)
+        assert "error" not in result
+
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_include_links_option(self, mock_pw, mock_stealth, web_scrape_fn):
         """include_links parameter is accepted."""
-        result = web_scrape_fn(url="https://example.com", include_links=True)
-        assert isinstance(result, dict)
+        html = '<html><body><a href="/link">Link</a></body></html>'
+        mock_cm, _, _ = _make_playwright_mocks(html, final_url="https://example.com")
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-    def test_selector_option(self, web_scrape_fn):
-        """selector parameter is accepted."""
-        result = web_scrape_fn(url="https://example.com", selector=".content")
+        result = await web_scrape_fn(url="https://example.com", include_links=True)
         assert isinstance(result, dict)
+        assert "error" not in result
+
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_selector_option(self, mock_pw, mock_stealth, web_scrape_fn):
+        """selector parameter is accepted."""
+        html = '<html><body><div class="content">Content here</div></body></html>'
+        mock_cm, _, _ = _make_playwright_mocks(html, final_url="https://example.com")
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()
+
+        result = await web_scrape_fn(url="https://example.com", selector=".content")
+        assert isinstance(result, dict)
+        assert "error" not in result


 class TestWebScrapeToolLinkConversion:
     """Tests for link URL conversion (relative to absolute)."""

     def _mock_response(self, html_content, final_url="https://example.com/page"):
         """Create a mock httpx response object."""
         mock_response = MagicMock()
         mock_response.status_code = 200
         mock_response.text = html_content
         mock_response.url = final_url
         mock_response.headers = {"content-type": "text/html; charset=utf-8"}
         return mock_response

-    @patch("aden_tools.tools.web_scrape_tool.web_scrape_tool.httpx.get")
-    def test_relative_links_converted_to_absolute(self, mock_get, web_scrape_fn):
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_relative_links_converted_to_absolute(self, mock_pw, mock_stealth, web_scrape_fn):
         """Relative URLs like ../page are converted to absolute URLs."""
         html = """
         <html>
@@ -78,9 +153,11 @@ class TestWebScrapeToolLinkConversion:
         </body>
         </html>
         """
-        mock_get.return_value = self._mock_response(html, "https://example.com/blog/post")
+        mock_cm, _, _ = _make_playwright_mocks(html, final_url="https://example.com/blog/post")
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-        result = web_scrape_fn(url="https://example.com/blog/post", include_links=True)
+        result = await web_scrape_fn(url="https://example.com/blog/post", include_links=True)

         assert "error" not in result
         assert "links" in result
@@ -95,8 +172,10 @@ class TestWebScrapeToolLinkConversion:
         expected = "https://example.com/blog/page.html"
         assert hrefs["Next Page"] == expected, f"Got {hrefs['Next Page']}"

-    @patch("aden_tools.tools.web_scrape_tool.web_scrape_tool.httpx.get")
-    def test_root_relative_links_converted(self, mock_get, web_scrape_fn):
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_root_relative_links_converted(self, mock_pw, mock_stealth, web_scrape_fn):
         """Root-relative URLs like /about are converted to absolute URLs."""
         html = """
         <html>
@@ -106,9 +185,11 @@ class TestWebScrapeToolLinkConversion:
         </body>
         </html>
         """
-        mock_get.return_value = self._mock_response(html, "https://example.com/blog/post")
+        mock_cm, _, _ = _make_playwright_mocks(html, final_url="https://example.com/blog/post")
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-        result = web_scrape_fn(url="https://example.com/blog/post", include_links=True)
+        result = await web_scrape_fn(url="https://example.com/blog/post", include_links=True)

         assert "error" not in result
         assert "links" in result
@@ -119,8 +200,10 @@ class TestWebScrapeToolLinkConversion:
         assert hrefs["About"] == "https://example.com/about"
         assert hrefs["Contact"] == "https://example.com/contact"

-    @patch("aden_tools.tools.web_scrape_tool.web_scrape_tool.httpx.get")
-    def test_absolute_links_unchanged(self, mock_get, web_scrape_fn):
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_absolute_links_unchanged(self, mock_pw, mock_stealth, web_scrape_fn):
         """Absolute URLs remain unchanged."""
         html = """
         <html>
@@ -130,9 +213,11 @@ class TestWebScrapeToolLinkConversion:
         </body>
         </html>
         """
-        mock_get.return_value = self._mock_response(html)
+        mock_cm, _, _ = _make_playwright_mocks(html)
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-        result = web_scrape_fn(url="https://example.com", include_links=True)
+        result = await web_scrape_fn(url="https://example.com", include_links=True)

         assert "error" not in result
         assert "links" in result
@@ -143,8 +228,10 @@ class TestWebScrapeToolLinkConversion:
         assert hrefs["Other Site"] == "https://other.com"
         assert hrefs["Internal"] == "https://example.com/page"

-    @patch("aden_tools.tools.web_scrape_tool.web_scrape_tool.httpx.get")
-    def test_links_after_redirects(self, mock_get, web_scrape_fn):
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_links_after_redirects(self, mock_pw, mock_stealth, web_scrape_fn):
         """Links are resolved relative to final URL after redirects."""
         html = """
         <html>
@@ -155,12 +242,14 @@ class TestWebScrapeToolLinkConversion:
         </html>
         """
         # Mock redirect: request to /old/url redirects to /new/location
-        mock_get.return_value = self._mock_response(
+        mock_cm, _, _ = _make_playwright_mocks(
             html,
             final_url="https://example.com/new/location",  # Final URL after redirect
         )
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-        result = web_scrape_fn(url="https://example.com/old/url", include_links=True)
+        result = await web_scrape_fn(url="https://example.com/old/url", include_links=True)

         assert "error" not in result
         assert "links" in result
@@ -173,8 +262,10 @@ class TestWebScrapeToolLinkConversion:
         )
         assert hrefs["Next"] == "https://example.com/new/next"

-    @patch("aden_tools.tools.web_scrape_tool.web_scrape_tool.httpx.get")
-    def test_fragment_links_preserved(self, mock_get, web_scrape_fn):
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_fragment_links_preserved(self, mock_pw, mock_stealth, web_scrape_fn):
         """Fragment links (anchors) are preserved."""
         html = """
         <html>
@@ -184,9 +275,11 @@ class TestWebScrapeToolLinkConversion:
         </body>
         </html>
         """
-        mock_get.return_value = self._mock_response(html, "https://example.com/page")
+        mock_cm, _, _ = _make_playwright_mocks(html, final_url="https://example.com/page")
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-        result = web_scrape_fn(url="https://example.com/page", include_links=True)
+        result = await web_scrape_fn(url="https://example.com/page", include_links=True)

         assert "error" not in result
         assert "links" in result
@@ -197,8 +290,10 @@ class TestWebScrapeToolLinkConversion:
         assert hrefs["Section 1"] == "https://example.com/page#section1"
         assert hrefs["Page Section 2"] == "https://example.com/page#section2"

-    @patch("aden_tools.tools.web_scrape_tool.web_scrape_tool.httpx.get")
-    def test_query_parameters_preserved(self, mock_get, web_scrape_fn):
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_query_parameters_preserved(self, mock_pw, mock_stealth, web_scrape_fn):
         """Query parameters in URLs are preserved."""
         html = """
         <html>
@@ -208,9 +303,11 @@ class TestWebScrapeToolLinkConversion:
         </body>
         </html>
         """
-        mock_get.return_value = self._mock_response(html, "https://example.com/blog/post")
+        mock_cm, _, _ = _make_playwright_mocks(html, final_url="https://example.com/blog/post")
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-        result = web_scrape_fn(url="https://example.com/blog/post", include_links=True)
+        result = await web_scrape_fn(url="https://example.com/blog/post", include_links=True)

         assert "error" not in result
         assert "links" in result
@@ -222,8 +319,10 @@ class TestWebScrapeToolLinkConversion:
         assert "q=test" in hrefs["Search"]
         assert "sort=date" in hrefs["Search"]

-    @patch("aden_tools.tools.web_scrape_tool.web_scrape_tool.httpx.get")
-    def test_empty_href_skipped(self, mock_get, web_scrape_fn):
+    @pytest.mark.asyncio
+    @patch(_STEALTH_PATH)
+    @patch(_PW_PATH)
+    async def test_empty_href_skipped(self, mock_pw, mock_stealth, web_scrape_fn):
         """Links with empty or whitespace text are skipped."""
         html = """
         <html>
@@ -234,9 +333,11 @@ class TestWebScrapeToolLinkConversion:
         </body>
         </html>
         """
-        mock_get.return_value = self._mock_response(html)
+        mock_cm, _, _ = _make_playwright_mocks(html)
+        mock_pw.return_value = mock_cm
+        mock_stealth.return_value.apply_stealth_async = AsyncMock()

-        result = web_scrape_fn(url="https://example.com", include_links=True)
+        result = await web_scrape_fn(url="https://example.com", include_links=True)

         assert "error" not in result
         assert "links" in result
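The core technique in this diff is replacing synchronous `httpx.get` mocks with a full async Playwright mock chain. A minimal standalone sketch of that pattern, runnable without Playwright or pytest (the `scrape` coroutine is a hypothetical stand-in for the tool's scrape path, not the real implementation):

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock


def make_playwright_mocks(html, status=200, final_url="https://example.com/page"):
    """Build the async mock chain: context manager -> playwright -> browser -> context -> page."""
    mock_response = MagicMock(status=status, url=final_url)

    mock_page = AsyncMock()
    mock_page.goto.return_value = mock_response
    mock_page.content.return_value = html

    mock_context = AsyncMock()
    mock_context.new_page.return_value = mock_page

    mock_browser = AsyncMock()
    mock_browser.new_context.return_value = mock_context

    mock_pw = MagicMock()
    mock_pw.chromium.launch = AsyncMock(return_value=mock_browser)

    # MagicMock allows assigning async dunders, so `async with` works on mock_cm
    mock_cm = MagicMock()
    mock_cm.__aenter__ = AsyncMock(return_value=mock_pw)
    mock_cm.__aexit__ = AsyncMock(return_value=False)
    return mock_cm, mock_response, mock_page


async def scrape(async_playwright, url):
    # Hypothetical stand-in for the tool's Playwright scrape path
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=True)
        context = await browser.new_context()
        page = await context.new_page()
        resp = await page.goto(url)
        html = await page.content()
        return {"status": resp.status, "html": html}


mock_cm, _, page = make_playwright_mocks("<html><body>Hi</body></html>")
result = asyncio.run(scrape(lambda: mock_cm, "https://example.com"))
assert result["status"] == 200
assert "Hi" in result["html"]
page.goto.assert_awaited_once_with("https://example.com")
```

The same shape is what `_make_playwright_mocks` gives the tests: every layer of the chain is an `AsyncMock`, so each `await` resolves to the next pre-wired object, and `AsyncMock` call-tracking (`assert_awaited_once_with`) verifies the interaction.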