Compare commits

...

51 Commits

Author SHA1 Message Date
jiangfeng.11 e60621d519 refactor(prompt): remove unused skill functions and clean up code 2026-04-11 11:03:47 +08:00
jiangfeng.11 f7a6ca8364 refactor(workspace): simplify sidebar state management using cookies 2026-04-11 10:38:05 +08:00
jiangfeng.11 2540acd5f7 Merge branch 'main' into release/2.0-rc 2026-04-11 10:34:31 +08:00
Asish Kumar 092bf13f5e fix(makefile): route Windows shell-script targets through Git Bash (#2060) 2026-04-11 09:30:22 +08:00
JeffJiang fe2595a05c Update CMD to run uvicorn with --no-sync option (#2100) 2026-04-10 23:00:00 +08:00
Jin 718dddde75 fix(sandbox): prevent memory leak in file operation locks using WeakValueDictionary (#2096)
* fix(sandbox): prevent memory leak in file operation locks using WeakValueDictionary

* lint: fix lint issue in sandbox tools security
2026-04-10 22:55:53 +08:00
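
A minimal sketch of the per-path lock caching pattern this commit describes, assuming locks are keyed by file path; the names (`_file_locks`, `get_file_lock`) are illustrative, not the actual sandbox API:

```python
import asyncio
import weakref

# Entries vanish once no task holds the lock anymore, so the mapping does
# not grow by one lock per file path for the lifetime of the process.
_file_locks: weakref.WeakValueDictionary[str, asyncio.Lock] = weakref.WeakValueDictionary()


def get_file_lock(path: str) -> asyncio.Lock:
    lock = _file_locks.get(path)
    if lock is None:
        lock = asyncio.Lock()
        _file_locks[path] = lock
    return lock


async def write_file(path: str, data: str) -> None:
    # Hypothetical guarded file operation: concurrent writers to the same
    # path serialize on the shared lock, different paths do not contend.
    async with get_file_lock(path):
        ...
```
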
Willem Jiang 679ca657ee Add Contributor Covenant Code of Conduct
Added Contributor Covenant Code of Conduct to ensure a respectful and inclusive community.
2026-04-10 22:26:40 +08:00
Zic-Wang fa96acdf4b feat: add WeChat channel integration (#1869)
* feat: add WeChat channel integration

* fix(backend): recover stale channel threads and align upload artifact handling

* refactor(wechat): reduce scope and restore QR bootstrap

* fix(backend): sort manager imports for Ruff lint

* fix(tests): add missing patch import in test_channels.py

* Update backend/app/channels/wechat.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update backend/app/channels/manager.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix(wechat): streamline allowed file extensions initialization and clean up test file

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-10 20:49:28 +08:00
Willem Jiang 90299e2710 feat(provisioner): add optional PVC support for sandbox volumes (#2020)
* feat(provisioner): add optional PVC support for sandbox volumes (#1978)

  Add SKILLS_PVC_NAME and USERDATA_PVC_NAME env vars to allow sandbox
  Pods to use PersistentVolumeClaims instead of hostPath volumes. This
  prevents data loss in production when pods are rescheduled across nodes.

  When USERDATA_PVC_NAME is set, a subPath of threads/{thread_id}/user-data
  is used so a single PVC can serve multiple threads. Falls back to hostPath
  when the new env vars are not set, preserving backward compatibility.

* add unit test for provisioner pvc volumes

* refactor: extract shared provisioner_module fixture to conftest.py

Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/e7ccf708-c6ba-40e4-844a-b526bdb249dd

Co-authored-by: WillemJiang <219644+WillemJiang@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: JeffJiang <for-eleven@hotmail.com>
2026-04-10 20:40:30 +08:00
JeffJiang 7dc0c7d01f feat(blog): implement blog structure with post listing, tagging, and layout enhancements (#1962)
* feat(blog): implement blog structure with post listing and tagging functionality

* feat(blog): enhance blog layout and post metadata display with new components

* fix(blog): address PR #1962 review feedback and fix lint issues (#14)

* fix: format

---------

Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2026-04-10 20:24:52 +08:00
JeffJiang 809b341350 Add TypeScript SDK path to code-workspace settings (#2052)
* Add TypeScript SDK path to code-workspace settings

Agent-Logs-Url: https://github.com/foreleven/deer-flow/sessions/7d99db18-eb9d-4798-b0a5-b33f6079cd1a

Co-authored-by: foreleven <4785594+foreleven@users.noreply.github.com>

* Update deer-flow.code-workspace

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: foreleven <4785594+foreleven@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-10 18:20:08 +08:00
greatmengqi b1aabe88b8 fix(backend): stream DeerFlowClient AI text as token deltas (#1969) (#1974)
* fix(backend): stream DeerFlowClient AI text as token deltas (#1969)

DeerFlowClient.stream() subscribed to LangGraph stream_mode=["values",
"custom"] which only delivers full-state snapshots at graph-node
boundaries, so AI replies were dumped as a single messages-tuple event
per node instead of streaming token-by-token. `client.stream("hello")`
looked identical to `client.chat("hello")` — the bug reported in #1969.

Subscribe to "messages" mode as well, forward AIMessageChunk deltas as
messages-tuple events with delta semantics (consumers accumulate by id),
and dedup the values-snapshot path so it does not re-synthesize AI
text that was already streamed. Introduce a per-id usage_metadata
counter so the final AIMessage in the values snapshot and the final
"messages" chunk — which carry the same cumulative usage — are not
double-counted.

chat() now accumulates per-id deltas and returns the last message's
full accumulated text. Non-streaming mock sources (single event per id)
are a degenerate case of the same logic, keeping existing callers and
tests backward compatible.

Verified end-to-end against a real LLM: a 15-number count emits 35
messages-tuple events with BPE subword boundaries clearly visible
("eleven" -> "ele" / "ven", "twelve" -> "tw" / "elve"), 476ms across
the window, end-event usage matches the values-snapshot usage exactly
(not doubled). tests/test_client_live.py::TestLiveStreaming passes.

New unit tests:
- test_messages_mode_emits_token_deltas: 3 AIMessageChunks produce 3
  delta events with correct content/id/usage, values-snapshot does not
  duplicate, usage counted once.
- test_chat_accumulates_streamed_deltas: chat() rebuilds full text
  from deltas.
- test_messages_mode_tool_message: ToolMessage delivered via messages
  mode is not duplicated by the values-snapshot synthesis path.

The stream() docstring now documents why this client does not reuse
Gateway's run_agent() / StreamBridge pipeline (sync vs async, raw
LangChain objects vs serialized dicts, single caller vs HTTP fan-out).

Fixes #1969

* refactor(backend): simplify DeerFlowClient streaming helpers (#1969)

Post-review cleanup for the token-level streaming fix. No behavior
change for correct inputs; one efficiency regression fixed.

Fix: chat() O(n²) accumulator
-----------------------------
`chat()` accumulated per-id text via `buffers[id] = buffers.get(id,"") + delta`,
which is O(n) per concat → O(n²) total over a streamed response. At
~2 KB cumulative text this becomes user-visible; at 50 KB / 5000 chunks
it costs roughly 100-300 ms of pure copying. Switched to
`dict[str, list[str]]` + `"".join()` once at return.
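
As a hedged illustration of the two accumulation strategies contrasted above (the function names are mine, not the client's internals):

```python
from collections import defaultdict


def accumulate_quadratic(deltas: list[tuple[str, str]]) -> dict[str, str]:
    # Before: each concatenation copies the whole buffer, O(n^2) overall.
    buffers: dict[str, str] = {}
    for msg_id, delta in deltas:
        buffers[msg_id] = buffers.get(msg_id, "") + delta
    return buffers


def accumulate_linear(deltas: list[tuple[str, str]]) -> dict[str, str]:
    # After: append chunks to a list and join once at the end, O(n) overall.
    buffers: defaultdict[str, list[str]] = defaultdict(list)
    for msg_id, delta in deltas:
        buffers[msg_id].append(delta)
    return {msg_id: "".join(parts) for msg_id, parts in buffers.items()}
```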

Cleanup
-------
- Extract `_serialize_tool_calls`, `_ai_text_event`, `_ai_tool_calls_event`,
  and `_tool_message_event` static helpers. The messages-mode and
  values-mode branches previously repeated four inline dict literals each;
  they now call the same builders.
- `StreamEvent.type` is now typed as `Literal["values", "messages-tuple",
  "custom", "end"]` via a `StreamEventType` alias. Makes the closed set
  explicit and catches typos at type-check time.
- Direct attribute access on `AIMessage`/`AIMessageChunk`: `.usage_metadata`,
  `.tool_calls`, `.id` all have default values on the base class, so the
  `getattr(..., None)` fallbacks were dead code. Removed from the hot
  path.
- `_account_usage` parameter type loosened to `Any` so that LangChain's
  `UsageMetadata` TypedDict is accepted under strict type checking.
- Trimmed narrating comments on `seen_ids` / `streamed_ids` / the
  values-synthesis skip block; kept the non-obvious ones that document
  the cross-mode dedup invariant.

Net diff: -15 lines. All 132 unit tests + harness boundary test still
pass; ruff check and ruff format pass.

* docs(backend): add STREAMING.md design note (#1969)

Dedicated design document for the token-level streaming architecture,
prompted by the bug investigation in #1969.

Contents:
- Why two parallel streaming paths exist (Gateway HTTP/async vs
  DeerFlowClient sync/in-process) and why they cannot be merged.
- LangGraph's three-layer mode naming (Graph "messages" vs Platform
  SDK "messages-tuple" vs HTTP SSE) and why a shared string constant
  would be harmful.
- Gateway path: run_agent + StreamBridge + sse_consumer with a
  sequence diagram.
- DeerFlowClient path: sync generator + direct yield, delta semantics,
  chat() accumulator.
- Why the three id sets (seen_ids / streamed_ids / counted_usage_ids)
  each carry an independent invariant and cannot be collapsed.
- End-to-end sequence for a real conversation turn.
- Lessons from #1969: why mock-based tests missed the bug, why
  BPE subword boundaries in live output are the strongest
  correctness signal, and the regression test that locks it in.
- Source code location index.

Also:
- Link from backend/CLAUDE.md Embedded Client section.
- Link from backend/docs/README.md under Feature Documentation.

* test(backend): add refactor regression guards for stream() (#1969)

Three new tests in TestStream that lock the contract introduced by
PR #1974 so any future refactor (sync->async migration, sharing a
core with Gateway's run_agent, dedup strategy change) cannot
silently change behavior.

- test_dedup_requires_messages_before_values_invariant: canary that
  documents the order-dependence of cross-mode dedup. streamed_ids
  is populated only by the messages branch, so values-before-messages
  for the same id produces duplicate AI text events. Real LangGraph
  never inverts this order, but a refactor that does (or that makes
  dedup idempotent) must update this test deliberately.

- test_messages_mode_golden_event_sequence: locks the *exact* event
  sequence (4 events: 2 messages-tuple deltas, 1 values snapshot, 1
  end) for a canonical streaming turn. List equality gives a clear
  diff on any drift in order, type, or payload shape.

- test_chat_accumulates_in_linear_time: perf canary for the O(n^2)
  fix in commit 1f11ba10. 10,000 single-char chunks must accumulate
  in under 1s; the threshold is wide enough to pass on slow CI but
  tight enough to fail if buffer = buffer + delta is restored.

All three tests pass alongside the existing 12 TestStream tests
(15/15). ruff check + ruff format clean.

* docs(backend): clarify stream() docstring on JSON serialization (#1969)

Replace the misleading "raw LangChain objects (AIMessage,
usage_metadata as dataclasses), not dicts" claim in the
"Why not reuse Gateway's run_agent?" section. The implementation
already yields plain Python dicts (StreamEvent.data is dict, and
usage_metadata is a TypedDict), so the original wording suggested
a richer return type than the API actually delivers.

The corrected wording focuses on what is actually true and
relevant: this client skips the JSON/SSE serialization layer that
Gateway adds for HTTP wire transmission, and yields stream event
payloads directly as Python data structures.

Addresses Copilot review feedback on PR #1974.

* test(backend): document none-id messages dedup limitation (#1969)

Add test_none_id_chunks_produce_duplicates_known_limitation to
TestStream that explicitly documents and asserts the current
behavior when an LLM provider emits AIMessageChunk with id=None
(vLLM, certain custom backends).

The cross-mode dedup machinery cannot record a None id in
streamed_ids (guarded by ``if msg_id:``), so the values snapshot's
reassembled AIMessage with a real id falls through and synthesizes
a duplicate AI text event. The test asserts len == 2 and locks
this as a known limitation rather than silently letting future
contributors hit it without context.

Why this is documented rather than fixed:
* Falling back to ``metadata.get("id")`` does not help — LangGraph's
  messages-mode metadata never carries the message id.
* Synthesizing ``f"_synth_{id(msg_chunk)}"`` only helps if the
  values snapshot uses the same fallback, which it does not.
* A real fix requires provider cooperation (always emit chunk ids)
  or content-based dedup (false-positive risk), neither of which
  belongs in this PR.

If a real fix lands, replace this test with a positive assertion
that dedup works for None-id chunks.

Addresses Copilot review feedback on PR #1974 (client.py:515).

* fix(frontend): UI polish - fix CSS typo, dark mode border, and hardcoded colors (#1942)

- Fix `font-norma` typo to `font-normal` in message-list subtask count
- Fix dark mode `--border` using reddish hue (22.216) instead of neutral
- Replace hardcoded `rgb(184,184,192)` in hero with `text-muted-foreground`
- Replace hardcoded `bg-[#a3a1a1]` in streaming indicator with `bg-muted-foreground`
- Add missing `font-sans` to welcome description `<pre>` for consistency
- Make case-study-section padding responsive (`px-4 md:px-20`)

Closes #1940

* docs: clarify deployment sizing guidance (#1963)

* fix(frontend): prevent stale 'new' thread ID from triggering 422 history requests (#1960)

After history.replaceState updates the URL from /chats/new to
/chats/{UUID}, Next.js useParams does not update because replaceState
bypasses the router. The useEffect in useThreadChat would then set
threadIdFromPath ('new') as the threadId, causing the LangGraph SDK
to call POST /threads/new/history which returns HTTP 422 (Invalid
thread ID: must be a UUID).

This fix adds a guard to skip the threadId update when
threadIdFromPath is the literal string 'new', preserving the
already-correct UUID that was set when the thread was created.

* fix(frontend): avoid using route new as thread id (#1967)

Co-authored-by: luoxiao6645 <luoxiao6645@gmail.com>

* Fix(subagent): Event loop conflict in SubagentExecutor.execute() (#1965)

* Fix event loop conflict in SubagentExecutor.execute()

When SubagentExecutor.execute() is called from within an already-running
event loop (e.g., when the parent agent uses async/await), calling
asyncio.run() creates a new event loop that conflicts with asyncio
primitives (like httpx.AsyncClient) that were created in and bound to
the parent loop.

This fix detects if we're already in a running event loop, and if so,
runs the subagent in a separate thread with its own isolated event loop
to avoid conflicts.

Fixes: sub-task cards not appearing in Ultra mode when using async parent agents

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(subagent): harden isolated event loop execution

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
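
A rough sketch of the loop-detection strategy the subagent fix above describes, assuming the executor only needs the coroutine's result back; the helper name and error handling are illustrative:

```python
import asyncio
import threading
from collections.abc import Coroutine
from typing import Any


def run_isolated(coro: Coroutine[Any, Any, Any]) -> Any:
    """Run a coroutine to completion, whether or not a loop is already running."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop in this thread: the plain path.
        return asyncio.run(coro)

    # Already inside a running loop: calling asyncio.run() here would clash
    # with primitives bound to the parent loop, so run the coroutine in a
    # separate thread that owns its own isolated event loop.
    outcome: dict[str, Any] = {}

    def _worker() -> None:
        try:
            outcome["value"] = asyncio.run(coro)
        except BaseException as exc:  # propagate back to the caller's thread
            outcome["error"] = exc

    worker = threading.Thread(target=_worker)
    worker.start()
    worker.join()
    if "error" in outcome:
        raise outcome["error"]
    return outcome["value"]
```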

* refactor(backend): remove dead getattr in _tool_message_event

---------

Co-authored-by: greatmengqi <chenmengqi.0376@bytedance.com>
Co-authored-by: Xinmin Zeng <135568692+fancyboi999@users.noreply.github.com>
Co-authored-by: 13ernkastel <LennonCMJ@live.com>
Co-authored-by: siwuai <458372151@qq.com>
Co-authored-by: 肖 <168966994+luoxiao6645@users.noreply.github.com>
Co-authored-by: luoxiao6645 <luoxiao6645@gmail.com>
Co-authored-by: Saber <11769524+hawkli-1994@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2026-04-10 18:16:38 +08:00
KKK 654354c624 test(skills): add evaluation + trigger analysis for systematic-literature-review (#2061)
* test(skills): add trigger eval set for systematic-literature-review skill

20 eval queries (10 should-trigger, 10 should-not-trigger) for use with
skill-creator's run_eval.py. Includes real-world SLR queries contributed
by @VANDRANKI (issue #1862 author) and edge cases for routing
disambiguation with academic-paper-review.

* test(skills): add grader expectations for SLR skill evaluation

5 eval cases with 39 expectations covering:
- Standard SLR flow (APA/BibTeX/IEEE format selection)
- Keyword extraction and search behavior
- Subagent dispatch for metadata extraction
- Report structure (themes, convergences, gaps, per-paper annotations)
- Negative case: single-paper routing to academic-paper-review
- Edge case: implicit SLR without explicit keywords

* refactor(skills): shorten SLR description for better trigger rate

Reduce description from 833 to 344 chars. Key changes:
- Lead with "systematic literature review" as primary trigger phrase
- Strengthen single-paper exclusion: "Not for single-paper tasks"
- Remove verbose example patterns that didn't improve routing

Tested with run_eval.py (10 runs/query):
- False positive "best paper on RL": 67% → 20% (improved)
- True positive explicit SLR query: ~30% (unchanged)

Low recall is a routing-layer limitation, not a description issue —
see PR description for full analysis.

* Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-04-10 18:02:45 +08:00
DanielWalnut eef0a6e2da feat(dx): Setup Wizard + doctor command — closes #2030 (#2034) 2026-04-10 17:43:39 +08:00
Javen Fang b107444878 docs(api): document recursion_limit for LangGraph API runs (#1929)
The /api/langgraph/* endpoints proxy straight to the LangGraph server,
so clients inherit LangGraph's native recursion_limit default of 25
instead of the 100 that build_run_config sets for the Gateway and IM
channel paths. 25 is too low for plan-mode or subagent runs and
reliably triggers GraphRecursionError on the lead agent's final
synthesis step after subagents return.

Set recursion_limit: 100 in the Create Run example and the cURL
snippet, and add a short note explaining the discrepancy so users
following the docs don't hit the 25-step ceiling as a surprise.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 09:28:57 +08:00
KKK 16aa51c9b3 feat(skills): add systematic-literature-review skill for multi-paper SLR workflows (#2032)
* feat(skills): add systematic-literature-review skill for multi-paper SLR workflows

Adds a new skill that produces a structured systematic literature review (SLR)
across multiple academic papers on a topic. Addresses #1862 with a pure skill
approach: no new tools, no architectural changes, no new dependencies.

Skill layout:
- SKILL.md — 4+1 phase workflow (plan, search, extract, synthesize, present)
- scripts/arxiv_search.py — arXiv API client, stdlib only, with a
  requests->urllib fallback shim modeled after github-deep-research's
  github_api.py
- templates/{apa,ieee,bibtex}.md — citation format templates selected
  dynamically in Phase 4, mirroring podcast-generation's templates/ pattern

Design notes:
- Multi-paper synthesis uses the existing `task` tool to dispatch extraction
  subagents in parallel. SKILL.md's Phase 3 includes a fixed decision table
  for batch splitting to respect the runtime's MAX_CONCURRENT_SUBAGENTS = 3
  cap, and explicitly tells the agent to strip the "Task Succeeded. Result: "
  prefix before parsing subagent JSON output.
- arXiv only, by design. Semantic Scholar and PubMed adapters would push the
  scope toward a standalone MCP server (see #933) and are intentionally out
  of scope for this skill.
- Coexists with the existing `academic-paper-review` skill: this skill does
  breadth-first synthesis across many papers, academic-paper-review does
  single-paper peer review. The two are routed via distinct triggers and
  can compose (SLR on many + deep review on 1-2 important ones).
- Hard upper bound of 50 papers, tied to the Phase 3 concurrency strategy.
  Larger surveys degrade in synthesis quality and are better split by
  sub-topic.

The BibTeX template explicitly uses @misc for arXiv preprints rather than
@article; using @article is the most common mistake when generating BibTeX
for arXiv papers.

arxiv_search.py was smoke-tested end-to-end against the live arXiv API with
two query shapes (relevance sort, submittedDate sort with category filter);
all returned JSON fields parse correctly (id normalization, Atom namespace
handling, URL encoding for multi-word queries).

* fix(skills): prevent LLM from saving intermediate search results to file

Adds an explicit "do not save" instruction at the end of Phase 2.
Observed during Test 1 with DeepSeek: the model saved search results
to a markdown file before proceeding to Phase 3, wasting 2-3 tool call
rounds and increasing the risk of hitting the graph recursion limit.
The search JSON should stay in context for Phase 3, not be persisted.

* fix(skills): use relevance+start-date instead of submittedDate sorting

Test 2 revealed that arXiv's submittedDate sorting returns the most
recently submitted papers in the category regardless of query relevance.
Searching "diffusion models" with sortBy=submittedDate in cs.CV returned
papers on spatial memory, Navier-Stokes, and photon-counting CT — none
about diffusion models. The LLM then retried with 4 different queries,
wasting tool calls and approaching the recursion limit.

Fix: always sort by relevance; when the user wants "recent" papers,
combine relevance sorting with --start-date to constrain the time window.
Also add an explicit "run the search exactly once" instruction to prevent
the retry loop.

* fix(skills): wrap multi-word arXiv queries in double quotes for phrase matching

Without quotes, `all:diffusion model` is parsed by arXiv's Lucene as
`all:diffusion OR model`, pulling in unrelated papers from physics
(thermal diffusion) and other fields. Wrapping in double quotes forces
phrase matching: `all:"diffusion model"`.

Also fixes date filtering: the previous bug caused 2011 papers to appear
in results despite --start-date 2024-04-09, because the unquoted query
words were OR'd with the date constraint.

Verified: "diffusion models" --category cs.CV --start-date 2024-04-09
now returns only relevant diffusion model papers published after April
2024.
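
A small sketch of the quoted-phrase query construction described above, using only the arXiv API parameters the commit mentions (`search_query`, relevance sorting); the helper name and defaults are assumptions:

```python
from urllib.parse import urlencode


def build_arxiv_query_url(topic: str, category: str | None = None,
                          max_results: int = 20) -> str:
    """Quote multi-word topics so arXiv's Lucene treats them as a phrase
    (all:"diffusion models") instead of OR-ing the individual words."""
    phrase = f'"{topic}"' if " " in topic else topic
    search_query = f"all:{phrase}"
    if category:
        search_query += f" AND cat:{category}"
    params = {
        "search_query": search_query,
        "sortBy": "relevance",  # always relevance, per the fix above
        "start": 0,
        "max_results": max_results,
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)


# e.g. the query shape from the commit: "diffusion models" in cs.CV
url = build_arxiv_query_url("diffusion models", category="cs.CV")
```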

* fix(skills): add query phrasing guide and enforce subagent delegation

Two fixes from Test 2 observations with DeepSeek:

1. Query phrasing: add a table showing good vs bad query examples.
   The script wraps multi-word queries in double quotes for phrase
   matching, so long queries like "diffusion models in computer vision"
   return 0 results. Guide the LLM to use 2-3 core keywords + --category
   instead.

2. Subagent enforcement: DeepSeek was extracting metadata inline via
   python -c scripts instead of using the task tool. Strengthen Phase 3
   to explicitly name the task tool, say "do not extract metadata
   yourself", and explain why (token budget, isolation). This is more
   direct than the previous natural-language-only approach while still
   providing the reasoning behind the constraint.

* fix(skills): strengthen search keyword guidance and subagent enforcement

Address two issues found during end-to-end testing with DeepSeek:

1. Search retry: LLM passed full topic descriptions as queries (e.g.
   "diffusion models in computer vision"), which returned 0 results due
   to exact phrase matching and triggered retries. Added explicit
   instruction to extract 2-3 core keywords before searching.

2. Subagent bypass: LLM used python -c to extract metadata instead of
   dispatching via task tool. Added explicit prohibition list (python -c,
   bash scripts, inline extraction) with markers for clarity.

* fix(skills): address Copilot review feedback on SLR skill

- Fix legacy arXiv ID parsing: preserve archive prefix for pre-2007
  papers (e.g. hep-th/9901001 instead of just 9901001)
- Fix phase count: "four phases" -> "five phases"
- Add subagent_enabled prerequisite note to SKILL.md Notes section
- Remove PR-specific references ("PR 1") from ieee.md and bibtex.md
  templates, replace with workflow-scoped wording
- Fix script header: "stdlib only" -> "no additional dependencies
  required", fix relative path to github_api.py reference
- Remove reference to non-existent docs/enhancement/ path in header

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-10 08:54:28 +08:00
Javen Fang 133ffe7174 feat(models): add langchain-ollama for native Ollama thinking support (#2062)
Add langchain-ollama as an optional dependency and provide ChatOllama
config examples, enabling proper thinking/reasoning content preservation
for local Ollama models.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 08:38:31 +08:00
yangzheli f88970985a fix(frontend): replace invalid "context" select field with "metadata" in threads.search (#2053)
* fix(frontend): replace invalid "context" select field with "metadata" in threads.search

The LangGraph API server does not support "context" as a select field for
threads/search, causing a 422 Unprocessable Entity error introduced by
commit 60e0abf (#1771).

- Replace "context" with "metadata" in the default select list
- Persist agent_name into thread metadata on creation so search results
  carry the agent identity
- Update pathOfThread() to fall back to metadata.agent_name when
  context is unavailable from search results
- Add regression tests for metadata-based agent routing

Fixes #2037

Made-with: Cursor

* fix: apply Copilot suggestions

* style: fix the lint error
2026-04-10 08:35:07 +08:00
knukn 6572fa5b75 feat(smoke-test): add smoke test skill (#1947)
* feat(smoke-test): add end-to-end smoke test skill

* Update .agent/skills/smoke-test/SKILL.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update .agent/skills/smoke-test/SKILL.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update .agent/skills/smoke-test/references/SOP.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update .agent/skills/smoke-test/scripts/check_local_env.sh

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update .agent/skills/smoke-test/scripts/check_docker.sh

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update .agent/skills/smoke-test/scripts/deploy_docker.sh

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* refactor(smoke-test): optimize health check scripts and update document structure

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-09 18:56:28 +08:00
shivam johri 194bab4691 feat(config): add when_thinking_disabled support for model configs (#1970)
* feat(config): add when_thinking_disabled support for model configs

Allow users to explicitly configure what parameters are sent to the
model when thinking is disabled, via a new `when_thinking_disabled`
field in model config. This mirrors the existing `when_thinking_enabled`
pattern and takes full precedence over the hardcoded disable behavior
when set. Backwards compatible — existing configs work unchanged.

Closes #1675

* fix(config): address copilot review — gate when_thinking_disabled independently

- Switch truthiness check to `is not None` so empty dict overrides work
- Restructure disable path so when_thinking_disabled is gated independently
  of has_thinking_settings, allowing it to work without when_thinking_enabled
- Update test to reflect new behavior
2026-04-09 18:49:00 +08:00
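
A hypothetical sketch of the precedence rule described above: an explicitly configured `when_thinking_disabled` mapping (checked with `is not None`, so an empty dict counts) wins over the hardcoded disable parameters. Function and variable names are illustrative, not the actual config code:

```python
from typing import Any


def resolve_disabled_thinking_params(
    when_thinking_disabled: dict[str, Any] | None,
    hardcoded_defaults: dict[str, Any],
) -> dict[str, Any]:
    # `is not None` rather than truthiness, so an explicitly configured
    # empty mapping still overrides the hardcoded disable behavior.
    if when_thinking_disabled is not None:
        return dict(when_thinking_disabled)
    return dict(hardcoded_defaults)
```
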
luo jiyin 35f141fc48 feat: implement full checkpoint rollback on user cancellation (#1867)
* feat: implement full checkpoint rollback on user cancellation

- Capture pre-run checkpoint snapshot including checkpoint state, metadata, and pending_writes
- Add _rollback_to_pre_run_checkpoint() function to restore thread state
- Implement _call_checkpointer_method() helper to support both async and sync checkpointer methods
- Rollback now properly restores checkpoint, metadata, channel_versions, and pending_writes
- Remove obsolete TODO comment (Phase 2) as rollback is now complete

This resolves the TODO(Phase 2) comment and enables full thread state
restoration when a run is cancelled by the user.

* fix: address rollback review feedback

* fix: strengthen checkpoint rollback validation and error handling

- Validate restored_config structure and checkpoint_id before use
- Raise RuntimeError on malformed pending_writes instead of silent skip
- Normalize None checkpoint_ns to empty string instead of "None"
- Move delete_thread to only execute when pre_run_snapshot is None
- Add docstring noting non-atomic rollback as known limitation

This addresses review feedback on PR #1867 regarding data integrity
in the checkpoint rollback implementation.

* test: add comprehensive coverage for checkpoint rollback edge cases

- test_rollback_restores_snapshot_without_deleting_thread
- test_rollback_deletes_thread_when_no_snapshot_exists
- test_rollback_raises_when_restore_config_has_no_checkpoint_id
- test_rollback_normalizes_none_checkpoint_ns_to_root_namespace
- test_rollback_raises_on_malformed_pending_write_not_a_tuple
- test_rollback_raises_on_malformed_pending_write_non_string_channel
- test_rollback_propagates_aput_writes_failure

Covers all scenarios from PR #1867 review feedback.

* test: format rollback worker tests
2026-04-09 17:56:36 +08:00
Xinmin Zeng 0b6fa8b9e1 fix(sandbox): add startup reconciliation to prevent orphaned container leaks (#1976)
* fix(sandbox): add startup reconciliation to prevent orphaned container leaks

Sandbox containers were never cleaned up when the managing process restarted,
because all lifecycle tracking lived in in-memory dictionaries. This adds
startup reconciliation that enumerates running containers via `docker ps` and
either destroys orphans (age > idle_timeout) or adopts them into the warm pool.

Closes #1972

* fix(sandbox): address Copilot review — adopt-all strategy, improved error handling

- Reconciliation now adopts all containers into warm pool unconditionally,
  letting the idle checker decide cleanup. Avoids destroying containers
  that another concurrent process may still be using.
- list_running() logs stderr on docker ps failure and catches
  FileNotFoundError/OSError.
- Signal handler test restores SIGTERM/SIGINT in addition to SIGHUP.
- E2E test docstring corrected to match actual coverage scope.

* fix(sandbox): address maintainer review — batch inspect, lock tightening, import hygiene

- _reconcile_orphans(): merge check-and-insert into a single lock acquisition
  per container to eliminate the TOCTOU window.
- list_running(): batch the per-container docker inspect into a single call.
  Total subprocess calls drop from 2N+1 to 2 (one ps + one batch inspect).
  Parse port and created_at from the inspect JSON payload.
- Extract _parse_docker_timestamp() and _extract_host_port() as module-level
  pure helpers and test them directly.
- Move datetime/json imports to module top level.
- _make_provider_for_reconciliation(): document the __new__ bypass and the
  lockstep coupling to AioSandboxProvider.__init__.
- Add assertion that list_running() makes exactly ONE inspect call.
2026-04-09 17:21:23 +08:00
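
A hedged sketch of the batched inspection idea from the reconciliation commit above: one `docker ps` plus a single multi-id `docker inspect` instead of one inspect per container. The surrounding provider code and the field selection are assumptions:

```python
import json
import subprocess


def list_running_containers(name_filter: str) -> list[dict]:
    """One `docker ps` + one batched `docker inspect` instead of 2N+1 calls."""
    ps = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name={name_filter}"],
        capture_output=True, text=True, check=True,
    )
    container_ids = ps.stdout.split()
    if not container_ids:
        return []
    # A single inspect call accepts many ids and returns one JSON array.
    inspect = subprocess.run(
        ["docker", "inspect", *container_ids],
        capture_output=True, text=True, check=True,
    )
    details = json.loads(inspect.stdout)
    return [
        {"id": d["Id"], "created_at": d["Created"], "state": d["State"]["Status"]}
        for d in details
    ]
```
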
Admire 140907ce1d Fix abnormal preview of HTML files (#1986)
* Fix HTML artifact preview rendering

* Add after screenshot for HTML preview fix

* Add before screenshot for HTML preview fix

* Update before screenshot for HTML preview fix

* Update after screenshot for HTML preview fix

* Update before screenshot to Tsinghua homepage repro

* Update after screenshot to Tsinghua homepage preview

* Address PR review on HTML artifact preview

* Harden HTML artifact preview isolation
2026-04-09 16:32:01 +08:00
yangzheli 52718b0f23 fix(frontend): disable incomplete markdown parsing for human messages (#2014)
Streamdown's streaming safeguard appends closing markers (e.g. `*`) to
text with unmatched markdown syntax. This causes user messages containing
literal `*` (such as `99 * 87`) to display with a spurious trailing
asterisk. Human messages are always complete, so the incomplete-markdown
pre-processing is unnecessary.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 16:30:32 +08:00
Admire 563383c60f fix(agent): file-io path guidance in agent prompts (#2019)
* fix(prompt): guide workspace-relative file io

* Clarify bash agent file IO path guidance
2026-04-09 16:12:34 +08:00
Xun 1b74d84590 fix: resolve missing serialized kwargs in PatchedChatDeepSeek (#2025)
* add tests

* fix ci

* fix ci
2026-04-09 16:07:16 +08:00
Zhou 823f3af98c fix(docker): dev uv cache mounts on macOS (#2036) 2026-04-09 15:59:33 +08:00
Gao Mingfei 13664e99e7 fix(docker): nginx fails to start on hosts without IPv6 (#2027)
* fix(docker): nginx fails to start on hosts without IPv6

- Detect IPv6 support at runtime and remove `listen [::]` directive
  when unavailable, preventing nginx startup failure on non-IPv6 hosts
- Use `exec` to replace shell with nginx as PID 1 for proper signal
  handling (graceful shutdown on SIGTERM)
- Reformat command from YAML folded scalar to block scalar (no
  functional change)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(docker): harden nginx startup script (Copilot review feedback)

Add `set -e` so envsubst failures exit immediately instead of starting
nginx with an incomplete config. Narrow the sed pattern to match only
the `listen [::]:2026;` directive to avoid accidentally removing future
lines containing [::].

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 15:58:30 +08:00
60e0abfdb8 fix(frontend): preserve agent context in thread history routes (#1771)
* fix(frontend): preserve agent context in thread history routes

* fix(frontend): preserve agent thread fallback context

* style(frontend): format thread route utils test

---------

Co-authored-by: luoxiao6645 <luoxiao6645@gmail.com>
2026-04-09 15:11:57 +08:00
Octopus 616caa92b1 fix(models): resolve duplicate keyword argument error when reasoning_effort appears in both config and kwargs (#2017)
When a model config includes `reasoning_effort` as an extra YAML field
(ModelConfig uses `extra="allow"`), and the thinking-disabled code path
also injects `reasoning_effort="minimal"` into kwargs, the previous
`model_class(**kwargs, **model_settings_from_config)` call raises:

  TypeError: got multiple values for keyword argument 'reasoning_effort'

Fix by merging the two dicts before instantiation, giving runtime kwargs
precedence over config values: `{**model_settings_from_config, **kwargs}`.

Fixes #1977

Co-authored-by: octo-patch <octo-patch@github.com>
2026-04-09 15:09:39 +08:00
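
A minimal illustration of the merge-before-expansion fix described above; the variable names mirror the commit message, but the helper itself is illustrative:

```python
def build_model_kwargs(model_settings_from_config: dict, runtime_kwargs: dict) -> dict:
    # Merging first avoids `TypeError: got multiple values for keyword argument`
    # when the same key (e.g. reasoning_effort) appears in both dicts; later
    # entries win, so runtime kwargs take precedence over config values.
    return {**model_settings_from_config, **runtime_kwargs}


config_settings = {"reasoning_effort": "high", "temperature": 0.2}
runtime_kwargs = {"reasoning_effort": "minimal"}
merged = build_model_kwargs(config_settings, runtime_kwargs)
# merged == {"reasoning_effort": "minimal", "temperature": 0.2}
# model = model_class(**merged)  # single expansion, no duplicate-kwarg error
```
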
knukn 31a3c9a3de feat(client): add thread query methods list_threads and get_thread (#1609)
* feat(client): add thread query methods `list_threads` and `get_thread`

Implemented two public API methods in `DeerFlowClient` to query threads using the underlying `checkpointer`.

* Update backend/packages/harness/deerflow/client.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update backend/packages/harness/deerflow/client.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update backend/tests/test_client.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update backend/packages/harness/deerflow/client.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix(deerflow): Fix possible KeyError issue when sorting threads

* fix unit test

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-09 15:00:22 +08:00
greatmengqi b2704525a0 feat(auth): release-validation pass for 2.0-rc — 12 blockers + simplify follow-ups (#2008)
* feat(auth): introduce backend auth module

Port RFC-001 authentication core from PR #1728:
- JWT token handling (create_access_token, decode_token, TokenPayload)
- Password hashing (bcrypt) with verify_password
- SQLite UserRepository with base interface
- Provider Factory pattern (LocalAuthProvider)
- CLI reset_admin tool
- Auth-specific errors (AuthErrorCode, TokenError, AuthErrorResponse)

Deps:
- bcrypt>=4.0.0
- pyjwt>=2.9.0
- email-validator>=2.0.0
- backend/uv.toml pins public PyPI index

Tests: 12 pure unit tests (test_auth_config.py, test_auth_errors.py).

Scope note: authz.py, test_auth.py, and test_auth_type_system.py are
deferred to commit 2 because they depend on middleware and deps wiring
that is not yet in place. Commit 1 stays "pure new files only" as the
spec mandates.

* feat(auth): wire auth end-to-end (middleware + frontend replacement)

Backend:
- Port auth_middleware, csrf_middleware, langgraph_auth, routers/auth
- Port authz decorator (owner_filter_key defaults to 'owner_id')
- Merge app.py: register AuthMiddleware + CSRFMiddleware + CORS, add
  _ensure_admin_user lifespan hook, _migrate_orphaned_threads helper,
  register auth router
- Merge deps.py: add get_local_provider, get_current_user_from_request,
  get_optional_user_from_request; keep get_current_user as thin str|None
  adapter for feedback router
- langgraph.json: add auth path pointing to langgraph_auth.py:auth
- Rename metadata['user_id'] -> metadata['owner_id'] in langgraph_auth
  (both metadata write and LangGraph filter dict) + test fixtures

Frontend:
- Delete better-auth library and api catch-all route
- Remove better-auth npm dependency and env vars (BETTER_AUTH_SECRET,
  BETTER_AUTH_GITHUB_*) from env.js
- Port frontend/src/core/auth/* (AuthProvider, gateway-config,
  proxy-policy, server-side getServerSideUser, types)
- Port frontend/src/core/api/fetcher.ts
- Port (auth)/layout, (auth)/login, (auth)/setup pages
- Rewrite workspace/layout.tsx as server component that calls
  getServerSideUser and wraps in AuthProvider
- Port workspace/workspace-content.tsx for the client-side sidebar logic

Tests:
- Port 5 auth test files (test_auth, test_auth_middleware,
  test_auth_type_system, test_ensure_admin, test_langgraph_auth)
- 176 auth tests PASS

After this commit: login/logout/registration flow works, but persistence
layer does not yet filter by owner_id. Commit 4 closes that gap.

* feat(auth): account settings page + i18n

- Port account-settings-page.tsx (change password, change email, logout)
- Wire into settings-dialog.tsx as new "account" section with UserIcon,
  rendered first in the section list
- Add i18n keys:
  - en-US/zh-CN: settings.sections.account ("Account" / "账号")
  - en-US/zh-CN: button.logout ("Log out" / "退出登录")
  - types.ts: matching type declarations

* feat(auth): enforce owner_id across 2.0-rc persistence layer

Add request-scoped contextvar-based owner filtering to threads_meta,
runs, run_events, and feedback repositories. Router code is unchanged
— isolation is enforced at the storage layer so that any caller that
forgets to pass owner_id still gets filtered results, and new routes
cannot accidentally leak data.

Core infrastructure
-------------------
- deerflow/runtime/user_context.py (new):
  - ContextVar[CurrentUser | None] with default None
  - runtime_checkable CurrentUser Protocol (structural subtype with .id)
  - set/reset/get/require helpers
  - AUTO sentinel + resolve_owner_id(value, method_name) for sentinel
    three-state resolution: AUTO reads contextvar, explicit str
    overrides, explicit None bypasses the filter (for migration/CLI)

Repository changes
------------------
- ThreadMetaRepository: create/get/search/update_*/delete gain
  owner_id=AUTO kwarg; read paths filter by owner, writes stamp it,
  mutations check ownership before applying
- RunRepository: put/get/list_by_thread/delete gain owner_id=AUTO kwarg
- FeedbackRepository: create/get/list_by_run/list_by_thread/delete
  gain owner_id=AUTO kwarg
- DbRunEventStore: list_messages/list_events/list_messages_by_run/
  count_messages/delete_by_thread/delete_by_run gain owner_id=AUTO
  kwarg. Write paths (put/put_batch) read contextvar softly: when a
  request-scoped user is available, owner_id is stamped; background
  worker writes without a user context pass None which is valid
  (orphan row to be bound by migration)

Schema
------
- persistence/models/run_event.py: RunEventRow.owner_id: Mapped[
  str | None] = mapped_column(String(64), nullable=True, index=True)
- No alembic migration needed: 2.0 ships fresh, Base.metadata.create_all
  picks up the new column automatically

Middleware
----------
- auth_middleware.py: after cookie check, call get_optional_user_from_
  request to load the real User, stamp it into request.state.user AND
  the contextvar via set_current_user, reset in a try/finally. Public
  paths and unauthenticated requests continue without contextvar, and
  @require_auth handles the strict 401 path

Test infrastructure
-------------------
- tests/conftest.py: @pytest.fixture(autouse=True) _auto_user_context
  sets a default SimpleNamespace(id="test-user-autouse") on every test
  unless marked @pytest.mark.no_auto_user. Keeps existing 20+
  persistence tests passing without modification
- pyproject.toml [tool.pytest.ini_options]: register no_auto_user
  marker so pytest does not emit warnings for opt-out tests
- tests/test_user_context.py: 6 tests covering three-state semantics,
  Protocol duck typing, and require/optional APIs
- tests/test_thread_meta_repo.py: one test updated to pass owner_id=
  None explicitly where it was previously relying on the old default

Test results
------------
- test_user_context.py: 6 passed
- test_auth*.py + test_langgraph_auth.py + test_ensure_admin.py: 127
- test_run_event_store / test_run_repository / test_thread_meta_repo
  / test_feedback: 92 passed
- Full backend suite: 1905 passed, 2 failed (both @requires_llm flaky
  integration tests unrelated to auth), 1 skipped

* feat(auth): extend orphan migration to 2.0-rc persistence tables

_ensure_admin_user now runs a three-step pipeline on every boot:

  Step 1 (fatal):     admin user exists / is created / password is reset
  Step 2 (non-fatal): LangGraph store orphan threads → admin
  Step 3 (non-fatal): SQL persistence tables → admin
    - threads_meta
    - runs
    - run_events
    - feedback

Each step is idempotent. The fatal/non-fatal split mirrors PR #1728's
original philosophy: admin creation failure blocks startup (the system
is unusable without an admin), whereas migration failures log a warning
and let the service proceed (a partial migration is recoverable; a
missing admin is not).

Key helpers
-----------
- _iter_store_items(store, namespace, *, page_size=500):
  async generator that cursor-paginates across LangGraph store pages.
  Fixes PR #1728's hardcoded limit=1000 bug that would silently lose
  orphans beyond the first page.

- _migrate_orphaned_threads(store, admin_user_id):
  Rewritten to use _iter_store_items. Returns the migrated count so the
  caller can log it; raises only on unhandled exceptions.

- _migrate_orphan_sql_tables(admin_user_id):
  Imports the 4 ORM models lazily, grabs the shared session factory,
  runs one UPDATE per table in a single transaction, commits once.
  No-op when no persistence backend is configured (in-memory dev).

Tests: test_ensure_admin.py (8 passed)

* test(auth): port AUTH test plan docs + lint/format pass

- Port backend/docs/AUTH_TEST_PLAN.md and AUTH_UPGRADE.md from PR #1728
- Rename metadata.user_id → metadata.owner_id in AUTH_TEST_PLAN.md
  (4 occurrences from the original PR doc)
- ruff auto-fix UP037 in sentinel type annotations: drop quotes around
  "str | None | _AutoSentinel" now that from __future__ import
  annotations makes them implicit string forms
- ruff format: 2 files (app/gateway/app.py, runtime/user_context.py)

Note on test coverage additions:
- conftest.py autouse fixture was already added in commit 4 (had to
  be co-located with the repository changes to keep pre-existing
  persistence tests passing)
- cross-user isolation E2E tests (test_owner_isolation.py) deferred
  — enforcement is already proven by the 98-test repository suite
  via the autouse fixture + explicit _AUTO sentinel exercises
- New test cases (TC-API-17..20, TC-ATK-13, TC-MIG-01..07) listed
  in AUTH_TEST_PLAN.md are deferred to a follow-up PR — they are
  manual-QA test cases rather than pytest code, and the spec-level
  coverage is already met by test_user_context.py + the 98-test
  repository suite.

Final test results:
- Auth suite (test_auth*, test_langgraph_auth, test_ensure_admin,
  test_user_context): 186 passed
- Persistence suite (test_run_event_store, test_run_repository,
  test_thread_meta_repo, test_feedback): 98 passed
- Lint: ruff check + ruff format both clean

* test(auth): add cross-user isolation test suite

10 tests exercising the storage-layer owner filter by manually
switching the user_context contextvar between two users. Verifies
the safety invariant:

  After a repository write with owner_id=A, a subsequent read with
  owner_id=B must not return the row, and vice versa.

Covers all 4 tables that own user-scoped data:

TC-API-17  threads_meta  — read, search, update, delete cross-user
TC-API-18  runs          — get, list_by_thread, delete cross-user
TC-API-19  run_events    — list_messages, list_events, count_messages,
                           delete_by_thread (CRITICAL: raw conversation
                           content leak vector)
TC-API-20  feedback      — get, list_by_run, delete cross-user

Plus two meta-tests verifying the sentinel pattern itself:
- AUTO + unset contextvar raises RuntimeError
- explicit owner_id=None bypasses the filter (migration escape hatch)

Architecture note
-----------------
These tests bypass the HTTP layer by design. The full chain
(cookie → middleware → contextvar → repository) is covered piecewise:

- test_auth_middleware.py: middleware sets contextvar from cookies
- test_owner_isolation.py: repositories enforce isolation when
  contextvar is set to different users

Together they prove the end-to-end safety property without the
ceremony of spinning up a full TestClient + in-memory DB for every
router endpoint.

Tests pass: 231 (full auth + persistence + isolation suite)
Lint: clean

* refactor(auth): migrate user repository to SQLAlchemy ORM

Move the users table into the shared persistence engine so auth
matches the pattern of threads_meta, runs, run_events, and feedback —
one engine, one session factory, one schema init codepath.

New files
---------
- persistence/user/__init__.py, persistence/user/model.py: UserRow
  ORM class with partial unique index on (oauth_provider, oauth_id)
- Registered in persistence/models/__init__.py so
  Base.metadata.create_all() picks it up

Modified
--------
- auth/repositories/sqlite.py: rewritten as async SQLAlchemy,
  identical constructor pattern to the other four repositories
  (def __init__(self, session_factory) + self._sf = session_factory)
- auth/config.py: drop users_db_path field — storage is configured
  through config.database like every other table
- deps.py/get_local_provider: construct SQLiteUserRepository with
  the shared session factory, fail fast if engine is not initialised
- tests/test_auth.py: rewrite test_sqlite_round_trip_new_fields to
  use the shared engine (init_engine + close_engine in a tempdir)
- tests/test_auth_type_system.py: add per-test autouse fixture that
  spins up a scratch engine and resets deps._cached_* singletons

* refactor(auth): remove SQL orphan migration (unused in supported scenarios)

The _migrate_orphan_sql_tables helper existed to bind NULL owner_id
rows in threads_meta, runs, run_events, and feedback to the admin on
first boot. But in every supported upgrade path, it's a no-op:

  1. Fresh install: create_all builds fresh tables, no legacy rows
  2. No-auth → with-auth (no existing persistence DB): persistence
     tables are created fresh by create_all, no legacy rows
  3. No-auth → with-auth (has existing persistence DB from #1930):
     NOT a supported upgrade path — "existing-DB to existing-DB" schema
     evolution is out of scope; users wipe the DB or run a manual ALTER

So the SQL orphan migration never has anything to do in the
supported matrix. Delete the function, simplify _ensure_admin_user
from a 3-step pipeline to a 2-step one (admin creation + LangGraph
store orphan migration only).

LangGraph store orphan migration stays: it serves the real
"no-auth → with-auth" upgrade path where a user's existing LangGraph
thread metadata has no owner_id field and needs to be stamped with
the newly-created admin's id.

Tests: 284 passed (auth + persistence + isolation)
Lint: clean

* security(auth): write initial admin password to 0600 file instead of logs

CodeQL py/clear-text-logging-sensitive-data flagged 3 call sites that
logged the auto-generated admin password to stdout via logger.info().
Production log aggregators (ELK/Splunk/etc) would have captured those
cleartext secrets. Replace with a shared helper that writes to
.deer-flow/admin_initial_credentials.txt with mode 0600, and log only
the path.

New file
--------
- app/gateway/auth/credential_file.py: write_initial_credentials()
  helper. Takes email, password, and an "initial"/"reset" label.
  Creates .deer-flow/ if missing, writes a header comment plus the
  email+password, chmods 0o600, returns the absolute Path.

Modified
--------
- app/gateway/app.py: both _ensure_admin_user paths (fresh creation
  + needs_setup password reset) now write to file and log the path
- app/gateway/auth/reset_admin.py: rewritten to use the shared ORM
  repo (SQLiteUserRepository with session_factory) and the
  credential_file helper. The previous implementation was broken
  after the earlier ORM refactor — it still imported _get_users_conn
  and constructed SQLiteUserRepository() without a session factory.

No tests changed — the three password-log sites are all exercised
via existing test_ensure_admin.py which checks that startup
succeeds, not that a specific string appears in logs.

CodeQL alerts 272, 283, 284: all resolved.

* security(auth): strict JWT validation in middleware (fix junk cookie bypass)

AUTH_TEST_PLAN test 7.5.8 expects junk cookies to be rejected with
401. The previous middleware behaviour was "presence-only": check
that some access_token cookie exists, then pass through. In
combination with my Task-12 decision to skip @require_auth
decorators on routes, this created a gap where a request with any
cookie-shaped string (e.g. access_token=not-a-jwt) would bypass
authentication on routes that do not touch the repository
(/api/models, /api/mcp/config, /api/memory, /api/skills, …).

Fix: middleware now calls get_current_user_from_request() strictly
and catches the resulting HTTPException to render a 401 with the
proper fine-grained error code (token_invalid, token_expired,
user_not_found, …). On success it stamps request.state.user and
the contextvar so repository-layer owner filters work downstream.

The 4 old "_with_cookie_passes" tests in test_auth_middleware.py
were written for the presence-only behaviour; they asserted that
a junk cookie would make the handler return 200. They are renamed
to "_with_junk_cookie_rejected" and their assertions flipped to
401. The negative path (no cookie → 401 not_authenticated)
is unchanged.

Verified:
  no cookie       → 401 not_authenticated
  junk cookie     → 401 token_invalid     (the fixed bug)
  expired cookie  → 401 token_expired

Tests: 284 passed (auth + persistence + isolation)
Lint: clean

* security(auth): wire @require_permission(owner_check=True) on isolation routes

Apply the require_permission decorator to all 28 routes that take a
{thread_id} path parameter. Combined with the strict middleware
(previous commit), this gives the double-layer protection that
AUTH_TEST_PLAN test 7.5.9 documents:

  Layer 1 (AuthMiddleware): cookie + JWT validation, rejects junk
                            cookies and stamps request.state.user
  Layer 2 (@require_permission with owner_check=True): per-resource
                            ownership verification via
                            ThreadMetaStore.check_access — returns
                            404 if a different user owns the thread

The decorator's owner_check branch is rewritten to use the SQL
thread_meta_repo (the 2.0-rc persistence layer) instead of the
LangGraph store path that PR #1728 used (_store_get / get_store
in routers/threads.py). The inject_record convenience is dropped
— no caller in 2.0 needs the LangGraph blob, and the SQL repo has
a different shape.

Routes decorated (28 total):
- threads.py: delete, patch, get, get-state, post-state, post-history
- thread_runs.py: post-runs, post-runs-stream, post-runs-wait,
  list_runs, get_run, cancel_run, join_run, stream_existing_run,
  list_thread_messages, list_run_messages, list_run_events,
  thread_token_usage
- feedback.py: create, list, stats, delete
- uploads.py: upload (added Request param), list, delete
- artifacts.py: get_artifact
- suggestions.py: generate (renamed body parameter to avoid
  conflict with FastAPI Request)

Test fixes:
- test_suggestions_router.py: bypass the decorator via __wrapped__
  (the unit tests cover parsing logic, not auth — no point spinning
  up a thread_meta_repo just to test JSON unwrapping)
- test_auth_middleware.py 4 fake-cookie tests: already updated in
  the previous commit (745bf432)

Tests: 293 passed (auth + persistence + isolation + suggestions)
Lint: clean

* security(auth): defense-in-depth fixes from release validation pass

Eight findings caught while running the AUTH_TEST_PLAN end-to-end against
the deployed sg_dev stack. Each is a pre-condition for shipping
release/2.0-rc that the previous PRs missed.

Backend hardening
- routers/auth.py: rate limiter X-Real-IP now requires AUTH_TRUSTED_PROXIES
  whitelist (CIDR/IP allowlist). Without nginx in front, the previous code
  honored arbitrary X-Real-IP, letting an attacker rotate the header to
  fully bypass the per-IP login lockout.
- routers/auth.py: 36-entry common-password blocklist via Pydantic
  field_validator on RegisterRequest + ChangePasswordRequest. The shared
  _validate_strong_password helper keeps the constraint in one place.
- routers/threads.py: ThreadCreateRequest + ThreadPatchRequest strip
  server-reserved metadata keys (owner_id, user_id) via Pydantic
  field_validator so a forged value can never round-trip back to other
  clients reading the same thread. The actual ownership invariant stays
  on the threads_meta row; this closes the metadata-blob echo gap.
- authz.py + thread_meta/sql.py: require_permission gains a require_existing
  flag plumbed through check_access(require_existing=True). Destructive
  routes (DELETE/PATCH/state-update/runs/feedback) now treat a missing
  thread_meta row as 404 instead of "untracked legacy thread, allow",
  closing the cross-user delete-idempotence gap where any user could
  successfully DELETE another user's deleted thread.
- repositories/sqlite.py + base.py: update_user raises UserNotFoundError
  on a vanished row instead of silently returning the input. Concurrent
  delete during password reset can no longer look like a successful update.
- runtime/user_context.py: resolve_owner_id() coerces User.id (UUID) to
  str at the contextvar boundary so SQLAlchemy String(64) columns can
  bind it. The whole 2.0-rc isolation pipeline was previously broken
  end-to-end (POST /api/threads → 500 "type 'UUID' is not supported").
- persistence/engine.py: SQLAlchemy listener enables PRAGMA journal_mode=WAL,
  synchronous=NORMAL, foreign_keys=ON on every new SQLite connection.
  TC-UPG-06 in the test plan expects WAL; previous code shipped with the
  default 'delete' journal.
- auth_middleware.py: stamp request.state.auth = AuthContext(...) so
  @require_permission's short-circuit fires; previously every isolation
  request did a duplicate JWT decode + users SELECT. Also unifies the
  401 payload through AuthErrorResponse(...).model_dump().
- app.py: _ensure_admin_user restructure removes the noqa F821 scoping
  bug where 'password' was referenced outside the branch that defined it.
  New _announce_credentials helper absorbs the duplicate log block in
  the fresh-admin and reset-admin branches.

* fix(frontend+nginx): rollout CSRF on every state-changing client path

The frontend was 100% broken in gateway-pro mode for any user trying to
open a specific chat thread. Three cumulative bugs each silently
masked the next.

LangGraph SDK CSRF gap (api-client.ts)
- The Client constructor took only apiUrl, no defaultHeaders, no fetch
  interceptor. The SDK's internal fetch never sent X-CSRF-Token, so
  every state-changing /api/langgraph-compat/* call (runs/stream,
  threads/search, threads/{tid}/history, ...) hit CSRFMiddleware and
  got 403 before reaching the auth check. UI symptom: empty thread page
  with no error message; the SPA's hooks swallowed the rejection.
- Fix: pass an onRequest hook that injects X-CSRF-Token from the
  csrf_token cookie per request. Reading the cookie per call (not at
  construction time) handles login / logout / password-change cookie
  rotation transparently. The SDK's prepareFetchOptions calls
  onRequest for both regular requests AND streaming/SSE/reconnect, so
  the same hook covers runs.stream and runs.joinStream.

Raw fetch CSRF gap (7 files)
- Audit: 11 frontend fetch sites, only 2 included CSRF (login/setup +
  account-settings change-password). The other 7 routed through raw
  fetch() with no header — suggestions, memory, agents, mcp, skills,
  uploads, and the local thread cleanup hook all 403'd silently.
- Fix: enhance fetcher.ts:fetchWithAuth to auto-inject X-CSRF-Token on
  POST/PUT/DELETE/PATCH from a single shared readCsrfCookie() helper.
  Convert all 7 raw fetch() callers to fetchWithAuth so the contract
  is centrally enforced. api-client.ts and fetcher.ts share
  readCsrfCookie + STATE_CHANGING_METHODS to avoid drift.

nginx routing + buffering (nginx.local.conf)
- The auth feature shipped without updating the nginx config: per-API
  explicit location blocks but no /api/v1/auth/, /api/feedback, /api/runs.
  The frontend's client-side fetches to /api/v1/auth/login/local 404'd
  from the Next.js side because nginx routed /api/* to the frontend.
- Fix: add catch-all `location /api/` that proxies to the gateway.
  nginx longest-prefix matching keeps the explicit blocks (/api/models,
  /api/threads regex, /api/langgraph/, ...) winning for their paths.
- Fix: disable proxy_buffering + proxy_request_buffering for the
  frontend `location /` block. Without it, nginx tries to spool large
  Next.js chunks into /var/lib/nginx/proxy (root-owned) and fails with
  Permission denied → ERR_INCOMPLETE_CHUNKED_ENCODING → ChunkLoadError.

* test(auth): release-validation test infra and new coverage

Test fixtures and unit tests added during the validation pass.

Router test helpers (NEW: tests/_router_auth_helpers.py)
- make_authed_test_app(): builds a FastAPI test app with a stub
  middleware that stamps request.state.user + request.state.auth and a
  permissive thread_meta_repo mock. TestClient-based router tests
  (test_artifacts_router, test_threads_router) use it instead of bare
  FastAPI() so the new @require_permission(owner_check=True) decorators
  short-circuit cleanly.
- call_unwrapped(): walks the __wrapped__ chain to invoke the underlying
  handler without going through the authz wrappers. Direct-call tests
  (test_uploads_router) use it. Typed with ParamSpec so the wrapped
  signature flows through.
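
A rough sketch of what a __wrapped__-unwinding helper like call_unwrapped() can look like; the shipped signature may differ.

```python
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def call_unwrapped(handler: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R:
    # functools.wraps-based decorators (such as @require_permission) expose the
    # original handler via __wrapped__; walk the chain to the innermost callable.
    target = handler
    while hasattr(target, "__wrapped__"):
        target = target.__wrapped__  # type: ignore[attr-defined]
    return target(*args, **kwargs)
```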

Backend test additions
- test_auth.py: 7 tests for the new _get_client_ip trust model (no
  proxy / trusted proxy / untrusted peer / XFF rejection / invalid
  CIDR / no client). 5 tests for the password blocklist (literal,
  case-insensitive, strong password accepted, change-password binding,
  short-password length-check still fires before blocklist).
  test_update_user_raises_when_row_concurrently_deleted: closes a
  shipped-without-coverage gap on the new UserNotFoundError contract.
- test_thread_meta_repo.py: 4 tests for check_access(require_existing=True)
  — strict missing-row denial, strict owner match, strict owner mismatch,
  strict null-owner still allowed (shared rows survive the tightening).
- test_ensure_admin.py: 3 tests for _migrate_orphaned_threads /
  _iter_store_items pagination, covering the TC-UPG-02 upgrade story
  end-to-end via mock store. Closes the gap where the cursor pagination
  was untested even though the previous PR rewrote it.
- test_threads_router.py: 5 tests for _strip_reserved_metadata
  (owner_id removal, user_id removal, safe-keys passthrough, empty
  input, both-stripped).
- test_auth_type_system.py: replace "password123" fixtures with
  Tr0ub4dor3a / AnotherStr0ngPwd! so the new password blocklist
  doesn't reject the test data.

* docs(auth): refresh TC-DOCKER-05 + document Docker validation gap

- AUTH_TEST_PLAN.md TC-DOCKER-05: the previous expectation
  ("admin password visible in docker logs") was stale after the simplify
  pass that moved credentials to a 0600 file. The grep "Password:" check
  would have silently failed and given a false sense of coverage. New
  expectation matches the actual file-based path: 0600 file in
  DEER_FLOW_HOME, log shows the path (not the secret), reverse-grep
  asserts no leaked password in container logs.
- NEW: docs/AUTH_TEST_DOCKER_GAP.md documents the only un-executed
  block in the test plan (TC-DOCKER-01..06). Reason: sg_dev validation
  host has no Docker daemon installed. The doc maps each Docker case
  to an already-validated bare-metal equivalent (TC-1.1, TC-REENT-01,
  TC-API-02 etc.) so the gap is auditable, and includes pre-flight
  reproduction steps for whoever has Docker available.

---------

Co-authored-by: greatmengqi <chenmengqi.0376@bytedance.com>
2026-04-09 11:29:32 +08:00
Xinmin Zeng ad6d934a5f fix(middleware): handle string-serialized options in ClarificationMiddleware (#1997)
* fix(middleware): handle string-serialized options in ClarificationMiddleware (#1995)

Some models (e.g. Qwen3-Max) serialize array tool parameters as JSON
strings instead of native arrays. Add defensive type checking in
_format_clarification_message() to deserialize string options before
iteration, preventing per-character rendering.

* fix(middleware): normalize options after JSON deserialization

Address Copilot review feedback:
- Add post-deserialization normalization so options is always a list
  (handles json.loads returning a scalar string, dict, or None)
- Add test for JSON-encoded scalar string ("development")
- Fix test_json_string_with_mixed_types to use actual mixed types
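
A hedged sketch of the defensive normalization described above; the helper name and how scalars and dicts are folded into the list are illustrative guesses, not the shipped middleware code.

```python
import json
from typing import Any

def _normalize_options(raw: Any) -> list[str]:
    # Some models serialize array tool args as JSON strings; decode them first.
    if isinstance(raw, str):
        try:
            raw = json.loads(raw)
        except json.JSONDecodeError:
            return [raw]  # plain string: treat as a single option
    # Post-deserialization: json.loads may yield a scalar, dict, or None.
    if raw is None:
        return []
    if isinstance(raw, dict):
        return [str(v) for v in raw.values()]
    if isinstance(raw, (list, tuple)):
        return [str(item) for item in raw]
    return [str(raw)]
```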
2026-04-08 21:04:20 +08:00
hung_ng__ 5350b2fb24 feat(community): add Exa search as community tool provider (#1357)
* feat(community): add Exa search as community tool provider

Add Exa (exa.ai) as a new community search provider alongside Tavily,
Firecrawl, InfoQuest, and Jina AI. Exa is an AI-native search engine
with neural, keyword, and auto search types.

New files:
- community/exa/tools.py: web_search_tool and web_fetch_tool
- tests/test_exa_tools.py: 10 unit tests with mocked Exa client

Changes:
- pyproject.toml: add exa-py dependency
- config.example.yaml: add commented-out Exa configuration examples

Usage: set `use: deerflow.community.exa.tools:web_search_tool` in
config.yaml and provide EXA_API_KEY.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(community): address PR review comments for Exa tools

- Make _get_exa_client() accept tool_name param so web_fetch reads its own config
- Remove __init__.py to match namespace package pattern of other providers
- Add duplicate tool name warning in config.example.yaml
- Add regression tests for web_fetch config resolution

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Update revision in uv.lock to 3

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2026-04-08 17:13:39 +08:00
Gao Mingfei 29817c3b34 fix(backend): use timezone-aware UTC in memory modules (fix pytest DeprecationWarnings) (#1992)
* fix(backend): use timezone-aware UTC in memory modules

Replace datetime.utcnow() with datetime.now(timezone.utc) and a shared
utc_now_iso_z() helper so persisted ISO timestamps keep the trailing Z
suffix without triggering Python 3.12+ deprecation warnings.

Made-with: Cursor

* refactor(backend): use removesuffix for utc_now_iso_z suffix

Makes the +00:00 -> Z transform explicit for the trailing offset only
(Copilot review on PR #1992).
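
A plausible shape for the shared helper, assuming the Z-suffix behaviour described above; precision handling in the real helper may differ.

```python
from datetime import datetime, timezone

def utc_now_iso_z() -> str:
    # datetime.now(timezone.utc) is timezone-aware (no DeprecationWarning on
    # Python 3.12+); its isoformat() ends in "+00:00", which is rewritten to
    # "Z" so persisted ISO timestamps keep the trailing Z suffix.
    return datetime.now(timezone.utc).isoformat().removesuffix("+00:00") + "Z"
```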

Made-with: Cursor

* style(backend): satisfy ruff UP017 with datetime.UTC in memory queue

Made-with: Cursor

---------

Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2026-04-08 16:28:00 +08:00
Saber e5b149068c Fix(subagent): Event loop conflict in SubagentExecutor.execute() (#1965)
* Fix event loop conflict in SubagentExecutor.execute()

When SubagentExecutor.execute() is called from within an already-running
event loop (e.g., when the parent agent uses async/await), calling
asyncio.run() creates a new event loop that conflicts with asyncio
primitives (like httpx.AsyncClient) that were created in and bound to
the parent loop.

This fix detects if we're already in a running event loop, and if so,
runs the subagent in a separate thread with its own isolated event loop
to avoid conflicts.

Fixes: sub-task cards not appearing in Ultra mode when using async parent agents
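
A minimal sketch of the detect-and-offload pattern described above; names are illustrative rather than the actual SubagentExecutor code.

```python
import asyncio
import concurrent.futures
from typing import Any, Coroutine

def run_coro_blocking(coro: Coroutine[Any, Any, Any]) -> Any:
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running in this thread: asyncio.run() is safe to use.
        return asyncio.run(coro)
    # Already inside a running loop: run the coroutine in a separate thread
    # with its own isolated event loop, so primitives bound to the parent
    # loop (e.g. an httpx.AsyncClient) are not reused across loops.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()
```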

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(subagent): harden isolated event loop execution

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 11:46:06 +08:00
85b7ed3cec fix(frontend): avoid using route new as thread id (#1967)
Co-authored-by: luoxiao6645 <luoxiao6645@gmail.com>
2026-04-08 10:08:55 +08:00
siwuai 24805200f0 fix(frontend): prevent stale 'new' thread ID from triggering 422 history requests (#1960)
After history.replaceState updates the URL from /chats/new to
/chats/{UUID}, Next.js useParams does not update because replaceState
bypasses the router. The useEffect in useThreadChat would then set
threadIdFromPath ('new') as the threadId, causing the LangGraph SDK
to call POST /threads/new/history which returns HTTP 422 (Invalid
thread ID: must be a UUID).

This fix adds a guard to skip the threadId update when
threadIdFromPath is the literal string 'new', preserving the
already-correct UUID that was set when the thread was created.
2026-04-08 10:03:07 +08:00
13ernkastel 722a9c4753 docs: clarify deployment sizing guidance (#1963) 2026-04-08 09:45:31 +08:00
Xinmin Zeng d1baf7212b fix(frontend): UI polish - fix CSS typo, dark mode border, and hardcoded colors (#1942)
- Fix `font-norma` typo to `font-normal` in message-list subtask count
- Fix dark mode `--border` using reddish hue (22.216) instead of neutral
- Replace hardcoded `rgb(184,184,192)` in hero with `text-muted-foreground`
- Replace hardcoded `bg-[#a3a1a1]` in streaming indicator with `bg-muted-foreground`
- Add missing `font-sans` to welcome description `<pre>` for consistency
- Make case-study-section padding responsive (`px-4 md:px-20`)

Closes #1940
2026-04-08 09:07:39 +08:00
Async23 0948c7a4e1 fix(provider): preserve streamed Codex output when response.completed.output is empty (#1928)
* fix: preserve streamed Codex output items

* fix: prefer completed Codex output over streamed placeholders
2026-04-07 18:21:22 +08:00
koppx c3170f22da fix(backend): make loop detection hash tool calls by stable keys (#1911)
* fix(backend): make loop detection hash tool calls by stable keys

The loop detection middleware previously hashed full tool call arguments,
which made repeated calls look different when only non-essential argument
details changed. In particular, `read_file` calls with nearby line ranges
could bypass repetition detection even when the agent was effectively
reading the same file region again and again.

- Hash tool calls using stable keys instead of the full raw args payload
- Bucket `read_file` line ranges so nearby reads map to the same region key
- Prefer stable identifiers such as `path`, `url`, `query`, or `command`
  before falling back to JSON serialization of args
- Keep hashing order-independent so the same tool call set produces the
  same hash regardless of call order

Fixes #1905
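
A hedged sketch in the spirit of the list above; key names, the bucket size, and the hash format are illustrative, not the middleware's actual values.

```python
import hashlib
import json
from typing import Any

_STABLE_KEYS = ("path", "url", "query", "command")
_LINE_BUCKET = 50  # nearby read_file line ranges map to the same region key

def tool_call_fingerprint(name: str, args: dict[str, Any]) -> str:
    parts: list[str] = [name]
    if name == "read_file":
        start = int(args.get("start_line", 0) or 0)
        parts.append(f"{args.get('path')}#{start // _LINE_BUCKET}")
    else:
        stable = {k: args[k] for k in _STABLE_KEYS if k in args}
        # Fall back to the full args payload only when no stable key exists.
        payload = stable or args
        parts.append(json.dumps(payload, sort_keys=True, default=str))
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

def call_set_hash(calls: list[tuple[str, dict[str, Any]]]) -> str:
    # Sort fingerprints so the same set of tool calls hashes identically
    # regardless of call order.
    fingerprints = sorted(tool_call_fingerprint(n, a) for n, a in calls)
    return hashlib.sha256("".join(fingerprints).encode()).hexdigest()
```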

* fix(backend): harden loop detection hash normalization

- Normalize and parse stringified tool args defensively
- Expand stable key derivation to include pattern, glob, and cmd
- Normalize reversed read_file ranges before bucketing

Fixes #1905

* fix(backend): harden loop detection tool format

* exclude write_file and str_replace from the stable-key path — writing different content to the same file shouldn't be flagged.

---------

Co-authored-by: JeffJiang <for-eleven@hotmail.com>
2026-04-07 17:46:33 +08:00
Anson Li 1193ac64dc fix(frontend): unify local settings runtime state and remove sidebar layout from LocalSettings (#1879)
* fix(frontend): resolve layout flickering by migrating workspace sidebar state to cookie

* fix(frontend): unify local settings runtime state to fix state drift

* fix(frontend): only persist thread model on explicit context model updates
2026-04-07 17:41:34 +08:00
Admire ab41de2961 fix(frontend):keep DeerFlow chat thread ids in sync (#1931)
* fix: replay thread sync changes on top of main

* fix: avoid stale thread ids during stream startup
2026-04-07 17:15:46 +08:00
KKK 3b3e8e1b0b feat(sandbox): strengthen bash command auditing with compound splitting and expanded patterns (#1881)
* fix(sandbox): strengthen regex coverage in SandboxAuditMiddleware

Expand high-risk patterns from 6 to 13 and medium-risk from 4 to 6,
closing several bypass vectors identified by cross-referencing Claude
Code's BashSecurity validator chain against DeerFlow's threat model.

High-risk additions:
- Generalised pipe-to-sh (replaces narrow curl|sh rule)
- Targeted command substitution ($() / backtick with dangerous executables)
- base64 decode piped to execution
- Overwrite system binaries (/usr/bin/, /bin/, /sbin/)
- Overwrite shell startup files (~/.bashrc, ~/.profile, etc.)
- /proc/*/environ leakage
- LD_PRELOAD / LD_LIBRARY_PATH hijack
- /dev/tcp/ bash built-in networking

Medium-risk additions:
- sudo/su (no-op under Docker root, warn only)
- PATH= modification (long attack chain, warn only)

Design decisions:
- Command substitution uses targeted matching (curl/wget/bash/sh/python/
  ruby/perl/base64) rather than blanket block to avoid false positives
  on safe usage like $(date) or `whoami`.
- Skipped encoding/obfuscation checks (hex, octal, Unicode homoglyphs)
  as ROI is low in Docker sandbox — LLMs don't generate encoded commands
  and container isolation bounds the blast radius.
- Merged pip/pip3 into single pip3? pattern.

* feat(sandbox): compound command splitting and fork bomb detection

Split compound bash commands (&&, ||, ;) into sub-commands and classify
each independently — prevents dangerous commands hidden after safe
prefixes (e.g. "cd /workspace && rm -rf /") from bypassing detection.

- Add _split_compound_command() with shlex quote-aware splitting
- Add fork bomb detection patterns (classic and while-loop variants)
- Most severe verdict wins; block short-circuits
- 15 new tests covering compound commands, splitting, and fork bombs

* test(sandbox): add async tests for fork bomb and compound commands

Cover awrap_tool_call path for fork bomb detection (3 variants) and
compound command splitting (block/warn/pass scenarios).

* fix(sandbox): address Copilot review — no-whitespace operators, >>/etc/, whole-command scan

- _split_compound_command: replace shlex-based implementation with a
  character-by-character quote/escape-aware scanner. shlex.split only
  separates '&&' / '||' / ';' when they are surrounded by whitespace,
  so payloads like 'rm -rf /&&echo ok' or 'safe;rm -rf /' bypassed the
  previous splitter and therefore the per-sub-command classifier.
- _HIGH_RISK_PATTERNS: change r'>\s*/etc/' to r'>+\s*/etc/' so append
  redirection ('>>/etc/hosts') is also blocked.
- _classify_command: run a whole-command high-risk scan *before*
  splitting. Structural attacks like 'while true; do bash & done'
  span multiple shell statements — splitting on ';' destroys the
  pattern context, so the raw command must be scanned first.
- tests: add no-whitespace operator cases to TestSplitCompoundCommand
  and test_compound_command_classification to lock in the bypass fix.
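
A rough character-by-character splitter along the lines described in the last bullet; the real implementation and its edge cases live in the sandbox middleware.

```python
def split_compound_command(command: str) -> list[str]:
    # Split on '&&', '||', ';' outside quotes, without requiring surrounding
    # whitespace, so 'rm -rf /&&echo ok' still yields two sub-commands.
    parts: list[str] = []
    buf: list[str] = []
    quote: str | None = None
    i = 0
    while i < len(command):
        ch = command[i]
        if ch == "\\" and i + 1 < len(command):
            buf.append(command[i : i + 2])  # keep escaped character verbatim
            i += 2
            continue
        if quote:
            if ch == quote:
                quote = None
            buf.append(ch)
        elif ch in ("'", '"'):
            quote = ch
            buf.append(ch)
        elif command[i : i + 2] in ("&&", "||"):
            parts.append("".join(buf).strip())
            buf = []
            i += 2
            continue
        elif ch == ";":
            parts.append("".join(buf).strip())
            buf = []
        else:
            buf.append(ch)
        i += 1
    parts.append("".join(buf).strip())
    return [p for p in parts if p]
```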
2026-04-07 17:15:24 +08:00
Admire 4004fb849f Fix agent gallery after bootstrap creation (fix the agent menu remaining empty after creating a new agent) (#1934)
* fix: persist agent before bootstrap chat

* style: normalize line endings for agent creation page

* fix: address review feedback for agent creation flow

---------

Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2026-04-07 17:10:08 +08:00
Henry Li f467e613b6 feat: add BytePlus logo (#1948) 2026-04-07 16:07:37 +08:00
rayhpeng 00e0e9a49a feat(persistence): add unified persistence layer with event store, token tracking, and feedback (#1930)
* feat(persistence): add SQLAlchemy 2.0 async ORM scaffold

Introduce a unified database configuration (DatabaseConfig) that
controls both the LangGraph checkpointer and the DeerFlow application
persistence layer from a single `database:` config section.

New modules:
- deerflow.config.database_config — Pydantic config with memory/sqlite/postgres backends
- deerflow.persistence — async engine lifecycle, DeclarativeBase with to_dict mixin, Alembic skeleton
- deerflow.runtime.runs.store — RunStore ABC + MemoryRunStore implementation

Gateway integration initializes/tears down the persistence engine in
the existing langgraph_runtime() context manager. Legacy checkpointer
config is preserved for backward compatibility.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add RunEventStore ABC + MemoryRunEventStore

Phase 2-A prerequisite for event storage: adds the unified run event
stream interface (RunEventStore) with an in-memory implementation,
RunEventsConfig, gateway integration, and comprehensive tests (27 cases).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add ORM models, repositories, DB/JSONL event stores, RunJournal, and API endpoints

Phase 2-B: run persistence + event storage + token tracking.

- ORM models: RunRow (with token fields), ThreadMetaRow, RunEventRow
- RunRepository implements RunStore ABC via SQLAlchemy ORM
- ThreadMetaRepository with owner access control
- DbRunEventStore with trace content truncation and cursor pagination
- JsonlRunEventStore with per-run files and seq recovery from disk
- RunJournal (BaseCallbackHandler) captures LLM/tool/lifecycle events,
  accumulates token usage by caller type, buffers and flushes to store
- RunManager now accepts optional RunStore for persistent backing
- Worker creates RunJournal, writes human_message, injects callbacks
- Gateway deps use factory functions (RunRepository when DB available)
- New endpoints: messages, run messages, run events, token-usage
- ThreadCreateRequest gains assistant_id field
- 92 tests pass (33 new), zero regressions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add user feedback + follow-up run association

Phase 2-C: feedback and follow-up tracking.

- FeedbackRow ORM model (rating +1/-1, optional message_id, comment)
- FeedbackRepository with CRUD, list_by_run/thread, aggregate stats
- Feedback API endpoints: create, list, stats, delete
- follow_up_to_run_id in RunCreateRequest (explicit or auto-detected
  from latest successful run on the thread)
- Worker writes follow_up_to_run_id into human_message event metadata
- Gateway deps: feedback_repo factory + getter
- 17 new tests (14 FeedbackRepository + 3 follow-up association)
- 109 total tests pass, zero regressions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test+config: comprehensive Phase 2 test coverage + deprecate checkpointer config

- config.example.yaml: deprecate standalone checkpointer section, activate
  unified database:sqlite as default (drives both checkpointer + app data)
- New: test_thread_meta_repo.py (14 tests) — full ThreadMetaRepository coverage
  including check_access owner logic, list_by_owner pagination
- Extended test_run_repository.py (+4 tests) — completion preserves fields,
  list ordering desc, limit, owner_none returns all
- Extended test_run_journal.py (+8 tests) — on_chain_error, track_tokens=false,
  middleware no ai_message, unknown caller tokens, convenience fields,
  tool_error, non-summarization custom event
- Extended test_run_event_store.py (+7 tests) — DB batch seq continuity,
  make_run_event_store factory (memory/db/jsonl/fallback/unknown)
- Extended test_phase2b_integration.py (+4 tests) — create_or_reject persists,
  follow-up metadata, summarization in history, full DB-backed lifecycle
- Fixed DB integration test to use proper fake objects (not MagicMock)
  for JSON-serializable metadata
- 157 total Phase 2 tests pass, zero regressions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* config: move default sqlite_dir to .deer-flow/data

Keep SQLite databases alongside other DeerFlow-managed data
(threads, memory) under the .deer-flow/ directory instead of a
top-level ./data folder.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(persistence): remove UTFJSON, use engine-level json_serializer + datetime.now()

- Replace custom UTFJSON type with standard sqlalchemy.JSON in all ORM
  models. Add json_serializer=json.dumps(ensure_ascii=False) to all
  create_async_engine calls so non-ASCII text (Chinese etc.) is stored
  as-is in both SQLite and Postgres.
- Change ORM datetime defaults from datetime.now(UTC) to datetime.now(),
  remove UTC imports.
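
A small sketch of the engine-level serializer wiring described above; the URL is illustrative.

```python
import json
from functools import partial

from sqlalchemy.ext.asyncio import create_async_engine

# ensure_ascii=False stores Chinese and other non-ASCII text as-is in JSON
# columns instead of \uXXXX escapes, for both SQLite and Postgres.
engine = create_async_engine(
    "sqlite+aiosqlite:///./app.db",
    json_serializer=partial(json.dumps, ensure_ascii=False),
)
```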

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): simplify deps.py with getter factory + inline repos

- Replace 6 identical getter functions with _require() factory.
- Inline 3 _make_*_repo() factories into langgraph_runtime(), call
  get_session_factory() once instead of 3 times.
- Add thread_meta upsert in start_run (services.py).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(docker): add UV_EXTRAS build arg for optional dependencies

Support installing optional dependency groups (e.g. postgres) at
Docker build time via UV_EXTRAS build arg:
  UV_EXTRAS=postgres docker compose build

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(journal): fix flush, token tracking, and consolidate tests

RunJournal fixes:
- _flush_sync: retain events in buffer when no event loop instead of
  dropping them; worker's finally block flushes via async flush().
- on_llm_end: add tool_calls filter and caller=="lead_agent" guard for
  ai_message events; mark message IDs for dedup with record_llm_usage.
- worker.py: persist completion data (tokens, message count) to RunStore
  in finally block.

Model factory:
- Auto-inject stream_usage=True for BaseChatOpenAI subclasses with
  custom api_base, so usage_metadata is populated in streaming responses.

Test consolidation:
- Delete test_phase2b_integration.py (redundant with existing tests).
- Move DB-backed lifecycle test into test_run_journal.py.
- Add tests for stream_usage injection in test_model_factory.py.
- Clean up executor/task_tool dead journal references.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): widen content type to str|dict in all store backends

Allow event content to be a dict (for structured OpenAI-format messages)
in addition to plain strings. Dict values are JSON-serialized for the DB
backend and deserialized on read; memory and JSONL backends handle dicts
natively. Trace truncation now serializes dicts to JSON before measuring.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(events): use metadata flag instead of heuristic for dict content detection

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(converters): add LangChain-to-OpenAI message format converters

Pure functions langchain_to_openai_message, langchain_to_openai_completion,
langchain_messages_to_openai, and _infer_finish_reason for converting
LangChain BaseMessage objects to OpenAI Chat Completions format, used by
RunJournal for event storage. 15 unit tests added.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(converters): handle empty list content as null, clean up test

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): human_message content uses OpenAI user message format

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): ai_message uses OpenAI format, add ai_tool_call message event

- ai_message content now uses {"role": "assistant", "content": "..."} format
- New ai_tool_call message event emitted when lead_agent LLM responds with tool_calls
- ai_tool_call uses langchain_to_openai_message converter for consistent format
- Both events include finish_reason in metadata ("stop" or "tool_calls")

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): add tool_result message event with OpenAI tool message format

Cache tool_call_id from on_tool_start keyed by run_id as fallback for on_tool_end,
then emit a tool_result message event (role=tool, tool_call_id, content) after each
successful tool completion.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): summary content uses OpenAI system message format

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): replace llm_start/llm_end with llm_request/llm_response in OpenAI format

Add on_chat_model_start to capture structured prompt messages as llm_request events.
Replace llm_end trace events with llm_response using OpenAI Chat Completions format.
Track llm_call_index to pair request/response events.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): add record_middleware method for middleware trace events

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(events): add full run sequence integration test for OpenAI content format

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): align message events with checkpoint format and add middleware tag injection

- Message events (ai_message, ai_tool_call, tool_result, human_message) now use
  BaseMessage.model_dump() format, matching LangGraph checkpoint values.messages
- on_tool_end extracts tool_call_id/name/status from ToolMessage objects
- on_tool_error now emits tool_result message events with error status
- record_middleware uses middleware:{tag} event_type and middleware category
- Summarization custom events use middleware:summarize category
- TitleMiddleware injects middleware:title tag via get_config() inheritance
- SummarizationMiddleware model bound with middleware:summarize tag
- Worker writes human_message using HumanMessage.model_dump()

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(threads): switch search endpoint to threads_meta table and sync title

- POST /api/threads/search now queries threads_meta table directly,
  removing the two-phase Store + Checkpointer scan approach
- Add ThreadMetaRepository.search() with metadata/status filters
- Add ThreadMetaRepository.update_display_name() for title sync
- Worker syncs checkpoint title to threads_meta.display_name on run completion
- Map display_name to values.title in search response for API compatibility

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(threads): history endpoint reads messages from event store

- POST /api/threads/{thread_id}/history now combines two data sources:
  checkpointer for checkpoint_id, metadata, title, thread_data;
  event store for messages (complete history, not truncated by summarization)
- Strip internal LangGraph metadata keys from response
- Remove full channel_values serialization in favor of selective fields

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove duplicate optional-dependencies header in pyproject.toml

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(middleware): pass tagged config to TitleMiddleware ainvoke call

Without the config, the middleware:title tag was not injected,
causing the LLM response to be recorded as a lead_agent ai_message
in run_events.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve merge conflict in .env.example

Keep both DATABASE_URL (from persistence-scaffold) and WECOM
credentials (from main) after the merge.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address review feedback on PR #1851

- Fix naive datetime.now() → datetime.now(UTC) in all ORM models
- Fix seq race condition in DbRunEventStore.put() with FOR UPDATE
  and UNIQUE(thread_id, seq) constraint
- Encapsulate _store access in RunManager.update_run_completion()
- Deduplicate _store.put() logic in RunManager via _persist_to_store()
- Add update_run_completion to RunStore ABC + MemoryRunStore
- Wire follow_up_to_run_id through the full create path
- Add error recovery to RunJournal._flush_sync() lost-event scenario
- Add migration note for search_threads breaking change
- Fix test_checkpointer_none_fix mock to set database=None

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: update uv.lock

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address 22 review comments from CodeQL, Copilot, and Code Quality

Bug fixes:
- Sanitize log params to prevent log injection (CodeQL)
- Reset threads_meta.status to idle/error when run completes
- Attach messages only to latest checkpoint in /history response
- Write threads_meta on POST /threads so new threads appear in search

Lint fixes:
- Remove unused imports (journal.py, migrations/env.py, test_converters.py)
- Convert lambda to named function (engine.py, Ruff E731)
- Remove unused logger definitions in repos (Ruff F841)
- Add logging to JSONL decode errors and empty except blocks
- Separate assert side-effects in tests (CodeQL)
- Remove unused local variables in tests (Ruff F841)
- Fix max_trace_content truncation to use byte length, not char length

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: apply ruff format to persistence and runtime files

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Potential fix for pull request finding 'Statement has no effect'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* refactor(runtime): introduce RunContext to reduce run_agent parameter bloat

Extract checkpointer, store, event_store, run_events_config, thread_meta_repo,
and follow_up_to_run_id into a frozen RunContext dataclass. Add get_run_context()
in deps.py to build the base context from app.state singletons. start_run() uses
dataclasses.replace() to enrich per-run fields before passing ctx to run_agent.
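
An abridged sketch of the frozen-context pattern; field names follow the paragraph above, but the shipped dataclass carries more precise typing.

```python
from dataclasses import dataclass, replace
from typing import Any

@dataclass(frozen=True)
class RunContext:
    checkpointer: Any = None
    store: Any = None
    event_store: Any = None
    run_events_config: Any = None
    thread_meta_repo: Any = None
    follow_up_to_run_id: str | None = None

# get_run_context() would build the base context from app.state singletons;
# start_run() then enriches per-run fields without mutating the base object.
base_ctx = RunContext(checkpointer="cp", store="store")
run_ctx = replace(base_ctx, follow_up_to_run_id="run_123")
```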

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): move sanitize_log_param to app/gateway/utils.py

Extract the log-injection sanitizer from routers/threads.py into a shared
utils module and rename to sanitize_log_param (public API). Eliminates the
reverse service → router import in services.py.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf: use SQL aggregation for feedback stats and thread token usage

Replace Python-side counting in FeedbackRepository.aggregate_by_run with
a single SELECT COUNT/SUM query. Add RunStore.aggregate_tokens_by_thread
abstract method with SQL GROUP BY implementation in RunRepository and
Python fallback in MemoryRunStore. Simplify the thread_token_usage
endpoint to delegate to the new method, eliminating the limit=10000
truncation risk.
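
A sketch of the aggregate query described above; the RunRow import path and token column name are assumptions for illustration.

```python
from sqlalchemy import func, select
from sqlalchemy.ext.asyncio import AsyncSession

from deerflow.persistence.run import RunRow  # assumed import path

async def aggregate_tokens_by_thread(session: AsyncSession, thread_id: str) -> int:
    # One aggregate query instead of loading every run row into Python,
    # so there is no limit-based truncation risk. Column name is assumed.
    stmt = (
        select(func.coalesce(func.sum(RunRow.total_tokens), 0))
        .where(RunRow.thread_id == thread_id)
        .group_by(RunRow.thread_id)
    )
    result = await session.execute(stmt)
    return result.scalar() or 0
```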

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: annotate DbRunEventStore.put() as low-frequency path

Add docstring clarifying that put() opens a per-call transaction with
FOR UPDATE and should only be used for infrequent writes (currently
just the initial human_message event). High-throughput callers should
use put_batch() instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(threads): fall back to Store search when ThreadMetaRepository is unavailable

When database.backend=memory (default) or no SQL session factory is
configured, search_threads now queries the LangGraph Store instead of
returning 503. Returns empty list if neither Store nor repo is available.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(persistence): introduce ThreadMetaStore ABC for backend-agnostic thread metadata

Add ThreadMetaStore abstract base class with create/get/search/update/delete
interface. ThreadMetaRepository (SQL) now inherits from it. New
MemoryThreadMetaStore wraps LangGraph BaseStore for memory-mode deployments.

deps.py now always provides a non-None thread_meta_repo, eliminating all
`if thread_meta_repo is not None` guards in services.py, worker.py, and
routers/threads.py. search_threads no longer needs a Store fallback branch.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(history): read messages from checkpointer instead of RunEventStore

The /history endpoint now reads messages directly from the
checkpointer's channel_values (the authoritative source) instead of
querying RunEventStore.list_messages(). The RunEventStore API is
preserved for other consumers.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address new Copilot review comments

- feedback.py: validate thread_id/run_id before deleting feedback
- jsonl.py: add path traversal protection with ID validation
- run_repo.py: parse `before` to datetime for PostgreSQL compat
- thread_meta_repo.py: fix pagination when metadata filter is active
- database_config.py: use resolve_path for sqlite_dir consistency

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Implement skill self-evolution and skill_manage flow (#1874)

* chore: ignore .worktrees directory

* Add skill_manage self-evolution flow

* Fix CI regressions for skill_manage

* Address PR review feedback for skill evolution

* fix(skill-evolution): preserve history on delete

* fix(skill-evolution): tighten scanner fallbacks

* docs: add skill_manage e2e evidence screenshot

* fix(skill-manage): avoid blocking fs ops in session runtime

---------

Co-authored-by: Willem Jiang <willem.jiang@gmail.com>

* fix(config): resolve sqlite_dir relative to CWD, not Paths.base_dir

resolve_path() resolves relative to Paths.base_dir (.deer-flow),
which double-nested the path to .deer-flow/.deer-flow/data/app.db.
Use Path.resolve() (CWD-relative) instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Feature/feishu receive file (#1608)

* feat(feishu): add channel file materialization hook for inbound messages

- Introduce Channel.receive_file(msg, thread_id) as a base method for file materialization; default is no-op.
- Implement FeishuChannel.receive_file to download files/images from Feishu messages, save to sandbox, and inject virtual paths into msg.text.
- Update ChannelManager to call receive_file for any channel if msg.files is present, enabling downstream model access to user-uploaded files.
- No impact on Slack/Telegram or other channels (they inherit the default no-op).

* style(backend): format code with ruff for lint compliance

- Auto-formatted packages/harness/deerflow/agents/factory.py and tests/test_create_deerflow_agent.py using `ruff format`
- Ensured both files conform to project linting standards
- Fixes CI lint check failures caused by code style issues

* fix(feishu): handle file write operation asynchronously to prevent blocking

* fix(feishu): rename GetMessageResourceRequest to _GetMessageResourceRequest and remove redundant code

* test(feishu): add tests for receive_file method and placeholder replacement

* fix(manager): remove unnecessary type casting for channel retrieval

* fix(feishu): update logging messages to reflect resource handling instead of image

* fix(feishu): sanitize filename by replacing invalid characters in file uploads

* fix(feishu): improve filename sanitization and reorder image key handling in message processing

* fix(feishu): add thread lock to prevent filename conflicts during file downloads

* fix(test): correct bad merge in test_feishu_parser.py

* chore: run ruff and apply formatting cleanup
fix(feishu): preserve rich-text attachment order and improve fallback filename handling

* fix(docker): restore gateway env vars and fix langgraph empty arg issue (#1915)

Two production docker-compose.yaml bugs prevent `make up` from working:

1. Gateway missing DEER_FLOW_CONFIG_PATH and DEER_FLOW_EXTENSIONS_CONFIG_PATH
   environment overrides. Added in fb2d99f (#1836) but accidentally reverted
   by ca2fb95 (#1847). Without them, gateway reads host paths from .env via
   env_file, causing FileNotFoundError inside the container.

2. Langgraph command fails when LANGGRAPH_ALLOW_BLOCKING is unset (default).
   Empty $${allow_blocking} inserts a bare space between flags, causing
   ' --no-reload' to be parsed as unexpected extra argument. Fix by building
   args string first and conditionally appending --allow-blocking.

Co-authored-by: cooper <cooperfu@tencent.com>

* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities (#1904)

* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities

Fix `<button>` inside `<a>` invalid HTML in artifact components and add
missing `noopener,noreferrer` to `window.open` calls to prevent reverse
tabnabbing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(frontend): address Copilot review on tabnabbing and double-tab-open

Remove redundant parent onClick on web_fetch ChainOfThoughtStep to
prevent opening two tabs on link click, and explicitly null out
window.opener after window.open() for defensive tabnabbing hardening.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(persistence): organize entities into per-entity directories

Restructure the persistence layer from horizontal "models/ + repositories/"
split into vertical entity-aligned directories. Each entity (thread_meta,
run, feedback) now owns its ORM model, abstract interface (where applicable),
and concrete implementations under a single directory with an aggregating
__init__.py for one-line imports.

Layout:
  persistence/thread_meta/{base,model,sql,memory}.py
  persistence/run/{model,sql}.py
  persistence/feedback/{model,sql}.py

models/__init__.py is kept as a facade so Alembic autogenerate continues to
discover all ORM tables via Base.metadata. RunEventRow remains under
models/run_event.py because its storage implementation lives in
runtime/events/store/db.py and has no matching repository directory.

The repositories/ directory is removed entirely. All call sites in
gateway/deps.py and tests are updated to import from the new entity
packages, e.g.:

    from deerflow.persistence.thread_meta import ThreadMetaRepository
    from deerflow.persistence.run import RunRepository
    from deerflow.persistence.feedback import FeedbackRepository

Full test suite passes (1690 passed, 14 skipped).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(gateway): sync thread rename and delete through ThreadMetaStore

The POST /threads/{id}/state endpoint previously synced title changes
only to the LangGraph Store via _store_upsert. In sqlite mode the search
endpoint reads from the ThreadMetaRepository SQL table, so renames never
appeared in /threads/search until the next agent run completed (worker.py
syncs title from checkpoint to thread_meta in its finally block).

Likewise the DELETE /threads/{id} endpoint cleaned up the filesystem,
Store, and checkpointer but left the threads_meta row orphaned in sqlite,
so deleted threads kept appearing in /threads/search.

Fix both endpoints by routing through the ThreadMetaStore abstraction
which already has the correct sqlite/memory implementations wired up by
deps.py. The rename path now calls update_display_name() and the delete
path calls delete() — both work uniformly across backends.

Verified end-to-end with curl in gateway mode against sqlite backend.
Existing test suite (1690 passed) and focused router/repo tests pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): route all thread metadata access through ThreadMetaStore

Following the rename/delete bug fix in PR1, migrate the remaining direct
LangGraph Store reads/writes in the threads router and services to the
ThreadMetaStore abstraction so that the sqlite and memory backends behave
identically and the legacy dual-write paths can be removed.

Migrated endpoints (threads.py):
- create_thread: idempotency check + write now use thread_meta_repo.get/create
  instead of dual-writing the LangGraph Store and the SQL row.
- get_thread: reads from thread_meta_repo.get; the checkpoint-only fallback
  for legacy threads is preserved.
- patch_thread: replaced _store_get/_store_put with thread_meta_repo.update_metadata.
- delete_thread_data: dropped the legacy store.adelete; thread_meta_repo.delete
  already covers it.

Removed dead code (services.py):
- _upsert_thread_in_store — redundant with the immediately following
  thread_meta_repo.create() call.
- _sync_thread_title_after_run — worker.py's finally block already syncs
  the title via thread_meta_repo.update_display_name() after each run.

Removed dead code (threads.py):
- _store_get / _store_put / _store_upsert helpers (no remaining callers).
- THREADS_NS constant.
- get_store import (router no longer touches the LangGraph Store directly).

New abstract method:
- ThreadMetaStore.update_metadata(thread_id, metadata) merges metadata into
  the thread's metadata field. Implemented in both ThreadMetaRepository (SQL,
  read-modify-write inside one session) and MemoryThreadMetaStore. Three new
  unit tests cover merge / empty / nonexistent behaviour.

Net change: -134 lines. Full test suite: 1693 passed, 14 skipped.
Verified end-to-end with curl in gateway mode against sqlite backend
(create / patch / get / rename / search / delete).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: JilongSun <965640067@qq.com>
Co-authored-by: jie <49781832+stan-fu@users.noreply.github.com>
Co-authored-by: cooper <cooperfu@tencent.com>
Co-authored-by: yangzheli <43645580+yangzheli@users.noreply.github.com>
2026-04-07 11:53:52 +08:00
lulusiyuyu f0dd8cb0d2 fix(subagents): add cooperative cancellation for subagent threads (#1873)
* fix(subagents): add cooperative cancellation for subagent threads

Subagent tasks run inside ThreadPoolExecutor threads with their own
event loop (asyncio.run). When a user clicks stop, RunManager cancels
the parent asyncio.Task, but Future.cancel() cannot terminate a running
thread and asyncio.Event does not propagate across event loops. This
causes subagent threads to keep executing (writing files, calling LLMs)
even after the user explicitly stops the run.

Fix: add a threading.Event (cancel_event) to SubagentResult and check
it cooperatively in _aexecute()'s astream iteration loop. On cancel,
request_cancel_background_task() sets the event, and the thread exits
at the next iteration boundary.

Changes:
- executor.py: Add cancel_event field to SubagentResult, check it in
  _aexecute loop, set it on timeout, add request_cancel_background_task
- task_tool.py: Call request_cancel_background_task on CancelledError
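
A condensed sketch of the cooperative check described in the Changes list; names mirror the message, but details are illustrative.

```python
import threading
from dataclasses import dataclass, field

@dataclass
class SubagentResult:
    status: str = "RUNNING"
    cancel_event: threading.Event = field(default_factory=threading.Event)

async def _aexecute(agent, inputs, result: SubagentResult) -> None:
    # Pre-check so an already-signalled cancel is honoured before streaming.
    if result.cancel_event.is_set():
        result.status = "CANCELLED"
        return
    async for _chunk in agent.astream(inputs):
        # threading.Event is visible across threads and event loops, unlike
        # asyncio.Event, so a stop request set by the parent loop is seen
        # here at the next iteration boundary.
        if result.cancel_event.is_set():
            result.status = "CANCELLED"
            return
    result.status = "SUCCEEDED"
```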

* fix(subagents): guard cancel status and add pre-check before astream

- Only overwrite status to FAILED when still RUNNING, preserving
  TIMED_OUT set by the scheduler thread.
- Add cancel_event pre-check before entering the astream loop so
  cancellation is detected immediately when already signalled.

* fix(subagents): guard status updates with lock to prevent race condition

Wrap the check-and-set on result.status in _aexecute with
_background_tasks_lock so the timeout handler in execute_async
cannot interleave between the read and write.

* fix(subagents): add dedicated CANCELLED status for user cancellation

Introduce SubagentStatus.CANCELLED to distinguish user-initiated
cancellation from actual execution failures.  Update _aexecute,
task_tool polling, cleanup terminal-status sets, and test fixtures.

* test(subagents): add cancellation tests and fix timeout regression test

- Add dedicated TestCooperativeCancellation test class with 6 tests:
  - Pre-set cancel_event prevents astream from starting
  - Mid-stream cancel_event returns CANCELLED immediately
  - request_cancel_background_task() sets cancel_event correctly
  - request_cancel on nonexistent task is a no-op
  - Real execute_async timeout does not overwrite CANCELLED (deterministic
    threading.Event sync, no wall-clock sleeps)
  - cleanup_background_task removes CANCELLED tasks

- Add task_tool cancellation coverage:
  - test_cancellation_calls_request_cancel: assert CancelledError path
    calls request_cancel_background_task(task_id)
  - test_task_tool_returns_cancelled_message: assert CANCELLED polling
    branch emits task_cancelled event and returns expected message

- Fix pre-existing test infrastructure issue: add deerflow.sandbox.security
  to _MOCKED_MODULE_NAMES (fixes ModuleNotFoundError for all executor tests)

- Add RUNNING guard to timeout handler in executor.py to prevent
  TIMED_OUT from overwriting CANCELLED status

- Add cooperative cancellation granularity comment documenting that
  cancellation is only detected at astream iteration boundaries

---------

Co-authored-by: lulusiyuyu <lulusiyuyu@users.noreply.github.com>
2026-04-07 11:12:25 +08:00
DanielWalnut 7643a46fca fix(skill): make skill prompt cache refresh nonblocking (#1924)
* fix: make skill prompt cache refresh nonblocking

* fix: harden skills prompt cache refresh

* chore: add timeout to skills cache warm-up
2026-04-07 10:50:34 +08:00
Markus Corazzione c4da0e8ca9 Move async SQLite mkdir off the event loop (#1921)
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
2026-04-07 10:47:20 +08:00
283 changed files with 31589 additions and 1325 deletions
+181
@@ -0,0 +1,181 @@
---
name: smoke-test
description: End-to-end smoke test skill for DeerFlow. Guides through: 1) Pulling latest code, 2) Docker OR Local installation and deployment (user preference, default to Local if Docker network issues), 3) Service availability verification, 4) Health check, 5) Final test report. Use when the user says "run smoke test", "smoke test deployment", "verify installation", "test service availability", "end-to-end test", or similar.
---
# DeerFlow Smoke Test Skill
This skill guides the Agent through DeerFlow's full end-to-end smoke test workflow, including code updates, deployment (supporting both Docker and local installation modes), service availability verification, and health checks.
## Deployment Mode Selection
This skill supports two deployment modes:
- **Local installation mode** (recommended, especially when network issues occur) - Run all services directly on the local machine
- **Docker mode** - Run all services inside Docker containers
**Selection strategy**:
- If the user explicitly asks for Docker mode, use Docker
- If network issues occur (such as slow image pulls), automatically switch to local mode
- Default to local mode whenever possible
## Structure
```
smoke-test/
├── SKILL.md ← You are here - core workflow and logic
├── scripts/
│   ├── check_docker.sh ← Check the Docker environment
│   ├── check_local_env.sh ← Check local environment dependencies
│   ├── frontend_check.sh ← Frontend page smoke check
│   ├── pull_code.sh ← Pull the latest code
│   ├── deploy_docker.sh ← Docker deployment
│   ├── deploy_local.sh ← Local deployment
│   └── health_check.sh ← Service health check
├── references/
│   ├── SOP.md ← Standard operating procedure
│   └── troubleshooting.md ← Troubleshooting guide
└── templates/
    ├── report.local.template.md ← Local mode smoke test report template
    └── report.docker.template.md ← Docker mode smoke test report template
```
## Standard Operating Procedure (SOP)
### Phase 1: Code Update Check
1. **Confirm current directory** - Verify that the current working directory is the DeerFlow project root
2. **Check Git status** - See whether there are uncommitted changes
3. **Pull the latest code** - Use `git pull origin main` to get the latest updates
4. **Confirm code update** - Verify that the latest code was pulled successfully
### Phase 2: Deployment Mode Selection and Environment Check
**Choose deployment mode**:
- Ask for user preference, or choose automatically based on network conditions
- Default to local installation mode
**Local mode environment check**:
1. **Check Node.js version** - Requires 22+
2. **Check pnpm** - Package manager
3. **Check uv** - Python package manager
4. **Check nginx** - Reverse proxy
5. **Check required ports** - Confirm that ports 2026, 3000, 8001, and 2024 are not occupied
**Docker mode environment check** (if Docker is selected):
1. **Check whether Docker is installed** - Run `docker --version`
2. **Check Docker daemon status** - Run `docker info`
3. **Check Docker Compose availability** - Run `docker compose version`
4. **Check required ports** - Confirm that port 2026 is not occupied
### Phase 3: Configuration Preparation
1. **Check whether config.yaml exists**
- If it does not exist, run `make config` to generate it
- If it already exists, check whether it needs an upgrade with `make config-upgrade`
2. **Check the .env file**
- Verify that required environment variables are configured
- Especially model API keys such as `OPENAI_API_KEY`
### Phase 4: Deployment Execution
**Local mode deployment**:
1. **Check dependencies** - Run `make check`
2. **Install dependencies** - Run `make install`
3. **(Optional) Pre-pull the sandbox image** - If needed, run `make setup-sandbox`
4. **Start services** - Run `make dev-daemon` (background mode, recommended) or `make dev` (foreground mode)
5. **Wait for startup** - Give all services enough time to start completely (90-120 seconds recommended)
**Docker mode deployment** (if Docker is selected):
1. **Initialize Docker environment** - Run `make docker-init`
2. **Start Docker services** - Run `make docker-start`
3. **Wait for startup** - Give all containers enough time to start completely (60 seconds recommended)
### Phase 5: Service Health Check
**Local mode health check**:
1. **Check process status** - Confirm that LangGraph, Gateway, Frontend, and Nginx processes are all running
2. **Check frontend service** - Visit `http://localhost:2026` and verify that the page loads
3. **Check API Gateway** - Verify the `http://localhost:2026/health` endpoint
4. **Check LangGraph service** - Verify the availability of relevant endpoints
5. **Frontend route smoke check** - Run `bash .agent/skills/smoke-test/scripts/frontend_check.sh` to verify key routes under `/workspace`
**Docker mode health check** (when using Docker):
1. **Check container status** - Run `docker ps` and confirm that all containers are running
2. **Check frontend service** - Visit `http://localhost:2026` and verify that the page loads
3. **Check API Gateway** - Verify the `http://localhost:2026/health` endpoint
4. **Check LangGraph service** - Verify the availability of relevant endpoints
5. **Frontend route smoke check** - Run `bash .agent/skills/smoke-test/scripts/frontend_check.sh` to verify key routes under `/workspace`
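For reference, a minimal Python sketch of the availability probe that the health check performs, assuming the default port 2026; `health_check.sh` remains the authoritative script.
```python
import urllib.request

def check(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False

assert check("http://localhost:2026")         # frontend page loads
assert check("http://localhost:2026/health")  # API Gateway health endpoint
```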
### Optional Functional Verification
1. **List available models** - Verify that model configuration loads correctly
2. **List available skills** - Verify that the skill directory is mounted correctly
3. **Simple chat test** - Send a simple message to verify the end-to-end flow
### Phase 6: Generate Test Report
1. **Collect all test results** - Summarize execution status for each phase
2. **Record encountered issues** - If anything fails, record the error details
3. **Generate the final report** - Use the template that matches the selected deployment mode to create the complete test report, including overall conclusion, detailed key test cases, and explicit frontend page / route results
4. **Provide follow-up recommendations** - Offer suggestions based on the test results
## Execution Rules
- **Follow the sequence** - Execute strictly in the order described above
- **Idempotency** - Every step should be safe to repeat
- **Error handling** - If a step fails, stop and report the issue, then provide troubleshooting suggestions
- **Detailed logging** - Record the execution result and status of each step
- **User confirmation** - Ask for confirmation before potentially risky operations such as overwriting config
- **Mode preference** - Prefer local mode to avoid network-related issues
- **Template requirement** - The final report must use the matching template under `templates/`; do not output a free-form summary instead of the template-based report
- **Report clarity** - The execution summary must include the overall pass/fail conclusion plus per-case result explanations, and frontend smoke check results must be listed explicitly in the report
- **Optional phase handling** - If functional verification is not executed, do not present it as a separate skipped phase in the final report
## Known Acceptable Warnings
The following warnings can appear during smoke testing and do not block a successful result:
- Feishu/Lark SSL errors in Gateway logs (certificate verification failure) can be ignored if that channel is not enabled
- Warnings in LangGraph logs about missing methods in the custom checkpointer, such as `adelete_for_runs` or `aprune`, do not affect the core functionality
## Key Tools
Use the following tools during execution:
1. **bash** - Run shell commands
2. **present_file** - Show generated reports and important files
3. **task_tool** - Organize complex steps with subtasks when needed
## Success Criteria
Smoke test pass criteria (local mode):
- [x] Latest code is pulled successfully
- [x] Local environment check passes (Node.js 22+, pnpm, uv, nginx)
- [x] Configuration files are set up correctly
- [x] `make check` passes
- [x] `make install` completes successfully
- [x] `make dev` starts successfully
- [x] All service processes run normally
- [x] Frontend page is accessible
- [x] Frontend route smoke check passes (`/workspace` key routes)
- [x] API Gateway health check passes
- [x] Test report is generated completely
Smoke test pass criteria (Docker mode):
- [x] Latest code is pulled successfully
- [x] Docker environment check passes
- [x] Configuration files are set up correctly
- [x] `make docker-init` completes successfully
- [x] `make docker-start` completes successfully
- [x] All Docker containers run normally
- [x] Frontend page is accessible
- [x] Frontend route smoke check passes (`/workspace` key routes)
- [x] API Gateway health check passes
- [x] Test report is generated completely
## Read Reference Files
Before starting execution, read the following reference files:
1. `references/SOP.md` - Detailed step-by-step operating instructions
2. `references/troubleshooting.md` - Common issues and solutions
3. `templates/report.local.template.md` - Local mode test report template
4. `templates/report.docker.template.md` - Docker mode test report template
+452
@@ -0,0 +1,452 @@
# DeerFlow Smoke Test Standard Operating Procedure (SOP)
This document describes the detailed operating steps for each phase of the DeerFlow smoke test.
## Phase 1: Code Update Check
### 1.1 Confirm Current Directory
**Objective**: Verify that the current working directory is the DeerFlow project root.
**Steps**:
1. Run `pwd` to view the current working directory
2. Check whether the directory contains the following files/directories:
- `Makefile`
- `backend/`
- `frontend/`
- `config.example.yaml`
**Success Criteria**: The current directory contains all of the files/directories listed above.
---
### 1.2 Check Git Status
**Objective**: Check whether there are uncommitted changes.
**Steps**:
1. Run `git status`
2. Check whether the output includes "Changes not staged for commit" or "Untracked files"
**Notes**:
- If there are uncommitted changes, recommend that the user commit or stash them first to avoid conflicts while pulling
- If the user confirms that they want to continue, this step can be skipped
---
### 1.3 Pull the Latest Code
**Objective**: Fetch the latest code updates.
**Steps**:
1. Run `git fetch origin main`
2. Run `git pull origin main`
**Success Criteria**:
- The commands succeed without errors
- The output shows "Already up to date" or indicates that new commits were pulled successfully
---
### 1.4 Confirm Code Update
**Objective**: Verify that the latest code was pulled successfully.
**Steps**:
1. Run `git log -1 --oneline` to view the latest commit
2. Record the commit hash and message
---
## Phase 2: Deployment Mode Selection and Environment Check
### 2.1 Choose Deployment Mode
**Objective**: Decide whether to use local mode or Docker mode.
**Decision Flow**:
1. Prefer local mode first to avoid network-related issues
2. If the user explicitly requests Docker, use Docker
3. If Docker network issues occur, switch to local mode automatically
---
### 2.2 Local Mode Environment Check
**Objective**: Verify that local development environment dependencies are satisfied.
#### 2.2.1 Check Node.js Version
**Steps**:
1. If nvm is used, run `nvm use 22` to switch to Node 22+
2. Run `node --version`
**Success Criteria**: Version >= 22.x
**Failure Handling**:
- If the version is too low, ask the user to install/switch Node.js with nvm:
```bash
nvm install 22
nvm use 22
```
- Or install it from the official website: https://nodejs.org/
---
#### 2.2.2 Check pnpm
**Steps**:
1. Run `pnpm --version`
**Success Criteria**: The command returns pnpm version information.
**Failure Handling**:
- If pnpm is not installed, ask the user to install it with `npm install -g pnpm`
---
#### 2.2.3 Check uv
**Steps**:
1. Run `uv --version`
**Success Criteria**: The command returns uv version information.
**Failure Handling**:
- If uv is not installed, ask the user to install uv
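For reference, the same install commands used later in the troubleshooting guide:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh   # official installer
brew install uv                                   # macOS alternative
```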
---
#### 2.2.4 Check nginx
**Steps**:
1. Run `nginx -v`
**Success Criteria**: The command returns nginx version information.
**Failure Handling**:
- macOS: install with Homebrew using `brew install nginx`
- Linux: install using the system package manager
---
#### 2.2.5 Check Required Ports
**Steps**:
1. Run the following commands to check ports:
```bash
lsof -i :2026 # Main port
lsof -i :3000 # Frontend
lsof -i :8001 # Gateway
lsof -i :2024 # LangGraph
```
**Success Criteria**: All ports are free, or they are occupied only by DeerFlow-related processes.
**Failure Handling**:
- If a port is occupied, ask the user to stop the related process
---
### 2.3 Docker Mode Environment Check (If Docker Is Selected)
#### 2.3.1 Check Whether Docker Is Installed
**Steps**:
1. Run `docker --version`
**Success Criteria**: The command returns Docker version information, such as "Docker version 24.x.x".
---
#### 2.3.2 Check Docker Daemon Status
**Steps**:
1. Run `docker info`
**Success Criteria**: The command runs successfully and shows Docker system information.
**Failure Handling**:
- If it fails, ask the user to start Docker Desktop or the Docker service
---
#### 2.3.3 Check Docker Compose Availability
**Steps**:
1. Run `docker compose version`
**Success Criteria**: The command returns Docker Compose version information.
---
#### 2.3.4 Check Required Ports
**Steps**:
1. Run `lsof -i :2026` (macOS/Linux) or `netstat -ano | findstr :2026` (Windows)
**Success Criteria**: Port 2026 is free, or it is occupied only by a DeerFlow-related process.
**Failure Handling**:
- If the port is occupied by another process, ask the user to stop that process or change the configuration
---
## Phase 3: Configuration Preparation
### 3.1 Check config.yaml
**Steps**:
1. Check whether `config.yaml` exists
2. If it does not exist, run `make config`
3. If it already exists, consider running `make config-upgrade` to merge new fields
**Validation**:
- Check whether at least one model is configured in config.yaml
- Check whether the model configuration references the correct environment variables
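A rough scripted version of this validation (a sketch; the grep pattern assumes the indentation used in `config.example.yaml`):
```bash
test -f config.yaml || make config
# At least one model entry should appear under the models: section
grep -nE '^[[:space:]]*- name:' config.yaml | head -5
```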
---
### 3.2 Check the .env File
**Steps**:
1. Check whether the `.env` file exists
2. If it does not exist, copy it from `.env.example`
3. Check whether the following environment variables are configured:
- `OPENAI_API_KEY` (or other model API keys)
- Other required settings
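These steps can be scripted briefly (a sketch; `OPENAI_API_KEY` is just the most common key name from `.env.example`):
```bash
[ -f .env ] || cp .env.example .env
# Confirm at least one model API key is set
grep -E '^[A-Z_]*API_KEY=' .env
```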
---
## Phase 4: Deployment Execution
### 4.1 Local Mode Deployment
#### 4.1.1 Check Dependencies
**Steps**:
1. Run `make check`
**Description**: This command validates all required tools (Node.js 22+, pnpm, uv, nginx).
---
#### 4.1.2 Install Dependencies
**Steps**:
1. Run `make install`
**Description**: This command installs both backend and frontend dependencies.
**Notes**:
- This step may take some time
- If network issues cause failures, try using a closer or mirrored package registry
---
#### 4.1.3 (Optional) Pre-pull the Sandbox Image
**Steps**:
1. If Docker / Container sandbox is used, run `make setup-sandbox`
**Description**: This step is optional and not needed for local sandbox mode.
---
#### 4.1.4 Start Services
**Steps**:
1. Run `make dev-daemon` (background mode)
**Description**: This command starts all services (LangGraph, Gateway, Frontend, Nginx).
**Notes**:
- `make dev` runs in the foreground and stops with Ctrl+C
- `make dev-daemon` runs in the background
- Use `make stop` to stop services
---
#### 4.1.5 Wait for Services to Start
**Steps**:
1. Wait 90-120 seconds for all services to start completely
2. You can monitor startup progress by checking these log files:
- `logs/langgraph.log`
- `logs/gateway.log`
- `logs/frontend.log`
- `logs/nginx.log`
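Instead of a fixed wait, you can poll the frontend until it responds (a minimal sketch; port 2026 is the nginx entry point used throughout this SOP):
```bash
# Poll for up to ~120 seconds (24 attempts, 5 seconds apart)
for i in $(seq 1 24); do
  curl -sf -o /dev/null http://localhost:2026 && { echo "Frontend is up"; break; }
  sleep 5
done
```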
---
### 4.2 Docker Mode Deployment (If Docker Is Selected)
#### 4.2.1 Initialize the Docker Environment
**Steps**:
1. Run `make docker-init`
**Description**: This command pulls the sandbox image if needed.
---
#### 4.2.2 Start Docker Services
**Steps**:
1. Run `make docker-start`
**Description**: This command builds and starts all required Docker containers.
---
#### 4.2.3 Wait for Services to Start
**Steps**:
1. Wait 60-90 seconds for all services to start completely
2. You can run `make docker-logs` to monitor startup progress
---
## Phase 5: Service Health Check
### 5.1 Local Mode Health Check
#### 5.1.1 Check Process Status
**Steps**:
1. Run the following command to check processes:
```bash
ps aux | grep -E "(langgraph|uvicorn|next|nginx)" | grep -v grep
```
**Success Criteria**: Confirm that the following processes are running:
- LangGraph (`langgraph dev`)
- Gateway (`uvicorn app.gateway.app:app`)
- Frontend (`next dev` or `next start`)
- Nginx (`nginx`)
---
#### 5.1.2 Check Frontend Service
**Steps**:
1. Use curl or a browser to visit `http://localhost:2026`
2. Verify that the page loads normally
**Example curl command**:
```bash
curl -I http://localhost:2026
```
**Success Criteria**: Returns an HTTP 200 status code.
---
#### 5.1.3 Check API Gateway
**Steps**:
1. Visit `http://localhost:2026/health`
**Example curl command**:
```bash
curl http://localhost:2026/health
```
**Success Criteria**: Returns health status JSON.
---
#### 5.1.4 Check LangGraph Service
**Steps**:
1. Send a request to the LangGraph dev server on port 2024 and verify that it responds
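A minimal check (port 2024 is the LangGraph dev port used elsewhere in this SOP; per the health-check script, any HTTP status code, even 404, indicates the server is listening):
```bash
curl -I http://localhost:2024/
```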
---
### 5.2 Docker Mode Health Check (When Using Docker)
#### 5.2.1 Check Container Status
**Steps**:
1. Run `docker ps`
2. Confirm that the following containers are running:
- `deer-flow-nginx`
- `deer-flow-frontend`
- `deer-flow-gateway`
- `deer-flow-langgraph` (if not in gateway mode)
---
#### 5.2.2 Check Frontend Service
**Steps**:
1. Use curl or a browser to visit `http://localhost:2026`
2. Verify that the page loads normally
**Example curl command**:
```bash
curl -I http://localhost:2026
```
**Success Criteria**: Returns an HTTP 200 status code.
---
#### 5.2.3 Check API Gateway
**Steps**:
1. Visit `http://localhost:2026/health`
**Example curl command**:
```bash
curl http://localhost:2026/health
```
**Success Criteria**: Returns health status JSON.
---
#### 5.2.4 Check LangGraph Service
**Steps**:
1. Check that the LangGraph container is running and that its recent logs show no startup errors
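Because the LangGraph container may not publish its port to the host, the container logs are usually the quickest signal (the container name comes from 5.2.1 and exists only when not running in gateway mode):
```bash
docker logs --tail 50 deer-flow-langgraph
```
If port 2024 is published to the host, `curl -I http://localhost:2024/` works the same way as in local mode.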
---
## Optional Functional Verification
### List Available Models
**Steps**: Open the model selector in the UI (or query the Gateway API) and confirm that the models defined in `config.yaml` are listed.
---
### List Available Skills
**Steps**: Open the skill list in the UI (or query the Gateway API) and confirm that the available skills are listed.
---
### Simple Chat Test
**Steps**: Start a new chat, send a short message, and confirm that a response is streamed back without errors.
---
## Phase 6: Generate the Test Report
### 6.1 Collect Test Results
Summarize the execution status of each phase and record successful and failed items.
### 6.2 Record Issues
If anything fails, record detailed error information.
### 6.3 Generate the Report
Use the template to create a complete test report.
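One way to start from the matching template (the output filename is illustrative):
```bash
cp templates/report.local.template.md smoke-test-report.md   # or report.docker.template.md for Docker mode
# Then fill in the {{...}} placeholders with the collected results
```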
### 6.4 Provide Recommendations
Provide follow-up recommendations based on the test results.
@@ -0,0 +1,612 @@
# Troubleshooting Guide
This document lists common issues encountered during DeerFlow smoke testing and how to resolve them.
## Code Update Issues
### Issue: `git pull` Fails with a Merge Conflict Warning
**Symptoms**:
```
error: Your local changes to the following files would be overwritten by merge
```
**Solutions**:
1. Option A: Commit local changes first
```bash
git add .
git commit -m "Save local changes"
git pull origin main
```
2. Option B: Stash local changes
```bash
git stash
git pull origin main
git stash pop # Restore changes later if needed
```
3. Option C: Discard local changes (use with caution)
```bash
git reset --hard HEAD
git pull origin main
```
---
## Local Mode Environment Issues
### Issue: Node.js Version Is Too Old
**Symptoms**:
```
Node.js version is too old. Requires 22+, got x.x.x
```
**Solutions**:
1. Install or upgrade Node.js with nvm:
```bash
nvm install 22
nvm use 22
```
2. Or download and install it from the official website: https://nodejs.org/
3. Verify the version:
```bash
node --version
```
---
### Issue: pnpm Is Not Installed
**Symptoms**:
```
command not found: pnpm
```
**Solutions**:
1. Install pnpm with npm:
```bash
npm install -g pnpm
```
2. Or use the official installation script:
```bash
curl -fsSL https://get.pnpm.io/install.sh | sh -
```
3. Verify the installation:
```bash
pnpm --version
```
---
### Issue: uv Is Not Installed
**Symptoms**:
```
command not found: uv
```
**Solutions**:
1. Use the official installation script:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
2. macOS users can also install it with Homebrew:
```bash
brew install uv
```
3. Verify the installation:
```bash
uv --version
```
---
### Issue: nginx Is Not Installed
**Symptoms**:
```
command not found: nginx
```
**Solutions**:
1. macOS (Homebrew):
```bash
brew install nginx
```
2. Ubuntu/Debian:
```bash
sudo apt update
sudo apt install nginx
```
3. CentOS/RHEL:
```bash
sudo yum install nginx
```
4. Verify the installation:
```bash
nginx -v
```
---
### Issue: Port Is Already in Use
**Symptoms**:
```
Error: listen EADDRINUSE: address already in use :::2026
```
**Solutions**:
1. Find the process using the port:
```bash
lsof -i :2026 # macOS/Linux
netstat -ano | findstr :2026 # Windows
```
2. Stop that process:
```bash
kill -9 <PID> # macOS/Linux
taskkill /PID <PID> /F # Windows
```
3. Or stop DeerFlow services first:
```bash
make stop
```
---
## Local Mode Dependency Installation Issues
### Issue: `make install` Fails Due to Network Timeout
**Symptoms**:
Network timeouts or connection failures occur during dependency installation.
**Solutions**:
1. Configure pnpm to use a mirror registry:
```bash
pnpm config set registry https://registry.npmmirror.com
```
2. Configure uv to use a mirror registry:
```bash
export UV_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple
```
3. Retry the installation:
```bash
make install
```
---
### Issue: Python Dependency Installation Fails
**Symptoms**:
Errors occur during `uv sync`.
**Solutions**:
1. Clean the uv cache:
```bash
cd backend
uv cache clean
```
2. Resync dependencies:
```bash
cd backend
uv sync
```
3. View detailed error logs:
```bash
cd backend
uv sync --verbose
```
---
### Issue: Frontend Dependency Installation Fails
**Symptoms**:
Errors occur during `pnpm install`.
**Solutions**:
1. Clean the pnpm cache:
```bash
cd frontend
pnpm store prune
```
2. Remove node_modules and the lock file:
```bash
cd frontend
rm -rf node_modules pnpm-lock.yaml
```
3. Reinstall:
```bash
cd frontend
pnpm install
```
---
## Local Mode Service Startup Issues
### Issue: Services Exit Immediately After Startup
**Symptoms**:
Processes exit quickly after running `make dev-daemon`.
**Solutions**:
1. Check log files:
```bash
tail -f logs/langgraph.log
tail -f logs/gateway.log
tail -f logs/frontend.log
tail -f logs/nginx.log
```
2. Check whether config.yaml is configured correctly
3. Check environment variables in the .env file
4. Confirm that required ports are not occupied
5. Stop all services and restart:
```bash
make stop
make dev-daemon
```
---
### Issue: Nginx Fails to Start Because Temp Directories Do Not Exist
**Symptoms**:
```
nginx: [emerg] mkdir() "/opt/homebrew/var/run/nginx/client_body_temp" failed (2: No such file or directory)
```
**Solutions**:
Add local temp directory configuration to `docker/nginx/nginx.local.conf` so nginx uses the repository's temp directory.
Add the following at the beginning of the `http` block:
```nginx
client_body_temp_path temp/client_body_temp;
proxy_temp_path temp/proxy_temp;
fastcgi_temp_path temp/fastcgi_temp;
uwsgi_temp_path temp/uwsgi_temp;
scgi_temp_path temp/scgi_temp;
```
Note: The `temp/` directory under the repository root is created automatically by `make dev` or `make dev-daemon`.
---
### Issue: Nginx Fails to Start (General)
**Symptoms**:
The nginx process fails to start or reports an error.
**Solutions**:
1. Check the nginx configuration:
```bash
nginx -t -c docker/nginx/nginx.local.conf -p .
```
2. Check nginx logs:
```bash
tail -f logs/nginx.log
```
3. Ensure no other nginx process is running:
```bash
ps aux | grep nginx
```
4. If needed, stop existing nginx processes:
```bash
pkill -9 nginx
```
---
### Issue: Frontend Compilation Fails
**Symptoms**:
Compilation errors appear in `frontend.log`.
**Solutions**:
1. Check frontend logs:
```bash
tail -f logs/frontend.log
```
2. Check whether Node.js version is 22+
3. Reinstall frontend dependencies:
```bash
cd frontend
rm -rf node_modules .next
pnpm install
```
4. Restart services:
```bash
make stop
make dev-daemon
```
---
### Issue: Gateway Fails to Start
**Symptoms**:
Errors appear in `gateway.log`.
**Solutions**:
1. Check gateway logs:
```bash
tail -f logs/gateway.log
```
2. Check whether config.yaml exists and has valid formatting
3. Check whether Python dependencies are complete:
```bash
cd backend
uv sync
```
4. Confirm that the LangGraph service is running normally (if not in gateway mode)
---
### Issue: LangGraph Fails to Start
**Symptoms**:
Errors appear in `langgraph.log`.
**Solutions**:
1. Check LangGraph logs:
```bash
tail -f logs/langgraph.log
```
2. Check config.yaml
3. Check whether Python dependencies are complete
4. Confirm that port 2024 is not occupied
---
## Docker-Related Issues
### Issue: Docker Commands Cannot Run
**Symptoms**:
```
Cannot connect to the Docker daemon
```
**Solutions**:
1. Confirm that Docker Desktop is running
2. macOS: check whether the Docker icon appears in the top menu bar
3. Linux: run `sudo systemctl start docker`
4. Run `docker info` again to verify
---
### Issue: `make docker-init` Fails to Pull the Image
**Symptoms**:
```
Error pulling image: connection refused
```
**Solutions**:
1. Check network connectivity
2. Configure a Docker image mirror if needed
3. Check whether a proxy is required
4. If the image still cannot be pulled, switch to local installation mode instead (recommended)
---
## Configuration File Issues
### Issue: config.yaml Is Missing or Invalid
**Symptoms**:
```
Error: could not read config.yaml
```
**Solutions**:
1. Regenerate the configuration file:
```bash
make config
```
2. Check YAML syntax:
- Make sure indentation is correct (use 2 spaces)
- Make sure there are no tab characters
- Check that there is a space after each colon
3. Use a YAML validation tool to check the format
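If no dedicated tool is at hand, the backend's Python environment can parse the file directly (a sketch; it assumes PyYAML is available there, which it normally is):
```bash
cd backend && uv run python -c "import yaml; yaml.safe_load(open('../config.yaml'))" && echo "config.yaml parses cleanly"
```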
---
### Issue: Model API Key Is Not Configured
**Symptoms**:
After services start, API requests fail with authentication errors.
**Solutions**:
1. Edit the .env file and add the API key:
```bash
OPENAI_API_KEY=your-actual-api-key-here
```
2. Restart services (local mode):
```bash
make stop
make dev-daemon
```
3. Restart services (Docker mode):
```bash
make docker-stop
make docker-start
```
4. Confirm that the model configuration in config.yaml references the environment variable correctly
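A quick way to check both sides of the mapping (the key name is illustrative; match it to your own configuration):
```bash
# The key must be present in .env ...
grep -n 'OPENAI_API_KEY' .env
# ... and referenced from config.yaml as $OPENAI_API_KEY rather than a hard-coded value
grep -n 'api_key' config.yaml
```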
---
## Service Health Check Issues
### Issue: Frontend Page Is Not Accessible
**Symptoms**:
The browser shows a connection failure when visiting http://localhost:2026.
**Solutions** (local mode):
1. Confirm that the nginx process is running:
```bash
ps aux | grep nginx
```
2. Check nginx logs:
```bash
tail -f logs/nginx.log
```
3. Check firewall settings
**Solutions** (Docker mode):
1. Confirm that the nginx container is running:
```bash
docker ps | grep nginx
```
2. Check nginx logs:
```bash
cd docker && docker compose -p deer-flow-dev -f docker-compose-dev.yaml logs nginx
```
3. Check firewall settings
---
### Issue: API Gateway Health Check Fails
**Symptoms**:
Accessing `/health` returns an error or times out.
**Solutions** (local mode):
1. Check gateway logs:
```bash
tail -f logs/gateway.log
```
2. Confirm that config.yaml exists and has valid formatting
3. Check whether Python dependencies are complete
4. Confirm that the LangGraph service is running normally
**Solutions** (Docker mode):
1. Check gateway container logs:
```bash
make docker-logs-gateway
```
2. Confirm that config.yaml is mounted correctly
3. Check whether Python dependencies are complete
4. Confirm that the LangGraph service is running normally
---
## Common Diagnostic Commands
### Local Mode Diagnostics
#### View All Service Processes
```bash
ps aux | grep -E "(langgraph|uvicorn|next|nginx)" | grep -v grep
```
#### View Service Logs
```bash
# View all logs
tail -f logs/*.log
# View specific service logs
tail -f logs/langgraph.log
tail -f logs/gateway.log
tail -f logs/frontend.log
tail -f logs/nginx.log
```
#### Stop All Services
```bash
make stop
```
#### Fully Reset the Local Environment
```bash
make stop
make clean
make config
make install
make dev-daemon
```
---
### Docker Mode Diagnostics
#### View All Container Status
```bash
docker ps -a
```
#### View Container Resource Usage
```bash
docker stats
```
#### Enter a Container for Debugging
```bash
docker exec -it deer-flow-gateway sh
```
#### Clean Up All DeerFlow-Related Containers and Images
```bash
make docker-stop
cd docker && docker compose -p deer-flow-dev -f docker-compose-dev.yaml down -v
```
#### Fully Reset the Docker Environment
```bash
make docker-stop
make clean
make config
make docker-init
make docker-start
```
---
## Get More Help
If the solutions above do not resolve the issue:
1. Check the GitHub issues for the project: https://github.com/bytedance/deer-flow/issues
2. Review the project documentation: README.md and the `backend/docs/` directory
3. Open a new issue and include detailed error logs
+80
@@ -0,0 +1,80 @@
#!/usr/bin/env bash
set -e
echo "=========================================="
echo " Checking Docker Environment"
echo "=========================================="
echo ""
# Check whether Docker is installed
if command -v docker >/dev/null 2>&1; then
echo "✓ Docker is installed"
docker --version
else
echo "✗ Docker is not installed"
exit 1
fi
echo ""
# Check the Docker daemon
if docker info >/dev/null 2>&1; then
echo "✓ Docker daemon is running normally"
else
echo "✗ Docker daemon is not running"
echo " Please start Docker Desktop or the Docker service"
exit 1
fi
echo ""
# Check Docker Compose
if docker compose version >/dev/null 2>&1; then
echo "✓ Docker Compose is available"
docker compose version
else
echo "✗ Docker Compose is not available"
exit 1
fi
echo ""
# Check port 2026
if ! command -v lsof >/dev/null 2>&1; then
echo "✗ lsof is required to check whether port 2026 is available"
exit 1
fi
port_2026_usage="$(lsof -nP -iTCP:2026 -sTCP:LISTEN 2>/dev/null || true)"
if [ -n "$port_2026_usage" ]; then
echo "⚠ Port 2026 is already in use"
echo " Occupying process:"
echo "$port_2026_usage"
deerflow_process_found=0
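# Read each PID from the lsof output (the header row is skipped by the awk below)
# and mark the check as passed if the owning process looks like a DeerFlow component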
while IFS= read -r pid; do
if [ -z "$pid" ]; then
continue
fi
process_command="$(ps -p "$pid" -o command= 2>/dev/null || true)"
case "$process_command" in
*[Dd]eer[Ff]low*|*[Dd]eerflow*|*[Nn]ginx*deerflow*|*deerflow/*[Nn]ginx*)
deerflow_process_found=1
;;
esac
done <<EOF
$(printf '%s\n' "$port_2026_usage" | awk 'NR > 1 {print $2}')
EOF
if [ "$deerflow_process_found" -eq 1 ]; then
echo "✓ Port 2026 is occupied by DeerFlow"
else
echo "✗ Port 2026 must be free before starting DeerFlow"
exit 1
fi
else
echo "✓ Port 2026 is available"
fi
echo ""
echo "=========================================="
echo " Docker Environment Check Complete"
echo "=========================================="
+93
@@ -0,0 +1,93 @@
#!/usr/bin/env bash
set -e
echo "=========================================="
echo " Checking Local Development Environment"
echo "=========================================="
echo ""
all_passed=true
# Check Node.js
echo "1. Checking Node.js..."
if command -v node >/dev/null 2>&1; then
NODE_VERSION=$(node --version | sed 's/v//')
NODE_MAJOR=$(echo "$NODE_VERSION" | cut -d. -f1)
if [ "$NODE_MAJOR" -ge 22 ]; then
echo "✓ Node.js is installed (version: $NODE_VERSION)"
else
echo "✗ Node.js version is too old (current: $NODE_VERSION, required: 22+)"
all_passed=false
fi
else
echo "✗ Node.js is not installed"
all_passed=false
fi
echo ""
# Check pnpm
echo "2. Checking pnpm..."
if command -v pnpm >/dev/null 2>&1; then
echo "✓ pnpm is installed (version: $(pnpm --version))"
else
echo "✗ pnpm is not installed"
echo " Install command: npm install -g pnpm"
all_passed=false
fi
echo ""
# Check uv
echo "3. Checking uv..."
if command -v uv >/dev/null 2>&1; then
echo "✓ uv is installed (version: $(uv --version))"
else
echo "✗ uv is not installed"
all_passed=false
fi
echo ""
# Check nginx
echo "4. Checking nginx..."
if command -v nginx >/dev/null 2>&1; then
echo "✓ nginx is installed (version: $(nginx -v 2>&1))"
else
echo "✗ nginx is not installed"
echo " macOS: brew install nginx"
echo " Linux: install it with the system package manager"
all_passed=false
fi
echo ""
# Check ports
echo "5. Checking ports..."
if ! command -v lsof >/dev/null 2>&1; then
echo "✗ lsof is not installed, so port availability cannot be verified"
echo " Install lsof and rerun this check"
all_passed=false
else
for port in 2026 3000 8001 2024; do
if lsof -i :$port >/dev/null 2>&1; then
echo "⚠ Port $port is already in use:"
lsof -i :$port | head -2
all_passed=false
else
echo "✓ Port $port is available"
fi
done
fi
echo ""
# Summary
echo "=========================================="
echo " Environment Check Summary"
echo "=========================================="
echo ""
if [ "$all_passed" = true ]; then
echo "✅ All environment checks passed!"
echo ""
echo "Next step: run make install to install dependencies"
exit 0
else
echo "❌ Some checks failed. Please fix the issues above first"
exit 1
fi
+65
View File
@@ -0,0 +1,65 @@
#!/usr/bin/env bash
set -e
echo "=========================================="
echo " Docker Deployment"
echo "=========================================="
echo ""
# Check config.yaml
if [ ! -f "config.yaml" ]; then
echo "config.yaml does not exist. Generating it..."
make config
echo ""
echo "⚠ Please edit config.yaml to configure your models and API keys"
echo " Then run this script again"
exit 1
else
echo "✓ config.yaml exists"
fi
echo ""
# Check the .env file
if [ ! -f ".env" ]; then
echo ".env does not exist. Copying it from the example..."
if [ -f ".env.example" ]; then
cp .env.example .env
echo "✓ Created the .env file"
else
echo "⚠ .env.example does not exist. Please create the .env file manually"
fi
else
echo "✓ .env file exists"
fi
echo ""
# Check the frontend .env file
if [ ! -f "frontend/.env" ]; then
echo "frontend/.env does not exist. Copying it from the example..."
if [ -f "frontend/.env.example" ]; then
cp frontend/.env.example frontend/.env
echo "✓ Created the frontend/.env file"
else
echo "⚠ frontend/.env.example does not exist. Please create frontend/.env manually"
fi
else
echo "✓ frontend/.env file exists"
fi
echo ""
# Initialize the Docker environment
echo "Initializing the Docker environment..."
make docker-init
echo ""
# Start Docker services
echo "Starting Docker services..."
make docker-start
echo ""
echo "=========================================="
echo " Deployment Complete"
echo "=========================================="
echo ""
echo "🌐 Access URL: http://localhost:2026"
echo "📋 View logs: make docker-logs"
echo "🛑 Stop services: make docker-stop"
+63
@@ -0,0 +1,63 @@
#!/usr/bin/env bash
set -e
echo "=========================================="
echo " Local Mode Deployment"
echo "=========================================="
echo ""
# Check config.yaml
if [ ! -f "config.yaml" ]; then
echo "config.yaml does not exist. Generating it..."
make config
echo ""
echo "⚠ Please edit config.yaml to configure your models and API keys"
echo " Then run this script again"
exit 1
else
echo "✓ config.yaml exists"
fi
echo ""
# Check the .env file
if [ ! -f ".env" ]; then
echo ".env does not exist. Copying it from the example..."
if [ -f ".env.example" ]; then
cp .env.example .env
echo "✓ Created the .env file"
else
echo "⚠ .env.example does not exist. Please create the .env file manually"
fi
else
echo "✓ .env file exists"
fi
echo ""
# Check dependencies
echo "Checking dependencies..."
make check
echo ""
# Install dependencies
echo "Installing dependencies..."
make install
echo ""
# Start services
echo "Starting services (background mode)..."
make dev-daemon
echo ""
echo "=========================================="
echo " Deployment Complete"
echo "=========================================="
echo ""
echo "🌐 Access URL: http://localhost:2026"
echo "📋 View logs:"
echo " - logs/langgraph.log"
echo " - logs/gateway.log"
echo " - logs/frontend.log"
echo " - logs/nginx.log"
echo "🛑 Stop services: make stop"
echo ""
echo "Please wait 90-120 seconds for all services to start completely, then run the health check"
@@ -0,0 +1,70 @@
#!/usr/bin/env bash
set +e
echo "=========================================="
echo " Frontend Page Smoke Check"
echo "=========================================="
echo ""
BASE_URL="${BASE_URL:-http://localhost:2026}"
DOC_PATH="${DOC_PATH:-/en/docs}"
all_passed=true
check_status() {
local name="$1"
local url="$2"
local expected_re="$3"
local status
status="$(curl -s -o /dev/null -w "%{http_code}" -L "$url")"
if echo "$status" | grep -Eq "$expected_re"; then
echo "$name ($url) -> $status"
else
echo "$name ($url) -> $status (expected: $expected_re)"
all_passed=false
fi
}
check_final_url() {
local name="$1"
local url="$2"
local expected_path_re="$3"
local effective
effective="$(curl -s -o /dev/null -w "%{url_effective}" -L "$url")"
if echo "$effective" | grep -Eq "$expected_path_re"; then
echo "$name redirect target -> $effective"
else
echo "$name redirect target -> $effective (expected path: $expected_path_re)"
all_passed=false
fi
}
echo "1. Checking entry pages..."
check_status "Landing page" "${BASE_URL}/" "200"
check_status "Workspace redirect" "${BASE_URL}/workspace" "200|301|302|307|308"
check_final_url "Workspace redirect" "${BASE_URL}/workspace" "/workspace/chats/"
echo ""
echo "2. Checking key workspace routes..."
check_status "New chat page" "${BASE_URL}/workspace/chats/new" "200"
check_status "Chats list page" "${BASE_URL}/workspace/chats" "200"
check_status "Agents gallery page" "${BASE_URL}/workspace/agents" "200"
echo ""
echo "3. Checking docs route (optional)..."
check_status "Docs page" "${BASE_URL}${DOC_PATH}" "200|404"
echo ""
echo "=========================================="
echo " Frontend Smoke Check Summary"
echo "=========================================="
echo ""
if [ "$all_passed" = true ]; then
echo "✅ Frontend smoke checks passed!"
exit 0
else
echo "❌ Frontend smoke checks failed"
exit 1
fi
+125
@@ -0,0 +1,125 @@
#!/usr/bin/env bash
set +e
echo "=========================================="
echo " Service Health Check"
echo "=========================================="
echo ""
all_passed=true
mode="${SMOKE_TEST_MODE:-auto}"
summary_hint="make logs"
print_step() {
echo "$1"
}
check_http_status() {
local name="$1"
local url="$2"
local expected_re="$3"
local status
status="$(curl -s -o /dev/null -w "%{http_code}" "$url" 2>/dev/null)"
if echo "$status" | grep -Eq "$expected_re"; then
echo "$name is accessible ($url -> $status)"
else
echo "$name is not accessible ($url -> ${status:-000})"
all_passed=false
fi
}
check_listen_port() {
local name="$1"
local port="$2"
if lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1; then
echo "$name is listening on port $port"
else
echo "$name is not listening on port $port"
all_passed=false
fi
}
docker_available() {
command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1
}
detect_mode() {
case "$mode" in
local|docker)
echo "$mode"
return
;;
esac
if docker_available && docker ps --format "{{.Names}}" | grep -q "deer-flow"; then
echo "docker"
else
echo "local"
fi
}
mode="$(detect_mode)"
echo "Deployment mode: $mode"
echo ""
if [ "$mode" = "docker" ]; then
summary_hint="make docker-logs"
print_step "1. Checking container status..."
if docker ps --format "{{.Names}}" | grep -q "deer-flow"; then
echo "✓ Containers are running:"
docker ps --format " - {{.Names}} ({{.Status}})"
else
echo "✗ No DeerFlow-related containers are running"
all_passed=false
fi
else
summary_hint="logs/{langgraph,gateway,frontend,nginx}.log"
print_step "1. Checking local service ports..."
check_listen_port "Nginx" 2026
check_listen_port "Frontend" 3000
check_listen_port "Gateway" 8001
check_listen_port "LangGraph" 2024
fi
echo ""
echo "2. Waiting for services to fully start (30 seconds)..."
sleep 30
echo ""
echo "3. Checking frontend service..."
check_http_status "Frontend service" "http://localhost:2026" "200|301|302|307|308"
echo ""
echo "4. Checking API Gateway..."
health_response=$(curl -s http://localhost:2026/health 2>/dev/null)
if [ $? -eq 0 ] && [ -n "$health_response" ]; then
echo "✓ API Gateway health check passed"
echo " Response: $health_response"
else
echo "✗ API Gateway health check failed"
all_passed=false
fi
echo ""
echo "5. Checking LangGraph service..."
check_http_status "LangGraph service" "http://localhost:2024/" "200|301|302|307|308|404"
echo ""
echo "=========================================="
echo " Health Check Summary"
echo "=========================================="
echo ""
if [ "$all_passed" = true ]; then
echo "✅ All checks passed!"
echo ""
echo "🌐 Application URL: http://localhost:2026"
exit 0
else
echo "❌ Some checks failed"
echo ""
echo "Please review: $summary_hint"
exit 1
fi
+49
@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -e
echo "=========================================="
echo " Pulling the Latest Code"
echo "=========================================="
echo ""
# Check whether the current directory is a Git repository
if [ ! -d ".git" ]; then
echo "✗ The current directory is not a Git repository"
exit 1
fi
# Check Git status
echo "Checking Git status..."
if git status --porcelain | grep -q .; then
echo "⚠ Uncommitted changes detected:"
git status --short
echo ""
echo "Please commit or stash your changes before continuing"
echo "Options:"
echo " 1. git add . && git commit -m 'Save changes'"
echo " 2. git stash (stash changes and restore them later)"
echo " 3. git reset --hard HEAD (discard local changes - use with caution)"
exit 1
else
echo "✓ Working tree is clean"
fi
echo ""
# Fetch remote updates
echo "Fetching remote updates..."
git fetch origin main
echo ""
# Pull the latest code
echo "Pulling the latest code..."
git pull origin main
echo ""
# Show the latest commit
echo "Latest commit:"
git log -1 --oneline
echo ""
echo "=========================================="
echo " Code Update Complete"
echo "=========================================="
@@ -0,0 +1,180 @@
# DeerFlow Smoke Test Report
**Test Date**: {{test_date}}
**Test Environment**: {{test_environment}}
**Deployment Mode**: Docker
**Test Version**: {{git_commit}}
---
## Execution Summary
| Metric | Status |
|------|------|
| Total Test Phases | 6 |
| Passed Phases | {{passed_stages}} |
| Failed Phases | {{failed_stages}} |
| Overall Conclusion | **{{overall_status}}** |
### Key Test Cases
| Case | Result | Details |
|------|--------|---------|
| Code update check | {{case_code_update}} | {{case_code_update_details}} |
| Environment check | {{case_env_check}} | {{case_env_check_details}} |
| Configuration preparation | {{case_config_prep}} | {{case_config_prep_details}} |
| Deployment | {{case_deploy}} | {{case_deploy_details}} |
| Health check | {{case_health_check}} | {{case_health_check_details}} |
| Frontend routes | {{case_frontend_routes_overall}} | {{case_frontend_routes_details}} |
---
## Detailed Test Results
### Phase 1: Code Update Check
- [x] Confirm current directory - {{status_dir_check}}
- [x] Check Git status - {{status_git_status}}
- [x] Pull latest code - {{status_git_pull}}
- [x] Confirm code update - {{status_git_verify}}
**Phase Status**: {{stage1_status}}
---
### Phase 2: Docker Environment Check
- [x] Docker version - {{status_docker_version}}
- [x] Docker daemon - {{status_docker_daemon}}
- [x] Docker Compose - {{status_docker_compose}}
- [x] Port check - {{status_port_check}}
**Phase Status**: {{stage2_status}}
---
### Phase 3: Configuration Preparation
- [x] config.yaml - {{status_config_yaml}}
- [x] .env file - {{status_env_file}}
- [x] Model configuration - {{status_model_config}}
**Phase Status**: {{stage3_status}}
---
### Phase 4: Docker Deployment
- [x] docker-init - {{status_docker_init}}
- [x] docker-start - {{status_docker_start}}
- [x] Service startup wait - {{status_wait_startup}}
**Phase Status**: {{stage4_status}}
---
### Phase 5: Service Health Check
- [x] Container status - {{status_containers}}
- [x] Frontend service - {{status_frontend}}
- [x] API Gateway - {{status_api_gateway}}
- [x] LangGraph service - {{status_langgraph}}
**Phase Status**: {{stage5_status}}
---
### Frontend Routes Smoke Results
| Route | Status | Details |
|-------|--------|---------|
| Landing `/` | {{landing_status}} | {{landing_details}} |
| Workspace redirect `/workspace` | {{workspace_redirect_status}} | target {{workspace_redirect_target}} |
| New chat `/workspace/chats/new` | {{new_chat_status}} | {{new_chat_details}} |
| Chats list `/workspace/chats` | {{chats_list_status}} | {{chats_list_details}} |
| Agents gallery `/workspace/agents` | {{agents_gallery_status}} | {{agents_gallery_details}} |
| Docs `{{docs_path}}` | {{docs_status}} | {{docs_details}} |
**Summary**: {{frontend_routes_summary}}
---
### Phase 6: Test Report Generation
- [x] Result summary - {{status_summary}}
- [x] Issue log - {{status_issues}}
- [x] Report generation - {{status_report}}
**Phase Status**: {{stage6_status}}
---
## Issue Log
### Issue 1
**Description**: {{issue1_description}}
**Severity**: {{issue1_severity}}
**Solution**: {{issue1_solution}}
---
## Environment Information
### Docker Version
```text
{{docker_version_output}}
```
### Git Information
```text
Repository: {{git_repo}}
Branch: {{git_branch}}
Commit: {{git_commit}}
Commit Message: {{git_commit_message}}
```
### Configuration Summary
- config.yaml exists: {{config_exists}}
- .env file exists: {{env_exists}}
- Number of configured models: {{model_count}}
---
## Container Status
| Container Name | Status | Uptime |
|----------|------|----------|
| deer-flow-nginx | {{nginx_status}} | {{nginx_uptime}} |
| deer-flow-frontend | {{frontend_status}} | {{frontend_uptime}} |
| deer-flow-gateway | {{gateway_status}} | {{gateway_uptime}} |
| deer-flow-langgraph | {{langgraph_status}} | {{langgraph_uptime}} |
---
## Recommendations and Next Steps
### If the Test Passes
1. [ ] Visit http://localhost:2026 to start using DeerFlow
2. [ ] Configure your preferred model if it is not configured yet
3. [ ] Explore available skills
4. [ ] Refer to the documentation to learn more features
### If the Test Fails
1. [ ] Review references/troubleshooting.md for common solutions
2. [ ] Check Docker logs: `make docker-logs`
3. [ ] Verify configuration file format and content
4. [ ] If needed, fully reset the environment: `make clean && make config && make docker-init && make docker-start`
---
## Appendix
### Full Logs
{{full_logs}}
### Tester
{{tester_name}}
---
*Report generated at: {{report_time}}*
@@ -0,0 +1,185 @@
# DeerFlow Smoke Test Report
**Test Date**: {{test_date}}
**Test Environment**: {{test_environment}}
**Deployment Mode**: Local
**Test Version**: {{git_commit}}
---
## Execution Summary
| Metric | Status |
|------|------|
| Total Test Phases | 6 |
| Passed Phases | {{passed_stages}} |
| Failed Phases | {{failed_stages}} |
| Overall Conclusion | **{{overall_status}}** |
### Key Test Cases
| Case | Result | Details |
|------|--------|---------|
| Code update check | {{case_code_update}} | {{case_code_update_details}} |
| Environment check | {{case_env_check}} | {{case_env_check_details}} |
| Configuration preparation | {{case_config_prep}} | {{case_config_prep_details}} |
| Deployment | {{case_deploy}} | {{case_deploy_details}} |
| Health check | {{case_health_check}} | {{case_health_check_details}} |
| Frontend routes | {{case_frontend_routes_overall}} | {{case_frontend_routes_details}} |
---
## Detailed Test Results
### Phase 1: Code Update Check
- [x] Confirm current directory - {{status_dir_check}}
- [x] Check Git status - {{status_git_status}}
- [x] Pull latest code - {{status_git_pull}}
- [x] Confirm code update - {{status_git_verify}}
**Phase Status**: {{stage1_status}}
---
### Phase 2: Local Environment Check
- [x] Node.js version - {{status_node_version}}
- [x] pnpm - {{status_pnpm}}
- [x] uv - {{status_uv}}
- [x] nginx - {{status_nginx}}
- [x] Port check - {{status_port_check}}
**Phase Status**: {{stage2_status}}
---
### Phase 3: Configuration Preparation
- [x] config.yaml - {{status_config_yaml}}
- [x] .env file - {{status_env_file}}
- [x] Model configuration - {{status_model_config}}
**Phase Status**: {{stage3_status}}
---
### Phase 4: Local Deployment
- [x] make check - {{status_make_check}}
- [x] make install - {{status_make_install}}
- [x] make dev-daemon / make dev - {{status_local_start}}
- [x] Service startup wait - {{status_wait_startup}}
**Phase Status**: {{stage4_status}}
---
### Phase 5: Service Health Check
- [x] Process status - {{status_processes}}
- [x] Frontend service - {{status_frontend}}
- [x] API Gateway - {{status_api_gateway}}
- [x] LangGraph service - {{status_langgraph}}
**Phase Status**: {{stage5_status}}
---
### Frontend Routes Smoke Results
| Route | Status | Details |
|-------|--------|---------|
| Landing `/` | {{landing_status}} | {{landing_details}} |
| Workspace redirect `/workspace` | {{workspace_redirect_status}} | target {{workspace_redirect_target}} |
| New chat `/workspace/chats/new` | {{new_chat_status}} | {{new_chat_details}} |
| Chats list `/workspace/chats` | {{chats_list_status}} | {{chats_list_details}} |
| Agents gallery `/workspace/agents` | {{agents_gallery_status}} | {{agents_gallery_details}} |
| Docs `{{docs_path}}` | {{docs_status}} | {{docs_details}} |
**Summary**: {{frontend_routes_summary}}
---
### Phase 6: Test Report Generation
- [x] Result summary - {{status_summary}}
- [x] Issue log - {{status_issues}}
- [x] Report generation - {{status_report}}
**Phase Status**: {{stage6_status}}
---
## Issue Log
### Issue 1
**Description**: {{issue1_description}}
**Severity**: {{issue1_severity}}
**Solution**: {{issue1_solution}}
---
## Environment Information
### Local Dependency Versions
```text
Node.js: {{node_version_output}}
pnpm: {{pnpm_version_output}}
uv: {{uv_version_output}}
nginx: {{nginx_version_output}}
```
### Git Information
```text
Repository: {{git_repo}}
Branch: {{git_branch}}
Commit: {{git_commit}}
Commit Message: {{git_commit_message}}
```
### Configuration Summary
- config.yaml exists: {{config_exists}}
- .env file exists: {{env_exists}}
- Number of configured models: {{model_count}}
---
## Local Service Status
| Service | Status | Endpoint |
|---------|--------|----------|
| Nginx | {{nginx_status}} | {{nginx_endpoint}} |
| Frontend | {{frontend_status}} | {{frontend_endpoint}} |
| Gateway | {{gateway_status}} | {{gateway_endpoint}} |
| LangGraph | {{langgraph_status}} | {{langgraph_endpoint}} |
---
## Recommendations and Next Steps
### If the Test Passes
1. [ ] Visit http://localhost:2026 to start using DeerFlow
2. [ ] Configure your preferred model if it is not configured yet
3. [ ] Explore available skills
4. [ ] Refer to the documentation to learn more features
### If the Test Fails
1. [ ] Review references/troubleshooting.md for common solutions
2. [ ] Check local logs: `logs/{langgraph,gateway,frontend,nginx}.log`
3. [ ] Verify configuration file format and content
4. [ ] If needed, fully reset the environment: `make stop && make clean && make install && make dev-daemon`
---
## Appendix
### Full Logs
{{full_logs}}
### Tester
{{tester_name}}
---
*Report generated at: {{report_time}}*
+4
@@ -33,5 +33,9 @@ INFOQUEST_API_KEY=your-infoquest-api-key
# GitHub API Token
# GITHUB_TOKEN=your-github-token
# Database (only needed when config.yaml has database.backend: postgres)
# DATABASE_URL=postgresql://deerflow:password@localhost:5432/deerflow
#
# WECOM_BOT_ID=your-wecom-bot-id
# WECOM_BOT_SECRET=your-wecom-bot-secret
+128
@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
willem.jiang@gmail.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
+12
@@ -77,6 +77,18 @@ export UV_INDEX_URL=https://pypi.org/simple
export NPM_REGISTRY=https://registry.npmjs.org
```
#### Recommended host resources
Use these as practical starting points for development and review environments:
| Scenario | Starting point | Recommended | Notes |
|---------|-----------|------------|-------|
| `make dev` on one machine | 4 vCPU, 8 GB RAM | 8 vCPU, 16 GB RAM | Best when DeerFlow uses hosted model APIs. |
| `make docker-start` review environment | 4 vCPU, 8 GB RAM | 8 vCPU, 16 GB RAM | Docker image builds and sandbox containers need extra headroom. |
| Shared Linux test server | 8 vCPU, 16 GB RAM | 16 vCPU, 32 GB RAM | Prefer this for heavier multi-agent runs or multiple reviewers. |
`2 vCPU / 4 GB` environments often fail to start reliably or become unresponsive under normal DeerFlow workloads.
#### Linux: Docker daemon permission denied
If `make docker-init`, `make docker-start`, or `make docker-stop` fails on Linux with an error like below, your current user likely does not have permission to access the Docker daemon socket:
+34 -53
@@ -1,19 +1,25 @@
# DeerFlow - Unified Development Environment
.PHONY: help config config-upgrade check install dev dev-pro dev-daemon dev-daemon-pro start start-pro start-daemon start-daemon-pro stop up up-pro down clean docker-init docker-start docker-start-pro docker-stop docker-logs docker-logs-frontend docker-logs-gateway
.PHONY: help config config-upgrade check install setup doctor dev dev-pro dev-daemon dev-daemon-pro start start-pro start-daemon start-daemon-pro stop up up-pro down clean docker-init docker-start docker-start-pro docker-stop docker-logs docker-logs-frontend docker-logs-gateway
BASH ?= bash
BACKEND_UV_RUN = cd backend && uv run
# Detect OS for Windows compatibility
ifeq ($(OS),Windows_NT)
SHELL := cmd.exe
PYTHON ?= python
# Run repo shell scripts through Git Bash when Make is launched from cmd.exe / PowerShell.
RUN_WITH_GIT_BASH = call scripts\run-with-git-bash.cmd
else
PYTHON ?= python3
RUN_WITH_GIT_BASH =
endif
help:
@echo "DeerFlow Development Commands:"
@echo " make setup - Interactive setup wizard (recommended for new users)"
@echo " make doctor - Check configuration and system requirements"
@echo " make config - Generate local config files (aborts if config already exists)"
@echo " make config-upgrade - Merge new fields from config.example.yaml into config.yaml"
@echo " make check - Check if all required tools are installed"
@@ -44,11 +50,18 @@ help:
@echo " make docker-logs-frontend - View Docker frontend logs"
@echo " make docker-logs-gateway - View Docker gateway logs"
## Setup & Diagnosis
setup:
@$(BACKEND_UV_RUN) python ../scripts/setup_wizard.py
doctor:
@$(BACKEND_UV_RUN) python ../scripts/doctor.py
config:
@$(PYTHON) ./scripts/configure.py
config-upgrade:
@./scripts/config-upgrade.sh
@$(RUN_WITH_GIT_BASH) ./scripts/config-upgrade.sh
# Check required tools
check:
@@ -106,78 +119,46 @@ setup-sandbox:
# Start all services in development mode (with hot-reloading)
dev:
@$(PYTHON) ./scripts/check.py
ifeq ($(OS),Windows_NT)
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --dev
else
@./scripts/serve.sh --dev
endif
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev
# Start all services in dev + Gateway mode (experimental: agent runtime embedded in Gateway)
dev-pro:
@$(PYTHON) ./scripts/check.py
ifeq ($(OS),Windows_NT)
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --dev --gateway
else
@./scripts/serve.sh --dev --gateway
endif
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev --gateway
# Start all services in production mode (with optimizations)
start:
@$(PYTHON) ./scripts/check.py
ifeq ($(OS),Windows_NT)
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --prod
else
@./scripts/serve.sh --prod
endif
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod
# Start all services in prod + Gateway mode (experimental)
start-pro:
@$(PYTHON) ./scripts/check.py
ifeq ($(OS),Windows_NT)
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --prod --gateway
else
@./scripts/serve.sh --prod --gateway
endif
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod --gateway
# Start all services in daemon mode (background)
dev-daemon:
@$(PYTHON) ./scripts/check.py
ifeq ($(OS),Windows_NT)
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --dev --daemon
else
@./scripts/serve.sh --dev --daemon
endif
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev --daemon
# Start daemon + Gateway mode (experimental)
dev-daemon-pro:
@$(PYTHON) ./scripts/check.py
ifeq ($(OS),Windows_NT)
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --dev --gateway --daemon
else
@./scripts/serve.sh --dev --gateway --daemon
endif
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --dev --gateway --daemon
# Start prod services in daemon mode (background)
start-daemon:
@$(PYTHON) ./scripts/check.py
ifeq ($(OS),Windows_NT)
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --prod --daemon
else
@./scripts/serve.sh --prod --daemon
endif
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod --daemon
# Start prod daemon + Gateway mode (experimental)
start-daemon-pro:
@$(PYTHON) ./scripts/check.py
ifeq ($(OS),Windows_NT)
@call scripts\run-with-git-bash.cmd ./scripts/serve.sh --prod --gateway --daemon
else
@./scripts/serve.sh --prod --gateway --daemon
endif
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --prod --gateway --daemon
# Stop all services
stop:
@./scripts/serve.sh --stop
@$(RUN_WITH_GIT_BASH) ./scripts/serve.sh --stop
# Clean up
clean: stop
@@ -193,29 +174,29 @@ clean: stop
# Initialize Docker containers and install dependencies
docker-init:
@./scripts/docker.sh init
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh init
# Start Docker development environment
docker-start:
@./scripts/docker.sh start
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh start
# Start Docker in Gateway mode (experimental)
docker-start-pro:
@./scripts/docker.sh start --gateway
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh start --gateway
# Stop Docker development environment
docker-stop:
@./scripts/docker.sh stop
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh stop
# View Docker development logs
docker-logs:
@./scripts/docker.sh logs
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh logs
# View Docker development logs
docker-logs-frontend:
@./scripts/docker.sh logs --frontend
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh logs --frontend
docker-logs-gateway:
@./scripts/docker.sh logs --gateway
@$(RUN_WITH_GIT_BASH) ./scripts/docker.sh logs --gateway
# ==========================================
# Production Docker Commands
@@ -223,12 +204,12 @@ docker-logs-gateway:
# Build and start production services
up:
@./scripts/deploy.sh
@$(RUN_WITH_GIT_BASH) ./scripts/deploy.sh
# Build and start production services in Gateway mode
up-pro:
@./scripts/deploy.sh --gateway
@$(RUN_WITH_GIT_BASH) ./scripts/deploy.sh --gateway
# Stop and remove production containers
down:
@./scripts/deploy.sh down
@$(RUN_WITH_GIT_BASH) ./scripts/deploy.sh down
+64 -45
@@ -53,6 +53,7 @@ DeerFlow has newly integrated the intelligent search and crawling toolset indepe
- [Quick Start](#quick-start)
- [Configuration](#configuration)
- [Running the Application](#running-the-application)
- [Deployment Sizing](#deployment-sizing)
- [Option 1: Docker (Recommended)](#option-1-docker-recommended)
- [Option 2: Local Development](#option-2-local-development)
- [Advanced](#advanced)
@@ -103,35 +104,38 @@ That prompt is intended for coding agents. It tells the agent to clone the repo
cd deer-flow
```
2. **Generate local configuration files**
2. **Run the setup wizard**
From the project root directory (`deer-flow/`), run:
```bash
make config
make setup
```
This command creates local configuration files based on the provided example templates.
This launches an interactive wizard that guides you through choosing an LLM provider, optional web search, and execution/safety preferences such as sandbox mode, bash access, and file-write tools. It generates a minimal `config.yaml` and writes your keys to `.env`. Takes about 2 minutes.
3. **Configure your preferred model(s)**
The wizard also lets you configure an optional web search provider, or skip it for now.
Edit `config.yaml` and define at least one model:
Run `make doctor` at any time to verify your setup and get actionable fix hints.
> **Advanced / manual configuration**: If you prefer to edit `config.yaml` directly, run `make config` instead to copy the full template. See `config.example.yaml` for the complete reference including CLI-backed providers (Codex CLI, Claude Code OAuth), OpenRouter, Responses API, and more.
<details>
<summary>Manual model configuration examples</summary>
```yaml
models:
- name: gpt-4 # Internal identifier
display_name: GPT-4 # Human-readable name
use: langchain_openai:ChatOpenAI # LangChain class path
model: gpt-4 # Model identifier for API
api_key: $OPENAI_API_KEY # API key (recommended: use env var)
max_tokens: 4096 # Maximum tokens per request
temperature: 0.7 # Sampling temperature
- name: gpt-4o
display_name: GPT-4o
use: langchain_openai:ChatOpenAI
model: gpt-4o
api_key: $OPENAI_API_KEY
- name: openrouter-gemini-2.5-flash
display_name: Gemini 2.5 Flash (OpenRouter)
use: langchain_openai:ChatOpenAI
model: google/gemini-2.5-flash-preview
api_key: $OPENAI_API_KEY # OpenRouter still uses the OpenAI-compatible field name here
api_key: $OPENROUTER_API_KEY
base_url: https://openrouter.ai/api/v1
- name: gpt-5-responses
@@ -181,50 +185,39 @@ That prompt is intended for coding agents. It tells the agent to clone the repo
```
- Codex CLI reads `~/.codex/auth.json`
- The Codex Responses endpoint currently rejects `max_tokens` and `max_output_tokens`, so `CodexChatModel` does not expose a request-level token cap
- Claude Code accepts `CLAUDE_CODE_OAUTH_TOKEN`, `ANTHROPIC_AUTH_TOKEN`, `CLAUDE_CODE_OAUTH_TOKEN_FILE_DESCRIPTOR`, `CLAUDE_CODE_CREDENTIALS_PATH`, or plaintext `~/.claude/.credentials.json`
- ACP agent entries are separate from model providers. If you configure `acp_agents.codex`, point it at a Codex ACP adapter such as `npx -y @zed-industries/codex-acp`; the standard `codex` CLI binary is not ACP-compatible by itself
- On macOS, DeerFlow does not probe Keychain automatically. Export Claude Code auth explicitly if needed:
- Claude Code accepts `CLAUDE_CODE_OAUTH_TOKEN`, `ANTHROPIC_AUTH_TOKEN`, `CLAUDE_CODE_CREDENTIALS_PATH`, or `~/.claude/.credentials.json`
- ACP agent entries are separate from model providers — if you configure `acp_agents.codex`, point it at a Codex ACP adapter such as `npx -y @zed-industries/codex-acp`
- On macOS, export Claude Code auth explicitly if needed:
```bash
eval "$(python3 scripts/export_claude_code_oauth.py --print-export)"
```
4. **Set API keys for your configured model(s)**
Choose one of the following methods:
- Option A: Edit the `.env` file in the project root (Recommended)
API keys can also be set manually in `.env` (recommended) or exported in your shell:
```bash
TAVILY_API_KEY=your-tavily-api-key
OPENAI_API_KEY=your-openai-api-key
# OpenRouter also uses OPENAI_API_KEY when your config uses langchain_openai:ChatOpenAI + base_url.
# Add other provider keys as needed
INFOQUEST_API_KEY=your-infoquest-api-key
TAVILY_API_KEY=your-tavily-api-key
```
- Option B: Export environment variables in your shell
```bash
export OPENAI_API_KEY=your-openai-api-key
```
For CLI-backed providers:
- Codex CLI: `~/.codex/auth.json`
- Claude Code OAuth: explicit env/file handoff or `~/.claude/.credentials.json`
- Option C: Edit `config.yaml` directly (Not recommended for production)
```yaml
models:
- name: gpt-4
api_key: your-actual-api-key-here # Replace placeholder
```
</details>
### Running the Application
#### Deployment Sizing
Use the table below as a practical starting point when choosing how to run DeerFlow:
| Deployment target | Starting point | Recommended | Notes |
|---------|-----------|------------|-------|
| Local evaluation / `make dev` | 4 vCPU, 8 GB RAM, 20 GB free SSD | 8 vCPU, 16 GB RAM | Good for one developer or one light session with hosted model APIs. `2 vCPU / 4 GB` is usually not enough. |
| Docker development / `make docker-start` | 4 vCPU, 8 GB RAM, 25 GB free SSD | 8 vCPU, 16 GB RAM | Image builds, bind mounts, and sandbox containers need more headroom than pure local dev. |
| Long-running server / `make up` | 8 vCPU, 16 GB RAM, 40 GB free SSD | 16 vCPU, 32 GB RAM | Preferred for shared use, multi-agent runs, report generation, or heavier sandbox workloads. |
- These numbers cover DeerFlow itself. If you also host a local LLM, size that service separately.
- Linux plus Docker is the recommended deployment target for a persistent server. macOS and Windows are best treated as development or evaluation environments.
- If CPU or memory usage stays pinned, reduce concurrent runs first, then move to the next sizing tier.
#### Option 1: Docker (Recommended)
**Development** (hot-reload, source mounts):
@@ -261,7 +254,7 @@ See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed Docker development guide.
If you prefer running services locally:
Prerequisite: complete the "Configuration" steps above first (`make config` and model API keys). `make dev` requires a valid configuration file (defaults to `config.yaml` in the project root; can be overridden via `DEER_FLOW_CONFIG_PATH`).
Prerequisite: complete the "Configuration" steps above first (`make setup`). `make dev` requires a valid `config.yaml` in the project root (can be overridden via `DEER_FLOW_CONFIG_PATH`). Run `make doctor` to verify your setup before starting.
On Windows, run the local development flow from Git Bash. Native `cmd.exe` and PowerShell shells are not supported for the bash-based service scripts, and WSL is not guaranteed to work because some scripts rely on Git for Windows utilities such as `cygpath`.
1. **Check prerequisites**:
@@ -375,6 +368,7 @@ DeerFlow supports receiving tasks from messaging apps. Channels auto-start when
| Telegram | Bot API (long-polling) | Easy |
| Slack | Socket Mode | Moderate |
| Feishu / Lark | WebSocket | Moderate |
| WeChat | Tencent iLink (long-polling) | Moderate |
| WeCom | WebSocket | Moderate |
**Configuration in `config.yaml`:**
@@ -419,6 +413,19 @@ channels:
bot_token: $TELEGRAM_BOT_TOKEN
allowed_users: [] # empty = allow all
wechat:
enabled: false
bot_token: $WECHAT_BOT_TOKEN
ilink_bot_id: $WECHAT_ILINK_BOT_ID
qrcode_login_enabled: true # optional: allow first-time QR bootstrap when bot_token is absent
allowed_users: [] # empty = allow all
polling_timeout: 35
state_dir: ./.deer-flow/wechat/state
max_inbound_image_bytes: 20971520
max_outbound_image_bytes: 20971520
max_inbound_file_bytes: 52428800
max_outbound_file_bytes: 52428800
# Optional: per-channel / per-user session settings
session:
assistant_id: mobile-agent # custom agent names are also supported here
@@ -452,6 +459,10 @@ SLACK_APP_TOKEN=xapp-...
FEISHU_APP_ID=cli_xxxx
FEISHU_APP_SECRET=your_app_secret
# WeChat iLink
WECHAT_BOT_TOKEN=your_ilink_bot_token
WECHAT_ILINK_BOT_ID=your_ilink_bot_id
# WeCom
WECOM_BOT_ID=your_bot_id
WECOM_BOT_SECRET=your_bot_secret
@@ -477,6 +488,14 @@ WECOM_BOT_SECRET=your_bot_secret
3. Under **Events**, subscribe to `im.message.receive_v1` and select **Long Connection** mode.
4. Copy the App ID and App Secret. Set `FEISHU_APP_ID` and `FEISHU_APP_SECRET` in `.env` and enable the channel in `config.yaml`.
**WeChat Setup**
1. Enable the `wechat` channel in `config.yaml`.
2. Either set `WECHAT_BOT_TOKEN` in `.env`, or set `qrcode_login_enabled: true` for first-time QR bootstrap.
3. When `bot_token` is absent and QR bootstrap is enabled, watch backend logs for the QR content returned by iLink and complete the binding flow.
4. After the QR flow succeeds, DeerFlow persists the acquired token under `state_dir` for later restarts.
5. For Docker Compose deployments, keep `state_dir` on a persistent volume so the `get_updates_buf` cursor and saved auth state survive restarts.
**WeCom Setup**
1. Create a bot on the WeCom AI Bot platform and obtain the `bot_id` and `bot_secret`.
+15
@@ -40,6 +40,7 @@ https://github.com/user-attachments/assets/a8bcadc4-e040-4cf2-8fda-dd768b999c18
- [Quick Start](#快速开始)
- [Configuration](#配置)
- [Running the Application](#运行应用)
- [Deployment Recommendations and Resource Planning](#部署建议与资源规划)
- [Option 1: Docker (Recommended)](#方式一docker推荐)
- [Option 2: Local Development](#方式二本地开发)
- [Advanced Configuration](#进阶配置)
@@ -150,6 +151,20 @@ https://github.com/user-attachments/assets/a8bcadc4-e040-4cf2-8fda-dd768b999c18
### Running the Application
#### Deployment Recommendations and Resource Planning
Use the resource tiers below as a starting point when choosing how to run DeerFlow:
| Deployment scenario | Starting point | Recommended | Notes |
|---------|-----------|------------|-------|
| Local evaluation / `make dev` | 4 vCPU, 8 GB RAM, 20 GB free SSD | 8 vCPU, 16 GB RAM | Suited to a single developer or one light session with models served via external APIs. `2 vCPU / 4 GB` is usually not stable enough. |
| Docker development / `make docker-start` | 4 vCPU, 8 GB RAM, 25 GB free SSD | 8 vCPU, 16 GB RAM | Image builds, source mounts, and sandbox containers consume more resources than pure local mode. |
| Long-running server / `make up` | 8 vCPU, 16 GB RAM, 40 GB free SSD | 16 vCPU, 32 GB RAM | Better suited to shared environments, multi-agent tasks, report generation, or heavier sandbox workloads. |
- These figures cover DeerFlow itself; if you also host a local LLM on the same machine, reserve resources for the model service separately.
- Linux plus Docker is recommended for a continuously running service; macOS and Windows are better treated as development or evaluation environments.
- If CPU or memory stays pinned, first reduce concurrent sessions or heavy tasks, then consider moving up a sizing tier.
#### Option 1: Docker (Recommended)
**Development mode** (hot-reload, source mounts):
+7 -5
@@ -395,14 +395,16 @@ Both can be modified at runtime via Gateway API endpoints or `DeerFlowClient` me
**Architecture**: Imports the same `deerflow` modules that LangGraph Server and Gateway API use. Shares the same config files and data directories. No FastAPI dependency.
**Agent Conversation** (replaces LangGraph Server):
- `chat(message, thread_id)` — synchronous, returns final text
- `stream(message, thread_id)` — yields `StreamEvent` aligned with LangGraph SSE protocol:
- `"values"` — full state snapshot (title, messages, artifacts)
- `"messages-tuple"` — per-message update (AI text, tool calls, tool results)
- `"end"` — stream finished
- `chat(message, thread_id)` — synchronous, accumulates streaming deltas per message-id and returns the final AI text
- `stream(message, thread_id)` — subscribes to LangGraph `stream_mode=["values", "messages", "custom"]` and yields `StreamEvent` (see the consumer sketch after this list):
- `"values"` — full state snapshot (title, messages, artifacts); AI text already delivered via `messages` mode is **not** re-synthesized here to avoid duplicate deliveries
- `"messages-tuple"` — per-chunk update: for AI text this is a **delta** (concat per `id` to rebuild the full message); tool calls and tool results are emitted once each
- `"custom"` — forwarded from `StreamWriter`
- `"end"` — stream finished (carries cumulative `usage` counted once per message id)
- Agent created lazily via `create_agent()` + `_build_middlewares()`, same as `make_lead_agent`
- Supports `checkpointer` parameter for state persistence across turns
- `reset_agent()` forces agent recreation (e.g. after memory or skill changes)
- See [docs/STREAMING.md](docs/STREAMING.md) for the full design: why Gateway and DeerFlowClient are parallel paths, LangGraph's `stream_mode` semantics, the per-id dedup invariants, and regression testing strategy
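Below is a minimal consumer sketch of the per-id delta contract described above. The `deerflow.client` import path, the `StreamEvent` attribute names (`type`, `data`), and the exact message payload shape are assumptions for illustration, not the SDK's confirmed surface:
```python
from collections import defaultdict

# Hypothetical import path — adjust to wherever DeerFlowClient actually lives.
from deerflow.client import DeerFlowClient


def collect_reply(client: DeerFlowClient, text: str, thread_id: str) -> str:
    """Rebuild the final AI text by concatenating deltas per message id."""
    parts: dict[str, list[str]] = defaultdict(list)
    last_id = None
    for event in client.stream(text, thread_id=thread_id):
        if event.type == "messages-tuple":      # per-chunk update
            message, _meta = event.data          # assumed (message, metadata) shape
            if message.get("type") == "ai" and message.get("content"):
                parts[message["id"]].append(message["content"])
                last_id = message["id"]
        elif event.type == "end":                # carries cumulative usage
            break
    return "".join(parts[last_id]) if last_id else ""
```
The same per-id accumulation is what `chat()` is described as doing internally before returning the final text.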
**Gateway Equivalent Methods** (replaces Gateway API):
+6 -2
@@ -13,6 +13,9 @@ FROM python:3.12-slim-bookworm AS builder
ARG NODE_MAJOR=22
ARG APT_MIRROR
ARG UV_INDEX_URL
# Optional extras to install (e.g. "postgres" for PostgreSQL support)
# Usage: docker build --build-arg UV_EXTRAS=postgres ...
ARG UV_EXTRAS
# Optionally override apt mirror for restricted networks (e.g. APT_MIRROR=mirrors.aliyun.com)
RUN if [ -n "${APT_MIRROR}" ]; then \
@@ -43,8 +46,9 @@ WORKDIR /app
COPY backend ./backend
# Install dependencies with cache mount
# When UV_EXTRAS is set (e.g. "postgres"), installs optional dependencies.
RUN --mount=type=cache,target=/root/.cache/uv \
sh -c "cd backend && UV_INDEX_URL=${UV_INDEX_URL:-https://pypi.org/simple} uv sync"
sh -c "cd backend && UV_INDEX_URL=${UV_INDEX_URL:-https://pypi.org/simple} uv sync ${UV_EXTRAS:+--extra $UV_EXTRAS}"
# ── Stage 2: Dev ──────────────────────────────────────────────────────────────
# Retains compiler toolchain from builder so startup-time `uv sync` can build
@@ -84,4 +88,4 @@ COPY --from=builder /app/backend ./backend
EXPOSE 8001 2024
# Default command (can be overridden in docker-compose)
CMD ["sh", "-c", "cd backend && PYTHONPATH=. uv run uvicorn app.gateway.app:app --host 0.0.0.0 --port 8001"]
CMD ["sh", "-c", "cd backend && PYTHONPATH=. uv run --no-sync uvicorn app.gateway.app:app --host 0.0.0.0 --port 8001"]
+19
@@ -8,6 +8,7 @@ import mimetypes
import re
import time
from collections.abc import Awaitable, Callable, Mapping
from pathlib import Path
from typing import Any
import httpx
@@ -37,6 +38,7 @@ CHANNEL_CAPABILITIES = {
"feishu": {"supports_streaming": True},
"slack": {"supports_streaming": False},
"telegram": {"supports_streaming": False},
"wechat": {"supports_streaming": False},
"wecom": {"supports_streaming": True},
}
@@ -78,7 +80,24 @@ async def _read_wecom_inbound_file(file_info: dict[str, Any], client: httpx.Asyn
return decrypt_file(data, aeskey)
async def _read_wechat_inbound_file(file_info: dict[str, Any], client: httpx.AsyncClient) -> bytes | None:
raw_path = file_info.get("path")
if isinstance(raw_path, str) and raw_path.strip():
try:
return await asyncio.to_thread(Path(raw_path).read_bytes)
except OSError:
logger.exception("[Manager] failed to read WeChat inbound file from local path: %s", raw_path)
return None
full_url = file_info.get("full_url")
if isinstance(full_url, str) and full_url.strip():
return await _read_http_inbound_file({"url": full_url}, client)
return None
register_inbound_file_reader("wecom", _read_wecom_inbound_file)
register_inbound_file_reader("wechat", _read_wechat_inbound_file)
class InvalidChannelSessionConfigError(ValueError):
+1
@@ -18,6 +18,7 @@ _CHANNEL_REGISTRY: dict[str, str] = {
"feishu": "app.channels.feishu:FeishuChannel",
"slack": "app.channels.slack:SlackChannel",
"telegram": "app.channels.telegram:TelegramChannel",
"wechat": "app.channels.wechat:WechatChannel",
"wecom": "app.channels.wecom:WeComChannel",
}
File diff suppressed because it is too large.
+161 -1
@@ -1,16 +1,23 @@
import logging
import os
from collections.abc import AsyncGenerator
from contextlib import asynccontextmanager
from datetime import UTC
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from app.gateway.auth_middleware import AuthMiddleware
from app.gateway.config import get_gateway_config
from app.gateway.csrf_middleware import CSRFMiddleware
from app.gateway.deps import langgraph_runtime
from app.gateway.routers import (
agents,
artifacts,
assistants_compat,
auth,
channels,
feedback,
mcp,
memory,
models,
@@ -33,6 +40,125 @@ logging.basicConfig(
logger = logging.getLogger(__name__)
async def _ensure_admin_user(app: FastAPI) -> None:
"""Auto-create the admin user on first boot if no users exist.
After admin creation, migrate orphan threads from the LangGraph
store (metadata.owner_id unset) to the admin account. This is the
"no-auth → with-auth" upgrade path: users who ran DeerFlow without
authentication have existing LangGraph thread data that needs an
owner assigned.
No SQL persistence migration is needed: the four owner_id columns
(threads_meta, runs, run_events, feedback) only come into existence
alongside the auth module via create_all, so freshly created tables
never contain NULL-owner rows. "Existing persistence DB + new auth"
is not a supported upgrade path — fresh install or wipe-and-retry.
Multi-worker safe: relies on SQLite UNIQUE constraint to resolve
races during admin creation. Only the worker that successfully
creates/updates the admin prints the password; losers silently skip.
"""
import secrets
from app.gateway.auth.credential_file import write_initial_credentials
from app.gateway.deps import get_local_provider
def _announce_credentials(email: str, password: str, *, label: str, headline: str) -> None:
"""Write the password to a 0600 file and log the path (never the secret)."""
cred_path = write_initial_credentials(email, password, label=label)
logger.info("=" * 60)
logger.info(" %s", headline)
logger.info(" Credentials written to: %s (mode 0600)", cred_path)
logger.info(" Change it after login: Settings -> Account")
logger.info("=" * 60)
provider = get_local_provider()
user_count = await provider.count_users()
admin = None
if user_count == 0:
password = secrets.token_urlsafe(16)
try:
admin = await provider.create_user(email="admin@deerflow.dev", password=password, system_role="admin", needs_setup=True)
except ValueError:
return # Another worker already created the admin.
_announce_credentials(admin.email, password, label="initial", headline="Admin account created on first boot")
else:
# Admin exists but setup never completed — reset password so operator
# can always find it in the console without needing the CLI.
# Multi-worker guard: if admin was created less than 30s ago, another
# worker just created it and will print the password — skip reset.
admin = await provider.get_user_by_email("admin@deerflow.dev")
if admin and admin.needs_setup:
import time
age = time.time() - admin.created_at.replace(tzinfo=UTC).timestamp()
if age >= 30:
from app.gateway.auth.password import hash_password_async
password = secrets.token_urlsafe(16)
admin.password_hash = await hash_password_async(password)
admin.token_version += 1
await provider.update_user(admin)
_announce_credentials(admin.email, password, label="reset", headline="Admin account setup incomplete — password reset")
if admin is None:
return # Nothing to bind orphans to.
admin_id = str(admin.id)
# LangGraph store orphan migration — non-fatal.
# This covers the "no-auth → with-auth" upgrade path for users
# whose existing LangGraph thread metadata has no owner_id set.
store = getattr(app.state, "store", None)
if store is not None:
try:
migrated = await _migrate_orphaned_threads(store, admin_id)
if migrated:
logger.info("Migrated %d orphan LangGraph thread(s) to admin", migrated)
except Exception:
logger.exception("LangGraph thread migration failed (non-fatal)")
async def _iter_store_items(store, namespace, *, page_size: int = 500):
"""Paginated async iterator over a LangGraph store namespace.
Replaces the old hardcoded ``limit=1000`` call with a cursor-style
loop so that environments with more than one page of orphans do
not silently lose data. Terminates when a page is empty OR when a
short page arrives (indicating the last page).
"""
offset = 0
while True:
batch = await store.asearch(namespace, limit=page_size, offset=offset)
if not batch:
return
for item in batch:
yield item
if len(batch) < page_size:
return
offset += page_size
async def _migrate_orphaned_threads(store, admin_user_id: str) -> int:
"""Migrate LangGraph store threads with no owner_id to the given admin.
Uses cursor pagination so all orphans are migrated regardless of
count. Returns the number of rows migrated.
"""
migrated = 0
async for item in _iter_store_items(store, ("threads",)):
metadata = item.value.get("metadata", {})
if not metadata.get("owner_id"):
metadata["owner_id"] = admin_user_id
item.value["metadata"] = metadata
await store.aput(("threads",), item.key, item.value)
migrated += 1
return migrated
@asynccontextmanager
async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
"""Application lifespan handler."""
@@ -52,6 +178,10 @@ async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
async with langgraph_runtime(app):
logger.info("LangGraph runtime initialised")
# Ensure admin user exists (auto-create on first boot)
# Must run AFTER langgraph_runtime so app.state.store is available for thread migration
await _ensure_admin_user(app)
# Start IM channel service if any channels are configured
try:
from app.channels.service import start_channel_service
@@ -163,7 +293,31 @@ This gateway provides custom endpoints for models, MCP configuration, skills, an
],
)
# CORS is handled by nginx - no need for FastAPI middleware
# Auth: reject unauthenticated requests to non-public paths (fail-closed safety net)
app.add_middleware(AuthMiddleware)
# CSRF: Double Submit Cookie pattern for state-changing requests
app.add_middleware(CSRFMiddleware)
# CORS: when GATEWAY_CORS_ORIGINS is set (dev without nginx), add CORS middleware.
# In production, nginx handles CORS and no middleware is needed.
cors_origins_env = os.environ.get("GATEWAY_CORS_ORIGINS", "")
if cors_origins_env:
cors_origins = [o.strip() for o in cors_origins_env.split(",") if o.strip()]
# Validate: wildcard origin with credentials is a security misconfiguration
for origin in cors_origins:
if origin == "*":
logger.error("GATEWAY_CORS_ORIGINS contains wildcard '*' with allow_credentials=True. This is a security misconfiguration — browsers will reject the response. Use explicit scheme://host:port origins instead.")
cors_origins = [o for o in cors_origins if o != "*"]
break
if cors_origins:
app.add_middleware(
CORSMiddleware,
allow_origins=cors_origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# Include routers
# Models API is mounted at /api/models
@@ -199,6 +353,12 @@ This gateway provides custom endpoints for models, MCP configuration, skills, an
# Assistants compatibility API (LangGraph Platform stub)
app.include_router(assistants_compat.router)
# Auth API is mounted at /api/v1/auth
app.include_router(auth.router)
# Feedback API is mounted at /api/threads/{thread_id}/runs/{run_id}/feedback
app.include_router(feedback.router)
# Thread Runs API (LangGraph Platform-compatible runs lifecycle)
app.include_router(thread_runs.router)
+42
@@ -0,0 +1,42 @@
"""Authentication module for DeerFlow.
This module provides:
- JWT-based authentication
- Provider Factory pattern for extensible auth methods
- UserRepository interface for storage backends (SQLite)
"""
from app.gateway.auth.config import AuthConfig, get_auth_config, set_auth_config
from app.gateway.auth.errors import AuthErrorCode, AuthErrorResponse, TokenError
from app.gateway.auth.jwt import TokenPayload, create_access_token, decode_token
from app.gateway.auth.local_provider import LocalAuthProvider
from app.gateway.auth.models import User, UserResponse
from app.gateway.auth.password import hash_password, verify_password
from app.gateway.auth.providers import AuthProvider
from app.gateway.auth.repositories.base import UserRepository
__all__ = [
# Config
"AuthConfig",
"get_auth_config",
"set_auth_config",
# Errors
"AuthErrorCode",
"AuthErrorResponse",
"TokenError",
# JWT
"TokenPayload",
"create_access_token",
"decode_token",
# Password
"hash_password",
"verify_password",
# Models
"User",
"UserResponse",
# Providers
"AuthProvider",
"LocalAuthProvider",
# Repository
"UserRepository",
]
+57
@@ -0,0 +1,57 @@
"""Authentication configuration for DeerFlow."""
import logging
import os
import secrets
from dotenv import load_dotenv
from pydantic import BaseModel, Field
load_dotenv()
logger = logging.getLogger(__name__)
class AuthConfig(BaseModel):
"""JWT and auth-related configuration. Parsed once at startup.
Note: the ``users`` table now lives in the shared persistence
database managed by ``deerflow.persistence.engine``. The old
``users_db_path`` config key has been removed — user storage is
configured through ``config.database`` like every other table.
"""
jwt_secret: str = Field(
...,
description="Secret key for JWT signing. MUST be set via AUTH_JWT_SECRET.",
)
token_expiry_days: int = Field(default=7, ge=1, le=30)
oauth_github_client_id: str | None = Field(default=None)
oauth_github_client_secret: str | None = Field(default=None)
_auth_config: AuthConfig | None = None
def get_auth_config() -> AuthConfig:
"""Get the global AuthConfig instance. Parses from env on first call."""
global _auth_config
if _auth_config is None:
jwt_secret = os.environ.get("AUTH_JWT_SECRET")
if not jwt_secret:
jwt_secret = secrets.token_urlsafe(32)
os.environ["AUTH_JWT_SECRET"] = jwt_secret
logger.warning(
"⚠ AUTH_JWT_SECRET is not set — using an auto-generated ephemeral secret. "
"Sessions will be invalidated on restart. "
"For production, add AUTH_JWT_SECRET to your .env file: "
'python -c "import secrets; print(secrets.token_urlsafe(32))"'
)
_auth_config = AuthConfig(jwt_secret=jwt_secret)
return _auth_config
def set_auth_config(config: AuthConfig) -> None:
"""Set the global AuthConfig instance (for testing)."""
global _auth_config
_auth_config = config
@@ -0,0 +1,48 @@
"""Write initial admin credentials to a restricted file instead of logs.
Logging secrets to stdout/stderr is a well-known CodeQL finding
(py/clear-text-logging-sensitive-data) — in production those logs
get collected into ELK/Splunk/etc and become a secret sprawl
source. This helper writes the credential to a 0600 file that only
the process user can read, and returns the path so the caller can
log **the path** (not the password) for the operator to pick up.
"""
from __future__ import annotations
import os
from pathlib import Path
from deerflow.config.paths import get_paths
_CREDENTIAL_FILENAME = "admin_initial_credentials.txt"
def write_initial_credentials(email: str, password: str, *, label: str = "initial") -> Path:
"""Write the admin email + password to ``{base_dir}/admin_initial_credentials.txt``.
The file is created **atomically** with mode 0600 via ``os.open``
so the password is never world-readable, even for the single syscall
window between ``write_text`` and ``chmod``.
``label`` distinguishes "initial" (fresh creation) from "reset"
(password reset) in the file header so an operator picking up the
file after a restart can tell which event produced it.
Returns the absolute :class:`Path` to the file.
"""
target = get_paths().base_dir / _CREDENTIAL_FILENAME
target.parent.mkdir(parents=True, exist_ok=True)
content = (
f"# DeerFlow admin {label} credentials\n# This file is generated on first boot or password reset.\n# Change the password after login via Settings -> Account,\n# then delete this file.\n#\nemail: {email}\npassword: {password}\n"
)
# Atomic 0600 create-or-truncate. O_TRUNC (not O_EXCL) so the
# reset-password path can rewrite an existing file without a
# separate unlink-then-create dance.
fd = os.open(target, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "w", encoding="utf-8") as fh:
fh.write(content)
return target.resolve()
+44
@@ -0,0 +1,44 @@
"""Typed error definitions for auth module.
AuthErrorCode: exhaustive enum of all auth failure conditions.
TokenError: exhaustive enum of JWT decode failures.
AuthErrorResponse: structured error payload for HTTP responses.
"""
from enum import StrEnum
from pydantic import BaseModel
class AuthErrorCode(StrEnum):
"""Exhaustive list of auth error conditions."""
INVALID_CREDENTIALS = "invalid_credentials"
TOKEN_EXPIRED = "token_expired"
TOKEN_INVALID = "token_invalid"
USER_NOT_FOUND = "user_not_found"
EMAIL_ALREADY_EXISTS = "email_already_exists"
PROVIDER_NOT_FOUND = "provider_not_found"
NOT_AUTHENTICATED = "not_authenticated"
class TokenError(StrEnum):
"""Exhaustive list of JWT decode failure reasons."""
EXPIRED = "expired"
INVALID_SIGNATURE = "invalid_signature"
MALFORMED = "malformed"
class AuthErrorResponse(BaseModel):
"""Structured error response — replaces bare `detail` strings."""
code: AuthErrorCode
message: str
def token_error_to_code(err: TokenError) -> AuthErrorCode:
"""Map TokenError to AuthErrorCode — single source of truth."""
if err == TokenError.EXPIRED:
return AuthErrorCode.TOKEN_EXPIRED
return AuthErrorCode.TOKEN_INVALID
+55
@@ -0,0 +1,55 @@
"""JWT token creation and verification."""
from datetime import UTC, datetime, timedelta
import jwt
from pydantic import BaseModel
from app.gateway.auth.config import get_auth_config
from app.gateway.auth.errors import TokenError
class TokenPayload(BaseModel):
"""JWT token payload."""
sub: str # user_id
exp: datetime
iat: datetime | None = None
ver: int = 0 # token_version — must match User.token_version
def create_access_token(user_id: str, expires_delta: timedelta | None = None, token_version: int = 0) -> str:
"""Create a JWT access token.
Args:
user_id: The user's UUID as string
expires_delta: Optional custom expiry, defaults to 7 days
token_version: User's current token_version for invalidation
Returns:
Encoded JWT string
"""
config = get_auth_config()
expiry = expires_delta or timedelta(days=config.token_expiry_days)
now = datetime.now(UTC)
payload = {"sub": user_id, "exp": now + expiry, "iat": now, "ver": token_version}
return jwt.encode(payload, config.jwt_secret, algorithm="HS256")
def decode_token(token: str) -> TokenPayload | TokenError:
"""Decode and validate a JWT token.
Returns:
TokenPayload if valid, or a specific TokenError variant.
"""
config = get_auth_config()
try:
payload = jwt.decode(token, config.jwt_secret, algorithms=["HS256"])
return TokenPayload(**payload)
except jwt.ExpiredSignatureError:
return TokenError.EXPIRED
except jwt.InvalidSignatureError:
return TokenError.INVALID_SIGNATURE
except jwt.PyJWTError:
return TokenError.MALFORMED
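A small caller sketch for the errors-as-values contract above. It relies only on `decode_token`, `TokenPayload`, `TokenError`, and `token_error_to_code` as defined in this changeset; the HTTP mapping is illustrative, not the gateway's actual route code:
```python
from fastapi import HTTPException

from app.gateway.auth.errors import TokenError, token_error_to_code
from app.gateway.auth.jwt import TokenPayload, decode_token


def payload_or_401(token: str) -> TokenPayload:
    """Illustrative helper: convert a decode failure into an HTTP 401."""
    result = decode_token(token)
    if isinstance(result, TokenError):
        # Map the specific failure (expired / bad signature / malformed)
        # to its AuthErrorCode via the single source of truth.
        raise HTTPException(status_code=401, detail=str(token_error_to_code(result)))
    return result
```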
@@ -0,0 +1,87 @@
"""Local email/password authentication provider."""
from app.gateway.auth.models import User
from app.gateway.auth.password import hash_password_async, verify_password_async
from app.gateway.auth.providers import AuthProvider
from app.gateway.auth.repositories.base import UserRepository
class LocalAuthProvider(AuthProvider):
"""Email/password authentication provider using local database."""
def __init__(self, repository: UserRepository):
"""Initialize with a UserRepository.
Args:
repository: UserRepository implementation (SQLite)
"""
self._repo = repository
async def authenticate(self, credentials: dict) -> User | None:
"""Authenticate with email and password.
Args:
credentials: dict with 'email' and 'password' keys
Returns:
User if authentication succeeds, None otherwise
"""
email = credentials.get("email")
password = credentials.get("password")
if not email or not password:
return None
user = await self._repo.get_user_by_email(email)
if user is None:
return None
if user.password_hash is None:
# OAuth user without local password
return None
if not await verify_password_async(password, user.password_hash):
return None
return user
async def get_user(self, user_id: str) -> User | None:
"""Get user by ID."""
return await self._repo.get_user_by_id(user_id)
async def create_user(self, email: str, password: str | None = None, system_role: str = "user", needs_setup: bool = False) -> User:
"""Create a new local user.
Args:
email: User email address
password: Plain text password (will be hashed)
system_role: Role to assign ("admin" or "user")
needs_setup: If True, user must complete setup on first login
Returns:
Created User instance
"""
password_hash = await hash_password_async(password) if password else None
user = User(
email=email,
password_hash=password_hash,
system_role=system_role,
needs_setup=needs_setup,
)
return await self._repo.create_user(user)
async def get_user_by_oauth(self, provider: str, oauth_id: str) -> User | None:
"""Get user by OAuth provider and ID."""
return await self._repo.get_user_by_oauth(provider, oauth_id)
async def count_users(self) -> int:
"""Return total number of registered users."""
return await self._repo.count_users()
async def update_user(self, user: User) -> User:
"""Update an existing user."""
return await self._repo.update_user(user)
async def get_user_by_email(self, email: str) -> User | None:
"""Get user by email."""
return await self._repo.get_user_by_email(email)
+41
@@ -0,0 +1,41 @@
"""User Pydantic models for authentication."""
from datetime import UTC, datetime
from typing import Literal
from uuid import UUID, uuid4
from pydantic import BaseModel, ConfigDict, EmailStr, Field
def _utc_now() -> datetime:
"""Return current UTC time (timezone-aware)."""
return datetime.now(UTC)
class User(BaseModel):
"""Internal user representation."""
model_config = ConfigDict(from_attributes=True)
id: UUID = Field(default_factory=uuid4, description="Primary key")
email: EmailStr = Field(..., description="Unique email address")
password_hash: str | None = Field(None, description="bcrypt hash, nullable for OAuth users")
system_role: Literal["admin", "user"] = Field(default="user")
created_at: datetime = Field(default_factory=_utc_now)
# OAuth linkage (optional)
oauth_provider: str | None = Field(None, description="e.g. 'github', 'google'")
oauth_id: str | None = Field(None, description="User ID from OAuth provider")
# Auth lifecycle
needs_setup: bool = Field(default=False, description="True for auto-created admin until setup completes")
token_version: int = Field(default=0, description="Incremented on password change to invalidate old JWTs")
class UserResponse(BaseModel):
"""Response model for user info endpoint."""
id: str
email: str
system_role: Literal["admin", "user"]
needs_setup: bool = False
+33
@@ -0,0 +1,33 @@
"""Password hashing utilities using bcrypt directly."""
import asyncio
import bcrypt
def hash_password(password: str) -> str:
"""Hash a password using bcrypt."""
return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")
def verify_password(plain_password: str, hashed_password: str) -> bool:
"""Verify a password against its hash."""
return bcrypt.checkpw(plain_password.encode("utf-8"), hashed_password.encode("utf-8"))
async def hash_password_async(password: str) -> str:
"""Hash a password using bcrypt (non-blocking).
Wraps the blocking bcrypt operation in a thread pool to avoid
blocking the event loop during password hashing.
"""
return await asyncio.to_thread(hash_password, password)
async def verify_password_async(plain_password: str, hashed_password: str) -> bool:
"""Verify a password against its hash (non-blocking).
Wraps the blocking bcrypt operation in a thread pool to avoid
blocking the event loop during password verification.
"""
return await asyncio.to_thread(verify_password, plain_password, hashed_password)
+24
@@ -0,0 +1,24 @@
"""Auth provider abstraction."""
from abc import ABC, abstractmethod
class AuthProvider(ABC):
"""Abstract base class for authentication providers."""
@abstractmethod
async def authenticate(self, credentials: dict) -> "User | None":
"""Authenticate user with given credentials.
Returns User if authentication succeeds, None otherwise.
"""
...
@abstractmethod
async def get_user(self, user_id: str) -> "User | None":
"""Retrieve user by ID."""
...
# Import User at runtime to avoid circular imports
from app.gateway.auth.models import User # noqa: E402
@@ -0,0 +1,97 @@
"""User repository interface for abstracting database operations."""
from abc import ABC, abstractmethod
from app.gateway.auth.models import User
class UserNotFoundError(LookupError):
"""Raised when a user repository operation targets a non-existent row.
Subclass of :class:`LookupError` so callers that already catch
``LookupError`` for "missing entity" can keep working unchanged,
while specific call sites can pin to this class to distinguish
"concurrent delete during update" from other lookups.
"""
class UserRepository(ABC):
"""Abstract interface for user data storage.
Implement this interface to support different storage backends
(SQLite)
"""
@abstractmethod
async def create_user(self, user: User) -> User:
"""Create a new user.
Args:
user: User object to create
Returns:
Created User with ID assigned
Raises:
ValueError: If email already exists
"""
...
@abstractmethod
async def get_user_by_id(self, user_id: str) -> User | None:
"""Get user by ID.
Args:
user_id: User UUID as string
Returns:
User if found, None otherwise
"""
...
@abstractmethod
async def get_user_by_email(self, email: str) -> User | None:
"""Get user by email.
Args:
email: User email address
Returns:
User if found, None otherwise
"""
...
@abstractmethod
async def update_user(self, user: User) -> User:
"""Update an existing user.
Args:
user: User object with updated fields
Returns:
Updated User
Raises:
UserNotFoundError: If no row exists for ``user.id``. This is
a hard failure (not a no-op) so callers cannot mistake a
concurrent-delete race for a successful update.
"""
...
@abstractmethod
async def count_users(self) -> int:
"""Return total number of registered users."""
...
@abstractmethod
async def get_user_by_oauth(self, provider: str, oauth_id: str) -> User | None:
"""Get user by OAuth provider and ID.
Args:
provider: OAuth provider name (e.g. 'github', 'google')
oauth_id: User ID from the OAuth provider
Returns:
User if found, None otherwise
"""
...
@@ -0,0 +1,122 @@
"""SQLAlchemy-backed UserRepository implementation.
Uses the shared async session factory from
``deerflow.persistence.engine`` — the ``users`` table lives in the
same database as ``threads_meta``, ``runs``, ``run_events``, and
``feedback``.
Constructor takes the session factory directly (same pattern as the
other four repositories in ``deerflow.persistence.*``). Callers
construct this after ``init_engine_from_config()`` has run.
"""
from __future__ import annotations
from datetime import UTC
from uuid import UUID
from sqlalchemy import func, select
from sqlalchemy.exc import IntegrityError
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker
from app.gateway.auth.models import User
from app.gateway.auth.repositories.base import UserNotFoundError, UserRepository
from deerflow.persistence.user.model import UserRow
class SQLiteUserRepository(UserRepository):
"""Async user repository backed by the shared SQLAlchemy engine."""
def __init__(self, session_factory: async_sessionmaker[AsyncSession]) -> None:
self._sf = session_factory
# ── Converters ────────────────────────────────────────────────────
@staticmethod
def _row_to_user(row: UserRow) -> User:
return User(
id=UUID(row.id),
email=row.email,
password_hash=row.password_hash,
system_role=row.system_role, # type: ignore[arg-type]
# SQLite loses tzinfo on read; reattach UTC so downstream
# code can compare timestamps reliably.
created_at=row.created_at if row.created_at.tzinfo else row.created_at.replace(tzinfo=UTC),
oauth_provider=row.oauth_provider,
oauth_id=row.oauth_id,
needs_setup=row.needs_setup,
token_version=row.token_version,
)
@staticmethod
def _user_to_row(user: User) -> UserRow:
return UserRow(
id=str(user.id),
email=user.email,
password_hash=user.password_hash,
system_role=user.system_role,
created_at=user.created_at,
oauth_provider=user.oauth_provider,
oauth_id=user.oauth_id,
needs_setup=user.needs_setup,
token_version=user.token_version,
)
# ── CRUD ──────────────────────────────────────────────────────────
async def create_user(self, user: User) -> User:
"""Insert a new user. Raises ``ValueError`` on duplicate email."""
row = self._user_to_row(user)
async with self._sf() as session:
session.add(row)
try:
await session.commit()
except IntegrityError as exc:
await session.rollback()
raise ValueError(f"Email already registered: {user.email}") from exc
return user
async def get_user_by_id(self, user_id: str) -> User | None:
async with self._sf() as session:
row = await session.get(UserRow, user_id)
return self._row_to_user(row) if row is not None else None
async def get_user_by_email(self, email: str) -> User | None:
stmt = select(UserRow).where(UserRow.email == email)
async with self._sf() as session:
result = await session.execute(stmt)
row = result.scalar_one_or_none()
return self._row_to_user(row) if row is not None else None
async def update_user(self, user: User) -> User:
async with self._sf() as session:
row = await session.get(UserRow, str(user.id))
if row is None:
# Hard fail on concurrent delete: callers (reset_admin,
# password change handlers, _ensure_admin_user) all
# fetched the user just before this call, so a missing
# row here means the row vanished underneath us. Silent
# success would let the caller log "password reset" for
# a row that no longer exists.
raise UserNotFoundError(f"User {user.id} no longer exists")
row.email = user.email
row.password_hash = user.password_hash
row.system_role = user.system_role
row.oauth_provider = user.oauth_provider
row.oauth_id = user.oauth_id
row.needs_setup = user.needs_setup
row.token_version = user.token_version
await session.commit()
return user
async def count_users(self) -> int:
stmt = select(func.count()).select_from(UserRow)
async with self._sf() as session:
return await session.scalar(stmt) or 0
async def get_user_by_oauth(self, provider: str, oauth_id: str) -> User | None:
stmt = select(UserRow).where(UserRow.oauth_provider == provider, UserRow.oauth_id == oauth_id)
async with self._sf() as session:
result = await session.execute(stmt)
row = result.scalar_one_or_none()
return self._row_to_user(row) if row is not None else None
+91
@@ -0,0 +1,91 @@
"""CLI tool to reset an admin password.
Usage:
python -m app.gateway.auth.reset_admin
python -m app.gateway.auth.reset_admin --email admin@example.com
Writes the new password to ``.deer-flow/admin_initial_credentials.txt``
(mode 0600) instead of printing it, so CI / log aggregators never see
the cleartext secret.
"""
from __future__ import annotations
import argparse
import asyncio
import secrets
import sys
from sqlalchemy import select
from app.gateway.auth.credential_file import write_initial_credentials
from app.gateway.auth.password import hash_password
from app.gateway.auth.repositories.sqlite import SQLiteUserRepository
from deerflow.persistence.user.model import UserRow
async def _run(email: str | None) -> int:
from deerflow.config import get_app_config
from deerflow.persistence.engine import (
close_engine,
get_session_factory,
init_engine_from_config,
)
config = get_app_config()
await init_engine_from_config(config.database)
try:
sf = get_session_factory()
if sf is None:
print("Error: persistence engine not available (check config.database).", file=sys.stderr)
return 1
repo = SQLiteUserRepository(sf)
if email:
user = await repo.get_user_by_email(email)
else:
# Find first admin via direct SELECT — repository does not
# expose a "first admin" helper and we do not want to add
# one just for this CLI.
async with sf() as session:
stmt = select(UserRow).where(UserRow.system_role == "admin").limit(1)
row = (await session.execute(stmt)).scalar_one_or_none()
if row is None:
user = None
else:
user = await repo.get_user_by_id(row.id)
if user is None:
if email:
print(f"Error: user '{email}' not found.", file=sys.stderr)
else:
print("Error: no admin user found.", file=sys.stderr)
return 1
new_password = secrets.token_urlsafe(16)
user.password_hash = hash_password(new_password)
user.token_version += 1
user.needs_setup = True
await repo.update_user(user)
cred_path = write_initial_credentials(user.email, new_password, label="reset")
print(f"Password reset for: {user.email}")
print(f"Credentials written to: {cred_path} (mode 0600)")
print("Next login will require setup (new email + password).")
return 0
finally:
await close_engine()
def main() -> None:
parser = argparse.ArgumentParser(description="Reset admin password")
parser.add_argument("--email", help="Admin email (default: first admin found)")
args = parser.parse_args()
exit_code = asyncio.run(_run(args.email))
sys.exit(exit_code)
if __name__ == "__main__":
main()
+117
@@ -0,0 +1,117 @@
"""Global authentication middleware — fail-closed safety net.
Rejects unauthenticated requests to non-public paths with 401. When a
request passes the cookie check, resolves the JWT payload to a real
``User`` object and stamps it into both ``request.state.user`` and the
``deerflow.runtime.user_context`` contextvar so that repository-layer
owner filtering works automatically via the sentinel pattern.
Fine-grained permission checks remain in authz.py decorators.
"""
from collections.abc import Callable
from fastapi import HTTPException, Request, Response
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import JSONResponse
from starlette.types import ASGIApp
from app.gateway.auth.errors import AuthErrorCode, AuthErrorResponse
from app.gateway.authz import _ALL_PERMISSIONS, AuthContext
from deerflow.runtime.user_context import reset_current_user, set_current_user
# Paths that never require authentication.
_PUBLIC_PATH_PREFIXES: tuple[str, ...] = (
"/health",
"/docs",
"/redoc",
"/openapi.json",
)
# Exact auth paths that are public (login/register/status check).
# /api/v1/auth/me, /api/v1/auth/change-password etc. are NOT public.
_PUBLIC_EXACT_PATHS: frozenset[str] = frozenset(
{
"/api/v1/auth/login/local",
"/api/v1/auth/register",
"/api/v1/auth/logout",
"/api/v1/auth/setup-status",
}
)
def _is_public(path: str) -> bool:
stripped = path.rstrip("/")
if stripped in _PUBLIC_EXACT_PATHS:
return True
return any(path.startswith(prefix) for prefix in _PUBLIC_PATH_PREFIXES)
class AuthMiddleware(BaseHTTPMiddleware):
"""Strict auth gate: reject requests without a valid session.
Two-stage check for non-public paths:
1. Cookie presence — return 401 NOT_AUTHENTICATED if missing
2. JWT validation via ``get_optional_user_from_request`` — return 401
TOKEN_INVALID if the token is absent, malformed, expired, or the
signed user does not exist / is stale
On success, stamps ``request.state.user`` and the
``deerflow.runtime.user_context`` contextvar so that repository-layer
owner filters work downstream without every route needing a
``@require_auth`` decorator. Routes that need per-resource
authorization (e.g. "user A cannot read user B's thread by guessing
the URL") should additionally use ``@require_permission(...,
owner_check=True)`` for explicit enforcement — but authentication
itself is fully handled here.
"""
def __init__(self, app: ASGIApp) -> None:
super().__init__(app)
async def dispatch(self, request: Request, call_next: Callable) -> Response:
if _is_public(request.url.path):
return await call_next(request)
# Non-public path: require session cookie
if not request.cookies.get("access_token"):
return JSONResponse(
status_code=401,
content={
"detail": AuthErrorResponse(
code=AuthErrorCode.NOT_AUTHENTICATED,
message="Authentication required",
).model_dump()
},
)
# Strict JWT validation: reject junk/expired tokens with 401
# right here instead of silently passing through. This closes
# the "junk cookie bypass" gap (AUTH_TEST_PLAN test 7.5.8):
# without this, non-isolation routes like /api/models would
# accept any cookie-shaped string as authentication.
#
# We call the *strict* resolver so that fine-grained error
# codes (token_expired, token_invalid, user_not_found, …)
# propagate from AuthErrorCode, not get flattened into one
# generic code. BaseHTTPMiddleware doesn't let HTTPException
# bubble up, so we catch and render it as JSONResponse here.
from app.gateway.deps import get_current_user_from_request
try:
user = await get_current_user_from_request(request)
except HTTPException as exc:
return JSONResponse(status_code=exc.status_code, content={"detail": exc.detail})
# Stamp both request.state.user (for the contextvar pattern)
# and request.state.auth (so @require_permission's "auth is
# None" branch short-circuits instead of running the entire
# JWT-decode + DB-lookup pipeline a second time per request).
request.state.user = user
request.state.auth = AuthContext(user=user, permissions=_ALL_PERMISSIONS)
token = set_current_user(user)
try:
return await call_next(request)
finally:
reset_current_user(token)
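As a rough sketch of what the stamping above buys downstream code, the route below reads the resolved user straight off `request.state.user`; the route path itself is hypothetical and not part of the gateway:
```python
from fastapi import APIRouter, Request

router = APIRouter()


@router.get("/api/whoami")  # illustrative path, not an actual gateway route
async def whoami(request: Request) -> dict:
    # AuthMiddleware has already rejected unauthenticated requests and
    # stamped the resolved User object onto request.state.user.
    user = request.state.user
    return {"email": user.email, "system_role": user.system_role}
```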
+262
@@ -0,0 +1,262 @@
"""Authorization decorators and context for DeerFlow.
Inspired by LangGraph Auth system: https://github.com/langchain-ai/langgraph/blob/main/libs/sdk-py/langgraph_sdk/auth/__init__.py
**Usage:**
1. Use ``@require_auth`` on routes that need authentication
2. Use ``@require_permission("resource", "action", filter_key=...)`` for permission checks
3. The decorator chain processes from bottom to top
**Example:**
@router.get("/{thread_id}")
@require_auth
@require_permission("threads", "read", owner_check=True)
async def get_thread(thread_id: str, request: Request):
# User is authenticated and has threads:read permission
...
**Permission Model:**
- threads:read - View thread
- threads:write - Create/update thread
- threads:delete - Delete thread
- runs:create - Run agent
- runs:read - View run
- runs:cancel - Cancel run
"""
from __future__ import annotations
import functools
from collections.abc import Callable
from typing import TYPE_CHECKING, Any, ParamSpec, TypeVar
from fastapi import HTTPException, Request
if TYPE_CHECKING:
from app.gateway.auth.models import User
P = ParamSpec("P")
T = TypeVar("T")
# Permission constants
class Permissions:
"""Permission constants for resource:action format."""
# Threads
THREADS_READ = "threads:read"
THREADS_WRITE = "threads:write"
THREADS_DELETE = "threads:delete"
# Runs
RUNS_CREATE = "runs:create"
RUNS_READ = "runs:read"
RUNS_CANCEL = "runs:cancel"
class AuthContext:
"""Authentication context for the current request.
Stored in request.state.auth after require_auth decoration.
Attributes:
user: The authenticated user, or None if anonymous
permissions: List of permission strings (e.g., "threads:read")
"""
__slots__ = ("user", "permissions")
def __init__(self, user: User | None = None, permissions: list[str] | None = None):
self.user = user
self.permissions = permissions or []
@property
def is_authenticated(self) -> bool:
"""Check if user is authenticated."""
return self.user is not None
def has_permission(self, resource: str, action: str) -> bool:
"""Check if context has permission for resource:action.
Args:
resource: Resource name (e.g., "threads")
action: Action name (e.g., "read")
Returns:
True if user has permission
"""
permission = f"{resource}:{action}"
return permission in self.permissions
def require_user(self) -> User:
"""Get user or raise 401.
Raises:
HTTPException 401 if not authenticated
"""
if not self.user:
raise HTTPException(status_code=401, detail="Authentication required")
return self.user
def get_auth_context(request: Request) -> AuthContext | None:
"""Get AuthContext from request state."""
return getattr(request.state, "auth", None)
_ALL_PERMISSIONS: list[str] = [
Permissions.THREADS_READ,
Permissions.THREADS_WRITE,
Permissions.THREADS_DELETE,
Permissions.RUNS_CREATE,
Permissions.RUNS_READ,
Permissions.RUNS_CANCEL,
]
async def _authenticate(request: Request) -> AuthContext:
"""Authenticate request and return AuthContext.
Delegates to deps.get_optional_user_from_request() for the JWT→User pipeline.
Returns AuthContext with user=None for anonymous requests.
"""
from app.gateway.deps import get_optional_user_from_request
user = await get_optional_user_from_request(request)
if user is None:
return AuthContext(user=None, permissions=[])
# In future, permissions could be stored in user record
return AuthContext(user=user, permissions=_ALL_PERMISSIONS)
def require_auth[**P, T](func: Callable[P, T]) -> Callable[P, T]:
"""Decorator that authenticates the request and sets AuthContext.
Must be placed ABOVE other decorators (executes after them).
Usage:
@router.get("/{thread_id}")
@require_auth # Bottom decorator (executes first after permission check)
@require_permission("threads", "read")
async def get_thread(thread_id: str, request: Request):
auth: AuthContext = request.state.auth
...
Raises:
ValueError: If 'request' parameter is missing
"""
@functools.wraps(func)
async def wrapper(*args: Any, **kwargs: Any) -> Any:
request = kwargs.get("request")
if request is None:
raise ValueError("require_auth decorator requires 'request' parameter")
# Authenticate and set context
auth_context = await _authenticate(request)
request.state.auth = auth_context
return await func(*args, **kwargs)
return wrapper
def require_permission(
resource: str,
action: str,
owner_check: bool = False,
require_existing: bool = False,
) -> Callable[[Callable[P, T]], Callable[P, T]]:
"""Decorator that checks permission for resource:action.
Must be used AFTER @require_auth.
Args:
resource: Resource name (e.g., "threads", "runs")
action: Action name (e.g., "read", "write", "delete")
owner_check: If True, validates that the current user owns the resource.
Requires 'thread_id' path parameter and performs ownership check.
require_existing: Only meaningful with ``owner_check=True``. If True, a
missing ``threads_meta`` row counts as a denial (404)
instead of "untracked legacy thread, allow". Use on
**destructive / mutating** routes (DELETE, PATCH,
state-update) so a deleted thread can't be re-targeted
by another user via the missing-row code path.
Usage:
# Read-style: legacy untracked threads are allowed
@require_permission("threads", "read", owner_check=True)
async def get_thread(thread_id: str, request: Request):
...
# Destructive: thread row MUST exist and be owned by caller
@require_permission("threads", "delete", owner_check=True, require_existing=True)
async def delete_thread(thread_id: str, request: Request):
...
Raises:
HTTPException 401: If authentication required but user is anonymous
HTTPException 403: If user lacks permission
HTTPException 404: If owner_check=True but user doesn't own the thread
ValueError: If owner_check=True but 'thread_id' parameter is missing
"""
def decorator(func: Callable[P, T]) -> Callable[P, T]:
@functools.wraps(func)
async def wrapper(*args: Any, **kwargs: Any) -> Any:
request = kwargs.get("request")
if request is None:
raise ValueError("require_permission decorator requires 'request' parameter")
auth: AuthContext = getattr(request.state, "auth", None)
if auth is None:
auth = await _authenticate(request)
request.state.auth = auth
if not auth.is_authenticated:
raise HTTPException(status_code=401, detail="Authentication required")
# Check permission
if not auth.has_permission(resource, action):
raise HTTPException(
status_code=403,
detail=f"Permission denied: {resource}:{action}",
)
# Owner check for thread-specific resources.
#
# 2.0-rc moved thread metadata into the SQL persistence layer
# (``threads_meta`` table). We verify ownership via
# ``ThreadMetaStore.check_access``: it returns True for
# missing rows (untracked legacy thread) and for rows whose
# ``owner_id`` is NULL (shared / pre-auth data), so this is
# strict-deny rather than strict-allow — only an *existing*
# row with a *different* owner_id triggers 404.
if owner_check:
thread_id = kwargs.get("thread_id")
if thread_id is None:
raise ValueError("require_permission with owner_check=True requires 'thread_id' parameter")
from app.gateway.deps import get_thread_meta_repo
thread_meta_repo = get_thread_meta_repo(request)
allowed = await thread_meta_repo.check_access(
thread_id,
str(auth.user.id),
require_existing=require_existing,
)
if not allowed:
raise HTTPException(
status_code=404,
detail=f"Thread {thread_id} not found",
)
return await func(*args, **kwargs)
return wrapper
return decorator
+112
@@ -0,0 +1,112 @@
"""CSRF protection middleware for FastAPI.
Per RFC-001:
State-changing operations require CSRF protection.
"""
import secrets
from collections.abc import Callable
from fastapi import Request, Response
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import JSONResponse
from starlette.types import ASGIApp
CSRF_COOKIE_NAME = "csrf_token"
CSRF_HEADER_NAME = "X-CSRF-Token"
CSRF_TOKEN_LENGTH = 64 # bytes
def is_secure_request(request: Request) -> bool:
"""Detect whether the original client request was made over HTTPS."""
return request.headers.get("x-forwarded-proto", request.url.scheme) == "https"
def generate_csrf_token() -> str:
"""Generate a secure random CSRF token."""
return secrets.token_urlsafe(CSRF_TOKEN_LENGTH)
def should_check_csrf(request: Request) -> bool:
"""Determine if a request needs CSRF validation.
CSRF is checked for state-changing methods (POST, PUT, DELETE, PATCH).
GET, HEAD, OPTIONS, and TRACE are exempt per RFC 7231.
"""
if request.method not in ("POST", "PUT", "DELETE", "PATCH"):
return False
path = request.url.path.rstrip("/")
# Exempt /api/v1/auth/me endpoint
if path == "/api/v1/auth/me":
return False
return True
_AUTH_EXEMPT_PATHS: frozenset[str] = frozenset(
{
"/api/v1/auth/login/local",
"/api/v1/auth/logout",
"/api/v1/auth/register",
}
)
def is_auth_endpoint(request: Request) -> bool:
"""Check if the request is to an auth endpoint.
Auth endpoints don't need CSRF validation on first call (no token).
"""
return request.url.path.rstrip("/") in _AUTH_EXEMPT_PATHS
class CSRFMiddleware(BaseHTTPMiddleware):
"""Middleware that implements CSRF protection using Double Submit Cookie pattern."""
def __init__(self, app: ASGIApp) -> None:
super().__init__(app)
async def dispatch(self, request: Request, call_next: Callable) -> Response:
_is_auth = is_auth_endpoint(request)
if should_check_csrf(request) and not _is_auth:
cookie_token = request.cookies.get(CSRF_COOKIE_NAME)
header_token = request.headers.get(CSRF_HEADER_NAME)
if not cookie_token or not header_token:
return JSONResponse(
status_code=403,
content={"detail": "CSRF token missing. Include X-CSRF-Token header."},
)
if not secrets.compare_digest(cookie_token, header_token):
return JSONResponse(
status_code=403,
content={"detail": "CSRF token mismatch."},
)
response = await call_next(request)
# For auth endpoints that set up session, also set CSRF cookie
if _is_auth and request.method == "POST":
# Generate a new CSRF token for the session
csrf_token = generate_csrf_token()
is_https = is_secure_request(request)
response.set_cookie(
key=CSRF_COOKIE_NAME,
value=csrf_token,
httponly=False, # Must be JS-readable for Double Submit Cookie pattern
secure=is_https,
samesite="strict",
)
return response
def get_csrf_token(request: Request) -> str | None:
"""Get the CSRF token from the current request's cookies.
This is useful for server-side rendering where you need to embed
token in forms or headers.
"""
return request.cookies.get(CSRF_COOKIE_NAME)
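A hedged client-side sketch of the Double Submit Cookie handshake this middleware enforces. The cookie and header names come from the constants above; the base URL, credentials, and target endpoint are placeholders for illustration:
```python
import httpx

BASE_URL = "http://localhost:8001"  # assumed gateway address


def call_protected_endpoint() -> httpx.Response:
    with httpx.Client(base_url=BASE_URL) as client:
        # 1. Log in (CSRF-exempt). The response sets the session cookie and
        #    the JS-readable csrf_token cookie.
        client.post(
            "/api/v1/auth/login/local",
            json={"email": "admin@deerflow.dev", "password": "change-me"},
        )
        # 2. Echo the cookie value back in the X-CSRF-Token header on every
        #    state-changing request, as the middleware requires.
        csrf = client.cookies.get("csrf_token") or ""
        return client.post("/api/some/endpoint", headers={"X-CSRF-Token": csrf})
```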
+180 -25
@@ -1,7 +1,8 @@
"""Centralized accessors for singleton objects stored on ``app.state``.
**Getters** (used by routers): raise 503 when a required dependency is
missing, except ``get_store`` which returns ``None``.
missing, except ``get_store`` and ``get_thread_meta_repo`` which return
``None``.
Initialization is handled directly in ``app.py`` via :class:`AsyncExitStack`.
"""
@@ -10,10 +11,15 @@ from __future__ import annotations
from collections.abc import AsyncGenerator
from contextlib import AsyncExitStack, asynccontextmanager
from typing import TYPE_CHECKING
from fastapi import FastAPI, HTTPException, Request
from deerflow.runtime import RunManager, StreamBridge
from deerflow.runtime import RunContext, RunManager
if TYPE_CHECKING:
from app.gateway.auth.local_provider import LocalAuthProvider
from app.gateway.auth.repositories.sqlite import SQLiteUserRepository
@asynccontextmanager
@@ -26,45 +32,194 @@ async def langgraph_runtime(app: FastAPI) -> AsyncGenerator[None, None]:
yield
"""
from deerflow.agents.checkpointer.async_provider import make_checkpointer
from deerflow.config import get_app_config
from deerflow.persistence.engine import close_engine, get_session_factory, init_engine_from_config
from deerflow.runtime import make_store, make_stream_bridge
from deerflow.runtime.events.store import make_run_event_store
async with AsyncExitStack() as stack:
app.state.stream_bridge = await stack.enter_async_context(make_stream_bridge())
# Initialize persistence engine BEFORE checkpointer so that
# auto-create-database logic runs first (postgres backend).
config = get_app_config()
await init_engine_from_config(config.database)
app.state.checkpointer = await stack.enter_async_context(make_checkpointer())
app.state.store = await stack.enter_async_context(make_store())
app.state.run_manager = RunManager()
yield
# Initialize repositories — one get_session_factory() call for all.
sf = get_session_factory()
if sf is not None:
from deerflow.persistence.feedback import FeedbackRepository
from deerflow.persistence.run import RunRepository
from deerflow.persistence.thread_meta import ThreadMetaRepository
app.state.run_store = RunRepository(sf)
app.state.feedback_repo = FeedbackRepository(sf)
app.state.thread_meta_repo = ThreadMetaRepository(sf)
else:
from deerflow.persistence.thread_meta import MemoryThreadMetaStore
from deerflow.runtime.runs.store.memory import MemoryRunStore
app.state.run_store = MemoryRunStore()
app.state.feedback_repo = None
app.state.thread_meta_repo = MemoryThreadMetaStore(app.state.store)
# Run event store (has its own factory with config-driven backend selection)
run_events_config = getattr(config, "run_events", None)
app.state.run_event_store = make_run_event_store(run_events_config)
# RunManager with store backing for persistence
app.state.run_manager = RunManager(store=app.state.run_store)
try:
yield
finally:
await close_engine()
# ---------------------------------------------------------------------------
# Getters called by routers per-request
# Getters -- called by routers per-request
# ---------------------------------------------------------------------------
def get_stream_bridge(request: Request) -> StreamBridge:
"""Return the global :class:`StreamBridge`, or 503."""
bridge = getattr(request.app.state, "stream_bridge", None)
if bridge is None:
raise HTTPException(status_code=503, detail="Stream bridge not available")
return bridge
def _require(attr: str, label: str):
"""Create a FastAPI dependency that returns ``app.state.<attr>`` or 503."""
def dep(request: Request):
val = getattr(request.app.state, attr, None)
if val is None:
raise HTTPException(status_code=503, detail=f"{label} not available")
return val
dep.__name__ = dep.__qualname__ = f"get_{attr}"
return dep
def get_run_manager(request: Request) -> RunManager:
"""Return the global :class:`RunManager`, or 503."""
mgr = getattr(request.app.state, "run_manager", None)
if mgr is None:
raise HTTPException(status_code=503, detail="Run manager not available")
return mgr
def get_checkpointer(request: Request):
"""Return the global checkpointer, or 503."""
cp = getattr(request.app.state, "checkpointer", None)
if cp is None:
raise HTTPException(status_code=503, detail="Checkpointer not available")
return cp
get_stream_bridge = _require("stream_bridge", "Stream bridge")
get_run_manager = _require("run_manager", "Run manager")
get_checkpointer = _require("checkpointer", "Checkpointer")
get_run_event_store = _require("run_event_store", "Run event store")
get_feedback_repo = _require("feedback_repo", "Feedback")
get_run_store = _require("run_store", "Run store")
def get_store(request: Request):
"""Return the global store (may be ``None`` if not configured)."""
return getattr(request.app.state, "store", None)
get_thread_meta_repo = _require("thread_meta_repo", "Thread metadata store")
def get_run_context(request: Request) -> RunContext:
"""Build a :class:`RunContext` from ``app.state`` singletons.
Returns a *base* context with infrastructure dependencies. Callers that
need per-run fields (e.g. ``follow_up_to_run_id``) should use
``dataclasses.replace(ctx, follow_up_to_run_id=...)`` before passing it
to :func:`run_agent`.
"""
from deerflow.config import get_app_config
return RunContext(
checkpointer=get_checkpointer(request),
store=get_store(request),
event_store=get_run_event_store(request),
run_events_config=getattr(get_app_config(), "run_events", None),
thread_meta_repo=get_thread_meta_repo(request),
)
# ---------------------------------------------------------------------------
# Auth helpers (used by authz.py and auth middleware)
# ---------------------------------------------------------------------------
# Cached singletons to avoid repeated instantiation per request
_cached_local_provider: LocalAuthProvider | None = None
_cached_repo: SQLiteUserRepository | None = None
def get_local_provider() -> LocalAuthProvider:
"""Get or create the cached LocalAuthProvider singleton.
Must be called after ``init_engine_from_config()`` — the shared
session factory is required to construct the user repository.
"""
global _cached_local_provider, _cached_repo
if _cached_repo is None:
from app.gateway.auth.repositories.sqlite import SQLiteUserRepository
from deerflow.persistence.engine import get_session_factory
sf = get_session_factory()
if sf is None:
raise RuntimeError("get_local_provider() called before init_engine_from_config(); cannot access users table")
_cached_repo = SQLiteUserRepository(sf)
if _cached_local_provider is None:
from app.gateway.auth.local_provider import LocalAuthProvider
_cached_local_provider = LocalAuthProvider(repository=_cached_repo)
return _cached_local_provider
async def get_current_user_from_request(request: Request):
"""Get the current authenticated user from the request cookie.
Raises HTTPException 401 if not authenticated.
"""
from app.gateway.auth import decode_token
from app.gateway.auth.errors import AuthErrorCode, AuthErrorResponse, TokenError, token_error_to_code
access_token = request.cookies.get("access_token")
if not access_token:
raise HTTPException(
status_code=401,
detail=AuthErrorResponse(code=AuthErrorCode.NOT_AUTHENTICATED, message="Not authenticated").model_dump(),
)
payload = decode_token(access_token)
if isinstance(payload, TokenError):
raise HTTPException(
status_code=401,
detail=AuthErrorResponse(code=token_error_to_code(payload), message=f"Token error: {payload.value}").model_dump(),
)
provider = get_local_provider()
user = await provider.get_user(payload.sub)
if user is None:
raise HTTPException(
status_code=401,
detail=AuthErrorResponse(code=AuthErrorCode.USER_NOT_FOUND, message="User not found").model_dump(),
)
# Token version mismatch → password was changed, token is stale
if user.token_version != payload.ver:
raise HTTPException(
status_code=401,
detail=AuthErrorResponse(code=AuthErrorCode.TOKEN_INVALID, message="Token revoked (password changed)").model_dump(),
)
return user
async def get_optional_user_from_request(request: Request):
"""Get optional authenticated user from request.
Returns None if not authenticated.
"""
try:
return await get_current_user_from_request(request)
except HTTPException:
return None
async def get_current_user(request: Request) -> str | None:
"""Extract user_id from request cookie, or None if not authenticated.
Thin adapter that returns the string id for callers that only need
identification (e.g., ``feedback.py``). Full-user callers should use
``get_current_user_from_request`` or ``get_optional_user_from_request``.
"""
user = await get_optional_user_from_request(request)
return str(user.id) if user else None
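Illustrative aside: one way a caller might derive a per-run context from the base RunContext returned by get_run_context, as the docstring suggests; the handler name and wiring here are hypothetical.
import dataclasses
from fastapi import Request
from app.gateway.deps import get_run_context
async def start_follow_up_run(request: Request, follow_up_to_run_id: str):
    # The base context carries only infrastructure singletons (checkpointer,
    # store, event store, thread_meta repo).
    base_ctx = get_run_context(request)
    # Per-run fields are layered on with dataclasses.replace before the
    # context is handed to run_agent(...).
    return dataclasses.replace(base_ctx, follow_up_to_run_id=follow_up_to_run_id)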
+106
@@ -0,0 +1,106 @@
"""LangGraph Server auth handler — shares JWT logic with Gateway.
Loaded by LangGraph Server via langgraph.json ``auth.path``.
Reuses the same ``decode_token`` / ``get_auth_config`` as Gateway,
so both modes validate tokens with the same secret and rules.
Two layers:
1. @auth.authenticate — validates JWT cookie, extracts user_id,
and enforces CSRF on state-changing methods (POST/PUT/DELETE/PATCH)
2. @auth.on — returns a metadata filter so each user only sees their own threads
"""
import secrets
from langgraph_sdk import Auth
from app.gateway.auth.errors import TokenError
from app.gateway.auth.jwt import decode_token
from app.gateway.deps import get_local_provider
auth = Auth()
# Methods that require CSRF validation (state-changing per RFC 7231).
_CSRF_METHODS = frozenset({"POST", "PUT", "DELETE", "PATCH"})
def _check_csrf(request) -> None:
"""Enforce Double Submit Cookie CSRF check for state-changing requests.
Mirrors Gateway's CSRFMiddleware logic so that LangGraph routes
proxied directly by nginx have the same CSRF protection.
"""
method = getattr(request, "method", "") or ""
if method.upper() not in _CSRF_METHODS:
return
cookie_token = request.cookies.get("csrf_token")
header_token = request.headers.get("x-csrf-token")
if not cookie_token or not header_token:
raise Auth.exceptions.HTTPException(
status_code=403,
detail="CSRF token missing. Include X-CSRF-Token header.",
)
if not secrets.compare_digest(cookie_token, header_token):
raise Auth.exceptions.HTTPException(
status_code=403,
detail="CSRF token mismatch.",
)
@auth.authenticate
async def authenticate(request):
"""Validate the session cookie, decode JWT, and check token_version.
Same validation chain as Gateway's get_current_user_from_request:
cookie → decode JWT → DB lookup → token_version match
Also enforces CSRF on state-changing methods.
"""
# CSRF check before authentication so forged cross-site requests
# are rejected early, even if the cookie carries a valid JWT.
_check_csrf(request)
token = request.cookies.get("access_token")
if not token:
raise Auth.exceptions.HTTPException(
status_code=401,
detail="Not authenticated",
)
payload = decode_token(token)
if isinstance(payload, TokenError):
raise Auth.exceptions.HTTPException(
status_code=401,
detail=f"Token error: {payload.value}",
)
user = await get_local_provider().get_user(payload.sub)
if user is None:
raise Auth.exceptions.HTTPException(
status_code=401,
detail="User not found",
)
if user.token_version != payload.ver:
raise Auth.exceptions.HTTPException(
status_code=401,
detail="Token revoked (password changed)",
)
return payload.sub
@auth.on
async def add_owner_filter(ctx: Auth.types.AuthContext, value: dict):
"""Inject owner_id metadata on writes; filter by owner_id on reads.
Gateway stores thread ownership as ``metadata.owner_id``.
This handler ensures LangGraph Server enforces the same isolation.
"""
# On create/update: stamp owner_id into metadata
metadata = value.setdefault("metadata", {})
metadata["owner_id"] = ctx.user.identity
# Return filter dict — LangGraph applies it to search/read/delete
return {"owner_id": ctx.user.identity}
+2
@@ -7,6 +7,7 @@ from urllib.parse import quote
from fastapi import APIRouter, HTTPException, Request
from fastapi.responses import FileResponse, PlainTextResponse, Response
from app.gateway.authz import require_permission
from app.gateway.path_utils import resolve_thread_virtual_path
logger = logging.getLogger(__name__)
@@ -81,6 +82,7 @@ def _extract_file_from_skill_archive(zip_path: Path, internal_path: str) -> byte
summary="Get Artifact File",
description="Retrieve an artifact file generated by the AI agent. Text and binary files can be viewed inline, while active web content is always downloaded.",
)
@require_permission("threads", "read", owner_check=True)
async def get_artifact(thread_id: str, path: str, request: Request, download: bool = False) -> Response:
"""Get an artifact file by its path.
+418
@@ -0,0 +1,418 @@
"""Authentication endpoints."""
import logging
import os
import time
from ipaddress import ip_address, ip_network
from fastapi import APIRouter, Depends, HTTPException, Request, Response, status
from fastapi.security import OAuth2PasswordRequestForm
from pydantic import BaseModel, EmailStr, Field, field_validator
from app.gateway.auth import (
UserResponse,
create_access_token,
)
from app.gateway.auth.config import get_auth_config
from app.gateway.auth.errors import AuthErrorCode, AuthErrorResponse
from app.gateway.csrf_middleware import is_secure_request
from app.gateway.deps import get_current_user_from_request, get_local_provider
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/v1/auth", tags=["auth"])
# ── Request/Response Models ──────────────────────────────────────────────
class LoginResponse(BaseModel):
"""Response model for login — token only lives in HttpOnly cookie."""
expires_in: int # seconds
needs_setup: bool = False
# Top common-password blocklist. Drawn from the public SecLists "10k worst
# passwords" set, lowercased + length>=8 only (shorter ones already fail
# the min_length check). Kept tight on purpose: this is the **lower bound**
# defense, not a full HIBP / passlib check, and runs in-process per request.
_COMMON_PASSWORDS: frozenset[str] = frozenset(
{
"password",
"password1",
"password12",
"password123",
"password1234",
"12345678",
"123456789",
"1234567890",
"qwerty12",
"qwertyui",
"qwerty123",
"abc12345",
"abcd1234",
"iloveyou",
"letmein1",
"welcome1",
"welcome123",
"admin123",
"administrator",
"passw0rd",
"p@ssw0rd",
"monkey12",
"trustno1",
"sunshine",
"princess",
"football",
"baseball",
"superman",
"batman123",
"starwars",
"dragon123",
"master123",
"shadow12",
"michael1",
"jennifer",
"computer",
}
)
def _password_is_common(password: str) -> bool:
"""Case-insensitive blocklist check.
Lowercases the input so trivial mutations like ``Password`` /
``PASSWORD`` are also rejected. Does not normalize digit substitutions
(``p@ssw0rd`` is included as a literal entry instead) — keeping the
rule cheap and predictable.
"""
return password.lower() in _COMMON_PASSWORDS
def _validate_strong_password(value: str) -> str:
"""Pydantic field-validator body shared by Register + ChangePassword.
Constraint = function, not type-level mixin. The two request models
have no "is-a" relationship; they only share the password-strength
rule. Lifting it into a free function lets each model bind it via
``@field_validator(field_name)`` without inheritance gymnastics.
"""
if _password_is_common(value):
raise ValueError("Password is too common; choose a stronger password.")
return value
class RegisterRequest(BaseModel):
"""Request model for user registration."""
email: EmailStr
password: str = Field(..., min_length=8)
_strong_password = field_validator("password")(classmethod(lambda cls, v: _validate_strong_password(v)))
class ChangePasswordRequest(BaseModel):
"""Request model for password change (also handles setup flow)."""
current_password: str
new_password: str = Field(..., min_length=8)
new_email: EmailStr | None = None
_strong_password = field_validator("new_password")(classmethod(lambda cls, v: _validate_strong_password(v)))
class MessageResponse(BaseModel):
"""Generic message response."""
message: str
# ── Helpers ───────────────────────────────────────────────────────────────
def _set_session_cookie(response: Response, token: str, request: Request) -> None:
"""Set the access_token HttpOnly cookie on the response."""
config = get_auth_config()
is_https = is_secure_request(request)
response.set_cookie(
key="access_token",
value=token,
httponly=True,
secure=is_https,
samesite="lax",
max_age=config.token_expiry_days * 24 * 3600 if is_https else None,
)
# ── Rate Limiting ────────────────────────────────────────────────────────
# In-process dict — not shared across workers. Sufficient for single-worker deployments.
_MAX_LOGIN_ATTEMPTS = 5
_LOCKOUT_SECONDS = 300 # 5 minutes
# ip → (fail_count, lock_until_timestamp)
_login_attempts: dict[str, tuple[int, float]] = {}
def _trusted_proxies() -> list:
"""Parse ``AUTH_TRUSTED_PROXIES`` env var into a list of ip_network objects.
Comma-separated CIDR or single-IP entries. Empty / unset = no proxy is
trusted (direct mode). Invalid entries are skipped with a logger warning.
Read live so env-var overrides take effect immediately and tests can
``monkeypatch.setenv`` without poking a module-level cache.
"""
raw = os.getenv("AUTH_TRUSTED_PROXIES", "").strip()
if not raw:
return []
nets = []
for entry in raw.split(","):
entry = entry.strip()
if not entry:
continue
try:
nets.append(ip_network(entry, strict=False))
except ValueError:
logger.warning("AUTH_TRUSTED_PROXIES: ignoring invalid entry %r", entry)
return nets
def _get_client_ip(request: Request) -> str:
"""Extract the real client IP for rate limiting.
Trust model:
- The TCP peer (``request.client.host``) is always the baseline. It is
whatever the kernel reports as the connecting socket — unforgeable
by the client itself.
- ``X-Real-IP`` is **only** honored if the TCP peer is in the
``AUTH_TRUSTED_PROXIES`` allowlist (set via env var, comma-separated
CIDR or single IPs). When set, the gateway is assumed to be behind a
reverse proxy (nginx, Cloudflare, ALB, …) that overwrites
``X-Real-IP`` with the original client address.
- With no ``AUTH_TRUSTED_PROXIES`` set, ``X-Real-IP`` is silently
ignored — closing the bypass where any client could rotate the
header to dodge per-IP rate limits in dev / direct-gateway mode.
``X-Forwarded-For`` is intentionally NOT used because it is naturally
client-controlled at the *first* hop and the trust chain is harder to
audit per-request.
"""
peer_host = request.client.host if request.client else None
trusted = _trusted_proxies()
if trusted and peer_host:
try:
peer_ip = ip_address(peer_host)
if any(peer_ip in net for net in trusted):
real_ip = request.headers.get("x-real-ip", "").strip()
if real_ip:
return real_ip
except ValueError:
# peer_host wasn't a parseable IP (e.g. "unknown") — fall through
pass
return peer_host or "unknown"
def _check_rate_limit(ip: str) -> None:
"""Raise 429 if the IP is currently locked out."""
record = _login_attempts.get(ip)
if record is None:
return
fail_count, lock_until = record
if fail_count >= _MAX_LOGIN_ATTEMPTS:
if time.time() < lock_until:
raise HTTPException(
status_code=429,
detail="Too many login attempts. Try again later.",
)
del _login_attempts[ip]
_MAX_TRACKED_IPS = 10000
def _record_login_failure(ip: str) -> None:
"""Record a failed login attempt for the given IP."""
# Evict expired lockouts when dict grows too large
if len(_login_attempts) >= _MAX_TRACKED_IPS:
now = time.time()
expired = [k for k, (c, t) in _login_attempts.items() if c >= _MAX_LOGIN_ATTEMPTS and now >= t]
for k in expired:
del _login_attempts[k]
# If still too large, evict cheapest-to-lose half: below-threshold
# IPs (lock_until=0.0) sort first, then earliest-expiring lockouts.
if len(_login_attempts) >= _MAX_TRACKED_IPS:
by_time = sorted(_login_attempts.items(), key=lambda kv: kv[1][1])
for k, _ in by_time[: len(by_time) // 2]:
del _login_attempts[k]
record = _login_attempts.get(ip)
if record is None:
_login_attempts[ip] = (1, 0.0)
else:
new_count = record[0] + 1
lock_until = time.time() + _LOCKOUT_SECONDS if new_count >= _MAX_LOGIN_ATTEMPTS else 0.0
_login_attempts[ip] = (new_count, lock_until)
def _record_login_success(ip: str) -> None:
"""Clear failure counter for the given IP on successful login."""
_login_attempts.pop(ip, None)
# ── Endpoints ─────────────────────────────────────────────────────────────
@router.post("/login/local", response_model=LoginResponse)
async def login_local(
request: Request,
response: Response,
form_data: OAuth2PasswordRequestForm = Depends(),
):
"""Local email/password login."""
client_ip = _get_client_ip(request)
_check_rate_limit(client_ip)
user = await get_local_provider().authenticate({"email": form_data.username, "password": form_data.password})
if user is None:
_record_login_failure(client_ip)
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail=AuthErrorResponse(code=AuthErrorCode.INVALID_CREDENTIALS, message="Incorrect email or password").model_dump(),
)
_record_login_success(client_ip)
token = create_access_token(str(user.id), token_version=user.token_version)
_set_session_cookie(response, token, request)
return LoginResponse(
expires_in=get_auth_config().token_expiry_days * 24 * 3600,
needs_setup=user.needs_setup,
)
@router.post("/register", response_model=UserResponse, status_code=status.HTTP_201_CREATED)
async def register(request: Request, response: Response, body: RegisterRequest):
"""Register a new user account (always 'user' role).
Admin is auto-created on first boot. This endpoint creates regular users.
The new user is automatically logged in by setting the session cookie.
"""
try:
user = await get_local_provider().create_user(email=body.email, password=body.password, system_role="user")
except ValueError:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=AuthErrorResponse(code=AuthErrorCode.EMAIL_ALREADY_EXISTS, message="Email already registered").model_dump(),
)
token = create_access_token(str(user.id), token_version=user.token_version)
_set_session_cookie(response, token, request)
return UserResponse(id=str(user.id), email=user.email, system_role=user.system_role)
@router.post("/logout", response_model=MessageResponse)
async def logout(request: Request, response: Response):
"""Logout current user by clearing the cookie."""
response.delete_cookie(key="access_token", secure=is_secure_request(request), samesite="lax")
return MessageResponse(message="Successfully logged out")
@router.post("/change-password", response_model=MessageResponse)
async def change_password(request: Request, response: Response, body: ChangePasswordRequest):
"""Change password for the currently authenticated user.
Also handles the first-boot setup flow:
- If new_email is provided, updates email (checks uniqueness)
- If user.needs_setup is True and new_email is given, clears needs_setup
- Always increments token_version to invalidate old sessions
- Re-issues session cookie with new token_version
"""
from app.gateway.auth.password import hash_password_async, verify_password_async
user = await get_current_user_from_request(request)
if user.password_hash is None:
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=AuthErrorResponse(code=AuthErrorCode.INVALID_CREDENTIALS, message="OAuth users cannot change password").model_dump())
if not await verify_password_async(body.current_password, user.password_hash):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=AuthErrorResponse(code=AuthErrorCode.INVALID_CREDENTIALS, message="Current password is incorrect").model_dump())
provider = get_local_provider()
# Update email if provided
if body.new_email is not None:
existing = await provider.get_user_by_email(body.new_email)
if existing and str(existing.id) != str(user.id):
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=AuthErrorResponse(code=AuthErrorCode.EMAIL_ALREADY_EXISTS, message="Email already in use").model_dump())
user.email = body.new_email
# Update password + bump version
user.password_hash = await hash_password_async(body.new_password)
user.token_version += 1
# Clear setup flag if this is the setup flow
if user.needs_setup and body.new_email is not None:
user.needs_setup = False
await provider.update_user(user)
# Re-issue cookie with new token_version
token = create_access_token(str(user.id), token_version=user.token_version)
_set_session_cookie(response, token, request)
return MessageResponse(message="Password changed successfully")
@router.get("/me", response_model=UserResponse)
async def get_me(request: Request):
"""Get current authenticated user info."""
user = await get_current_user_from_request(request)
return UserResponse(id=str(user.id), email=user.email, system_role=user.system_role, needs_setup=user.needs_setup)
@router.get("/setup-status")
async def setup_status():
"""Check if admin account exists. Always False after first boot."""
user_count = await get_local_provider().count_users()
return {"needs_setup": user_count == 0}
# ── OAuth Endpoints (Future/Placeholder) ─────────────────────────────────
@router.get("/oauth/{provider}")
async def oauth_login(provider: str):
"""Initiate OAuth login flow.
Redirects to the OAuth provider's authorization URL.
Currently a placeholder - requires OAuth provider implementation.
"""
if provider not in ["github", "google"]:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Unsupported OAuth provider: {provider}",
)
raise HTTPException(
status_code=status.HTTP_501_NOT_IMPLEMENTED,
detail="OAuth login not yet implemented",
)
@router.get("/callback/{provider}")
async def oauth_callback(provider: str, code: str, state: str):
"""OAuth callback endpoint.
Handles the OAuth provider's callback after user authorization.
Currently a placeholder.
"""
raise HTTPException(
status_code=status.HTTP_501_NOT_IMPLEMENTED,
detail="OAuth callback not yet implemented",
)
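Illustrative aside: the lockout behaviour encoded by _check_rate_limit and _record_login_failure, exercised directly; five failures lock an IP for five minutes and a successful login clears the counter. The IP is a documentation-range placeholder.
from fastapi import HTTPException
ip = "203.0.113.7"
for _ in range(_MAX_LOGIN_ATTEMPTS):  # five consecutive failures
    _record_login_failure(ip)
try:
    _check_rate_limit(ip)  # locked out -> raises 429
except HTTPException as exc:
    assert exc.status_code == 429
_record_login_success(ip)  # a successful login clears the counter
_check_rate_limit(ip)      # no exception once the record is gone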
+132
@@ -0,0 +1,132 @@
"""Feedback endpoints — create, list, stats, delete.
Allows users to submit thumbs-up/down feedback on runs,
optionally scoped to a specific message.
"""
from __future__ import annotations
import logging
from typing import Any
from fastapi import APIRouter, HTTPException, Request
from pydantic import BaseModel, Field
from app.gateway.authz import require_permission
from app.gateway.deps import get_current_user, get_feedback_repo, get_run_store
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/threads", tags=["feedback"])
# ---------------------------------------------------------------------------
# Request / response models
# ---------------------------------------------------------------------------
class FeedbackCreateRequest(BaseModel):
rating: int = Field(..., description="Feedback rating: +1 (positive) or -1 (negative)")
comment: str | None = Field(default=None, description="Optional text feedback")
message_id: str | None = Field(default=None, description="Optional: scope feedback to a specific message")
class FeedbackResponse(BaseModel):
feedback_id: str
run_id: str
thread_id: str
owner_id: str | None = None
message_id: str | None = None
rating: int
comment: str | None = None
created_at: str = ""
class FeedbackStatsResponse(BaseModel):
run_id: str
total: int = 0
positive: int = 0
negative: int = 0
# ---------------------------------------------------------------------------
# Endpoints
# ---------------------------------------------------------------------------
@router.post("/{thread_id}/runs/{run_id}/feedback", response_model=FeedbackResponse)
@require_permission("threads", "write", owner_check=True, require_existing=True)
async def create_feedback(
thread_id: str,
run_id: str,
body: FeedbackCreateRequest,
request: Request,
) -> dict[str, Any]:
"""Submit feedback (thumbs-up/down) for a run."""
if body.rating not in (1, -1):
raise HTTPException(status_code=400, detail="rating must be +1 or -1")
user_id = await get_current_user(request)
# Validate run exists and belongs to thread
run_store = get_run_store(request)
run = await run_store.get(run_id)
if run is None:
raise HTTPException(status_code=404, detail=f"Run {run_id} not found")
if run.get("thread_id") != thread_id:
raise HTTPException(status_code=404, detail=f"Run {run_id} not found in thread {thread_id}")
feedback_repo = get_feedback_repo(request)
return await feedback_repo.create(
run_id=run_id,
thread_id=thread_id,
rating=body.rating,
owner_id=user_id,
message_id=body.message_id,
comment=body.comment,
)
@router.get("/{thread_id}/runs/{run_id}/feedback", response_model=list[FeedbackResponse])
@require_permission("threads", "read", owner_check=True)
async def list_feedback(
thread_id: str,
run_id: str,
request: Request,
) -> list[dict[str, Any]]:
"""List all feedback for a run."""
feedback_repo = get_feedback_repo(request)
return await feedback_repo.list_by_run(thread_id, run_id)
@router.get("/{thread_id}/runs/{run_id}/feedback/stats", response_model=FeedbackStatsResponse)
@require_permission("threads", "read", owner_check=True)
async def feedback_stats(
thread_id: str,
run_id: str,
request: Request,
) -> dict[str, Any]:
"""Get aggregated feedback stats (positive/negative counts) for a run."""
feedback_repo = get_feedback_repo(request)
return await feedback_repo.aggregate_by_run(thread_id, run_id)
@router.delete("/{thread_id}/runs/{run_id}/feedback/{feedback_id}")
@require_permission("threads", "delete", owner_check=True, require_existing=True)
async def delete_feedback(
thread_id: str,
run_id: str,
feedback_id: str,
request: Request,
) -> dict[str, bool]:
"""Delete a feedback record."""
feedback_repo = get_feedback_repo(request)
# Verify feedback belongs to the specified thread/run before deleting
existing = await feedback_repo.get(feedback_id)
if existing is None:
raise HTTPException(status_code=404, detail=f"Feedback {feedback_id} not found")
if existing.get("thread_id") != thread_id or existing.get("run_id") != run_id:
raise HTTPException(status_code=404, detail=f"Feedback {feedback_id} not found in run {run_id}")
deleted = await feedback_repo.delete(feedback_id)
if not deleted:
raise HTTPException(status_code=404, detail=f"Feedback {feedback_id} not found")
return {"success": True}
+6 -4
@@ -7,7 +7,7 @@ from fastapi import APIRouter, HTTPException
from pydantic import BaseModel, Field
from app.gateway.path_utils import resolve_thread_virtual_path
from deerflow.agents.lead_agent.prompt import clear_skills_system_prompt_cache
from deerflow.agents.lead_agent.prompt import refresh_skills_system_prompt_cache_async
from deerflow.config.extensions_config import ExtensionsConfig, SkillStateConfig, get_extensions_config, reload_extensions_config
from deerflow.skills import Skill, load_skills
from deerflow.skills.installer import SkillAlreadyExistsError, install_skill_from_archive
@@ -119,6 +119,7 @@ async def install_skill(request: SkillInstallRequest) -> SkillInstallResponse:
try:
skill_file_path = resolve_thread_virtual_path(request.thread_id, request.path)
result = install_skill_from_archive(skill_file_path)
await refresh_skills_system_prompt_cache_async()
return SkillInstallResponse(**result)
except FileNotFoundError as e:
raise HTTPException(status_code=404, detail=str(e))
@@ -181,7 +182,7 @@ async def update_custom_skill(skill_name: str, request: CustomSkillUpdateRequest
"scanner": {"decision": scan.decision, "reason": scan.reason},
},
)
clear_skills_system_prompt_cache()
await refresh_skills_system_prompt_cache_async()
return await get_custom_skill(skill_name)
except HTTPException:
raise
@@ -213,7 +214,7 @@ async def delete_custom_skill(skill_name: str) -> dict[str, bool]:
},
)
shutil.rmtree(skill_dir)
clear_skills_system_prompt_cache()
await refresh_skills_system_prompt_cache_async()
return {"success": True}
except FileNotFoundError as e:
raise HTTPException(status_code=404, detail=str(e))
@@ -268,7 +269,7 @@ async def rollback_custom_skill(skill_name: str, request: SkillRollbackRequest)
raise HTTPException(status_code=400, detail=f"Rollback blocked by security scanner: {scan.reason}")
atomic_write(skill_file, target_content)
append_history(skill_name, history_entry)
clear_skills_system_prompt_cache()
await refresh_skills_system_prompt_cache_async()
return await get_custom_skill(skill_name)
except HTTPException:
raise
@@ -337,6 +338,7 @@ async def update_skill(skill_name: str, request: SkillUpdateRequest) -> SkillRes
logger.info(f"Skills configuration updated and saved to: {config_path}")
reload_extensions_config()
await refresh_skills_system_prompt_cache_async()
skills = load_skills(enabled_only=False)
updated_skill = next((s for s in skills if s.name == skill_name), None)
+8 -6
@@ -1,10 +1,11 @@
import json
import logging
from fastapi import APIRouter
from fastapi import APIRouter, Request
from langchain_core.messages import HumanMessage, SystemMessage
from pydantic import BaseModel, Field
from app.gateway.authz import require_permission
from deerflow.models import create_chat_model
logger = logging.getLogger(__name__)
@@ -98,12 +99,13 @@ def _format_conversation(messages: list[SuggestionMessage]) -> str:
summary="Generate Follow-up Questions",
description="Generate short follow-up questions a user might ask next, based on recent conversation context.",
)
async def generate_suggestions(thread_id: str, request: SuggestionsRequest) -> SuggestionsResponse:
if not request.messages:
@require_permission("threads", "read", owner_check=True)
async def generate_suggestions(thread_id: str, body: SuggestionsRequest, request: Request) -> SuggestionsResponse:
if not body.messages:
return SuggestionsResponse(suggestions=[])
n = request.n
conversation = _format_conversation(request.messages)
n = body.n
conversation = _format_conversation(body.messages)
if not conversation:
return SuggestionsResponse(suggestions=[])
@@ -120,7 +122,7 @@ async def generate_suggestions(thread_id: str, request: SuggestionsRequest) -> S
user_content = f"Conversation Context:\n{conversation}\n\nGenerate {n} follow-up questions"
try:
model = create_chat_model(name=request.model_name, thinking_enabled=False)
model = create_chat_model(name=body.model_name, thinking_enabled=False)
response = await model.ainvoke([SystemMessage(content=system_instruction), HumanMessage(content=user_content)])
raw = _extract_response_text(response.content)
suggestions = _parse_json_string_list(raw) or []
+62 -1
@@ -19,7 +19,8 @@ from fastapi import APIRouter, HTTPException, Query, Request
from fastapi.responses import Response, StreamingResponse
from pydantic import BaseModel, Field
from app.gateway.deps import get_checkpointer, get_run_manager, get_stream_bridge
from app.gateway.authz import require_permission
from app.gateway.deps import get_checkpointer, get_run_event_store, get_run_manager, get_run_store, get_stream_bridge
from app.gateway.services import sse_consumer, start_run
from deerflow.runtime import RunRecord, serialize_channel_values
@@ -53,6 +54,7 @@ class RunCreateRequest(BaseModel):
after_seconds: float | None = Field(default=None, description="Delayed execution")
if_not_exists: Literal["reject", "create"] = Field(default="create", description="Thread creation policy")
feedback_keys: list[str] | None = Field(default=None, description="LangSmith feedback keys")
follow_up_to_run_id: str | None = Field(default=None, description="Run ID this message follows up on. Auto-detected from latest successful run if not provided.")
class RunResponse(BaseModel):
@@ -92,6 +94,7 @@ def _record_to_response(record: RunRecord) -> RunResponse:
@router.post("/{thread_id}/runs", response_model=RunResponse)
@require_permission("runs", "create", owner_check=True, require_existing=True)
async def create_run(thread_id: str, body: RunCreateRequest, request: Request) -> RunResponse:
"""Create a background run (returns immediately)."""
record = await start_run(body, thread_id, request)
@@ -99,6 +102,7 @@ async def create_run(thread_id: str, body: RunCreateRequest, request: Request) -
@router.post("/{thread_id}/runs/stream")
@require_permission("runs", "create", owner_check=True, require_existing=True)
async def stream_run(thread_id: str, body: RunCreateRequest, request: Request) -> StreamingResponse:
"""Create a run and stream events via SSE.
@@ -126,6 +130,7 @@ async def stream_run(thread_id: str, body: RunCreateRequest, request: Request) -
@router.post("/{thread_id}/runs/wait", response_model=dict)
@require_permission("runs", "create", owner_check=True, require_existing=True)
async def wait_run(thread_id: str, body: RunCreateRequest, request: Request) -> dict:
"""Create a run and block until it completes, returning the final state."""
record = await start_run(body, thread_id, request)
@@ -151,6 +156,7 @@ async def wait_run(thread_id: str, body: RunCreateRequest, request: Request) ->
@router.get("/{thread_id}/runs", response_model=list[RunResponse])
@require_permission("runs", "read", owner_check=True)
async def list_runs(thread_id: str, request: Request) -> list[RunResponse]:
"""List all runs for a thread."""
run_mgr = get_run_manager(request)
@@ -159,6 +165,7 @@ async def list_runs(thread_id: str, request: Request) -> list[RunResponse]:
@router.get("/{thread_id}/runs/{run_id}", response_model=RunResponse)
@require_permission("runs", "read", owner_check=True)
async def get_run(thread_id: str, run_id: str, request: Request) -> RunResponse:
"""Get details of a specific run."""
run_mgr = get_run_manager(request)
@@ -169,6 +176,7 @@ async def get_run(thread_id: str, run_id: str, request: Request) -> RunResponse:
@router.post("/{thread_id}/runs/{run_id}/cancel")
@require_permission("runs", "cancel", owner_check=True, require_existing=True)
async def cancel_run(
thread_id: str,
run_id: str,
@@ -206,6 +214,7 @@ async def cancel_run(
@router.get("/{thread_id}/runs/{run_id}/join")
@require_permission("runs", "read", owner_check=True)
async def join_run(thread_id: str, run_id: str, request: Request) -> StreamingResponse:
"""Join an existing run's SSE stream."""
bridge = get_stream_bridge(request)
@@ -226,6 +235,7 @@ async def join_run(thread_id: str, run_id: str, request: Request) -> StreamingRe
@router.api_route("/{thread_id}/runs/{run_id}/stream", methods=["GET", "POST"], response_model=None)
@require_permission("runs", "read", owner_check=True)
async def stream_existing_run(
thread_id: str,
run_id: str,
@@ -265,3 +275,54 @@ async def stream_existing_run(
"X-Accel-Buffering": "no",
},
)
# ---------------------------------------------------------------------------
# Messages / Events / Token usage endpoints
# ---------------------------------------------------------------------------
@router.get("/{thread_id}/messages")
@require_permission("runs", "read", owner_check=True)
async def list_thread_messages(
thread_id: str,
request: Request,
limit: int = Query(default=50, le=200),
before_seq: int | None = Query(default=None),
after_seq: int | None = Query(default=None),
) -> list[dict]:
"""Return displayable messages for a thread (across all runs)."""
event_store = get_run_event_store(request)
return await event_store.list_messages(thread_id, limit=limit, before_seq=before_seq, after_seq=after_seq)
@router.get("/{thread_id}/runs/{run_id}/messages")
@require_permission("runs", "read", owner_check=True)
async def list_run_messages(thread_id: str, run_id: str, request: Request) -> list[dict]:
"""Return displayable messages for a specific run."""
event_store = get_run_event_store(request)
return await event_store.list_messages_by_run(thread_id, run_id)
@router.get("/{thread_id}/runs/{run_id}/events")
@require_permission("runs", "read", owner_check=True)
async def list_run_events(
thread_id: str,
run_id: str,
request: Request,
event_types: str | None = Query(default=None),
limit: int = Query(default=500, le=2000),
) -> list[dict]:
"""Return the full event stream for a run (debug/audit)."""
event_store = get_run_event_store(request)
types = event_types.split(",") if event_types else None
return await event_store.list_events(thread_id, run_id, event_types=types, limit=limit)
@router.get("/{thread_id}/token-usage")
@require_permission("threads", "read", owner_check=True)
async def thread_token_usage(thread_id: str, request: Request) -> dict:
"""Thread-level token usage aggregation."""
run_store = get_run_store(request)
agg = await run_store.aggregate_tokens_by_thread(thread_id)
return {"thread_id": thread_id, **agg}
+171 -237
@@ -18,23 +18,34 @@ import uuid
from typing import Any
from fastapi import APIRouter, HTTPException, Request
from pydantic import BaseModel, Field
from pydantic import BaseModel, Field, field_validator
from app.gateway.deps import get_checkpointer, get_store
from app.gateway.authz import require_permission
from app.gateway.deps import get_checkpointer
from app.gateway.utils import sanitize_log_param
from deerflow.config.paths import Paths, get_paths
from deerflow.runtime import serialize_channel_values
# ---------------------------------------------------------------------------
# Store namespace
# ---------------------------------------------------------------------------
THREADS_NS: tuple[str, ...] = ("threads",)
"""Namespace used by the Store for thread metadata records."""
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/threads", tags=["threads"])
# Metadata keys that the server controls; clients are not allowed to set
# them. Pydantic ``@field_validator("metadata")`` strips them on every
# inbound model below so a malicious client cannot reflect a forged
# owner identity through the API surface. Defense-in-depth — the
# row-level invariant is still ``threads_meta.owner_id`` populated from
# the auth contextvar; this list closes the metadata-blob echo gap.
_SERVER_RESERVED_METADATA_KEYS: frozenset[str] = frozenset({"owner_id", "user_id"})
def _strip_reserved_metadata(metadata: dict[str, Any] | None) -> dict[str, Any]:
"""Return ``metadata`` with server-controlled keys removed."""
if not metadata:
return metadata or {}
return {k: v for k, v in metadata.items() if k not in _SERVER_RESERVED_METADATA_KEYS}
# ---------------------------------------------------------------------------
# Response / request models
# ---------------------------------------------------------------------------
@@ -63,8 +74,11 @@ class ThreadCreateRequest(BaseModel):
"""Request body for creating a thread."""
thread_id: str | None = Field(default=None, description="Optional thread ID (auto-generated if omitted)")
assistant_id: str | None = Field(default=None, description="Associate thread with an assistant")
metadata: dict[str, Any] = Field(default_factory=dict, description="Initial metadata")
_strip_reserved = field_validator("metadata")(classmethod(lambda cls, v: _strip_reserved_metadata(v)))
class ThreadSearchRequest(BaseModel):
"""Request body for searching threads."""
@@ -93,6 +107,8 @@ class ThreadPatchRequest(BaseModel):
metadata: dict[str, Any] = Field(default_factory=dict, description="Metadata to merge")
_strip_reserved = field_validator("metadata")(classmethod(lambda cls, v: _strip_reserved_metadata(v)))
class ThreadStateUpdateRequest(BaseModel):
"""Request body for updating thread state (human-in-the-loop resume)."""
@@ -135,61 +151,16 @@ def _delete_thread_data(thread_id: str, paths: Paths | None = None) -> ThreadDel
raise HTTPException(status_code=422, detail=str(exc)) from exc
except FileNotFoundError:
# Not critical — thread data may not exist on disk
logger.debug("No local thread data to delete for %s", thread_id)
logger.debug("No local thread data to delete for %s", sanitize_log_param(thread_id))
return ThreadDeleteResponse(success=True, message=f"No local data for {thread_id}")
except Exception as exc:
logger.exception("Failed to delete thread data for %s", thread_id)
logger.exception("Failed to delete thread data for %s", sanitize_log_param(thread_id))
raise HTTPException(status_code=500, detail="Failed to delete local thread data.") from exc
logger.info("Deleted local thread data for %s", thread_id)
logger.info("Deleted local thread data for %s", sanitize_log_param(thread_id))
return ThreadDeleteResponse(success=True, message=f"Deleted local thread data for {thread_id}")
async def _store_get(store, thread_id: str) -> dict | None:
"""Fetch a thread record from the Store; returns ``None`` if absent."""
item = await store.aget(THREADS_NS, thread_id)
return item.value if item is not None else None
async def _store_put(store, record: dict) -> None:
"""Write a thread record to the Store."""
await store.aput(THREADS_NS, record["thread_id"], record)
async def _store_upsert(store, thread_id: str, *, metadata: dict | None = None, values: dict | None = None) -> None:
"""Create or refresh a thread record in the Store.
On creation the record is written with ``status="idle"``. On update only
``updated_at`` (and optionally ``metadata`` / ``values``) are changed so
that existing fields are preserved.
``values`` carries the agent-state snapshot exposed to the frontend
(currently just ``{"title": "..."}``).
"""
now = time.time()
existing = await _store_get(store, thread_id)
if existing is None:
await _store_put(
store,
{
"thread_id": thread_id,
"status": "idle",
"created_at": now,
"updated_at": now,
"metadata": metadata or {},
"values": values or {},
},
)
else:
val = dict(existing)
val["updated_at"] = now
if metadata:
val.setdefault("metadata", {}).update(metadata)
if values:
val.setdefault("values", {}).update(values)
await _store_put(store, val)
def _derive_thread_status(checkpoint_tuple) -> str:
"""Derive thread status from checkpoint metadata."""
if checkpoint_tuple is None:
@@ -215,23 +186,19 @@ def _derive_thread_status(checkpoint_tuple) -> str:
@router.delete("/{thread_id}", response_model=ThreadDeleteResponse)
@require_permission("threads", "delete", owner_check=True, require_existing=True)
async def delete_thread_data(thread_id: str, request: Request) -> ThreadDeleteResponse:
"""Delete local persisted filesystem data for a thread.
Cleans DeerFlow-managed thread directories, removes checkpoint data,
and removes the thread record from the Store.
and removes the thread_meta row from the configured ThreadMetaStore
(sqlite or memory).
"""
from app.gateway.deps import get_thread_meta_repo
# Clean local filesystem
response = _delete_thread_data(thread_id)
# Remove from Store (best-effort)
store = get_store(request)
if store is not None:
try:
await store.adelete(THREADS_NS, thread_id)
except Exception:
logger.debug("Could not delete store record for thread %s (not critical)", thread_id)
# Remove checkpoints (best-effort)
checkpointer = getattr(request.app.state, "checkpointer", None)
if checkpointer is not None:
@@ -239,7 +206,15 @@ async def delete_thread_data(thread_id: str, request: Request) -> ThreadDeleteRe
if hasattr(checkpointer, "adelete_thread"):
await checkpointer.adelete_thread(thread_id)
except Exception:
logger.debug("Could not delete checkpoints for thread %s (not critical)", thread_id)
logger.debug("Could not delete checkpoints for thread %s (not critical)", sanitize_log_param(thread_id))
# Remove thread_meta row (best-effort) — required for sqlite backend
# so the deleted thread no longer appears in /threads/search.
try:
thread_meta_repo = get_thread_meta_repo(request)
await thread_meta_repo.delete(thread_id)
except Exception:
logger.debug("Could not delete thread_meta for %s (not critical)", sanitize_log_param(thread_id))
return response
@@ -248,43 +223,40 @@ async def delete_thread_data(thread_id: str, request: Request) -> ThreadDeleteRe
async def create_thread(body: ThreadCreateRequest, request: Request) -> ThreadResponse:
"""Create a new thread.
The thread record is written to the Store (for fast listing) and an
empty checkpoint is written to the checkpointer (for state reads).
Writes a thread_meta record (so the thread appears in /threads/search)
and an empty checkpoint (so state endpoints work immediately).
Idempotent: returns the existing record when ``thread_id`` already exists.
"""
store = get_store(request)
from app.gateway.deps import get_thread_meta_repo
checkpointer = get_checkpointer(request)
thread_meta_repo = get_thread_meta_repo(request)
thread_id = body.thread_id or str(uuid.uuid4())
now = time.time()
# ``body.metadata`` is already stripped of server-reserved keys by
# ``ThreadCreateRequest._strip_reserved`` — see the model definition.
# Idempotency: return existing record from Store when already present
if store is not None:
existing_record = await _store_get(store, thread_id)
if existing_record is not None:
return ThreadResponse(
thread_id=thread_id,
status=existing_record.get("status", "idle"),
created_at=str(existing_record.get("created_at", "")),
updated_at=str(existing_record.get("updated_at", "")),
metadata=existing_record.get("metadata", {}),
)
# Idempotency: return existing record when already present
existing_record = await thread_meta_repo.get(thread_id)
if existing_record is not None:
return ThreadResponse(
thread_id=thread_id,
status=existing_record.get("status", "idle"),
created_at=str(existing_record.get("created_at", "")),
updated_at=str(existing_record.get("updated_at", "")),
metadata=existing_record.get("metadata", {}),
)
# Write thread record to Store
if store is not None:
try:
await _store_put(
store,
{
"thread_id": thread_id,
"status": "idle",
"created_at": now,
"updated_at": now,
"metadata": body.metadata,
},
)
except Exception:
logger.exception("Failed to write thread %s to store", thread_id)
raise HTTPException(status_code=500, detail="Failed to create thread")
# Write thread_meta so the thread appears in /threads/search immediately
try:
await thread_meta_repo.create(
thread_id,
assistant_id=getattr(body, "assistant_id", None),
metadata=body.metadata,
)
except Exception:
logger.exception("Failed to write thread_meta for %s", sanitize_log_param(thread_id))
raise HTTPException(status_code=500, detail="Failed to create thread")
# Write an empty checkpoint so state endpoints work immediately
config = {"configurable": {"thread_id": thread_id, "checkpoint_ns": ""}}
@@ -301,10 +273,10 @@ async def create_thread(body: ThreadCreateRequest, request: Request) -> ThreadRe
}
await checkpointer.aput(config, empty_checkpoint(), ckpt_metadata, {})
except Exception:
logger.exception("Failed to create checkpoint for thread %s", thread_id)
logger.exception("Failed to create checkpoint for thread %s", sanitize_log_param(thread_id))
raise HTTPException(status_code=500, detail="Failed to create thread")
logger.info("Thread created: %s", thread_id)
logger.info("Thread created: %s", sanitize_log_param(thread_id))
return ThreadResponse(
thread_id=thread_id,
status="idle",
@@ -318,166 +290,91 @@ async def create_thread(body: ThreadCreateRequest, request: Request) -> ThreadRe
async def search_threads(body: ThreadSearchRequest, request: Request) -> list[ThreadResponse]:
"""Search and list threads.
Two-phase approach:
**Phase 1 — Store (fast path, O(threads))**: returns threads that were
created or run through this Gateway. Store records are tiny metadata
dicts so fetching all of them at once is cheap.
**Phase 2 — Checkpointer supplement (lazy migration)**: threads that
were created directly by LangGraph Server (and therefore absent from the
Store) are discovered here by iterating the shared checkpointer. Any
newly found thread is immediately written to the Store so that the next
search skips Phase 2 for that thread — the Store converges to a full
index over time without a one-shot migration job.
Delegates to the configured ThreadMetaStore implementation
(SQL-backed for sqlite/postgres, Store-backed for memory mode).
"""
store = get_store(request)
checkpointer = get_checkpointer(request)
from app.gateway.deps import get_thread_meta_repo
# -----------------------------------------------------------------------
# Phase 1: Store
# -----------------------------------------------------------------------
merged: dict[str, ThreadResponse] = {}
if store is not None:
try:
items = await store.asearch(THREADS_NS, limit=10_000)
except Exception:
logger.warning("Store search failed — falling back to checkpointer only", exc_info=True)
items = []
for item in items:
val = item.value
merged[val["thread_id"]] = ThreadResponse(
thread_id=val["thread_id"],
status=val.get("status", "idle"),
created_at=str(val.get("created_at", "")),
updated_at=str(val.get("updated_at", "")),
metadata=val.get("metadata", {}),
values=val.get("values", {}),
)
# -----------------------------------------------------------------------
# Phase 2: Checkpointer supplement
# Discovers threads not yet in the Store (e.g. created by LangGraph
# Server) and lazily migrates them so future searches skip this phase.
# -----------------------------------------------------------------------
try:
async for checkpoint_tuple in checkpointer.alist(None):
cfg = getattr(checkpoint_tuple, "config", {})
thread_id = cfg.get("configurable", {}).get("thread_id")
if not thread_id or thread_id in merged:
continue
# Skip sub-graph checkpoints (checkpoint_ns is non-empty for those)
if cfg.get("configurable", {}).get("checkpoint_ns", ""):
continue
ckpt_meta = getattr(checkpoint_tuple, "metadata", {}) or {}
# Strip LangGraph internal keys from the user-visible metadata dict
user_meta = {k: v for k, v in ckpt_meta.items() if k not in ("created_at", "updated_at", "step", "source", "writes", "parents")}
# Extract state values (title) from the checkpoint's channel_values
checkpoint_data = getattr(checkpoint_tuple, "checkpoint", {}) or {}
channel_values = checkpoint_data.get("channel_values", {})
ckpt_values = {}
if title := channel_values.get("title"):
ckpt_values["title"] = title
thread_resp = ThreadResponse(
thread_id=thread_id,
status=_derive_thread_status(checkpoint_tuple),
created_at=str(ckpt_meta.get("created_at", "")),
updated_at=str(ckpt_meta.get("updated_at", ckpt_meta.get("created_at", ""))),
metadata=user_meta,
values=ckpt_values,
)
merged[thread_id] = thread_resp
# Lazy migration — write to Store so the next search finds it there
if store is not None:
try:
await _store_upsert(store, thread_id, metadata=user_meta, values=ckpt_values or None)
except Exception:
logger.debug("Failed to migrate thread %s to store (non-fatal)", thread_id)
except Exception:
logger.exception("Checkpointer scan failed during thread search")
# Don't raise — return whatever was collected from Store + partial scan
# -----------------------------------------------------------------------
# Phase 3: Filter → sort → paginate
# -----------------------------------------------------------------------
results = list(merged.values())
if body.metadata:
results = [r for r in results if all(r.metadata.get(k) == v for k, v in body.metadata.items())]
if body.status:
results = [r for r in results if r.status == body.status]
results.sort(key=lambda r: r.updated_at, reverse=True)
return results[body.offset : body.offset + body.limit]
repo = get_thread_meta_repo(request)
rows = await repo.search(
metadata=body.metadata or None,
status=body.status,
limit=body.limit,
offset=body.offset,
)
return [
ThreadResponse(
thread_id=r["thread_id"],
status=r.get("status", "idle"),
created_at=r.get("created_at", ""),
updated_at=r.get("updated_at", ""),
metadata=r.get("metadata", {}),
values={"title": r["display_name"]} if r.get("display_name") else {},
interrupts={},
)
for r in rows
]
@router.patch("/{thread_id}", response_model=ThreadResponse)
@require_permission("threads", "write", owner_check=True, require_existing=True)
async def patch_thread(thread_id: str, body: ThreadPatchRequest, request: Request) -> ThreadResponse:
"""Merge metadata into a thread record."""
store = get_store(request)
if store is None:
raise HTTPException(status_code=503, detail="Store not available")
from app.gateway.deps import get_thread_meta_repo
record = await _store_get(store, thread_id)
thread_meta_repo = get_thread_meta_repo(request)
record = await thread_meta_repo.get(thread_id)
if record is None:
raise HTTPException(status_code=404, detail=f"Thread {thread_id} not found")
now = time.time()
updated = dict(record)
updated.setdefault("metadata", {}).update(body.metadata)
updated["updated_at"] = now
# ``body.metadata`` already stripped by ``ThreadPatchRequest._strip_reserved``.
try:
await _store_put(store, updated)
await thread_meta_repo.update_metadata(thread_id, body.metadata)
except Exception:
logger.exception("Failed to patch thread %s", thread_id)
logger.exception("Failed to patch thread %s", sanitize_log_param(thread_id))
raise HTTPException(status_code=500, detail="Failed to update thread")
# Re-read to get the merged metadata + refreshed updated_at
record = await thread_meta_repo.get(thread_id) or record
return ThreadResponse(
thread_id=thread_id,
status=updated.get("status", "idle"),
created_at=str(updated.get("created_at", "")),
updated_at=str(now),
metadata=updated.get("metadata", {}),
status=record.get("status", "idle"),
created_at=str(record.get("created_at", "")),
updated_at=str(record.get("updated_at", "")),
metadata=record.get("metadata", {}),
)
@router.get("/{thread_id}", response_model=ThreadResponse)
@require_permission("threads", "read", owner_check=True)
async def get_thread(thread_id: str, request: Request) -> ThreadResponse:
"""Get thread info.
Reads metadata from the Store and derives the accurate execution
status from the checkpointer. Falls back to the checkpointer alone
for threads that pre-date Store adoption (backward compat).
Reads metadata from the ThreadMetaStore and derives the accurate
execution status from the checkpointer. Falls back to the checkpointer
alone for threads that pre-date ThreadMetaStore adoption (backward compat).
"""
store = get_store(request)
from app.gateway.deps import get_thread_meta_repo
thread_meta_repo = get_thread_meta_repo(request)
checkpointer = get_checkpointer(request)
record: dict | None = None
if store is not None:
record = await _store_get(store, thread_id)
record: dict | None = await thread_meta_repo.get(thread_id)
# Derive accurate status from the checkpointer
config = {"configurable": {"thread_id": thread_id, "checkpoint_ns": ""}}
try:
checkpoint_tuple = await checkpointer.aget_tuple(config)
except Exception:
logger.exception("Failed to get checkpoint for thread %s", thread_id)
logger.exception("Failed to get checkpoint for thread %s", sanitize_log_param(thread_id))
raise HTTPException(status_code=500, detail="Failed to get thread")
if record is None and checkpoint_tuple is None:
raise HTTPException(status_code=404, detail=f"Thread {thread_id} not found")
# If the thread exists in the checkpointer but not the store (e.g. legacy
# data), synthesize a minimal store record from the checkpoint metadata.
# If the thread exists in the checkpointer but not in thread_meta (e.g.
# legacy data created before thread_meta adoption), synthesize a minimal
# record from the checkpoint metadata.
if record is None and checkpoint_tuple is not None:
ckpt_meta = getattr(checkpoint_tuple, "metadata", {}) or {}
record = {
@@ -506,6 +403,7 @@ async def get_thread(thread_id: str, request: Request) -> ThreadResponse:
@router.get("/{thread_id}/state", response_model=ThreadStateResponse)
@require_permission("threads", "read", owner_check=True)
async def get_thread_state(thread_id: str, request: Request) -> ThreadStateResponse:
"""Get the latest state snapshot for a thread.
@@ -518,7 +416,7 @@ async def get_thread_state(thread_id: str, request: Request) -> ThreadStateRespo
try:
checkpoint_tuple = await checkpointer.aget_tuple(config)
except Exception:
logger.exception("Failed to get state for thread %s", thread_id)
logger.exception("Failed to get state for thread %s", sanitize_log_param(thread_id))
raise HTTPException(status_code=500, detail="Failed to get thread state")
if checkpoint_tuple is None:
@@ -555,15 +453,19 @@ async def get_thread_state(thread_id: str, request: Request) -> ThreadStateRespo
@router.post("/{thread_id}/state", response_model=ThreadStateResponse)
@require_permission("threads", "write", owner_check=True, require_existing=True)
async def update_thread_state(thread_id: str, body: ThreadStateUpdateRequest, request: Request) -> ThreadStateResponse:
"""Update thread state (e.g. for human-in-the-loop resume or title rename).
Writes a new checkpoint that merges *body.values* into the latest
channel values, then syncs any updated ``title`` field back to the Store
so that ``/threads/search`` reflects the change immediately.
channel values, then syncs any updated ``title`` field through the
ThreadMetaStore abstraction so that ``/threads/search`` reflects the
change immediately in both sqlite and memory backends.
"""
from app.gateway.deps import get_thread_meta_repo
checkpointer = get_checkpointer(request)
store = get_store(request)
thread_meta_repo = get_thread_meta_repo(request)
# checkpoint_ns must be present in the config for aput — default to ""
# (the root graph namespace). checkpoint_id is optional; omitting it
@@ -580,7 +482,7 @@ async def update_thread_state(thread_id: str, body: ThreadStateUpdateRequest, re
try:
checkpoint_tuple = await checkpointer.aget_tuple(read_config)
except Exception:
logger.exception("Failed to get state for thread %s", thread_id)
logger.exception("Failed to get state for thread %s", sanitize_log_param(thread_id))
raise HTTPException(status_code=500, detail="Failed to get thread state")
if checkpoint_tuple is None:
@@ -614,19 +516,22 @@ async def update_thread_state(thread_id: str, body: ThreadStateUpdateRequest, re
try:
new_config = await checkpointer.aput(write_config, checkpoint, metadata, {})
except Exception:
logger.exception("Failed to update state for thread %s", thread_id)
logger.exception("Failed to update state for thread %s", sanitize_log_param(thread_id))
raise HTTPException(status_code=500, detail="Failed to update thread state")
new_checkpoint_id: str | None = None
if isinstance(new_config, dict):
new_checkpoint_id = new_config.get("configurable", {}).get("checkpoint_id")
# Sync title changes to the Store so /threads/search reflects them immediately.
if store is not None and body.values and "title" in body.values:
try:
await _store_upsert(store, thread_id, values={"title": body.values["title"]})
except Exception:
logger.debug("Failed to sync title to store for thread %s (non-fatal)", thread_id)
# Sync title changes through the ThreadMetaStore abstraction so /threads/search
# reflects them immediately in both sqlite and memory backends.
if body.values and "title" in body.values:
new_title = body.values["title"]
if new_title: # Skip empty strings and None
try:
await thread_meta_repo.update_display_name(thread_id, new_title)
except Exception:
logger.debug("Failed to sync title to thread_meta for %s (non-fatal)", sanitize_log_param(thread_id))
return ThreadStateResponse(
values=serialize_channel_values(channel_values),
@@ -638,8 +543,16 @@ async def update_thread_state(thread_id: str, body: ThreadStateUpdateRequest, re
@router.post("/{thread_id}/history", response_model=list[HistoryEntry])
@require_permission("threads", "read", owner_check=True)
async def get_thread_history(thread_id: str, body: ThreadHistoryRequest, request: Request) -> list[HistoryEntry]:
"""Get checkpoint history for a thread."""
"""Get checkpoint history for a thread.
Messages are read from the checkpointer's channel values (the
authoritative source) and serialized via
:func:`~deerflow.runtime.serialization.serialize_channel_values`.
Only the latest (first) checkpoint carries the ``messages`` key to
avoid duplicating them across every entry.
"""
checkpointer = get_checkpointer(request)
config: dict[str, Any] = {"configurable": {"thread_id": thread_id}}
@@ -647,6 +560,7 @@ async def get_thread_history(thread_id: str, body: ThreadHistoryRequest, request
config["configurable"]["checkpoint_id"] = body.before
entries: list[HistoryEntry] = []
is_latest_checkpoint = True
try:
async for checkpoint_tuple in checkpointer.alist(config, limit=body.limit):
ckpt_config = getattr(checkpoint_tuple, "config", {})
@@ -661,22 +575,42 @@ async def get_thread_history(thread_id: str, body: ThreadHistoryRequest, request
channel_values = checkpoint.get("channel_values", {})
# Build values from checkpoint channel_values
values: dict[str, Any] = {}
if title := channel_values.get("title"):
values["title"] = title
if thread_data := channel_values.get("thread_data"):
values["thread_data"] = thread_data
# Attach messages from checkpointer only for the latest checkpoint
if is_latest_checkpoint:
messages = channel_values.get("messages")
if messages:
values["messages"] = serialize_channel_values({"messages": messages}).get("messages", [])
is_latest_checkpoint = False
# Derive next tasks
tasks_raw = getattr(checkpoint_tuple, "tasks", []) or []
next_tasks = [t.name for t in tasks_raw if hasattr(t, "name")]
# Strip LangGraph internal keys from metadata
user_meta = {k: v for k, v in metadata.items() if k not in ("created_at", "updated_at", "step", "source", "writes", "parents")}
# Keep step for ordering context
if "step" in metadata:
user_meta["step"] = metadata["step"]
entries.append(
HistoryEntry(
checkpoint_id=checkpoint_id,
parent_checkpoint_id=parent_id,
metadata=metadata,
values=serialize_channel_values(channel_values),
metadata=user_meta,
values=values,
created_at=str(metadata.get("created_at", "")),
next=next_tasks,
)
)
except Exception:
logger.exception("Failed to get history for thread %s", thread_id)
logger.exception("Failed to get history for thread %s", sanitize_log_param(thread_id))
raise HTTPException(status_code=500, detail="Failed to get thread history")
return entries
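For orientation, a minimal client-side sketch of consuming the history endpoint above (the base URL, router prefix, and auth handling are illustrative assumptions; only the `limit`/`before` body fields and the response shape come from the handler):
```python
# Hypothetical helper: fetch history and read messages from the latest entry.
# Only the first (latest) entry carries the serialized "messages" key.
import httpx

def latest_messages(base_url: str, thread_id: str, limit: int = 10) -> list[dict]:
    resp = httpx.post(f"{base_url}/threads/{thread_id}/history", json={"limit": limit})
    resp.raise_for_status()
    entries = resp.json()
    return entries[0].get("values", {}).get("messages", []) if entries else []
```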
+8 -3
View File
@@ -4,9 +4,10 @@ import logging
import os
import stat
from fastapi import APIRouter, File, HTTPException, UploadFile
from fastapi import APIRouter, File, HTTPException, Request, UploadFile
from pydantic import BaseModel
from app.gateway.authz import require_permission
from deerflow.config.paths import get_paths
from deerflow.sandbox.sandbox_provider import get_sandbox_provider
from deerflow.uploads.manager import (
@@ -54,8 +55,10 @@ def _make_file_sandbox_writable(file_path: os.PathLike[str] | str) -> None:
@router.post("", response_model=UploadResponse)
@require_permission("threads", "write", owner_check=True, require_existing=True)
async def upload_files(
thread_id: str,
request: Request,
files: list[UploadFile] = File(...),
) -> UploadResponse:
"""Upload multiple files to a thread's uploads directory."""
@@ -133,7 +136,8 @@ async def upload_files(
@router.get("/list", response_model=dict)
async def list_uploaded_files(thread_id: str) -> dict:
@require_permission("threads", "read", owner_check=True)
async def list_uploaded_files(thread_id: str, request: Request) -> dict:
"""List all files in a thread's uploads directory."""
try:
uploads_dir = get_uploads_dir(thread_id)
@@ -151,7 +155,8 @@ async def list_uploaded_files(thread_id: str) -> dict:
@router.delete("/{filename}")
async def delete_uploaded_file(thread_id: str, filename: str) -> dict:
@require_permission("threads", "delete", owner_check=True, require_existing=True)
async def delete_uploaded_file(thread_id: str, filename: str, request: Request) -> dict:
"""Delete a file from a thread's uploads directory."""
try:
uploads_dir = get_uploads_dir(thread_id)
+39 -81
View File
@@ -8,16 +8,17 @@ frames, and consuming stream bridge events. Router modules
from __future__ import annotations
import asyncio
import dataclasses
import json
import logging
import re
import time
from typing import Any
from fastapi import HTTPException, Request
from langchain_core.messages import HumanMessage
from app.gateway.deps import get_checkpointer, get_run_manager, get_store, get_stream_bridge
from app.gateway.deps import get_run_context, get_run_manager, get_run_store, get_stream_bridge
from app.gateway.utils import sanitize_log_param
from deerflow.runtime import (
END_SENTINEL,
HEARTBEAT_SENTINEL,
@@ -171,71 +172,6 @@ def build_run_config(
# ---------------------------------------------------------------------------
async def _upsert_thread_in_store(store, thread_id: str, metadata: dict | None) -> None:
"""Create or refresh the thread record in the Store.
Called from :func:`start_run` so that threads created via the stateless
``/runs/stream`` endpoint (which never calls ``POST /threads``) still
appear in ``/threads/search`` results.
"""
# Deferred import to avoid circular import with the threads router module.
from app.gateway.routers.threads import _store_upsert
try:
await _store_upsert(store, thread_id, metadata=metadata)
except Exception:
logger.warning("Failed to upsert thread %s in store (non-fatal)", thread_id)
async def _sync_thread_title_after_run(
run_task: asyncio.Task,
thread_id: str,
checkpointer: Any,
store: Any,
) -> None:
"""Wait for *run_task* to finish, then persist the generated title to the Store.
TitleMiddleware writes the generated title to the LangGraph agent state
(checkpointer) but the Gateway's Store record is not updated automatically.
This coroutine closes that gap by reading the final checkpoint after the
run completes and syncing ``values.title`` into the Store record so that
subsequent ``/threads/search`` responses include the correct title.
Runs as a fire-and-forget :func:`asyncio.create_task`; failures are
logged at DEBUG level and never propagate.
"""
# Wait for the background run task to complete (any outcome).
# asyncio.wait does not propagate task exceptions — it just returns
# when the task is done, cancelled, or failed.
await asyncio.wait({run_task})
# Deferred import to avoid circular import with the threads router module.
from app.gateway.routers.threads import _store_get, _store_put
try:
ckpt_config = {"configurable": {"thread_id": thread_id, "checkpoint_ns": ""}}
ckpt_tuple = await checkpointer.aget_tuple(ckpt_config)
if ckpt_tuple is None:
return
channel_values = ckpt_tuple.checkpoint.get("channel_values", {})
title = channel_values.get("title")
if not title:
return
existing = await _store_get(store, thread_id)
if existing is None:
return
updated = dict(existing)
updated.setdefault("values", {})["title"] = title
updated["updated_at"] = time.time()
await _store_put(store, updated)
logger.debug("Synced title %r for thread %s", title, thread_id)
except Exception:
logger.debug("Failed to sync title for thread %s (non-fatal)", thread_id, exc_info=True)
async def start_run(
body: Any,
thread_id: str,
@@ -255,11 +191,25 @@ async def start_run(
"""
bridge = get_stream_bridge(request)
run_mgr = get_run_manager(request)
checkpointer = get_checkpointer(request)
store = get_store(request)
run_ctx = get_run_context(request)
disconnect = DisconnectMode.cancel if body.on_disconnect == "cancel" else DisconnectMode.continue_
# Resolve follow_up_to_run_id: explicit from request, or auto-detect from latest successful run
follow_up_to_run_id = getattr(body, "follow_up_to_run_id", None)
if follow_up_to_run_id is None:
run_store = get_run_store(request)
try:
recent_runs = await run_store.list_by_thread(thread_id, limit=1)
if recent_runs and recent_runs[0].get("status") == "success":
follow_up_to_run_id = recent_runs[0]["run_id"]
except Exception:
pass # Don't block run creation
# Enrich base context with per-run field
if follow_up_to_run_id:
run_ctx = dataclasses.replace(run_ctx, follow_up_to_run_id=follow_up_to_run_id)
try:
record = await run_mgr.create_or_reject(
thread_id,
@@ -268,17 +218,28 @@ async def start_run(
metadata=body.metadata or {},
kwargs={"input": body.input, "config": body.config},
multitask_strategy=body.multitask_strategy,
follow_up_to_run_id=follow_up_to_run_id,
)
except ConflictError as exc:
raise HTTPException(status_code=409, detail=str(exc)) from exc
except UnsupportedStrategyError as exc:
raise HTTPException(status_code=501, detail=str(exc)) from exc
# Ensure the thread is visible in /threads/search, even for threads that
# were never explicitly created via POST /threads (e.g. stateless runs).
store = get_store(request)
if store is not None:
await _upsert_thread_in_store(store, thread_id, body.metadata)
# Upsert thread metadata so the thread appears in /threads/search,
# even for threads that were never explicitly created via POST /threads
# (e.g. stateless runs).
try:
existing = await run_ctx.thread_meta_repo.get(thread_id)
if existing is None:
await run_ctx.thread_meta_repo.create(
thread_id,
assistant_id=body.assistant_id,
metadata=body.metadata,
)
else:
await run_ctx.thread_meta_repo.update_status(thread_id, "running")
except Exception:
logger.warning("Failed to upsert thread_meta for %s (non-fatal)", sanitize_log_param(thread_id))
agent_factory = resolve_agent_factory(body.assistant_id)
graph_input = normalize_input(body.input)
@@ -311,8 +272,7 @@ async def start_run(
bridge,
run_mgr,
record,
checkpointer=checkpointer,
store=store,
ctx=run_ctx,
agent_factory=agent_factory,
graph_input=graph_input,
config=config,
@@ -324,11 +284,9 @@ async def start_run(
)
record.task = task
# After the run completes, sync the title generated by TitleMiddleware from
# the checkpointer into the Store record so that /threads/search returns the
# correct title instead of an empty values dict.
if store is not None:
asyncio.create_task(_sync_thread_title_after_run(task, thread_id, checkpointer, store))
# Title sync is handled by worker.py's finally block which reads the
# title from the checkpoint and calls thread_meta_repo.update_display_name
# after the run completes.
return record
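For orientation, the ThreadMetaStore surface this diff migrates to can be summarized as a small protocol. The method names below are taken from the call sites in this file and the threads router; the exact signatures and return types are assumptions, not the actual interface:
```python
# Sketch inferred from call sites (get / create / update_status /
# update_metadata / update_display_name); the real definition may differ.
from typing import Any, Protocol

class ThreadMetaRepo(Protocol):
    async def get(self, thread_id: str) -> dict[str, Any] | None: ...
    async def create(self, thread_id: str, *, assistant_id: str | None = None,
                     metadata: dict[str, Any] | None = None) -> None: ...
    async def update_status(self, thread_id: str, status: str) -> None: ...
    async def update_metadata(self, thread_id: str, metadata: dict[str, Any]) -> None: ...
    async def update_display_name(self, thread_id: str, name: str) -> None: ...
```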
+6
View File
@@ -0,0 +1,6 @@
"""Shared utility helpers for the Gateway layer."""
def sanitize_log_param(value: str) -> str:
"""Strip control characters to prevent log injection."""
return value.replace("\n", "").replace("\r", "").replace("\x00", "")
+25 -1
View File
@@ -86,6 +86,7 @@ Content-Type: application/json
]
},
"config": {
"recursion_limit": 100,
"configurable": {
"model_name": "gpt-4",
"thinking_enabled": false,
@@ -100,6 +101,21 @@ Content-Type: application/json
- Use: `values`, `messages-tuple`, `custom`, `updates`, `events`, `debug`, `tasks`, `checkpoints`
- Do not use: `tools` (deprecated/invalid in current `langgraph-api` and will trigger schema validation errors)
**Recursion Limit:**
`config.recursion_limit` caps the number of graph steps LangGraph will execute
in a single run. The `/api/langgraph/*` endpoints go straight to the LangGraph
server and therefore inherit LangGraph's native default of **25**, which is
too low for plan-mode or subagent-heavy runs — the agent typically errors out
with `GraphRecursionError` after the first round of subagent results comes
back, before the lead agent can synthesize the final answer.
DeerFlow's own Gateway and IM-channel paths mitigate this by defaulting to
`100` in `build_run_config` (see `backend/app/gateway/services.py`), but
clients calling the LangGraph API directly must set `recursion_limit`
explicitly in the request body. `100` matches the Gateway default and is a
safe starting point; increase it if you run deeply nested subagent graphs.
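A minimal Python sketch of the same request (the endpoint path and payload mirror the curl example below; the `httpx` usage is illustrative, not part of any SDK):
```python
# Call the LangGraph passthrough directly; it does not apply the Gateway's
# default, so recursion_limit must be set explicitly in the request body.
import httpx

payload = {
    "input": {"messages": [{"role": "user", "content": "Hello"}]},
    "config": {
        "recursion_limit": 100,  # LangGraph's native default is 25
        "configurable": {"model_name": "gpt-4"},
    },
}
httpx.post("http://localhost:2026/api/langgraph/threads/abc123/runs", json=payload)
```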
**Configurable Options:**
- `model_name` (string): Override the default model
- `thinking_enabled` (boolean): Enable extended thinking for supported models
@@ -626,6 +642,14 @@ curl -X POST http://localhost:2026/api/langgraph/threads/abc123/runs \
-H "Content-Type: application/json" \
-d '{
"input": {"messages": [{"role": "user", "content": "Hello"}]},
"config": {"configurable": {"model_name": "gpt-4"}}
"config": {
"recursion_limit": 100,
"configurable": {"model_name": "gpt-4"}
}
}'
```
> The `/api/langgraph/*` endpoints bypass DeerFlow's Gateway and inherit
> LangGraph's native `recursion_limit` default of 25, which is too low for
> plan-mode or subagent runs. Set `config.recursion_limit` explicitly — see
> the [Create Run](#create-run) section for details.
+77
View File
@@ -0,0 +1,77 @@
# Docker Test Gap (Section 七 7.4)
This file documents the only **un-executed** test cases from
`backend/docs/AUTH_TEST_PLAN.md` after the full release validation pass.
## Why this gap exists
The release validation environment (sg_dev: `10.251.229.92`) **does not have
a Docker daemon installed**. The TC-DOCKER cases are container-runtime
behavior tests that need an actual Docker engine to spin up
`docker/docker-compose.yaml` services.
```bash
$ ssh sg_dev "which docker; docker --version"
# (empty)
# bash: docker: command not found
```
All other test plan sections were executed against either:
- The local dev box (Mac, all services running locally), or
- The deployed sg_dev instance (gateway + frontend + nginx via SSH tunnel)
## Cases not executed
| Case | Title | What it covers | Why not run |
|---|---|---|---|
| TC-DOCKER-01 | `users.db` volume persistence | Verify the `DEER_FLOW_HOME` bind mount survives container restart | needs `docker compose up` |
| TC-DOCKER-02 | Session persistence across container restart | `AUTH_JWT_SECRET` env var keeps cookies valid after `docker compose down && up` | needs `docker compose down/up` |
| TC-DOCKER-03 | Per-worker rate limiter divergence | Confirms in-process `_login_attempts` dict doesn't share state across `gunicorn` workers (4 by default in the compose file); known limitation, documented | needs multi-worker container |
| TC-DOCKER-04 | IM channels skip AuthMiddleware | Verify Feishu/Slack/Telegram dispatchers run in-container against `http://langgraph:2024` without going through nginx | needs `docker logs` |
| TC-DOCKER-05 | Admin credentials surfacing | **Updated post-simplify** — was "log scrape", now "0600 credential file in `DEER_FLOW_HOME`". The file-based behavior is already validated by TC-1.1 + TC-UPG-13 on sg_dev (non-Docker), so the only Docker-specific gap is verifying the volume mount carries the file out to the host | needs container + host volume |
| TC-DOCKER-06 | Gateway-mode Docker deploy | `./scripts/deploy.sh --gateway` produces a 3-container topology (no `langgraph` container); same auth flow as standard mode | needs `docker compose --profile gateway` |
## Coverage already provided by non-Docker tests
The **auth-relevant** behavior in each Docker case is already exercised by
the test cases that ran on sg_dev or local:
| Docker case | Auth behavior covered by |
|---|---|
| TC-DOCKER-01 (volume persistence) | TC-REENT-01 on sg_dev (admin row survives gateway restart) — same SQLite file, just no container layer between |
| TC-DOCKER-02 (session persistence) | TC-API-02/03/06 (cookie roundtrip), plus TC-REENT-04 (multi-cookie) — JWT verification is process-state-free, container restart is equivalent to `pkill uvicorn && uv run uvicorn` |
| TC-DOCKER-03 (per-worker rate limit) | TC-GW-04 + TC-REENT-09 (single-worker rate limit + 5min expiry). The cross-worker divergence is an architectural property of the in-memory dict; no auth code path differs |
| TC-DOCKER-04 (IM channels skip auth) | Code-level only: `app/channels/manager.py` uses `langgraph_sdk` directly with no cookie handling. The langgraph_auth handler is bypassed by going through SDK, not HTTP |
| TC-DOCKER-05 (credential surfacing) | TC-1.1 on sg_dev (file at `~/deer-flow/backend/.deer-flow/admin_initial_credentials.txt`, mode 0600, password 22 chars) — the only Docker-unique step is whether the bind mount projects this path onto the host, which is a `docker compose` config check, not a runtime behavior change |
| TC-DOCKER-06 (gateway-mode container) | Section 七 7.2 covered by TC-GW-01..05 + Section 二 (gateway-mode auth flow on sg_dev) — same Gateway code, container is just a packaging change |
## Reproduction steps when Docker becomes available
Anyone with `docker` + `docker compose` installed can cover this gap by
running the test plan section verbatim. Pre-flight:
```bash
# Required on the host
docker --version # >=24.x
docker compose version # plugin >=2.x
# Required env var (otherwise sessions reset on every container restart)
echo "AUTH_JWT_SECRET=$(python3 -c 'import secrets; print(secrets.token_urlsafe(32))')" \
>> .env
# Optional: pin DEER_FLOW_HOME to a stable host path
echo "DEER_FLOW_HOME=$HOME/deer-flow-data" >> .env
```
Then run TC-DOCKER-01..06 from the test plan as written.
## Decision log
- **Not blocking the release.** The auth-relevant behavior in every Docker
case has an already-validated equivalent on bare metal. The gap is purely
about *container packaging* details (bind mounts, multi-worker, log
collection), not about whether the auth code paths work.
- **TC-DOCKER-05 was updated in place** in `AUTH_TEST_PLAN.md` to reflect
the post-simplify reality (credentials file → 0600 file, no log leak).
The old "grep 'Password:' in docker logs" expectation would have failed
silently and given a false sense of coverage.
File diff suppressed because it is too large
+129
View File
@@ -0,0 +1,129 @@
# Authentication Upgrade Guide
DeerFlow ships with a built-in authentication module. This document is for users upgrading from a version without authentication.
## Core concepts
The authentication module follows an **always enforced** policy:
- An admin account is created automatically on first boot; its random password is printed to the console log
- Authentication is enforced from the very first start, with no race window
- Historical conversations (threads created before the upgrade) are automatically migrated to the admin account
## Upgrade steps
### 1. Update the code
```bash
git pull origin main
cd backend && make install
```
### 2. First start
```bash
make dev
```
The console prints:
```
============================================================
Admin account created on first boot
Email: admin@deerflow.dev
Password: aB3xK9mN_pQ7rT2w
Change it after login: Settings → Account
============================================================
```
If you restarted the service before logging in, don't worry: as long as setup has not completed, the password is reset and reprinted to the console on every start.
### 3. Log in
Visit `http://localhost:2026/login` and sign in with the email and password printed to the console.
### 4. Change the password
After logging in, go to Settings → Account → Change Password.
### 5. Add users (optional)
Other users register through the `/login` page and automatically receive the **user** role. Each user can only see their own conversations.
## Security mechanisms
| Mechanism | Description |
|------|------|
| JWT HttpOnly cookie | The token is never exposed to JavaScript, preventing XSS theft |
| CSRF double-submit cookie | All POST/PUT/DELETE requests must carry an `X-CSRF-Token` header |
| bcrypt password hashing | Passwords are never stored in plaintext |
| Multi-tenant isolation | Users can only access their own threads |
| HTTPS-adaptive cookies | `x-forwarded-proto` is detected and the `Secure` cookie flag is set automatically |
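A sketch of the login-then-CSRF flow described above (the endpoint paths come from the API endpoints table below; the form field names, CSRF cookie name, and change-password body are assumptions):
```python
# Illustrative only: log in, then send a state-changing request with the
# double-submit CSRF header. Adjust field/cookie names to the real API.
import httpx

client = httpx.Client(base_url="http://localhost:2026")
client.post("/api/v1/auth/login/local",
            data={"username": "admin@deerflow.dev", "password": "<console password>"})
csrf = client.cookies.get("csrf_token")  # assumed cookie name
client.post("/api/v1/auth/change-password",
            headers={"X-CSRF-Token": csrf},
            json={"old_password": "<console password>", "new_password": "<new password>"})
```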
## Common operations
### Forgot password
```bash
cd backend
# Reset the admin password
python -m app.gateway.auth.reset_admin
# Reset a specific user's password
python -m app.gateway.auth.reset_admin --email user@example.com
```
A new random password is printed.
### Full reset
Delete the user database; a new admin is created automatically on the next start:
```bash
rm -f backend/.deer-flow/users.db
# Restart the service; the new password is printed to the console
```
## Data storage
| File | Contents |
|------|------|
| `.deer-flow/users.db` | SQLite user database (password hashes, roles) |
| `AUTH_JWT_SECRET` in `.env` | JWT signing key (if unset, a temporary key is generated and sessions are invalidated on restart) |
### Production recommendation
```bash
# Generate a persistent JWT secret so users do not have to log in again after every restart
python -c "import secrets; print(secrets.token_urlsafe(32))"
# Add the output to .env
# AUTH_JWT_SECRET=<generated secret>
```
## API endpoints
| Endpoint | Method | Description |
|------|------|------|
| `/api/v1/auth/login/local` | POST | Email/password login (OAuth2 form) |
| `/api/v1/auth/register` | POST | Register a new user (user role) |
| `/api/v1/auth/logout` | POST | Log out (clears cookies) |
| `/api/v1/auth/me` | GET | Get the current user's info |
| `/api/v1/auth/change-password` | POST | Change password |
| `/api/v1/auth/setup-status` | GET | Check whether an admin exists |
## Compatibility
- **Standard mode** (`make dev`): fully compatible; admin created automatically
- **Gateway mode** (`make dev-pro`): fully compatible
- **Docker deployments**: fully compatible; `.deer-flow/users.db` must live on a persistent volume mount
- **IM channels** (Feishu/Slack/Telegram): communicate via the LangGraph SDK and do not pass through the auth layer
- **DeerFlowClient** (embedded): does not go over HTTP and is unaffected by authentication
## Troubleshooting
| Symptom | Cause | Fix |
|------|------|------|
| No password shown after startup | Admin already exists (not a first boot) | Reset with `reset_admin`, or delete `users.db` |
| POST returns 403 after login | Missing CSRF token | Make sure the frontend is up to date |
| Forced to log in again after restart | `AUTH_JWT_SECRET` not persisted | Set a fixed secret in `.env` |
+2 -2
View File
@@ -192,8 +192,8 @@ tools:
```
**Built-in Tools**:
- `web_search` - Search the web (Tavily)
- `web_fetch` - Fetch web pages (Jina AI)
- `web_search` - Search the web (DuckDuckGo, Tavily, Exa, InfoQuest, Firecrawl)
- `web_fetch` - Fetch web pages (Jina AI, Exa, InfoQuest, Firecrawl)
- `ls` - List directory contents
- `read_file` - Read file contents
- `write_file` - Write file contents
+2
View File
@@ -15,6 +15,7 @@ This directory contains detailed documentation for the DeerFlow backend.
| Document | Description |
|----------|-------------|
| [STREAMING.md](STREAMING.md) | Token-level streaming design: Gateway vs DeerFlowClient paths, `stream_mode` semantics, per-id dedup |
| [FILE_UPLOAD.md](FILE_UPLOAD.md) | File upload functionality |
| [PATH_EXAMPLES.md](PATH_EXAMPLES.md) | Path types and usage examples |
| [summarization.md](summarization.md) | Context summarization feature |
@@ -47,6 +48,7 @@ docs/
├── PATH_EXAMPLES.md # Path usage examples
├── summarization.md # Summarization feature
├── plan_mode_usage.md # Plan mode feature
├── STREAMING.md # Token-level streaming design
├── AUTO_TITLE_GENERATION.md # Title generation
├── TITLE_GENERATION_IMPLEMENTATION.md # Title implementation details
└── TODO.md # Roadmap and issues
+351
View File
@@ -0,0 +1,351 @@
# DeerFlow Streaming Output Design
This document explains how DeerFlow delivers the event stream of a LangGraph agent end to end to two kinds of consumers (HTTP clients and embedded Python callers): why the two paths **must** coexist, what contract each offers, and the non-obvious invariants in the design.
---
## TL;DR
- DeerFlow has **two parallel** streaming paths: the **Gateway path** (async / HTTP SSE / JSON serialization) serves the browser and the IM channels; the **DeerFlowClient path** (sync / in-process / native LangChain objects) serves Jupyter, scripts, and tests. They **cannot be merged**: the consumer models are different.
- Both paths start from the `create_agent()` factory and, at their core, subscribe to LangGraph's `stream_mode=["values", "messages", "custom"]`: `values` is the node-level state snapshot, `messages` is the LLM token-level delta, and `custom` carries explicit `StreamWriter` events. **These three modes are not a gradient of verbosity; they are three independent event sources.** If you want a token stream you must explicitly subscribe to `messages`.
- The embedded client maintains three `set[str]` per `stream()` call: `seen_ids` / `streamed_ids` / `counted_usage_ids`. They look similar but guard **three independent invariants** and must not be merged.
---
## Why there are two streaming paths
The two paths serve fundamentally different consumer models:
| Dimension | Gateway path | DeerFlowClient path |
|---|---|---|
| Entry point | FastAPI `/runs/stream` endpoint | `DeerFlowClient.stream(message)` |
| Trigger layer | `runtime/runs/worker.py::run_agent` | `packages/harness/deerflow/client.py::DeerFlowClient.stream` |
| Execution model | `async def` + `agent.astream()` | sync generator + `agent.stream()` |
| Event transport | `StreamBridge` (asyncio Queue) + `sse_consumer` | direct `yield` |
| Serialization | `serialize(chunk)` → plain JSON dict matching the LangGraph Platform wire format | `StreamEvent.data` carrying native LangChain objects |
| Consumers | frontend `useStream` React hook, Feishu/Slack/Telegram channels, LangGraph SDK clients | Jupyter notebooks, integration tests, internal Python scripts |
| Lifecycle management | `RunManager` (run_id tracking, disconnect semantics, multitask strategy, heartbeat) | none; the call ends when the function returns |
| Disconnect recovery | `Last-Event-ID` SSE reconnection | not needed |
**The existence of both paths is a deliberate DRY trade-off**: all of the Gateway's infrastructure (async + Queue + JSON + RunManager) exists **to push events across a network boundary to HTTP consumers**. When the producer (the agent) and the consumer (the Python call stack) live in the same process, that whole stack is pure overhead.
### Why DeerFlowClient cannot reuse the Gateway
Three reuse options were considered and rejected:
1. **Turn `client.stream()` into `async def client.astream()`**
Breaking change. Users would have to force `async for` / `asyncio.run()` into Jupyter notebooks and synchronous scripts even though they gain nothing from it. One of DeerFlowClient's main selling points ("call the agent like an ordinary function") disappears.
2. **Spin up a dedicated event-loop thread inside `client.stream()` and bridge sync/async with `StreamBridge`**
This introduces a thread pool, queues, and semaphores. To "eliminate duplication" it trades lines of code for **complexity**. A textbook "wrong abstraction": the overhead outweighs the reuse benefit.
3. **Make `run_agent` itself support a sync mode**
This adds a dead branch the Gateway never uses and blurs the focus of worker.py.
So the event-handling logic of the two paths is **similar but not shared**. That is deliberate design, not an oversight.
---
## The three-layer semantics of LangGraph `stream_mode`
LangGraph's `agent.stream(stream_mode=[...])` is a **multiplexed** interface: a single call subscribes to several modes, and each mode is an independent event source. The three core modes:
```mermaid
flowchart LR
classDef values fill:#B8C5D1,stroke:#5A6B7A,color:#2C3E50
classDef messages fill:#C9B8A8,stroke:#7A6B5A,color:#2C3E50
classDef custom fill:#B5C4B1,stroke:#5A7A5A,color:#2C3E50
subgraph LG["LangGraph agent graph"]
direction TB
Node1["node: LLM call"]
Node2["node: tool call"]
Node3["node: reducer"]
end
LG -->|"after each node completes"| V["values: full state snapshot"]
Node1 -->|"every token the LLM produces"| M["messages: (AIMessageChunk, meta)"]
Node1 -->|"StreamWriter.write()"| C["custom: arbitrary dict"]
class V values
class M messages
class C custom
```
| Mode | Emitted when | Payload | Granularity |
|---|---|---|---|
| `values` | after each graph node completes | full state dict (title, messages, artifacts) | node level |
| `messages` | each time the LLM yields a chunk / when a tool node completes | `(AIMessageChunk \| ToolMessage, metadata_dict)` | token level |
| `custom` | user code explicitly calls `StreamWriter.write()` | arbitrary dict | application defined |
### Where the two naming schemes come from
The same thing has three names across **three protocol layers**:
```
Application HTTP / SSE LangGraph Graph
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ frontend │ │ LangGraph │ │ agent.astream│
│ useStream │──"messages- │ Platform SDK │──"messages"──│ graph.astream│
│ Feishu IM │ tuple"──────│ HTTP wire │ │ │
└──────────────┘ └──────────────┘ └──────────────┘
```
- **Graph layer** (`agent.stream` / `agent.astream`): the direct LangGraph Python API; the mode is called **`"messages"`**.
- **Platform SDK layer** (the `langgraph-sdk` HTTP client): the cross-process HTTP contract; the mode is called **`"messages-tuple"`**.
- The **Gateway worker** does the translation explicitly: `if m == "messages-tuple": lg_modes.append("messages")` (`runtime/runs/worker.py:117-121`).
**Consequence**: `DeerFlowClient.stream()` calls `agent.stream()` directly (Graph layer), so it must pass `"messages"`; `app/channels/manager.py` goes through the HTTP SDK via `langgraph-sdk`, so it passes `"messages-tuple"`. **The two strings are not interchangeable**, and they should not be hoisted into "one shared constant": they are type aliases at different protocol layers, and sharing them would only force one layer to speak a language that is not its own.
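A condensed sketch of that translation step (the real code lives at `runtime/runs/worker.py:117-121`; the variable names here are illustrative):
```python
# Map Platform-SDK mode names (HTTP wire) onto Graph-layer names before
# calling agent.astream(); sketch only.
requested = ["values", "messages-tuple", "custom"]   # what the SDK client asked for
lg_modes = ["messages" if m == "messages-tuple" else m for m in requested]
assert lg_modes == ["values", "messages", "custom"]
```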
---
## Gateway path: async + HTTP SSE
```mermaid
sequenceDiagram
participant Client as HTTP Client
participant API as FastAPI<br/>thread_runs.py
participant Svc as services.py<br/>start_run
participant Worker as worker.py<br/>run_agent (async)
participant Bridge as StreamBridge<br/>(asyncio.Queue)
participant Agent as LangGraph<br/>agent.astream
participant SSE as sse_consumer
Client->>API: POST /runs/stream
API->>Svc: start_run(body)
Svc->>Bridge: create bridge
Svc->>Worker: asyncio.create_task(run_agent(...))
Svc-->>API: StreamingResponse(sse_consumer)
API-->>Client: event-stream opens
par worker (producer)
Worker->>Agent: astream(stream_mode=lg_modes)
loop for each chunk
Agent-->>Worker: (mode, chunk)
Worker->>Bridge: publish(run_id, event, serialize(chunk))
end
Worker->>Bridge: publish_end(run_id)
and sse_consumer (consumer)
SSE->>Bridge: subscribe(run_id)
loop for each event
Bridge-->>SSE: StreamEvent
SSE-->>Client: "event: <name>\ndata: <json>\n\n"
end
end
```
Key components:
- `runtime/runs/worker.py::run_agent`: runs `agent.astream()` inside an `asyncio.Task`, converts each chunk to JSON via `serialize(chunk, mode=mode)`, then calls `bridge.publish()`.
- `runtime/stream_bridge`: an abstract Queue. `publish/subscribe` decouples producer and consumer and supports `Last-Event-ID` reconnection, heartbeats, and multi-subscriber fan-out.
- `app/gateway/services.py::sse_consumer`: subscribes to the bridge and formats events into SSE wire frames.
- `runtime/serialization.py::serialize`: mode-aware serialization; in `messages` mode, `serialize_messages_tuple` turns `(chunk, metadata)` into `[chunk.model_dump(), metadata]`.
**Why `StreamBridge` exists**: when the producer (the `run_agent` task) and the consumer (the HTTP connection) run in different asyncio tasks, something has to carry events across tasks. The Queue also doubles as the reconnection buffer and the fan-out point for multiple subscribers.
---
## DeerFlowClient path: sync + in-process
```mermaid
sequenceDiagram
participant User as Python caller
participant Client as DeerFlowClient.stream
participant Agent as LangGraph<br/>agent.stream (sync)
User->>Client: for event in client.stream("hi"):
Client->>Agent: stream(stream_mode=["values","messages","custom"])
loop for each chunk
Agent-->>Client: (mode, chunk)
Client->>Client: dispatch by mode<br/>build StreamEvent
Client-->>User: yield StreamEvent
end
Client-->>User: yield StreamEvent(type="end")
```
By contrast, every part of the sync path has visibly fewer moving parts:
- No `RunManager`: one `stream()` call is one lifecycle, so no run_id is needed.
- No `StreamBridge`: events are `yield`ed directly; producer and consumer share the same Python call stack, so no cross-task intermediary is needed.
- No JSON serialization: `StreamEvent.data` carries native LangChain objects (`AIMessage.content`, `usage_metadata` as a `UsageMetadata` TypedDict). Jupyter users get real types, not anonymous dicts.
- No asyncio: callers can write `for event in ...` without reaching for `async for`.
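A minimal usage sketch of the sync path (construction of `client` is elided; the `StreamEvent` fields follow the descriptions in this document):
```python
# Consume token deltas from the embedded client; accumulating full text per
# message id is exactly what chat() does for you (see the next section).
buffers: dict[str, list[str]] = {}
for event in client.stream("Explain stream_mode in one paragraph"):
    if event.type == "messages-tuple" and event.data.get("type") == "ai":
        buffers.setdefault(event.data.get("id", ""), []).append(event.data.get("content", ""))
    elif event.type == "end":
        print("usage:", event.data.get("usage"))
```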
---
## Consumption semantics: delta vs cumulative
LangGraph's `messages` mode emits **deltas**: each `AIMessageChunk.content` contains only the tokens yielded this time, **not** the cumulative text from the beginning.
This semantic matches LangChain's `fs2 Stream`-style convention: **upstream sends increments, downstream accumulates**. On the Gateway path the frontend `useStream` React hook keeps its own accumulator; on the DeerFlowClient path the `chat()` method accumulates on the caller's behalf.
### The O(n) accumulator in `DeerFlowClient.chat()`
```python
chunks: dict[str, list[str]] = {}
last_id: str = ""
for event in self.stream(message, thread_id=thread_id, **kwargs):
if event.type == "messages-tuple" and event.data.get("type") == "ai":
msg_id = event.data.get("id") or ""
delta = event.data.get("content", "")
if delta:
chunks.setdefault(msg_id, []).append(delta)
last_id = msg_id
return "".join(chunks.get(last_id, ()))
```
**Why not `buffers[id] = buffers.get(id,"") + delta`?** CPython's in-place string concatenation optimization only applies when the refcount is 1 and the LHS is a local name; here the string lives in a dict and gets reassigned, so the optimization never kicks in and every append is an O(n) copy, O(n²) overall. Measured on a 50 KB / 5000-chunk reply that is 100-300 ms of pure copying. `list` + `"".join()` is O(n).
---
## Why the three id sets cannot be merged
`DeerFlowClient.stream()` maintains three `set[str]` over the lifetime of one call:
```python
seen_ids: set[str] = set()            # dedup within the values path
streamed_ids: set[str] = set()        # cross-mode dedup: messages → values
counted_usage_ids: set[str] = set()   # idempotent usage_metadata accounting
```
At first glance these look like "three copies of almost the same thing", but each guards a **different invariant**.
| Set | Invariant it guards | Filled by | Queried by |
|---|---|---|---|
| `seen_ids` | the same message appearing in two consecutive `values` snapshots produces only one `messages-tuple` event | the values branch, after it processes each message | the values branch, before it processes the next message |
| `streamed_ids` | if a message has already streamed token by token through `messages` mode, the arriving values snapshot must **not** synthesize a full `messages-tuple` for it again | the messages branch, whenever it emits an AI/tool event | the values branch, when it sees the message |
| `counted_usage_ids` | the same `usage_metadata` appears both on the final `messages` chunk and on the final AIMessage in the values snapshot; **the cumulative total must count it only once** | `_account_usage()`, each time it accepts a usage record | `_account_usage()`, on every call |
### Why a single set is not enough
The key observation: **the same message id is added to these three sets at different times**.
```mermaid
sequenceDiagram
participant M as messages mode
participant V as values mode
participant SS as streamed_ids
participant SU as counted_usage_ids
participant SE as seen_ids
Note over M: first AI text chunk arrives
M->>SS: add(msg_id)
Note over M: last chunk carries usage
M->>SU: add(msg_id)
Note over V: snapshot arrives containing the same AI message
V->>SE: add(msg_id)
V->>SS: lookup → present, skip text synthesis
V->>SU: lookup → present, do not count again
```
- `seen_ids` is added to **only when a values snapshot arrives**, so it marks "values has processed this". A message that only ever appears in the messages stream (rare but possible) never ends up in `seen_ids`.
- `streamed_ids` is added to **on the first meaningful event in the messages stream**. A non-AI message that only arrives via a values snapshot (a HumanMessage, a truncated tool message) never ends up in `streamed_ids`.
- `counted_usage_ids` is added to **only when non-empty `usage_metadata` is seen**. A message with no usage at all (tool messages, error messages) never gets in.
**Set containment**: `counted_usage_ids ⊆ (streamed_ids ∪ seen_ids)` roughly holds, but it is **not a strict invariant**, because a message can finish streaming its text in messages mode and then be caught by a values snapshot **before the final usage-bearing chunk arrives**; at that point it is already in `streamed_ids` but not yet in `counted_usage_ids`. Collapsing the three into one dict of flags would make this subtle ordering dependency **disappear from the type system** and survive only as a comment. Three separate sets make the invariants explicit: each set name answers a question you can state out loud.
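A schematic of how the three sets are consulted per mode, as a simplification of `DeerFlowClient.stream()` rather than the actual implementation (`yield_event` is a placeholder):
```python
# Schematic dispatch loop; error handling, custom mode, and serialization omitted.
for mode, chunk in agent.stream(graph_input, stream_mode=["values", "messages", "custom"]):
    if mode == "messages":
        msg, _meta = chunk
        if msg.content:
            streamed_ids.add(msg.id)                 # token text already streamed
            yield_event("messages-tuple", msg)
        if getattr(msg, "usage_metadata", None) and msg.id not in counted_usage_ids:
            counted_usage_ids.add(msg.id)            # count cumulative usage once
    elif mode == "values":
        for msg in chunk.get("messages", []):
            if msg.id in seen_ids:
                continue                             # already handled by a prior snapshot
            seen_ids.add(msg.id)
            if msg.id not in streamed_ids:
                yield_event("messages-tuple", msg)   # synthesize only if never streamed
```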
---
## End to end: the event timeline of one real conversation
Suppose we call `client.stream("Count from 1 to 15")` and the LLM answers "one\ntwo\n...\nfifteen" (88 characters), which the tokenizer splits into ~35 BPE chunks. A condensed version of the event arrival sequence:
```mermaid
sequenceDiagram
participant U as User
participant C as DeerFlowClient
participant A as LangGraph<br/>agent.stream
U->>C: stream("Count ... 15")
C->>A: stream(mode=["values","messages","custom"])
A-->>C: ("values", {messages: [HumanMessage]})
C-->>U: StreamEvent(type="values", ...)
Note over A,C: LLM starts yielding tokens
loop 35 times, ~476ms
A-->>C: ("messages", (AIMessageChunk(content="ele"), meta))
C->>C: streamed_ids.add(ai-1)
C-->>U: StreamEvent(type="messages-tuple",<br/>data={type:ai, content:"ele", id:ai-1})
end
Note over A: LLM finish_reason=stop, last chunk carries usage
A-->>C: ("messages", (AIMessageChunk(content="", usage_metadata={...}), meta))
C->>C: counted_usage_ids.add(ai-1)<br/>(no text, nothing yielded)
A-->>C: ("values", {messages: [..., AIMessage(complete)]})
C->>C: ai-1 in streamed_ids → skip synthesis
C->>C: capture usage (already in counted_usage_ids, no-op)
C-->>U: StreamEvent(type="values", ...)
C-->>U: StreamEvent(type="end", data={usage:{...}})
```
Key observations:
1. The user sees **35 messages-tuple events** spread over roughly 476 ms, each carrying one token delta and the same `id=ai-1`.
2. The `AIMessage` in the final `values` snapshot does **not** trigger another full `messages-tuple` event, because `ai-1 in streamed_ids` skips the synthesis.
3. The `usage` in the `end` event equals exactly one copy of the cumulative usage, **not twice that**: `counted_usage_ids` already absorbed it on the final messages chunk, so the duplicate visit from the values branch is a no-op.
4. The `content` the consumer receives is an **increment**: "ele" is just 3 characters, not "one\ntwo\n...ele". To get the full text you accumulate by `id`; `chat()` already does that for you.
---
## Why this design is bug-prone, and the testing strategy
The immediate trigger for this document is bytedance/deer-flow#1969: `DeerFlowClient.stream()` originally subscribed only to `["values", "custom"]` and **missed `"messages"`**. As a result `client.stream("hello")` behaved like a one-shot return and looked no different from `chat()`.
This class of bug has three structural causes:
1. **Names spread across protocol layers**: `messages` / `messages-tuple` / the HTTP SSE `messages` event are three names for the same concept. Getting it wrong at one layer raises no error at the other two.
2. **Multiple consumer models**: the Gateway and DeerFlowClient are two independent implementations with **no single source of truth for "which modes to subscribe to"**. Getting it right in one does not mean the other is right.
3. **Mocked tests bypassed the real path**: the old tests fed values-shaped dicts via `agent.stream.return_value = iter([dict_chunk, ...])` to simulate state snapshots. Inputs built that way **never enter the `messages` mode branch**, so CI stayed green even with an element missing from `stream_mode`.
### Defenses
The real line of defense is **asserting explicitly that the "messages" mode is subscribed, plus mocking with the real chunk shape**:
```python
# tests/test_client.py::test_messages_mode_emits_token_deltas
agent.stream.return_value = iter([
("messages", (AIMessageChunk(content="Hel", id="ai-1"), {})),
("messages", (AIMessageChunk(content="lo ", id="ai-1"), {})),
("messages", (AIMessageChunk(content="world!", id="ai-1"), {})),
("values", {"messages": [HumanMessage(...), AIMessage(content="Hello world!", id="ai-1")]}),
])
# ...
assert [e.data["content"] for e in ai_text_events] == ["Hel", "lo ", "world!"]
assert len(ai_text_events) == 3 # values snapshot must NOT re-synthesize
assert "messages" in agent.stream.call_args.kwargs["stream_mode"]
```
**Why this beats "hoisting a shared constant"**: a shared constant only guarantees that whoever uses it spells the string right; someone adding a new consumer may not even know the constant exists. A behavioral assertion forces any change through the **actual execution path**: reverting to `["values", "custom"]` immediately fails `assert "messages" in ...`.
### A liveness signal: BPE subword boundaries
The final verification of the regression is to have a real LLM count from 1 to 15 and check whether the tokenizer's subword splits are visible in the output:
```
[5.460s] 'ele' / 'ven'          eleven split into two tokens
[5.508s] 'tw' / 'elve'          twelve split into two
[5.568s] 'th' / 'irteen'        thirteen split into two
[5.623s] 'four'/ 'teen'         fourteen split into two
[5.677s] 'f' / 'if' / 'teen'    fifteen split into three
```
Subword splitting is an external fact of the tokenizer and **cannot be faked**. Seeing it proves the data flowed **chunk by chunk** through the whole pipeline without any intermediate layer buffering it into one blob. In a streaming system this kind of liveness signal is higher-confidence evidence than a unit test.
---
## Where to look in the source
| If you care about | Look here |
|---|---|
| DeerFlowClient embedded streaming | `packages/harness/deerflow/client.py::DeerFlowClient.stream` |
| The delta accumulator in `chat()` | `packages/harness/deerflow/client.py::DeerFlowClient.chat` |
| Gateway async streaming | `packages/harness/deerflow/runtime/runs/worker.py::run_agent` |
| HTTP SSE frame output | `app/gateway/services.py::sse_consumer` / `format_sse` |
| Serialization to the wire format | `packages/harness/deerflow/runtime/serialization.py` |
| LangGraph mode-name translation | `packages/harness/deerflow/runtime/runs/worker.py:117-121` |
| Incremental card updates in the Feishu channel | `app/channels/manager.py::_handle_streaming_chat` |
| The channels' own defensive delta/cumulative accumulation | `app/channels/manager.py::_merge_stream_text` |
| The set of modes supported by the frontend useStream | `frontend/src/core/api/stream-mode.ts` |
| The core regression test | `backend/tests/test_client.py::TestStream::test_messages_mode_emits_token_deltas` |
+3
View File
@@ -8,6 +8,9 @@
"graphs": {
"lead_agent": "deerflow.agents:make_lead_agent"
},
"auth": {
"path": "./app/gateway/langgraph_auth.py:auth"
},
"checkpointer": {
"path": "./packages/harness/deerflow/agents/checkpointer/async_provider.py:make_checkpointer"
}
@@ -2,8 +2,14 @@ from .checkpointer import get_checkpointer, make_checkpointer, reset_checkpointe
from .factory import create_deerflow_agent
from .features import Next, Prev, RuntimeFeatures
from .lead_agent import make_lead_agent
from .lead_agent.prompt import prime_enabled_skills_cache
from .thread_state import SandboxState, ThreadState
# LangGraph imports deerflow.agents when registering the graph. Prime the
# enabled-skills cache here so the request path can usually read a warm cache
# without forcing synchronous filesystem work during prompt module import.
prime_enabled_skills_cache()
__all__ = [
"create_deerflow_agent",
"RuntimeFeatures",
@@ -17,6 +17,7 @@ For sync usage see :mod:`deerflow.agents.checkpointer.provider`.
from __future__ import annotations
import asyncio
import contextlib
import logging
from collections.abc import AsyncIterator
@@ -54,7 +55,7 @@ async def _async_checkpointer(config) -> AsyncIterator[Checkpointer]:
raise ImportError(SQLITE_INSTALL) from exc
conn_str = resolve_sqlite_conn_str(config.connection_string or "store.db")
ensure_sqlite_parent_dir(conn_str)
await asyncio.to_thread(ensure_sqlite_parent_dir, conn_str)
async with AsyncSqliteSaver.from_conn_string(conn_str) as saver:
await saver.setup()
yield saver
@@ -83,23 +84,76 @@ async def _async_checkpointer(config) -> AsyncIterator[Checkpointer]:
@contextlib.asynccontextmanager
async def make_checkpointer() -> AsyncIterator[Checkpointer]:
"""Async context manager that yields a checkpointer for the caller's lifetime.
Resources are opened on enter and closed on exit -- no global state::
async with make_checkpointer() as checkpointer:
app.state.checkpointer = checkpointer
Yields an ``InMemorySaver`` when no checkpointer is configured in *config.yaml*.
"""
config = get_app_config()
if config.checkpointer is None:
async def _async_checkpointer_from_database(db_config) -> AsyncIterator[Checkpointer]:
"""Async context manager that constructs a checkpointer from unified DatabaseConfig."""
if db_config.backend == "memory":
from langgraph.checkpoint.memory import InMemorySaver
yield InMemorySaver()
return
async with _async_checkpointer(config.checkpointer) as saver:
yield saver
if db_config.backend == "sqlite":
try:
from langgraph.checkpoint.sqlite.aio import AsyncSqliteSaver
except ImportError as exc:
raise ImportError(SQLITE_INSTALL) from exc
conn_str = db_config.checkpointer_sqlite_path
ensure_sqlite_parent_dir(conn_str)
async with AsyncSqliteSaver.from_conn_string(conn_str) as saver:
await saver.setup()
yield saver
return
if db_config.backend == "postgres":
try:
from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver
except ImportError as exc:
raise ImportError(POSTGRES_INSTALL) from exc
if not db_config.postgres_url:
raise ValueError("database.postgres_url is required for the postgres backend")
async with AsyncPostgresSaver.from_conn_string(db_config.postgres_url) as saver:
await saver.setup()
yield saver
return
raise ValueError(f"Unknown database backend: {db_config.backend!r}")
@contextlib.asynccontextmanager
async def make_checkpointer() -> AsyncIterator[Checkpointer]:
"""Async context manager that yields a checkpointer for the caller's lifetime.
Resources are opened on enter and closed on exit -- no global state::
async with make_checkpointer() as checkpointer:
app.state.checkpointer = checkpointer
Yields an ``InMemorySaver`` when no checkpointer is configured in *config.yaml*.
Priority:
1. Legacy ``checkpointer:`` config section (backward compatible)
2. Unified ``database:`` config section
3. Default InMemorySaver
"""
config = get_app_config()
# Legacy: standalone checkpointer config takes precedence
if config.checkpointer is not None:
async with _async_checkpointer(config.checkpointer) as saver:
yield saver
return
# Unified database config
db_config = getattr(config, "database", None)
if db_config is not None and db_config.backend != "memory":
async with _async_checkpointer_from_database(db_config) as saver:
yield saver
return
# Default: in-memory
from langgraph.checkpoint.memory import InMemorySaver
yield InMemorySaver()
@@ -56,13 +56,15 @@ def _create_summarization_middleware() -> SummarizationMiddleware | None:
# Prepare keep parameter
keep = config.keep.to_tuple()
# Prepare model parameter
# Prepare model parameter.
# Bind "middleware:summarize" tag so RunJournal identifies these LLM calls
# as middleware rather than lead_agent (SummarizationMiddleware is a
# LangChain built-in, so we tag the model at creation time).
if config.model_name:
model = create_chat_model(name=config.model_name, thinking_enabled=False)
else:
# Use a lightweight model for summarization to save costs
# Falls back to default model if not explicitly specified
model = create_chat_model(thinking_enabled=False)
model = model.with_config(tags=["middleware:summarize"])
# Prepare kwargs
kwargs = {
@@ -287,14 +289,14 @@ def make_lead_agent(config: RunnableConfig):
agent_name = cfg.get("agent_name")
agent_config = load_agent_config(agent_name) if not is_bootstrap else None
# Custom agent model or fallback to global/default model resolution
agent_model_name = agent_config.model if agent_config and agent_config.model else _resolve_model_name()
# Custom agent model from agent config (if any), or None to let _resolve_model_name pick the default
agent_model_name = agent_config.model if agent_config and agent_config.model else None
# Final model name resolution with request override, then agent config, then global default
model_name = requested_model_name or agent_model_name
# Final model name resolution: request → agent config → global default, with fallback for unknown names
model_name = _resolve_model_name(requested_model_name or agent_model_name)
app_config = get_app_config()
model_config = app_config.get_model_config(model_name) if model_name else None
model_config = app_config.get_model_config(model_name)
if model_config is None:
raise ValueError("No chat model could be resolved. Please configure at least one model in config.yaml or provide a valid 'model_name'/'model' in the request.")
@@ -1,20 +1,114 @@
import asyncio
import logging
import threading
from datetime import datetime
from functools import lru_cache
from deerflow.config.agents_config import load_agent_soul
from deerflow.skills import load_skills
from deerflow.skills.types import Skill
from deerflow.subagents import get_available_subagent_names
logger = logging.getLogger(__name__)
_ENABLED_SKILLS_REFRESH_WAIT_TIMEOUT_SECONDS = 5.0
_enabled_skills_lock = threading.Lock()
_enabled_skills_cache: list[Skill] | None = None
_enabled_skills_refresh_active = False
_enabled_skills_refresh_version = 0
_enabled_skills_refresh_event = threading.Event()
def _load_enabled_skills_sync() -> list[Skill]:
return list(load_skills(enabled_only=True))
def _start_enabled_skills_refresh_thread() -> None:
threading.Thread(
target=_refresh_enabled_skills_cache_worker,
name="deerflow-enabled-skills-loader",
daemon=True,
).start()
def _refresh_enabled_skills_cache_worker() -> None:
global _enabled_skills_cache, _enabled_skills_refresh_active
while True:
with _enabled_skills_lock:
target_version = _enabled_skills_refresh_version
try:
skills = _load_enabled_skills_sync()
except Exception:
logger.exception("Failed to load enabled skills for prompt injection")
skills = []
with _enabled_skills_lock:
if _enabled_skills_refresh_version == target_version:
_enabled_skills_cache = skills
_enabled_skills_refresh_active = False
_enabled_skills_refresh_event.set()
return
# A newer invalidation happened while loading. Keep the worker alive
# and loop again so the cache always converges on the latest version.
_enabled_skills_cache = None
def _ensure_enabled_skills_cache() -> threading.Event:
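    """Start a background skills load if the cache is cold and no refresh is running.

    Returns the module-level refresh event: it is set immediately when a cached
    list already exists, otherwise it is set by the loader thread on completion.
    """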
global _enabled_skills_refresh_active
with _enabled_skills_lock:
if _enabled_skills_cache is not None:
_enabled_skills_refresh_event.set()
return _enabled_skills_refresh_event
if _enabled_skills_refresh_active:
return _enabled_skills_refresh_event
_enabled_skills_refresh_active = True
_enabled_skills_refresh_event.clear()
_start_enabled_skills_refresh_thread()
return _enabled_skills_refresh_event
def _invalidate_enabled_skills_cache() -> threading.Event:
global _enabled_skills_cache, _enabled_skills_refresh_active, _enabled_skills_refresh_version
_get_cached_skills_prompt_section.cache_clear()
with _enabled_skills_lock:
_enabled_skills_cache = None
_enabled_skills_refresh_version += 1
_enabled_skills_refresh_event.clear()
if _enabled_skills_refresh_active:
return _enabled_skills_refresh_event
_enabled_skills_refresh_active = True
_start_enabled_skills_refresh_thread()
return _enabled_skills_refresh_event
def prime_enabled_skills_cache() -> None:
_ensure_enabled_skills_cache()
def warm_enabled_skills_cache(timeout_seconds: float = _ENABLED_SKILLS_REFRESH_WAIT_TIMEOUT_SECONDS) -> bool:
if _ensure_enabled_skills_cache().wait(timeout=timeout_seconds):
return True
logger.warning("Timed out waiting %.1fs for enabled skills cache warm-up", timeout_seconds)
return False
def _get_enabled_skills():
try:
return list(load_skills(enabled_only=True))
except Exception:
logger.exception("Failed to load enabled skills for prompt injection")
return []
with _enabled_skills_lock:
cached = _enabled_skills_cache
if cached is not None:
return list(cached)
_ensure_enabled_skills_cache()
return []
def _skill_mutability_label(category: str) -> str:
@@ -22,7 +116,36 @@ def _skill_mutability_label(category: str) -> str:
def clear_skills_system_prompt_cache() -> None:
_invalidate_enabled_skills_cache()
async def refresh_skills_system_prompt_cache_async() -> None:
await asyncio.to_thread(_invalidate_enabled_skills_cache().wait)
def _reset_skills_system_prompt_cache_state() -> None:
global _enabled_skills_cache, _enabled_skills_refresh_active, _enabled_skills_refresh_version
_get_cached_skills_prompt_section.cache_clear()
with _enabled_skills_lock:
_enabled_skills_cache = None
_enabled_skills_refresh_active = False
_enabled_skills_refresh_version = 0
_enabled_skills_refresh_event.clear()
def _refresh_enabled_skills_cache() -> None:
"""Backward-compatible test helper for direct synchronous reload."""
try:
skills = _load_enabled_skills_sync()
except Exception:
logger.exception("Failed to load enabled skills for prompt injection")
skills = []
with _enabled_skills_lock:
_enabled_skills_cache = skills
_enabled_skills_refresh_active = False
_enabled_skills_refresh_event.set()
def _build_skill_evolution_section(skill_evolution_enabled: bool) -> str:
@@ -294,6 +417,9 @@ You: "Deploying to staging..." [proceed]
- Use `read_file` tool to read uploaded files using their paths from the list
- For PDF, PPT, Excel, and Word files, converted Markdown versions (*.md) are available alongside originals
- All temporary work happens in `/mnt/user-data/workspace`
- Treat `/mnt/user-data/workspace` as your default current working directory for coding and file-editing tasks
- When writing scripts or commands that create/read files from the workspace, prefer relative paths such as `hello.txt`, `../uploads/data.csv`, and `../outputs/report.md`
- Avoid hardcoding `/mnt/user-data/...` inside generated scripts when a relative path from the workspace is enough
- Final deliverables must be copied to `/mnt/user-data/outputs` and presented using `present_file` tool
{acp_section}
</working_directory>
@@ -4,7 +4,7 @@ import logging
import threading
import time
from dataclasses import dataclass, field
from datetime import datetime
from datetime import UTC, datetime
from typing import Any
from deerflow.config.memory_config import get_memory_config
@@ -18,7 +18,7 @@ class ConversationContext:
thread_id: str
messages: list[Any]
timestamp: datetime = field(default_factory=datetime.utcnow)
timestamp: datetime = field(default_factory=lambda: datetime.now(UTC))
agent_name: str | None = None
correction_detected: bool = False
reinforcement_detected: bool = False
@@ -4,7 +4,7 @@ import abc
import json
import logging
import threading
from datetime import datetime
from datetime import UTC, datetime
from pathlib import Path
from typing import Any
@@ -15,11 +15,16 @@ from deerflow.config.paths import get_paths
logger = logging.getLogger(__name__)
def utc_now_iso_z() -> str:
"""Current UTC time as ISO-8601 with ``Z`` suffix (matches prior naive-UTC output)."""
return datetime.now(UTC).isoformat().removesuffix("+00:00") + "Z"
def create_empty_memory() -> dict[str, Any]:
"""Create an empty memory structure."""
return {
"version": "1.0",
"lastUpdated": datetime.utcnow().isoformat() + "Z",
"lastUpdated": utc_now_iso_z(),
"user": {
"workContext": {"summary": "", "updatedAt": ""},
"personalContext": {"summary": "", "updatedAt": ""},
@@ -137,7 +142,7 @@ class FileMemoryStorage(MemoryStorage):
try:
file_path.parent.mkdir(parents=True, exist_ok=True)
memory_data["lastUpdated"] = datetime.utcnow().isoformat() + "Z"
memory_data["lastUpdated"] = utc_now_iso_z()
temp_path = file_path.with_suffix(".tmp")
with open(temp_path, "w", encoding="utf-8") as f:
@@ -5,14 +5,17 @@ import logging
import math
import re
import uuid
from datetime import datetime
from typing import Any
from deerflow.agents.memory.prompt import (
MEMORY_UPDATE_PROMPT,
format_conversation_for_update,
)
from deerflow.agents.memory.storage import create_empty_memory, get_memory_storage
from deerflow.agents.memory.storage import (
create_empty_memory,
get_memory_storage,
utc_now_iso_z,
)
from deerflow.config.memory_config import get_memory_config
from deerflow.models import create_chat_model
@@ -86,7 +89,7 @@ def create_memory_fact(
normalized_category = category.strip() or "context"
validated_confidence = _validate_confidence(confidence)
now = datetime.utcnow().isoformat() + "Z"
now = utc_now_iso_z()
memory_data = get_memory_data(agent_name)
updated_memory = dict(memory_data)
facts = list(memory_data.get("facts", []))
@@ -376,7 +379,7 @@ class MemoryUpdater:
Updated memory data.
"""
config = get_memory_config()
now = datetime.utcnow().isoformat() + "Z"
now = utc_now_iso_z()
# Update user sections
user_updates = update_data.get("user", {})
@@ -1,5 +1,6 @@
"""Middleware for intercepting clarification requests and presenting them to the user."""
import json
import logging
from collections.abc import Callable
from typing import override
@@ -60,6 +61,20 @@ class ClarificationMiddleware(AgentMiddleware[ClarificationMiddlewareState]):
context = args.get("context")
options = args.get("options", [])
# Some models (e.g. Qwen3-Max) serialize array parameters as JSON strings
# instead of native arrays. Deserialize and normalize so `options`
# is always a list for the rendering logic below.
if isinstance(options, str):
try:
options = json.loads(options)
except (json.JSONDecodeError, TypeError):
options = [options]
if options is None:
options = []
elif not isinstance(options, list):
options = [options]
# Type-specific icons
type_icons = {
"missing_info": "",
@@ -33,30 +33,92 @@ _DEFAULT_WINDOW_SIZE = 20 # track last N tool calls
_DEFAULT_MAX_TRACKED_THREADS = 100 # LRU eviction limit
def _normalize_tool_call_args(raw_args: object) -> tuple[dict, str | None]:
"""Normalize tool call args to a dict plus an optional fallback key.
Some providers serialize ``args`` as a JSON string instead of a dict.
We defensively parse those cases so loop detection does not crash while
still preserving a stable fallback key for non-dict payloads.
"""
if isinstance(raw_args, dict):
return raw_args, None
if isinstance(raw_args, str):
try:
parsed = json.loads(raw_args)
except (TypeError, ValueError, json.JSONDecodeError):
return {}, raw_args
if isinstance(parsed, dict):
return parsed, None
return {}, json.dumps(parsed, sort_keys=True, default=str)
if raw_args is None:
return {}, None
return {}, json.dumps(raw_args, sort_keys=True, default=str)
def _stable_tool_key(name: str, args: dict, fallback_key: str | None) -> str:
"""Derive a stable key from salient args without overfitting to noise."""
if name == "read_file" and fallback_key is None:
path = args.get("path") or ""
start_line = args.get("start_line")
end_line = args.get("end_line")
bucket_size = 200
try:
start_line = int(start_line) if start_line is not None else 1
except (TypeError, ValueError):
start_line = 1
try:
end_line = int(end_line) if end_line is not None else start_line
except (TypeError, ValueError):
end_line = start_line
start_line, end_line = sorted((start_line, end_line))
bucket_start = max(start_line, 1)
bucket_end = max(end_line, 1)
bucket_start = (bucket_start - 1) // bucket_size
bucket_end = (bucket_end - 1) // bucket_size
return f"{path}:{bucket_start}-{bucket_end}"
# write_file / str_replace are content-sensitive: same path may be updated
# with different payloads during iteration. Using only salient fields (path)
# can collapse distinct calls, so we hash full args to reduce false positives.
if name in {"write_file", "str_replace"}:
if fallback_key is not None:
return fallback_key
return json.dumps(args, sort_keys=True, default=str)
salient_fields = ("path", "url", "query", "command", "pattern", "glob", "cmd")
stable_args = {field: args[field] for field in salient_fields if args.get(field) is not None}
if stable_args:
return json.dumps(stable_args, sort_keys=True, default=str)
if fallback_key is not None:
return fallback_key
return json.dumps(args, sort_keys=True, default=str)
def _hash_tool_calls(tool_calls: list[dict]) -> str:
"""Deterministic hash of a set of tool calls (name + args).
"""Deterministic hash of a set of tool calls (name + stable key).
This is intended to be order-independent: the same multiset of tool calls
should always produce the same hash, regardless of their input order.
"""
# First normalize each tool call to a minimal (name, args) structure.
normalized: list[dict] = []
# Normalize each tool call to a stable (name, key) structure.
normalized: list[str] = []
for tc in tool_calls:
normalized.append(
{
"name": tc.get("name", ""),
"args": tc.get("args", {}),
}
)
name = tc.get("name", "")
args, fallback_key = _normalize_tool_call_args(tc.get("args", {}))
key = _stable_tool_key(name, args, fallback_key)
# Sort by both name and a deterministic serialization of args so that
# permutations of the same multiset of calls yield the same ordering.
normalized.sort(
key=lambda tc: (
tc["name"],
json.dumps(tc["args"], sort_keys=True, default=str),
)
)
normalized.append(f"{name}:{key}")
# Sort so permutations of the same multiset of calls yield the same ordering.
normalized.sort()
blob = json.dumps(normalized, sort_keys=True, default=str)
return hashlib.md5(blob.encode()).hexdigest()[:12]
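# Illustrative sketch of the properties described above (hypothetical demo
# helper; assumes the updated _hash_tool_calls/_stable_tool_key behavior):
# permutations of the same multiset of calls hash identically, and read_file
# ranges within the same 200-line bucket collapse to one stable key.
def _demo_order_independent_hash() -> None:
    calls = [
        {"name": "read_file", "args": {"path": "a.py", "start_line": 10, "end_line": 40}},
        {"name": "bash", "args": {"command": "ls"}},
    ]
    assert _hash_tool_calls(calls) == _hash_tool_calls(list(reversed(calls)))
    key_a = _stable_tool_key("read_file", {"path": "a.py", "start_line": 5, "end_line": 20}, None)
    key_b = _stable_tool_key("read_file", {"path": "a.py", "start_line": 150, "end_line": 190}, None)
    assert key_a == key_b  # both ranges fall in the first 200-line bucket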
@@ -23,25 +23,119 @@ logger = logging.getLogger(__name__)
# Each pattern is compiled once at import time.
_HIGH_RISK_PATTERNS: list[re.Pattern[str]] = [
re.compile(r"rm\s+-[^\s]*r[^\s]*\s+(/\*?|~/?\*?|/home\b|/root\b)\s*$"), # rm -rf / /* ~ /home /root
re.compile(r"(curl|wget).+\|\s*(ba)?sh"), # curl|sh, wget|sh
# --- original rules (retained) ---
re.compile(r"rm\s+-[^\s]*r[^\s]*\s+(/\*?|~/?\*?|/home\b|/root\b)\s*$"),
re.compile(r"dd\s+if="),
re.compile(r"mkfs"),
re.compile(r"cat\s+/etc/shadow"),
re.compile(r">\s*/etc/"), # overwrite /etc/ files
re.compile(r">+\s*/etc/"),
# --- pipe to sh/bash (generalised, replaces old curl|sh rule) ---
re.compile(r"\|\s*(ba)?sh\b"),
# --- command substitution (targeted only dangerous executables) ---
re.compile(r"[`$]\(?\s*(curl|wget|bash|sh|python|ruby|perl|base64)"),
# --- base64 decode piped to execution ---
re.compile(r"base64\s+.*-d.*\|"),
# --- overwrite system binaries ---
re.compile(r">+\s*(/usr/bin/|/bin/|/sbin/)"),
# --- overwrite shell startup files ---
re.compile(r">+\s*~/?\.(bashrc|profile|zshrc|bash_profile)"),
# --- process environment leakage ---
re.compile(r"/proc/[^/]+/environ"),
# --- dynamic linker hijack (one-step escalation) ---
re.compile(r"\b(LD_PRELOAD|LD_LIBRARY_PATH)\s*="),
# --- bash built-in networking (bypasses tool allowlists) ---
re.compile(r"/dev/tcp/"),
# --- fork bomb ---
re.compile(r"\S+\(\)\s*\{[^}]*\|\s*\S+\s*&"), # :(){ :|:& };:
re.compile(r"while\s+true.*&\s*done"), # while true; do bash & done
]
_MEDIUM_RISK_PATTERNS: list[re.Pattern[str]] = [
re.compile(r"chmod\s+777"), # overly permissive, but reversible
re.compile(r"pip\s+install"),
re.compile(r"pip3\s+install"),
re.compile(r"chmod\s+777"),
re.compile(r"pip3?\s+install"),
re.compile(r"apt(-get)?\s+install"),
# sudo/su: no-op under Docker root; warn so LLM is aware
re.compile(r"\b(sudo|su)\b"),
# PATH modification: long attack chain, warn rather than block
re.compile(r"\bPATH\s*="),
]
def _classify_command(command: str) -> str:
"""Return 'block', 'warn', or 'pass'."""
# Normalize for matching (collapse whitespace)
def _split_compound_command(command: str) -> list[str]:
"""Split a compound command into sub-commands (quote-aware).
Scans the raw command string so unquoted shell control operators are
recognised even when they are not surrounded by whitespace
(e.g. ``safe;rm -rf /`` or ``rm -rf /&&echo ok``). Operators inside
quotes are ignored. If the command ends with an unclosed quote or a
dangling escape, return the whole command unchanged (fail-closed: it is
safer to classify the unsplit string than to silently drop parts).
"""
parts: list[str] = []
current: list[str] = []
in_single_quote = False
in_double_quote = False
escaping = False
index = 0
while index < len(command):
char = command[index]
if escaping:
current.append(char)
escaping = False
index += 1
continue
if char == "\\" and not in_single_quote:
current.append(char)
escaping = True
index += 1
continue
if char == "'" and not in_double_quote:
in_single_quote = not in_single_quote
current.append(char)
index += 1
continue
if char == '"' and not in_single_quote:
in_double_quote = not in_double_quote
current.append(char)
index += 1
continue
if not in_single_quote and not in_double_quote:
if command.startswith("&&", index) or command.startswith("||", index):
part = "".join(current).strip()
if part:
parts.append(part)
current = []
index += 2
continue
if char == ";":
part = "".join(current).strip()
if part:
parts.append(part)
current = []
index += 1
continue
current.append(char)
index += 1
# Unclosed quote or dangling escape → fail-closed, return whole command
if in_single_quote or in_double_quote or escaping:
return [command]
part = "".join(current).strip()
if part:
parts.append(part)
return parts if parts else [command]
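# Quick illustration of the quote-aware splitting (hypothetical demo helper;
# commands invented): unquoted operators split even without surrounding
# whitespace, quoted ones do not, and an unclosed quote keeps the command whole.
def _demo_split_compound_command() -> None:
    assert _split_compound_command("safe;rm -rf /&&echo ok") == ["safe", "rm -rf /", "echo ok"]
    assert _split_compound_command('echo "a;b" && ls') == ['echo "a;b"', "ls"]
    assert _split_compound_command("echo 'unterminated") == ["echo 'unterminated"]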
def _classify_single_command(command: str) -> str:
"""Classify a single (non-compound) command. Return 'block', 'warn', or 'pass'."""
normalized = " ".join(command.split())
for pattern in _HIGH_RISK_PATTERNS:
@@ -66,6 +160,35 @@ def _classify_command(command: str) -> str:
return "pass"
def _classify_command(command: str) -> str:
"""Return 'block', 'warn', or 'pass'.
Strategy:
1. First scan the *whole* raw command against high-risk patterns. This
catches structural attacks like ``while true; do bash & done`` or
``:(){ :|:& };:`` that span multiple shell statements; splitting them
on ``;`` would destroy the pattern context.
2. Then split compound commands (e.g. ``cmd1 && cmd2 ; cmd3``) and
classify each sub-command independently. The most severe verdict wins.
"""
# Pass 1: whole-command high-risk scan (catches multi-statement patterns)
normalized = " ".join(command.split())
for pattern in _HIGH_RISK_PATTERNS:
if pattern.search(normalized):
return "block"
# Pass 2: per-sub-command classification
sub_commands = _split_compound_command(command)
worst = "pass"
for sub in sub_commands:
verdict = _classify_single_command(sub)
if verdict == "block":
return "block" # short-circuit: can't get worse
if verdict == "warn":
worst = "warn"
return worst
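# Sketch of the two-pass verdicts (hypothetical demo helper; commands invented):
# the whole-string scan blocks multi-statement attacks before splitting, and
# the per-sub-command pass propagates the most severe verdict.
def _demo_classify_command() -> None:
    assert _classify_command("echo hi && rm -rf /") == "block"  # caught by the whole-command pass
    assert _classify_command("pip install requests; ls") == "warn"  # warn outranks pass
    assert _classify_command("ls && cat README.md") == "pass"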
# ---------------------------------------------------------------------------
# Middleware
# ---------------------------------------------------------------------------
@@ -1,10 +1,11 @@
"""Middleware for automatic thread title generation."""
import logging
from typing import NotRequired, override
from typing import Any, NotRequired, override
from langchain.agents import AgentState
from langchain.agents.middleware import AgentMiddleware
from langgraph.config import get_config
from langgraph.runtime import Runtime
from deerflow.config.title_config import get_title_config
@@ -100,6 +101,20 @@ class TitleMiddleware(AgentMiddleware[TitleMiddlewareState]):
return user_msg[:fallback_chars].rstrip() + "..."
return user_msg if user_msg else "New Conversation"
def _get_runnable_config(self) -> dict[str, Any]:
"""Inherit the parent RunnableConfig and add middleware tag.
This ensures RunJournal identifies LLM calls from this middleware
as ``middleware:title`` instead of ``lead_agent``.
"""
try:
parent = get_config()
except Exception:
parent = {}
config = {**parent}
config["tags"] = [*(config.get("tags") or []), "middleware:title"]
return config
def _generate_title_result(self, state: TitleMiddlewareState) -> dict | None:
"""Generate a local fallback title without blocking on an LLM call."""
if not self._should_generate_title(state):
@@ -121,7 +136,7 @@ class TitleMiddleware(AgentMiddleware[TitleMiddlewareState]):
model = create_chat_model(name=config.model_name, thinking_enabled=False)
else:
model = create_chat_model(thinking_enabled=False)
response = await model.ainvoke(prompt)
response = await model.ainvoke(prompt, config=self._get_runnable_config())
title = self._parse_title(response.content)
if title:
return {"title": title}
@@ -25,7 +25,7 @@ import uuid
from collections.abc import Generator, Sequence
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any
from typing import Any, Literal
from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware
@@ -55,6 +55,9 @@ from deerflow.uploads.manager import (
logger = logging.getLogger(__name__)
StreamEventType = Literal["values", "messages-tuple", "custom", "end"]
@dataclass
class StreamEvent:
"""A single event from the streaming agent response.
@@ -69,7 +72,7 @@ class StreamEvent:
data: Event payload. Contents vary by type.
"""
type: str
type: StreamEventType
data: dict[str, Any] = field(default_factory=dict)
@@ -254,13 +257,53 @@ class DeerFlowClient:
return get_available_tools(model_name=model_name, subagent_enabled=subagent_enabled)
@staticmethod
def _serialize_tool_calls(tool_calls) -> list[dict]:
"""Reshape LangChain tool_calls into the wire format used in events."""
return [{"name": tc["name"], "args": tc["args"], "id": tc.get("id")} for tc in tool_calls]
@staticmethod
def _ai_text_event(msg_id: str | None, text: str, usage: dict | None) -> "StreamEvent":
"""Build a ``messages-tuple`` AI text event, attaching usage when present."""
data: dict[str, Any] = {"type": "ai", "content": text, "id": msg_id}
if usage:
data["usage_metadata"] = usage
return StreamEvent(type="messages-tuple", data=data)
@staticmethod
def _ai_tool_calls_event(msg_id: str | None, tool_calls) -> "StreamEvent":
"""Build a ``messages-tuple`` AI tool-calls event."""
return StreamEvent(
type="messages-tuple",
data={
"type": "ai",
"content": "",
"id": msg_id,
"tool_calls": DeerFlowClient._serialize_tool_calls(tool_calls),
},
)
@staticmethod
def _tool_message_event(msg: ToolMessage) -> "StreamEvent":
"""Build a ``messages-tuple`` tool-result event from a ToolMessage."""
return StreamEvent(
type="messages-tuple",
data={
"type": "tool",
"content": DeerFlowClient._extract_text(msg.content),
"name": msg.name,
"tool_call_id": msg.tool_call_id,
"id": msg.id,
},
)
@staticmethod
def _serialize_message(msg) -> dict:
"""Serialize a LangChain message to a plain dict for values events."""
if isinstance(msg, AIMessage):
d: dict[str, Any] = {"type": "ai", "content": msg.content, "id": getattr(msg, "id", None)}
if msg.tool_calls:
d["tool_calls"] = [{"name": tc["name"], "args": tc["args"], "id": tc.get("id")} for tc in msg.tool_calls]
d["tool_calls"] = DeerFlowClient._serialize_tool_calls(msg.tool_calls)
if getattr(msg, "usage_metadata", None):
d["usage_metadata"] = msg.usage_metadata
return d
@@ -315,6 +358,108 @@ class DeerFlowClient:
return "\n".join(pieces) if pieces else ""
return str(content)
# ------------------------------------------------------------------
# Public API — threads
# ------------------------------------------------------------------
def list_threads(self, limit: int = 10) -> dict:
"""List the recent N threads.
Args:
limit: Maximum number of threads to return. Default is 10.
Returns:
Dict with "thread_list" key containing list of thread info dicts,
sorted by thread creation time descending.
"""
checkpointer = self._checkpointer
if checkpointer is None:
from deerflow.agents.checkpointer.provider import get_checkpointer
checkpointer = get_checkpointer()
thread_info_map = {}
for cp in checkpointer.list(config=None, limit=limit):
cfg = cp.config.get("configurable", {})
thread_id = cfg.get("thread_id")
if not thread_id:
continue
ts = cp.checkpoint.get("ts")
checkpoint_id = cfg.get("checkpoint_id")
if thread_id not in thread_info_map:
channel_values = cp.checkpoint.get("channel_values", {})
thread_info_map[thread_id] = {
"thread_id": thread_id,
"created_at": ts,
"updated_at": ts,
"latest_checkpoint_id": checkpoint_id,
"title": channel_values.get("title"),
}
else:
# Explicitly compare timestamps to ensure accuracy when iterating over unordered namespaces.
# Treat None as "missing" and only compare when existing values are non-None.
if ts is not None:
current_created = thread_info_map[thread_id]["created_at"]
if current_created is None or ts < current_created:
thread_info_map[thread_id]["created_at"] = ts
current_updated = thread_info_map[thread_id]["updated_at"]
if current_updated is None or ts > current_updated:
thread_info_map[thread_id]["updated_at"] = ts
thread_info_map[thread_id]["latest_checkpoint_id"] = checkpoint_id
channel_values = cp.checkpoint.get("channel_values", {})
thread_info_map[thread_id]["title"] = channel_values.get("title")
threads = list(thread_info_map.values())
threads.sort(key=lambda x: x.get("created_at") or "", reverse=True)
return {"thread_list": threads[:limit]}
def get_thread(self, thread_id: str) -> dict:
"""Get the complete thread record, including all node execution records.
Args:
thread_id: Thread ID.
Returns:
Dict containing the thread's full checkpoint history.
"""
checkpointer = self._checkpointer
if checkpointer is None:
from deerflow.agents.checkpointer.provider import get_checkpointer
checkpointer = get_checkpointer()
config = {"configurable": {"thread_id": thread_id}}
checkpoints = []
for cp in checkpointer.list(config):
channel_values = dict(cp.checkpoint.get("channel_values", {}))
if "messages" in channel_values:
channel_values["messages"] = [self._serialize_message(m) if hasattr(m, "content") else m for m in channel_values["messages"]]
cfg = cp.config.get("configurable", {})
parent_cfg = cp.parent_config.get("configurable", {}) if cp.parent_config else {}
checkpoints.append(
{
"checkpoint_id": cfg.get("checkpoint_id"),
"parent_checkpoint_id": parent_cfg.get("checkpoint_id"),
"ts": cp.checkpoint.get("ts"),
"metadata": cp.metadata,
"values": channel_values,
"pending_writes": [{"task_id": w[0], "channel": w[1], "value": w[2]} for w in getattr(cp, "pending_writes", [])],
}
)
# Sort globally by timestamp to prevent partial ordering issues caused by different namespaces (e.g., subgraphs)
checkpoints.sort(key=lambda x: x["ts"] if x["ts"] else "")
return {"thread_id": thread_id, "checkpoints": checkpoints}
# ------------------------------------------------------------------
# Public API — conversation
# ------------------------------------------------------------------
@@ -336,6 +481,53 @@ class DeerFlowClient:
consumers can switch between HTTP streaming and embedded mode
without changing their event-handling logic.
Token-level streaming
~~~~~~~~~~~~~~~~~~~~~
This method subscribes to LangGraph's ``messages`` stream mode, so
``messages-tuple`` events for AI text are emitted as **deltas** as
the model generates tokens, not as one cumulative dump at node
completion. Each delta carries a stable ``id``; consumers that
want the full text must accumulate ``content`` per ``id``.
``chat()`` already does this for you.
Tool calls and tool results are still emitted once per logical
message. ``values`` events continue to carry full state snapshots
after each graph node finishes; AI text already delivered via the
``messages`` stream is **not** re-synthesized from the snapshot to
avoid duplicate deliveries.
Why not reuse Gateway's ``run_agent``?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Gateway (``runtime/runs/worker.py``) has a complete streaming
pipeline: ``run_agent`` -> ``StreamBridge`` -> ``sse_consumer``. It
looks like this client duplicates that work, but the two paths
serve different audiences and **cannot** share execution:
* ``run_agent`` is ``async def`` and uses ``agent.astream()``;
this method is a sync generator using ``agent.stream()`` so
callers can write ``for event in client.stream(...)`` without
touching asyncio. Bridging the two would require spinning up
an event loop + thread per call.
* Gateway events are JSON-serialized by ``serialize()`` for SSE
wire transmission. This client yields in-process stream event
payloads directly as Python data structures (``StreamEvent``
with ``data`` as a plain ``dict``), without the extra
JSON/SSE serialization layer used for HTTP delivery.
* ``StreamBridge`` is an asyncio queue that decouples producers from
consumers across an HTTP boundary (``Last-Event-ID`` replay,
heartbeats, multi-subscriber fan-out). A single in-process
caller with a direct iterator needs none of that.
So ``DeerFlowClient.stream()`` is a parallel, sync, in-process
consumer of the same ``create_agent()`` factory, not a wrapper
around Gateway. The two paths **should** stay in sync on which
LangGraph stream modes they subscribe to; that invariant is
enforced by ``tests/test_client.py::test_messages_mode_emits_token_deltas``
rather than by a shared constant, because the three layers
(Graph, Platform SDK, HTTP) each use their own naming
(``messages`` vs ``messages-tuple``) and cannot literally share
a string.
Args:
message: User message text.
thread_id: Thread ID for conversation context. Auto-generated if None.
@@ -346,8 +538,8 @@ class DeerFlowClient:
StreamEvent with one of:
- type="values" data={"title": str|None, "messages": [...], "artifacts": [...]}
- type="custom" data={...}
- type="messages-tuple" data={"type": "ai", "content": str, "id": str}
- type="messages-tuple" data={"type": "ai", "content": str, "id": str, "usage_metadata": {...}}
- type="messages-tuple" data={"type": "ai", "content": <delta>, "id": str}
- type="messages-tuple" data={"type": "ai", "content": <delta>, "id": str, "usage_metadata": {...}}
- type="messages-tuple" data={"type": "ai", "content": "", "id": str, "tool_calls": [...]}
- type="messages-tuple" data={"type": "tool", "content": str, "name": str, "tool_call_id": str, "id": str}
- type="end" data={"usage": {"input_tokens": int, "output_tokens": int, "total_tokens": int}}
@@ -364,13 +556,47 @@ class DeerFlowClient:
context["agent_name"] = self._agent_name
seen_ids: set[str] = set()
# Cross-mode handoff: ids already streamed via LangGraph ``messages``
# mode so the ``values`` path skips re-synthesis of the same message.
streamed_ids: set[str] = set()
# The same message id carries identical cumulative ``usage_metadata``
# in both the final ``messages`` chunk and the values snapshot —
# count it only on whichever arrives first.
counted_usage_ids: set[str] = set()
cumulative_usage: dict[str, int] = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
def _account_usage(msg_id: str | None, usage: Any) -> dict | None:
"""Add *usage* to cumulative totals if this id has not been counted.
``usage`` is a ``langchain_core.messages.UsageMetadata`` TypedDict
or ``None``; typed as ``Any`` because TypedDicts are not
structurally assignable to plain ``dict`` under strict type
checking. Returns the normalized usage dict (for attaching
to an event) when we accepted it, otherwise ``None``.
"""
if not usage:
return None
if msg_id and msg_id in counted_usage_ids:
return None
if msg_id:
counted_usage_ids.add(msg_id)
input_tokens = usage.get("input_tokens", 0) or 0
output_tokens = usage.get("output_tokens", 0) or 0
total_tokens = usage.get("total_tokens", 0) or 0
cumulative_usage["input_tokens"] += input_tokens
cumulative_usage["output_tokens"] += output_tokens
cumulative_usage["total_tokens"] += total_tokens
return {
"input_tokens": input_tokens,
"output_tokens": output_tokens,
"total_tokens": total_tokens,
}
for item in self._agent.stream(
state,
config=config,
context=context,
stream_mode=["values", "custom"],
stream_mode=["values", "messages", "custom"],
):
if isinstance(item, tuple) and len(item) == 2:
mode, chunk = item
@@ -382,6 +608,36 @@ class DeerFlowClient:
yield StreamEvent(type="custom", data=chunk)
continue
if mode == "messages":
# LangGraph ``messages`` mode emits ``(message_chunk, metadata)``.
if isinstance(chunk, tuple) and len(chunk) == 2:
msg_chunk, _metadata = chunk
else:
msg_chunk = chunk
msg_id = getattr(msg_chunk, "id", None)
if isinstance(msg_chunk, AIMessage):
text = self._extract_text(msg_chunk.content)
counted_usage = _account_usage(msg_id, msg_chunk.usage_metadata)
if text:
if msg_id:
streamed_ids.add(msg_id)
yield self._ai_text_event(msg_id, text, counted_usage)
if msg_chunk.tool_calls:
if msg_id:
streamed_ids.add(msg_id)
yield self._ai_tool_calls_event(msg_id, msg_chunk.tool_calls)
elif isinstance(msg_chunk, ToolMessage):
if msg_id:
streamed_ids.add(msg_id)
yield self._tool_message_event(msg_chunk)
continue
# mode == "values"
messages = chunk.get("messages", [])
for msg in messages:
@@ -391,47 +647,25 @@ class DeerFlowClient:
if msg_id:
seen_ids.add(msg_id)
# Already streamed via ``messages`` mode; only (defensively)
# capture usage here and skip re-synthesizing the event.
if msg_id and msg_id in streamed_ids:
if isinstance(msg, AIMessage):
_account_usage(msg_id, getattr(msg, "usage_metadata", None))
continue
if isinstance(msg, AIMessage):
# Track token usage from AI messages
usage = getattr(msg, "usage_metadata", None)
if usage:
cumulative_usage["input_tokens"] += usage.get("input_tokens", 0) or 0
cumulative_usage["output_tokens"] += usage.get("output_tokens", 0) or 0
cumulative_usage["total_tokens"] += usage.get("total_tokens", 0) or 0
counted_usage = _account_usage(msg_id, msg.usage_metadata)
if msg.tool_calls:
yield StreamEvent(
type="messages-tuple",
data={
"type": "ai",
"content": "",
"id": msg_id,
"tool_calls": [{"name": tc["name"], "args": tc["args"], "id": tc.get("id")} for tc in msg.tool_calls],
},
)
yield self._ai_tool_calls_event(msg_id, msg.tool_calls)
text = self._extract_text(msg.content)
if text:
event_data: dict[str, Any] = {"type": "ai", "content": text, "id": msg_id}
if usage:
event_data["usage_metadata"] = {
"input_tokens": usage.get("input_tokens", 0) or 0,
"output_tokens": usage.get("output_tokens", 0) or 0,
"total_tokens": usage.get("total_tokens", 0) or 0,
}
yield StreamEvent(type="messages-tuple", data=event_data)
yield self._ai_text_event(msg_id, text, counted_usage)
elif isinstance(msg, ToolMessage):
yield StreamEvent(
type="messages-tuple",
data={
"type": "tool",
"content": self._extract_text(msg.content),
"name": getattr(msg, "name", None),
"tool_call_id": getattr(msg, "tool_call_id", None),
"id": msg_id,
},
)
yield self._tool_message_event(msg)
# Emit a values event for each state snapshot
yield StreamEvent(
@@ -448,10 +682,12 @@ class DeerFlowClient:
def chat(self, message: str, *, thread_id: str | None = None, **kwargs) -> str:
"""Send a message and return the final text response.
Convenience wrapper around :meth:`stream` that returns only the
**last** AI text from ``messages-tuple`` events. If the agent emits
multiple text segments in one turn, intermediate segments are
discarded. Use :meth:`stream` directly to capture all events.
Convenience wrapper around :meth:`stream` that accumulates delta
``messages-tuple`` events per ``id`` and returns the text of the
**last** AI message to complete. Intermediate AI messages (e.g.
planner drafts) are discarded; only the final id's accumulated
text is returned. Use :meth:`stream` directly if you need every
delta as it arrives.
Args:
message: User message text.
@@ -459,15 +695,21 @@ class DeerFlowClient:
**kwargs: Override client defaults (same as stream()).
Returns:
The last AI message text, or empty string if no response.
The accumulated text of the last AI message, or empty string
if no AI text was produced.
"""
last_text = ""
# Per-id delta lists joined once at the end — avoids the O(n²) cost
# of repeated ``str + str`` on a growing buffer for long responses.
chunks: dict[str, list[str]] = {}
last_id: str = ""
for event in self.stream(message, thread_id=thread_id, **kwargs):
if event.type == "messages-tuple" and event.data.get("type") == "ai":
content = event.data.get("content", "")
if content:
last_text = content
return last_text
msg_id = event.data.get("id") or ""
delta = event.data.get("content", "")
if delta:
chunks.setdefault(msg_id, []).append(delta)
last_id = msg_id
return "".join(chunks.get(last_id, ()))
# ------------------------------------------------------------------
# Public API — configuration queries
@@ -112,6 +112,9 @@ class AioSandboxProvider(SandboxProvider):
atexit.register(self.shutdown)
self._register_signal_handlers()
# Reconcile orphaned containers from previous process lifecycles
self._reconcile_orphans()
# Start idle checker if enabled
if self._config.get("idle_timeout", DEFAULT_IDLE_TIMEOUT) > 0:
self._start_idle_checker()
@@ -175,6 +178,51 @@ class AioSandboxProvider(SandboxProvider):
resolved[key] = str(value)
return resolved
# ── Startup reconciliation ────────────────────────────────────────────
def _reconcile_orphans(self) -> None:
"""Reconcile orphaned containers left by previous process lifecycles.
On startup, enumerate all running containers matching our prefix
and adopt them all into the warm pool. The idle checker will reclaim
containers that nobody re-acquires within ``idle_timeout``.
All containers are adopted unconditionally because we cannot
distinguish "orphaned" from "actively used by another process"
based on age alone; ``idle_timeout`` represents inactivity, not
uptime. Adopting into the warm pool and letting the idle checker
decide avoids destroying containers that a concurrent process may
still be using.
This closes the fundamental gap where in-memory state loss (process
restart, crash, SIGKILL) leaves Docker containers running forever.
"""
try:
running = self._backend.list_running()
except Exception as e:
logger.warning(f"Failed to enumerate running containers during startup reconciliation: {e}")
return
if not running:
return
current_time = time.time()
adopted = 0
for info in running:
age = current_time - info.created_at if info.created_at > 0 else float("inf")
# Single lock acquisition per container: atomic check-and-insert.
# Avoids a TOCTOU window between the "already tracked?" check and
# the warm-pool insert.
with self._lock:
if info.sandbox_id in self._sandboxes or info.sandbox_id in self._warm_pool:
continue
self._warm_pool[info.sandbox_id] = (info, current_time)
adopted += 1
logger.info(f"Adopted container {info.sandbox_id} into warm pool (age: {age:.0f}s)")
logger.info(f"Startup reconciliation complete: {adopted} adopted into warm pool, {len(running)} total found")
# ── Deterministic ID ─────────────────────────────────────────────────
@staticmethod
@@ -316,13 +364,23 @@ class AioSandboxProvider(SandboxProvider):
# ── Signal handling ──────────────────────────────────────────────────
def _register_signal_handlers(self) -> None:
"""Register signal handlers for graceful shutdown."""
"""Register signal handlers for graceful shutdown.
Handles SIGTERM, SIGINT, and SIGHUP (terminal close) to ensure
sandbox containers are cleaned up even when the user closes the terminal.
"""
self._original_sigterm = signal.getsignal(signal.SIGTERM)
self._original_sigint = signal.getsignal(signal.SIGINT)
self._original_sighup = signal.getsignal(signal.SIGHUP) if hasattr(signal, "SIGHUP") else None
def signal_handler(signum, frame):
self.shutdown()
original = self._original_sigterm if signum == signal.SIGTERM else self._original_sigint
if signum == signal.SIGTERM:
original = self._original_sigterm
elif hasattr(signal, "SIGHUP") and signum == signal.SIGHUP:
original = self._original_sighup
else:
original = self._original_sigint
if callable(original):
original(signum, frame)
elif original == signal.SIG_DFL:
@@ -332,6 +390,8 @@ class AioSandboxProvider(SandboxProvider):
try:
signal.signal(signal.SIGTERM, signal_handler)
signal.signal(signal.SIGINT, signal_handler)
if hasattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, signal_handler)
except ValueError:
logger.debug("Could not register signal handlers (not main thread)")
@@ -96,3 +96,19 @@ class SandboxBackend(ABC):
SandboxInfo if found and healthy, None otherwise.
"""
...
def list_running(self) -> list[SandboxInfo]:
"""Enumerate all running sandboxes managed by this backend.
Used for startup reconciliation: when the process restarts, it needs
to discover containers started by previous processes so they can be
adopted into the warm pool or destroyed if idle too long.
The default implementation returns an empty list, which is correct
for backends that don't manage local containers (e.g., RemoteSandboxBackend
delegates lifecycle to the provisioner, which handles its own cleanup).
Returns:
A list of SandboxInfo for all currently running sandboxes.
"""
return []
@@ -6,9 +6,11 @@ Handles container lifecycle, port allocation, and cross-process container discov
from __future__ import annotations
import json
import logging
import os
import subprocess
from datetime import datetime
from deerflow.utils.network import get_free_port, release_port
@@ -18,6 +20,52 @@ from .sandbox_info import SandboxInfo
logger = logging.getLogger(__name__)
def _parse_docker_timestamp(raw: str) -> float:
"""Parse Docker's ISO 8601 timestamp into a Unix epoch float.
Docker returns timestamps with nanosecond precision and a trailing ``Z``
(e.g. ``2026-04-08T01:22:50.123456789Z``). Python's ``fromisoformat``
accepts at most microseconds and (pre-3.11) does not accept ``Z``, so the
string is normalized before parsing. Returns ``0.0`` on empty input or
parse failure so callers can use ``0.0`` as a sentinel for "unknown age".
"""
if not raw:
return 0.0
try:
s = raw.strip()
if "." in s:
dot_pos = s.index(".")
tz_start = dot_pos + 1
while tz_start < len(s) and s[tz_start].isdigit():
tz_start += 1
frac = s[dot_pos + 1 : tz_start][:6] # truncate to microseconds
tz_suffix = s[tz_start:]
s = s[: dot_pos + 1] + frac + tz_suffix
if s.endswith("Z"):
s = s[:-1] + "+00:00"
return datetime.fromisoformat(s).timestamp()
except (ValueError, TypeError) as e:
logger.debug(f"Could not parse docker timestamp {raw!r}: {e}")
return 0.0
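# Illustrative check of the normalization above (hypothetical demo helper;
# timestamp value invented): nanoseconds are truncated to microseconds and
# the trailing "Z" becomes "+00:00" before fromisoformat() runs.
def _demo_parse_docker_timestamp() -> None:
    from datetime import UTC
    assert _parse_docker_timestamp("") == 0.0
    ts = _parse_docker_timestamp("2026-04-08T01:22:50.123456789Z")
    assert ts == datetime(2026, 4, 8, 1, 22, 50, 123456, tzinfo=UTC).timestamp()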
def _extract_host_port(inspect_entry: dict, container_port: int) -> int | None:
"""Extract the host port mapped to ``container_port/tcp`` from a docker inspect entry.
Returns None if the container has no port mapping for that port.
"""
try:
ports = (inspect_entry.get("NetworkSettings") or {}).get("Ports") or {}
bindings = ports.get(f"{container_port}/tcp") or []
if bindings:
host_port = bindings[0].get("HostPort")
if host_port:
return int(host_port)
except (ValueError, TypeError, AttributeError):
pass
return None
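# Minimal sketch of the inspect shape this helper reads (hypothetical demo
# helper; structure follows docker inspect output, port values invented):
def _demo_extract_host_port() -> None:
    entry = {"NetworkSettings": {"Ports": {"8080/tcp": [{"HostIp": "0.0.0.0", "HostPort": "32771"}]}}}
    assert _extract_host_port(entry, 8080) == 32771
    assert _extract_host_port({}, 8080) is None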
def _format_container_mount(runtime: str, host_path: str, container_path: str, read_only: bool) -> list[str]:
"""Format a bind-mount argument for the selected runtime.
@@ -172,8 +220,12 @@ class LocalContainerBackend(SandboxBackend):
def destroy(self, info: SandboxInfo) -> None:
"""Stop the container and release its port."""
if info.container_id:
self._stop_container(info.container_id)
# Prefer container_id, fall back to container_name (both accepted by docker stop).
# This ensures containers discovered via list_running() (which only has the name)
# can also be stopped.
stop_target = info.container_id or info.container_name
if stop_target:
self._stop_container(stop_target)
# Extract port from sandbox_url for release
try:
from urllib.parse import urlparse
@@ -222,6 +274,129 @@ class LocalContainerBackend(SandboxBackend):
container_name=container_name,
)
def list_running(self) -> list[SandboxInfo]:
"""Enumerate all running containers matching the configured prefix.
Uses a single ``docker ps`` call to list container names, then a
single batched ``docker inspect`` call to retrieve creation timestamp
and port mapping for all containers at once. Total subprocess calls:
2 (down from 2N+1 in the naive per-container approach).
Note: Docker's ``--filter name=`` performs *substring* matching,
so a secondary ``startswith`` check is applied to ensure only
containers with the exact prefix are included.
Containers without port mappings are still included (with empty
sandbox_url) so that startup reconciliation can adopt orphans
regardless of their port state.
"""
# Step 1: enumerate container names via docker ps
try:
result = subprocess.run(
[
self._runtime,
"ps",
"--filter",
f"name={self._container_prefix}-",
"--format",
"{{.Names}}",
],
capture_output=True,
text=True,
timeout=10,
)
if result.returncode != 0:
stderr = (result.stderr or "").strip()
logger.warning(
"Failed to list running containers with %s ps (returncode=%s, stderr=%s)",
self._runtime,
result.returncode,
stderr or "<empty>",
)
return []
if not result.stdout.strip():
return []
except (subprocess.CalledProcessError, subprocess.TimeoutExpired, FileNotFoundError, OSError) as e:
logger.warning(f"Failed to list running containers: {e}")
return []
# Filter to names matching our exact prefix (docker filter is substring-based)
container_names = [name.strip() for name in result.stdout.strip().splitlines() if name.strip().startswith(self._container_prefix + "-")]
if not container_names:
return []
# Step 2: batched docker inspect — single subprocess call for all containers
inspections = self._batch_inspect(container_names)
infos: list[SandboxInfo] = []
sandbox_host = os.environ.get("DEER_FLOW_SANDBOX_HOST", "localhost")
for container_name in container_names:
data = inspections.get(container_name)
if data is None:
# Container disappeared between ps and inspect, or inspect failed
continue
created_at, host_port = data
sandbox_id = container_name[len(self._container_prefix) + 1 :]
sandbox_url = f"http://{sandbox_host}:{host_port}" if host_port else ""
infos.append(
SandboxInfo(
sandbox_id=sandbox_id,
sandbox_url=sandbox_url,
container_name=container_name,
created_at=created_at,
)
)
logger.info(f"Found {len(infos)} running sandbox container(s)")
return infos
def _batch_inspect(self, container_names: list[str]) -> dict[str, tuple[float, int | None]]:
"""Batch-inspect containers in a single subprocess call.
Returns a mapping of ``container_name -> (created_at, host_port)``.
Missing containers or parse failures are silently dropped from the result.
"""
if not container_names:
return {}
try:
result = subprocess.run(
[self._runtime, "inspect", *container_names],
capture_output=True,
text=True,
timeout=15,
)
except (subprocess.CalledProcessError, subprocess.TimeoutExpired, FileNotFoundError, OSError) as e:
logger.warning(f"Failed to batch-inspect containers: {e}")
return {}
if result.returncode != 0:
stderr = (result.stderr or "").strip()
logger.warning(
"Failed to batch-inspect containers with %s inspect (returncode=%s, stderr=%s)",
self._runtime,
result.returncode,
stderr or "<empty>",
)
return {}
try:
payload = json.loads(result.stdout or "[]")
except json.JSONDecodeError as e:
logger.warning(f"Failed to parse docker inspect output as JSON: {e}")
return {}
out: dict[str, tuple[float, int | None]] = {}
for entry in payload:
# ``Name`` is prefixed with ``/`` in the docker inspect response
name = (entry.get("Name") or "").lstrip("/")
if not name:
continue
created_at = _parse_docker_timestamp(entry.get("Created", ""))
host_port = _extract_host_port(entry, 8080)
out[name] = (created_at, host_port)
return out
# ── Container operations ─────────────────────────────────────────────
def _start_container(
@@ -0,0 +1,79 @@
import json
from exa_py import Exa
from langchain.tools import tool
from deerflow.config import get_app_config
def _get_exa_client(tool_name: str = "web_search") -> Exa:
config = get_app_config().get_tool_config(tool_name)
api_key = None
if config is not None and "api_key" in config.model_extra:
api_key = config.model_extra.get("api_key")
return Exa(api_key=api_key)
@tool("web_search", parse_docstring=True)
def web_search_tool(query: str) -> str:
"""Search the web.
Args:
query: The query to search for.
"""
try:
config = get_app_config().get_tool_config("web_search")
max_results = 5
search_type = "auto"
contents_max_characters = 1000
if config is not None:
max_results = config.model_extra.get("max_results", max_results)
search_type = config.model_extra.get("search_type", search_type)
contents_max_characters = config.model_extra.get("contents_max_characters", contents_max_characters)
client = _get_exa_client()
res = client.search(
query,
type=search_type,
num_results=max_results,
contents={"highlights": {"max_characters": contents_max_characters}},
)
normalized_results = [
{
"title": result.title or "",
"url": result.url or "",
"snippet": "\n".join(result.highlights) if result.highlights else "",
}
for result in res.results
]
json_results = json.dumps(normalized_results, indent=2, ensure_ascii=False)
return json_results
except Exception as e:
return f"Error: {str(e)}"
@tool("web_fetch", parse_docstring=True)
def web_fetch_tool(url: str) -> str:
"""Fetch the contents of a web page at a given URL.
Only fetch EXACT URLs that have been provided directly by the user or have been returned in results from the web_search and web_fetch tools.
This tool can NOT access content that requires authentication, such as private Google Docs or pages behind login walls.
Do NOT add www. to URLs that do NOT have them.
URLs must include the schema: https://example.com is a valid URL while example.com is an invalid URL.
Args:
url: The URL to fetch the contents of.
"""
try:
client = _get_exa_client("web_fetch")
res = client.get_contents([url], text={"max_characters": 4096})
if res.results:
result = res.results[0]
title = result.title or "Untitled"
text = result.text or ""
return f"# {title}\n\n{text[:4096]}"
else:
return "Error: No results found"
except Exception as e:
return f"Error: {str(e)}"
@@ -6,10 +6,10 @@ from langchain.tools import tool
from deerflow.config import get_app_config
def _get_firecrawl_client() -> FirecrawlApp:
config = get_app_config().get_tool_config("web_search")
def _get_firecrawl_client(tool_name: str = "web_search") -> FirecrawlApp:
config = get_app_config().get_tool_config(tool_name)
api_key = None
if config is not None:
if config is not None and "api_key" in config.model_extra:
api_key = config.model_extra.get("api_key")
return FirecrawlApp(api_key=api_key) # type: ignore[arg-type]
@@ -27,7 +27,7 @@ def web_search_tool(query: str) -> str:
if config is not None:
max_results = config.model_extra.get("max_results", max_results)
client = _get_firecrawl_client()
client = _get_firecrawl_client("web_search")
result = client.search(query, limit=max_results)
# result.web contains list of SearchResultWeb objects
@@ -58,7 +58,7 @@ def web_fetch_tool(url: str) -> str:
url: The URL to fetch the contents of.
"""
try:
client = _get_firecrawl_client()
client = _get_firecrawl_client("web_fetch")
result = client.scrape(url, formats=["markdown"])
markdown_content = result.markdown or ""
@@ -10,10 +10,12 @@ from pydantic import BaseModel, ConfigDict, Field
from deerflow.config.acp_config import load_acp_config_from_dict
from deerflow.config.checkpointer_config import CheckpointerConfig, load_checkpointer_config_from_dict
from deerflow.config.database_config import DatabaseConfig
from deerflow.config.extensions_config import ExtensionsConfig
from deerflow.config.guardrails_config import GuardrailsConfig, load_guardrails_config_from_dict
from deerflow.config.memory_config import MemoryConfig, load_memory_config_from_dict
from deerflow.config.model_config import ModelConfig
from deerflow.config.run_events_config import RunEventsConfig
from deerflow.config.sandbox_config import SandboxConfig
from deerflow.config.skill_evolution_config import SkillEvolutionConfig
from deerflow.config.skills_config import SkillsConfig
@@ -56,6 +58,8 @@ class AppConfig(BaseModel):
subagents: SubagentsAppConfig = Field(default_factory=SubagentsAppConfig, description="Subagent runtime configuration")
guardrails: GuardrailsConfig = Field(default_factory=GuardrailsConfig, description="Guardrail middleware configuration")
model_config = ConfigDict(extra="allow", frozen=False)
database: DatabaseConfig = Field(default_factory=DatabaseConfig, description="Unified database backend configuration")
run_events: RunEventsConfig = Field(default_factory=RunEventsConfig, description="Run event storage configuration")
checkpointer: CheckpointerConfig | None = Field(default=None, description="Checkpointer configuration")
stream_bridge: StreamBridgeConfig | None = Field(default=None, description="Stream bridge configuration")
@@ -0,0 +1,92 @@
"""Unified database backend configuration.
Controls BOTH the LangGraph checkpointer and the DeerFlow application
persistence layer (runs, threads metadata, users, etc.). The user
configures one backend; the system handles physical separation details.
SQLite mode: checkpointer and app use different .db files in the same
directory to avoid write-lock contention. This is automatic.
Postgres mode: both use the same database URL but maintain independent
connection pools with different lifecycles.
Memory mode: checkpointer uses MemorySaver, app uses in-memory stores.
No database is initialized.
Sensitive values (postgres_url) should use $VAR syntax in config.yaml
to reference environment variables from .env:
database:
backend: postgres
postgres_url: $DATABASE_URL
The $VAR resolution is handled by AppConfig.resolve_env_variables()
before this config is instantiated -- DatabaseConfig itself does not
need to do any environment variable processing.
"""
from __future__ import annotations
import os
from typing import Literal
from pydantic import BaseModel, Field
class DatabaseConfig(BaseModel):
backend: Literal["memory", "sqlite", "postgres"] = Field(
default="memory",
description=("Storage backend for both checkpointer and application data. 'memory' for development (no persistence across restarts), 'sqlite' for single-node deployment, 'postgres' for production multi-node deployment."),
)
sqlite_dir: str = Field(
default=".deer-flow/data",
description=("Directory for SQLite database files. Checkpointer uses {sqlite_dir}/checkpoints.db, application data uses {sqlite_dir}/app.db."),
)
postgres_url: str = Field(
default="",
description=(
"PostgreSQL connection URL, shared by checkpointer and app. "
"Use $DATABASE_URL in config.yaml to reference .env. "
"Example: postgresql://user:pass@host:5432/deerflow "
"(the +asyncpg driver suffix is added automatically where needed)."
),
)
echo_sql: bool = Field(
default=False,
description="Echo all SQL statements to log (debug only).",
)
pool_size: int = Field(
default=5,
description="Connection pool size for the app ORM engine (postgres only).",
)
# -- Derived helpers (not user-configured) --
@property
def _resolved_sqlite_dir(self) -> str:
"""Resolve sqlite_dir to an absolute path (relative to CWD)."""
from pathlib import Path
return str(Path(self.sqlite_dir).resolve())
@property
def checkpointer_sqlite_path(self) -> str:
"""SQLite file path for the LangGraph checkpointer."""
return os.path.join(self._resolved_sqlite_dir, "checkpoints.db")
@property
def app_sqlite_path(self) -> str:
"""SQLite file path for application ORM data."""
return os.path.join(self._resolved_sqlite_dir, "app.db")
@property
def app_sqlalchemy_url(self) -> str:
"""SQLAlchemy async URL for the application ORM engine."""
if self.backend == "sqlite":
return f"sqlite+aiosqlite:///{self.app_sqlite_path}"
if self.backend == "postgres":
url = self.postgres_url
if url.startswith("postgresql://"):
url = url.replace("postgresql://", "postgresql+asyncpg://", 1)
return url
raise ValueError(f"No SQLAlchemy URL for backend={self.backend!r}")
@@ -27,6 +27,10 @@ class ModelConfig(BaseModel):
default_factory=lambda: None,
description="Extra settings to be passed to the model when thinking is enabled",
)
when_thinking_disabled: dict | None = Field(
default_factory=lambda: None,
description="Extra settings to be passed to the model when thinking is disabled",
)
supports_vision: bool = Field(default_factory=lambda: False, description="Whether the model supports vision/image inputs")
thinking: dict | None = Field(
default_factory=lambda: None,
@@ -0,0 +1,33 @@
"""Run event storage configuration.
Controls where run events (messages + execution traces) are persisted.
Backends:
- memory: In-memory storage, data lost on restart. Suitable for
development and testing.
- db: SQL database via SQLAlchemy ORM. Provides full query capability.
Suitable for production deployments.
- jsonl: Append-only JSONL files. Lightweight alternative for
single-node deployments that need persistence without a database.
"""
from __future__ import annotations
from typing import Literal
from pydantic import BaseModel, Field
class RunEventsConfig(BaseModel):
backend: Literal["memory", "db", "jsonl"] = Field(
default="memory",
description="Storage backend for run events. 'memory' for development (no persistence), 'db' for production (SQL queries), 'jsonl' for lightweight single-node persistence.",
)
max_trace_content: int = Field(
default=10240,
description="Maximum trace content size in bytes before truncation (db backend only).",
)
track_token_usage: bool = Field(
default=True,
description="Whether RunJournal should accumulate token counts to RunRow.",
)
@@ -56,6 +56,7 @@ def create_chat_model(name: str | None = None, thinking_enabled: bool = False, *
"supports_thinking",
"supports_reasoning_effort",
"when_thinking_enabled",
"when_thinking_disabled",
"thinking",
"supports_vision",
},
@@ -72,21 +73,24 @@ def create_chat_model(name: str | None = None, thinking_enabled: bool = False, *
raise ValueError(f"Model {name} does not support thinking. Set `supports_thinking` to true in the `config.yaml` to enable thinking.") from None
if effective_wte:
model_settings_from_config.update(effective_wte)
if not thinking_enabled and has_thinking_settings:
if effective_wte.get("extra_body", {}).get("thinking", {}).get("type"):
if not thinking_enabled:
if model_config.when_thinking_disabled is not None:
# User-provided disable settings take full precedence
model_settings_from_config.update(model_config.when_thinking_disabled)
elif has_thinking_settings and effective_wte.get("extra_body", {}).get("thinking", {}).get("type"):
# OpenAI-compatible gateway: thinking is nested under extra_body
model_settings_from_config["extra_body"] = _deep_merge_dicts(
model_settings_from_config.get("extra_body"),
{"thinking": {"type": "disabled"}},
)
model_settings_from_config["reasoning_effort"] = "minimal"
elif disable_chat_template_kwargs := _vllm_disable_chat_template_kwargs(effective_wte.get("extra_body", {}).get("chat_template_kwargs") or {}):
elif has_thinking_settings and (disable_chat_template_kwargs := _vllm_disable_chat_template_kwargs(effective_wte.get("extra_body", {}).get("chat_template_kwargs") or {})):
# vLLM uses chat template kwargs to switch thinking on/off.
model_settings_from_config["extra_body"] = _deep_merge_dicts(
model_settings_from_config.get("extra_body"),
{"chat_template_kwargs": disable_chat_template_kwargs},
)
elif effective_wte.get("thinking", {}).get("type"):
elif has_thinking_settings and effective_wte.get("thinking", {}).get("type"):
# Native langchain_anthropic: thinking is a direct constructor parameter
model_settings_from_config["thinking"] = {"type": "disabled"}
if not model_config.supports_reasoning_effort:
@@ -109,6 +113,15 @@ def create_chat_model(name: str | None = None, thinking_enabled: bool = False, *
elif "reasoning_effort" not in model_settings_from_config:
model_settings_from_config["reasoning_effort"] = "medium"
# Ensure stream_usage is enabled so that token usage metadata is available
# in streaming responses. LangChain's BaseChatOpenAI only defaults
# stream_usage=True when no custom base_url/api_base is set, so models
# hitting third-party endpoints (e.g. doubao, deepseek) silently lose
# usage data. We default it to True unless explicitly configured.
if "stream_usage" not in model_settings_from_config and "stream_usage" not in kwargs:
if "stream_usage" in getattr(model_class, "model_fields", {}):
model_settings_from_config["stream_usage"] = True
model_instance = model_class(**kwargs, **model_settings_from_config)
callbacks = build_tracing_callbacks()
@@ -48,6 +48,10 @@ class CodexChatModel(BaseChatModel):
model_config = {"arbitrary_types_allowed": True}
@classmethod
def is_lc_serializable(cls) -> bool:
return True
@property
def _llm_type(self) -> str:
return "codex-responses"
@@ -216,18 +220,48 @@ class CodexChatModel(BaseChatModel):
def _stream_response(self, headers: dict, payload: dict) -> dict:
"""Stream SSE from Codex API and collect the final response."""
completed_response = None
streamed_output_items: dict[int, dict[str, Any]] = {}
with httpx.Client(timeout=300) as client:
with client.stream("POST", f"{CODEX_BASE_URL}/responses", headers=headers, json=payload) as resp:
resp.raise_for_status()
for line in resp.iter_lines():
data = self._parse_sse_data_line(line)
if data and data.get("type") == "response.completed":
if not data:
continue
event_type = data.get("type")
if event_type == "response.output_item.done":
output_index = data.get("output_index")
output_item = data.get("item")
if isinstance(output_index, int) and isinstance(output_item, dict):
streamed_output_items[output_index] = output_item
elif event_type == "response.completed":
completed_response = data["response"]
if not completed_response:
raise RuntimeError("Codex API stream ended without response.completed event")
# ChatGPT Codex can emit the final assistant content only in stream events.
# When response.completed arrives, response.output may still be empty.
if streamed_output_items:
merged_output = []
response_output = completed_response.get("output")
if isinstance(response_output, list):
merged_output = list(response_output)
max_index = max(max(streamed_output_items), len(merged_output) - 1)
if max_index >= 0 and len(merged_output) <= max_index:
merged_output.extend([None] * (max_index + 1 - len(merged_output)))
for output_index, output_item in streamed_output_items.items():
existing_item = merged_output[output_index]
if not isinstance(existing_item, dict):
merged_output[output_index] = output_item
completed_response = dict(completed_response)
completed_response["output"] = [item for item in merged_output if isinstance(item, dict)]
return completed_response
@staticmethod
@@ -23,6 +23,14 @@ class PatchedChatDeepSeek(ChatDeepSeek):
request payload.
"""
@classmethod
def is_lc_serializable(cls) -> bool:
return True
@property
def lc_secrets(self) -> dict[str, str]:
return {"api_key": "DEEPSEEK_API_KEY", "openai_api_key": "DEEPSEEK_API_KEY"}
def _get_request_payload(
self,
input_: LanguageModelInput,
@@ -0,0 +1,13 @@
"""DeerFlow application persistence layer (SQLAlchemy 2.0 async ORM).
This module manages DeerFlow's own application data -- runs metadata,
thread ownership, cron jobs, users. It is completely separate from
LangGraph's checkpointer, which manages graph execution state.
Usage:
from deerflow.persistence import init_engine, close_engine, get_session_factory
"""
from deerflow.persistence.engine import close_engine, get_engine, get_session_factory, init_engine
__all__ = ["close_engine", "get_engine", "get_session_factory", "init_engine"]
@@ -0,0 +1,40 @@
"""SQLAlchemy declarative base with automatic to_dict support.
All DeerFlow ORM models inherit from this Base. It provides a generic
to_dict() method via SQLAlchemy's inspect() so individual models don't
need to write their own serialization logic.
LangGraph's checkpointer tables are NOT managed by this Base.
"""
from __future__ import annotations
from sqlalchemy import inspect as sa_inspect
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
"""Base class for all DeerFlow ORM models.
Provides:
- Automatic to_dict() via SQLAlchemy column inspection.
- Standard __repr__() showing all column values.
"""
def to_dict(self, *, exclude: set[str] | None = None) -> dict:
"""Convert ORM instance to plain dict.
Uses SQLAlchemy's inspect() to iterate mapped column attributes.
Args:
exclude: Optional set of column keys to omit.
Returns:
Dict of {column_key: value} for all mapped columns.
"""
exclude = exclude or set()
return {c.key: getattr(self, c.key) for c in sa_inspect(type(self)).mapper.column_attrs if c.key not in exclude}
def __repr__(self) -> str:
cols = ", ".join(f"{c.key}={getattr(self, c.key)!r}" for c in sa_inspect(type(self)).mapper.column_attrs)
return f"{type(self).__name__}({cols})"
@@ -0,0 +1,185 @@
"""Async SQLAlchemy engine lifecycle management.
Initializes at Gateway startup, provides session factory for
repositories, disposes at shutdown.
When database.backend="memory", init_engine is a no-op and
get_session_factory() returns None. Repositories must check for
None and fall back to in-memory implementations.
"""
from __future__ import annotations
import json
import logging
from sqlalchemy.ext.asyncio import AsyncEngine, AsyncSession, async_sessionmaker, create_async_engine
def _json_serializer(obj: object) -> str:
"""JSON serializer with ensure_ascii=False for Chinese character support."""
return json.dumps(obj, ensure_ascii=False)
logger = logging.getLogger(__name__)
_engine: AsyncEngine | None = None
_session_factory: async_sessionmaker[AsyncSession] | None = None
async def _auto_create_postgres_db(url: str) -> None:
"""Connect to the ``postgres`` maintenance DB and CREATE DATABASE.
The target database name is extracted from *url*. The connection is
made to the default ``postgres`` database on the same server using
``AUTOCOMMIT`` isolation (CREATE DATABASE cannot run inside a
transaction).
"""
from sqlalchemy import text
from sqlalchemy.engine.url import make_url
parsed = make_url(url)
db_name = parsed.database
if not db_name:
raise ValueError("Cannot auto-create database: no database name in URL")
# Connect to the default 'postgres' database to issue CREATE DATABASE
maint_url = parsed.set(database="postgres")
maint_engine = create_async_engine(maint_url, isolation_level="AUTOCOMMIT")
try:
async with maint_engine.connect() as conn:
await conn.execute(text(f'CREATE DATABASE "{db_name}"'))
logger.info("Auto-created PostgreSQL database: %s", db_name)
finally:
await maint_engine.dispose()
async def init_engine(
backend: str,
*,
url: str = "",
echo: bool = False,
pool_size: int = 5,
sqlite_dir: str = "",
) -> None:
"""Create the async engine and session factory, then auto-create tables.
Args:
backend: "memory", "sqlite", or "postgres".
url: SQLAlchemy async URL (for sqlite/postgres).
echo: Echo SQL to log.
pool_size: Postgres connection pool size.
sqlite_dir: Directory to create for SQLite (ensured to exist).
"""
global _engine, _session_factory
if backend == "memory":
logger.info("Persistence backend=memory -- ORM engine not initialized")
return
if backend == "postgres":
try:
import asyncpg # noqa: F401
except ImportError:
raise ImportError("database.backend is set to 'postgres' but asyncpg is not installed.\nInstall it with:\n uv sync --extra postgres\nOr switch to backend: sqlite in config.yaml for single-node deployment.") from None
if backend == "sqlite":
import os
from sqlalchemy import event
os.makedirs(sqlite_dir or ".", exist_ok=True)
_engine = create_async_engine(url, echo=echo, json_serializer=_json_serializer)
# Enable WAL on every new connection. SQLite PRAGMA settings are
# per-connection, so we wire the listener instead of running PRAGMA
# once at startup. WAL gives concurrent reads + writers without
# blocking and is the standard recommendation for any production
# SQLite deployment (TC-UPG-06 in AUTH_TEST_PLAN.md). The companion
# ``synchronous=NORMAL`` is the safe-and-fast pairing — fsync only
# at WAL checkpoint boundaries instead of every commit.
@event.listens_for(_engine.sync_engine, "connect")
def _enable_sqlite_wal(dbapi_conn, _record): # noqa: ARG001 — SQLAlchemy contract
cursor = dbapi_conn.cursor()
try:
cursor.execute("PRAGMA journal_mode=WAL;")
cursor.execute("PRAGMA synchronous=NORMAL;")
cursor.execute("PRAGMA foreign_keys=ON;")
finally:
cursor.close()
elif backend == "postgres":
_engine = create_async_engine(
url,
echo=echo,
pool_size=pool_size,
pool_pre_ping=True,
json_serializer=_json_serializer,
)
else:
raise ValueError(f"Unknown persistence backend: {backend!r}")
_session_factory = async_sessionmaker(_engine, expire_on_commit=False)
# Auto-create tables (dev convenience). Production should use Alembic.
from deerflow.persistence.base import Base
# Import all models so Base.metadata discovers them.
# When no models exist yet (scaffolding phase), this is a no-op.
try:
import deerflow.persistence.models # noqa: F401
except ImportError:
# Models package not yet available — tables won't be auto-created.
# This is expected during initial scaffolding or minimal installs.
logger.debug("deerflow.persistence.models not found; skipping auto-create tables")
try:
async with _engine.begin() as conn:
await conn.run_sync(Base.metadata.create_all)
except Exception as exc:
if backend == "postgres" and "does not exist" in str(exc):
# Database not yet created — attempt to auto-create it, then retry.
await _auto_create_postgres_db(url)
# Rebuild engine against the now-existing database
await _engine.dispose()
_engine = create_async_engine(url, echo=echo, pool_size=pool_size, pool_pre_ping=True, json_serializer=_json_serializer)
_session_factory = async_sessionmaker(_engine, expire_on_commit=False)
async with _engine.begin() as conn:
await conn.run_sync(Base.metadata.create_all)
else:
raise
logger.info("Persistence engine initialized: backend=%s", backend)
async def init_engine_from_config(config) -> None:
"""Convenience: init engine from a DatabaseConfig object."""
if config.backend == "memory":
await init_engine("memory")
return
await init_engine(
backend=config.backend,
url=config.app_sqlalchemy_url,
echo=config.echo_sql,
pool_size=config.pool_size,
sqlite_dir=config.sqlite_dir if config.backend == "sqlite" else "",
)
def get_session_factory() -> async_sessionmaker[AsyncSession] | None:
"""Return the async session factory, or None if backend=memory."""
return _session_factory
def get_engine() -> AsyncEngine | None:
"""Return the async engine, or None if not initialized."""
return _engine
async def close_engine() -> None:
"""Dispose the engine, release all connections."""
global _engine, _session_factory
if _engine is not None:
await _engine.dispose()
logger.info("Persistence engine closed")
_engine = None
_session_factory = None
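# Hypothetical startup/shutdown wiring (assumes AppConfig exposes the
# DatabaseConfig shown earlier as ``get_app_config().database``; the
# asyncio.run() call exists only for this sketch):
import asyncio
from deerflow.config import get_app_config
from deerflow.persistence import close_engine, get_session_factory
from deerflow.persistence.engine import init_engine_from_config
async def _demo_engine_lifecycle() -> None:
    await init_engine_from_config(get_app_config().database)
    session_factory = get_session_factory()  # None when backend == "memory"
    print("sessions available:", session_factory is not None)
    await close_engine()
if __name__ == "__main__":
    asyncio.run(_demo_engine_lifecycle())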
@@ -0,0 +1,6 @@
"""Feedback persistence — ORM and SQL repository."""
from deerflow.persistence.feedback.model import FeedbackRow
from deerflow.persistence.feedback.sql import FeedbackRepository
__all__ = ["FeedbackRepository", "FeedbackRow"]
@@ -0,0 +1,30 @@
"""ORM model for user feedback on runs."""
from __future__ import annotations
from datetime import UTC, datetime
from sqlalchemy import DateTime, String, Text
from sqlalchemy.orm import Mapped, mapped_column
from deerflow.persistence.base import Base
class FeedbackRow(Base):
__tablename__ = "feedback"
feedback_id: Mapped[str] = mapped_column(String(64), primary_key=True)
run_id: Mapped[str] = mapped_column(String(64), nullable=False, index=True)
thread_id: Mapped[str] = mapped_column(String(64), nullable=False, index=True)
owner_id: Mapped[str | None] = mapped_column(String(64), index=True)
message_id: Mapped[str | None] = mapped_column(String(64))
# message_id is an optional RunEventStore event identifier —
# allows feedback to target a specific message or the entire run
rating: Mapped[int] = mapped_column(nullable=False)
# +1 (thumbs-up) or -1 (thumbs-down)
comment: Mapped[str | None] = mapped_column(Text)
# Optional text feedback from the user
created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), default=lambda: datetime.now(UTC))
@@ -0,0 +1,139 @@
"""SQLAlchemy-backed feedback storage.
Each method acquires its own short-lived session.
"""
from __future__ import annotations
import uuid
from datetime import UTC, datetime
from sqlalchemy import case, func, select
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker
from deerflow.persistence.feedback.model import FeedbackRow
from deerflow.runtime.user_context import AUTO, _AutoSentinel, resolve_owner_id
class FeedbackRepository:
def __init__(self, session_factory: async_sessionmaker[AsyncSession]) -> None:
self._sf = session_factory
@staticmethod
def _row_to_dict(row: FeedbackRow) -> dict:
d = row.to_dict()
val = d.get("created_at")
if isinstance(val, datetime):
d["created_at"] = val.isoformat()
return d
async def create(
self,
*,
run_id: str,
thread_id: str,
rating: int,
owner_id: str | None | _AutoSentinel = AUTO,
message_id: str | None = None,
comment: str | None = None,
) -> dict:
"""Create a feedback record. rating must be +1 or -1."""
if rating not in (1, -1):
raise ValueError(f"rating must be +1 or -1, got {rating}")
resolved_owner_id = resolve_owner_id(owner_id, method_name="FeedbackRepository.create")
row = FeedbackRow(
feedback_id=str(uuid.uuid4()),
run_id=run_id,
thread_id=thread_id,
owner_id=resolved_owner_id,
message_id=message_id,
rating=rating,
comment=comment,
created_at=datetime.now(UTC),
)
async with self._sf() as session:
session.add(row)
await session.commit()
await session.refresh(row)
return self._row_to_dict(row)
async def get(
self,
feedback_id: str,
*,
owner_id: str | None | _AutoSentinel = AUTO,
) -> dict | None:
resolved_owner_id = resolve_owner_id(owner_id, method_name="FeedbackRepository.get")
async with self._sf() as session:
row = await session.get(FeedbackRow, feedback_id)
if row is None:
return None
if resolved_owner_id is not None and row.owner_id != resolved_owner_id:
return None
return self._row_to_dict(row)
async def list_by_run(
self,
thread_id: str,
run_id: str,
*,
limit: int = 100,
owner_id: str | None | _AutoSentinel = AUTO,
) -> list[dict]:
resolved_owner_id = resolve_owner_id(owner_id, method_name="FeedbackRepository.list_by_run")
stmt = select(FeedbackRow).where(FeedbackRow.thread_id == thread_id, FeedbackRow.run_id == run_id)
if resolved_owner_id is not None:
stmt = stmt.where(FeedbackRow.owner_id == resolved_owner_id)
stmt = stmt.order_by(FeedbackRow.created_at.asc()).limit(limit)
async with self._sf() as session:
result = await session.execute(stmt)
return [self._row_to_dict(r) for r in result.scalars()]
async def list_by_thread(
self,
thread_id: str,
*,
limit: int = 100,
owner_id: str | None | _AutoSentinel = AUTO,
) -> list[dict]:
resolved_owner_id = resolve_owner_id(owner_id, method_name="FeedbackRepository.list_by_thread")
stmt = select(FeedbackRow).where(FeedbackRow.thread_id == thread_id)
if resolved_owner_id is not None:
stmt = stmt.where(FeedbackRow.owner_id == resolved_owner_id)
stmt = stmt.order_by(FeedbackRow.created_at.asc()).limit(limit)
async with self._sf() as session:
result = await session.execute(stmt)
return [self._row_to_dict(r) for r in result.scalars()]
async def delete(
self,
feedback_id: str,
*,
owner_id: str | None | _AutoSentinel = AUTO,
) -> bool:
resolved_owner_id = resolve_owner_id(owner_id, method_name="FeedbackRepository.delete")
async with self._sf() as session:
row = await session.get(FeedbackRow, feedback_id)
if row is None:
return False
if resolved_owner_id is not None and row.owner_id != resolved_owner_id:
return False
await session.delete(row)
await session.commit()
return True
async def aggregate_by_run(self, thread_id: str, run_id: str) -> dict:
"""Aggregate feedback stats for a run using database-side counting."""
stmt = select(
func.count().label("total"),
func.coalesce(func.sum(case((FeedbackRow.rating == 1, 1), else_=0)), 0).label("positive"),
func.coalesce(func.sum(case((FeedbackRow.rating == -1, 1), else_=0)), 0).label("negative"),
).where(FeedbackRow.thread_id == thread_id, FeedbackRow.run_id == run_id)
async with self._sf() as session:
row = (await session.execute(stmt)).one()
return {
"run_id": run_id,
"total": row.total,
"positive": row.positive,
"negative": row.negative,
}
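A hedged usage sketch of FeedbackRepository; the identifiers are illustrative, and owner_id is passed explicitly rather than relying on AUTO resolution from the request context.

async def demo_feedback(session_factory) -> None:
    repo = FeedbackRepository(session_factory)
    created = await repo.create(
        run_id="run-1",
        thread_id="thread-1",
        rating=1,
        owner_id="user-1",
        comment="helpful answer",
    )
    rows = await repo.list_by_run("thread-1", "run-1", owner_id="user-1")
    stats = await repo.aggregate_by_run("thread-1", "run-1")
    # With a single +1 row: {"run_id": "run-1", "total": 1, "positive": 1, "negative": 0}
    print(created["feedback_id"], len(rows), stats)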
@@ -0,0 +1,38 @@
[alembic]
script_location = %(here)s
# Default URL for offline mode / autogenerate.
# Runtime uses engine from DeerFlow config.
sqlalchemy.url = sqlite+aiosqlite:///./data/app.db
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
@@ -0,0 +1,65 @@
"""Alembic environment for DeerFlow application tables.
ONLY manages DeerFlow's tables (runs, threads_meta, cron_jobs, users).
LangGraph's checkpointer tables are managed by LangGraph itself -- they
have their own schema lifecycle and must not be touched by Alembic.
"""
from __future__ import annotations
import asyncio
import logging
from logging.config import fileConfig
from alembic import context
from sqlalchemy.ext.asyncio import create_async_engine
from deerflow.persistence.base import Base
# Import all models so metadata is populated.
try:
import deerflow.persistence.models # noqa: F401 — register ORM models with Base.metadata
except ImportError:
# Models not available — migration will work with existing metadata only.
logging.getLogger(__name__).warning("Could not import deerflow.persistence.models; Alembic may not detect all tables")
config = context.config
if config.config_file_name is not None:
fileConfig(config.config_file_name)
target_metadata = Base.metadata
def run_migrations_offline() -> None:
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
render_as_batch=True,
)
with context.begin_transaction():
context.run_migrations()
def do_run_migrations(connection):
context.configure(
connection=connection,
target_metadata=target_metadata,
render_as_batch=True, # Required for SQLite ALTER TABLE support
)
with context.begin_transaction():
context.run_migrations()
async def run_migrations_online() -> None:
connectable = create_async_engine(config.get_main_option("sqlalchemy.url"))
async with connectable.connect() as connection:
await connection.run_sync(do_run_migrations)
await connectable.dispose()
if context.is_offline_mode():
run_migrations_offline()
else:
asyncio.run(run_migrations_online())
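For completeness, a hedged sketch of driving these migrations programmatically through Alembic's command API; the ini path is a placeholder that must point at the alembic.ini shown above, and the call should come from a synchronous context because this env.py invokes asyncio.run() itself.

from alembic import command
from alembic.config import Config

def upgrade_to_head(ini_path: str = "alembic.ini") -> None:
    cfg = Config(ini_path)
    # Optionally override the default URL with the application's database URL:
    # cfg.set_main_option("sqlalchemy.url", "sqlite+aiosqlite:///./data/app.db")
    command.upgrade(cfg, "head")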
@@ -0,0 +1,23 @@
"""ORM model registration entry point.
Importing this module ensures all ORM models are registered with
``Base.metadata`` so Alembic autogenerate detects every table.
The actual ORM classes have moved to entity-specific subpackages:
- ``deerflow.persistence.thread_meta``
- ``deerflow.persistence.run``
- ``deerflow.persistence.feedback``
- ``deerflow.persistence.user``
``RunEventRow`` remains in ``deerflow.persistence.models.run_event`` because
its storage implementation lives in ``deerflow.runtime.events.store.db`` and
there is no matching entity directory.
"""
from deerflow.persistence.feedback.model import FeedbackRow
from deerflow.persistence.models.run_event import RunEventRow
from deerflow.persistence.run.model import RunRow
from deerflow.persistence.thread_meta.model import ThreadMetaRow
from deerflow.persistence.user.model import UserRow
__all__ = ["FeedbackRow", "RunEventRow", "RunRow", "ThreadMetaRow", "UserRow"]
@@ -0,0 +1,35 @@
"""ORM model for run events."""
from __future__ import annotations
from datetime import UTC, datetime
from sqlalchemy import JSON, DateTime, Index, String, Text, UniqueConstraint
from sqlalchemy.orm import Mapped, mapped_column
from deerflow.persistence.base import Base
class RunEventRow(Base):
__tablename__ = "run_events"
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
thread_id: Mapped[str] = mapped_column(String(64), nullable=False)
run_id: Mapped[str] = mapped_column(String(64), nullable=False)
# Owner of the conversation this event belongs to. Nullable for data
# created before auth was introduced; populated by auth middleware on
# new writes and by the boot-time orphan migration on existing rows.
owner_id: Mapped[str | None] = mapped_column(String(64), nullable=True, index=True)
event_type: Mapped[str] = mapped_column(String(32), nullable=False)
category: Mapped[str] = mapped_column(String(16), nullable=False)
# "message" | "trace" | "lifecycle"
content: Mapped[str] = mapped_column(Text, default="")
event_metadata: Mapped[dict] = mapped_column(JSON, default=dict)
seq: Mapped[int] = mapped_column(nullable=False)
created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), default=lambda: datetime.now(UTC))
__table_args__ = (
UniqueConstraint("thread_id", "seq", name="uq_events_thread_seq"),
Index("ix_events_thread_cat_seq", "thread_id", "category", "seq"),
Index("ix_events_run", "thread_id", "run_id", "seq"),
)
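A hedged query sketch that leans on the ix_events_run index above to page one run's events in sequence order; the session argument is assumed to be an open AsyncSession.

from sqlalchemy import select

async def fetch_run_events(session, thread_id: str, run_id: str, after_seq: int = 0):
    stmt = (
        select(RunEventRow)
        .where(
            RunEventRow.thread_id == thread_id,
            RunEventRow.run_id == run_id,
            RunEventRow.seq > after_seq,
        )
        .order_by(RunEventRow.seq.asc())
    )
    return (await session.execute(stmt)).scalars().all()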
@@ -0,0 +1,6 @@
"""Run metadata persistence — ORM and SQL repository."""
from deerflow.persistence.run.model import RunRow
from deerflow.persistence.run.sql import RunRepository
__all__ = ["RunRepository", "RunRow"]
@@ -0,0 +1,49 @@
"""ORM model for run metadata."""
from __future__ import annotations
from datetime import UTC, datetime
from sqlalchemy import JSON, DateTime, Index, String, Text
from sqlalchemy.orm import Mapped, mapped_column
from deerflow.persistence.base import Base
class RunRow(Base):
__tablename__ = "runs"
run_id: Mapped[str] = mapped_column(String(64), primary_key=True)
thread_id: Mapped[str] = mapped_column(String(64), nullable=False, index=True)
assistant_id: Mapped[str | None] = mapped_column(String(128))
owner_id: Mapped[str | None] = mapped_column(String(64), index=True)
status: Mapped[str] = mapped_column(String(20), default="pending")
# "pending" | "running" | "success" | "error" | "timeout" | "interrupted"
model_name: Mapped[str | None] = mapped_column(String(128))
multitask_strategy: Mapped[str] = mapped_column(String(20), default="reject")
metadata_json: Mapped[dict] = mapped_column(JSON, default=dict)
kwargs_json: Mapped[dict] = mapped_column(JSON, default=dict)
error: Mapped[str | None] = mapped_column(Text)
# Convenience fields (for listing pages without querying RunEventStore)
message_count: Mapped[int] = mapped_column(default=0)
first_human_message: Mapped[str | None] = mapped_column(Text)
last_ai_message: Mapped[str | None] = mapped_column(Text)
# Token usage (accumulated in-memory by RunJournal, written on run completion)
total_input_tokens: Mapped[int] = mapped_column(default=0)
total_output_tokens: Mapped[int] = mapped_column(default=0)
total_tokens: Mapped[int] = mapped_column(default=0)
llm_call_count: Mapped[int] = mapped_column(default=0)
lead_agent_tokens: Mapped[int] = mapped_column(default=0)
subagent_tokens: Mapped[int] = mapped_column(default=0)
middleware_tokens: Mapped[int] = mapped_column(default=0)
# Follow-up association
follow_up_to_run_id: Mapped[str | None] = mapped_column(String(64))
created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), default=lambda: datetime.now(UTC))
updated_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), default=lambda: datetime.now(UTC), onupdate=lambda: datetime.now(UTC))
__table_args__ = (Index("ix_runs_thread_status", "thread_id", "status"),)
@@ -0,0 +1,255 @@
"""SQLAlchemy-backed RunStore implementation.
Each method acquires and releases its own short-lived session.
Run status updates come from background workers that may run for
minutes, so we never hold a connection open across that work.
"""
from __future__ import annotations
import json
from datetime import UTC, datetime
from typing import Any
from sqlalchemy import func, select, update
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker
from deerflow.persistence.run.model import RunRow
from deerflow.runtime.runs.store.base import RunStore
from deerflow.runtime.user_context import AUTO, _AutoSentinel, resolve_owner_id
class RunRepository(RunStore):
def __init__(self, session_factory: async_sessionmaker[AsyncSession]) -> None:
self._sf = session_factory
@staticmethod
def _safe_json(obj: Any) -> Any:
"""Ensure obj is JSON-serializable. Falls back to model_dump() or str()."""
if obj is None:
return None
if isinstance(obj, (str, int, float, bool)):
return obj
if isinstance(obj, dict):
return {k: RunRepository._safe_json(v) for k, v in obj.items()}
if isinstance(obj, (list, tuple)):
return [RunRepository._safe_json(v) for v in obj]
if hasattr(obj, "model_dump"):
try:
return obj.model_dump()
except Exception:
pass
if hasattr(obj, "dict"):
try:
return obj.dict()
except Exception:
pass
try:
json.dumps(obj)
return obj
except (TypeError, ValueError):
return str(obj)
@staticmethod
def _row_to_dict(row: RunRow) -> dict[str, Any]:
d = row.to_dict()
# Remap JSON columns to match RunStore interface
d["metadata"] = d.pop("metadata_json", {})
d["kwargs"] = d.pop("kwargs_json", {})
# Convert datetime to ISO string for consistency with MemoryRunStore
for key in ("created_at", "updated_at"):
val = d.get(key)
if isinstance(val, datetime):
d[key] = val.isoformat()
return d
async def put(
self,
run_id,
*,
thread_id,
assistant_id=None,
owner_id: str | None | _AutoSentinel = AUTO,
status="pending",
multitask_strategy="reject",
metadata=None,
kwargs=None,
error=None,
created_at=None,
follow_up_to_run_id=None,
):
resolved_owner_id = resolve_owner_id(owner_id, method_name="RunRepository.put")
now = datetime.now(UTC)
row = RunRow(
run_id=run_id,
thread_id=thread_id,
assistant_id=assistant_id,
owner_id=resolved_owner_id,
status=status,
multitask_strategy=multitask_strategy,
metadata_json=self._safe_json(metadata) or {},
kwargs_json=self._safe_json(kwargs) or {},
error=error,
follow_up_to_run_id=follow_up_to_run_id,
created_at=datetime.fromisoformat(created_at) if created_at else now,
updated_at=now,
)
async with self._sf() as session:
session.add(row)
await session.commit()
async def get(
self,
run_id,
*,
owner_id: str | None | _AutoSentinel = AUTO,
):
resolved_owner_id = resolve_owner_id(owner_id, method_name="RunRepository.get")
async with self._sf() as session:
row = await session.get(RunRow, run_id)
if row is None:
return None
if resolved_owner_id is not None and row.owner_id != resolved_owner_id:
return None
return self._row_to_dict(row)
async def list_by_thread(
self,
thread_id,
*,
owner_id: str | None | _AutoSentinel = AUTO,
limit=100,
):
resolved_owner_id = resolve_owner_id(owner_id, method_name="RunRepository.list_by_thread")
stmt = select(RunRow).where(RunRow.thread_id == thread_id)
if resolved_owner_id is not None:
stmt = stmt.where(RunRow.owner_id == resolved_owner_id)
stmt = stmt.order_by(RunRow.created_at.desc()).limit(limit)
async with self._sf() as session:
result = await session.execute(stmt)
return [self._row_to_dict(r) for r in result.scalars()]
async def update_status(self, run_id, status, *, error=None):
values: dict[str, Any] = {"status": status, "updated_at": datetime.now(UTC)}
if error is not None:
values["error"] = error
async with self._sf() as session:
await session.execute(update(RunRow).where(RunRow.run_id == run_id).values(**values))
await session.commit()
async def delete(
self,
run_id,
*,
owner_id: str | None | _AutoSentinel = AUTO,
):
resolved_owner_id = resolve_owner_id(owner_id, method_name="RunRepository.delete")
async with self._sf() as session:
row = await session.get(RunRow, run_id)
if row is None:
return
if resolved_owner_id is not None and row.owner_id != resolved_owner_id:
return
await session.delete(row)
await session.commit()
async def list_pending(self, *, before=None):
if before is None:
before_dt = datetime.now(UTC)
elif isinstance(before, datetime):
before_dt = before
else:
before_dt = datetime.fromisoformat(before)
stmt = select(RunRow).where(RunRow.status == "pending", RunRow.created_at <= before_dt).order_by(RunRow.created_at.asc())
async with self._sf() as session:
result = await session.execute(stmt)
return [self._row_to_dict(r) for r in result.scalars()]
async def update_run_completion(
self,
run_id: str,
*,
status: str,
total_input_tokens: int = 0,
total_output_tokens: int = 0,
total_tokens: int = 0,
llm_call_count: int = 0,
lead_agent_tokens: int = 0,
subagent_tokens: int = 0,
middleware_tokens: int = 0,
message_count: int = 0,
last_ai_message: str | None = None,
first_human_message: str | None = None,
error: str | None = None,
) -> None:
"""Update status + token usage + convenience fields on run completion."""
values: dict[str, Any] = {
"status": status,
"total_input_tokens": total_input_tokens,
"total_output_tokens": total_output_tokens,
"total_tokens": total_tokens,
"llm_call_count": llm_call_count,
"lead_agent_tokens": lead_agent_tokens,
"subagent_tokens": subagent_tokens,
"middleware_tokens": middleware_tokens,
"message_count": message_count,
"updated_at": datetime.now(UTC),
}
if last_ai_message is not None:
values["last_ai_message"] = last_ai_message[:2000]
if first_human_message is not None:
values["first_human_message"] = first_human_message[:2000]
if error is not None:
values["error"] = error
async with self._sf() as session:
await session.execute(update(RunRow).where(RunRow.run_id == run_id).values(**values))
await session.commit()
async def aggregate_tokens_by_thread(self, thread_id: str) -> dict[str, Any]:
"""Aggregate token usage via a single SQL GROUP BY query."""
_completed = RunRow.status.in_(("success", "error"))
_thread = RunRow.thread_id == thread_id
stmt = (
select(
func.coalesce(RunRow.model_name, "unknown").label("model"),
func.count().label("runs"),
func.coalesce(func.sum(RunRow.total_tokens), 0).label("total_tokens"),
func.coalesce(func.sum(RunRow.total_input_tokens), 0).label("total_input_tokens"),
func.coalesce(func.sum(RunRow.total_output_tokens), 0).label("total_output_tokens"),
func.coalesce(func.sum(RunRow.lead_agent_tokens), 0).label("lead_agent"),
func.coalesce(func.sum(RunRow.subagent_tokens), 0).label("subagent"),
func.coalesce(func.sum(RunRow.middleware_tokens), 0).label("middleware"),
)
.where(_thread, _completed)
.group_by(func.coalesce(RunRow.model_name, "unknown"))
)
async with self._sf() as session:
rows = (await session.execute(stmt)).all()
total_tokens = total_input = total_output = total_runs = 0
lead_agent = subagent = middleware = 0
by_model: dict[str, dict] = {}
for r in rows:
by_model[r.model] = {"tokens": r.total_tokens, "runs": r.runs}
total_tokens += r.total_tokens
total_input += r.total_input_tokens
total_output += r.total_output_tokens
total_runs += r.runs
lead_agent += r.lead_agent
subagent += r.subagent
middleware += r.middleware
return {
"total_tokens": total_tokens,
"total_input_tokens": total_input,
"total_output_tokens": total_output,
"total_runs": total_runs,
"by_model": by_model,
"by_caller": {
"lead_agent": lead_agent,
"subagent": subagent,
"middleware": middleware,
},
}
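A hedged usage sketch of RunRepository covering the write path and the aggregation query; run ids, token counts, and the owner are illustrative.

async def demo_runs(session_factory) -> None:
    repo = RunRepository(session_factory)
    await repo.put("run-1", thread_id="thread-1", owner_id="user-1", status="running")
    await repo.update_run_completion(
        "run-1",
        status="success",
        total_input_tokens=1200,
        total_output_tokens=300,
        total_tokens=1500,
        llm_call_count=4,
        message_count=6,
        last_ai_message="Done.",
    )
    usage = await repo.aggregate_tokens_by_thread("thread-1")
    # model_name is never set in this sketch, so tokens land under the "unknown" bucket.
    print(usage["total_tokens"], usage["by_model"])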
@@ -0,0 +1,13 @@
"""Thread metadata persistence — ORM, abstract store, and concrete implementations."""
from deerflow.persistence.thread_meta.base import ThreadMetaStore
from deerflow.persistence.thread_meta.memory import MemoryThreadMetaStore
from deerflow.persistence.thread_meta.model import ThreadMetaRow
from deerflow.persistence.thread_meta.sql import ThreadMetaRepository
__all__ = [
"MemoryThreadMetaStore",
"ThreadMetaRepository",
"ThreadMetaRow",
"ThreadMetaStore",
]
@@ -0,0 +1,60 @@
"""Abstract interface for thread metadata storage.
Implementations:
- ThreadMetaRepository: SQL-backed (sqlite / postgres via SQLAlchemy)
- MemoryThreadMetaStore: wraps LangGraph BaseStore (memory mode)
"""
from __future__ import annotations
import abc
class ThreadMetaStore(abc.ABC):
@abc.abstractmethod
async def create(
self,
thread_id: str,
*,
assistant_id: str | None = None,
owner_id: str | None = None,
display_name: str | None = None,
metadata: dict | None = None,
) -> dict:
pass
@abc.abstractmethod
async def get(self, thread_id: str) -> dict | None:
pass
@abc.abstractmethod
async def search(
self,
*,
metadata: dict | None = None,
status: str | None = None,
limit: int = 100,
offset: int = 0,
) -> list[dict]:
pass
@abc.abstractmethod
async def update_display_name(self, thread_id: str, display_name: str) -> None:
pass
@abc.abstractmethod
async def update_status(self, thread_id: str, status: str) -> None:
pass
@abc.abstractmethod
async def update_metadata(self, thread_id: str, metadata: dict) -> None:
"""Merge ``metadata`` into the thread's metadata field.
Existing keys are overwritten by the new values; keys absent from
``metadata`` are preserved. No-op if the thread does not exist.
"""
pass
@abc.abstractmethod
async def delete(self, thread_id: str) -> None:
pass
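A small, hedged illustration of the update_metadata merge contract described above, using plain dicts rather than either concrete store implementation.

existing = {"title": "Q3 report", "labels": ["draft"]}
patch = {"labels": ["final"], "reviewed": True}
merged = {**existing, **patch}
# merged == {"title": "Q3 report", "labels": ["final"], "reviewed": True}
# Keys in the patch overwrite existing values; keys absent from it are preserved.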
