34e835bc33
* feat(gateway): implement LangGraph Platform API in Gateway, replace langgraph-cli

  Implement all core LangGraph Platform API endpoints in the Gateway, allowing it to fully replace the langgraph-cli dev server for local development. This eliminates a heavyweight dependency and simplifies the development stack.

  Changes:
  - Add runs lifecycle endpoints (create, stream, wait, cancel, join)
  - Add threads CRUD and search endpoints
  - Add assistants compatibility endpoints (search, get, graph, schemas)
  - Add StreamBridge (in-memory pub/sub for SSE) and async provider
  - Add RunManager with atomic create_or_reject (eliminates TOCTOU race)
  - Add worker with interrupt/rollback cancel actions and runtime context injection
  - Route /api/langgraph/* to Gateway in nginx config
  - Skip langgraph-cli startup by default (SKIP_LANGGRAPH_SERVER=0 to restore)
  - Add unit tests for RunManager, SSE format, and StreamBridge

* fix: drain bridge queue on client disconnect to prevent backpressure

  When on_disconnect=continue, keep consuming events from the bridge without yielding, so the worker is not blocked by a full queue. Only on_disconnect=cancel breaks out immediately.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix: remove pytest import

* fix: default stream_mode to ["values", "messages-tuple"]

* fix: remove unused if_exists field from ThreadCreateRequest

* fix: address review comments on gateway LangGraph API
  - Mount runs.py router in app.py (missing include_router)
  - Normalize interrupt_before/after "*" to node list before run_agent()
  - Use entry.id for SSE event ID instead of a counter
  - Drain bridge queue on disconnect when on_disconnect=continue
  - Reuse serialization helper in wait_run() for a consistent wire format
  - Reject unsupported multitask_strategy with 400
  - Remove SKIP_LANGGRAPH_SERVER fallback, always use Gateway

* feat: extract app.state access into deps.py

  Encapsulate read/write operations for singleton objects (RunManager, StreamBridge, checkpointer) held in app.state in a shared utility, reducing repeated access patterns across router modules.

* feat: extract deerflow.runtime.serialization module with tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: replace duplicated serialization with deerflow.runtime.serialization

* feat: extract app/gateway/services.py with run lifecycle logic

  Create a service layer that centralizes SSE formatting, input/config normalization, and run lifecycle management. Router modules will delegate to these functions instead of using private cross-imported helpers.
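The "entry.id for SSE event ID" fix above can be illustrated with a small formatter. The `BridgeEntry` shape and field names are assumptions, not the Gateway's real types; what matters is that the frame's `id:` field comes from the stored entry rather than a per-connection counter, so IDs stay stable when a client reconnects and resumes via Last-Event-ID.

```python
import json
from dataclasses import dataclass


@dataclass
class BridgeEntry:
    """Illustrative stream-bridge entry; field names are assumptions."""

    id: str
    event: str
    data: object


def format_sse(entry: BridgeEntry) -> str:
    """Render one entry as a Server-Sent Events frame.

    An SSE frame is a block of `field: value` lines terminated by a
    blank line. Using the entry's own id keeps event IDs consistent
    across reconnects, unlike a counter that restarts per connection.
    """
    payload = json.dumps(entry.data, separators=(",", ":"))
    return f"id: {entry.id}\nevent: {entry.event}\ndata: {payload}\n\n"


frame = format_sse(BridgeEntry(id="run-1-3", event="values", data={"msg": "hi"}))
print(frame)
```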
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: wire routers to use services layer, remove cross-module private imports

* style: apply ruff formatting to refactored files

* feat(runtime): support LangGraph dev server and add compat route
  - Enable official LangGraph dev server for local development workflow
  - Decouple runtime components from agents package for better separation
  - Provide gateway-backed fallback route when dev server is skipped
  - Simplify lifecycle management using context manager in gateway

* feat(runtime): add Store providers with auto-backend selection
  - Add async_provider.py and provider.py under deerflow/runtime/store/
  - Support memory, sqlite, postgres backends matching checkpointer config
  - Integrate into FastAPI lifespan via AsyncExitStack in deps.py
  - Replace hardcoded InMemoryStore with config-driven factory

* refactor(gateway): migrate thread management from checkpointer to Store and resolve multiple endpoint failures
  - Add Store-backed CRUD helpers (_store_get, _store_put, _store_upsert)
  - Replace checkpoint-scanning search with a two-phase strategy: phase 1 reads the Store (O(threads)); phase 2 backfills from the checkpointer for legacy/LangGraph Server threads with lazy migration
  - Extend Store record schema with a values field for title persistence
  - Sync thread title from checkpoint to Store after run completion
  - Fix /threads/{id}/runs/{run_id}/stream 405 by accepting both GET and POST methods; POST handles interrupt/rollback actions
  - Fix /threads/{id}/state 500 by separating read_config and write_config, adding checkpoint_ns to configurable, and shallow-copying checkpoint/metadata before mutation
  - Sync title to Store on state update for immediate search reflection
  - Move _upsert_thread_in_store into services.py, remove duplicate logic
  - Add _sync_thread_title_after_run:
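The two-phase search strategy described above can be sketched in a few lines. All shapes here are hypothetical (plain dicts stand in for Store records, and `migrate` stands in for the lazy-migration write); the sketch only shows the control flow: read the Store first, then backfill anything that exists only in the checkpointer.

```python
def search_threads(store_threads, checkpointer_thread_ids, migrate):
    """Sketch of a two-phase thread search (hypothetical data shapes).

    Phase 1: read thread records straight from the Store, O(threads).
    Phase 2: backfill threads known only to the checkpointer (legacy /
    LangGraph Server threads), lazily migrating them into the Store so
    the next search is pure phase 1.
    """
    results = dict(store_threads)  # phase 1: Store is the fast path
    for thread_id in checkpointer_thread_ids:
        if thread_id not in results:  # phase 2: legacy thread, backfill
            record = {"thread_id": thread_id, "values": {}}
            migrate(thread_id, record)  # lazy migration into the Store
            results[thread_id] = record
    return list(results.values())


migrated = []
found = search_threads(
    {"a": {"thread_id": "a", "values": {"title": "T"}}},
    ["a", "b"],
    lambda tid, rec: migrated.append(tid),
)
print(len(found), migrated)  # 2 ['b']
```

Because migrated threads land in the Store, the checkpointer scan amortizes away: once every legacy thread has been seen once, searches no longer touch checkpoint history.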
    await run task, read final checkpoint title, write back to Store record
  - Spawn title sync as background task from start_run when Store exists

* refactor(runtime): deduplicate store and checkpointer provider logic

  Extract _ensure_sqlite_parent_dir() helper into checkpointer/provider.py and use it in all three places that previously inlined the same mkdir logic. Consolidate duplicate error constants in store/async_provider.py by importing from store/provider.py instead of redefining them.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor(runtime): move SQLite helpers to runtime/store; checkpointer imports from store

  _resolve_sqlite_conn_str and _ensure_sqlite_parent_dir now live in runtime/store/provider.py. agents/checkpointer/provider and agents/checkpointer/async_provider import from there, reversing the previous dependency direction (store → checkpointer becomes checkpointer → store).

* refactor(runtime): extract SQLite helpers into runtime/store/_sqlite_utils.py

  Move resolve_sqlite_conn_str and ensure_sqlite_parent_dir out of checkpointer/provider.py into a dedicated _sqlite_utils module. The functions are now public (no underscore prefix), making cross-module imports semantically correct. All four provider files import from the single shared location.

* fix(gateway): use adelete_thread to fully remove thread checkpoints on delete

  AsyncSqliteSaver has no adelete method; the previous hasattr check always evaluated to False, silently leaving all checkpoint rows in the database. Switch to adelete_thread(thread_id), which deletes every checkpoint and pending-write row for the thread across all namespaces (including sub-graph checkpoints).
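The shared SQLite helper extracted above likely amounts to a parent-directory guard. This sketch assumes the signature and the `sqlite:///` prefix handling; the real `ensure_sqlite_parent_dir` in runtime/store/_sqlite_utils.py may differ. The point is that a SQLite saver cannot create intermediate directories itself, so the provider must `mkdir -p` before opening the database.

```python
import tempfile
from pathlib import Path


def ensure_sqlite_parent_dir(conn_str: str) -> None:
    """Ensure the parent directory of a SQLite database file exists.

    Sketch of the shared helper: strips an optional sqlite:/// scheme
    to recover the filesystem path, then creates missing parents.
    """
    path = conn_str.removeprefix("sqlite:///")
    Path(path).parent.mkdir(parents=True, exist_ok=True)


base = Path(tempfile.mkdtemp())
db = base / "nested" / "dir" / "checkpoints.db"
ensure_sqlite_parent_dir(f"sqlite:///{db}")
print(db.parent.is_dir())  # True
```

Centralizing this in one module is what lets the checkpointer and store providers share it without the circular store ↔ checkpointer dependency the earlier commits wrestled with.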
* fix(gateway): remove dead bridge_cm/ckpt_cm code and fix StrEnum lint

  app.py had unreachable code after the async-with lifespan refactor: bridge_cm and ckpt_cm were referenced but never defined (F821), and the channel service startup/shutdown sat outside the langgraph_runtime block, so it never ran. Move the channel service lifecycle inside the async-with block where it belongs. Replace str+Enum inheritance in RunStatus and DisconnectMode with StrEnum, as suggested by UP042.

* style: format with ruff

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: JeffJiang <for-eleven@hotmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
160 lines
4.4 KiB
Python
"""Tests for deerflow.runtime.serialization."""

from __future__ import annotations


class _FakePydanticV2:
    """Object with model_dump (Pydantic v2)."""

    def model_dump(self):
        return {"key": "v2"}


class _FakePydanticV1:
    """Object with dict (Pydantic v1)."""

    def dict(self):
        return {"key": "v1"}


class _Unprintable:
    """Object whose str() raises."""

    def __str__(self):
        raise RuntimeError("no str")

    def __repr__(self):
        return "<Unprintable>"


def test_serialize_none():
    from deerflow.runtime.serialization import serialize_lc_object

    assert serialize_lc_object(None) is None


def test_serialize_primitives():
    from deerflow.runtime.serialization import serialize_lc_object

    assert serialize_lc_object("hello") == "hello"
    assert serialize_lc_object(42) == 42
    assert serialize_lc_object(3.14) == 3.14
    assert serialize_lc_object(True) is True


def test_serialize_dict():
    from deerflow.runtime.serialization import serialize_lc_object

    obj = {"a": _FakePydanticV2(), "b": [1, "two"]}
    result = serialize_lc_object(obj)
    assert result == {"a": {"key": "v2"}, "b": [1, "two"]}


def test_serialize_list():
    from deerflow.runtime.serialization import serialize_lc_object

    result = serialize_lc_object([_FakePydanticV1(), 1])
    assert result == [{"key": "v1"}, 1]


def test_serialize_tuple():
    from deerflow.runtime.serialization import serialize_lc_object

    result = serialize_lc_object((_FakePydanticV2(),))
    assert result == [{"key": "v2"}]


def test_serialize_pydantic_v2():
    from deerflow.runtime.serialization import serialize_lc_object

    assert serialize_lc_object(_FakePydanticV2()) == {"key": "v2"}


def test_serialize_pydantic_v1():
    from deerflow.runtime.serialization import serialize_lc_object

    assert serialize_lc_object(_FakePydanticV1()) == {"key": "v1"}


def test_serialize_fallback_str():
    from deerflow.runtime.serialization import serialize_lc_object

    result = serialize_lc_object(object())
    assert isinstance(result, str)


def test_serialize_fallback_repr():
    from deerflow.runtime.serialization import serialize_lc_object

    assert serialize_lc_object(_Unprintable()) == "<Unprintable>"


def test_serialize_channel_values_strips_pregel_keys():
    from deerflow.runtime.serialization import serialize_channel_values

    raw = {
        "messages": ["hello"],
        "__pregel_tasks": "internal",
        "__pregel_resuming": True,
        "__interrupt__": "stop",
        "title": "Test",
    }
    result = serialize_channel_values(raw)
    assert "messages" in result
    assert "title" in result
    assert "__pregel_tasks" not in result
    assert "__pregel_resuming" not in result
    assert "__interrupt__" not in result


def test_serialize_channel_values_serializes_objects():
    from deerflow.runtime.serialization import serialize_channel_values

    result = serialize_channel_values({"obj": _FakePydanticV2()})
    assert result == {"obj": {"key": "v2"}}


def test_serialize_messages_tuple():
    from deerflow.runtime.serialization import serialize_messages_tuple

    chunk = _FakePydanticV2()
    metadata = {"langgraph_node": "agent"}
    result = serialize_messages_tuple((chunk, metadata))
    assert result == [{"key": "v2"}, {"langgraph_node": "agent"}]


def test_serialize_messages_tuple_non_dict_metadata():
    from deerflow.runtime.serialization import serialize_messages_tuple

    result = serialize_messages_tuple((_FakePydanticV2(), "not-a-dict"))
    assert result == [{"key": "v2"}, {}]


def test_serialize_messages_tuple_fallback():
    from deerflow.runtime.serialization import serialize_messages_tuple

    result = serialize_messages_tuple("not-a-tuple")
    assert result == "not-a-tuple"


def test_serialize_dispatcher_messages_mode():
    from deerflow.runtime.serialization import serialize

    chunk = _FakePydanticV2()
    result = serialize((chunk, {"node": "x"}), mode="messages")
    assert result == [{"key": "v2"}, {"node": "x"}]


def test_serialize_dispatcher_values_mode():
    from deerflow.runtime.serialization import serialize

    result = serialize({"msg": "hi", "__pregel_tasks": "x"}, mode="values")
    assert result == {"msg": "hi"}


def test_serialize_dispatcher_default_mode():
    from deerflow.runtime.serialization import serialize

    result = serialize(_FakePydanticV1())
    assert result == {"key": "v1"}
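The behavior these tests pin down can be satisfied by a small recursive serializer along the following lines. This is a sketch derived only from the tests, not the actual deerflow.runtime.serialization source; in particular, the exact set of internal keys filtered by serialize_channel_values is an assumption beyond the `__pregel_*` and `__interrupt__` keys the tests exercise.

```python
def serialize_lc_object(obj):
    """Recursively convert LangChain/Pydantic objects to JSON-safe values."""
    if obj is None or isinstance(obj, (str, int, float, bool)):
        return obj
    if isinstance(obj, dict):
        return {k: serialize_lc_object(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        # Tuples intentionally come back as lists (JSON has no tuple type).
        return [serialize_lc_object(v) for v in obj]
    if hasattr(obj, "model_dump"):  # Pydantic v2
        return serialize_lc_object(obj.model_dump())
    if hasattr(obj, "dict"):  # Pydantic v1
        return serialize_lc_object(obj.dict())
    try:
        return str(obj)
    except Exception:
        return repr(obj)  # last resort when __str__ raises


# Assumed filter set; the real module may strip more internal channels.
_INTERNAL_PREFIXES = ("__pregel_", "__interrupt__")


def serialize_channel_values(values):
    """Serialize a state snapshot, dropping internal Pregel channels."""
    return {
        k: serialize_lc_object(v)
        for k, v in values.items()
        if not k.startswith(_INTERNAL_PREFIXES)
    }


def serialize_messages_tuple(item):
    """Serialize a (chunk, metadata) pair from messages-tuple stream mode."""
    if isinstance(item, tuple) and len(item) == 2:
        chunk, metadata = item
        meta = metadata if isinstance(metadata, dict) else {}
        return [serialize_lc_object(chunk), serialize_lc_object(meta)]
    return serialize_lc_object(item)  # pass non-tuples through unchanged


def serialize(obj, mode=None):
    """Dispatch on stream mode; the default falls back to generic serialization."""
    if mode == "messages":
        return serialize_messages_tuple(obj)
    if mode == "values":
        return serialize_channel_values(obj)
    return serialize_lc_object(obj)
```

Note the ordering inside serialize_lc_object: containers are handled before the `model_dump`/`dict` duck-typing checks, so a plain dict is never mistaken for a Pydantic v1 model just because `dict.dict` happens to exist as an attribute lookup path on other objects.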